The perceptual timescape: Perceptual history on the sub-second scale

There is a high-capacity store with a brief time span (~1000 ms) into which information enters from perceptual processing, often called iconic memory or sensory memory. It is proposed that a main function of this store is to hold recent perceptual information in a temporally segregated representation, named the perceptual timescape. The perceptual timescape is a continually active representation of change and continuity over time that endows the perceived present with a perceived history. This is accomplished primarily by two kinds of time marking information: time distance information, which marks all items of information in the perceptual timescape according to how far in the past they occurred, and ordinal temporal information, which organises items of information in terms of their temporal order. Added to that is information about the connectivity of perceptual objects over time. These kinds of information connect individual items over a brief span of time so as to represent change, persistence, and continuity over time. It is argued that there is a one-way street of information flow from perceptual processing either to the perceived present or directly into the perceptual timescape, and thence to working memory. Consistent with that, the information structure of the perceptual timescape supports postdictive reinterpretations of recent perceptual information. Temporal integration on a time scale of hundreds of milliseconds takes place in perceptual processing and does not draw on information in the perceptual timescape, which is concerned with temporal segregation, not integration.


Introduction
There is abundant evidence for a store of information on a time scale up to ~1000 ms, variously called "visual information storage" (Sperling, 1960), "iconic memory" (Neisser, 1967), "sensory register" (Atkinson & Shiffrin, 1968), "information persistence" (Coltheart, 1980), and "sensory memory" (Ögmen & Herzog, 2016). Here the neutral term "sub-second store" will be used; this terminological issue will be further discussed in section 1.2. In early research the store was conceived as holding a rapidly fading representation of a static visual image, such as a 3 rows × 4 columns grid of letters and/or digits presented for 50 ms (Sperling, 1960). Haber (1983) argued that a static, fading image would not be useful for perception: "persistence-like mechanisms are unrelated and even antithetical to normal perception… normal perception is not made up of brief discrete flashes, single or in combination. Persistence, and therefore icons, are irrelevant" (p. 3). Preservation of a time slice, he argued, would "smear hopelessly the information to be processed – smear it by freezing it" (p. 5). He argued that it is "the continuity of change that constitutes information for perception, not the frozen content of any particular slice" (p. 5).
This paper takes Haber's (1983) argument as its starting point and proposes a store of temporally differentiated information about recent perceptual history, with features that support the representation of all kinds of persistence and change on the sub-second scale.
The proposed structure addresses the smear problem raised by Haber (1983) by segregating information according to temporal features. There is first an introduction to the sub-second store (remainder of section 1) and an introduction to time marking, which plays an important part in the proposal (section 2), and then the proposal is developed in section 3.
The sub-second store can also be distinguished from two forms of memory on longer time scales, working memory and fragile very short-term memory. Working memory retains information on a time scale of seconds, and information can be maintained in it by means of rehearsal, which is not the case for the sub-second store (Atkinson & Shiffrin, 1968; Baddeley, 2000; Baddeley & Hitch, 1974; Lewis-Peacock, Kessler, & Oberauer, 2018; Oberauer, 2019). Since the present paper focusses mainly on vision, visual working memory will be briefly considered. The sub-second store has higher capacity than visual working memory but also rapid decay, with most information lost after about 300 ms and a maximum duration of information of ~1000 ms (Coltheart, 1980; Ögmen & Herzog, 2016; Sligte, Vandenbroucke, Scholte, & Lamme, 2010; Sperling, 1960). Sligte et al. (2010) found evidence for a capacity of six high resolution complex visual objects in the sub-second store compared to one high resolution and one low resolution visual object in visual working memory. Vogel and Awh (2008) found a visual working memory capacity of three or four coloured squares, but these were simpler objects than those used as stimuli by Sligte et al. (2010). Information storage in visual working memory shows little decrement over a few seconds, seemingly due to internal noise accumulation (Kuuramo, Saarinen, & Kurki, 2022) unless there are interference effects (Mercer, Jarvis, Lawton, & Walters, 2022; Oberauer & Lin, 2017). The store of verbal information called the phonological loop (Baddeley, 2000; Baddeley & Hitch, 1974) has a capacity of two or three items. It was originally thought that information in it decays rapidly (Baddeley & Hitch, 1974) but some studies have suggested that information can persist in it indefinitely and is subject only to forgetting by interference (Berman, Jonides, & Lewis, 2009; Lewandowsky, Oberauer, & Brown, 2009). However, the decay hypothesis has not been conclusively refuted (Barrouillet, Uittenhove, Lucidi, & Langerock, 2018) and the store has a complex relationship with other processing mechanisms such as rehearsal and with stores such as long-term memory, so that it is hard to determine a specific decay function for it (Baddeley & Hitch, 2019; Norris, 2017; Vallar & Papagno, 2002). The episodic buffer (Baddeley, 2000; Wilson & Baddeley, 1988) has a capacity of one integrated episode of up to about four chunks (meaningful units) of information. However, the constant interplay between working memory and long-term memory means that it is difficult to determine working memory capacity precisely (Caplan & Waters, 2013; Norris, 2017). Fragile very short-term memory has a capacity intermediate between that of the sub-second store and working memory, and information appears to persist in it for a few seconds with little decay, much longer than it persists in the sub-second store (Landman, Spekreijse, & Lamme, 2003; Sligte, Scholte, & Lamme, 2008; Sligte et al., 2010; Vandenbroucke, Sligte, Barrett, Seth, Fahrenfort, & Lamme, 2014; Vandenbroucke, Sligte, de Vries, Cohen, & Lamme, 2015). The sub-second store, then, is a distinct store with properties different from those of other stores.
Active persistence of information is widespread in the brain: in addition to the various memory stores (the sub-second store, fragile very short-term memory, and working memory), information persists in visual processing, often for hundreds of milliseconds, as will be shown later, and there is also visible persistence, mentioned above. It is not claimed that persistence of perceptual information is exclusive to the sub-second store, only that the sub-second store has a particular function which is instantiated in a particular way of representing the information that persists in it, that of temporal differentiation.

Terminology
Several names have previously been put forward for what is here called the sub-second store, and all of them are problematic. Sperling (1960) wrote in terms of a persisting visual image; for example, "the persistence is that of a rapidly fading, visual image of the stimulus" (p. 26). He also used the term "visual information storage". There is a difference between a surface visual image or representation from which information can be generated, and a memory representation that itself carries information. Take the shape "A" presented as a stimulus. To a being without language, such as a cat, this would be perceived just as a black shape against a white ground. To a human familiar with the Latin alphabet, it is that, but it is hard not to see it also as a letter of the alphabet. The term "visual image" could be taken as implying the former, a mere visual shape without meaning, which, as was shown in the previous section, is not all there is to information in the sub-second store. Neisser (1967) introduced the terms "icon" and "iconic memory". These terms are usually used with specific reference to the visual modality and they have been adopted by many subsequent authors (e.g. Banks & Barber, 1977; Barban et al., 2013; Botta et al., 2023; Haber, 1983; Kattner & Clausen, 2020; Kuhbandner et al., 2011; Palmer, Kellman, & Shipley, 2006; Persuh et al., 2012; Skóra & Wierzchoń, 2016; Sligte et al., 2010; Sugita, Hidaka, & Teramoto, 2018; Vandenbroucke et al., 2014; Yi et al., 2018). The term "icon" has the same drawback as "visual image", and also implies a static image (reminiscent of its original meaning in reference to religious icons), which, as will be shown, is not appropriate.
Atkinson and Shiffrin (1968) used the term "sensory register". Authors since then have tended to prefer "sensory memory" to "sensory register" (e.g. Botta et al., 2023; Hu et al., 2019; Huynh et al., 2017; Ögmen & Herzog, 2016; Vandenbroucke et al., 2014; Vlassova & Pearson, 2013). The term "sensory" is unsatisfactory in this context because it implies reference to early stages of processing (sensory as opposed to perceptual processing), whereas some of the research cited above has shown that what Coltheart (1980) called postcategorical information is held on that time scale. To give just two examples among many, complex visual objects with feature binding can be retained there (Sligte et al., 2010), as can coherent information about complex natural scenes (Clarke & Mack, 2015). "Sensory" does not seem the right term for a post-perceptual store containing that kind of information.

Fig. 1(a): The model of Atkinson and Shiffrin (1968). Products of perceptual processing feed into a brief, high capacity store called sensory memory or iconic memory; some information is transferred from sensory memory to short-term memory, a limited capacity store with a longer duration; and some information is transferred from there to long-term memory, a high capacity store of indefinite duration. Fig. 1(b): Revised version of the Atkinson and Shiffrin model. Products of perceptual processing enter the perceived present, a very high capacity store of very brief duration (<100 ms), and also pass directly to the sub-second store, a high capacity nonretinotopic store of longer duration (~1000 ms) that incorporates semantic information. Information is also transferred to the sub-second store from the perceived present. Some information is transferred from the sub-second store to working memory, a suite of limited capacity stores and processing functions where information may be maintained by active rehearsal, and from there to long-term memory.
Coltheart (1980) identified three kinds of persistence: "First, neural activity in the visual system evoked by the stimulus may continue after stimulus offset ("neural persistence"). Second, the stimulus may continue to be visible for some time after its offset ("visible persistence"). Finally, information about visual properties of the stimulus may continue to be available to an observer for some time after stimulus offset ("informational persistence")" (p. 183). The term "informational persistence" or "information persistence" has been used frequently since, sometimes to distinguish persistence up to 1000 ms from the shorter time scale of visible persistence (e.g. Akyürek & Wolff, 2016; Clarke & Mack, 2015; Greene, 2016; Irwin & Yeomans, 1986; Loftus & Irwin, 1998). This is also an unsatisfactory term because it implies a persisting but static body of information, such as perceptually processed information about a grid of letters of the sort used by Sperling (1960).
Other terms that have occasionally been used include "dynamic visual icon" (Palmer et al., 2006), "iconic informational persistence" (Jacob et al., 2013), and "visual short-term memory" (Scharnowski, Rüter, Jolij, Hermens, Kammer, & Herzog, 2009); the context makes it clear that Scharnowski et al. (2009) meant the store being discussed here, not the visual store associated with working memory. "Visual short-term memory" has also been used as an umbrella term covering the sub-second store, fragile visual short-term memory, and visual working memory (Sligte et al., 2010).
The term "sub-second store" is used here in order to avoid the unwanted connotations of the terms surveyed in this section.
This paper is deliberately agnostic on the subject of consciousness and conscious perception, mainly because such terms are deeply ambiguous. The topic of the present paper can be addressed in a satisfactory way without referring to them. Instead the term "information" is used, and perceptual systems are treated largely as information processing systems. There may well be problems with that way of looking at the brain, but at least "information" can be precisely defined, and considering processing mechanisms in terms of their functional properties, in terms of what they do to or with information, does not seem terminally problematic. It is not clear what would be added to the present account by saying that information in the perceptual timescape is conscious, or indeed that it is not. Therefore the issue of conscious perception will not be taken any further in this paper.
1.4. Location of the sub-second store in the processing stream

Atkinson and Shiffrin (1968) located the sensory register, to use their term, in a series of stores, illustrated in Fig. 1(a). In their proposal, products of perceptual processing enter the sensory register, then conceived as a visual iconic store, proceed from there through a bottleneck to short-term memory, conceived as a limited capacity store holding information on a time scale of seconds, and thence to long-term memory, from where information could be retrieved into short-term memory. The dotted arrow from sensory register to long-term store represented the possibility that the contents of the sensory register could activate long-term memory representations that would add meaning to the stimulus, but they would be activated into the short-term store, not into the sensory register.
A revised version of the Atkinson and Shiffrin model, reflecting more recent developments in research, is depicted in Fig. 1(b). In Fig. 1(b) the sensory register is named the sub-second store and is distinct from retinal and visible persistence (which are not shown in the figure). Whether the contents of the sub-second store can activate long-term memory representations remains uncertain. Semantic information is activated through re-entrant processing in perceptual processing (Di Lollo, Enns, & Rensink, 2000; Enns & Di Lollo, 2000) and is, therefore, already present by the time information enters the sub-second store. For that reason the dotted arrow connecting the sub-second store to long-term memory has not been included in Fig. 1(b), but the possibility of information being activated from long-term memory directly into the sub-second store has not been ruled out. The term "working memory" is used instead of "short-term store". Atkinson and Shiffrin (1968) were aware of what they called control processes operating on material in the short-term store, such as rehearsal, but the understanding of multiple stores on the time scale of short-term memory and of associated processing functions has greatly expanded since their proposal (Baddeley & Hitch, 1974, 2019; Cowan, 2017; Luck & Vogel, 2013; Martin & Romani, 1994; Norris, 2017; Oberauer, 2002; Schulze & Koelsch, 2012).
Fig. 1(b) incorporates the proposed very high capacity and very brief duration store called the perceived present by White (2020) as the initial holding area for the products of perceptual processing. There is a direct connection from perceptual processing to the sub-second store; that is, perceptual information can enter the sub-second store from perceptual processing without passing through the perceived present. That is a key feature of the present proposal, and it will be shown in section 4 that it provides a way to understand some puzzling findings to do with postdictive phenomena in perception. There is no link going back from the sub-second store to either perceptual processing or the perceived present. The way from perceptual processing to working memory is a one-way street. The importance of this will be elucidated in section 5, where it is argued that temporal integration occurs in perceptual processing and does not occur in, or draw information from, the sub-second store. This is not meant to imply that top-down or re-entrant processing is not important in perception; as stated above, it is clear from much research that pre-existing information plays an important part in the interpretation of sensory input (e.g. Di Lollo et al., 2000; Enns & Di Lollo, 2000). For that reason, links are shown in Fig. 1(b) from both working memory and long-term store back to perceptual processing. These allow for the possible influence of existing information in those locations on perceptual processing.
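The one-way routing just described can be sketched in code. This is purely an illustrative toy, not part of the proposal: the function name, store labels, and the `bypass_present` flag are all hypothetical conveniences for depicting the two routes in Fig. 1(b).

```python
def route(product, bypass_present=False):
    """Return, in order, the stores a perceptual product passes through.

    One-way flow only: information moves from perceptual processing towards
    working memory; nothing is routed back from the sub-second store to
    perceptual processing or to the perceived present.
    """
    path = ["perceptual processing"]
    if not bypass_present:
        path.append("perceived present")
    # Direct entry to the sub-second store (bypassing the perceived present)
    # is the route invoked for postdictive phenomena in section 4.
    path += ["sub-second store", "working memory"]
    return path

print(route("ball in flight"))
# ['perceptual processing', 'perceived present', 'sub-second store', 'working memory']
print(route("postdictively reinterpreted item", bypass_present=True))
# ['perceptual processing', 'sub-second store', 'working memory']
```

The point of the sketch is only that both routes converge on the sub-second store and that the graph contains no backward edges.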

Why the sub-second store is needed for continuity of change information
Research on temporal order discrimination thresholds suggests a minimum temporal resolution of about 20 ms, possibly even less (Babkoff, 1975; Babkoff & Sutton, 1963; Brown & Sainsbury, 2000; Craig & Baihua, 1990; Divenyi & Hirsh, 1974; Eimer & Grubert, 2015; Fink, Churan, & Wittmann, 2005; Fostick & Babkoff, 2013; Hirsh & Sherrick, 1961; Nicholls, 1994; Pastore, Harris, & Kaplan, 1982; Stevens & Weaver, 2005; Tadin, Lappin, Blake, & Glasser, 2010). Perceptual discrimination thresholds vary over a wide range, and the possibility of some perceptual information in the perceived present covering a longer span of time, perhaps up to ~100 ms, cannot be ruled out. For most perceptual information, however, temporal resolution is likely to be less than that. This is important in relation to the perceived present. If two events separated by 20 ms or more are perceived as occurring one after the other, they cannot both be in the perceived present at the same time. When the second one is in the present, the first is in the past. Perceiving any kind of continuity or connection over time, as opposed to a momentary present, therefore requires retention of information about the recent past. That information cannot be in the perceived present.
As a thought experiment, imagine a perceptual world covering a time span of ~20 ms, in which anything beyond that in recent history is lost. The thought experiment requires us to imagine a moment in the perceived present frozen in time so that we can examine it at leisure. In the example of a ball that has been thrown and is in the air, the ball and its features would be perceived at a single, precise location in space and time, with a particular velocity at that moment, and that is all the information there would be about it. Its history further back than ~20 ms ago would not be there. A static object such as a building in the background would also be perceived at a single location in space and time, and that is all the information there would be about it: it might be perceived as having zero velocity at that moment but there would be no other information about its persistence over time. If an ascending musical scale is presented with each note having a duration of 20 ms, at any given moment only one note would be in the perceived present and information about preceding notes would be lost. Clearly that is not the case: we hear the scale as a scale, temporally ordered and with change information represented. The same applies to all of perception. If we imagine moving on from our selected frozen moment to the next, that whole body of information would have ceased to exist and another body of information would be there in its place. It would not matter that the information in the new moment was very similar to, and indeed largely a perpetuation of, that in the previous one: the previous one is gone, so the similarity could not be apprehended.
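The thought experiment can be rendered as a toy simulation. Everything here is an illustrative assumption, not a model from the paper: the 20 ms window, the ~1000 ms retention span, and the function names are stand-ins chosen only to contrast a history-less perceived present with one supported by a store of the recent past.

```python
WINDOW_MS = 20   # assumed temporal resolution of the perceived present
SPAN_MS = 1000   # assumed maximum span of the sub-second store

def memoryless_perceiver(notes):
    """A perceiver with only a ~20 ms present: each moment replaces the last."""
    moments = []
    for note in notes:
        moments = [note]   # the previous moment has ceased to exist
    return moments         # only the final note survives; the scale is gone

def perceiver_with_history(notes):
    """A perceiver that also retains the recent past, up to ~1000 ms."""
    history = []
    for step, note in enumerate(notes):
        now_ms = step * WINDOW_MS
        history.append((now_ms, note))
        # Drop anything older than the span of the store.
        history = [(ms, n) for ms, n in history if now_ms - ms < SPAN_MS]
    return [n for _, n in history]   # the whole scale, in order

scale = ["C", "D", "E", "F", "G", "A", "B", "C'"]
print(memoryless_perceiver(scale))    # only the last note remains
print(perceiver_with_history(scale))  # the full ordered scale is available
```

The contrast is the point of the thought experiment: without retained history, no relation between successive notes can be apprehended, because nothing earlier than the current moment exists to relate them to.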
The hypothesis, then, is that the sub-second store solves that problem, supporting the perceived present with an information structure that links a representation of the recent past to the perceived present. It is proposed that information in the sub-second store is segregated in terms of temporal differentiation, constituting what will be called a perceptual timescape, and connected to information in the perceived present. The perceptual timescape organises and differentiates information by temporal properties, meaning that items of information have labels describing their temporal features. Those labels are time markers (White, 2023). Before elucidating the information structure of the perceptual timescape, therefore, a brief account of time markers will be presented.

Time marking in perceptual processing
Space is not directly perceived. Space as perceived is a construct of information generated in perceptual processing. The percept of a stick as a few inches long is not itself a few inches long; it is neural activity that encodes that length information. In some particulars the perceptual construct of spatial information differs from actual space. Multiple kinds of spatial information are encoded in perceptual representations (Grieves & Jeffery, 2017; Hafting et al., 2005; McNaughton, Battaglia, Jensen, Moser, & Moser, 2006; O'Keefe, 1976; Sugar & Moser, 2019), and they go well beyond the mere dimensionality of physical space.
For the same basic reason, time is not directly perceived. If a stimulus lasts for a few hundred milliseconds, information about the stimulus enters the eye and is transmitted to the visual system for that amount of time. When the information is processed in the visual system, the result is neural activity that encodes temporal information. It is possible that the stimulus might be perceived for a duration that corresponds closely to the objective duration of the stimulus. However, even that has been disputed, by Herzog, Kammer, and Scharnowski (2016), who made the important point that the perceived duration of a stimulus is not the same as the duration for which the stimulus information persists. Just as spatial features of stimuli are encoded in neural activity, so are temporal features. Just as there are multiple kinds of perceived spatial information that go beyond the mere dimensionality of space, so there are multiple kinds of temporal information that go beyond the mere dimensionality of physical time. Those kinds of information are time markers (White, 2023).
Time markers are a kind of semantic information. They are labels encoding features, not themselves visible (or audible or tactile, etc.) but part of the perceptual interpretation of the stimulus. In the example of the ball that has been thrown, the constructed perceptual object comprises both surface visual features, such as shape and colour, and nonvisual semantic features, such as categorical identity (Wutz & Melcher, 2014) and mass (Runeson & Frykholm, 1981). Spatial location is also semantically encoded: a representation of the (changing) location of the ball can persist as information when the ball is temporarily unseen, such as when it is occluded by another object (Krekelberg, 2001; Lappe & Krekelberg, 1998). Time marking is, to some degree, analogous to spatial location marking. Indeed, the position markers studied by Lappe and Krekelberg (1998) are spatiotemporal: they track the changing position of an object over time and therefore encode time as well as space. Temporal features, then, are semantically encoded components of perceptual representations. Time marking and spatial information marking are both fundamental to the construction of a coherent perceptual world.

Time marking has previously been proposed by other authors (Dennett & Kinsbourne, 1992; Herzog et al., 2016; Libet, Wright, Feinstein, & Pearl, 1979; Nishida & Johnston, 2002; White, 2023), but in connection with early perceptual processing on short time scales, such as integration and synchronisation of local feature information. An example will be given. Rapidly alternating features are sometimes bound incorrectly, resulting in perceptual objects with feature combinations that were never presented (Moutoussis & Zeki, 1997a; Zeki, 2015). Nishida and Johnston (2002) proposed that synchronisation of feature information depends on time marking information, such that information about different features is bound together if they share the same time marker. The incorrect bindings were interpreted as an outcome of the way in which the time marking process operates.
In the present account a different function for time marking is proposed, and different kinds of time marking are invoked in relation to that function. The next section elucidates those kinds of time marking and their function.

The perceptual timescape
As a preliminary note, this paper is not about perceptual processing. Fig. 1(a and b) both locate the sub-second store outside perceptual processing. The principal aim of the paper is to make a case for a particular function of the sub-second store. That function is to provide a temporally differentiated representation of recent history, to fill out things that are important to ongoing activities but lie beyond the temporal scope of the perceived present. Making that case requires reference to relevant aspects of perceptual processing, based on the current state of research, but the innovative theoretical propositions are entirely concerned with the functional significance of (part or whole of) the sub-second store. The perceptual timescape is proposed as a distinct memory system forming part if not all of the sub-second store. The possibility of other components of the sub-second store is not ruled out here. Its location in relation to other processing stages and stores is depicted in Fig. 1(b).
Most of the research to be discussed here is in the visual domain. That is only because of a dearth of relevant research on other modalities. The perceptual timescape is, like the sub-second store in general, multi-modal, and it is anticipated that a similar structure of time marking and connectives may be found for other modalities, principally the auditory and tactile/kinaesthetic modalities.
Time marking is essential to the proposed perceptual timescape. Time marking in perceptual processing and time marking in the perceptual timescape are not the same, and neither is a copy of the other. Some kinds of time marking might determine where in the perceptual timescape the information is entered (those involved in postdictive processing are an example; see section 4) but others probably do not (e.g. those proposed by Nishida and Johnston, 2002). In any case, there are two important differences between time marking in perceptual processing and time marking in the perceptual timescape. One is that time marking in perceptual processing is local; that is, it is attached to individual items of information, but those items are not bound into a coherent global representation of the passage of time. The perceptual timescape is a global representation in which information is organised and related by time markers. The other is that time marking in perceptual processing has specific functions such as determining which features are attached to which other features. Time marking in the perceptual timescape has a different function, broadly speaking the segregation of recent historical information into a coherent temporal sequence.

Time markers in the perceptual timescape
In the hypothesis proposed here the perceptual timescape constitutes a temporally differentiated representation of recent perceptual history. Temporal differentiation is accomplished with two kinds of time markers. One kind is time distance information. Time distance information represents how far in the past a given item of perceptual information is. That is defined in relation to the perceived present. All information in the perceived present is time marked as in the present (t0 in the conventions used here). Such information is perceived as present not because it is but because it is time marked as being so (White, 2020). In the same way, information in the perceptual timescape is perceived as in the past not because it is (obviously it cannot be perceived in the present if it is actually in the past) but because it is time marked as in the past. The temporally differentiated representation all exists in the present. The time distance marker for a given item of information changes to mark the information as receding ever further into the past. Time distance information would correspond at least approximately to the veridical passage of time. For example, an item of information currently time marked as t − 300 ms would have occurred ~300 ms before the information currently in the perceived present (though that may be subject to error and recalibration or resynchronisation in perceptual processing). Time distance information (and other kinds of temporal information) would not literally be represented with numbers. It is a semantic representation of temporal features, just as length information in perception is a semantic representation of a spatial feature. The time distance information could perhaps be converted into numbers, or used as a basis for some kind of temporal judgment, at some later stage of processing, but that is not its primary function.
The other kind of time marking information is ordinal temporal information. Perceptual information is marked with information about temporal adjacency to other items: in effect, an ordinal time marker says that this item was immediately before or immediately after that one. "Immediately" means that it refers to the time distance marker adjacent to the one in question, where time distance markers have a finite temporal resolution, possibly ~20 ms. Again, the words are symbols standing for semantic information: the information itself would not be verbal. Together, time distance information and ordinal temporal information provide the temporal differentiation that enables representations of history as connected across the time span of the perceptual timescape (White, 2021). All information in the perceptual timescape has time distance and ordinal temporal information attached to it, to locate it in a temporally differentiated representation. Events that have the same time distance marker are thereby perceived as contemporaneous. The events are perceived as contemporaneous not because they are or were, but because they have the same time marker.
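The two marker kinds can be illustrated with a minimal data-structure sketch. To be clear about its status: the paper explicitly says these markers are semantic, not numerical, so the numbers and names below (`Item`, `time_distance_ms`, the 20 ms step) are purely expository assumptions. The sketch shows only the structural point that time distance markers change as time passes while ordinal relations between items stay fixed.

```python
from dataclasses import dataclass

RESOLUTION_MS = 20  # assumed marker resolution (see text: possibly ~20 ms)

@dataclass
class Item:
    """One item of information in the timescape, with both marker kinds."""
    content: str
    time_distance_ms: int = 0      # time distance marker; 0 = perceived present
    before: "Item | None" = None   # ordinal marker: immediately preceding item
    after: "Item | None" = None    # ordinal marker: immediately following item

def tick(items):
    """One step of time passing: every item recedes further into the past."""
    for item in items:
        item.time_distance_ms += RESOLUTION_MS

# "note C" enters the timescape, then one step later "note D" enters.
first = Item("note C")
tick([first])
second = Item("note D", before=first)
first.after = second

# Another step: both time distance markers recede, but the ordinal
# relation (C immediately before D) is unchanged.
tick([first, second])
print(first.time_distance_ms, second.time_distance_ms)  # 40 20
print(second.before.content)  # note C
```

The sketch also illustrates the segregation point of the surrounding text: the two items coexist in one store at one moment without interfering, because their differing time distance markers give them different temporal locations.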
Having recent historical information in the same representational space as the perceived present would result in a massive superimposition of information about multiple time points, with consequent severe blurring. The temporal differentiation of the perceptual timescape by time distance and ordinal temporal information has the function of avoiding that superimposition problem: pieces of information in the same store but concerning different times do not interfere with each other because they are segregated by being allocated different temporal locations in the timescape. The perceptual timescape has lower capacity than the perceived present but a more extended temporal representation.
The temporal resolution of the perceived present is set by local temporal discrimination thresholds. It is, therefore, variable but with a possible maximum of ~100 ms (White, 2020). From the perceived present, perceptual information is transferred, subject to loss, through a bottleneck to the perceptual timescape (Jacob et al., 2013; Ögmen, Ekiz, Huynh, Bedell, & Tripathy, 2013). It is then time marked as in the recent past (t − 1 and on, with whatever temporal resolution holds in the perceptual timescape) but still has informational connections to the perceived present. Thus, an apparent sequence of events, such as a rising diatonic scale, is perceived one note at a time, each successive note going through the perceived present in turn. But perception of the notes as a temporal sequence depends on the informational representation in the timescape. The rising diatonic scale is perceived as a rising diatonic scale, and not as isolated individual notes or as a jumble of notes in no particular order, because, when the last note is in the perceived present, there is a perceptual context of temporally differentiated information about the previous notes that is connected to the last note in the perceived present. Ordinal temporal information marks the order in which the notes occurred.
What is novel about the present proposal can be schematically depicted. Fig. 2(a) depicts the traditional view of the decay of the icon in the sub-second store. In this view, the stimulus (in this case a Sperling-type grid of letters and digits) is presented and perceptually processed, and the perceptual information is then transferred to the sub-second store where it gradually fades (represented by greyscale changes in the figure) over the time span of the store. Fig. 2(b) shows a more dynamic representation in the spirit of Haber's (1983) proposal: a stimulus comprising a moving object is presented and perceptually processed, and then transferred to the sub-second store where information about the object and its motion is represented. Arrows represent information about direction of motion. The fading images show earlier moments in the stimulus presentation fading across the time scale of the store. The use of four separate representations is a presentational convenience and is certainly oversimplified.
The problem with Fig. 2(a and b) is that they conflate the representation of time in memory with the objective passage of time.

P.A. White
There must be a representation of time in memory that exists at one moment. To see this, Fig. 2(a and b) can be compared with Fig. 2(c), which illustrates, in simplified form, the kind of information proposed as an organising framework for the perceptual timescape. The figure illustrates the main features of the perceptual timescape with the same example of perception of a black square in motion. The information is now oriented vertically. This change symbolically represents the fact that the history of the stimulus on the time scale of the sub-second store is all there in the perceptual timescape at a single moment. That is indicated by the word "now" at the bottom of the figure. The perceived present, just above the bottom of the figure and labelled "perceptual product", captures a single moment in the motion of the square, defined in terms of the temporal resolution of perceptual information. Above it is a series of four representations of historical information about the square in the perceptual timescape. The choice of four is a convenient simplification and is not a hypothesis about the number of representations there would be in the perceptual timescape, nor is it meant to imply that the historical information in the perceptual timescape is represented as a series of discrete frames.
The perceptual timescape has a minimum temporal resolution (possibly ~20 ms) but (i) that does not imply that the entire timescape is organised in 20 ms blocks, and (ii) coarser temporal resolution may occur, for example where perceptual processing is unable to resolve some stimulus information on that fine a time scale, so that some temporal blurring, as it were, is carried over to the perceptual timescape. Four kinds of information are symbolically depicted in the part of the figure showing the perceptual timescape. Two of these are the kinds of time marker information. Time distance information (labels t−1 to t−4) represents how far a given representation of the square is in the past, defined in relation to the perceived present (t0). The number of time distance labels in the figure is also a convenient simplification and is not a hypothesis about how many there are in reality. Abstract designations are used in the figure but the time distance information would correspond at least approximately to the veridical passage of time. Ordinal temporal information is depicted to the right of time distance information in Fig. 2(c), and represents each time distance label as being immediately before or after its neighbours in the timeline. For example, "t−4 i.b. t−3" means that the time distance marker for t−4 is identified as immediately before ("i.b.") the time distance marker for t−3. The convention "immediately after" could just as well have been used. The words are symbols standing for the semantic information in the figure: the information itself would not be verbal.
The other two kinds of information concern ongoing change (or lack of change). The arrows indicating direction of motion in Fig. 2(c) are an example of that kind of information. This was called "vector information" in White (2021). However, the term "vector" has several meanings, so to avoid ambiguity the term "rate of change information" (slightly adapted from Hogendoorn & Burkitt, 2019) will be used here instead. Rate of change information indicates change that is going on at that moment and is attached to features or to the object as a whole. If there is no change going on for a given feature then the value for rate of change is zero. All perceptual features carry rate of change information. The vertical arrowed line running through the squares represents connectives (White, 2021). Connectives bind successive representations of objects and features into a connected series. They say, in effect, that this object at this moment is a continuation of that object at that moment. In this way, representations of objects at different temporal locations in the timeline are bound into an overall representation of a single object persisting across those representations, with all and any change that is going on across that period of time. One square is perceived as persisting and moving, not multiple momentary disconnected squares. The arrowed line represents that symbolically, showing links of persistence and successive becoming encompassing all representations of the square. Rate of change information and connectives are not the central focus of this paper but they are included because they are important components of the perception of happening, change, and persistence (White, 2021).
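The binding role of connectives and rate of change information can be sketched in a few lines. The snapshot fields and numerical values here are hypothetical, chosen only to illustrate how chained connectives recover one persisting object rather than disconnected momentary squares.

```python
# Toy sketch: successive snapshots of the moving square, each carrying
# rate-of-change information (dx), indexed by time distance marker t-k.
snapshots = [
    {"t": 4, "x": 0,  "dx": 10},
    {"t": 3, "x": 10, "dx": 10},
    {"t": 2, "x": 20, "dx": 10},
    {"t": 1, "x": 30, "dx": 10},
]

# A connective says: the object at t-k is a continuation of the object
# at t-(k+1).  Chaining connectives binds the snapshots into a series.
connectives = [(a["t"], b["t"]) for a, b in zip(snapshots, snapshots[1:])]

def history(snapshots):
    """Follow the bound series from the deepest marker to t-1,
    recovering a single object's trajectory."""
    return [s["x"] for s in snapshots]

assert connectives == [(4, 3), (3, 2), (2, 1)]
assert history(snapshots) == [0, 10, 20, 30]
```

A feature with no ongoing change would simply carry a rate-of-change value of zero in the same structure.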
Duration or interval information is not postulated as part of the perceptual timescape. Duration information might be represented in the sub-second store, and has been included in other proposals concerning perception of recent history (Herzog et al., 2016; Hogendoorn, 2022). But it is not clear what function duration information would serve in the perceptual timescape. Duration information neither locates nor relates information in time. Duration information requires at least two readings from different moments in time, representing onset and offset of the duration or interval to be represented. Duration information cannot even be computed until the event in question has terminated, whereas time distance and ordinal temporal information can be attached as each moment in an event is processed. So duration information is not likely to be fundamental to the information structure of the perceptual timescape.
[Figure caption: a sketch showing the main features of the present proposal: the perceptual timescape is a timeline of perceptual information; information from perceptual processing can be entered directly into the perceptual timescape, constituting postdictive reinterpretation; the only control process operating on information in the perceptual timescape is attentive selection for transfer of information to working memory; and there is no route back from the perceptual timescape to perceptual processing, so temporal integration does not utilise information recycled from the perceptual timescape.]
Duration judgments can be made on the sub-second scale (Ivry & Spencer, 2004; van Wassenhove, Buonomano, Shimojo, & Shams, 2008) but that does not necessarily imply that duration is represented in the perceptual timescape. Judgment of duration on the sub-second scale could result from an automatic judgment process that might (or might not) be independent of the perceptual timescape (Rammsayer et al., 2016; Rammsayer & Lima, 1991; Rammsayer & Ulrich, 2011).
There is no satisfactory way to represent the timescape in a static visual figure. The components of Fig. 2(c) should be understood as symbolic representations of information in the brain, information that may be visual but is also semantic, continuous down to the minimum temporal resolution of the system rather than discrete (as they appear to be in the figure), bound into perceptual representations of objects and features, and connected over the time span of the timeline, to make a fully integrated representation of what is going on. The aim of Fig. 2(c) is to capture the essential features and for that reason extraneous details and decoration are kept to a minimum.
Fig. 3 is a sketch that depicts the main features of the present proposal and it can be regarded as an expanded version of Fig. 1(b), though still greatly simplified. The central feature is the perceptual timescape, indicated by time markers in the figure, receiving information from perceptual processing via the perceived present. Two additional features will each be the subject of separate sections of the account below.
1. Information from perceptual processing can be entered directly into the perceptual timescape. That is postdictive reinterpretation, alteration to information at some point or points in the perceptual timescape. How postdictive interpretations are represented in the timeline is addressed in section 4.
2. Temporal integration is contained within perceptual processing. There are multiple temporal integration mechanisms with different operating characteristics (see section 5), so the single box for that in the figure is a simplification. The main point is that information does not return from the perceptual timescape to temporal integration in perceptual processing. There is a one-way street from perceptual processing through the perceptual timescape to working memory. Claims to the contrary will be analysed in section 5.

Capacity of the perceptual timescape
The results of Sperling's (1960) study indicated a capacity of ~9 letters or digits in the sub-second store. Sligte et al. (2010) found evidence for a capacity of six high resolution complex visual objects. Those studies, however, considered only what might be called spatial capacity, the capacity of the store at a given moment in time, and did not consider overall capacity which, in the present account, is a function of capacity at any given time distance marker summed across time distance markers. Without evidence of the temporal resolution of information in the perceptual timescape there is no basis for working out the overall capacity. If the temporal resolution is around 20 ms, then there would be about 5 time distance markers per 100 ms of the timescape. If the timescape covers 1000 ms, that would mean about 50 time distance markers. However, some limiting factors should be borne in mind. One is that decay of information in the perceptual timescape is rapid (Shooner, Tripathy, Bedell, & Ögmen, 2010; Sperling, 1960), with most information lost after about 250 ms. Thus, an initial capacity (i.e. at t−1) of 6 complex visual objects would be reduced to perhaps one or two after 250 ms. That would be consistent with results of studies of multiple object tracking on short time scales (Narasimhan, Tripathy, & Barrett, 2009; Shooner et al., 2010; Tripathy et al., 2007). Another consideration is that the contents of the sub-second store are mainly if not entirely attentively processed stimuli (Ögmen & Herzog, 2016), and that would also limit the amount of information that could be maintained both initially and at subsequent time distance markers. Finally, attentive selection for further processing and reporting is limited by both the limited capacity of working memory and the time taken for processing of the information, during which information continues to decay (Posner & Mitchell, 1967). Thus, although the research evidence does not allow a precise conclusion to be drawn about the capacity of the perceptual timescape, it must be greater than the estimates obtained by Sperling (1960) and Sligte et al. (2010), but only a limited amount of information can be extracted from it for further processing. That limited amount does, however, include information about change over time (Shooner et al., 2010).
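The arithmetic behind these estimates can be laid out explicitly. The decay function below is a hypothetical illustration chosen only to reproduce the figures quoted above (six objects at t−1, roughly one or two after 250 ms); it is not a claim about the actual decay law.

```python
# Back-of-envelope capacity arithmetic from the text.
resolution_ms = 20   # assumed minimum temporal resolution
span_ms = 1000       # assumed span of the timescape

markers_per_100ms = 100 // resolution_ms   # 5
total_markers = span_ms // resolution_ms   # 50

initial_objects = 6  # capacity at t-1 (Sligte et al., 2010)

def objects_remaining(age_ms, half_life_ms=100):
    # Hypothetical exponential decay, fitted by eye to the
    # "six objects reduced to one or two after 250 ms" figure.
    return initial_objects * 0.5 ** (age_ms / half_life_ms)

print(markers_per_100ms, total_markers)   # 5 50
print(round(objects_remaining(250), 1))   # 1.1
```

The overall capacity in the present account would then be the sum, across all ~50 markers, of whatever remains at each marker after decay.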

Updating of time marker information
Time distance markers on individual items of information undergo change all the time. This change must be temporally fine-grained, possibly with a resolution of ~20 ms, and must proceed in at least approximate synchrony with elapsed objective time, so that each item of information is represented as moving further into the past. It is proposed that changes in time distance markers are generated by an updating mechanism that is driven by a timing mechanism.
Many timing mechanisms have been proposed (Buonomano & Merzenich, 1995; Gorea, 2011; Grondin, 2010; Mauk & Buonomano, 2004; Merchant, Harrington, & Meck, 2013; Paton & Buonomano, 2018; Rao, Mayer, & Harrington, 2001; Rolls & Mills, 2019; Sugar & Moser, 2019). However, in most cases the range of time scales studied has been from hundreds of milliseconds upwards (Rao et al., 2001). Üstün, Kale, and Çiçek (2018) even claimed that "[a]ll types of time perception tasks require working memory" (p. 2). There have been studies of interval timing in the millisecond range (e.g. Rammsayer, 2009) but the requirement to report the judged durations, and the commonly used method of comparing two consecutively presented intervals, both imply involvement of working memory, so it is hard to know exactly where in the stream of processing the duration is computed. Timing mechanisms operating on the supra-second scale, or requiring working memory involvement, are not appropriate for the kind of function proposed here.
In addition, most proposed timing mechanisms have been concerned with interval or duration timing (Merchant et al., 2013; Rao et al., 2001). They are special purpose mechanisms that are initiated by the onset of a target stimulus or interval, and generate a judgment of its duration. This is particularly the case for the best established hypothesised timing mechanism, the pacemaker-accumulator model (Gibbon, 1977; Gibbon, Church, & Meck, 1984; Rao et al., 2001; Treisman, 1963). That is not appropriate for the adjustment of time distance markers, where no judgment of duration is involved.
Some proposed timing mechanisms can be used to time brief intervals or durations. That is the case for state-dependent networks, for example (SDNs; Buonomano & Merzenich, 1995; Karmarkar & Buonomano, 2007). Once a given interval has passed and been timed, the SDN needs to reset before it can time another interval. Several studies have found evidence for this resetting, and the resetting interval is currently estimated at 250-333 ms (Buonomano, Bramen, & Khodadadifar, 2009; Sadibolova, Sun, & Terhune, 2021; Spencer, Karmarkar, & Ivry, 2009). That rules SDNs out as a candidate timing mechanism for the perceptual timescape because, for that, timing information must be generated all the time. In addition, it is not clear that the temporal resolution of SDNs would be fine enough to support the minimum temporal resolution of the perceptual timescape (Buonomano & Merzenich, 1997).
A further problem is that there is little evidence for any particular timing mechanism as existing in the human brain. Several brain areas have been implicated in timing functions (Eichenbaum, 2017; Rao et al., 2001; Rolls & Mills, 2019; Sugar & Moser, 2019; Üstün et al., 2018). But the particular mechanisms operating in those and other areas have yet to be elucidated.
Given the requirements of non-stop operation supporting the specific function of time marker adjustment, fine temporal differentiation, and precise calibration to objective time, there is at present only one hypothesised timing mechanism that satisfies all of those requirements: the synfire ring. This possibility will be briefly elucidated.
The synfire chain model was originally proposed by Abeles (1982), and researchers since then have shown that neural networks with the same properties as synfire chains can develop as loops or rings (Cabessa & Tchaptchet, 2020; Diesmann, Gewaltig, & Aertsen, 1999; Levy, Horn, Meilijson, & Ruppin, 2001; Miller & Jin, 2013; Zheng & Triesch, 2014). A synfire chain is a series of groups of neurons with feedforward excitatory connections. A simple model of a synfire chain is shown in Fig. 4a, and a synfire ring in Fig. 4b. In real synfire chains and rings there would be many more members in each group, and probably more groups in the structure; Fig. 4a and b show the organisational principles, not the likely quantities of components. Neurons within a group tend to fire synchronously: each neuron in a given group receives synchronous input from the neurons in the group that connects to it, as a result of which they are very likely to fire, and in like manner synchronised group activity proceeds down the chain. When a group of neurons is made to fire by excitatory input from a previous group, there is an inhibitory connection back to the neurons in the previous group (not shown in Fig. 4a and b) that terminates the activation in that group. In a synfire ring the groups of neurons are connected in a loop, as illustrated in Fig. 4b. With this arrangement feedforward excitatory activity can be perpetuated around the loop indefinitely. The main limiting factor is that a given neuron cannot fire again while it is in its refractory period, so the time taken for activity to circulate around the loop must be greater than the refractory period of the neurons in it (Cabessa & Tchaptchet, 2020; Diesmann et al., 1999). The refractory period can be as little as 1 ms (Goldstein, 2014) but cortical neuron firing rate is effectively limited to 5 ms and usually 20-25 ms (Hancock, Walton, Mitchell, Plenderleith, & Phillips, 2008; Rolls, Franco, Aggelopoulos, & Jerez, 2006). That is already sufficient to support temporal discrimination of 20 ms in the perceptual timescape. Temporal resolution could be improved by increasing the number of groups in a ring, and by having arrangements of rings that operate at the same rate but temporally offset from each other.
Given some neurobiologically plausible assumptions, computational modelling has shown that synfire chains in general, and synfire rings in particular, have some noteworthy properties. Where a target neuron receives input from multiple synchronously firing neurons, its spike response can be more temporally precise than the input (Diesmann et al., 1999). Thus, if all neurons in a group receive synchronous input from all neurons in the group that connects to it, instead of losing temporal precision, groups of neurons tend to become more precisely synchronised in their activity, presumably up to some asymptotic level. The mere perpetuation of activity around a synfire ring therefore promotes and maintains temporal precision and synchrony in the firing patterns. Activity in a synfire ring can be indefinitely self-sustaining and is robust to perturbations. These issues are explored in more depth in several papers (Cabessa & Tchaptchet, 2020; Diesmann et al., 1999; Miller & Jin, 2013; Obeid, Zavatone-Veth, & Pehlevan, 2020; Zheng & Triesch, 2014). Although synfire rings might appear to be rather arbitrary and unlikely configurations of neurons, computational modelling has shown that they can easily emerge from the operation of a few neurobiologically plausible learning rules (Jun & Jin, 2007; Miller & Jin, 2013; Zheng & Triesch, 2014).
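A minimal numerical sketch of activity circulating a ring may help fix the idea. The group count and per-group delay below are assumed values; real rings would have many neurons per group and the dynamics would be far richer than this modular-arithmetic caricature.

```python
# Minimal synfire-ring sketch: pools of neurons fire as synchronous
# groups, activity passes to the next group at a fixed delay, and the
# loop perpetuates it indefinitely.  Parameters are hypothetical.
N_GROUPS = 5   # groups of neurons arranged in a loop
STEP_MS = 20   # delay for activity to pass between groups
               # (compatible with effective cortical firing limits)

def active_group(t_ms):
    """Which group is firing at time t_ms, assuming group 0 starts at 0."""
    return (t_ms // STEP_MS) % N_GROUPS

period_ms = N_GROUPS * STEP_MS   # one full circulation: 100 ms
assert active_group(0) == 0
assert active_group(20) == 1
assert active_group(100) == 0    # activity has gone once round the loop
```

Each pass of activity through a designated group could then serve as a periodic timing signal, at intervals set by the number of groups and the inter-group delay; offset rings at the same rate would interleave finer ticks.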
Research has shown patterns of neural activity in brains consistent with what would be expected for synfire chains (Hahnloser, Kozhevnikov, & Fee, 2002; Ikegaya, Aaron, Cossart, Aronov, Lampl, Ferster, & Yuste, 2004; Long & Fee, 2008; Long, Jin, & Fee, 2010; Lynch, Okubo, Hanuschkin, Hahnloser, & Fee, 2016; Obeid et al., 2020; Picardo et al., 2016). Most of this evidence has come from studies of brain activity during birdsong, and the main focus of research has been on synfire chains and rings as timing mechanisms for motor output. For example, Long et al. (2010) found evidence of a sequential burst of spikes that coincided in time with song and was consistent with what would be expected if the temporal structure of the song was controlled by a synfire chain. There is as yet no direct evidence for synfire rings in human brains, but it would be extremely difficult to obtain such evidence, and there is at least no evidence that they are not there. There is evidence of millisecond precision in cortical networks, involving a dispersion of spike timing of no more than 10 ms and possibly <2 ms (Singer, 1999a, 1999b). Singer (1999a, 1999b) referred to synfire chains as a possible determinant of high temporal precision in neuronal signalling.
Mediating between the synfire ring and the perceptual timescape might involve more than just changing labels on information. It is not necessarily the case that the temporal precision of synfire ring operation will match the temporal resolution of the perceptual timescape. Given that, there would need to be a mediating mechanism transforming output from the synfire ring mechanism into a periodic command to update the time marker information. In pacemaker-accumulator models a linear accumulator effectively counts the number of signals arriving from the pacemaker until the stimulus being timed terminates (Grondin, 2010; Treisman, 1963; Treisman, Faulkner, & Naish, 1992; Ulrich, Nitschke, & Rammsayer, 2006). Thus, in the time marker adjustment mechanism there could be a linear accumulator that similarly stores successive inputs from the synfire ring until a critical number is reached; a time marker executive would then issue an adjustment to the time marker information in the perceptual timescape and reset the accumulator to zero. This would effectively transform the time scale of the synfire ring signals into that of the information in the perceptual timescape. A schematic depiction of the arrangement is shown in Fig. 5.
Fig. 5. Hypothesised time marker adjustment mechanism. Both kinds of temporal information in the perceptual timescape are shown on the right (as in Fig. 1), with other kinds of information omitted for the sake of clarity. A synfire ring is shown at left feeding into an executive control mechanism in the middle, which comprises an accumulator represented by a stopwatch and an executive that alters the values on the time markers in the perceptual timescape.
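The accumulator-plus-executive arrangement can be sketched as follows. The criterion value and store layout are hypothetical illustrations; the point is only the functional flow from ring signal, through accumulation, to a shift of every item one marker deeper into the past.

```python
# Sketch of the hypothesised time marker adjustment mechanism: an
# accumulator counts synfire-ring signals; at a criterion, an
# "executive" shifts every item one time distance marker deeper into
# the past and resets the accumulator.  All names are hypothetical.
CRITERION = 1       # ring signals per marker update (assumed)
SPAN_MARKERS = 50   # ~1000 ms span at ~20 ms resolution (assumed)

store = {k: [] for k in range(1, SPAN_MARKERS + 1)}
accumulator = 0

def shift_markers():
    """Executive action: move every item from t-k to t-(k+1);
    items shifted past the span simply decay out of the store."""
    for k in range(SPAN_MARKERS, 1, -1):
        store[k] = store[k - 1]
    store[1] = []

def on_ring_signal():
    """Called once per synfire-ring timing signal."""
    global accumulator
    accumulator += 1
    if accumulator >= CRITERION:
        shift_markers()
        accumulator = 0

store[1] = ["dot A"]        # an item freshly entered at t-1
on_ring_signal()            # one ring circulation elapses
assert store[2] == ["dot A"] and store[1] == []
```

With a criterion greater than one, the same structure converts fast ring signals into slower marker updates, which is the rescaling role proposed for the mediating mechanism.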
What matters in the present account is the functional properties of a system for adjusting time markers.Synfire rings fit the bill.If any other mechanism is discovered that would meet the functional requirements, then it would be a plausible alternative to the synfire ring model.Although timing is ubiquitous and fundamental in human brain function, the shortage of evidence for particular timing mechanisms as operating in the human brain makes it difficult to stipulate any one possibility with confidence.Therefore the postulation of a model based on the synfire ring hypothesis is tentative, but is considered worthy of further investigation.

Modification of information in the perceptual timescape
Postdictive modification of perceptual content means that the past of a given percept is reconstructed in light of subsequent input. At first glance, this seems impossible: A is perceived as occurring before B even though A cannot be perceptually constructed until after B has been perceptually processed. Here it is argued that the perceptual timescape provides the means to make sense of postdictive reinterpretations. To be clear, the mechanisms in perceptual processing that generate postdictive reinterpretations are not at issue here. There could be many such mechanisms, different ones for different postdictive phenomena. What is at issue is how events can be perceived as in a temporal order that is the reverse of the order in which they were constructed in perceptual processing. For that, the perceptual timescape is required.

Apparent motion
Under some circumstances, if two stimuli are presented consecutively at different but nearby locations, a percept of motion from the location of the first stimulus to that of the second can occur. This is called apparent motion (Braddick, 1980; Exner, 1888; Kolers, 1972; Kolers & Pomerantz, 1971; Ramachandran & Anstis, 1986; Shimojo, 2014; Tse, Cavanagh, & Nakayama, 1998; Ullman, 1979; Wertheimer, 1912). The motion is perceived as preceding the second stimulus in time even though it cannot be constructed in perception until the second stimulus has been processed.
Two kinds of apparent motion have been identified: a perceptual impression of an object moving from one location to another, called beta motion, and an impression of motion but without an object percept, called phi motion (Kolers, 1972; Wertheimer, 1912). A distinction has also been made between short-range and long-range apparent motion, possibly resulting from two different mechanisms, one operating in early, low-level visual processing, and one operating in later processing (Braddick, 1974, 1980; McBeath, Morikawa, & Kaiser, 1992; Shimojo, 2014). There is evidence that the latter can be understood as a form of hypothesis testing, the visual system seeking an interpretation that would be consistent with the stimuli presented (Braddick, 1980; Shepard & Zare, 1983; Sigman & Rock, 1974). The present account is concerned with that latter type.

Postdictive reinterpretations represented in the perceptual timescape
When perceptual information emerges from perceptual processing it is assigned a time marker. Usually information enters the perceived present (White, 2020), where it is time marked as in the present. It is proposed that, under some circumstances, information can be entered into the perceptual timescape and assigned a time distance marker some time in the past. That is the possibility represented by the arrow from perceptual processing to the perceptual timescape in Fig. 1(b). The assignment of information to particular points on the perceptual timescape, and the modification of information in the perceptual timescape, are realised by mechanisms in perceptual processing. The perceptual timescape is just a repository of processed information. The main proposal here is that the information structure of the perceptual timescape is needed for the products of postdictive processing to be represented; that is, for events to be perceived as in the proper temporal order despite the postdictive nature of the construct.
A typical stimulus for apparent motion will be used to illustrate what happens. This is shown in Fig. 6(a), which depicts both the spatial array and temporal information. A small visual stimulus such as a flash of light, shown in the figure as a yellow dot, is presented at location A (hereafter dot A) for 150 ms, followed by an inter-stimulus interval (ISI) of 50 ms, followed by a similar stimulus at a nearby location B (dot B) for 150 ms. This typically results in a percept of either phi or beta motion from location A to location B, perceived as occurring before dot B is perceived. No motion impression would occur if the dot at B was not presented, so the motion percept is a reconstruction of the history between the time of dot A and that of dot B. Observers do not report an empty temporal gap occurring between offset of dot A and onset of dot B before the apparent motion percept occurred. They report only the postdictive reconstruction.
Under the hypothesis of postdictive reinterpretation, the sequence of events as perceived is schematically depicted in Fig. 6(b). The columnar format of Fig. 2(c) is retained but multiple moments are represented, successive moments in successive columns. These are labelled at the bottom from m1 to m6. To see the state of information in the perceptual timescape at a given moment, look up and down the column above that moment. At that moment, only the information in that column exists; other moments are either in the past or in the future. To keep things simple, rate of change and connective information are not shown, and the temporal resolution of the perceptual timescape (shown by time markers from t−1 to t−5) is coarser than would actually be the case. The aim is just to show the changing information construct over time, not to represent the actual temporal resolution of the perceptual timescape.
The information that enters the perceived present is shown in the bottom row of frames, starting at m1 with information about dot A. At m2 the information about dot A has been transferred to the perceptual timescape and the blank ISI between the two stimuli is in the perceived present. This continues at m3, where the early part of the blank period is depicted as having entered the perceptual timescape. At m4 information about dot B enters the perceived present and simultaneously, or with minimal delay, the constructed representation of motion from dot A to dot B (represented by an arrow) is entered into the perceptual timescape. At this point, and not before, the stimulus is perceived as dot A followed by motion to the location of dot B, followed by dot B. This set of perceptual information persists in the perceptual timescape (subject to decay), proceeding through consecutive time distance markers, as shown at m5 and m6. The figure shows that the impression of motion is a reconstruction of the perceived past. The postdictive reinterpretation replaces the information that was previously there. By "there" is meant at the same time distance marker and the same perceived spatial location. This is consistent with evidence that information in the perceptual timescape is represented in spatiotopic or non-retinotopic co-ordinates (Ögmen & Herzog, 2016). The time marking information in the perceptual timescape yields the perceived temporal order of events.
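The sequence just described can be sketched as a direct write into past time distance markers. The content strings and marker values below are purely illustrative; the sketch shows only the replacement of blank-interval information by the constructed motion at the same markers.

```python
# Sketch of postdictive entry into the timescape (apparent motion):
# when dot B reaches the perceived present, constructed motion
# information is written directly to past time distance markers,
# replacing the blank-ISI information previously held there.
timescape = {1: ["blank ISI"], 2: ["blank ISI"], 3: ["dot A"]}
perceived_present = ["dot B"]

def postdictive_insert(timescape, markers, content):
    """Replace information at the given past markers with the
    reconstructed content (same markers, same perceived location)."""
    for k in markers:
        timescape[k] = [content]

# Perceptual processing constructs the motion and back-fills it:
postdictive_insert(timescape, markers=[1, 2], content="motion A->B")

# Read out in temporal order (deepest past first): A, motion, B.
order = [timescape[3], timescape[2], timescape[1], perceived_present]
assert order == [["dot A"], ["motion A->B"], ["motion A->B"], ["dot B"]]
```

The perceived order falls out of the time marking alone: nothing about the order in which the contents were constructed survives in the readout.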
The figure illustrates how a reconstruction of the recent past can be perceived, based on the temporal differentiation information in the perceptual timescape. The other main features of the perceptual timescape, rate of change and connective information, explain how the construct generates a perceptual experience of connected motion from dot A to dot B. The illustration in Fig. 6(b) is for phi motion, so only rate of change information for the motion is involved. The construct specifies motion properties at each time distance co-ordinate. Connectives link the motion properties, in effect saying that this motion at time tn is a continuation of this motion at time tn−1. Perceptual processing constructs a time-ordered representation of the stimulus, the perceptual timescape represents that construction, and it is the timescape's main properties of temporal differentiation information (time distance and ordinal temporal markers) and feature linkage information (rate of change and connectives) that enable that.
In the interpretation proposed here, the whole postdictive reconstruction emerges at the same time and components of the reinterpretation are simultaneously assigned to time distance markers in the perceptual timescape, as depicted in Fig. 6(b). That will be termed simultaneous release. An alternative possibility is temporally ordered release: the postdictively revised sequence of events would be released not all at once but all through the perceived present and thence to the perceptual timescape, in a temporal order corresponding to that of the stimulus. Under the temporally ordered release hypothesis, information about dot A is released first, then the motion information, with a duration that corresponds to the duration of the gap between the stimuli, and then the information about dot B is entered into the perceived present. Under that hypothesis, for the stimulus to be perceived as in the order dot A, motion, dot B, two things are required. One requirement is that all the information must be held back in perceptual processing before being released. The motion information is generated after dot B is processed. That implies that information about dot B must be held back until after the motion processing is complete. If that happens, the temporally ordered release hypothesis requires that information about dot A must also be held back for a similar duration, otherwise the temporal relation between dot A and dot B would be represented as much longer than it actually was. So the sequence of events would be: (i) all stimulus information is processed and held in some sort of buffer in perceptual processing until processing of dot B is finished; (ii) information about dot A is released, then the apparent motion construct, then dot B, on the same time scale as the stimulus. The other requirement is time marking information. Time marking is generated locally for the stimulus information in perceptual processing and that then governs the time marking that gives rise to the perceived order of events in the perceptual timescape.
Thus, with simultaneous release, the time marked information emerges all at the same time; with temporally ordered release, it emerges in a temporal order, and on a time scale, that corresponds to that in the time marking information. That implies that simultaneous and temporally ordered release of the postdictively constructed interpretation generate different predictions for reaction times. With simultaneous release, reaction times to different pieces of information generated in the postdictive reinterpretation should be similar, regardless of differences in their time distance markers. With temporally ordered release, reaction time to the motion information should be shorter than that to dot B. There is evidence relevant to those predictions, from a study by Cowan and Greenspahn (1995). They presented a standard apparent motion stimulus with small circles as elements. Participants were asked to make a key press when they perceived motion reaching either the point midway between the locations of the two elements or the endpoint, arrival at the location of the second element. Fig. 7 shows the stimulus presentations. The three exclamation marks, which were part of the stimulus presentation, indicated, from left to right, the locations of the first circle, the mid-point, and the second circle. The first circle is shown. Reactions were timed from the appearance of the second circle, which initiated the apparent motion percept. The mean reaction time, measured from onset of the second circle, was 338 ms for both judgments. This lack of difference is predicted by the simultaneous release hypothesis. It does not fit with the temporally ordered release hypothesis because, under that hypothesis, the apparent motion would be perceived in real time, so that the end of it would be perceived later than the middle of it. That did not happen. The perceptual impression of temporal order is created by the labelling of that perceptual information with time distance and ordinal time markers, and exists only in the perceptual timescape. Some clarification is in order. The temporal differentiation of information in the perceptual timescape could be taken as implying actual differences in time of occurrence of the sequence of events. That in turn might be taken as implying that the identical reaction times count against the perceptual timescape hypothesis. It is important to distinguish between time as represented in the perceptual timescape and the objective time at which things happen. Time markers in the perceptual timescape give the impression of temporal order in the apparent motion percept. But that information emerges and is assigned its various different time distance markers in the perceptual timescape at the same time, and that is why the simultaneous release hypothesis predicts the identical reaction times. Real time and experienced time are not the same: it is the real time of emergence that determines the reaction times.

Fig. 7. Schematic representation of apparent motion stimuli used by Cowan and Greenspahn (1995). The "o" shows the location of the first circle presented. The second circle appeared at the location under the right-hand exclamation mark in both conditions. In each condition participants reported when the apparent motion reached the location of the asterisk.
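The contrasting reaction time predictions of the two release hypotheses can be sketched as a toy model. All the duration values below are illustrative assumptions, not figures taken from Cowan and Greenspahn (1995):

```python
# Toy model contrasting the two release hypotheses for apparent motion.
# All durations (ms) are illustrative assumptions, not empirical values.

BASE_LATENCY = 250   # assumed perceptual + motor latency to any released item
GAP = 150            # assumed gap between dot A offset and dot B onset

def rt_simultaneous(component: str) -> int:
    """Simultaneous release: the whole postdictive construct emerges at once,
    so reaction time is the same whichever component is probed."""
    return BASE_LATENCY

def rt_ordered(component: str) -> int:
    """Temporally ordered release: components pass through the perceived
    present on the stimulus's own time scale, so later components
    yield longer reaction times."""
    offsets = {"dot A": 0, "motion midpoint": GAP // 2, "dot B": GAP}
    return BASE_LATENCY + offsets[component]

# Simultaneous release predicts identical RTs to the motion midpoint and to
# dot B, which is the pattern Cowan and Greenspahn (1995) observed.
assert rt_simultaneous("motion midpoint") == rt_simultaneous("dot B")
# Temporally ordered release predicts a longer RT to dot B than to the midpoint.
assert rt_ordered("dot B") > rt_ordered("motion midpoint")
```

The point of the sketch is only that the two hypotheses diverge in a directly testable way: any nonzero gap duration makes ordered release predict an RT difference that simultaneous release rules out.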
Reporting what is perceived involves transfer of information from the perceptual timescape to working memory, where it can be accessed by post-perceptual processes that include verbal report generation processes. If the original interpretation is replaced before information about it is transferred to working memory, it will not be reportable, and only the postdictive reinterpretation will be reportable. As illustrated in Fig. 6(b), the postdictive information replaces whatever information was at the time distance (and spatial) co-ordinates in question up to that point, in this case after 200 ms (at m4 in the figure). It is not likely that the original perceived past could be reported because it lasts for a very brief time. In fact, as Fig. 6(b) shows, there is never a moment at which there is a percept of dot A followed by a blank followed by dot B: as soon as information about dot B enters the perceived present, the motion information is entered into the perceptual timescape (compare the information shown at m3 with that at m4). Once there, the reinterpreted version persists on the time scale of the perceptual timescape, as shown in Fig. 6(b) (from m4 to m6), and that would enable information about it to be transferred to working memory and hence to verbal report generation processes. Another possibility is that the verbal report generation process is initiated only after perceived termination of the stimulus, at which point only the postdictively modified interpretation would be available to it. Either possibility could explain why only the postdictive reconstruction can be reported.
The argument so far has assumed that postdictive interpretation is involved in apparent motion, but there could be other ways of explaining apparent motion. Sterzer, Haynes, and Rees (2006) found evidence for recurrent or feedback connectivity filling in the path of apparent motion, but that is not inconsistent with the postdictive nature of the percept. Keuninckx and Cleeremans (2021) noted that different features of perceptual objects are processed in different pathways, and those different pathways may have different processing latencies. If motion processing has a shorter latency than processing of form information then motion from dot A to dot B could be constructed before dot B is constructed. Keuninckx and Cleeremans (2021) argued that a kind of priming effect could be involved. Applied to a typical apparent motion stimulus, when dot A suddenly appears, this primes all nearby locations to detect motion, so motion processing is speeded when dot B is presented nearby. Speeded processing of motion could result in a percept of phi motion occurring before dot B is perceived as a coherent perceptual object. If that happens then postdictive reinterpretation would not occur and there would be no need for direct entry of information into the perceptual timescape as proposed here.
Research on processing latencies does not support that argument. There is evidence that both colour and form information are processed more rapidly than motion information (Bartels & Zeki, 2006; Moutoussis & Zeki, 1997a, 1997b; Zeki, 2015), which is the opposite direction of difference to that required by the processing latency difference hypothesis. It is not impossible that the priming effect for motion accelerates processing sufficiently to reverse that difference, but there is as yet no evidence concerning that possibility. A second problem is that the priming hypothesis predicts that motion would be perceived in temporal order, so that the endpoint would be perceived after the mid-point was perceived. That prediction is disconfirmed by the finding of similar reaction times to both (Cowan & Greenspahn, 1995).
Any hypothesis not involving postdictive reinterpretation would require that information about both dots is held back in perceptual processing. Dot A would be presented and processed but not perceived because it was held in some sort of buffer in perceptual processing. Then, after the ISI, dot B is presented and processed but would also be held in a buffer. When the complete percept incorporating beta or phi motion had been constructed, the processed information would be released in its proper temporal order through the perceived present. The total duration of the stimulus in Fig. 6(a) (and commonly in stimuli used in apparent motion research; e.g. Kolers & von Grunau, 1975, 1976) is 350 ms, so the minimum latency to emergence of dot A as a percept would be the processing latency for dot A plus 350 ms. A similar delay applies to dot B. The results reported by Cowan and Greenspahn (1995) do not favour this hypothesis, however. First, the hypothesis would imply that reaction times would be longer for later components of the percept, which Cowan and Greenspahn did not find. Second, the mean reaction time of 338 ms is the time from stimulus onset to production of a response, so it includes not just perceptual processing time but also time for "neuro-muscular conduction, direct motor commands, and decision-making processes" (Railo et al., 2015, p. 7; see also Posner & Mitchell, 1967). Emergence of the percept must have occurred earlier, possibly as soon as 200 ms after stimulus onset (Teichner, 1954; Railo et al., 2015). That is not compatible with the minimum delay required if postdictive reinterpretation is not going on.
It is likely, therefore, though not certain at present, that postdictive reinterpretation does occur for apparent motion stimuli. Whatever mechanism may generate the perceptual phenomena, the perceptual timescape is still required to represent the resultant information. When the postdictive reinterpretation is made, it is inserted into the timescape all at once, and replaces the previous information that was there about the recent perceived past. The time distance and ordinal time information in the perceptual timescape enable the postdictively reinterpreted information to be perceived as temporally differentiated and ordered.

Temporal integration occurs in perceptual processing and does not draw on information in the perceptual timescape
As was shown in Fig. 1(b), it is hypothesised that the road between perceptual processing and the perceptual timescape is a one-way street: information generated in perceptual processing can be entered, in principle, into any location in the perceptual timescape, but information does not return from the perceptual timescape to perceptual processing. That hypothesis is at odds with several claims that information held in the sub-second store can be returned to perceptual processing and entered into temporal summation or integration processes. This section evaluates those claims.

Temporal integration
Temporal integration involves some form of accumulation of information over time in perceptual processing. This implies retention of collected information over whatever span of time is involved (allowing for the possibility of decay), and some processing mechanism operating on the accumulating information to generate some informational product. Several processing mechanisms or methods for temporal integration have been proposed, including weighted averaging (Lappe & Krekelberg, 1998; Miller & Arnold, 2015; Morgan & Watt, 1983), leaky integrator models, which account for decay of information while in the temporal integration window (e.g. Locke, 2007), and evidence accumulator models (Smith & Vickers, 1998; Vickers, 1970; Vigano, Maloney, & Clifford, 2017), which involve choice between alternative hypotheses based on accumulation of evidence until some decision criterion is reached.
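Two of the mechanism classes named above can be illustrated with minimal sketches. These are generic textbook forms, not the specific models of the cited authors, and the parameter values (leak rate, decision criterion) are arbitrary assumptions:

```python
# Minimal sketches of two temporal integration mechanisms.
# Parameter values (leak, criterion) are arbitrary illustrative assumptions.

def leaky_integrate(samples, leak=0.8):
    """Leaky integrator: each new sample is added to a state that decays
    by a constant factor per step, so older evidence contributes less."""
    state = 0.0
    for s in samples:
        state = leak * state + s
    return state

def accumulate_to_criterion(evidence, criterion=3.0):
    """Evidence accumulator: sum signed evidence for two alternatives and
    decide as soon as either running total reaches the criterion. Returns
    the chosen alternative and the number of samples consumed, or
    (None, n) if neither criterion was reached."""
    total = 0.0
    for i, e in enumerate(evidence, start=1):
        total += e
        if total >= criterion:
            return "A", i
        if total <= -criterion:
            return "B", i
    return None, len(evidence)
```

The leaky integrator captures decay within the integration window; the accumulator captures the idea that the duration of integration is governed by a decision criterion rather than a fixed time span.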

Claims of information being fed back from the sub-second store to temporal integration processes
In the studies reviewed above showing temporal integration over long periods, there has been no suggestion that that integration occurs anywhere other than in perceptual processing. There have been several claims, however, that information is fed back from the sub-second store into perceptual processing and integrated with new input there. These claims have come from studies of integration of offsets in successively presented vertical bars called verniers (Scharnowski et al., 2007; Scharnowski et al., 2009), integration of temporally successive geometrical fragments into a coherent figure (Akyürek & Wolff, 2016), integration of temporally separated part-stimuli into a whole (Greene, 2016), integration of object fragments under dynamic occlusion (Palmer et al., 2006), integration of temporally separated components of a figure for the Poggendorff illusion (Sugita et al., 2018), and perception of coherent motion direction of multiple dots (Vlassova & Pearson, 2013). There is insufficient space here for a thorough analysis of all the claims, but it can be argued that there are several grounds for thinking that the claims are not justified and that temporal integration operates entirely within perceptual processing.

Time scales of temporal integration
It has been found that information in the sub-second store decays rapidly, with a maximum duration of ~1000 ms, but with most of the information having decayed within the first 300 ms (Coltheart, 1980; Ögmen & Herzog, 2016; Sligte et al., 2010; Sperling, 1960). If temporal integration involves information in, or transferred from, the sub-second store, then the time scale of decay of that information should resemble that of the sub-second store. In fact the evidence shows temporal integration of different kinds having multiple different time scales. Among those with purported involvement of the sub-second store, temporal integration has been found to occur over 160–270 ms (Palmer & Kellman, 2014), ~200 ms (Sugita et al., 2018), ~240 ms (Akyürek, Eshuis, Nieuwenstein, Saija, Baskent, & Hommel, 2012; Akyürek & Wolff, 2016; Wyble, Bowman, & Nieuwenstein, 2009), 370–400 ms (Scharnowski et al., 2007, 2009), ~600 ms (Greene, 2016), and ~800 ms (Vlassova & Pearson, 2013). None of these is outside the time scale of the sub-second store but they differ from each other, whereas similar time scales would be expected if the information was held in the sub-second store. As a brief illustration, the window of integration found by Akyürek and Wolff (2016) is limited to 240 ms. Wyble et al. (2009) found that order information is lost for stimuli presented within a time span of ~200 ms but not for stimuli presented beyond that window. Akyürek et al. (2012) found evidence that integration did not occur over longer spans of time. That window of integration is clearly shorter than some cited above, and also much shorter than the maximum persistence of information in the sub-second store.
Moreover, given that temporal integration within perceptual processing can operate over time scales of hundreds and even thousands of milliseconds, as reviewed above, it is not clear why the involvement of the sub-second store should be postulated for these kinds of temporal integration. Some of them are similar in kind to others that appear to take place within perceptual processing: perception of coherent motion of numbers of dots (Vlassova & Pearson, 2013) is an example (cf. Burr & Santoro, 2001). The differences in results between studies probably indicate that there are multiple different temporal integration mechanisms within perceptual processing, operating on different time scales.

Decay curve of information
If the information was held in the sub-second store, retention would be maximal at zero delay and rapid decay would follow that. Where decay information is available in the studies where involvement of the sub-second store has been postulated, the decay curve does not fit with that. In Scharnowski et al. (2007, 2009) there was a shift from dominance of the second stimulus, up to about 120 ms after the end of the stimulus presentation, to dominance of the first stimulus from ~170 ms on. That indicates a slower rate of decay for the first stimulus than for the second, whereas if the stimuli were held in the sub-second store both would decay at the same rate. Sugita et al. (2018) investigated the Poggendorff illusion, in which a straight diagonal line is partly occluded by a rectangle and the line segments appear not to be collinear when in fact they are. Fig. 8(a) shows the illusion and Fig. 8(b) shows the same line segments but with the rectangle removed. Sugita et al. presented the components of the Poggendorff figure sequentially to ascertain how the magnitude of the illusion varied with the interval between the components. The illusion was found when the rectangle was presented from 50 ms before to 200 ms after the line segments, and was maximal about 100–150 ms after. If the information was held in the sub-second store, the illusion would have been maximal at zero delay, and would have declined from that point on, which does not fit with what they found. Vlassova and Pearson (2013) presented stimuli comprising multiple dots in motion for 250 ms and set the task of judging global motion direction, with judgment delayed by varying amounts of time. They found peak performance at 400 ms delay and then a decline at 600 ms and 800 ms delay. This also does not fit with the expected decay curve for information in the sub-second store, which should yield peak performance at zero delay and decline thereafter.
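The logic of the decay-curve argument can be stated compactly. The values below are illustrative only, loosely patterned on the delay conditions of Vlassova and Pearson (2013) but not taken from their data:

```python
# Toy check of the decay-curve argument. A sub-second-store account predicts
# monotonic decline in performance from zero delay; several findings instead
# show peak performance at a nonzero delay. Values are illustrative only.

def is_monotonic_decline(values):
    """True if performance never improves as delay increases."""
    return all(a >= b for a, b in zip(values, values[1:]))

# Predicted pattern if the information sat in the sub-second store
# (delays of 0, 200, 400, 600, 800 ms; made-up proportions correct):
store_prediction = [0.9, 0.8, 0.7, 0.6, 0.5]

# Pattern qualitatively like Vlassova and Pearson (2013): peak at 400 ms.
observed_pattern = [0.6, 0.7, 0.8, 0.7, 0.6]

assert is_monotonic_decline(store_prediction)
assert not is_monotonic_decline(observed_pattern)
```

Any non-monotonic delay curve of this shape is incompatible with a store whose retention is maximal at zero delay, whatever the absolute values.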

Reportability of stimulus information
Information in the sub-second store can be reported (Bronfman et al., 2014; Sperling, 1960). To be precise, it can be transferred to working memory where it can be accessed by report generating mechanisms. Much of the information decays before it can be reported but some survives and is reportable. That was the first evidential basis for postulation of the store (Sperling, 1960). Thus, if a certain state of information cannot be reported, that would indicate that it is not in the sub-second store. Two of the studies under consideration in this section have yielded relevant evidence. Scharnowski et al. (2007, 2009) found integration of two temporally separate stimuli on a time scale of 400 ms but only the product of the integration could be reported. The individual stimuli could not be reported, even though there was evidence that both persisted as individual stimuli on that time scale (Scharnowski et al., 2009). The inability to report the individual stimuli indicates that they were not in the sub-second store. Akyürek et al. (2012) reported evidence for temporal integration of successively presented stimuli on a time scale of about 200 ms. As an example, they successively presented "/" and "\" in a stream of stimuli at intervals of 80 ms and found a tendency to report "×" as a percept, indicating integration of the two stimuli. Participants reporting "×" were not able to report the individual slash stimuli as such, nor were they able to report the temporal order in which they occurred. This also indicates that the individual stimuli were not in the sub-second store.

Fig. 9. Schematic depiction of the fading of the icon. A stimulus of the kind used by Sperling (1960) is depicted at the left of the figure. Information about this stimulus enters the perceived present and then the perceptual timescape. Each representation shown in the perceptual timescape exists at a different moment; that is, there is only one representation of the stimulus in the perceptual timescape at any given moment, though this can be represented as having a duration of 50 ms and consequently as spread out across more than one time distance marker. Information transferred to working memory is shown above the perceptual timescape but it is not part of the perceptual timescape. A semantic (verbally represented) representation of the impression of fading generated by a process acting on information in working memory is shown, along with surviving information about the stimulus content. Access to that information by a verbal report generation process is shown.

The perceptual timescape segregates and does not integrate
The final argument is theoretical. In the present proposal, information in the perceptual timescape is segregated, not integrated. It is segregated by time marker information, supporting a coherent representation of recent history. If the information in it were integrated to form some overall percept, that would result in massive conflation of stimulus information presented at different moments in time, and that would be contrary to the main function proposed for the perceptual timescape.
If the perceptual timescape deals only in segregation, what is happening in the cases of temporal integration considered here? As in other examples of temporal integration cited in section 5.1, information is held in perceptual processing for the duration of the integration process. When the process concludes, either because a limit on the time span of the temporal integration window has been reached (Scharnowski et al., 2009) or because a decision criterion has been reached (Smith & Vickers, 1998), the resultant integrated set of perceptual information is entered into the perceived present as part of the ongoing stream of visual processing. In many instances of temporal integration, such as those where a signal detection mechanism is operating (e.g. Burr & Santoro, 2001), it is not likely that nothing at all is released from perceptual processing before integration is complete. On the contrary, participants would probably be perceiving the dots and their motion that were presented as stimuli, but the specific percept of optic flow would not be added in to the ongoing perceptual products until the integration mechanism had generated it. If the stimulus terminated before the optic flow percept was constructed, dots and their motion would still have been perceived, just without the optic flow construct. The operation of a temporal integration mechanism, therefore, need not imply that nothing at all is perceived on a shorter time scale; only the particular product of the integration process would be missing.
In summary, perceptual processing integrates (among other things) and the perceptual timescape segregates. There is a one-way street from perceptual processing through the perceptual timescape: once information has entered the perceptual timescape, it does not return to perceptual processing. Evidence that has been claimed as showing that in fact concerns temporal integration occurring within perceptual processing, with no input from the perceptual timescape.

Sperling (1960) reported that his stimulus presentations gave rise to an initially rich representation that was experienced or reported as fading on the time scale of the sub-second store and before all of it could be reported (see, e.g., Sperling, 1960, p. 20, p. 23, p. 26): this was characterised as the "introspective feeling of rich perception" by Vandenbroucke et al. (2014, p. 861). Stimulus presentation time in Sperling's (1960) experiments was in most cases 50 ms with rapid decay of illumination after offset of the light flash (Sperling, 1960, p. 3), so the reported fading is a feature of memory, not of the stimulus itself. The question for this section is how fading of the memory trace can be experienced and reported. The traditional view of the fading of the icon was represented in Fig. 2(a), where fading is depicted along an objective timeline. Fading of the icon does occur but the representation of that fading in Fig. 2(a) conflates multiple moments in time and therefore does not capture how fading is experienced. We only have access to a single moment of time, so perceiving an icon as fading would require multiple simultaneous representations of the icon at different time distance markers in the perceptual timescape. It will be shown that, for stimuli of the kind used by Sperling (1960), that cannot happen.

The fading of the icon
The interpretation of the fading of the icon in terms of the perceptual timescape is schematically depicted in Fig. 9. A stimulus is presented for 50 ms, shown at the lower left of Fig. 9. It is perceptually processed and the product of that processing is entered into the perceived present at m1. It is transferred from there to the perceptual timescape at m2. Once there, the information decays or fades on the time scale of the perceptual timescape. The fading is shown in the figure. Successive representations of the stimulus are shown along a diagonal because they represent both the objective passage of time (from m2 to m5) and the stimulus information moving to further time distance locations, indicated by abstract designations "t−1" to "t−4". As in previous figures, to see the state of information at a particular moment in time, look up a column, not along the figure from left to right. At any given moment there is just one representation of the stimulus in the perceptual timescape, with whatever degree of fading has occurred at that point. A single moment does not show fading actually happening; it shows only one moment in the history of fading. There could be more than one representation at a given moment (i.e. information about the stimulus at more than one time distance marker) if the temporal resolution of the visual information is less than 50 ms. That would still not show fading, however, because fading is a change in the state of information in the perceptual timescape, not a change in the stimulus. That is why the representation proceeds up the diagonal in the figure, not up a column. The fading of the icon cannot be represented at a single moment in the perceptual timescape because it is something that happens over objective time, i.e. over the decay curve of the sub-second store, not from one time distance marker to the next at one time.
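The diagonal structure described above can be made concrete with a small sketch. This is a toy data structure for illustration only; the function name and the halving decay rule are assumptions, not part of the proposal's formal machinery:

```python
# Toy sketch of the point made above: at any single moment the timescape
# holds only ONE representation of a brief static stimulus, at one time
# distance marker; fading is visible only across successive moments.
# The decay rule (strength halves per moment) is an arbitrary assumption.

def timescape_at(moment: int):
    """Return the timescape state at objective moment m1, m2, ... as a
    {time_distance_marker: strength} mapping. The stimulus enters the
    perceived present at m1 and the timescape at m2, then drifts to ever
    more distant markers while its strength decays."""
    if moment < 2:
        return {}                      # still in the perceived present
    distance = moment - 1              # t-1 at m2, t-2 at m3, ...
    strength = 0.5 ** (moment - 2)     # assumed decay: halves per moment
    return {f"t-{distance}": strength}

# At every moment there is exactly one entry: a single snapshot in the
# history of fading, never several stages of the fade at once.
for m in range(2, 6):
    assert len(timescape_at(m)) == 1
```

Because each moment's state is a single marker-strength pair, no single state can exhibit fading; only the sequence of states across moments does, which is the argument for why the impression of fading must be constructed in working memory.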
It follows that the impression of fading is not given in the perceptual timescape. It must result from information being entered into working memory over the span of time of its representation in the perceptual timescape and being integrated there into a unitary representation of fading. That might, for example, comprise an abstract impression that more information had been there originally than remained later on. That impression could then be accessed by verbal report generation processes. That possibility is represented in the upper part of Fig. 9. Some integration or comparison process operates on information that has been transferred into working memory. That process results in some kind of representation of fading in working memory, alongside whatever letters or digits can be transferred to working memory. (In that latter respect Fig. 9 illustrates the full report condition of Sperling's study.) In a nutshell, the perceptual timescape is not representing the stimulus as fading; it houses a fading representation of a non-fading stimulus.
Compare that with a visual stimulus that objectively fades, over 300 ms, for example. In that case, objective fading could be represented in the perceptual timescape. Fig. 10 schematically depicts the sequence of events and can be compared with Fig. 9. Again, the number of boxes shown is a presentational convenience and is not meant as a hypothesis about discrete frames of perception, nor as an indicator of the temporal resolution of information in the perceptual timescape. Fig. 10(a) shows the stimulus objectively fading over the time course of its presentation. Fig. 10(b) shows the stimulus information in the perceptual timescape. At m1, the end of the stimulus presentation is in the perceived present and the surviving history of the stimulus is in the perceptual timescape. As time moves on, the successive representations of the stimulus move to time distance markers further in the past until, by m5, only the end of the stimulus remains. It can be seen that, for a time approximating to the stimulus presentation duration, a representation of the fading of the stimulus is held in the perceptual timescape. That representation constitutes the perceptual experience of fading. The fading information would be represented with rate of change information and connectives (not shown in the figure), anchored to time distance and ordinal time markers in the perceptual timescape.
For the perceptual impression of fading to be reported, it would still be necessary for information to be transferred to working memory where it could be accessed by verbal report generation processes. That might still require integration of the perceptual timescape information into a form compatible with the limited capacity of working memory, such as a semantic summary representation. The difference is just that the integration could be based on a set of information covering multiple time distance markers at a single moment in the perceptual timescape. In Fig. 10(b) the semantic summary generation is shown drawing on information at m2; it could draw on any representation from m1 to m4. Thus, the fading of this stimulus can be perceived (represented in the perceptual timescape) because the perceptual timescape holds simultaneous representations of multiple moments in the history of the stimulus, and those show fading occurring. The perceptual information about fading just needs to be transferred to working memory to enable a report to be made. Vandenbroucke et al. (2014) investigated the impression of richness in the stimulus information before it faded by posing a metacognitive task. An item in a stimulus array of several items was cued for recall and, in addition, participants were asked to report their confidence in their perceptual judgments. Participants did not know in advance which item would be cued for recall, so their level of confidence on that item "represents metacognition for any item in the display" (p. 870). Vandenbroucke et al. (2014) interpreted their results as showing that more objects could be stored in the sub-second store than participants were able to report, and that the level of accuracy in confidence judgments for the sub-second store was similar to that for working memory. They concluded that "unattended, sensory memory items are a meaningful part of visual experience" (p. 870). In other words, the impression of richness is veridical and can be accurately reported.

Fig. 10. Schematic depiction of the representation in the perceptual timescape of a stimulus that objectively fades over 300 ms. 10(a): Depiction of the stimulus as fading, with four snapshots taken at intervals over the 300 ms presentation duration. 10(b): At m1 the end of the stimulus is in the perceived present and the recent history of the stimulus is represented at different time distance markers in the perceptual timescape. As time goes on, the successive representations of the stimulus move through the time distance markers until, at m5, only the end of the stimulus remains. Division into four time distance markers is just for convenience of presentation and is not meant to imply either that the stimulus information is represented in discrete frames or that the temporal resolution is that coarse.
Whether information in sensory memory is conscious or, as Vandenbroucke et al. (2014) claimed, part of visual experience, is not at issue here. The issue concerns how the metacognitive judgments are generated. The initial stimulus was presented for 250 ms. This was followed by an interval of 1050 ms, within which a cue was briefly presented. Then a second stimulus was presented and participants reported whether the cued item had changed or not. They then made their metacognitive judgment. The whole sequence covers a span of time much greater than the outer limit on storage in the sub-second store. By the time the metacognitive judgment is made, all information about the initial stimulus will have been lost from the sub-second store. Because of that, accuracy in metacognitive judgment must be based on information transferred from the sub-second store to working memory. This is similar to the situation depicted in Fig. 9. This would require the surviving information to be reduced to something compatible with the limited capacity of working memory.
Exactly what form the information would take is not clear. It could, however, be analogous to the statistical information about multiple contemporaneous stimuli that is extracted by ensemble coding (Bronfman et al., 2014). Ensemble coding is automatic and outside the focus of attention, indicating that it probably occurs in perceptual processing (Alvarez & Oliva, 2009; Watamaniuk et al., 1989; Whitney & Yamanashi Leib, 2018). That possibility is further supported by evidence that ensemble coding occurs in the visual system (Im et al., 2017; Whitney & Yamanashi Leib, 2018; Ying & Xu, 2017). Im et al. (2021) found activation of the dorsal stream for crowd stimuli (eight faces), and not for individual face stimuli, from 68 ms after stimulus onset. That is long before information would enter the perceptual timescape. If that is the case, accuracy in metacognitive judgment could reflect extraction of relevant information in perceptual processing, even before entry to the perceptual timescape. The subjective richness of experience could itself be an ensemble property extracted during perceptual processing.
Whatever the source of accuracy in metacognitive judgment might be, the judgment itself is generated from information that has been transferred to working memory. Fading and richness are two sides of the same coin; both represent a summary impression that could be generated in perceptual processing and then transferred to working memory, or constructed in working memory from information accumulated there in reduced form over time. Regardless of which of those is the case, post-perceptual processing, of which metacognitive judgment is an example, operates on information in working memory, not on information while it is still in the perceptual timescape. Information can be transferred from the perceptual timescape to working memory by attentive selection (Ögmen & Herzog, 2016; Otto et al., 2010), and that is where processes such as metacognitive judgment and verbal report generation operate on it.

General discussion
Initially the sub-second store was regarded as holding a fading icon, a static image of recent input (Coltheart, 1980; Neisser, 1967; Sperling, 1960). Haber (1983) pointed out the need for the sub-second store to maintain a dynamic representation of continuity of change but did not develop a theoretical proposal or report any research on it. There have been theoretical developments in understanding of the sub-second store, notably that it houses a nonretinotopic or spatiotopic representation (Ögmen & Herzog, 2016), but there has been no theoretical proposal about the form that information takes in the sub-second store and the function subserved by it. The aim of the present paper was mainly to address those two issues.
The form of information, namely the timescape with time distance and ordinal temporal information, has been expounded earlier in the paper. In terms of function, the proposal addresses a fundamental problem for perception: we have access to only one moment at a time, but perceiving change of any kind requires information about more than one moment. The solution to that is a memorial representation of recent history. The temporally differentiated representation that is the perceptual timescape is that memorial representation.
There can be spatially and temporally local markers of change, such as velocity information at a moment in time, but that would not suffice for the temporally extended representation of things going on in the world that characterises perceptual experience. We do not experience a series of momentary images with change information attached. We experience ongoing motion, change, and persistence of perceptual objects, all of which are enabled by the perceptual timescape. When we see a ball flying through the air, we perceive its recent history as well: whether it has been in flight for a while, or whether it was hit by a bat half a second ago, that information is still there as a historical background to the ball's current location and motion. Without the perceptual timescape, perceptual experience would be severely impoverished, with debilitating consequences for action.
The time marker information in the perceptual timescape is vital for the segregation of information into an order defined by temporality. However, that does not by itself generate perception of things going on. That is accomplished by the other two kinds of information discussed earlier in the paper, rate of change information and connectives. Rate of change information is attached to every feature of every object and describes how that feature is changing (or not) at a given moment in the representation. Connectives bind the rate of change information and the object and feature representations together over the historical period covered in the perceptual timescape. It is those connections that generate a representation of the perceived world, not only as having a history, but as comprising coherent objects that change over time while maintaining their identity as objects. A musical note in the perceived present can be perceived as the same ongoing note as one at a time distance marker of t-100 ms because there is a thread of connectives linking the representations of the note in the perceptual timescape across that span of time. Such a connected representation is necessary for the perception of object motion studied by, for example, Narasimhan et al. (2009) and Shooner et al. (2010). Temporal binding is as important for perception of objects as spatial binding is, and connectives are the informational markers of temporal binding. Fig. 11 gives a simple graphical illustration of this.
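To make the proposed information structure concrete, it can be encoded as a minimal data structure in which each snapshot of an object carries a time distance marker and rate of change information, and connectives link snapshots across adjacent time distances. All type and field names here (`ObjectSnapshot`, `FeatureRecord`, `connective`, and so on) are illustrative assumptions, not terms defined in the paper, and the sketch is not a claim about how such a structure would be realised neurally.

```python
from dataclasses import dataclass, field

@dataclass
class FeatureRecord:
    value: float            # e.g. pitch of a note, position of a ball
    rate_of_change: float   # momentary change information for this feature

@dataclass
class ObjectSnapshot:
    object_id: str
    time_distance_ms: int   # how far in the past this snapshot lies
    features: dict = field(default_factory=dict)  # feature name -> FeatureRecord
    connective: "ObjectSnapshot | None" = None    # link to the next-older snapshot

def perceive_history(snapshot):
    """Walk the thread of connectives from the perceived present back
    through the timescape, yielding the object's recent history in order."""
    while snapshot is not None:
        yield snapshot.time_distance_ms, snapshot.features
        snapshot = snapshot.connective

# A musical note held across 200 ms: three snapshots bound by connectives,
# so the note at t-100 ms is represented as the same ongoing note.
older = ObjectSnapshot("note", 200, {"pitch": FeatureRecord(440.0, 0.0)})
mid = ObjectSnapshot("note", 100, {"pitch": FeatureRecord(440.0, 0.0)}, older)
now = ObjectSnapshot("note", 0, {"pitch": FeatureRecord(440.0, 0.0)}, mid)

history = list(perceive_history(now))
```

Walking the connective thread recovers the object's recent history in temporal order, which is the function the text attributes to connectives and time distance markers working together.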
One possible objection to the present proposal might be that there is a natural temporal order in things, that the natural temporal order is perpetuated in perception, and that this suffices for perception of things happening. A ball in flight occupies position A, then position B, then position C. That temporal order is there also in the light from the ball that enters the eye. It is then perpetuated through perceptual processing, and that results in perception of the ball as passing through positions A, B, and C in that order. That much is true. Things are indeed (usually) perceived in the order in which they enter perceptual processing, which respects the order and time scale of the event being perceived. But to perceive things as occurring in that order, the perceptual timescape is needed. Without the perceptual timescape our perceptual experience would be entirely of now, where that term can be defined with reference to the temporal resolution of information in the perceived present. The example of the ball in flight discussed earlier in this section shows that. For the natural temporal order of things to be perceived as such, information about recent history must form part of perception. That is what the perceptual timescape adds to what is in the perceived present.

Evidence
There is at present no evidence that directly confirms the existence of the proposed timescape. There are, however, several lines of evidence that are consistent with it, and some evidence that would be hard to explain without it. Smith, Mollon, Bhardwaj, and Smithson (2011) found evidence for encoding of temporal order of stimuli in the sub-second store. There is evidence that motion information is represented in the sub-second store (Bradley & Pearson, 2012; Demkiw & Michaels, 1976; Huynh et al., 2017; Narasimhan et al., 2009; Ögmen & Herzog, 2016; Ögmen, Otto, & Herzog, 2006; Shooner et al., 2010; Tripathy & Ögmen, 2018). For example, in a study of memory for multiple object motions, Narasimhan et al. (2009) found evidence for persistence of motion trajectory information in the sub-second store, decaying over a few hundred milliseconds. That research shows evidence for representation of dynamic change and temporality in the sub-second store, but there is no explanation of how dynamic change and temporality can be represented there. That explanation requires the postulation of a temporally differentiated information structure on the sub-second time scale. Without something of that sort, the persisting representations of object motion found by those researchers could not exist.
There have been claims of evidence for the sub-second store in the brain in the form of briefly persisting patterns of neural activity in at least three areas: the temporal visual cortex (Keysers, Xiao, Földiák, & Perrett, 2001, 2005; Rolls & Tovée, 1994; Rolls, Tovée, & Panzeri, 1999), the lateral occipital cortex (Ruff, Kristjansson, & Driver, 2007), and the right middle frontal gyrus (Ruff et al., 2007). It is difficult to know what to make of this evidence because the time scale of retention of information is less than would be expected for the perceptual timescape. The studies by Rolls and Tovée (1994) and Rolls et al. (1999) found evidence of activity persisting for 100-300 ms, which is longer than the duration of visible persistence but still shorter than the maximum duration of information in the sub-second store. The evidence also says little about what goes on in the sub-second store. However, a few studies hint at possible foundations for representations of temporal order.
There is evidence for neural responses to a stimulus that differ depending on what the previous stimulus was, on the millisecond time scale (Klampfl, David, Yin, Shamma, & Maass, 2012; Nikolić, Häusler, Singer, & Maass, 2009; Nortmann, Rekauzke, Onat, König, & Jancke, 2015). This shows registration of ordinality and change on a short time scale and could support a mechanism for constructing a series of connected ordinal representations. To illustrate, Nortmann et al. (2015) found cells in V1 that responded not to individual stimuli but to differences between the present and previous stimuli. For example, if a stimulus of vertically oriented bars was followed by a superposition of vertically and horizontally oriented bars, the cell responses represented the horizontal orientation almost to the exclusion of the vertical. Nortmann et al. argued that this could support change detection, but it could also support temporal ordinality information in perception. One problem is that the stimulus frame rate had to be quite slow (~10 Hz). At faster rates, cells responded just to individual stimulus features. To function as a register of temporal order and change information in the perceptual timescape, finer temporal resolution would be required.
Keysers et al. (2005) presented a series of different stimuli at a rate of 18 ms per stimulus and found activity in the superior temporal sulcus that persisted beyond each stimulus for ~60 ms. They suggested that this briefly persisting activity would support integration of temporally successive visual inputs, such as successive views of a head rotating. In a normal visual environment, where things tend to be connected over time, such activity could function to bind together successive inputs to construct a connected representation of unfolding events, which could include informational connection of discontinuities between stimuli of the sort they used in their study. That would not itself be the perceptual timescape because the time scale of persistence of any single stimulus representation is too short, but it could be the binding function that supports the temporal connectivity of information that is then entered into the perceptual timescape. That is, it could support temporally differentiated information represented in the perceptual timescape in the manner proposed here.
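The change-signalling response pattern reported by Nortmann et al. (2015) can be caricatured as a set difference between successive stimulus frames: what is signalled is what is newly present relative to the previous stimulus. This is a deliberately simplified sketch; the feature labels and the function name are assumptions for illustration, not a model of V1 response properties.

```python
def change_response(previous: set, current: set) -> set:
    """Features signalled: those present now but absent from the prior frame."""
    return current - previous

# Vertically oriented bars followed by superposed vertical + horizontal bars:
prev_stimulus = {"vertical"}
curr_stimulus = {"vertical", "horizontal"}

signalled = change_response(prev_stimulus, curr_stimulus)
# signalled == {"horizontal"}: the response represents the horizontal
# orientation almost to the exclusion of the (unchanged) vertical.
```

A register of responses of this kind across successive inputs would carry exactly the ordinality and change information that the text argues could feed a series of connected ordinal representations.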
The research discussed in this section shows representation of motion and of temporal features on a time scale consistent with that of the perceptual timescape, but it is not sufficient to pin down the precise properties of the perceptual timescape proposed here. Research in general has focussed too much on isolated, static images as stimuli, and not enough on temporally extended and differentiated stimuli. The sub-second store has so far received little attention from neuroscientists. The present proposal might help to specify targets that could take research forward in new directions, concerning how dynamic or time-ordered stimuli are represented in the brain.

Stability of the visual world across saccades
Saccades are rapid, ballistic movements of the eyes. Under normal circumstances they occur approximately three times per second, and the duration of a saccade is about 30 to 50 ms (Bridgeman, Van der Heijden, & Velichkovsky, 1994; Rayner, 1998). Visual sensitivity is reduced just before and during a saccade. A given visual location or object stimulates a different part of the retina before and after the saccade, but the visual world is perceived as stable and continuous across saccades despite the brief period of minimal sensitivity and the substantial changes in retinal projections. Higgins and Rayner (2014) argued that there are two possibly connected challenges associated with saccades: maintaining the phenomenal experience of visual stability despite frequent shifts of gaze direction, and integrating visual information across discrete fixations, e.g. for maintaining object representations over time. The question to be considered here is whether visual memory specifically involving the sub-second store is involved in maintenance of a stable visual world across saccades.
Several hypotheses have been proposed to account for visual stability (Bridgeman et al., 1994; Cronin & Irwin, 2018; Higgins & Rayner, 2014; Sommer & Wurtz, 2008a). Currently evidence favours a proposal by Burr and Morrone (2012) that there are "two classes of mechanisms for visual stability: (1) a rapid-acting, high-resolution system of transient spatiotopy, based largely on retinotopic representations that dynamically adapt on each saccade, bridging the disruption of the saccade to provide transsaccadic perceptual continuity; (2) a slowly-developing, low-resolution system of spatiotopic maps, coarse representations of the world in real-world coordinates, tightly linked with perceptual memory and motor planning" (p. 1357). Each will be briefly discussed.
On the first class of mechanisms, Burr, Ross, Binda, and Morrone (2010) discussed evidence that the spatial location of a stimulus presented around the time of a saccade is misplaced towards the target of the saccade. Burr et al. (2010) interpreted that as indicating a shift in the receptive fields of neurons, functionally remapping the visual world to new coordinates. Burr and Morrone (2012) pointed out that the proposed shift in receptive fields would exacerbate, not ameliorate, the visual stability problem: "first the field shifts on the retina, then the retina shifts - in the same direction - with the eye movement so the receptive field is displaced from its original position by twice the saccadic amplitude" (p. 1357). They reported evidence for a return movement of the receptive field to its original position, and they argued that this return movement maintains visual stability by annulling the displacement caused by the saccade. This mechanism "[anticipates] the problems that the saccade will cause and [solves] them pre-emptively" (Burr & Morrone, 2012, p. 1363). The remapping process, however it occurs, might be driven by a corollary discharge, a model of the planned eye movement that is used to determine the precise remapping that occurs (Cavanaugh, Berman, Joiner, & Wurtz, 2016; Melcher & Colby, 2008; Sommer & Wurtz, 2008a, 2008b).
That is not the only viable hypothesis for visual stability, but the relevant point here is that neither that nor any other proposed mechanism has invoked memory systems. The saccadic time scale of approximately 50 ms is too short for memory on the scale of the sub-second store to be involved; and what is going on is not the storage of information about the past but the maintenance of information about the present. Moreover, it has to be accomplished at the time of the saccade and therefore cannot involve any information that emerges with a substantial processing latency, as the information that enters the sub-second store does.
On the second class of mechanisms, Burr and Morrone (2012) reviewed several lines of evidence supporting the hypothesis that "there exist spatiotopic neural maps in human visual cortex. But these maps are not static snapshots, but highly plastic information maps signalling the position, velocity, duration, and numerosity of objects" (p. 1367). Two features of that summary point to possible involvement of the sub-second store. One is that the maps are spatiotopic, and that is in agreement with the work of Ögmen and Herzog (2016) showing that information in the sub-second store is spatiotopically organised. The other is that the maps signal features implying temporality, namely motion and duration (Ong, Hooshvar, Zhang, & Bisley, 2009).
Other features of the proposed maps are less easily reconcilable with the properties of the sub-second store. Several lines of evidence support the hypothesis that the spatiotopic map takes up to 500 ms to develop to maximum accuracy (Burr & Morrone, 2012; Yoshimoto, Uchida-Ota, & Takeuchi, 2014; Zimmermann, Burr, & Morrone, 2012; Zimmermann, Morrone, & Burr, 2013, 2014). Perceptual information enters the sub-second store beginning around 150-200 ms after the onset of a stimulus (Dembski et al., 2021; Förster, Koivisto, & Revonsuo, 2020; Jacob et al., 2013; Koivisto & Revonsuo, 2010; Verleger, 2020) and then decays rapidly, as shown earlier, so the temporal parameters of information accumulation and decay in the sub-second store do not match those of the build-up of the spatial representation.
More importantly, the spatial map is not a representation of the past. The increase in accuracy over ~500 ms is not the preservation of information in memory; it is more akin to a deblurring process in which the initial location of an object is unclear but becomes clearer as the signal is more precisely resolved over a period of time. Numerous processes in perception involve the detection and resolution of signal in noise, and some of them operate on longer time scales than the proposed spatial map (e.g., Burr & Santoro, 2001). The map is not registering where a stimulus was; it is refining its estimate of where the stimulus still is.
That leaves the issue of integrating visual information across fixations. Identifying a target object after a saccade is not straightforward because the object must be perceptually processed and matched with whatever representation of it was present before the saccade, bearing in mind that there is a temporal gap between pre- and post-saccade perception because of saccadic suppression. In the saccade target object theory proposed by McConkie and Currie (1996), before a saccade is initiated, the visual system encodes and retains features of a target object but not the remainder of the scene. After the saccade the visual system seeks a matching set of features near the target location of the saccade. If a match is made to the retained set of features then object continuity is established and perceptual stability is maintained.
That implies retention of information across the saccade. McConkie and Currie (1996) hypothesised that the pre-saccadic representation is held in visual working memory. Cronin and Irwin (2018) tested that by setting the task of making a saccade to a target that might change location during the saccade, while simultaneously having a visual working memory task, detecting a change in a feature of the object (e.g. colour). Visual working memory has limited capacity, so a task that draws on that capacity leaves less available for a second task, and performance should be impaired as a result. In fact performance was impaired on both tasks. Cronin and Irwin concluded that establishing object persistence across a saccade involves visual working memory. As they put it, visual working memory "maintains features of the saccade target object before the saccade so that, after the saccade, we can search for those features near foveal vision" (p. 1756).
In conclusion, no author has yet proposed a role for the sub-second store in perceptual stability. The evidence to date indicates that the maintenance of stability across a saccade depends either on maintained visual representations in the temporary absence of input or, at least in part, on visual working memory, but not on memory on the sub-second scale.

Hogendoorn's (2022) timeline proposal
White (2021) proposed integrated representations of vector (rate of change) information and connectives, time distance and ordinal temporal information, as a set of organising principles for perceptual information about things happening, going on, changing, or persisting, on the sub-second time scale. The fundamental problem was the fact that the present moment (in physics) has infinitesimal duration and the brain has to deal with that by finding some way of retaining information about times that are recent (on the sub-second scale) but no longer present. A subsequent paper by Hogendoorn (2022) presented a proposal that has some points of resemblance with that put forward by White (2021), but the starting point for it was a different problem, the problem of how perceptual systems cope with the fact that there is a continuous stream of input.
To address that problem, Hogendoorn (2022) proposed an informational timeline stretching into both the past and the future. The timeline can be edited, meaning that products of perceptual processing function as adjustments to specific content in the timeline. The fact that the timeline extends into the future illustrates a key feature of the proposal: perception is essentially predictive, using currently available information to predict the immediate future. Thus, delays caused by neural conduction and processing latencies are compensated by the generation of predictions about the present, which are updated as perceptual processing generates information about the present. The proposal followed Herzog et al. (2016) and White (2021) in contending that timing of events is represented symbolically. The symbolic information was likened to a date stamp on a letter, which means that, no matter when an event enters the timeline, its timing in relation to other events is specified by the date stamp it carries.
The date stamp proposal, originally put forward by Dennett and Kinsbourne (1992), differs from the time distance information in White (2021) and in the present proposal in that it is an absolute marker of time of occurrence rather than a marker of distance in time from the present. An absolute marker records the time at which the event so marked occurred, whereas a time distance marker changes continually as the event recedes into the past. The difference is that between saying that World War 2 began in 1939 and saying that it began 85 years ago (at time of writing). But the other three organising principles of the proposal in White (2021), rate of change information, connectives, and ordinal temporal information, are not in Hogendoorn's model.
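The distinction can be put in computational terms: a date stamp is written once at encoding and never changes, whereas a time-distance marker must be continually recomputed relative to the advancing present. A minimal sketch (function names are illustrative assumptions, not terms from either paper):

```python
def date_stamp_marker(event_time_ms: int) -> int:
    """Absolute marker: fixed at encoding, like a date stamp on a letter."""
    return event_time_ms

def time_distance_marker(event_time_ms: int, now_ms: int) -> int:
    """Relative marker: recomputed as the event recedes into the past."""
    return now_ms - event_time_ms

onset = 1000                      # event occurred at t = 1000 ms
stamp = date_stamp_marker(onset)  # remains 1000 however far the present advances

# As the perceived present advances, the time-distance marker changes continually:
distances = [time_distance_marker(onset, now) for now in (1000, 1100, 1200)]
# distances == [0, 100, 200] while stamp is still 1000
```

The maintenance cost differs accordingly: an absolute stamp requires no updating, whereas time-distance markers imply a continual updating process across everything held in the store, a point relevant to how the two proposals would be realised.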
The figure that illustrates Hogendoorn's model (Fig. 2 in Hogendoorn, 2022) used the metaphor of a reel of celluloid with the discrete frames that make up the movie on it. In that proposal there is nothing to link the frames together into a coherent and connected sequence of events. Of course, one might say that, when a movie is projected onto a screen, we do perceive a connected stream of events. That is traditionally explained in terms of flicker fusion (Andrews, White, Binder, & Purves, 1996; Holcombe, 2009; Kelly, 1972). The projection rate of the frames is set at such a speed that the flicker that would correspond to the temporal gap between frames is not perceived. In a similar way it could be argued that any temporal gap between successive frames in the informational timeline would not be perceived, so that there was a perceptual impression of continuity. But flicker suppression is all that flicker fusion explains. Flicker fusion does not explain how a series of discrete static frames can be perceived as a connected stream of events. The same applies to the timeline in Hogendoorn's proposal. We might not notice the transition from one frame to the next, but there is still a need for specification of the informational connections that tie together information entering processing at different times. If there are frames, there must be connections that bind object representations across frames so that they can be perceived as the same object persisting over time. The model proposed by Hogendoorn (2022) does not supply those informational connections. The aim of the present account is to do just that (though without signing up to the frame hypothesis), in the form of rate of change and connective information, supported by time distance and ordinality information, and to argue that the perceptual timescape is the location of the information that is linked to the perceived present, thereby generating temporal continuity in perception.
The role of predictive processes in perception was not discussed in White (2021), and not much in this paper, but predictive forms of processing are compatible with the present conceptualisation. Predictive processing clearly matters for action, such as interception of moving objects (Brenner & Smeets, 2015; Zago & Lacquaniti, 2005), and construction of a timeline into a projected future is a plausible means of representing such predictive processing. Action and predictive processing are not discussed here simply because the focus of this paper has been on the nature and function of the perceptual timescape, which is a register of recent past information. One relevant form of predictive processing is extrapolation to the present: Nijhawan (1994, 2008; Nijhawan & Wu, 2009) proposed that, wherever possible, perceptual processing uses available information to extrapolate forward to the present moment. Unpredicted events are entered into this representation as soon as processing latencies allow but inevitably lag behind. This was the basis for Nijhawan's (1994, 2008) interpretation of the flash-lag effect, where an object in continuous motion (predictable) and a transitory flash of light (unpredictable) are presented, and the flash is spatially mislocalised relative to the continuously moving object. It is not clear whether and to what extent extrapolation to the present happens (White, 2018), but the advantages are so great that it is plausible that perceptual systems would be adapted to that end. To the extent that that is the case, the perceived present as proposed here and in White (2020, 2021) is likely to comprise predicted as well as perceived information.

Ögmen and Herzog's (2016) nonretinotopic sensory memory proposal

Ögmen and Herzog (2016) distinguished two forms of sensory memory, retinotopic and nonretinotopic. Retinotopic sensory memory corresponds to visible persistence (Coltheart, 1980). Nonretinotopic sensory memory was proposed by Ögmen and Herzog (2016) as an alternative route from perceptual processing to working memory. They reviewed a body of research evidence to make a case for a brief but high-capacity memory store that "uses motion-grouping-based non-retinotopic reference-frames or coordinate systems" (p. 6). Moving dots are grouped by motion similarity and the common motion vector of a group is the reference frame "according to which the contents of memory are encoded" (p. 6). The store can contain multiple simultaneous reference frames. Their nonretinotopic memory is immune to masking, unlike retinotopic sensory memory, and it has both pre-attentive and attentive components. They argued that their proposal can account for results of research on anorthoscopic perception (Aydin, Herzog, & Ögmen, 2008) and multiple results from their research with vernier stimuli (Otto et al., 2006).
The nonretinotopic nature of the sub-second store (Huynh et al., 2017) and the claim for representation of motion and motion-based grouping are in accord with the present proposal. However, Ögmen and Herzog (2016) did not discuss other properties of their proposed store; in particular they did not propose a decay curve or time scale of decay of information, and they did not say whether they regarded it as a new view of the sub-second store or a separate proposal, in effect a separate memory system. Some of the research that was assessed as supporting their proposal involved long temporal integration times: in the study of anorthoscopic perception by Aydin et al. (2008), stimulus presentation times ranged from ~700 ms to ~2000 ms. That is well beyond the time scale of the sub-second store, and there is evidence that the sub-second store is not involved in it. Orlov and Zohary (2018) studied anorthoscopic perception with fMRI and found that shape information was registered in the lateral occipital complex (LOC): "LOC represents the outcome of the shape temporal integration and is likely to mediate the percept of the integrated shape" (p. 676). There is evidence that maintenance of object representations in LOC does not involve the sub-second store. Ferber, Humphrey, and Vilis (2003) presented fragmented line drawings in a context of many random lines that effectively camouflaged them. When the object moved it could be perceived. When motion stopped, a representation of the object persisted for a few seconds, and Ferber et al. (2003) identified LOC as the location of that persisting representation. However, that only happened if the fragmented line drawing of the object continued in the stimulus: if it vanished, then the object representation in LOC also terminated (Ferber & Emrich, 2007).
That result establishes an important point. The research shows construction and maintenance of an object representation guided by cues to the continued presence of the object. The information is being maintained in perceptual processing because the stimulus information is still entering the system and being processed. The maintenance of the object representation in that study is not a memory; that is demonstrated by the fact that the representation terminated when the line drawing stimulus terminated. Ferber and Emrich (2007) ruled out the possibility that the object representation was in the sub-second store. First, they pointed out that the duration of object persistence in their research is on the supra-second scale, well beyond the time scale of information persistence in the sub-second store. Second, they found evidence that duration of the object representation was affected by how complete the line drawing was, a factor that would not affect persistence in the sub-second store. They also ruled out working memory as an alternative possible location for the object representation. Working memory has limited capacity, so setting a cognitively demanding task should affect persistence if persistence is occurring in working memory. They found no such effect. A manipulation of complexity of the figure also had no significant effect on duration of persistence, again contrary to what would be expected of storage in working memory.
That research implies that anorthoscopic percepts are not in the sub-second store either. They are, like the percepts investigated by Ferber and Emrich (2007), constructed and maintained in LOC, and they terminate when the stimulus input terminates. There could also be persistence of information in the perceptual timescape about the stimuli used by Ferber and Emrich (2007) and the anorthoscopic stimuli. Such information would decay on the time scale of the perceptual timescape. The research discussed in this section has not found evidence for that, however.
In summary, there is evidence consistent with the claim that information representation in the sub-second store is nonretinotopic or spatiotopic, and the present account agrees with the claim that motion information is represented there. But some of the findings called upon by Ögmen and Herzog (2016) involve processes in perception, not the sub-second store. That includes the vernier research that was analysed earlier (Scharnowski et al., 2009) as well as the anorthoscopic perception research.

Conclusion
It is important that the perceptual world should hold together across time as well as across space. Just as the representation of space is organised by information about different locations, held together by information about adjacency, among other things, so the representation of time is organised by information about different moments, held together by relations of adjacency. The perceptual timescape is proposed here as an information structure that can embody that temporal representation, constructing a perceptual world in which the perceived present is located in relation to the recent past. The perceptual timescape has features of both perception and memory. It is perception because the recent history of information held in it is connected to the perceived present and supplements what is held (briefly) there with information that can be of practical use. It is memory because it concerns times previous to the perceived present; in other words, it is information about how things were in the recent past. We perceive the past, meaning specifically what is informationally identified as in the past, as well as, and as connected to, perception of what is informationally identified as the present.

Footnotes
1. The term "auditory sensory memory" is frequently used in research on auditory unit formation and detection of regularities and deviations from regularity (e.g. Bartha-Doering, Deuster, Giordano, Zehnhoff-Dinneson, & Dobel, 2015; Näätänen & Winkler, 1999; Winkler & Schröger, 2015) but is not well defined. Research in that area indicates at least two time scales of auditory information processing and storage. There is evidence for an auditory temporal window of integration of approximately 200 ms (Bartha-Doering et al., 2015; Näätänen, Paavilainen, Rinne, & Alho, 2007; Sussman, Ritter, & Vaughan, 1998; Vaz Pato et al., 2002), which plays a role in auditory perceptual object construction and unitisation. That is not the time scale of the auditory version of the sub-second store (Darwin et al., 1972). There is also evidence for a longer store with a time scale up to about 10 s (Bartha-Doering et al., 2015; Näätänen & Winkler, 1999; Sams, Hari, Rif, & Knuutila, 1993), seemingly representing temporal order and regularity information. The time scale of the latter is far beyond that of the sub-second store in vision, and beyond that of the auditory equivalent of iconic memory (Darwin et al., 1972). It is postulated here that there is an auditory version of the perceptual timescape, but existing research is almost entirely concerned with perceptual processing, and with time scales different from what the auditory perceptual timescape would have. In any case, it should be noted that the term "auditory sensory memory" as used in the research cited here does not refer to the sub-second store.

2.
The postulation of a minimum temporal resolution in the perceptual timescape might seem to imply that information is updated periodically on that time scale, but that is not the case. In most frame proposals, an entire body of perceptual information is maintained for the postulated duration of the frame and then all of it is replaced by the next frame (e.g. Crick & Koch, 2003; Harter, 1967; Herzog et al., 2016; Kozma & Freeman, 2017; Stroud, 1967; VanRullen & Koch, 2003). Nothing like that is postulated here. It is possible that all time markers are adjusted in lockstep (see section 3.3). However, that does not mean that one set of information is replaced by another; on the contrary, in the perceptual timescape all sets of information persist, subject to decay, across changes in time distance markers. Moreover, information at one time distance marker is informationally connected to information at adjacent time markers on either side by connectives (described in the main text), which hold the historical information together in a coherent representation.

3. Fig. 9 depicts fading as a gradual process affecting all elements equally. There is evidence that elements are lost in an all-or-nothing way, so that each element is clear as long as it survives but fewer survive as time goes by (Pratte, 2018). If that is correct, then the fading depicted in the figure should be replaced with a gradual decrease in the number of elements remaining in the representation.
The bottleneck imposed by the limited capacity of working memory still applies, so not all of the surviving elements can be transferred there.
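The contrast drawn in footnotes 2 and 3 can be illustrated with a minimal simulation. The Python sketch below is purely illustrative (the function name and the exponential survival curve are this sketch's assumptions, not part of the proposal): under all-or-nothing loss (Pratte, 2018), each element either survives fully intact to a given time or is lost entirely, so what decays is the number of surviving elements, not their clarity.

```python
import random

def surviving_elements(n_elements, half_life_ms, t_ms, rng):
    """All-or-nothing loss (Pratte, 2018): each element either survives
    intact to time t or is lost entirely; survivors are never degraded.
    The exponential survival curve is an assumption of this sketch."""
    p_survive = 0.5 ** (t_ms / half_life_ms)
    return [rng.random() < p_survive for _ in range(n_elements)]

rng = random.Random(0)
counts = [sum(surviving_elements(12, 300.0, t, rng))
          for t in (0.0, 300.0, 600.0, 900.0)]
# counts[0] is 12 (nothing lost at t = 0); later counts shrink on average,
# but each surviving element remains fully clear.
```

A gradual-fading account would instead keep all 12 elements and degrade each one's quality over time; the two accounts predict the same overall information loss but different states of individual elements.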

Submission declaration and verification
The work described has not been published previously and is not under consideration for publication elsewhere. If accepted, it will not be published elsewhere in the same form, in English or in any other language, including electronically, without the written consent of the copyright-holder.

Fig. 1.
Fig. 1. Models of memory stores. 1(a): Model proposed by Atkinson and Shiffrin (1968). Products of perceptual processing feed into a brief, high-capacity store called sensory memory or iconic memory; some information is transferred from sensory memory to short-term memory, a limited-capacity store with a longer duration; and some information is transferred from there to long-term memory, a high-capacity store of indefinite duration. 1(b): Revised version of the Atkinson and Shiffrin model. Products of perceptual processing enter the perceived present, a very high-capacity store of very brief duration (<100 ms), and also pass directly to the sub-second store, a high-capacity nonretinotopic store of longer duration (~1000 ms) that incorporates semantic information. Information is also transferred to the sub-second store from the perceived present. Some information is transferred from the sub-second store to working memory, a suite of limited-capacity stores and processing functions where information may be maintained by active rehearsal, and from there to long-term memory.

Fig. 2.
Fig. 2. Schematic depictions of different proposals for information representation and decay in the sub-second store. Fig. 2(a) shows the traditional concept of the icon as a static and rapidly fading image of a stimulus. The stimulus is perceptually processed and a percept of the stimulus emerges; information about that is transferred to the sub-second store, where it decays over time. Fig. 2(b) shows a modified version of that concept allowing for representation of multiple object motions. The fading images represent successively earlier moments in the history of the stimulus. Fig. 2(c) shows the features of the perceptual timescape proposed here. The entire representation of an object in motion exists at a single moment. The body of information is rendered perceptually coherent by four kinds of information: rate of change information (in this case specifying motion direction at each moment); connectives, symbolised by a vertical arrow, tying the sets of information into a representation of an object persisting and changing over time; time distance information, a time marker indicating how long ago each moment in the object's history occurred; and ordinal temporal information, indicating which set of information came after or before which other set.
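The four kinds of information depicted in Fig. 2(c) can be summarised as a data structure. The following Python sketch is purely illustrative (the class names, field names, and concrete values are this sketch's, not the author's notation): each moment in an object's history carries feature values, rate of change information, a time distance marker, and an ordinal index; connectives link adjacent moments, and the `advance` method models the lockstep marker adjustment discussed in footnote 2.

```python
from dataclasses import dataclass, field

@dataclass
class TimescapeEntry:
    """One moment in an object's history, after Fig. 2(c).
    All names here are illustrative, not the author's notation."""
    features: dict           # e.g. {"position": 0, "colour": "red"}
    rate_of_change: dict     # e.g. {"position": "moving right"}
    time_distance_ms: float  # time distance marker: how long ago this moment was
    ordinal_index: int       # ordinal temporal information: order among entries

@dataclass
class ObjectHistory:
    entries: list = field(default_factory=list)

    def connectives(self):
        """Links binding adjacent moments into one persisting object."""
        return [(a.ordinal_index, b.ordinal_index)
                for a, b in zip(self.entries, self.entries[1:])]

    def advance(self, dt_ms):
        """Adjust all time distance markers in lockstep (footnote 2):
        the sets of information persist; only the markers change."""
        for e in self.entries:
            e.time_distance_ms += dt_ms

history = ObjectHistory([
    TimescapeEntry({"position": 0}, {"position": "moving right"}, 100.0, 0),
    TimescapeEntry({"position": 1}, {"position": "moving right"}, 0.0, 1),
])
history.advance(50.0)  # 50 ms pass; every marker shifts together
```

Note that `advance` replaces nothing: the same entries persist with shifted markers, which is the key contrast with frame proposals drawn in footnote 2.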

Fig. 3.
Fig. 3. Expanded version of Fig. 1(b), showing in sketch form the main features of the present proposal: the perceptual timescape is a timeline of perceptual information; information from perceptual processing can be entered directly into the perceptual timescape, constituting postdictive reinterpretation; the only control process operating on information in the perceptual timescape is attentive selection for transfer of information to working memory; and there is no route back from the perceptual timescape to perceptual processing, so temporal integration does not utilise information recycled from the perceptual timescape.

Fig. 4.
Fig. 4. Schematic depictions of a synfire chain (a) and a synfire ring (b). Circles represent individual neurons and arrows represent connections between them. Black circles are the initial group to fire. Figures are redrawn from Fig. 5 in Cabessa and Tchaptchet (2020).
Fine temporal resolution is, in principle, not a problem for networks of synfire rings.

Fig. 6.
Fig. 6. Fig. 6(a) shows the spatial array and temporal information for an apparent motion stimulus comprising successive flashes of light (represented as yellow dots) at locations A and B. Fig. 6(b) shows multiple moments in the history of perception of the stimulus. The bottom row of boxes shows the changing state of information in the perceived present as the stimulus unfolds. At m1 the first dot is in the perceived present. The other moments up to m6 show the temporal sequence of events running through the perceived present, with two moments representing perception of the blank screen between the two dots. Columns above each moment show the state of information in the perceptual timescape at that moment, simplified for expository convenience. At m4 the second dot has entered the perceived present and the information in the perceptual timescape is postdictively modified. Motion information is represented by a vertical arrow. Further moments (m5 and m6) show progression of that representation through time distance co-ordinates in the perceptual timescape as time goes by. The representation is schematic: in reality there would be finer temporal differentiation of the information in the perceptual timescape; and, to avoid overburdening the figure with information, ordinal temporal information, rate of change information, and connectives are not shown.
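The postdictive sequence in Fig. 6 can be sketched as a toy simulation. The Python snippet below is illustrative only (the representation of entries as tuples and all names are this sketch's assumptions): entries accumulate with increasing time distance markers as each moment passes, and when the second dot enters the perceived present, the blanks already held in the timescape are rewritten as apparent motion, with no route back through perceptual processing.

```python
# Toy sketch of the postdictive update in Fig. 6 (illustrative only).
# Each entry is (time_distance_steps, content); index 0 is the newest.

def step(timescape, new_content):
    """Advance every time-distance marker by one step and enter the
    new content at the perceived-present position (distance 0)."""
    older = [(d + 1, c) for d, c in timescape]
    return [(0, new_content)] + older

timescape = []
for content in ["dot A", "blank", "blank", "dot B"]:
    timescape = step(timescape, content)

# Postdictive reinterpretation: once dot B enters the perceived present,
# the blanks already in the timescape are rewritten as apparent motion.
timescape = [(d, "motion A->B" if c == "blank" else c) for d, c in timescape]
# timescape is now:
# [(0, "dot B"), (1, "motion A->B"), (2, "motion A->B"), (3, "dot A")]
```

The rewrite acts directly on stored entries at their existing time distance markers, matching the claim that information from perceptual processing can be entered directly into the perceptual timescape.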

Fig. 8.
Fig. 8. 8(a): The Poggendorff illusion. The two diagonal lines are collinear but are perceived as offset. 8(b): The same diagonal lines but with the rectangle removed. The lines are now perceived as collinear.

Fig. 11.
Fig. 11. Illustration of the temporal binding function of connectives. Black circles define spatially individuated perceptual objects. Three selected features (shape, colour, and motion) are verbally represented along with abstract indicators that they each have rate of change information attached to them. Time distance markers at the right of the figure indicate that these are at adjacent moments in the perceptual timescape. An information structure corresponding to the left side of the figure would not give rise to perception of a persisting object. A structure corresponding to the right side of the figure would do so, because the connectives (symbolised by dotted double arrow-headed lines) bind successive representations over time. There would in addition be a connective for the object representation as a whole. This is the basic function of the perceptual timescape: to generate a representation of perceptual objects as persisting over time.