Adaptation in the visual system: Networked fatigue or suppressed prediction error signalling?

Our brains are constantly adapting to changes in our visual environment. Neural adaptation exerts a persistent influence on the activity of sensory neurons and on our perceptual experience; however, there is a lack of consensus regarding how adaptation is implemented in the visual system. One account describes fatigue-based mechanisms embedded within local networks of stimulus-selective neurons (networked fatigue models). Another depicts adaptation as a product of stimulus expectations (predictive coding models). In this review, I evaluate neuroimaging and psychophysical evidence that poses fundamental problems for predictive coding models of neural adaptation. Specifically, I discuss observations of distinct repetition and expectation effects, as well as incorrect predictions of repulsive adaptation aftereffects made by predictive coding accounts. Based on this evidence, I argue that networked fatigue models provide a more parsimonious account of adaptation effects in the visual system. Although stimulus expectations can be formed based on recent stimulation history, any consequences of these expectations are likely to co-occur (or interact) with effects of fatigue-based adaptation. I conclude by proposing novel, testable hypotheses relating to interactions between fatigue-based adaptation and other predictive processes, focusing on stimulus feature extrapolation phenomena.


Introduction
Humans and other animals can exploit a range of patterns that occur within their sensory environments. In the visual system, circuit-level mechanisms in the retina allow us to rapidly adjust to fluctuations in luminance levels (Rieke & Rudd, 2009; Shapley & Enroth-Cugell, 1984), environmental image structure (Hosoya et al., 2005), and periodic stimulation patterns (Schwartz & Berry, 2008; Schwartz et al., 2007). Researchers have characterised the functional and structural properties of the visual system that allow us to exploit statistical regularities, recurring patterns, and predictably changing features in our visual world. These have been collectively termed predictive processes.
Predictive processes are enabled by a range of mechanisms implemented throughout the visual system. Implementations can fit into one of two broad categories in the taxonomy proposed by Teufel and Fletcher (2020). The first category relates to structural features of cells, synapses, and local circuits. For example, the anisotropic distribution of orientation-selective neurons in area V1 approximates the prevalence of differently oriented lines in natural scenes (Furmanski & Engel, 2000; Li et al., 2003). Similar biases across retinotopic space occur for object-selective neurons in high-level visual cortex (Kaiser et al., 2019). These distributions of orientation- and object-selective cells presumably do not change over short timescales, even when there are large, sudden changes in environmental image statistics (e.g., when entering a virtual reality environment that only consists of diagonal lines or cats). Such processes correspond to environmental attributes that are highly stable over time (termed 'spatiotemporally global regularities' in Teufel & Fletcher, 2020). We have little voluntary control over these processes or their consequences, except via actions that change the retinal input we receive. Despite being labelled 'predictive' processes in Teufel and Fletcher (2020) and other work, these processes typically do not rely on 'top-down' predictions or expectations about future or current events. They are only predictive insofar as they enable exploitation of regularities within one's sensory environment.
The second category instead pertains to internal models of visual environments that are instantiated in the brain and updated over time. For example, we can rapidly direct our attention to objects, locations, or time windows that we expect to be informative or relevant to a task-at-hand (Carrasco, 2011; Gottlieb, 2023; Nobre & Coull, 2010). Attention also modulates the activity of stimulus-selective neurons in visual cortex (Corbetta et al., 1990; Maunsell, 2015; Reynolds & Heeger, 2009). These predictive processes relate to highly context-dependent features of sensory environments, such as cued stimulus appearance probabilities (termed 'spatiotemporally local regularities').
One process, known as adaptation (Gross et al., 1979; Kohn, 2007; Solomon & Kohn, 2014; Vogels, 2016), produces persistent and systematic effects on neural activity within the visual system. Despite its pervasive influence on the activity of stimulus-selective neurons, adaptation has eluded consensus regarding its categorisation as a spatiotemporally local or global process. Competing mechanistic accounts of adaptation (e.g., Auksztulewicz & Friston, 2016; Solomon & Kohn, 2014) characterise adaptation-related phenomena in fundamentally different ways.
In general terms, adaptation effects describe influences on neural response measures (e.g., firing rates, local field potentials, EEG/MEG signals or fMRI BOLD signals) based on what has been seen in the recent past, depending on how prior sensory input relates to currently presented stimuli. A well-known consequence of adaptation is repetition suppression, whereby neural response measures are reduced for immediately repeated compared to unrepeated stimuli (depicted in Fig. 1A-B; Movshon & Lennie, 1979; Desimone, 1996; Ringo, 1996; reviewed in Grill-Spector et al., 2006). Beyond suppressed responses to repeated stimuli, adaptation effects also include graded degrees of response suppression (or enhancement) depending on the feature discrepancy between successively presented stimuli (Kaliukhovich & Vogels, 2016; Kohn, 2007; Li & Glickfeld, 2023; Williams & Olson, 2022b). When two different stimuli are presented in succession, changes in stimulus selectivity and the shapes of tuning curves of feature-selective neurons have also been observed (De Baene & Vogels, 2010; Ghisovan et al., 2009; Kohn & Movshon, 2004; Tailby et al., 2008; Teich & Qian, 2003; Wissig & Kohn, 2012). Adaptation in one visual area can also influence afferent input to other visual areas, resulting in effects that cascade through the visual system (Kohn, 2007; Larsson et al., 2016).
By influencing population responses of stimulus-selective visual neurons, adaptation is widely understood to also influence our visual perception, as evidenced by visual adaptation aftereffects (Clifford et al., 2007; Clifford & Rhodes, 2005; Weber & Fairhall, 2019; Webster, 2015). Adaptation can even erase objects in a visual scene from our conscious awareness, as exemplified by Troxler fading following prolonged exposure to a stimulus in the same retinotopic location (Troxler, 1804; reviewed in Martinez-Conde et al., 2006). Effects of adaptation also present important experimental confounds when testing for effects of other predictive processes (e.g., Feuerriegel, Vogels, & Kovács, 2021; Vinken & Vogels, 2017; Yan et al., 2021).

Fig. 1 – Effects of adaptation in the visual system. A) Examples of experiment designs used to measure adaptation effects. An adaptor stimulus is followed by either the same stimulus (repetition trial) or a different stimulus (alternating trial). The degree of feature discrepancy (e.g., the difference in orientation between a pair of gratings) can also be parametrically manipulated. B) Examples of stimulus repetition effects on firing rates in inferior temporal cortex (left panel, depiction of results in Sawamura et al., 2006), BOLD signals in the object- and face-selective ventral temporal cortex (middle panel, similar to Grotheer & Kovács, 2015) and event-related potentials at occipito-parietal scalp electrodes over ventral temporal cortex (right panel, adapted from den Ouden et al., 2023). Animal stimulus images are taken from Rossion and Pourtois (2004). a.u. = arbitrary units.
Here, I discuss adaptation as observed in experiments whereby successive stimuli are presented within time windows ranging from hundreds of milliseconds to several seconds (similar to Vogels, 2016). This does not include effects of stimulation history over longer timescales (e.g., minutes) such as repetition priming (Henson, 2016), which appear to rely on different underlying mechanisms (discussed in Solomon & Kohn, 2014; Webster, 2015). I also exclusively focus on the visual system (for reviews of auditory and somatosensory adaptation see Garrido et al., 2009; Whitmire & Stanley, 2016; Carbajal & Malmierca, 2018).
In this review, I describe two competing mechanistic accounts of adaptation in the visual system. The first set of models, networked fatigue models, describe adaptation as built into the structure of stimulus-selective neurons and local circuits (a spatiotemporally global process). By contrast, the second set of models, predictive coding models, instead posit that adaptation is context-dependent and a product of past-weighted stimulus expectations (a spatiotemporally local process). I subsequently review two bodies of empirical evidence for which networked fatigue and predictive coding models make differing predictions. This evidence poses fundamental problems for predictive coding accounts of adaptation but is in line with networked fatigue models. I also describe novel hypotheses derived from theorised intersections between fatigue-based adaptation and other predictive processes in the visual system.

Fatigue within local networks of stimulus-selective neurons
Networked fatigue models (e.g., Solomon & Kohn, 2014) describe multiple cellular and synaptic mechanisms that jointly contribute to adaptation effects observed in different cortical areas within the visual system. These mechanisms are embedded within local networks of recurrently connected, stimulus-selective excitatory and inhibitory neurons, for example those which comprise circuits that implement divisive normalisation (Carandini & Heeger, 2012; Westrick et al., 2016). Networked fatigue models describe adaptation as critically dependent on an organism's history of visual stimulation and the recent activity of stimulus-selective neurons. They do not specify that adaptation is determined by an observer's expectations about future sensory input. Consequently, networked fatigue accounts portray adaptation as a largely 'automatic' phenomenon that inevitably occurs during ongoing visual stimulation. The mechanistic descriptions below are based on comprehensive theoretical reviews by Solomon and Kohn (2014), Vogels (2016), and Whitmire and Stanley (2016).
The first set of mechanisms are commonly termed "fatigue" or "response fatigue" (Grill-Spector et al., 2006; Vogels, 2016; depicted in Fig. 2A). Response fatigue includes fast spike rate adaptation that occurs over hundreds of milliseconds, whereby the sustained firing of a neuron inhibits its ability to fire for a short period (Ahmed et al., 1998; Benda & Herz, 2003). With prolonged exposure to an adaptor stimulus there is also afterhyperpolarisation that follows membrane depolarisation, which can last multiple seconds (Carandini & Ferster, 1997; Sanchez-Vives et al., 2000; Priebe et al., 2010; Pozzorini et al., 2013; discussed in Fabbrini et al., 2019; Li & Glickfeld, 2023). The degree of adaptation produced by these mechanisms primarily depends on firing rates following presentation of an adapting stimulus. Notably, such firing rate-dependent effects have not been consistently observed in some visual areas, for example macaque inferior temporal (IT) cortex (De Baene & Vogels, 2010; Fabbrini et al., 2019; Kaliukhovich & Vogels, 2014; Liu et al., 2009; Sawamura et al., 2006). However, this does not exclude effects of response fatigue in other visual areas and does not necessarily imply that response fatigue is absent in IT.
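The logic of response fatigue can be sketched as a toy discrete-time firing-rate model (an illustration with arbitrary parameters, not a biophysical model from the cited studies): an adaptation variable builds up with recent firing, subtracts from the input drive, and decays slowly during blank periods, so a repeated stimulus evokes a weaker response than its first presentation.

```python
# Toy spike-rate adaptation: an adaptation current tracks recent firing
# and subtracts from the drive, so responses to a repeated stimulus are
# weaker than responses to its first presentation.
def simulate(drive, tau_a=5.0, gain=0.3):
    """Discrete-time firing-rate model with a single adaptation variable."""
    a, rates = 0.0, []
    for d in drive:
        r = max(d - a, 0.0)           # rectified rate, reduced by adaptation
        a += (-a + gain * r) / tau_a  # adaptation builds with firing, decays at rest
        rates.append(r)
    return rates

# Stimulus on (drive = 1) for 10 steps, off for 10 steps, then repeated
drive = [1.0] * 10 + [0.0] * 10 + [1.0] * 10
rates = simulate(drive)

first_peak = max(rates[0:10])
repeat_peak = max(rates[20:30])
print(repeat_peak < first_peak)  # True: the repeated response is suppressed
```

Because the adaptation variable only partially decays during the inter-stimulus interval, the model also captures the dependence of suppression magnitude on the gap between adaptor and test stimuli.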
A second set of mechanisms are collectively called "input fatigue" (De Baene & Vogels, 2010; Vogels, 2016; depicted in Fig. 2B), which describes depression of synaptic inputs to a stimulus-selective neuron (e.g., Chance & Abbott, 2001; Li & Glickfeld, 2023) occurring via short-term synaptic plasticity (Chance et al., 1998; Finlayson & Cynader, 1995; Fioravante & Regehr, 2011; Haas et al., 2016; Zucker & Regehr, 2002) or suppressed neurotransmitter release from cells providing input to a target neuron (Abbott et al., 1997; Manookin & Demb, 2006; Markram & Tsodyks, 1996). Similar effects can also arise via reduced firing of other cells that provide excitatory drive to a stimulus-selective neuron. Input fatigue is theorised to produce adaptation effects that are primarily dependent on the history of recently presented stimulus features or identities, rather than firing rates evoked by an adaptor stimulus. Notably, input fatigue better accounts for the observed stimulus-specificity of adaptation effects (e.g., Sawamura et al., 2006).
Response and input fatigue have also been embedded in models that incorporate recurrent connectivity within local circuits and networks of excitatory and inhibitory neurons (Kaliukhovich & Vogels, 2016; Li & Glickfeld, 2023; Solomon & Kohn, 2014; Whitmire & Stanley, 2016). Because stimulus-selective neurons are recurrently connected with other proximal neurons within the same brain area (Douglas et al., 1995; Felsen et al., 2002; Fujita & Fujita, 1996; Reinhold et al., 2015), response or input fatigue could also reduce recurrent excitation over the time-course of the stimulus-evoked response (Fig. 2C, see Vogels, 2016). In addition, stimulus-selective excitatory neurons within a visual area receive afferent feedforward excitatory input from earlier stages in the visual hierarchy. Adaptation occurring at earlier stages is theorised to alter patterns of afferent input to a target population of neurons with overlapping receptive fields, a phenomenon known as inherited adaptation (Kohn & Movshon, 2003; De Baene & Vogels, 2010; Dhruv & Carandini, 2014; Larsson & Harrison, 2015; King et al., 2016; reviewed in Clifford et al., 2007; Larsson et al., 2016; depicted in Fig. 2D). Networked fatigue models also describe adaptation of inhibitory neurons following stimulus exposure and subsequent effects on divisive normalisation operations (Wainwright et al., 2002). This can result in disinhibition of excitatory neurons and even response enhancement under some conditions (Dhruv et al., 2011; Wissig & Kohn, 2012; Solomon & Kohn, 2014; Kaliukhovich & Vogels, 2016; Li & Glickfeld, 2023; depicted in Fig. 2E and F).

Adaptation as implemented within local excitatory-inhibitory circuits can produce a range of counterintuitive effects beyond suppression of excitatory stimulus-evoked responses. For example, adaptation-related response enhancement can be observed when two consecutively presented stimuli differ in their location in the visual field (Webb et al., 2005; Wissig & Kohn, 2012) or their stimulus features (cross-adaptation, Kaliukhovich & Vogels, 2016). Adaptation of inhibitory interneurons can also produce effects that occur later in the time course of the stimulus-evoked response, as compared to earlier effects on recurrent excitatory interactions (e.g., Patterson et al., 2013), as well as effects that are dependent on visual stimulus contrast (reviewed in Solomon & Kohn, 2014). In some cases, adaptation in combination with divisive normalisation can produce shifts in the observed tuning curves of neurons either toward or away from the adapting stimulus (known as flank adaptation; Müller et al., 1999; Dragoi et al., 2002; Felsen et al., 2002; Teich & Qian, 2003; Patterson et al., 2013; Whitmire & Stanley, 2016).

Fig. 2 – Mechanisms of adaptation described by networked fatigue models (A-F) and predictive coding models (G-H). A) Response fatigue. The right plot depicts a reduction in firing rates following an immediate repetition of a stimulus due to mechanisms intrinsic to the target neuron (spike rate adaptation or afterhyperpolarisation). B) Input fatigue. Multiple neurons provide input to a target neuron. With stimulus repetition, excitatory synaptic input to the target neuron decreases due to synaptic depression or synaptic plasticity (middle plot) or response fatigue within the input neurons (right plot). C) Fatigue in recurrently connected neurons. A pair of neurons provide excitatory drive to each other. Adaptation due to response or input fatigue reduces the degree of recurrent excitatory input. D) Inherited adaptation. A target neuron in one visual brain area integrates inputs from excitatory neurons that reside in an earlier area within the visual hierarchy. Reduced excitatory input from these neurons reduces the response of the target neuron. E) Adaptation of inhibitory neurons. The blue shaded neuron provides input to an inhibitory neuron, which subsequently inhibits a broad set of excitatory neurons, including those selective for a different stimulus (green shaded neurons). When the stimulus that preferentially excites the green shaded neurons is presented, responses of these neurons are disinhibited (enhanced) due to adaptation of the inhibitory neuron. F) Adaptation of the suppressive surround. A stimulated excitatory neuron also stimulates inhibitory neurons that reduce excitatory responses across a broad area exceeding the excitatory neuron's classical receptive field (i.e., the suppressive surround). The inhibitory neurons also adapt and less effectively inhibit responses of a subsequently stimulated excitatory neuron with a classical receptive field that is located within the suppressive surround. G) Prediction error minimisation via expectation suppression. In this circuit, superficial pyramidal cells (triangles) signal prediction errors, whereas deep pyramidal cells (squares) signal predictions about the current or future state of sensory or corticocortical input. Upon the first presentation of a stimulus there are no specific expectations to see that stimulus, resulting in large responses of superficial pyramidal neurons (i.e., large prediction error signals). Upon repetition of the stimulus, an observer's expectations have been adjusted to see that stimulus again. Increased activity of the deep pyramidal neurons (corresponding to expectations for the presented stimulus) inhibits responses of superficial pyramidal cells via actions of inhibitory interneurons. H) Bayesian formulation of adaptation as described in predictive coding models. Following exposure to an adaptor stimulus (e.g., a grating with vertically oriented lines), the predictions of an internal model (the prior) are centred on the feature values of that adapting stimulus. Likelihood distributions reflect population-level patterns of sensory input within the relevant visual area. A repetition of the same vertical grating leads to greatly overlapping prior and likelihood distributions and low prediction error (approximated using Kullback-Leibler divergence). By contrast, a stimulus that is oriented away from the adaptor results in relatively less overlap and higher overall prediction error. Rep = repeated, Alt = alternating, a.u. = arbitrary units.
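The counterintuitive disinhibition effects described above can be illustrated with a toy divisive normalisation computation (a schematic sketch with arbitrary numbers, not a fitted model from the cited work): adapting one feature channel weakens the shared normalisation pool, which can enhance the response of a channel tuned to a different stimulus.

```python
# Toy divisive normalisation with adaptable inputs: adapting one channel
# weakens the shared normalisation pool, which can *enhance* the response
# to a different stimulus (disinhibition), as in cross-adaptation effects.
def norm_response(drives, sigma=0.5):
    """Each channel's output is its drive divided by the summed pool activity."""
    pool = sum(drives)
    return [d / (sigma + pool) for d in drives]

# Two feature channels; the test stimulus (B) drives channel 1
drives = [0.2, 1.0]
before = norm_response(drives)[1]

# After prolonged exposure to stimulus A, channel 0's drive is reduced
drives_adapted = [0.05, 1.0]
after = norm_response(drives_adapted)[1]

print(after > before)  # True: channel 1's response is enhanced, not suppressed
```

The same divisive structure, with adaptable gains on individual channels, is one way tuning curve shifts toward or away from an adaptor can emerge, since the normalisation pool is no longer symmetric around the test stimulus.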
Existing networked fatigue models have been formulated to capture dynamics within specific areas of cortex (such as V1, MT and IT), and the contribution of each hypothesised mechanism within each area has not been precisely quantified (reviewed in Solomon & Kohn, 2014; Vogels, 2016; Whitmire & Stanley, 2016; Li & Glickfeld, 2023). Generally, the mechanisms described in networked fatigue models portray adaptation as highly dependent on the history of stimulus features presented at different locations across the visual field. These models do not necessarily exclude additional effects of other predictive processes, such as those associated with surprise and attentional capture (Alink & Blank, 2021; Itti & Baldi, 2009; Press et al., 2020). However, such effects are not specified to be produced by the mechanisms described above, and are viewed as distinct from effects of adaptation.

Prediction error minimisation via past-weighted stimulus expectations
Predictive coding models describe adaptation as a product of expectations to see stimuli that were presented immediately beforehand or in the recent past (Auksztulewicz & Friston, 2016; Bastos et al., 2012; Clark, 2013; Ewbank et al., 2011; Friston, 2005, 2010; Garrido et al., 2009; Grotheer & Kovács, 2016; Srinivasan et al., 1982; Summerfield et al., 2008, 2014). The description below is primarily based on Auksztulewicz and Friston (2016), who provide the clearest and most detailed treatment. Please note that the descriptions below specifically relate to predictive coding-based mechanistic accounts of adaptation. More general models of predictive coding-based perceptual inference can be formulated that do not attempt to explain adaptation effects.
According to predictive coding models, stimulus repetition and adaptation effects are examples of expectation suppression (Egner et al., 2010; Richter & de Lange, 2019; Summerfield et al., 2008), whereby neural responses to expected stimuli are suppressed relative to those that are not expected. Graded effects of adaptation (based on the degree of similarity between successively presented stimuli) can be understood as reflecting graded degrees of prediction error signalling that index the extent of mismatch between expectations and sensory input (e.g., Smout et al., 2019). Expectation suppression is hypothesised to reduce the activity of superficial pyramidal neurons (which signal prediction errors) via top-down and lateral inhibition (Friston, 2005; Bastos et al., 2012; Fig. 2G). This suppression occurs because generative models instantiated within the visual system are thought to silence, suppress or 'explain away' superficial pyramidal neuron responses that accord with expected sensory input. When expectations closely match patterns of sensory input, this might also reduce the duration of the iterative model updating process following stimulus presentation. This would produce smaller BOLD signals due to a reduced duration of stimulus-evoked metabolic activity within a visual area (Grill-Spector et al., 2006; Henson & Rugg, 2003).
The states of internal models and the iterative model updating process can be conceptualised using Bayesian perceptual inference frameworks (Mathys et al., 2014; Lin & Garrido, 2022; depicted in Fig. 2H). At a given point in time, the predictions of an internal model are denoted by the location and precision of a (typically Gaussian) prior distribution with respect to some stimulus feature space (such as line orientation in V1 or motion direction in MT). Neural population responses corresponding to afferent sensory input are denoted by the likelihood distribution. The precision (inverse variance) of this distribution indexes the degree of uncertainty in the population response (analogous to the signal-to-noise ratio). A prediction error is approximated by the discrepancy between the prior and likelihood distributions (e.g., as measured using Kullback-Leibler divergence). The resulting posterior distribution reflects the combined influence of the prior and likelihood distributions, and this posterior becomes the prior in the next iteration of the perceptual inference process. As the model is recurrently updated, the prior will converge on a stable estimate of sensory input that minimises prediction error. Here, the priors represent the 'best guess' of internal models and are thought to correspond to our perceptual experience (Hohwy, 2017; Seth, 2019). According to predictive coding models of adaptation, the prior is typically shifted toward stimulus feature values that have been encountered in the recent past.
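This Bayesian formulation can be made concrete with a short sketch (a toy illustration with arbitrary parameter values, not a model from the reviewed literature) that treats the prior and likelihood as univariate Gaussians over orientation, approximates prediction error with Kullback-Leibler divergence, and performs one precision-weighted posterior update.

```python
import math

def kl_gaussian(mu_p, sd_p, mu_q, sd_q):
    """KL divergence D(P || Q) between two univariate Gaussians."""
    return math.log(sd_q / sd_p) + (sd_p**2 + (mu_p - mu_q)**2) / (2 * sd_q**2) - 0.5

def posterior(mu_prior, sd_prior, mu_like, sd_like):
    """Precision-weighted combination of prior and likelihood distributions."""
    prec = 1 / sd_prior**2 + 1 / sd_like**2
    mu = (mu_prior / sd_prior**2 + mu_like / sd_like**2) / prec
    return mu, math.sqrt(1 / prec)

# Prior centred on the adaptor orientation (0 deg, i.e., vertical)
mu_prior, sd_prior = 0.0, 10.0

# Repetition: likelihood centred on the same orientation as the adaptor
pe_rep = kl_gaussian(0.0, 10.0, mu_prior, sd_prior)

# Alternation: likelihood centred 45 deg away from the adaptor
pe_alt = kl_gaussian(45.0, 10.0, mu_prior, sd_prior)
print(pe_rep < pe_alt)  # True: prediction error is larger for the alternating stimulus

# One updating iteration: the posterior becomes the prior for the next step,
# shifted toward the newly presented orientation
mu_next, sd_next = posterior(mu_prior, sd_prior, 45.0, 10.0)
```

In this scheme, repeated exposure drags the prior toward recently encountered feature values (the posterior mean lies between the old prior and the likelihood), which is exactly the past-weighted prior that predictive coding accounts invoke to explain adaptation.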
There are two prominent variants of predictive coding models of adaptation, which are often conflated in the literature. Each specifies the same underlying mechanisms involving top-down and lateral inhibition; however, they differ regarding the nature of the relevant expectations that are formed.
The first variant specifies that the expectations responsible for adaptation effects are context-specific. Expectations to see (or not to see) repetitions of stimuli can be modified using predictive cues (e.g., Summerfield et al., 2008). These models assume a general prior biased toward sensory input encountered in the recent past, which exploits the temporally-autocorrelated nature of visual input in everyday life. However, this past-weighted prior can be reversed to instead predict an unrepeated stimulus (or a large change in sensory input) if an observer does not expect a repetition to occur.
Please cite this article as: Feuerriegel, D., Adaptation in the visual system: Networked fatigue or suppressed prediction error signalling?, Cortex, https://doi.org/10.1016/j.cortex.2024.06.003

The second variant instead specifies persistent, past-weighted priors that are determined by recent sensory input (also proposed in Summerfield et al., 2008). Importantly, these are not modified by factors such as contextual stimulus appearance probability, reflecting a 'stubborn prior' in the terminology of Yon et al. (2019, 2023). This may reflect the persistence of a prior that is formed during presentation of an adapting stimulus, which exerts lingering effects even after the offset of that stimulus. Although the method of forming the prior is assumed to be inflexible, the prior itself would nevertheless need to be updated on a moment-by-moment basis to incorporate an organism's ever-changing history of sensory input.
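The moment-by-moment updating of such a 'stubborn' prior can be sketched as a leaky average of recent stimulus feature values (a toy illustration with an arbitrary learning rate, not a model proposed in the cited work): the update rule itself is fixed and insensitive to context, yet the prior still tracks the recent stimulation history.

```python
# Toy 'stubborn' past-weighted prior: the prior mean is a fixed-rate leaky
# average of recent stimulus feature values. The update rule is inflexible
# (alpha cannot be modified by cues or context), but the prior itself is
# updated on every presentation.
def update_prior(prior, stimulus, alpha=0.5):
    """Shift the prior a fixed fraction of the way toward the new stimulus."""
    return prior + alpha * (stimulus - prior)

prior = 0.0
for orientation in [45.0, 45.0, 45.0]:   # repeated adaptor at 45 deg
    prior = update_prior(prior, orientation)

print(round(prior, 3))  # 39.375: the prior has converged toward the adaptor
```

Under this formulation, changing the blockwise probability of repetitions would leave `alpha` untouched, which is what distinguishes the second variant from the context-specific first variant.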
Predictive coding models of adaptation portray the visual system as an active participant in generating our perceptual experience and aspire to provide a unified mechanistic explanation for adaptation, expectation suppression and perceptual inference. Such a unification would allow researchers to benefit from decades of careful neurophysiological work on adaptation effects to better understand how expectations shape neural responses in the visual system.

Key differences between networked fatigue and predictive coding models
Networked fatigue and predictive coding models of adaptation propose distinct mechanisms instantiated within the same local networks of excitatory and inhibitory neurons. Networked fatigue models portray adaptation as a product of structural features of cells and local circuits in visual cortex, and primarily focus on feedforward transmission of afferent sensory signals. By contrast, predictive coding models portray adaptation as a product of stimulus expectations that suppress neuronal activity via lateral and feedback inhibition.
Consequently, the theoretical model adopted by a researcher fundamentally changes the way that adaptation is investigated. Proponents of networked fatigue models typically focus on manipulations of stimulus configurations and the timing and duration of stimulus presentation (e.g., Kaliukhovich & Vogels, 2016; Kuravi & Vogels, 2017; Li & Glickfeld, 2023; Patterson et al., 2013). By contrast, proponents of predictive coding models instead model an observer's time-evolving expectations to estimate adaptation effect magnitudes (e.g., Summerfield et al., 2008, 2011; Bell et al., 2016; discussed in Vinken & Vogels, 2017). A researcher's assumed mechanistic model also determines how adaptation effects are interpreted. Adaptation has been counted as a key piece of evidence for prediction error minimisation as specified in predictive coding accounts (Clark, 2013; see also Walsh, 2020). Differences in measured adaptation effects by context or participant group have also been interpreted as differences in the quality or nature of underlying expectations about one's visual world (e.g., Ewbank et al., 2016; Palmer et al., 2017; Pellicano & Burr, 2012). By contrast, proponents of networked fatigue models describe adaptation as modulating the salience of sensory events and facilitating detection of rapid changes in one's visual environment (Solomon & Kohn, 2014; Vogels, 2016).
Despite the clear differences in their mechanistic descriptions, these models can often be parameterised to predict very similar patterns of neuroimaging results. However, distinct model predictions can be derived for certain patterns of empirical data. In sections 3 and 4 below I discuss two bodies of neuroimaging and psychophysical evidence that pose challenges for adaptation mechanisms described in predictive coding models, but are easily accommodated by networked fatigue models.
Here, it is also important to note that there is ample evidence for fatigue-based mechanisms of adaptation as identified in vitro (reviewed in Vogels, 2016; Whitmire & Stanley, 2016) and in vivo (e.g., Li & Glickfeld, 2023). Adaptation has also been observed in the retina, where it cannot arise from cortico-retinal feedback (e.g., Rieke & Rudd, 2009). Given this very strong evidence for fatigue-based adaptation, the issue of competing theories can be reframed as a question of whether predictive coding-based mechanisms afford any explanatory power beyond that provided by fatigue-based accounts. The following sections are also relevant to resolving this question.

Neuroimaging evidence relating to adaptation and stimulus expectations
In this section, I review evidence relating to whether adaptation is dependent on stimulus expectations as specified in the first variant of predictive coding models described above (e.g., Auksztulewicz & Friston, 2016). For more extensive discussion of expectation effects on neural response measures in the visual system see Walsh et al. (2020), Alink and Blank (2021) and Feuerriegel, Vogels, and Kovács (2021).

Is adaptation modulated by stimulus expectations?
A key finding that is often cited as support for predictive coding models was reported by Summerfield et al. (2008). They presented pairs of faces within an experimental trial that could either be the same (a repetition trial) or different identities (an alternation trial, Fig. 3A). They also manipulated the relative proportions of repetition and alternation trials across blocks. In some blocks, face repetitions were frequent and therefore expected, whereas in others they were rare and surprising. Summerfield and colleagues found that BOLD signal differences in the fusiform face area (FFA, Kanwisher et al., 1997) between repeated and alternating stimuli were larger in blocks whereby repetitions were expected, and smaller in blocks where repetitions were surprising (depicted in Fig. 3B). This finding was interpreted as repetition suppression being modulated by stimulus expectations, and therefore being a consequence of expectation suppression. This landmark study inspired many subsequent experiments that yielded similar patterns of BOLD signal modulations in high-level visual areas (Larsson & Smith, 2012; Kovács et al., 2012, 2013; de Gardelle et al., 2013; Choi et al., 2017). However, as noted by Grotheer and Kovács (2015), the analyses performed in Summerfield et al. (2008) confounded repetition with expectation: in frequent repetition blocks, expected repetition trials were compared with surprising alternation trials. By contrast, in frequent alternation blocks the surprising repetition trials were compared with expected alternation trials (Fig. 3B). Because surprising stimuli evoke larger BOLD signals than expected stimuli (Egner et al., 2010; Grotheer & Kovács, 2015; Richter & de Lange, 2019), additive effects of stimulus repetition and expectation within the same sets of voxels could produce the observed blockwise differences in repetition suppression reported in Summerfield et al. (2008) and subsequent replications. These studies did not provide clear evidence for a modulation of repetition suppression by stimulus expectations.
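The additive-effects explanation can be demonstrated with simple arithmetic (a worked example in arbitrary illustration units, not fitted to any reported data): purely additive repetition and surprise effects, with no interaction, reproduce the blockwise difference in apparent repetition suppression.

```python
# Purely additive effects of repetition and surprise reproduce the blockwise
# repetition suppression differences in Summerfield et al. (2008)-style
# designs without any expectation-by-repetition interaction.
base = 10.0      # response to an unrepeated, expected stimulus
rep = 2.0        # repetition suppression (subtracted for repeated stimuli)
surprise = 1.0   # surprise-related increase (added for rare trial types)

def bold(repeated, surprising):
    """Additive model: no interaction between repetition and surprise."""
    return base - (rep if repeated else 0.0) + (surprise if surprising else 0.0)

# Frequent-repetition block: repeats are expected, alternations are surprising
rs_rep_block = bold(False, True) - bold(True, False)
# Frequent-alternation block: repeats are surprising, alternations are expected
rs_alt_block = bold(False, False) - bold(True, True)

print(rs_rep_block, rs_alt_block)  # 3.0 1.0
```

The apparent repetition suppression is three times larger in expected-repetition blocks (3.0 vs 1.0 units) even though the underlying repetition effect is identical in both blocks, which is why the blockwise comparison cannot, on its own, demonstrate a modulation of repetition suppression by expectation.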
More recent work has shown that repetition and expectation effects on BOLD signals are indeed additive and separable (Grotheer & Kovács, 2015). Studies using electrophysiological recordings have also identified distinct effects of repetition and cued probabilistic expectations (Amado & Kovács, 2016; Feuerriegel et al., 2018; Tang et al., 2018). Other work has identified large stimulus repetition effects in contexts where expectation effects are absent or undetectable, when analysing BOLD signals (Grotheer & Kovács, 2014; Kovács et al., 2013; Larsson & Smith, 2012) and electrophysiological measures (Kaliukhovich & Vogels, 2011, 2014; Larsson & Smith, 2012; Vinken et al., 2018; Solomon et al., 2021; den Ouden et al., 2023). Taken together, this evidence indicates that, although stimulus repetition and expectations can influence BOLD signal magnitudes in the same visual areas, this does not constitute support for a shared underlying mechanism. Instead, the existing evidence indicates distinct effects of adaptation and expectation suppression.

The evidence for expectation suppression in the visual system
There is also considerable doubt about whether the expected versus surprising BOLD signal differences in Summerfield et al. (2008) and other studies (e.g., Egner et al., 2010; Richter & de Lange, 2019) provide clear evidence for genuine expectation suppression that reflects reduced prediction error signalling. According to the theoretical interpretation of expectation suppression effects (e.g., Egner et al., 2010; Summerfield & De Lange, 2014), activity of superficial pyramidal neurons (that signal prediction errors) should be reduced, dampened or 'silenced' when one's expectations to see a specific stimulus are fulfilled, compared to conditions without any strong expectations to see a particular stimulus (depicted in Fig. 3C and D). For example, one would expect smaller BOLD signals following a stimulus image appearing with 90% probability as compared to one that appears with 50% probability. Similarly, predictive coding models of adaptation would categorise a repeated stimulus as an expected stimulus based on expectations derived from recent stimulation history. When key confounding factors were taken into account, there was very limited evidence for genuine expectation suppression, defined as a silencing of activity that occurs with fulfilled stimulus expectations. However, there was clear evidence of BOLD signal increases following surprising events (e.g., Amado et al., 2016), which appeared to be responsible for the expected versus surprising condition differences widely reported in previous work. In other words, differences across BOLD signals for expected and surprising stimuli appeared to be due to signal increases that specifically occurred when an observer's expectations were violated by a surprising event, rather than a suppression or silencing of activity when expectations are fulfilled (depicted in Fig. 3E). Although predictive coding accounts posit that expectation suppression and surprise effects are two sides of the same coin (e.g., Richter et al., 2022; Walsh et al., 2020), they are treated as distinct effects within other theoretical frameworks (e.g., Alink & Blank, 2021; Grotheer & Kovács, 2016; Kovács & Vogels, 2014; Press et al., 2020; Schomaker & Meeter, 2015). Although an absence of evidence does not constitute evidence of absence, there appears to be a lack of empirical support for the type of expectation suppression that could produce repetition suppression in predictive coding models of adaptation.
Notably, effects of expectation and surprise (with sources in the visual system) have not been identified when using electrophysiological recordings and predictive cueing designs (Summerfield et al., 2011; Kaliukhovich & Vogels, 2011, 2014; Rungratsameetaweemana et al., 2018; Vinken et al., 2018; Solomon et al., 2021; reviewed in den Ouden et al., 2023). Expectation effects have been consistently observed in macaques in statistical learning experiments that involved weeks of sequence learning prior to neural response measurement (Meyer & Olson, 2011; Meyer et al., 2014; Ramachandran et al., 2017; Schwiedrzik & Freiwald, 2017; Kaposvari et al., 2018; Esmailpour et al., 2022). However, these effects have not been observed in comparable designs whereby statistical regularities were learned within a single testing session (e.g., Kaliukhovich & Vogels, 2011, 2014; Solomon et al., 2021; Vinken et al., 2018). Such effects also have not been consistently replicated across human studies using statistical learning designs (e.g., Manahova et al., 2018; Zhou et al., 2020). Expectation suppression that requires weeks of sequence learning would not account for adaptation effects that can be observed for completely novel stimuli, and for stimulus repetitions that occur within hundreds of milliseconds (e.g., Kovács et al., 2013; Summerfield et al., 2008).
It is also unclear why surprise-related BOLD signal increases are reliably observed in experimental contexts where electrophysiological measures do not differ across expectancy conditions. BOLD signals may be particularly sensitive to factors that co-occur with surprise but are not necessarily due to prediction error signalling, such as increased pupil dilation (O'Reilly et al., 2013; Richter & de Lange, 2019), time-on-task effects linked to response times (Mumford et al., 2023; Yarkoni et al., 2009) and attentional capture following identification of a surprising stimulus (Herrmann et al., 2010; Press et al., 2020; Alink & Blank, 2021; for further discussion see Richter & de Lange, 2019; den Ouden et al., 2023).

Is adaptation a consequence of stimulus expectations?
Despite early observations by Summerfield et al. (2008) suggesting a modulation of adaptation effects by stimulus expectations, subsequent work has shown that effects of stimulus repetition and expectation are separable and distinct. In addition, there is currently very limited empirical evidence for the kind of expectation suppression in the visual system that is theorised to underlie adaptation effects. Predictive coding models of adaptation could be reformulated to specify that non-repetitions are a type of surprising event associated with BOLD signal increases. However, surprise-related effects also do not replicate in equivalent experimental contexts when using electrophysiological measures, which are arguably more direct measures of prediction error signalling within predictive coding frameworks (Bastos et al., 2012; Friston, 2005).
To be clear, this body of evidence does not imply that probabilistic stimulus expectations cannot be formed based on recent stimulation history. Such expectations could plausibly lead to measurable effects in experiments where stimulation history and expectations co-vary (discussed in Feuerriegel, Yook, et al., 2021). However, in contexts where stimulation history and expectations are dissociable (such as probabilistic cueing designs, e.g., den Ouden et al., 2023), expectation manipulations have not produced effects that resemble repetition suppression or adaptation.
Overall, the existing neuroimaging evidence does not support the claims of Summerfield et al. (2008) and Auksztulewicz and Friston (2016) that adaptation and repetition suppression effects are a product of stimulus expectations. By contrast, this neuroimaging evidence is congruent with networked fatigue models that describe adaptation as distinct from effects of stimulus expectations.

Repulsive motion aftereffects
Motion aftereffects describe phenomena whereby exposure to motion in a certain direction (such as a moving grating, depicted in Fig. 4A) leads to an illusory percept of motion in the opposite direction. This percept occurs following the cessation of the adapting stimulus or upon presentation of a subsequent inducer stimulus, which may be a stationary image (e.g., Addams, 1835; Glasser et al., 2011; Rezec et al., 2004; Thompson, 1880), a stimulus with ambiguous motion information (Levinson & Sekuler, 1976) or a continuously changing, noisy image (Wexler et al., 2013). Motion aftereffects are widely understood to be a consequence of adaptation in the visual system (Anstis et al., 1998; Mather et al., 2008). In recordings of motion-sensitive visual areas in macaques and marmosets, firing rates are reduced for neurons that are selective to the adapted direction of motion (e.g., Glasser et al., 2011; Kohn & Movshon, 2003; Larsson et al., 2010; Lee & Lee, 2012; Patterson et al., 2014; Zavitz et al., 2016). Please note that the arguments here assume a causal link between neural adaptation in the visual system and perceptual motion aftereffects.
Fig. 4 – Motion aftereffects and model predictions within a Bayesian perceptual inference framework. A) Depiction of a motion aftereffect. Following exposure to a grating with bars that steadily drift leftward (the adaptor), the subsequent presentation of a stationary grating (the inducer) results in an illusory percept of rightward motion (a repulsive aftereffect). During the presentation of the stationary image, firing rates in motion-selective areas are suppressed for neurons that prefer the adapted (leftward) direction of motion. The same degree of suppression does not occur for neurons that prefer the opposite (rightward) direction of motion. B) Predictive coding model predictions. Following adaptation, expectations are biased toward the leftward direction of motion (indicated by the left-shifted prior distribution). As the stationary inducer stimulus does not include any motion, the likelihood is flat around the circle. The resulting posterior distribution (corresponding to the observer's percept) is biased toward the adapting direction of motion, rather than away from it. C) Predictions of networked fatigue models. These models instead describe a shift in the likelihood due to biases in population responses away from the adapted direction of motion. This produces a posterior that is repulsed away from the adapting direction of motion and congruent with the illusory percept. In this situation both models correctly predict reduced firing rates for neurons selective for the adapted direction of motion. a.u. = arbitrary units.
Within predictive coding models, adaptation aftereffects can be described using Bayesian perceptual inference frameworks (Lin & Garrido, 2022; Mathys et al., 2014; Yon & Frith, 2021). According to these models (e.g., Auksztulewicz & Friston, 2016; Friston, 2005, 2010; Rao & Ballard, 1999; Summerfield & De Lange, 2014), stimulus expectations influence perception in systematic ways. These models specify a prior distribution (here determined by one's expectations about current sensory input) and a likelihood distribution (reflecting patterns of afferent sensory input that are received). The resulting posterior distribution reflects the combined influence of the prior and likelihood. In a recurrent, rapidly iterative process such as perceptual inference, the posterior in one iteration becomes the prior in the next iteration, and the prior corresponds to one's percept at a given point in time (Seth, 2019).
The prior and likelihood rarely carry equal weight in the battle for our perceptual inferences. The relative influence of each is determined by its precision. For example, very strong expectations to see a particular stimulus would lead to precise priors. Similarly, the signal-to-noise ratio of afferent sensory input closely corresponds to the precision of the likelihood. Clearly visible stimuli (such as high-contrast gratings) would be more effective in shaping our perception than highly ambiguous stimuli (such as blurry images). In cases where we have strong expectations (precise priors) and relatively ambiguous sensory input (imprecise likelihoods), our perception will be strongly biased toward the prior (Yon & Frith, 2021).
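As a minimal sketch of this precision weighting (the means and variances below are arbitrary illustration values, not estimates from any dataset):

```python
# Minimal sketch of precision-weighted combination of a Gaussian prior
# and likelihood; all numbers are arbitrary illustration values.
def posterior_gaussian(mu_prior, var_prior, mu_like, var_like):
    """Posterior of two Gaussians: precisions (inverse variances) add,
    and the mean is the precision-weighted average of the two inputs."""
    p_prior, p_like = 1.0 / var_prior, 1.0 / var_like
    mu = (p_prior * mu_prior + p_like * mu_like) / (p_prior + p_like)
    return mu, 1.0 / (p_prior + p_like)

# Precise prior (strong expectation) + imprecise likelihood (ambiguous input):
mu_strong_prior, _ = posterior_gaussian(0.0, 0.1, 10.0, 10.0)
# Imprecise prior + precise likelihood (clearly visible stimulus):
mu_strong_like, _ = posterior_gaussian(0.0, 10.0, 10.0, 0.1)

print(mu_strong_prior)   # close to the prior mean (0)
print(mu_strong_like)    # close to the likelihood mean (10)
```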
In the stimulation sequence described above that produces motion aftereffects, prolonged exposure to the leftward-moving grating (the adaptor) would be theorised to shift an observer's posterior distribution to align with leftward motion. Within an iterative perceptual inference loop this would produce a precise prior centred on that motion direction (depicted in Fig. 4B). The subsequently presented, stationary stimulus provides no coherent motion signals, corresponding to a likelihood with equal weighting in all directions. In this example with a strong, stubborn prior and an imprecise likelihood, perception would be biased toward the adapted direction of motion (e.g., Seth, 2019; Summerfield & De Lange, 2014). In other words, these models predict perceptual aftereffects in the opposite direction to what is observed. This is despite predictive coding models correctly predicting that responses of motion-selective neurons will be suppressed for cells selective to the direction of adapted motion. By the same line of reasoning, these models would also generate incorrect predictions for other types of repulsive perceptual aftereffects listed above. By contrast, networked fatigue models correctly predict suppressed neural responses for the direction of adapted motion and repulsive motion aftereffects (Anstis et al., 1998; model predictions displayed in Fig. 4C). Rather than assuming a past-weighted prior, networked fatigue models specify a change in the likelihood corresponding to biased population responses of motion-selective neurons. This agrees with existing theoretical and psychophysical work that specified changes in likelihood distributions due to adaptation (Clifford et al., 2007; Schwiedrzik et al., 2014; Stocker & Simoncelli, 2005).
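The diverging predictions of the two accounts can be sketched numerically on the circle of motion directions; this is a toy model in which the concentration and suppression parameters are arbitrary choices:

```python
import numpy as np

# Toy comparison of the two accounts on the circle of motion directions.
# The concentration (kappa) and suppression values are arbitrary choices.
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
ADAPTED = np.pi   # leftward adaptor direction

def bump(mu, kappa=4.0):
    """Unnormalised von Mises bump centred on direction mu."""
    return np.exp(kappa * np.cos(theta - mu))

def decoded(post):
    """Circular mean of a posterior over motion directions."""
    return np.angle(np.sum(post * np.exp(1j * theta)))

# Predictive coding sketch: past-weighted prior at the adapted direction,
# flat likelihood for the stationary inducer.
post_pc = bump(ADAPTED) * np.ones_like(theta)

# Networked fatigue sketch: flat prior, likelihood with responses
# suppressed ("notched") around the adapted direction.
post_nf = 1.0 - 0.8 * bump(ADAPTED) / bump(ADAPTED).max()

d_pc, d_nf = decoded(post_pc), decoded(post_nf)
print(np.cos(d_pc))   # near -1: percept attracted toward the adaptor (incorrect)
print(np.cos(d_nf))   # near +1: percept repelled to the opposite direction
```

The prior-shift formulation places the decoded percept at the adapted direction (an attractive aftereffect), whereas the likelihood-shift formulation places it at the opposite direction, matching the repulsive aftereffect that observers report.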
Please note that this does not negate the existence of priors that are weighted toward recent experience, which may be associated with other predictive processes. Effects of such priors may co-occur with larger changes to likelihood distributions that produce repulsive adaptation aftereffects (Schwiedrzik et al., 2015; Wei & Stocker, 2015). However, appeals to stimulation-dependent priors are not necessary to explain repulsive adaptation aftereffects.

Summary of evidence against predictive coding models of adaptation
As discussed in Sections 3 and 4, the existing neuroimaging and psychophysical evidence does not support either variant of predictive coding models of adaptation. There is little evidence for adaptation effects being modulated by stimulus expectations, and a lack of evidence for the kind of expectation suppression that is theorised to underlie adaptation effects. Other models that assume a persistent, past-weighted prior can be formulated to account for separable adaptation and expectation effects; however, they make incorrect predictions about repulsive adaptation aftereffects. By contrast, networked fatigue models can account for the patterns of neuroimaging and psychophysical findings described above. Based on this evidence, it appears that networked fatigue models provide a more parsimonious account of adaptation in the visual system, and that adaptation is best categorised as a spatiotemporally global process in the taxonomy of Teufel and Fletcher (2020).
Within some predictive processing frameworks, fatigue-based adaptation may serve to reduce prediction errors that correspond to stable sensory input. This would make the visual system exquisitely sensitive to change, as is characteristic of the sensory systems of many different animals. However, this would be an example of a process that appears as if it minimises prediction error without explicitly representing or estimating such errors (discussed in Williams, 2022; Press et al., 2023). There are many examples of similar mechanisms that can simplify the computational burden faced by an organism (Miłkowski, 2018; Teufel & Fletcher, 2020). For example, luminance adaptation in the retina allows us to see over a wide range of ambient luminance levels via local circuit mechanisms (Rieke & Rudd, 2009). Importantly, fatigue-based adaptation does not constitute evidence for lateral or top-down inhibitory mechanisms as specified in predictive coding accounts (e.g., as proposed by Friston, 2005; Bastos et al., 2012; Clark, 2013; Keller & Mrsic-Flogel, 2018; Tschantz et al., 2023).
Fatigue-based adaptation would also not be expected to ensure veridical perception, as exemplified by repulsive perceptual aftereffects. Notably, adaptation effects in the retina, lateral geniculate nucleus and early visual areas do not appear to be compensated for by populations of neurons in higher visual areas that receive adapted input (Dhruv & Carandini, 2014; Seriès et al., 2009; Solomon & Kohn, 2014; Zavitz et al., 2016). As discussed below, adaptation-induced perceptual biases may confer advantages in certain situations.

Interactions between adaptation and other predictive processes
In addition to emphasising changes in sensory environments, adaptation may also assist with another fundamental problem faced by the brain: that it takes time to transform light into perception. Processes such as photochemical bleaching in the retina and synaptic transmission incur delays that accumulate throughout the visual system (Berry et al., 1999; Johnson et al., 2023; Schmolesky et al., 1998). The brain must somehow account for these delays to effectively interact with objects that move or change over time.
Several predictive processes have been proposed to describe how the visual system can compensate for transmission and processing delays to track moving objects (reviewed in Nijhawan, 2008; Hogendoorn, 2020; Turner et al., 2023). Below I discuss how cortical adaptation can also contribute to motion extrapolation as well as the extrapolation of other stimulus features (such as location or colour). This could produce visual representations of probable future object states based on trajectories of recent visual input.

Extrapolation of stimulus position
Our capacity to perceive expected future positions of moving stimuli (termed motion extrapolation) is demonstrated in the flash-lag illusion (Metzger, 1932; Nijhawan, 1994). In Nijhawan's version of this illusion (depicted in Fig. 5A), a bar centred at fixation smoothly rotates around a circle. During this sequence another bar is flashed onscreen with the same orientation as the rotating bar. Observers report that the moving bar appears to be rotated further in the direction of motion compared to the flashed bar. This illusion has spurred widespread interest in motion extrapolation and how it is achieved (reviewed in Hogendoorn, 2020). Extrapolation mechanisms as early as the retina were identified by Berry et al. (1999), who presented flashed and smoothly moving bars to rabbits and salamanders (illustrated in Fig. 5B). They reported that position-selective retinal ganglion cells responded to moving bars at an earlier latency than to flashed bars in the same location. This co-occurred with shifts in firing rate distributions across populations of position-selective cells, where the highest firing rates were observed for cells corresponding to the leading edge of the bar (Fig. 5B). By comparison, the peaks of the distributions for flashed bars were centred at the middle of the bar. Assuming that stimulus location is estimated based on population response distributions, the represented location of the bar as reported by the retina was shifted forward along its direction of motion. Corresponding shifts in position representations have also been reported in the primate retina (Liu et al., 2021), cat and primate V1 (Jancke et al., 2004; Subramaniyan et al., 2018) and human visual cortex (Johnson et al., 2023).
Berry and colleagues proposed that extrapolation was achieved via contrast gain modulation implemented by retinal amacrine cells. Notably, this amacrine cell-mediated mechanism produces effects that are very similar to adaptation in visual cortex (Solomon & Kohn, 2014). After a bar enters a retinal ganglion cell's receptive field, bipolar cells providing feedforward input to the ganglion cell are continually stimulated as the bar passes through that location. This also stimulates amacrine cells that subsequently inhibit ganglion cell responses during further stimulation (Johnston & Lagnado, 2015). By contrast, cells that are most responsive to the position of the bar's leading edge are less inhibited and show relatively higher firing rates. Such a mechanism produces shifted population responses (Fig. 5B) that resemble tuning curve shifts produced by flank adaptation in visual cortex (e.g., Kohn & Movshon, 2004) that are enacted via synaptic fatigue (Li & Glickfeld, 2023). This retinal circuit mechanism could be understood as producing a type of feedforward inherited adaptation that is not corrected for in cortex (e.g., Dhruv & Carandini, 2014; Johnson et al., 2023).
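A reduced sketch of this population-response shift is given below; the step reduction in gain for cells the bar has already swept over is a hypothetical stand-in for the amacrine-mediated mechanism, with all parameter values chosen only for illustration:

```python
import numpy as np

# Sketch of a population response to a rightward-moving bar, with a
# hypothetical gain reduction for cells the bar has already swept over.
prefs = np.linspace(-10.0, 10.0, 2001)   # preferred positions (deg)
SIGMA = 2.0                              # spatial tuning width (illustrative)
BAR_POS = 0.0                            # current bar position

tuning = np.exp(-(prefs - BAR_POS) ** 2 / (2.0 * SIGMA ** 2))
gain = np.where(prefs < BAR_POS, 0.6, 1.0)   # trailing side adapted

flashed = tuning          # flashed bar: no adaptation
moving = tuning * gain    # moving bar: trailing responses suppressed

def decode(resp):
    """Centre-of-mass position estimate from the population response."""
    return np.sum(prefs * resp) / np.sum(resp)

print(decode(flashed))   # ~0: centred on the bar
print(decode(moving))    # positive: shifted toward the leading edge
```

The decoded position for the moving bar is displaced forward along the direction of motion, even though no cell has changed its preferred position; only relative gains have changed.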
Additional extrapolation mechanisms in the retina also resemble effects of adaptation on inhibitory neurons that implement surround suppression in V1. When retinal ganglion cells are stimulated, cells with nearby receptive fields are also depolarised, further enhancing responses around the leading edge of the moving bar (Kastner & Baccus, 2013; Trenholm et al., 2013a, 2013b; Manookin et al., 2018; reviewed in Turner et al., 2023). This type of disinhibition resembles adaptation of the broadly tuned inhibitory surround in V1 (e.g., Wissig & Kohn, 2012; Patterson et al., 2013; depicted in Fig. 2F), which may be able to produce similar extrapolation effects in cortex (compare Kastner & Baccus, 2013; Solomon & Kohn, 2014).
Following similar lines of reasoning, Berry and colleagues also speculated that adaptation-enabled motion extrapolation may occur in visual cortex. Surprisingly, this role of cortical adaptation has not been systematically investigated and is absent from recently proposed mechanistic frameworks (e.g., Hogendoorn, 2020; but see Turner et al., 2023). Networked fatigue models could be developed to predict adaptation dynamics for smooth motion sequences within different visual areas containing neurons with different receptive field sizes. This could characterise the likely contribution of cortical adaptation to motion extrapolation, complementing other mechanisms within the visual system (reviewed in Turner et al., 2023).

Extrapolation of other stimulus features
Remarkably, Berry et al. (1999) also speculated that adaptation might allow the visual system to generate population responses (and percepts) resembling probable future states of an object (here termed stimulus feature extrapolation). For example, in Nijhawan's (1994) demonstration of the flash-lag illusion with rotating bars (Fig. 5A), there are gradual changes in both the bar's orientation and position over time. This leads to illusory shifts in perceived orientation that co-occur with shifts in perceived location. Here, I propose an account of how this may occur in visual cortex.
As illustrated in Fig. 5C, orientation extrapolation (without concurrent position extrapolation) could occur via fatigue-based adaptation acting on orientation-selective neurons in V1 and other areas of visual cortex. In this example, a grating stimulus is presented at fixation (and does not change position) while gradually rotating in a clockwise direction. In a similar fashion to the retinal amacrine-mediated mechanisms described above, orientation-selective neurons that respond to the presented grating will adapt as the grating rotates through their range of preferred orientations. This produces shifted distributions of firing rates that are biased in the direction of rotation as compared to what would be observed for a stationary, flashed grating. Here, adaptation occurring within dynamic stimulus sequences can produce effects that have been theorised to occur via internally generated predictions of future object states (Blom et al., 2020) or changes in interconnectivity across brain areas (reviewed in Hogendoorn, 2020; Turner et al., 2023). Adaptation of broadly tuned inhibitory neurons and subsequent disinhibition of stimulus-selective excitatory neurons (e.g., Wissig & Kohn, 2012; Patterson et al., 2013, depicted in Fig. 2E) may also enhance the extent of population response shifts (e.g., following prolonged exposure to the rotating stimulus).
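This hypothesised mechanism can be sketched with a toy population of orientation-tuned units; the tuning width, adaptation depth, and the convention that clockwise rotation increases orientation values are all illustrative assumptions:

```python
import numpy as np

# Toy orientation extrapolation sketch. Clockwise rotation is taken
# (arbitrarily) to increase orientation values; tuning width and
# adaptation depth are illustrative assumptions.
oris = np.linspace(0.0, 180.0, 180, endpoint=False)   # preferred orientations
CURRENT = 90.0                                        # grating orientation now

def circ_diff(a, b):
    """Signed orientation difference, wrapped into (-90, 90] degrees."""
    return (a - b + 90.0) % 180.0 - 90.0

tuning = np.exp(-circ_diff(oris, CURRENT) ** 2 / (2.0 * 15.0 ** 2))

# Orientations the grating has already rotated through (counter-clockwise
# of the current orientation) are fatigued; a simple linear ramp over the
# last 45 degrees stands in for the adaptation state.
passed = np.clip(-circ_diff(oris, CURRENT), 0.0, 45.0) / 45.0
gain = 1.0 - 0.4 * passed

flashed, rotating = tuning, tuning * gain

def decode(resp):
    """Population readout of orientation, expressed relative to CURRENT."""
    return CURRENT + np.sum(circ_diff(oris, CURRENT) * resp) / np.sum(resp)

print(decode(flashed))    # ~90: veridical for a flashed grating
print(decode(rotating))   # >90: shifted further clockwise, as hypothesised
```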
The perceptual consequences of this hypothesised orientation extrapolation can be demonstrated in a modified flash-lag illusion in which a grating smoothly rotates clockwise. If another grating is flashed alongside this stimulus, the smoothly rotating grating will appear to be further rotated clockwise (i.e., in the direction of rotation) than the flashed grating. Videos demonstrating this effect and extrapolation of other stimulus features (such as colour) can be viewed at osf.io/jey87.
Fig. 5 – B) Retinal extrapolation as reported by Berry et al. (1999). Grey rectangles show snapshots of the trajectory of a black vertical bar that is flashed onscreen (top panel) and smoothly moves across the display (middle and bottom panels). Grey and red curves depict normalised population responses of location-selective neurons without adaptation (grey) and with adaptation (red). When the bar appears onscreen, the population response is centred on the location at which the bar is presented. As the bar moves across the screen, neurons responsive to the trailing edge are adapted (suppressed) over time, whereas other neurons are newly excited by the leading edge. This produces a shift in the peak of the population distribution toward the leading edge of the bar. C) Example of grating orientation extrapolation. An oriented grating stimulus (top panels) is flashed onscreen (left column) and smoothly rotates clockwise over time (middle and right columns). A layer of orientation-selective neurons (shaded in blue) provides input to a second layer of neurons (shaded in red) that integrates responses across multiple cells with similar orientation preferences. When the grating is flashed onscreen, the population response is centred over the presented orientation. As the grating rotates, adaptation occurring at the first layer causes the distribution of activity in the second layer to shift further clockwise than if the stimulus was flashed onscreen with the same orientation (bottom panels, effects highlighted by red arrows). Relative response magnitudes (ranging from 0 to 1) are displayed for second layer neurons around the orientation of the presented stimulus at each step.
More generally, extrapolation could be expected to occur for any stimulus feature that is associated with repulsive adaptation aftereffects, including complex features such as line thickness (Blakemore & Sutton, 1969), movement speed (Mather et al., 2017; Mather & Parsons, 2018), shape (Dickinson et al., 2010), face identity (Rhodes et al., 2009) and facial expression (Xu et al., 2008, 2012). The degree of extrapolation and sensitivity to rates of change in a stimulus feature are likely to depend on the time-courses of adaptation within different areas of the visual system.
Only one year after the speculations by Berry et al. (1999), this type of stimulus feature extrapolation was reported by Sheth et al. (2000) for colour, luminance, spatial frequency, and pattern entropy. In each of their experiments, the percept of a flashed disk lagged behind the changing disk with respect to the relevant feature dimension. For example, a disk that smoothly changed from red to green appeared to be greener than a flashed disk of the same colour. However, Sheth and colleagues disputed a role of adaptation, and instead favoured a priming and masking-based explanation. As a consequence, it appears that contributions of adaptation to feature extrapolation have not been adequately investigated in subsequent work. Although adaptation is certainly not the sole contributing process to flash-lag effects (see Hogendoorn, 2020; Nijhawan, 2008), its contribution cannot be dismissed entirely based on the limited control analyses in Sheth et al. (2000).
A unique contribution of adaptation can be revealed by demonstrating the retinotopic specificity of some stimulus feature extrapolation effects. Because adaptation acts on stimulus-selective neurons that have limited spatial receptive fields, adaptation effects (and feature extrapolation) should also be local to those stimulated populations of neurons. This is observable in visual displays where a stimulus with a gradually changing feature also increases in size over time (videos available at osf.io/jey87). For example, when viewing a circle that gradually changes in colour from red to green while expanding in size, the inner portion of the circle appears greener than the edges. In this case, the degree of extrapolation toward green is larger for the centre, which has been adapted over a longer period, as compared to the more recently stimulated edges of the expanding circle. A similar effect can be observed for expanding, rotating gratings, whereby the lines in the centre and at the edges of the grating appear to be mismatched due to more extensive orientation extrapolation in the centre. More generally, sensitivity to gradual size changes should depend on the receptive field sizes of the relevant adapted stimulus-selective neurons.
Importantly, these hypothesised shifts in feature-selective population responses would not necessarily be dependent on the presentation of a predictable stimulus trajectory (e.g., a smoothly rotating bar). Even random sequences of stimuli may produce shifts in single neuron tuning curves and population responses, depending on the differences between current and recent prior stimulus feature values (as observed in Felsen et al., 2002; Dhruv & Carandini, 2014).
This type of adaptation-enabled feature extrapolation appears to reflect an in-built capacity of visual stimulus-selective neurons to (partly) compensate for neural transmission delays and align visual representations with probable future states. Surprisingly, stimulus feature extrapolation (other than motion extrapolation) has not been thoroughly investigated since the initial observations by Berry et al. (1999) and Sheth et al. (2000). Consequently, a broad collection of feature extrapolation effects remains to be characterised.
Shifted population responses arising from adaptation are also likely to influence the actions of other predictive processes that contribute to extrapolation in different contexts (e.g., De Freitas et al., 2016; Callahan-Flintoft et al., 2020; reviewed in Hogendoorn, 2020; Rust & Palmer, 2021). Mapping the magnitudes of extrapolation for different stimulus features, their degree of retinotopic specificity and their sensitivity to rates of change (e.g., De Freitas et al., 2016; Sheth et al., 2000; Yook et al., 2022) may also help us better understand how we maintain stable, unified percepts of moving and dynamically changing objects (discussed in Hogendoorn, 2022).

Conclusion
In this review I have described two influential, competing mechanistic accounts of adaptation in the visual system. I have also presented neuroimaging and psychophysical evidence that poses fundamental problems for predictive coding models of adaptation but is congruent with networked fatigue models. Based on this evidence, it appears that adaptation is best categorised as a spatiotemporally global process enacted by fatigue-based mechanisms. Although observers can, in principle, form probabilistic stimulus expectations based on recent stimulation history, any consequences of such expectations are likely to co-occur (or possibly interact) with fatigue-based adaptation. An understanding of adaptation via fatigue-based mechanisms also leads to fascinating new lines of inquiry focused on the interactions between adaptation and other predictive processes in the visual system.

Declaration of competing interest
None.

Fig. 2 – Proposed mechanisms of adaptation. A) Response fatigue. A neuron (shaded blue) provides input to a target neuron (shaded grey). The left plot of connected neurons depicts afferent responses following the first presentation of a stimulus. The right plot depicts a reduction in firing rates following an immediate repetition of that stimulus due to mechanisms intrinsic to the target neuron (spike rate adaptation or afterhyperpolarisation). B) Input fatigue. Multiple neurons provide input to a target neuron. With stimulus repetition, excitatory synaptic input to the target neuron decreases due to synaptic depression or synaptic plasticity (middle plot) or response fatigue within the input neurons (right plot). C) Fatigue in recurrently connected neurons. A pair of neurons provide excitatory drive to each other. Adaptation due to response or input fatigue reduces the degree of recurrent excitatory input. D) Inherited adaptation. A target neuron in one visual brain area integrates inputs from excitatory neurons that reside in an earlier area within the visual hierarchy. Reduced excitatory input from these neurons reduces the response of the target neuron. E) Adaptation of inhibitory neurons. The blue shaded neuron provides input to an inhibitory neuron, which subsequently inhibits a broad set of excitatory neurons, including those selective for a different stimulus (green shaded neurons). When the stimulus that preferentially excites the green shaded neurons is presented, responses of these neurons are disinhibited (enhanced) due to adaptation of the inhibitory neuron. F) Adaptation of the suppressive surround. A stimulated excitatory neuron also stimulates inhibitory neurons that reduce excitatory responses across a broad area exceeding the excitatory neuron's classical receptive field (i.e., the suppressive surround). The inhibitory neurons also adapt and less effectively inhibit responses of a subsequently stimulated excitatory neuron with a classical receptive field that is located within the suppressive surround. G) Prediction error minimisation via expectation suppression. In this circuit, superficial pyramidal cells (triangles) signal prediction errors, whereas deep pyramidal cells (squares) signal predictions about the current or future state of sensory or corticocortical input. Upon the first presentation of a stimulus there are no specific expectations to see that stimulus, resulting in large responses of superficial pyramidal neurons (i.e., large prediction error signals). Upon repetition of the stimulus, an observer's expectations have been adjusted to see that stimulus again. Increased activity of the deep pyramidal neurons (corresponding to expectations for the presented stimulus) inhibits responses of superficial pyramidal cells via actions of inhibitory interneurons. H) Bayesian formulation of adaptation as described in predictive coding models. Following exposure to an adaptor stimulus (e.g., a grating with vertically oriented lines), the predictions of an internal model (the prior) are centred on the feature values of that adapting stimulus. Likelihood distributions reflect population-level patterns of sensory input within the relevant visual area. A repetition of the same vertical grating leads to greatly overlapping prior and likelihood distributions and low prediction error (approximated using Kullback-Leibler divergence). By contrast, a stimulus that is oriented away from the adaptor results in relatively less overlap and higher overall prediction error. Rep = repeated, Alt = alternating, a.u. = arbitrary units.
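The Bayesian formulation in panel H can be sketched numerically. As an illustrative assumption (not a model specified in this article), both the prior and the likelihood are treated as Gaussian distributions over orientation, and prediction error is approximated as the Kullback-Leibler divergence between them:

```python
import numpy as np

def gaussian(theta, mu, sigma):
    """Unnormalised Gaussian profile over orientation (degrees)."""
    return np.exp(-0.5 * ((theta - mu) / sigma) ** 2)

def kl_divergence(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence D_KL(p || q)."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

theta = np.linspace(-90, 90, 181)          # orientation axis (degrees)
prior = gaussian(theta, mu=0, sigma=15)    # prior centred on the vertical adaptor

repeated = gaussian(theta, mu=0, sigma=15)   # repeated vertical grating
rotated = gaussian(theta, mu=30, sigma=15)   # grating rotated 30 deg from the adaptor

pe_repeated = kl_divergence(repeated, prior)  # near zero: distributions overlap fully
pe_rotated = kl_divergence(rotated, prior)    # larger: less overlap, more prediction error
```

On this formulation, the repeated grating yields a prediction error near zero, whereas the rotated grating yields a substantially larger value, matching the qualitative pattern described for panel H. The tuning widths and the 30° offset are arbitrary choices for illustration.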

Fig. 3 – Examples of experiment designs manipulating stimulus repetition and stimulus expectations. A) Blocked repetition probability design of Summerfield et al. (2008) and subsequent replications. In each trial, two images are sequentially presented that may be the same (repetitions) or different images (alternations). The probability of stimulus repetition is high (e.g., 75%) in some blocks of trials, and low (25%) in other blocks. B) Depiction of fMRI BOLD responses typically observed in blocked repetition probability designs. C) Predictive cueing designs. In each trial a cue signals the appearance probabilities of different stimuli that subsequently appear. Stimuli are denoted here as expected (75% probability), neutral (50% probability) or surprising (25% probability). D) Hypothesised effects of both expectation suppression and surprise responses on BOLD signals. Here, BOLD signals inversely scale with the subjective appearance probability of the stimulus. E) Hypothesised effects of surprise-related BOLD signal increases without concurrent expectation suppression. Animal stimulus images are taken from Rossion and Pourtois (2004). a.u. = arbitrary units.
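One simple way to formalise the hypothesised inverse scaling of BOLD signals with subjective appearance probability (panel D) is Shannon surprise, −log p. The mapping below is an illustrative assumption for the cue probabilities named in panel C, not a model fitted in this article:

```python
import math

# Subjective appearance probabilities from the predictive cueing design (panel C)
probabilities = {"expected": 0.75, "neutral": 0.50, "surprising": 0.25}

# Shannon surprise: taking response magnitude as -log(p) makes signals
# scale inversely with appearance probability (arbitrary units)
surprise = {label: -math.log(p) for label, p in probabilities.items()}
# Ordering: surprising > neutral > expected
```

Any monotonically decreasing function of p would reproduce the qualitative ordering in panel D; −log p is used here only because it is the standard information-theoretic measure of surprise.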

Fig. 5 – Examples of motion and stimulus feature extrapolation and hypothesised contributions of cortical adaptation. A) The flash-lag illusion as presented in Nijhawan (1994). Observers view a bar that smoothly rotates clockwise (left panel). When two bars with the same orientation as the central bar are flashed onscreen (middle panel), the central bar appears further rotated clockwise than the flashed bars (right panel). B) Example of smooth linear motion extrapolation, adapted from Berry et al. (1999). Grey rectangles show snapshots of the trajectory of a black vertical bar that is flashed onscreen (top panel) and smoothly moves across the display (middle and bottom panels). Grey and red curves depict normalised population responses of location-selective neurons without adaptation (grey) and with adaptation (red). When the bar appears onscreen, the population response is centred on the location at which the bar is presented. As the bar moves across the screen, neurons responsive to the trailing edge are adapted (suppressed) over time, whereas other neurons are newly excited by the leading edge. This produces a shift in the peak of the population distribution toward the leading edge of the bar. C) Example of grating orientation extrapolation. An oriented grating stimulus (top panels) is flashed onscreen (left column) and smoothly rotates clockwise over time (middle and right columns). A layer of orientation-selective neurons (shaded in blue) provides input to a second layer of neurons (shaded in red) that integrates responses across multiple cells with similar orientation preferences. When the grating is flashed onscreen, the population response is centred over the presented orientation. As the grating rotates, adaptation occurring at the first layer causes the distribution of activity in the second layer to shift further clockwise than if the stimulus was flashed onscreen with the same orientation (bottom panels, effects highlighted by red arrows). Relative response magnitudes (ranging from 0 to 1) are displayed for second-layer neurons around the orientation of the presented stimulus at each step.
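The population-level shift described in panel B can be sketched with a toy simulation. The Gaussian tuning profile and the sigmoidal adaptation gain below are illustrative assumptions; the point is only that suppressing responses of neurons at the trailing edge moves the population peak toward the leading edge:

```python
import numpy as np

positions = np.linspace(0, 100, 201)   # receptive-field centres (a.u.)
bar_pos = 50.0                          # current position of the moving bar
sigma = 8.0                             # tuning width (a.u.)

# Population response without adaptation: centred on the bar's position
fresh = np.exp(-0.5 * ((positions - bar_pos) / sigma) ** 2)

# Adaptation: neurons behind the bar (trailing edge) were recently stimulated
# and are suppressed; gain recovers ahead of the bar (illustrative sigmoid)
gain = 1.0 / (1.0 + np.exp(-(positions - bar_pos) / 5.0))
adapted = fresh * gain

peak_fresh = positions[np.argmax(fresh)]      # peak sits at the bar
peak_adapted = positions[np.argmax(adapted)]  # peak shifts toward the leading edge
```

With these parameters the adapted peak lands a few units ahead of the bar, reproducing the forward shift of the red curves in panel B; the size of the shift depends entirely on the arbitrary tuning and gain parameters chosen here.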
Please cite this article as: Feuerriegel, D., Adaptation in the visual system: Networked fatigue or suppressed prediction error signalling?, Cortex, https://doi.org/10.1016/j.cortex.2024.06.003

certain conditions (as observed in V1 and IT), and subsequent replications were not appropriate for detecting a genuine modulation of repetition suppression by stimulus expectations. For each block type, repetition suppression was calculated as the difference in BOLD signals evoked by repeated as compared to alternating stimulus trials. In frequent repetition

Feuerriegel, Vogels, and Kovács (2021) systematically evaluated the evidence for expectation suppression effects on BOLD signals, MEG/EEG signals, firing rates, and local field potentials in the visual systems of humans and non-human primates.