Predicting visual memory across images and within individuals
Introduction
We remember some items that we encounter in our day-to-day lives with ease. For example, we may recall a painting that we saw in a museum long after our visit. At the same time, we often fail to remember other, similar items, such as the wall art in a doctor's office. Although which items go on to be remembered or forgotten can seem arbitrary, our memory is not completely unpredictable.
Intuitively, what we go on to remember should be largely determined by processing that occurs during and after encoding. However, recent work has identified two factors that predict our memories ahead of time. First, individuals collectively show strong agreement in which images they will remember. That is, certain images are intrinsically memorable or forgettable (Bainbridge, Isola, & Oliva, 2013; Isola, Xiao, Torralba, & Oliva, 2011). At the same time, each individual exhibits idiosyncratic moment-to-moment sustained attention dynamics, and the attentional state leading up to the moment of encoding impacts subsequent memory (deBettencourt, Norman, & Turk-Browne, 2018). This suggests, for example, that we are unlikely to remember a forgettable art piece, or to recall an exhibit encountered after our sustained attention had faded during a long museum visit. Neither memorability nor sustained attention, however, features in most models of visual memory.
By testing many individuals on diverse stimulus sets, studies of visual memory have revealed striking consistency in the pictures that are remembered or forgotten (Bainbridge et al., 2013; Isola, Parikh, Torralba, & Oliva, 2011; Isola, Xiao, et al., 2011; Isola, Xiao, Parikh, Torralba, & Oliva, 2014). In other words, we can predict ahead of time which specific images will be remembered. This widespread consistency implies that memorability is a property inherent to an image itself. Memorability can be quantified through continuous recognition tasks, in which participants detect specific stimulus repeats in a stream of images; certain images (i.e., high-memorability images) are much more likely to be correctly detected in these tasks. Importantly, a memorability score measured in one experiment has been shown to translate successfully across other tasks, participants, image contexts, and delays (e.g., Bainbridge, 2020; Broers, Potter, & Nieuwenstein, 2018; Goetschalckx, Moors, & Wagemans, 2018). Furthermore, memorability is not simply a product of an item's low-level visual features (color, texture, shape, orientation, or spatial frequency), nor of its attractiveness or visual interest. Instead, it is distinct from other visual factors (Bainbridge, 2019; Bainbridge et al., 2013) and unaffected by reward and cognitive control (Bainbridge, 2020). In sum, research on image memorability emphasizes that individuals' memories are enhanced for specific stimuli over others.
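The scoring logic behind such memorability measures can be sketched as follows. This is a minimal illustration with made-up counts, computing corrected recognition (hit rate minus false-alarm rate) per image; the function and variable names are our own, not the study's code:

```python
import numpy as np

def memorability_scores(hits, misses, false_alarms, correct_rejections):
    """Corrected recognition (CR) per image: hit rate minus false-alarm rate.

    Each argument is an array of per-image response counts pooled across
    the participants who saw that image in a continuous recognition task.
    """
    hits = np.asarray(hits, dtype=float)
    misses = np.asarray(misses, dtype=float)
    false_alarms = np.asarray(false_alarms, dtype=float)
    correct_rejections = np.asarray(correct_rejections, dtype=float)

    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return hit_rate - fa_rate

# Toy counts for three hypothetical images (not real data):
cr = memorability_scores(hits=[18, 10, 5], misses=[2, 10, 15],
                         false_alarms=[1, 3, 6], correct_rejections=[19, 17, 14])
```

Under this scheme, the first image (CR = 0.85) would count as highly memorable and the third as forgettable, and the per-image scores can then be averaged across many observers to yield a population-level memorability estimate.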
By tracking individuals' behavior over time, on the other hand, studies of sustained attention have revealed how fluctuating attentional states impact what is later remembered (Barel & Tzischinsky, 2020; deBettencourt et al., 2018; Madore et al., 2020; Song, Finn, & Rosenberg, 2021). That is, when attention is lapsing, we can predict that the forthcoming image will be disadvantaged. Changes in sustained attention from one moment to the next can be measured via continuous performance tasks (CPTs), in which participants repeatedly make the same response to the vast majority of stimuli but must make a different response to a rarely presented stimulus. Sustained attentional states can be operationalized via behavioral performance on such a task, with lapsing attentional states indexed by measures such as misses (incorrect responses to infrequent target trials), false alarms (incorrect responses to frequent nontarget trials), and faster and more prepotent responses (deBettencourt, Cohen, Lee, Norman, & Turk-Browne, 2015; deBettencourt, Keene, Awh, & Vogel, 2019; Robertson, Manly, Andrade, Baddeley, & Yiend, 1997; Rosenberg, Noonan, DeGutis, & Esterman, 2013). In sum, research on sustained attention emphasizes that an individual's memories are enhanced for images that appear in engaged attentional states.
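One common RT-based state measure of this kind can be sketched as below: each trial's attentional state is indexed by the mean response time over the preceding nontarget trials, z-scored within participant, so that faster-than-usual responding marks a more automatic, lapse-prone state. This is a generic sketch under our own assumptions (window size, z-scoring), not any particular study's exact pipeline:

```python
import numpy as np

def rt_attention_index(rts, window=3):
    """Trial-wise attention index from a CPT response-time series.

    For each trial, take the mean RT over the preceding `window` trials,
    then z-score those values within the participant. Lower (faster)
    values index a more automatic, lapse-prone state; higher values a
    more controlled, engaged state. Trials without a full preceding
    window are returned as NaN.
    """
    rts = np.asarray(rts, dtype=float)
    n = len(rts)
    pre = np.full(n, np.nan)
    for i in range(window, n):
        pre[i] = rts[i - window:i].mean()
    valid = ~np.isnan(pre)
    pre[valid] = (pre[valid] - pre[valid].mean()) / pre[valid].std()
    return pre

# Toy RT series (seconds): a fast, lapse-like run followed by slower responding.
idx = rt_attention_index([0.5, 0.5, 0.5, 0.3, 0.3, 0.3, 0.7, 0.7, 0.7, 0.5])
```

The z-scoring step matters because baseline speed differs across people; what predicts memory in this literature is deviation from one's own typical pace, not absolute RT.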
Although image memorability and sustained attentional state are each important for what we remember, no work to date has combined these two factors. Do external image features common across the population (like memorability) and internal mental states idiosyncratic to individuals (like sustained attention) explain unique variance in what we remember? Can we make honed predictions of what people will remember based on the memorability of a given image and their sustained attentional state at that time? To address these questions, we built a model of visual long-term memory that leverages the influence of both image memorability and individual sustained attentional state. In Experiments 1 and 2 we collected new data to measure image memorability and participant sustained attention and memory, respectively. In Experiment 3 we reanalyzed existing data measuring participant sustained attention and memory to test the replicability of the results found in Experiments 1 and 2.
Leveraging these memorability scores and behavioral measures of attentional states at each moment, we combined data across experiments to build a model of subsequent memory. Specifically, the model predicted whether images would be remembered or forgotten based on image memorability scores and response time (RT) signatures of trial-to-trial fluctuations in attention. Results revealed that image memorability and sustained attentional state uniquely predicted memory, and together explained more variance in what people remembered than either factor alone. Thus, armed only with an item's memorability and measures of someone's attentional state, two factors previously described in entirely separate literatures, we can successfully predict what individuals will go on to remember.
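The unique-variance logic of such a model can be illustrated with a simple fixed-effects logistic regression on synthetic data. The paper's actual analyses likely involved richer (e.g., mixed-effects) models, so this is purely a sketch of the comparison between a combined model and a memorability-only model:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=5000):
    """Minimal logistic regression fit by gradient ascent on the mean
    log-likelihood; an intercept column is added automatically."""
    X = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

def log_likelihood(X, y, w):
    """Bernoulli log-likelihood of the fitted model on data (X, y)."""
    X = np.column_stack([np.ones(len(X)), X])
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Synthetic data in which memory depends on BOTH image memorability and
# the encoder's attentional state (coefficients 0.8 and 0.6 are made up).
rng = np.random.default_rng(0)
n = 2000
memorability = rng.normal(size=n)   # per-trial image memorability score
attention = rng.normal(size=n)      # per-trial RT-based attention index
p_remember = 1.0 / (1.0 + np.exp(-(0.8 * memorability + 0.6 * attention)))
remembered = (rng.random(n) < p_remember).astype(float)

# Combined model vs. memorability-only model.
X_full = np.column_stack([memorability, attention])
w_full = fit_logistic(X_full, remembered)
ll_full = log_likelihood(X_full, remembered, w_full)
w_mem = fit_logistic(memorability[:, None], remembered)
ll_mem = log_likelihood(memorability[:, None], remembered, w_mem)
```

On data generated this way, the combined model recovers positive weights for both predictors and achieves a higher log-likelihood than the memorability-only model, mirroring the paper's finding that the two factors explain unique variance.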
Methods
We used data from three experiments to characterize the distinct contributions of image memorability and sustained attentional state to subsequent memory (Fig. 1). In Experiment 1, we ran a large-scale crowd-sourced online experiment to derive the intrinsic memorability of 1100 scene images. In Experiment 2, we collected data as participants performed a CPT and subsequent recognition memory test with these images. In Experiment 3, we re-analyzed data from a study in which different participants
Image memorability is reliable across individuals
We first asked whether there were specific scene images in our stimulus set that were more memorable or forgettable across individuals. To evaluate the consistency of corrected recognition (CR) performance for images in the continuous recognition task (Fig. 1a), we correlated the values for specific scene images obtained from different split-halves of the Experiment 1 Amazon Mechanical Turk sample (n = 706). Consistent with prior work, image memorability was highly reliable (mean Spearman's
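The split-half consistency analysis described above can be sketched as follows, using a numpy-only Spearman correlation (rank correlation, valid for tie-free data) on synthetic per-image scores. This is a generic sketch of the analysis logic, not the study's code, and the simulated participant and image counts are arbitrary:

```python
import numpy as np

def spearman(a, b):
    """Spearman's rho for tie-free data: Pearson correlation of ranks."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

def split_half_reliability(scores, n_splits=50, seed=0):
    """Mean Spearman correlation between per-image scores computed from
    random halves of the participant pool.

    scores: (n_participants, n_images) array of per-image memory scores
    (e.g., corrected recognition).
    """
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    n = scores.shape[0]
    rhos = []
    for _ in range(n_splits):
        perm = rng.permutation(n)
        half_a = scores[perm[: n // 2]].mean(axis=0)   # image means, half A
        half_b = scores[perm[n // 2 :]].mean(axis=0)   # image means, half B
        rhos.append(spearman(half_a, half_b))
    return float(np.mean(rhos))
```

If images carry an intrinsic memorability signal, the image rankings derived from disjoint groups of observers should agree, yielding a high mean rho; for purely idiosyncratic memory, the correlation would hover near zero.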
Discussion
What affects our long-term memories? We posited that certain factors are highly specific to the information to be encoded, but shared across individuals, whereas other factors may be highly specific to individuals, regardless of what information is being encoded. To characterize both image- and individual-specific factors that impact what we remember, we built a model that predicted visual long-term memory from an image's memorability and an individual's moment-to-moment attentional state. We
Data and code availability
The de-identified data for all experiments and materials for all data analyses are posted at https://osf.io/6uc48/. Experiments were not formally preregistered.
CRediT authorship contribution statement
Cheyenne D. Wakeland-Hart: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing – original draft, Visualization. Steven A. Cao: Investigation, Data curation, Writing – review & editing. Megan T. deBettencourt: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing – original draft, Visualization, Supervision. Wilma A. Bainbridge: Conceptualization, Methodology, Software, Validation,
Declaration of Competing Interest
None.
Acknowledgements
This research was supported by National Science Foundation grant BCS-2043740 (M.D.R.), National Institutes of Health grant F32MH115597 (M.T.dB.), and The University of Chicago Summer Institute in Social Research Methods and Micro-Metcalf Programs (C.D.W.-H.) and College Curriculum Innovation Fund (M.D.R.).
References
Memorability: How what we see influences what we remember. Psychology of Learning and Motivation - Advances in Research and Theory (2019).
Intrinsic and extrinsic effects on image memorability. Vision Research (2015).
Errors lead to transient impairments in memory formation. Cognition (2020).
"Oops!": Performance correlates of everyday attentional failures in traumatic brain injured and normal subjects. Neuropsychologia (1997).
Predicting attention across time and contexts with functional brain connectivity. Current Opinion in Behavioral Sciences (2021).
Linking implicit and explicit memory: Common encoding factors and shared representations. Neuron (2006).
The resiliency of image memorability: A predictor of memory separate from attention and priming. Neuropsychologia (2020).
Memorability: A stimulus-driven perceptual neural signature distinctive from memory. NeuroImage (2017).
The intrinsic memorability of face photographs. Journal of Experimental Psychology: General (2013).
Dissociating neural markers of stimulus memorability and subjective recognition during episodic retrieval. Scientific Reports (2018).
The relation between sustained attention and incidental and intentional object-location memory. Brain Sciences.
Parsimonious mixed models. ArXiv.
Colour memory for various sky, skin, and plant colours: Effect of the image context. Color Research and Application.
The psychophysics toolbox. Spatial Vision.
Enhanced recognition of memorable pictures in ultra-fast RSVP. Psychonomic Bulletin and Review.
A limited memory algorithm for bound constrained optimization. Journal of Scientific Computing.
Closed-loop training of attention with real-time brain imaging. Nature Neuroscience.
Real-time triggering reveals concurrent lapses of attention and working memory. Nature Human Behaviour.
Forgetting from lapses of sustained attention. Psychonomic Bulletin and Review.
Sustained attention and spatial attention distinctly influence long-term memory encoding. Journal of Cognitive Neuroscience.
Sustained attention across the life span in a sample of 10,000: Dissociating ability and strategy. Psychological Science.
¹ Authors contributed equally.