Cognition, Volume 227, October 2022, 105201

Predicting visual memory across images and within individuals

https://doi.org/10.1016/j.cognition.2022.105201

Abstract

We only remember a fraction of what we see—including images that are highly memorable and those that we encounter during highly attentive states. However, most models of human memory disregard both an image's memorability and an individual's fluctuating attentional states. Here, we build the first model of memory synthesizing these two disparate factors to predict subsequent image recognition. We combine memorability scores of 1100 images (Experiment 1, n = 706) and attentional state indexed by response time on a continuous performance task (Experiments 2 and 3, n = 57 total). Image memorability and sustained attentional state explained significant variance in image memory, and a joint model of memory including both factors outperformed models including either factor alone. Furthermore, models including both factors successfully predicted memory in an out-of-sample group. Thus, building models based on individual- and image-specific factors allows for directed forecasting of our memories.

Significance statement

Although memory is a fundamental cognitive process, memory failures often cannot be predicted until it is too late. In this study, however, we show that much of memory is surprisingly determined ahead of time, by factors shared across the population and factors highly specific to each individual. Specifically, we build a new multidimensional model that predicts memory based just on the images a person sees and when they see them. This research synthesizes findings from disparate domains, spanning computer vision, attention, and memory, into a single predictive model. These findings have far-reaching implications for domains such as education, business, and marketing, where predicting (and even manipulating) what information people will remember is a top priority.

Introduction

We remember some items that we encounter in our day-to-day lives with ease. For example, we may recall a painting that we saw in a museum long after our visit. At the same time, we often fail to remember other, similar items, such as the wall art in a doctor's office. Although which items go on to be remembered or forgotten can seem arbitrary, our memory is not completely unpredictable.

Intuitively, it seems that what we go on to remember should be largely determined by processing that occurs during and after encoding. However, recent work has identified two factors that determine our memories ahead of time. First, individuals collectively show strong agreement in which images they will remember; that is, certain images are intrinsically memorable or forgettable (Bainbridge, Isola, & Oliva, 2013; Isola, Xiao, Torralba, & Oliva, 2011). At the same time, each individual exhibits idiosyncratic moment-to-moment sustained attention dynamics, and the attentional state leading up to the moment of encoding impacts subsequent memory (deBettencourt, Norman, & Turk-Browne, 2018). This suggests, for example, that we are unlikely to remember a forgettable art piece, or to recall an exhibit encountered after our sustained attention faded during a long museum visit. Neither memorability nor sustained attention, however, features in most models of visual memory.

By testing many individuals on diverse stimulus sets, studies of visual memory have revealed striking consistency in which pictures are remembered or forgotten (Bainbridge et al., 2013; Isola, Parikh, Torralba, & Oliva, 2011; Isola, Xiao, et al., 2011; Isola, Xiao, Parikh, Torralba, & Oliva, 2014). In other words, we can predict ahead of time which specific images will be remembered. This widespread consistency implies that memorability is a property inherent to an image itself. Memorability can be quantified through continuous recognition tasks, in which participants detect specific stimulus repeats in a stream of images: certain images (i.e., highly memorable images) are much more likely to be correctly detected. Importantly, a memorability score measured in one experiment has been shown to translate successfully across other tasks, participants, image contexts, and delays (e.g., Bainbridge, 2020; Broers, Potter, & Nieuwenstein, 2018; Goetschalckx, Moors, & Wagemans, 2018). Furthermore, memorability is not simply a product of an item's low-level visual features (color, texture, shape, orientation, or spatial frequency), nor of attractiveness or visual interest; instead, it is distinct from other visual factors (Bainbridge, 2019; Bainbridge et al., 2013) and unaffected by reward and cognitive control (Bainbridge, 2020). In sum, research on image memorability emphasizes that individuals' memories are enhanced for specific stimuli over others.
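To make the corrected-recognition logic above concrete, here is a minimal Python sketch of how a per-image memorability score could be computed from continuous recognition data. This is an illustration under assumptions, not the authors' actual pipeline: the `trials` table and its columns (`image`, `is_repeat`, `responded`) are hypothetical names.

```python
import pandas as pd

def memorability_scores(trials: pd.DataFrame) -> pd.Series:
    """Corrected recognition per image: hit rate on repeat presentations
    minus false-alarm rate on first presentations, pooled over participants.
    (Hypothetical schema: columns `image`, `is_repeat`, `responded`.)"""
    repeats = trials[trials["is_repeat"]]   # second presentations (repeats)
    firsts = trials[~trials["is_repeat"]]   # first presentations
    hit_rate = repeats.groupby("image")["responded"].mean()
    fa_rate = firsts.groupby("image")["responded"].mean()
    return (hit_rate - fa_rate).rename("memorability")
```

Scores near 1 would mark images that nearly everyone correctly detects; scores near 0 or below would mark images that are often missed or falsely recognized.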

By tracking individuals' behavior over time, on the other hand, studies of sustained attention have revealed how fluctuating attentional states impact what is later remembered (Barel & Tzischinsky, 2020; deBettencourt et al., 2018; Madore et al., 2020; Song, Finn, & Rosenberg, 2021). That is, when attention is lapsing, we can predict that the forthcoming image will be disadvantaged in memory. Changes in sustained attention from one moment to the next can be measured via continuous performance tasks (CPTs), in which participants repeatedly make the same response to the vast majority of stimuli but must make a different response to a rarely presented stimulus. Sustained attentional states can be operationalized via behavioral performance on such a task, with lapsing attentional states indexed by measures such as misses (incorrect responses to infrequent target trials), false alarms (incorrect responses to frequent nontarget trials), and faster, more prepotent responses (deBettencourt, Cohen, Lee, Norman, & Turk-Browne, 2015; deBettencourt, Keene, Awh, & Vogel, 2019; Robertson, Manly, Andrade, Baddeley, & Yiend, 1997; Rosenberg, Noonan, DeGutis, & Esterman, 2013). In sum, research on sustained attention emphasizes that an individual's memories are enhanced for images that appear during engaged attentional states.
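As a concrete illustration of how trial-to-trial attentional state can be indexed from CPT response times, the sketch below z-scores the mean RT over the few trials preceding each stimulus against the participant's overall RT distribution, so that unusually fast (prepotent) responding yields low values. The three-trial window and the z-scoring are illustrative choices, not necessarily the exact parameters of the studies cited above.

```python
import numpy as np

def attention_index(rts, window: int = 3):
    """Trial-wise attentional state from CPT response times: the mean RT
    over the preceding `window` trials, z-scored against the participant's
    overall RTs. Lower values = faster, more automatic responding,
    taken here to signal a lapse-prone state."""
    rts = np.asarray(rts, dtype=float)
    pre = np.full(rts.shape, np.nan)      # undefined for the first trials
    for i in range(window, len(rts)):
        pre[i] = np.nanmean(rts[i - window:i])
    return (pre - np.nanmean(rts)) / np.nanstd(rts)
```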

Although image memorability and sustained attentional state are each important for what we remember, no work to date has combined these two factors. Do external image features shared across the population (like memorability) and internal mental states idiosyncratic to individuals (like sustained attention) explain unique variance in what we remember? Can we make honed predictions of what people will remember based on the memorability of a given image and their sustained attentional state at that moment? To address these questions, we built a model of visual long-term memory that leverages both image memorability and individual sustained attentional state. In Experiments 1 and 2, we collected new data to measure image memorability and participants' sustained attention and memory, respectively. In Experiment 3, we reanalyzed existing data measuring participants' sustained attention and memory to test the replicability of the results of Experiments 1 and 2.

Leveraging these memorability scores and behavioral measures of attentional state at each moment, we combined data across experiments to build a model of subsequent memory. Specifically, the model predicted whether images would be remembered or forgotten based on image memorability scores and response time (RT) signatures of trial-to-trial fluctuations in attention. Results revealed that image memorability and sustained attentional state each uniquely predicted memory and together explained more variance in what people remembered than either factor alone. Thus, armed only with the memorability of an item and measures of someone's attentional state (two factors previously described in entirely separate literatures), we can successfully predict what individuals will go on to remember.
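The modeling logic can be sketched as follows. Because the paper cites Bates et al. on mixed models, the authors' analysis plausibly used mixed-effects logistic regression; the plain logistic regressions below (fit with statsmodels on synthetic stand-in data) are a simplified sketch of the model comparison, and all variable names are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per encoded image, with the image's
# memorability score, an RT-based attentional state index at encoding,
# and a simulated subsequent-memory outcome.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "memorability": rng.uniform(0, 1, n),
    "attention": rng.normal(0, 1, n),
})
p = 1 / (1 + np.exp(-(-1.0 + 2.0 * df["memorability"] + 0.5 * df["attention"])))
df["remembered"] = (rng.random(n) < p).astype(int)

# Joint vs. single-factor logistic models of subsequent memory.
joint = smf.logit("remembered ~ memorability + attention", data=df).fit(disp=False)
mem_only = smf.logit("remembered ~ memorability", data=df).fit(disp=False)
att_only = smf.logit("remembered ~ attention", data=df).fit(disp=False)

# If both factors carry unique variance, the joint model should fit best
# (e.g., lowest AIC), mirroring the paper's model-comparison logic.
print(joint.aic, mem_only.aic, att_only.aic)
```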

Section snippets

Methods

We used data from three experiments to characterize the distinct contributions of image memorability and sustained attentional state to subsequent memory (Fig. 1). In Experiment 1, we ran a large-scale crowd-sourced online experiment to derive the intrinsic memorability of 1100 scene images. In Experiment 2, we collected data as participants performed a CPT and subsequent recognition memory test with these images. In Experiment 3, we re-analyzed data from a study in which different participants…

Image memorability is reliable across individuals

We first asked whether there were specific scene images in our stimulus set that were more memorable or forgettable across individuals. To evaluate the consistency of corrected recognition (CR) performance for images in the continuous recognition task (Fig. 1a), we correlated the values for specific scene images obtained from different split-halves of the Experiment 1 Amazon Mechanical Turk sample (n = 706). Consistent with prior work, image memorability was highly reliable (mean Spearman's…
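A minimal sketch of this split-half consistency analysis, assuming a participants × images matrix of corrected recognition scores (NaN where a participant did not see an image); the number of random splits is an illustrative choice.

```python
import numpy as np
from scipy.stats import spearmanr

def split_half_reliability(scores: np.ndarray, n_splits: int = 1000,
                           seed: int = 0) -> float:
    """scores: participants x images matrix of corrected recognition.
    Returns the mean Spearman correlation between per-image scores
    computed from random halves of the participant sample."""
    rng = np.random.default_rng(seed)
    n_participants = scores.shape[0]
    rhos = []
    for _ in range(n_splits):
        perm = rng.permutation(n_participants)
        half_a, half_b = perm[: n_participants // 2], perm[n_participants // 2:]
        rho, _ = spearmanr(np.nanmean(scores[half_a], axis=0),
                           np.nanmean(scores[half_b], axis=0),
                           nan_policy="omit")
        rhos.append(rho)
    return float(np.mean(rhos))
```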

Discussion

What affects our long-term memories? We posited that certain factors are highly specific to the information to be encoded, but shared across individuals, whereas other factors may be highly specific to individuals, regardless of what information is being encoded. To characterize both image- and individual-specific factors that impact what we remember, we built a model that predicted visual long-term memory from an image's memorability and an individual's moment-to-moment attentional state. We…

Data and code availability

The de-identified data for all experiments and materials for all data analyses are posted at https://osf.io/6uc48/. Experiments were not formally preregistered.

CRediT authorship contribution statement

Cheyenne D. Wakeland-Hart: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing – original draft, Visualization. Steven A. Cao: Investigation, Data curation, Writing – review & editing. Megan T. deBettencourt: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing – original draft, Visualization, Supervision. Wilma A. Bainbridge: Conceptualization, Methodology, Software, Validation, …

Declaration of Competing Interest

None.

Acknowledgements

This research was supported by National Science Foundation grant BCS-2043740 (M.D.R.), National Institutes of Health grant F32MH115597 (M.T.dB.), The University of Chicago Summer Institute in Social Research Methods and Micro-Metcalf Programs (C.D.W.-H.), and the College Curriculum Innovation Fund (M.D.R.).

References (42)

  • E. Barel et al. (2020). The relation between sustained attention and incidental and intentional object-location memory. Brain Sciences.
  • D. Bates et al. (2015). Parsimonious mixed models. arXiv.
  • P. Bodrogi et al. (2001). Colour memory for various sky, skin, and plant colours: Effect of the image context. Color Research and Application.
  • D.H. Brainard (1997). The psychophysics toolbox. Spatial Vision.
  • N. Broers et al. (2018). Enhanced recognition of memorable pictures in ultra-fast RSVP. Psychonomic Bulletin and Review.
  • R. Byrd et al. (1995). A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing.
  • M.T. deBettencourt et al. (2015). Closed-loop training of attention with real-time brain imaging. Nature Neuroscience.
  • M.T. deBettencourt et al. (2019). Real-time triggering reveals concurrent lapses of attention and working memory. Nature Human Behaviour.
  • M.T. deBettencourt et al. (2018). Forgetting from lapses of sustained attention. Psychonomic Bulletin and Review.
  • M.T. deBettencourt et al. (2021). Sustained attention and spatial attention distinctly influence long-term memory encoding. Journal of Cognitive Neuroscience.
  • F.C. Fortenbaugh et al. (2015). Sustained attention across the life span in a sample of 10,000: Dissociating ability and strategy. Psychological Science.

1 Authors contributed equally.
