Representational similarity analyses in simultaneous EEG-fMRI measurements reveal the spatio-temporal trajectories of reconstructed episodic memories

In this project, we use multivariate analysis techniques on simultaneously acquired EEG and fMRI data to investigate the spatio-temporal dynamics of memory retrieval. We specifically explore how the neural representations of a retrieved object differ from the representations of the originally perceived object. Participants studied novel associations between objects and verbs in an encoding phase, and subsequently recalled the objects upon presentation of the corresponding verb cue. Multivariate pattern classifiers were first used to decode perceptual and conceptual processing from EEG and fMRI brain activation patterns separately. At the hemodynamic level, we found that conceptual features were generally dominant during memory retrieval, and represented at later processing stages than perceptual features. At the electrophysiological level, the conceptual dominance was reflected in an earlier representation of conceptual than perceptual information during recall, showing a reversed order relative to initial encoding (perception). Finally, representational similarity analyses allowed us to map the EEG time courses onto spatial fMRI patterns, demonstrating again that it is primarily the later stages of visual processing that are recapitulated during memory recall. Together, the results shed light onto the nature of mental representations during the reconstruction of visual objects from memory.

Introduction
Multivariate analyses have been used extensively in the object recognition literature to decode mental representations from brain activation patterns, and to correlate data from different imaging techniques (data fusion). This work has unraveled an object perception stream along the ventral visual pathway in time and space (Cichy, Pantazis and Oliva, 2014) that mainly follows a feed-forward, perceptual-to-conceptual gradient.
However, our understanding of how information travels through the brain when an object is not physically present but instead reconstructed from memory remains limited (Dijkstra et al., 2018; Linde-Domingo et al., 2018). Here, we use multivariate analyses on simultaneous EEG-fMRI recordings to investigate the spatio-temporal trajectories of object representations during episodic memory recall. We first use LDA-based classifiers to track the reconstruction of perceptual and conceptual components during retrieval in the EEG time course as well as in the spatial fMRI patterns. In a final data fusion step, we then use representational similarity analysis (RSA) to map the temporal patterns onto spatial locations (Kriegeskorte and Kievit, 2013).

Methods
We tested 37 participants from the local student population. Fourteen subjects were excluded from the EEG analyses due to irremovable noise caused by scanner artifacts, and three subjects were excluded from the fMRI analysis due to failed scanning sequences.

Procedure
Participants studied 128 novel object-verb pairings overall, divided into 16 blocks (see Fig. 1). Importantly, objects fell into two perceptual classes (line drawings vs. colorful photographs) and two conceptual classes (animate vs. inanimate objects; based on Linde-Domingo et al., 2018). This allowed us to probe the neural sources of perceptual and conceptual information processing in a controlled manner. After encoding, participants recalled the objects upon presentation of the corresponding verb cue. They indicated successful retrieval with a button press and subsequently answered either a perceptual or a conceptual question about the retrieved object. Each object was retrieved twice, once with each question type.

Measurements
EEG EEG data were recorded inside the MRI scanner with a non-ferromagnetic 64-channel BrainCap MR (BrainVision, www.brainproducts.com).

Analysis
FMRI Functional images were realigned, unwarped, and slice-time corrected. Images obtained at different echo times were combined as a weighted average (Poser et al., 2006) and then normalized and smoothed. Anatomical images were segmented and coregistered to the functional images. We fit a general linear model with one predictor per encoding and retrieval trial. The resulting t-maps were entered into a linear discriminant analysis (LDA) to predict perceptual and conceptual features from the voxel patterns of several regions of interest along the ventral visual stream, using 5-fold cross-validation repeated 20 times (k=5, repeats=20).
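The classification scheme above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the trial-wise t-maps are replaced by random stand-in data, and the ROI size and label coding are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# Hypothetical stand-ins: one t-map (ROI voxel pattern) per trial,
# and one binary label per trial (e.g. animate vs. inanimate).
n_trials, n_voxels = 128, 500
X = rng.standard_normal((n_trials, n_voxels))  # trial-wise ROI t-values
y = rng.integers(0, 2, n_trials)               # conceptual class labels

# 5-fold cross-validation, repeated 20 times, as described in the text.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=20, random_state=0)
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=cv)
print(f"mean decoding accuracy over {len(scores)} folds: {scores.mean():.3f}")
```

With random data the accuracy hovers around chance (0.5); above-chance accuracy on real t-maps would indicate that the ROI carries the decoded feature.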
EEG Scanner-artifact and cardioballistic-artifact correction was performed with BrainVision Analyzer 2.1. Segmentation, visual artifact rejection, trial rejection, and independent component analysis were then performed in FieldTrip (http://fieldtriptoolbox.org). Trials were smoothed with a Gaussian moving average (full-width at half-maximum of 24 ms) and segmented into smaller time bins for the LDA. The LDA was trained to predict perceptual and conceptual features from the electrode voltage patterns (k=5, repeats=20).
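The smoothing and binning steps can be sketched as below. The sampling rate (500 Hz), epoch length, and 20 ms bin width are illustrative assumptions (the text does not specify the bin size); only the 24 ms FWHM comes from the text, converted to a Gaussian sigma via sigma = FWHM / (2 * sqrt(2 * ln 2)).

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(1)
fs = 500                                   # sampling rate in Hz (assumed)
# Hypothetical stand-in data: 128 trials x 64 channels x 2 s of samples.
X = rng.standard_normal((128, 64, 1000))

# Gaussian temporal smoothing with a 24 ms FWHM, as described in the text.
fwhm_ms = 24.0
sigma_samples = (fwhm_ms / 1000 * fs) / (2 * np.sqrt(2 * np.log(2)))
X_smooth = gaussian_filter1d(X, sigma=sigma_samples, axis=-1)

# Segment into non-overlapping 20 ms bins and average within each bin;
# each bin's 64-channel voltage pattern is one LDA feature vector.
bin_len = int(0.020 * fs)
n_bins = X_smooth.shape[-1] // bin_len
X_binned = (X_smooth[..., : n_bins * bin_len]
            .reshape(128, 64, n_bins, bin_len)
            .mean(-1))
print(X_binned.shape)  # (128, 64, 100): trials x channels x time bins
```

A separate classifier would then be trained at each of the resulting time bins, yielding decoding accuracy as a function of time.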
Fusion RSA was used to compute pair-wise correlations between the 64-electrode voltage patterns of the trials, separately for each time bin along the EEG time course. The resulting correlations for each pair of stimuli were entered into a similarity matrix.
Similarly, we computed the representational similarity for each pair of stimuli for the fMRI data by using the voxel patterns of a sphere with a radius of 3 voxels from each location of the brain.
A second-order correlation was then used to compare the representational similarity of each time bin along the EEG time course with the representational similarity of each brain location (i.e., center voxel of a searchlight).

Results and Conclusion
EEG During encoding, perceptual information decoding peaked around 100 ms, whereas conceptual information decoding peaked around 350 ms. During retrieval, perceptual information was decoded with lower accuracy than conceptual information. Moreover, perceptual decoding reached its peak after conceptual decoding did, indicating a dominance of conceptual information processing and replicating our previous EEG work (Linde-Domingo et al., 2018).

FMRI
During encoding, perceptual information was classified most accurately in early visual cortex, whereas conceptual information was additionally evident in later areas along the ventral visual stream, including parietal and temporal regions. Classification accuracies during retrieval were comparatively low, but, central to our hypothesis, conceptual information was decoded with higher accuracy than perceptual information, especially in parietal and temporal ROIs.

Fusion
The second-order correlation shows how information travels from early visual cortex along the ventral visual stream during encoding, and that this processing stream tends to be reversed during retrieval, additionally recruiting parietal regions on top of the ventral visual processing stream.
In this project, we were able to zoom in on the neural processing cascade during memory recall, and to map memory-related electrophysiological activation patterns onto distinct spatial brain sources. We show that the retrieval stream is generally reversed in comparison to encoding, and that the neural patterns that dominate during recall map onto late visual and association cortices.