Untangling featural and conceptual object representations
Introduction
How does the brain transform perceptual information into meaningful concepts and categories? One key organisational principle of object representations in the human ventral temporal cortex is animacy (Caramazza and Mahon, 2003; Caramazza and Shelton, 1998; Kiani et al., 2007; Kriegeskorte et al., 2008; Mahon and Caramazza, 2011; Spelke et al., 1995). Operationalised as objects that can move of their own volition, animate objects evoke different activation patterns than inanimate objects in fMRI (Cichy et al., 2014; Connolly et al., 2012; Downing et al., 2001; Grootswagers et al., 2018; Konkle and Caramazza, 2013; Kriegeskorte et al., 2008) and in MEG/EEG (Carlson et al., 2013; Contini et al., 2017; Grootswagers et al., 2017; Grootswagers et al., 2019; Kaneshiro et al., 2015; Ritchie et al., 2015). A current theoretical debate concerns the degree to which categorical object representations in ventral temporal cortex are due to systematic featural differences between the categories (Long et al., 2018; op de Beeck et al., 2008; Proklova et al., 2016).
Recent work has focused on understanding the contribution of visual features to the brain’s representation of categories, such as animacy. This work has shown that a substantial proportion of animacy (de)coding in ventral temporal cortex can be explained by low- and mid-level visual features (e.g., texture and curvature) that are inherently associated with animate versus inanimate objects (Andrews et al., 2015; Bracci and Op de Beeck, 2016; Bracci et al., 2017; Bracci et al., 2019; Coggan et al., 2016; Kaiser et al., 2016; Long et al., 2018; Proklova et al., 2016; Rice et al., 2014; Ritchie, Bracci, & op de Beeck, 2017; Watson et al., 2016). Long et al. (2018) recently investigated how mid-level features contribute to categorical representations using images of intact objects and scrambled “texform” versions of the same objects. Crucially, the texform versions of the objects were unrecognisable (at the individual image identity level) but preserved mid-level features such as texture. Using fMRI, they found that the categories of animacy and size were similarly coded in the brain for intact and texform versions of objects, thus demonstrating that such patterns can arise without the explicit recognition of an object (Long et al., 2018). In MEG and EEG, one study showed that animate and inanimate objects could not be differentiated when they were closely matched for shape (Proklova et al., 2019). Other studies, however, have found that object animacy decoding generalises to unseen exemplars with different shapes (cf. Contini et al., 2017), suggesting that animacy decoding might be based, in part, on general conceptual representations. Taken together, these results suggest either that there is some abstract conceptual representation of animacy, or that objects within the animate and inanimate categories share sufficient visual regularities to drive the categorical organisation of object representations in the brain.
In the current study, we tested the contribution of visual features to the dynamics of emerging conceptual representations. We used a previously published stimulus set (Fig. 1) that was designed to test the contribution of mid-level features to conceptual categories (animacy and size) in the visual system (Long et al., 2018), which consisted of luminance-matched real objects, and scrambled, “texform” versions of the same objects that retain mid-level texture and form information (Long et al., 2017; Long et al., 2018). We used EEG and a rapid-MVPA paradigm (Grootswagers et al., 2019) to study the emergence of conceptual information. Based on previous fMRI work (Long et al., 2018), we predicted that texforms would evoke animacy-like patterns in the EEG signal similar to those evoked by intact objects. In addition, we hypothesised that animacy-like patterns evoked by texforms may need more time to develop. To test this, we presented the stimuli at varying rapid presentation rates, as faster rates have been shown to limit the depth of stimulus processing (Collins et al., 2018; Grootswagers et al., 2019; McKeeff et al., 2007; Robinson et al., 2019). We found that EEG activation patterns of texform versions of the objects were decodable, but that conceptual categorical decoding of intact objects was more robust, and could be achieved at faster presentation rates, which suggests that the visual system needs less time to process the intact objects. Together, our results provide evidence that visual features contribute to the representation of conceptual object categories, but also show that higher-level abstractions cannot be fully explained by statistical regularities.
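The logic of the varying-rate manipulation can be made concrete with simple arithmetic. The following sketch is purely illustrative (not taken from the paper): it assumes a 60 Hz display and, for the sake of example, 2-second stimulation streams, and computes how long each image is on screen at each of the four presentation rates.

```python
# Illustrative only: assumed 60 Hz display and 2 s streams (hypothetical
# values, not the paper's exact parameters).
RATES_HZ = [60, 30, 20, 5]   # the four presentation frequencies used
DISPLAY_HZ = 60              # assumed monitor refresh rate
STREAM_SECONDS = 2           # assumed stream duration, for illustration

summary = {}
for rate in RATES_HZ:
    summary[rate] = {
        "frames_per_image": DISPLAY_HZ // rate,   # screen refreshes per stimulus
        "ms_per_image": 1000 / rate,              # on-screen duration per image
        "images_per_stream": rate * STREAM_SECONDS,
    }

for rate, info in summary.items():
    print(f"{rate:>2} Hz: {info['ms_per_image']:.1f} ms/image, "
          f"{info['images_per_stream']} images per stream")
```

At 60 Hz each image is visible for only a single screen refresh (~16.7 ms) before being replaced, which is what limits the depth of processing relative to the slower 5 Hz (200 ms/image) condition.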
Methods
Stimuli, data, and analysis code are available online through https://osf.io/sz9ve.
Results
Participants (N = 20) viewed streams of texform stimuli and intact objects (Fig. 1). The stimuli were presented in random order at four presentation frequencies (60 Hz, 30 Hz, 20 Hz, 5 Hz) to target different levels of visual processing (Grootswagers et al., 2019; Robinson et al., 2019). The stimuli were developed by Long et al. (2017), and obtained from https://osf.io/69pbd/ (Long et al., 2017, 2018). Continuous EEG was recorded during the streams and cut into overlapping epochs based on the
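The core of this kind of time-resolved MVPA can be sketched in a few lines. The example below is a hypothetical illustration on simulated data, not the authors' pipeline: it classifies animacy from multichannel "epochs" at each time point with a simple split-half, nearest-centroid decoder, and shows above-chance accuracy only after a weak class signal is injected late in the epoch.

```python
# Hypothetical sketch of time-resolved decoding on simulated EEG-like data
# (all dimensions and the classifier are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
n_epochs, n_channels, n_times = 200, 64, 50        # simulated dimensions
X = rng.standard_normal((n_epochs, n_channels, n_times))
y = np.tile([0, 1], n_epochs // 2)                 # 0 = inanimate, 1 = animate

# Inject a weak class difference after sample 30 to mimic late category coding
X[y == 1, :, 30:] += 0.3

def decode_timepoint(Xt, y, train, test):
    """Nearest-centroid accuracy at one time point (Xt: epochs x channels)."""
    c0 = Xt[train][y[train] == 0].mean(axis=0)     # training centroid, class 0
    c1 = Xt[train][y[train] == 1].mean(axis=0)     # training centroid, class 1
    d0 = np.linalg.norm(Xt[test] - c0, axis=1)
    d1 = np.linalg.norm(Xt[test] - c1, axis=1)
    return ((d1 < d0).astype(int) == y[test]).mean()

half = n_epochs // 2
train, test = np.arange(half), np.arange(half, n_epochs)
accuracy = np.array([decode_timepoint(X[:, :, t], y, train, test)
                     for t in range(n_times)])
print(f"pre-signal mean {accuracy[:30].mean():.2f}, "
      f"post-signal mean {accuracy[30:].mean():.2f}")
```

Accuracy hovers around chance (0.5) before the injected signal and rises well above it afterwards; the same logic, with a linear classifier and proper cross-validation over real epochs, yields the decoding time courses reported in studies like this one.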
Discussion
In this study, we assessed the contribution of mid-level features to high level categorical object representations using a combination of fast periodic visual processing streams and multivariate EEG decoding. We used images of intact and texform versions of objects from a previously published study (Long et al., 2018) and found that their neural representations were similarly distinct at the image level. In contrast, the decoding accuracies of the original categorical distinctions of animacy
Acknowledgements
This research was supported by an Australian Research Council Future Fellowship (FT120100816) and an Australian Research Council Discovery project (DP160101300) awarded to T.A.C. The authors acknowledge the University of Sydney HPC service for providing High Performance Computing resources. The authors declare no competing financial interests.
References (59)
- et al., On the partnership between neural representations of object categories and visual features in the ventral visual pathway, Neuropsychologia (2017)
- et al., The organization of conceptual knowledge: the evidence from category-specific semantic deficits, Trends Cogn. Sci. (2003)
- et al., Category-selective patterns of neural response in the ventral visual pathway in the absence of categorical information, Neuroimage (2016)
- et al., Distinct neural processes for the perception of familiar versus unfamiliar faces along the visual hierarchy revealed by EEG, Neuroimage (2018)
- et al., Decoding the time-course of object recognition in the human brain: from visual features to categorical decisions, Neuropsychologia (2017)
- et al., EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis, J. Neurosci. Methods (2004)
- et al., Untangling invariant object recognition, Trends Cogn. Sci. (2007)
- How Bayes factors change scientific practice, J. Math. Psychol. (2016)
- et al., Finding decodable information that can be read out in behaviour, Neuroimage (2018)
- et al., The representational dynamics of visual objects in rapid serial visual processing streams, Neuroimage (2019)
- Typicality sharpens category representations in object-selective cortex, Neuroimage
- Matching categorical object representations in inferior temporal cortex of man and monkey, Neuron
- What drives the organization of object knowledge in the brain?, Trends Cogn. Sci.
- Nonparametric statistical testing of EEG- and MEG-data, J. Neurosci. Methods
- The five percent electrode system for high-resolution EEG and ERP measurements, Clin. Neurophysiol.
- Avoiding illusory effects in representational similarity analysis: what (not) to do with the diagonal, Neuroimage
- The influence of image masking on object representations during rapid serial visual presentation, Neuroimage
- Threshold-free cluster enhancement: addressing problems of smoothing, threshold dependence and localisation in cluster inference, Neuroimage
- Perceptual similarity of visual patterns predicts dynamic neural activation patterns measured with MEG, Neuroimage
- Spatial properties of objects predict patterns of neural response in the ventral visual pathway, Neuroimage
- Low-level properties of natural images predict topographic patterns of neural response in the ventral visual pathway, J. Vis.
- Dissociations and associations between shape and category representations in the two visual pathways, J. Neurosci.
- The ventral visual pathway represents animal appearance over animacy, unlike human behavior and deep neural networks, J. Neurosci.
- Domain-specific knowledge systems in the brain: the animate-inanimate distinction, J. Cogn. Neurosci.
- Representational dynamics of object vision: the first 1000 ms, J. Vis.
- Resolving human object recognition in space and time, Nat. Neurosci.
- The representation of biological classes in the human brain, J. Neurosci.
- A humanness dimension to visual object coding in the brain
- Bayesian versus orthodox statistics: which side are you on?, Perspect. Psychol. Sci.