Automaticity of phonological and semantic processing during visual word recognition
Introduction
Reading is a multimodal activity. Indeed, many studies show that processing written words engages not only orthographic but also phonological and semantic processes (Kiefer and Martens, 2010; Mechelli et al., 2007; Van Orden, 1987; Wheat et al., 2010; Wilson et al., 2011). In the context of interactive connectionist models of word perception, this observation is explained by a spreading of activation throughout a network in which orthography is linked to phonological and semantic information via weighted connections. Such spreading of activation provides the core mechanism for reading models that postulate automatic activation of phonological and semantic representations in response to written input (Harm and Seidenberg, 1999, 2004; Seidenberg and McClelland, 1989; Van Orden and Goldinger, 1994).
Yet, empirical evidence supporting the automaticity assumption remains controversial. On the one hand, several studies reported that phonological and semantic activation can be blocked or modulated by attentional and task demands, thus suggesting some form of top-down influence from the high-level processes involved in word perception (Brown et al., 2001; Demonet et al., 1992; Devlin et al., 2003; McDermott et al., 2003; Poldrack et al., 1999; Rumsey et al., 1997). On the other hand, this attentional and task-dependent account has been questioned by findings from a number of psycholinguistic studies that reported phonological and semantic effects in visual word recognition even when these representations were totally irrelevant to the task or not directly accessible (Rodd, 2004; Tanenhaus et al., 1980; Ziegler and Jacobs, 1995), thus supporting the claim of an automatic and possibly mandatory access to phonology and meaning during reading (Frost, 1998). Similarly, masked priming studies showed that shared phonological and semantic representations between a prime and a target affect recognition of the target even in the absence of prime awareness, which makes the strategic activation of these representations unlikely (Brysbaert, 2001; Brysbaert et al., 1999; Deacon et al., 2000; Drieghe and Brysbaert, 2002; Ferrand and Grainger, 1994; Kiefer and Spitzer, 2000; Lukatela and Turvey, 1994; Ortells et al., 2016; Wheat et al., 2010).
Thus, the question remains as to whether phonological and semantic activation in written word processing is task-dependent or whether it occurs automatically whenever participants process a written word. Although this topic has been extensively studied at the behavioral level (Besner et al., 1997; Labuschagne and Besner, 2015), the contribution of brain imaging studies to the debate remains limited. The present study investigated how different brain areas involved in the processing of orthographic, phonological and semantic information responded to a manipulation of bottom-up and top-down information applied to written words.
So far, only a few brain imaging studies have manipulated bottom-up and top-down factors within the same experiment. In these studies, bottom-up factors were mainly manipulated by comparing the activation patterns induced by different types of visual input, ranging from checkerboards, objects, symbols, and character sequences to pseudowords and real words (Carreiras et al., 2007; Twomey et al., 2011; Yang et al., 2012). As described below, the present study adopted a different approach: bottom-up information was manipulated while using only written words, i.e., visual stimuli that can potentially lead to the activation of orthographic, phonological and semantic information. Additionally, previous studies that used words mainly focused on neural responses within specific brain areas (mainly within the visuo-orthographic system), which precludes a more global picture of which areas respond to orthographic, phonological and semantic information in a bottom-up versus top-down fashion.
The present study relied on the assumption that automatic responses are driven by stimulus-based, or bottom-up, information, independently of the top-down information determined by task demands. Specifically, we used a go/no-go paradigm in which participants focused on either the visual (symbol detection), phonological (rime detection), or semantic (animal-name detection) content of written words. The presence of a task effect would imply that a given brain area is activated in a task-dependent manner. In contrast to most previous fMRI studies, only words were used as visual input. Stimulus-driven (bottom-up) processing was manipulated by varying stimulus visibility parametrically: the interval between the stimulus and the visual masks was gradually increased so that the stimulus became progressively more visible (Fig. 1) (Dehaene et al., 2001). This allowed us to manipulate the degree of visibility of the words while keeping their presentation duration constant (Kouider et al., 2007). Participants' sensitivity to this manipulation was measured in a behavioral forced-choice task performed after the fMRI scan.
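The timing logic of this masking manipulation can be sketched as follows. This is a minimal illustration only: all durations and delay levels below are hypothetical placeholders, not the study's actual parameters, and the helper `trial_timeline` is our own naming.

```python
# Illustrative sketch of the parametric visibility manipulation:
# the word is always shown for the same duration, and visibility is
# varied only by lengthening the blank interval before the backward
# mask. All timing values are hypothetical, not the study's parameters.

WORD_DURATION_MS = 33            # constant word presentation (assumed)
MASK_DURATION_MS = 100           # mask duration (assumed)
DELAYS_MS = [0, 17, 33, 50, 66]  # parametric stimulus-mask delays (assumed)

def trial_timeline(delay_ms):
    """Return the event sequence for one trial at a given visibility level."""
    return [
        ("forward_mask", MASK_DURATION_MS),
        ("word", WORD_DURATION_MS),   # physical duration never changes
        ("blank", delay_ms),          # only this interval varies
        ("backward_mask", MASK_DURATION_MS),
    ]

# Longer delays leave the word unmasked for longer, so it becomes
# progressively more visible at a constant presentation duration.
for d in DELAYS_MS:
    assert dict(trial_timeline(d))["word"] == WORD_DURATION_MS
```

The design choice this captures is that visibility and stimulus duration are decoupled: any effect of the visibility factor cannot be attributed to longer physical exposure of the word itself.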
Given that reading also relies strongly on the activation of brain areas involved in spoken language (Rueckl et al., 2015), the analyses were extended to several brain areas involved in spoken language processing. To tap the spoken language network independently of written input, we used auditory functional localizers to identify, at the subject-specific level, the brain areas that process the phonological and semantic aspects of spoken input, and then tested whether the very same areas also responded to written words (Fedorenko et al., 2010).
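The subject-specific localizer logic (in the spirit of Fedorenko et al., 2010) can be sketched as follows: within a group-level parcel, each subject's most responsive voxels in an independent localizer contrast are selected, and the main-experiment response is then read out from those voxels only. The function name, the top-10% fraction, and the use of t-values are illustrative assumptions, not the paper's reported pipeline.

```python
import numpy as np

def subject_specific_roi(localizer_t, parcel_mask, top_fraction=0.1):
    """Select, within a group-level parcel, a subject's most responsive
    voxels in an independent localizer contrast (functional-ROI sketch).

    localizer_t : 1-D array of localizer t-values, one per voxel
    parcel_mask : boolean array marking voxels inside the parcel
    top_fraction: fraction of parcel voxels to keep (assumed 10%)

    Returns a boolean mask of the subject-specific ROI.
    """
    # Exclude voxels outside the parcel from the ranking.
    t_in_parcel = np.where(parcel_mask, localizer_t, -np.inf)
    # Keep at least one voxel; ties at the threshold may add a few more.
    n_select = max(1, int(round(top_fraction * parcel_mask.sum())))
    threshold = np.sort(t_in_parcel)[-n_select]
    return t_in_parcel >= threshold
```

Because the voxels are chosen from independent (auditory localizer) data, testing their response to written words is statistically unbiased, which is the point of defining the ROIs per subject rather than from a group average.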
Overall, we hypothesized that different brain areas would respond to bottom-up and top-down information in different ways. While high-level language areas were expected to be more sensitive to task demands, the neural response of lower-level primary areas should be more sensitive to the visibility of the input. Interactions between task and visibility were expected in most areas of the reading network, although the precise pattern was expected to vary with the functional role of each specific area.
Participants
Twenty-five young adults (14 women; mean age = 23 years, range = 18–30 years) participated in the experiment. All were right-handed native speakers of French. The experiment was approved by the ethics committee for biomedical research, and written informed consent was obtained from all participants. They received €60 for their participation. Two subjects were excluded from the analyses due to excessive head movements and a technical problem during the acquisition.
Stimuli
The critical stimuli, used in
Behavioral data
As mentioned in the Introduction, the Go trials were included to engage the participants in visual, phonological or semantic processes. Given that there were only 12 Go trials within each task, no statistical analysis was run on these data. Table 1 provides a description of the performance obtained in each task. The global performance pattern was similar across the three tasks. Hit rates were generally low, with the lowest rate observed in the visual task. This might be due to the difficulty
Discussion
Written words presented during symbol detection, rime detection and semantic judgment tasks elicited activity in the same left-hemisphere networks including the posterior regions involved in visual (inferior occipital cortex) and higher-level orthographic processing (fusiform gyrus) and more anterior regions involved in semantic (IFG tri), phonological (IFG oper) and articulatory processing (precentral cortex, SMA, insula) among other functions. Note however that the absence of the BOLD signal
Acknowledgements
This work was supported by the French Ministry of Research. Grant numbers: ANR-13-JSH2-0002 to C.P., ANR-11-LABX-0036 and ANR-11-IDEX-0001-02. We also thank Aurélie Ponz and Virginie Epting for running the experiment and Evelina Fedorenko for insightful discussions on the subject-specific ROI analyses.
References (80)
- Human temporal-lobe response to vocal sounds. Cogn. Brain Res. (2002)
- Tuning of the human left fusiform gyrus to sublexical orthographic structure. Neuroimage (2006)
- Functional anatomy of intra- and cross-modal lexical tasks. Neuroimage (2002)
- Orthographic processing deficits in developmental dyslexia: beyond the ventral visual stream. Neuroimage (2016)
- Event-related potential indices of semantic priming using masked and unmasked words: evidence that the N400 does not reflect a post-lexical process. Cogn. Brain Res. (2000)
- Conscious, preconscious, and subliminal processing: a testable taxonomy. Trends Cogn. Sci. (2006)
- Susceptibility-induced loss of signal: comparing PET and fMRI on a semantic task. Neuroimage (2000)
- Brain states: top-down influences in sensory processing. Neuron (2007)
- Evaluation of the dual route theory of reading: a metanalysis of 35 neuroimaging studies. Neuroimage (2003)
- A procedure for identifying regions preferentially activated by attention to semantic and phonological relations using functional magnetic resonance imaging. Neuropsychologia
- Subject-specific functional localizers increase sensitivity and functional resolution of multi-subject analyses. Neuroimage
- The semantic origin of unconscious priming: behavioral and event-related potential evidence during category congruency priming from strongly and weakly related masked words. Cognition
- Functional specialization for semantic and phonological processing in the left inferior prefrontal cortex. Neuroimage
- A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading. Neuroimage
- The Interactive Account of ventral occipitotemporal contributions to reading. Trends Cogn. Sci.
- Reading and reading disturbance. Curr. Opin. Neurobiol.
- The cognitive neuroscience of sustained attention: where top-down meets bottom-up. Brain Res. Rev.
- Top-down modulation of ventral occipito-temporal responses during visual word recognition. Neuroimage
- Hierarchical coding of letter strings in the ventral stream: dissecting the inner organization of the visual word-form system. Neuron
- Implicit phonological priming during visual word recognition. Neuroimage
- Brain regions involved in articulation. Lancet
- Task by stimulus interactions in brain responses during Chinese character processing. Neuroimage
- Top-down and bottom-up processes in speech comprehension. Neuroimage
- Phonological information provides early sources of constraint in the processing of letter strings. J. Mem. Lang.
- Differential sensitivity to words and shapes in ventral occipito-temporal cortex. Cereb. Cortex
- The Stroop effect and the myth of automaticity. Psychon. Bull. Rev.
- Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cereb. Cortex
- Functional magnetic resonance imaging of human auditory cortex. Ann. Neurol.
- Shifts of effective connectivity within a language network during rhyming and spelling. J. Neurosci.
- Semantic processing in visual word recognition: activation blocking and domain specificity. Psychon. Bull. Rev.
- Prelexical phonological coding of visual words in Dutch: automatic after all. Mem. Cogn.
- Visual word recognition in bilinguals: evidence from masked phonological priming. J. Exp. Psychol. Hum. Percept. Perform.
- Dissociation of human prefrontal cortical areas across different speech production tasks and gender groups. J. Neurophysiol.
- Brain activation for lexical decision and reading aloud: two sides of the same coin? J. Cogn. Neurosci.
- Language-specific tuning of visual cortex? Functional properties of the visual word form area. Brain
- When a ROWS is a ROSE: phonological effects in written word comprehension. Q. J. Exp. Psychol. Sect. A
- Does semantic context benefit speech understanding through "top-down" processes? Evidence from time-resolved sparse fMRI. J. Cogn. Neurosci.
- Cerebral mechanisms of word masking and unconscious repetition priming. Nat. Neurosci.
- The anatomy of phonological and semantic processing in normal subjects. Brain