NeuroImage

Volume 149, 1 April 2017, Pages 244-255
Automaticity of phonological and semantic processing during visual word recognition

https://doi.org/10.1016/j.neuroimage.2017.02.003

Highlights

  • Automatic access to phonological and semantic information during reading was examined.

  • We manipulated stimulus visibility (bottom-up) and task demands (top-down).

  • Brain areas influenced by bottom-up and/or top-down information were identified.

  • Both stimulus-driven and task-dependent mechanisms played a role in word processing.

  • Yet, their relative contributions depended on the functional role of each ROI.

Abstract

Reading involves activation of phonological and semantic knowledge. Yet, the automaticity of the activation of these representations remains subject to debate. The present study addressed this issue by examining how different brain areas involved in language processing responded to a manipulation of bottom-up (level of visibility) and top-down information (task demands) applied to written words. The analyses showed that the same brain areas were activated in response to written words whether the task was symbol detection, rime detection, or semantic judgment. This network included posterior, temporal and prefrontal regions, which clearly suggests the involvement of orthographic, semantic and phonological/articulatory processing in all tasks. However, we also found interactions between task and stimulus visibility, which reflected the fact that the strength of the neural responses to written words in several high-level language areas varied across tasks. Together, our findings suggest that the involvement of phonological and semantic processing in reading is supported by two complementary mechanisms: first, an automatic mechanism that results from a task-independent spread of activation throughout a network in which orthography is linked to phonology and semantics; second, a mechanism that further fine-tunes the sensitivity of high-level language areas to the sensory input in a task-dependent manner.

Introduction

Reading is a multimodal activity. Many studies indeed show that processing written words engages not only orthographic but also phonological and semantic processes (Kiefer and Martens, 2010, Mechelli et al., 2007, Van Orden, 1987, Wheat et al., 2010, Wilson et al., 2011). In the context of interactive connectionist models of word perception, this observation is explained by a spreading of activation throughout a network in which orthography is linked to phonological and semantic information via weighted connections. Such spreading of activation provides the core mechanism for reading models that postulate an automatic activation of phonological and semantic representations in response to written input (Harm and Seidenberg, 2004, Harm and Seidenberg, 1999, Seidenberg and McClelland, 1989, Van Orden and Goldinger, 1994).
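In interactive connectionist terms, this spread of activation can be sketched as a toy feed-forward pass in which a written word's orthographic code activates phonological and semantic units through fixed weighted connections, with no task-dependent gating. This is a minimal illustration under assumed parameters, not the authors' model: the layer sizes, random weights, and function names are all hypothetical.

```python
import numpy as np

# Toy sketch (not the authors' model): activation spreads from an
# orthographic input layer to phonological and semantic layers through
# fixed weighted connections, regardless of the task at hand.
rng = np.random.default_rng(0)

n_orth, n_phon, n_sem = 8, 6, 6
W_op = rng.normal(0, 0.5, (n_phon, n_orth))  # orthography -> phonology
W_os = rng.normal(0, 0.5, (n_sem, n_orth))   # orthography -> semantics

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spread(orth_input):
    """One feed-forward pass: both pathways activate automatically."""
    phon = sigmoid(W_op @ orth_input)
    sem = sigmoid(W_os @ orth_input)
    return phon, sem

orth = rng.random(n_orth)  # activation evoked by a written word
phon, sem = spread(orth)
# Both representations are active even when the "task" needs only one.
```

The point of the sketch is structural: because both pathways are driven by the same orthographic input through fixed weights, phonological and semantic units become active whether or not the task requires them.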

Yet, empirical evidence supporting the automaticity assumption remains controversial. On the one hand, several studies reported that phonological and semantic activation can be blocked or modulated by attentional and task demands, thus suggesting some form of top-down influence of the high-level processes involved in word perception (Brown et al., 2001, Demonet et al., 1992, Devlin et al., 2003, McDermott et al., 2003, Poldrack et al., 1999, Rumsey et al., 1997). On the other hand, this attentional and task-dependent account has been questioned by findings from a number of psycholinguistic studies that reported phonological and semantic effects in visual word recognition even when these representations were totally irrelevant to the task or not directly accessible (Rodd, 2004, Tanenhaus et al., 1980, Ziegler and Jacobs, 1995), thus supporting the claim of an automatic and possibly mandatory access to phonology and meaning during reading (Frost, 1998). Similarly, masked priming studies showed that shared phonological and semantic representations between a prime and a target affect recognition of the target even in the absence of prime awareness, which makes the strategic activation of these representations unlikely (Brysbaert, 2001, Brysbaert et al., 1999, Deacon et al., 2000, Drieghe and Brysbaert, 2002, Ferrand and Grainger, 1994, Kiefer and Spitzer, 2000, Lukatela and Turvey, 1994, Ortells et al., 2016, Wheat et al., 2010).

Thus, the question remains as to whether phonological and semantic activation in written word processing is task-dependent or whether it occurs automatically whenever participants process a written word. Although this topic has been extensively studied at the behavioral level (Besner et al., 1997, Labuschagne and Besner, 2015), brain imaging evidence bearing on the debate remains relatively scarce. The present study investigated how different brain areas involved in the processing of orthographic, phonological and semantic information responded to a manipulation of bottom-up and top-down information applied to written words.

So far, only a few brain imaging studies have manipulated bottom-up and top-down factors within the same experiment. In these studies, bottom-up factors have mainly been manipulated by comparing the activation patterns induced by different types of visual input, ranging from checkerboards, objects, symbols, and character sequences to pseudowords and real words (Carreiras et al., 2007, Twomey et al., 2011, Yang et al., 2012). As described below, the present study adopted a different approach: bottom-up information was manipulated while using only written words, i.e., visual stimuli that can potentially lead to the activation of orthographic, phonological and semantic information. Additionally, previous studies that used words mainly focused on the neural responses within specific brain areas (mainly within the visuo-orthographic system), which does not provide a global picture of which areas respond to orthographic, phonological and semantic information in a bottom-up versus top-down fashion.

The present study relied on the assumption that automatic responses would be driven by stimulus, or bottom-up, information, independently of the top-down information determined by task demands. Specifically, we used a go/no-go paradigm in which participants focused on the visual (symbol detection), phonological (rime detection) or semantic (animal name detection) content of written words. A task effect would imply that a given brain area is activated in a task-dependent manner. In contrast to most previous fMRI studies, only words were used as visual input. Stimulus-driven (bottom-up) processes were manipulated by changing stimulus visibility parametrically. That is, the time between the stimulus and the visual masks was gradually increased such that the stimulus became progressively more visible (Fig. 1) (Dehaene et al., 2001). This allowed us to manipulate the degree of visibility of the words while keeping their presentation duration constant (Kouider et al., 2007). Participants’ sensitivity to this manipulation was measured through a behavioral forced-choice task performed after the fMRI scan.
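The masking logic described above can be sketched as a trial timeline in which the word's own duration is constant and only the word-to-mask delay (SOA) varies. The durations and step values below are illustrative assumptions, not the values used in the experiment.

```python
# Sketch of parametric visibility via masking: the word duration stays
# constant; visibility is manipulated by the word-to-mask delay (SOA).
# All durations here are hypothetical, chosen only for illustration.
WORD_MS = 33                            # assumed constant word duration
SOA_STEPS_MS = [33, 50, 66, 83, 100]    # hypothetical word-to-mask delays

def trial_timeline(soa_ms, word_ms=WORD_MS):
    """Return (event, duration_ms) pairs for one masked-word trial."""
    blank_ms = soa_ms - word_ms
    events = [("forward_mask", 100), ("word", word_ms)]
    if blank_ms > 0:
        events.append(("blank", blank_ms))  # longer blank = more visible
    events.append(("backward_mask", 100))
    return events
```

At the shortest SOA the backward mask immediately follows the word (minimal visibility); increasing the SOA lengthens only the blank interval, leaving the word's presentation duration untouched.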

Given that reading also strongly relies on the activation of brain areas involved in spoken language (Rueckl et al., 2015), analyses were extended to several brain areas involved in spoken language processing. To tap the spoken language network independently of written input, we used auditory functional localizers to identify, at the subject-specific level, the brain areas that process the phonological and semantic aspects of spoken input, and further tested whether the very same areas also responded to written words (Fedorenko et al., 2010).
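A minimal sketch of the subject-specific functional ROI logic, in the spirit of Fedorenko et al. (2010): within an anatomical parcel, keep the voxels most responsive to the auditory localizer contrast, then read out the written-word response from those same voxels. The function names, array layout, and the 10% selection threshold are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def subject_froi(localizer_t, parcel_mask, top_fraction=0.10):
    """Boolean mask of the top `top_fraction` localizer voxels in a parcel.

    `localizer_t`: per-voxel statistic from the auditory localizer contrast.
    `parcel_mask`: boolean mask of the anatomical parcel.
    Ties at the cutoff may keep slightly more voxels than requested.
    """
    t_in_parcel = np.where(parcel_mask, localizer_t, -np.inf)
    n_keep = max(1, int(parcel_mask.sum() * top_fraction))
    cutoff = np.sort(t_in_parcel[parcel_mask])[-n_keep]
    return t_in_parcel >= cutoff

def roi_response(beta_map, froi_mask):
    """Mean response (e.g., to written words) within the functional ROI."""
    return float(beta_map[froi_mask].mean())
```

The design choice this illustrates: because the ROI is defined per subject from an independent (auditory) contrast, any written-word response measured inside it cannot be circular, and anatomical variability between subjects is absorbed by the voxel selection step.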

Overall, we hypothesized that different brain areas might respond to bottom-up and top-down information in different ways. While high-level language areas were expected to be more sensitive to task demands, the neural response of lower-level primary areas should be more sensitive to the visibility of the input. Interactions between task and visibility were expected in most areas of the reading network, although the precise pattern was thought to vary with the functional role of each specific area.

Section snippets

Participants

Twenty-five young adults (14 women; mean age = 23 years; range = 18–30 years) participated in the experiment. All were right-handed native speakers of French. The experiment was approved by the ethics committee for biomedical research, and written informed consent was obtained from all participants. They received €60 for their participation. Two subjects were excluded from the analyses due to excessive head movements and a technical problem during the acquisition.

Stimuli

The critical stimuli, used in

Behavioral data

As mentioned in the Introduction, the Go trials were included to engage the participants in visual, phonological or semantic processes. Given that there were only 12 Go trials within each task, no statistical analysis was run on these data. Table 1 provides a description of the performance obtained in each task. The global performance pattern was similar across the three tasks. Hit rates were generally low, with the lowest rate observed in the visual task. This might be due to the difficulty
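Matching the descriptive treatment above (12 Go trials per task, no inferential statistics), a hit-rate summary could be computed as below. The counts in the usage example, other than the 12 Go trials stated in the text, are hypothetical.

```python
# Descriptive go/no-go summary only, mirroring the text: with 12 Go
# trials per task, no inferential statistics are warranted.
def go_nogo_summary(hits, n_go, false_alarms, n_nogo):
    """Hit and false-alarm rates for one task (plain proportions)."""
    return {"hit_rate": hits / n_go, "fa_rate": false_alarms / n_nogo}

# Hypothetical counts for one task (only n_go=12 comes from the text):
summary = go_nogo_summary(hits=8, n_go=12, false_alarms=3, n_nogo=60)
```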

Discussion

Written words presented during symbol detection, rime detection and semantic judgment tasks elicited activity in the same left-hemisphere networks including the posterior regions involved in visual (inferior occipital cortex) and higher-level orthographic processing (fusiform gyrus) and more anterior regions involved in semantic (IFG tri), phonological (IFG oper) and articulatory processing (precentral cortex, SMA, insula) among other functions. Note however that the absence of the BOLD signal

Acknowledgements

This work was supported by the French Ministry of Research. Grant numbers: ANR-13-JSH2-0002 to C.P., ANR-11-LABX-0036 and ANR-11-IDEX-0001-02. We also thank Aurélie Ponz and Virginie Epting for running the experiment and Evelina Fedorenko for insightful discussions on the subject-specific ROI analyses.

References (80)

  • K.B. McDermott et al.

    A procedure for identifying regions preferentially activated by attention to semantic and phonological relations using functional magnetic resonance imaging

    Neuropsychologia

    (2003)
  • A. Nieto-Castañón et al.

    Subject-specific functional localizers increase sensitivity and functional resolution of multi-subject analyses

    Neuroimage

    (2012)
  • J.J. Ortells et al.

    The semantic origin of unconscious priming: behavioral and event-related potential evidence during category congruency priming from strongly and weakly related masked words

    Cognition

    (2016)
  • R.A. Poldrack et al.

    Functional specialization for semantic and phonological processing in the left inferior prefrontal cortex

    Neuroimage

    (1999)
  • C.J. Price

    A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading

    Neuroimage

    (2012)
  • C.J. Price et al.

    The Interactive Account of ventral occipitotemporal contributions to reading

    Trends Cogn. Sci.

    (2011)
  • C.J. Price et al.

    Reading and reading disturbance

    Curr. Opin. Neurobiol.

    (2005)
  • M. Sarter et al.

    The cognitive neuroscience of sustained attention: where top-down meets bottom-up

    Brain Res. Rev.

    (2001)
  • T. Twomey et al.

    Top-down modulation of ventral occipito-temporal responses during visual word recognition

    Neuroimage

    (2011)
  • F. Vinckier et al.

    Hierarchical coding of letter strings in the ventral stream: dissecting the inner organization of the visual word-form system

    Neuron

    (2007)
  • L.B. Wilson et al.

    Implicit phonological priming during visual word recognition

    Neuroimage

    (2011)
  • R.J.S. Wise et al.

    Brain regions involved in articulation

    Lancet

    (1999)
  • J. Yang et al.

    Task by stimulus interactions in brain responses during Chinese character processing

    Neuroimage

    (2012)
  • A.A. Zekveld et al.

    Top-down and bottom-up processes in speech comprehension

    Neuroimage

    (2006)
  • J.C. Ziegler et al.

    Phonological information provides early sources of constraint in the processing of letter strings

    J. Mem. Lang.

    (1995)
  • M. Ben-Shachar et al.

    Differential sensitivity to words and shapes in ventral occipito-temporal cortex

    Cereb. Cortex

    (2007)
  • D. Besner et al.

    The stroop effect and the myth of automaticity

    Psychon. Bull. Rev.

    (1997)
  • J.R. Binder et al.

    Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies

    Cereb. Cortex

    (2009)
  • J.R. Binder et al.

    Functional magnetic resonance imaging of human auditory cortex

    Ann. Neurol.

    (1994)
  • T. Bitan et al.

    Shifts of effective connectivity within a language network during rhyming and spelling

    J. Neurosci.

    (2005)
  • M.S. Brown et al.

    Semantic processing in visual word recognition: activation blocking and domain specificity

    Psychon. Bull. Rev.

    (2001)
  • M. Brysbaert

    Prelexical phonological coding of visual words in Dutch: automatic after all

    Mem. Cogn.

    (2001)
  • M. Brysbaert et al.

    Visual word recognition in bilinguals: evidence from masked phonological priming

    J. Exp. Psychol. Hum. Percept. Perform.

    (1999)
  • R.L. Buckner et al.

    Dissociation of human prefrontal cortical areas across different speech production tasks and gender groups

    J. Neurophysiol.

    (1995)
  • M. Carreiras et al.

    Brain activation for lexical decision and reading aloud: two sides of the same coin?

    J. Cogn. Neurosci.

    (2007)
  • L. Cohen et al.

    Language-specific tuning of visual cortex? Functional properties of the visual word form area

    Brain

    (2002)
  • V. Coltheart et al.

    When a ROWS is a ROSE: phonological effects in written word comprehension

    Q. J. Exp. Psychol. Sect. A

    (1994)
  • M.H. Davis et al.

    Does semantic context benefit speech understanding through “top-down” processes? Evidence from time-resolved sparse fMRI

    J. Cogn. Neurosci.

    (2011)
  • S. Dehaene et al.

    Cerebral mechanisms of word masking and unconscious repetition priming

    Nat. Neurosci.

    (2001)
  • J. Demonet et al.

    The anatomy of phonological and semantic processing in normal subjects

    Brain

    (1992)