Elsevier

Hearing Research

Volume 366, September 2018, Pages 75-81
Research Paper
Why does language not emerge until the second year?

https://doi.org/10.1016/j.heares.2018.05.004

Highlights

  • We examined the maturity of the speech network in infants in their first year using functional connectivity fMRI.

  • At 3 and 9 months old, we found the pattern of connectivity of the network to be similar to that of adults.

  • Supports growing evidence that the broader language system is functioning from early in the first year.

  • The long period of helplessness in human infants after birth is not because they are born with immature brains.

  • Instead, the delay in language may be due to “silent learning”, analogous to pre-training in deep neural networks.

Abstract

From their second year, infants typically begin to show rapid acquisition of receptive and expressive language. Here, we ask why these language skills do not begin to develop earlier. One evolutionary hypothesis is that infants are born when many brain systems are immature and not yet functioning, including those critical to language, because human infants have a large head and their mother's pelvis size is limited, necessitating an early birth. An alternative proposal, inspired by discoveries in machine learning, is that the language systems are mature enough to function but need auditory experience to develop effective representations of speech, before the language functions that manifest in behaviour can emerge. Growing evidence, in particular from neuroimaging, supports this latter hypothesis. We have previously shown with magnetic resonance imaging (MRI) that the acoustic radiation, carrying rich information to auditory cortex, is largely mature by 1 month, and using functional MRI (fMRI) that auditory cortex is processing many complex features of natural sounds by 3 months. However, speech perception relies upon a network of regions beyond auditory cortex, and it is not established whether this network is mature. Here we measure the maturity of the speech network using functional connectivity with fMRI in infants at 3 months (N = 6) and 9 months (N = 7), and in an adult comparison group (N = 15). We find that functional connectivity in speech networks is mature at 3 months, suggesting that the delay in the onset of language is not due to brain immaturity but rather to the time needed to develop representations through experience. Future avenues for the study of language development are proposed, and the implications for clinical care and infant education are discussed.

Introduction

At the time of their first birthday, human infants understand and speak just a few words. Only by their second birthday are they learning new words rapidly, having typically built a vocabulary of three hundred words (Bloom, 1976; Frank et al., 2017). Here, we ask why rapid word learning does not happen earlier. Given that infants hear a million spoken words per month (Hart and Risley, 1995a), dominated by high-frequency tokens (Piantadosi, 2014), what delays rapid language acquisition for a year?

It is possible that late language acquisition might reflect broader initial sluggishness in the life history of human cognitive development. Human infants are slow to develop in many ways and are helpless for a long time following birth, compared to animals with simpler brains (Jones et al., 2009). Consider motor function, for example: lambs and chickens are walking within a few days, while human infants take 9 months to crawl.

One explanation for this sluggish development is that human infants have large heads but the size of the mother's pelvis (Rosenberg and Trevathan, 2002) or her metabolic capacity (Dunsworth et al., 2012) is limited, and so birth must happen relatively early in gestation, while the infant's brain is still immature. This hypothesis continues to be influential and is supported by elegant modelling of evolutionary pressures that might have driven selection for intelligence (Piantadosi and Kidd, 2016). According to this hypothesis, human infants are helpless and their language delayed, because their brains must develop postnatally.

We propose an alternative explanation inspired by machine learning. Deep (many-layered) neural networks have in recent years come to dominate artificial intelligence, performing many tasks better than humans, such as visual recognition, playing Go and Chess, and driving cars. Although deep neural networks have been around for decades, they were not useful in practice because they could not be effectively trained. Given the enormous number of degrees of freedom of these networks, they had a tendency to overfit to the initial training examples, and thus to learn to perform tasks in idiosyncratic ways that did not generalize well to new examples. An important breakthrough was the innovation that networks should be pretrained (Hinton and Salakhutdinov, 2006). During pretraining, the network learns the statistics of the sensory input (P(S)). This is then followed by training on which categories are associated with particular sensory inputs (p(c|S)). Pretraining is often used in deep learning, and has been shown to be particularly beneficial for more complex neural networks (Erhan et al., 2010). By analogy, our proposal is that human infants, who have complex brains, benefit from pretraining. In the case of language, we propose that the first year is spent learning the statistics of sound and developing the motoric models that underlie production. Only in the second year are these representations sufficiently mature for rapid language acquisition to begin.
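The two-stage recipe described above — first learn the structure of the input without labels (P(S)), then learn the category mapping (p(c|S)) on top of that representation — can be illustrated with a toy sketch. This is not the method of Hinton and Salakhutdinov (2006), who used deep autoencoders; here, as a minimal stand-in, the unsupervised stage is PCA on synthetic data, followed by a small logistic regression for the supervised stage:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "sensory" data: a hidden category shifts one dimension.
# All names and numbers are illustrative, not from the paper.
y = rng.integers(0, 2, size=400)     # hidden categories (unused in stage 1)
X = rng.normal(size=(400, 20))
X[:, 0] += 4.0 * y                   # category signal lives in dimension 0

# Stage 1 ("pretraining"): learn the statistics of the input, P(S),
# without labels. PCA via SVD finds the high-variance directions.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                    # low-dimensional learned representation

# Stage 2 (supervised): learn p(c|S) on top of the representation with a
# tiny logistic regression trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))
    w -= 0.5 * (Z.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

acc = np.mean(((Z @ w + b) > 0) == y)
print(f"accuracy after pretrain-then-classify: {acc:.2f}")
```

Because the unsupervised stage has already concentrated the category-relevant variation into a few dimensions, the supervised stage needs only a very simple classifier — the analogue of rapid word learning once auditory representations are in place.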

The “immature brain” and “pretraining” hypotheses cannot be distinguished by observing language development in the first year, as they both predict it will be slow to develop. However, they make different predictions at the neural level. The immature brain hypothesis predicts that language systems will be immature for the first year, while the pretraining hypothesis predicts that they will be functioning. These neural measures need to be made in infants, as other animals do not have language, and have simpler brains (and so less need for pretraining).

From previous work, on the one hand, there is evidence to suggest the language system is immature. The auditory cortex is one part of the language network that is important for processing complex sounds (Peelle et al., 2010). Post-mortem anatomical studies have emphasized the immaturity of auditory cortex, finding that in newborns it does not yet have discernible laminar structure, and that it does not receive myelinated projections from the thalamus (Moore and Linthicum, 2007). In an extrapolation from ideas proposed for the motor system (Marin-Padilla and Marin-Padilla, 1982), it has been suggested that the acoustic radiation, which in adults carries rich information from the thalamus to the auditory cortex, is not yet mature, and that before six months auditory input may reach the cortex only through the reticular activating system in the brainstem (Eggermont and Moore, 2012). Structural connectivity in the cortical language network has also been measured using diffusion-weighted magnetic resonance imaging (MRI) and tractography, and some studies have found evidence for immaturity (Perani et al., 2011). These findings support the immaturity hypothesis. Other studies using neuroimaging in infants in the first year have suggested auditory function is more mature. The acoustic radiation was recently traced using tractography in infants through the first year. It was present as early as one month, and its microstructure, as assessed using fractional anisotropy (FA) and other methods, changed only subtly through the first year. This suggests that rich auditory information might be delivered to auditory cortex early in the first year (Zubiaurre-Elorza et al., 2018).

Assuming that complex auditory information could be delivered, is auditory cortex ready to process it? In adults, structural asymmetries are seen in cortical language regions. These are evident even in preterm newborns (Dubois et al., 2010). To assess cortical processing of stimuli, functional MRI (fMRI) has been used to measure the brain activation to sounds. In the first months, auditory cortex responds to speech (Dehaene-Lambertz et al., 2006; Dehaene-Lambertz et al., 2002; Perani et al., 2011). It encodes not just simple acoustic characteristics (such as frequency centroid, fundamental frequency, and envelope) but also more complex acoustic features in a similar way to adults (Wild et al., 2017). Hemispheric asymmetries are seen in activation in response to speech but not music in 2 month-old infants (Dehaene-Lambertz et al., 2010), similar to the activation pattern that would be seen in adults. Functional optical imaging has even found cortical responses differ by syllable in preterm infants (Mahmoudzadeh et al., 2013). These studies suggest that auditory cortex is processing the rich acoustic information that is being delivered and that this component of the language system is relatively mature, even in the first months. These results are consistent with the pretraining hypothesis.

A limitation of this evidence for brain maturity is that it focuses on auditory cortex, and there are many other components to the language network, including motor and prefrontal regions. Neuroimaging has shown that components of this system in the prefrontal cortex can be engaged by speech sounds in infants (Dehaene-Lambertz et al., 2006, 2002; Imada et al., 2006). However, it is not clear how mature the network is more broadly. It is not yet possible to identify tasks that will functionally activate each node of the broader network in infants, and so here we take a different approach, of characterising the network's maturity through its connectivity. Functional connectivity analysis has proven highly informative in adults, and is possible in infants (Doria et al., 2010; Fransson et al., 2007; Gao et al., 2011; Perani, 2012; Smyser and Neil, 2015). Specifically, across a broad language network comprising 15 regions we examine the pattern of connectivity – which connections are stronger and which are weaker – in adults and infants. Evidence of language network maturity would further support the pretraining hypothesis.
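At its core, the functional connectivity approach reduces to computing pairwise correlations between regional fMRI time series, and then comparing the resulting pattern of connection strengths across individuals. The sketch below is illustrative only — the time series are synthetic, and only the 15-region count follows the text:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ROI time series: 15 regions x 150 time points for one
# subject (synthetic data, not the study's measurements).
n_rois, n_tp = 15, 150
ts = rng.normal(size=(n_rois, n_tp))
ts[1] += 0.8 * ts[0]                 # give two regions correlated activity

# Functional connectivity: pairwise Pearson correlation of ROI signals.
fc = np.corrcoef(ts)                 # 15 x 15 symmetric matrix

# The "pattern of connectivity" is the vector of unique pairwise values.
iu = np.triu_indices(n_rois, k=1)
pattern = fc[iu]                     # 105 connection strengths

# The similarity of two individuals' patterns (e.g., infant vs. adult)
# can then be expressed as the correlation of their pattern vectors.
def pattern_similarity(fc_a, fc_b):
    iu = np.triu_indices(fc_a.shape[0], k=1)
    return np.corrcoef(fc_a[iu], fc_b[iu])[0, 1]
```

A high pattern correlation between an infant and the adult group would indicate that the same connections are relatively strong and relatively weak in both, which is the sense of "maturity" tested here.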

Identifying regions of the speech network

The speech network was identified using neurosynth.org, an open-source database of thousands of published functional MRI studies (Yarkoni et al., 2011). The keyword “speech” selected 839 studies that contained one or more contrasts of speech against a baseline. As speech production causes head movement, which is a problem for MRI, the vast majority of these contrasts reflected speech perception. These were compared to thousands of other contrasts that reflected other behaviours, to yield a

Results

Fig. 2c and f shows the functional connectivity in the broader speech network in adults. Pairwise comparison between adults showed that the pattern of connectivity within the speech network was highly consistent within the adult cohort (r = 0.65 ± 0.01, t(119) = 62.77, p < 0.001). Inter-hemispheric connectivity between homologous regions was notable for all of the cortical regions and the thalamus. The prefrontal cortices, insulae and SMA were also tightly interconnected, but the cerebellum was

Discussion

We found that at 3 and 9-months old the connectivity of the speech network, as measured with fMRI, was similar to adults. This was true both for the simple pairwise connectivity and the higher-order structure of connectivity, as assessed with hierarchical clustering. These results resonate with the gathering evidence of network maturity from diffusion tractography (Dubois et al., 2009, 2010; Zubiaurre-Elorza et al., 2018) and fMRI (Wild et al., 2017; Dehaene-Lambertz et al., 2002; G
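The hierarchical clustering used to assess the higher-order structure of connectivity can be sketched as clustering regions on correlation-derived distances. The example below uses an illustrative two-community connectivity matrix, not the study's data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)

# Illustrative 15-region connectivity matrix with two built-in
# communities (regions 0-7 and 8-14).
fc = np.full((15, 15), 0.1)
fc[:8, :8] = 0.6
fc[8:, 8:] = 0.6
fc += 0.05 * rng.normal(size=(15, 15))
fc = (fc + fc.T) / 2                 # symmetrize
np.fill_diagonal(fc, 1.0)

# Convert correlations to distances and cluster hierarchically:
# strongly connected regions end up close together in the dendrogram.
iu = np.triu_indices(15, k=1)
dist = 1.0 - fc[iu]                  # condensed distance vector
Z = linkage(dist, method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

Comparing the cluster structure recovered from infant and adult matrices is one way to test whether the same groupings of regions (e.g., inter-hemispheric homologues, prefrontal-insular groupings) are present at both ages.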

Acknowledgements

This research was supported by the Canada Excellence Research Chairs program (Government of Canada) in Cognitive Neuroimaging, a Natural Sciences and Engineering Research Council of Canada Discovery Grant [NSERC 418293DG-2012], a CIHR/NSERC Collaborative Health Research Project [CHRP 201110CPG], and the Children's Health Research Institute (to RC) from the Children's Heart Foundation. We give special thanks to the families and infants who participated in this study.

References (63)

  • C.D. Smyser et al.

    Use of resting-state functional MRI to study brain development and injury in neonates

    Semin. Perinatol.

    (2015)
  • Junqian Xu et al.

    Evaluation of slice accelerations using multiband echo planar imaging at 3 T

    NeuroImage

    (2013)
  • W.S. Barnett et al.

    Research on the cost effectiveness of early educational intervention: implications for research and policy

    Am. J. Community Psychol.

    (1989)
  • E. Bergelson et al.

    Nature and origins of the lexicon in 6-mo-olds

    Proc. Natl. Acad. Sci. U. S. A.

    (2017)
  • E. Bergelson et al.

    At 6-9 months, human infants know the meanings of many common nouns

    Proc. Natl. Acad. Sci. U. S. A.

    (2012)
  • L. Bloom

    One Word at a Time: the Use of Single Word Utterances before Syntax

    Mouton

    (1976)
  • M. Caskey et al.

    Adult talk in the NICU with preterm infants and developmental outcomes

    Pediatrics

    (2014)
  • R. Cusack et al.

    Methodological challenges in the comparison of infant fMRI across age groups

    Dev. Cogn. Neurosci.

    (2018)
  • R. Cusack et al.

    Optimizing stimulation and analysis protocols for neonatal fMRI

    PLoS One

    (2015)
  • G. Dehaene-Lambertz et al.

    Functional neuroimaging of speech perception in infants

    Science

    (2002)
  • G. Dehaene-Lambertz et al.

    Functional organization of perisylvian activation during presentation of sentences in preverbal infants

    Proc. Natl. Acad. Sci. U. S. A.

    (2006)
  • V. Doria et al.

    Emergence of resting state networks in the preterm human brain

    Proc. Natl. Acad. Sci. U. S. A.

    (2010)
  • J. Dubois et al.

    Structural asymmetries in the infant language and sensori-motor networks

    Cereb. Cortex

    (2009)
  • H.M. Dunsworth et al.

    Metabolic hypothesis for human altriciality

    Proc. Natl. Acad. Sci. U. S. A.

    (2012)
  • J.J. Eggermont et al.

    Morphological and functional development of the auditory nervous system

    Hum. Audit. Dev.

    (2012)
  • D. Erhan et al.

    Why does unsupervised pre-training help deep learning?

    J. Mach. Learn. Res.

    (2010)
  • David A. Feinberg et al.

    Multiplexed echo planar imaging for sub-second whole brain fMRI and fast diffusion imaging

    PLoS One

    (2010)
  • M.C. Frank et al.

    Wordbank: an open repository for developmental vocabulary data

    J. Child Lang.

    (2017)
  • P. Fransson et al.

    Resting-state networks in the infant brain

    Proc. Natl. Acad. Sci. U. S. A.

    (2007)
  • W. Gao et al.

    Temporal and spatial evolution of brain network topology during the first two years of life

    PLoS ONE

    (2011)
  • D. Gerry et al.

    Active music classes in infancy enhance musical, communicative and social development

    Dev. Sci.

    (2012)