Trends in Cognitive Sciences
Spotlight: Attention modulates ‘speech-tracking’ at a cocktail party
Cited by (23)
Neurodevelopmental oscillatory basis of speech processing in noise
2023, Developmental Cognitive Neuroscience
Citation Excerpt: The effect of age for both phrasal and syllabic CTS appeared to be mainly explained by an anterior-to-posterior shift (of about 1 cm) of right-hemisphere sources from the youngest (5–7 years) to the oldest (18–27 years) age groups. Whether this shift reflects a genuine developmental effect is difficult to tell, since changes in brain anatomy from childhood to adulthood induce small but consistent age-dependent errors in the normalization of individual brains to a template (Wilke et al., 2002). Beyond these unclear effects of age, our results rather emphasize the close similarity in the location of cortical generators of CTS across the investigated age range.
A New Unifying Account of the Roles of Neuronal Entrainment
2019, Current Biology

Comparing the potential of MEG and EEG to uncover brain tracking of speech temporal envelope
2019, NeuroImage
Citation Excerpt: In line with this view, speech brain tracking is stronger when listening to intelligible speech compared to non-intelligible speech (Ahissar et al., 2001; Luo and Poeppel, 2007; Peelle et al., 2013; Riecke et al., 2018), and delta fluctuations track sentence boundaries, even in the absence of prosodic cues (Ding et al., 2016; Meyer et al., 2017). Studies on speech brain tracking have also increased our understanding of the neural mechanisms subtending speech perception in adverse listening conditions (O'Sullivan et al., 2014; Ding and Simon, 2014; Riecke et al., 2018; Zion-Golumbic et al., 2013; Zion-Golumbic and Schroeder, 2012). Indeed, in cocktail-party conditions: 1) speech brain tracking at delta and theta frequencies is stronger with the attended speech (i.e., the sound subjects are attending to) than with the global sound (i.e., the attended speech and the noise combined) (Broderick et al., 2017; Ding and Simon, 2012; Fuglsang et al., 2017; Horton et al., 2013; Luo and Poeppel, 2007; Mesgarani and Chang, 2012; O'Sullivan et al., 2014; Puschmann et al., 2017; Rimmele et al., 2015; Simon, 2015; Vander Ghinst et al., 2016; Zion-Golumbic et al., 2013); 2) it decreases when the signal-to-noise ratio (SNR) decreases (Giordano et al., 2016; Vander Ghinst et al., 2016; but see Ding and Simon, 2013, who found that speech brain tracking remains stable as long as speech is intelligible); 3) this dampening is stronger in the right (vs. left) hemisphere at delta frequencies (Vander Ghinst et al., 2016); and 4) high-order brain regions track only the attended speech stream, with no detectable trace of the unattended speech (Zion-Golumbic et al., 2013).
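The "speech brain tracking" measure discussed in this excerpt is commonly quantified as spectral coherence between a neural recording (MEG/EEG) and the speech temporal envelope, averaged over the delta–theta band. A minimal sketch of that computation, using entirely synthetic signals (the envelope, the "attended" and "unattended" neural traces, and the 0.8 coupling gain are all illustrative assumptions, not data from any cited study):

```python
# Sketch: quantify speech-brain tracking as envelope-neural coherence.
# All signals are synthetic; only the analysis pipeline is illustrated.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 200                       # sampling rate (Hz), illustrative
t = np.arange(0, 60, 1 / fs)   # 60 s of data

# Synthetic speech envelope: white noise low-passed with a 1 s Hann
# kernel, so its energy sits in the delta-theta range.
envelope = np.convolve(rng.standard_normal(t.size),
                       np.hanning(fs), mode="same")

# Hypothetical "neural" traces: one partially tracks the envelope
# (attended condition), the other is unrelated noise (unattended).
attended = 0.8 * envelope + 8.0 * rng.standard_normal(t.size)
unattended = 8.0 * rng.standard_normal(t.size)

# Magnitude-squared coherence via Welch's method (4 s segments).
f, coh_att = coherence(envelope, attended, fs=fs, nperseg=4 * fs)
_, coh_unatt = coherence(envelope, unattended, fs=fs, nperseg=4 * fs)

# Average coherence in the delta-theta band (1-8 Hz): the attended
# trace shows stronger tracking, mirroring the effect described above.
band = (f >= 1) & (f <= 8)
print(f"attended:   {coh_att[band].mean():.3f}")
print(f"unattended: {coh_unatt[band].mean():.3f}")
```

In real analyses the envelope is extracted from the audio (e.g., by rectification and low-pass filtering) and the comparison is made across attention conditions within subjects; decoding approaches such as stimulus reconstruction (O'Sullivan et al., 2014) are a common alternative to coherence.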