The effects of aging and hearing impairment on listening in noise

Summary

The study investigates age-related decline in listening abilities, particularly in noisy environments, where the challenge lies in extracting meaningful information from variable sensory input (figure-ground segregation). The research focuses on peripheral and central factors contributing to this decline using a tone-cloud-based figure detection task. Results based on behavioral measures and event-related brain potentials (ERPs) indicate that, despite delayed perceptual processes and some deterioration in attention and executive functions with aging, the ability to detect sound sources in noise remains relatively intact. However, even mild hearing impairment significantly hampers the segregation of individual sound sources within a complex auditory scene. The severity of the hearing deficit correlates with an increased susceptibility to masking noise. The study underscores the impact of hearing impairment on auditory scene analysis and highlights the need for personalized interventions based on individual abilities.


INTRODUCTION
In everyday situations, detecting auditory objects in noise (figure-ground segregation), such as understanding speech in a crowded restaurant, is essential for adaptive behavior. Difficulty in listening to speech under adverse listening conditions results in compromised communication and socialization, which are commonly reported phenomena in the aging population, [1][2][3] affecting ca. 40% of the population over 50 and almost 71% over 70 years of age. 4,5 Problems in figure-ground segregation are attributed to peripheral, central auditory, and/or cognitive factors. 5,6 However, the relative contributions of and interactions among these factors have yet to be studied in depth. Here, we investigated together the effects of peripheral (hearing loss) and central auditory processes on the deterioration of figure-ground segregation in aging, combining personalized psychoacoustics and electrophysiology.
Major changes in the peripheral auditory system likely contribute to age-related hearing loss. For instance, outer and inner hair cells within the basal end of the cochlea degrade at an older age, resulting in high-frequency hearing loss. 5,7 The pure-tone audiometric threshold is a proxy of changes in cochlear function and structure, 5 and it is well established that due to elevated hearing thresholds elderly people have difficulty hearing soft sounds. 8,9 Age-related damage to the synapses connecting the cochlea to auditory nerve fibers (cochlear synaptopathy) results in higher thresholds in fibers with low spontaneous rates, 10,11 which is assumed to contribute to comprehension difficulties when speech is masked by background sounds. 12 However, older adults with similar pure-tone thresholds can differ in their ability to understand degraded speech, even after the effects of age are controlled for. 13 Deficits in central auditory functions (decoding and comprehending the auditory message 14,15) may also contribute to the difficulties elderly people experience in speech-in-noise situations. Specifically, these central functions may partly compromise concurrent sound segregation 6,16,17 and lead to diminished auditory regularity representations and reduced inhibition of irrelevant information at higher levels of the auditory system. 16,18,19 Navigating noisy scenes also depends on selective attention (independently of the modality of stimulation), which is known to be impaired in aging (for reviews see Friedman and Grady 20,21). Specifically, increased distraction by irrelevant sounds [1][2][3] suggests deficits in inhibiting irrelevant information, which is especially prominent in informational masking.
Figure-ground segregation has recently been studied with the help of tone clouds: a series of short chords composed of several pure tones with random frequencies. The figure within the cloud consists of a set of tones progressing together in time, while the rest of the tones randomly vary from chord to chord (background [24][25][26][27]). When the frequency ranges of the figure and the background tone set spectrally overlap, the figure is only distinguishable by parsing the coherently behaving tones across frequency (concurrent grouping) and time (sequential grouping). With component tones of equal amplitude (which is typical in these studies), the ratio of the number of figure tones (figure coherence) to background tones (noise) determines the figure-detection signal-to-noise ratio (SNR). Figure detection performance within these tone clouds was found to predict performance in detecting speech in noise, 12,28 making this well-controlled stimulus attractive as a model for studying real-life figure detection.
Figure-related neural responses commence as early as after two temporally coherent chords (ca. 150 ms from the onset of the figure 24,27,29). Figure detection accuracy scales with figure coherence and duration (the number of consecutive figure tone sets presented). This suggests that both spectral and temporal integration processes are involved in figure detection. 25 Event-related brain potential (ERP) signatures of figure-ground segregation are characterized by the early (200-300 ms from stimulus onset) frontocentral object-related negativity response (ORN); a later (450-600 ms), parietally centered component (P400) is elicited when listeners are instructed to detect the figure. 25 The former indexes the outcome of the process separating the figure from the background (ORN is elicited when separating concurrent sound streams 30,31), while the latter likely reflects a process leading to the perceptual decision, such as matching to a memorized pattern. 25,30,31 While a mechanism based on tonotopic neural adaptation could explain the extraction of a spectrally coherent figure, it is more likely that this kind of figure detection is based on a more general process, such as the analysis of temporal coherence between neurons encoding various sound features. 25,26,32 This is because spectrally constant and variable figures can be detected equally efficiently [24][25][26] and the segregation process is robust against interruptions. 27 Some results suggest that figure-ground segregation is primarily pre-attentive. 26,27 However, there is also evidence that figure-ground segregation can be modulated by attention 25 and cross-modal cognitive load. 33
The current study aimed to test the causes of impaired listening in noise in aging. To separate the effects of age and age-related hearing loss, three groups of listeners (young adults, normal-hearing elderly, and hearing-impaired elderly) were tested. The group of hearing-impaired elderly was selected based on an elevated pure-tone audiometric threshold, thus assuring deterioration of peripheral function. 5 The groups' differences in peripheral gain and cognitive load (e.g., effects of the inter-individual variation in working memory capacity) were reduced by keeping task performance approximately equal across all listeners using individualized stimuli. Electrophysiological measures have the advantage (compared to psychoacoustic measures) that they can provide supportive and additional information about the central processing stages.
Participants were presented with stimuli concatenating 40 chords of 50 ms duration (Figure 1A). Half of the stimuli included a figure (a set of tones rising together in time, embedded in a cloud of randomly selected tones; figure trials), while the other half consisted only of chords made up of randomly selected frequencies (no-figure trials; stimuli were adapted from previous studies by O'Sullivan et al., Toth et al., and Teki et al. [24][25][26]). Listeners performed the figure detection task under low noise (LN) and high noise (HN) conditions, which differed only in the number of concurrent, randomly varying (background) tones. Stimulus individualization was achieved through two adaptive threshold detection procedures conducted before the main figure detection task. First, LN stimuli were adjusted for each participant by manipulating the number of tones belonging to the figure (figure coherence) so that participants performed figure detection at 85% accuracy. Second, HN stimuli were adjusted for each participant by increasing the number of background tones in the LN stimuli until the participant performed at 65% accuracy.
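The stimulus construction described above can be sketched as follows. This is an illustrative reconstruction in Python/NumPy, not the study's actual code: the frequency pool, the ramp duration, and the quarter-tone rise of the figure are assumptions.

```python
import numpy as np

def make_tone_cloud(n_chords=40, chord_dur=0.05, sr=44100,
                    n_background=20, coherence=0,
                    fig_onset=10, fig_len=8, rng=None):
    """Illustrative tone-cloud trial: each 50-ms chord contains
    `n_background` random-frequency tones; on figure trials
    (coherence > 0), `coherence` extra tones rise together across
    `fig_len` consecutive chords."""
    rng = rng or np.random.default_rng()
    freq_pool = np.geomspace(179.0, 7246.0, 129)  # assumed log-spaced pool
    n = int(chord_dur * sr)
    t = np.arange(n) / sr
    # 5-ms linear onset/offset ramps to avoid clicks between chords
    ramp = np.minimum(1.0, np.minimum(t, chord_dur - t) / 0.005)
    fig_freqs = (np.sort(rng.choice(freq_pool, coherence, replace=False))
                 if coherence else np.array([]))
    chords = []
    for c in range(n_chords):
        freqs = list(rng.choice(freq_pool, n_background, replace=False))
        if coherence and fig_onset <= c < fig_onset + fig_len:
            # figure tones shift upward together (assumed quarter-tone steps)
            freqs += list(fig_freqs * 2.0 ** ((c - fig_onset) / 24.0))
        chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
        chords.append(chord * ramp / len(freqs))
    return np.concatenate(chords)
```

A no-figure trial is obtained with coherence=0; raising coherence (and, for the HN condition, n_background) corresponds to the two individualization knobs described above.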
[25][26] Suppose the elderly adults were more susceptible to masking. In that case, when individual stimuli are created for the HN condition, their performance would be degraded by fewer additional background tones than that of young adults.
As for separating peripheral and central processes, peripheral effects are expected to be more pronounced in the elderly group with hearing loss than in the normal-hearing elderly: higher coherence and/or fewer background tones are needed for reaching the same performance. In contrast, both groups should be equally affected by the deterioration of central processes. One general effect often seen in aging is the slowing of information processing, 20,21,33 so one should expect longer ORN or P400 latencies in both elderly groups compared to young adults. Finally, less efficient selective attention may lead to lower P400 amplitudes.

Behavioral results
Participants were divided into three groups based on their age and pure-tone hearing threshold (Figure 1B), the latter measured by audiometry (young adults, normal-hearing elderly, and hearing-impaired elderly). None of the listeners reported issues with their hearing ("The Hearing Handicap Inventory for the Elderly," adopted from 34). After measuring the participants' digit span and introducing the stimuli in general, the LN and HN conditions were set up individually for each participant. First, the number of parallel figure tones allowing the participant to detect the figure with 85% accuracy was established by a procedure increasing the number of figure tones while simultaneously decreasing the number of background tones, thus keeping the total number of tones constant (N = 20; LN condition). Then, extra background tones were added until the participant's performance declined to 65% (HN condition). The main experiment consisted of a series of trials mixing LN and HN, figure, and no-figure stimuli in equal proportion (25% each). Figure detection responses and the electroencephalogram (EEG) were recorded. Figure 1 summarizes behavioral results from the pure-tone audiometry, digit span, and figure-ground segregation task.

Group differences in the number of figures and background tones in the LN and HN stimuli
The threshold detection procedure yielded distinct LN and HN conditions for each listener. The SNR was calculated from the ratio of the number of figure tones to background tones, separately for each group and for the LN and HN conditions (Figure 1D). SNR values were log-transformed before analyses due to their heavily skewed distribution. As was set up by the procedure, there was a main effect of NOISE (LN vs. HN conditions) on SNR (F[1, 46]). To test whether the SNR effects were due to the number of coherent tones in the figure (coherence level; Figure 1G) or to the noise increase for the HN condition, separate ANOVAs were calculated for the coherence level (in LN) and the additional number of background tones (HN; Figure 1H). For coherence level, a one-way ANOVA with factor GROUP found a main effect (F[2, 46] = 9.62, p < 0.001, ηp² = 0.295), with larger values in the hearing-impaired elderly (M = 13.88, SD = 2.80) than in the normal-hearing elderly (M = 11.15, SD = 1.86; q[2, 95] = 4.57, p = 0.006) or young adults (M = 10.5, SD = 2.28; q[2, 95] = 5.67, p < 0.001). For the number of additional background tones, a one-way ANOVA with factor GROUP yielded no significant effect (F[2, 46] = 0.07, p = 0.93, ηp² = 0.003).

Figure detection results
The effects of GROUP and NOISE on task performance (d', hit rate, false alarm rate, and RT; Figures 1D-1H) were tested with two-way mixed-model ANOVAs. As was set up by the individualization procedures, there was no significant main effect of GROUP for any of the performance measures in the main figure detection task in either noise condition (all Fs[2, 46] < 1.73, ps > 0.18, ηp²s < 0.03), and there was a main effect of NOISE for hit rate, d', and reaction time (RT) (all three Fs > 24.9, ps < 0.001, ηp²s > 0.35). Detailed results for the performance measures were as follows.

Relationship between peripheral loss and figure-ground segregation
The relationship between peripheral hearing loss (average hearing threshold across frequencies and ears, as measured by pure-tone audiometry) and behavioral performance (log-transformed SNRs from the individualization procedure, as well as their difference; d' and RT from the main task, separately for LN and HN) was tested with Pearson's correlations. There were significant correlations between peripheral loss and SNR both in LN (r[47] = 0.556, p < 0.001; Bonferroni correction applied for all correlation tests) and HN (r[47] = 0.561, p < 0.001), as well as their difference (r[47] = −0.385, p = 0.044). The latter showed that larger peripheral loss (worse hearing) resulted in smaller noise differences between LN and HN. Confirming the success of stimulus individualization, no significant correlation was observed between peripheral loss and figure detection performance (d' and RT in either LN or HN; all rs[47] < 0.11, all ps > 0.5).
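The correlation analysis with Bonferroni adjustment can be illustrated with a short sketch. The study used MATLAB; this Python/SciPy version only mirrors the logic, and the variable names are illustrative.

```python
import numpy as np
from scipy.stats import pearsonr

def bonferroni_pearson(x, measures):
    """Correlate one predictor (e.g., mean pure-tone threshold) with each
    behavioral measure; p values are Bonferroni-adjusted by multiplying
    by the number of tests (capped at 1)."""
    n_tests = len(measures)
    results = {}
    for name, y in measures.items():
        r, p = pearsonr(np.asarray(x, float), np.asarray(y, float))
        results[name] = (r, min(1.0, p * n_tests))
    return results
```

With the thresholds as `x` and a dict of behavioral measures as `measures`, this reproduces the family-wise correction applied to the correlation family above.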

Relationship between digit span and figure-ground segregation
The relationship between the digit span measures (working memory capacity and control, as measured by forward and backward digit span, respectively; Figure 1C) and behavioral performance (log-transformed SNRs of the individualization procedure and their difference; d' and RT from the main task, separately for LN and HN) was not significant (all rs[47] < 0.265, ps > 0.5).

ERP results

Object-related negativity (ORN)

AGE effect on the ORN amplitude
No main effect of AGE was found on the ORN amplitude. There was a tendency for a NOISE effect (F[1, 31]).

HEARING IMPAIRMENT effect on the ORN amplitude
The brain regions activated during the ORN period were identified by source localization performed on the responses elicited by figure trials (Figure 3). The sensitivity to SNR was tested by comparing the source signals between the LN and HN conditions, separately for the young adult, normal-hearing elderly, and hearing-impaired elderly groups, by permutation-based t-tests. Significant NOISE effects were found predominantly in higher-level auditory and associational areas such as the left temporal cortices, the planum temporale (PT), and the intraparietal sulcus (IPS; Figure 2C). In young listeners, precentral cortical regions were also significantly sensitive to the SNR.

P400

AGE and HEARING IMPAIRMENT effects on the P400 amplitude
The P400 amplitude was significantly lower in the normal-hearing elderly group compared to the young adults (F[1, 31]). Post hoc comparisons revealed that the P400 response was lower in the normal-hearing elderly relative to young adults for the figure responses (p = 0.008) but not for the no-figure responses (p > 0.05). No significant effect including NOISE or HEARING IMPAIRMENT was found for P400 amplitude.

AGE and HEARING IMPAIRMENT effects on the P400 peak latency
Neither NOISE nor AGE or HEARING IMPAIRMENT significantly affected the P400 latency.
EEG results not involving the GROUP (AGE or HEARING IMPAIRMENT) or NOISE factors
The interaction between FIGURE and LATERALITY (F[2, 54] = 8.6854, p < 0.001; ηp² = 0.24) was caused by the amplitude at Pz being significantly higher than at P3 or P4 for figure (p < 0.001 for both) but not for no-figure trials.

DISCUSSION
Using a tone-cloud-based figure detection task, we tested the contributions of peripheral and central auditory processes to the age-related decline of hearing in noisy environments. We found that while aging slows the processing of the concurrent cues of auditory objects (long ORN latencies in the elderly groups) and may affect processes involved in deciding the task-relevance of the stimuli (lower P400 amplitude in the normal-hearing elderly than in the young adult group), overall, it does not significantly reduce the ability to detect auditory objects in noise (no significant differences in the SNRs between the young adults and the normal-hearing elderly group). However, when aging is accompanied by higher levels of hearing loss, grouping concurrent sound elements suffers and, perhaps not independently, the tolerance to noise decreases. The latter was supported by the results that (1) higher coherence was needed by the hearing-impaired than the normal-hearing elderly group for the same figure detection performance; (2) hearing thresholds negatively correlated with the number of background tones reducing detection performance from the LN to the HN level; and (3) the ORN amplitudes did not significantly differ between the HN and LN conditions in the hearing-impaired elderly group (while they differed in the other two groups). The inference about the effects of peripheral hearing loss is strongly supported by the efficacy of the stimulus individualization procedure, which effectively eliminated correlations between hearing thresholds and performance measures of the figure detection task, as well as between the current working memory indices and any of the behavioral measures. We now discuss in more detail the general age-related changes in auditory scene analysis, followed by the effects of age-related hearing loss.

Age-related changes in auditory scene analysis
We found no behavioral evidence for age-related decline either in the ability to integrate sound elements (coherence level) or in the sensitivity to noise (number of added background tones). Evidence about aging-related changes in auditory scene analysis is contradictory. Some results suggest that the ability to exploit sequential stimulus predictability for auditory stream segregation degrades with age. 39 A recent study, however, suggested that elderly listeners can utilize predictability, albeit with a high degree of inter-individual variation. 36 Further, de Kerangal and colleagues 40 also found that the ability to track sound sources based on acoustic regularities is largely preserved in old age. The current results strengthen this view, as figures within the tone clouds are detected by their temporally coherent behavior. Further, age did not significantly affect performance in an informational masking paradigm, 41 a result fully compatible with the current finding of no significant behavioral effect of age, as tone clouds impose both energetic and informational masking on detecting figures.
Although figure detection performance was preserved in the normal-hearing elderly group, the underlying neural activity significantly differed from that of the young adults at both the early (ORN) and later (P400) stages of processing figure trials.

ORN results
In line with our hypothesis, the early perceptual stage of central auditory processing was significantly slowed in the elderly compared to young adults (the peak latency of the ORN was delayed by ca. 150 ms). Although, using the mistuned partial paradigm, Alain and colleagues 6,42 did not report a significant delay of the ORN in the elderly compared to young or middle-aged adults, a tendency toward longer ORN peak latencies with age can be observed in the responses (see 42 Figure 2). The ORN is assumed to reflect the outcome of cue evaluation: the likelihood of the presence of two or more concurrent auditory objects. 43 Compared to the mistuned partial paradigm, in which only concurrent cues are present (i.e., there is no relationship between successive chords), the tone-cloud-based figure detection paradigm also includes a sequential element: the figure only emerges if the relationship between elements of successive chords is discovered. It is thus possible that the delay is due to slower processing of the temporal aspect of the segregation cues. Alternatively, the delay may be related to the higher complexity of concurrent cues in the current design compared to the mistuned partial design, because the latter can rely on harmonicity, whereas the figure in the current paradigm links together tones with harmonically unrelated frequencies. Slower sensory information processing has often been found in the elderly compared to young adults (e.g., Alain and McDonald found an age-related delay of the latency of the P2 component).
A possible specific explanation of the observed aging-related delay of the ORN peak is that the auditory system at an older age needs to accumulate more sensory evidence for the perceptual buildup of the object representation. The input from the periphery may be noisier at an older age (for review, see Slade et al. 5); therefore, more time is needed to evaluate the relations between the current and previous chords, or to separate and integrate the spectral elements into an object, than at a young age. This assumption is compatible with results showing that elderly listeners perform at a higher level in detecting mistuned partials with chords of 200 ms duration compared to 40 ms duration, with ORN responses similar to those obtained for young adults. 6 Concordantly, some studies using speech-in-noise tasks suggested that older listeners required more time than younger listeners to segregate sound sources from either energetic or informational maskers. 17,44 The study by Ben-David et al. demonstrated similar results for a speech babble masker but not for a noise masker. 17 Since the current study employed a delayed response task, the reaction time data were not suitable for testing the age-related slowing of information processing hypothesis. Young adults nevertheless responded slightly faster than the elderly groups, but the difference was not significant.

P400 results
The P400 amplitude was significantly lower in the normal-hearing elderly compared to young adults. Considering the commonalities between the neural generators and the sensitivity to stimulus and task variables of the P400 and P3 components, 16,20,45,46 the P400 likely reflects attentional task-related processes. 47 The P3 amplitude was found to be lower in healthy aging. 48,49 This is interpreted as normal cognitive decline with aging. 48,50,51 Therefore, the current finding of reduced P400 amplitude likely reflects general age-related cognitive changes in attention or executive functions.
Distraction by irrelevant sounds 1-3 may be an important cause of the difficulties encountered by many elderly people in speech-in-noise situations.Deterioration of these central functions may partly compromise concurrent sound segregation, 16,17,42 lead to diminished auditory regularity representations at the higher stations of the auditory system, 16,18 as well as deficient inhibition of irrelevant information processing. 19

Consequences of age-related hearing loss on stream segregation ability
The hypothesis that integrating sound elements into an object is more difficult for the elderly with moderate hearing loss than for the normal-hearing elderly was confirmed: hearing-impaired elderly needed more figure tones and a higher SNR than normal-hearing elderly listeners to reach similar figure-detection performance. Specifically, while for normal-hearing elderly listeners ca. 55% of the tones in the chord forming the figure were sufficient for an 85% figure-detection ratio (LN condition), hearing-impaired elderly needed ca. 70% of the tones to belong to the figure for the same performance. Further, hearing impairment may increase susceptibility to masking by the background tones, as the number of background tones reducing performance from 85% to 65% (HN condition) negatively correlated with the hearing threshold. Supporting this explanation, former studies investigating speech perception in background noise found that the impact of hearing impairment is as detrimental for young and middle-aged as it is for older adults. When the background noise becomes cognitively more demanding, there is a larger decline in speech perception due to age or hearing impairment. 52,53 The ORN responses may provide further insight into the problems of figure-ground segregation caused by hearing impairment. Whereas young adults and normal-hearing elderly produced larger ORNs in the LN than the HN condition, the ORN amplitude did not differ between the two conditions for the hearing-impaired elderly. This suggests that, for the hearing-impaired elderly, the sensory-perceptual processes involved in detecting figures in noise were not made more effective by surrounding the figure with less noise. The lower amount of information arriving from the periphery limits their ability to find coherence or integrate concurrent sound elements. Consequently, there is no capacity left to reduce the effect of additional noise, resulting in a steep performance decline in noisier situations.
In contrast to early (sensory-perceptual) processing, no difference was found between normal-hearing and hearing-impaired elderly listeners at the later processing stage, as shown by the similar-amplitude P400 responses in the two groups. Thus, hearing deficits without general cognitive effects (as ensured by the group selection criteria and the lack of working memory differences between the groups) only affect early sensory-perceptual processes. Further, as the level of hearing impairment of the current hearing-impaired group is modest (none of the participants reported serious difficulties in the Hearing Handicap Inventory for the Elderly 34), the current results suggest that the tone-cloud-based figure detection paradigm could be used to detect hearing loss before it becomes severe.

Conclusions
Results obtained in a well-controlled model of the speech-in-noise situation suggest that age-related difficulties in listening under adverse conditions are largely due to hearing impairment, making figure-ground segregation especially difficult for elderly people.Coherence levels needed by individuals to reliably detect figures were very sensitive to hearing impairment and may serve as a diagnostic tool for hearing decline before it becomes clinically significant.

Limitations of the study
The sample sizes for the normal-hearing and hearing-impaired elderly groups were relatively low, which limits the statistical power of the analyses. For the ERP analyses, statistical power might have been insufficient to detect all effects of interest (see, e.g., Boudewyn et al. and Jensen et al. 54,55 for simulations on power in ERP studies); thus, caution is advised before generalizing the results of the current study. Further caution is needed due to results that remained slightly above the conventional significance threshold (see the object-related negativity (ORN) section in the results).

STAR+METHODS
Detailed methods are provided in the online version of this paper and include the following:

Threshold detection setting up LN/HN conditions
After training, each participant performed two adaptive threshold detection tasks (~15 min) to determine the stimulus parameters corresponding to 85% (termed the low-noise (LN) condition) and 65% accuracy (the high-noise (HN) condition) in the figure detection task. In both threshold detection tasks, the trial structure was as described for the training phase, except for the lack of feedback at the end of each trial.
As in the training phase, participants were instructed to indicate the presence or absence of a figure in the stimulus, with an emphasis on accuracy.
In the first threshold detection task, the goal was to determine the participant's individual figure coherence level corresponding to ca. 85% accuracy, while keeping the overall number of tones in each chord constant at 20. In the second task, the coherence level was kept constant at the level determined in the LN threshold detection task, and the number of background tones in the chord was increased until performance dropped to ca. 65% accuracy.
In both cases, thresholds were estimated using the QUEST procedure, 58 an adaptive staircase method that sets the signal-to-noise ratio (SNR) of the next stimulus to the most probable level of the threshold, as estimated by a Bayesian procedure taking into account all past trials. SNR was determined as the ratio of the number of figure to background tones. Both tasks consisted of one block of 80 trials, with 20 trials added if the standard deviation of the threshold estimate was larger than the median difference between successive SNR levels allowed by the FG stimulus parameters. The thresholding phase yielded stimulus parameters corresponding to 65% and 85% accuracy, separately for each participant. Thus, in the main experiment, the LN and HN conditions posed similar difficulty levels to each participant. The exact parameters used for the QUEST procedure can be found in the GitHub repository of the experiment.
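QUEST itself is specified in ref. 58 and in the experiment's repository; as a rough illustration of the underlying idea (a Bayesian posterior over candidate threshold levels, with each trial placed near the current best estimate), a simplified stand-in might look like the following Python sketch. The logistic psychometric function and all parameter values here are assumptions, not those of the study.

```python
import numpy as np

class BayesianStaircase:
    """Minimal stand-in for a QUEST-style staircase: maintain a posterior
    over candidate threshold levels, test near the posterior mode, and
    update with a fixed-slope logistic psychometric function."""
    def __init__(self, levels, target=0.85, slope=1.5, guess=0.5, lapse=0.02):
        self.levels = np.asarray(levels, float)
        self.log_post = np.zeros(len(self.levels))  # flat prior, log space
        self.target, self.slope = target, slope
        self.guess, self.lapse = guess, lapse

    def p_correct(self, level, thresh):
        # logistic core scaled between the guess and lapse rates
        core = 1.0 / (1.0 + np.exp(-self.slope * (level - thresh)))
        return self.guess + (1.0 - self.guess - self.lapse) * core

    def next_level(self):
        # place the next trial where the target accuracy is predicted,
        # given the current maximum-a-posteriori threshold
        thr = self.levels[np.argmax(self.log_post)]
        core = (self.target - self.guess) / (1.0 - self.guess - self.lapse)
        level = thr + np.log(core / (1.0 - core)) / self.slope
        return self.levels[np.argmin(np.abs(self.levels - level))]

    def update(self, level, correct):
        p = self.p_correct(level, self.levels)
        self.log_post += np.log(p if correct else 1.0 - p)

    def estimate(self):
        post = np.exp(self.log_post - self.log_post.max())
        return float(np.sum(self.levels * post) / post.sum())
```

Here `levels` would be the admissible coherence (or background-tone) values; after each response, `update` sharpens the posterior and `estimate` returns the posterior-mean threshold.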

Main figure detection task
In the main part of the experiment (~90 min), the trial structure and the instructions were identical to those used in the threshold detection procedure. Two conditions (LN and HN) were administered, resulting in four types of stimuli (LN figure, LN no-figure, HN figure, and HN no-figure). Each block contained equal numbers (20-20) of all four stimulus types in a randomized order. Summary feedback on performance (overall accuracy) was provided to participants after each block. Short breaks were inserted between successive stimulus blocks, with additional longer breaks after the 4th and 7th blocks.

Analysis of behavioral data
From the threshold detection tasks, we analyzed the participants' coherence level in the LN condition, the number of additional background tones in the HN condition, and the log-transformed SNR values for both conditions. A mixed-model ANOVA with the within-subject factor NOISE (LN vs. HN) and the between-subject factor GROUP (young adult, normal-hearing elderly, hearing-impaired elderly) was conducted on SNR. For coherence levels (LN only) and the number of additional background tones (HN only), one-way ANOVAs were performed with the between-subject factor GROUP.
From the main task, detection performance was assessed by the sensitivity index (d'), 59 the false alarm rate (FA), and mean reaction times (RT). Mixed-model ANOVAs were conducted with the within-subject factor NOISE and the between-subject factor GROUP, separately on d', FA, and RT. Statistical analyses were carried out in MATLAB (R2017a). The alpha level was set at 0.05 for all tests. Partial eta squared (ηp²) is reported as the effect size. Post hoc pairwise comparisons were computed by Tukey HSD tests. Pearson's correlations were calculated between the average of pure-tone audiometry thresholds in the 250-8000 Hz range and the working memory measures (capacity and control) on one side and the behavioral variables from the threshold detection and figure detection tasks on the other side. Bonferroni correction was used to reduce the potential errors resulting from multiple comparisons.
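The sensitivity index can be computed from the yes/no response counts as follows. This is a minimal sketch: the study cites ref. 59 for d', but the exact handling of extreme rates is not specified here, so the common log-linear correction is assumed.

```python
from statistics import NormalDist

def dprime(hits, misses, fas, crs):
    """d' = z(hit rate) - z(false alarm rate) for a yes/no figure
    detection task, with a log-linear correction so that rates of
    exactly 0 or 1 do not produce infinite z scores (an assumption)."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (fas + 0.5) / (fas + crs + 1.0)
    return z(hit_rate) - z(fa_rate)
```

For example, a listener with 85/100 hits and 35/100 false alarms obtains a d' of roughly 1.4; equal hit and false alarm rates give d' = 0.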

Analysis of EEG data EEG recording and preprocessing
EEG was recorded with a Brain Products actiCHamp DC 64-channel EEG system and actiCAP active electrodes. Impedances were kept below 15 kΩ. The sampling rate was 1 kHz, with a 100 Hz online low-pass filter applied. Electrodes were placed according to the international 10/20 system, with FCz serving as the reference. Eye movements were monitored with a bipolar recording from two electrodes placed lateral to the outer canthi of the eyes.
EEG was preprocessed with the EEGlab 14_1_2b toolbox 60 implemented in MATLAB 2018b. Signals were band-pass filtered between 0.5 and 80 Hz using a finite impulse response (FIR) filter (Kaiser windowed, with Kaiser β = 5.65326 and filter length n = 18112). A maximum of two malfunctioning EEG channels were interpolated using the default spline interpolation algorithm implemented in EEGlab. The Infomax algorithm of Independent Component Analysis (ICA) was employed for artifact removal. 60 ICA components capturing blink artifacts were removed after visually inspecting their topography and spectral contents. No more than 10 percent of the overall number of ICA components (a maximum of n = 3) were removed.

Event-related brain activity analysis
Epochs were extracted from the continuous EEG records between −800 and +2300 ms relative to the onset of the figure event. The predominantly fronto-central ORN25,30,31,47 amplitudes were measured as the average signal in the 250-350 ms latency range from Figure onset for the young adult group and in the 350-550 ms latency range for the normal-hearing and hearing-impaired elderly groups at the C3, Cz, and C4 leads. The predominantly parietal P40025,30,31,47 amplitudes were measured as the average signal in the 650-850 ms latency range from the P3, Pz, and P4 electrodes in all three groups.
Peak latency was measured as the latency value of the maximal amplitude within the latency range of ORN and P400 respectively.
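The two measures above can be sketched for a single channel as follows; this is a minimal illustration (array layout, sampling assumptions, and function names are ours, not the authors' code), assuming one sample per millisecond and an epoch starting at −800 ms.

```python
import numpy as np

EPOCH_START = -800  # epoch onset relative to figure onset (ms); 1 sample/ms

def mean_amplitude(erp, t_from, t_to):
    """Average amplitude of a single-channel ERP (1-D array) in the
    [t_from, t_to] ms window relative to figure onset."""
    i0, i1 = t_from - EPOCH_START, t_to - EPOCH_START
    return erp[i0:i1 + 1].mean()

def peak_latency(erp, t_from, t_to, polarity=-1):
    """Latency (ms from figure onset) of the extreme value in the window.
    polarity=-1 finds the negative peak (ORN), +1 the positive peak (P400)."""
    i0, i1 = t_from - EPOCH_START, t_to - EPOCH_START
    win = polarity * erp[i0:i1 + 1]
    return t_from + int(np.argmax(win))
```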

EEG source localization
The MNI/Colin27 brain template was segmented based on the default settings and was entered, along with default electrode locations, into the forward boundary element head model (BEM) provided by the openMEEG algorithm.65 For the modeling of time-varying source signals (current density) of all cortical voxels, a minimum norm estimate inverse solution with dynamical Statistical Parametric Mapping (dSPM) normalization was employed.66

QUANTIFICATION AND STATISTICAL ANALYSIS
Statistical analysis of behavioral data
From the threshold detection tasks, we analyzed the participants' coherence level in the LN condition, the number of additional background tones in the HN condition, and log-transformed SNR values for both conditions. A mixed-model ANOVA with the within-subject factor NOISE (LN vs. HN) and the between-subject factor GROUP (young adult, normal-hearing elderly, hearing-impaired elderly) was conducted on SNR.
For coherence levels (LN only) and the number of additional background tones (HN only), one-way ANOVAs were performed with the between-subject factor GROUP (young adult, normal-hearing elderly, hearing-impaired elderly).
From the main task, detection performance was assessed by the sensitivity index (d′),59 false alarm rate (FA), and mean reaction times (RT). Mixed-model ANOVAs were conducted with the within-subject factor NOISE and the between-subject factor GROUP, separately on d′, FA, and RT. Statistical analyses were carried out in MATLAB (R2017a). The alpha level was set at 0.05 for all tests. Partial eta squared (ηp²) is reported as effect size. Post-hoc pairwise comparisons were computed by Tukey HSD tests. Pearson's correlations were calculated between the average of pure-tone audiometry thresholds in the 250-8000 Hz range and working memory measures (capacity and control) on one side and behavioral variables from the threshold detection and the figure detection tasks on the other side. Bonferroni correction was used to reduce the potential errors resulting from multiple comparisons.
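For reference, the sensitivity index is d′ = z(hit rate) − z(false alarm rate), where z is the inverse of the standard normal cumulative distribution. The optional log-linear correction below, which keeps d′ finite when a rate is exactly 0 or 1, is a common convention and an assumption on our part; the paper does not state which correction, if any, was used.

```python
from scipy.stats import norm

def dprime(hit_rate, fa_rate, n_targets=None, n_lures=None):
    """Sensitivity index d' = z(hit) - z(FA).
    If trial counts are given, apply the log-linear correction
    (an assumed convention, not specified in the paper) so that
    rates of exactly 0 or 1 remain finite."""
    if n_targets is not None:
        hit_rate = (hit_rate * n_targets + 0.5) / (n_targets + 1)
    if n_lures is not None:
        fa_rate = (fa_rate * n_lures + 0.5) / (n_lures + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)
```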

Statistical analysis of EEG data
Event-related brain activity
The effects of age were tested with mixed-model ANOVAs with the within-subject factors FIGURE (Figure vs. No-figure), NOISE (LN vs. HN), and LATERALITY (left vs. midline vs. right), and the between-subject factor AGE (young adult vs. normal-hearing elderly), on the two ERP amplitudes and peak latencies. Similar mixed-model ANOVAs were conducted to test the effects of hearing impairment by exchanging AGE for the between-subject factor HEARING IMPAIRMENT (normal-hearing vs. hearing-impaired older adults). Post-hoc pairwise comparisons were computed by Tukey HSD tests.

EEG source activity
Contrasts were evaluated on the average signal for the time window of interest (250-350 ms for young adults and 350-550 ms for normal-hearing and hearing-impaired older listeners), between Figure trials of the LN and HN conditions, separately for each group, by a permutation-based (N = 1000) paired sample t-test (alpha level = 0.01).
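A sign-flipping permutation version of the paired t-test can be sketched as follows; the function name and implementation details are illustrative, not the authors' code. Under the null hypothesis the sign of each within-pair difference is exchangeable, so random sign flips generate the reference distribution for |t|.

```python
import numpy as np

def perm_paired_t(x, y, n_perm=1000, seed=0):
    """Permutation-based paired-sample t-test: randomly flip the sign
    of each within-pair difference and compare |t| of the observed
    data against the permutation distribution.
    Returns (t_observed, p_value)."""
    d = np.asarray(x, float) - np.asarray(y, float)
    n = d.size

    def tstat(v):
        return v.mean() / (v.std(ddof=1) / np.sqrt(n))

    t_obs = tstat(d)
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=n)
        if abs(tstat(d * flips)) >= abs(t_obs):
            count += 1
    # add-one correction so p is never exactly zero
    p = (count + 1) / (n_perm + 1)
    return t_obs, p
```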

Figure 1. Stimuli and behavioral results
(A) Examples of stimuli for the low (LN) and high noise (HN) conditions, respectively. (B) Mean pure-tone audiometry thresholds in the 250-8000 Hz range for the young adult (blue line; N = 20), normal-hearing elderly (green line; N = 13), and hearing-impaired elderly (red line; N = 16) groups. Error bars depict SEM on all graphs. (C) Forward and backward digit span performance (corresponding to working memory capacity and control) for the three groups. (D) Mean SNR values derived from the threshold detection tasks of the stimulus individualization procedure (across LN and HN). (E) Mean figure coherence level of the LN stimuli, derived from the first threshold detection task (and employed later in the FG segregation task), for the three groups; significant group differences (p < 0.01) are marked by gray lines above the bar charts. (F) Mean increase in the number of background tones from the LN to the HN condition for the three groups. (G and H) Behavioral performance (hit rate and false alarm rate, respectively) in the FG segregation task, separately for LN and HN; color labels are at the lower right corner of the figure.
The main effect of NOISE showed a tendency (F[1,31] = 4.069, p = 0.0524; ηp² = 0.116), with larger (more negative) ORN amplitudes in the LN compared to the HN condition. The interaction between NOISE and FIGURE also showed a tendency (F[1,31] = 4.1550, p = 0.05012; ηp² = 0.118), with figure trials eliciting a larger ORN for LN than HN (post hoc comparison: p = 0.032), but not no-figure trials.

Figure events elicited the ORN between 250 and 350 ms latency from figure onset in the young adult group, and between 350 and 550 ms in the normal-hearing and hearing-impaired elderly groups. There was a significant main effect of AGE (F[1,31] = 4.2662, p < 0.05, ηp² = 0.12), with the ORN peak latency delayed in the normal-hearing elderly (M = 421 ms) compared to young adults (M = 280 ms). NOISE and HEARING IMPAIRMENT did not significantly affect the ORN latency. The brain regions activated during the ORN period were identified by source localization performed on the responses elicited by figure trials. Sensitivity to SNR was tested by comparing the source signals between the LN and HN conditions, separately for the young adult, normal-hearing, and hearing-impaired elderly groups, by permutation-based t-tests. Significant NOISE effects were found predominantly in higher-level auditory and associational areas, such as the left temporal cortices, the planum temporale (PT), and the intraparietal sulcus (IPS; Figure 2C). In young listeners, precentral cortical regions were also significantly sensitive to the SNR.

Figure 2. EEG results: ORN response
(A) Group-averaged (young adult: N = 20; normal-hearing elderly: N = 13; hearing-impaired elderly: N = 16) central (C3 lead; maximal ORN amplitude) ERP responses to figure (solid line) and no-figure (dashed line) stimuli, obtained in the LN (red) and HN (black) conditions, for the three groups. Zero latency is at the onset of the figure event. Gray vertical bands show the measurement window for the ORN, while the yellow dashed line indicates the ORN peak latency in young adults. The bar charts on the right show the mean ORN amplitude (with SEM) of figure trials, separately for the LN and HN conditions and the groups. Significant NOISE effects (p < 0.05) are marked by gray lines beside the bar charts. (B) Scalp distribution of the figure-elicited ORN response for the LN and HN conditions and groups, with color scale below. (C) Brain areas sensitive to the NOISE effect (HN vs. LN condition) within the ORN time window: significant NOISE effects on source activity (current source density based on dSPM), separately for the young adult, normal-hearing, and hearing-impaired older adult groups (color scale below).

Figure 3. EEG results: P400 response
(A) Group-averaged (young adult: N = 20; normal-hearing elderly: N = 13; hearing-impaired elderly: N = 16) parietal (Pz lead; maximal P400 amplitude) ERP responses to figure (solid line) and no-figure (dashed line) stimuli, obtained in the LN (red) and HN (black) conditions, for the three groups. Zero latency is at the onset of the figure event. Gray vertical bands show the measurement window for the P400. The bar charts on the right show the mean P400 amplitude (with SEM) for figure and no-figure trials (collapsed across LN and HN), separately for the groups. Significant group effects (p < 0.05) are marked by gray lines beside the bar charts. (B) Scalp distribution of the figure-elicited P400 response for the LN and HN conditions and groups, with color scale below.
Four trial types were presented: Figure-LN, No-figure-LN, Figure-HN, and No-figure-HN. Participants received 200 repetitions of each stimulus type for a total of 800 trials. Trials were divided into 10 stimulus blocks of 80 trials each, with each block containing an equal number of each stimulus type. Epochs were time-locked to Figure onset in Figure trials. For No-figure trials, onsets were selected randomly from the set of Figure onsets in the Figure trials (each Figure onset value from Figure trials was selected only once for a No-figure trial). Only epochs from trials with a correct response (hit for Figure and correct rejection for No-figure trials) were further processed. Baseline correction was applied by averaging voltage values in the [−800, 0] ms time window. Epochs with a change exceeding ±100 μV at any electrode throughout the whole epoch were rejected. The remaining epochs were averaged separately for each stimulus type and group. The mean numbers of valid epochs (collapsed across groups) were Figure-HN: 107.78; No-figure-HN: 175.08; Figure-LN: 153.04; No-figure-LN: 173.29. Brain activity within the time windows corresponding to the ORN and P400 ERP components was measured separately for each stimulus type/condition/group. Time windows were defined by visual inspection of grand average ERPs.
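The baseline-correction and rejection steps above can be sketched as follows; this is a minimal illustration (the array layout and threshold handling are assumptions), assuming epochs sampled at 1 kHz covering −800 to +2300 ms.

```python
import numpy as np

EPOCH_START = -800    # epoch start relative to figure onset (ms); 1 sample/ms
THRESHOLD_UV = 100.0  # rejection threshold (microvolts)

def baseline_and_reject(epochs):
    """epochs: array (n_trials, n_channels, n_samples) in microvolts.
    Subtract the mean of the [-800, 0] ms window per trial and channel,
    then drop every epoch in which any channel exceeds +/-100 uV
    anywhere in the epoch. Returns (kept_epochs, keep_mask)."""
    n_base = 0 - EPOCH_START  # number of samples in the baseline window
    baseline = epochs[:, :, :n_base].mean(axis=2, keepdims=True)
    corrected = epochs - baseline
    keep = np.all(np.abs(corrected) <= THRESHOLD_UV, axis=(1, 2))
    return corrected[keep], keep
```

The kept epochs would then be averaged per stimulus type and group to form the ERPs.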
ERPs were separately collected for figure and no-figure trials, the LN and HN conditions, and the young adult, normal-hearing elderly, and hearing-impaired elderly groups. Only ERPs to figure and no-figure events with a correct response (hit for figure, correct rejection for no-figure) were analyzed. Two ERP components were identified based on a visual inspection of the group-average responses. Figure detection elicited the ORN response (Figures 2A and 2B) over fronto-central leads, followed by a parietally maximal P400 response (Figures 3A and 3B). Two contrasts were tested by mixed-model ANOVAs on the amplitudes and latencies of the ORN and P400 responses, with factors of FIGURE (figure vs. no-figure), NOISE (LN vs. HN), LATERALITY (left vs. midline vs. right), and GROUP: one for exploring age effects by comparing the young adult and the normal-hearing elderly groups (AGE factor), and one to test the effect of age-related hearing loss by comparing the normal-hearing and the hearing-impaired elderly groups (HEARING IMPAIRMENT factor). Effects not including GROUP (AGE or HEARING IMPAIRMENT) or NOISE are reported separately at the end of the Results.
There was a main effect of FIGURE (F[1,31] = 72.7, p < 0.001, ηp² = 0.70), with figure trials eliciting a stronger ORN response (more negative signal) than no-figure trials, and of LATERALITY (F[2,62] = 9.08, p < 0.001, ηp² = 0.23). Post-hoc pairwise comparisons revealed that the ORN was dominant on the left side, as the amplitude at C3 was larger than at C4 or Cz (p < 0.001 for both).