
Speech understanding in diffuse steady noise in typically hearing and hard of hearing listeners

  • Julie Bestel,

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Audilab, Versailles, France

  • Elsa Legris,

    Roles Investigation

    Affiliation Audilab, Tours, France

  • Frédéric Rembaud,

    Roles Investigation

    Affiliation Audilab, Périgueux, France

  • Thierry Mom,

    Roles Conceptualization

    Affiliation Centre Hospitalier Universitaire de Clermont-Ferrand, Clermont-Ferrand, France

  • John J. Galvin III

    Roles Formal analysis, Visualization, Writing – original draft, Writing – review & editing

    jgalvin@hifla.org

Affiliations University Hospital Center of Tours, Tours, France; House Institute Foundation, Los Angeles, CA, United States of America

Abstract

Spatial cues can facilitate segregation of target speech from maskers. However, in clinical practice, masked speech understanding is most often evaluated using co-located speech and maskers (i.e., without spatial cues). Many hearing aid centers in France are equipped with five-loudspeaker arrays, allowing masked speech understanding to be measured with spatial cues. It is unclear how hearing status may affect utilization of spatial cues to segregate speech and noise. In this study, speech reception thresholds (SRTs) for target speech in “diffuse noise” (target speech from 1 speaker, noise from the remaining 4 speakers) were measured in 297 adult listeners across 9 Audilab hearing centers. Participants were categorized according to pure-tone-average (PTA) thresholds: typically-hearing (TH; ≤ 20 dB HL), mild hearing loss (Mild; > 20 to 40 dB HL), moderate hearing loss, 1st degree (Mod-1; > 40 to 55 dB HL), and moderate hearing loss, 2nd degree (Mod-2; > 55 to 65 dB HL). All participants were tested without aided hearing. SRTs in diffuse noise were significantly correlated with PTA thresholds, age at testing, and word and phoneme recognition scores in quiet. Stepwise linear regression analysis showed that SRTs in diffuse noise were significantly predicted by a combination of PTA thresholds and word recognition scores in quiet. SRTs were also measured in co-located and diffuse noise in 65 additional participants. SRTs were significantly lower in diffuse noise than in co-located noise only for the TH and Mild groups; masking release with diffuse noise (relative to co-located noise) was significant only for the TH group. The results are consistent with previous studies that found that hard of hearing listeners have greater difficulty using spatial cues to segregate competing speech. The data suggest that speech understanding in diffuse noise provides additional insight into the difficulties that hard of hearing individuals experience in complex listening environments.

Introduction

Everyday listening conditions are complex and noisy. Human listeners are able to segregate target speech from competing sounds using a variety of acoustic cues, such as the similarity between the target speech and competing sounds and the availability of spatial cues. Competing sounds may interfere with target speech due to energetic, envelope, and/or informational masking [1–5]. Energetic masking depends on the degree of spectro-temporal overlap with the target and is thought to be largely peripheral in origin. Envelope masking depends on the degree of envelope similarity to the target, even when there is no spectro-temporal overlap, and is thought to be more central in origin. Informational masking depends on the degree of lexical interference and the similarity between the target and competing speech (e.g., talker characteristics such as sex), and is also thought to be more central in origin.

Understanding of target speech is often measured with co-located maskers, a configuration that provides no spatial cues. However, spatial cues (e.g., head shadow, inter-aural time differences, inter-aural level differences) have been shown to improve segregation of target speech and maskers [6–13]. Spatial release from masking has been shown to be poorer in hard of hearing than in typically-hearing (TH) listeners [14–20], and poorer in older than in younger adults [16, 18]. Utilization of spatial cues to segregate target speech and maskers has been measured using various masker stimuli and sound source setups. In sound field, some studies have used 2–3 loudspeakers, with target speech typically presented directly in front of the listener (0° azimuth) and maskers presented from 0° azimuth or from some other location (e.g., ±45° or ±90° azimuth, relative to the target location). Others have used more complex, multi-speaker setups such as the R-Space™ 8-loudspeaker system to evaluate masked speech understanding in TH and/or hard of hearing listeners [21–27]. The masking sound used in the R-Space™ system replicates a “cocktail party” setting, with multi-talker babble processed to come from the multi-speaker sound sources; the target and masker speech can be assigned to any of the loudspeakers. The R-Space system has also been used to evaluate the effect of microphone directionality and signal processing on signal-to-noise ratios (SNRs) for hearing aid and cochlear implant systems [22].

Compared to studies that used competing speech, multi-talker babble, or cafeteria/restaurant noise, relatively few studies have examined the benefit of spatial cues using noise maskers. Spatial release from masking has been shown to be larger with competing speech than with noise maskers [10, 28, 29]. Avivi-Reich et al. [30] found better speech understanding in young TH listeners when target speech was presented from a single loudspeaker and diffuse noise was presented from three different loudspeakers. Advantages for speech understanding in diffuse noise (3-loudspeaker array) have been observed in cochlear implant listeners using beam-forming microphones and signal processing [31, 32]. Similarly, directional microphones have been evaluated using diffuse noise in hearing aid users [22, 33].

In general, the availability of spatial cues appears to benefit segregation of target and masker speech. Compared to speech or babble maskers (which produce some degree of energetic, envelope, and/or informational masking), there is relatively little information regarding the effect of spatialized noise (i.e., energetic masking) on TH and hard of hearing listeners’ speech understanding. A 5-loudspeaker array has been recommended by the French Society of Otorhinolaryngology-Head and Neck Surgery to assess speech perception in noise with spatial cues [34]. While this loudspeaker setup is available in many hearing centers in France, speech perception in spatialized noise is not often tested. Given hard of hearing listeners’ difficulties in utilizing spatial cues to segregate competing speech or babble (informational and/or energetic masking) [14–20], it is unclear whether such difficulties might persist in spatialized noise. It is also unclear how speech understanding may differ between diffuse and co-located noise, and how hearing loss may interact with these noise conditions.

In the present study, unaided sentence recognition in diffuse noise was measured in 297 TH and hard of hearing listeners at 9 Audilab hearing centers using a 5-loudspeaker array. Speech understanding in diffuse noise was evaluated in light of participants’ hearing status, age at testing, sex, word recognition scores in quiet, and phoneme recognition scores in quiet. Speech understanding in diffuse or co-located noise was also compared in another 65 TH and hard of hearing listeners. The main research questions were: 1) What factors (e.g., audiometric thresholds, age at testing, word recognition scores in quiet) predict speech understanding in diffuse noise? and 2) How does speech understanding differ in diffuse versus co-located noise?

Methods

Participants

Participants were recruited from 9 Audilab hearing aid centers (a commercial enterprise) in the following cities: A) Audilab Versailles, B) Audilab Chartres, C) Audilab Niort, D) Audilab Tours, E) Audilab St Pryvé St Mesmin, F) Audilab Périgueux, G) Audilab La Chaussée St Victor, H) Audilab Pau, and I) Audilab Montlouis Sur Loire. The recruited hard of hearing participants were existing Audilab patients. The recruited TH participants were individuals who accompanied Audilab patients on clinical visits, as well as Audilab clinical trainees, assistants, and audiologists. Data were collected between June 2019 and December 2020. While this study period occurred during the COVID-19 pandemic, enrollment remained high (297 participants in total). Data from hard of hearing participants were collected during routine clinical visits that were already scheduled (i.e., no additional visits were scheduled as part of the study). This was an observational study that was approved by the Comité de Protection des Personnes Nord Ouest IV (approval number: 2018 A02729 46). Participants were provided with a study information document and were informed that they could refuse to participate in or withdraw from the study if they so desired. This was a Research Implying Human Person Type 3 study (non-interventional research in which all procedures and products are within the clinical standard of care, without additional or unusual procedures of diagnosis, treatment, or supervision). As such, written informed consent was not required or collected.

Inclusion criteria were: adults (> 18 years old at testing) who were native speakers of French, with pure-tone average (PTA) thresholds across both ears < 65 dB HL and normal otoscopy. For hard of hearing participants, only sensorineural hearing loss was allowed.

Exclusion criteria were: conductive hearing loss, hearing loss due to ototoxicity (where known), PTA threshold difference across ears > 20 dB HL, and inability to understand the study description and/or test procedures because of cognitive or language issues. For new patients (less than 6 months from diagnosis), conductive hearing loss was assessed using bone conduction thresholds. For patients with more than 6 months since diagnosis, conductive hearing loss was determined according to medical history. Hearing loss due to ototoxicity was excluded because we wanted to reduce sources of heterogeneity in sensorineural hearing loss, and because chemical agents may involve specific mechanisms that we did not want to include in our study sample. Across-ear asymmetry in PTA thresholds > 20 dB HL was excluded because we wanted to reduce “better ear” effects when testing with diffuse noise.

Patient demographics and audiological measures

Pure-tone air conduction thresholds were measured for each ear in each participant using headphones for audiometric frequencies 0.5, 1, 2, 4, 6, and 8 kHz. If air conduction thresholds were available less than 6 months before the clinic appointment, these were used for the study. If air conduction thresholds were collected more than 6 months before the clinic appointment, thresholds were re-collected.

Similar to recommendations by the International Bureau for Audiology (https://www.biap.org/en/), participants were divided into 4 hearing status groups according to their binaural pure-tone average (PTA) thresholds across 0.5, 1, 2, and 4 kHz: 1) TH (PTA ≤ 20 dB HL), 2) Mild hearing loss (PTA > 20 to 40 dB HL), 3) Moderate hearing loss, 1st degree (Mod-1; PTA > 40 to 55 dB HL), and 4) Moderate hearing loss, 2nd degree (Mod-2; PTA > 55 to 65 dB HL). Note that the cutoff for the Mod-2 group was lower than recommended by the International Bureau for Audiology (70 dB HL) because we did not want the maximum speech level during testing to be overly high. During testing, the noise was fixed at 65 dBA and the speech level was adjusted to a maximum of 85 dBA (see details below). The present hearing status classifications are also similar to those reported by the Global Burden of Disease Expert Group on Hearing Loss [35].
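
For concreteness, this grouping rule can be expressed as a simple threshold lookup. The sketch below is our illustration in Python, not part of the study software; the function name and return labels are ours.

```python
def hearing_status_group(pta_db_hl: float) -> str:
    """Classify a binaural PTA threshold (mean of 0.5, 1, 2, and 4 kHz,
    in dB HL) into the four hearing status groups used in this study."""
    if pta_db_hl <= 20:
        return "TH"     # typically hearing
    if pta_db_hl <= 40:
        return "Mild"   # mild hearing loss
    if pta_db_hl <= 55:
        return "Mod-1"  # moderate hearing loss, 1st degree
    if pta_db_hl <= 65:
        return "Mod-2"  # moderate hearing loss, 2nd degree
    raise ValueError("PTA > 65 dB HL falls outside the study inclusion criteria")
```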

A total of 335 participants were enrolled in the study. However, data from 21 participants were excluded due to a loudspeaker failure at one center (Audilab Chartres), and data were excluded from another 17 participants for whom data collection was incomplete, or for whom SRTs ≤ 20 dB SNR could not be obtained within test runs (see description of test methods below). This left 297 participants who were included in the data analyses. Table 1 shows the distribution of participants for each subject group and test site, in terms of sex, age at testing, binaural PTA threshold, word recognition scores in quiet, and phoneme recognition scores in quiet.

Table 1. Demographic information within and across the different hearing status groups and test sites.

Data are shown for the number of male and female participants, the total number of participants, mean (and standard deviation) age at testing, pure-tone average (PTA) threshold in dB HL, percent correct word recognition scores (WRS) in quiet, and percent correct phoneme recognition scores (PRS) in quiet.

https://doi.org/10.1371/journal.pone.0274435.t001

Speech measures

Word and phoneme recognition in quiet.

Unaided monosyllabic word recognition in quiet was measured in sound field at 65 dBA using monosyllabic words from Lafon [36, 37]. Only binaural word recognition was measured. Stimuli were presented from the audiometer connected to a single loudspeaker at 0° azimuth, 1 m away from the listener. Two lists of 17 words each were tested for each participant. During testing, a word from the list was presented to the participant, who was asked to repeat what they heard as accurately as possible. The experimenter scored each phoneme that was correctly identified, as well as each whole word that was correctly identified. Scores were averaged across the two lists. Mean word and phoneme recognition scores for each hearing status group and each study site are shown in Table 1.
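
As an illustration of this scoring rule, the sketch below (ours, not the clinic’s software) computes percent-correct word and phoneme scores for one list; representing words as tuples of phonemes and matching phonemes by position is a simplification of the experimenter’s live judgment.

```python
def score_list(responses, targets):
    """Percent-correct word and phoneme scores for one 17-word list.
    `targets` and `responses` are lists of words, each word a tuple of
    phonemes, e.g., ("b", "u", "l"). Hypothetical format, for illustration."""
    word_hits = sum(resp == tgt for resp, tgt in zip(responses, targets))
    phoneme_total = sum(len(tgt) for tgt in targets)
    phoneme_hits = sum(sum(rp == tp for rp, tp in zip(resp, tgt))
                       for resp, tgt in zip(responses, targets))
    # Final scores reported in the study are the average of two lists.
    return 100 * word_hits / len(targets), 100 * phoneme_hits / phoneme_total
```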

Sentence recognition in diffuse noise.

Unaided sentence recognition in noise was measured in sound field using the French Matrix stimuli from Jansen et al. [38], as implemented in the Hortech FraMatrix audiological testing software (FraMatrix instruction manual, v. 1.5.4.0). Only binaural performance was measured. The French Matrix stimuli consist of 280 sentences constructed from 10 words selected from each of 5 categories (Name, Verb, Number, Noun, Adjective); the 50 words were selected to represent the distribution of phonemes in the French language. The sentences are distributed into 28 lists of 20 sentences each, and the sentence lists are balanced in terms of intelligibility in noise.

The French Matrix test is typically implemented with speech and noise coming from a single loudspeaker directly in front of the participant (0° azimuth), who is seated 1 m away from the speaker. In the present study, speech was presented from a single loudspeaker directly in front of the participant (0° azimuth), but noise was simultaneously presented from 4 loudspeakers (45°, 135°, 225°, and 315° azimuth), creating a “diffuse noise”; note that the same noise was presented from each of the 4 loudspeakers. The participant was seated in the middle of the speaker array, 1 m away from each speaker (see Fig 1). Stimuli were presented from an audiometer connected to the 5 loudspeakers. The target speech was delivered from one output channel of the audiometer, and the noise was delivered from the other output channel. An audio mixer was used to control the output levels of the noise fed to each of the four noise loudspeakers (see Fig 1). The steady noise was spectrally matched to the long-term average spectrum of the target sentences, and was supplied along with the FraMatrix sentences as part of the clinical test battery (“FraMatrix noise”). The overall noise level was fixed at 65 dBA; the level of the noise from each speaker was attenuated by 3 dB to maintain the fixed 65 dBA level, and the multi-speaker 65 dBA noise level was confirmed by calibration at the listener’s head position. Noise was continuous during sentence recognition testing.

During testing, a sentence was presented in noise to the participant, who was asked to repeat the sentence as accurately as possible (open-set test paradigm); the experimenter scored the correct responses. The level of the speech was adjusted according to the correctness of the response. The initial SNR was +10 dB (i.e., speech presented at 75 dBA and noise at 65 dBA). If the participant repeated 3 out of 5 words correctly, the speech level was reduced; if not, the speech level was increased. The step size was automatically adjusted by the software (i.e., large steps at the beginning of the test and small steps near the end of the test). If the SNR exceeded 20 dB (i.e., speech level = 85 dBA) during a test run, the run was discarded. The final SNR after a minimum of 6 reversals in speech level was recorded as the speech reception threshold (SRT), defined as the SNR required to produce 50% correct sentence recognition in noise. Three lists of 20 sentences each were tested for each participant; test lists were randomized across participants.

Fig 1. Illustration of the test setup to measure speech understanding in diffuse noise.

Speech came from 0° azimuth (green speaker) and noise came from 45°, 135°, 225°, and 315° azimuth (red speakers) located 1 m away from the center of the listener’s head.

https://doi.org/10.1371/journal.pone.0274435.g001
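
The adaptive procedure described above can be summarized with the following simplified sketch (ours; the actual FraMatrix step-size rule is proprietary, so the step halving and 0.5 dB floor below are assumptions for illustration only):

```python
def measure_srt(score_sentence, start_snr=10.0, max_snr=20.0, min_reversals=6):
    """Simplified adaptive track for one test run. `score_sentence(snr)`
    should present one Matrix sentence at the given SNR (dB) and return
    the number of words (out of 5) repeated correctly."""
    snr, step = start_snr, 4.0        # assumed initial step size
    reversals, last_direction = 0, 0
    while reversals < min_reversals:
        correct = score_sentence(snr)
        direction = -1 if correct >= 3 else +1   # 3/5 words correct -> harder
        if last_direction and direction != last_direction:
            reversals += 1
            step = max(step / 2.0, 0.5)          # large steps early, small late
        last_direction = direction
        snr += direction * step
        if snr > max_snr:                        # speech would exceed 85 dBA
            return None                          # run is discarded
    return snr  # final SNR, tracking ~50% correct sentence recognition
```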

Sentence recognition in co-located noise.

SRTs were measured in co-located noise for an additional 65 participants (13 TH, 22 Mild, 19 Mod-1, and 11 Mod-2) using the same methods as described above with diffuse noise, except that speech and noise were co-located. Participants were tested while facing the center loudspeaker directly in front (0° azimuth).

Data analysis

Data were validated by an independent contract research organization that was not part of Audilab. In the electronic case report forms, there were automatic data entry controls that prohibited experimenters from entering inappropriate values. Once data collection was complete, the dataset was “frozen” by the contract research organization, after which new data could not be entered. Data were reviewed by a data manager and a clinical research associate within the contract research organization, who checked the integrity of the data and issued queries for investigators to address.

Where appropriate, linear mixed models, Pearson correlations, linear regressions, and forward stepwise linear regression were performed using SigmaPlot (v. 14) and SPSS (v. 22) software. Bonferroni corrections were applied to post-hoc pairwise comparisons to correct for multiple comparisons. The threshold for significance was p < 0.05.
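
For readers who prefer open-source tools, an equivalent analysis pipeline could be sketched as follows (Python with pandas/statsmodels/scipy rather than the SigmaPlot/SPSS procedures actually used; the file and column names are hypothetical):

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

# One row per participant and test run; column names are ours for illustration.
df = pd.read_csv("srt_data.csv")  # participant, group, run, srt, pta, age, wrs, prs

# Linear mixed model: group and run as fixed factors, participant as random factor.
lmm = smf.mixedlm("srt ~ C(group) * C(run)", df, groups=df["participant"]).fit()
print(lmm.summary())

# Pearson correlation between PTA thresholds and SRTs (third run only).
run3 = df[df["run"] == 3]
r, p = pearsonr(run3["pta"], run3["srt"])
print(f"PTA vs SRT: r = {r:.2f}, p = {p:.3g}")
```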

Results

SRTs in diffuse noise

As per instructions provided by Hortech, and consistent with clinical practice, three lists were tested to measure SRTs; the first two runs are considered “practice” lists, and the third run is used as the SRT. Linear mixed-model analysis was performed on the SRT test run data, with group (TH, Mild, Mod-1, Mod-2) and test run (1, 2, 3) as fixed factors and participant as the random factor. Results showed significant effects for group [F(3, 293) = 141.0, p < 0.001] and test run [F(2, 551) = 250.0, p < 0.001]; there was also a significant interaction [F(6, 551) = 16.5, p < 0.001]. Post-hoc Bonferroni-corrected pairwise comparisons showed that for the TH and Mild groups, SRTs were significantly higher (poorer) for run 1 than for runs 2 and 3 (p < 0.05 for both comparisons), with no significant difference between runs 2 and 3. For the Mod-1 and Mod-2 groups, SRTs were significantly higher for run 1 than for runs 2 and 3 (p < 0.05 for both comparisons), and significantly higher for run 2 than for run 3 (p < 0.05). Consistent with Hortech instructions and clinical practice, only data from the third run were used as SRTs.

Fig 2 shows boxplots of SRTs in diffuse noise for the different hearing status groups. The mean SRT was -7.6 ± 3.9, -3.8 ± 5.3, 3.3 ± 4.5, and 8.8 ± 6.3 dB SNR for the TH, Mild, Mod-1, and Mod-2 groups, respectively. Linear mixed-model analysis was performed on the SRT data, with hearing group as the fixed factor and participant as the random factor. Results showed a significant effect of hearing group [F(3, 293) = 121.6, p < 0.001]. Post-hoc Bonferroni pairwise comparisons showed that SRTs were significantly lower for the TH group than for the Mild, Mod-1, and Mod-2 groups (p < 0.001 for all comparisons), significantly lower for the Mild group than for the Mod-1 and Mod-2 groups (p < 0.001 for both comparisons), and significantly lower for the Mod-1 group than for the Mod-2 group (p < 0.001).

Fig 2. Boxplots of SRTs in diffuse noise for each hearing group.

The grey boxes show the 25th and 75th percentile, the error bars show the 10th and 90th percentiles, the circles show outliers, the horizontal lines in the box show the median, the white stars show the mean, and the green boxes show the 95% confidence interval.

https://doi.org/10.1371/journal.pone.0274435.g002

Table 2 shows the mean and range of PTA thresholds, age at testing, word recognition scores, phoneme recognition scores, and SRTs, as well as the number of male and female participants for the different hearing status groups. Results of various linear mixed-model analyses performed on the data are shown at right. Across all groups, there was no significant difference in the number of male and female participants. While there was substantial overlap, age at testing, word recognition scores, and SRTs differed significantly among all groups (p < 0.05 for all comparisons). Phoneme recognition scores were significantly higher for the TH and Mild groups than for the Mod-1 and Mod-2 groups (p < 0.05 for all comparisons), and significantly higher for the Mod-1 group than for the Mod-2 group (p < 0.05), with no significant difference between the TH and Mild groups.

Table 2. Mean and range of PTA thresholds, age at testing, word recognition scores (WRS), phoneme recognition scores (PRS), and SRTs for the different hearing status groups.

Sex data are also shown. Results from linear mixed-model analyses are shown at right. For the analyses, PTA thresholds, age at testing, word recognition scores, phoneme recognition scores, and SRTs were the dependent variables, hearing status group was the fixed factor, and participant was the random factor, except for sex, where sex was the dependent variable and participant was the random factor. Significant effects are indicated by asterisks and italics. Significant post-hoc Bonferroni-adjusted pairwise comparisons are shown at far right (p < 0.05).

https://doi.org/10.1371/journal.pone.0274435.t002

Age at testing, PTA thresholds, word recognition scores, phoneme recognition scores, and SRTs in diffuse noise were compared using Pearson correlation analyses; complete results are shown in Table 3. For all groups, word recognition scores and phoneme recognition scores were highly correlated (p < 0.001 for all correlations). For the TH group, significant correlations were observed among age at testing, PTA thresholds, and SRTs (p < 0.005 for all correlations). For the Mild group, significant correlations were observed between word or phoneme recognition scores and age at testing, PTA thresholds, and SRTs (p < 0.001 for all correlations). For the Mod-1 group, significant correlations were observed between word or phoneme recognition scores and age at testing, PTA thresholds, and SRTs (p < 0.001 for all correlations); a significant correlation was also observed between PTA thresholds and SRTs (p < 0.001). For the Mod-2 group, SRTs were significantly correlated with age at testing, PTA thresholds, word recognition scores, and phoneme recognition scores (p < 0.001 for all correlations). Across all participants, significant correlations were observed among age at testing, PTA thresholds, word recognition scores, phoneme recognition scores, and SRTs (p < 0.001 for all correlations).

Table 3. Results of Pearson correlations analyses among age at testing, binaural PTA thresholds, word recognition scores (WRS), phoneme recognition scores (PRS), and SRTs.

Results are shown within each hearing status group and across all participants. Significant relationships after Bonferroni correction for multiple comparisons (adjusted p = 0.005) are indicated by asterisks and italics.

https://doi.org/10.1371/journal.pone.0274435.t003

Fig 3 shows SRTs in diffuse noise as a function of age at testing (upper left), PTA threshold in the better ear (upper right), word recognition scores (lower left), and phoneme recognition scores (lower right). Linear regression analyses across all data showed significant relationships between SRTs in diffuse noise and age at testing (r2 = 0.34, p < 0.001), PTA thresholds in the better ear (r2 = 0.57, p < 0.001), word recognition scores (r2 = 0.55, p < 0.001), and phoneme recognition scores (r2 = 0.45, p < 0.001). PTA thresholds in the better ear were significantly correlated with SRTs in diffuse noise only within the TH (r2 = 0.29, p < 0.001) and Mod-1 (r2 = 0.23, p < 0.001) groups.

Fig 3. Scatterplots of SRTs in diffuse noise.

SRTs are shown as a function of (clockwise from top left): age at testing, pure-tone average (PTA) threshold in the better ear, phoneme recognition scores (PRS) in quiet, and word recognition scores (WRS) in quiet. In each panel, the different symbols show data for the different hearing groups. The diagonal lines show linear regression fits to all data in the plot; r and p values are shown in the lower right of each panel.

https://doi.org/10.1371/journal.pone.0274435.g003

While the inclusion criteria required less than 20 dB inter-aural asymmetry in terms of PTA thresholds, the mean asymmetry increased across the hearing status groups: 2.3 ± 2.2, 3.6 ± 3.2, 4.6 ± 3.7, and 5.7 ± 3.7 dB for the TH, Mild, Mod-1, and Mod-2 groups, respectively. A linear mixed model, with hearing status group (TH, Mild, Mod-1, Mod-2) as the fixed factor and participant as the random factor, showed a significant effect of hearing status group on PTA inter-aural asymmetry [F(3, 293) = 9.9, p < 0.001]. Post-hoc Bonferroni-corrected pairwise comparisons showed that asymmetry was significantly larger for the Mod-1 and Mod-2 groups than for the TH or Mild groups (p < 0.001 for all comparisons), with no significant difference between the Mod-1 and Mod-2 groups, or between the TH and Mild groups. Across all participants, linear regression analysis showed that SRTs in diffuse noise were weakly (but significantly) correlated with PTA inter-aural asymmetry (r2 = 0.03, p = 0.002), suggesting little clinical relevance.

Forward stepwise linear regression analysis was performed on the SRT data. Predictors entered into the model included: age at testing; thresholds in the better ear at 500, 1000, 2000, 4000, 6000, and 8000 Hz; PTA threshold in the better ear; word recognition scores; and phoneme recognition scores. Complete results of the model are shown in Table 4. According to the criteria to enter (p < 0.05), the following four predictors were included in the model: PTA threshold in the better ear, word recognition scores, phoneme recognition scores, and threshold at 1000 Hz in the better ear. Note that while age at testing was significantly correlated with SRTs in diffuse noise, it was also significantly correlated with PTA thresholds in the better ear, word recognition scores, and phoneme recognition scores, indicating substantial collinearity (Table 3); according to the criteria to enter, age at testing was not included in the model. The predictive value with only PTA in the better ear was large and significant (r2 = 0.57, p < 0.001). When PTA in the better ear was combined with word recognition scores, the predictive value increased (r2 = 0.65, p < 0.001). Adding phoneme recognition scores and 1000-Hz thresholds in the better ear marginally increased the predictive value (r2 = 0.66, p < 0.001). After consideration of collinearity among predictors (variance inflation factor > 5), the model that included PTA threshold in the better ear and word recognition scores in quiet best explained the variability in SRTs in diffuse noise (r2 = 0.65, p < 0.001).

Table 4. Results from forward stepwise linear regression model.

SRT in diffuse noise was the dependent variable. The four predictors in the model were pure-tone average (PTA) threshold in the better ear, word recognition scores (WRS) in quiet, phoneme recognition scores (PRS) in quiet, and thresholds in the better ear at 1000 Hz. Coefficients of the model are shown at left, results from the analysis of variance (ANOVA) are shown in the middle, and the prediction values (r and r2) are shown at right. The asterisks indicate significant effects for the coefficients and the ANOVA results. In the variance inflation factor (VIF) column, the italicized values indicate substantial collinearity among the entered predictors in the model (VIF > 5).

https://doi.org/10.1371/journal.pone.0274435.t004
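
A comparable check of the final model and its collinearity could be sketched as follows (again Python/statsmodels rather than the SPSS stepwise procedure actually used; file and column names are hypothetical):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("srt_predictors.csv")  # hypothetical per-participant data
predictors = ["pta_better_ear", "wrs_quiet", "prs_quiet", "thr_1k_better_ear"]
X = sm.add_constant(df[predictors])

# Fit the four-predictor model, then flag collinear predictors (VIF > 5).
fit = sm.OLS(df["srt_diffuse"], X).fit()
vif = {name: variance_inflation_factor(X.values, i)
       for i, name in enumerate(X.columns) if name != "const"}
print(f"r2 = {fit.rsquared:.2f}", {k: round(v, 1) for k, v in vif.items()})
```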

SRTs in co-located vs. diffuse noise

SRTs with co-located or diffuse noise were measured in another 65 participants. With co-located noise, the mean SRT was -4.2 ± 1.1, 0.2 ± 3.2, 2.9 ± 2.6, and 8.4 ± 7.0 dB SNR for the TH, Mild, Mod-1, and Mod-2 groups, respectively. With diffuse noise, the mean SRT was -8.9 ± 1.4, -2.4 ± 4.6, 1.9 ± 4.3, and 8.8 ± 9.2 dB SNR for the TH, Mild, Mod-1, and Mod-2 groups, respectively. The left panel of Fig 4 shows SRTs in diffuse noise as a function of SRTs in co-located noise. Linear mixed-model analysis was performed on the SRT data, with noise configuration (co-located with speech, diffuse) and hearing status group (TH, Mild, Mod-1, Mod-2) as fixed factors and participant as the random factor. Results showed significant effects for noise configuration [F(1, 61) = 15.3, p < 0.001] and hearing status group [F(3, 61) = 30.3, p < 0.001]; there was a significant interaction [F(3, 46) = 4.1, p = 0.010]. Post-hoc Bonferroni pairwise comparisons showed that SRTs were significantly lower (better) in diffuse noise than in co-located noise only for the TH (p < 0.001) and Mild groups (p = 0.003). For co-located noise, SRTs were significantly lower for the TH group than for the Mild (p = 0.043), Mod-1 (p < 0.001) and Mod-2 groups (p < 0.001), significantly lower for the Mild and Mod-1 groups than for the Mod-2 group (p < 0.001 for both comparisons), with no significant difference between the Mild and Mod-1 groups. For diffuse noise, SRTs were significantly lower for the TH group than for the Mild (p = 0.001), Mod-1 (p < 0.001) and Mod-2 groups (p < 0.001), significantly lower for the Mild than for the Mod-1 (p = 0.018) and Mod-2 groups (p < 0.001), and significantly lower for the Mod-1 than for the Mod-2 group (p < 0.001). Linear regression analysis across all data showed a significant correlation between SRTs in co-located and diffuse noise (r2 = 0.71, p < 0.001).

Fig 4. SRTs in diffuse vs. co-located noise.

Left: SRTs in diffuse noise as a function of SRTs in co-located noise for the different hearing status groups. The solid diagonal line shows unity; values below the diagonal indicate that SRTs were lower with diffuse noise than with co-located noise. Right: Masking release with diffuse noise (SRTs in co-located noise minus SRTs in diffuse noise) as a function of SRTs in co-located noise for the different hearing status groups. Values greater than 0 indicate masking release with diffuse noise; values less than 0 indicate interference with diffuse noise.

https://doi.org/10.1371/journal.pone.0274435.g004

Masking release with diffuse noise was calculated as the difference between SRTs with co-located noise and SRTs with diffuse noise. The mean masking release with diffuse noise was 4.8 ± 1.8, 2.6 ± 3.7, 1.1 ± 3.8, and -0.5 ± 6.0 dB for the TH, Mild, Mod-1, and Mod-2 groups, respectively. The right panel of Fig 4 shows masking release with diffuse noise as a function of SRTs in co-located noise. As noted above, SRTs were significantly different between diffuse and co-located noise only for the TH and Mild groups, indicating that only the TH and Mild groups experienced significant masking release with diffuse noise. Linear mixed-model analysis was performed on the masking release data, with hearing status group (TH, Mild, Mod-1, Mod-2) as the fixed factor and participant as the random factor. Results showed a significant effect of group [F(3, 61) = 4.1, p = 0.010]. Post-hoc Bonferroni pairwise comparisons showed that masking release was significantly larger for the TH group than for the Mod-2 group (p = 0.011), with no significant differences among the remaining groups. Across all participants, linear regression analysis showed a weak (but significant) relationship between masking release and SRTs in co-located noise (r2 = 0.06, p = 0.046).
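
The masking release computation itself is a simple per-participant difference; a minimal sketch (ours, with hypothetical file and column names):

```python
import pandas as pd

# Long format: one SRT per participant and noise configuration.
srt = pd.read_csv("srt_configs.csv")  # participant, group, config, srt
wide = srt.pivot(index="participant", columns="config", values="srt")

# Positive values indicate benefit from spatial cues; negative, interference.
wide["masking_release"] = wide["colocated"] - wide["diffuse"]
groups = srt.drop_duplicates("participant").set_index("participant")["group"]
print(wide["masking_release"].groupby(groups).mean())
```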

Discussion

The main questions of this study were: 1) What factors predict speech understanding in diffuse noise? and 2) How does speech understanding differ in diffuse versus co-located noise? Regarding the first question, significant relationships were observed between SRTs in diffuse noise and PTA thresholds, age at testing, word recognition scores, and phoneme recognition scores (p < 0.001 for all correlations; Table 3; Fig 3). However, there was also great variability in terms of demographic factors, word recognition scores, phoneme recognition scores, and SRTs in diffuse noise (Table 2). For example, SRTs in diffuse noise spanned ranges of 19.4, 21.7, 28.5, and 25.5 dB for the TH, Mild, Mod-1, and Mod-2 groups, respectively. The range for age at testing was similarly broad: 63.2, 69.5, 56.0, and 28.6 years for the TH, Mild, Mod-1, and Mod-2 groups, respectively.

Predictors of SRTs in diffuse noise

Stepwise linear regression showed that SRTs in diffuse noise were largely predicted by a combination of PTA thresholds and word recognition scores in quiet (r2 = 0.66, p < 0.001). Given the predictably strong correlation between word and phoneme recognition scores (r2 = 0.98, p < 0.001), only one of these speech measures was needed for the model. While age at testing was also predictive of SRTs in diffuse noise (r2 = 0.34, p < 0.001), it was strongly correlated with PTA thresholds (r2 = 0.49, p < 0.001). Thus, SRTs in diffuse noise appeared to be predicted by a combination of relatively peripheral (PTA thresholds) and more central auditory processing (word recognition scores). The predictive value of PTA thresholds and word recognition in quiet is in line with Plomp’s [39] finding that speech understanding in noise in hard of hearing listeners was limited by a combination of audibility and signal distortion. Similar to the present findings, Kuehne [40] found a significant correlation between sentence recognition in noise and word recognition in quiet. However, Wilson [41] found no significant relationship between word recognition in quiet and in noise, suggesting that testing in noise may capture some different aspect of speech perception in hard of hearing listeners. Across all participants, we found a strong correlation between SRTs in diffuse noise and PTA thresholds (r2 = 0.56, p < 0.001) and word recognition scores (r2 = 0.55, p < 0.001). However, within the hearing status groups, correlations were somewhat weaker. SRTs in diffuse noise were significantly correlated with PTA thresholds in the TH (r2 = 0.29; p < 0.001) and Mod-1 (r2 = 0.23; p < 0.001) groups, and with word recognition scores in the Mild (r2 = 0.22, p < 0.001), Mod-1 (r2 = 0.25, p < 0.001), and Mod-2 (r2 = 0.29, p < 0.001) groups.

Effect of hearing loss on benefit of spatial cues

Regarding the second main research question, SRTs were significantly lower in diffuse noise than in co-located noise only for the TH and Mild groups. Similar to previous studies [14–19], the Mod-1 and Mod-2 groups benefitted less from spatial cues than did the TH and Mild groups. In adults, speech understanding in noise and spatial release from masking have been shown to worsen with increasing hearing loss [6, 7, 14, 16–18, 20], possibly due to reduced audibility of the target speech and/or suprathreshold distortion of the speech signal due to hearing loss [17, 39, 42, 43]. There was a weak but significant correlation between SRTs in co-located noise and masking release with diffuse noise (r2 = 0.06, p = 0.046). Taken together, the data suggest that testing with diffuse and co-located noise may reveal different deficits in speech understanding among hard of hearing listeners.

Clinical implications

As of 2018, speech-in-noise testing is mandatory in France for all auditory evaluations of hearing aid candidates and recipients [34]. While a 5-loudspeaker setup is installed in many clinical hearing centers throughout France, as recommended by the French Society of Otorhinolaryngology-Head and Neck Surgery [34], speech understanding in noise is most often measured using co-located noise. One motivation for this study was the great interest among Audilab centers regarding how speech understanding might differ between co-located and diffuse noise, especially in light of hearing loss. Assuming that the listener directly faces a single loudspeaker, co-located speech and noise offers greater stimulus control, in that speech and noise are subjected to the same room acoustics and are identically presented to each ear. As such, testing with co-located speech and noise involves a very simple test setup that may be preferable in the clinic. While co-located speech and noise offers good stimulus control, it does not occur in the natural world, except from electronic devices and public address systems. In everyday complex listening environments, target speech and maskers do not typically come from the same sound source. Provided that there are sufficient localization cues and abilities, a listener will typically turn to face a target talker and benefit from the spatial separation between the target and maskers [44–46]. Thus, the present target and noise speaker setup is in line with an ecologically valid listening situation after a head turn. Note that the same noise was presented from the four masker speaker locations. Due to room acoustics and pinna effects, the spectrum of the noise may have deviated across speakers at ear level [47]. Another approach that could easily be implemented would be to introduce a small delay in the noise across the speakers; this might help listeners to better localize the noise sources (see the sketch below).
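
A minimal sketch of that idea (ours, not part of the study setup): generate one delayed copy of the masker noise per loudspeaker so that the four noise sources are decorrelated at the ears. The delay values below are arbitrary examples.

```python
import numpy as np

def delayed_noise_copies(noise: np.ndarray, fs: int,
                         delays_ms=(0.0, 1.1, 2.3, 3.7)):
    """Return one delayed copy of `noise` (sample rate `fs` in Hz) per
    loudspeaker. Delay values are illustrations, not from the study."""
    copies = []
    for d in delays_ms:
        shift = int(round(d * fs / 1000.0))
        # Delay by zero-padding at the start, then trim to original length.
        copies.append(np.pad(noise, (shift, 0))[:len(noise)])
    return copies
```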

The recommended clinical French Matrix testing procedure is to measure SRTs three times, with the first two runs discarded as practice and familiarization. For all groups, SRTs for the first run were significantly poorer than for runs 2 and 3. For the TH and Mild groups, there was no significant difference between runs 2 and 3. For the Mod-1 and Mod-2 groups, SRTs were significantly poorer for run 2 than for run 3. This suggests that for individuals with moderate hearing loss, some learning may persist even at the third test run; it is unclear whether SRTs would stabilize with additional test runs. Note that this observation holds only for the present diffuse noise conditions, where noise was presented at 65 dBA and the speech level was adapted. Further testing with a greater number of hard of hearing listeners is needed to determine whether the same holds true for co-located speech and noise.

Interestingly, SRTs in diffuse noise < 20 dB SNR were obtained in 32 participants for whom word recognition scores were < 40% correct and in 9 participants for whom phoneme recognition scores were < 40% correct, suggesting that SRTs in diffuse noise may be obtained in listeners who exhibit poor word or phoneme recognition in quiet. While the syntax is fixed, the French Matrix test offers little context; in this study, the test was administered in an open-set paradigm, further reducing context cues. In clinical practice, poor word recognition in quiet often obviates testing of sentence recognition in noise. While testing with co-located speech and noise is the current clinical standard, the present data suggest that many hard of hearing listeners with poor word recognition in quiet may be capable of some degree of speech understanding in diffuse noise. If time allows, it may be worthwhile to capture speech performance for these listeners in both co-located and diffuse noise.

Limitations to the study

There was substantial variability in age at testing within the different hearing status groups (Table 2), and age at testing was significantly correlated with SRTs in diffuse noise. Cognitive function may have limited speech performance; unfortunately, this was not measured in the present study as it is not part of the standard of clinical care. In future research and/or clinical evaluation, measures of cognitive function may help to better understand the variability in speech performance across patients. Hearing loss has been associated with cognitive decline [48, 49]. Unfortunately, hearing aid usage and compliance is typically poor in patients with cognitive impairment [50].

In this study, unaided SRTs were measured in hard of hearing listeners. Given the maximum speech presentation level of 85 dBA (20 dB SNR) and the elevated PTA thresholds for the Mod-1 and Mod-2 groups, audibility of the target sentence may have been an issue. For the Mod-2 group, where the maximum PTA threshold was 65 dB HL and the maximum allowable SRT was 20 dB SNR (sentence presented at 85 dBA), the maximum sentence presentation level would be 20 dB sensation level (SL; the difference between sentence level and PTA threshold level). This is a much lower effective sentence level than for the TH group, where the maximum PTA threshold was 20 dB HL; at a 20 dB SNR, this would correspond to a maximum sentence presentation level of 65 dB SL. It is unclear how aided hearing may affect SRTs in diffuse noise. Hearing aids would improve audibility, but device amplitude compression may effectively reduce the SNR relative to the input SNR. Hearing aids may also distort other aspects of the signal, but hearing aid features such as noise reduction and directional microphones may improve SRTs in diffuse noise. Comparing unaided and aided SRTs in diffuse noise would be a valuable future study.

Finally, the inclusion criteria for this study required less than 20 dB inter-aural asymmetry in terms of PTA thresholds. Inter-aural asymmetry was significantly larger for the Mod-1 and Mod-2 groups than for the TH and Mild groups (p < 0.05 for all comparisons); asymmetry was also significantly correlated with SRTs in diffuse noise (p < 0.002). Because of this, PTA thresholds in the better ear were used in the stepwise linear regression model. The inter-aural asymmetry may have contributed to the somewhat large variability observed in SRTs within each of the hearing status groups (Fig 2). Reducing inter-aural asymmetry via optimal fitting of hearing aids may help to maximize hard of hearing listeners’ binaural perception of speech in spatialized noise [51–53].

Conclusions

In this study, speech understanding was measured in diffuse noise in 297 participants from 9 Audilab centers in France. Speech was delivered from the front speaker (0° azimuth) and noise was delivered from 4 loudspeakers (45°, 135°, 225°, and 315° azimuth). In another 65 participants, SRTs were measured in both diffuse and co-located noise. Major findings include:

  • SRTs in diffuse noise were significantly correlated with PTA thresholds, age at testing, word recognition scores in quiet, and phoneme recognition scores in quiet. A stepwise linear regression model showed that SRTs in diffuse noise were well-predicted by a combination of PTA thresholds and word recognition scores in quiet.
  • SRTs in diffuse noise were significantly lower than SRTs in co-located noise only for the TH and Mild groups. For the Mild, Mod-1, and Mod-2 groups, masking release with diffuse noise was not significantly correlated with SRTs in co-located noise, suggesting that hearing loss limited the benefit of spatially separated speech and noise.
  • Measuring speech performance in diffuse noise with unaided hearing may provide additional insight into difficulties that hard of hearing individuals experience in complex listening environments. One advantage of testing with unaided hearing is that differences in hearing aid signal processing and parameters across patients are not a factor in performance, provided there is sufficient audibility during testing. Also, diffuse noise provides spatial cues without the lexical informational masking associated with spatially separated competing speech.

Acknowledgments

We thank Dominique Ménétrier, as well as the clinicians at the Audilab centers for their assistance in data collection. We thank all participants for their time and effort.

References

  1. Brungart DS, Simpson BD, Ericson MA, Scott KR. Informational and energetic masking effects in the perception of multiple simultaneous talkers. J Acoust Soc Am. 2001 Nov;110(5 Pt 1):2527–38. pmid:11757942
  2. Durlach NI, Mason CR, Kidd G Jr, Arbogast TL, Colburn HS, Shinn-Cunningham BG. Note on informational masking. J Acoust Soc Am. 2003 Jun;113(6):2984–7. pmid:12822768
  3. Kidd G Jr, Mason CR, Swaminathan J, Roverud E, Clayton KK, Best V. Determining the energetic and informational components of speech-on-speech masking. J Acoust Soc Am. 2016 Jul;140(1):132. pmid:27475139
  4. Stone MA, Canavan S. The near non-existence of "pure" energetic masking release for speech: Extension to spectro-temporal modulation and glimpsing. J Acoust Soc Am. 2016 Aug;140(2):832. pmid:27586715
  5. Yost WA. Spatial release from masking based on binaural processing for up to six maskers. J Acoust Soc Am. 2017 Mar;141(3):2093. pmid:28372135
  6. Duquesnoy AJ. Effect of a single interfering noise or speech source upon the binaural sentence intelligibility of aged persons. J Acoust Soc Am. 1983 Sep;74(3):739–43. pmid:6630729
  7. Bronkhorst AW, Plomp R. The effect of head-induced interaural time and level differences on speech intelligibility in noise. J Acoust Soc Am. 1988 Apr;83(4):1508–16. pmid:3372866
  8. Gelfand SA, Ross L, Miller S. Sentence reception in noise from one versus two sources: effects of aging and hearing loss. J Acoust Soc Am. 1988 Jan;83(1):248–56. pmid:3343444
  9. Kidd G Jr, Mason CR, Rohtla TL, Deliwala PS. Release from masking due to spatial separation of sources in the identification of nonspeech auditory patterns. J Acoust Soc Am. 1998 Jul;104(1):422–31. pmid:9670534
  10. Freyman RL, Helfer KS, McCall DD, Clifton RK. The role of perceived spatial separation in the unmasking of speech. J Acoust Soc Am. 1999 Dec;106(6):3578–88. pmid:10615698
  11. Freyman RL, Balakrishnan U, Helfer KS. Spatial release from informational masking in speech recognition. J Acoust Soc Am. 2001 May;109(5 Pt 1):2112–22. pmid:11386563
  12. Brown DK, Cameron S, Martin JS, Watson C, Dillon H. The North American Listening in Spatialized Noise-Sentences test (NA LiSN-S): Normative data and test-retest reliability studies for adolescents and young adults. J Am Acad Audiol. 2010 Nov-Dec;21(10):629–41. pmid:21376004
  13. Kidd G Jr, Mason CR, Best V, Marrone N. Stimulus factors influencing spatial release from speech-on-speech masking. J Acoust Soc Am. 2010 Oct;128(4):1965–78. pmid:20968368
  14. Bronkhorst AW, Plomp R. Binaural speech intelligibility in noise for hearing-impaired listeners. J Acoust Soc Am. 1989 Oct;86(4):1374–83. pmid:2808911
  15. Arbogast TL, Mason CR, Kidd G Jr. The effect of spatial separation on informational masking of speech in normal-hearing and hearing-impaired listeners. J Acoust Soc Am. 2005 Apr;117(4 Pt 1):2169–80. pmid:15898658
  16. Marrone N, Mason CR, Kidd G Jr. The effects of hearing loss and age on the benefit of spatial separation between multiple talkers in reverberant rooms. J Acoust Soc Am. 2008;124(5):3064–3075. pmid:19045792
  17. Best V, Mason CR, Kidd G Jr. Spatial release from masking in normally hearing and hearing-impaired listeners as a function of the temporal overlap of competing talkers. J Acoust Soc Am. 2011 Mar;129(3):1616–25. pmid:21428524
  18. Srinivasan NK, Jakien KM, Gallun FJ. Release from masking for small spatial separations: Effects of age and hearing loss. J Acoust Soc Am. 2016;140(1):EL73. pmid:27475216
  19. Best V, Mason CR, Swaminathan J, Roverud E, Kidd G Jr. Use of a glimpsing model to understand the performance of listeners with and without hearing loss in spatialized speech mixtures. J Acoust Soc Am. 2017 Jan;141(1):81. pmid:28147587
  20. Zobel BH, Wagner A, Sanders LD, Başkent D. Spatial release from informational masking declines with age: Evidence from a detection task in a virtual separation paradigm. J Acoust Soc Am. 2019 Jul;146(1):548. pmid:31370625
  21. Revit LJ, Schulein RB, Julstrom S. Toward accurate assessment of real-world hearing aid benefit. Hear Rev. 2002;9:34–38.
  22. Compton-Conley CL, Neuman AC, Killion MC, Levitt H. Performance of directional microphones for hearing aids: real-world versus simulation. J Am Acad Audiol. 2004 Jun;15(6):440–55. pmid:15341225
  23. Valente M, Mispagel KM, Tchorz J, Fabry D. Effect of type of noise and loudspeaker array on the performance of omnidirectional and directional microphones. J Am Acad Audiol. 2006 Jun;17(6):398–412. pmid:16866004
  24. Gifford RH, Revit LJ. Speech perception for adult cochlear implant recipients in a realistic background noise: effectiveness of preprocessing strategies and external options for improving speech recognition in noise. J Am Acad Audiol. 2010 Jul-Aug;21(7):441–51; quiz 487–8. pmid:20807480
  25. Firszt JB, Holden LK, Reeder RM, Waltzman SB, Arndt S. Auditory abilities after cochlear implantation in adults with unilateral deafness: a pilot study. Otol Neurotol. 2012;33(8):1339–1346. pmid:22935813
  26. Kolberg ER, Sheffield SW, Davis TJ, Sunderhaus LW, Gifford RH. Cochlear implant microphone location affects speech recognition in diffuse noise. J Am Acad Audiol. 2015 Jan;26(1):51–8; quiz 109–10. pmid:25597460
  27. Reeder RM, Cadieux J, Firszt JB. Quantification of speech-in-noise and sound localisation abilities in children with unilateral hearing loss and comparison to normal hearing peers. Audiol Neurootol. 2015;20 Suppl 1(0 1):31–7. pmid:25999162
  28. Hawley ML, Litovsky RY, Culling JF. The benefit of binaural hearing in a cocktail party: effect of location and type of interferer. J Acoust Soc Am. 2004 Feb;115(2):833–43. pmid:15000195
  29. Hu H, Dietz M, Williges B, Ewert SD. Better-ear glimpsing with symmetrically-placed interferers in bilateral cochlear implant users. J Acoust Soc Am. 2018 Apr;143(4):2128. pmid:29716260
  30. Avivi-Reich M, Fifield B, Schneider BA. Can the diffuseness of sound sources in an auditory scene alter speech perception? Atten Percept Psychophys. 2020 Jun;82(3):1443–1458. pmid:31410762
  31. Dillier N, Lai WK. Speech Intelligibility in Various Noise Conditions with the Nucleus® 5 CP810 Sound Processor. Audiol Res. 2015 Oct 7;5(2):132. pmid:26779327
  32. Mosnier I, Mathias N, Flament J, Amar D, Liagre-Callies A, Borel S, et al. Benefit of the UltraZoom beamforming technology in noise in cochlear implant users. Eur Arch Otorhinolaryngol. 2017 Sep;274(9):3335–3342. pmid:28664331
  33. Soede W, Bilsen FA, Berkhout AJ. Assessment of a directional microphone array for hearing-impaired listeners. J Acoust Soc Am. 1993 Aug;94(2 Pt 1):799–808. pmid:8370886
  34. Joly CA, Reynard P, Mezzi K, Bakhos D, Bergeron F, Bonnard D, et al. Guidelines of the French Society of Otorhinolaryngology-Head and Neck Surgery (SFORL) and the French Society of Audiology (SFA) for speech-in-noise testing in adults. Eur Ann Otorhinolaryngol Head Neck Dis. 2021 Jun 14:S1879-7296(21)00091-0. pmid:34140263
  35. Stevens G, Flaxman S, Brunskill E, Mascarenhas M, Mathers CD, Finucane M, et al. Global and regional hearing impairment prevalence: An analysis of 42 studies in 29 countries. Eur J Public Health. 2013 Feb;23(1):146–52. pmid:22197756
  36. Lafon JC. Le test phonétique. Les Cahiers de la Compagnie Française d’Audiologie. 1958;5.
  37. Lafon JC. Le test phonétique et la mesure de l’audition. Eindhoven: Centrex; 1964.
  38. Jansen S, Luts H, Wagener KC, Kollmeier B, Del Rio M, Dauman R, et al. Comparison of three types of French speech-in-noise tests: a multi-center study. Int J Audiol. 2012 Mar;51(3):164–73. pmid:22122354
  39. Plomp R. Auditory handicap of hearing impairment and the limited benefit of hearing aids. J Acoust Soc Am. 1978;63(2):533–549. pmid:670550
  40. Kuehne EM. Can performance on a speech-in-quiet monosyllabic word list predict speech-in-noise capabilities? Capstones & Scholarly Projects. 2019;48.
  41. Wilson RH. Clinical experience with the words-in-noise test on 3430 veterans: comparisons with pure-tone thresholds and word recognition in quiet. J Am Acad Audiol. 2011;22(7):405–423. pmid:21993048
  42. Best V, Thompson ER, Mason CR, Kidd G Jr. An energetic limit on spatial release from masking. J Assoc Res Otolaryngol. 2013;14(4):603–610. pmid:23649712
  43. Best V, Mason CR, Swaminathan J, Kidd G Jr, Jakien KM, Kampel SD, et al. On the contribution of target audibility to performance in spatialized speech mixtures. Adv Exp Med Biol. 2016;894:83–91. pmid:27080649
  44. Kidd G Jr, Arbogast TL, Mason CR, Gallun FJ. The advantage of knowing where to listen. J Acoust Soc Am. 2005 Dec;118(6):3804–15. pmid:16419825
  45. Grange JA, Culling JF. The benefit of head orientation to speech intelligibility in noise. J Acoust Soc Am. 2016 Feb;139(2):703–12. pmid:26936554
  46. Grange JA, Culling JF, Bardsley B, Mackinney LI, Hughes SE, Backhouse SS. Turn an ear to hear: How hearing-impaired listeners can exploit head orientation to enhance their speech intelligibility in noisy social settings. Trends Hear. 2018 Jan-Dec;22:2331216518802701. pmid:30334495
  47. Batteau DW. The role of the pinna in human localization. Proc R Soc Lond B Biol Sci. 1967 Aug 15;168(1011):158–80. pmid:4383726
  48. Lin FR. Hearing loss and cognition among older adults in the United States. J Gerontol A Biol Sci Med Sci. 2011 Oct;66(10):1131–6. pmid:21768501
  49. Lin FR, Yaffe K, Xia J, Xue QL, Harris TB, Purchase-Helzner E, et al. Hearing loss and cognitive decline in older adults. JAMA Intern Med. 2013 Feb 25;173(4):293–9. pmid:23337978
  50. Nirmalasari O, Mamo SK, Nieman CL, Simpson A, Zimmerman J, Nowrangi MA, et al. Age-related hearing loss in older adults with cognitive impairment. Int Psychogeriatr. 2017 Jan;29(1):115–121. pmid:27655111
  51. Wilson RH, Civitello BA, Margolis RH. Influence of interaural level differences on the speech recognition masking level difference. Audiology. 1985;24(1):15–24. pmid:3977779
  52. Yoon YS, Li Y, Kang HY, Fu QJ. The relationship between binaural benefit and difference in unilateral speech recognition performance for bilateral cochlear implant users. Int J Audiol. 2011 Aug;50(8):554–65. pmid:21696329
  53. Yoon YS, Shin YR, Gho JS, Fu QJ. Bimodal benefit depends on the performance difference between a cochlear implant and a hearing aid. Cochlear Implants Int. 2015 May;16(3):159–67. pmid:25329752