Our pilot vowel discrimination experiment addresses the competition between attentional focus and language exposure in noise using two groups of participants: L1 English speakers (L1-EN) and L1 Spanish speakers (L1-SP), both taking a perception test in Spanish. Our noise conditions crossed three signal-to-noise ratios (SNRs: -12, -6, and 0 decibels (dB)) with automatically generated multi-speaker background babble of 1-12 speakers. Our results show notable confusion by both groups in discriminating the back rounded vowels [o] and [u], regardless of L1 or language exposure. We attribute this confusion to the fact that tongue height, detectable through F1, is obscured by F3 (lip rounding). In the absence of visual input by which a listener can discriminate mid and high vowels via a control parameter such as lip aperture (or jaw angle), listeners experience notable difficulty in discerning vowel categories regardless of L1 or exposure to a target L2. Our results are consistent with the notion that both attentional focus and language exposure may provide advantages in vowel discrimination in noise, but compete in bottom-up/top-down processing.
Cite as: Gibson, M., Schlechtweg, M., Blecua Falgueras, B., Ayala Alcalde, J. (2022) Language-specific interactions of vowel discrimination in noise. Proc. Interspeech 2022, 3118-3122, doi: 10.21437/Interspeech.2022-10673
@inproceedings{gibson22_interspeech,
  author={Mark Gibson and Marcel Schlechtweg and Beatriz {Blecua Falgueras} and Judit {Ayala Alcalde}},
  title={{Language-specific interactions of vowel discrimination in noise}},
  year=2022,
  booktitle={Proc. Interspeech 2022},
  pages={3118--3122},
  doi={10.21437/Interspeech.2022-10673}
}