ISCA Archive Interspeech 2022

Phonetic Analysis of Self-supervised Representations of English Speech

Dan Wells, Hao Tang, Korin Richmond

We present an analysis of discrete units discovered via self-supervised representation learning on English speech. We focus on units produced by a pre-trained HuBERT model due to its wide adoption in ASR, speech synthesis, and many other tasks. Whereas previous work has evaluated the quality of such quantization models in aggregate over all phones for a given language, we break our analysis down into broad phonetic classes, taking into account specific aspects of their articulation when considering their alignment to discrete units. We find that these units correspond to sub-phonetic events, and that fine dynamics such as the distinct closure and release portions of plosives tend to be represented by sequences of discrete units. Our work provides a reference for the phonetic properties of discrete units discovered by HuBERT, facilitating analyses of many speech applications based on this model.
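The discrete units described in the abstract are typically obtained by quantizing HuBERT's frame-level features against a set of k-means centroids, assigning each frame the index of its nearest centroid. The following is a minimal sketch of that quantization step only; random vectors stand in for real HuBERT features and learned centroids, and the dimensions (768-dim features, 100 clusters) are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for frame-level HuBERT features (one 768-dim vector per
# ~20 ms frame) and for k-means centroids learned on such features.
# In practice both would come from a pre-trained model and clustering run.
features = rng.normal(size=(200, 768))   # 200 frames
centroids = rng.normal(size=(100, 768))  # 100 discrete units

# Quantization: assign each frame the index of its nearest centroid.
dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=-1)
units = dists.argmin(axis=1)  # one discrete unit per frame

# Collapsing consecutive repeats yields the deduplicated unit sequence
# often used in unit-based speech applications.
dedup = [units[0]] + [u for prev, u in zip(units, units[1:]) if u != prev]
```

Because units are assigned per frame, a single phone spanning several frames can map to a run of identical or distinct units, which is what allows sub-phonetic events such as a plosive's closure and release to surface as sequences of units.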


doi: 10.21437/Interspeech.2022-10884

Cite as: Wells, D., Tang, H., Richmond, K. (2022) Phonetic Analysis of Self-supervised Representations of English Speech. Proc. Interspeech 2022, 3583-3587, doi: 10.21437/Interspeech.2022-10884

@inproceedings{wells22_interspeech,
  author={Dan Wells and Hao Tang and Korin Richmond},
  title={{Phonetic Analysis of Self-supervised Representations of English Speech}},
  year={2022},
  booktitle={Proc. Interspeech 2022},
  pages={3583--3587},
  doi={10.21437/Interspeech.2022-10884}
}