Phone Features Improve Speech Translation

Elizabeth Salesky, Alan W Black


Abstract
End-to-end models for speech translation (ST) couple speech recognition (ASR) and machine translation (MT) more tightly than a traditional cascade of separate ASR and MT models, with simpler model architectures and the potential for reduced error propagation. Their performance is often assumed to be superior, though in many conditions this is not yet the case. We compare cascaded and end-to-end models across high-, medium-, and low-resource conditions, and show that cascades remain stronger baselines. Further, we introduce two methods to incorporate phone features into ST models. We show that these features improve both architectures, closing the gap between end-to-end models and cascades, and outperforming previous academic work by up to 9 BLEU in our low-resource setting.
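For a concrete picture of what "incorporating phone features" can look like, here is a minimal sketch of one plausible variant: embedding a frame-level phone label and concatenating it with each frame's filterbank features before the ST encoder, in the spirit of source factors. The `PhoneFactoredFrontend` class, all dimensions, and the assumption of one phone ID per frame from an external recognizer are illustrative, not the paper's exact implementation; the authors' actual code is in the xnmt-devel repository linked below.

```python
import torch
import torch.nn as nn

class PhoneFactoredFrontend(nn.Module):
    """Sketch: augment each speech frame with an embedded phone label.

    Assumes frame-level phone IDs come from an external recognizer;
    all sizes are illustrative defaults, not values from the paper.
    """
    def __init__(self, num_phones=50, phone_dim=32, feat_dim=80):
        super().__init__()
        self.phone_emb = nn.Embedding(num_phones, phone_dim)
        self.out_dim = feat_dim + phone_dim  # encoder input size

    def forward(self, feats, phone_ids):
        # feats:     (batch, frames, feat_dim) filterbank features
        # phone_ids: (batch, frames) integer phone labels per frame
        phones = self.phone_emb(phone_ids)           # (batch, frames, phone_dim)
        return torch.cat([feats, phones], dim=-1)    # factored encoder input

# Usage: a 2-utterance batch of 100 frames with 80-dim features.
frontend = PhoneFactoredFrontend()
feats = torch.randn(2, 100, 80)
phone_ids = torch.randint(0, 50, (2, 100))
enc_input = frontend(feats, phone_ids)  # (2, 100, 112) -> ST encoder
```

Because the same frontend output feeds any sequence encoder, a sketch like this can slot into either a cascaded ASR component or an end-to-end ST model, which matches the abstract's claim that phone features help both architectures.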
Anthology ID: 2020.acl-main.217
Volume: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2020
Address: Online
Editors: Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 2388–2397
URL: https://aclanthology.org/2020.acl-main.217
DOI: 10.18653/v1/2020.acl-main.217
Cite (ACL):
Elizabeth Salesky and Alan W Black. 2020. Phone Features Improve Speech Translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2388–2397, Online. Association for Computational Linguistics.
Cite (Informal):
Phone Features Improve Speech Translation (Salesky & Black, ACL 2020)
PDF: https://aclanthology.org/2020.acl-main.217.pdf
Video: http://slideslive.com/38929284
Code: esalesky/xnmt-devel