ISCA Archive Interspeech 2022

From Start to Finish: Latency Reduction Strategies for Incremental Speech Synthesis in Simultaneous Speech-to-Speech Translation

Danni Liu, Changhan Wang, Hongyu Gong, Xutai Ma, Yun Tang, Juan Pino

Speech-to-speech translation (S2ST) converts input speech into speech in another language. A challenge of delivering S2ST in real time is the accumulated delay between the translation and speech synthesis modules. While recent incremental text-to-speech (iTTS) models have shown large quality improvements, they typically require additional future text inputs to reach optimal performance. In this work, we minimize the initial waiting time of iTTS by adapting the upstream speech translator to generate a high-quality pseudo lookahead for the speech synthesizer. After mitigating the initial delay, we demonstrate that the duration of synthesized speech also plays a crucial role in latency. We formalize this as a latency metric and then present a simple yet effective duration-scaling approach for latency reduction. Our approaches consistently reduce latency by 0.2-0.5 seconds without sacrificing speech translation quality.
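The duration-scaling idea described above can be illustrated with a minimal sketch. The paper itself does not provide code; the function below is a hypothetical illustration assuming the synthesizer exposes per-unit durations (e.g. predicted phoneme durations in milliseconds) that can be uniformly compressed by a scaling factor to shorten the synthesized speech and thereby the accumulated delay:

```python
def scale_durations(durations_ms, factor):
    """Uniformly scale predicted unit durations (hypothetical sketch).

    durations_ms: list of per-unit durations in milliseconds.
    factor: scaling factor in (0, 1] -- values below 1.0 shorten
            the synthesized speech, reducing accumulated latency.
    Returns the scaled durations and the latency saved in ms.
    """
    if not 0.0 < factor <= 1.0:
        raise ValueError("factor must be in (0, 1]")
    scaled = [d * factor for d in durations_ms]
    # Latency saved equals the reduction in total speech duration.
    saved_ms = sum(durations_ms) - sum(scaled)
    return scaled, saved_ms


# Example: compressing 1.5 s of speech by 10% saves 150 ms.
scaled, saved = scale_durations([300.0, 500.0, 700.0], 0.9)
```

In an actual incremental S2ST pipeline, the scaling would be applied to the duration predictor's output before vocoding; the variable names and interface here are assumptions for illustration only.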


doi: 10.21437/Interspeech.2022-10568

Cite as: Liu, D., Wang, C., Gong, H., Ma, X., Tang, Y., Pino, J. (2022) From Start to Finish: Latency Reduction Strategies for Incremental Speech Synthesis in Simultaneous Speech-to-Speech Translation. Proc. Interspeech 2022, 1771-1775, doi: 10.21437/Interspeech.2022-10568

@inproceedings{liu22u_interspeech,
  author={Danni Liu and Changhan Wang and Hongyu Gong and Xutai Ma and Yun Tang and Juan Pino},
  title={{From Start to Finish: Latency Reduction Strategies for Incremental Speech Synthesis in Simultaneous Speech-to-Speech Translation}},
  year=2022,
  booktitle={Proc. Interspeech 2022},
  pages={1771--1775},
  doi={10.21437/Interspeech.2022-10568}
}