We present a lightweight, adaptable neural TTS system with high-quality
output. The system is composed of three separate neural network blocks:
prosody prediction, acoustic feature prediction, and a Linear Prediction
Coding Net (LPCNet) neural vocoder. The system can synthesize speech with
close to natural quality while running 3 times faster than real-time
on a standard CPU.
The modular setup of the system allows for simple adaptation to
new voices with a small amount of data.
We first demonstrate
the ability of the system to produce high quality speech when trained
on large, high quality datasets. Following that, we demonstrate its
adaptability by mimicking unseen voices using datasets of 5 to 20
minutes of speech with lower recording quality. Large-scale Mean Opinion
Score quality and similarity tests are presented, showing that the system
can adapt to unseen voices with a quality gap of 0.12 and a similarity
gap of 3% relative to natural speech for male voices, and a quality gap
of 0.35 and a similarity gap of 9% for female voices.
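
To make the three-block structure concrete, the sketch below (not the authors' code) chains a prosody predictor, an acoustic feature predictor, and a vocoder placeholder. The module names, layer sizes, the PyTorch framing, and the VocoderStub stand-in for LPCNet are illustrative assumptions, and phone-to-frame upsampling is glossed over; the point is only that the blocks are separate, so adaptation to a new voice can retrain or fine-tune one block on a few minutes of data while keeping the others fixed.

# Conceptual sketch of the three-block modular TTS pipeline described above.
# All names, dimensions, and the vocoder stand-in are illustrative assumptions.
import torch
import torch.nn as nn

class ProsodyPredictor(nn.Module):
    """Predicts per-phone prosody (e.g. duration and pitch) from linguistic features."""
    def __init__(self, in_dim=64, hidden=128, out_dim=2):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, out_dim)

    def forward(self, linguistic_feats):
        h, _ = self.rnn(linguistic_feats)
        return self.proj(h)

class AcousticPredictor(nn.Module):
    """Maps linguistic features plus predicted prosody to acoustic features
    (e.g. cepstra and pitch parameters consumed by the vocoder)."""
    def __init__(self, in_dim=64 + 2, hidden=256, out_dim=20):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, out_dim)

    def forward(self, linguistic_feats, prosody):
        h, _ = self.rnn(torch.cat([linguistic_feats, prosody], dim=-1))
        return self.proj(h)

class VocoderStub(nn.Module):
    """Placeholder for the LPCNet vocoder, which turns acoustic features into waveform samples."""
    def __init__(self, in_dim=20, samples_per_frame=160):
        super().__init__()
        self.proj = nn.Linear(in_dim, samples_per_frame)

    def forward(self, acoustic_feats):
        return self.proj(acoustic_feats).flatten(start_dim=1)

prosody_net, acoustic_net, vocoder = ProsodyPredictor(), AcousticPredictor(), VocoderStub()
feats = torch.randn(1, 50, 64)                    # 50 phone-level linguistic feature vectors
wave = vocoder(acoustic_net(feats, prosody_net(feats)))
print(wave.shape)                                 # (1, 50 * 160) synthetic samples
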
Cite as: Kons, Z., Shechtman, S., Sorin, A., Rabinovitz, C., Hoory, R. (2019) High Quality, Lightweight and Adaptable TTS Using LPCNet. Proc. Interspeech 2019, 176-180, doi: 10.21437/Interspeech.2019-1705
@inproceedings{kons19_interspeech,
  author={Zvi Kons and Slava Shechtman and Alex Sorin and Carmel Rabinovitz and Ron Hoory},
  title={{High Quality, Lightweight and Adaptable TTS Using LPCNet}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={176--180},
  doi={10.21437/Interspeech.2019-1705}
}