ISCA Archive Interspeech 2019

Zero Resource Speech Synthesis Using Transcripts Derived from Perceptual Acoustic Units

Karthik Pandia D. S., Hema A. Murthy

Zero resource speech synthesis is the task of building vocabulary-independent speech synthesis systems when transcriptions are unavailable for the training data. It is therefore necessary to convert the training data into a sequence of fundamental acoustic units (AUs) that can be used for synthesis at test time. This paper attempts to discover and model perceptual acoustic units consisting of steady-state and transient regions in speech. The transients roughly correspond to CV and VC units, while the steady states correspond to sonorants and fricatives. The speech signal is first preprocessed by segmenting it into CVC-like units using a short-term energy-like contour. These CVC segments are then clustered using a connected-components-based graph clustering technique. The clustered CVC segments are initialized such that the onsets (CV) and decays (VC) correspond to transients, and the rhymes correspond to steady states. Following this initialization, the units are allowed to reorganize over the continuous speech into a final set of AUs in an HMM-GMM framework. The AU sequences thus obtained are used to train synthesis models. The performance of the proposed approach is evaluated on the ZeroSpeech 2019 challenge database. Subjective and objective scores show that reasonably good quality synthesis with low-bit-rate encoding can be achieved using the proposed AUs.
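To make the pipeline described above concrete, the following is a minimal Python sketch of its first two stages: segmenting an utterance into CVC-like spans at valleys of a short-term energy contour, then clustering the segments by thresholding a pairwise distance matrix and taking connected components of the resulting graph. The frame sizes, smoothing kernel, valley-picking rule, and the assumption of a precomputed distance matrix `dist` (e.g., DTW over spectral features) are illustrative choices, not the authors' exact settings; the abstract does not specify them.

    import numpy as np
    from scipy.signal import medfilt
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components

    def short_term_energy(signal, frame_len=400, hop=160):
        """Frame-wise short-term energy contour (sizes in samples)."""
        n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
        energy = np.empty(n_frames)
        for i in range(n_frames):
            frame = signal[i * hop : i * hop + frame_len]
            energy[i] = np.sum(frame ** 2)
        return energy

    def segment_at_valleys(energy, smooth=9):
        """Cut the utterance at local minima of the smoothed energy
        contour, yielding CVC-like (start, end) frame spans."""
        e = medfilt(energy, smooth)
        valleys = [i for i in range(1, len(e) - 1)
                   if e[i] < e[i - 1] and e[i] <= e[i + 1]]
        bounds = [0] + valleys + [len(e)]
        return [(bounds[k], bounds[k + 1]) for k in range(len(bounds) - 1)]

    def cluster_segments(dist, threshold):
        """Connected-components graph clustering: link any two segments
        whose pairwise distance is below `threshold`, then label each
        connected component of the graph as one cluster."""
        adj = csr_matrix(dist < threshold)
        n_clusters, labels = connected_components(adj, directed=False)
        return n_clusters, labels

Each resulting cluster would then seed one CVC unit whose onset, rhyme, and decay states are re-estimated over continuous speech in the HMM-GMM stage, which this sketch does not cover.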


doi: 10.21437/Interspeech.2019-2336

Cite as: S., K.P.D., Murthy, H.A. (2019) Zero Resource Speech Synthesis Using Transcripts Derived from Perceptual Acoustic Units. Proc. Interspeech 2019, 1113-1117, doi: 10.21437/Interspeech.2019-2336

@inproceedings{s19_interspeech,
  author={Karthik Pandia D. S. and Hema A. Murthy},
  title={{Zero Resource Speech Synthesis Using Transcripts Derived from Perceptual Acoustic Units}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={1113--1117},
  doi={10.21437/Interspeech.2019-2336}
}