ISCA Archive Interspeech 2021

HMM-Free Encoder Pre-Training for Streaming RNN Transducer

Lu Huang, Jingyu Sun, Yufeng Tang, Junfeng Hou, Jinkun Chen, Jun Zhang, Zejun Ma

This work describes an encoder pre-training procedure that uses frame-wise labels to improve the training of a streaming recurrent neural network transducer (RNN-T) model. A streaming RNN-T trained from scratch usually performs worse than its non-streaming counterpart. Although it is common to address this issue by pre-training RNN-T components with other criteria or with frame-wise alignment guidance, such alignments are not easily obtained in an end-to-end framework. In this work, the frame-wise alignment used to pre-train the streaming RNN-T's encoder is generated without an HMM-based system, yielding an all-neural framework with HMM-free encoder pre-training. This is achieved by expanding the spikes of a CTC model to their left/right blank frames, and two expanding strategies are proposed. To the best of our knowledge, this is the first work to simulate HMM-based frame-wise labels with a CTC model for pre-training. Experiments on the LibriSpeech and MLS English tasks show that, compared with random initialization, the proposed pre-training procedure reduces WER by a relative 5%–11% and emission latency by 60 ms. Moreover, the method is lexicon-free, so it transfers easily to new languages without a manually designed lexicon.
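To make the spike-expansion idea concrete, below is a minimal Python sketch of one plausible expanding strategy: each non-blank spike from a greedy CTC decoding claims the blank frames around it, with blanks between two spikes split at the midpoint. This is an illustration under stated assumptions, not the paper's exact algorithm; the paper proposes two expanding strategies whose details are in the full text, and the function name `expand_ctc_spikes`, the `blank_id` parameter, and the input format (per-frame argmax labels) are hypothetical.

```python
import numpy as np

def expand_ctc_spikes(ctc_path, blank_id=0):
    """Expand each non-blank CTC spike over neighboring blank frames to
    simulate an HMM-style frame-wise label sequence (illustrative sketch).

    ctc_path: per-frame greedy CTC labels, e.g. the argmax over a trained
    CTC model's per-frame softmax outputs (assumed input format).
    """
    frame_labels = np.asarray(ctc_path, dtype=int).copy()
    spike_pos = np.flatnonzero(frame_labels != blank_id)
    if spike_pos.size == 0:
        return frame_labels  # all blank: nothing to expand

    # Leading/trailing blanks inherit the first/last spike's label.
    frame_labels[: spike_pos[0]] = frame_labels[spike_pos[0]]
    frame_labels[spike_pos[-1] :] = frame_labels[spike_pos[-1]]

    # Blanks between consecutive spikes are shared: the left half takes
    # the earlier spike's label, the right half the later spike's label.
    for left, right in zip(spike_pos[:-1], spike_pos[1:]):
        mid = (left + right + 1) // 2
        frame_labels[left:mid] = frame_labels[left]
        frame_labels[mid:right] = frame_labels[right]
    return frame_labels

# Example: spikes for tokens 3 and 5 are expanded over the blank frames,
# producing a dense frame-wise target usable for cross-entropy pre-training.
print(expand_ctc_spikes([0, 0, 3, 0, 0, 0, 5, 0]))  # -> [3 3 3 3 5 5 5 5]
```

The resulting dense labels could then serve as cross-entropy targets for pre-training the streaming encoder, in place of alignments from an HMM-based system.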


doi: 10.21437/Interspeech.2021-586

Cite as: Huang, L., Sun, J., Tang, Y., Hou, J., Chen, J., Zhang, J., Ma, Z. (2021) HMM-Free Encoder Pre-Training for Streaming RNN Transducer. Proc. Interspeech 2021, 1797-1801, doi: 10.21437/Interspeech.2021-586

@inproceedings{huang21e_interspeech,
  author={Lu Huang and Jingyu Sun and Yufeng Tang and Junfeng Hou and Jinkun Chen and Jun Zhang and Zejun Ma},
  title={{HMM-Free Encoder Pre-Training for Streaming RNN Transducer}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={1797--1801},
  doi={10.21437/Interspeech.2021-586}
}