ISCA Archive Interspeech 2013

An investigation of spectral restoration algorithms for deep neural networks based noise robust speech recognition

Bo Li, Yu Tsao, Khe Chai Sim

Deep Neural Networks (DNNs) are becoming widely accepted in automatic speech recognition (ASR) systems. Their deep, nonlinear processing greatly improves generalization, but performance under adverse acoustic environments remains unsatisfactory. In the literature, many techniques have been successfully developed to improve the robustness of Gaussian mixture models. Investigating the effectiveness of these techniques for DNNs is an important step towards thoroughly understanding their advantages, pinpointing their limitations and, most importantly, further improving them towards human-level robustness. In this paper, we investigate the effectiveness of speech enhancement using spectral restoration algorithms for DNNs. Four approaches are evaluated: the minimum mean-square error spectral estimator (MMSE), the maximum likelihood spectral amplitude estimator (MLSA), the maximum a posteriori spectral amplitude estimator (MAPA), and the generalized maximum a posteriori spectral amplitude algorithm (GMAPA). Preliminary experimental results on the Aurora 2 speech database show that, given multi-condition training data, the DNN itself is capable of learning robust representations. However, when only clean training data is available, MLSA is the most effective of the four spectral restoration algorithms for DNN training.
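All four estimators act as a frequency-dependent gain applied to the noisy magnitude spectrum before ASR features are extracted. As a rough illustration of this front-end idea only (not the exact estimators evaluated in the paper), the Python sketch below applies a Wiener-type MMSE gain with the common decision-directed a priori SNR estimate; the function name, parameter defaults, and leading-frames noise estimate are illustrative assumptions.

import numpy as np

def mmse_spectral_restoration(noisy_mag, noise_mag, alpha=0.98, gain_floor=0.1):
    # noisy_mag: (frames, bins) magnitude spectrogram of the noisy utterance
    # noise_mag: (bins,) noise magnitude estimate, e.g. averaged over
    #            leading non-speech frames (an illustrative heuristic)
    noise_psd = np.maximum(noise_mag ** 2, 1e-10)
    prev_clean_psd = np.zeros_like(noise_psd)
    restored = np.empty_like(noisy_mag)
    for t, frame in enumerate(noisy_mag):
        gamma = frame ** 2 / noise_psd                      # a posteriori SNR
        # decision-directed a priori SNR estimate
        xi = alpha * prev_clean_psd / noise_psd \
             + (1.0 - alpha) * np.maximum(gamma - 1.0, 0.0)
        gain = np.maximum(xi / (1.0 + xi), gain_floor)      # Wiener-type MMSE gain
        restored[t] = gain * frame
        prev_clean_psd = restored[t] ** 2
    return restored

# Example usage: estimate noise from the first 10 frames, then enhance.
# noisy = np.abs(stft_of_noisy_speech)   # STFT computed elsewhere
# enhanced = mmse_spectral_restoration(noisy, noisy[:10].mean(axis=0))

The enhanced magnitudes would then either be recombined with the noisy phase and inverted, or used directly for feature extraction feeding the DNN, which is the pipeline the paper's comparison presupposes.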


doi: 10.21437/Interspeech.2013-278

Cite as: Li, B., Tsao, Y., Sim, K.C. (2013) An investigation of spectral restoration algorithms for deep neural networks based noise robust speech recognition. Proc. Interspeech 2013, 3002-3006, doi: 10.21437/Interspeech.2013-278

@inproceedings{li13c_interspeech,
  author={Bo Li and Yu Tsao and Khe Chai Sim},
  title={{An investigation of spectral restoration algorithms for deep neural networks based noise robust speech recognition}},
  year=2013,
  booktitle={Proc. Interspeech 2013},
  pages={3002--3006},
  doi={10.21437/Interspeech.2013-278}
}