ISCA Archive Interspeech 2015

A diversity-penalizing ensemble training method for deep learning

Xiaohui Zhang, Daniel Povey, Sanjeev Khudanpur

A common way to improve the performance of deep learning systems is to train an ensemble of neural networks and combine their outputs during decoding. However, this is computationally expensive at test time. In this paper, we propose a diversity-penalizing ensemble training (DPET) procedure, which trains an ensemble of differently initialized DNNs while penalizing differences between each individual DNN's output and the ensemble's average output. This way each model learns to emulate the average of the whole ensemble of models, and at test time we can use a single, arbitrarily chosen member of the ensemble. Experimental results on a variety of speech recognition tasks show that this technique is effective: it retains most of the WER improvement of the ensemble method while being no more expensive at test time than a single model.
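The abstract describes a per-model objective of the form "usual training loss plus a penalty on the difference between the model's output and the ensemble average." The exact penalty function is not specified here, so the following is only a minimal NumPy sketch assuming cross-entropy as the base loss and a squared-difference penalty between each model's posterior and the ensemble mean; the function name `dpet_loss` and the `penalty_weight` parameter are illustrative, not from the paper.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dpet_loss(logits_list, targets, penalty_weight=0.5):
    """Sketch of a diversity-penalizing ensemble objective.

    Each ensemble member is charged its usual cross-entropy loss plus a
    penalty on the squared difference between its output distribution and
    the ensemble-average distribution. (Assumed penalty form; the paper's
    actual penalty may differ.)

    logits_list : list of (batch, classes) arrays, one per ensemble member
    targets     : (batch,) integer class labels
    Returns one scalar loss per ensemble member.
    """
    probs = [softmax(l) for l in logits_list]
    avg = np.mean(probs, axis=0)          # ensemble-average posterior
    n = targets.shape[0]
    losses = []
    for p in probs:
        ce = -np.log(p[np.arange(n), targets] + 1e-12).mean()
        diversity = ((p - avg) ** 2).sum(axis=-1).mean()
        losses.append(ce + penalty_weight * diversity)
    return losses
```

Because the penalty vanishes when all members agree, minimizing it drives each model toward the ensemble average, which is what lets a single member stand in for the whole ensemble at decoding time.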


doi: 10.21437/Interspeech.2015-712

Cite as: Zhang, X., Povey, D., Khudanpur, S. (2015) A diversity-penalizing ensemble training method for deep learning. Proc. Interspeech 2015, 3590-3594, doi: 10.21437/Interspeech.2015-712

@inproceedings{zhang15i_interspeech,
  author={Xiaohui Zhang and Daniel Povey and Sanjeev Khudanpur},
  title={{A diversity-penalizing ensemble training method for deep learning}},
  year=2015,
  booktitle={Proc. Interspeech 2015},
  pages={3590--3594},
  doi={10.21437/Interspeech.2015-712}
}