ISCA Archive Interspeech 2015

Full multicondition training for robust i-vector based speaker recognition

Dayana Ribas, Emmanuel Vincent, José Ramón Calvo

Multicondition training (MCT) is an established technique for handling noisy and reverberant conditions. Previous work in the field of i-vector based speaker recognition has applied MCT to linear discriminant analysis (LDA) and probabilistic LDA (PLDA), but not to the universal background model (UBM) and the total variability (T) matrix, arguing that this would be too time-consuming because the training set grows by a factor equal to the number of noise and reverberation conditions. In this paper, we propose a full MCT approach that applies MCT in all stages of training, including the UBM and the T matrix, while keeping the size of the training set fixed. Experiments in highly nonstationary noise conditions show a decrease of the equal error rate (EER) to 14.16%, compared to 17.90% for clean training and 18.08% for MCT of LDA and PLDA only. We also evaluate the impact of state-of-the-art multichannel speech enhancement and show a further reduction of the EER down to 10.47%.
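The key idea of keeping the training set size fixed can be sketched as follows: instead of replicating every clean utterance under all distortion conditions (which multiplies the set size by the number of conditions), each utterance is paired with a single condition. This is a minimal illustrative sketch, not the paper's exact procedure; the file names, condition labels, and random-assignment policy are assumptions.

```python
import random

def full_mct_assign(utterances, conditions, seed=0):
    """Assign exactly one distortion condition to each training utterance.

    The resulting multicondition set has the same size as the clean set,
    rather than len(utterances) * len(conditions) as in naive MCT.
    Hypothetical helper for illustration only.
    """
    rng = random.Random(seed)
    return [(utt, rng.choice(conditions)) for utt in utterances]

# Illustrative inputs (names are placeholders, not from the paper).
utts = ["spk1_utt1.wav", "spk1_utt2.wav", "spk2_utt1.wav", "spk2_utt2.wav"]
conds = ["clean", "babble_5dB", "street_10dB", "reverb_0.5s"]

mct_set = full_mct_assign(utts, conds)
print(len(mct_set))  # same size as the clean training set
```

Random assignment spreads the conditions across speakers, so every training stage (UBM, T matrix, LDA, PLDA) still sees all conditions without any growth in training time.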


doi: 10.21437/Interspeech.2015-284

Cite as: Ribas, D., Vincent, E., Calvo, J.R. (2015) Full multicondition training for robust i-vector based speaker recognition. Proc. Interspeech 2015, 1057-1061, doi: 10.21437/Interspeech.2015-284

@inproceedings{ribas15_interspeech,
  author={Dayana Ribas and Emmanuel Vincent and José Ramón Calvo},
  title={{Full multicondition training for robust i-vector based speaker recognition}},
  year=2015,
  booktitle={Proc. Interspeech 2015},
  pages={1057--1061},
  doi={10.21437/Interspeech.2015-284}
}