Recently, there has been a significant amount of work on the recognition of emotions from speech and biosignals. Most approaches to emotion recognition to date concentrate on a single modality and do not exploit the fact that an integrated multimodal analysis may help to resolve ambiguities and compensate for errors. In this paper, we describe several methods for fusing physiological and voice data at the feature level and the decision level, as well as a hybrid integration scheme. The results of the integrated recognition approach are then compared with the individual recognition results from each modality.
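To make the two fusion levels mentioned in the abstract concrete, here is a minimal sketch in Python. It is not the authors' implementation; the feature vectors, the four-class posteriors, and the weighting rule are illustrative assumptions. Feature-level fusion joins the unimodal feature vectors before classification, while decision-level fusion combines the outputs of separately trained unimodal classifiers.

```python
# Illustrative sketch only: the vectors, class count, and weights below
# are hypothetical, not taken from the paper.

def feature_level_fusion(speech_feats, physio_feats):
    """Concatenate per-segment speech and physiological feature
    vectors into one joint vector for a single classifier."""
    return speech_feats + physio_feats

def decision_level_fusion(speech_posteriors, physio_posteriors, w_speech=0.5):
    """Combine per-class posteriors from two unimodal classifiers
    by a weighted average (one common decision-level rule)."""
    w_physio = 1.0 - w_speech
    return [w_speech * s + w_physio * p
            for s, p in zip(speech_posteriors, physio_posteriors)]

# Example: posteriors over four emotion classes from each modality.
fused = decision_level_fusion([0.6, 0.2, 0.1, 0.1],
                              [0.2, 0.5, 0.2, 0.1],
                              w_speech=0.6)
```

A hybrid scheme, as the abstract suggests, would combine both ideas, e.g. feeding a feature-level classifier's output into the decision-level combination alongside the unimodal classifiers.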
Cite as: Kim, J., André, E., Rehm, M., Vogt, T., Wagner, J. (2005) Integrating information from speech and physiological signals to achieve emotional sensitivity. Proc. Interspeech 2005, 809-812, doi: 10.21437/Interspeech.2005-380
@inproceedings{kim05c_interspeech,
  author={Jonghwa Kim and Elisabeth André and Matthias Rehm and Thurid Vogt and Johannes Wagner},
  title={{Integrating information from speech and physiological signals to achieve emotional sensitivity}},
  year={2005},
  booktitle={Proc. Interspeech 2005},
  pages={809--812},
  doi={10.21437/Interspeech.2005-380}
}