
DialogueSMM: Emotion Recognition in Conversation with Speaker-Aware Multimodal Multi-head Attention

  • Conference paper
  • In: Natural Language Processing and Chinese Computing (NLPCC 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14303)


Abstract

Emotion recognition in conversation (ERC) aims to automatically detect and track the emotional states of speakers in a dialogue, which is essential for social dialogue systems and decision-making. However, most existing ERC models use only textual information or fuse multimodal information in a simple way, such as concatenation. To fully leverage multimodal information, we propose DialogueSMM, a speaker-aware multimodal multi-head attention model for ERC, which effectively integrates the textual, audio, and visual modalities, accounts for different speakers, and exploits emotion clues. Experimental results on both English and Chinese benchmark datasets show that DialogueSMM outperforms competing state-of-the-art models.
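The abstract describes the architecture only at a high level. To make the core idea concrete, below is a minimal PyTorch sketch of what speaker-aware multimodal multi-head attention over textual, audio, and visual utterance features could look like. The class name, dimensions, speaker-embedding scheme, and fusion layer are illustrative assumptions made for this sketch; they are not taken from the paper and do not reproduce the authors' architecture.

```python
# Hypothetical sketch only: the module name, dimensions, speaker-embedding
# scheme, and fusion layer are assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class SpeakerAwareMultimodalAttention(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 8, n_speakers: int = 10):
        super().__init__()
        # Learned embedding per speaker identity (assumed representation).
        self.speaker_emb = nn.Embedding(n_speakers, d_model)
        # Cross-modal multi-head attention: text queries attend to the
        # audio and visual feature streams.
        self.text_audio = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.text_visual = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.fuse = nn.Linear(3 * d_model, d_model)

    def forward(self, text, audio, visual, speaker_ids):
        # text/audio/visual: (batch, utterances, d_model) feature sequences;
        # speaker_ids: (batch, utterances) integer speaker indices.
        text = text + self.speaker_emb(speaker_ids)      # inject speaker identity
        a, _ = self.text_audio(text, audio, audio)       # text attends to audio
        v, _ = self.text_visual(text, visual, visual)    # text attends to visual
        return self.fuse(torch.cat([text, a, v], dim=-1))


# Toy usage: 2 dialogues of 12 utterances each, 256-dim features per modality.
model = SpeakerAwareMultimodalAttention()
out = model(torch.randn(2, 12, 256), torch.randn(2, 12, 256),
            torch.randn(2, 12, 256), torch.randint(0, 10, (2, 12)))
print(out.shape)  # torch.Size([2, 12, 256])
```

Letting the text stream attend into the other modalities, rather than simply concatenating features, is one common way to let each utterance pick up emotion clues from tone of voice and facial expression; the paper's actual fusion and speaker modeling should be consulted in the full text.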



Acknowledgement

We would like to thank the anonymous reviewers for their insightful and valuable comments. This work was supported in part by the National Natural Science Foundation of China (Grant No. 62006211).

Author information

Correspondence to Yuxiang Jia.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Niu, C., Xu, S., Jia, Y., Zan, H. (2023). DialogueSMM: Emotion Recognition in Conversation with Speaker-Aware Multimodal Multi-head Attention. In: Liu, F., Duan, N., Xu, Q., Hong, Y. (eds) Natural Language Processing and Chinese Computing. NLPCC 2023. Lecture Notes in Computer Science, vol 14303. Springer, Cham. https://doi.org/10.1007/978-3-031-44696-2_40


  • DOI: https://doi.org/10.1007/978-3-031-44696-2_40

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44695-5

  • Online ISBN: 978-3-031-44696-2

  • eBook Packages: Computer Science, Computer Science (R0)
