The Dos and Don'ts of Affect Analysis

ABSTRACT
As an inseparable and crucial component of communication, affects play a substantial role in human-device and human-human interaction. They convey information about a person's specific traits and states [1, 4, 5], how one feels about the aims of a conversation, the trustworthiness of one's verbal communication [3], and the degree of adaptation in interpersonal speech [2]. This multifaceted nature of human affects poses a great challenge for machine learning systems that aim to recognise and understand them automatically. Contemporary self-supervised learning architectures such as Transformers, which define the state of the art (SOTA) in this area, show noticeable deficits in explainability, while more conventional, non-deep machine learning methods, which offer greater transparency, often fall (far) behind SOTA systems. So, is it possible to get the best of these two 'worlds'? And more importantly, at what price? In this talk, I provide a set of Dos and Don'ts for addressing affective computing tasks with respect to (i) preserving the privacy of affective data and of individuals/groups, (ii) processing such data efficiently and transparently, (iii) ensuring reproducibility of results, (iv) distinguishing causation from correlation, and (v) properly applying social and ethical protocols.
REFERENCES

[1] Shahin Amiriparian, Lukas Christ, Andreas König, Eva-Maria Meßner, Alan Cowen, Erik Cambria, and Björn W. Schuller. 2022. MuSe 2022 Challenge: Multimodal Humour, Emotional Reactions, and Stress. In Proceedings of the 30th ACM International Conference on Multimedia (MM '22), October 10--14, 2022, Lisbon, Portugal. Association for Computing Machinery. 3 pages, to appear.
[2] Shahin Amiriparian, Jing Han, Maximilian Schmitt, Alice Baird, Adria Mallol-Ragolta, Manuel Milling, Maurice Gerczuk, and Björn Schuller. 2019. Synchronization in Interpersonal Speech. Frontiers in Robotics and AI, Vol. 6 (2019). https://doi.org/10.3389/frobt.2019.00116
[3] Shahin Amiriparian, Jouni Pohjalainen, Erik Marchi, Sergey Pugachevskiy, and Björn Schuller. 2016. Is Deception Emotional? An Emotion-Driven Predictive Approach. In Interspeech 2016. 2011--2015. https://doi.org/10.21437/Interspeech.2016-565
[4] Lukas Christ, Shahin Amiriparian, Alice Baird, Panagiotis Tzirakis, Alexander Kathan, Niklas Müller, Lukas Stappen, Eva-Maria Meßner, Andreas König, Alan Cowen, Erik Cambria, and Björn W. Schuller. 2022. The MuSe 2022 Multimodal Sentiment Analysis Challenge: Humor, Emotional Reactions, and Stress. In Proceedings of the 3rd Multimodal Sentiment Analysis Challenge. Association for Computing Machinery, Lisbon, Portugal. Workshop held at ACM Multimedia 2022, to appear.
[5] Björn Schuller, Stefan Steidl, Anton Batliner, Alessandro Vinciarelli, Klaus Scherer, Fabien Ringeval, Mohamed Chetouani, Felix Weninger, Florian Eyben, Erik Marchi, et al. 2013. The INTERSPEECH 2013 Computational Paralinguistics Challenge: Social Signals, Conflict, Emotion, Autism. In Proceedings of INTERSPEECH.