Abstract
Multimodal learning has demonstrated clear advantages in sentiment analysis tasks owing to the richer information available across modalities, especially complementary information. However, our study shows that multimodal data provide not only useful complementary information but also information that is irrelevant to, or conflicts with, the sentiment-prediction task, which can degrade the training of multimodal models. To tackle this problem, we propose a Learnable Masked AutoEncoder (LMAE) that removes the irrelevant or conflicting features of each modality with a learned mask. The selected features are then fused by cross-modal attention. Experiments on samples with conflicting information across modalities and on two benchmark datasets, CMU-MOSI and CMU-MOSEI, demonstrate the superiority of our proposal over seven state-of-the-art methods.
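For intuition, the two stages the abstract describes (a learned per-modality feature mask, followed by cross-modal attention fusion) might be sketched in PyTorch as below. All module names, shapes, and the sigmoid gating are illustrative assumptions, not the authors' implementation; in particular, this sketch omits the autoencoder's reconstruction objective that the LMAE presumably trains the mask with.

```python
import torch
import torch.nn as nn

class LearnableMask(nn.Module):
    """Per-feature gate in [0, 1]; a hypothetical stand-in for the learned mask."""
    def __init__(self, dim):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(dim))  # one gate logit per feature

    def forward(self, x):                      # x: (batch, seq, dim)
        return x * torch.sigmoid(self.logits)  # gate broadcasts over batch and sequence

class CrossModalFusion(nn.Module):
    """One modality's features query another's via standard multi-head attention."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text, other):
        fused, _ = self.attn(query=text, key=other, value=other)
        return fused

# Toy usage with made-up feature sizes: mask each modality, then fuse.
text, audio = torch.randn(2, 20, 64), torch.randn(2, 50, 64)
mask_t, mask_a = LearnableMask(64), LearnableMask(64)
fusion = CrossModalFusion(64)
out = fusion(mask_t(text), mask_a(audio))  # (2, 20, 64)
```

A sigmoid gate keeps the mask differentiable end-to-end; a hard binary mask would instead require a relaxation such as Gumbel-Softmax or a straight-through estimator.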
H. Chen and H. Xuan contributed equally to this work.
This work was supported in part by the Guangdong Provincial Key Research and Development Programme under Grant 2021B0101410002.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Liang, X., Chen, H., Xuan, H., Zhou, Y. (2024). MetaSelection: A Learnable Masked AutoEncoder for Multimodal Sentiment Feature Selection. In: Liu, Q., et al. Pattern Recognition and Computer Vision. PRCV 2023. Lecture Notes in Computer Science, vol 14431. Springer, Singapore. https://doi.org/10.1007/978-981-99-8540-1_14
DOI: https://doi.org/10.1007/978-981-99-8540-1_14
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8539-5
Online ISBN: 978-981-99-8540-1
eBook Packages: Computer Science, Computer Science (R0)