
MetaSelection: A Learnable Masked AutoEncoder for Multimodal Sentiment Feature Selection

  • Conference paper
Pattern Recognition and Computer Vision (PRCV 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14431)

Abstract

Multimodal learning has demonstrated a clear advantage in sentiment analysis tasks because different modalities provide richer, and in particular complementary, information. However, our study shows that multimodal data not only provides useful complementary information but also contains information that is irrelevant to, or conflicts with, the sentiment prediction task, which can degrade the training effectiveness of multimodal models. To tackle this problem, we propose a Learnable Masked AutoEncoder (LMAE) that eliminates the irrelevant or conflicting features of each modality via a learned mask. The selected features from the modalities are then fused by cross-modal attention. Experiments on samples with conflicting information across modalities and on two benchmark datasets, CMU-MOSI and CMU-MOSEI, demonstrate the superiority of our proposal over seven state-of-the-art methods.
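
The abstract describes the method only at a high level: each modality's features pass through a Learnable Masked AutoEncoder (LMAE) whose learned mask suppresses irrelevant or conflicting feature dimensions, and the selected features are then fused by cross-modal attention. The PyTorch sketch below illustrates one plausible reading of that pipeline; the class names, the sigmoid-gated mask parameterization, the reconstruction pathway, and all dimensions are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class LearnableMaskedAutoEncoder(nn.Module):
    """Illustrative per-modality feature selector (not the paper's exact LMAE):
    a learnable mask gates feature dimensions, and an encoder/decoder pair
    reconstructs the input from the kept dimensions."""

    def __init__(self, feat_dim: int, hidden_dim: int):
        super().__init__()
        self.mask_logits = nn.Parameter(torch.zeros(feat_dim))  # one learnable gate per feature dimension
        self.encoder = nn.Linear(feat_dim, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, feat_dim)

    def forward(self, x: torch.Tensor):
        # x: (batch, seq_len, feat_dim)
        mask = torch.sigmoid(self.mask_logits)   # soft 0-1 mask, trained end to end
        selected = x * mask                      # down-weight irrelevant or conflicting dimensions
        z = torch.relu(self.encoder(selected))   # selected representation passed on to fusion
        recon = self.decoder(z)                  # reconstruction for an auxiliary loss
        return z, recon, mask


class CrossModalAttentionFusion(nn.Module):
    """Illustrative fusion step: one modality's features attend over another's."""

    def __init__(self, hidden_dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)

    def forward(self, query_feats: torch.Tensor, context_feats: torch.Tensor) -> torch.Tensor:
        # query_feats, context_feats: (batch, seq_len, hidden_dim)
        fused, _ = self.attn(query_feats, context_feats, context_feats)
        return fused


# Hypothetical usage with random text/audio features of assumed sizes.
text_lmae = LearnableMaskedAutoEncoder(feat_dim=768, hidden_dim=128)
audio_lmae = LearnableMaskedAutoEncoder(feat_dim=74, hidden_dim=128)
fusion = CrossModalAttentionFusion(hidden_dim=128)

text_feats = torch.randn(8, 50, 768)
audio_feats = torch.randn(8, 50, 74)
z_text, recon_text, _ = text_lmae(text_feats)
z_audio, recon_audio, _ = audio_lmae(audio_feats)
fused = fusion(z_text, z_audio)  # (8, 50, 128), would feed a sentiment prediction head
```

In a full training loop one would presumably combine the sentiment prediction loss with per-modality reconstruction terms (e.g. MSE between recon and the input) and possibly a sparsity penalty on the mask so that uninformative dimensions are driven toward zero; the paper's exact objective is not given in this excerpt.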

H. Chen and H. Xuan contributed equally to this work.

This work was supported in part by the Guangdong Provincial Key Research and Development Programme under Grant 2021B0101410002.


Author information

Corresponding author

Correspondence to Xuefeng Liang.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 205 KB)

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Liang, X., Chen, H., Xuan, H., Zhou, Y. (2024). MetaSelection: A Learnable Masked AutoEncoder for Multimodal Sentiment Feature Selection. In: Liu, Q., et al. Pattern Recognition and Computer Vision. PRCV 2023. Lecture Notes in Computer Science, vol 14431. Springer, Singapore. https://doi.org/10.1007/978-981-99-8540-1_14

  • DOI: https://doi.org/10.1007/978-981-99-8540-1_14

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8539-5

  • Online ISBN: 978-981-99-8540-1

  • eBook Packages: Computer Science (R0)
