Token-Level Contrastive Learning with Modality-Aware Prompting for Multimodal Intent Recognition

Authors

  • Qianrui Zhou, Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China; Beijing National Research Center for Information Science and Technology (BNRist), Beijing 100084, China
  • Hua Xu, Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China; Beijing National Research Center for Information Science and Technology (BNRist), Beijing 100084, China
  • Hao Li, Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China; Beijing National Research Center for Information Science and Technology (BNRist), Beijing 100084, China
  • Hanlei Zhang, Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China; Beijing National Research Center for Information Science and Technology (BNRist), Beijing 100084, China
  • Xiaohan Zhang, Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China; School of Information Science and Engineering, Hebei University of Science and Technology, Shijiazhuang 050018, China
  • Yifan Wang, Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China; School of Information Science and Engineering, Hebei University of Science and Technology, Shijiazhuang 050018, China
  • Kai Gao, School of Information Science and Engineering, Hebei University of Science and Technology, Shijiazhuang 050018, China

DOI

https://doi.org/10.1609/aaai.v38i15.29656

Keywords

ML: Multimodal Learning, ML: Representation Learning, NLP: Language Grounding & Multi-modal NLP

Abstract

Multimodal intent recognition aims to leverage diverse modalities, such as facial expressions, body movements, and tone of speech, to comprehend a user's intent, constituting a critical task for understanding human language and behavior in real-world multimodal scenarios. Nevertheless, most existing methods ignore potential correlations among different modalities and have limited ability to learn semantic features from nonverbal modalities. In this paper, we introduce a token-level contrastive learning method with modality-aware prompting (TCL-MAP) to address the above challenges. To establish an optimal multimodal semantic environment for the text modality, we develop a modality-aware prompting module (MAP), which effectively aligns and fuses features from the text, video, and audio modalities through similarity-based modality alignment and a cross-modality attention mechanism. Based on the modality-aware prompt and ground-truth labels, the proposed token-level contrastive learning framework (TCL) constructs augmented samples and employs an NT-Xent loss on the label token. Specifically, TCL capitalizes on the optimal textual semantic insights derived from intent labels to guide the learning of the other modalities in return. Extensive experiments show that our method achieves remarkable improvements over state-of-the-art methods. Additionally, ablation analyses demonstrate the superiority of the modality-aware prompt over handcrafted prompts, which holds substantial significance for multimodal prompt learning. The code is released at https://github.com/thuiar/TCL-MAP.
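
To make the contrastive objective concrete, the sketch below shows a SimCLR-style NT-Xent loss computed over label-token embeddings of original and augmented views, as the abstract describes. This is a minimal illustration, not the authors' released code: the function name, tensor shapes, and temperature value are assumptions.

```python
import torch
import torch.nn.functional as F

def nt_xent_label_token(z_orig: torch.Tensor, z_aug: torch.Tensor,
                        temperature: float = 0.07) -> torch.Tensor:
    """NT-Xent loss over label-token embeddings of two views.

    z_orig, z_aug: (batch, dim) label-token representations from the
    original and augmented samples; row i of each is the same utterance.
    """
    batch = z_orig.size(0)
    # Stack both views and L2-normalize so dot products are cosine similarities.
    z = F.normalize(torch.cat([z_orig, z_aug], dim=0), dim=1)  # (2B, dim)
    sim = z @ z.t() / temperature                              # (2B, 2B)
    # Exclude each embedding's similarity with itself.
    self_mask = torch.eye(2 * batch, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))
    # The positive for row i is the other view of the same sample.
    targets = (torch.arange(2 * batch, device=z.device) + batch) % (2 * batch)
    return F.cross_entropy(sim, targets)
```

For example, with `z_orig = encoder(batch)` and `z_aug = encoder(augmented_batch)`, both of shape (32, 768), the call returns a scalar loss that pulls the two views of each sample's label token together while pushing apart label tokens of other samples in the batch.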

Published

2024-03-24

How to Cite

Zhou, Q., Xu, H., Li, H., Zhang, H., Zhang, X., Wang, Y., & Gao, K. (2024). Token-Level Contrastive Learning with Modality-Aware Prompting for Multimodal Intent Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 38(15), 17114-17122. https://doi.org/10.1609/aaai.v38i15.29656

Section

AAAI Technical Track on Machine Learning VI