Negative Pre-aware for Noisy Cross-Modal Matching

Authors

  • Xu Zhang, University of Electronic Science and Technology of China
  • Hao Li, University of Electronic Science and Technology of China
  • Mang Ye, Wuhan University

DOI:

https://doi.org/10.1609/aaai.v38i7.28564

Keywords:

CV: Language and Vision, CV: Multi-modal Vision

Abstract

Cross-modal noise-robust learning is challenging because noisy correspondence is hard to recognize and rectify. Due to the cumulative and unavoidable negative impact of unresolved noise, existing methods cannot maintain stable performance when the noise increases. In this paper, we present a novel Negative Pre-aware Cross-modal (NPC) matching solution for fine-tuning large vision-language models on noisy downstream tasks. It has two key features: (1) For noise recognition and resistance, whereas previous methods usually filter out a noisy subset directly, we propose to estimate the negative impact of each sample. This requires no additional correction mechanism, which may produce unreliable corrections and lead to self-reinforcing errors. We assign each sample a confidence weight according to its negative impact during training, adaptively adjusting each sample's contribution to avoid noise accumulation. (2) To maintain stable performance as noise increases, we exploit the memorization effect of DNNs by maintaining a memory bank. Specifically, we apply a Gaussian Mixture Model (GMM) to select high-confidence clean samples as memory entries, which are then used to estimate the negative impact of each sample. Since clean samples are more easily distinguished by the GMM as noise increases, the memory bank remains high-quality even at high noise ratios. Compared to correction mechanisms that focus on noisy samples, memory-bank-based estimation is more robust, keeping model performance stable on noisy datasets. Extensive experiments demonstrate that our method significantly improves matching accuracy and performance stability at increasing noise ratios, and surpasses state-of-the-art methods by a large margin. The code is available at: https://github.com/ZhangXu0963/NPC.
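The GMM-based clean-sample selection described in the abstract can be sketched as follows. This is an illustrative, pure-Python sketch, not the authors' implementation: it fits a two-component 1-D Gaussian mixture to per-sample matching losses via EM and keeps samples whose posterior probability of belonging to the low-loss ("clean") component exceeds a threshold. The function names, the EM details, the threshold, and the toy loss distribution are all assumptions for illustration.

```python
import math
import random

def fit_gmm_1d(xs, iters=50):
    """Fit a two-component 1-D Gaussian mixture to values xs via EM.

    Returns (means, variances, weights), with component 0 being the
    lower-mean (low-loss, i.e. "clean") component.
    """
    xs = list(xs)
    lo, hi = min(xs), max(xs)
    mu = [lo, hi]                                  # initialize at the extremes
    var = [(hi - lo) ** 2 / 4 + 1e-6] * 2
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in xs:
            p = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate means, variances, and mixture weights.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, xs)) / nk + 1e-6
            pi[k] = nk / len(xs)
    if mu[0] > mu[1]:                              # keep component 0 = low mean
        mu.reverse(); var.reverse(); pi.reverse()
    return mu, var, pi

def clean_posterior(x, mu, var, pi):
    """Posterior probability that loss x belongs to the clean component."""
    p = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
         / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
    return p[0] / (p[0] + p[1])

# Toy per-sample losses: clean pairs cluster near 0.2, noisy pairs near 1.5.
random.seed(0)
losses = ([random.gauss(0.2, 0.05) for _ in range(80)]
          + [random.gauss(1.5, 0.10) for _ in range(20)])
mu, var, pi = fit_gmm_1d(losses)
# High clean-posterior samples become the memory-bank entries.
memory_bank = [i for i, x in enumerate(losses)
               if clean_posterior(x, mu, var, pi) > 0.9]
```

Because clean and noisy losses separate more sharply as the noise ratio grows, the two mixture components become easier to tell apart, which is the intuition behind the memory bank staying reliable at high noise ratios.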

Published

2024-03-24

How to Cite

Zhang, X., Li, H., & Ye, M. (2024). Negative Pre-aware for Noisy Cross-Modal Matching. Proceedings of the AAAI Conference on Artificial Intelligence, 38(7), 7341-7349. https://doi.org/10.1609/aaai.v38i7.28564

Section

AAAI Technical Track on Computer Vision VI