SoftCLIP: Softer Cross-Modal Alignment Makes CLIP Stronger

Authors

  • Yuting Gao, Tencent Youtu Lab
  • Jinfeng Liu, Shanghai Jiao Tong University
  • Zihan Xu, Tencent Youtu Lab
  • Tong Wu, Tencent Youtu Lab
  • Enwei Zhang, Tencent Youtu Lab
  • Ke Li, Tencent Youtu Lab
  • Jie Yang, Shanghai Jiao Tong University
  • Wei Liu, Shanghai Jiao Tong University
  • Xing Sun, Tencent Youtu Lab

DOI:

https://doi.org/10.1609/aaai.v38i3.27955

Keywords:

CV: Language and Vision, CV: Representation Learning for Vision

Abstract

Over the past two years, vision-language pre-training has achieved noteworthy success on several downstream tasks. Nevertheless, acquiring high-quality image-text pairs, in which each image matches only its own paired text, remains challenging, and noise exists in the commonly used datasets. To address this issue, we propose SoftCLIP, a novel approach that relaxes the strict one-to-one constraint and achieves a soft cross-modal alignment by introducing a softened target generated from fine-grained intra-modal self-similarity. This intra-modal guidance allows two pairs to share local similarities and models many-to-many relationships between the two modalities. Moreover, since the positive pair still dominates the softened target distribution, we disentangle the negatives in the distribution to further strengthen relation alignment with the negatives in cross-modal learning. Extensive experiments demonstrate the effectiveness of SoftCLIP. In particular, on the ImageNet zero-shot classification task, with CC3M/CC12M as the pre-training dataset, SoftCLIP improves top-1 accuracy by 6.8%/7.2% over the CLIP baseline.
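To make the softened target concrete, below is a minimal PyTorch-style sketch of such a soft alignment loss. It is an illustrative simplification, not the authors' implementation: it uses global image/text embeddings in place of the paper's fine-grained (region/word-level) features, omits the disentangling of negatives, and the names softclip_loss and tau are placeholders.

    import torch
    import torch.nn.functional as F

    def softclip_loss(img_emb, txt_emb, tau=0.07):
        """Soft cross-modal alignment with intra-modal softened targets.

        img_emb, txt_emb: L2-normalized embeddings of shape (B, D) for a
        batch of B image-text pairs. Illustrative sketch only.
        """
        # Cross-modal similarity logits in both retrieval directions.
        logits_i2t = img_emb @ txt_emb.t() / tau
        logits_t2i = logits_i2t.t()

        # Softened targets from intra-modal self-similarity: instead of a
        # one-hot label, each row is a distribution in which the diagonal
        # (true pair) dominates but related samples also receive mass.
        with torch.no_grad():
            targets_img = F.softmax(img_emb @ img_emb.t() / tau, dim=-1)
            targets_txt = F.softmax(txt_emb @ txt_emb.t() / tau, dim=-1)

        # Align the cross-modal distributions to the softened targets via
        # KL divergence, averaged over both directions.
        loss_i2t = F.kl_div(F.log_softmax(logits_i2t, dim=-1),
                            targets_img, reduction="batchmean")
        loss_t2i = F.kl_div(F.log_softmax(logits_t2i, dim=-1),
                            targets_txt, reduction="batchmean")
        return 0.5 * (loss_i2t + loss_t2i)

With one-hot targets in place of the softened distributions, this reduces to the standard CLIP InfoNCE objective; the paper's additional disentangling of the dominant positive from the negatives in the target distribution is omitted here.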

Published

2024-03-24

How to Cite

Gao, Y., Liu, J., Xu, Z., Wu, T., Zhang, E., Li, K., Yang, J., Liu, W., & Sun, X. (2024). SoftCLIP: Softer Cross-Modal Alignment Makes CLIP Stronger. Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 1860-1868. https://doi.org/10.1609/aaai.v38i3.27955

Issue

Vol. 38 No. 3 (2024)

Section

AAAI Technical Track on Computer Vision II