Image Difference Captioning with Pre-training and Contrastive Learning

Authors

  • Linli Yao, Renmin University of China
  • Weiying Wang, Renmin University of China
  • Qin Jin, Renmin University of China

DOI:

https://doi.org/10.1609/aaai.v36i3.20218

Keywords:

Computer Vision (CV), Speech & Natural Language Processing (SNLP)

Abstract

The Image Difference Captioning (IDC) task aims to describe the visual differences between two similar images in natural language. The major challenges of this task lie in two aspects: 1) fine-grained visual differences that require learning a stronger vision-language association, and 2) the high cost of manual annotation, which leads to limited supervised data. To address these challenges, we propose a new modeling framework following the pre-training–fine-tuning paradigm. Specifically, we design three self-supervised tasks and contrastive learning strategies to align visual differences and text descriptions at a fine-grained level. Moreover, we propose a data expansion strategy that exploits extra cross-task supervision, such as data for fine-grained image classification, to alleviate the scarcity of supervised IDC data. Extensive experiments on two IDC benchmark datasets, CLEVR-Change and Birds-to-Words, demonstrate the effectiveness of the proposed modeling framework. The code and models will be released at https://github.com/yaolinli/IDC.
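The abstract does not spell out the contrastive alignment objective; as a generic illustration only (not the authors' actual loss), a common choice for aligning two modalities at the pair level is a symmetric InfoNCE-style contrastive loss, where matched (difference-feature, caption-feature) pairs in a batch are pulled together and mismatched pairs pushed apart. The function name, feature shapes, and temperature below are hypothetical.

```python
import numpy as np

def info_nce_loss(diff_feats, text_feats, temperature=0.07):
    """Hypothetical sketch of a symmetric InfoNCE contrastive loss.

    diff_feats: (B, D) visual-difference embeddings for a batch
    text_feats: (B, D) caption embeddings; row i matches row i
    """
    # L2-normalize both modalities so the dot product is cosine similarity
    d = diff_feats / np.linalg.norm(diff_feats, axis=1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    logits = d @ t.T / temperature  # (B, B); matched pairs on the diagonal
    idx = np.arange(logits.shape[0])

    def cross_entropy(x):
        # log-softmax over each row, then pick the diagonal (matched) entry
        x = x - x.max(axis=1, keepdims=True)
        logp = x - np.log(np.exp(x).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average the difference->text and text->difference directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

With identical (perfectly aligned) embeddings the loss approaches zero, while random pairings yield a loss near log(B), which is the usual sanity check for this family of objectives.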

Published

2022-06-28

How to Cite

Yao, L., Wang, W., & Jin, Q. (2022). Image Difference Captioning with Pre-training and Contrastive Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(3), 3108-3116. https://doi.org/10.1609/aaai.v36i3.20218

Section

AAAI Technical Track on Computer Vision III