p-Laplacian Adaptation for Generative Pre-trained Vision-Language Models

Authors

  • Haoyuan Wu, The Chinese University of Hong Kong
  • Xinyun Zhang, The Chinese University of Hong Kong
  • Peng Xu, The Chinese University of Hong Kong
  • Peiyu Liao, The Chinese University of Hong Kong
  • Xufeng Yao, The Chinese University of Hong Kong
  • Bei Yu, The Chinese University of Hong Kong

DOI:

https://doi.org/10.1609/aaai.v38i6.28415

Keywords:

CV: Language and Vision, ML: Transfer, Domain Adaptation, Multi-Task Learning

Abstract

Vision-language models (VLMs) pre-trained on large corpora have demonstrated notable success across a range of downstream tasks. In light of the rapidly increasing size of pre-trained VLMs, parameter-efficient transfer learning (PETL) has garnered attention as a viable alternative to full fine-tuning. One such approach is the adapter, which introduces a small number of trainable parameters into the pre-trained model while keeping the original parameters frozen during adaptation. In this paper, we present a novel modeling framework that recasts adapter tuning after the attention module as message passing on attention graphs, where the projected query and value features serve as the node features and the attention matrix serves as the graph adjacency matrix. Within this framework, tuning adapters in VLMs requires handling heterophilic graphs, owing to the disparity between the projected query and value spaces. To address this challenge, we propose a new adapter architecture, p-adapter, which employs p-Laplacian message passing from Graph Neural Networks (GNNs). Specifically, the attention weights are re-normalized based on the node features, and the features are then aggregated using the calibrated attention matrix, enabling the dynamic exploitation of information with varying frequencies in the heterophilic attention graphs. We conduct extensive experiments on different pre-trained VLMs and multi-modal tasks, including visual question answering, visual entailment, and image captioning. The experimental results demonstrate that our method significantly outperforms other PETL methods. Our code is available at https://github.com/wuhy68/p-Adapter/.
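To make the mechanism concrete, below is a minimal PyTorch sketch of p-Laplacian-style message passing over an attention graph, not the authors' released implementation (see the GitHub repository above for that). The class name PAdapterSketch, the bottleneck width, the choice of p, and the use of pairwise query-value distances raised to the power (p - 2) as the re-weighting term are illustrative assumptions following the standard p-Laplacian message-passing formulation.

```python
import torch
import torch.nn as nn

class PAdapterSketch(nn.Module):
    """Illustrative sketch (NOT the authors' released code) of an adapter
    that aggregates value features with a p-Laplacian-calibrated attention
    matrix before the usual bottleneck transform."""

    def __init__(self, dim: int, bottleneck: int = 64, p: float = 1.5, eps: float = 1e-6):
        super().__init__()
        self.p, self.eps = p, eps
        # Standard bottleneck adapter: down-project, nonlinearity, up-project.
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, attn: torch.Tensor, q: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        # attn: (B, N, N) softmax attention matrix = adjacency of the attention graph
        # q, v: (B, N, D) projected query / value features = graph node features
        # Pairwise query-value distances act as the edge "gradient" magnitude;
        # raising them to (p - 2) damps or amplifies individual edges, letting the
        # adapter pass low- or high-frequency signals on heterophilic graphs.
        diff = torch.cdist(q, v).clamp_min(self.eps)          # (B, N, N)
        calib = attn * diff.pow(self.p - 2.0)                 # calibrate attention weights
        calib = calib / calib.sum(dim=-1, keepdim=True).clamp_min(self.eps)  # re-normalize rows
        agg = calib @ v                                       # aggregate with calibrated weights
        return self.up(self.act(self.down(agg)))              # bottleneck adapter transform

# Example usage with illustrative shapes: batch 2, 16 tokens, hidden size 768.
adapter = PAdapterSketch(dim=768)
attn = torch.softmax(torch.randn(2, 16, 16), dim=-1)
q, v = torch.randn(2, 16, 768), torch.randn(2, 16, 768)
out = adapter(attn, q, v)   # shape: (2, 16, 768)
```

With p = 2 the exponent vanishes and the sketch reduces to ordinary attention-weighted aggregation; values of p below 2 up-weight edges between dissimilar features, which is what allows high-frequency information to survive on heterophilic graphs.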

Published

2024-03-24

How to Cite

Wu, H., Zhang, X., Xu, P., Liao, P., Yao, X., & Yu, B. (2024). p-Laplacian Adaptation for Generative Pre-trained Vision-Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 6003-6011. https://doi.org/10.1609/aaai.v38i6.28415

Section

AAAI Technical Track on Computer Vision V