Evolving Parameterized Prompt Memory for Continual Learning

Authors

  • Muhammad Rifki Kurniawan, Xi'an Jiaotong University
  • Xiang Song, Xi'an Jiaotong University
  • Zhiheng Ma, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences
  • Yuhang He, Xi'an Jiaotong University
  • Yihong Gong, Xi'an Jiaotong University
  • Yang Qi, Xi'an Jiaotong University
  • Xing Wei, Xi'an Jiaotong University

DOI:

https://doi.org/10.1609/aaai.v38i12.29231

Keywords:

ML: Life-Long and Continual Learning, ML: Transfer, Domain Adaptation, Multi-Task Learning

Abstract

Recent studies have demonstrated the potency of leveraging prompts in Transformers for continual learning (CL). Nevertheless, employing a discrete key-prompt bottleneck can lead to selection mismatches and inappropriate prompt associations during testing. Furthermore, this approach hinders adaptive prompting because nearly identical instances cannot share prompts at a more granular level. To address these challenges, we introduce Evolving Parameterized Prompt Memory (EvoPrompt), a novel method that attaches adaptive, continuous prompts to a pre-trained Vision Transformer (ViT), conditioned on each specific instance. We formulate a continuous prompt function as a neural bottleneck and encode the collection of prompts in network weights. We establish a paired prompt memory system consisting of a stable reference prompt memory and a flexible working prompt memory. Inspired by linear mode connectivity, we progressively fuse the working and reference prompt memories during inter-task periods, yielding a continually evolving prompt memory. This fusion aligns functionally equivalent prompts using optimal transport and aggregates them in parameter space with an adjustable bias based on prompt node attribution. Additionally, to enhance backward compatibility, we propose compositional classifier initialization, which leverages prior prototypes from the pre-trained model to guide the initialization of new classifiers in a subspace-aware manner. Comprehensive experiments validate that our approach achieves state-of-the-art performance in both class- and domain-incremental learning scenarios.
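The fusion step described in the abstract can be pictured with a minimal sketch (not the authors' code): align the nodes of the working prompt memory to the reference memory, then average the aligned weights in parameter space. The sketch below uses Hungarian matching (SciPy's `linear_sum_assignment`) as a simple hard-matching stand-in for the paper's optimal-transport alignment and omits the attribution-based bias; the names `W_ref`, `W_work`, `fuse`, and `alpha` are illustrative assumptions, not from the paper.

```python
# Sketch of inter-task prompt-memory fusion: align working-memory nodes to a
# stable reference memory, then interpolate the aligned weights.
# Assumptions: Hungarian matching stands in for optimal transport, and the
# attribution-based bias is reduced to a single scalar `alpha`.
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse(W_ref, W_work, alpha=0.5):
    """Fuse two prompt-memory weight matrices of shape (nodes, dim)."""
    # Cost = negative cosine similarity between nodes of the two memories.
    ref = W_ref / np.linalg.norm(W_ref, axis=1, keepdims=True)
    work = W_work / np.linalg.norm(W_work, axis=1, keepdims=True)
    cost = -ref @ work.T
    # Hard one-to-one alignment; optimal transport generalizes this to
    # soft matchings between the two sets of prompt nodes.
    _, cols = linear_sum_assignment(cost)
    aligned = W_work[cols]  # permute working nodes into reference order
    # Linear-mode-connectivity-style interpolation in parameter space.
    return (1.0 - alpha) * W_ref + alpha * aligned

rng = np.random.default_rng(0)
W_ref = rng.normal(size=(8, 16))
W_work = rng.normal(size=(8, 16))
print(fuse(W_ref, W_work, alpha=0.3).shape)  # (8, 16)
```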

Published

2024-03-24

How to Cite

Kurniawan, M. R., Song, X., Ma, Z., He, Y., Gong, Y., Qi, Y., & Wei, X. (2024). Evolving Parameterized Prompt Memory for Continual Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(12), 13301-13309. https://doi.org/10.1609/aaai.v38i12.29231

Section

AAAI Technical Track on Machine Learning III