Diffuser: Efficient Transformers with Multi-Hop Attention Diffusion for Long Sequences

Authors

  • Aosong Feng, Yale University
  • Irene Li, Yale University
  • Yuang Jiang, Yale University
  • Rex Ying, Yale University

DOI:

https://doi.org/10.1609/aaai.v37i11.26502

Keywords:

SNLP: Language Models, ML: Classification and Regression, ML: Deep Neural Network Algorithms, ML: Graph-based Machine Learning, SNLP: Question Answering, SNLP: Text Classification

Abstract

Efficient Transformers have been developed for long sequence modeling, owing to their subquadratic memory and time complexity. The Sparse Transformer is a popular approach to improving the efficiency of Transformers by restricting self-attention to the locations specified by predefined sparse patterns. However, leveraging sparsity may sacrifice expressiveness compared to full attention when important token correlations lie multiple hops away. To combine the efficiency of sparse Transformers with the expressiveness of the full-attention Transformer, we propose Diffuser, a new state-of-the-art efficient Transformer. Diffuser incorporates all token interactions within one attention layer while maintaining low computation and memory costs. The key idea is to expand the receptive field of sparse attention using Attention Diffusion, which, in addition to attention among neighboring tokens, computes multi-hop token correlations based on all paths between the corresponding disconnected tokens. Theoretically, we show the expressiveness of Diffuser as a universal sequence approximator for sequence-to-sequence modeling, and investigate its ability to approximate full attention by analyzing the graph-expander property from a spectral perspective. Experimentally, we investigate the effectiveness of Diffuser with extensive evaluations, including language modeling, image modeling, and the Long Range Arena (LRA). Evaluation results show that Diffuser improves text classification by an average of 0.94% and LRA by 2.30%, with 1.67x memory savings compared to state-of-the-art benchmarks, demonstrating Diffuser's superior expressiveness and efficiency.
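
To make the attention-diffusion idea above concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: it assumes a dense attention matrix, a truncated power-series diffusion with a restart probability alpha, and illustrative names (attention_diffusion, num_hops, sparse_mask) that do not appear in the paper.

```python
import torch

def attention_diffusion(scores, values, sparse_mask=None, alpha=0.1, num_hops=4):
    """Illustrative multi-hop attention diffusion (a sketch, not the paper's code).

    scores:      (batch, n, n) raw attention logits
    values:      (batch, n, d) value vectors
    sparse_mask: (n, n) boolean pattern of allowed (neighboring) token pairs
    alpha:       restart probability of the truncated power-series diffusion
    num_hops:    number of hops used to approximate the infinite diffusion series
    """
    if sparse_mask is not None:
        scores = scores.masked_fill(~sparse_mask, float("-inf"))
    attn = torch.softmax(scores, dim=-1)  # one-hop sparse attention, row-stochastic

    # Truncated diffusion  A_hat = sum_k alpha * (1 - alpha)^k * A^k,
    # which aggregates correlations along all paths of length <= num_hops,
    # so tokens that are disconnected in the sparse pattern still interact.
    out = alpha * values                  # k = 0 term (each token's own contribution)
    hop = values
    for k in range(1, num_hops + 1):
        hop = attn @ hop                  # propagate value information one more hop
        out = out + alpha * (1 - alpha) ** k * hop
    return out
```

In Diffuser itself the one-hop attention is kept sparse, so each propagation step touches only the stored edges and the overall cost stays subquadratic; the dense tensors above are used only for readability.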

Published

2023-06-26

How to Cite

Feng, A., Li, I., Jiang, Y., & Ying, R. (2023). Diffuser: Efficient Transformers with Multi-Hop Attention Diffusion for Long Sequences. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 12772-12780. https://doi.org/10.1609/aaai.v37i11.26502

Section

AAAI Technical Track on Speech & Natural Language Processing