A Simple and Effective Positional Encoding for Transformers

Pu-Chin Chen, Henry Tsai, Srinadh Bhojanapalli, Hyung Won Chung, Yin-Wen Chang, Chun-Sung Ferng


Abstract
Transformer models are permutation equivariant. To supply the order and type information of the input tokens, position and segment embeddings are usually added to the input. Recent works have proposed variations of positional encodings, with relative position encodings achieving better performance. Our analysis shows that the gain actually comes from moving positional information from the input to the attention layers. Motivated by this, we introduce Decoupled Positional Attention for Transformers (DIET), a simple yet effective mechanism to encode position and segment information into Transformer models. The proposed method has faster training and inference time, while achieving competitive performance on the GLUE, XTREME and WMT benchmarks. We further generalize our method to long-range Transformers and show performance gains.
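The abstract describes injecting positional information directly in the attention layers rather than adding position embeddings to the input. The snippet below is a minimal, hypothetical sketch of that general idea in PyTorch: a single attention head whose logits receive a learned additive position bias, decoupled from the token content. The class and parameter names (PositionBiasedAttention, max_len) are illustrative assumptions, not the authors' DIET implementation.

```python
# Minimal sketch (not the authors' code): positional information is supplied
# as a learned additive bias on the attention logits instead of being added
# to the input embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PositionBiasedAttention(nn.Module):
    def __init__(self, d_model: int, max_len: int = 512):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # Learned position-to-position attention bias, decoupled from content.
        self.pos_bias = nn.Parameter(torch.zeros(max_len, max_len))
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); note no position embedding is added to x.
        seq_len = x.size(1)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        content_logits = torch.matmul(q, k.transpose(-2, -1)) * self.scale
        # Positional term enters the attention matrix, not the input.
        logits = content_logits + self.pos_bias[:seq_len, :seq_len]
        attn = F.softmax(logits, dim=-1)
        return torch.matmul(attn, v)


if __name__ == "__main__":
    layer = PositionBiasedAttention(d_model=64)
    out = layer(torch.randn(2, 16, 64))
    print(out.shape)  # torch.Size([2, 16, 64])
```

Because the position bias does not depend on the token representations, it can be computed once and reused across the batch, which is consistent with the abstract's claim of faster training and inference.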
Anthology ID:
2021.emnlp-main.236
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2974–2988
URL:
https://aclanthology.org/2021.emnlp-main.236
DOI:
10.18653/v1/2021.emnlp-main.236
Bibkey:
Cite (ACL):
Pu-Chin Chen, Henry Tsai, Srinadh Bhojanapalli, Hyung Won Chung, Yin-Wen Chang, and Chun-Sung Ferng. 2021. A Simple and Effective Positional Encoding for Transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2974–2988, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
A Simple and Effective Positional Encoding for Transformers (Chen et al., EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.236.pdf
Video:
https://aclanthology.org/2021.emnlp-main.236.mp4
Data:
GLUE, MultiNLI, XTREME