Language Model Pre-training on True Negatives

Authors

  • Zhuosheng Zhang, Shanghai Jiao Tong University
  • Hai Zhao, Shanghai Jiao Tong University
  • Masao Utiyama, National Institute of Information and Communications Technology
  • Eiichiro Sumita, National Institute of Information and Communications Technology

DOI:

https://doi.org/10.1609/aaai.v37i11.26639

Keywords:

SNLP: Language Models, SNLP: Interpretability & Analysis of NLP Models, SNLP: Question Answering, SNLP: Sentence-Level Semantics and Textual Inference

Abstract

Discriminative pre-trained language models (PrLMs) learn to predict original texts from intentionally corrupted ones. Taking the former as positive samples and the latter as negative samples, a PrLM can be trained effectively for contextualized representation. However, training such PrLMs relies heavily on the quality of the automatically constructed samples. Existing PrLMs treat all corrupted texts as equally negative without any examination, so the resulting models inevitably suffer from the false-negative issue: training is carried out on pseudo-negative data, which makes the resulting PrLMs less efficient and less robust. In this work, after formally defining the long-overlooked false-negative issue in discriminative PrLMs, we design enhanced pre-training methods that counteract false-negative predictions and encourage pre-training on true negatives by correcting the harmful gradient updates caused by false-negative predictions. Experimental results on the GLUE and SQuAD benchmarks show that our counter-false-negative pre-training methods indeed bring better performance together with stronger robustness.
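The idea in the abstract can be illustrated with a small sketch. Below is a minimal, hypothetical PyTorch rendering of an ELECTRA-style replaced-token-detection loss that trains only on true negatives: it relabels sampled replacements identical to the original token as positives and zeroes the loss (and hence the gradient) at positions flagged as suspected false negatives. The function name, tensor shapes, and the `suspected_false_neg` heuristic are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch (not the paper's exact method): an ELECTRA-style
# replaced-token-detection (RTD) loss that (a) relabels replacements
# identical to the original token as positives and (b) masks out suspected
# false negatives, so gradient updates come from true negatives only.

def rtd_loss_on_true_negatives(disc_logits, corrupted_ids, original_ids,
                               attention_mask, suspected_false_neg=None):
    # disc_logits:    (batch, seq) discriminator scores, >0 means "replaced"
    # corrupted_ids:  (batch, seq) token ids after generator corruption
    # original_ids:   (batch, seq) token ids of the uncorrupted text
    # attention_mask: (batch, seq) 1 for real tokens, 0 for padding
    # suspected_false_neg: optional (batch, seq) bool mask marking positions
    #   whose replacement is judged plausible (e.g., by generator confidence)
    #   and should therefore not be penalized as negative.
    labels = (corrupted_ids != original_ids).float()  # identity-level relabel

    per_token = F.binary_cross_entropy_with_logits(
        disc_logits, labels, reduction="none")

    keep = attention_mask.float()
    if suspected_false_neg is not None:
        # Correct the harmful update: suspected false negatives contribute
        # neither loss nor gradient.
        keep = keep * (~suspected_false_neg).float()

    return (per_token * keep).sum() / keep.sum().clamp(min=1.0)


# Toy usage with random tensors.
if __name__ == "__main__":
    B, L = 2, 8
    logits = torch.randn(B, L)
    original = torch.randint(0, 100, (B, L))
    corrupted = original.clone()
    corrupted[:, ::3] = torch.randint(0, 100, (B, L))[:, ::3]
    pad_mask = torch.ones(B, L)
    fn_mask = torch.zeros(B, L, dtype=torch.bool)
    fn_mask[:, 3] = True  # pretend position 3 holds a plausible replacement
    print(rtd_loss_on_true_negatives(logits, corrupted, original,
                                     pad_mask, fn_mask))
```

Zeroing the loss is the "hard" form of correction; under the same assumptions, a softer variant could instead flip the label or down-weight the contribution of suspected false negatives rather than discard it.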

Published

2023-06-26

How to Cite

Zhang, Z., Zhao, H., Utiyama, M., & Sumita, E. (2023). Language Model Pre-training on True Negatives. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 14002-14010. https://doi.org/10.1609/aaai.v37i11.26639

Issue

Vol. 37 No. 11 (2023)

Section

AAAI Technical Track on Speech & Natural Language Processing