Underspecification in Language Modeling Tasks: A Causality-Informed Study of Gendered Pronoun Resolution

Authors

  • Emily McMilin, Independent Researcher

DOI:

https://doi.org/10.1609/aaai.v38i17.29842

Keywords:

NLP: Interpretability, Analysis, and Evaluation of NLP Models, RU: Causality, NLP: (Large) Language Models

Abstract

Modern language modeling tasks are often underspecified: for a given token prediction, many words may satisfy the user’s intent of producing natural language at inference time; however, only one word will minimize the task’s loss function at training time. We introduce a simple causal mechanism to describe the role underspecification plays in the generation of spurious correlations. Despite its simplicity, our causal model directly informs the development of two lightweight black-box evaluation methods, which we apply to gendered pronoun resolution tasks on a wide range of LLMs to (1) aid in the detection of inference-time task underspecification by (2) exploiting previously unreported gender-vs-time and gender-vs-location spurious correlations, on LLMs with a range of (a) sizes, from BERT-base to GPT-3.5; (b) pre-training objectives, from masked and autoregressive language modeling to a mixture of these objectives; and (c) training stages, from pre-training only to reinforcement learning from human feedback (RLHF). Code and open-source demos are available at https://github.com/2dot71mily/uspec.
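The black-box evaluation described in the abstract can be sketched as follows: construct otherwise-identical prompts that vary only a task-irrelevant attribute (here, a year), and compare the model's gendered-pronoun preference across those prompts. A preference that shifts with the year indicates a gender-vs-time spurious correlation. The template, years, and `toy_scorer` below are illustrative assumptions, not the paper's exact prompts or models; `score_pronouns` stands in for any black-box LM that returns a score for a completed prompt.

```python
# Sketch of a black-box probe for a gender-vs-time spurious correlation.
# Template, years, and scorer are illustrative assumptions, not the
# paper's exact prompts or models.

TEMPLATE = "In {year}, the doctor finished the shift and then {pronoun} went home."
YEARS = [1860, 1920, 1980, 2020]
PRONOUNS = ("she", "he")

def make_prompts(year):
    """Two prompts differing only in the pronoun, for a fixed year."""
    return {p: TEMPLATE.format(year=year, pronoun=p) for p in PRONOUNS}

def preference(score_pronouns, year):
    """Share of score mass on 'she' vs 'he' for one year.

    `score_pronouns` is any callable mapping a prompt string to a
    positive score (e.g. exponentiated LM log-likelihood); it is a
    stand-in for the black-box model under evaluation.
    """
    prompts = make_prompts(year)
    scores = {p: score_pronouns(prompts[p]) for p in PRONOUNS}
    return scores["she"] / sum(scores.values())

def toy_scorer(prompt):
    """Toy model that leaks the year into its score, mimicking a
    spurious gender-vs-time correlation."""
    words = prompt.split()
    year = int(words[1].rstrip(","))
    return 1.0 + (9.0 if "she" in words and year >= 1980 else 0.0)

# A flat trend across years would suggest no gender-vs-time
# correlation; this toy scorer instead jumps for later years.
trend = [round(preference(toy_scorer, y), 2) for y in YEARS]
print(trend)
```

The same scaffold applies to the gender-vs-location case by varying a place name instead of a year, and to a real model by replacing `toy_scorer` with a wrapper around an actual LM scoring call.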

Published

2024-03-24

How to Cite

McMilin, E. (2024). Underspecification in Language Modeling Tasks: A Causality-Informed Study of Gendered Pronoun Resolution. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 18778-18788. https://doi.org/10.1609/aaai.v38i17.29842

Section

AAAI Technical Track on Natural Language Processing II