Quantifying Train-Evaluation Overlap with Nearest Neighbors

Gauri Kambhatla, Thuy Nguyen, Eunsol Choi


Abstract
Characterizing benchmark datasets is crucial to interpreting model performance. In this work, we study train-evaluation overlap as a measure of an individual dataset’s adequacy to evaluate model generalization over a wide range of datasets. We quantify the overlap with a simple novel metric based on a nearest neighbors approach between the training and evaluation sets. We identify nearest training examples for each evaluation example by mapping instances with generic and task-specific embedding methods. Our study on eleven classification and extractive QA tasks reveals a wide range of train-evaluation overlap, and we show that the data collection method of the dataset and the difficulty of the task may play a role in the amount of overlap. Lastly, we use our nearest neighbor analysis to identify challenging or potentially mislabeled examples. Our analysis quantifies train-evaluation overlap, providing insights for constructing datasets to study generalization.
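The core procedure the abstract describes can be illustrated with a short sketch: embed training and evaluation examples, retrieve each evaluation example's nearest training example, and aggregate the neighbor similarities into an overlap score. The sketch below is illustrative only, not the paper's exact metric: the embedding model name and the mean-similarity aggregation are assumptions, and the paper additionally uses task-specific embeddings.

```python
# Minimal sketch of nearest-neighbor train-evaluation overlap.
# NOT the paper's exact metric: the model "all-MiniLM-L6-v2" and the
# mean-similarity aggregation are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.neighbors import NearestNeighbors

def overlap_score(train_texts, eval_texts, model_name="all-MiniLM-L6-v2"):
    """For each evaluation example, find its nearest training example in
    embedding space; report the mean cosine similarity as a rough
    train-evaluation overlap score (higher = more overlap)."""
    model = SentenceTransformer(model_name)
    train_emb = model.encode(train_texts, normalize_embeddings=True)
    eval_emb = model.encode(eval_texts, normalize_embeddings=True)

    # With unit-normalized vectors, cosine similarity = 1 - cosine distance.
    nn = NearestNeighbors(n_neighbors=1, metric="cosine").fit(train_emb)
    distances, indices = nn.kneighbors(eval_emb)
    similarities = 1.0 - distances[:, 0]
    return float(np.mean(similarities)), indices[:, 0]

score, nearest_idx = overlap_score(
    ["the movie was great", "terrible plot and acting"],
    ["a great film overall"],
)
print(f"overlap score: {score:.3f}")  # nearest_idx[i] = closest train example
```

Inspecting `nearest_idx` directly gives the per-example neighbor pairs that the abstract mentions using to surface challenging or potentially mislabeled evaluation examples.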
Anthology ID: 2023.findings-acl.183
Volume: Findings of the Association for Computational Linguistics: ACL 2023
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 2905–2920
URL: https://aclanthology.org/2023.findings-acl.183
DOI: 10.18653/v1/2023.findings-acl.183
Cite (ACL): Gauri Kambhatla, Thuy Nguyen, and Eunsol Choi. 2023. Quantifying Train-Evaluation Overlap with Nearest Neighbors. In Findings of the Association for Computational Linguistics: ACL 2023, pages 2905–2920, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal): Quantifying Train-Evaluation Overlap with Nearest Neighbors (Kambhatla et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-acl.183.pdf