Clean or Annotate: How to Spend a Limited Data Collection Budget

Derek Chen, Zhou Yu, Samuel R. Bowman


Abstract
Crowdsourcing platforms are often used to collect datasets for training machine learning models, despite higher rates of labeling error than expert annotation. There are two common strategies for managing the impact of this noise. The first aggregates redundant annotations, but at the expense of labeling substantially fewer examples. The second spends the entire annotation budget labeling as many examples as possible and then applies denoising algorithms to implicitly clean the dataset. We find a middle ground and propose an approach that reserves a fraction of annotations to explicitly clean up highly probable error samples, optimizing the annotation process. In particular, we allocate a large portion of the labeling budget to form an initial dataset used to train a model. This model is then used to identify the examples that appear most likely to be labeled incorrectly, which we relabel with the remaining budget. Experiments across three model variations and four natural language processing tasks show that our approach outperforms or matches both label aggregation and advanced denoising methods designed to handle noisy labels when given the same finite annotation budget.
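The budget-splitting procedure in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the function names, the fixed relabeling fraction, and the cross-entropy ranking heuristic are assumptions chosen to make the idea concrete.

```python
# Sketch: spend most of the budget on an initial labeled set, train a model
# on it, then spend the rest relabeling the examples whose current labels
# the model finds least probable (a common proxy for label error).
import math

def split_budget(total_budget, clean_fraction=0.2):
    """Reserve a fraction of the annotation budget for relabeling.
    The 0.2 default is an illustrative assumption, not the paper's value."""
    relabel = int(total_budget * clean_fraction)
    return total_budget - relabel, relabel

def rank_likely_errors(probs, labels):
    """Rank examples by cross-entropy of the assigned label under the
    trained model: higher loss means the label is more suspect."""
    losses = [-math.log(p[y]) for p, y in zip(probs, labels)]
    # Indices sorted from most to least suspicious.
    return sorted(range(len(labels)), key=lambda i: losses[i], reverse=True)

# Toy usage: model class distributions vs. (possibly noisy) labels.
probs = [[0.9, 0.1], [0.2, 0.8], [0.95, 0.05], [0.5, 0.5]]
labels = [0, 0, 0, 1]  # example 1's label strongly disagrees with the model
initial_budget, relabel_budget = split_budget(100, clean_fraction=0.2)
suspects = rank_likely_errors(probs, labels)[:relabel_budget]
```

Here `suspects` lists the examples to send back to annotators, most suspicious first, capped by the reserved relabeling budget.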
Anthology ID:
2022.deeplo-1.17
Volume:
Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing
Month:
July
Year:
2022
Address:
Hybrid
Editors:
Colin Cherry, Angela Fan, George Foster, Gholamreza (Reza) Haffari, Shahram Khadivi, Nanyun (Violet) Peng, Xiang Ren, Ehsan Shareghi, Swabha Swayamdipta
Venue:
DeepLo
Publisher:
Association for Computational Linguistics
Pages:
152–168
URL:
https://aclanthology.org/2022.deeplo-1.17
DOI:
10.18653/v1/2022.deeplo-1.17
Bibkey:
Cite (ACL):
Derek Chen, Zhou Yu, and Samuel R. Bowman. 2022. Clean or Annotate: How to Spend a Limited Data Collection Budget. In Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing, pages 152–168, Hybrid. Association for Computational Linguistics.
Cite (Informal):
Clean or Annotate: How to Spend a Limited Data Collection Budget (Chen et al., DeepLo 2022)
PDF:
https://aclanthology.org/2022.deeplo-1.17.pdf
Video:
https://aclanthology.org/2022.deeplo-1.17.mp4
Data:
DynaSent, MultiNLI, NewsQA