
Statistical Significance Testing for Natural Language Processing

  • Book
  • © 2020

Overview

Part of the book series: Synthesis Lectures on Human Language Technologies (SLHLT)



Table of contents (8 chapters)

About this book

Data-driven experimental analysis has become the main evaluation tool of Natural Language Processing (NLP) algorithms. In fact, in the last decade it has become rare to see an NLP paper, particularly one that proposes a new algorithm, that does not include extensive experimental analysis, and the number of tasks, datasets, domains, and languages involved is constantly growing. This emphasis on empirical results highlights the role of statistical significance testing in NLP research: if we, as a community, rely on empirical evaluation to validate our hypotheses and reveal the correct language processing mechanisms, we had better be sure that our results are not coincidental.

The goal of this book is to discuss the main aspects of statistical significance testing in NLP. Our guiding assumption throughout the book is that the basic question NLP researchers and engineers deal with is whether or not one algorithm can be considered better than another. This question drives the field forward, as it enables the constant progress of developing better technology for language processing challenges. In practice, researchers and engineers would like to draw the right conclusion from a limited set of experiments, and this conclusion should hold for other experiments with datasets they do not have at their disposal or that they cannot perform due to limited time and resources. The book hence discusses the opportunities and challenges in using statistical significance testing in NLP, from the point of view of experimental comparison between two algorithms. We cover topics such as choosing an appropriate significance test for the major NLP tasks, dealing with the unique aspects of significance testing for non-convex deep neural networks, accounting for a large number of comparisons between two NLP algorithms in a statistically valid manner (multiple hypothesis testing), and, finally, the unique challenges posed by the nature of the data and the practices of the field.
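
To make the core question concrete, the following is a minimal, illustrative sketch (not taken from the book) of an approximate randomization (paired permutation) test, one standard way to estimate whether the observed gap between two NLP systems evaluated on the same test set could plausibly arise by chance. The per-example scores, sample sizes, and function names are assumptions made for this illustration only.

```python
import random

def approximate_randomization_test(scores_a, scores_b, n_shuffles=10_000, seed=0):
    """Estimate a two-sided p-value for the mean per-example score difference."""
    assert len(scores_a) == len(scores_b), "the comparison must be paired per test example"
    rng = random.Random(seed)
    n = len(scores_a)
    observed = abs(sum(scores_a) - sum(scores_b)) / n
    at_least_as_extreme = 0
    for _ in range(n_shuffles):
        diff = 0.0
        for a, b in zip(scores_a, scores_b):
            # Under the null hypothesis the two systems are exchangeable,
            # so the two scores of each test example can be swapped at random.
            if rng.random() < 0.5:
                a, b = b, a
            diff += a - b
        if abs(diff) / n >= observed:
            at_least_as_extreme += 1
    # Add-one smoothing keeps the estimated p-value strictly positive.
    return (at_least_as_extreme + 1) / (n_shuffles + 1)

# Hypothetical usage: per-sentence scores of two systems on a shared test set.
scores_a = [0.81, 0.77, 0.92, 0.65, 0.88]
scores_b = [0.79, 0.74, 0.90, 0.66, 0.85]
print(approximate_randomization_test(scores_a, scores_b, n_shuffles=2_000))
```

A paired test like this exploits the fact that both systems are scored on the same examples; once many such comparisons are run, for instance across several datasets or languages, the multiple hypothesis testing corrections mentioned above become relevant.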

Authors and Affiliations

  • Technion - Israel Institute of Technology, Haifa, Israel

    Rotem Dror, Lotem Peled-Cohen, Segev Shlomov, Roi Reichart

About the authors

Rotem Dror is a Ph.D. student in the Natural Language Processing Research Group under the supervision of Professor Roi Reichart at the Technion, Israel Institute of Technology. Rotem's research interests lie at the intersection of Machine Learning, Statistics, Optimization, and Natural Language Processing. In her Ph.D. she focuses mainly on developing statistical methods for evaluating the results of NLP tasks and on novel algorithms for structured prediction in NLP. Rotem's papers have been published in the top-tier conferences and journals of the NLP community. Rotem is a recipient of the Google Ph.D. Fellowship 2018.

Lotem Peled-Cohen holds an M.Sc. in Natural Language Processing (cum laude) from the Technion, under the supervision of Professor Roi Reichart. Lotem's research revolved around textual sarcasm, and her work on sarcasm interpretation using monolingual Machine Translation was published in the ACL 2017 Proceedings and covered by multiple media channels. After her studies, Lotem worked as a Data Scientist, focusing mostly on Conversational AI. She later became an independent consultant and lecturer in ML, NLP, and Deep Learning. Lotem has lectured at multiple colleges and presented at conferences worldwide. Nowadays, Lotem brings her ML and NLP expertise to the world of product management at Samsung Next, as part of the Whisk product department. Lotem works as an ML Product Manager who leads a collaboration between Samsung offices worldwide (Korea, Russia, US, and Israel) responsible for building innovative, intelligent, and trustworthy ML products.
Segev Shlomov is a Ph.D. student under the supervision of Associate Professor Yakov Babichenko at the Technion, Israel Institute of Technology. Segev's research interests lie at the intersection of Statistics, Social Learning, and Information Theory. Segev holds an M.Sc. in Operations Research (summa cum laude) from the Technion. He was a three-time summer intern at the Artificial Intelligence department of the IBM research labs in Haifa, Israel, and he is one of the main contributors to IBM's Lambada AI service. Segev's papers have been published in top-tier conferences and journals of both the NLP and the Economics and Computation communities. Segev is a recipient of the Jacobs Outstanding Ph.D. Scholarship for the year 2020.
Roi Reichart is an Associate Professor at the Technion, Israel Institute of Technology. Before joining the Technion in July 2013, he was a post-doctoral researcher at the Computer Laboratory of the University of Cambridge, UK, and at the Computer Science and Artificial Intelligence Laboratory (CSAIL) of MIT. Prior to this, he was a Ph.D. student under the supervision of Professor Ari Rappoport at the Interdisciplinary Center for Neural Computation (ICNC) of the Hebrew University of Jerusalem. His main research interest is NLP, with a focus on language learning in context and on designing models that integrate domain and world knowledge with data-driven methods. He has hence worked on problems such as domain adaptation, learning with minimal human annotation (and involvement), language transfer and multilingual learning, multi-modal (text and vision) processing, and NLP of Web data. He has focused on structured aspects of language and has developed effective algorithms for inference across linguistic structures. Finally, he is interested in the proper evaluation of NLP algorithms and has worked on problems such as measuring statistical significance in NLP, word embedding evaluation, and unsupervised learning (particularly clustering) evaluation.

