Open access
Date
2020-11
Type
Conference Paper
ETH Bibliography
yes
Abstract
Most modern NLP systems make use of pre-trained contextual representations that attain astonishingly high performance on a variety of tasks. Such high performance should not be possible unless some form of linguistic structure inheres in these representations, and a wealth of research has sprung up on probing for it. In this paper, we draw a distinction between intrinsic probing, which examines how linguistic information is structured within a representation, and the extrinsic probing popular in prior work, which only argues for the presence of such information by showing that it can be successfully extracted. To enable intrinsic probing, we propose a novel framework based on a decomposable multivariate Gaussian probe that allows us to determine whether the linguistic information in word embeddings is dispersed or focal. We then probe fastText and BERT for various morphosyntactic attributes across 36 languages. We find that most attributes are reliably encoded by only a few neurons, with fastText concentrating its linguistic structure more than BERT.
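The sketch below is a minimal illustration of the kind of probe the abstract describes, not the authors' released implementation: it fits one class-conditional multivariate Gaussian per attribute value over a candidate subset of embedding dimensions, classifies by Bayes' rule, and greedily grows the subset. The inputs `X` (an n-by-d matrix of word embeddings) and `y` (morphosyntactic labels), the function names, and the ridge regularization are all assumptions made for the example.

```python
import numpy as np
from scipy.stats import multivariate_normal

def probe_accuracy(X, y, dims):
    """Bayes-rule accuracy of class-conditional Gaussians fit on `dims`."""
    classes = np.unique(y)
    scores = np.empty((len(X), len(classes)))
    for j, c in enumerate(classes):
        Xc = X[y == c][:, dims]
        mu = Xc.mean(axis=0)
        # Ridge term keeps covariances of small subsets well-conditioned
        # (an assumption of this sketch, not a detail from the paper).
        cov = np.atleast_2d(np.cov(Xc, rowvar=False)) + 1e-4 * np.eye(len(dims))
        log_prior = np.log(len(Xc) / len(X))
        scores[:, j] = multivariate_normal.logpdf(X[:, dims], mu, cov) + log_prior
    return (classes[scores.argmax(axis=1)] == y).mean()

def greedy_dimension_selection(X, y, k):
    """Greedily pick the k dimensions that most improve probe accuracy.
    If accuracy rises quickly toward the full-model level with only a few
    dimensions, the attribute is encoded focally rather than dispersed."""
    chosen = []
    for _ in range(k):
        candidates = (d for d in range(X.shape[1]) if d not in chosen)
        best = max(candidates, key=lambda d: probe_accuracy(X, y, chosen + [d]))
        chosen.append(best)
    return chosen
```

For brevity the sketch scores accuracy on the training data; in practice one would evaluate each candidate subset on a held-out split.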
Permanent link
https://doi.org/10.3929/ethz-b-000462314
Publication status
published
External links
Book title
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Pages / Article No.
Publisher
Association for Computational Linguistics
Event
Organisational unit
09682 - Cotterell, Ryan
Notes
Due to the Coronavirus (COVID-19), the conference was conducted virtually.