Abstract
Understanding the consequences of mutation for molecular fitness and function is a fundamental problem in biology. Recently, generative probabilistic models have emerged as a powerful tool for estimating fitness from evolutionary sequence data, with accuracy sufficient to predict both laboratory measurements of function and disease risk in humans, and to design novel functional proteins. Existing techniques rest on an assumed relationship between density estimation and fitness estimation, a relationship that we interrogate in this article. We prove that fitness is not identifiable from observational sequence data alone, placing fundamental limits on our ability to disentangle fitness landscapes from phylogenetic history. We show on real datasets that perfect density estimation in the limit of infinite data would, with high confidence, result in poor fitness estimation; current models perform accurate fitness estimation because of, not despite, misspecification. Our results challenge the conventional wisdom that bigger models trained on bigger datasets will inevitably lead to better fitness estimation, and suggest novel estimation strategies going forward.
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
ew2760@columbia.edu
alanamin@g.harvard.edu
jonathan.frazer@crg.eu
debbie@hms.harvard.edu
† Work done while at Harvard Medical School.
Strengthened Theorem 4.1. Additional discussion of related work. Corrected errors in Figure 4 and supplement.
3 N.b.: since the representation is one-dimensional and the downstream predictor is assumed to be a monotonically increasing function, success can be evaluated in a “zero-shot” setting using Spearman correlation or AUC, rather than by training a supervised predictor on a small amount of labelled data.
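As a concrete illustration of this zero-shot evaluation, the sketch below computes rank agreement between scalar model scores (e.g. per-variant log-likelihoods from a generative sequence model) and experimental fitness measurements. The variable names and data are illustrative assumptions, not taken from the paper; AUC is computed here via the rank-sum (Mann–Whitney) identity to keep the example self-contained.

```python
# Zero-shot fitness evaluation sketch: because the downstream predictor is
# assumed to be a monotonically increasing function of the score, only rank
# agreement matters, so no supervised training on labels is needed.
import numpy as np
from scipy.stats import spearmanr, rankdata

# Hypothetical data: one scalar model score per variant, paired with a
# measured fitness value for the same variant.
model_scores = np.array([-3.2, -1.1, -4.5, -0.7, -2.8])
measured_fitness = np.array([0.40, 0.85, 0.10, 0.95, 0.55])

# Continuous measurements: Spearman rank correlation.
rho, _ = spearmanr(model_scores, measured_fitness)

# Binary functional/non-functional labels: AUC plays the same role.
# AUC = P(score of a random positive > score of a random negative),
# computed from the rank-sum of the positives (Mann-Whitney U statistic).
labels = (measured_fitness > 0.5).astype(int)
ranks = rankdata(model_scores)
n_pos, n_neg = labels.sum(), (1 - labels).sum()
auc = (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(f"Spearman rho = {rho:.2f}, AUC = {auc:.2f}")
# -> Spearman rho = 1.00, AUC = 1.00 (the toy scores rank variants perfectly)
```

Note that both metrics are invariant to any monotone rescaling of the model scores, which is exactly why they suit the zero-shot setting described above.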
4 In particular, this can occur if the amount of regularization or the point of early stopping changes with the amount of data N, say because these values are chosen based on performance on a downstream task. In that case, regularization and early stopping will not act like a standard parametric prior, and their effects will not necessarily be washed out asymptotically in the large-N limit.