Limits of the Numerical: The Abuses and Uses of Quantification
edited by Christopher Newfield, Anna Alexandrova, and Stephen John
University of Chicago Press, 2022
Cloth: 978-0-226-81713-2 | Paper: 978-0-226-81715-6 | Electronic: 978-0-226-81716-3
DOI: 10.7208/chicago/9780226817163.001.0001

ABOUT THIS BOOK

This collection examines the uses of quantification in climate science, higher education, and health.
 
Numbers are both controlling and fragile. They drive public policy, figuring into everything from college rankings to vaccine efficacy rates. At the same time, they are frequent objects of obfuscation, manipulation, or outright denial. This timely collection by a diverse group of humanists and social scientists challenges undue reverence or skepticism toward quantification and offers new ideas about how to harmonize quantitative with qualitative forms of knowledge.   

Limits of the Numerical focuses on quantification in several contexts: climate change; university teaching and research; and health, medicine, and well-being more broadly. This volume shows the many ways that qualitative and quantitative approaches can productively interact—how the limits of the numerical can be overcome through equitable partnerships with historical, institutional, and philosophical analysis. The authors show that we can use numbers to hold the powerful to account, but only when those numbers are themselves democratically accountable.

AUTHOR BIOGRAPHY

Christopher Newfield is director of research at the Independent Social Research Foundation, London. Anna Alexandrova is professor of philosophy of science in the Department of History and Philosophy of Science at the University of Cambridge, where she is also a fellow of King’s College. Stephen John is the Hatton Lecturer in the Philosophy of Public Health in the Department of History and Philosophy of Science at the University of Cambridge, where he is also a fellow of Pembroke College.

REVIEWS

“Limits of the Numerical shows with compelling detail, theoretical vision, and political urgency just how and why numbers matter. As J. L. Austin and Judith Butler showed us how we do things with words, the authors of Limits of the Numerical show us how we do things with numbers.”
— Chad Wellmon, University of Virginia

“The availability and power of numbers in our ‘data-driven world’ have never been greater, and, for just that reason, are greatly contested. Limits of the Numerical explores the paradoxes of quantitative reasoning that have arisen as a corollary of its power and recognizes that a blind reverence for numbers undermines expertise as much as it supports it. These stories of numbers are inescapably human ones.”
— Theodore M. Porter, University of California, Los Angeles

“In the confusing context of both the pandemic and global warming, this compelling book is a timely unraveling of the uses and abuses of statistical models, quantified measures, big data, and numerical targets. Limits of the Numerical paves the way for renewed scientific controversies and public debates on the work of quantification and its politics.”
— Isabelle Bruno, University of Lille and Institut Universitaire de France (IUF)

TABLE OF CONTENTS

- Christopher Newfield, Anna Alexandrova, Stephen John
DOI: 10.7208/chicago/9780226817163.003.0001
[Original Critique;quantification;fragility of the numerical;cost-benefit analysis;crisis of expertise;big data;statactivism]
Humanities and social sciences scholarship about numerical forms of reasoning and governing tends to ascribe to them totalizing powers of domination over all other forms of knowledge and discourse. This is illustrated most vividly by the Original Critique, a tradition of analyzing quantification as a process that displaces and erases the qualitative. Developed by historians and sociologists, the Original Critique emphasizes how cost-benefit analysis and, latterly, big data have eclipsed other modes of policy evaluation and knowledge generation. This introduction offers a more nuanced and ambivalent picture, demonstrating the fragility of the numerical and the variability of its fates across different spheres of science and practice. The introduction proposes a new definition of quantification that recovers the essential political role of numbers in challenging the status quo through democratic accountability and statactivism. The introduction shows that the numerical both enables and challenges the so-called crisis of expertise, and it articulates a methodology for approaching quantification that respects its diversity. (pages 1 - 20)
This chapter is available at:
    University of Chicago Press
    https://academic.oup.com/chica...

Part I. Expert Sources of the Revolt against Experts

- Elizabeth Chatterjee
DOI: 10.7208/chicago/9780226817163.003.0002
[populism;expertise;trust;technocracy;Donald Trump;Brexit;depoliticization;new public management;post-democracy;quantocracy]
If the political influence of quantitative experts were as powerful as much scholarship suggests, how could it prove so suddenly fragile during the Brexit vote and the US presidential election of 2016? This chapter argues that the public prestige of experts does not automatically rise and fall in tandem with numerical politics. Well before 2016, the relationship between the two was fraying. The prominence of numerical technologies in public policy from the 1980s onward was an expression of expert weakness, not strength. Performance indicators, public opinion measurement, and invented numbers were examples of quantification deployed against experts, and they gradually devalued the popular currency of expertise. Yet recent populist attacks on quantocracy do not signal a total rejection of numerical politics. Drawing on evidence from political speeches and a corpus of Donald J. Trump’s tweets, the chapter shows that Anglo-American populists introduced charismatic numbers without experts. Plebiscitary numbers (e.g., poll results and Twitter followings) gained visibility. Such metrics claim a more “authentic” legitimacy as direct and intuitive representations of the popular will. Both trends—the breakdown of the link between quantocracy and depoliticization, and the rise of numbers purportedly free of experts—require us to update our sociologies of numbers in public life. (pages 23 - 46)
This chapter is available at:
    University of Chicago Press
    https://academic.oup.com/chica...

- Christopher Newfield
DOI: 10.7208/chicago/9780226817163.003.0003
[presidential election;Democratic party;quantification;racism;authoritarianism;deindustrialization;democracy;Trump voter;Obama voter;economic decision-making]
How does quantification affect US political culture? This chapter uses the presidential elections of 2016 and 2020 to define a factor, “numerical fatalism,” that helped drive the much-discussed voter turn from Obama to Trump. A large body of literature suggests that racism, authoritarian personality structures, and lack of a university degree were factors in voters supporting Trump. Yet these factors, individually or in combination, conflict with most of these voters’ preferences for most major Democratic policies. Analyzing two characteristic Democratic formulations (Robert B. Reich in 1991 and Barack Obama in 2016), the chapter argues that Democratic candidates weakened their appeal to Trump Democrats by using quantitative arguments both to rationalize the deindustrialization that resulted from “free trade” globalization and to reject the political agency that might resist it. The Original Critique of quantification noted the tendency of authorities to use numerical discourses to correct and exclude the vernacular knowledge that emerges from the direct experience of non-experts. The chapter concludes by extending this critique with an alternative version of Obama’s 2016 statement, based on a parity between quantitative and qualitative discourses that is a prerequisite to improving Democratic party performance and, more fundamentally, to re-democratizing everyday political economy. (pages 47 - 68)
This chapter is available at:
    University of Chicago Press
    https://academic.oup.com/chica...

Part II. Can Narrative Fix Numbers?

- Heather Steffen
DOI: 10.7208/chicago/9780226817163.003.0004
[assessment;audit;higher education;learning outcomes;narrative]
This chapter examines the representational and rhetorical features of the narratives that surround and secure regimes of social measurement, particularly learning outcomes assessment in US higher education. The chapter defines and describes “audit narratives”—implicit and explicit stories about audits, auditors, and auditees that appear in the rhetorical performances around programs of social measurement—in arguments produced by learning assessment advocates. In contrast to the common assumption that narrative and quantification are opposing modes of communication, this chapter shows how narrative supports the restructuring of qualitative and professional knowledge domains into cultures of audit permeated by quantitative rationality. Audit narratives are able to reshape subjectivities, organizational relations, and institutional missions by representing: 1) audit as a universal solution; 2) the auditor as a mediator; and 3) the auditee as both a bad actor and a responsible subject. The chapter concludes with a discussion of how audit narratives in assessment discourse affect undergraduate education in the US and with a consideration of the implications for the broader study of quantitative cultures from humanities perspectives. (pages 71 - 92)
This chapter is available at:
    University of Chicago Press
    https://academic.oup.com/chica...

- Trenholme Junghans
DOI: 10.7208/chicago/9780226817163.003.0005
[rare diseases;orphan drugs;pharmaceutical regulation;health technology assessment;qualitative excess;classification;counting;commensuration]
Whereas many approaches to the numerical focus on quantifying regimes’ propensity toward sedimentation and irreversibility, this chapter presents an alternative. In assessing the efficacy and value of drugs for rare diseases (sometimes referred to as “orphan drugs”), current quantitatively based paradigms can fall short. These shortcomings provide an opening for patient groups and powerful pharmaceutical interests to actively mount their own critiques of “the limits of the numerical” in the fields of pharmaceutical regulation and health technology appraisal (HTA), and to push for alternatives that are less quantitatively rigorous. Less rigorous methods compel decision makers to make uncomfortable reckonings about the value and credibility of non-standard forms of evidence; to contend with the qualitative excess that numbers so effectively suppress; and to consider the repertoires of classification, counting, and commensuration that undergird and naturalize established regimes of quantification. This case argues for an expanded field of analysis in the critical study of quantification, one that attends to the micro contingencies that lend coherence to established quantifying regimes (e.g., mundane practices of reckoning similarity, and habitual ways of classifying and counting), and that better recognizes the possibility of active challenges to established regimes of quantification. (pages 93 - 116)
This chapter is available at:
    University of Chicago Press
    https://academic.oup.com/chica...

- Laura Mandell
DOI: 10.7208/chicago/9780226817163.003.0006
[reading;literature;digital humanities;interpretation;macroanalysis;case history;mixed method]
This chapter compares two different readings of Jane Austen's novel Emma: a qualitative reading written by a literary critic and a quantitative reading proposed by an expert in the emerging field of digital humanities. Digital humanists who make use of numerical analyses, text-mining, and topic modeling, for example, often claim that their readings are more "correct" than qualitative close reading could ever be, given its small sample size. Rather than reject either method, this chapter treats Austen's Emma as a "case history" for the sake of undermining the correct/incorrect dichotomy through the "mixed methods" approach employed by sociologists. Additionally, the chapter theorizes what goes on in discussions of literature (e.g., in college classrooms), where evaluating whether an interpretation is more or less correct actualizes the case history's effects. (pages 117 - 140)
This chapter is available at:
    University of Chicago Press
    https://academic.oup.com/chica...

Part III. When Bad Numbers Have Good Social Effects

- Stephen John
DOI: 10.7208/chicago/9780226817163.003.0007
[manipulation;lying;misleading;health advice;vagueness;trust;expertise]
Much health advice makes claims involving vague concepts (e.g., “good health”) that are based on weak, uncertain, or inconclusive evidence. Nonetheless, much advice is framed in numerically precise terms. This chapter investigates the ethics of this phenomenon of “spurious precision” through a case study of the UK government’s advice that everyone should eat five portions of fruit and vegetables daily. It examines the relationship between spurious precision and familiar concepts from the ethics of communication: lying, misleading, deceiving, and manipulating. It also draws on literature in the philosophy of science on the roles of non-epistemic values in science and on how scientific claims and concepts “travel” between different disciplines and spheres of use. Through this close study of the ethics of one way in which numerical communication can be misleading, the chapter aims to contribute to a larger topic: how numbers can be used to mislead and manipulate as well as to educate, and how they figure into larger relationships of trust between experts and non-expert audiences. (pages 143 - 160)
This chapter is available at:
    University of Chicago Press
    https://academic.oup.com/chica...

- Gabriele Badano
DOI: 10.7208/chicago/9780226817163.003.0008
[political philosophy;quantification;precision;capability approach;John Rawls;public justification;National Institute for Health and Care Excellence]
This chapter aims to make analytical political philosophy part of existing discussions about the role of numbers in the workings of political institutions, discussions that already cut across many other disciplines in the humanities and social sciences. It first explores the prominent “capability approach” to justice, characterized by skepticism toward excessive precision in law- and policy-making. Given the close link between precision and quantification, the loudest voice from political philosophy turns out to be one of warning against the use of numbers in political decision-making. However, the chapter also discusses powerful objections to the capability approach that, building on the work of John Rawls, stress the importance of public justification and, in turn, of simplifying devices in political decision-making. These objections demonstrate that quantification is very important from a normative perspective. To further support its claim that under certain circumstances numerical tools might well be the best way of making political decisions, the chapter takes as a case study the National Institute for Health and Care Excellence, an administrative body in charge of appraising health technologies for use in the British National Health Service. (pages 161 - 178)
This chapter is available at:
    University of Chicago Press
    https://academic.oup.com/chica...

Part IV. The Uses of the Numerical for Qualitative Ends

- Anna Alexandrova, Ramandeep Singh
DOI: 10.7208/chicago/9780226817163.003.0009
[well-being;life satisfaction;measurement;indicator;national statistics;GDP;cost efficiency;evidence-based policy]
This chapter traces the recent history of the quantification of well-being to show that it remains at once diverse, controversial, and widely embraced. One measure in particular, life satisfaction, dominates because of its ease and accessibility, because it became essential for challenging traditional economic indicators (e.g., GDP), and because it can be plugged straightforwardly into cost-efficiency analysis. It is tempting to criticize any of these measures on the grounds that they do not capture what well-being truly is. Such criticism is misplaced, however, because well-being is a malleable concept that fits around practical goals of governance, politics, and management. A better way to criticize well-being measures is to review the specific practical projects in which they are deployed and to compare them to alternatives. The chapter illustrates more and less technocratic deployments of well-being indicators in national statistics and evidence-based policy. (pages 181 - 200)
This chapter is available at:
    University of Chicago Press
    https://academic.oup.com/chica...

- Greg Lusk
DOI: 10.7208/chicago/9780226817163.003.0010
[measurement;policy;climate change;extreme weather;adaptation;trust;decision making;risk analysis;expertise]
How do we make and use numbers in a responsible way? This chapter addresses this deceptively simple question. It argues that common criticisms of quantification and certain views of measurement in the philosophy of science share a central insight: quantification is itself perspectival. Numbers—despite claims to objectivity—always carry with them a certain orientation toward that which they represent. This orientation connects the ethical impacts of quantification to the epistemic perspective that such numbers promote. Thus, one sign of a virtuous method of quantification, this chapter claims, is an alignment between quantification’s representational capacities and laudable social ends. However, achieving this alignment is harder than it may seem, as is demonstrated through a coupled ethical-epistemic analysis of extreme weather attribution, a promising social technology that can be used to spur climate adaptation and promote justice. The analysis of weather attribution in this chapter shows how and why certain numbers may or may not align with their intended purposes, but also how one might begin to assess the virtues and vices of quantification that bridge science, policy, and decision-making. (pages 201 - 218)
This chapter is available at:
    University of Chicago Press
    https://academic.oup.com/chica...

- Aashish Mehta, Christopher Newfield
DOI: 10.7208/chicago/9780226817163.003.0011
[higher education;benefits of a college degree;economics;humanities;human capital theory;Bildung;wage returns to education;college humanities major]
This chapter compares humanities scholars’ and economists’ views of the benefits of a college degree, finds them broadly compatible, and argues that an overreliance on quantitative evidence in policy-making sidelines both. Many of the benefits of higher education that humanities scholars emphasize are multifaceted and either exist within the recipient (e.g., Bildung) or take the form of large changes in social and political organization (e.g., social, including racial, justice). Thus, they are not directly observable, not reducible to statistics, or both. Economics, drawing on human capital theory, emphasizes that higher education changes behavioral outcomes, some of which are observable. The two views are compatible because the benefits enumerated by humanists not only coexist with but also cause the changed outcomes that economists emphasize. However, only some of the observable outcomes are quantifiable, and only a subset of those can be causally attributed to higher education using quantitative methods. Through a thorough literature review, this chapter demonstrates that overemphasizing the most readily quantified benefit, the wage returns to college, has warped funding models. These funding models undermine humanities majors, which benefit society and not just the person receiving the education, and they create inequity in cultural and intellectual experience. (pages 219 - 256)
This chapter is available at:
    University of Chicago Press
    https://academic.oup.com/chica...

Acknowledgments

References

Contributors

Index