The Role of Meta-Empirical Theory Assessment in the Acceptance of Atomism

The universal acceptance of atomism in physics and chemistry in the early 20th century went along with an altered view of the epistemic status of microphysical conjectures. Contrary to the prevalent understanding during the 19th century, on the new view unobservable objects could be ‘discovered’. It is argued in the present paper that this shift can be connected to the implicit integration of elements of meta-empirical theory assessment into the concept of theory confirmation.


1: Introduction
Theory confirmation is traditionally taken to be driven by the agreement of a theory's predictions with empirical data. 2 In Dawid (2006, 2013), it was proposed that the considerable degree of trust physicists have in recent years developed in empirically unconfirmed theories such as string theory can best be represented if one extends the concept of confirming evidence beyond the theory's intended domain (the domain that contains the data a theory's predictions can agree or disagree with). 3 In Dawid (2013, 2018), it has been suggested that considering a widened notion of epistemically relevant evidence can also be helpful for understanding the mechanism of empirical confirmation in modern science.
The present paper aims to demonstrate the confirmation-relevance of evidence beyond the theory's intended domain in an important chapter of the history of physical and chemical reasoning: the path towards the near-universal acceptance of atomism from the later 19th to the early 20th century.
In Section 2, I present the concepts of non-empirical and meta-empirical confirmation that introduce the confirmation-relevance of evidence beyond a theory's intended domain. Section 3 then spells out the core claims of this paper. After a sketch of the 19th century debate on atomism in Section 4, Sections 5-7 discuss three instructive moments in that debate. Sections 8-10 will then analyze the relevance of the discussed contexts for a historical view on meta-empirical theory assessment.

2: Non-Empirical Confirmation, Meta-Empirical Confirmation, and Meta-Empirical Assessment
It is argued in Dawid (2006, 2013) that a fully adequate appraisal of epistemic commitment in contemporary fundamental physics profits from distinguishing between empirical confirmation by evidence within the theory's intended domain and non-empirical confirmation by evidence beyond its intended domain. The most effective forms of non-empirical confirmation are based on assessing the spectrum of possible alternatives to the theory in question: meta-level observations about the research process serve as indicators of a scarcity of possible alternatives to the given theory. If a scientist has plausible reasons to infer from observations about the research process that scientific conceptual alternatives to a known theory are probably very scarce or absent, this provides an epistemic basis for trusting that theory. The subgroup of non-empirical confirmation that works along those lines has been given the name meta-empirical confirmation (MEC) in Dawid (2020).
According to Dawid (2006, 2013), scientists deploy three specific arguments of MEC when evaluating their theories in the absence of sufficient empirical confirmation: i) The no alternatives argument (NAA) 4 : Scientists tend to trust a theory if they observe that, despite considerable efforts, no alternative theory that can account for the corresponding empirical regime is forthcoming. ii) The unexpected explanation argument (UEA): Scientists tend to trust a theory if they observe that the theory turns out to be capable of explaining significantly more than what it was built to explain. iii) The meta-inductive argument (MIA): Scientists tend to have increased trust in a theory that fulfills the first or the first two criteria if it is their understanding that previous theories in their research field that satisfied those criteria had usually turned out empirically successful once tested.

Dawid (2018) stresses the point that MEC type reasoning is not confined to contexts where empirical evidence is missing. It is also of crucial importance for assessing the scientific significance of empirical confirmation. The spectrum of alternative theories that agree with the empirically confirmed theory with respect to the collected data but differ from its predictions in other contexts controls the extent to which empirical confirmation licenses trust in a theory's so far untested predictions in a given regime. Assessing the number of unconceived alternatives therefore is a precondition for relying on empirically confirmed theories. Assessments of this kind do not amount to meta-empirical confirmation because they rely on empirical evidence within the theory's intended domain. But they use the MEC mode of reasoning to establish the significance of empirical confirmation in the given context. We will therefore call the described mode of reasoning meta-empirical assessment (MEA).
In the absence of MEA, empirical confirmation can be formally achieved based on the agreement between the theory's predictions and collected data. The reliability of the theory's so far untested predictions cannot be established in that case, however, which gravely reduces the scientific value of confirmation.
When MEA is applied in the context of empirical confirmation, a fourth very important mode of reasoning is added to the three forms of MEC described before: novel confirmation. Novel confirmation is empirical confirmation by data that has not entered the construction process of the confirmed theory. The extra confirmational value of novel confirmation over accommodation (where the confirming data did inform theory construction) has long been extensively discussed in the philosophy of science (see e.g. Maher 1988, Mayo 1996, Worrall 2002, Barnes 2008). The context of MEA offers a fairly straightforward view on the issue (Dawid 2013a). Novel confirmation generates extra confirmational value based on assessing the spectrum of possible theories in a way similar to MIA: while MIA infers constraints on that spectrum from predictive success elsewhere in the research field, a similar but more powerful inference can be based on novel predictive success of the given theory itself if that theory has found empirical confirmation.
The relation between MEA and MEC is the following: MEA as a general form of reasoning plays an important role in all contexts of theory confirmation. Normally, it is deployed in the context of empirical confirmation to assess the reliability of an empirically confirmed theory. In special cases, however, MEA can stand on its own. In those cases, it turns into MEC and assumes the role of an independent mode of confirmation.
As it stands, striking examples of candidates for MEC seem to be confined to contemporary fundamental physics. The specific combination of strong conceptual constraints on consistent theory building on the one hand and the difficulty of finding conclusive empirical tests on the other makes MEC a significant and independent mode of reasoning in that specific context. Other scientific disciplines as well as previous stages in physics lack at least one of the two described ingredients. The basic lines of reasoning represented by MEA, by contrast, are quite generic and have played a significant role in various fields of science for a long time.

3: What this Paper Aims to Demonstrate
In light of the proposed role of MEA, two historical questions may be raised. First, if one aims to establish that MEA has often been of crucial importance for assessing a theory's status, it would be interesting to find historical cases that exemplify and demonstrate the significant role of MEA at earlier stages of natural science. Second, as suggested in Dawid (2009, 2013, 2019), the relevance of MEC in contexts like string theory and eternal inflation may be taken to suggest an actual shift in the concept of confirmation in fundamental physics: elements of MEA are acknowledged in today's fundamental physics as legitimate parts of theory assessment to a greater extent than was customary throughout much of the 20th century. In order to weigh the plausibility of that view, it would be helpful to understand whether comparable shifts of the perspective on confirmation, related to the role of MEA, have occurred at earlier stages in the history of science.
In the present paper, I will argue that the responses to those two questions are directly linked to each other. My analysis will focus on the rise of atomism in the late 19th and early 20th century that led to its near-universal acceptance around 1910. I will argue for two points. First, I will aim to make plausible that the step towards the general acceptance of the atomic hypothesis after Perrin's experiments is to a considerable extent based on MEA type reasoning. Second, I will argue that the element of MEA that was instrumental in establishing atomism eventually got integrated in the concept of theory confirmation. This step established a notion of theory confirmation that substantially differed from the 19th century view and has been prevalent in the physical sciences throughout the 20th century until today.

4: The 19th Century Debate on Atomism
The claim that the world consists of extremely small indivisible objects moving through space dates back to ancient Greek and Indian philosophy, where it was motivated by fundamental philosophical considerations. It regained popularity in early modern science and found increasing empirical support, both in physics and chemistry, in the 19th century. The 19th century also saw the emergence of a more rigid take on the testing of scientific hypotheses, which made it substantially more difficult for atomism to get credit as a testable, and therefore legitimate, scientific position. 5 This led to the peculiar situation that, at the very point when atomism turned into an empirically relevant hypothesis, a considerable part of the scientific community started doubting the hypothesis' scientific legitimacy.
Two main elements in anti-atomist reasoning at the time may be distinguished. First, it was argued that the naïve ontological commitment associated with the atomist hypothesis was at variance with the core message conveyed by the physics research process: the evolution of physics had led to ever-increasing conceptual abstraction and formalization, which suggested a representation of observed phenomena by mathematical structure without reliance on intuition-based ontological extrapolations from our dealings with everyday objects. Atomism, in this light, seemed old-fashioned and detrimental to conceptual progress in physics and chemistry. 6 The second argument, which will be more important for our analysis, addressed the threat of underdetermination of theories that relied on posits of unobservable objects. It was argued that it would be impossible ever to rule out or even render improbable the scenario that data that was in agreement with atomist conceptions could be represented just as well or better by theories that did not assume atoms. Moreover, the stability of atomist theorizing seemed to be systematically impaired by the existence of competing, mutually inconsistent atomist models among which it was impossible to decide on empirical grounds. 7 In this light, the argument went, it would never be possible to reliably establish posits about microphysical objects. 8 While posits on the existence or behavior of visible objects could be endorsed based on direct observation of those objects, posits of microphysical objects could be the basis for making predictions that agreed with observations but nevertheless, as a matter of principle, remained mere speculations. The ontology of a microphysical theory, in that light, could at most serve as an auxiliary construction deployed for making structural characteristics of the observed phenomena easier to grasp.
The understanding that there was an unbridgeable epistemic gulf between hypotheses on observable objects, which could be scientifically proven, and hypotheses about unobservable objects, which could not, was abandoned in the early 20th century as a direct result of the general acceptance of atomist theories in physics and chemistry. The concepts of scientific proof and verification were replaced by confirmation and discovery. 9 On the new understanding, unobservable objects could be discovered and theories about unobservable objects could be confirmed just as well as theories about observable objects. Attempts to retain the epistemic significance of the boundary between observable and unobservable objects retreated to the philosophical level and ended up being discussed in the context of the scientific realism debate. The scientific concepts of discovery and confirmation, however, were deployed on both sides of that philosophical divide.
In the following, I want to analyze three episodes in the history of late 19th and early 20th century atomism which, in my understanding, demonstrate the role of non-empirical reasoning in the process that eventually led to the universal understanding that atoms could be and had been discovered. The first two case studies exemplify two different ways in which the epistemic status of atomism was understood by the concept's exponents in physics and chemistry during the later decades of the 19th century. James C. Maxwell in his paper "On the Dynamical Evidence of the Molecular Constitution of Bodies" from 1875 tries to establish substantial epistemic support for atomism while, at the same time, conceding a qualitative difference between that support and the actual proof of a hypothesis by observation. 10 In the second case study, van 't Hoff, LeBel and Kekulé are significantly more forceful than Maxwell in asserting the reality of atoms based on the development of atomic models of isomers in 1874. The third case study looks at the fast emergence of a scientific consensus on the existence of atoms after Perrin's experiments of 1909/10. It will turn out that meta-empirical theory assessment provides a helpful basis for understanding the differences between the three cases.

5: Maxwell and his "Method of Physical Speculation"
Maxwell was one of the main exponents and defenders of physical atomism in the 19th century. His contributions to statistical mechanics were crucial for making contact between atomism and empirical precision data. In a programmatic text, Maxwell (1875) aims to make the best possible case for atomism against the considerable number of anti-atomist voices at the time. 11 Maxwell acknowledges a fundamental epistemic difference between theories about observable objects, such as Newtonian mechanics, and theories such as atomism which rely on posits of unobservable objects. In Maxwell's understanding, something akin to Newton's deduction from the phenomena is applicable to theories about observable objects. They can, in a sense, be proved by empirical data. If what we observe behaves consistently in the way the theory predicts, this means that the theory is true. This road is blocked for theories about unobservables, however. The latter must, for all eternity, face the threat that what they posit might not really exist. If someone came up with a theory that did not contain the objects posited by our theory but described our observations in a more accurate or predictively more powerful way than ours, we'd be led to admit that our theory was false. Maxwell thus admits that atomism cannot be proved (which one might roughly equate with "conclusively confirmed" in modern terminology). In this light, Maxwell concedes that the anti-atomist can plausibly claim that atomism should rather be called a speculation than a proven hypothesis.
8 "… the phenomena, the hypothesis is said to be verified, so long, at least, as someone else does not invent another hypothesis which agrees still better with the phenomena." (Maxwell 1875, p357)
9 We will, in the following, understand discovery as a declaration of conclusive confirmation.
10 Maxwell's use of the term 'proof' in the given context may be close to 'conclusive confirmation' in a modern terminology.
Maxwell insists, however, that a careful analysis of the case for atomism can nevertheless reveal substantial epistemic support for the hypothesis. In demonstrating this, Maxwell's guiding principle is to reduce as much as possible the speculative leap that leads towards atomist commitment. The scientist should, on Maxwell's view, hypothesize no more than what is essential for saving the phenomena. Maxwell writes: "Of all hypotheses as to the constitution of bodies, that surely is the most warrantable which assumes no more than that they are material systems, and proposes to deduce from the observed phenomena just as much information about the conditions and connections of the material system as these phenomena can legitimately furnish." By "material systems" Maxwell means systems that obey the fundamental physical laws (specifically Newtonian mechanics and electromagnetism) that had been well-established based on testing observable objects. A theory about unobservable objects, Maxwell suggests, should stick as closely as possible to what is known about comparable observable phenomena. In addition, Maxwell introduces two more requirements that pull in opposite directions. The theory needs to be specific enough to make interesting predictions that are empirically testable. But it should at the same time be adaptable to future observations, which increases the chances of the theory's long-term survival. 12
Physical atomism as proposed in the context of statistical mechanics, in Maxwell's understanding, satisfied the conditions he spells out very well. A series of small and natural steps led towards the atomist description used in statistical mechanics. The notion that there could be objects that were too small to be seen was well established based on empirically controlled objects close to that boundary (such as very small dust particles). The properties of atoms were modelled after well-understood properties of observable objects. The implications of these properties could be calculated based on statistical mechanics and then empirically tested in various ways, thereby satisfying Maxwell's second criterion. Still, many characteristics of atoms remained undetermined. Therefore, the approach avoided risky speculative leaps and retained flexibility in the face of future observations, as required by his third criterion. Maxwell concludes that all this in conjunction, though not equivalent to the near-deductive 'proof' of a theory about observable objects, did generate substantial epistemic support for atomism.
11 Maxwell's text and the epistemic significance Maxwell attributes to his line of reasoning have been analyzed in detail by Achinstein (2010, 2013).
12 Note the decidedly non-Popperian spirit of the last point.
Maxwell proposes a conservative understanding of the basis for the epistemic support for atomism. On his view, legitimate trust in the atomist hypothesis is based on minimizing the risk of going astray by avoiding daring speculation, on emulating what has worked before and on finding various ways of checking the coherence and empirical viability of the resulting hypothesis. While this strategy of reasoning, in the eyes of Maxwell, achieves a certain level of epistemic support for atomism, there remains the fundamental problem that hypotheses about unobservable objects, even if they avoid unnecessary speculative excesses, still enter uncharted territory where, as a matter of principle, there is no way of ruling out that an alternative but empirically roughly equivalent theory represents the truth about microphysics. To the critic who raises this worry, Maxwell's response is considerate but somewhat timid. Maxwell concedes that alternative explanations of our observations cannot be excluded, which is why we cannot speak of a proof (conclusive confirmation) of atomism. But he points out that our intuitions on the continuity between the observable and the unobservable regime render that risk reasonably small.

6: The Case of Isomers
Atomism made its entry into chemistry when John Dalton suggested that the law of definite proportions could be understood in terms of bound states of atoms. The atomist perspective was developed further in a number of steps. Of particular importance was Kekulé's proposal of the benzene ring, which strongly suggested an interpretation in terms of a spatial structure of atoms. Based on Kekulé's proposal, van't Hoff (1874) and LeBel (1874) independently proposed a solution to the problem of isomers in terms of spatial atomic structure. 13 Isomers are chemical substances that consist of the same chemical elements in the same proportions but nevertheless differ in their chemical properties. Given that they consisted of the same elements, it seemed natural to represent the difference between isomers by positing different characteristics of binding between those elements. Van't Hoff and LeBel understood that such different characteristics could be achieved by assuming different relative spatial positions of the same atoms in a molecular bound state.
At first glance, the arguments in favor of an atomist explanation of isomers seem to be of a similar kind to those deployed in Maxwell's case for atomism in the context of statistical mechanics. As in Maxwell's case, the arguments for an atomist explanation of isomers rely on intuitions derived from our experiences with everyday objects and, just as in Maxwell's case, they involve consistency checks and comparisons between the hypothesis' predictions and empirical data.
On closer inspection, however, a significant new element can be identified: the defense of an atomist view on isomers relied on the self-confident use of no-alternatives reasoning on abstract grounds. While Maxwell tried to control the epistemic risk of atomism by minimizing the speculative element of the new posits, the exponents of atomist models of isomers fully embraced the creative leap that came with their hypothesis. 14 15 The idea that atoms could only be bound into molecules according to a small set of allowed connections and spatial structures could not be extrapolated from anything one knew about small observable objects. (In the end, this feature found a satisfactory explanation only in the context of quantum mechanics.) Rather, the idea was motivated entirely by the need to explain the observed chemical phenomena. In other words, the posits of such sets of allowed spatially oriented molecular structures did not shun speculative leaps for the sake of minimizing the risk of failure. The posits' exponents did not aim to increase their trustworthiness by keeping them as close as possible to what one knew about observable objects. Rather, the viability of the hypothesis was defended based on a full-fledged no alternatives argument: there just seemed to be no other way of making sense of the phenomenon of isomers. The status of the hypothesis became stronger with the growing conviction that indeed there was no other plausible conceptual representation of the phenomenal differences between different isomers.
The no alternatives arguments involved two core points. First, in the absence of spatial positioning there seemed to be no degree of freedom available at all to represent differences between substances that consisted of the same elements with same proportions. Second, as had been observed by Louis Pasteur, different isomers of salts of tartaric acid rotated the polarization axis of polarized light in different ways. Given that light polarization was understood to be a spatial phenomenon, it seemed difficult to imagine any physical representation of the effect of isomers on polarization that was not based on spatial characteristics of the differences between the isomers themselves. Once one conceded that the spatial characteristics of the substance were crucial in both cases, it was exceedingly difficult to imagine a representation of the spatial characteristics of isomers that was not based on an atomist perspective. In conjunction, the described steps of reasoning thus suggested that the phenomenology of isomers left no alternative to assuming molecules extended in space.
The atomist view on isomers was additionally strengthened by successful predictions with regard to the number of isomers of particular chemical substances. The logic of the spatial analysis often allowed clear-cut conclusions regarding the set of possible spatial orientations for molecules consisting of a given set of atoms, which then translated into predictions regarding the numbers of isomers. Many of those predictions could be tested and confirmed in the years after the atomist explanation of isomers had been presented. 16
14 A manifestation of this speculative attitude may be seen in August Kekulé's famous quote recounting his conceptualization of the benzene ring: "Let us learn to dream, gentlemen, and then perhaps we shall learn the truth." (Kekulé 1890)
15 For an in-depth analysis of the role of imagination in late 19th century chemistry, see Rocke (2010).
16 Isomers already provided a basis for coherent MEA type reasoning in favor of atomism before the advent of stereochemistry. Even without stereochemical arguments, atomist accounts of isomers correctly predicted the number of isomers for specific chemical compounds (Rocke 2001). Those predictions were instances of successful novel prediction. Moreover, they allowed for NAA type reasoning since no such predictions seemed possible without atomist reasoning. I am grateful to Alan Rocke for pointing this out to me.
The opponents of atomism could and did still resort to invoking the threat of unconceived alternatives and characterize the full endorsement of atomism as unlicensed dogmatism on that basis (see e.g. Berthelot as described by Rocke (2001)). Obviously, there could be no proof that no fully satisfactory non-atomist representation of isomers was possible. In the end, the chemist had to make up her mind whether the combination of no-alternatives reasoning and cases of novel confirmation provided a sufficiently strong basis for the conjecture that atomism indeed provided the only possible framework for giving a satisfactory representation of isomer phenomenology.
The described mode of reasoning is a far cry from Maxwell's defensive idea that one could establish some epistemic reliability of a hypothesis about unobservable objects by keeping the hypothesis similar to our understanding of observable contexts. In the case of isomers, the debate focused on the logical space of scientific possibilities, while the issue of similarity to observable contexts had become all but irrelevant.

7: Perrin and the Universal Acceptance of Atomism
The case of isomers did not lead to the universal acceptance of atomism. Generally, the debate on chemical atomism seems to have had only limited impact on the debate on atomism in physics. But even in chemistry itself, a considerable faction of anti-atomists remained unconvinced by the reasoning offered by van't Hoff and LeBel.
The final breakthrough of atomism as the universally accepted paradigm in microphysics and chemistry happened in the context of physics more than a generation after the atomist explanation of isomers had been proposed. In a series of experiments, Jean Baptiste Perrin (1910) demonstrated that roughly the same value for the Avogadro number could be extracted from measurements of Brownian motion in a number of different ways. Each of those strategies involved a different and distinct element of modern physics, including different aspects of the mechanics of colliding bodies and the theory of gravity. All of them, however, depended crucially on the assumption that the Avogadro number denoted the number of molecules in a mole of substance. That is, all strategies crucially depended on atomism. Perrin's set of experiments demonstrated that a number that characterized an aspect of the posited unobservable objects could be arrived at based on a broad spectrum of lines of reasoning that addressed a wide range of physical properties attributable to those unobservable objects. Only an atomist point of view could plausibly bundle those lines of reasoning in a way that turned them into statements about the same subject matter (that is, the number of molecules in a mole). In the absence of an atomist view there seemed to be no way of understanding why Perrin's various investigations did not extract a set of different and independent characteristic numbers. Therefore, it seemed outright miraculous on a non-atomist view that Perrin's various lines of analysis delivered roughly the same number.
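Two of the independent routes to the Avogadro number can be sketched as follows, in modern notation rather than in Perrin's own presentation (the formulas are the standard textbook reconstructions, not quotations from Perrin 1910):

```latex
% Route 1: sedimentation equilibrium of colloidal particles.
% The number density n(h) at height h obeys a barometric law whose
% exponent contains N_A (m' is the buoyancy-corrected particle mass):
n(h) = n_0 \exp\!\left(-\frac{N_A\, m' g h}{R T}\right)
\quad\Rightarrow\quad
N_A = \frac{R T}{m' g h}\,\ln\frac{n_0}{n(h)}

% Route 2: Einstein's relation for the mean squared Brownian
% displacement over time t (\eta: fluid viscosity, r: particle radius):
\langle x^2 \rangle = 2 D t,
\qquad
D = \frac{R T}{6 \pi \eta r\, N_A}
\quad\Rightarrow\quad
N_A = \frac{R T\, t}{3 \pi \eta r\, \langle x^2 \rangle}
```

The point emphasized in the text is visible here: the two measured quantities, a density ratio in a gravitational field and a mean squared displacement, are physically quite different, and only the atomist interpretation of $N_A$ forces them to deliver the same number.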
The canonical view on the power of Perrin's experiments focuses on the implausibility of the idea that the coherent results for the Avogadro number would be a mere coincidence. 17 While the implausibility of the coincidence scenario is an important element of support for atomism, it is not sufficient on its own. Even if one rules out mere coincidence as an explanation of the coherence of the results for the Avogadro number, the possibility remains that another, non-atomist explanation could account for that coherence as well. Therefore, two further important steps are required for making a convincing case for atomism based on Perrin's results. First, one needs to establish that no non-atomist explanation other than mere coincidence has been found. Second, it needs to be established that the existence of unconceived alternative explanations can be considered very unlikely. The latter point amounts to a full-fledged no-alternatives argument. The unanimous endorsement of atomism after Perrin's experiments can only be understood if one assumes that the no-alternatives argument was implicitly acknowledged to be very convincing by most scientists at the time.
Perrin's reasoning played out within the "low epistemic risk" context of Maxwell's statistical mechanics. It did not involve the high-flying conjectures on the detailed structures of complicated molecules that were deployed in the context of isomers and stereochemistry. But unlike in the case of Maxwell's reasoning, the power of Perrin's arguments lay to a high degree in the invocation of a no alternatives argument: it seemed just utterly implausible to assume that there was a non-atomist representation of Perrin's experiments that could explain the coherent extraction of the Avogadro number. In a broader context, it seemed even more implausible to assume that there was a non-atomist set of theories that could explain Perrin's coherent measurements of the Avogadro number plus the steadily increasing spectrum of physical and chemical phenomena that did find a convincing explanation on an atomist basis.
The viability of that judgement was supported by a meta-inductive argument (MIA): In another context of microphysical research, the investigation of cathode rays, the hypothesis that those rays consisted of charged particles (electrons) had been painstakingly established by J. J. Thomson in 1897 based on excluding all alternative explanations that came to mind. Endorsing this result, however, was once again based on the judgment that one understood the given research context sufficiently well to carry out the inference from the exclusion of all known alternative explanations to the statement that there were no other unconceived explanations. Only the second step could justify trust in the hypothesis and its more far-reaching predictions. In other words, a NAA-type argument also played a crucial role in endorsing the electron hypothesis based on Thomson's experiments. The later success of the electron hypothesis in an increasing number of experimental tests, from measurements of the mass of electrons to measurements of their traces in the earliest cloud chamber setups, demonstrated the stability of the hypothesis in a wide range of physical contexts. This instilled trust in the reliability of well-considered no-alternative claims in the emerging field of microphysics. The trust generated by success stories of no-alternatives reasoning in cases such as the electron could then be used to bolster trust in no-alternatives claims in the context of Perrin's experiments.
One may also identify a substantial element of support for atomism that is related to unexpected explanation (UEA) and novel confirmation. At the time of Perrin, atomism had already accrued a wide range of instances of novel confirmation. One might mention Dalton's prediction that mass ratios between different compounds that consisted of the same elements could be represented in terms of small integer numbers; van 't Hoff's predictions of spectra of isomers; Maxwell's prediction of the pressure independence of gas viscosity at medium pressure; or the predictions regarding details of Brownian motion that could be extracted from Perrin's analysis.
Beyond these specific cases of novel confirmation, the striking and coherent range of applicability across a wide number of research issues in chemistry and physics might be framed in terms of UEA. 18 Novel confirmation arguments as well as UEA further supported the hypothesis that no equally satisfactory alternative hypotheses to atomism could be constructed.

8: Integrating Meta-Empirical Theory Assessment into Empirical Confirmation
As described in the previous section, the general acceptance of atomism in the early 20 th century relied on a continuation of the strategies of MEA that had been deployed already in the context of isomers and stereochemistry from the 1870s onwards. However, one important difference between the two cases must be noted.
In the isomer case, MEA was used to generate trust in the theory without making the explicit step towards equating the status of atoms with the status of observed phenomena. Even supporters of atomism accepted a fundamental epistemic difference between a directly observed phenomenon such as the attractive force exerted by a magnet and a theoretically conjectured object such as the atom. The former had been empirically proven (which would roughly correspond to "discovered" in modern terminology). The latter could be considered likely to exist based on meta-empirical assessment, but still remained an unproven (and maybe unprovable) hypothesis. This perspective was reflected by the fact that two aspects of theory analysis were still kept apart by the exponents of isomer atomism. There was the observation of the predictive success of a three-dimensional atomic representation of isomers on the one hand. And there was the recognition that no alternative account seemed capable of being similarly successful on the other. The first observation suggested that an atomist investigation of isomers constituted a progressive research program. The second observation, in the eyes of many of the approach's exponents, did suggest that chemical atomism was probably true. But their commitment to the latter claim was not considered sufficiently stable for declaring atomism conclusively established beyond the context where it had been found to be successful and without alternatives. It seems difficult to find any voice asserting in the late 19 th century that the success of an atomist theory of isomers in itself decisively supported atomism in conceptually very different research contexts, such as thermodynamics.
Perrin's experiments changed this situation. The consensus emerged that the web of empirical evidence and predictive and explanatory success in conjunction with the described meta-empirical assessment of the situation sufficed for acknowledging the discovery of the atom. Henceforth, atoms were viewed as a discovered phenomenon just like the magnetic force. After Perrin, the meta-empirical assessment that established the reliability of theoretical hypotheses that had been successfully empirically tested was implicitly accepted as the foundation for declaring empirical confirmation of a hypothesis conclusive. The general acknowledgement of the discovery of atoms from that point onwards in itself licensed the deployment of the atomist paradigm in all contexts where it was relevant, be it in other contexts of physics, such as thermodynamics, or in chemistry.
I suggest that one crucial conceptual step is responsible for the described shift. Meta-empirical analysis after Perrin was no longer viewed as an extraneous element of reasoning that could, beyond the scientific question of the progressive nature of a research program, influence the scientist's commitment to the theory's truth. On the contrary, it was now implicitly accepted as an integral part of the process of theory confirmation itself. Empirical confirmation after Perrin consisted of two elements. First, there was the observation of the agreement between the theory's predictions and the empirical data. And second, there was meta-empirical theory assessment that provided the basis for assuming that the empirically successful theory, with respect to its core tenets, most probably did not have any possible alternatives. Once that point had been established, the posited unobservable objects could be treated on par with empirically confirmed observable phenomena. It had become possible to discover unobservable objects based on empirical testing. This fundamental step had substantial implications for the scientific research process. Given that the existence of alternatives to atomism as the basis for microphysical and chemical theory building had been ruled out, future theory building could be anchored within that framework without further justification. This implied that the evidential threshold for establishing future hypotheses within that framework was substantially lowered. The announcements of discoveries of elementary particles could increasingly rely on a well-established set of conditions that had to be met for announcing a discovery. This development allowed physicists to infer the existence of microphysical objects from an increasingly narrow evidential base.
When the discovery of the Higgs particle was announced in 2012 at the LHC, the announcement was based on a 5σ effect extracted from one specific experimental setup that included two detectors at one particle collider. Extracting that effect was based on a subtle and complex web of elementary particle theories and collider physics that was, through many twists and turns, rooted in the old core idea of atomism. Taking this entire web of theories for granted was necessary for trusting the Higgs result. Physicists took it for granted based on their understanding that there just was no possible alternative set of theories that was in agreement with the previously collected empirical data in high energy physics but played out differently in the context of the Higgs discovery (Dawid 2015, 2017). The understanding that a theoretically highly charged empirical result extracted in one single experimental setup could conclusively establish the existence of a Higgs particle is a far cry from the extensive web of different mutually independent forms of physical analysis that had been necessary for acknowledging the discovery of the atom a century earlier.
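For orientation, the 5σ discovery threshold invoked above is a standard statistical convention: it corresponds to a one-sided tail probability of roughly 3 × 10⁻⁷ under a normal distribution. The short sketch below simply computes that number; it is an illustration of the convention, not part of the historical argument.

```python
import math

def one_sided_p_value(n_sigma: float) -> float:
    """One-sided Gaussian tail probability for an excess of n_sigma standard deviations."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

p = one_sided_p_value(5.0)
print(f"5 sigma -> p = {p:.2e}")  # roughly 2.9e-7
```

The tiny tail probability makes vivid how much evidential weight the particle physics community is prepared to place on a single statistically significant effect, given the theoretical background assumptions discussed above.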
Atomism and the Higgs discovery are two particularly conspicuous examples of the role of MEA in modern empirical confirmation. I suggest that they represent a mechanism that is generally at work in modern empirical confirmation in physics and beyond. Further in-depth analysis of the way the actual process of empirical confirmation plays out in various research contexts would be needed to test the general viability of that claim, which lies beyond the scope of the present article.

9: Connecting the Three Examples
We have compared three main instances of the debate on atomism where trust in the atomist hypothesis was generated due to the approach's empirical and predictive success. Maxwell could reproduce and predict several characteristics of the viscosity and specific heat of gases based on statistical mechanics. Van 't Hoff and Le Bel could reproduce and predict spectra of isomers based on a three-dimensional model of molecular structure. A generation later, Perrin extracted a coherent value of Avogadro's number from various independent aspects of an atomist model of Brownian motion. While the core inference to the viability of atomism from impressive predictive success was similar in all cases, the lines of reasoning were quite different.
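One of the independent routes by which a value of Avogadro's number could be extracted from Brownian motion can be made concrete via Einstein's 1905 relation, which links the mean squared displacement of a suspended granule to N_A: ⟨x²⟩ = (RT / (3πηa N_A)) · t. The sketch below solves this relation for N_A; the numerical inputs are illustrative order-of-magnitude stand-ins chosen for a water suspension, not Perrin's actual measured data.

```python
import math

# Einstein's relation for Brownian motion:  <x^2> = (R * T / (3 * pi * eta * a * N_A)) * t
# Solved for Avogadro's number N_A, given a measured mean squared displacement.

R = 8.314      # gas constant, J / (mol K)
T = 293.0      # temperature, K (room temperature)
eta = 1.0e-3   # viscosity of water, Pa s
a = 0.5e-6     # granule radius, m (order of magnitude of Perrin's gamboge grains)
t = 30.0       # observation time, s
msd = 2.6e-11  # illustrative mean squared displacement, m^2 (hypothetical value)

N_A = R * T * t / (3 * math.pi * eta * a * msd)
print(f"N_A = {N_A:.2e} per mol")  # on the order of 6e23
```

The force of Perrin's case lay precisely in the fact that several such independent routes, resting on different observable aspects of the suspended granules, converged on mutually consistent values of N_A.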
Maxwell's argument from continuity laid out in Section 3 is not meta-empirical in the sense spelled out in this paper. It remains at the ground level of registering predictive success, comparing conceptual characteristics of scientific theories on both sides of the observability divide and evaluating the speculative boldness and conceptual flexibility of the theoretical approach he assesses. At no point does Maxwell aim to infer the theory's viability from contingent facts about the research process. Maxwell's argument has obvious limitations, however. It constrains the range of reliable microphysical reasoning by confining it to characteristic scales close to the limits of observability and to theories that remain conceptually close to the physics of the observable regime. Therefore, it offers no basis for endorsing the daring and counter-intuitive conceptual leaps of 20 th century microphysics.
In the two other cases, the continuity argument has lost its central role. The theory of molecular structure deployed in the isomer case is highly speculative in assuming fixed angles and specific valence of atoms without being able to infer the plausibility of those posits from the available physics about observable objects. Its viability therefore could not possibly be argued for along the lines proposed by Maxwell. Brownian motion does live at the interface between the observable and the unobservable regime, which does make Perrin's research context open to Maxwell-type continuity arguments. Nevertheless, continuity seems to play merely a secondary role in Perrin's case for atomism. What is at the core of the atomist conviction in the isomer case and in the case of Perrin's experiments is something fundamentally different: the perceived near-exclusion of the possibility that alternative equally successful scientific representations may exist that can account for the convergence of Perrin's results. This perception is arrived at based on meta-empirical reasoning.
In conjunction, the analysis of the three cases conveys the following picture. Mid-19 th century physicists and chemists largely endorsed the understanding that there was an irreducible difference in epistemic status between theories that relied on unobservable objects and those that did not. Atomists like Maxwell in essence shared this understanding but considered it inadequate, in light of atomism's record of empirical success, to call the hypothesis a mere speculation. Maxwell therefore tried to extract epistemic support for atomism from the approach's conceptual conservatism and the continuity argument. This approach had to remain unsatisfactory, however, because it did not address the core epistemic problem of microphysical hypotheses, which was the problem of underdetermination.
The case for an atomist understanding of isomers and Perrin's experiments did directly attack this problem by substantially decreasing the credence in the existence of unconceived alternatives based on meta-empirical reasoning. This step freed the epistemic support for atomism from its dependence on closeness to the observable sphere. The integration of MEA into the mechanism of empirical confirmation finally provided the basis for the empirical confirmation of highly unintuitive and speculative theories about unobservable objects, whenever those theories were supported by empirical evidence and the lack of alternatives was established to a sufficient degree by MEA. It laid the grounds on which later scientists were able to endorse and trust the far-reaching theories of quantum physics and chemistry in the 20 th and 21 st century.

10: Conclusion
Let us return to the pair of questions posed in Section 3. The first question builds on the idea that MEA plays an essential role in establishing the significance of empirical confirmation. Are there historical cases that clearly demonstrate this role of MEA?
This paper's analysis has provided one reason why such examples are less straightforward to identify than one would think at first glance. As discussed above, MEA was fully integrated into the physicist's notion of confirmation after Perrin's experiments. For that reason, it is not treated as an independent element of the confirmation process but rather is used as an implicit background assumption that provides the basis for the physicist's understanding of the significance of confirming data. More specifically, the lack of alternatives to the deployment of the dominant paradigm of microphysical objects has turned into the default expectation for all microphysical reasoning below the Planck scale.
The extent to which the full integration of MEA into the concept of empirical confirmation has led to its underappreciation in the philosophy of science is exemplified by the hypothetico-deductivist view that characterizes confirmation exclusively in terms of successful empirical prediction and does not account for the assessment of the spectrum of possible alternatives at all.
But MEA's full integration in the concept of empirical confirmation has made its implicit use largely invisible to the scientist as well. This may in part explain the conflicting reactions to the way the role of MEC has been judged in the context of string theory. MEC seems perfectly in line with conventional scientific reasoning to most of those who work on string theory and develop their assessments of the theory implicitly within their everyday research process. But it has been considered a violation of scientific standards of theory evaluation by many scientific observers who judge the motivations for having trust in the theory "from the outside" in terms of their explicit understanding of the mechanism of scientific theory confirmation, an understanding that is insensitive to the background role of MEA.
Section 3 furthermore raised the question whether there are conspicuous historical precursors of the rise of MEC, where shifting attitudes towards theory assessment were also triggered by a different take on MEA-type reasoning. The presented historical analysis suggests that the current situation in fundamental physics indeed constitutes the second MEA-induced shift of the concept of confirmation that follows an earlier very substantial shift at the beginning of the 20 th century.
Before that first shift, empirical proof was confined to theories representing observable phenomena, while even the most convincing posits about unobservable objects could only be called well-motivated speculations. After the shift, the concepts of confirmation and discovery became applicable to unobservable objects.
It is important to emphasize that the modern concept of discovery is used, in the scientific realm, without realist implications. The discovery of a microphysical object neither implies the finality of the corresponding theory nor the fundamental existence of the discovered objects in a realist sense. 19 Discovery merely implies that, within a given empirical regime, all empirical implications of the observed object can be expected to be consistent with future empirical testing within that regime.
Empirical confirmation and discovery are supported by MEA but remain based on data within the theory's intended domain. This "controlled extension" of the concepts of discovery and conclusive confirmation is based, as described above, on the implicit integration of MEA into the confirmation process. Discovery is possible only if MEA has established that the existence of unconceived alternatives that could explain the given data without positing the discovered object is very improbable. The discovered object, however, must be causally linked to collected empirical data.
The step towards meta-empirical confirmation (MEC) amounts to a further step in the same direction. MEC generates trust in a theory's viability in the absence of data within the theory's intended domain that confirms the theory's characteristic hypotheses (that is, hypotheses whose empirical implications reach beyond the empirical implications of its well-confirmed effective theories). While the meta-empirical aspect of theory evaluation has been hidden in the context of the discovery of unobservable objects, it is back in full sight in the context of MEC, since it plays the primary role in the absence of confirming empirical data. This is one reason why the continuity between empirical and meta-empirical confirmation is difficult to discern.
There are important differences between the earlier step towards acknowledging the discovery of unobservable objects based on MEA and the recent shift towards taking MEC seriously. Discoveries of unobservable objects after 1910 were just as stable as endorsements of theories about observable phenomena had been in the 19 th century. Giving a stronger role to MEA thus did not weaken the concept of confirmation. MEC, by contrast, does look substantially weaker than conclusive empirical confirmation. Once the modern concept of empirical confirmation had been introduced, the reliability of discoveries of unobservable objects made the distinction between confirmation of theories about observable and unobservable objects scientifically irrelevant. In contrast, the distinction between MEC and empirical confirmation remains of crucial importance today because it indicates a substantial difference in confirmation strength.
Empirical confirmation remains the only path to conclusive confirmation. MEC is a second-best option that can be deployed under specific circumstances as long as empirical confirmation is not forthcoming. Even on the most optimistic current view on MEC this point remains undisputed. But how much confidence should we have in our current views on MEC? If we try to push the imperfect analogy between the earlier shift towards a modern notion of empirical confirmation and the considered recent shift towards MEC a little further, the latter would not be at a stage comparable to the state of empirical confirmation after Perrin's experiments. It may rather be comparable to the dawning of the modern concept of empirical confirmation right after the advent of stereochemistry. Whether there will be a Perrin moment for MEC down the road, whether MEC will, on the contrary, lose its momentum, or whether the status of MEC will end up somewhere in between those two scenarios, is impossible to predict today. Few in the 1870s foresaw the upcoming consensus about the discovery of the atom that became reality just one generation later. But new science tends not to follow an old script.