This paper relates the topic of truth approximation to recent work in social epistemology. Footnote 1 In particular, we show that much of the recent work on opinion dynamics indicates that social aspects of research strategies should be of direct interest to those working on the issue of truth approximation. Conversely, we argue that in the debate concerning peer disagreement, to which social epistemologists have devoted much attention in recent years, issues related to truth approximation have been unduly neglected, and that progress is to be expected from considering, from a truth-approximation perspective, the question of whether an overt disagreement among parties who take each other to be epistemically equally well positioned with respect to a given issue can be rationally sustained.

It is no exaggeration to say that the bulk of the work related to truth approximation has focused on getting the relevant definitions right: what exactly does it mean to say that a given theory is close to the truth, or that one theory is closer to the truth than another? This project turned out to be much harder than it at first appeared. The literature now contains various sophisticated proposals for defining the notion of truth-closeness as well as for defining related notions (e.g., estimated truth-closeness). We will have nothing new to say about these proposals, or indeed about how best to define truth-closeness. Whenever we assume a particular definition of truth-closeness in this paper, the context will be simple enough to make the assumed definition entirely uncontroversial. Footnote 2

Some of those interested in truth approximation have gone beyond the definitional work and investigated the question of which method or methods are most efficient for approximating the truth. Footnote 3 This line of work connects the issue of truth approximation to confirmation theory. We aim to address the question of efficiency in approximating the truth from a different angle, specifically, a socio-epistemic one.

Modern science is fundamentally, perhaps even essentially, a social enterprise, in that most of its results not just happen to be due to, but require for their obtainment, the collaborative efforts of (often large) groups of scientists. At the same time, science is clearly a goal-directed enterprise. Specifically, it is plausible to assume that the aim of scientific practice is to approach the truth. From a social engineering perspective, then, the question arises how scientific collaboration is best organized in order to achieve this goal: which forms of collaboration and interaction among scientists will be most conducive to approaching the truth? However, more specific questions will also crop up. Supposing that it is desirable to approach the truth both quickly and accurately, is there one way of organizing collaborative research that will ensure both maximal speed and accuracy, or will there have to be trade-offs? And if the latter should turn out to be the case, how should collaborative research proceed when speed is important, and how should it proceed when accuracy is?

Given the multifarious ways in which, in actual scientific practice, scientists collaborate and interact, one should not expect a general answer to these and kindred questions. In any event, it is not our purpose to provide such answers. Rather, we will concentrate on the efficiency (or otherwise) of what is likely to be a consequence of collaboration and interaction among scientists, to wit, that a scientist’s belief on a given issue tends to be influenced to at least some extent by his or her colleagues’ beliefs on the same issue. While this kind of mutual influencing may be inevitable, it is not a priori that it helps to attain the scientific goal of truth approximation. Perhaps this goal would be served better by scientists’ going strictly by the data and ignoring altogether their colleagues’ opinions on the matter of interest. Or perhaps in the end it makes no difference whether or not there is this mutual influencing in scientific communities. The first part of the paper surveys recent work in social epistemology that bears on the question of what contribution to the achievement of the designated goal (if any) is made by the fact that scientists frequently allow their beliefs to be affected by those of their colleagues, or at least by those of some of their colleagues. Even though this is a narrowly focused question, we nevertheless hope that it will bring into relief the relevance of social epistemology to the topic of truth approximation.

As said above, we believe that the converse holds as well, in that issues relating to truth approximation are immediately relevant to one of the currently central debates in social epistemology, namely, the debate concerning the possibility of rational sustained disagreement among peers. So far, this issue has mainly been studied from a static or synchronic perspective: only single cases of disagreement have been considered, and authors have asked what the best response in those cases would be. We take instead a more dynamic or diachronic perspective on the matter by considering strategies for responding to disagreements and asking how these strategies fare, in terms of truth approximation, in the longer run, when they are applied over and over again to newly arising cases of disagreement. This approach is reasonable: if, as most participants in the debate on peer disagreement hold, there is an a priori answer to the question of whether or not it is rational to stick to one’s opinion in the face of disagreement with a person one regards as one’s peer, the answer will imply how one should respond to an overt case of peer disagreement whenever it arises. In other words, the answer will imply an epistemic policy, and it makes sense to assess such policies in terms of their conduciveness to the scientific goal of approximating the truth.

1 Truth Approximation in Models of Opinion Dynamics

In the past decade, social epistemologists, but also computer scientists and physicists, have developed models to study the epistemic behavior of communities of agents who update their beliefs at least partly as a function of the beliefs of some or all other agents in the community. Questions that have been asked about such communities concern the conditions under which the beliefs of the initially disagreeing agents converge or tend to converge, partly or fully, as well as those under which the beliefs tend to polarize. Some of these models postulate a truth which the agents can try to determine via evidence they receive. In those models, belief updates are functions either of that evidence alone or of that evidence in combination with the beliefs of other agents. Here the main questions asked all concern the conditions under which the agents’ beliefs converge, or can be expected to converge, to the truth. While some analytical results have been obtained in this field, most results by far come from computer simulations. Models for studying opinion dynamics in this fashion have been developed by a number of researchers, including Deffuant et al. (2000), Weisbuch et al. (2002), and Ramirez-Cano and Pitt (2006). But arguably the most elegant, and in any case the best-known, model of this type was developed by Hegselmann and Krause in a series of seminal papers. Footnote 4 In the following, we begin by describing the original Hegselmann–Krause (HK) model and then go on to survey some recent extensions of it, which aim to make the model more realistic in a number of respects. These extended models will be seen to offer at least a beginning for studying the effect that various forms of socio-epistemic interaction among scientists may have on how well they will do with respect to the scientific goal mentioned earlier.

All versions of the HK model to be considered here study the opinion dynamics of communities of agents who are individually trying to determine the value of a certain parameter, where the agents know antecedently that the true value lies in the half-open interval 〈0, 1]. In the simplest version of the model, the agents simultaneously update their beliefs as to the value of the parameter by averaging over the beliefs of those other agents in the community whose beliefs are within a distance of ɛ from their own, for some given ɛ ∈ [0,1] (henceforth to be called “neighbors”; note that every agent counts as its own neighbor). To illustrate, Fig. 1 shows the evolution of the beliefs of twenty-five agents who, starting from different initial beliefs, repeatedly update in the said way, and where ɛ = .1. What we see is that the opinions split into a number of groups: opinions within each group converge, while the groups themselves remain clearly separated.

Fig. 1 Repeated updating by averaging
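To make these dynamics concrete, the following minimal Python sketch implements the simple averaging update just described; the function name, the number of agents, and the random initial opinions are our own illustrative choices rather than part of the original model.

```python
import random

def hk_step(opinions, eps):
    """One round of the simple HK update: each agent adopts the average
    opinion of all agents (itself included) within distance eps of its own."""
    updated = []
    for x in opinions:
        neighbors = [y for y in opinions if abs(x - y) <= eps]
        updated.append(sum(neighbors) / len(neighbors))
    return updated

# Illustration in the spirit of Fig. 1: twenty-five random initial opinions,
# eps = .1, updated repeatedly; the opinions typically settle into a few
# internally converged but mutually separated clusters.
opinions = sorted(random.random() for _ in range(25))
for _ in range(15):
    opinions = hk_step(opinions, eps=0.1)
print([round(x, 3) for x in opinions])
```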

But most of Hegselmann and Krause’s work concerns a more interesting version of their model. In this version, the agents update their beliefs not just on the basis of the beliefs of their neighbors, but also on the basis of direct information—say, experimental evidence—they acquire about the value of the parameter they are trying to determine. Specifically, they update their beliefs by taking a weighted average of, on the one hand, the (straight) average of the beliefs of their neighbors and, on the other hand, the reported value of the targeted parameter. To put this more formally, let τ ∈ 〈0,1] be the value of the parameter, \(x_i(t)\) the opinion of agent \(x_i\) at time t, and α ∈ [0,1] the weighting factor. Further, let \(X_i(t)\) be the set of agent \(x_i\)’s neighbors at t—that is, \(X_i(t) := \{j : |x_i(t) - x_j(t)| \leqslant \varepsilon\}\)—and let \(|X_i(t)|\) be the cardinality of that set. Then the belief of agent \(x_i\) at time t + 1 is given by the following equation:

$$ x_i(t + 1) = \alpha\frac{1}{\left|X_{i}(t)\right|} \sum_{j\in X_{i}(t)}x_j(t)+(1-\alpha)\tau. $$
(1)

For purposes of illustration, we set τ = .75, α = .5, and ɛ = .1. Then if the agents considered in Fig. 1 updated their beliefs by (1), these would evolve over time as shown in Fig. 2. We now see convergence of all opinions. Footnote 5

Fig. 2 Repeated updating with evidential input
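As a sketch of update rule (1), again our own illustration rather than the authors’ implementation, the following extends the previous snippet; with τ = .75, α = .5, and ɛ = .1 it produces the kind of dynamics shown in Fig. 2.

```python
import random

def hk_step_evidence(opinions, eps, alpha, tau):
    """One round of update rule (1): each agent takes a weighted average of
    the mean opinion of its neighbors (weight alpha) and the true value tau
    (weight 1 - alpha)."""
    updated = []
    for x in opinions:
        neighbors = [y for y in opinions if abs(x - y) <= eps]
        social = sum(neighbors) / len(neighbors)
        updated.append(alpha * social + (1 - alpha) * tau)
    return updated

# With these settings, all opinions are eventually pulled to tau = .75.
opinions = sorted(random.random() for _ in range(25))
for _ in range(25):
    opinions = hk_step_evidence(opinions, eps=0.1, alpha=0.5, tau=0.75)
```

Note that setting α = 1 recovers the pure averaging dynamics of Fig. 1.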

To clarify the idea of evidential input, we note that it is not an assumption of this model that the agents know the value of the parameter. Rather, the idea is that the agents get information from the world which gives some indication of the value of that parameter. Exactly how the agents evaluate this information and bring it to bear on their beliefs is left implicit in the model, except that the impact on each agent’s new belief is supposed to be captured by Eq. 1. Footnote 6

Hegselmann and Krause have investigated almost the complete parameter space of the above model by means of computer simulations; they also offer some analytical results. Though the model is still rather simple, they obtain a batch of interesting results, relating, for instance, speed of convergence to the value of ɛ and also to the value of τ. Furthermore, in some of their computer experiments, they allow different agents to have different values for α. These show that it is not a necessary condition for convergence to τ that all agents have access to direct information concerning τ. In fact, Hegselmann and Krause show analytically that, under certain (rather weak) circumstances, it is enough that one agent has a value for α strictly less than 1 in order for all agents to eventually end up at the truth. Footnote 7

Interesting though these and further results are, it is not hard to see that the model has some important limitations. For one, the agents are supposed to always receive entirely accurate information concerning τ. For another, in the updating process the agents’ beliefs all weigh equally heavily. Douven (2010) and Douven and Riegler (2010) argue that neither assumption is particularly realistic. Scientists have to live with measurement errors and other factors that make the information they receive noisy, and it is an undeniable fact of both everyday and scientific life that the beliefs of some people are given more weight than those of others. To remove the first limitation, Douven (2010) proposed to replace (1) by an update rule which permits the receipt of information about τ that is “off” a bit, meaning that it does not necessarily point in the direction of precisely τ but possibly only to some value close to it. To remove the second limitation, Douven and Riegler (2010) proposed a further variant of (1) which allows the assignment of weights to the agents, where each individual agent may receive a different weight. Taken together, these proposals amount to an extension of the HK model which replaces (1) by this update rule:

$$ x_i(t+1) = \alpha\frac{\sum_{j\in X_i(t)}x_j(t) w_j}{\sum_{j\in X_i(t)}w_j}+(1-\alpha)\left(\tau + \hbox{rnd}(\zeta)\right). $$
(2)

In this formula, \(w_j\geqslant 0\) is the weight assigned to agent j, and rnd(ζ) is a function which outputs a fresh, uniformly distributed random real number in the interval [−ζ, +ζ], with ζ ∈ [0,1], each time the function is invoked. Footnote 8
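Update rule (2) can be sketched along the same lines. The code below is our own illustration; in particular, the text leaves open whether the noisy report of τ is drawn once per round or separately for each agent, and the sketch assumes a fresh draw per agent.

```python
import random

def hk_step_weighted(opinions, weights, eps, alpha, tau, zeta):
    """One round of update rule (2): a weight-based average of the neighbors'
    opinions, combined with a noisy report of the true value tau, where the
    noise is uniformly distributed on [-zeta, +zeta] (the rnd function)."""
    updated = []
    for x in opinions:
        idx = [j for j, y in enumerate(opinions) if abs(x - y) <= eps]
        social = (sum(opinions[j] * weights[j] for j in idx) /
                  sum(weights[j] for j in idx))
        report = tau + random.uniform(-zeta, zeta)  # assumed: one draw per agent
        updated.append(alpha * social + (1 - alpha) * report)
    return updated
```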

Two main results have been reported for this model. The first is that communities of agents who may obtain imprecise information about τ end up, on average, closer to τ for higher values of both α and ɛ. On the other hand, with lower values for these parameters, they tend, on average, to reach a value at least moderately close to τ more quickly. For an illustration, see the graphs in Fig. 3, which show the development of the beliefs of twenty-five agents who receive imprecise information about τ, but who in the left and right graphs give different weights to the beliefs of their neighbors each time they update. Simulations showed the result depicted here to be entirely typical. Footnote 9

Fig. 3 Updating with imprecise information \((\zeta =.2)\); left: \(\alpha=.9,\,\varepsilon=.1\); right: \(\alpha=.5,\,\varepsilon=0\)

The second main result is that, contrary to what one might initially expect, varying the assignment of reputations has a significant influence neither on the average speed of convergence to τ nor on the average accuracy of convergence. Presumably, this just goes to show that (2) is still too poor a model to simulate realistically the effects of reputation in opinion dynamics. As of this writing, Rainer Hegselmann is completing a new, much richer model—the Epistemic Network Simulator, as he has dubbed it—which allows agents to move about in a two-dimensional environment and to associate with, as well as dissociate from, others on the basis of a number of criteria, including reputation. More significant results concerning the role of reputation in opinion dynamics, in particular with respect to speed and accuracy of convergence to the truth, are to be expected from experiments that are scheduled to be conducted in this model.

Zollman (2007, 2010) presents another model for network simulations, this one operating on the basis of a Bayesian update mechanism. Zollman’s intriguing work shows that, from a socio-epistemic perspective, it may be important to maintain epistemic diversity in a community of agents, at least for a while, and that, for that reason, it is not always best if all agents have access to all information available in their community; for the same reason, a certain dogmatism on the part of the agents—understood in terms of their prior probability distributions—may be beneficial. In particular, it is shown that well-connected networks of relatively undogmatic agents are more vulnerable to the receipt of misleading evidence than less well-connected networks or networks with more dogmatic agents; in the former, misleading evidence spreads rapidly through the community and is thereby more likely to have a lasting effect on the community’s convergence behavior. This suggests that the performance of networks, in terms of their capacity to approximate the truth, can be improved either by ensuring that different agents have access to different parts of the information that is somewhere available in the network or by endowing the agents with extreme priors, thereby building in a kind of dogmatism. As Zollman’s (2010) simulations also show, however, doing both of the foregoing will typically make the network perform quite poorly, inasmuch as dogmatic agents with limited access to the available information are likely to stick with their initial extreme views, and so are likely not to converge to the truth. Footnote 10
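To convey the flavor of this kind of model, the following toy simulation is offered purely as our own illustration, not as Zollman’s model: agents choose between an established method with a known success rate and a new method whose true success rate is in fact slightly higher, hold Beta-type beliefs about the new method (with larger pseudo-counts playing the role of dogmatism), test whichever method they currently deem better, and share their results only with their network neighbors. All particulars, including the parameter values, are illustrative assumptions.

```python
import random

def run_community(adjacency, prior_strength, p_old=0.5, p_new=0.55,
                  rounds=200, trials=10):
    """Toy Bayesian network simulation loosely inspired by Zollman's work;
    all particulars are our illustrative assumptions. Returns the fraction
    of agents who end up favoring the (in fact better) new method."""
    n = len(adjacency)
    # Larger prior_strength = more concentrated ('more dogmatic') initial beliefs.
    alphas = [random.uniform(1, 1 + prior_strength) for _ in range(n)]
    betas = [random.uniform(1, 1 + prior_strength) for _ in range(n)]
    for _ in range(rounds):
        outcomes = []
        for i in range(n):
            if alphas[i] / (alphas[i] + betas[i]) > p_old:
                # Only agents who currently favor the new method try it out.
                successes = sum(random.random() < p_new for _ in range(trials))
                outcomes.append((i, successes))
        for i, successes in outcomes:
            for j in range(n):
                if j == i or adjacency[i][j]:  # results reach self and neighbors
                    alphas[j] += successes
                    betas[j] += trials - successes
    return sum(alphas[i] / (alphas[i] + betas[i]) > p_old
               for i in range(n)) / n

# Example: compare a sparsely connected cycle with a complete network.
n = 6
cycle = [[(abs(i - j) % n) in (1, n - 1) for j in range(n)] for i in range(n)]
complete = [[i != j for j in range(n)] for i in range(n)]
print(run_community(cycle, prior_strength=4),
      run_community(complete, prior_strength=4))
```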

Finally, Riegler and Douven (2009) present a model that, while clearly in the spirit of the HK model—it still involves interacting truth-seeking agents who receive information about the truth—departs from the HK model in that the agents’ belief states no longer concern a single parameter but are characterizable as theories. The theories are axiomatizable in a finitary propositional language, so that the agents’ belief states can be semantically represented as finite sets of possible worlds and hence also—assuming some ordering of the worlds—as finite strings of 0’s and 1’s. Agents’ being neighbors is then defined in terms of a metric on binary strings known in the literature as the Hamming distance, which is defined as the number of digits in which the strings differ. Specifically, an agent i’s neighbors are said to be those agents j such that i and j’s belief states are within a given Hamming distance from one another. Just as in the HK model the agents are trying to determine the value of a parameter, in the current model they are trying to determine the truth, which is some theory in their language. They again do this by repeatedly updating their belief states both on the basis of their neighbors’ belief states and on the basis of incoming information about the truth (which comes in the form of theories entailed by the truth). Also as in the HK model, the updating proceeds by a specific way of averaging over the neighbors’ belief states together with the evidence.
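For concreteness, the notion of neighborhood used in this model can be spelled out as in the following sketch; the belief-state strings and the distance threshold are hypothetical examples, and the specific averaging rule, which the text leaves implicit, is not reproduced.

```python
def hamming(s, t):
    """Number of positions at which two equally long bit strings differ."""
    return sum(a != b for a, b in zip(s, t))

def neighbors(i, belief_states, max_distance):
    """Indices of agents whose belief states lie within max_distance (in
    Hamming distance) of agent i's belief state; agent i counts as its own
    neighbor."""
    return [j for j, s in enumerate(belief_states)
            if hamming(belief_states[i], s) <= max_distance]

# Belief states as strings of 0's and 1's over a fixed ordering of possible worlds.
states = ["10110", "10100", "00111", "11110"]
print(neighbors(0, states, max_distance=1))  # -> [0, 1, 3]
```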

In simulations carried out in this model, both the maximal Hamming distance permitted for inclusion in the neighborhood and the strength (in the logical sense of the word) of the information about the truth the agents received at each update were varied. The main conclusion that could be drawn from these simulations was qualitatively similar to one of the conclusions of the simulations carried out in the extended version of the HK model that assumes (2) as an update rule, to wit, that being open to interaction with other agents helps agents to approximate the truth more closely, but also slows down convergence: it takes longer to get within a moderately close distance of the truth than it does for agents who ignore the belief states of other agents and go strictly by the information about the truth. Footnote 11

This is a far from complete survey of what has been going on in the area of opinion dynamics in the past ten years or so. Nevertheless, we hope it is enough to show that this area of research is of immediate interest to the topic of truth approximation. An exclusive focus on the individual scientist, and on which confirmation-theoretic principles he or she might deploy in order to approximate the truth as expeditiously and/or accurately as possible, is too narrow. To complete the picture, we must attend to communication practices, ways of sharing information, cooperation networks, and other features of the social architecture. For the studies discussed above give ample reason to believe that how accurately and how fast, on average, a scientist will approximate the truth depends on these features, too.

2 The Rationality of Peer Disagreement and Truth Approximation

While it is a fairly uncontroversial descriptive fact that people disagree in a variety of areas of thought, there is an interesting normative question concerning the epistemic status of such disagreements. More specifically, the question has been raised whether there can ever be rational disagreements among agents who are and/or take each other to be epistemic peers on a certain proposition. To say that a number of agents are epistemic peers concerning a question, Q, is to say, first, that they are equally well positioned evidentially with respect to Q, second, that they have considered Q with equal care, and, third, that they are equally gifted with intellectual virtues, cognitive skills, and the like, at least insofar as these virtues, skills, and so on, are relevant to their capacity to answer Q. Footnote 12 For epistemic peers to rationally disagree on Q is for them to justifiably hold different doxastic attitudes concerning the issue of what is the correct answer to Q. Finally, of special interest in the debate on peer disagreement, and also the question we will focus on, is whether or not epistemic peers can sustain a rational disagreement after full disclosure, that is, once they have shared all their relevant evidence as well as the respective doxastic attitudes they have arrived at.

A number of philosophers have recently argued that it is impossible for peers to rationally sustain a disagreement, at least after full disclosure. Suppose pro and con are both leading experts in a given field. While they both take each other to be peers on a certain question, Q, they disagree on what the correct answer is; pro thinks it is P, con thinks it is the negation of P. There is a powerful intuition that the rational thing to do here is to “split the difference,” as it has come to be called, which, at least in the kind of case where peers hold contradictory beliefs, is commonly taken to amount to suspending judgment on the issue of what the correct answer to Q is. This intuition is backed by the observation that if the parties to the disagreement were entitled to hold on to their respective beliefs, they would also be entitled to discount their opponent’s opinion simply on the grounds that a disagreement has occurred. Certainly, however, there exists no such entitlement. One cannot discount the opinion of someone one takes to be a peer on some question simply on the basis that a disagreement has occurred. Footnote 13

In some cases in which peers hold contrary and not just contradictory beliefs on a given matter, “splitting the difference” has another natural interpretation, besides “suspension of belief.” Suppose, for example, that opt holds that recovery from the current (2010) financial crisis will begin to occur in 2012 whereas pes holds that it will not begin before 2016. Then one way in which they could settle their disagreement is by quite literally splitting the difference and adopting, both of them, the belief that the recovery from the financial crisis will begin to occur in 2014. Lehrer and Wagner’s early work on opinion dynamics can in effect be interpreted as an argument in support of this way of resolving disagreements among peers. Footnote 14

Now, while we did not use the terminology “splitting the difference” in the previous section, it will be clear that, as a matter of fact, the agents in the HK model and its variants do exactly that: they split the difference (in the second sense mentioned above) with their “neighbors” (as we called them), that is, with other agents whose relevant beliefs are sufficiently close to their own. We are free, of course, to stipulate that an agent’s neighbors at a given time are precisely those agents he or she regards as his or her peers at that time. And, as is argued in Douven (2010), if we do make that stipulation, then the results concerning truth approximation that were obtained in the said models can be swiftly brought to bear on the debate about peer disagreement. Footnote 15

The central claim of that paper is that there is no a priori answer to the question of how peers ought to respond to the discovery that they disagree: in some contexts, the rational thing for them to do may be to split the difference; in others, it may be to ignore each other’s beliefs and go purely by whichever evidence they receive from the world. The claim is largely argued for on the basis of results concerning the variant of the HK model which assumes (2) as an update rule. As we said, and as can be readily gleaned from Fig. 3, if the information the agents receive is “noisy” (which as a rule it is in scientific practice), then if they adhere to the practice of splitting the difference, they will tend to converge to the truth more accurately, although in this case convergence to a value at least moderately close to the truth takes, on average, longer than if they go strictly by the (noisy) information. The claim then follows from the fact that there is no general answer to the question of which is better: more accurate convergence, even if this takes a while, or more rapid convergence which, however, is less accurate, even in the long run. Footnote 16

That considerations related to truth approximation have so far been neglected in the peer disagreement debate is, we believe, due to authors’ strong inclination to concentrate on single-case resolutions of disagreements. However, if the answer to the question how peers should respond to the discovery that they disagree on some issue is a priori, as virtually all participants in the debate have claimed, then presumably peers ought to respond in a uniform way in each particular case of disagreement. And that makes it legitimate to compare practices of resolving disagreements. In particular, we may ask how these practices fare in the light of the goal of approximating the truth. If we do so, we said, the results from the computer simulations described in Sect. 1 suggest that no one practice may be generally best: splitting the difference, on the one hand, and ignoring one’s peers’ beliefs and going purely by the evidence, on the other hand, can both be said to serve our epistemic goal, even if they do so in different ways.

We briefly want to point to some further considerations related to truth approximation that are pertinent to the peer disagreement debate, considerations which also suggest that under certain circumstances it would be wrong to adopt splitting the difference as one’s practice for settling disagreements with peers. Historians of science have argued that scientists have a tendency to disregard evidence which sits badly with, or even goes against, their preferred view. They may simply fail to see the evidence, Footnote 17 or they may reason that the evidence must be misleading, possibly due to some measurement error. Footnote 18 Even if this may not be what ideal scientists would do, given that it is what actual ones do, a certain diversity in the views held by scientists would seem to increase the chances of scientific progress by decreasing the chance that some—potentially crucial—evidence is missed by all groups of researchers. This is buttressed by the results of Zollman’s (2010) simulations described in the previous section. Of course, as Zollman also notes, we do not want scientists’ beliefs to diverge permanently; the hope is that, in the end, the scientific community as a whole will converge on the truth. Nevertheless, Zollman argues, we should aim for transient diversity. And, patently, even that would be prohibited if, on the discovery of peer disagreement, scientists were to suspend judgment on their theories, or were to adopt some “middle” position between their own and their opponents’ theories. Footnote 19

We thus hope to have shown that observations concerning a community’s capacity to approximate the truth are relevant to the debate about peer disagreement. Our main observation concerned the results from computer simulations reported earlier, which show that there can be a trade-off between accuracy and speed of convergence if the information that agents have access to, and also take into account, may be imprecise. An additional observation concerned situations in which, even though agents may all have access to the same information, they are prone to overlook, or perhaps even willingly ignore, information that is irreconcilable with their favored theory or theories. In such situations, a certain tenacity in holding onto one’s belief in the face of disagreeing peers may be more conducive to the goal of approximating the truth than the willingness to split the difference that Christensen, Feldman, and others hold to be appropriate here.

But what, then, remains of the powerful intuition that some epistemic response is called for when we discover that we disagree with a peer? The intuition is correct, we think; it should just be noted that such a response can take many forms. What is correct is that it would be epistemically irresponsible to simply pass over the discovery of disagreement with a peer. One way to take such disagreement into account is to suspend judgment on the disputed proposition, or to split the difference in the more literal sense discussed above. Another way is to scrutinize one’s evidence for the said proposition a second time, to carefully reconsider the argument or arguments that led one from the evidence to the proposition, checking the various steps involved, and to do the same with one’s opponent’s argument or arguments, in as open-minded a manner as possible. The latter type of response, we submit, is mandatory; the former, as we argued, is not, and must in some circumstances even be advised against.