Echoes of COVID misinformation

ABSTRACT Public support for responses to the coronavirus pandemic has sharply diverged on partisan lines in many countries, with conservatives tending to oppose lockdowns, social distancing, mask mandates and vaccines, and liberals far more supportive. This polarization may arise from the way in which the attitudes of each side are echoed back to them, especially on social media. In this paper, I argue that echo chambers are not to blame for this polarization, even if they are causally responsible for it. They are not to blame, because belief calibration in an echo chamber is a rational process; moreover, the epistemically constitutive properties of echo chambers are not optional for epistemically social animals like us. There is no special problem of echo chambers; rather, there is a problem of misleading evidence (especially higher-order evidence). Accordingly, we ought to respond to misinformation about COVID neither by attempting to dismantle echo chambers nor by attempting to make people more rational, but rather by attempting to supplant unreliable higher-order evidence with better evidence.

Nor is there an obvious linkage between divergent values and the positions each side has taken. Of course, Republicans can oppose vaccine mandates and lockdowns in the name of freedom (with some plausibility), but Republicans could just as easily support them in the name of the in-group loyalty and respect for authority (Graham et al., 2009) they equally value. Democrats might have opposed lockdowns on account of the way in which they impose significantly greater burdens on the vulnerable and the poor than on those who can easily work from home or who have financial resources they can draw upon. The actual positions each side has taken seem significantly underdetermined by their prior commitments, suggesting that political polarization is explained by some very different kind of cause.
A plausible, albeit partial, explanation for political polarization cites echo chambers. Democrats and Republicans, Tories and Labour voters (and so on) sort themselves into different groups. Our friends, especially our online friends, tend to share the same values and outlook with one another and with us, and exchange the same political narratives and the same news and opinion items. As a result, we get our prior attitudes reinforced, our opinions reflected back to us, and we become further entrenched in our views. We may be subject to group polarization (Sunstein, 2002), polarization within the group of like-minded people, whereby initial views tend to become more extreme over time. When people sort themselves into different echo chambers on the basis of political differences, what can be expected to emerge is the hardening of divergent attitudes. Echo chambers predict what recent events seem to manifest: stark divisions of political attitude correlated with political identities.
It is almost universally accepted that echo chambers lead irrationally to divergent attitudes. In an echo chamber, we are not properly responsive to evidence: we respond to social cues and approbation rather than evidence. When evidence opposing our views is allowed to intrude, it is not assessed fairly but seen as deceptive merely in virtue of counting against our views: the very fact that evidence conflicts with our emerging or established attitude is taken as a strong reason to discount it (and those who support it), ensuring the attitude is resistant to rational criticism.
Echo chambers may well underlie the polarization of response we've seen on COVID-19 over the past 18 months (and, to varying degrees, for very much longer than that on other issues). In this paper, however, I will argue that belief formation in an echo chamber is not irrational. Rather, it arises from individuals responding to their evidence, giving it its appropriate weight as evidence. The notion of 'rationality' I'm invoking here is, of course, subjectivist in an important way. A process is objectively rational if, in the actual circumstances, it tracks the truth. Belief formation in an echo chamber can't always be objectively rational, of course: not if it can give rise to two or more conflicting attitudes. If one group ends up believing p and another not-p via an identical process, then that process can't be objectively truth-tracking. But it can be subjectively truth-tracking (it can reflect a rational response to the evidence) when the evidence presented to each group is different. A subjectively rational response to misleading evidence will often lead to a false belief. If misleading evidence strongly suggests that the butler did it and there are no (subjectively available) defeaters, the detective should form the corresponding belief.
I focus on this subjective notion of rationality for several reasons. First, I think it's the notion of rationality that is at issue when it's claimed that belief formation via echo chamber is irrational. The claim is not simply that echo chambers lead to false beliefs; rather, the common claim is that echo chambers cause beliefs via an irrational process. Second, subjectively rational processes are the only processes we have for tracking truths. Subjectively rational processes are valuable because, under appropriate conditions, they allow us to identify truths, and they're normatively assessable in terms of how well designed they are for these purposes. Since they are the means whereby we track truths, they're the appropriate focus when we're concerned with whether and how to alter our epistemic practices. In arguing that belief formation in echo chambers is rational, I argue that there is no need to alter such practices. Indeed, I'll claim, we can't alter them: the epistemically constitutive features of echo chambers are not products of recent innovations in how discourse is structured online, or of the way in which we sort ourselves politically. Rather, they're constitutive features of how rational social animals form and update their beliefs. If echo chambers go wrong epistemically (and of course they do), we improve belief formation not by altering the way in which we respond to evidence within echo chambers, but by addressing the inputs into the echo chambers.

Echo chambers and epistemic bubbles
C. Thi Nguyen (2020) has influentially distinguished echo chambers from what he calls epistemic bubbles. Epistemic bubbles and echo chambers are each a kind of network, paradigmatically of friends or of followers on social media. A network is an epistemic bubble when some kinds of information are excluded from it by omission. For instance, a philosopher's Facebook friend network might be an epistemic bubble when it consists (almost) entirely of political liberals. She need not have selected her friends on the basis of their political views, but often the criteria we use to select friends correlate with their political views. Whatever the reason for the omission, conservative voices might be absent from her network and her diet of political news and opinion accordingly one-sided.
Epistemic bubbles can have bad epistemic consequences. They can result in false or unjustified beliefs. If my bubble excludes reliable voices, my beliefs might be at variance with the truth. At best, they will reflect only a partial subset of the overall evidence, and there may be no reason to think that the excluded evidence is unreliable. The filters are not sensitive to epistemic properties. For all that, epistemic bubbles aren't particularly interesting from an epistemic perspective. There's nothing surprising about the fact that the selective omission of evidence may lead to false or badly justified beliefs. That's a familiar fact, and the epistemic consequences of such omissions are equally familiar (examples range from the beliefs of those who live in states in which news is tightly controlled to banal cases involving the beliefs of those taken in by con artists). It's also worth noting that epistemic bubbles on social media don't seem to play a major role in explaining divergent beliefs on COVID-19. While there are systematic differences in how broad a range of sources different groups rely on (Benkler et al., 2018), social media seems to broaden that range, not shrink it (Acerbi, 2019).
Echo chambers appear to be much more epistemically interesting. Echo chambers are networks characterized by relations of trust (and distrust) between nodes. Echo chambers may also be epistemic bubbles, excluding untrusted voices. But they need not be. The person who lives in an echo chamber might hear from a diverse range of voices, but she weighs these voices very differently. She may discount certain views just because they come from out-group members. She might dismiss someone because of the Biden sticker on their profile, or the MAGA hat they wear.
Echo chambers appear a more difficult problem than epistemic bubbles. Epistemic bubbles are, as Nguyen says, easily popped: we might (for instance) tweak the algorithms on social media to ensure that all of us have a varied diet of information. But ensuring that I get a varied diet of information won't help me to escape my echo chamber. If I ascribe low credibility to a certain source (the New York Times or Fox, depending on my political orientation) just because I regard it as the source of the out-group, I may be systematically unresponsive to the evidence these sources present. Because echo chambers explain how agents can diverge in their beliefs despite access to a broad range of evidence (even to exactly the same set of evidence), they seem able to explain our apparent political polarization in the face of the evidence that social media broadens our sources compared to the past. Fox may expose me to the CDC's recommendations or to Biden's pleas, but it does so disparagingly and this dismissive attitude is echoed across my network; I believe accordingly.

Epistemic pathologies of echo chambers
Both epistemic bubbles and echo chambers can explain how people may come to accept false claims like "more people die from COVID vaccines than from the disease". Epistemic bubbles may lead to false beliefs simply by excluding reliable sources. There's nothing mysterious about agents forming false beliefs under conditions like these: rational agents, responding appropriately to the evidence they're presented with, may come to hold false beliefs if their evidence is misleading. If your chamber echoes InfoWars and Breitbart, and excludes CNN, The New York Times and the Washington Post, you simply may never hear CDC recommendations. At least in principle, however, if the explanation for such beliefs is simply the exclusion of reliable sources, the problem is easily addressed. Pop the epistemic bubble (make the formerly excluded evidence available) and agents' beliefs update accordingly. But echo chambers are held to produce beliefs that are peculiarly resistant to correction, generated by distinctive pathological processes.
In an epistemic bubble, false or unjustified beliefs may arise because some of the evidence is unavailable to the agent. But echo chambers can give rise to false or unjustified beliefs even if all the evidence, on both sides, is presented to the believer. Belief formation goes awry in echo chambers due to the different kinds of feedback loops that (supposedly) characterize them. Elzinga (2020) identifies two basic kinds of feedback loops. Intra-network feedback loops are feedback loops between agents who share an echo chamber. Suppose you and I share an echo chamber. Your confidence in a belief may rise because I express support for it, but your subsequent agreement may lead me to raise my confidence in turn. Boyd (2019) expresses a similar worry, centering on what he calls 'groupstrapping'. Groupstrapping occurs when a group member offers testimony in support of a belief, p, already held by the group. That testimony causes the group's confidence that p to rise, which may then elicit further expressions of belief. Confidence may ramp up more or less indefinitely, in virtue of this feedback loop.
Inter-network feedback loops occur between an agent and someone outside their echo chamber. This feedback loop is activated by an outsider's attempt to correct a false belief of a group member: under many conditions, the attempted correction will be seen by insiders as evidence of the unreliability of the outsider rather than evidence in favor of the correction offered. Elzinga suggests that this kind of corroboration occurs when group members already believe that outsiders are untrustworthy; that is, he (explicitly) sees such corroboration as arising from what Begby (2020) calls evidential preemption. Evidential preemption paradigmatically occurs when agents are forewarned that particular individuals or members of particular groups are unreliable on a particular topic. Such preemption serves to inoculate group members against certain sorts of evidence: they are led to think that any apparently conflicting evidence presented has already been taken into account by those within their echo chamber, and that this evidence must therefore be irrelevant or misleading. That the CDC said p is taken as evidence against p.
These feedback loops are epistemically troubling. They may leave us with beliefs held with a degree of confidence that renders them impervious to clear contrary evidence, and therefore immune to correction. If we're lucky enough to share our echo chamber with those who are reliable, these mechanisms will leave us with accurate beliefs. As Jennifer Lackey (2018) points out, being disposed to ignore or distrust climate skeptics will protect us against misleading evidence (equally, there may be no epistemic reason to pop an epistemic bubble that excludes QAnon conspiracy theorists). According to their critics, however, even when our echo chambers are shared with reliable sources, they may leave us with credences that are ill-aligned with the genuine strength of our evidence. We therefore have good reason to break down echo chambers, to ensure that these feedback loops don't misalign credences with evidence.
I'm going to argue that these worries are misplaced and misleading. Worries about feedback loops leaving us with beliefs that are misaligned with the evidence are misplaced: echo chambers should not be expected to lead to such a misalignment. Worries about echo chambers leaving us unable to respond to clear evidence are misleading; misleading not because this phenomenon doesn't occur, but because the phenomenon inheres in being a rational agent, not in being an inhabitant of an echo chamber.

Echo chambers as testimonial networks
When people point to echo chambers to explain phenomena like polarization over COVID-19, they implicitly or explicitly assume that echo chambers are, if not genuinely novel, a recently important phenomenon. It's only with the recent popularity of social media that we began to sort ourselves into echo chambers to any great extent. At least if we understand echo chambers as epistemic networks of a certain sort (rather than as their current online instantiation), I think that's false. The epistemically significant properties of echo chambers (the trust relations that constitute them as echo chambers) are, and always have been, instantiated in many places besides networks of people on social media.1 Indeed, we don't even need to share a common space, actual or virtual, with others to stand in the echo chamber relation to them. We need only to be disposed to respond to their testimony in certain ways, and we possess the dispositions to defer in the relevant ways to others we have never met or interacted with.
An echo chamber is a network in which outside voices are discredited and insiders given significant weight. I share an echo chamber with someone (I stand in the echo chamber relationship to her, whether or not I interact with her or with a common source of information in the way distinctive of the online manifestations of echo chambers) just in case I'm disposed to recognize her as someone I should trust as an informant in virtue of her possession of properties that lead me to identify her as belonging to my group, where the group is defined by common values, common commitments, common activities or (paradigmatically) common politics. You and I may share an echo chamber despite being strangers: your t-shirt, lawn sign or bumper sticker might mark you as epistemically reliable (by my lights). I am disposed to trust you in virtue of your Biden sticker, or distrust you in virtue of your MAGA hat. Your dress, your car, your choice of newspaper (and whether or not you're wearing a facemask): all of these are cues that mark you as reliable or unreliable (again, by my lights). Of course, often the only cues of group membership available to us are people's assertions: your saying Trump won the election is excellent evidence of your political commitments. I may discount your future testimony (on related topics) in virtue of classifying you as a Trump supporter on this basis.
As epistemologists recognize, the selective deployment of trust and distrust is rational. A host of empirical evidence indicates that we prefer the testimony of those we perceive to be competent and benevolent over those we take to be incompetent or to disregard our interests (Harris, 2012; Mascaro & Sperber, 2009; Sperber et al., 2010). This kind of selective trust and distrust is surely appropriate. It's rational to prefer testimony from those who show signs of competence (by our lights): they're more likely to get things right. It's rational to prefer testimony from those who appear benevolent, for two reasons. First, they're less likely to seek to deceive me for their own gain. Second, those who share my values are disposed to get things right (by my lights, of course) on normative matters (Rini, 2017), and therefore are more reliable when testifying on these topics. Even deference on straightforwardly empirical questions to those who share our values may be justifiable. Many empirical questions are normatively inflected, and co-partisans tend to share the same assessments of the issues (think of gun control, capital punishment, nuclear power, and climate change, as well as vaccines, social distancing and lockdowns). If we share the same takes on these questions because, say, our values make us more receptive to certain facts or because we're less likely to be subject to distorting biases, deference to those who share our values may be justified.2
Preferring the testimony of the in-group to that of the out-group therefore seems justifiable. In trusting my fellow partisans, I trust those who tend to get things right in the factual and the normative domains. Conversely, in distrusting out-group testimony, I discard the testimony of those who are less likely to be competent and more likely to be deceptive. None of this is epistemically objectionable. In fact, the weight we give to testimony is, and ought to be, sensitive to its purely epistemic properties too: we place greater weight on testimony that accords with our prior beliefs than on testimony that conflicts with them, for familiar reasons: because our posterior probabilities should be a function of our prior probabilities. Echo chambers cannot be objectionable on the grounds that they lead us to discount evidence that is preempted by our priors; such preemption is how rational agents are supposed to function.
If echo chambers are objectionable, then it must be because this kind of preemption of evidence (and its opposite: a disposition to place greater weight on evidence that accords with our existing beliefs) is carried to an unjustifiable extreme. We can understand their critics as pointing to this kind of metastasization of trust and distrust relations. It might be rational to prefer the testimony of the in-group to the out-group, but it's certainly not rational to groupstrap one's confidence indefinitely high. It might be rational to distrust testimony from those who reject my values, but it's certainly not rational to entirely disregard the testimony of agents merely because they belong to the out-group. Such runaway processes would indeed be irrational, but I don't think these worries are realistic. A reminder of some of the basic points of Bayesian confirmation theory will help us to see why.
The value of a piece of evidence for an agent depends on her prior estimation of its likelihood: the more unexpected the evidence, the more weight it should have for her. If I believe p and you unexpectedly testify that not-p, you've given me a reason to update my beliefs. But if I believe that p and also believe that you will testify that not-p, your actually testifying that not-p provides me with no new evidence.3 Belief update occurs when agents respond to unexpected testimony or perceptual input: they update their model of the world (including their model of the agents in the world) to accommodate the unexpected input. With this simple picture in mind, we're in a good position to disarm worries about echo chambers.
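The point is just Bayes' theorem at work (the formula below is the standard one, not anything special to echo chambers). Where T is your testimony that not-p:
\[ P(p \mid T) = \frac{P(T \mid p)\,P(p)}{P(T \mid p)\,P(p) + P(T \mid \neg p)\,P(\neg p)} \]
If I already expected you to testify that not-p whatever the truth of the matter, then P(T | p) ≈ P(T | not-p): the likelihood ratio is close to 1, and P(p | T) ≈ P(p). Fully anticipated testimony leaves my credence (almost) where it was; testimony shifts my credence only to the extent that it is more probable on one hypothesis than on the other. And, note, this holds for in-group confirmation just as much as for out-group dissent.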
Echo chambers are supposed to be constituted by or give rise to pathological inter- and intra-network feedback loops. In fact, these two kinds of feedback loop are inconsistent with one another. Inter-network feedback loops involve agents dismissing the testimony of the out-group because that's precisely what in-group members expect to hear from them. That's in fact how rational agents should function: we shouldn't update our beliefs on the basis of the evidence we already confidently expected to receive. Equally, however, the corroborating testimony of in-group members should be given little weight, on the same grounds. If we dismiss the testimony of the out-group because we expect it, we should give little evidential weight to the confirming testimony of the in-group. We have precisely the same reason to reject co-partisans' testimony that p as out-group members' testimony that not-p. If out-group members' testimony is preempted, so is the testimony of the in-group.
If echo chambers are really subject to inter-network feedback loops (that is, if they're characterized by evidential preemption), then we shouldn't expect them to be subject to pathological intra-network feedback loops. Groupstrapping and the like won't occur, because the testimony of the in-group will have little evidential value for its members: the same processes that preempt out-group testimony also prevent these intra-network effects. I won't increase my confidence that the vaccines contain tracking devices in virtue of your saying so; not if I already took you to be one of us and that claim already circulates within our echo chamber (matters would be very different if you were a Harvard epidemiologist or a CNN anchor, because then your testimony would be unexpected). There is no principled reason to think that agents who reject the testimony of out-group members because they expect it will place excessive weight on the equally expected testimony of in-group members.
Before we become reconciled to echo chambers, however, there are two problems we need to address. First, I've given grounds for thinking that problematic intra-network feedback loops won't occur. But my defense of echo chambers against this worry seems to depend on the existence of inter-network feedback loops. I've argued that the first kind of pathological feedback loop won't arise, because the corroborating testimony of co-partisans is accorded little weight for the same reasons that the conflicting testimony of the out-group is accorded little weight. But the second kind of feedback loop is itself pathological. It's not rational to dismiss completely the testimony of agents who count as epistemic peers just because they belong to the out-group.4 So the defense of echo chambers offered is of limited comfort to us. Second, there are grounds for worrying that the defense doesn't even succeed in disarming the threat of intra-network feedback loops. It should be conceded that the testimony of co-partisans is of little evidential value to us. Our confidence can't be groupstrapped into rapid ascent. But 'little evidential value' is still some evidential value. I'm never certain that you won't dissent. Given the size and diversity (in some respects) of my group, I should give some weight to concurring voices within it. My confidence won't shoot up, but it might creep up inexorably, and that's all we need for it to misalign with the actual strength of the evidence.
Let's focus on the second worry first. It is of course true that expected testimony from in-group members often has some evidential value for co-partisans. I may be less sure that p than that you're a reliable judge whether p, and my confidence that p might accordingly rise in virtue of your testimony. Equally, even when I am very confident that p, I might be less confident that my entire group will agree with me than I am that p, and therefore their apparent concurrence may cause my confidence in the latter to rise. But neither of these processes is epistemically irrational. In both kinds of case, my raising my confidence that p is my updating my credence to align with the weight of the actual evidence. Your testimony that p is in fact evidence in favor of p when I'm more confident you're a good judge of this sort of thing than I am of p; equally, my group's concurrence is genuinely good evidence that p if I think dissent is somewhat likely, given the size and diversity of the group. So, the fact that my in-group can cause my confidence to rise isn't a worry. It's what we'd hope for.
Such groupstrapping would be irrational if it caused my confidence to creep up indefinitely. But that's not going to happen, on the Bayesian picture. The more confident I am that p and that you're a good judge whether p, the more confident my expectation that you'll testify that p, and the less weight I'll give to your testimony. On the Bayesian picture, further iterations of the feedback loop have rapidly diminishing returns. My credence won't be groupstrapped; at most, it might be nudged a little higher in virtue of learning that many others in my in-group share my belief. It's hard to see what's wrong with that. That looks like me calibrating my beliefs to the available evidence.
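A toy calculation makes the diminishing returns vivid (the numbers are my own, chosen purely for illustration). In odds form, Bayes' theorem says that posterior odds are prior odds multiplied by the likelihood ratio of the testimony:
\[ \frac{P(p \mid T)}{P(\neg p \mid T)} = \frac{P(T \mid p)}{P(T \mid \neg p)} \times \frac{P(p)}{P(\neg p)} \]
Suppose my credence that p starts at 0.9 (odds of 9:1), and each concurring in-group testimony carries a modest likelihood ratio of 2. Three successive concurrences take my odds to 18:1, 36:1 and 72:1; that is, to credences of roughly 0.947, 0.973 and 0.986. The increments (0.047, 0.026, 0.013) roughly halve with each iteration, and even this overstates the effect, since in reality the likelihood ratio itself shrinks toward 1 as each further concurrence becomes more expected.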
So the second worry isn't a real concern: it's hard to see how intra-network feedback loops might lead me to misalign my confidence with the genuine weight of the evidence. Let's turn now to the first worry: that the evidence of the out-group will be preempted inappropriately. After our discussion of the second worry, it shouldn't be hard to see that this concern is misplaced. It is only in rare cases that we will give no weight at all to dissenting testimony, no matter how numerous and (apparently) competent the testifiers. It's only when we're already extremely confident that p that this will occur (we might think of Lackey's well-known 'arithmetic' case (Lackey, 2010); your offering me testimony that 2 + 2 ≠ 4 is a reason for me to doubt your competence or sincerity, not my arithmetic). For the same reason that we (rationally) give some small weight to concurring testimony by the in-group, we give some small weight to the dissenting testimony of the out-group. Unless the testimony is precisely what we expect, we adjust our model of the world to accommodate it: altering our confidence that p, or our confidence that the testifier is competent and sincere, or (very often) both. When we completely dismiss testimony from apparently competent testifiers, we do so because we are very confident that p and that the agent is incompetent or insincere; it's hard to see what's irrational about that.
Before concluding, let me briefly respond to three objections. First, it might be objected that while the appeal to Bayesian theory to defend echo chambers might be all well and good, it shows only that Bayesian agents wouldn't misalign their beliefs with the available evidence. It shows nothing about how agents like us actually function. Perhaps the best explanation of how human beings so often come to have false and sometimes even bizarre beliefs attributes to them belief-updating processes that are far from Bayesian. I have two responses to this concern. First, I point out that the critics of echo chambers appear unprincipled in their invocation of belief-updating processes. As we've seen, appeals to inter- and intra-network feedback loops are of dubious consistency. At minimum, it is incumbent on the critics to move beyond their intuitions about how agents in echo chambers would update their beliefs and provide evidence that they really do update like this.
Second, it's worth pointing out that there is growing evidence that earlier work in political psychology, apparently indicating motivated departures from Bayesian principles, failed to consider agents' prior beliefs (Tappin et al., 2020). A survey of the evidence suggests that departures from Bayesian updating are at most rare exceptions rather than a common occurrence, and that the evidence for these rare exceptions is itself questionable (Tappin & Gadsby, 2019). There's no particular reason to think that agents in echo chambers exhibit the precise pattern of departures from Bayesian principles that their critics invoke, and good reason to think that they don't.5
A second objection might turn on a very different kind of concern. Rather than appeal to how we depart from Bayesian belief update, it might appeal to the costs and benefits of forming beliefs about the kinds of topics around which echo chambers typically (or most saliently) form. Given that our individual attitudes make no discernible difference to policy on climate change, let alone to whether it is occurring, we have little incentive to accuracy on this kind of topic, and our belief update might instead reflect the social costs and benefits of aligning our beliefs with those of the people we interact with (Kahan, 2016; Williams, 2021a) or the costs of knowledge itself (Williams, 2021b). My central reason for preferring an account on which belief formation is explained by rational update on evidence (especially higher-order evidence) taken to bear directly on the topic at issue is that such an account has greater explanatory power than its rivals. In particular, the rational update account provides a single unified mechanism for belief acquisition and update not only in the low-stakes (for the individual herself) context of political partisanship, but also in much higher-stakes contexts, like the acquisition of beliefs essential to survival in harsh environments. I claim that this mechanism underlies much of cultural evolution (Levy, 2022). On Kahan's view, belief update in an echo chamber is indirectly rational: the belief may not be responsive to evidence about (say) climate change, but it is responsive to evidence about what is in the individual's interests. On my view, the belief is directly rational: it is a rational response to the higher-order evidence possessed by the agent.6
A third worry turns on some apparently counterintuitive implications of the rational update model. Consider the epistemically virtuous agent invoked by John Maynard Keynes's (probably apocryphal) line "when the facts change, I change my mind". The agent described is epistemically virtuous on two scores. First, they exhibit an epistemically virtuous open-mindedness in their willingness to change their mind. Second, their change of mind responds to the facts, not just to people's opinions about the facts. If my view can't explain why this agent is virtuous, that seems to count against it.7
I think my view can explain such changes, and explain why they are rational. It's important to stress, first, that my account is fully compatible with rational agents updating their beliefs by tracking the first-order facts (rather than the higher-order evidence). We are indeed epistemically dependent rational beings, heavily dependent on others for knowledge in every domain. Most of what we know we know by testimony, and even knowledge specialists, within the precise sphere of their expertise, are dependent on others. Science is a heavily collaborative enterprise, and even in the same lab it is normal for roles to be specialized such that each member is epistemically dependent on the others (and on the work of thousands of other scientists worldwide). At the same time, however, the specialist, in her area of expertise, is obviously much more capable of tracking the first-order evidence than are laypeople (or specialists in different areas). Higher-order evidence is one source of evidence among others; the scientist, too, is responsive to it, but its role is somewhat smaller for her in the domain of her expertise than it is for the layperson.
The rational update account can also explain quite dramatic changes of mind. We do not defer to every agent in our chamber equally; some agents are recognized as especially knowledgeable, and if they change their mind, we are likely to do so as well. Of course, that's entirely rational: if our echo chamber contains genuine experts, who are more responsive to the first-order evidence than we are, we ought to change our mind when they do. The same mechanism can work to pathological effect, when we defer to someone we wrongly take to have expert knowledge. Think of how evangelical attitudes to character changed dramatically in the wake of the nomination of Donald Trump as the Republican candidate for the presidency (Miller, 2018). Evangelicals consistently rated character as the most important attribute in a politician for many years prior to the rise of Trump; after his nomination, they rated it lowest of all groups. The implicit testimony of the party surely played a role in this about-face (Levy, 2021b). The framework here both explains such shifts and explains why they need not be especially admirable.
If echo chambers are constituted, as I've argued, by our dispositions to defer differentially to those who evince cues of reliability and unreliability, then they don't have the problematic epistemic features philosophers have attributed to them. Of course, they may lead us astray. If we're disposed to defer to unreliable agents, then we'll be systematically poised to acquire false beliefs. But that's as it should be: the rational agent is misled by misleading evidence. The beliefs of the rational inhabitant of an echo chamber will be calibrated to the evidence: the first-order evidence bearing on whether p, as well as the higher-order evidence constituted by the concurring and dissenting testimony of others.8

Conclusion
Echo chambers are usually seen as more pernicious than epistemic bubbles. The latter are easily popped, but the former are difficult to escape. Echo chambers are thought to lead agents to discount the opinions of others unwarrantedly, just because those opinions differ from their own, and to increase their confidence in their beliefs beyond what the weight of the evidence warrants. These worries about echo chambers are misplaced. Belief formation in echo chambers involves agents calibrating their beliefs to the actual weight of their evidence (much or most of which is higher-order evidence). We all live in echo chambers; we're all disposed to defer and to discount testimony on the basis of considerations concerning how reliable those who offer it are. That's not a problem we should aim to solve: that's how rational agents should operate.
Given that we're all in echo chambers all the time (given, that is, that we're epistemically social animals, heavily reliant on testimony and necessarily sensitive to cues of the reliability and unreliability of testimony), there's every reason to believe that echo chambers play a very heavy role in the formation and maintenance of bad beliefs about COVID-19 and about responses to it. But that fact provides us with no reason to worry about echo chambers, specifically. Nor do we have good reason to worry about the rationality of those who inhabit them. They're calibrating their beliefs to their evidence, just as we expect of one another.
If echo chambers are not the problem and nor is the rationality of agents, why do so many of us go so spectacularly wrong, and how might we work to improve belief formation? The problem of bad belief is a problem of misleading evidence: rational agents reliably form false beliefs when their evidence supports such beliefs. Echo chambers echo misleading or false first-order evidence and reinforce it with plenty of misleading higher-order evidence, in the form of cues to reliability. The way to address bad beliefs is to address the evidence that is echoed in them. Typically, echo chambers echo the views of particular individuals, or sets of individuals, much more than the views of others. When they form around political orientations, as they often do, these views are those of partisan elites, who are known to have a significant influence on partisan opinions (Arceneaux, 2008; Flores, 2018). Partisan elite attitudes constitute higher-order evidence in favor of the views they endorse. It's not irrational to weigh elite opinion heavily: elites tend to be better educated and to have more time to evaluate evidence, and their roles tend to expose them to a wider range of experts than non-elites. But in some echo chambers, elite opinion is more responsive to genuine experts (on specific topics) than in others. The chains of testimony about climate science, for example, can be traced back to genuine scientific experts in some, while in others they trace back to shills and merchants of doubt (Levy, 2019a). But most of us are ill-equipped to assess for ourselves which experts are the genuine experts, or to trace the chains of testimony to assess their reliability, and the task is rendered impossible when rival higher-order evidence is preempted.9
More generally, because bad belief formation in an echo chamber arises from individuals assessing their evidence rationally, there is typically little room for individual epistemic virtue to correct for bad belief. Instead, the solution lies with managing the epistemic environment. There may, however, be one way in which individuals can manage their epistemic environment for themselves. Bad belief formation is path-dependent: it is because we update our beliefs sequentially that even Bayesian agents may fail to converge on the truth (Hahn et al., 2018). Sometimes, we are in a good position to avoid getting on a bad path in the first place, because we may be able to assess the reliability of an echo chamber prior to becoming a member. This happens, typically, when we enter an echo chamber because we're interested in a certain topic on which we judge it reliable, while knowing that misleading opinions on other topics are also often echoed in it. We underestimate how powerful higher-order evidence will be for us once we are enchambered, and therefore how likely we are to come to change our beliefs. Sometimes, the remedy is to look elsewhere.
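The kind of path dependence at issue can be made concrete with a small simulation; what follows is a toy model of my own, with purely illustrative numbers, not Hahn et al.'s own simulation. The agent jointly updates on whether p is true and on whether a source is reliable ('good'), and (the assumption doing the work) stops attending to a source once her estimate of its reliability falls below a threshold. The very same evidence then yields different final credences depending on the order in which it arrives.

    # Toy model of path-dependent updating (illustrative numbers throughout).
    P_TRUTH_GOOD = 0.9  # assumption: a good source asserts the truth with probability 0.9
    P_TRUTH_BAD = 0.5   # assumption: a bad source asserts p or not-p at random
    GATE = 0.4          # assumption: ignore a source once P(good) falls below this

    def likelihood(asserts_p, p_true, good):
        """Probability that the source asserts p, given a joint hypothesis."""
        truthful = P_TRUTH_GOOD if good else P_TRUTH_BAD
        prob_assert_p = truthful if p_true else 1 - truthful
        return prob_assert_p if asserts_p else 1 - prob_assert_p

    def run(messages, prior_p=0.9, prior_good=0.6):
        """Update on assertions in order; return the final credence in p."""
        # Joint distribution over (p true?, source good?); priors independent.
        joint = {(p, g): (prior_p if p else 1 - prior_p) *
                         (prior_good if g else 1 - prior_good)
                 for p in (True, False) for g in (True, False)}
        for asserts_p in messages:
            if joint[(True, True)] + joint[(False, True)] < GATE:
                break  # source written off: its further testimony is ignored
            joint = {h: pr * likelihood(asserts_p, *h) for h, pr in joint.items()}
            total = sum(joint.values())
            joint = {h: pr / total for h, pr in joint.items()}
        return joint[(True, True)] + joint[(True, False)]

    evidence = [False, False, True, True]  # two dissents, then two confirmations
    print(run(evidence))                   # dissent first: ends near 0.76
    print(run(list(reversed(evidence))))   # confirmation first: ends near 0.90

An agent who meets the dissents first takes one on board, writes the source off, and never hears the rest (ending near 0.76); an agent who meets the confirmations first builds up enough trust in the source to hear out both dissents (ending near 0.90). Each update is rational at each step, yet identical evidence yields divergent endpoints: which path we start down matters.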
These cases are atypical, however. Most of us, most of the time, first begin to wonder about the reliability of our echo chambers once we are already in them, and the higher-order evidence they supply provides us with reassurance. We are all always in echo chambers (they're essential to the functioning of rational social animals like us), and it's too late to wonder whether we should enter them. Our echo chambers are constituted by the people we trust to provide us with higher-order evidence, and assessments of their reliability must be made from within our echo chambers, in ways guided by the evidence they provide us with.
These facts ensure that belief formation is a fraught and difficult issue. Addressing it requires work to rebuild trust between agents and scientific institutions (which requires, inter alia, that these institutions do more to deserve trust), and at the same time work to lower trust in unreliable sources. Attempts to address it by legislation to promote more reliable voices over less reliable ones themselves risk undermining trust (though it may be that no-platforming can be justifiable when it is aimed at ensuring that new higher-order evidence is not generated by the provision of a platform associated with reliability (Levy, 2019b)). Elsewhere (Levy, 2022) I've made a start on other tentative and underdeveloped proposals to improve the quality of the epistemic environment and thereby the reliability of the voices echoed. But just as belief formation, good and bad, is a collective enterprise, so is such work. It will take concerted work on many fronts and contributions from many disciplines to address bad beliefs.

Notes
1. Buttressing this claim that echo chambers are not novel, it's worth noting that there's little evidence that the epistemic pathologies often blamed (partially, at any rate) on echo chambers are themselves on the increase. Uscinski and Parent (2014) find no evidence that conspiracy theories have become more prevalent in the United States since the advent of social media (or indeed over the past 100 years). There have been spikes in acceptance of conspiracy theories, but the last one occurred around 1950. I thank a reviewer for Philosophical Psychology for prompting me to recognize the relevance of this evidence.
2. Moreover, as the literature on inductive risk has shown, it is at least sometimes impossible to assess factual questions without making normative assumptions, such as how heavily to weigh the risks of false positives against those of false negatives (Douglas, 2000). This fact may make testimony about the reliability of empirical research more accurate, by my lights, when it comes from those who share my values.
3. Begby (2020) explains evidential preemption by reference to Bayesian theory. He suggests that explicit warnings aim at altering our estimates of likelihood. My expectation that you will testify that not-p preempts your testimony: since I now expect that testimony, it has no evidential value for me.
4. On some accounts, out-group members won't count as peers, of course. They're not likelihood peers: agents I regard as just as likely as I am to hit upon the right answer to questions under dispute (Elga, 2007). Perhaps the mere fact of disagreement provides us with (higher-order) evidence only when it's the disagreement of likelihood peers; nevertheless, it seems irrational entirely to discard the testimony of agents who are as competent and knowledgeable on a topic as I am (see Levy, 2021a for discussion).
5. If people are more rational in belief formation and updating than is commonly held, why are theorists so strongly drawn to explanations turning on irrationality? I think there may be two reasons. First, because we don't share the same priors as those who come to hold false beliefs (about COVID-19 or Trump), we find it hard to see how they could come to believe these things through a rational process. Second, people often misrepresent their own beliefs, expressing belief in sometimes bizarre claims in order to signal their political allegiances, rather than to sincerely report their beliefs (see Hannon, 2021; Levy & Ross, 2021 for review and discussion).
6. I'm grateful to a reviewer for Philosophical Psychology for asking me to think about how my view relates to Kahan's identity-protective cognition account, and Williams' development of that view.
7. I'm grateful to a reviewer for Philosophical Psychology for pressing me on this issue.
8. Yuval Avnur (2020) is, to my knowledge, the only writer on echo chambers to consider whether higher-order evidence might explain the divergent beliefs they give rise to. He regards the suggestion as merely a "notional variant" of his preferred motivated reasoning account, because diverging groups may have access to the same higher-order, as well as first-order, evidence. This response overlooks the role of our priors in responding to evidence: our differing priors (expressed in particular in our dispositions to defer and to dismiss) explain why we respond differently to the evidence. We don't need to invoke motivated reasoning. Motivation might sometimes or often explain why my priors are set as they are, but that doesn't entail that I don't respond rationally to my evidence in the light of my priors.
9. In saying we're ill-equipped to assess which of the competing claims to expertise reflects genuine reliability, I take a stand on a controversial issue, of course. Other philosophers are much more optimistic (Anderson, 2011; Goldman, 2001; Guerrero, 2017). I argue in favor of pessimism in Levy (2022).