1 Introduction

The epistemic condition of blameworthiness pertains to the relationship between an agent’s being morally blameworthy for a wrongdoing and the agent’s epistemic relation to the fact that the act is wrong. The idea is that blameworthiness requires that the agent is in the right sort of epistemic relation to the wrongness of the action, such that ignorance (understood as lacking a true belief) is sometimes an excuse. This idea that ignorance can excuse is a familiar part of ordinary moral life. “I’m sorry, I didn’t know,” is something many of us have probably heard others (and ourselves!) say as an excuse for our accidental wrongdoing. Of course, ignorance is not always excusing. Accounts of the epistemic condition of blameworthiness take up this task of identifying when ignorance does or does not excuse.

One plausible view in the literature is the Reasonable Expectation View (hereafter, RE). According to RE, whether ignorance excuses an agent from deserving blame is a matter of whether the agent could reasonably have been expected to avoid or correct the ignorance. This paper will not take up the debate about whether RE is true or how it fares against rival accounts.Footnote 1 Instead, this paper is primarily focused on examining what RE implies for an interesting type of ignorance: moral or political ignorance rooted in cognitive bias. With the prevalence of political polarization, mis/disinformation campaigns, news algorithms, social media, and echo-chambers, it is worth examining what a plausible view like RE implies about when, if ever, bias-based ignorance excuses the agent from blame—from deserving resentment or indignation.Footnote 2

That said, exploring the application of RE to cognitive biases in general would be too ambitious for a single paper, given the heterogeneity of cognitive biases. So this paper focuses on myside bias. Ultimately, the paper argues that RE has two revisionary implications for our practice of blame.

(1) Political or moral ignorance rooted in myside bias is an excuse in a surprising number of cases.

(2) Our epistemic position to know whether a given instance of political or moral ignorance excuses is often inadequate.

Roughly, the case for (1) and (2) starts from the fact that whether an agent could reasonably have been expected to have overcome their cognitive bias (and thereby avoided or corrected their bias-based ignorance) is a matter of their cognitive capacities, opportunities, and evidence pertaining to bias mitigation. From here, I appeal to empirical evidence about what it takes to mitigate myside bias to argue that a significant number of ordinary non-ideal agents do not have sufficient opportunities to develop reliable bias mitigation skills. In turn, they cannot reasonably be expected to have mitigated the bias, and thereby avoided or corrected the ignorance. Moreover, I argue that the details we would need to know in order to determine whether a given agent could reasonably have been expected to mitigate myside bias are fairly complex, such that our epistemic position for knowing whether a given instance of ignorance is an excuse is less secure than we might have hoped.

The remainder of the paper is structured as follows. In Section 2, I explain in more detail the account of the epistemic condition of blameworthiness that my argument relies on. In Section 3, I introduce the cognitive bias that the paper uses as a case study: myside bias. Section 4 then draws on empirical research about myside bias to identify what a reasonable expectation to avoid ignorance, despite our natural tendency towards myside bias, demands of us. Sections 5, 6, and 7 examine the quality of educational opportunities in the United States as they relate to the skills needed for mitigating myside bias. This serves as a case study showing important ways in which RE has revisionary implications for our practice of blame. Lastly, Section 8 provides additional notes of clarification about the revisionary implications of RE that I have argued for.

2 The Reasonable Expectations Account

The core principle of RE is formulated as follows.

Reasonable Expectation (RE): S's state of ignorance, IG, excuses her from a resulting ignorant wrongdoing iff S could not have reasonably been expected to have corrected or avoided IG.Footnote 3

Michael Zimmerman (1997), Gideon Rosen (2004), and Neil Levy (2009; 2016) have argued on the basis of RE for revisionary implications for our practice of blame. However, unlike my argument, their arguments rely on a conception of reasonable expectation that takes what can reasonably be expected of S to be constrained by S’s occurrent beliefs about what S has reason to do. The idea is that if x-ing is contrary to S’s occurrent beliefs about what is worth doing, then S cannot reasonably be expected to x. In contrast, my argument rejects this occurrent belief constraint in favor of a rival account of reasonable expectations, on which what can reasonably be expected of an agent is a matter of the agent’s capacities, knowledge, opportunities, and evidence, with no occurrent belief constraint. On this view, just because an agent’s occurrent beliefs do not call for x-ing, it does not follow that an expectation that the agent x is an unreasonable one. To help illustrate this understanding of reasonable expectations, let us turn to an example from Clarke (2017).

Millionth Customer: Imagine that Sam and his wife had read in the morning paper that the store would award the prize to its millionth customer. As the article (which they read to the end) explained, when the store reached its millionth customer minus one, a sign to this effect would be posted in the store window. As Sam walked by, he saw the sign in the window. But Sam failed to put two and two together. In this version of the story, absent excusing circumstances, Sam’s wife might have a beef with him. He wasn’t aware that he was failing to win the prize, but it was reasonable to expect him to have been aware of this fact (Clarke 2017, 248).

The idea is that Sam’s cognitive capacities and evidence make it so he can reasonably be expected to “put two and two together,” so to speak, and that this can be so even if Sam never knowingly acted contrary to what he occurrently believed he had most reason to do. In short, on this understanding of RE, an agent can be in a position where he can reasonably be expected to know better on the basis of his cognitive capacities, opportunities, and evidence even if he doesn’t realize that he is falling short. Of course, it needn’t be the case that the cognitive capacities and evidence Sam has ground a reasonable expectation to have put two and two together. Further details about why Sam failed to put two and two together might preclude a reasonable expectation. For instance, what if Sam’s failure was due to his being highly distracted by a pressing concern that ought to be occupying his thoughts? What if it was due to his having a brain glitch through no fault of his own, as we all sometimes do? The point of the case is just to show that evidence and capacities can play a role in grounding a reasonable expectation to x despite the agent’s occurrent beliefs not calling for him to x. Call this the Capacities, Opportunities, and Evidence version of RE.

An interesting point to note is that this latter approach has been put forth as a way of arguing that RE is not significantly revisionary.Footnote 4 Yet as my argument aims to show, this version of RE has surprising revisionary implications about how often ignorance is an excuse. (Hereafter, my use of ‘RE’ will refer to the Capacities, Opportunities, and Evidence version of the reasonable expectation principle.) Now, let us turn to the particular cognitive bias that this paper examines: myside bias.

3 Introduction to Myside Bias

Myside bias, a species of confirmation bias, is the natural human tendency to “evaluate evidence, generate evidence, and test hypotheses in a manner biased towards [our] own prior beliefs, opinions, and attitudes” (Stanovich, West, & Toplak 2013, 259). When we encounter information that is contrary to our prior commitments, our brain’s natural response is to try to reduce or outright avoid the stress of cognitive dissonance that comes when we hold conflicting beliefs. To avoid the psychological distress, we automatically engage a tendency to give that counterevidence less weight than it deserves; we are more critical of it, downplay its significance, or even outright dismiss it. And so, when this tendency occurs, the task of correctly evaluating our evidence becomes more difficult than it would be if we did not naturally have this biased tendency. Here, a question arises: when, if ever, does this bias make the expectation that the person correct or avoid the ignorance an unreasonable expectation? The answer turns on what the person would have needed to do to mitigate the effects of the bias and whether that agent could reasonably have been expected to have done so.

One might think that if we are more careful in our reasoning or more concerned with getting the right belief, we will not engage in myside bias. In other words, one might be tempted to attribute myside bias to nothing more than someone exercising their agency in an intellectually lazy way. However, being concerned with careful reasoning and true beliefs does not get one out of the grip of myside bias. In fact, even having the various features associated with traditional measures of intelligence would not put one in a position to easily overcome myside bias.Footnote 5 What does effectively mitigate myside bias is engaging in a particular type of cognitive activity: metacognition.

In the next two sections of the paper, I go through what metacognition is and the different ways that engaging in metacognition can mitigate bias, have no impact, or even make bias worse. I would like to flag that it may be easy to lose sight of the larger aim of the paper as we work through the details of metacognition. Nevertheless, working through these weeds is important for having a clearer understanding of what exactly an expectation to overcome the effects of myside bias demands of a person. And since this paper is in the business of assessing when such an expectation is reasonable, a clearer understanding of what exactly the expectation demands is very important. With that cautionary note out of the way, let us turn to an exploration of metacognition and myside bias.

4 Metacognition: What It Is and How It Can Mitigate Myside Bias

In broad strokes, metacognition is simply the process of thinking about one’s own thinking, i.e., cognition of one’s cognition. More precisely, it is a cognitive process that involves three factors: i) the information one has about one’s own cognition and cognition in general; ii) the process of monitoring one’s current cognitive activities; and iii) regulating the strategies and reasoning procedures that one is currently using (Schwartz, Scott, & Holzberger 2013, 80). These three factors work together to create a cognitive process that involves someone monitoring her own cognition for deviations from what she thinks good cognition in her current circumstances would be like. And if her own sense of how she is reasoning matches up with (or at least does not conflict with) background assumptions about good reasoning, then she does not modify the way that she is approaching the issue. But if the agent’s monitoring suggests to her that she is reasoning in a way that is problematic according to her own background assumptions, then she attempts to modify her way of approaching the problem to get it to match up with her model of good reasoning.

Accordingly, if someone is engaging in a pattern of reasoning of whatever sort, biased or not, that pattern of reasoning is changed via metacognition only if the person’s own conception of good reasoning differs from how she conceives of her current way of reasoning. It is not enough for the pattern to be different from what her model of good reasoning prescribes. It needs to be taken as different. This is because the input that drives revision in the process of metacognition is not the actual occurrence of the reasoning straying from the individual’s model of good reasoning; it is the person’s representation of her reasoning pattern that serves as the input. Thus, even if the person’s model of good reasoning involves strategies that mitigate bias, it needn’t follow that active engagement in metacognition will lead to the bias being corrected or mitigated. The person could still fail to represent the occurrent reasoning as falling short of her own standard. Accordingly, successful mitigation or correction of bias via metacognition requires that the person represent her pattern of thinking as falling short of her standard.
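Because the structure of this loop matters for the argument that follows, it may help to display it schematically. The following toy sketch (in Python, with purely illustrative names of my own such as perceive and revise, not anything drawn from the cited psychological literature) encodes the key point: revision is driven by the agent’s representation of her reasoning, not by the reasoning itself.

```python
# A toy model of the metacognitive loop sketched above. Every name here is
# an illustrative placeholder of my own; nothing below is drawn from the
# cited psychological literature.

def metacognitive_step(actual_reasoning, perceive, good_reasoning, revise):
    """One pass of monitoring and regulation.

    actual_reasoning: the pattern of thought actually occurring (possibly biased)
    perceive:         how the agent represents her own reasoning to herself
    good_reasoning:   the agent's own standard of good reasoning (a predicate)
    revise:           the strategies she deploys when she detects a shortfall
    """
    # Monitoring operates on the agent's *representation* of her reasoning,
    # not on the reasoning itself.
    perceived = perceive(actual_reasoning)

    if good_reasoning(perceived):
        # No perceived conflict with her own standard, so nothing is changed,
        # even if the actual reasoning is in fact biased.
        return actual_reasoning

    # A perceived shortfall triggers regulation. Whether this mitigates myside
    # bias depends on whether the revision targets the biased part of the thinking.
    return revise(actual_reasoning)

# Toy illustration: the agent's standard rules out one-sided reasoning, but if
# she fails to perceive her reasoning as one-sided, no revision occurs.
biased = {"considers_other_side": False}
perceive_inaccurately = lambda r: {"considers_other_side": True}   # misrepresents herself
standard = lambda r: r["considers_other_side"]
revise = lambda r: {**r, "considers_other_side": True}

print(metacognitive_step(biased, perceive_inaccurately, standard, revise))
# -> {'considers_other_side': False}: the bias survives despite a good standard.
```

The final lines display the failure mode just described: a good standard of reasoning does nothing if the agent misrepresents her own reasoning as already meeting it.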

Moreover, the representation needs to target the particular pattern of thought that involves myside bias as the part of the reasoning that falls short of good reasoning. After all, someone could represent her cognition as standing in need of improvement by noting any old part of her way of reasoning as deficient and in need of improvement. Yet if one is to mitigate myside bias, the part of one’s reasoning that involves the bias is the part that needs to be represented as in need of improvement. However, even having a representation of one’s myside biased thinking as thinking that falls short of good reasoning is not enough to mitigate bias via metacognition. Identifying problematic ways of thinking as problematic and in need of revision is not sufficient for resolving those problems. Knowing is only half the battle. The person needs to actually make revisions to the way of reasoning. And more precisely, she needs to carry out a revision that i) successfully targets the biased pattern of reasoning and ii) successfully mitigates the biased reasoning.

To sum up, whether metacognition mitigates myside bias depends on

(a) the person representing her own pattern of thought as falling short of good reasoning, such that it calls for improvement,

(b) the pattern of thought that is represented as needing improvement being the part of her thinking that involves myside bias,

(c) the person’s own representation of good reasoning containing concrete, informative strategies that work against engaging in myside bias, and

(d) those concrete, informative strategies being deployed so that they target the patterns of thought behind the biased thinking.

Accordingly, when someone’s ignorance behind an ignorant wrongdoing is due to myside bias, whether she can reasonably be expected to know better is a matter of whether she could have reasonably been expected to have corrected her biased thinking by meeting (a)-(d). If we are to better understand what exactly is demanded of agents when they are expected to correct their ignorance and not fall into myside bias, we should turn to examining what it takes for an actual person to meet (a) through (d) and thereby mitigate myside bias.
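Stated compactly, (a) through (d) function as jointly necessary conditions: failure at any one of them blocks mitigation via metacognition. The following minimal sketch, with hypothetical boolean inputs of my own devising rather than anything measured in the studies discussed below, simply restates that conjunction.

```python
# Compact restatement of conditions (a)-(d); the boolean inputs are
# hypothetical stand-ins, not measurable quantities from the studies below.

def metacognition_mitigates_myside_bias(
    represents_thinking_as_deficient: bool,      # (a)
    deficiency_targets_biased_pattern: bool,     # (b)
    schema_has_anti_bias_strategies: bool,       # (c)
    strategies_deployed_on_that_pattern: bool,   # (d)
) -> bool:
    """Mitigation via metacognition requires all four conditions to hold."""
    return (
        represents_thinking_as_deficient
        and deficiency_targets_biased_pattern
        and schema_has_anti_bias_strategies
        and strategies_deployed_on_that_pattern
    )

# E.g., noticing a flaw in one's reasoning ((a) holds) while targeting the
# wrong part of it ((b) fails) leaves the bias untouched:
print(metacognition_mitigates_myside_bias(True, False, True, True))  # False
```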

A review of the studies on metacognitive strategies for reducing myside bias suggests that a key factor in whether agents fall prey to myside biased thinking is the type of argument schema that they use, i.e., what they think good reasoning and argumentation is like. Argument schemas that do not engage with multiple points of view (e.g., that do not call for considering counterexamples) are conducive to myside bias. In contrast, when an individual holds an argument schema that takes strong argumentation to involve critically engaging with alternative points of view and counterexamples to one’s own view, the individual is better suited to avoid myside bias. That said, the studies suggest that merely engaging with points of view counter to one’s own is not sufficient. One must critically engage with alternative points of view and be just as critical of one’s own view as one is of the alternative(s) (Christensen-Branum et al., 2018).

For instance, McCrudden and Barnes (2016) found that a major factor for whether students engage in myside bias is whether students’ argument schema calls for consistently applying the same evaluative criteria to arguments, regardless of whether one agrees or disagrees with the conclusion of an argument. Their findings are corroborated by the results of prior studies on metacognition and myside bias (Kardash and Howell 2000; Lombardi et al. 2013; Song and Ferretti 2013). In each of these other studies, critical evaluation of alternative points of view and a consistent application of said evaluative standards were associated with students being less prone to engaging in myside biased thinking.

Furthermore, Lombardi et al. (2013) found that teaching students metacognitive strategies that involved critically reflecting on alternative viewpoints in a consistent manner decreased students’ tendency to engage in myside biased thinking and increased their understanding of how to evaluate scientific arguments, and that these effects were maintained when the students were re-evaluated six months later. The importance of critical engagement is further corroborated by the findings of Wolfe & Britt (2008) and Wolfe (2012). In these studies, myside bias was associated with having a “fact-based argument schema” as opposed to a “balanced argument schema.” Students who viewed good argumentation as a matter of simply listing facts that supported their conclusion were far more prone to myside bias than students who thought that good argumentation involved a balanced consideration of both sides of an issue.

I should note that neither Wolfe & Britt (2008) nor Wolfe (2012) explicitly mentions metacognitive strategies or consistently applying the same evaluative criteria to belief-consistent and belief-inconsistent arguments. However, their findings fit and corroborate the hypothesis that such consistent critical engagement is effective at reducing myside bias. For if that hypothesis is right, then we should expect students who have a fact-based argument schema not to invoke strategies that mitigate the tendency to engage in myside bias, and thereby to be more susceptible to the bias. Likewise, if the hypothesis is right, then we should expect students who engage with considerations for and against alternative viewpoints to have less of a tendency to engage in myside bias than their peers who use a fact-based argument schema. Of course, it is not enough simply to consider alternative viewpoints, since someone could still do so in an inconsistent manner. But if someone applies the consistent critical reflection suggested by McCrudden & Barnes (2016), they will at least need to take a balanced approach to argumentation as opposed to simply listing off positive support for their beliefs. And so, given the hypothesis of McCrudden & Barnes (2016), it should not be surprising that students who used a balanced argument schema were found to engage less often in myside bias than their peers who used a fact-based argument schema.

Overall, these studies on effective strategies for reducing myside bias suggest that successfully reducing myside bias via metacognition involves a certain degree of know-how with respect to evaluating arguments and evidence. As others have noted, “understanding what to do with knowledge may be more critical in informal reasoning competence than having a larger knowledge base” (Weinstock 2009, 431 as cited in Wolfe 2012, 479). To successfully mitigate myside bias when one is evaluating the merits of a position, one needs an argumentation schema that not only calls for the consideration of arguments for opposing viewpoints, but for doing so in a way that avoids applying a double standard. Moreover, this skill does not seem to be something that naturally occurs in human reasoning. Prior studies on metacognition suggest that the type of argument schema used is heavily influenced by a person’s education about argumentation. In their literature review, Christensen-Branum et al. note that prior studies support the view that “how teachers frame and define argument in the classroom, especially over the life of a student’s educational career, is heavily implicated in the student’s propensity toward myside bias” (2018, 438).

Accordingly, when the agent in question is a cognitively typical adult in non-extenuating circumstances, whether they can reasonably be expected to have avoided or corrected ignorance rooted in myside bias will generally depend on the following two factors.Footnote 6

The Opportunity for Skill Development Factor: Did the person have available educational opportunities—that they could reasonably be expected to have taken—to learn and develop an understanding of good argumentation that calls for balanced critical evaluation of both one’s own position and opposing positions?

The Application of Skills Factor: Could the person reasonably be expected to successfully apply the argument schema to the biased reasoning that gave rise to the ignorance in question?

With this in mind, let us now return to the question of when myside bias-based ignorance excuses for real non-ideal agents. In doing so, we’ll begin with the factor of opportunity for skill development and take the United States of America as a case study.Footnote 7

5 The United States and Reasonable Expectations to Mitigate Myside Bias

A good place to start in addressing the educational opportunities of non-ideal agents in the US is the formal public educational opportunities that are available. Until 2011, the national standards for public education in the US did not place an emphasis on the importance of engaging with opposing viewpoints in argumentation. Prior to Common Core, the national education standards did not include anything specific about writing or argumentation as such. For instance, No Child Left Behind’s standards were concerned only with mathematics and reading comprehension, and said nothing about applying a standard for argumentation or evidence assessment (U.S. Department of Education, 2002). There were state standards for written argumentation, but they tended to involve teaching argumentation via the standard five-paragraph essay that students are taught to use in primary and secondary education. Yet the standard five-paragraph essay is not conducive to mitigating myside bias. It takes the form of a fact-based argument schema, which, per Wolfe & Britt (2008) and Wolfe (2012), is associated with myside biased reasoning. And so, the opportunities afforded by public education in the US pre-2011 were not guaranteed to be sufficient for developing the skills necessary for mitigating myside bias. In fact, given the emphasis on a fact-based argument schema, we have some reason to suspect that many of those opportunities were deficient.

When Common Core was adopted in 2011, standards were put in place for argumentation that emphasized the importance of considering and charitably construing alternative points of view and even considering counterarguments. However, it is unclear whether the standards call for a consistent application of a critical standard when it comes to engaging the other side. The standard set by Common Core is vague about what it means to charitably construe alternative points of view or to consider counterarguments. As we are all familiar with, there are better and worse ways to engage with counterarguments. It is one thing to take up an easy counterargument and another to dig deep and consider more forceful objections. That said, we can set aside the question of whether the Common Core standard sufficiently emphasizes a consistent application of critical scrutiny. This is because even if the standard itself sets the right bar, we have reason to think that the implementation of the standard within the education system did not guarantee opportunities that would make a meaningful difference to student abilities.

Unfortunately, no data on national writing scores have been published since the National Assessment of Educational Progress (NAEP) released its 2011 report of student scores. So, we cannot reason from data that directly speak to Common Core’s influence on students’ writing and argumentative abilities. However, this does not mean we lack evidence for the claim that these abilities did not meaningfully improve.

First, scores for reading comprehension and analysis at the 12th grade level did not meaningfully improve from 2009 to 2019. The same is true for mathematics at the 12th grade level (NAEP, 2019). One explanation is that despite there being a better standard in place to guide what students are taught, the standard has largely not been implemented well. Moreover, we have no reason to think that implementation has been any more successful in writing and argumentation. In a teacher survey by the Reboot Foundation, a major theme in teacher responses was a lack of resources and professional development training for teaching critical thinking (Reboot Foundation, 2020). The lack of resources and professional training likely tracks wealth differences between school districts, where wealthier districts have the means to help teachers improve students’ critical thinking skills, while districts in less well-off areas are left underfunded and undertrained. For instance, within the state of Illinois, the quality of resources and training for implementing the Common Core standard differs greatly depending on the wealth of the school district (An & Cardona-Maguigad, 2019). Lastly, a further problem for the teaching of the critical thinking skills necessary for bias mitigation in the US is that there are misunderstandings about how to effectively teach these skills to students (Bouygues, 2022).

Thus, in light of what limited evidence is available on the teaching of critical thinking and argumentation in the US, we should be skeptical of the idea that finishing primary and secondary education in the US is enough to make it reasonable to expect the agent to have the skills necessary for overcoming myside bias. This is true even if we restrict attention to education after the implementation of Common Core. Of course, this is not to say that the opportunities available to students in the US primary and secondary education system are never capable of grounding a reasonable expectation to have developed the skills needed for mitigating myside bias. The evidence above supports the more moderate conclusion that the opportunities afforded via one’s primary and secondary education in the US do not always ground such a reasonable expectation. Whether they do will vary (e.g., by school district).Footnote 8

Moreover, even if we have an individual who falls into the camp where opportunities from primary and secondary education were insufficient, it need not follow that the agent cannot reasonably be expected to overcome myside bias. There are further opportunities beyond those provided by one’s primary and secondary education. To reasonably conclude that the agent can’t reasonably be expected to overcome the bias, we would need to know that there were no further opportunities available that were sufficient for grounding such a reasonable expectation. For instance, higher education is one opportunity that one might have for learning about the importance of a balanced argument schema that applies a consistent critical standard. Of course, not everyone pursues the opportunity of higher education. In fact, nearly half of the US population has not attained a college degree (U.S. Census Bureau, 2022). But not having taken an available opportunity is not the same as not taking an opportunity that one could reasonably be expected to have taken. Reasonable expectations are not just about the most readily available or actually taken opportunities. The opportunities that someone could have taken but did not take can also be relevant, depending on the context. In some cases, the average US adult may have had the opportunity to pursue higher education but decided not to. For these cases, the issue is i) whether the person could reasonably be expected to have pursued the opportunity to receive higher education and ii) whether that opportunity would have led them to develop the argumentative skills to mitigate myside bias.

For those adults in the US who did not pursue higher education, it is doubtful that an expectation that they pursue it is a reasonable expectation. Even if we set aside the issue of cost for the sake of argument, there are legitimate paths other than pursuing a college degree. It is their own life to live; it would be inappropriate to expect them to pursue higher education if they would rather walk a different path (e.g., being a stay-at-home parent or practicing a trade). Thus, in those cases, the expectation that they attended university and thereby acquired the skills necessary for mitigating myside bias would be an unreasonable expectation. However, might it be reasonable to expect those who did go through higher education to have developed the skills necessary for mitigating myside bias? Let us now turn to that question.

The prospects are more optimistic when we consider those who took the opportunities afforded by higher education. Of course, not all higher education opportunities are equal, nor do all majors place the same emphasis on metacognition and balanced argument schemas. However, general education requirements that apply regardless of major tend to involve multiple courses in which students have the opportunity to learn and develop the skills necessary for bias mitigation. Consequently, it will often be the case that those who have gone through the general education requirements of their degree can reasonably be expected to have some degree of skill pertaining to metacognition and applying a critically consistent argument schema. Of course, the opportunities that one can reasonably be expected to take will be better for students who pursue a degree that focuses more on critical thinking, argumentation, and analysis. For instance, the opportunities pertaining to bias mitigating skills for an undergraduate pursuing a degree in business administration will be less rich than those for a student pursuing a degree in the liberal arts. That is not to say that students who graduate with a degree in business administration lack the skills necessary for mitigating myside bias. Nor is it to say that any given individual with a business administration degree has worse bias mitigating skills than any given individual with a degree in the liberal arts. I just mean to highlight that not all opportunities afforded by one’s college education are the same; some tracks within higher education include opportunities that are more focused on the skills necessary for mitigating myside bias than other tracks.

Moreover, the quality of the opportunities will likely also vary along a number of other dimensions, such as professor or instructor effectiveness and even the individual student’s work-school-life balance. For instance, what a student can reasonably be expected to get out of in-class opportunities to develop bias mitigating skills is surely different for a well-off student who is in a position to focus solely on their education and for a student who also needs to work and take care of their sick parents. Again, this is not to say that the student with more taxing responsibilities cannot reasonably be expected to develop metacognitive skills, but that the degree of skill that they can reasonably be expected to develop is plausibly lower, all other things being equal.Footnote 9

To sum up, the quality of the opportunities for developing bias mitigation skills varies across individuals. For opportunities in primary and secondary education, whether they are even sufficient for grounding a reasonable expectation to develop the skills will vary along factors outside of a student’s control: the resources of their school district and the sort of skills and training that their teachers receive for effectively teaching critically consistent argument schemas and their application. In contrast, the opportunities students are provided with in higher education are at least sufficient to ground a reasonable expectation to have developed some degree of skill with respect to metacognition and the application of critically consistent argument schemas. The variation will pertain to the degree of skill that they can reasonably be expected to have developed, given the differing quantity and quality of opportunities per student, their particular program’s emphasis on critical thinking and argumentation, and other relevant factors about the student’s personal life.

Let us now turn to examining opportunities outside of formal education to develop the argument schema associated with myside bias mitigation. Surely, there are such opportunities available; we live in a time when there is an incredible amount of information accessible with just a phone and an internet connection. Given this ample availability of information about bias mitigation, is it a reasonable expectation that an agent have developed the right skills for mitigating myside bias, regardless of their formal educational opportunities? Unfortunately, no. For those whose formal educational opportunities were deficient, the opportunities outside of formal education will also be deficient. Even if we grant for the sake of argument that these agents all have evidence that they have bias and that it is worth mitigating, it is questionable that the opportunities that they can reasonably be expected to take are sufficient for properly developing a robust set of bias-mitigating skills.

The most readily available and seemingly reliable sources on bias that they can reasonably be expected to find by doing their own research are not great. In the mainstream sources that discuss reducing bias, the general idea is to engage with sources that disagree with you.Footnote 10 This is certainly an important step in overcoming biases like myside bias. It at least gets one past the issue of only seeking out confirming evidence for one’s prior beliefs. However, this step is deficient on its own. As we saw earlier from Wolfe & Britt (2008) and Wolfe (2012), a balanced argument schema is not just a matter of being aware of or engaging with alternative points of view. It requires being critically consistent and making sure that the same critical standard one subjects other views to is also applied to one’s own preferred view. If merely looking into other views and listening to opposing ideas is all that one does, this does not necessarily mitigate the biased tendency to discount evidence that is contrary to one’s prior beliefs. Accordingly, while there are further opportunities for learning about bias mitigation outside of one’s formal education, the opportunities that are readily available and accessible are deficient for addressing the part of myside bias that involves discounting evidence against one’s prior belief(s). These opportunities might ground a reasonable expectation to avoid ignorance rooted in simpler cases of confirmation bias, where merely looking into alternative sources would have been sufficient for remedying the ignorance. But this does not translate into a reasonable expectation to avoid more complex forms of biased reasoning.

We have now seen that the opportunities for developing the metacognitive skills necessary for mitigating myside bias are a mixed bag. Those outside of formal education are largely deficient. Those within primary and secondary education differ across individual schools and school districts. And lastly, higher education provides sufficient opportunities to develop metacognitive skills, but only those who have chosen to pursue higher education can reasonably be expected to have developed said skills via those opportunities.

6 Levels of Education and Reasonable Expectations

What does this mean for whether an agent can reasonably be expected to have the skills necessary for overcoming myside bias, and in turn, for avoiding or correcting ignorance rooted in their myside bias? Let us distinguish three groups of agents: i) those who have pursued higher education (HED); ii) those who did not pursue higher education but were not failed by their primary and secondary educational opportunities (SPED); and iii) those who were failed by their primary and secondary educational opportunities and did not pursue higher education (FED).

Arguably, only those in either HED or SPED can reasonably be expected to have robustly developed the skills necessary for overcoming myside bias. Accordingly, one may be tempted to draw the conclusion that ignorance rooted in myside bias would excuse members of FED per RE. However, this is not quite the conclusion that should be drawn. Moving to that conclusion would fail to appreciate the nuances between the various members of FED. Yes, none can reasonably be expected to have robustly developed the relevant skill-set. But this leaves room for the members to be under a reasonable expectation to have developed some degree of skill. Moreover, the degree of skill that individuals of FED can reasonably be expected to have developed will, again, vary. Not all sets of opportunities that have failed students are equal. Some will have fallen less short than others.

Unfortunately, there is no available empirical evidence that would give us a detailed picture of the specific reasonable expectations that hold for members of FED. But this is to be expected given the complex heterogeneity of the opportunities and circumstances of the numerous members of FED; there is no shared level of skill that all members of FED can reasonably be expected to have developed. Nevertheless, we can infer a schema that outlines what the reasonable expectations track. The expectations for a given individual of FED that are reasonable will track the difficulty of overcoming the particular instance of myside bias and the particular skill-set that the individual could reasonably be expected to have developed. The worse their educational opportunities, and in turn the lower the quality of mitigation skills that they can reasonably be expected to have developed, the lower the bar is for an instance of myside bias to be one that they could not reasonably be expected to overcome. And conversely, the better their educational opportunities, and in turn the higher the quality of mitigation skills that they can reasonably be expected to have developed, the higher the bar is for an instance of myside bias to be one that they could not reasonably be expected to overcome.
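To display the structure of this schema, here is a minimal sketch that assumes, purely for illustration, that the difficulty of a biased episode and the mitigation skill an agent can reasonably be expected to have developed could be placed on a single common scale; the function names and the numerical scale are my own devices, not something the argument or the empirical literature supplies.

```python
# Illustrative sketch of the schema: ignorance rooted in myside bias excuses
# (per RE) when the difficulty of overcoming that instance of bias exceeds the
# level of mitigation skill the agent could reasonably be expected to have
# developed. The numeric scale is a stand-in; nothing here fixes real units.

def reasonably_expected_to_overcome(difficulty: float, expected_skill: float) -> bool:
    """True when the agent could reasonably be expected to overcome this instance."""
    return difficulty <= expected_skill

def excused_per_RE(difficulty: float, expected_skill: float) -> bool:
    """Ignorance excuses iff overcoming the bias could not reasonably be expected."""
    return not reasonably_expected_to_overcome(difficulty, expected_skill)

# The worse the agent's educational opportunities, the lower expected_skill,
# and so the lower the bar of difficulty at which ignorance begins to excuse.
print(excused_per_RE(difficulty=2.0, expected_skill=1.0))  # True: excuse
print(excused_per_RE(difficulty=2.0, expected_skill=3.0))  # False: no excuse
```

Nothing hangs on the particular numbers; the sketch simply records how the two factors trade off against one another.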

To help illustrate the schema in application, let us turn to a few different scenarios. First, consider an especially simple case of myside bias.

Jacob & Climate Change Denial: Jacob’s educational opportunities were not especially focused on developing the metacognitive skills for mitigating myside bias. Due to those deficient opportunities, he could not reasonably be expected to reliably mitigate his tendency towards myside bias. However, his education did at times allude to the importance of being critical of information, even information that is friendly to your own view. Moreover, he was taught the difference between weather and climate. Jacob uses this distinction to dismiss arguments in favor of climate change that make this mistake, but never considers whether his own alleged evidence makes the same mistake. But it does. The anecdotal evidence about weather that Jacob thinks bolsters his rejection of climate change makes the very mistake that he identifies in other lay arguments for climate change.

Despite belonging to FED, Jacob could still be under a reasonable expectation to have mitigated his myside bias and noticed that his own reasoning involves the same mistake that he is critical of in others. Perhaps a realistic Jacob would not be under a reasonable expectation to have noticed this error in every instance in which it occurs, especially if the topic is polarized.Footnote 11 However, he would still be under a reasonable expectation to have noticed the mistake at some point, since he is under a reasonable expectation to sometimes be critical of his own reasoning. While his education failed him, it did not fail him to the extent that he could not reasonably be expected to occasionally apply to his own reasoning the same critical lens that he applies to others, especially when the mistake he objects to is clearly the very same kind of mistake he himself is making. Thus, were Jacob to act wrongly out of this ignorance, such as by flouting any climate-related obligations he has as an individual (e.g., not voting for local policies that would help address carbon footprints), RE would not take his ignorance as an excuse.

It is worth noting that this does not mean that someone like Jacob can reasonably be expected to eventually correct all instances of myside bias-based ignorance. Not all cases of such ignorance can be remedied simply by applying a marginal degree of critical thinking to one’s own view and reasoning. Consider the following, more difficult case of myside bias for Jacob.

Doxxing Dan: Jacob becomes aware that his neighbor, Officer Dan, was one of the protestors outside the US Capitol on January 6th, 2021. Dan was one of the protestors who neither committed violence against Capitol police nor entered the building. Jacob is aware of this, but still lumps Dan in with the worst of Dan’s peers: those who violently attacked Capitol police and rummaged through the private offices of members of Congress. Of course, Jacob would not lump peaceful protestors in with other violent actors if it were a protest that he supported (e.g., one for police reform). Jacob fails to recognize this, though; he simply thinks to himself: “If 9 people sit down at a table with 1 Nazi, there are 10 Nazis at the table.” In turn, Jacob takes it upon himself to dox Dan.Footnote 12 He posts to social media that Officer Dan is an extremist who supports domestic terrorism. Now Jacob may be right about the importance of shedding light on extremists in positions of power, but thanks to his myside bias, he has falsely lumped Dan in with domestic terrorists. This leads to Dan being continuously harassed: an outcome that Jacob could reasonably have foreseen.

Jacob faces a more complicated instance of myside bias than in the previous case. In Jacob & Climate Change Denial, he could overcome his ignorance simply by recognizing that his own reasoning makes the very mistake that he accuses others of making: conflating weather with climate. This takes just a marginal application of critical thinking. After all, once he has applied that marginal degree of critical thinking, it is not clear how he could sensibly deny that his reliance on weather is a reliance on weather. In contrast, Jacob can readily point to a difference to try to justify generalizing in the case of Dan: a good number of Dan’s peers are in fact violent extremists. And one might sensibly, albeit mistakenly, worry that if someone stayed to peacefully protest, they must in some way support the violent attack that took place as part of the gathering. There is also the obstacle of the issue being highly polarized and politically charged, a setting that makes avoiding myside bias more difficult. In short, there is a difference in the level of skill required to overcome myside bias in these two cases. And being under a reasonable expectation to meet the lower skill requirement does not entail that one is also under a reasonable expectation to meet the higher skill requirement. Thus, it does not necessarily follow that Jacob can reasonably be expected to recognize the generalization he is making as an unjustified generalization on par with the overgeneralizations that he would object to.

Nevertheless, it is important to emphasize that not all cases of ignorance on a polarized topic must be especially difficult to overcome. While we do tend to be more vulnerable to myside bias when the topic is polarized, this does not preclude some polarized cases from being easy ones in which to avoid myside bias, even for some members of FED. Take for instance a comparison between ordinary less-than-ethical dealings in politics and a former president taking highly sensitive documents pertaining to national security to store personally in a bathroom of his private club. Many actual individuals in the US do not support investigating the matter and dismiss such an investigation as a politically motivated stunt, since they downplay the severity of the act and take it as simply business as usual for US politicians. In doing so, they might reason along the following lines: “If other reckless handling of information was not taken as gravely serious by the country, then the country shouldn’t take this case of reckless handling of information as worthy of serious investigation. This is just political weaponization of the justice system.” However, it is hard to imagine that all members of FED who reasoned in this way due to myside bias are merely falling short of an unreasonable expectation. Despite the topic being highly politicized, it is an easy case: one need only note the difference between business-as-usual flouting of rules and violations that so clearly and recklessly pose a threat to national security. Unless their educational opportunities are especially impoverished, it is likely that they could reasonably be expected to overcome their myside bias in such a clear and easy case.

As these cases illustrate, what expectations are reasonable for a given individual in FED is a matter of both the difficulty of overcoming the particular instance of bias and the particular skill-set that the individual could reasonably be expected to have developed in light of the opportunities that they could reasonably be expected to have taken. This schema can also be extended to groups in SPED and HED.

Since members of SPED and HED can reasonably be expected to have developed a higher degree of proficiency in bias mitigating skills, the bar of difficulty will need to be higher for an expectation that those members overcome the bias-based ignorance to be an unreasonable expectation. For instance, when a topic is not highly politicized or otherwise polarized and the case of myside bias is fairly straightforward, it is likely reasonable to expect someone in SPED to correct the bias. After all, the condition for being in SPED is that one’s primary and secondary educational opportunities did not fail one in terms of providing adequate opportunities for developing bias mitigation skills. Of course, the reasonable expectation would not be to overcome all instances of straightforward, uncomplicated myside bias reasoning; no one can reasonably be expected to exercise such skills to a perfect degree. Nevertheless, as members of SPED, they could reasonably be expected to reliably overcome straightforward instances of myside bias. The more interesting cases, however, involve members of SPED and politicized or otherwise polarized topics. Can members of SPED be reasonably expected to overcome bias in those more interesting cases?

A plausible answer is yes, but certainly not in all cases. Again, it will depend on just how difficult it is to overcome the myside bias. Take for instance the two issues of whether climate change is anthropogenically driven (as distinct from what particular policies best address the issue) and whether members of the public ought to have worn face coverings in indoor spaces during the height of the COVID-19 pandemic. The latter is far more polarized than the former. Moreover, during the height of the pandemic, masking was not only a relatively new and unfamiliar topic for most members of SPED (and the rest of us), but one that was actively targeted by misinformation campaigns. Consequently, the difficulty of overcoming myside bias is arguably lower when it comes to the causes of climate change. Thus, we should expect the range of members of SPED who can reasonably be expected to overcome myside bias pertaining to the cause of climate change to be larger than the range of members of SPED who can reasonably be expected to overcome myside bias pertaining to the importance of wearing face coverings indoors during the pandemic. Of course, that is not to say that members of SPED cannot reasonably be expected to refrain from blocking hospitals in protest of mask mandates or threatening retail workers who are enforcing store mask policies. It is just to say that, compared to myside biased thinking about the cause of climate change, it was more difficult for members of SPED to overcome myside biased thinking during the height of the pandemic about the importance of wearing masks indoors. Whether that difference in difficulty makes it so that a given member of SPED cannot reasonably be expected to overcome myside bias about the use of masks is less than clear. I suspect that, yet again, there will not be a single answer for all members of SPED. It will be a function of what their particular epistemic environment was like.

Consider a young adult freshly out of high school who is a member of SPED. Suppose that this individual’s social community is one that buys into anti-mask rhetoric.Footnote 13 Of course, our young SPEDster is not so isolated that he is unfamiliar with expert testimony that masks reduce the spread of the virus. However, it does seem that an expectation that he overcome his myside bias, and thereby not downplay the expert testimony as mere politically motivated scare tactics, may in fact be unreasonable. It would depend on what rhetoric he was surrounded by and how difficult this made it for him to avoid downplaying evidence from those with whom he and his community disagree.Footnote 14 In contrast, if we imagine our young SPEDster to have grown up in an environment that is less hostile towards inconvenient scientific evidence, then we start to shift towards a case where he could reasonably be expected to overcome his myside bias and thereby take seriously the expert testimony that is at odds with his initial false beliefs about the importance of masks as an imperfect public health tool.

As mentioned earlier, the schema works similarly with respect to members of HED; the point at which difficulty in overcoming myside bias makes an expectation to overcome the bias unreasonable is higher still. For instance, while one variant of our young SPEDster was not under a reasonable expectation to overcome his ignorance about masks due to the difficulty of extreme political polarization, this would not be so for members of HED. Generally speaking, members of HED can reasonably be expected not to downplay the consensus of scientific experts (at least about matters in which they are experts) when it is at odds with their own or their community’s prior beliefs. Due to the opportunities afforded by their higher education, they can reasonably be expected to see that a balanced argument schema is called for in such a case, one on which they apply critical scrutiny to their own temptation to downplay or dismiss the consensus of the scientific community.

However, cases can be more complicated when the myside bias does not involve downplaying the consensus of established scientific experts but pertains to downplaying the reasons cited by one’s dissenters on a political or moral issue. Take for instance the polarized and fixed camps that some members of HED settled into about the conflict between Hamas and Israel within the first few days after the October 7th, 2023 Hamas attack.Footnote 15

Presumably, a large set of those who were highly confident in their beliefs about the conflict held firm to their high degree of confidence as a matter of myside bias. Moreover, this was not limited to beliefs on just one side of the issue. For instance, some members of HED held firmly that any Israeli military response is unjustified, since an event like the Hamas attack is merely part of the cost of moving towards the decolonization of Gaza.Footnote 16 Presumably this ignorance was sometimes held and maintained by downplaying the significance of the lives and wellbeing of Israeli civilians, while thinking that the other side is downplaying the significance of the lives of Palestinian civilians; this would involve myside bias. Similarly, some members of HED held firmly that the Israeli military response would be justified regardless of the effects it has on Palestinian civilians, since they believed another attack like the one that took place on October 7th must be prevented. Again, this ignorance was sometimes arguably maintained by downplaying the significance of Palestinian civilian lives while thinking that the other side is downplaying the significance of the lives of Israeli citizens; this would involve myside bias.

Unlike the above case, which involved an expectation to apply critical scrutiny to one’s own temptation to downplay or dismiss the consensus of the scientific community, this case involves an expectation to consistently apply critical scrutiny to one’s own evaluation of the issue and to whether one is failing to appreciate the moral significance of what is cited as morally significant by those on the other side of the debate; in this case, the significance of civilian lives (either Palestinian or Israeli, depending on which side of the debate one is downplaying).Footnote 17 Whether a given member of HED can reasonably be expected to avoid their ignorance that stems from myside bias will depend on whether their educational opportunities ground a reasonable expectation to step back and consistently reflect critically in cases of polarized, contested political disagreement. While I doubt that there is a universal answer here due to the particular circumstances of various members of HED, it is plausible that many members of HED are under a reasonable expectation to step back and consistently apply critical scrutiny to their own view and the view of their dissenters. This is not to say that these members of HED can reasonably be expected to come to assess accurately all of the details and various nuances of the topic at hand. It is just to say that ignorance has occurred across the different camps on the issue, and that some of these cases of ignorance involve falling short of a reasonable expectation not to succumb to myside bias in a way that downplays the significance of civilian lives and wellbeing.

7 Applying the Schema in Practice

The above section provides general guidelines for thinking about whether a particular non-ideal agent can reasonably be expected to have corrected or avoided an instance of myside bias-based ignorance. If it is a straightforward, simple case of overcoming myside bias, then chances are that even with poor educational opportunities (short of an especially impoverished educational experience), the agent could reasonably be expected to have avoided or corrected it. And as the case of myside bias-based ignorance becomes more difficult, the educational opportunities that the person had need to be of better quality if the person is to be under a reasonable expectation to have avoided or corrected the ignorance.

A helpful way to frame this is in terms of a spectrum of difficulty, along which we can place the previous cases at different points. On one end, we have especially easy cases of myside bias, such as ignorance over the investigation of a former president’s threat to national security. The cases then increase in difficulty as we slide further along the spectrum, where we find cases like Jacob & Climate Change Denial. And as we move even further towards the other end of the difficulty spectrum, we encounter cases like new public health policies and myside bias actively reinforced by powerful sources of misinformation. Whether a person is under a reasonable expectation to overcome an instance of myside bias then depends on their educational opportunities. For instance, the range of difficulty on the spectrum that a member of FED can reasonably be expected to overcome covers a smaller portion of the spectrum than that of a member of SPED, and likewise for a member of SPED compared to a member of HED. Of course, this provides a less than precise means of making judgments about whether a given instance of myside bias-based ignorance excuses per RE. But that is to be expected given the nature of the phenomena that we are dealing with.
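Continuing the illustrative sketch from the previous section, the group-relative ranges can be pictured with placeholder numbers whose only job is to preserve the orderings just described (FED below SPED below HED in expected skill, and the cases increasing in difficulty); the verdicts for the hardest case encode only the variant of the SPED agent discussed above whose epistemic environment was hostile, not a universal claim about that group.

```python
# Placeholder numbers chosen only to preserve the orderings discussed above;
# the point is the comparison, not the particular values.
EXPECTED_SKILL = {"FED": 1.0, "SPED": 2.0, "HED": 3.0}

CASES = {
    "investigating a former president's reckless handling of classified documents": 0.5,
    "Jacob & Climate Change Denial": 1.0,
    "indoor masking during the pandemic, amid active misinformation": 2.5,
}

for case, difficulty in CASES.items():
    for group, skill in EXPECTED_SKILL.items():
        # Per the schema: ignorance excuses (per RE) only when the difficulty of
        # overcoming the bias exceeds the skill the agent can reasonably be
        # expected to have developed.
        verdict = "no excuse" if difficulty <= skill else "excuse per RE"
        print(f"{group}: {case} -> {verdict}")
```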

Moreover, this does not preclude us from making reasonable judgments about when myside bias-based ignorance excuses real non-ideal agents. If we have good reason to believe that a given non-ideal agent has had a certain degree of educational opportunity, we can then make judgments about whether the bias beneath their ignorance likely excuses or not. And this can be done in a number of important cases of moral and political ignorance that we might care about. Take for instance a highly educated political figure who feeds into a hateful narrative for their own political gain. (I leave it to the reader to fill in the details with their preferred figure.) Suppose that they mistakenly believe that what they're doing is not immoral, and that this is partly a matter of myside bias; they downplay evidence that their action is morally problematic. Their educational opportunities make it reasonable to expect them to have overcome this myside bias, whereas this is less likely to be so for a member of FED who does their own part in spreading hateful rhetoric via social media.

Furthermore, we can see that if we make the rhetoric particularly absurd, then the case falls further toward the easy end of our spectrum of difficulty, making it more likely that even members of FED would not be excused if they failed to put their myside bias in check. Consider for example the great replacement "theory" that circulates amongst white nationalists in the US. The idea is that political elites (often a subtler way of referring to Jews) are working towards making white Americans a minority of the population for their own political agenda. Once the absurdity of such a conspiracy is pointed out to believers, it should be easy for them to avoid myside bias rather than double down on their antisemitism. This is so even if they are members of FED. Thus, the various individuals who do hold such conspiracy beliefs (e.g., those behind the Unite the Right Rally) would likely not be excused by RE if their ignorance led them to act wrongly. Of course, there can be exceptions. But the exceptions would be those whose educational or broader epistemic circumstances are exceptionally poor (e.g., those especially insulated from ideas contrary to a white supremacist worldview). To sum up, when putting the schema into practice, we can consider what we know of the particulars of an individual and their ignorance rooted in myside bias. We can then compare that case with other cases that have a clear place on the spectrum of difficulty, and thereby assess how likely it is that the individual can reasonably be expected to have the skills for overcoming myside bias in that situation.

8 Concluding Remarks

In this last section of the paper, I would like to address two final points. First, a recurring theme should be clear by now: whether a given instance of myside bias-based ignorance counts as an excuse on RE is context sensitive. It is sensitive both to i) the particulars of the difficulty of the topic that the ignorance is about and to ii) the particulars of the agent's educational opportunities, including how information is framed to them. Accordingly, to know whether a given non-ideal agent's ignorance would be exculpating on RE, we would need to know those particulars. Sometimes we are in a position to know them, such as when we are acquainted with the individual in question or they are a sufficiently well-known public figure that such details are readily accessible to strangers like us. Other times we are in a position to justifiably believe that the ignorance is likely (or unlikely) to be an excuse, provided we are in a good epistemic position with respect to how difficult the topic is and what the individual's relevant educational opportunities were likely to be. Nevertheless, this is not the same as knowing that their ignorance is (or is not) in fact an excuse.

Furthermore, at many other times our epistemic position with respect to the important particulars will be subpar. This occurs when we know little about a given individual aside from the fact that they're ignorant on some topic. Consider for instance a complete stranger whose acting wrongly out of ignorance, perhaps rooted in myside bias, goes viral on the internet. Our epistemic position here is probably especially poor. If there is no identifying information, it would be unreasonable to draw confident conclusions about their educational opportunities. And unless the topic of ignorance is one on which even many of those in FED could reasonably be expected to have developed the skills to overcome myside bias, we won't be in a good position to justifiably believe that the ignorance is (or is not) an excuse. Or at the very least, I certainly do not know, or even have good access to, this type of information when it comes to random non-ideal agents who are complete strangers to me. And I suspect that this is true for many of us when it comes to our epistemic position with respect to such unknown non-ideal agents. After all, the relevant information is often simply not accessible to us without taking further (perhaps intrusive) steps to learn more about someone's life. Accordingly, one important revisionary implication that RE seems to have for our practice of blame pertains to our position to know which instances of moral and political ignorance are in fact exculpating. At the very least, it has this implication when it comes to unknown strangers and topics of ignorance on which avoiding myside bias is not especially easy.

Lastly, I want to address an objection that has often come up in discussion of the account in this paper: that this account infantilizes a large number of adults whom we would otherwise take to be competent moral agents. One might worry that if RE takes many cases of moral and political ignorance to count as an excuse for members of FED, then this threatens to undermine their moral agency by denying their moral responsibility. However, the way that the above account of RE denies moral responsibility for a significant range of real non-ideal agents who act wrongly from ignorance neither denies their moral agency nor puts them on a par with non-agents, such as children. This is because it does not deny that members of FED have the general capacity of moral agency. Instead, it just notes that in a surprising number of cases, they fail to meet the epistemic condition, i.e., their epistemic relation to the wrongness falls short of warranting blame for what would otherwise be blameworthy. This does not require denying that they are moral agents, even if it denies moral responsibility—blameworthiness—for a given instance of ignorant wrongdoing. To see why, it is helpful to consider a plausible aspect of our moral practice that pertains to the epistemic condition.

When we're ignorant on a moral issue that is difficult to resolve even amongst experts (e.g., how much one ought to do to alleviate suffering amongst the world's poor), and then act wrongly from that ignorance, we're presumably excused. Or at the very least, we're excused per RE. Now, there are a number of difficult moral issues on which we cannot reasonably be expected to arrive at the right answer, and so we are plausibly excused from blame. Yet this does not in the slightest suggest that we are not moral agents or that we are not generally morally responsible. Instead, it is to note that there is a domain of moral issues where we have a legitimate excuse for our ignorance by way of failing the epistemic condition.

This is the same sort of moral phenomenon that the above account is referring to when it suggests that many cases of moral and political ignorance count as an excuse for members of FED: if knowing better is sufficiently difficult for an individual through no fault of their own, such that they could not reasonably be expected to have avoided or corrected the ignorance, then their ignorance excuses. The above account of RE's implications simply expands the range of cases where this occurs by noting that what counts as sufficiently difficult in this way will vary across individuals. Members of FED are moral agents and capable of moral responsibility. The domain where ignorance counts as an excuse is just larger for them, because reasoning abilities are largely shaped by educational opportunities and their opportunities were impoverished. This expands the range of cases where the difficulty of knowing better precludes a reasonable expectation to know better.

In short, as human beings, we are naturally prone to cognitive biases like myside bias and can overcome them only by learning how to. This has an important implication for RE as an account of the epistemic condition of blame: the line that marks off the domain where ignorance counts as an excuse, by way of being ignorance that is sufficiently difficult to have corrected or avoided, is not uniform across all moral agents. It will vary depending on what difficulties are in place and whether the individual can reasonably be expected to overcome that difficulty. And in the case of ignorance that stems from myside bias, whether the agent can reasonably be expected to overcome the bias-based ignorance will vary according to the quality of their overall educational opportunities. Consequently, those whose educational opportunities have failed them through no fault of their own—members of FED—will likely be excused in more cases of ignorance rooted in myside bias than we might otherwise assume.Footnote 18 Those whose educational opportunities did not fail them, by contrast, are less likely to be excused. Of course, whether a given non-ideal agent is in fact excused by way of the epistemic condition will depend on the particulars of the context. Similarly, whether we can know whether ignorance rooted in myside bias excuses will also depend on our epistemic position with respect to those particulars. And this too will vary.