“It was never contended or conceited by a sound, orthodox utilitarian, that the lover should kiss his mistress with an eye to the common weal” (Austin 1832: 118).

1 Introduction

What should you do if you and a recognized peer discover that you disagree about some proposition? According to a simple version of a conciliatory view, you and your peer should (probably) suspend judgment.Footnote 1 On one way of developing this view, you and your peer each acquire a defeating reason. It undercuts the rational force of the evidence that you both took to support your positions. As things stand, it is not rational to remain committed without thinking the matter through again, gathering evidence, thinking about ways that you or your peer could have mishandled the evidence, etc.

The conciliatory view I have in mind is restricted in a number of ways that will simplify discussion. It pertains to full belief, not partial belief or degrees of belief. It tells us nothing about cases where one peer believes and the other suspends judgment. It might be hedged to handle cases where your peer’s response is too bizarre to take seriously. Since adding such constraints and an exception clause for apparent severe cognitive malfunction does nothing to address the worry at issue here, I hope readers will humor me and focus on one specific difficulty: the charge that even a restricted and qualified conciliatory view is incoherent because it is self-defeating.Footnote 2 If the objection has any force against a hedged and limited conciliatory view (CV hereafter), it threatens a broader class of conciliatory approaches and tells us something important about epistemic norms in general.

In the next section, I shall set out a version of the self-defeat objection to CV and argue that this objection misses its intended target. CV gets off on a technicality. We shall then consider a retooled version of the objection, one that gets to the heart of the objection against CV. We shall see that the retooled version of the objection fails, too. The failure should be instructive.

2 The simple self-defeat objection

Elga (2010) and Plantinga (2000) have both claimed in print that CV is, in some sense, self-defeating. The former says that this means that the view is incoherent because it ‘calls for its own rejection’ and gives us ‘inconsistent advice’. The latter says that the view is ‘self-referentially inconsistent’ (2000: 522). One might think that if these charges were fairly applied to CV, they would show that CV was not true.

Let us focus on Elga’s formulation of the objection. He notes that it should be possible for there to be a disagreement about CV between two recognized peers (e.g., you and Tilda). You both have the same evidence, the same concern for settling the relevant questions correctly, and the same levels of intelligence, so it seems that the two of you should be on a par. Upon discovering that you disagree about CV, CV itself tells you that Tilda’s belief that CV is false gives you a powerful reason not to believe CV.

Under these conditions, Elga says that CV ‘calls for its own rejection’. To help us see why this is bad, he offers us this example:

Suppose that Consumer Reports says, “Buy only toaster X,” while Smart Shopper says, “Buy only toaster Y.” And suppose that Consumer Reports also says, “Consumer Reports is worthless. Smart Shopper magazine is the ratings magazine to follow.” Then Consumer Reports offers inconsistent advice about toasters. For, on the one hand, it says directly to buy only Toaster X. But, on the other hand, it also says to trust Smart Shopper, which says to buy only Toaster Y. And it is impossible to follow both pieces of advice… Moral:… no inductive method can coherently recommend a competing inductive method over itself… it is incoherent for an inductive method to recommend two incompatible responses to a single course of experience. But that is exactly what a method does if it ever recommends a competing method over itself. (Elga 2010: 181).Footnote 3

The example is intended to show that there could be no (genuine) norm that says that we ought to follow Consumer Reports’ recommendations since there could be situations in which this norm tells us that we ought to respond in two (or more) ways that would be incompatible. In this case, it does so by telling us both to do one thing and to follow another norm that tells us to do something incompatible with it. CV is supposed to tell us to follow norms like this.

The problem with this objection is not with the assumption that no genuine norm could tell us that we ought to ϕ and ought not ϕ under conditions that we could find ourselves in, but with the assumption that this is what CV does. Look closely. As stated, CV tells us that a thinker has a defeater when she finds herself in a situation of peer disagreement about CV. The defeater is a reason to decrease her confidence and/or suspend. The view says nothing further. If all it tells a thinker is that she should not believe CV, it does not offer the thinker inconsistent advice.Footnote 4

Readers might think that something of the original objection remains. The idea might be put this way. The reason that CV is incoherent is not that it says that it’s possible for a thinker to acquire evidence that requires her not to believe CV, but that CV implies this and implies that we should continue to be conciliatory in the face of peer disagreement.

This objection is not convincing as it is stated. In Elga’s example, Consumer Reports tells a shopper who ought to buy one toaster to buy Toaster X and to buy Toaster Y. Since these are distinct toasters, it would be impossible for someone to do this and do all and only what she ought to do (i.e., buy one toaster). The relevant notion of incoherence is that it enjoins an agent to respond in two incompatible ways to her practical situation. (Essentially, the norm that enjoins us to follow the guidance of Consumer Reports generates dilemmas.) There is no such incompatibility, however, in the injunction to suspend judgment on CV and to continue to be conciliatory. Much in the way that Austin’s lovers are perfectly capable of kissing and thereby following the recommendation to contribute to the common weal without thinking anything about the common weal when kissing, a conciliatory thinker can continue to suspend when peers disagree without having any attitudes at all towards CV. The two things recommended (i.e., being conciliatory on some contested propositions, suspending on CV) are perfectly possible to do together. Thus, they aren’t incompatible. Thus, if CV is incoherent, it is not because it does what Consumer Reports does (i.e., require an individual to do two things that are impossible to do together). The simple objection simply misses its intended target.Footnote 5

3 The subtle self-defeat objection

The proponents of CV might agree with Elga that there are possible situations in which they ought to suspend judgment on whether CV is correct. We can concede this and still coherently maintain that we ought to conform to CV’s norms.Footnote 6 This might mean that we sometimes ought to, say, suspend judgment when we acquire a defeater even if we also ought to suspend judgment on whether the norm that requires this is correct, but perhaps this isn’t the end of the world.Footnote 7 Just as there is nothing incoherent in the idea that we ought to, say, act like utilitarians while believing nothing about the virtues of the utilitarian framework, there is nothing incoherent in the idea that we ought to, say, be conciliatory in the way that non-dogmatic people are while suspending judgment on whether CV’s norms are correct. At the very least, CV does not require us to respond in multiple ways that are incompatible. Thus, if the view is incoherent, more needs to be said to show this.

Elga anticipates this worry. What he says in response to this worry gives us the subtle objection to CV. In my view, it is the more important and interesting objection to CV. He asks us to consider a situation in which two peers discover that they have incompatible views about how to respond to disagreement (‘C’ and ‘D’ stand for these views about how to handle disagreement). CV, Elga says, tells us that the thinker who believed C ought to abandon her belief in C and increase her confidence in D. Once we agree to this much, Elga thinks that C is shown to be incoherent:

… [N]otice that, when one shifts one’s view about the right way to respond to disagreement, one should correspondingly shift the way one responds to subsequent disagreements. In particular, when the above subject shifts his confidence away from view C and toward view D, that should correspondingly change the inductive method he implements. It will not be as dramatic a change as if he had become completely converted to view D, but it will be a change nonetheless. In other words, even in this sort of case, view C calls for a change in inductive method. And for certain choices of view D, view C calls for a change to a competing inductive method (2010: 182, fn. 8).

This objection involves some subtlety that was missing from the simple objection. It suggests that the simple response to the simple objection cannot suffice to save CV from self-defeat worries because it doesn’t address the part of the story in which someone moves from having allegiance only to CV to a state of mind in which they think there is some chance that some rival view is the correct one. In this state of mind, Elga thinks that the thinker ought to follow some new inductive method. If we continue to say that they ought to follow the inductive method recommended by CV, we have our incoherence. No thinker could possibly follow both methods since they offer incompatible advice. If we try to avoid this incoherence by saying that people ought to follow the new inductive method exclusively, we avoid incoherence but we have abandoned CV.

The subtle self-defeat argument thus seems to do what I have claimed the simple objection fails to do, which is to identify an actual inconsistency in the guidance that CV issues. It arises because CV (allegedly) tells a thinker to do a number of things that she cannot possibly do at once:

(a) The thinker who knows of the disagreement about CV ought to suspend on CV/decrease her confidence in CV;

(b) Because of (a), she ought to increase her confidence in some competing inductive method;

(c) Because of (b), she ought to conform to a new inductive method, one that takes account of her newfound confidence in the new inductive method and thus differs from the inductive method sanctioned by CV.

The inconsistency would arise because CV would continue to tell an agent that she ought to conform to CV but would also (allegedly) issue guidance that was incompatible with the guidance of the method mentioned in (c).

In the passage quoted above, it’s clear that Elga is relying on something like this proposal to generate the inconsistency:

EP: When a thinker shifts her confidence away from the belief that she ought to conform to inductive method A towards the view that she ought to conform to inductive method B, she should correspondingly change the inductive method she implements.

It isn’t entirely clear what Elga thinks the thinker ought to change her inductive method to when these changes in confidence occur because it might be that the agent’s confidence is distributed between two inductive methods that offer incompatible advice and resist being harmonised. (Imagine someone recommending that we strike a compromise between one group that is calling for the equal treatment of two groups and another group that is calling for unequal treatment. It’s not easy to see what a compromise might look like here.) It doesn’t really matter for our purposes what specific alternative Elga thinks ought to be adopted when the shift occurs and someone decreases confidence in CV and, say, increases confidence in some kind of steadfast view, provided that the shift is to an inductive method that offers guidance that clashes with the guidance provided by CV. Let’s not worry about how we can recover the right inductive method in cases of divided confidence.

Let me note a few things about Elga’s Principle, EP. It looks similar to the Enkratic Requirement. What EP says, roughly, is that when we come to believe that some method is correct, we shouldn’t both believe that the method is correct and fail to believe in ways that conform to its directives. So, in essence, EP states that there is some kind of interesting normative connection between the following items: beliefs about the inductive methods we ought to follow, which are things that say that we ought or ought not have such and such attitudes under such and such conditions; beliefs about the conditions we’re in; and the beliefs, disbeliefs, and suspensions covered by these methods. What the Enkratic Requirement says, roughly, is that we shouldn’t both believe that we ought or ought not have certain attitudes and fail to have the attitudes that ‘fit’ with these normative beliefs.Footnote 8 In essence, the Enkratic Requirement says that there is some kind of interesting normative connection between the following items: beliefs about what we ought or ought not believe and the beliefs, disbeliefs, or suspensions these normative beliefs are about.

On its face, it seems that EP would be false if the Enkratic Requirement were not genuine. If there were no interesting normative or rational connection between, say, the belief that we ought to believe p and the belief that p, we shouldn’t expect there to be any interesting normative or rational connection between beliefs about methods that tell us what we ought to believe under certain conditions and the beliefs themselves. If rationality were to say that it can (contrary to what the Enkratic Requirement states) be fully rational to believe in ways that clash with our fully rational beliefs about what we ought to believe, it couldn’t then say that rationality requires us to make sure that our beliefs about which inductive methods are correct mesh with the beliefs, disbeliefs, or suspensions they prescribe. The rational connections between, say, our beliefs concerning CV and various beliefs, suspensions, and disbeliefs would be mediated, in part, by the justification we had to believe that, say, CV directed us to believe, disbelieve, or suspend in certain kinds of cases. If we’re rationally permitted to believe p when we believe that we ought not believe p, it doesn’t make sense that rationality would insist that it would be wrong to believe p just because we rationally believed that some method is correct that tells us that we ought to believe p in the situation we’re in. The main difference between EP and the Enkratic Requirement, so far as I can see, is that EP says that there is a rational or normative connection between attitudes about attitudes we ought to have in a wide range of cases including the present one and the attitudes we have in the present case while the Enkratic Requirement is only concerned with a normative connection between normative beliefs about the attitudes we ought or ought not have in the present case and the attitudes we have in that case.

I have two reasons for mentioning the connections between EP and the Enkratic Requirement. The first is that we know that people often say that the Enkratic Requirement can be read in different ways. It might be false on one reading but correct on another (e.g., its narrow-scope reading might be false even if its wide-scope reading yields a genuine rational requirement). If that’s true of the Enkratic Requirement, we should expect that the same holds true for EP. The second is that it seems that a prima facie plausible case can be made that there is a tension between the Enkratic Requirement and other attractive rational norms or requirements. If that’s so, we should expect that there might be similar problems with combining EP with these further rational norms or requirements. I’ll have more to say about the potential conflicts between these further norms and EP or the Enkratic Requirement, but we first need to say something about the different ways we might read EP.

As Broome (1999) noted, the Enkratic Requirement can be read as having narrow- or wide-scope:

ERn: If one believes that one ought to believe p, one ought to believe p.

ERw: One ought to see to it that one does not both: believe that one ought to believe p and fail to believe p.
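The scope difference can be made explicit in a schematic deontic notation (the notation is mine, not Broome’s), writing O for an ought-operator and B for a belief-operator:

```latex
% Narrow scope: the ought-operator governs only the consequent,
% so the normative belief alone lets us detach an unconditional ought.
\mathrm{ER}_n\colon\quad B\big(O(Bp)\big) \;\rightarrow\; O(Bp)

% Wide scope: the ought-operator governs the whole combination,
% forbidding the incoherent pair without saying which member to give up.
\mathrm{ER}_w\colon\quad O\,\neg\Big(B\big(O(Bp)\big) \wedge \neg Bp\Big)
```

On the narrow-scope schema, modus ponens detaches O(Bp) from the normative belief by itself; on the wide-scope schema, a thinker can comply either by believing p or by abandoning the normative belief, so no first-order conclusion can be detached from the normative belief alone.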

We might think that only one of these captures the kernel of truth contained in the idea that rational thinkers have first-order attitudes and normative beliefs about such attitudes that cohere or mesh. If the Enkratic Requirement can be read in two ways, we should expect the same is true of Elga’s Principle. There is the narrow-scope reading:

EPn: If a thinker shifts her confidence away from the belief that she ought to conform to inductive method A towards the view that she ought to conform to inductive method B, she ought to change the inductive method she implements.

And there is the wide-scope reading:

EPw: A thinker should see to it that: she does not shift her confidence away from the belief that she ought to conform to inductive method A towards the view that she ought to conform to inductive method B and fail to change the inductive method she implements.

To evaluate Elga’s argument properly, we need to decide whether it is concerned with narrow- or wide-scope requirements.

Let’s start with the narrow-scope reading:

The Narrow-Scope Reading

Suppose a thinker believes to some degree that she ought not follow inductive method A and should follow inductive method B instead. If so, she ought not follow inductive method A but should follow some alternative that takes account of her beliefs concerning these two inductive methods. But inductive method A says that she ought to follow that very inductive method. So, inductive method A is spurious.

There are two obvious problems with this argument. The first is that it seems to overgeneralize. If there are any norms or inductive methods at all, there must be some that we ought to follow even when we suspect that they might not be genuine. This argument can be deployed to rule out these inductive methods. Second, nobody should think that a belief about which norm or method to follow by itself determines whether we ought to follow a norm or method. Some of our beliefs about norms and methods, after all, are deeply irrational or confused (e.g., if someone irrationally comes to believe that we ought to believe some contradictions, this by itself shouldn’t weaken any standing prohibition against believing such things). Because of this, we should reject the narrow-scope reading of EP. On this reading, we can ‘detach’ the conclusion that we are free to believe against the prescriptions of some inductive method simply because we have some possibly irrational degree of confidence in a competing one. Of course, once we reject EP on the narrow-scope reading, we reject the crucial premise in the argument.

Because the narrow-scope reading of EP doesn’t support any interesting arguments about norms or inductive methods, we should focus on the wide-scope reading. EPw tells us that a thinker should not both shift her beliefs about which norms to conform to and fail to change her inductive method. It doesn’t follow from EPw that someone who changes her beliefs about what methods to follow ought to change which methods she follows. To derive any conclusions about which methods a thinker ought to follow from EPw, we would have to establish that this thinker has changed her beliefs about which methods to follow and ought to believe that she ought to follow this new method. To establish anything about the first-order attitudes the thinker ought to have or ought not have, we need EPw to serve as a bridge that connects the rational status of the normative beliefs or beliefs about the methods to follow to the first-order attitudes that these normative beliefs or methods concern and we need some way to show that the thinker’s new beliefs about CV (i.e., that she ought to follow something other than CV) are the beliefs that she ought to have. So, in addition to a defence of the possibility of rational bridging (i.e., connecting the rational status of beliefs about the methods to the rational status of the beliefs that these methods direct us to form), we need a defence of rational conversion. To tell this story, we need to appeal to some further norm or norms that could explain why a thinker who initially, say, rationally believed CV should now believe that CV is incorrect and should believe instead that some rival view is correct.

The goal, then, would be to defend the possibility of rational bridging and rational conversion so that we would have a defence of the crucial premises in this argument:

The Better Wide-Scope Reading

Suppose a thinker believes that she ought not follow CV and that she ought to follow some rival inductive method instead. And suppose that this is what the thinker ought to believe (e.g., because this is what her evidence supports). She ought to see to it that she doesn’t both believe she ought to follow this alternative inductive method to CV and continue to follow CV. So, it’s not true that she ought to continue to follow CV. So, she ought to follow this alternative to CV instead.

If we can find good reasons to accept the premises of this argument, we should finally have what we need to vindicate Elga’s self-defeat objection to CV.

3.1 The subtle flaw in the subtle argument

There is a subtle flaw in the subtle argument. The success of the argument depends, in part, upon whether rational conversion is correct. It succeeds only if it’s possible for someone who, say, initially believes CV to end up believing that some rival view is correct and thereby to believe something that she ought to believe. The success of the argument also depends, in part, upon whether rational bridging is correct. It succeeds only if there is some necessary connection between the rational status of our normative beliefs or our beliefs about the methods we ought to follow and the attitudes that these beliefs or methods direct us to have. I think that a decent case can be made for rational conversion and a decent case can be made for rational bridging. The problem, however, is that there doesn’t seem to be a view that would serve Elga’s purposes by incorporating both rational conversion and rational bridging.

Why would someone accept the possibility of rational conversion? It’s clear that Elga assumes that rational conversion is possible and I can see two potential motivations for it. First, we might appeal to some evidentialist norm to explain how rational conversion is possible:

Evidentialism: If a thinker is trying to settle the question whether p, she ought to have the attitude (belief, disbelief, or suspension) that fits her evidence (i.e., she ought to believe if she has sufficient evidence for belief, disbelieve if she has sufficient evidence for believing the negation, and ought to suspend otherwise).Footnote 9

The idea, simply put, might be that in some cases of disagreement about disagreement, we get sufficient evidence to believe that some rival to CV is correct. This, in turn, explains why we ought to believe that this rival is correct. Second, we might read CV itself as saying that because of the evidence or because of something else a thinker ought to end up believing that some rival to CV is correct. Either way, the underlying idea is that some norm or norms say that a situation can arise (e.g., a kind of disagreement about disagreement case) where it’s rational to believe that some rival to CV is correct: there is a norm that functions like the evidentialist norm in that it says that the evidential situation requires the agent to go beyond abandoning her belief in CV and to embrace some rival.Footnote 10

If we have some norm or norms that explains how rational conversion is possible, we still need a norm or norms that explains how the relevant kind of rational bridging could be possible. As we’ve seen, we need ERw and EPw to explain how rational bridging is possible. The problem with appealing to ERw (and, by extension, EPw) is that ERw generates ‘fixed-points’. A fixed-point, for our purposes, is a truth about the requirements of rationality that a thinker cannot rationally believe to be false. The existence of fixed-points places constraints on the range of propositions about rationality that a thinker might rationally believe. These constraints make it difficult to defend the position that rational conversion and rational bridging are both possible. The possibility of the relevant rational bridge might make us sceptical about the possibility of the relevant kind of rational conversion.

To see why, let’s consider this limited fixed-point thesis:

FPT: If an agent ought not ϕ, she ought not believe that she ought to ϕ.

To see how ERw generates fixed-points, let’s suppose that the agent is in a case in which she ought not ϕ. If she’s in this situation and she were to believe that she ought to ϕ, she would either satisfy ERw but violate this antecedent prohibition against ϕ-ing or she would conform to this antecedent requirement not to ϕ but then violate ERw by failing to ϕ in accordance with her judgment. Thus, if ERw is correct, when there are things that we ought not believe, there are correlative constraints on what we ought to believe about what we ought to believe. FPT gives us this surprising connection between the truth of some normative propositions and the rational or normative status of beliefs that concern these propositions—the prohibition against ϕ-ing comes with a prohibition against believing that ϕ-ing is mandatory.Footnote 11 Fixed-points, as we shall see, cause trouble for the subtle argument.
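The way ERw yields FPT can be set out as a short schematic derivation (again my notation, with O for an ought-operator and B for a belief-operator; I assume, as the Consumer Reports discussion did, that genuine requirements do not generate dilemmas):

```latex
\begin{align*}
&\text{(1)}\quad O\neg\varphi
  &&\text{antecedent prohibition}\\[2pt]
&\text{(2)}\quad O\,\neg\big(B(O\varphi)\wedge\neg\varphi\big)
  &&\mathrm{ER}_w \text{ applied to } \varphi\\[2pt]
&\text{Suppose } B(O\varphi).\ \text{Then either the agent } \varphi\text{s}
  &&\text{violating (1)}\\[2pt]
&\quad\text{or she does not, so } B(O\varphi)\wedge\neg\varphi
  &&\text{violating (2)}\\[2pt]
&\text{(3)}\quad\text{Hence } O\,\neg B(O\varphi)
  &&\text{given no rational dilemmas}
\end{align*}
```

Step (3) gives the relevant instance of FPT: the prohibition against ϕ-ing carries with it a prohibition against believing that ϕ-ing is mandatory, since holding that belief guarantees a violation one way or the other.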

There are different explanations in the literature as to how ERw could generate fixed-points of the kind I’ve suggested. The language of fixed-points, to my mind, encourages us to think of things this way. There are certain requirements of rationality that are fixed and that apply to all of us regardless of how things seem to us. Because of these requirements, it turns out that our mistaken beliefs about what they require are themselves rational failings. According to Titelbaum, the reason that, as he puts it, the relevant mistakes about rationality are themselves mistakes of rationality (i.e., rational failings) is that we all have propositional justification to believe the relevant truths about rationality.Footnote 12 According to Littlejohn, the reason that there are these fixed-points is that we’re guilty of rational failings whenever we are insufficiently responsive to certain features of the situations we’re in. Just as we can manifest this unresponsiveness by forming the wrong beliefs (e.g., believing in ways that are dogmatic by ignoring the opinions of peers or experts who disagree), we can manifest this same unresponsiveness by believing that we ought to form these problematic beliefs (e.g., by endorsing norms that, inter alia, require us to respond to disagreements dogmatically).

I think that it’s helpful to think of these approaches to fixed-points as bottom-up. The relevant truths about what is rational to believe have a kind of explanatory priority and they somehow ground further requirements on our beliefs about what we ought to believe. We can contrast bottom-up approaches with top-down approaches to fixed-points. On these top-down approaches, we should really think of fixed-points as fixed-connections between the rational status of some normative beliefs and the rational status of the responses that they concern.Footnote 13 Once some belief about the requirements of rationality secures its status as a rational belief, this belief helps to shape or mould the rational requirements that apply to the attitudes that it concerns. Part of the explanation as to why, say, rationality requires you to be conciliatory or steadfast might be that you’ve come to rationally believe that this is what rationality requires of you. Had you come to rationally believe that rationality required something else, our top-down view says that it might have been that rationality required something else from you. These beliefs about the requirements of rationality don’t miss their targets because the place of the targets is partially determined by the rational attitudes we have about them.

Let’s consider the subtle self-defeat argument from the perspective of a bottom-up approach to FPT and ERw. The idea behind the bottom-up view is that there will be some things that are forbidden and that their status as forbidden cannot shift or change because a thinker gets evidence that suggests that the forbidden things are required. Someone who accepts FPT on this understanding will either say that it’s impossible for a thinker to have strong and undefeated reason to believe the relevant normative falsehoods (e.g., Titelbaum 2015a) or that the evidence that a thinker gets for believing the relevant falsehoods fails to justify the false normative beliefs (e.g., Littlejohn 2018). On Titelbaum’s approach, evidentialism or CV might be true and it might be compatible with ERw but we would never get a body of evidence that justifies believing that we ought to follow an inductive method other than the true one. On Littlejohn’s approach, a norm that says that we ought to believe falsehoods about what we ought to believe is itself false. If so, we cannot assume that we can have evidence that justifies believing CV and evidence that later justifies believing some rival view is correct. On this view, the existence of the rational bridge tells us that we cannot assume that it’s possible for thinkers to be rational in believing CV and rational in believing that some rival view is correct. We cannot assume that a thinker ought to believe that CV is incorrect unless we can assume that CV is indeed incorrect.Footnote 14

A further possible bottom-up approach would be one that says that the relevant truths about rationality are luminous or lustrous truths that tell us how all rational thinkers ought to respond to the situations that they’re in. They would be luminous if their truth were sufficient to ensure that everyone was in a position to know them and lustrous if their truth sufficed to ensure that everyone was in a position to justifiably believe them.Footnote 15 If they shined this way, perhaps this is why it wouldn’t be rational to form false beliefs about them.Footnote 16

There are a number of ways to develop this bottom-up approach. It is clear that if the bottom-up understanding of FPT is correct, the subtle self-defeat argument fails. We cannot assume that there is a body of evidence that a thinker could have that justifies believing that she ought to follow an inductive method that differs from the one recommended by CV unless we assume that CV is not true. In the context of trying to argue that CV is not true, we cannot help ourselves to the assumption that it is not.

There is a further problem if we appeal to the luminosity of rational requirements to explain rational bridging. This proposal doesn’t help Elga’s argument in the present setting. The self-defeat problem only gets off the ground if we think that a rational thinker’s rational attitudes about rational requirements can rationally change when her evidence changes (e.g., when she rationally believes CV but then comes to rationally increase her confidence in rival views when she discovers that people disagree with her about CV). If the truths about norms like CV were luminous, we might expect that lots of people do have sufficient evidence to believe the truths about the norms that govern belief, but we wouldn’t expect the people who had this evidence to disagree with one another about what the norms are.Footnote 17

Let’s consider top-down approaches to FPT and ERw. On such approaches, we have to assume that the underlying first-order requirements are malleable.Footnote 18 They would be malleable if their truth were sensitive to features of our epistemic state so that some rational thinker in our state couldn’t be mistaken about them by virtue of the fact that they would ‘bend’ to fit our best judgments about them. Something like this idea seems to be what Bradley (2019) has in mind when he suggests that the evidence we acquire about some norm can help to determine whether this norm even applies to the thinker in question (e.g., if we acquire strong evidence that we ought to be steadfast in the face of disagreement, this might help to explain why we ought to be steadfast when others with different evidence about the virtues of being conciliatory ought to be conciliatory).

The problem with views that explain FPT by positing malleable requirements is their malleability. If Elga’s argument assumes that FPT is correct because the requirements that govern belief are malleable, then he doesn’t need to rely on the self-defeat argument against CV. Any argument that purports to show that the norms that govern belief are malleable would show that there is no fixed norm like CV that tells us which inductive methods always ought to be followed. This would make the self-defeat objection otiose. There is a more serious problem with this approach. If the top-down view is true because all the norms that govern belief are malleable, the evidentialist norm is not a genuine norm. If it is not a genuine norm, a crucial premise in the subtle argument is false. The subtle argument wouldn’t just be otiose; it would be unsound.

I find no reading of ERw and FPT that supports the two crucial assumptions of the subtle argument: the possibility of rational conversion and the possibility of rational bridging. So, I see no hope for the subtle argument. Either we cannot say that rational thinkers can change their minds about CV, or we cannot say that a thinker ought to follow an inductive method other than the one CV recommends without assuming that CV is not a genuine norm.

At this point, I would like to make a general point about any argument about epistemic norms that appeals to ERw and any norm like evidentialism or CV that tells us that we ought to believe things about norms when we acquire evidence against them. Given the plausible assumption that it is possible for a rational thinker to be uncertain about what rationality requires of her in some given situation, we might have good reason to think that evidentialism and ERw cannot both be true. It might be incoherent to combine them in a single view. If so, it would be a mistake to assume them both in trying to demonstrate the incoherence of some putative norm like CV.

If there were a coherent view that combined ERw and evidentialism, there might be hope for the subtle argument, but there are reasons to be sceptical of this combination of views.Footnote 19 We can see this by thinking about the evidence of evidence principles that would have to hold if this combination were coherent. Evidentialism tells us that if a thinker has sufficient evidence to believe p and is actively considering whether p, she ought to believe p. If we combine evidentialism with ERw and have an agent who knows that evidentialism is correct, the principle that states that sufficient evidence of sufficient evidence ensures sufficient evidence should hold:

SESE: If A has sufficient evidence that A has sufficient evidence to believe p, A has sufficient evidence to believe p.Footnote 20

Unfortunately, this evidence of evidence principle is problematic. Let’s suppose that evidential strength is measured probabilistically and that sufficient support requires a sufficiently high degree of evidential support. (The necessary degree of support will be a probability between .5 and 1.) Dorst (forthcoming) has argued that if rationality permits us to be uncertain about what our evidence supports, the strongest evidence of evidence principle that we can vindicate is this considerably weaker principle:

Fact 5.5: [Pe(Pe(h) ≥ t) ≥ s] → [Pe(h) ≥ ts]

Fact 5.5 tells us that evidence of evidence places some constraints on how weak the first-order evidence for a proposition might be, but it does not impose a constraint strong enough to support SESE. Suppose the minimum degree of support for believing a proposition is, say, .8, and suppose that the probability, on your evidence, that your evidence supports p to degree .8 or more is itself .9. You would have sufficient evidence to believe that you have sufficient evidence to believe p, but we could not assume that the evidential probability of p is anything greater than .72.Footnote 21 So, Fact 5.5 does not rule out counterexamples to SESE. If, as Dorst argues, we cannot vindicate anything stronger than Fact 5.5 in a framework that permits uncertainty about what our evidence warrants [i.e., one that allows that, say, Pe(h) = .8 even if Pe(Pe(h) = .8) < 1], we have a good reason to believe that SESE is too strong.Footnote 22
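The arithmetic can be made concrete with a small model. What follows is a minimal sketch of my own devising (the three worlds, their probability assignments, and the prob helper are hypothetical illustrations, not Dorst’s own example, and I set aside the further frame conditions Dorst imposes). It exhibits an agent whose evidence gives her .9 confidence that her evidence supports p to degree at least .8, while supporting p itself only to degree .72:

```python
# Toy higher-order evidence model (hypothetical numbers of my own, chosen
# to match the .8/.9/.72 figures in the text; not Dorst's example).
# P[w][v] is the probability that world v is actual, on the evidence the
# agent has at world w. The proposition p is true at worlds 1 and 2.
P = {
    1: {1: 0.10, 2: 0.62, 3: 0.28},
    2: {1: 0.10, 2: 0.80, 3: 0.10},
    3: {1: 0.25, 2: 0.60, 3: 0.15},
}
p = {1, 2}
t = 0.8  # assumed threshold for "sufficient evidence to believe"

def prob(w, prop):
    """Evidential probability, at world w, of the set of worlds prop."""
    return sum(P[w][v] for v in prop)

# Worlds at which the agent has sufficient evidence to believe p.
high = {w for w in P if prob(w, p) >= t}   # worlds 2 and 3, but not 1

# At world 1 the agent has sufficient evidence (degree .9) that she has
# sufficient evidence to believe p...
assert prob(1, high) >= t
# ...but her evidence supports p itself only to degree .72, so SESE fails:
assert prob(1, p) < t
# The model still respects Fact 5.5's lower bound, Pe(p) >= t*s
# (tolerance added only for floating-point rounding):
assert prob(1, p) >= t * prob(1, high) - 1e-9
```

The counterexample is as tight as Fact 5.5 allows: at world 1 the agent’s confidence that she is in a “high” world is .9, and her evidence for p comes to exactly .72 = .8 × .9, the Fact 5.5 floor, which still falls short of the .8 threshold.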

If Fact 5.5 is too weak to vindicate SESE, there is no modest view of rationality (i.e., one that tells us that it is sometimes rationally permissible to be uncertain about what your evidence supports and what rationality requires) that vindicates SESE. And if there is no modest view of rationality that vindicates SESE, there is no modest view of rationality that vindicates both evidentialism and ERw. If evidentialism is correct, the counterexamples to SESE should be counterexamples to ERw. If ERw is correct, there will be counterexamples to evidentialism.

Someone might defend SESE in spite of this.Footnote 23 If they defend SESE, they will have to adopt the immodest view. The immodest view says that no body of evidence can ever make it rational to be modest (i.e., to be uncertain about what your evidence supports and what rationality requires). It is not uncontroversial that the immodest view is false, so we shouldn’t simply assume the modest view. The problem with the immodest view in this context is that it’s hard to see how it could help reinstate the subtle argument against CV. It is an implication of the immodest view that it would be irrational for someone to be convinced of CV at one time and later become convinced that they ought to follow some rival inductive method. Every body of evidence at every time should make it certain either that we ought to follow CV or that we ought to follow a rival. The immodest view might accommodate evidentialism and ERw, but it tells us that Elga would be wrong to think that a rational thinker might change her views about which inductive methods to follow over time. The immodest view allows for rational bridging, but it denies the possibility of rational conversion. So, I see no hope for the subtle argument.

4 Conclusion

I have looked and I have tried to look carefully. I can find no combination of views about first-order evidence and higher-order evidence that supports the simple or subtle self-defeat argument against CV. To run the self-defeat argument against CV, we have to appeal to a number of assumptions about rational belief and its connection to first-order and higher-order evidence that, upon reflection, either conflict with one another or cannot allow for the possibility that a rational thinker might believe CV now and come to doubt it later when they observe that others disagree. Views that allow for a kind of level-splitting might recommend that we continue to be conciliatory whilst we suspend judgment on CV; views that don’t allow for level-splitting seem to show that one or more of the background assumptions required by Elga’s argument is false. So, I think that CV has little to fear from the self-defeat argument. When faced with otherwise reasonable people who dogmatically assert that we ought to be dogmatic, it’s possible that the right response is to dogmatically cling to the general policy of not responding dogmatically. As stated, the view doesn’t sound all that plausible, but we have seen by now that there are coherent and defensible ways of describing the view so that it is immune to the self-defeat objection.