This paper defends a principle I call Equal Treatment, according to which the rationality of a belief is determined in precisely the same way as the rationality of any other state. For example, if wearing a raincoat is rational just in case doing so maximizes expected value, then believing some proposition P is rational just in case doing so maximizes expected value. This contrasts with the popular view that the rationality of belief is determined by evidential support. It also contrasts with the common idea that in the case of belief, there are two different incommensurable senses of rationality, one of which is distinctively epistemic. I present considerations that favor Equal Treatment over these two alternatives, reply to objections, and criticize some arguments for Evidentialism. I also show how Equal Treatment opens the door to a distinctive kind of response to skepticism.

Many have thought that it is impossible to rationally persuade an external world skeptic that we have knowledge of the external world. This paper aims to show how this could be done. I argue, while appealing only to premises that a skeptic could accept, that it is not rational to believe external world skepticism, because doing so commits one to more extreme forms of skepticism in a way that is self-undermining. In particular, the external world skeptic is ultimately committed to believing a proposition P while believing that she shouldn’t believe P, an irrational combination of beliefs. Suspending judgment on skepticism is also problematic, for similar reasons; and, I argue, rational dilemmas are not possible; so, we should believe that skepticism is false.

In part one I present a positive argument for the claim that philosophical argument can rationally overturn common sense. It is widely agreed that science can overturn common sense. But every scientific argument, I argue, relies on philosophical assumptions. If the scientific argument succeeds, then its philosophical assumptions must be more worthy of belief than the common sense proposition under attack. But this means there could be a philosophical argument against common sense, each of whose premises is just as worthy of belief as the scientist’s philosophical assumptions. If so, then the purely philosophical argument will also succeed. In part two I consider three motivations, each of which comprises a distinct philosophical methodology, for the opposing view: (1) the Moorean idea that common sense enjoys greater plausibility than philosophy; (2) the idea that case judgments should trump general principles; (3) reflective equilibrium and conservatism. I argue that all three motivations fail.

Evidentialists and Pragmatists about reasons for belief have long been in dialectical stalemate. However, recent times have seen a new wave of Evidentialists who claim to provide arguments for their view which should be persuasive even to someone initially inclined toward Pragmatism. This paper reveals a central flaw in this New Evidentialist project: their arguments rely on overly demanding necessary conditions for a consideration to count as a genuine reason. In particular, their conditions rule out the possibility of pragmatic reasons for action. Since the existence of genuine pragmatic reasons for action is common ground between the Evidentialist and the Pragmatist, this problem for the New Evidentialist arguments is fatal. The upshot is that the deadlock between these two positions is restored: neither side can claim to be in possession of an argument that could convince the other. As it happens, I myself favor Pragmatism about reasons for belief, and although I don't claim to be able to convince a committed Evidentialist, I do make a prima facie case for Pragmatism by describing particular scenarios in which it seems to be true. I then go on to develop my own preferred version of the view: Robust Pragmatism, according to which a consideration never constitutes a reason for believing a proposition purely in virtue of being evidence for it.

Those who model doxastic states with a set of probability functions, rather than a single function, face a pressing challenge: can they provide a plausible decision theory compatible with their view? Adam Elga and others claim that they cannot, and that the set of functions model should be rejected for this reason. This paper aims to answer this challenge. The key insight is that the set of functions model can be seen as an instance of the supervaluationist approach to vagueness more generally. We can then generate our decision theory by applying the general supervaluationist semantics to decision-theoretic claims. The result: if an action is permissible according to all functions in the set, it’s determinately permissible; if impermissible according to all, determinately impermissible; and – crucially – if permissible according to some, but not all, it’s indeterminate whether it’s permissible. This proposal handles with ease some difficult cases on which alternative decision theories falter. One reason this view has been overlooked in the literature thus far is that all parties to the debate presuppose that an acceptable decision theory must classify each action as either permissible or impermissible. But I will argue that this thought is misguided. Seeing the set of functions model as an instance of supervaluationism provides a compelling motivation for the claim that there can be indeterminacy in the rationality of some actions.
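The supervaluationist decision rule stated in this abstract lends itself to a direct sketch. The following is a minimal illustration, assuming a finite state space; the states, actions, and utilities are my own toy numbers, not examples from the paper:

```python
# Supervaluationist decision rule over a set of credence functions:
# an action is determinately permissible if it maximizes expected utility
# under every function in the set, determinately impermissible if it does
# so under none, and otherwise indeterminate.

def expected_utility(credence, utilities):
    return sum(credence[s] * u for s, u in utilities.items())

def classify(credence_set, actions):
    """Return a supervaluationist verdict for each action."""
    verdicts = {}
    for name in actions:
        permissible_under = []
        for credence in credence_set:
            eus = {a: expected_utility(credence, u) for a, u in actions.items()}
            best = max(eus.values())
            permissible_under.append(eus[name] >= best - 1e-9)
        if all(permissible_under):
            verdicts[name] = "determinately permissible"
        elif not any(permissible_under):
            verdicts[name] = "determinately impermissible"
        else:
            verdicts[name] = "indeterminate"
    return verdicts

# Two states; the agent's credence in s1 is unsettled between 0.2 and 0.8.
credence_set = [{"s1": 0.2, "s2": 0.8}, {"s1": 0.8, "s2": 0.2}]
actions = {
    "A": {"s1": 1.0, "s2": 0.0},   # best only if s1 is likely
    "B": {"s1": 0.0, "s2": 1.0},   # best only if s2 is likely
    "C": {"s1": 0.4, "s2": 0.4},   # maximizes under neither function
}
print(classify(credence_set, actions))
# A and B each maximize under one function but not the other: indeterminate.
# C maximizes under neither: determinately impermissible.
```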

A number of Bayesians claim that, if one has no evidence relevant to a proposition P, then one's credence in P should be spread over the interval [0, 1]. Against this, I argue: first, that it is inconsistent with plausible claims about comparative levels of confidence; second, that it precludes inductive learning in certain cases. Two motivations for the view are considered and rejected. A discussion of alternatives leads to the conjecture that there is an in-principle limitation on formal representations of belief: they cannot be both fully accurate and maximally specific.
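The inductive-learning worry can be illustrated with a standard "belief inertia" toy case: represent the maximally spread credal state by a set of Beta priors over a coin's bias whose means span (0, 1), and conditionalize every member on the same data. The hyperparameters and data below are my own illustrative choices, not the paper's:

```python
# Belief inertia: if the set of priors for a coin's bias spans (0, 1),
# updating each member on the same evidence can leave the spread of
# posterior credences essentially untouched, so no inductive learning
# occurs. (Toy illustration; the Beta-prior setup is an assumption.)

def posterior_mean(a, b, heads, tosses):
    # Beta(a, b) prior + binomial data -> Beta(a + heads, b + tosses - heads);
    # the posterior mean is the credence that the next toss lands heads.
    return (a + heads) / (a + b + tosses)

# Priors whose means span (0, 1), including very opinionated members.
prior_set = [(1, 10_000), (1, 9), (1, 1), (9, 1), (10_000, 1)]

heads, tosses = 8, 10  # strong-looking evidence of a heads bias
posteriors = [posterior_mean(a, b, heads, tosses) for a, b in prior_set]
print(min(posteriors), max(posteriors))
# The range of posterior credences still stretches from near 0 to near 1.
```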

Some prominent evidentialists argue that practical considerations cannot be normative reasons for belief because they can’t be motivating reasons for belief. Existing pragmatist responses turn out to depend on the assumption that it’s possible to believe in the absence of evidence. The evidentialist may deny this, at which point the debate ends in an impasse. I propose a new strategy for the pragmatist. This involves conceding that belief in the absence of evidence is impossible. I then argue that evidence can play a role in bringing about belief without being a motivating reason for belief, thereby leaving room for practical considerations to serve as motivating reasons. I present two ways in which this can happen. First, agents can use evidence as a mere means by which to believe, with practical considerations serving as motivating reasons for belief, just as we use tools (e.g. a brake pedal) as mere means by which to do something (e.g. slow down) which we are motivated to do for practical reasons. Second, evidence can make it possible for one to choose whether or not to believe – a choice one can then make for practical reasons. These arguments push the debate between the evidentialist and the pragmatist into new territory. It is no longer enough for an evidentialist to insist that belief is impossible without evidence. Even if this is right, the outcome of the debate remains unsettled. It will hang on the ability of the evidentialist to respond to the new pragmatist strategy presented here.

The canonical Bayesian solution to the ravens paradox faces a problem: it entails that black non-ravens disconfirm the hypothesis that all ravens are black. I provide a new solution that avoids this problem. On my solution, black ravens confirm that all ravens are black, while non-black non-ravens and black non-ravens are neutral. My approach is grounded in certain relations of epistemic dependence, which, in turn, are grounded in the fact that the kind raven is more natural than the kind black. The solution applies to any generalization “All F’s are G” in which F is more natural than G.
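The confirmation pattern this abstract describes (black ravens confirm; black non-ravens and non-black non-ravens are neutral) can be checked in a toy Bayesian sampling model. The population counts below are my own illustration, not the paper's construction:

```python
# Toy model of the ravens confirmation pattern. H: all ravens are black.
# Under H: 10 ravens, all black; 90 non-ravens, 40 of them black.
# Under not-H: 10 ravens, only 5 black; non-ravens as before.
# One object is sampled at random from the 100. (Counts are assumptions.)

def posterior(prior, likelihood_h, likelihood_not_h):
    """Bayes' theorem for H against its negation."""
    return (likelihood_h * prior) / (
        likelihood_h * prior + likelihood_not_h * (1 - prior))

prior = 0.5

# E1: the sampled object is a black raven.
p1 = posterior(prior, 10 / 100, 5 / 100)    # rises above 1/2: confirms H

# E2: a black non-raven.
p2 = posterior(prior, 40 / 100, 40 / 100)   # stays at 1/2: neutral

# E3: a non-black non-raven.
p3 = posterior(prior, 50 / 100, 50 / 100)   # stays at 1/2: neutral
print(p1, p2, p3)
```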

Pragmatic responses to skepticism have been overlooked in recent decades. This paper explores one such response by developing a character called the Pragmatic Skeptic. The Pragmatic Skeptic accepts skeptical arguments for the claim that we lack good evidence for our ordinary beliefs, and that they do not constitute knowledge. However, they do not think we should give up our beliefs in light of these skeptical conclusions. Rather, we should retain them, since we have good practical reasons for doing so. This takes the sting out of skepticism: we can be skeptics, of a kind, without thereby succumbing to practical or intellectual disaster. I respond to objections, and compare the position of the Pragmatic Skeptic to views found in the work of (among others) David Hume, William James, David Lewis, Berislav Marusic, and Robert Pasnau.

There is a trade-off between specificity and accuracy in existing models of belief. Descriptions of agents in the tripartite model, which recognizes only three doxastic attitudes—belief, disbelief, and suspension of judgment—are typically accurate, but not sufficiently specific. The orthodox Bayesian model, which requires real-valued credences, is perfectly specific, but often inaccurate: we often lack precise credences. I argue, first, that a popular attempt to fix the Bayesian model by using sets of functions is also inaccurate, since it requires us to have interval-valued credences with perfectly precise endpoints. We can see this problem as analogous to the problem of higher-order vagueness. Ultimately, I argue, the only way to avoid these problems is to endorse Insurmountable Unclassifiability. This principle has some surprising and radical consequences. For example, it entails that the trade-off between accuracy and specificity is in-principle unavoidable: sometimes it is simply impossible to characterize an agent’s doxastic state in a way that is both fully accurate and maximally specific. What we can do, however, is improve on both the tripartite and existing Bayesian models. I construct a new model of belief—the minimal model—that allows us to characterize agents with much greater specificity than the tripartite model, and yet which remains, unlike existing Bayesian models, perfectly accurate.

This paper proposes that the question “What should I believe?” is to be answered in the same way as the question “What should I do?,” a view I call Equal Treatment. After clarifying the relevant sense of “should,” I point out advantages that Equal Treatment has over both simple and subtle evidentialist alternatives, including versions that distinguish what one should believe from what one should get oneself to believe. I then discuss views on which there is a distinctively epistemic sense of “should.” Next I reply to an objection which alleges that non-evidential considerations cannot serve as reasons for which one believes. I then situate Equal Treatment in a broader theoretical framework, discussing connections to rationality, justification, knowledge, and theoretical vs. practical reasoning. Finally, I show how Equal Treatment has important implications for a wide variety of issues, including the status of religious belief, philosophical skepticism, racial profiling and gender stereotyping, and certain issues in psychology, such as depressive realism and positive illusions.

Sometimes different partitions of the same space each seem to divide that space into propositions that call for equal epistemic treatment. Famously, equal treatment in the form of equal point-valued credence leads to incoherence. Some have argued that equal treatment in the form of equal interval-valued credence solves the puzzle. This paper shows that, once we rule out intervals with extreme endpoints, this proposal also leads to incoherence.
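The famously incoherent point-valued case can be illustrated with the classic cube-factory example (the paper's own cases may differ): a mystery cube has side length somewhere in (0, 2], and partitioning by side length versus by volume assigns different equal-treatment credences to one and the same proposition:

```python
# Equal point-valued credence over two partitions of the same space.
# A mystery cube has side length in (0, 2], hence volume in (0, 8].
# (The cube case is a standard illustration, used here as an assumption.)

def equal_credence(partition, event):
    """Spread credence equally over the cells of a partition and sum the
    shares of the cells that entail the event."""
    n = len(partition)
    return sum(1 / n for cell in partition if event(cell))

# Partition A: side length split into two equal cells, (0,1] and (1,2].
side_cells = [(0.0, 1.0), (1.0, 2.0)]
# Partition B: volume split into eight equal cells, (0,1], (1,2], ..., (7,8].
volume_cells = [(k, k + 1) for k in range(8)]

# "Side <= 1" and "volume <= 1" are the very same proposition.
p_by_side = equal_credence(side_cells, lambda c: c[1] <= 1.0)
p_by_volume = equal_credence(volume_cells, lambda c: c[1] <= 1.0)
print(p_by_side, p_by_volume)
# One proposition, two credences (1/2 vs 1/8): incoherence.
```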
