At least since Aristotle, deductive logic has had a special place in Western philosophy. Although the history of Western epistemology is largely a history of negative results (i.e., failures to show that the beliefs we take to be rational really are), there is a classical tradition extending back to Aristotle that has maintained its momentum into the 21st century largely on the basis of its paradigm of rational constraints on belief, the laws of deductive logic. Call this the classical rationalist tradition in epistemology. In Putting Logic in its Place, David Christensen argues that the classical rationalist tradition in epistemology has misunderstood the role of deductive logic as a source of constraints on rational belief. He argues that classical deductive logic is best understood as providing constraints on rational degrees of confidence, not as constraints on binary beliefs.

Rationalist Epistemic Probabilism

In the classical rationalist tradition, the laws of deductive logic are taken to apply to states of binary (all-or-nothing) belief. Typically, they provide the basis for two related constraints: deductive consistency (the requirement that the rational beliefs of an individual at a particular time be consistent) and deductive closure (the requirement that they be closed with respect to their deductive consequences). Following Christensen, I refer to these two constraints as the deductive cogency requirement.

In Chapter 3, Christensen uses a series of compelling examples to argue that deductive cogency is not a rational ideal at all. Then in Chapter 4, he argues persuasively that the philosophical arguments for a deductive cogency requirement don't necessitate such a strong requirement. The philosophical arguments can be accommodated by a different kind of requirement, not a deductive cogency requirement on rational binary belief, but rather a probabilistic coherence condition on rational degrees of confidence (i.e., degrees of belief). Probabilistic coherence is the requirement that rational degrees of confidence satisfy the laws of probability. This requirement generates analogs of the deductive cogency requirement at the level of degrees of confidence. For example, from the probability laws it is easily derivable that the probability of p v -p must be one (which corresponds to the Law of the Excluded Middle); that the probability of p & -p must be zero (which corresponds to the Law of Non-Contradiction); and that if p deductively implies q, the probability of q must be at least as high as the probability of p (the analog of deductive closure).
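Each of these analogs falls out of the probability axioms in a line or two. As a sketch (in standard logical notation rather than the ASCII connectives used above):

```latex
\begin{align*}
P(p \lor \neg p) &= 1
  && \text{every classical tautology receives probability one}\\
P(p \land \neg p) &= 1 - P(\neg(p \land \neg p)) = 1 - P(\neg p \lor p) = 0
  && \text{by the negation rule}\\
P(q) &= P(p) + P(q \land \neg p) \ge P(p) \quad \text{whenever } p \models q
  && \text{since } q \text{ is equivalent to the exclusive cases } p \lor (q \land \neg p)
\end{align*}
```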

In rejecting deductive cogency, Christensen breaks with the classical rationalist tradition on the place of logic in epistemology, but he does not break with rationalism about logic itself. His view about the place of deductive logic in epistemology can be described as a rationalist probabilist one. Christensen is not a radical probabilist like Jeffrey, because he is quite willing to entertain both binary belief and degrees of confidence, so long as it is recognized that the important level of epistemological analysis, the level at which rational constraints operate directly, is the level of degrees of confidence (80-88). Though he does not use the term rationalist himself and does not want to be saddled with "any theoretically rich notion of a prioricity" (156), it seems that Christensen must be some sort of rationalist about deductive logic since, on his account, an ideally rational agent would need no empirical evidence to justify assigning probability one to all truths of classical logic. That makes his view a particularly strong form of rationalism about classical deductive logic.

Probabilism in epistemology emerged in the early 20th century from the work of Ramsey, de Finetti, and Savage on the nature of practical rationality. For these theorists, the requirement of probabilistic coherence was ultimately a constraint on rational choice or rational preference. Call them practical probabilists. More recently, there have emerged probabilists (e.g., van Fraassen and Joyce) who argue for probabilistic coherence as a constraint on rational cognition, independent of the practical role of cognition in choosing. Call them epistemic probabilists. In Chapter 5, Christensen makes a strong case for epistemic, as opposed to merely practical, probabilism.

Thus, for the first five chapters of the book, Christensen is addressing philosophers who agree that deductive logic plays the role of some kind of a priori constraint (on belief, on degrees of confidence, or on preferences). Only in the final chapter does Christensen consider the objections of non-rationalists about deductive logic (e.g., Harman). I return to this issue below.

Let me begin with the strengths of the book. First, in his case against a deductive cogency requirement in Chapter 3, Christensen builds upon and raises to a new level of sophistication the two most influential arguments in the literature (discussed by many authors, including, for example, Kyburg and Foley). The two arguments are based on two much discussed kinds of paradox: first, the lottery paradoxes and, second, the preface paradox and its relatives (which I refer to generally as fallibility paradoxes). I focus my discussion on the fallibility paradoxes because, like Christensen, I think they are the more compelling examples (34). I discuss Christensen's argument on this issue more fully shortly and explain why it seems to me that the argument can be generalized to undercut Christensen's own view.

Second, in Chapter 4 Christensen argues quite persuasively that the most influential philosophical arguments in favor of a deductive cogency requirement on rational belief actually do not selectively favor that constraint on rational belief over his probabilist constraints on degrees of confidence. This seems to me to be correct. However, as I illustrate shortly, it suggests that Christensen's position is less a rejection of the requirement of deductive cogency than a change in the level to which the requirement applies.

Third, in the comparison of what I have referred to as practical and epistemic probabilism in Chapter 5, Christensen shows how to reinterpret the two most influential arguments for probabilism, by "depragmatizing" Dutch-book arguments and "de-metaphysicizing" representation-theorem arguments so as to make them into plausible epistemic constraints rather than implausible practical ones (e.g., why worry about Dutch books if there are no bookies around?).

Fourth, in Chapter 6, Christensen argues effectively that even if there is a deontological component in the concept of epistemic justification, deontology is not an important element in the concept of rational belief. Christensen develops this argument into a forceful objection to naturalists (e.g., Kitcher and Kornblith) who have argued for an instrumentalist approach on which epistemology should provide useful advice for epistemic improvement. Christensen argues that Kornblith's own argument against Stich's pragmatism depends on there being some role for epistemology to play other than providing useful advice for cognitive improvement (172). This role Christensen describes as "understanding the nature of rationality" (174), which he insists does not require any commitment to analyticity (171). In this, Christensen seems to me to be correct. However, I think there are serious difficulties for any account of human rationality and irrationality that takes probabilistic coherence to be an ideal, as I discuss below.

The Fallibility Paradoxes

In what is surely the most entertaining example in the book (40-53, 101-105), Christensen asks us to imagine three historians, X, Y, and Z. X regards Y and Z as having a somewhat neurotic obsession with detail-checking, which makes their books more reliable than his on details, though none of their books has ever been completely error-free. X, Y, and Z have each published a new book recently. Here is Christensen's summary of the case: "Professor X has expressed his firm beliefs that (1) every previous book in the field (including his own) has contained multiple errors; (2) he's not as careful a scholar as Y or Z; and (3) the new books by Y and Z will be found to contain errors." (45)

What about his own book? Does X expect reviewers to find any errors in it? Given X's opinions about Y and Z's superior fact-checking and his confidence that even their books contain errors, even if X currently believes every statement in his book, Christensen thinks it is intuitively quite absurd to think that it could be rational for X to believe that his new book is error-free. But, of course, that belief is required by deductive cogency.

The most entertaining part of the example comes later, when Christensen considers how serious the problem is for the advocates of deductive cogency (e.g., BonJour, Pollock, Maher, and Kaplan). Christensen imagines that there is a Society for Historical Exactitude, which has offered a medal and a monetary prize for any historical book (advancing substantial theses) not shown to contain an error within one year of publication. Deductive cogency would require X to believe that his book will win the prize. Because the amount of prize money is large enough to make a big difference in X's life, X would have to believe that there would be big changes in his life soon. For example, it would not make sense for him to accept a good deal on a used car now when he was so close to being able to afford the car of his dreams (50-51, 101-102). Christensen concludes that the irrationality required by deductive cogency would send "ripples of intuitive irrationality" throughout his belief system (52).

Then Christensen generalizes the example from a problem for book authors to a problem for all human believers. All of us find ourselves in fallibility paradoxes when, for example, we believe that at least one of our memory beliefs is false, or when we simply believe that at least one of our beliefs is false. Think of how insufferable a person would be if, when there was a conflict of memories, she always insisted that other people's memories were mistaken, never her own. Christensen provides many more examples than I can review here. In each case, the example is one in which deductive cogency would require a human being to have beliefs that strike most people as intuitively irrational.

Christensen considers and argues effectively against many attempts to insulate the deductive cogency requirement from the effects of lottery and fallibility paradoxes (56-68). However, he fails to see that the fallibility paradoxes are a sword that cuts both ways.

How Fallibility Paradoxes Pose Problems for Christensen's Own Account

To see that the fallibility paradoxes generate problems for Christensen's probabilist view, note first that the computational difficulties of maintaining a consistent set of beliefs and the lack of a decision procedure for the deductive consequence relation translate directly into practical difficulties for satisfying Christensen's probabilist constraints. In Christensen's probabilism, the analog of the consistency constraint is the requirement that one's degrees of belief be consistent with the probability laws. This would be a significant computational problem even for beings with only a finite number of degrees of confidence. But Christensen's probabilism requires an infinite number of degrees of confidence, including assigning degree of confidence of one to all instances of classical logical truths (of which there are infinitely many) and degree of confidence of zero to all logical falsehoods (of which there are also infinitely many). Christensen's analog to deductive closure is that as soon as an agent comes to assign a non-zero degree of confidence to a contingent proposition, s/he is required to assign at least that much confidence to all of its deductive consequences (of which, again, there are infinitely many). Like the deductive cogency requirement, Christensen's probabilistic coherence requirement can only be satisfied by a subject who is logically omniscient.
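The computational point can be made vivid with a toy sketch (the encoding and function name are mine, not Christensen's): even certifying that a single sentence is a classical tautology, so that coherence demands credence one in it, means surveying every possible world in the worst case, and the general problem is co-NP-complete.

```python
from itertools import product

def is_tautology(sentence, atoms):
    """Return True iff `sentence` (a function from a truth-value
    assignment to a bool) is true in every possible world, i.e. every
    assignment of truth values to the atomic propositions in `atoms`.
    Probabilistic coherence demands credence one in every such sentence,
    but checking this way costs 2**n evaluations for n atoms."""
    return all(sentence(dict(zip(atoms, values)))
               for values in product([True, False], repeat=len(atoms)))

# Excluded middle, embedded in a language with 16 atomic propositions:
# already 2**16 = 65,536 worlds to survey for one trivial tautology.
atoms = [f"p{i}" for i in range(16)]
print(is_tautology(lambda w: w["p0"] or not w["p0"], atoms))  # True
print(is_tautology(lambda w: w["p0"] or w["p1"], atoms))      # False
```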

Christensen is aware that his account involves extreme idealization (150-153). The problem is that his idealization generates the same kinds of problems that he used to cast doubt on the deductive cogency requirement. Consider, for example, the evidence from Tversky and Kahneman and others showing that human beings, even those trained in statistics, tend to make assignments of degrees of confidence that are inconsistent with the probability laws, even in quite simple cases. And consider the computational difficulty of checking for consistency even in simple cases. This evidence provides the basis for a new kind of fallibility paradox, leading to the conclusion that it is extremely unlikely that any human being's degrees of confidence are completely coherent. It would be easy to construct a dialog (paralleling Christensen's example of the historian X) in which a psychologist, call him Amos, accepted all the evidence of human statistical errors and then insisted that he himself had completely coherent degrees of confidence. But, of course, Christensen's ideally rational agent would have completely coherent degrees of confidence, so Christensen's ideal could not be used to explain the irrationality of Amos's overconfidence.
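A minimal sketch of the kind of incoherence at issue (the labels and numbers below are illustrative, not data from the experiments): the conjunction fallacy that Tversky and Kahneman elicited with the "Linda" problem assigns a conjunction more confidence than one of its conjuncts, which the probability calculus forbids, since P(A & B) <= min(P(A), P(B)).

```python
def conjunction_violations(credences, conjunctions):
    """credences: dict mapping proposition labels to degrees of confidence.
    conjunctions: dict mapping a conjunction label to its pair of conjunct
    labels. Returns the conjunctions assigned more confidence than one of
    their conjuncts -- assignments no probability function can match."""
    return [c for c, (a, b) in conjunctions.items()
            if credences[c] > min(credences[a], credences[b])]

# Illustrative Linda-style credences: "bank teller and feminist" rated
# more probable than "bank teller" alone.
credences = {"teller": 0.1, "feminist": 0.9, "teller & feminist": 0.3}
print(conjunction_violations(credences,
                             {"teller & feminist": ("teller", "feminist")}))
# -> ['teller & feminist']
```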

But the problems with Christensen's idealization are even more serious than this example suggests. Christensen's ideally rational agents are logically omniscient. On Christensen's view, rationality requires anyone who knows the definition of pi to be certain or almost certain of the answer to the following question, without engaging in any empirical inquiry: What is the trillionth digit in the decimal expansion of pi? Christensen acknowledges that no human being could answer this question on the basis of a priori reasoning alone (153), but he does not pursue the implications of this fact for his view. To find out the trillionth digit of pi, human beings would have to rely on empirical evidence. They would have to run software that they had good reason to trust on computer hardware that they had good reason to trust, or they would have to obtain the information from a source that they had good reason to trust, for example by googling it, as I did. Even if it were humanly possible to do the calculation by hand, a computer calculation would justify much greater confidence than the results of a manual calculation carried out over years, because we have lots of evidence of the fallibility of human calculations.

So Christensen's model of ideal rationality is completely useless if we want to know what degree of confidence it is rational for a given human being to assign to the ten alternatives for the trillionth digit of pi. A good case can be made that, in the absence of empirical evidence, a rational human agent would assign an equal degree of confidence to each of the ten alternatives. Moreover, there is now substantial empirical evidence that each of the ten digits occurs with approximately equal frequency in the decimal expansion of pi. Thus, for many human beings, that empirical evidence makes it rational to assign equal degree of confidence to each of the ten alternatives. And no human being could rationally assign a high degree of confidence to the correct answer (2) without substantial reliance on empirical evidence.

Here is one final example of a fallibility paradox that presents problems for Christensen's account: When presented with a complex derivation of a surprising new theorem, any rational mathematician or logician will have less confidence in the conclusion than in the conjunction of the premises, to reflect the significant possibility that there is an as-yet-undetected error of reasoning. For example, shortly after Andrew Wiles first presented his "proof" of Fermat's Last Theorem, a colleague discovered an error in it. A year later, Wiles produced a new proof. It would have been irrational for Wiles to be certain that the new proof contained no errors when he first presented it, and thus it would have been irrational for Wiles to be as confident in the conclusion as he was in the conjunction of the premises. Over time, as other mathematicians reviewed the proposed proof and did not uncover any problems, it became rational for all mathematicians, including Wiles, to increase their confidence in the validity of the proof. But Christensen's ideally rational agent is logically omniscient, so s/he would already have been sure that Fermat's Last Theorem was true, or at least that it was at least as likely as the conjunction of the premises of the proof. So, again, Christensen's account makes it impossible to explain important examples of human rationality based on our recognition of human fallibility, including our failures of logical omniscience.

Does Christensen Really Reject Deductive Cogency?

One of the most influential arguments for the deductive cogency requirement is the Argument Argument. The challenge is to make sense of our practices of reasoning without supposing that deductive consistency and deductive closure are requirements of rationality. As I have already mentioned, in Chapter 4 Christensen responds to this challenge by arguing that a probabilist account contains analogs to deductive cogency that provide an alternative explanation of our practices of reasoning. However, it seems to me that the best way of understanding Christensen's position is not as a rejection of the deductive cogency requirement, but as a change in the level to which it applies. Consider, for example, a simple deduction:

(HM) All humans are mortal.

(SH) Socrates is human.

(SM) Therefore, Socrates is mortal.

To the advocates of deductive cogency, this deductive argument can be useful in explaining why it would be rational for someone who believes the premises (HM and SH) to believe the conclusion (SM). Christensen suggests an alternative way of understanding the role of this argument. It can be useful in explaining why it is rational for someone to assign a degree of confidence to the conclusion (SM) that is no lower than the degree of confidence one assigns to the conjunction of the premises (HM&SH).

The problem for Christensen is that this probabilist account of the role of the argument can itself be stated as a more complex deductive argument. It takes as premises the probability laws and the agent's degrees of confidence in HM, SH, and HM&SH (i.e., PROB(HM), PROB(SH), and PROB(HM&SH)), and deduces the result: PROB(SM) ≥ PROB(HM&SH). At this lower level of analysis, Christensen's view implies that a fully rational agent's degrees of confidence are deductively consistent with the probability laws and are deductively closed with respect to the probability laws. Call these two requirements the lower level deductive cogency requirement.
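Spelled out in standard notation (which the PROB(·) abbreviations above stand in for), the more complex deductive argument runs roughly as follows:

```latex
\begin{align*}
&\text{(P1)}\quad (HM \land SH) \models SM
  && \text{the syllogism is classically valid}\\
&\text{(P2)}\quad \text{if } p \models q \text{, then } P(q) \ge P(p)
  && \text{a consequence of the probability laws}\\
&\text{(C)}\quad\;\, P(SM) \ge P(HM \land SH)
  && \text{by (P1) and (P2)}
\end{align*}
```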

It is possible that Christensen would object to this characterization of his view as involving a confusion of levels.[1] Christensen could argue that we attribute degrees of confidence to agents without any presumption that agents explicitly reason with them. The attribution of degrees of belief to an agent is part of our theory of the agent. Any adequate normative theory, even a theory of proper heart functioning, must be consistent and deductively closed, but that does not impose a deductive cogency requirement on hearts. Call this the confusion-of-levels reply.

This reply gets something right. Surely Christensen does not have to attribute explicit knowledge of the probability laws to human beings in order to claim that satisfying those laws is a requirement for their degrees of confidence to be rational. But this does not distinguish his probabilistic coherence requirement from the standard deductive cogency requirement. It is not necessary to attribute explicit knowledge of the laws of classical logic to human beings in order to claim that the standard deductive cogency requirement applies to them. Most human beings are ignorant of both the probability laws and the laws of classical logic, but that does not prevent us from applying standards based on those laws to them.

Still, the confusion-of-levels reply is unpersuasive, because even without attributing explicit knowledge of the laws of logic or the probability laws to individual agents, there is a clear sense in which both the standard deductive cogency requirement and the probabilistic coherence requirement build logical omniscience into the standards of epistemic rationality. No theory of proper heart functioning builds a requirement of logical omniscience into the standards for proper heart functioning.

Another way of making this point is to consider a theory of rational degrees of confidence that does not require probabilistic coherence. The theory might only require that an agent's degrees of confidence be closed with respect to short and simple deductive consequences (as defined by the theory) of the probability laws (where the combination of two short and simple inferences need not itself be short or simple). To be adequate, any such theory would itself have to be consistent and deductively closed, but such a theory need not impose either a requirement that degrees of confidence be consistent with the probability laws (only that any inconsistency not be derivable by a short and simple deduction) or a requirement that rational degrees of confidence be deductively closed with respect to the probability laws (they might only be closed with respect to short and simple deductive consequences of the probability laws).

There is one more problem with the confusion-of-levels reply. The reply asserts that we do not explicitly reason with degrees of confidence. But this is not true. To take only one example, in most courses on practical reasoning, students are taught to reason with degrees of confidence by applying the probability laws to them. Similarly, to explain what is wrong in many of the fallacies discovered by Tversky and Kahneman and others, one typically applies the probability calculus to degrees of confidence. So there are lots of explicit arguments that seem to presuppose probabilistic coherence as a normative ideal, and thus to commit us to the lower level deductive cogency requirement in our explicit reasoning about degrees of confidence. It seems to me, then, that while it is strictly true to say that Christensen rejects the deductive cogency requirement (because that requirement is typically stated to apply to states of binary belief), it is more accurate to say that he endorses such a requirement at a lower level, one that applies to degrees of confidence rather than to binary belief.

Rationalism About Deductive Logic

Why would Christensen even be tempted to think that probabilism provides a useful idealization of human rationality? His main reason is given in Chapter 6, where he contrasts factual and logical omniscience (153-157). Christensen begins by giving an example in which we would judge an agent to be irrational for failing to draw a deductive consequence of her beliefs. Then he generalizes from the one case to the conclusion that all such failures can be thought of as departures from an ideal of rationality. There are serious problems with this as an argument. First, as Christensen himself acknowledges, there are cases in which failure to obtain empirical evidence is judged to be irrational (155). So, parallel to Christensen's argument, one could define an ideal of rationality requiring that one obtain all possible empirical evidence. This would clearly not be an epistemic ideal that would be relevant to judging human rationality and irrationality.

Of course, it is open to Christensen to respond that we do not regard all cases of failure to obtain possible evidence as failures of rationality. The problem is that we do not regard all failures of logical omniscience as failures of rationality either. Was it irrational that the majority of early twentieth-century logicians were surprised by Gödel's Incompleteness Proof, even though they would have been quite confident of all of its premises (and of their conjunction)? Not on any reasonable conception of human rationality. So it seems to me that there is no reason for regarding either kind of ideal, the ideal of a logically omniscient agent or the ideal of an agent who gathers all possible empirical evidence, as an interesting epistemic ideal for human beings.

In any case, Christensen's entire position depends on being able to draw a distinction between logical knowledge and factual knowledge. That is what makes him a rationalist about deductive logic. Anyone who is a rationalist about deductive logic should at least address what might be called the holist challenge to rationalism about deductive logic: According to probabilism, rationality requires that all instances of truths of classical logic be assigned degree of confidence of one. But couldn't there be good reasons for giving up classical logic or, at least, for considering whether it should be given up?

This is not merely an abstract question. There have been proposals for non-classical logics in mathematics, physics, metaphysics, and semantics. If the advocates of these views could rationally have assigned any non-zero degree of confidence to them, then it must be rational to assign degree of confidence less than one to some truths of classical logic. Indeed, if it is even possible for the advocates of these views to rationally argue for them, then it must be rational to consider assigning degree of confidence of less than one to some truths of classical logic. The rationalist about deductive logic cannot explain how this could be rational. The most significant omission from Christensen's book is any defense of rationalism about deductive logic against the holist challenge.

Conclusion

In this book, Christensen has made important contributions to our understanding of epistemology and the epistemology of deductive logic, and he has done so in a surprisingly accessible way. His discussions of binary belief and probabilism, of the lottery paradoxes and the fallibility paradoxes, of the epistemological significance of Dutch Book arguments and of representation theorems, and of deontology and instrumentalism in epistemology, are essential reading on those topics. His book is essential reading for rationalists about deductive logic, including classical rationalists, practical probabilists, and epistemic probabilists. Non-rationalists about deductive logic will find it less compelling.