This paper shows how the availability heuristic can be used to justify inference to the best explanation in such a way that van Fraassen's infamous "best of a bad lot" objection can be adroitly avoided. With this end in mind, a dynamic and contextual version of the erotetic model of explanation sufficient to ground this response is presented and defended.

This paper introduces a model for evidence denial that explains such behavior as a manifestation of rationality. The model is based on the contention that social values (measurable as utilities) often underwrite these sorts of responses. Moreover, it is contended that the value associated with group membership in particular can override epistemic reason when the expected utility of a belief or belief system is great. However, it appears that it is still possible for such unreasonable believers to reverse this sort of dogmatism and to change their beliefs in a way that is epistemically rational. The conjecture made here is that we should expect this to happen only when the expected utility of the beliefs in question dips below a threshold at which the utility of continued dogmatism and the associated group membership is no longer sufficient to motivate defusing the counter-evidence that tells against such epistemically irrational beliefs.

In this paper significant challenges are raised with respect to the view that explanation essentially involves unification. These objections are raised specifically with respect to the well-known versions of unificationism developed and defended by Michael Friedman and Philip Kitcher. The objections involve the explanatory regress argument and the concepts of reduction and scientific understanding. Essentially, the contention made here is that these versions of unificationism wrongly assume that reduction secures understanding.

Hans Reichenbach’s pragmatic treatment of the problem of induction in his later works on inductive inference was, and still is, of great interest. However, it has been dismissed as a pseudo-solution and has been regarded as problematically obscure. This is, in large part, due to the difficulty in understanding exactly what Reichenbach’s solution is supposed to amount to, especially as it appears to offer no response to the inductive skeptic. For entirely different reasons, the significance of Bertrand Russell’s classic attempt to solve Hume’s problem is also both obscure and controversial. Russell accepted that Hume’s reasoning about induction was basically correct, but he argued that given the centrality of induction in our cognitive endeavors something must be wrong with Hume’s basic assumptions. What Russell effectively identified as Hume’s (and Reichenbach’s) failure was the commitment to a purely extensional empiricism. So, Russell’s solution to the problem of induction was to abandon extensional empiricism and to ground induction by accepting both a robust essentialism and a form of rationalism that allows for a priori knowledge of universals. However, neither of these doctrines is without its critics. On the one hand, Reichenbach’s solution faces the charges of obscurity and of offering no response to the inductive skeptic. On the other hand, Russell’s solution looks to be objectionably ad hoc absent some non-controversial and independent argument that the universals needed to ground the uniformity of nature actually exist and are knowable. This particular charge is especially likely to arise from those inclined towards purely extensional forms of empiricism. In this paper the significance of Reichenbach’s solution to the problem of induction will be made clearer via a comparison of these two historically important views about the problem of induction.
The modest but important contention that will be made here is that the comparison of Reichenbach’s and Russell’s solutions calls attention to the opposition between extensional and intensional metaphysical presuppositions in the context of attempts to solve the problem of induction. It will be shown that, in effect, what Reichenbach does is establish an important epistemic limitation of extensional empiricism. So, it will be argued here that there is nothing really obscure about Reichenbach’s thoughts on induction at all. He was simply working out the limits of extensional empiricism with respect to inductive inference, in opposition to the sort of metaphysics favored by Russell and like-minded thinkers.

It is an under-appreciated fact that Quine's rejection of the analytic/synthetic distinction, when coupled with some other plausible and related views, implies that there are serious difficulties in demarcating empirical theories from pure mathematical theories within the Quinean framework. This is a serious problem because there seems to be a principled difference between the two disciplines that apparently cannot be captured in the orthodox Quinean framework. For the purpose of simplicity let us call this Quine's problem of demarcation. In this paper this problem will be articulated and it will be shown that the typical sorts of responses to this problem are all unworkable within the Quinean framework. It will then be shown that the lack of resources to solve this problem within the Quinean framework implies that Quine’s version of the indispensability argument cannot get off the ground, for it presupposes the possibility of making such a distinction.

In this article the standard philosophical method involving intuition-driven conceptual analysis is challenged in a new way. This orthodox approach to philosophy takes analysanda to be the specifications of the content of concepts in the form of sets of necessary and sufficient conditions. Here it is argued that there is no adequate account of what necessary and sufficient conditions are. So, the targets of applications of the standard philosophical method so understood are not sufficiently well understood for this method to be dependable.

In this chapter we consider three philosophical perspectives (including those of Stalnaker and Lewis) on the question of whether and how the principle of conditional excluded middle should figure in the logic and semantics of counterfactuals. We articulate and defend a third view that is patterned after belief revision theories offered in other areas of logic and philosophy. Unlike Lewis’ view, the belief revision perspective does not reject conditional excluded middle, and unlike Stalnaker’s, it does not embrace supervaluationism. We adduce both theoretical and empirical considerations to argue that the belief revision perspective should be preferred to its alternatives. The empirical considerations are drawn from the results of four empirical studies (which we report below) of non-experts’ judgments about counterfactuals and conditional excluded middle.

Paradoxes have played an important role both in philosophy and in mathematics and paradox resolution is an important topic in both fields. Paradox resolution is deeply important because if such resolution cannot be achieved, we are threatened with the charge of debilitating irrationality. This is supposed to be the case for the following reason. Paradoxes consist of jointly contradictory sets of statements that are individually plausible or believable. These facts about paradoxes then give rise to a deeply troubling epistemic problem. Specifically, if one believes all of the constitutive propositions that make up a paradox, then one is apparently committed to belief in every proposition. This is the result of the principle of classical logic known as ex contradictione (sequitur) quodlibet, that anything and everything follows from a contradiction, and the plausible idea that belief is closed under logical or material implication (i.e. the epistemic closure principle). But, it is manifestly and profoundly irrational to believe every proposition and so the presence of even one contradiction in one’s doxa appears to result in what seems to be total irrationality. This problem is the problem of paradox-induced explosion. In this paper it will be argued that in many cases this problem can plausibly be avoided in a purely epistemic manner, without having either to resort to non-classical logics for belief (e.g. paraconsistent logics) or to deny the standard closure principle for beliefs. The manner in which this result can be achieved depends on drawing an important distinction between the propositional attitude of belief and the weaker attitude of acceptance, such that paradox-constituting propositions are accepted but not believed.
Paradox-induced explosion is then avoided by noting that while belief may well be closed under material implication or even under logical implication, these sorts of weaker commitments are not subject to closure principles of those sorts. So, this possibility provides us with a less radical way to deal with the existence of paradoxes and it preserves the idea that intelligent agents can actually entertain paradoxes.

This book is a sustained defense of the compatibility of the presence of idealizations in the sciences and scientific realism. So, the book is essentially a detailed response to the infamous arguments raised by Nancy Cartwright to the effect that idealization and scientific realism are incompatible.

Searle’s Chinese Room Argument (CRA) has been the object of great interest in the philosophy of mind, artificial intelligence and cognitive science since its initial presentation in ‘Minds, Brains and Programs’ in 1980. It is by no means an overstatement to assert that it has been a main focus of attention for philosophers and computer scientists of many stripes. It is then especially interesting to note that relatively little has been said about the detailed logic of the argument, whatever significance Searle intended CRA to have. The problem with the CRA is that it involves a very strong modal claim, the truth of which is both unproved and highly questionable. So it will be argued here that the CRA does not prove what it was intended to prove.

This paper has three interdependent aims. The first is to make Reichenbach’s views on induction and probabilities clearer, especially as they pertain to his pragmatic justification of induction. The second aim is to show how his view of pragmatic justification arises out of his commitment to extensional empiricism and moots the possibility of a non-pragmatic justification of induction. Finally, and most importantly, a formal decision-theoretic account of Reichenbach’s pragmatic justification is offered in terms both of the minimax principle and the dominance principle.

Stalnaker argued that conditional excluded middle should be included among the principles that govern counterfactuals on the basis that intuitions support that principle. This is because there are pairs of competing counterfactuals that appear to be equally acceptable. In doing so, he was forced to introduce semantic vagueness into his system of counterfactuals. In this paper it is argued that there is a simpler and purely epistemic explanation of these cases that avoids the need for introducing semantic vagueness into the semantics for counterfactuals.

The main question addressed in this paper is whether some false sentences can constitute evidence for the truth of other propositions. In this paper it is argued that there are good reasons to suspect that at least some false propositions can constitute evidence for the truth of certain other contingent propositions. The paper also introduces a novel condition concerning propositions that constitute evidence that explains a ubiquitous evidential practice and it contains a defense of a particular condition concerning the possession of evidence. The core position adopted here then is that false propositions that are approximately true reports of measurements can constitute evidence for the truth of other propositions. So, it will be argued that evidence is only quasi-factive in this very specific sense.

This paper introduces a new argument for the safety condition on knowledge. It is based on the contention that the rejection of safety entails the rejection of the factivity condition on knowledge. But, since we should maintain factivity, we should endorse safety.

Imre Lakatos' views on the philosophy of mathematics are important and they have often been underappreciated. The most obvious lacuna in this respect is the lack of detailed discussion and analysis of his 1976a paper and its implications for the methodology of mathematics, particularly its implications with respect to argumentation and the matter of how truths are established in mathematics. The most important themes that run through his work on the philosophy of mathematics and which culminate in the 1976a paper are (1) the (quasi-)empirical character of mathematics and (2) the rejection of axiomatic deductivism as the basis of mathematical knowledge. In this paper Lakatos' later views on the quasi-empirical nature of mathematical theories and methodology are examined and specific attention is paid to what this view implies about the nature of mathematical argumentation and its relation to the empirical sciences.

This paper introduces a new argument against Richard Foley’s threshold view of belief. His view is based on the Lockean Thesis (LT) and the Rational Threshold Thesis (RTT). The argument introduced here shows that the views derived from the LT and the RTT violate the safety condition on knowledge in a way that threatens the LT and/or the RTT.

In a recent revision (chapter 4 of Nowakowa and Nowak 2000) of an older article Leszek Nowak (1992) has attempted to rebut Niiniluoto’s 1990 critical suggestion that proponents of the Poznań idealizational approach to the sciences have committed a rather elementary logical error in the formal machinery that they advocate for use in the analysis of scientific methodology. In this paper I criticize Nowak’s responses to Niiniluoto’s suggestion, and, subsequently, work out some of the consequences of that criticism for understanding the role that idealization plays in scientific methodology.

In a recent article, Peter Gärdenfors (1992) has suggested that the AGM (Alchourrón, Gärdenfors, and Makinson) theory of belief revision can be given an epistemic basis by interpreting the revision postulates of that theory in terms of a version of the coherence theory of justification. To accomplish this goal Gärdenfors suggests that the AGM revision postulates concerning the conservative nature of belief revision can be interpreted in terms of a concept of epistemic entrenchment and that there are good empirical reasons to adopt this view as opposed to some form of foundationalist account of the justification of our beliefs. In this paper I argue that Gärdenfors’ attempt to underwrite the AGM theory of belief revision by appealing to a form of coherentism is seriously inadequate for several reasons.

In the preface paradox the posited author is supposed to know both that every sentence in a book is true and that not every sentence in that book is true. But, this result is paradoxically contradictory. The paradoxicality exhibited in such cases arises chiefly out of the recognition that large-scale and difficult tasks like verifying the truth of large sets of sentences typically involve errors even given our best efforts to be epistemically diligent. This paper introduces an argument designed to resolve the preface paradox so understood by appeal to the safety condition on knowledge.

Recently a number of variously motivated epistemologists have argued that knowledge is closely tied to practical matters. On the one hand, radical pragmatic encroachment is the view that facts about whether an agent has knowledge depend on practical factors, and this is coupled to the view that there is an important connection between knowledge and action. On the other hand, one can argue for the less radical thesis only that there is an important connection between knowledge and practical reasoning. So, defenders of both of these views endorse the view that knowledge is the norm of practical reasoning. This thesis has recently come under heavy fire and a number of weaker proposals have been defended. In this paper counter-examples to the knowledge norm of reasoning will be presented and it will be argued that this view, and a number of related but weaker views, cannot be sustained in the face of these counter-examples. The paper concludes with a novel proposal concerning the norm of practical reasoning that is immune to the counter-examples introduced here.

Recently Timothy Williamson (2007) has argued that characterizations of the standard (i.e. intuition-based) philosophical practice of philosophical analysis are misguided because of the erroneous manner in which this practice has been understood. In doing so he implies that experimental critiques of the reliability of intuition are based on this misunderstanding of philosophical methodology and so have little or no bearing on actual philosophical practice or results. His main point is that the orthodox understanding of philosophical methodology is incorrect in that it treats philosophical thought experiments as if they can be “filled in” in various ways that undermine their use as counter-examples, and that, given the possibility of such filling in, intuition plays no substantial role in philosophical practice when that methodology is properly understood. In this paper Williamson’s claim that philosophical thought experiment cases can legitimately be filled in this way will be challenged, and it will be shown that the experimental critique of intuition-based methods raises a serious issue.

Following Nancy Cartwright and others, I suggest that most (if not all) theories incorporate, or depend on, one or more idealizing assumptions. I then argue that such theories ought to be regimented as counterfactuals, the antecedents of which are simplifying assumptions. If this account of the logical form of theories is granted, then a serious problem arises for Bayesians concerning the prior probabilities of theories that have counterfactual form. If no such probabilities can be assigned, then the posterior probabilities will be undefined, as the latter are defined in terms of the former. I argue here that the most plausible attempts to address the problem of probabilities of conditionals fail to help Bayesians, and, hence, that Bayesians are faced with a new problem. In so far as these proposed solutions fail, I argue that Bayesians must give up Bayesianism or accept the counterintuitive view that no theories that incorporate any idealizations have ever really been confirmed to any extent whatsoever. Moreover, as it appears that the latter horn of this dilemma is highly implausible, we are left with the conclusion that Bayesianism should be rejected, at least as it stands.

This paper shows that any view of future contingent claims that treats such claims as having indeterminate truth values or as simply being false implies probabilistic irrationality. This is because such views of the future imply violations of reflection, special reflection and conditionalization.

Defenders of doxastic voluntarism accept that we can voluntarily commit ourselves to propositions, including belief-contravening propositions. Thus, defenders of doxastic voluntarism allow that we can choose to believe propositions that are negatively implicated by our evidence. In this paper it is argued that the conjunction of epistemic deontology (ED) and doxastic voluntarism (DV), as it applies to ordinary cases of belief-contravening propositional commitments, is incompatible with evidentialism. Here ED and DV will be assumed, and this negative result will be used to suggest that voluntary belief-contravening commitments are not themselves beliefs and that these sorts of commitments are not governed by evidentialism. So, the apparent incompatibility of the package of views noted above can be resolved without ceding evidentialism with respect to beliefs.

This paper presents a case for the claim that the infamous miners paradox is not a paradox. This contention is based on some important observations about the nature of ignorance with respect to both disjunctions and conditional obligations and their modal features. The gist of the argument is that given the uncertainty about the location of the miners in the story and the nature of obligations, the apparent obligation to block either mine shaft is cancelled.

In this paper we argue that dissociative identity disorder (DID) is best interpreted as a causal model of a (possible) post-traumatic psychological process, as a mechanical model of an abnormal psychological condition. From this perspective we examine and criticize the evidential status of DID, and we demonstrate that there is really no good reason to believe that anyone has ever suffered from DID so understood. This is so because the proponents of DID violate basic methodological principles of good causal modeling.

"When every ounce of your concentration is fixed upon blasting a winged pig out of the sky, you do not question its species' ontological status." (James Morrow, City of Truth, 1990)

Following the standard practice in sociology, cultural anthropology and history, sociologists, historians of science and some philosophers of science define scientific communities as groups with shared beliefs, values and practices. In this paper it is argued that in real cases the beliefs of the members of such communities often vary significantly in important ways. This has rather dire implications for the convergence defense against the charge of the excessive subjectivity of subjective Bayesianism because that defense requires that communities of Bayesian inquirers share a significant set of modal beliefs. The important implication is then that given the actual variation in modal beliefs across individuals, either Bayesians cannot claim that actual theories have been objectively confirmed or they must accept that such theories have been confirmed relative only to epistemically insignificant communities.

In this paper Timothy Williamson’s argument that the knowledge norm of assertion is the best explanation of the unassertability of Moorean sentences is challenged and an alternative account of the norm of assertion is defended.

Some recent work by philosophers of mathematics has been aimed at showing that our knowledge of the existence of at least some mathematical objects and/or sets can be epistemically grounded by appealing to perceptual experience. The sensory capacity that they refer to in doing so is the ability to perceive numbers, mathematical properties and/or sets. The chief defense of this view as it applies to the perception of sets is found in Penelope Maddy’s Realism in Mathematics, but a number of other philosophers have made similar, if simpler, appeals of this sort. For example, Jaegwon Kim, John Bigelow, and John Bigelow and Robert Pargetter have all defended such views. The main critical issue that will be raised here concerns the coherence of the notions of set perception and mathematical perception, and whether appeals to such perceptual faculties can really provide any justification for or explanation of belief in the existence of sets, mathematical properties and/or numbers.

Theorists in various scientific disciplines offer radically different accounts of the origin of violent behavior in humans, but it is not clear how the study of violence is to be scientifically grounded. This problem is made more complicated because both what sorts of acts constitute violence and what needs to be appealed to in explaining violence differ among social scientists, biologists, anthropologists and neurophysiologists, and this generates serious problems with respect to even attempting to ascertain the differential bona fides of these various explanatory programs. As a consequence, there is little theoretical reason to suspect that efforts to prevent violence will have any appreciable effect. In this paper we investigate the general issue of whether any of the general theoretical approaches to violent behavior can reasonably be taken to be the best approach to the explanation of seriously violent behavior. Our more specific aim is to examine the controversial explanation of violent behavior offered by Lonnie Athens in order to ascertain whether it can seriously be considered to be the best explanation of violent behavior.

The development of possible worlds semantics (PWS) for modal claims has led to a more general application of that theory as a complete semantics for various formal and natural languages, and this view is widely held to be an adequate (philosophical) interpretation of the model theory for such languages. We argue here that this view generates a self-referential inconsistency that indicates either the falsity or the incompleteness of PWS.

In this paper it is argued that the conjunction of linguistic ersatzism, the ontologically deflationary view that possible worlds are maximal and consistent sets of sentences, and possible world semantics, the view that the meaning of a sentence is the set of possible worlds at which it is true, implies that no actual speaker can effectively use virtually any language to successfully communicate information. This result is based on complexity issues that relate to our finite computational ability to deal with large bodies of information and a strong, but well motivated, assumption about the cognitive accessibility of meanings of sentences ersatzers seem to be implicitly committed to. It follows that linguistic ersatzism, possible world semantics, or both must be rejected.

In his 1993 article George Bealer offers three separate arguments that are directed against the internal coherence of empiricism, specifically against Quine’s version of empiricism. One of these arguments is the starting points argument (SPA) and it is supposed to show that Quinean empiricism is incoherent. We argue here that this argument is deeply flawed, and we demonstrate how a Quinean may successfully defend his views against Bealer’s SPA. Our defense of Quinean empiricism against the SPA depends on showing (1) that Bealer is, in an important sense, a foundationalist, and (2) that Quine is, in an important sense, a coherentist. Having established these two contentions we show that Bealer’s SPA begs the question against Quinean empiricists.

In this paper it is argued that three of the most prominent theories of conditional acceptance face very serious problems. David Lewis' concept of imaging, the Ramsey test and Jonathan Bennett's recent hybrid view all face vicious regresses, employ unanalyzed components, or depend upon an implausibly strong version of doxastic voluntarism.

In this paper I argue that the best explanation of expertise about taste is that such alleged experts are simply more eloquent in describing the taste experiences that they have than are ordinary tasters.

The ontology of decision theory has been subject to considerable debate in the past, and discussion of just how we ought to view decision problems has revealed more than one interesting problem, as well as suggested some novel modifications of classical decision theory. In this paper it will be argued, first, that Bayesian, or evidential, decision-theoretic characterizations of decision situations fail to adequately account for knowledge concerning the causal connections between acts, states, and outcomes in decision situations, and so they are incomplete. Second, it will be argued that when we attempt to incorporate the knowledge of such causal connections into Bayesian decision theory, a substantial technical problem arises for which there is no currently available solution that does not suffer from some damning objection or other. From a broader perspective, this then throws into question the use of decision theory as a model of human or machine planning.

Some contemporary theologically inclined epistemologists, the reformed epistemologists, have attempted to show that belief in God is rational by appealing directly to a special kind of experience. To strengthen the appeal to this particular, and admittedly peculiar, type of experience these epistemologists venture to draw a parallel between such experiences and normal perceptual experiences in order to show that, by parity of reasoning, if beliefs formed on the basis of the latter are taken to be justified and rational to hold, then beliefs formed on the basis of the former should also be regarded as justified and rational to hold. Such appeals to religious experience have been discussed and/or made by Robert Pargetter, Alvin Plantinga and William Alston, who claim that such experiences provide sufficient warrant for religious beliefs, specifically for the belief that God exists. The main critical issue that will be raised here concerns the coherence of this notion of religious experience itself and whether such appeals to religious experience really provide justification for belief in the existence of God.

In a series of influential articles, George Bealer argues for the autonomy of philosophical knowledge on the basis that philosophically known truths must be necessary truths. The main point of his argument is that the truths investigated by the sciences are contingent truths to be discovered a posteriori by observation, while the truths of philosophy are necessary truths to be discovered a priori by intuition. The project of assimilating philosophy to the sciences is supposed to be rendered illegitimate by the more or less sharp distinction in these characteristic methods and its modal basis. In this article Bealer's particular way of drawing the distinction between philosophy and science is challenged in a novel manner, and thereby philosophical naturalism is further defended.

The main examples of pragmatic encroachment presented by Jason Stanley involve the idea that knowledge ascription occurs more readily in cases where stakes are low rather than high. This is the stakes hypothesis. In this paper an example is presented showing that in some cases knowledge ascription is more readily appropriate where stakes are high rather than low.

It is a commonplace belief that many beliefs, e.g. religious convictions, are a purely private matter, and this is meant in some way to serve as a defense against certain forms of criticism. In this paper it is argued that this thesis is false, and that belief is really often a public matter. This argument, the publicity of belief argument, depends on one of the most compelling and central theses of Peircean pragmatism. This crucial thesis is that bona fide belief cannot be separated from action. It is then also suggested that we should accept a form of W. K. Clifford's evidentialism. When these theses are jointly accepted in conjunction with the basic principle of ethics that it is prima facie wrong to act in such a way that may subject others to serious but unnecessary and avoidable harm, it follows that many beliefs are morally wrong.

In this paper the strategy for the eliminative reduction of the alethic modalities suggested by John Venn is outlined and it is shown to anticipate certain related contemporary empiricistic and nominalistic projects. Venn attempted to reduce the alethic modalities to probabilities, and thus suggested a promising solution to the nagging issue of the inclusion of modal statements in empiricistic philosophical systems. However, despite the promise that this suggestion held for laying the ‘ghost of modality’ to rest, this general approach, tempered modal eliminativism, is shown to be inadequate for that task.

Given the sheer vastness of the totality of contemporary human knowledge and our individual epistemic finitude it is commonplace for those of us who lack knowledge with respect to some proposition(s) to appeal to experts (those who do have knowledge with respect to those propositions) as an epistemic resource. Of course, much ink has been spilled on this issue and so concern here will be very narrowly focused on testimony in the context of epistemological views that incorporate evidentialism and internalism, and which are either reductivist or non-reductivist in nature. Also, as the main question about testimony addressed here is whether or not testimony can provide any basic justification at all, attention will be narrowly focused on the simple case where one is presented with testimony that something is the case from only one source and on one occasion. It turns out that there are some seriously odd epistemic features of such appeals to expertise that arise both for those who intend to accept internalism, evidentialism and reductivism about justification by testimony and for those who intend to accept internalism, evidentialism and non-reductivism about justification by testimony.