This paper investigates what happens when we merge two different lines of theorizing about counterfactuals. One is the comparative closeness view, which was developed by Stalnaker and Lewis in the framework of possible worlds semantics. The second is the interventionist view, which is part of the causal models framework developed in statistics and computer science. Common lore and existing literature have it that the two views can be easily fit together, aside from a few details. I argue that, on the contrary, transplanting causal-models-inspired ideas into a possible worlds framework yields a new semantics. The difference is grounded in different algorithms for handling inconsistent information, and hence it touches on issues at the very heart of a semantics for contrary-to-fact conditionals. Roughly, Stalnaker/Lewis semantics requires us to evaluate the consequent of a counterfactual at all closest antecedent-verifying possibilities. Causal-models-based semantics also does this, but in addition uses the information contained in the antecedent, together with background causal information, to shift which worlds count as closest. This makes systematically different predictions and generates a new logic. The upshot is that we have a new semantics to study, and a substantial theoretical choice to make.

This paper has two aims. The first is to use contemporary discussions of naïve realist theories of perception to offer an interpretation of Merleau-Ponty’s theory of perception. The second is to use consideration of Merleau-Ponty’s theory of perception to outline a distinctive version of a naïve realist theory of perception. In a Merleau-Pontian spirit, these two aims are inter-dependent.

What if your peers tell you that you should disregard your perceptions? Worse, what if your peers tell you to disregard the testimony of your peers? How should we respond if we get evidence that seems to undermine our epistemic rules? Several philosophers (e.g. Elga 2010, Titelbaum 2015) have argued that some epistemic rules are indefeasible. I will argue that all epistemic rules are defeasible. The result is a kind of epistemic particularism, according to which there are no simple rules connecting descriptive and normative facts. I will argue that this type of particularism is more plausible in epistemology than in ethics. The result is an unwieldy and possibly infinitely long epistemic rule — an Uber-rule. I will argue that the Uber-rule applies to all agents, but is still defeasible — one may get misleading evidence against it and rationally lower one’s credence in it.

Many philosophers hold out hope that some final condition on knowledge will allow us to overcome the limitations of the classic "justified true belief" analysis. The most popular intuitive glosses on this condition frame it as an absence of epistemic coincidence (accident, luck). In this paper, I lay the groundwork for an explanationist account of epistemic coincidence—one according to which, roughly, beliefs are non-coincidentally true if and only if they bear the right sort of explanatory relation to the truth. The paper contains both positive arguments for explanationism and negative arguments against its competitors: views that understand coincidence in terms of causal, modal, and/or counterfactual relations. But the relationship between these elements is tighter than typical. I aim to show not only that explanationism is independently plausible, and superior to its competitors, but also that it helps make sense of both the appeal and failings of those competitors.

The status of the laws of nature in Hobbes’s Leviathan has been a continual point of disagreement among scholars. Many agree that since Hobbes claims that civil philosophy is a science, the answer lies in an understanding of the nature of Hobbesian science more generally. In this paper, I argue that Hobbes’s view of the construction of geometrical figures sheds light upon the status of the laws of nature. In short, I claim that the laws play the same role as the component parts – what Hobbes calls the “cause” – of geometrical figures. To make this argument, I show that in both geometry and civil philosophy, Hobbes proceeds by a method of synthetic demonstration as follows: 1) offering a thought experiment by privation; 2) providing definitions by explication of “simple conceptions” within the thought experiment; and 3) formulating generative definitions by making use of those definitions by explication. In just the same way that Hobbes says that the geometer should “put together” the parts of a square to learn its cause, I argue that the laws of nature are the cause of peace.

Within Kantian ethics and Kant scholarship, it is widely assumed that autonomy consists in the self-legislation of the principle of morality (the Moral Law). In this paper, we challenge this view on both textual and philosophical grounds. We argue that Kant never unequivocally claims that the Moral Law is self-legislated and that he is not philosophically committed to this claim by his overall conception of morality. Instead, the idea of autonomy concerns only substantive moral laws (in the plural), such as the law that one ought not to lie. We argue that autonomy, thus understood, does not have the paradoxical features widely associated with it. Rather, our account highlights a theoretical option that has been neglected in the current debate on whether Kant is best interpreted as a realist or a constructivist, namely that the Moral Law is an a priori principle of pure practical reason that neither requires nor admits of being grounded in anything else.

In this essay, I discuss two familiar objections to Hume's account of cognition, focusing on whether he can give a satisfactory account of the more normative dimensions of thought and language use. In doing so, I argue that Hume's implicit account of these issues is far richer than is normally assumed. In particular, I show that Hume's account of convention-driven artificial virtues like justice also applies to the proper use of conventional public languages. I then use this connection between Hume's conception of language and his moral theory to show how he can respond to a number of basic objections to his views. Along the way, I explore the sense in which human cognition is essentially linguistic, and so social, for Hume, as well as many other issues concerning the relationship between Hume's philosophy of mind and language, his epistemology, and his ethics.

In his discussions of the Eucharist, Descartes gives a prominent place to the notion of the "surfaces" of bodies. Given this context, it may seem that his account of surfaces is of limited interest. However, I hope to show that this account is in fact linked to a philosophically significant medieval debate over whether certain mathematical "indivisibles", including surfaces, really exist in nature. Moreover, the particular emphasis in Descartes on the fact that surfaces are modes rather than parts of bodies bespeaks the influence of the later scholastic Francisco Suárez. However, in his own contribution to the medieval debate, Suárez refrained from identifying surfaces with modes, holding instead that they are special "constituents" of bodies that differ from the parts of which these bodies are composed. Two main conclusions emerge from the comparison of the views of Suárez and Descartes on surfaces. The first is that Descartes's "modal realist" account is in fact superior to the "moderate realist" account that Suárez offers, for reasons internal to Suárez's own system. The second is that Suárez's reasons for refraining from adopting modal realism in this case serve to highlight a serious deficiency in Descartes's version of this view. In this way, a consideration of the relevant Suárezian background allows us to better appreciate both the strengths and the weaknesses of Descartes's stance on the metaphysics of surfaces.

In this paper, we develop the notion of a natural convention, and illustrate its usefulness in a detailed examination of indirect requests in English. Our treatment of convention is grounded in Lewis’s (1969) seminal account; we do not here redefine convention, but rather explore the space of possibilities within Lewis’s definition, highlighting certain types of variation that Lewis de-emphasized. Applied to the case of indirect requests, which we view through a Searlean lens, the notion of natural convention allows us to give a nuanced answer to the question: Are indirect requests conventional? In conclusion, we reflect on the consequences of our view for the understanding of the semantics/pragmatics divide.