Leibniz notoriously insisted that no two individuals differ solo numero, that is, by being primitively distinct, without differing in some property. The details of Leibniz’s own way of understanding and defending the principle (known as the principle of identity of indiscernibles; henceforth ‘the Principle’) are a matter of much debate. However, in contemporary metaphysics an equally notorious and much-discussed issue relates to a case put forward by Max Black (1952) as a counter-example to any necessary and non-trivial version of the principle. Black asks us to imagine, via one of the fictional characters of his dialogue, a world consisting solely of two completely resembling spheres, in a relational space. The supporter of the principle is then forced to admit that although there are ex hypothesi two objects in that universe, there is no property (except trivial ones), not even a relational one, to distinguish them, and hence the necessary version of the principle is falsified. In this essay I will argue that Black’s possible world, together with the dialectic between the potential friends and foes of the Principle as expounded by Black himself…

I here propose a hitherto unnoticed possibility of solving embedding problems for noncognitivist expressivists in metaethics by appeal to Conceptual Role Semantics. I show that claims from the latter as to what constitutes various concepts can be used to define functions from states expressed by atomic sentences to states expressed by complex sentences, thereby allowing an expressivist semantics that satisfies a rather strict compositionality constraint. The proposal can be coupled with several different types of concept-individuation claim, and is shown to pave the way to novel accounts of, e.g., negation.

I here argue that Ted Sider's indeterminacy argument against vagueness in quantifiers fails. Sider claims that vagueness entails precisifications, but holds that precisifications of quantifiers cannot be coherently described: they will either deliver the wrong logical form to quantified sentences, or involve a presupposition that contradicts the claim that the quantifier is vague. Assuming (as does Sider) that the “connectedness” of objects can be precisely defined, I present a counter-example to Sider's contention, consisting of a partial, implicit definition of the existential quantifier that in effect sets a given degree of connectedness among the putative parts of an object as a condition upon there being something (in the sense in question) with those parts. I then argue that such an implicit definition, taken together with an “auxiliary logic” (e.g., introduction and elimination rules), proves to function as a precisification in just the same way as paradigmatic precisifications of, e.g., “red”. I also argue that with a quantifier that is stipulated as maximally tolerant as to what mereological sums there are, precisifications can be given in the form of truth-conditions of quantified sentences, rather than by implicit definition.

A dialetheia is a sentence, A, such that both it and its negation, ¬A, are true (we shall talk of sentences throughout this entry; but one could run the definition in terms of propositions, statements, or whatever one takes as her favourite truth-bearer: this would make little difference in the context). Assuming the fairly uncontroversial view that falsity just is the truth of negation, it can equally be claimed that a dialetheia is a sentence which is both true and false.

Many philosophers claim that understanding a logical constant (e.g. ‘if, then’) fundamentally consists in having dispositions to infer according to the logical rules (e.g. Modus Ponens) that fix its meaning. This paper argues that such dispositionalist accounts give us the wrong picture of what understanding a logical constant consists in. The objection here is that they give an account of understanding a logical constant which is inconsistent with what seem to be adequate manifestations of such understanding. I then outline an alternative account according to which understanding a logical constant is not to be understood dispositionally, but propositionally. I argue that this account is not inconsistent with intuitively correct manifestations of understanding the logical constants.

Knowledge of the basic rules of logic is often thought to be distinctive, for it seems to be a case of non-inferential a priori knowledge. Many philosophers take its source to be different from those of other types of knowledge, such as knowledge of empirical facts. The most prominent account of knowledge of the basic rules of logic takes this source to be the understanding of logical expressions or concepts. On this account, what explains why such knowledge is distinctive is that it is grounded in semantic or conceptual understanding. However, I show that this cannot be the correct account of knowledge of the basic rules of logic, because it is open to Gettier-style counter-examples.

The problem of logical constants consists in finding a principled way to draw the line between those expressions of a language that are logical and those that are not. The criterion of invariance under permutation, attributed to Tarski, is probably the most common answer to this problem, at least within the semantic tradition. However, as the received view on the matter, it has recently come under heavy attack. Does this mean that the criterion should be amended, or maybe even that it should be abandoned? I shall review the different types of objections that have been made against invariance as a logicality criterion and distinguish between three kinds of objections, skeptical worries against the very relevance of such a demarcation, intensional warnings against the level at which the criterion operates, and extensional quarrels against the results that are obtained. I shall argue that the first two kinds of objections are at least partly misguided and that the third kind of objection calls for amendment rather than abandonment.

What is a logical constant? The question is addressed in the tradition of Tarski's definition of logical operations as operations which are invariant under permutation. The paper introduces a general setting in which invariance criteria for logical operations can be compared and argues for invariance under potential isomorphism as the most natural characterization of logical operations.
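The permutation-invariance criterion discussed in the abstract above can be checked by brute force on a small finite domain. The following sketch is my own illustration (names such as `is_perm_invariant` and the toy domain are mine, not drawn from any of the papers listed here): a type-⟨1⟩ generalized quantifier, modelled as a set of subsets of the domain, counts as permutation-invariant iff every permutation of the domain maps that set onto itself.

```python
# Brute-force illustration of permutation invariance for generalized
# quantifiers on a small finite domain. A sketch of the Tarski-style
# criterion, not any author's formal framework.
from itertools import permutations

D = [0, 1, 2]  # toy domain

def all_subsets(domain):
    """All subsets of the domain, as frozensets."""
    return [frozenset(x for i, x in enumerate(domain) if mask >> i & 1)
            for mask in range(1 << len(domain))]

def is_perm_invariant(quantifier):
    """quantifier: a set of subsets of D. Invariant iff every
    permutation of D maps the set of subsets onto itself."""
    for pi in permutations(D):
        mapping = dict(zip(D, pi))
        image = {frozenset(mapping[x] for x in s) for s in quantifier}
        if image != set(quantifier):
            return False
    return True

# "Something is F" -- the existential quantifier: invariant (logical).
exists = {s for s in all_subsets(D) if s}
# "Object 0 is F" -- mentions a particular individual: not invariant.
about_zero = {s for s in all_subsets(D) if 0 in s}

print(is_perm_invariant(exists))      # True
print(is_perm_invariant(about_zero))  # False
```

The contrast mirrors the intuition the criterion is meant to capture: the existential quantifier is blind to which individuals are involved, whereas a condition tied to a particular object is not.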

Some philosophers find the following thesis attractive: for every logical constant C there is a set of logical rules of inference R such that a subject knows the meaning of C if and only if she accepts the rules in R. I point out some obvious but, apparently, easily forgotten difficulties concerning this thesis.

I present a notion of invariance under arbitrary surjective mappings for operators on a relational finite type hierarchy generalizing the so-called Tarski-Sher criterion for logicality and I characterize the invariant operators as definable in a fragment of the first-order language. These results are compared with those obtained by Feferman and it is argued that further clarification of the notion of invariance is needed if one wants to use it to characterize logicality.

There is as yet no settled consensus as to what makes a term a logical constant or even as to which terms should be recognized as having this status. This essay sets out and defends a rationale for identifying logical constants. I argue for a two-tiered approach to logical theory. First, a secure, core logical theory recognizes only a minimal set of constants needed for deductively systematizing scientific theories. Second, there are extended logical theories whose objectives are to systematize various pre-theoretic, modal intuitions. The latter theories may recognize a variety of additional constants as needed in order to formalize a given set of intuitions.

This paper discusses the history of the confusion and controversies over whether the definition of consequence presented in the 11-page 1936 Tarski consequence-definition paper is based on a monistic fixed-universe framework, like Begriffsschrift and Principia Mathematica. Monistic fixed-universe frameworks, common in pre-WWII logic, keep the range of the individual variables fixed as the class of all individuals. The contrary alternative is that the definition is predicated on a pluralistic multiple-universe framework, like the 1931 Gödel incompleteness paper. A pluralistic multiple-universe framework recognizes multiple universes of discourse serving as different ranges of the individual variables in different interpretations, as in post-WWII model theory. In the early 1960s, many logicians (mistakenly, as we show) held the ‘contrary alternative’ that Tarski 1936 had already adopted a Gödel-type, pluralistic, multiple-universe framework. We explain that Tarski had not yet shifted out of the monistic, Frege–Russell, fixed-universe paradigm. We further argue that between his Principia-influenced pre-WWII Warsaw period and his model-theoretic post-WWII Berkeley period, Tarski's philosophy underwent many other radical changes.

This paper contains five observations concerning the intended meaning of the intuitionistic logical constants: (1) if the explanations of this meaning are to be based on a non-decidable concept, that concept should not be that of 'proof'; (2) Kreisel's explanations using extra clauses can be significantly simplified; (3) the impredicativity of the definition of → can be easily and safely ameliorated; (4) the definition of → in terms of 'proofs from premises' results in a loss of the inductive character of the definitions of ∨ and ∃; and (5) the same occurs with the definition of ∀ in terms of 'proofs with free variables'.

The paper investigates the propriety of applying the form versus matter distinction to arguments and to logic in general. Its main point is that many of the currently pervasive views on form and matter with respect to logic rest on several substantive and even contentious assumptions which are nevertheless uncritically accepted. Indeed, many of the issues raised by the application of this distinction to arguments seem to be related to a questionable combination of different presuppositions and expectations; this holds in particular of the vexed issue of demarcating the class of logical constants. I begin with a characterization of currently widespread views on form and matter in logic, which I refer to as ‘logical hylomorphism as we know it’ (LHAWKI, for short), and argue that the hylomorphism underlying LHAWKI is mereological. Next, I sketch an overview of the historical developments leading from Aristotelian, non-mereological metaphysical hylomorphism to mereological logical hylomorphism (LHAWKI). I conclude with a reassessment of the prospects for the combination of hylomorphism and logic, arguing in particular that LHAWKI is not the only and certainly not the most suitable version of logical hylomorphism. In particular, this implies that the project of demarcating the class of logical constants as a means to define the scope and nature of logic rests on highly problematic assumptions.

Donald Davidson has claimed that a theory of meaning identifies the logical constants of the object language by treating them in the phrasal axioms of the theory, and that the theory entails a relation of logical consequence among the sentences of the object language. Section 1 offers a preliminary investigation of these claims. In Section 2 the claims are rebutted by appealing to Evans's paradigm of a theory of meaning. Evans's theory is deliberately blind to any relation of logical consequence among the sentences of the object language, and entails only what Evans takes to be a distinct and deeper relation of structural validity among the sentences of the object language. In Section 3 we turn to Evans's motivation in order to compare the two paradigms of a theory of meaning. Evans laid down criteria under which a theory of meaning gives what he called a ‘transcendent’ semantic classification of the lexicon of the object language, in contrast to a mere ‘immanent’ classification. However, when these criteria are applied we find that, pace Evans, they favour Davidson's paradigm over Evans's. In the final section we show that Evans's conception of structural consequence turns out to be a deeper formulation of logical consequence.

I argue for the thesis (UL) that there are certain logical abilities that any rational creature must have. Opposition to UL comes from naturalized epistemologists who hold that it is a purely empirical question which logical abilities a rational creature has. I provide arguments that any creatures meeting certain conditions—plausible necessary conditions on rationality—must have certain specific logical concepts and be able to use them in certain specific ways. For example, I argue that any creature able to grasp theories must have a concept of conjunction subject to the usual introduction and elimination rules. I also deal with disjunction, conditionality and negation. Finally, I put UL to work in showing how it could be used to define a notion of logical obviousness that would be well suited to certain contexts—e.g. radical translation and epistemic logic—in which a concept of obviousness is often invoked.

I argue that it is rational for a person to believe the conjunction of her beliefs. This involves responding to the Lottery and Preface Paradoxes. In addition, I suggest that in normal circumstances, what it is to believe a conjunction just is to believe its conjuncts.

There have been several different and even opposed conceptions of the problem of logical constants, i.e. of the requirements that a good theory of logical constants ought to satisfy. This paper is in the first place a survey of these conceptions and a critique of the theories they have given rise to. A second aim of the paper is to sketch some ideas about what a good theory would look like. A third aim is to draw from these ideas and from the preceding survey the conclusion that most conceptions of the problem of logical constants involve requirements of a philosophically demanding nature which are probably not satisfiable by any minimally adequate theory.

Analyses C. S. Lewis's argument for the existence of 'something in addition to nature', i.e., something which is of a kind that neither depends on nature's interlocking system, nor could be explained as being a necessary product of it. This singular exceptional item, Lewis argued, is rational thought, 'which is not part of the system of nature'.

The philosophical discussion about logical constants has only recently moved into the substructural era. While philosophers have spent a lot of time discussing the meaning of logical constants in the context of classical versus intuitionistic logic, very little has been said about the introduction of substructural connectives. Linear logic, affine logic and other substructural logics offer a more fine-grained perspective on basic connectives such as conjunction and disjunction, a perspective which I believe will also shed light on debates in the philosophy of logic. In what follows I will look at one particularly interesting instance of this: the development of the position known as logical inferentialism in view of substructural connectives. I claim that sensitivity to structural properties is an interesting challenge to logical inferentialism, and that it ultimately requires revision of core notions in the inferentialist literature. Specifically, I want to argue that current definitions of proof-theoretic harmony give rise to problematic nonconservativeness as a result of their insensitivity to substructurality. These nonconservativeness results are undesirable because they make it impossible to consistently add logical constants that are of independent philosophical interest.

The model-theoretic analysis of the concept of logical consequence has come under heavy criticism in the last couple of decades. The present work looks at an alternative approach to logical consequence where the notion of inference takes center stage. Formally, the model-theoretic framework is exchanged for a proof-theoretic framework. It is argued that, contrary to the traditional view, proof-theoretic semantics is not revisionary, and should rather be seen as a formal semantics that can supplement model theory. Specifically, there are formal resources to provide a proof-theoretic semantics for both intuitionistic and classical logic. We develop a new perspective on proof-theoretic harmony for logical constants which incorporates elements from the substructural era of proof theory. We show that there is a semantic lacuna in the traditional accounts of harmony. A new theory of how inference rules determine the semantic content of logical constants is developed. The theory weds proof-theoretic and model-theoretic semantics by showing how proof-theoretic rules can induce truth-conditional clauses in Boolean and many-valued settings. It is argued that such a new approach to how rules determine meaning will ultimately assist our understanding of the a priori nature of logic.

It may be that all that matters for the modalities, possibility and necessity, is the object named by the proper name, not which proper name names it. An influential defender of this view is Saul Kripke. Kripke’s defense is criticized in the paper.

Some bilateralists have suggested that some of our negative answers to yes-or-no questions are cases of rejection. Mark Textor (2011. Is ‘no’ a force-indicator? No! Analysis 71: 448–56) has recently argued that this suggestion falls prey to a version of the Frege-Geach problem. This note reviews Textor's objection and shows why it fails. We conclude with some brief remarks concerning where we think that future attacks on bilateralism should be directed.

Timothy Smiley's wonderful paper 'Rejection' (Analysis 1996) is still perhaps not as well known or well understood as it should be. This note first gives a quick presentation of themes from that paper, though done in our own way, and then considers a putative line of objection - recently advanced by Julien Murzi and Ole Hjortland (Analysis 2009) - to one of Smiley's key claims. Along the way, we consider the prospects for an intuitionistic approach to some of the issues discussed in Smiley's paper.

The focus of this paper is Dummett's meaning-theoretical arguments against classical logic based on considerations about the meaning of negation. Using Dummettian principles, I shall outline three such arguments, of increasing strength, and show that they are unsuccessful by giving responses to each argument on behalf of the classical logician. What is crucial is that in responding to these arguments a classicist need not challenge any of the basic assumptions of Dummett's outlook on the theory of meaning. In particular, I shall grant Dummett his general bias towards verificationism, encapsulated in the slogan 'meaning is use'. The second general assumption I see no need to question is Dummett's particular breed of molecularism. Some of Dummett's assumptions will have to be given up, if classical logic is to be vindicated in his meaning-theoretical framework. A major result of this paper will be that the meaning of negation cannot be defined by rules of inference in the Dummettian framework.

This paper discusses proof-theoretic semantics, the project of specifying the meanings of the logical constants in terms of rules of inference governing them. I concentrate on Michael Dummett’s and Dag Prawitz’s philosophical motivations and give precise characterisations of the crucial notions of harmony and stability, placed in the context of proving normalisation results in systems of natural deduction. I point out a problem for defining the meaning of negation in this framework, and prospects for an account of the meanings of modal operators in terms of rules of inference.

This is a lightly edited version of my comments on Brandom’s Lecture 2, as delivered in Prague at the “Prague Locke Lectures” in April 2007. I try to say why Brandom’s proposed demarcation is significant, by placing it in a broader context of demarcation proposals from Kant to the twentieth century. I then raise some questions about the basic ingredients of Brandom’s demarcation (the notions of PP-sufficiency and VP-sufficiency), and question whether the vocabulary of conditionals, Brandom’s paradigm for logical vocabulary, can be universal-LX.

Logic is usually thought to concern itself only with features that sentences and arguments possess in virtue of their logical structures or forms. The logical form of a sentence or argument is determined by its syntactic or semantic structure and by the placement of certain expressions called “logical constants”. Thus, for example, the sentences “Every boy loves some girl” and “Some boy loves every girl” are thought to differ in logical form, even though they share a common syntactic and semantic structure, because they differ in the placement of the logical constants “every” and “some”. By contrast, the sentences “Every girl loves some boy” and “Every boy loves some girl” are thought to have the same logical form, because “girl” and “boy” are not logical constants. Thus, in order to settle questions about logical form, and ultimately about which arguments are logically valid and which sentences logically true, we must distinguish the “logical constants” of a language from its nonlogical expressions.
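The contrast drawn in the abstract above can be made explicit in first-order notation (my own standard formalization for illustration, assuming predicate letters Boy, Girl and Loves):

```latex
% ``Every boy loves some girl'' -- universal quantifier scoping over existential:
\forall x\,\bigl(\mathrm{Boy}(x) \rightarrow \exists y\,(\mathrm{Girl}(y) \wedge \mathrm{Loves}(x,y))\bigr)

% ``Some boy loves every girl'' -- existential quantifier scoping over universal:
\exists x\,\bigl(\mathrm{Boy}(x) \wedge \forall y\,(\mathrm{Girl}(y) \rightarrow \mathrm{Loves}(x,y))\bigr)

% ``Every girl loves some boy'' -- same form as the first, with the
% nonlogical predicates interchanged:
\forall x\,\bigl(\mathrm{Girl}(x) \rightarrow \exists y\,(\mathrm{Boy}(y) \wedge \mathrm{Loves}(x,y))\bigr)
```

Only the placement of the quantifiers distinguishes the first two formulas, whereas the first and third differ solely in their nonlogical predicates, which is why the latter pair count as sharing a logical form.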

Let us sum up. We began with the question, “What is the interest of a model-theoretic definition of validity?” Model-theoretic validity consists in truth under all reinterpretations of non-logical constants. In this paper, we have described for each necessity concept a corresponding modal invariance property. Exemplification of that property by the logical constants of a language leads to an explanation of the necessity, in the corresponding sense, of its valid sentences. I have fixed upon the epistemic modalities in characterizing the logical constants: to be a logical constant in the language of a population is to be invariant over a modality describing complete possible epistemic states of that population (or an idealized analogue thereof). The grounds for this characterization are these: (1) It leads, I believe, to an extensionally reasonable demarcation of the logical constants, including clear cases and excluding clear non-cases. It gives a principled criterion for deciding unclear cases. (2) It provides an analysis of the topic-neutrality of logic. (3) It leads to an explanation of the epistemic necessity of the logical truths in terms of the topic-neutrality of the logical constants. All the same, it is reasonable to ask, even if the suggested demarcation of logic is extensionally correct, whether it can reasonably be expected to be fundamental. The epistemic invariance of an expression is a rather striking property, one which we should want to explain. What is missing, then, is an explanation of the distinguishing epistemic properties of the constants in terms of more fundamental properties involving their understanding and use. It would be these that properly define the nature, not just the extent, of logic.

In this paper, we shall confine ourselves to the study of sentential constants in the system R of relevant implication. In dealing with the behaviour of the sentential constants in R, we shall think of R itself as presented in three stages, depending on the level of truth-functional involvement.

Sentential constants have been part of the R environment since Church [1]. They have had diverse uses in explicating relevant ideas and in simplifying them technically. Of most interest have been the Ackermann pair of constants t, f, functioning conceptually as a least truth and as a greatest falsehood, under the ordering of propositions under true implication. Also interesting have been the Church constants F, T, functioning similarly as least and greatest propositions.