Thursday, 1 March 2012

Identity Expressed with One-Place Predication

Introduction

Frege's famous paper On Sense and Reference begins with the question of whether identity is a relation. Frege then goes immediately on to ask whether it is a relation between objects, or their names. The latter question then sees most of the action.

This is a confusing issue. What is a relation, anyway? What does it mean to hold that identity statements ascribe a relation, as opposed to doing something else? Might there not be various ways of categorizing things, perhaps involving, or giving rise to, slightly different senses of 'relation'?

It is not my aim here to give an overall discussion of the main philosophical problems surrounding identity statements. My purpose is to show how easily we can modify first-order logic with identity (FOL=) so that identity statements are treated as one-place predications rather than two-place relational predications. Comparing the result with natural language identity statements such as 'Hesperus is Phosphorus' makes the occurrence of 'is' look more like a copula ("the 'is' of predication") than a relation symbol (some special "'is' of identity"). Sentences like 'Hesperus is identical to Phosphorus' then look, by contrast, more comparable to the familiar '=' form in logic - that is, more like they contain a relation symbol.

I had thought of this possibility before, at least in part, but it came forcefully to mind recently when I was reading Delia Graff Fara's draft paper, 'Names as Predicates'. The theory put forward there is sophisticated, but my basic thought was: if, as Fara argues, 'Hesperus is Phosphorus' is not an identity, but a statement attributing to Hesperus the property of being Phosphorus, then what do count as identity statements? Statements involving variables? But they can also be treated as one-place predications. Instead of saying these are not identity statements, why not let them be the paradigms of identity statements, and just say that identity statements can be construed as one-place predications? (For Fara, I think, an example of a genuine identity statement would be 'Hesperus is identical to Phosphorus' - cf. the paragraph above. That is, Fara makes it a requirement of identity-statementhood that the statement have a two-place relational syntax, whereas I don't wish to. This is a fairly unimportant terminological difference as far as I can see.)

The Strategy

We make three modifications to the ordinary syntax and semantics of first-order logic with identity:

- Instead of having a special symbol '=' in our stock of two-place predicates, we add two pointy bracket symbols '<' and '>' to the vocabulary.

- Add the following clause to the recursive specification of the well-formed formulae: For all terms T, '<T>' is a one-place predicate. ('T' here is a syntactic variable, specifically a term placeholder.)

- Instead of mapping '=' to a set of repetitive ordered pairs - one for each object in the domain, containing that object twice, i.e. "the identity relation" construed extensionally - we add the following rule to the semantics: For any term T which has a referent, let the sole member of <T>'s extension be T's referent.

(Note on quantified formulae: this works most clearly with the style of semantics where one considers models which contain a new constant in place of the variables bound by the quantifier, but it also works with variable-assignment semantics, if we class assignments to variables as referents.)

Now, in place of, e.g., 'a = b', we write '<b>a'. In place of '∃x (x = x)', we write '∃x(<x>x)', etc.
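The three modifications can be sketched in a few lines of Python. This is a toy model, not anything from the post itself: the terms ('hesperus', 'phosphorus', 'ares') and their referents are made up for illustration. The point it shows is that each term T is mapped to a one-place predicate '<T>' whose extension is the singleton of T's referent, and '<T>S' comes out true just in case S's referent falls in that extension - no set of ordered pairs appears anywhere in the evaluation.

```python
# Toy model: a referent assignment for terms (illustrative names only).
referents = {"hesperus": "venus", "phosphorus": "venus", "ares": "mars"}

def term_predicate_extension(term):
    """Extension of the one-place predicate <term>: the singleton of term's referent."""
    return {referents[term]}

def satisfies(subject, bracketed):
    """Truth of the atomic formula '<bracketed>subject' in the toy model."""
    return referents[subject] in term_predicate_extension(bracketed)

print(satisfies("hesperus", "phosphorus"))  # True: 'Hesperus is Phosphorus'
print(satisfies("hesperus", "ares"))        # False
```

Notice that the semantic clause never mentions an identity relation construed as a set of repetitive pairs; it only ever hands back a singleton extension.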

Remarks

This way of doing things is interesting in that we can, in an important sense, say everything we said with '=', while using a language that doesn't suggest any talk about identity as a relation which holds between all objects and themselves. The illumination this affords is, I think, the sort of thing Wittgenstein was talking about when he wrote the following:

Each time I say that, instead of such and such a representation, you could also use this other one, we take a further step towards the goal of grasping the essence of what is represented. (Philosophical Remarks, sect. 1.)

I am also reminded, in an obscure way, of this unforgettable passage in Russell's Logical Atomism lectures:

There is a good deal of importance to philosophy in the theory of symbolism, a good deal more than at one time I thought. I think the importance is almost entirely negative, i.e. the importance lies in the fact that unless you are fairly self-conscious about symbols, unless you're fairly aware of the relation of the symbol to what it symbolizes, you will find yourself attributing to the thing properties which only belong to the symbol. That, of course, is especially likely in very abstract subjects such as philosophical logic, because the subject-matter that you are supposed to be thinking about is so exceedingly difficult and elusive that any person who has ever tried to think about it knows you do not think about it except perhaps once in six months for half a minute. (Logical Atomism Lectures, Logic and Knowledge, p. 185.)

12 comments:

You say, "In place of '∃x (x = x)', we write '∃x(〈x〉x)'". Could you explain this a bit more? You say you treat "〈a〉" as a predicate, but if so, the position between the angle brackets is not open for quantification. It would require some second-order resources to quantify into that. This might not be a bad thing since many people think identity is a second-order notion anyway.

If you can form '〈t〉' even where t is or contains a variable, I don't see that you've changed anything. What's the difference between '〈t〉' and the string 't='? They both form atomic formulas when concatenated with a term, and quantification needs to work the same for both, as Shawn says, if your language is to be first order. I don't think the difference between writing an expression with two symbols, as 〈x〉y, and writing it with one, as x=y - and this is the only difference between '〈x〉y' and 'x=y' - is enough to ensure that the expression isn't being used to talk about a relation, as opposed to a one-place predicate.

As I have construed things, occurrences of variables inside term-containing predicates can be bound by (first-order) quantifiers. (Or can you see a problem arising with this which I've missed?)

Anonymous,

You're right that, syntactically, that's all the difference amounts to. The point is that when we do things this way, identities get classified as one-place predications, and their semantics does not explicitly involve any identity relation. I find this instructive.

At the end of your main comment, you use 'talk about' in a way I'm not quite clear about. In one sense, I'm happy to admit that formulae like '< b >a' "talk about a relation", in saying something which is equivalent to the relational formula 'a = b'. But in at least two other senses, such formulae don't talk about a relation: they don't refer to any relation, nor do they contain a relation symbol.

What you've done here looks to me like a special case of the general operation of currying a function: replacing many-argument operations with one-argument higher-order operations. For example, the two-place addition function can be replaced with a one-place function add, where add(x) is a function that takes each number y to x+y.
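The commenter's addition example can be spelled out concretely. This is a generic illustration of currying (the name `curried_add` is made up for the sketch): the two-place function is replaced by a one-place function whose value is itself a one-place function.

```python
def curried_add(x):
    """Curried addition: curried_add(x) is the one-place function y -> x + y."""
    return lambda y: x + y

add_two = curried_add(2)  # a one-place function, 'the property of summing with 2'
print(add_two(3))         # 5, i.e. 2 + 3
```

On this analogy, the post's '〈a〉' would play the role of `curried_add(2)`: a one-place predicate obtained by fixing one argument of a two-place operation.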

The fact that it's general doesn't make it uninteresting. (It's a neat trick, and one that is often used in computer science to simplify function representations.) But it does raise the question: is there anything special about identity that makes you think its curried form is particularly illuminating?

Also, I think that describing this as "identity as a one-place predicate" is a bit misleading. Being-identical-to-Jeff is a one-place predicate. But *identity*, in general, on your account is represented by the operation ⟨·⟩. That isn't a one-place predicate; it's something that takes terms to one-place predicates - a kind of higher-order function symbol.

OK. I was pointing out that the syntactic difference is negligible, since in the syntactic sense there's nothing that makes putting a single symbol between two variables a better way of representing a relation than using two symbols. It's the semantics we use that make our relation symbols relation symbols. But I can see now that you are actually more interested in an aspect of the semantics.

Still, though, there are two aspects to the semantics. One is purely extensional - associating formulas with the models (and variable assignments) that satisfy them. You don't claim to be making any change to this aspect, because < a >b and a=b are equivalent. The other is about how to decide compositionally whether a particular model stands in that relation to a particular formula, and that's the part that makes use of an identity relation or your new identity predicates. You are making a change here, as you point out, but the introduction of the symbols < > is fairly superfluous, because you could just decide that our procedure for evaluating a=b in a model will be to interpret a= as the predicate whose extension is the singleton of a's referent and then check whether b's referent is an element (i.e. to interpret a= as you plan to interpret < a >). So I'm guessing that you think there is something further to be gained from changing the syntax. Is it just to prevent logicians from thinking of = in terms of a relation out of habit?

Thanks for commenting. I'm not sure if I understand your question, or precisely what currying amounts to for two-place relations in a first-order language.

Suppose you wanted to do the same thing with 'loves'/'L', with curly brackets this time: you'd have to say something like: for all terms T, let {T} be a predicate, with everything which loves T's referent in its extension.

But I don't need to do that in the case of identity: I don't need to say 'Let everything which is identical to T's referent be in the extension'. I can just say 'Let T's referent be in the extension'. So this doesn't seem like just another case of "currying a relation", if I understand the procedure rightly.
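The contrast being drawn here can be made concrete with a toy sketch (the referents and the 'loves' relation are stipulated for illustration): the curried clause for 'loves' has to scan the domain and consult the relation, whereas the clause for '<T>' just hands back a singleton, with no relation consulted.

```python
domain = {1, 2, 3}
referents = {"a": 1, "b": 2}
loves = {(1, 2), (3, 2)}  # a stipulated two-place relation

def curly_extension(term):
    """Extension of {term}: everything that loves term's referent - consults the relation."""
    return {x for x in domain if (x, referents[term]) in loves}

def angle_extension(term):
    """Extension of <term>: just the singleton of term's referent - no relation consulted."""
    return {referents[term]}

print(curly_extension("b"))  # the b-lovers: {1, 3}
print(angle_extension("a"))  # {1}
```

The asymmetry is visible in the code: `curly_extension` cannot be written without mentioning `loves`, but `angle_extension` mentions no relation at all.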

On your last point: I agree that it would be misleading to describe anything in the above post as 'identity as a one-place predicate', for just the reason you give. I don't describe it that way.

Anonymous,

That all seems right, yes. I accept that I could have just changed the semantics while keeping the syntax the same. I think changing the syntax in this way indicates the changed semantics. So yes, you could say it's to prevent people seeing it as a relational form out of habit. (I should say though, in case it seems otherwise, that I don't particularly want anyone to adopt my notation.)

Well, just saying that isn't quite going to work, since it doesn't rule out that other things besides T's referent might also be in the extension. So you'd want to say: let T's referent and only T's referent be in the extension. I'm used to analyzing this "only" in terms of identity: if x is in the extension then x is identical to T's referent. But maybe you could come up with an alternative approach. I'd be interested to see it worked out.

Similarly, if the extension of '<a>' is a set, then you can specify it as { the extension of 'a' }, and again identity doesn't come up explicitly. I'm used to analyzing this set notation in terms of identity, too: x is in {y, z} iff x = y or x = z. But maybe you could explain things differently, and I'd also be interested to see those details worked out.

When writing the post, I also thought about the fact that there is no very explicit ban on multiple objects being in the extension of a term-containing predicate. Note however that I do say 'let the sole member of' a term-containing predicate be the contained term's referent. So no model conforming to my specifications will give a term-predicate an extension containing multiple objects.

Now it may be that the best way to define 'sole', or 'only', involves the notion of identity, but I'm not sure that's any sort of problem for what I'm saying. Firstly, the metalinguistic use of identity in a definition of 'sole' or 'only' could itself be construed much as I construe it in the object language. Secondly, and perhaps more to the point, I'm not trying to give a reductive analysis of the notion of identity.

That sounds fine to me, but if that's your attitude then I'm not sure how to take your response about how identity is special. Couldn't we make the same moves for an arbitrary relation? The extension of the one-place predicate "a loves ..." is the set of things which are a-loved.

Sorry for the late reply. I don't think that is making the same moves. You need to specify a relation in your example ('loved' in 'a-loved'), but in the case of identity I can just say: let the extension of 'is T', where 'T' is a term, be {T}.

(This is basically just a repetition of what I said in my second last comment in the paragraph beginning 'But I don't need to...'. I think it shows the answer to your question - 'Couldn't we make the same moves for an arbitrary relation?' - to be 'no'.)