Fission, Fusion and Methuselah in the Real World: Part 1

I’m going to blog the paper I’m currently writing. Here’s the first section; I’ll put up the next few sections over the next few days. Obviously, this is first draft material, but that’s what blogs are for!

I. Do We Discover or Invent Answers to Fission, Fusion, and Methuselah Cases?

The literature on personal identity is littered with fission (and fusion) problems: How do we trace identity through a case where one person splits into two? This can be a split-brain case, as in Nagel[1]; amoeba-like fission, as in Williams[2] and Parfit[3] (1987); Parfit’s teletransporter cases (ibid., p. 200); Lewis’s unspecified fissioning[4]; or one of the many variants of these found in Shoemaker[5],[6], John Perry[7], more recent work by Bakajian[8], or many others.

Somewhat less work has been done on the “Methuselah problem,” that is, the question of whether someone who lives 900 years, undergoing a standard rate of change in personality, memory, bodily components, etc., can be said to remain the same person. However, Lewis (1976, p. 30), Parfit, and others have tackled this too, and it forms an important part of the continuity literature.

I want to note two things about these problems. First, as presented, it seems likely that there is no solution to them: our use of “person” developed without access to such cases, so there is no particular reason to think the word would have an automatic, clear-cut application to them.

Second, I think there are real-world correlates of these cases, and applying some of the work on the imaginary cases might open up ways of thinking about the real ones differently, and productively.

On the first point, Parfit cites Quine as saying: “I wonder whether the limits of the [science-fiction-case] method are properly heeded. To see what is ‘logically required’ for sameness of person under unprecedented circumstances is to suggest that words have some logical force beyond what our past needs have invested them with” (quoted in Parfit, p. 200). I think it was Hilary Putnam who made a similar claim about whether we could say, at some point in the future when robots start acting like humans, that the robots have consciousness: if I recall correctly, he said it would simply be a decision, a convention of language that we would adopt.

That strikes me as wrong, as it seems like there’s a matter of fact about whether an entity is conscious or not.

But I’m not convinced that there is always a matter of fact about whether an entity is the same person as some prior entity. We can demand an answer, but one reason we disagree so much about the puzzle cases is that we’re not discovering the truth of the matter; we’re putting forward proposals for policies. If Brown splits into two people, they’re either both Brown, or one of them is, or neither of them is, or each is Brown to a diminished extent, and so on.

I think this impasse is reached in part because all of the answers seem reasonable. In fact, the argument usually involves claiming that a rejected answer has consequences “we” would rather not accept (this is Lewis’s move, for example, and Bakajian’s and Williams’s). But that’s not necessarily an epistemic reason for rejecting a position: lots of true things are things we’d rather were not true. Further, these rejections usually appeal to intuitions about how “we” use words (Lewis, Bakajian, Shoemaker) or about what sort of entity “we” would feel an identification with (Williams, Nagel, Parfit). The problem with this “we” is that, given how much disagreement there is about these cases, the extension of the “we” is fairly uncertain. And if experimental philosophy (X-Phi) has been informative (and I think it has been!), it has been so in showing that intuitions philosophers take to be generally accepted often turn out not to be so widely shared as was thought.