
Some people seem to have problems conceiving of philosophical zombies. In this post, I try to show that they are conceivable. (And that is the only claim I defend; I am not endorsing any particular philosophy of mind here.)

Note that, according to the PhilPapers survey, P-zombie inconceivability is the minority position.

Carroll inspires me to try to make one point I think worth making, even if it is also ignored. My target is people who think philosophical zombies make sense. Zombies are supposedly just like real people in having the same physical brains, which arose through the same causal history. The only difference is that while real people really “feel”, zombies do not. But since this state of “feeling” is presumed to have zero causal influence on behavior, zombies act exactly like real people, including being passionate and articulate about claiming they are not zombies. People who think they can conceive of such zombies see a “hard question” regarding which physical systems that claim to feel and otherwise act as if they feel actually do feel. (And which other systems feel as well.)

Zombies looked conceivable when you looked out at a beautiful sunset and thought about the quiet inner awareness inside you watching that sunset, which seemed like it could vanish without changing the way you walked or smiled; obedient to the plausible-sounding generalization, "the inner listener has no outer effects". That generalization should _stop_ seeming possible when you say out loud, "But wait, I am thinking this thought right now inside my auditory cortex, and that thought can make my lips move, translating my awareness of my quiet inner listener into a motion of my lips, meaning that consciousness is part of the minimal closure of causality in this universe." I can't think of anything else to say about the conceivability argument. The zombies are dead.

I'll try here to help anyone conceive P-zombies.

(I)

Let's start with the trivial zombie case. In this hypothetical world, you have beings that are like humans but are not conscious. In this world, there is no vocabulary relating to consciousness - no talk of pain, unconsciousness, redness, qualia, etc.

Is there any problem conceiving this world?

When you first arrive there, you see what you would expect to see: cities, roads, people doing what people do, etc. You would initially think that they are conscious, but on a closer look, you would realise that they never talk about anything mental!

I guess that you can conceive this. But with this, you haven't yet joined the ranks of Zombie-ism:

If zombies are to be counterexamples to physicalism, it is not enough for them to be behaviorally and functionally like normal human beings: plenty of physicalists accept that merely behavioral or functional duplicates of ourselves might lack qualia. **Zombies must be like normal human beings in all physical respects.**

These lite-zombies do not engage in certain activities that regular humans do engage in, like discussing consciousness.

(II)

Now, let us move on to the almost-full zombie world, where zombies do talk about everything we talk about.

A hypothetical dialogue between a human and a P-zombie could go like this:

H: What colour is that leaf?

Z: Green, why?

H: But what do you mean by that?

Z: I mean that the leaf has a colour, and that colour is green. More precisely, when I see the leaf, certain wavelengths enter through my eyeballs and induce certain activity in a certain region of my brain, causing me to utter the words just uttered. 'Seeing green' is shorthand for that process, a particular way of talking about the behavior of certain physical systems, like the brain.

This almost-full zombie world is not yet the zombie world. Daniel Dennett argues here that P-zombies have to act in every possible circumstance like humans. It could be said, however, that there are humans who disbelieve in consciousness (proclaiming it to be an illusion, a product of a cognitive algorithm), and who thus end up talking like that. Those humans and their zombie versions would produce the same behaviour; hence, at least for a set of humans, zombie-ism is true. That is, there exists a subset of the world such that it is possible to create a Zombie World for that subset of the world.

Now, what about most of us who are not insane and recognise the obviousness of consciousness?

My zombie version would have to write the same posts I do, and talk about consciousness in the same way as I do. In doing so, the zombie would not be confused by or understand anything, would not have beliefs/thoughts/preferences, would not be justified or unjustified in anything he says, etc. Those are all mental categories. If the zombie says 'I am in pain', that event is in the same category as a simple print("I am in pain") compiled and executed by a computer. That is, it is true in a trivial functionalist sense (it does what systems that we usually think feel pain do), but false in a strict sense (there is no associated quale of pain, or preference against pain).
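To make the analogy concrete, here is a minimal toy sketch (all names are mine, purely illustrative) of the "trivial functionalist sense" in which such an utterance is true: a system that reports pain whenever a damage signal crosses a threshold. It does what pain-reporting systems do, yet nothing in it is plausibly a quale.

```python
def pain_report(damage_signal: float, threshold: float = 0.5) -> str:
    """Emit a pain utterance iff the input signal exceeds the threshold.

    Functionally, this behaves like a sufferer; strictly, there is
    nothing it is like to be this function.
    """
    if damage_signal > threshold:
        return "I am in pain"
    return "I am fine"

print(pain_report(0.9))  # "I am in pain"
print(pain_report(0.1))  # "I am fine"
```

The zombie's sentence, on this view, is true in the same sense that the first printed line is "true" of this program: it correctly tracks a functional state, nothing more.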

(III)

Let us confront anti-zombie arguments and show why they fail. First, two from the SEP (I do not address verificationism or functionalism because they basically beg the question being discussed).

Zombies’ utterances. Suppose I smell roasting coffee beans and say, ‘Mm! I love that smell!’. Everyone would rightly assume I was talking about my experience. But now suppose my zombie twin produces the same utterance. He too seems to be talking about an experience, but in fact he isn't because he's just a zombie. Is he mistaken? Is he lying? Could his utterance somehow be interpreted as true, or is it totally without truth value? Nigel Thomas (1996) argues that ‘any line that zombiphiles take on these questions will get them into serious trouble’.

Is he mistaken? No, he is not. He just does what he is supposed to do.

Is he lying? No. Lying in the usual sense requires the intention to deceive (or some other related mental property).

Is the utterance true? As I discussed above, in a strict sense, no. In a loose, metaphorical, poetic sense, yes. We could talk about zombies using our mental categories because it is useful to do so, in the same way we use them to understand other people.

Arguably it is a priori true that phenomenal consciousness, whether actual or possible, involves being able to refer to and know about one's qualia. If that is right, any zombie-friendly account faces a problem. According to the causal theory of reference — which is widely accepted — reference and knowledge require us to be causally affected by what is known or referred to (Kripke 1972/80); and it seems reasonable to suppose that this too is true a priori if at all. On that basis, in those epiphenomenalistic worlds whose conceivability seems to follow from the conceivability of zombies — worlds where qualia are inert — our counterparts cannot know about or refer to their qualia. That contradicts the assumption that phenomenal consciousness requires reference to qualia, from which it follows that such epiphenomenalistic worlds are not possible after all. Therefore zombies are not conceivable in the relevant sense either, since their conceivability leads a priori to a contradiction. To summarize: if zombies are conceivable, so are epiphenomenalistic worlds. But by the causal theory of reference, epiphenomenalistic worlds are not conceivable; so zombies are not conceivable either.

Indeed! Zombies do not know anything in the sense we do. But this argument is good, as it is related to the one Yudkowsky makes, which we will discuss later.

These claims all follow from our very standard and well-established info theory. We get info about things by interacting with them, so that our states become correlated with the states of those things. But by assumption this hypothesized extra “feeling” state never interacts with anything. The actual reason why you feel compelled to assert very confidently that you really do feel has no causal connection with whether you actually do really feel. You would have been just as likely to say it if it were not true. What could possibly be the point of hypothesizing and forming beliefs about states about which one can never get any info?

Epiphenomenalists could hold that there is interaction between that extra "feeling" and the physical world, but that causality runs from the brain to consciousness, and not the other way around. This way, information about the world is reflected as a conscious experience, but conscious experiences themselves do not cause anything. The brain would be doing its thing, generating consciousness and making you believe you are doing what you do because of this or that mental property. So yes, you could have (in a universe where there is a zombie you) said the exact same thing without the conscious experience associated with it. As Max Tegmark puts it, consciousness could be 'what information being processed feels like'. (Tegmark, 2014)
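This one-way causal structure can be sketched as a toy program (the names and the "experience" representation are hypothetical, chosen only to make the arrow of causation visible): the stimulus fully determines the behaviour, and it also produces an "experience" record that nothing downstream ever reads.

```python
def brain_step(stimulus: str) -> str:
    """Purely physical processing: stimulus in, behaviour out."""
    return f"say: {stimulus} smells great"

def experience_of(stimulus: str) -> str:
    """Epiphenomenal by construction: produced, but never consumed
    by any behaviour-generating code."""
    return f"what it feels like to smell {stimulus}"

behaviour = brain_step("coffee")
experience = experience_of("coffee")  # causally downstream, causally inert

# Deleting `experience` would change nothing about `behaviour`:
assert behaviour == brain_step("coffee")
```

The point of the sketch is only structural: causation flows from the physical process to the experience, never back, so the behaviour is identical with or without the second function.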

Secondly, one can get lots of info about conscious experience: knowing what something feels like is knowledge about something. Hanson rejects this, saying that he meant the technical definition of information -within information theory-. But the fact that information theory cannot (as of today, at least) capture mental events is no reason to deny that knowing what something feels like is having information in the commonly used sense of the word.

If something has no causal effect, you can't know about it. The territory must be causally entangled with the map for the map to correlate with the territory. To 'see' something is to be affected by it. If an allegedly physical thing or property has absolutely no causal impact on the rest of our universe, there's a serious question about whether we can even talk about it, never mind justifiably knowing that it's there.

Which is the point that I have answered before: With epiphenomenalism, causality runs in one direction.

It is a standard point—which zombie-ist philosophers accept!—that the Zombie World's philosophers, being atom-by-atom identical to our own philosophers, write identical papers about the philosophy of consciousness. At this point, the Zombie World stops being an intuitive consequence of the idea of an inner listener. Philosophers writing papers about consciousness would seem to be at least one effect of consciousness upon the world. You can argue clever reasons why this is not so, but you have to be clever. You are no longer playing straight to the intuition.

Seemingly. This is in part why I presented the limited Zombie World scenario above: in general, what seems to cause trouble for people trying to conceive P-zombies is P-zombies doing things that seem totally impossible to do without consciousness.

Zombie-ists are property dualists—they don't believe in a separate soul; they believe that matter in our universe has additional properties beyond the physical. "Beyond the physical"? What does that mean? It means the extra properties are there, but they don't influence the motion of the atoms, unlike the properties of electrical charge or mass. The extra properties are not experimentally detectable by third parties; you know you are conscious, from the _inside_ of your extra properties, but no scientist can ever directly detect this from outside.

Yudkowsky asserts that Zombie-ists are property dualists, but as I read it, being able to conceive of zombies does not commit one to a particular philosophy of mind. I can, for example, hold that it is possible - but not the case - that our universe was created by Alien Gods. That doesn't make me an Alien God believer. Yudkowsky says that epiphenomenalism (and other Chalmersian views) is implausible and that there are more plausible views -fair enough-. But that does little to criticise zombies.

That-which-we-name "consciousness" happens within physics, in a way not yet understood, just like what happened the last three thousand times humanity ran into something mysterious.

Your intuition that no material substance can possibly add up to consciousness is incorrect. If you _actually_ knew exactly why you talk about consciousness, this would give you new insights, of a form you can't now anticipate; and afterward you would realize that your arguments about normal physics having no room for consciousness were flawed.

Consciousness, whatever it may be—a substance, a process, a name for a confusion—is not epiphenomenal; your mind can catch the inner listener in the act of listening, and say so out loud. _The fact that I have typed this paragraph_ would at least seem to refute the idea that consciousness has no experimentally detectable consequences.

Not quite. It is plausible - and I subscribe to this view - that consciousness causes stuff. But it is also plausible - possible - that we act and consciousness doesn't do anything: consciousness would just be an ex-post feeling of what the system just did.

Your brain could have done X (perfectly explainable by physics), and then the conscious I is notified of the decision, perhaps even with an attached feeling of having chosen. This view, I would say, gets some support from neuroscientific research, so we cannot reject it out of hand. And if we can't, Eliezer's enjoyable paragraph writing is no rebuttal to Zombie-ism.

**(IV)**

One final act of necromancy, if the above failed to put a nice picture of a P-zombie in your mind.

Assume this disjunction is true: either consciousness is some fundamental component of the universe (panpsychism), or the laws of physics in this universe entail that consciousness is a property that certain systems have.

With this, we take epiphenomenalism, and the weirdness that was bothering Yudkowsky, off the table.

Now, imagine that a regular person (not a Zombie) is asked to say a number between 1 and 10. What happens is that soundwaves travel through the air and induce movement in the tympanic membrane of the ear; the vibrations end up in the cochlea, which transmits them as electrical impulses to the brain, where they interact with other impulses. After a while, some impulses come down and move the vocal folds and the diaphragm so as to produce speech.

This is accompanied by, perhaps, an inner monologue of 'Which number should I choose?' and a feeling of actually freely choosing something.
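The purely physical part of that chain can be sketched as plain function composition (every name here is hypothetical and grossly simplified): at no step does a "consciousness" variable appear, yet speech comes out the other end.

```python
def eardrum(sound_wave: str) -> str:
    return f"vibration({sound_wave})"

def cochlea(vibration: str) -> str:
    return f"impulses({vibration})"

def brain(impulses: str) -> str:
    # Stand-in for whatever neural process settles on an answer.
    return "7"

def vocal_folds(decision: str) -> str:
    return f"spoken: {decision}"

def respond(sound_wave: str) -> str:
    """Stimulus to speech as a chain of physical steps, nothing else."""
    return vocal_folds(brain(cochlea(eardrum(sound_wave))))

print(respond("say a number between 1 and 10"))  # "spoken: 7"
```

Whether anything in the chain is accompanied by experience is exactly the question the thought experiment leaves open; the sketch only shows that the input-output story needs no extra term.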

In the above process, imagine being able to get into the brain and trace everything as it happens. Perhaps you even have the Equations of Consciousness at your disposal, which explain that this bunch of interconnected neurons here and there gathers some proto-conscious events and produces consciousness, or whatever. The proto-conscious events, say, create a subjective experience of varying complexity if they are integrated into a greater whole.

Now, once you have imagined how the hard problem of consciousness is solved, imagine a universe different from ours. This universe has only two dimensions, and the entropy of the universe as a whole stays constant. In addition, there are no Equations of Consciousness and no protoconscious events. There are 2D beings that walk around, and they are obviously not conscious. Imagine things like insects and simple reptiles. These are zombies -in a sense- but not the zombies we want.

You just imagined a universe of zombies. Now quickly change from 2 to 3 dimensions, and from ΔS=0 to ΔS>0.

Bring to your mind what you saw earlier: all the electrical impulses going around in the brain to produce a response. No consciousness involved.

(V)

What's the point of P-zombies? As I see it, at least to illustrate our present lack of understanding regarding consciousness, and to substantiate one of the five premises that cannot all be accepted together, and that compose the Mind-Body problem:

1. The properties of a system are a necessary consequence of the properties, activities, and arrangements of its components.

2. People are made of atoms, quarks, etc.

3. Atoms are purely physical things: their properties, activities, and relations are all physical.

4. People have mental states.

5. No statement ascribing a mental predicate can be derived from any set of purely physical descriptions; equivalently, the existence of a given mental state is not a (metaphysically) necessary consequence of the existence of any purely physical state.

The Zombie argument (or Mary's room, or the case of inverted qualia*) gives us reason to think that 5 is true. Regardless of the truth about how consciousness works in our universe, it could still have been some other way. Perhaps a universe with consciousness would be different from a universe without it.

But accepting that consciousness is not a passive inert thing -and thus agreeing with Yudkowsky- does not suffice. It could still have been a passive inert thing.

I don't think this refutes physicalism. We may still find consciousness particles, or consciousness equations. But it does suggest that it might be wrong.

*Perhaps this is even better than zombies: to motivate that there is no necessary connection between the physical and the mental, consider that what you see as red you might see as yellow, and vice versa. We could live in the same universe, talk in the same way, and still have different qualia. An external observer, examining only physical properties, would not be able to guess what the qualia in the universe are like. Doesn't this beg the question against physicalism? In the end it boils down to a philosophical argument, built with the knowledge we now have, to see what is the most plausible thing to believe: will we ever find physical laws and explanations for consciousness, or not? Previous attempts at this game (the infamous élan vital) were successfully explained away. But this time seems different. We are now probing deep into the brain and groping our way towards understanding physics, and while there are a few unsolved problems in physics, we don't even have a way to link that enormous edifice of Physics to conscious experience. There is nothing within string theory or loop quantum gravity to explain consciousness. This fact, plus the ease of imagining zombies, plus the seeming conceptual gap between the physical and the subjective, are strong arguments against physicalism being true.

Some people, admitting this, take a different route: deny consciousness; say that it is an illusion, because it doesn't fit, or is not expected to fit, within a scientific understanding. That, I think, is essentially a reductio ad absurdum of physicalism. If anything is evident, it is that we are conscious. As Descartes famously said, there cannot be anything more certain than that. Whatever conclusion one arrives at, if it conflicts with that fact, one ought to reject one of the other premises in the argument.

We still could have, somehow, physicalism and consciousness. This is David Pearce's Non-materialist physicalism proposal. But if we ever prove that physicalism does not allow us to explain consciousness as a real phenomenon -thus forcing us to be eliminativists on pain of logical inconsistency- that would count as a refutation of physicalism, not of consciousness.

I don't think the Zombie argument entails that, so one is not forced to believe much else if one accepts that conceiving zombies is possible. It does show that there exists a possible universe in which physics does not explain consciousness. Do we live in such a universe? We might know in some decades. I hope we don't: we'd have to live with the mystery forever!

EDIT (31/07/16): It appears Yudkowsky has a different concept of zombies than mine. His is based on a reading of Chalmers's The Conscious Mind, and such a reading is plausible. But Chalmers has recently said that zombies do not imply epiphenomenalism, so while they might be implausible under Yudkowsky's concept, they are not under mine. See 17:03 to 20:25 of this video.

EDIT (04/04/17): I made a mistake: while it is true that you can have consciousness along for the ride with one-way causality, experiencing things, epiphenomenalism doesn't allow consciousness to in turn cause my writings about consciousness. So non-epiphenomenal zombies are still conceivable, but epiphenomenalism is troublesome.

BONUS: Did you know that perhaps consciousness is not required for any task in particular (as far as we have tested), but that it makes some activities easier or more efficient? (Shea and Frith 2016)

There is plenty of evidence suggesting that consciousness, or an attentional phenomenon closely related to consciousness, is important for many forms of learning, memory and voluntary control of behaviour. However, a leading strategy in scientific research on consciousness is in search of something stronger: tasks that can only be performed using conscious representations. For example, it has been variously claimed that consciousness is required in order: to integrate or bind perceptual features (Dehaene and Naccache 2001; Baars 2002; Tononi 2004); to keep a representation online in the absence of stimulation (Greenwald et al. 1996; Dehaene and Naccache 2001) or to integrate motivational states with causal learning (Dickinson and Balleine 2009).

The history of this enterprise is not encouraging. Almost all proposed functions have been matched by plausible findings where the effect is shown to be produced in the absence of consciousness (Faivre et al. 2014; Soto et al. 2011 and Winkielman et al. 2005, respectively). Certainly, there is no clear case of a task for which consciousness is required. Disputes arise because of the intricacies of measuring consciousness and its absence – such that some researchers doubt that there is any non-chance performance without consciousness (Newell and Shanks 2014). So the case that there is actually any cognitive processing of non-conscious representations is far from conclusive (Phillips 2015).

I'm currently reading The Illusion of Conscious Will by Daniel Wegner (https://mitpress.mit.edu/books/illusion-conscious-will).

I haven't finished it yet, but Wegner reviews a lot of research on the topic of consciousness and it's leading him pretty hard into an interpretation of consciousness as something akin to a feeling derived after a pre-conscious thought or action has been decided on, to better establish authorship and responsibility over said thought/action and perhaps to enable better social communication of it. At most, consciousness is a type of mental module that helps us string together more complex sequences of thoughts or actions that we are not familiar with and have not become fully routine to us. If we repeat them often enough, they no longer (or rarely) enter the realm of conscious awareness.

I may be oversimplifying, but this interpretation leaves open the possibility of P-Zombies existing in our universe, for you would only need an intelligence that did not require conscious awareness to assist in learning new or complex tasks. Something like the aliens in Peter Watts' novels Blindsight and Echopraxia, for example.

"Epiphenomenalists could hold that there is interaction between that extra “feeling” and the physical world. Causality would run from the brain to consciousness, and not the other way around. This way, the information about the world is reflected as a conscious experience, but conscious experiences themselves would not be causing anything. The brain would be doing its thing, generating consciousness and making you believe you are doing what you do because this or that mental property."