I always hesitate here, because in discussions on consciousness, there seems to be an awful lot of people talking past each other, even more so than on other philosophical issues. Often, it seems, people can’t even agree on what they mean by “consciousness.”

I’ve sometimes thought of telling people that if they want a sense of how I think about consciousness, they should read David Chalmers (whose views I’m very sympathetic to) and Ned Block (who has somewhat different views, but who still tends to make a lot of sense to me).

If you read Chalmers and Block and they make sense to you (my thinking goes), you’ll have a sense of where I am on consciousness, but if their writings look like gobbledygook to you then I don’t know how to talk to you about consciousness.

But it occurs to me that I could save people some work by pulling out some relevant quotes from Chalmers and Block and maybe some other people, and get a feel for my readership by asking people if the quotes “click” or not.

I’ll start with Chalmers, who’s famous for coining the phrase “the hard problem of consciousness,” which he distinguishes from various (relatively) easy problems. The (relatively) easy problems are ones that look like they should be solvable with a straightforward empirical investigation, even if it takes a century or two.

The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.

It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.

If any problem qualifies as the problem of consciousness, it is this one. In this central sense of “consciousness”, an organism is conscious if there is something it is like to be that organism, and a mental state is conscious if there is something it is like to be in that state. Sometimes terms such as “phenomenal consciousness” and “qualia” are also used here, but I find it more natural to speak of “conscious experience” or simply “experience”. Another useful way to avoid confusion (used by e.g. Newell 1990, Chalmers 1996) is to reserve the term “consciousness” for the phenomena of experience, using the less loaded term “awareness” for the more straightforward phenomena described earlier. If such a convention were widely adopted, communication would be much easier; as things stand, those who talk about “consciousness” are frequently talking past each other.

Chalmers ends up with a view he calls “property dualism.” Ned Block, on the other hand, is a physicalist, but shares Chalmers’ sense that there’s something very puzzling about consciousness.

Looking just now, I’m having a bit of trouble finding a juicy quote to pull out from Block, but I can approvingly quote him on one thing. In his essay, “Troubles with Functionalism,” after describing what he claims is a counterexample to functionalist theories of mind, he says:

What makes the homunculi-headed system (count the two systems as variants of a single system) just described a prima facie counterexample to (machine) functionalism is that there is prima facie doubt whether it has any mental states at all – especially whether it has what philosophers have variously called “qualitative states,” “raw feels,” or “immediate phenomenological qualities.” (You ask: What is it that philosophers have called qualitative states? I answer, only half in jest: As Louis Armstrong said when asked what jazz is, “If you got to ask, you ain’t never gonna get to know.”) In Nagel’s terms (1974), there is a prima facie doubt whether there is anything which it is like to be the homunculi-headed system.

That’s a nice expression of the kind of difficulties that sometimes occur when talking about these issues. Daniel Dennett, for example, has an essay titled “Quining Qualia,” “qualia” being another term for things like Chalmers’ “the quality of deep blue,” what Block calls “qualitative states,” etc. As for “quining,” Dennett says:

The verb “to quine” is even more esoteric. It comes from The Philosophical Lexicon (Dennett 1978c, 8th edn., 1987), a satirical dictionary of eponyms: “quine, v. To deny resolutely the existence or importance of something real or significant.” At first blush it would be hard to imagine a more quixotic quest than trying to convince people that there are no such properties as qualia; hence the ironic title of this chapter. But I am not kidding.

My goal is subversive. I am out to overthrow an idea that, in one form or another, is “obvious” to most people – to scientists, philosophers, lay people. My quarry is frustratingly elusive; no sooner does it retreat in the face of one argument than “it” reappears, apparently innocent of all charges, in a new guise.

To people like Chalmers, Block, and myself, Dennett’s statements do indeed seem extremely odd, to the point that there’s been some joking to the effect that maybe Dennett doesn’t have subjective mental states!

In one place, though, Dennett has given me the impression that he understands what Chalmers et al. are talking about. In an interview with Susan Blackmore in her book Conversations on Consciousness, he says:

The zombie hunch is the idea that there could be a being that behaved exactly the way you or I behave, in every regard—it could cry at sad movies, be thrilled by joyous sunsets, enjoy ice cream and the whole thing, and yet not be conscious at all. It would just be a zombie.

Now I think that many people are sure that hunch is right, and they don’t know why they’re sure. If you show them that the arguments for taking zombies seriously are all flawed, this doesn’t stop them from clinging to the hunch. They’re afraid to let go of it, for fear they’re leaving something deeply important out. And so we get a bifurcation of theorists into those who take the zombie hunch seriously, and those who, like myself, have sort of overcome it. I can feel it, but I just don’t have it any more.

[...]

Oh, it doesn’t just tempt me. I deliberately go out of my way, every now and then, to give myself a good instance of the zombie hunch. I talk to myself, ‘Come on Dan, think about it this way. Now can you feel it?’ Oh, I can feel it all right. It reminds me of how you can look out on a clear night and, if you think about it right, and look at the sky and sort of tip your head just so, you can actually feel the earth in its orbit around the sun. You can see what your position is, how the earth is turning, how it’s also in orbit, and it all sort of falls into place. You think ‘Oh, isn’t that quaint?’

This is a lovely perspective shift, but it takes knowledge and some very specific direction of attention to get into that frame of mind. Well, I think for people who have the zombie hunch and don’t know how to abandon it, they have to learn to do something like that too. But they just haven’t tried, and they don’t want to.

So maybe Dennett does understand what Chalmers et al. are talking about… or does he?

Dennett talks about “the zombie hunch” as a standard philosophical intuition, like thinking in some thought experiment that Smith doesn’t know, or that it would be wrong to throw a certain switch. But it seems to me that our awareness of our own subjective experience is very different from intuitions in that sense, and I suspect most people with my general perspective would say the same. So I’m genuinely puzzled as to what exactly is going on inside Dennett’s head here.

All in all, Metzinger actually comes up with a model that proposes to solve the hard problem of consciousness. He uses a wide array of lesion studies, disorders, fMRI studies, and machine models to make his case.

Reginald Selkirk

Chalmers: If such a convention were widely adopted, communication would be much easier; as things stand, those who talk about “consciousness” are frequently talking past each other.

This is a very important point. As with free will, discussions about consciousness are frequently marked by participants using different definitions, rather than disagreeing on substantial points. Much time and effort is wasted.

As Nagel (1974) has put it, there is something it is like to be a conscious organism.

There is also undeniably ‘something it is like to be’ a bacterium swimming in one direction because its chemosensors have picked up traces of possible food. Yet no one seems to think it mystical, or in need of explanation. My own view is that the whole deal with “qualia” or “experience” is probably the wrong question. You insist there must be something there, and yet you cannot demonstrate it, nor even define it. Some day these notions may be looked back upon in the same way we look back upon the physicochemical concepts of “the ether” or phlogiston. I think this view may align with Dennett’s.

Chris Hallquist

So when Nagel talks about “what it’s like,” he’s talking about inner, subjective experience. I won’t claim to know for sure that bacteria don’t have this, but it would be very surprising to me. Are you sure we’re on the same page in terms of what we’re talking about here?

Ray

Sounds to me like he wants to imagine himself as a bacterium. Thus he wants an answer of the form “being a bacterium is very much like being Nagel except for X, Y and Z.” Not so easy when bacteria are very much unlike Nagel indeed. However, if your reference point were bacterium A, being bacterium B might be very much the same, except that different chemicals are attractive etc. Now this description wouldn’t be much good to bacteria for the rather mundane reason that they don’t parse English and don’t have any interest in anything remotely analogous to thought experiments. But it seems to me that these considerations are perfectly addressed in information-processing terms with no need to resort to any “hard problem” of consciousness.

Also, if you feel the need to invoke something ineffable, how are you so sure that bacteria don’t have an inner life? The only information you have to differentiate Nagel from the bacterium is of the ordinary information processing sort. No?

Reginald Selkirk

I’m sure that I am not on the same page as Nagel. It would be up to him to demonstrate that he is actually on a valid page. He noticeably cannot demonstrate or define “inner life,” “experience,” etc. Switching from one term to another offers no demonstrative or explanatory power, so I would need to be shown that it is not just dodging.

Cast your thoughts back half a century or more. Almost anyone would have insisted that there was something more to life than biochemistry. Go back in time, and those around you would have insisted, “Sure, you can explain carbohydrate metabolism. Maybe you can explain ion channels. But still, when you get done carving away all those pieces, there will still be some ineffable residue which is ‘life.’” But today, vitalism is deader than a doornail to all but the more science-phobic philosophers. Maybe consciousness will go the same way.

Reginald Selkirk

So when Nagel talks about “what it’s like,” he’s talking about inner, subjective experience. I won’t claim to know for sure that bacteria don’t have this, but it would be very surprising to me.

Ha ha! You have fallen into my trap. OK, so you think a bacterium doesn’t have “inner, subjective experience.” How about a protist? It senses more things in its environment and has more complex behaviour. No? How about a worm? They even have a neural system. How about a mantis shrimp? If there is an “inner, subjective experience” involved with seeing colours, then they must have a ton of it, because some species can see over ten colours! (Various visible colours along with infrared, ultraviolet, and multiple variations of polarized light. Surely there is an “inner, subjective experience” involved with seeing light polarization.) How about a fish? A frog? … Or working down from the other end, do chimpanzees have “inner, subjective experience”? How about a dog? And so on…

Perhaps eventually it will occur to you what the purpose of this exercise is: we are mapping the boundaries of your anthropocentric bias.

MountainTiger

Dennett makes much more sense to me. Claims like, “Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does,” confuse the hell out of me. On what basis is it unreasonable? Do we have examples of human-like information processors that do not demonstrate consciousness?

Reginald Selkirk

Do we have examples of human-like information processors…

No. Computers are un-human-like in important ways relevant to the question. Other animals (dolphins, the usual suspects) may have consciousness, but it is difficult to find out, and depends on your definitions. There are a few things like the mirror test that give us only very limited answers.

http://calebscblog.blogspot.com/ Caleb O

It’s objectively unreasonable because physical things are non-experiential. It’s totally unclear how one gets a bunch of experiential stuff from non-experiential stuff.

MountainTiger

I’m not sure what you mean by “experiential.”

http://calebscblog.blogspot.com/ Caleb O

Experiential stuff is stuff that experiences. In some sense that isn’t very enlightening, but I don’t know what else to say. It seems to me that ‘experience’ is fairly clear. Or do you want more detail?

I suppose one could add that to experience any x is to have a qualitative sensation of x or that x. But there might be some experiences that don’t include qualia.

MountainTiger

I think I put that poorly. How does one draw a distinction between what is experiential and what is non-experiential? Is a baby experiential? A chimpanzee? A dog? A rat? A gecko? A spider? I presume that you would agree with Chris that a bacterium lacks the quality we are looking for, but where does it appear? Is experiential-ness a unified quality that either is or is not present, or a composite of qualities that may be found separately from each other? Since you claim that physical things are non-experiential by definition, how does the non-physical component that allows experiential-ness arise from physical things?

And not so very long ago, it was totally unclear how one gets a bunch of living stuff from non-living stuff. The parallel to vitalism is strong.

J. Goard

“Do we have examples of human-like information processors that do not demonstrate consciousness?”

Of course we do: every human except me.

Ray

I’ve got to say, Dennett makes a lot more sense to me than the other two philosophers you quote. Discussions with people who believe in qualia (at least in the sense that Dennett is trying to deny) remind me of people who get all excited about “objective morality” — they just keep talking in circles redefining the problematic term in terms of equally problematic synonyms.

I can’t see how “consciousness” can possibly refer to something which is not observable from the third person perspective. If that was the case, why would you adopt a term to describe your first person experience, “consciousness”, which was invented by people who have no first person access to your mental states, and who used the term only to describe the mental states of their long dead contemporaries, to whose minds you in turn have no first person access?

The only way I could see this working was if “consciousness” was only useful in describing what your mind is not (i.e. not accessible from a third person perspective,) but then I don’t see how it would be possible to establish that your consciousness had anything worth speaking of in common with my consciousness — after all, on the “qualia” theory, the taste of cinnamon and the sound of a pipe organ are supposed to be equally ineffable, but they have little if anything in common.

I also object to the following reasoning: “Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.”

I don’t see how you can claim it is unreasonable for a physical system to have “inner life” unless you find a way to describe what “inner life” is supposed to be in physical terms. Then of course, there’s the issue that no one makes that assumption in practice. People attribute inner life to others of the same species, exclusively based on the fact that they have similar physical behavior to themselves (unless you want to posit some sort of spooky ESP zombie-dar by which a human can tell the difference between another human and a p-zombie.)

AndrewR

You know, it all sounds a little like an argument from ignorance. Attempting to summarise the positions below (let me know if I’ve misunderstood).

Chalmers: “We don’t understand how subjective conscious experiences arise from the mechanics of the brain. It seems to me that these experiences are special in some way that makes them non-reducible to low-level brain algorithms.”

Dennett: “We don’t understand how subjective conscious experiences arise from the mechanics of the brain. These experiences will turn out to be reducible to low-level brain algorithms. If it seems like they can’t, this is just our intuitions leading us astray.”

If this is an accurate summary, I’d lean slightly towards Dennett on the basis that in the history of humanity discovering how things work the trend is overwhelmingly against “magical extra stuff” being the explanation.

Also reminds me of a quote from the (now defunct) wrongbot blog:

“As soon as a philosopher begins talking about mental states as though they are fundamental or uniquely important things and not convenient abstractions for talking about particles bopping around in a particular way, you should know you’re in trouble”

http://calebscblog.blogspot.com/ Caleb O

I don’t think this is quite right.

Chalmers grants that subjective experience is a real phenomenon, whereas Dennett appears to think that it is not. Note that one can explain X by Y without reducing X to Y. So Chalmers grants that mental states can be explained by physical states. So a given mental state might correlate with, or be caused by, a given physical state. But that doesn’t show that mental states reduce to physical states.

I suppose I don’t know what the wrongbot quote is pointing at. Physical states, if they are anything, are non-experiential states. And there are clearly experiential states, which we might label as mental states. Anyone who denies this has to deny there are such things as beliefs, intuitions, and subjects, which is totally implausible! So prima facie physical states and mental states are completely different things.

I don’t see an argument from ignorance sneaking into this line of thought but perhaps this is mistaken.

AndrewR

I don’t think that Dennett is denying that subjective experiences exist, he’s just saying that they aren’t a new special class of thing, but are as much a result of the physical mechanisms of the brain as anything else, though we’re a long way from discovering exactly how.

Chalmers appears to be claiming that there is something special about subjective experiences that means it won’t be possible _even in theory_ to explain them as a result of physical mechanisms.

If Chalmers was saying instead something like “We are so far from a model of consciousness based on neuroscience it’s not funny, so let’s try and work at a higher level of abstraction in the meantime to try and get some useful work done” I’d be more sympathetic, but he isn’t.

>> So prima facie physical states and mental states are completely different things

I suspect it only seems like that because we don’t have a deep understanding of how our brains work.

http://calebscblog.blogspot.com/ Caleb O

I haven’t read Chalmers but I think he is open to the option that we can explain a fair range of mental states with reference to physical states. Moreover, again, I don’t think that to explain X by Y entails that X is reducible to Y. Chalmers, being a property dualist, does think that there is one kind of stuff, namely physical stuff, but that that stuff can have physical and mental properties.

On the other hand, as far as I understand Dennett, he does quite explicitly deny that qualia are a useful concept, from which it seems the denial of subjective experience would follow.

So you might actually be closer to the Chalmers side of things after all!

Tim Milburn

Hi there, I’m more on Dennett’s side of the issue (though like him I can at least recognise the kind of intuition which implies that consciousness is non-physical). One simple, slogan-like way of putting my view is that “conscious experience is not experience of consciousness.”

To work out whether zombies are possible or not, we would have to characterise the physical (specifically, the brain), characterise consciousness, and then compare their characters to see whether they are distinct or not. However, consciousness would seem to be a mode by which we experience such things as colours, smells, or whole objects out there in the world. Consciousness is not something that we actually experience. This makes consciousness difficult or impossible to characterise. We just can’t get the kind of perspective we would need. And it is also very easy for one to imagine that one has that perspective when one doesn’t.

This means that the required comparison between the character of the physical brain and the character of consciousness is also difficult or impossible to do, because we don’t have a usable (or relevant) characterisation of consciousness. This consideration doesn’t rule out zombies or dualism, but it does suggest that they are not properly established – so I return to what I see as the rational default (just like atheists do when we find that the existence of gods cannot be properly established).

So that’s my view in short. Thanks.

http://calebscblog.blogspot.com/ Caleb O

I take it that an intuition is a seeming state. To have a seeming state is to have the experience that x seems to be the case. So in order to have intuitions at all one must have experience. So against Dennett we can conclude that there are experiences. If you take consciousness just to be experience, then your slogan is false. So perhaps I am not really addressing your point if you take consciousness to be something else.

Tim Milburn

Hi, thanks for the response,

Another way of putting my view is that consciousness is the medium of our experience, but it doesn’t actually occur in experience as content that we can examine. A sentence written in black ink need not contain any information about black ink. My suggestion is quite simple-minded (hopefully lateral). Looking around your house, you’ll probably be able to focus on a book or pen, to examine and characterise it. Now where is the “consciousness” in your experience, in order that you might examine and characterise it? One can reflect on, say, the nature of colour, but my suspicion is that the result will be a confabulation.

If I understand your comment – the issue for me is not whether there is a “seeming state”. I am prepared to take that for granted. The issue is how we characterise that “seeming state” so that we can then assess whether it is reducible to the physical brain or not. The “seeming state” is immersive, one cannot get the kind of perspective one would need on it to get the characterisation to assess reducibility.

hf

If I understand Block – and I’m not at all sure that I do, or that we share a definition of “functionalism” – I think his system of homunculi would indeed have consciousness. And this seems no more odd than a system of cells having consciousness. By contrast, the old Chinese Room (IIRC) need not have qualia because it need not have the capacity to learn. I tentatively assert that subjective consciousness requires at least potential learning (which might or might not work out to mean that it requires a roughly-Bayesian update to happen somewhere, as a necessary-but-hopefully-not-sufficient condition) and therefore must take place over time.

Speaking of learning, orthonormal’s example of this seems decisive to me in the sense that “Martha” appears to have qualia. At least, I think that you would only need to add more of these functional properties to the system before it has every kind of “feeling” that you can be certain of possessing yourself. Since we seem to agree that qualia carry certainty of this kind, Martha must have them. Dennett appears to go the other route and deny that your feelings are qualia, on the grounds that Martha has them too and we assumed she was purely physical in nature.

In other words, I think that it does come down to intuitions about Martha and the homunculi. I further assert that orthonormal produces conflicting intuitions, at the very least, and thereby proves their unreliability. This would appear to take us back to the abundant evidence that our world is made of math.

anatman

i’m afraid it is you and chalmers etc. who are talking nonsense. what you are propounding is just a slightly disguised god of the gaps argument. ‘i don’t understand this therefore qualia.’ this is why i generally shitcan any supposed philosophical treatise that promotes intuition to evidence.

Skef

Could some or all of those agreeing with Dennett on this thread give a quick answer to these prompts, so we can see where the rubber hits the road?

1) Can there be torture? That is, can torture exist?

2) If you answer “yes” to 1, is torture that leaves no evident physical damage (in terms of ability to move, etc.) something that should be avoided?

3) If you answer “yes” to 2, try to give a brief summary of what allows there to be torture and what makes it wrong in the absence of subjective experience.

These are not “gotcha” questions, in that I assume that there are readily available eliminativist answers to them. But the specifics can be relevant to this sort of discussion.

AndrewR

Dennett and (I hope) those agreeing with him in this thread are not claiming that subjective experiences don’t exist, they just don’t think that they are some sort of special dualist “stuff” that can’t be reduced to materialism.

Reginald Selkirk

Skef: … torture

So now we have to supply a basis for morality in order to argue against consciousness? That seems totally off-base.

Hrafn

Skef:

1) Yes.

2) Yes.

3) From my (admittedly limited) understanding of neuroscience, it would appear to be at least potentially possible to cause intense pain (e.g. by direct stimulation of nerve endings) without physical damage. Prolonged inducement would seem likely to cause severe psychiatric problems (and associated permanent or semi-permanent neurological changes — which I would be highly surprised if they were not *objectively* detectable, given modern brain-scanning technology), most probably something along the lines of PTSD.

Skef

Hrafn: Pain is the classic example of a subjective experience. I can’t experience your pain, and you can’t experience mine (we’ll leave Clinton out of it). And while there are arguments as to what parts of experience constitute pain, part of the experience is generally taken to be a body-localized feeling, which is the sort of thing often talked about in terms of qualia.

You can cash out pain as neural damage, but really that pushes the problem forward. Are we to say that the only problems with PTSD are behavioral? If not, we’ll be talking about the subjective feelings of the PTSD sufferer again at some point in the future.

AndrewR: Dennett is at least often, if not generally, taken as an eliminativist. Saying “there are no such properties as qualia” (as he does in the quote above) is different from saying “qualia can be reduced to physical phenomena.” The attack is pretty broad: I cannot prove I have subjective experiences to you, you can’t prove it to me, so science inherently can’t study it, so it’s nonsense… a kind of fictionalism. But there are different forms of eliminativism that make different claims: http://en.wikipedia.org/wiki/Eliminative_materialism

Don’t assume that the guy in the lab coat isn’t crazy. None of these folks are scientists, regardless of whose side they claim to be on.

“Everything real has properties, and since I don’t deny the reality of conscious experience, I grant that conscious experience has properties”

and

“Qualia are supposed to be special properties, in some hard-to-define way. My claim–which can only come into focus as we proceed–is that conscious experience has no properties that are special in any of the ways qualia have been supposed to be special”

Maybe his position has become more extreme since, but it doesn’t sound like he’s denying subjective experiences exist here.

I do agree that Eliminative Materialism as described in the Wikipedia article you linked to seems like a bizarre position to take. The fact that something is made up of simpler components or processes doesn’t mean that it doesn’t exist.

Hrafn

My take on the ‘Hard Problem of Consciousness’ is that, having acceded to an uncompromisingly subjective/experiential definition of ‘consciousness’, the only thing surprising about it being “hard” to come up with anything objective and/or intersubjective to say about it is that anybody is surprised at the difficulty in the first place.

In other words, the reason that the “problem” is “hard” appears to be that it is defined in such a way that it cannot help but be hard.

It would seem to me that, unless you’re willing (at least for the sake of argument) to accept a degree of objectivity/intersubjectivity into the definition, you cannot help but condemn yourself to be the sole consciousness in a sea of (apparent) philosophical zombies (PZs). As things stand, nobody else can say anything about their experience of consciousness that can distinguish themselves from a PZ, and even if you knew that such a non-PZ existed, nothing you could say about your own experience could distinguish yourself to them either. I cannot help but consign such a definition to the category of ‘not particularly useful for further analysis’, even if it is philosophically well-formed.

From this you can probably deduce that I have little patience for the argumentation of Chalmers and his ilk, an impatience that apparently puts me beyond the philosophic Pale.

Skef

It certainly doesn’t put you beyond the pale. But if the only basis for studying consciousness is intersubjectivity (which carries with it all of the unreliability of introspection), the study of non-human consciousness may be hopeless. Someone suggested subjectivity at the level of minimally-sensing organisms. How could we ever establish that? Does your computer experience flashes of subjectivity, however inconsistent or temporary, as it processes? How would we know?

Unless what you’re saying is just that at some point someone will reformulate the problem to make it tractable. That’s certainly possible, but a) one can say that about almost any difficult problem, and b) the sense of the hardness of the problem comes from the work that has gone into attempts to reformulate it, and from the unsatisfactory proposals that have resulted.

Skef

(Incidentally, the eliminativist position is sometimes summarized as “we’re all (philosophical) zombies”. And that isn’t just an unfair caricature by some opponents: Dennett actually says this in Consciousness Explained, although he then insists it’s bad sport to quote him out of context (I think we’re sufficiently in-context here). That seems very much in conflict with your own view, so I suspect you should be impatient with everyone, not just the qualia side.)

Skef

Chris: I assume this is the point where Joshua Knobe yells that I’ve ruined everything and throws a flaming armchair at me.

Hrafn

Skef:

1) Thank you for preaching at me the revealed dogma of the Church of Philosophy of Mind. Admittedly, I don’t know why you did so, as I simply answered your questions. I *did not* ask for a sermon.

2) “Pain is” an objectively-detectable physical phenomenon. Ignoring this fact does not make it go away.

3) I see no reason to “cash out” (whatever the hell that means) pain as anything. Pain is (primarily) one form of neural signal between nerve-endings and the brain. To leave the brain out of it would appear to be to ignore a crucial aspect of *what pain is*. If pain did not have an effect on the brain, then it wouldn’t be “pain” as we know it. Further, overload of *any* neural signal (pain, pleasure, whatever) would seem likely to have a deleterious effect on the brain.

4) I neither stated, nor even remotely suggested sympathy for behaviouralism (let alone some form of radical behaviourism). Please excise from your brain the (apparently common) Church of Philosophy of Mind stereotype that those who reject their dogma *must* be radical-behaviouralists/eliminative-materialists or something similar.

5) Your entire first two paragraphs therefore appear to be directed at some different Hrafn, who wrote a different post on a different forum. If your response to any attempt to answer your questions is simply to preach orthodox Philosophy of Mind talking points at the answerer, I don’t see what purpose they serve. Anybody who answers your first two questions in the affirmative is quite likely to have *very little* sympathy for this dogma.

http://verbosestoic.wordpress.com Verbose Stoic

“2) “Pain is” an objectively-detectable physical phenomenon. Ignoring this fact does not make it go away.”

Well, if we leave the brain out of it for a moment, that isn’t true … or, at least, not reliably true. If you only look at my completely external behaviour, you will get indications that I may be in pain, but if I don’t act as if I am in pain and yet am experiencing pain, I really am, in fact, experiencing pain. And, of course, we all know that I can act as if I am in pain even if I’m not actually in pain (actors do it all the time). So it’s hard, then, to see where you get your assurance that it is just a fact that pain is an objectively-detectable physical phenomenon, since even if we go INTO the brain all you have is a correlation to the subjective experience. But being in pain is having that subjective experience.

“Pain is (primarily) one form of neural signal between nerve-endings and the brain. To leave the brain out of it would appear to be to ignore a crucial aspect of *what pain is*. If pain did not have an effect on the brain, then it wouldn’t be “pain” as we know it.”

Well, says who? Almost no one ever thinks that pain just IS a signal between the nerve-endings and the brain. It’s really nothing more than the specific subjective experience that we have, which we assume other people have in similar cases and when they act in similar ways. Even if the materialist theory is true, the brain relation is nothing more than how pain is IMPLEMENTED in humans, and there is no reason to think that if we discovered that, say, some form of dualism were true, we’d suddenly have a different definition of pain.

Or, to put it better, while to YOU the meaning of “pain” is critically attached to the brain, for me it clearly isn’t. We then have to be able to settle which of us has the better definition, but if you cycle back to my first paragraph you’ll see some of the issues with your definition.

“I neither stated, nor even remotely suggested sympathy for behaviouralism (let alone some form of radical behaviourism). Please excise from your brain the (apparently common) Church of Philosophy of Mind stereotype that those who reject their dogma *must* be radical-behaviouralists/eliminative-materialists or something similar.”

Saying that pain is primarily a brain function, and is defined as such, is, it seems to me, pretty much an eliminative position about pain, meaning that we wouldn’t need to talk about the subjective experience at all. So, let me ask you this: if in a person there was that signal that indicates pain, and yet they insist that they aren’t feeling any pain, would you say that they are or are not in pain, based on the preponderance of evidence? Presume you have no reason to think that they are lying to you.

Hrafn

My last post to Skef may have been too combative. But such posts as Skef’s response to my answers to his questions do not help dialogue. They are chock-full of assertion, assumption and non-shared bases for argument. They very much set the stage for talking past each other. In fact I could see *almost nothing* of myself in Skef’s response — hence my “some different Hrafn” barb.

Skef

All I’ve tried to do here (and as far as I can tell all I have done here) is argue that a problem many people see as hard, and as only admitting of answers with one or another strange component, actually is hard. Part of that argument was an attempt to clarify Dennett’s position on the subject. In doing that I’ve never defended one position over another. It’s true that I think people who work on these problems are not just idiots who are missing the obvious solution, or the obvious way of adjusting the problem so as to bring consciousness under straightforward scientific study. And I suppose that view is a form of dogma.

hf

I do think there exists a hard or difficult problem related to all of this, namely finding (and having confidence in) a precise set of sufficient conditions for producing subjective experience. I hope that we can get away with finding some necessary conditions. We have some reason to think we can one day create an artificial mind smarter than humans. Giving it some qualia-necessary conditions to avoid while simulating human minds (and making sure the AI itself does not meet those conditions) might allow it to ethically solve the problem on our behalf, or increase our own intelligence to the point where we can solve it. Though the latter would seem to require having a solution already in hand.

Hrafn

Skef:

1) How would you describe (a) an inability to see the merit of much (most?) Philosophy of Mind argumentation, combined with (b) an inability to see anything of myself in orthodox PoM responses, if not as being “beyond the philosophic Pale” (BPP)?

2) Intersubjectivity does not eliminate the “study of non-human consciousness”; it allows it. You cannot study any other consciousness (human or non-human) through introspection. However, allowing indirect/instrumental detection allows you to admit their existence.

3) “Someone suggested subjectivity at the level of minimally-sensing organisms.” I’m afraid that this sentence sounds like word-salad to my BPP-impaired comprehension. Subjectivity is (bright flashing lights) *introspection only*. You can therefore only *assume* (or not) that another lifeform has it, not reasonably “suggest” it, let alone expect to “establish” it.

4) I’m merely feeling my way here, but what I think I’m suggesting is that any intuitive/introspective/navel-gazing-based/top-down definition doesn’t get us anywhere, and that a bottom-up/functional/objective-data-based definition cannot help but prove more fruitful. This does of course leave us with the problem of PZs, but that problem can be lessened by division. (a) Everybody but you is just a PZ, in which case all you’ve got is your own consciousness and introspection to go on, and there’s no point in either listening to or reporting to the rest of us. (b) Some, but not all, of us are PZs, but you can’t tell which is which, in which case we can only hope that there are enough non-PZs so as not to distort the results *too much*. (c) None of us are PZs, in which case everything is hunky-dory. Even if PZs may exist, I can’t see any fruitful line of enquiry that doesn’t start from assuming that they don’t.

Hrafn

Skef:

My problem with your response is that pain is (bells ringing, sirens wailing) *NOT* just “subjective experience”. It has an objectively-detectable existence and effect. This fact is what allowed me to answer “yes” to your first two questions and give a (hopefully) coherent answer to your third.

Your (apparent) failure to deal with my raising of that aspect of pain (or alternatively, caricaturing and/or pigeon-holing it as ‘behaviouralism’) means that your response felt like you were talking past me.

I don’t know that I would go so far as to declare the PoM/introspective/subjective definition of consciousness to be ‘false’, but I’ve seen nothing to date that suggests that it is in any way analytically useful or fruitful. I don’t think that that makes me a behaviouralist.

Hrafn

One problem that I’m having is that none of the standard ‘theories’ of PoM (Eliminative materialism, Type physicalism, Revisionary materialism) seem to be particularly good fits for my views (which may make me overly sensitive to pigeon-holing), at least according to Wikipedia. Is there a (well-defined) viewpoint of ‘Emergent materialism’, ‘Emergent physicalism’ or similar based upon http://en.wikipedia.org/wiki/Emergence? Or should this be classified under one of the standard ones (and if so how)?

hf

I hope you mean a form of functionalism, which seems obviously true to this drunken layperson. (I have some hope that even an AI restricted in the ways I just mentioned above would arrive at functionalism, due to the doctrine’s obvious nature.)

Among other objections, the linked SEP page mentions the possibility of different qualia for “blue” as depicted in the OP’s picture. But the strong form of this may contradict itself, if I understand this image correctly. Our experience of color clearly includes some comparison to other stimuli, and we can at least observe this process in action. Why can’t color experience consist of such comparisons?

http://verbosestoic.wordpress.com Verbose Stoic

Because that wouldn’t be what a colour experience IS. The colour experience is not a set of comparisons, but instead an actual experience we have that is likely INFORMED by — or informs — those sorts of comparisons. From psychology, in fact, we know that in general we do both top-down and bottom-up processing to get these experiences, so it’s a complex set of interactions that produces the experience that we then, we think, react to, but saying that a colour experience is just some sort of comparison or aggregation is what drives “qualia-freaks” insane: a colour experience is the EXPERIENCE ITSELF, and saying that it really is just something other than that always smacks of not taking experiences seriously.

hf

Because that wouldn’t be what a colour experience IS.

I dispute precisely that. Look, I understand you have a sense of a difference that you can’t explain without begging the question. I say that “Martha,” in the posts I linked before, could have exactly the same sense. And in her it doesn’t seem ineffable at all, not if you read the details that orthonormal lays out. Her scientific awareness of ‘neural’ links does not play the same role in her physical system as do the links themselves. Behold, a “first-person ontology!”

That paper argues, essentially, that phenomenal experience and the cognitive functions that can be spawned from phenomenal experiences are clearly not the same thing. So when you outline all of the things that “Martha” can do and say that none of them really change, or that she doesn’t learn anything else when she is cured of her colour-blindness, I agree that they wouldn’t change, because those are all cognitive or psychological states, and it is possible to have the same psychological states while having different phenomenal ones. So my question, then, is whether Martha has different phenomenal states after she is cured compared to before. And if you are going to say that she has been cured of colour-blindness, it seems that you are going to have to say that before the operation she did not have phenomenal experiences that contained colour, but after the operation she did, and so her phenomenal experiences absolutely changed. And so she does, in fact, learn something new afterwards, which is indeed what it is actually like to experience that banana as being yellow, as opposed to a shade of gray, from the first-person perspective, because all of her psychological and neural information doesn’t contain that … and can’t, or else she wouldn’t have been colour-blind in the first place.

For me, phenomenal experiences are inputs that generate representations, which is what you seem to reference in your posts. But you can’t collapse the experiences into the representations, because it’s quite possible to generate the same representations from different phenomenal experiences, even outside of Mary and Martha cases.

This also answers Reginald’s comment below; the shrimp clearly in some way represent colours, but are those representations formed through actual phenomenal experiences? I don’t know, and it is obviously really, really hard to tell … which shouldn’t count against my theory because my theory says that since phenomenal experiences are indeed subjective it will indeed be really, really hard to tell if they are there in anyone else.

Note, however, that none of what you said seems to be addressing the definitional point I raised, that what it means to be a colour experience is simply to be having an experience of a colour. This seems rather obvious to me; it’s clearly tautological in a good way. The debate is over whether or not what it means to be CONSCIOUS is just that you are having these experiences, but if we examine the cases where we think we are conscious or unconscious it really does seem that that “having experiences” thing is the key differentiator.

Reginald Selkirk

and saying that it really is just something other than that always smacks of not taking experiences seriously.

Should something that cannot be defined or demonstrated be taken seriously?

http://verbosestoic.wordpress.com Verbose Stoic

But I have both defined it — if only loosely — and certainly can demonstrate it: in myself by simply experiencing things, and in you by asking you to experience things. If when you experience things you do not, in fact, have a demonstration of experience, then I’m afraid you’re a zombie, and there’s no hope for you [grin].

Reginald Selkirk

a colour experience is the EXPERIENCE ITSELF, and saying that it really is just something other than that always smacks of not taking experiences seriously.

Okey dokey. I will put you on the list of people who think that mantis shrimp have qualia. Because boy, do they experience a lot of colors.

Hrafn

Rereading Skef’s original response, I think I may have slightly misinterpreted it, but still very much disagree with it, and do not feel that it successfully engages with my answer. So, a series of “no”s:

No, I was not “cash[ing] out pain as neural damage”; I was simply defining pain holistically.

No, that does not just “push[] the problem forward.” Many brain-states are likewise objectively-detectable.

“Are we to say that the only problems with PTSD are behaviorial?” No. I said “psychiatric problems” *not* “behavioural problems”.

“If not, we’ll be talking about the subjective feelings of the PTSD sufferer again at some point in the future.” Not necessarily. A physician might well decide that a psychiatric condition that left a patient permanently or arbitrarily relaxed and/or euphoric is just as problematic as one that left them permanently or arbitrarily stressed and/or depressed. Is it the subjective content of the psychiatric condition that is problematic, or the fact that it is maladaptive to environmental cues?

And even if we return to “subjective feelings” at some stage of reasoning out why we treat PTSD, does that mean that pain does not have a physical reality, or simply that physicians are people too, and subject to empathy (a fairly well-known and widely-studied phenomenon)?

C.J. O’Brien

It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.

Chalmers’s problem is hard because of the way he frames it. Notice, first of all, how his chosen examples of qualia, “the quality of deep blue, the sensation of middle C,” identify subtleties of experience that exactly one species of animal has the time to care about or the ability to name. As unobjectionable and offhand as it seems, this is a gambit; it rhetorically orphans human consciousness from its evolutionary past.

Then he breezes by “visual and auditory information-processing” on his way to the hard problem, somehow sidestepping all the hard problems. Because the limits on the ability to hardwire all this processing such that it reliably returns the right behavioral response are precisely why the fuzzy heuristics of “raw feels”, or whatever you want to call them, are necessary.

Take a simple nervous system like that of a flatworm or what have you, and consider it a series of switches, but interconnected as a literal neural net with a small degree of plasticity: some relatively rigid learning responses to environmental feedback. That’s plausibly about the limit of what a pZombie could have. The kind of behavioral plasticity exhibited by vertebrates, in an information-rich environment with reliable real-time responses to stimuli, could not be carried out by a scaled-up flatworm neurology, given the limits of physiology and the open-ended nature of the challenges faced constantly by the visual and auditory information-processing system. You can’t reliably return a quick enough response that’s broadly informed of existing conditions without some mechanism for “broadcast”: the processed results we’re aware of are just those that are globally available to a host of subsystems likewise processing in parallel. It’s not “unreasonable” at all: something like awareness or conscious experience is the only alternative to prohibitively expensive processing capability if you want behavioral plasticity in complex dynamic environments, which almost all large animals require.

The rejoinder to all of this is “yes, fine, but why does it have to feel, or be like, anything? Why doesn’t this processing-system-wide information-sharing mechanism just flip ever-more ‘fuzzy’ switches at the right level of tradeoff?” And the answer is that it doesn’t really have to be like anything, if rapid heuristic-driven responses were all that was needed; but for such a system to continuously model a complex and dynamic environment in which it is ineluctably active itself, it must model itself and retain information (with all the same information-processing constraints) about its own history, including its own states and dispositions at the time of different outcomes.

Here’s the hard problem: the system is generating vastly complex physical states that it needs to also model at some level of fidelity. The benefits of retaining this kind of information about the past, from the last few seconds to the decades of episodic memory that humans command, are obvious. The solution is to use the conveniently available shorthand channel in the processing system as a means of representing the underlying physical states. Human consciousness, which is special, is so because of the unequaled level of fidelity of this self-representation; awareness itself is a computational shortcut with a long evolutionary history.

Hrafn

C.J. O’Brien:

I’m not sure if I understand everything that you say, or agree with all of it — but I *will* say that what you’re saying makes an enormous amount more sense to me than what Chalmers was saying.

Hrafn

HF:

I’m not sure that I fully agree with Functionalism either. There is a difference between positing that a state of consciousness arises (becomes ‘a function’) through emergence from lower/simpler cogitative states, and admitting that this posited function contains all that is useful to know about the emergent phenomenon — such that the phenomenon can be fully identified with the function. My suspicion is that although the identification may be a reasonable ‘first order approximation’, a full description of its emergence would provide useful ‘second order’/‘flavour’ information.

As to the optical illusion, I’m not sure that it tells us anything extrapolatable about human experience, beyond an evolutionarily-hardwired, hypertrophied tendency towards pattern recognition. Does the qualiaistic experience of ‘brighter/darker than its immediate surroundings’ tell us anything more profound about our consciousness?

hf

Ahem. Would “a full description of its emergence” describe any non-functional aspects? And if you think it actually would, how would it do so?

As to your second paragraph, imagine someone who has never had any subjective visual sensation except for a particular shade of blue. I assert that this can’t logically happen. We can imagine evidence that would lead us to doubt my claim, in the way that the existence of ten possible digits and the lack of a reported statistical pattern in the base-ten digits of pi should lead you to doubt the claim that the 3000th one is 7 (if you don’t know the answer and have no reason to trust me). Just now I googled ‘seeing only one color’ and found Simple Wikipedia saying, “There are three types of cone cells in the human eye [reacting to different wavelengths of light]. If there are two types, a person will have a hard time to tell certain colors apart. If there is only one type, a person will not see color at all.” (Non-simple version here. If these people had an ineffable quale of blue for blue light, we would expect them to report some difference they couldn’t explain when you show them different wavelengths and tell them to examine that one closely. This should not occur in exactly the same way when you tell them to look closely at orange light or what have you.) Possibly we’ll find that study of red-green color blindness supplies weaker evidence against my assertion. But what we have now looks pretty suggestive, and it points in the right direction.

machintelligence

I’m not sure where this fits in to the question of qualia, but there are color-blind people who have synesthesia of the type that “sees” numbers in color, who report that some numbers have “Martian” colors. That is, they are colors that they have never seen in their lives naturally (or on Earth, as it were). This suggests that there are “hard wired” responses to color that are present even though the receptors in the eye are not functional, and the individuals have never experienced them that way. V. S. Ramachandran has done some work on this.

http://verbosestoic.wordpress.com Verbose Stoic

I very much lean towards Chalmers, but in my experience the big clash between the two sides — and why their views are often so incomprehensible to each other — is pretty much over the first-person/third-person divide as to what it means to be conscious. People like Dennett and others I’ve encountered define consciousness by what you see from the third-person, and say that what it means to be conscious is just to have those third-person behaviours. Some go further and try to link it to the brain, but for the most part I don’t think those views are that popular; even people like Dennett seem to accept that reducing it to that might be a bit too far, and it also gets countered by both emergentists and functionalists, at least as for it being what it MEANS to be conscious (almost all materialists, for example, think that it is implemented by the brain, but most accept that that’s not what it means to have consciousness). People like Chalmers define consciousness by the first-person experience, and hence by qualia. So no matter what your external behaviour or brain states are, whether you’re conscious or not is determined by that internal experience.

So now you can see why, when they come together, what they say doesn’t make a lot of sense to the other side, as they start from completely different starting points and definitions in trying to talk about the same thing. My goal in these discussions has been to start from their starting point and show that, from my starting point, what they are saying clearly isn’t true.

Hrafn

Verbose Stoic:

Even if to “define consciousness by the first-person experience” is ‘true’, is it *useful*? How do you generalise universal truths (surely what Philosophy is attempting to do) from introspection?

I cannot help but notice that Chalmers’ commentary seems to be entirely, or almost-entirely, made up of negative claims. Is a definition of consciousness that, even if it is true, appears to serve no purpose other than to lead us straight to the dead-end of ‘The Hard Problem of Consciousness’ a useful one?

http://verbosestoic.wordpress.com Verbose Stoic

“Even if to “define consciousness by the first-person experience” is ‘true’, is it *useful*? How do you generalise universal truths (surely what Philosophy is attempting to do) from introspection?”

Well, I think that Philosophy is about getting at truths, not about getting at universal truths. Philosophy can at least try to deal quite well with subjective truths that don’t generalize beyond the individual having them, which it seems to me is what consciousness is. So I accept that I will have to make a less than justified leap from my experiences to those of other minds when talking about consciousness, but philosophically I already have to do that to leap from my personal experiences to “table”, so it’s hardly that much further a leap. And as I pointed out, there are some very serious conceptual problems with trying to define consciousness by anything third-person, mostly because we can easily see how it comes apart, and it seems that, at least from the first-person perspective, what interests me about consciousness is not what’s out there, but what’s inside my own experiences. Figuring out what other people are likely experiencing is interesting, but far less interesting than understanding the experiences that I’m actually having.

As for how to go about it, you do it the same way you do everything else: intersubjectively, by doing introspection and describing the results of it to others so that they can say whether they have the same sorts of experiences as you do. Not perfect, but it will do.

Hrafn

Or to put it another way, can my (or even somebody more articulate’s) attempt to tell you what I think it’s like to experience consciousness tell you anything profound about the phenomenon? If so, then there would appear to be some flaw in the Philosophic Zombie theory. If not, then philosophical enquiry on this basis would appear to be pointless.

Hrafn

Verbose Stoic:

My, *my*, *MY*, you’re getting stuck on the word “universal”. My point was that philosophy is more about “I think therefore I am” than “I think I like peanut butter”. If some other word (“transcendental” maybe?) describes this better then go with it.

As far as ‘experiencing-consciousness-and-describing-what-I-experience’ — if that’s what rocks your boat then go with it. But it doesn’t sound like Philosophy as I understand it (I would expect something beyond the purely experiential and descriptive — some form of abstraction/analysis/etc). It sounds more like some sort of structured meditation regime to me. Whether it can tell us anything profound about consciousness is an open question. But the fact that so many first-person advocates seem to spend so much time on how hard it is to make sense of it all from an experiential perspective makes me doubt that they’ve got all that much else to say.

http://verbosestoic.wordpress.com Verbose Stoic

“My, *my*, *MY*, you’re getting stuck on the word “universal”. My point was that philosophy is more about “I think therefore I am” than “I think I like peanut butter”. If some other word (“transcendental” maybe?) describes this better then go with it.”

For someone who went off on someone in this thread already, you are being awfully condescending here, and sadly over a point that I really wasn’t talking about. Again, for me the key is the difference between the first-person view and the third-person view, and to me consciousness is about the former. Whether you believe that things from the first-person view are universal, transcendental, useful, profound or interesting enough to be philosophy is really no concern of mine, except to point out that the people who are doing philosophy DO find the questions interesting or philosophical enough to ask and try to answer them. On the other hand, I did relate issues with using a third-person definition of consciousness and, in fact, even raised issues against what seemed to be your definition that you have not even bothered to address, and which is indeed crucial to what I have been talking about in this thread. Thus, I see this reply as nothing more than a distraction from what’s really going on in this discussion, or, at least, from what I’m talking about and am thus puzzled as to why you are so willing to be condescending to defend these rather irrelevant points.

Reginald Selkirk

Could I convince you to tackle the parallel to vitalism in a separate post? If we had had this conversation a century ago, and I had tried to convince you that I could construct a living bacterium using only biochemicals, it seems unlikely that you would have believed me. You would have insisted that I could add all the chemicals I wanted, but at some point I would need to add some “living essence.” My how things have changed.

The key difference is that in the study of consciousness, the corners are still dark. Heck, much of the room is still dark.

http://verbosestoic.wordpress.com Verbose Stoic

Whenever I see this parallel, it always strikes me as a concession rather than an argument that I need to worry about, because those who say that it doesn’t seem to work aren’t just saying that they can’t see how it could; they are giving reasons why it doesn’t seem to be possible, and thus are raising problems for the materialist approach. To claim that it’s like vitalism, or like other claims of non-material things, is, to my mind, to concede that you can’t solve the problems but expect to be able to someday. Which is, of course, fair enough, but you will have to grant that until you do it is perfectly reasonable for me, and for anyone, to say that they won’t accept your theory, and will prefer a theory that seems to have less of the problems that they care about until you do.

Reginald Selkirk

Yes, it is saying that the problem might be solved in the future. In doing so, it points out that resistance appears very much like an argument from ignorance. You are welcome to cling to your ignorance in the meantime.

http://verbosestoic.wordpress.com Verbose Stoic

How can you claim I’m clinging to my ignorance when you yourself admit that you don’t know how to solve the problem? What we have now are competing unproven theories, not one side clinging to dogma against overwhelming evidence from the other side. Get that overwhelming evidence, and then we’ll talk.

Reginald Selkirk

One side has a stupendously awesome track record, and is packing Occam’s razor. One doesn’t.

http://verbosestoic.wordpress.com Verbose Stoic

Occam’s Razor doesn’t apply unless you have all of the same evidence, and the complaint is that your “side” is actually ignoring not only a large part of the evidence, but the critical part of it.

Second, what “track record” are you claiming? Remember, the key differentiator as I said is first versus third-person analysis here. You can’t actually claim “materialism” because the materialist theories of mind don’t refer to the same sort of materialism as was used to, say, oppose supernaturalist theories.

Reginald Selkirk

and will prefer a theory that seems to have less of the problems that they care about until you do.

I disagree that an appeal to the non-material, which has zero evidence to support it, has “less problems.” It appears to entail very great problems. Whether or not you care about them is nothing to me.