About Rationally Speaking

Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please notice that the contents of this blog can be reprinted under the standard Creative Commons license.

Monday, August 26, 2013

What Is It Like to Be a Nagel?

by Steve Neumann

Much to my parents’ chagrin, I was a teenage somnambulist. Like my father and brother before me, I would periodically wander the tiny halls and stairs of our small house in the middle of the night, seemingly coherent but unconscious of what I was doing. I’m told I could carry on quite a reasonable conversation. Most of the time, a simple word from my mother would send me back to bed, none the wiser. Occasionally, though, I would wake up in the middle of one of these nocturnal excursions, baffled and a little bit embarrassed. One time I even punched out my bedroom window because I was apparently dreaming that I was trapped. That sure woke me up! Fortunately, I wasn’t hurt.

We’re all familiar with the experience of dreaming, but usually only after we wake up in the morning and recall the dream. We wake with a memory of the experience that happened we-don’t-know-when. There are some who experience what’s called lucid dreaming, where you become aware that you’re dreaming while you’re still in the dream. Currently no one knows why some people randomly “wake up” within a dream — it just seems to happen, and suddenly you’re like “Holy crap! I’m dreaming!” I’ve only ever experienced this one time: I was dreaming that I was walking along a tree-lined avenue with my doppelgänger when I suddenly realized I was in a dream; and so I started asking the other me questions about myself. Very bizarre!

Consciousness is at the same time the most familiar and most inscrutable phenomenon human beings can contemplate. I think it’s also the most contentious, because it is the most treasured aspect of what it is to be human. We can come to terms with the fact that the sun doesn’t revolve around the Earth, or that seemingly “solid” matter is mostly empty space, or even that we evolved from “lower” life forms. But when someone tells us that consciousness is an illusion, we reflexively say “Bollocks!” Unless you’re an astrologer, the relative positions of celestial bodies like the sun and our Earth don’t mean a thing. Likewise, whether we evolved over millions of years or were created ex nihilo yesterday, what really matters is that we’re here, now. What matters is the quality of our relationships, the progress of our personal projects, the innigkeit of a piece of music or the bouquet of a glass of Bordeaux.

Back in 1974, philosopher Thomas Nagel argued that the point about conscious experience is the “what it feels like” aspect of it. You have no access to my experience of the hauntingly beautiful Prelude in C-Sharp Minor by Rachmaninov, for instance, or the ripe dark fruits and velvety tannins washing over my palate from my glass of red wine. Neuroscientists may be able to give you detailed descriptions and scans of what my brain cells are doing when I experience these things, but you still can’t experience what they’re like from my perspective. Nagel asked whether we could ever know what it’s like to be a bat, from inside the bat’s experience of the world. He chose the bat as an example because it possesses a sense we humans lack, echolocation, in order to make the point that we can’t really know what’s going on inside the head of another entity. Presumably, echolocation is to the bat what vision is to us: the primary means of navigating the world. I personally can’t imagine what it’s like to be a bat. I can’t even really imagine what it’s like to be one of the dogs I train for a living.

Nagel ignited a firestorm of critical backlash last year with his book, Mind and Cosmos. I haven’t read the entire book; I’ve read excerpts and reviews, both favorable and critical. But last weekend he published a nice summary of his views in The New York Times. His basic claim is that “the theory of evolution ... must become more than just a physical theory” if it is to account for consciousness. He received many biting reprimands from scientists and philosophers of science. Nagel believes that the materialist approach to consciousness, born of the naturalism of the sciences, owes its popularity to a consensus feeling that it is the best refutation of the dominant theistic worldview of society.

I have sympathy for both sides of this issue. On the one hand, I do believe that a thorough-going naturalism is in fact the best remedy for the supernaturalism of religion and other paranormal beliefs (e.g., Ouija boards, dowsing, etc.). It would be nice to be able to bottle and market a version of Dennett’s infamous “universal acid” in that regard. On the other hand, I experience the problem of consciousness as a significant problem. Once I emerged from my own born-again Christian background over a decade ago, I was able to jettison the belief in a core, immutable and immaterial soul without much anguish. However, I still clung rather fervently to the idea of a truly free will, in the Libertarian sense. Intellectually, I understood and accepted the arguments against such a will, but emotionally it was a struggle. Through the practice of meditation and working intimately with dogs I was able to gain a decisive experiential component that finally changed my attitude toward it.

Like Nagel, I am an atheist with a naturalistic worldview (and an atheist primarily because of this worldview). And I have no problem with attributing many of the heretofore “uniquely human” features to the purportedly adaptive qualities they provided to our ancestors. In fact, I have no problem believing that we have consciousness itself because it was adaptive. In my opinion, it’s easy to see why being able to simplify, organize, reflect on and prioritize the vast amounts of sensory information coming in would be useful to an organism such as ourselves. And, based on my reading, I’m of the belief that the self, the “entity” to which consciousness is generally attributed, is the brain’s internal model of all the internal-external stimuli (or of internal and external “reality,” let’s say) that is simply impossible for us to recognize as a model while it’s there. Where “there” is must remain an issue about which I’m agnostic, at least for the time being.

I agree with Nagel that science has made great strides in understanding such neurological functions as vision, learning, memory, emotions, etc. And, like I said above, one day neuroscientists may be able to show me exactly what my brain is doing when I am enjoying a glass of Amarone della Valpolicella; but it’s this aspect of enjoyment that can’t be described objectively by the neuroscientist. There won’t be any description or even any mention of the qualia of my experience. There won’t be any talk of how the wine appears ruby-red in my glass, or the warm spiciness of the bouquet as I lift it to my mouth, or the velvety feeling on my tongue, or the hint of raisined fruit I taste before I swallow.

This is a huge problem for Nagel. It’s a problem for me, too, but not a huge one. He believes that it’s not possible in principle for our current scientific paradigm to even touch upon the problem of consciousness. But he claims to be a non-materialist naturalist, and it’s difficult to determine what that means — or even if that’s possible. On the face of it, it sounds like an oxymoron. On the one hand, he’s saying that consciousness isn’t a divine gift; but on the other hand, he’s saying that it’s not an accident of mindless nature, either. But if he’s dabbling in teleology, wanting to have his cake and eat it, too, then he’s got to come up with a heuristic for scientific researchers to discover the naturalistic process by which consciousness arises in the mammalian brain — because they’re not going to do it themselves.

I’m not sure how close Nagel’s views are to those of David Chalmers, who advocates the view that consciousness be considered a fundamental feature of the universe (though not necessarily in a panpsychic way), on a par with the mass and charge of subatomic particles. Both he and Nagel express the need for contemporary scientific orthodoxy to change in a significant way, in order to accommodate subjective experience. But it seems like no one has any idea about how to even begin investigating it — at least without simply declaring it doesn’t exist and moving on to other things. Current research is primarily focused on the neural correlates of consciousness, or on what survival advantage it conferred upon our ancestors, or on how a “self” is created in the brain — on how awareness is really just a model of attention that enables the human organism to navigate the world. But none of it (so far as I know) is really serious about tackling our mental lives. The working presumption is that consciousness isn’t a non-physical feeling that emerges from the activity of billions of neurons, but is merely the verbal expression of complexly computed information. Not to sound too Nagelian, but I hope we can do better than that.

So I know what it feels like to be a Nagel, at least in certain respects. Though I don’t share his conviction that the mental life of animal organisms “cannot be fully understood through the physical sciences alone,” I do believe that they are leaving out something important if they can’t explain the subjective nature of experience. My naturalistic worldview inclines me to conclude that consciousness is a fully natural (and eventually a fully tractable) phenomenon. And the history of science is chock full of phenomena that once seemed to us contrary to common sense but which turned out to be something entirely unexpected — or literally unimaginable.

And regardless of where we currently stand with an explanation of consciousness, materialist or not, Nagel’s value doesn’t consist in a détente between theism and atheism by virtue of throwing a bone to creationists and Intelligent Design proponents. So what if they get distracted by the smell of blood in the water? They’re in the minority — and when they’re done devouring their bit of chum there won’t be anything left for them anyway. I have confidence that intelligent and open-minded believers will evaluate the debate conscientiously. Nagel’s value lies in the fact that his (erstwhile) “celebrity” among public intellectuals forces a conversation about an important point on an important topic: in this case, that a true science of consciousness must be able to account for qualia, or subjective experience, or whatever you want to call it. You know what I’m talking about. I value renegades, and I can tolerate a considerable amount of risk if I think it will ultimately work out for good.

The final account may tell us that consciousness is something like light, which is made up of electromagnetic waves: the electromagnetic waves don’t cause light but simply are light. Or it may very well tell us that consciousness is a heretofore unknown fundamental feature of reality, like those elusive vibrating strings. It’s likely the final account will be the most counter-intuitive explanation human beings have had to accept. It will have to completely dismantle the Cartesian Theater, to put it in Dennett’s terms. This theater has been built of some of the strongest materials on Earth, and human beings have turned it into a sacrosanct temple, ornamented with all the ostentatiously shimmering desiderata of millennia.

I don’t know that I fully agree with Chalmers that consciousness should be considered a fundamental feature of the world; the equations of physics sure seem to be able to account for all the forces that exist in nature, at least on the human scale. But if the brain’s main function is executive control of the organism, charged with planning and decision-making and the like, why couldn’t it have evolved without consciousness? And even if scientists were able to definitively establish that consciousness is in fact a fundamental feature, or that the brain serves as “both the transmitter and the receiver of its own electromagnetic signals in a feedback loop” generating a special electromagnetic field that is the very seat of consciousness, that still wouldn’t account for the subjective flavor of our experience. Or would it?

I was able to mourn the death of the soul and lament the loss of free will, and still remain a happy inhabitant of the Earth. Can I mourn the death of my self?

65 comments:

"But if the brain’s main function is executive control of the organism, charged with planning and decision-making and the like, why couldn’t it have evolved without consciousness?"

Searle published a commentary in PNAS not too long ago ("Theory of mind and Darwin's legacy") about this issue. This question assumes that whatever executive functions the mind is capable of can be performed equally well without consciousness (i.e., consciousness has no function). But what is the basis of this assumption? Apparently it is the conceivability of zombies. But conceivability alone is not sufficient. We can conceive of birds that fly without wings, but that does not mean that wings have no function. We don't ask "why couldn't birds have evolved without wings?". That simply is not a very interesting question. Searle goes on to suggest many possible functions of consciousness.

I can never understand how zombies are conceivable. Say you are having lunch with a zombie. It looks at the beautiful color of the wine and observes that the ruby color is warm like a summer night, which evokes its childhood memories. It might even recite a verse of Shakespeare. How does the zombie do it without qualia? To a zombie, the ruby color of the wine is no different from the color of the sky. Since it does not feel, it has no reason to comment on the color of anything.

Yeah, I don't think zombies are possible. Hypothetically, maybe, but not actually. Any entity that would make such comments would be conscious, in my opinion.

With regard to the "epiphenomenality" of consciousness, I go back and forth about it; I'm currently leaning toward the idea that the self-awareness aspect of consciousness does actual work, but maybe qualia are epiphenomenal. I mean, how something feels to me doesn't seem to have an actual effect on anything.

"I mean, how something feels to me doesn't seem to have an actual effect on anything."

I don't agree. Feeling pain is much more intense than feeling an itch. It's easy to ignore an itch but it's very hard to ignore pain. If how something feels has no effect on anything, why is relieving pain such a big medical problem but relieving itching is not? Likewise, a deafeningly loud guitar solo is not the same as a softly played guitar solo. Imagine you wake up in the morning with your loudness spectrum inverted. A gentle bird chirp now sounds deafeningly loud. The roar of a tiger now sounds as soft as a kitten's meow. Would you still behave in an indistinguishable way? I find it very difficult to imagine such a scenario.

I would say that there's a difference between "sensation" and "evaluation." I think the sensation of seeing the color blue is different from how I feel about the color blue; or the "quale" of blue is different from the "valuation" of blue. The latter can have effects but not the former. Blue happens to be my favorite color, so when I see it, feelings of pleasure are generated, causing me to act (or at least contributing to my action). If I see a red shirt and a blue shirt, I'll buy the blue shirt, for example.

>Feeling pain is much more intense than feeling an itch. It's easy to ignore an itch but it's very hard to ignore pain. If how something feels has no effect on anything, why is relieving pain such a big medical problem but relieving itch is not?<

I'm not sure the qualia have much to do with this; I suspect it's rather your instincts about how to respond to each quale.

I mean, to extend Steve's example, we could easily design a robot to avoid the colour red and seek the colour blue. We'd have to have some representation of the colours, so we might refer to Red with the number 1 and Blue with the number 2 in our software.

So, when you say that how something feels must affect behaviour, this feels to me like insisting that the particular choice of number we use to represent colour is crucial for achieving the desired behaviour. The fact is, it isn't. The "quale" 1 (representing "Red") is "scary" to the robot with our current design, but if we switched the representations around it wouldn't change the robot's behaviour at all, it would just mean that now the "quale" 2 (now representing "Red") is "scary" instead.

In other words, it is perfectly conceivable to me that a person with some pathological condition could find pain quite tolerable while finding itching to be unbearable, without any change in the qualia involved.
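The robot analogy above can be sketched in a few lines of code (a hypothetical toy of my own, not anything from the post): what drives the robot's behaviour is which internal code is wired to "avoid," not the particular number chosen to represent each colour. Swap the representations and swap the wiring, and the behaviour is unchanged.

```python
# Toy sketch of the representation-swap argument: the robot's behaviour
# depends only on which internal code is flagged "avoid," not on the
# arbitrary number used to represent each colour.

def make_robot(colour_codes, avoid_colour):
    """Build a robot as a function from a perceived colour to an action.

    colour_codes: dict mapping colour names to arbitrary internal codes
    avoid_colour: the colour the robot is designed to flee
    """
    avoid_code = colour_codes[avoid_colour]

    def act(perceived_colour):
        code = colour_codes[perceived_colour]  # "sensation": colour -> code
        return "flee" if code == avoid_code else "approach"  # "evaluation"

    return act

# Two designs with the internal representations of red and blue swapped.
robot_a = make_robot({"red": 1, "blue": 2}, avoid_colour="red")
robot_b = make_robot({"red": 2, "blue": 1}, avoid_colour="red")

# Behaviour is indistinguishable even though the internal "qualia"
# (the codes) are inverted between the two robots.
for colour in ("red", "blue"):
    assert robot_a(colour) == robot_b(colour)
```

On this picture, the codes play the role the commenter assigns to qualia, and the wiring plays the role of instinct: inverting the former while preserving the latter leaves behaviour untouched.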

'Consciousness' is an unhelpful word because it means so many different things to so many people. In my musings on this topic I've pretty much abandoned it in favor of the more rigidly defined 'subjective awareness,' which ignores the behavioral aspects of our identity and focuses specifically on the experiential aspects.

There already is a term for the experiential aspects: "phenomenal experience". The term "subjective awareness" entails more than "phenomenal experience", because it ties a set of experiences with a subject.

Steve: "I have no problem believing that we have consciousness itself because it was adaptive. In my opinion, it’s easy to see why being able to simplify, organize, reflect on and prioritize the vast amounts of sensory information coming in would be useful to an organism such as ourselves."

Here your definition of consciousness is basically cognitive/functional, having to do with behavior control, and there's no question such neurally instantiated capacities were selected for in evolution. But at the end of your essay, you say "a true science of consciousness must be able to account for qualia, or subjective experience, or whatever you want to call it." Here consciousness refers to phenomenal experience, not to the cognitive functions with which it's associated (about those functions see Dehaene and Changeux, http://www.unicog.org/publications/DehaeneChangeux_ReviewConsciousness_Neuron2011.pdf ).

It's qualia, not their neuro-functional-behavioral correlates, that pose the hard problem of consciousness, about which you say "it seems like no one has any idea about how to even begin investigating it." But you point to a productive theoretical possibility when you say "I’m of the belief that the self, the 'entity' to which consciousness is generally attributed, is the brain’s internal model of all the internal-external stimuli (or of internal and external 'reality,' let’s say) that is simply impossible for us to recognize *as a model* while it’s there." (your emphasis)

Thomas Metzinger's thesis in his books Being No One and The Ego Tunnel (reviewed at http://www.naturalism.org/metzinger.htm ) is that phenomenal consciousness is perhaps necessitated by the recursive limitations and other features of our being complex representational, behavior guiding systems. The system of necessity has basic representational elements, and a self-world model, that it logically can't, and for functional reasons shouldn't, see directly *as* representations; thus these representations become untranscendable appearances for the system (cognitively impenetrable, hence qualitative), and they exist for the system alone (hence the categorical privacy of experience as a subjective reality). Note that this isn't a matter of the brain producing or causing consciousness as a further effect that could be detected by an external observer. Instead, consciousness could be a *non-causal* entailment of being such a system (http://www.naturalism.org/appearance.htm ). And as a first person, private phenomenal reality, it doesn’t play a causal role in behavior control over and above what its neural correlates accomplish in third person explanations of behavior.

All this suggests that consciousness isn’t, as Nagel and Chalmers hypothesize, a fundamental feature of objective reality as modeled by science, rather it’s a higher level system property. But as you’re inclined to think, it’s still a fully *natural* phenomenon since after all it’s a property of fully natural, physical creatures like us (and eventually artificial systems that attain our competencies).

You ask: "if the brain’s main function is executive control of the organism, charged with planning and decision-making and the like, why couldn’t it have evolved without consciousness?" Well, if something like Metzinger’s hypothesis is the case, then it couldn’t because phenomenal consciousness is necessitated by having those higher level capacities. But it wasn’t selected for *directly* since it doesn’t add to what the brain is already doing in guiding action.

I still haven't gotten around to reading Metzinger's books! Too many books, too little time :(

I think I agree that phenomenal consciousness/subjective experience is necessitated by our being "complex representational, behavior guiding system." But I guess I'm still stuck on the problem of *why* such a system has first-person, subjective experience...

Part of me certainly understands and accepts that organic systems of a certain type and level of complexity will be necessarily conscious (behaviorally, functionally, self-reflective, etc.), and that we are able to describe the cellular and bio/electro-chemical processes, but another part of me thinks that this still doesn't provide an adequate explanation of our mental life.

I think the problem is different (harder) with phenomenal experience than it is with something like contra-causal free will. Before adopting a naturalistic worldview, I took it as self-evident that I had that type of will, as surely as our ancestors thought the sun revolved around the Earth. But now I'm able to live in light of the fact that I don't have that kind of will, even while I live in light of the fact that I have a *voluntary* will, which is something different. In other words, I'm able to make a distinction between having the ability to deliberate and choose, and having those deliberations and choices being things that "I" create ex nihilo. I now have a more or less constant awareness that, as Nietzsche said, a thought comes when "it" wishes and not when "I" wish.

Or maybe take the example of "pain": we can describe the physiology of pain, but we can't describe or explain why pain feels bad to us, why there is a valuation of it. Why is it that I could say, "A certain stimulus caused the nerve endings in my finger to send signals to my brain, which in turn sent signals to my hand, which in turn caused my arm to withdraw my hand quickly," AND also say, "Damn! That f*cking hurt!"?

> we can describe the physiology of pain, but we can't describe or explain why pain feels bad to us<

But we can, can't we? Sure, we have difficulty explaining why it feels bad in *this* way rather than in *that* way, but it has to feel bad in order to hurt, in order for you to want to avoid it. This would not be too hard to understand if we were aliens studying human behaviour. It's just like any other powerful instinct.

As for why it makes you shout, again, that's an instinct. It's useful for social animals like humans to let others know when we're in pain.

Do you feel that your aversion to pain is fundamentally different to your aversion to bad smells, awkward social situations or tedious work?

I'm not sure I do. For me, these are just different drives and they all feel different only because different brain systems are involved, like the way that sight feels completely different to hearing even though both are senses.

Steve: "But I guess I'm still stuck on the problem of *why* such a system has first-person, subjective experience... "

Indeed, but I hope at least you'll change your mind that "it seems like no one has any idea about how to even begin investigating it." After all, some of those books you've not yet read might well contain good ideas on this very problem.

I agree this is much harder than the free will issue but it isn't as if no progress has been made. I look forward to your thoughts once you've had a look at Metzinger, Tononi, Dehaene and others, and I've got a recent book chapter on why consciousness does and doesn't matter you might find of interest: http://www.naturalism.org/Experautonomy.htm

"Sure, we have difficulty explaining why it feels bad in *this* way rather than in *that* way, but it has to feel bad in order to hurt, in order for you to want to avoid it."

That's the difficulty I'm talking about...and I don't think pain has to feel bad in order to hurt. I would compare it to the pupillary response to light, which is an unconscious reflex. Light hits the retina, signals get sent to the brain and back again, and the iris reacts accordingly. There's no "feeling" involved in the action of the iris.

From Wittgenstein's Philosophical Investigations, §87: "But then how does an explanation help me to understand, if after all it is not the final one? In that case the explanation is never completed; so I still don't understand what he means, and never shall!"—As though an explanation as it were hung in the air unless supported by another one. Whereas an explanation may indeed rest on another one that has been given, but none stands in need of another—unless we require it to prevent a misunderstanding. One might say: an explanation serves to remove or to avert a misunderstanding——one, that is, that would occur but for the explanation; not every one that I can imagine.

It may easily look as if every doubt merely revealed an existing gap in the foundations; so that secure understanding is only possible if we first doubt everything that can be doubted, and then remove all these doubts.

And §246: In what sense are my sensations private?—Well, only I can know whether I am really in pain; another person can only surmise it.—In one way this is wrong, and in another nonsense. If we are using the word "to know" as it is normally used (and how else are we to use it?), then other people very often know when I am in pain.—Yes, but all the same not with the certainty with which I know it myself—It can't be said of me at all (except perhaps as a joke) that I know I am in pain. What is it supposed to mean—except perhaps that I am in pain?

Other people cannot be said to learn of my sensations only from my behaviour,—for I cannot be said to learn of them. I have them.

The truth is: it makes sense to say about other people that they doubt whether I am in pain; but not to say it about myself.

I just read your draft of Experience and Autonomy, and I really like it. I think I pretty much agree with everything you've written there: "the functions themselves get the behavioral job done"; "appeals to subjective experience may not be needed in scientific accounts of voluntary and intentional action"; "human persons...are real agents that exert robust *local* control", etc.

So I don't think we really differ with regard to conscious experience (phenomenal consciousness), either. I agree with you when you say that the "first person reality of phenomenal experience...still awaits a conceptually and empirically transparent integration..." I suppose I'm not really stuck on the *why* so much as the *how* of phenomenal consciousness. And it's the *how* that I think hasn't been adequately touched by any philosophers or scientists so far. But whereas Nagel concludes that there must be a non-material explanation--and an upheaval in biology--I remain in an agnostic place with regard to what that explanation will be. And I say agnostic because while I'm completely convinced that any explanation will be a naturalistic one, I can't currently imagine what form it might take. Though I realize it will likely be a *very* counter-intuitive one.

Glad you liked E&A, and yes, the "how" of phenomenal consciousness is the hard problem. As to imagining the form of a possible explanation, I think Metzinger's representationalist program has promise, so I hope you'll check him out sometime - you might start with his book The Ego Tunnel. I think we can get at least in the vicinity of qualia by considering what follows from being a behavior-controlling representational system at our level of complexity and recursion (http://www.naturalism.org/appearance.htm ). Good luck!

Science measures and divides Nature, a Nature that is truly immeasurable, indivisible, and self-evident.

I watched some time ago a science teacher standing on a bridge over a river with her young science students, demonstrating how to measure the depth of a river. She had the children lower a weighted rope down to the river bottom and then, raising the rope, measure the length that was wet. We are what we are taught: that the depth of a river is measurable. But is it? What if the weighted rope landed on a rock, a large rock or a small rock, and at the same time a wave came by, for surely it did; is a river, is Nature, measurable? Surely the river bottom is not constant, nor is the surface, so what of Nature is? Is the speed of light? Are you certain? Is anything, or everything, measurable? Am I, are you, are they, is it?

One day soon science will find the cracks in their own foundation, the flaw is measure, and the shoulders of giants they teach us to stand on are as weak and shaky as the measure of a river with a weighted rope, or a string.

It seems to me that claims that consciousness is something more than an emergent phenomenon of the chemical processes in the brain are extraordinary, as they invoke untestable non-physical existents. The only argument against physicalism seems to amount to "there is something special about qualia" (verging on an argument from ignorance), and I think you'll have to do better than that.

I suppose I should have phrased the above more temperately so as to reflect my own ignorance. I am a scientist and not a philosopher, so it's likely that I don't understand Nagel's argument (or Neumann's agnostic stance regarding it). Nevertheless, it would seem as though you'd need a very good argument to support the contention that purely physical phenomena cannot account for consciousness given the success of the materialist programme in explaining everything else.

"The only argument against physicalism seems to amount to "there is something special about qualia" (verging on an argument from ignorance), and I think you'll have to do better than that."

I agree. That's why I said that someone like Nagel needs to offer some way of testing it. He may very well do this in his book; I don't know, I haven't read the whole thing.

"Nevertheless, it would seem as though you'd need a very good argument to support the contention that purely physical phenomena cannot account for consciousness given the success of the materialist programme in explaining everything else."

Again, that's exactly my point. If he doesn't offer this argument, or offer a way to approach the problem that will shed real light on what he thinks is non-material, then he's no better than the folks at the Discovery Institute.

>given the success of the materialist programme in explaining everything else.<

But don't you think everything else is a bit different?

Consciousness, moral absolutes, free will etc are not in the same category as heat, light, evolution etc.

These are phenomena which are widely believed to exist, but which cannot be detected empirically. It is entirely possible that they don't exist at all, or at least not in the manner currently believed.

The only reason we have to suspect that consciousness exists is our strong intuition that it does. Seeing as this intuition is only available to the subject, I see no reason to think that the materialist programme is going to have any more success in finding it than it did in proving the existence of objective morality or free will.

While some philosophers are pondering, computer scientists (publishing in the International Journal of Machine Consciousness, the First International Workshop on Artificial Consciousness, etc.) are working on making conscious robots. Perhaps these robots will have reflective/introspective code of the kind Michael Graziano writes about in "How consciousness works" (the attention schema theory of consciousness) in Aeon Magazine.

Your statement is just another version of Ken Ham's "were you there?" nonsense. If direct observation were the only measure of an explanation, we'd be still counting the night stars by eyesight rather than discovering exoplanets, cosmic expansion and microwave background radiation.

Consciousness doesn't deserve to be on the high pedestal: it's a scientifically explorable question, and will succumb to our investigations like everything else.

Phenomenal consciousness is by definition something that cannot have any empirical side effects. It is by definition only accessible to the subject. This is entirely unlike evolution, where though much of it is not directly observed, it has had an empirical effect on the physical world which we can see, measure and predict.

That's not to say it can't be understood, but I think it will need philosophers to unpack the problem.

> Phenomenal consciousness is by definition something that cannot have any empirical side effects.

How can you justify that phenomenal consciousness has "no empirical side effects"? How do you know it is "only accessible to the subject"?

Consciousness does have empirical side effects (neural correlates), and it isn't only accessible to a subject (fMRI, subject communication, AI studies). You are just defining a property to be outside science because "you weren't there".

Dis, how do you feel about philosophical zombies? You seem to be suggesting that they are coherent. I wouldn't agree, but it might be a useful point of reference as the extreme version of the "consciousness has no empirical effects" hypothesis.

>How can you justify that phenomenal consciousness has "no empirical side effects"? How do you know it is "only accessible to the subject"?<

Because that's what it means. Any physical effects (e.g. neural correlates, fMRI activity) are not phenomenal consciousness but only correlated with it. Consciousness itself is a broader concept which is not obviously specifically tied to our particular biology or brain organisation.

For example, if we had a computer program which seemed to exhibit consciousness and intelligence, there is no way we could take our learning about what goes on in human brains to settle the question of whether it was conscious.

For me, human-like consciousness depends on the computations being carried out in human brains. You can't have those computations without having consciousness because it is the computational process that is conscious.

So, philosophical zombies which are physically identical to humans are certainly impossible.

Philosophical zombies which are physically dissimilar to humans (e.g. AIs which pass every empirical test for consciousness)? Probably impossible. There may be a way to produce that behaviour without having consciousness (e.g. some very complicated or infinitely long stimulus-response script), but I doubt it. I suspect it's much simpler to just have consciousness.

But in order to accept this view you must first be persuaded of the computational theory of mind, which is primarily a philosophical issue. If you accept the CTM, you might then be able to empirically "detect" consciousness by proving that the algorithms going on are of the sort that would experience consciousness.

But you can't detect consciousness itself because it's not an empirical phenomenon, so that foundational philosophical grounding is essential.

>How can something be "only correlated" but not an "empirical side effect"? What do you think a "side effect" is?<

Correlation is not causation! Side effects are caused by the thing they are side effects of. If there is any causal relationship between phenomenal consciousness and the physical events that happen in the brain it is the physical events that cause phenomenal consciousness and not vice versa. Since phenomenal consciousness itself has no physical side effects, we can only detect its existence in our own individual cases. We only infer that other people are conscious. We have no way of verifying this belief.

Again, my point is that there is no reason to believe that the relationship between consciousness and neural events is necessary rather than contingent. If there were, then there would be no choice but to accept that the computational theory of mind is false and that computers could never be conscious, as they don't have the hardware to have the same neural events.

And yet the CTM persists. I for one think it is certainly true. Therefore whatever you see happening in the brain is only correlated with consciousness, it is not caused by it.

>But he claims to be a non-materialist naturalist, and it’s difficult to determine what that means — or even if that’s possible. On the face of it, it sounds like an oxymoron.<

Hmm. I think I'm probably a non-materialist naturalist, and I don't think that's a problem. I'm a mathematical Platonist, which means I'm a non-materialist, but I'm a naturalist because I think all physical phenomena have natural physical causes.

This is relevant to philosophy of mind because I think the mind is fundamentally an algorithmic/mathematical structure. It is genuinely immaterial. It doesn't directly affect the physical world, instead the physical correlates of the mind in the brain affect the world.

The physical world doesn't directly affect the mind, instead the physical world alters the brain which necessarily changes the mind which is just the algorithm which corresponds to the behaviour of that brain.

(If you think it doesn't make sense for the physical world to alter an abstract structure like an algorithm you can think of swapping it out for another algorithm entirely.)

I don't understand your view. You say the mind is immaterial and 'doesn't directly affect the world'. Then you say that the 'physical correlates of the mind in the brain affect the world' - but these physical correlates are themselves 'in' the world - presumably - so how does the immaterial mind, which doesn't directly affect the world, affect these physical correlates - either directly or indirectly? You seem to be skipping over the interaction problem.

Plus, earlier, you say: 'If there is any causal relationship between phenomenal consciousness and the physical events that happen in the brain it is the physical events that cause phenomenal consciousness and not vice versa.' So it sounds like the immaterial mind cannot, in your view, cause physical events in the brain - so how, again, does the mind produce action in the world via physical correlates in the brain?

My position is that the Computational Theory of Mind is true. In my view, this means the mind has to be substrate-independent, and so the mind and consciousness are not physical phenomena but abstract.

Everything that happens objectively or physically can be understood ultimately in reductionist terms by looking at the interaction of atoms in the brain.

Everything that happens subjectively or mentally can be understood by regarding the mind as an abstract object which supervenes on the physical interactions of the brain. This is also a useful shortcut in understanding physical events, but only as an approximation.

There is a one-to-one mapping between the abstract mind and the physical brain. Changes on the physical level change the abstract level in the same way that erasing a part of an equation and rewriting it on paper changes the abstract object represented by the physical paper.

The abstract level does not change the physical level -- it doesn't need to because it supervenes on it. The physical level can take care of itself. The point of viewing the two as separate is to make clear that there is no need for any particular mind to be implemented by any particular physical substrate. Any substrate will do as long as this isomorphic mapping to the mind remains the same.

Also, as a mathematical Platonist, regarding the mind as an abstract object makes sense and helps to explain my view that the physical correlates of mental events are not actually the same thing as mental events, in the same way that two apples are not the number two.

Taking subjectivity out of it, I see it as very like the distinction between hardware and software. Software is abstract as it is information, not physical. Different machines with entirely different physical implementations can still run the same software. In principle, any software program can even be run by a human with a pencil and paper. So software is not physical. Software events correlate with physical events in the computer but they are not identical.
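That hardware/software point can be made concrete with a toy sketch. The following Python snippet is entirely my own illustration, not anything from the discussion: it runs the same abstract program on two different "substrates", a native Python loop and a crude string-based simulation of pencil-and-paper arithmetic, and the two physical implementations realise the same abstract object and agree step for step.

```python
# Toy illustration of substrate independence: one abstract program
# (a rule for updating a number), two different "substrates".

def collatz_step(n):
    """The abstract rule: halve even numbers, map odd n to 3n + 1."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def run_native(n, steps):
    """Substrate 1: the state is a Python integer in machine memory."""
    trace = [n]
    for _ in range(steps):
        n = collatz_step(n)
        trace.append(n)
    return trace

def run_on_paper(n, steps):
    """Substrate 2: a crude 'pencil and paper' simulation in which the
    state is a written decimal numeral, re-read and re-written each step."""
    page = str(n)
    trace = [page]
    for _ in range(steps):
        page = str(collatz_step(int(page)))
        trace.append(page)
    return [int(s) for s in trace]

# Entirely different physical stories, identical abstract behaviour.
assert run_native(7, 10) == run_on_paper(7, 10)
```

On this view, the trace is the "software event": the two runs differ in every physical detail yet correspond to the same abstract sequence, which is the sense in which the program is not identical to any one of its implementations.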

I think the view that consciousness is an illusion is entirely defensible.

If we can understand in principle why and how an intelligent electronic automaton or philosophical zombie might act as if and genuinely believe itself to be conscious, I don't think we have any robust grounds for believing that we are any more so.

If you think such an entity is mistaken then perhaps you are also so mistaken.

As it happens, I don't think the entity is mistaken because I think the abilities and intuitions it exhibits are precisely what consciousness is.

It's this sense that there's anything mysterious, anything about subjective experience that needs a radical new kind of explanation that is illusory.

Well, I’m surprised at the lack of comment, thus far, on this subject. It’s Monday, after all. Perhaps others will have collected their thoughts by tomorrow. A few blogs ago, Massimo had this to say:

"I keep being baffled by the popular contention that consciousness can’t be defined. It can, and it has, by many people. Try this for size: consciousness is the ability to have first person experience; self-consciousness is the ability to reflect on first person experience. So, many animals have the former, human beings and possibly a few other species are likely to have the second. Not that difficult.”

“The real problem, as you say, lies with the testability issue. How do we know if another animal species (let alone a computer) is self-conscious? I don’t have an answer to that question . . . .”

I liked the conciseness, but was unconvinced that the distinction between “consciousness” per se and “self-consciousness” travelled very far. This introduces, perhaps unnecessarily, the subject of point of view. What are we to say then about one’s “ability” to adopt a third person point of view? Is this something other than self-consciousness? I know, this brings up the subject of introspection. But my question is: aren’t we ultimately going to end up talking about consciousness and what we mean by it?

Nagel suggests that in saying “We don’t know what it’s ‘like’ to experience reality as a bat experiences it” he’s articulating something of importance. Well, he might have said that a bat does not know what it is like to experience reality as a human. So I suspect that a triviality might be hiding somewhere. It appears that any claim to consciousness can make a claim to self-consciousness because we are essentially postulating that experience and its robustness, some vague threshold, supports the moniker of “self” consciousness. Now we are perhaps engaged in some measurement of quality or degree or intensity of experience.

Take your example of tasting a glass of wine, and perhaps the contention that I can’t know (duplicate?) your first-hand experience, your self-conscious experience of drinking it. Well, let us say that you have been raised in a winery and your experience of all things wine is quite extensive. I say, “I like it.” You say, “But that’s not what I had in mind,” and go on to explain in detail all sorts of sensations that you experience. I say, “Yes, I understand your point.” Aside from the fact that you are perhaps more sophisticated in this matter, what have you said: you are not me? And is this what we mean to say about consciousness?

And so, we retreat to neuroscience so we can discuss the how of consciousness. For example, this piece from a recently read article about it: “I’ll be explaining my theory about how the brain — a biological machine — generates consciousness.” As if, in the presence of fire alarms and smoke and flames, one proposed to explain what fire is.

Embedded in all such discussions is an acknowledgement that something which we refer to as consciousness is real, but we don’t know why it is there. What did nature have in mind, as it were, by producing or generating it? Better we resurrect the archaic term “wherefore” in discussing it so that we can suggest its placement in space. I sense that the argument perhaps unwittingly proceeds erroneously from the “self.” That is, we acknowledge “self-consciousness” first and then go on to the more general subject of “consciousness.”

At the risk of being completely humiliated, there is something to ponder in the exasperation one senses in the writings of Zen monks like Huang Po on this subject: “But whether they [the enlightened] transcend conceptual thought by a longer or shorter way, the result is a state of Being: there is no pious practising and no action of realizing.”

It seems to me that subjective experience and objective physical description (be it neural or otherwise) are just different (incomplete) models of the same underlying ontological phenomena.

I don't see any value or reason for labeling either of these models as fundamental, since each has aspects that are not accessible by the other. A subset of the physical activity in the brain (perhaps/possibly the part unified through EM waves by attention) seems to be correlated with conscious experience.

Why should we call that conscious experience an illusion? Is it simply to allow physical description to be fundamental? If part of the physical goings on are inseparable from our conscious experience, it seems to me the key relationship we should pursue lies in the feedback dynamics between our conscious experience and unconscious patterns.

My best guess is that most of our conscious experience is an expression of unconscious causes, but I speculate that we do have some passive access and through practice can orient our awareness so that 'we' can have some non-trivial influence in the causal chain.

Well, saying it's illusion is not really the same as saying it doesn't exist or that there is no phenomenon to explain.

If consciousness is an illusion then that's simply what consciousness is. As Dennett put it, it's like the paradox about magic.

"Real magic" doesn't exist, so it's not real! The magic that actually does exist, that actually is real, is an illusion, just a bunch of tricks. But that illusion is a real phenomenon, and we call it magic even though it's not "real magic".

So perhaps "real consciousness" is just like "real magic" and doesn't exist either.

But why assume that the physical description must trump conscious experience even if in principle there is a subset of the physical description that perfectly overlaps (correlates) with the experience? You still don't get the experience from the objective physical description.

This is quite a different scenario than the magic analogy, where subjective experience is intentionally tricked so that the subject's experience is altered. In the magic scenario the magician knows what happened.

In the objective/subjective problem it is just a matter of preference in picking which is 'real'. Neither is complete, so why must we say only one aspect is real?

Well, we don't need to. My definition of real is flexible. I'm not sure reality is a very useful or coherent concept when applied to phenomena other than physical objects, as evidenced by the difficulty in deciding if mathematical objects are real.

I just think those who think there is something very strange going on that needs to be explained may be incorrect. If we can understand how the brain "tricks" us into thinking we have consciousness, there may not be anything else to explain.

That's why I have said that the illusion of consciousness is simply what consciousness is. In this view, once we understand how an entity can come to believe it is conscious, there is nothing else to explain.

This makes philosophical zombies incoherent: philosophical zombies are supposed to believe themselves to be conscious while actually being unconscious.

To put it succinctly, I suspect that it is an illusion that consciousness is anything more than information processing. That qualia don't really exist in the way they seem to - that qualia are nothing more than the identifiers of certain percepts and that they don't actually have any content.

If we simulate a human brain, it will of course believe itself to be conscious in the same way that we do. John Searle would view this simulation as being deceived, since he thinks it isn't really conscious. My point is that if John Searle is right then he has no way to be certain that he is not also so deceived. Since this makes nonsense of the idea of consciousness, I think it makes more sense to think of this supposed "illusion" as consciousness itself.

"If we simulate a human brain, it will of course believe itself to be conscious in the same way that we do. John Searle would view this simulation as being deceived, since he thinks it isn't really conscious. My point is that if John Searle is right then he has no way to be certain that he is not also so deceived. Since this makes nonsense of the idea of consciousness, I think it makes more sense to think of this supposed "illusion" as consciousness itself"

I think we could maybe program a relatively convincing simulation of a brain to report that it has first person experience and feelings, thoughts, and the whole lot. We could tag some computations as conscious and some as unconscious (the red attribute as conscious, say, and the construction of dopamine molecules as unconscious), and the simulation would report on itself just as humans do. But I don't see any reason to think it would have first person experience as we do.

"To put it succinctly, I suspect that it is an illusion that consciousness is anything more than information processing. That qualia don't really exist in the way they seem to - that qualia are nothing more than the identifiers of certain percepts and that they don't actually have any content"

We understand that human sensations in general are not a one-for-one reflection of an objective reality, so it seems possible, if not probable, that the sensation of first person experience is the same kind of construction. But I feel that even if we refer to first person experience as an illusion, we are no closer to understanding how our sensation of the illusion of first person experience, and all the illusory sensations it contains, comes about.

In considering brain simulations, it is important to remember that we are not simply programming it to report that it is conscious. If it is a true brain simulation, then everything that happens in your mind has an analogue in the simulation. The simulation reports that it is conscious not because we designed it to do that but because it really has a belief that it is conscious.

If you had a discussion with such an entity, you would probably find it hard to convince it that it had no first person experience, or that its consciousness was an illusion. Every argument available to you as you defend the reality of your own first person experience would also be available to it.

So if you still think that you are conscious but that it is not, then it seems to me that you are endorsing a weaker version of the concept of philosophical zombiehood (where the zombie is not physically similar to a human), and your position seems rather weak to me.

But more importantly, since you allow two kinds of intelligent entities, conscious and unconscious, which are indistinguishable even to themselves then you have no grounds on which to claim that you are a conscious entity and not an unconscious entity. The very same mechanisms that "trick" the simulation into mistakenly believing it is conscious could be responsible for your belief in your own consciousness.

As this seems to me to be a silly metaphysical position, I reject this and instead propose that both you and the simulation are conscious -- that consciousness is the computations that lead you and the simulation to believe that you are conscious.

I agree that this does not constitute a detailed description of how our sensation of first person experience comes about, but to me it takes a lot of the mystery out of it. I think it means that we only really need to understand how the brain works in an objective sense - no further explanation of subjective phenomena is required. If we can objectively explain how the brain makes us behave as if and report that we are conscious, the job is done.

"If we don't know how the 'first person experience' * happens how can we simulate it ?"

I can think of two ways this might happen.

1) We might allow a conscious digital organism to evolve in a virtual environment, having natural selection do the work for us.

2) We might have a means of automated scanning of a brain and then reproducing that brain in a simulation. The brain as a whole may be too complex for us to understand, but as long as we understand the functionality of its constituent components (e.g. neurons), then it ought to be possible to simulate it.

But finally, I did not actually say that we would never have a detailed description of first person experience, nor did I say that accurate brain simulations are an imminent prospect. It may be that we never accurately simulate a brain, and it may be that by the time we can we will have a detailed understanding of how the brain works.

There are many details left to be discovered. Furthermore, I am not a neurologist, so I am not attempting to give a comprehensive account of the brain's workings. Far from it. I'm just arguing that if we can see that a brain simulation ought to be possible in principle, then there is no fundamental mystery of subjective experience.
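As an aside, the "understand the components, simulate the whole" idea can be illustrated in miniature. The sketch below is my own toy example, not anything proposed in this thread, and every parameter value is invented: it simulates a leaky integrate-and-fire neuron, one of the simplest functional descriptions of a neuron used in computational neuroscience. Simulating a brain in the sense discussed here would mean wiring up vast numbers of far more faithful component models.

```python
# Toy component-level simulation: a leaky integrate-and-fire neuron.
# All parameters are illustrative, not physiologically calibrated.

def simulate_lif_neuron(input_current, dt=1.0, tau=10.0,
                        v_rest=0.0, v_threshold=1.0):
    """Return the spike times of a leaky integrate-and-fire neuron.

    input_current: one input value per time step of length dt.
    The membrane potential v leaks toward v_rest with time constant tau,
    integrates the input, and fires (then resets) on crossing v_threshold.
    """
    v = v_rest
    spikes = []
    for step, current in enumerate(input_current):
        # Leak toward rest, then integrate the input for this time step.
        v += dt * ((v_rest - v) / tau + current)
        if v >= v_threshold:
            spikes.append(step * dt)
            v = v_rest  # reset after a spike
    return spikes

# A constant drive above the leak produces a regular spike train;
# zero drive produces none.
spike_times = simulate_lif_neuron([0.15] * 100)
```

The point of the toy is only this: nothing in the function requires understanding the whole network it might sit in, yet large assemblies of such components are exactly what whole-brain simulation projects propose to run.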

I appreciate your input, I'm just starting to sort these things out for myself.

“We might have a means of automated scanning of a brain and then reproducing that brain in a simulation. The brain as a whole may be too complex for us to understand, but as long as we understand the functionality of its constituent components (e.g. neurons), then it ought to be possible to simulate it”

Not sure. Even supposing we can understand the brain and its nervous extensions (and its supporting systems, which may turn out to be mostly functional too) at a resolution that captures all that's needed, and supposing all that is needed is amenable to 'scanning', we still probably need to contend with multiple scans through time, and the seemingly necessary and ongoing nature of these processes; and the need to get the simulation of these processes rolling and keep it going coherently without breaking down.

“We might allow a conscious digital organism to evolve in a virtual environment, having natural selection do the work for us”

Agreed, I think we would probably need a system that can cope with the issues I mentioned above and also deal in the same fashion with the exponential interactions that would occur with and within a realistic simulation of an ecological and evolving environment (though I think simplistic simulations are not necessarily without value).

“a brain simulation ought to be possible in principle, then there is no fundamental mystery of subjective experience.”

I think it may be possible, but I don't think it is at this point feasible to even start instantiating these ideas, though I think entertaining such things in the abstract and philosophically is useful.

And I agree there is no 'mysteriousness' about 'subjective experience', except in the sense that we don't understand how it happens.

Well, maybe so in order to arrive at an understanding of how the parts work. But if we already have that understanding we only need one scan. We can let the simulation evolve it from there.

"and also deal in the same fashion with the exponential interactions that would occur with and within a realistic simulation of an ecological and evolving environment"

I think we're losing sight of the idea that faithful brain simulation is more of a thought experiment than a practical possibility right now. It may never be possible. Or it might!

"And I agree there is no 'mysteriousness' about 'subjective experience', except in the sense that we don't understand how it happens."

Well, what I'm trying to argue is that there may be no phenomenon to understand. What we need to understand are the objective processes which lead brains to believe, claim and behave as if they are conscious.

I don't think we need a further step where we explain how these objective processes actually do result in real subjective experiences, because I think that once you have explained the former there is no more mystery.

"maybe so in order to arrive at an understanding of how the parts work. But if we already have that understanding we only need one scan"

I don't see it; the assumptions are still intractable or beyond reason. My idea was more of an attempt to point out that the problems of simulations are vast, multiple, and interrelated: if we clean up one area in our thought experiment we are not "discovering plausibility", because we have simply moved the intractable complexity to the periphery of our attention or thought experiment.

"the idea that faithful brain simulation is more of a thought experiment than a practical possibility right now. It may never be possible. Or it might!"

I'm thinking less and less that it's possible, or can be a coherent thought experiment, and more and more that it is more akin to fantasy.

"Well, what I'm trying to argue is that there may be no phenomenon to understand. What we need to understand are the objective processes which lead brains to believe, claim and behave as if they are conscious"

I don't think that's a useful avenue of thought considering our present state of knowledge and understanding (and as we gain new knowledge I don't think there is reason to assume that could help us make things clearer, but who knows what may be in the future).

"I don't think we need a further step where we explain how these objective processes actually do result in real subjective experiences, because I think that once you have explained the former there is no more mystery"

As I mentioned earlier, I think you have just shifted the complexity of the issue. Moving the "problem" of understanding consciousness to the understanding of "objective processes" still leaves the whole issue just as intractable as it was.

Personally I think there may be areas we may never understand fully (at least not understand at the level needed to reproduce them, or not understand them even at very simplistic levels; and even if we can understand more and more of these and similar subjects, we never be able to noticeably reduce the size of the subject's areas of unknown).

I don't mind if you think that brain simulations are fantasy, but that doesn't affect the argument. Lots of thought experiments are fantasy but they are nonetheless illuminating.

I really do think that brain simulations ought to be possible in principle, however difficult the practical issues. Whether it is feasible is irrelevant to the point of figuring out what it would entail should such a thing be achieved.

If you think it is incoherent or impossible in principle, I disagree, and would be interested to learn how you would defend this position.

>Moving the "problem" of understanding consciousness to the understanding of "objective processes" still leaves the whole issue just as intractable as it was.<

But do you understand why I think it doesn't? If we understand the objective processes, we understand how the brain comes to believe it is conscious and report that it is conscious. What you are trying to explain then is something more: how it is actually conscious.

But what if it isn't? If we come to understand these objectively verifiable processes which make you believe that you are conscious, then what reason do you have to think that there is such a phenomenon as "real" consciousness at all? It could be that these objectively verifiable processes are simply deceiving you (this is what Searle seems to think happens in The Chinese Room, even if he might not phrase it like that).

Or we could just refer to consciousness as comprising these objective processes, and do away with the distinction between real consciousness and illusory consciousness.

I forgot the word "may" in my previous comment: "we *may* never be able to noticeably reduce the size of the subject's areas of unknown".

"Lots of thought experiments are fantasy but they are nonetheless illuminating."

I think thought experiments are useful, fantasy or not, and I think they can also sometimes make clear that we don't have anything to support the idea that something is possible in principle. With my comment I meant to imply not that thought experiments are fantasy, but that the idea of brain simulations is more fantasy than possible in principle.

"would be interested to learn how you would defend this position."

At this stage I'm seeing a lot of problems that seem to point to the incoherence of "brain simulations" but I don't want to get into that here. I'm just setting up a blog, and if I get something clearer on the subject I will give you the link.

"It could be that these objectively verifiable processes are simply deceiving you"

Yes, of course, I think I'm following you, but that doesn't help us actually understand "the objectively verifiable processes which make you believe that you are conscious".

"Or we could just refer to consciousness as comprising these objective processes, and do away with the distinction between real consciousness and illusory consciousness."

If we don't know what it feels like to be color-blind, we don't possess full knowledge of Daltonism?

If the question sounds absurd, it's only because reframing the issue in context of something homely and common takes away any pseudo-profundity. There's some funny business with the meanings of explanation and knowledge going on. At a guess, there is an implicit assumption that only direct intuition counts as true knowledge? I rather suspect no discussion of subjective experience as knowledge has value until there is something like an operational definition of knowledge.

On the other hand, if we are tone-deaf, does that mean we can't have full knowledge of music?

If the question sounds trivial, it's only because we were never much interested in knowledge of music but in the experience. One of the oddities of this issue is that, if the assumption is that true knowledge is inherently personal, there would be no such thing as "knowledge," in the sense of something collective or public. Is the goal of this problem to privilege personal intuition?

What a person knows in principle can be demonstrated. What a person feels can not, according to this presentation of the problem. Every other alleged human being or organism, or even object, or the entire universe, could be possessed of either no consciousness or an inscrutably alien consciousness.

No one lives as if they think a cloud could be uniquely, cloudily conscious of the people it drifts over. No one lives as if they really expect other people to have strange, other minds. Indeed, they tend to be quite unnerved when evidence to the contrary emerges. We tend to think of insanity as a damaged consciousness, at best, or something demonic, at worst.

Could it be the whole debate stems from another eruption of religious thinking?

I have no problem with Nagel apart from the fact that he seems to me to be making a claim about the limits of science, yet his own suggestion seems equally puzzling. How does adding the concept of "immaterial" change anything? Does it provide an explanation for consciousness as he refers to it? I don't even see it getting close to an explanation. He seems to be basically saying: science doesn't know what consciousness is, therefore my favorite pet idea is true. I'm not saying it isn't, I'm just saying he hasn't actually provided us with a reason why his immaterial idea is true. It could just be that we face an epistemological wall when it comes to consciousness and will never understand it.

I too have been a little dismayed at the treatment Nagel has received. To be fair, my first reaction was also defensive, in that his book does sound an awful lot like giving aid and comfort to creationists; but when the universe is considered in a more abstract sense, one that contains features like "landscapes" and "attractors," his conclusions almost seem banal.

Consciousness is definitely going to take a metaphysical leap of some kind, though... Just spitballing: "Qualia are that via which abstractions are real entities such that we can include them in our ontology." That sounds a little panpsychist... Or how about, "Subjective experience is the way in which the real activity of manipulating symbols and meanings is, in fact, real and not just a label."

I get the feeling Chalmers might be onto something in a kind of anthropic, "tree falling in the woods" sense: A universe without any conscious observers is, in a way, over before it begins. There's no way to "pick out" a moment in space or time, so the notion of their extent becomes kind of meaningless. Our talk about a universe with no one in it is talk not about a universe at all, but about a mental object associated with a conscious observer. If we imagine what it's actually like from the inside, it's not a lifeless wasteland, but like the ur-nothingness of the visual field behind our heads or in the span of unconsciousness: not even empty. In this sense, I think it might be meaningful to say consciousness is a kind of fundamental object, insofar as it might be that which gives substance to dimensionality by privileging a given point within it.

I'm-a throw this out there: What if, in some realm of Mathematical Platonism space, there is a space of possible qualia, isomorphic in some way with complex mathematical patterns, a bit like the sand art designs you get on carefully prepared glue-on-paper? As to what the parameters of this space are, I can't even imagine, but we know it has structure, which we can see in things like the color wheel and musical octaves. Maybe, as we encounter other intelligent beings or enhance our own senses, we'll uncover more of this space, and that structure will start to become more self-evident and necessary, like the Complex and Real number axes, or the Natural numbers emerging from set theory. It might simply be that, in the same way squaring a number *is*, in some abstract way, a real geometrical square, and all the calculations for the Lie Group E8 *are* this crazy hyperdimensional object, other types of patterns simply *are* a color or a sound, corresponding to some point in a space of all possible qualia, which might itself simply be fundamental or "necessary" in the same way the space of geometrical objects is.

Excellent post, Steve. I enjoyed how you wrapped dream experiences and transition states into consciousness questions.

We all know that many paradoxes are a function of the language used to describe them. So, since you also tied atheism into the post, can I ask: if you could tweak the definition of a god from a super-power (or set of super-powers) to something more integrated with human consciousness, would you feel differently? For instance, if it can be shown that, for all we know and may find out about 'individual' consciousness, such consciousness taps into a group consciousness that can be said to be 'alive' in that it reacts to its environment and behaves 'intelligently', do you think that might provide more running room for god definitions? The scientific tests producing empirical data would be mostly sociological and psychological, but would serve to prove that there was a unitary behavior based on the actions of many individuals, and that such behavior is somewhat god-like in the commonly known sense.

Science plays a dice game of probability with Nature's absolute, and sadly will never connect mind and body until it resolves its own divisive issues. The solution is much simpler than (conscious) thought!

"The final account may tell us that consciousness is indeed something such as light that is made up of electromagnetic waves, where the electromagnetic waves don’t cause light but simply are light;"

This seems to me to be more like an equivocation on "light." (I was going to call it a subtle equivocation, but a quick review shows me that this distinction is the basis of Kant, at least.) It's a term both for what we experience (the phenomenon) and for the thing-in-itself. I don't have any problem differentiating those two, but I also don't have any problem connecting them, as I understand what is sensed, and I understand the mechanism by which we sense it.

I don't find consciousness to be "the hard problem." What is the quale "red" for a human? What is the quale "hot" for a thermostat? Our qualia are just "hot for a thermostat" correlated with thousands of experiences of that quale in thousands of contexts, with millions of analogies to compare them to, and millions of desires to modify them.

I've been studying Searle for a couple of years for exactly this question, and I'd like to read some critical analyses of Searle's Chinese Room, because my complete contravention of the Chinese Room seems too obvious to be unique.