The Conscious Mind was something of a blockbuster, as serious philosophical works go, so a big new book from David Chalmers is undoubtedly an event. Anyone who might have been hoping for a recantation of his earlier views, or a radical new direction, will be disappointed – Chalmers himself says he is a little less enthusiastic about epiphenomenalism and a little more about a central place for intentionality, and that’s about it. The Character of Consciousness is partly a consolidation, bringing together pieces published separately over the last few years; but the restatement does also show how his views have developed, broadening into new areas while clarifying and reinforcing others.

What are those views? Chalmers begins by setting out again the Hard Problem (a term with which his name will forever be associated) of explaining phenomenal experience – why is it that ‘there is something it is like’ to experience colours, sound, anything? The key point is that experience is simply not amenable to the kind of reductive explanation which science has applied elsewhere; we’re not dealing with functions or capacities, so reduction can gain no traction. Chalmers notes – justly, I’m afraid – that many accounts which offer to explain the problem actually go on to consider one or other of the simpler problems instead (more contentiously he quotes the theories of Crick and Koch, and Bernard Baars, as examples). In this initial exposition Chalmers avoids quoting the picturesque thought experiments which are usually used, but the result is clear and readable; if you never read The Conscious Mind I think you could perhaps start here instead.

He is not, of course, content to leave subjective experience an insoluble mystery and offers a programme of investigation which (to drastically over-simplify) relies on some basic correspondences between the kind of awareness which is amenable to scientific investigation and the experience which isn’t. Getting at consciousness this way naturally tends to tell us about the aspects which relate to awareness rather than the inner nature of consciousness itself: on that, Chalmers tentatively offers the idea that it might be a second aspect of information (in roughly the sense defined by Claude Shannon). I’m a little wary of information in this sense having a big metaphysical role – for what it’s worth I believe Shannon himself didn’t like his work being built on in this direction.
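For reference, the Shannon sense of ‘information’ is a purely quantitative measure over probability distributions. A minimal sketch of the measure (the function name is my own; nothing here comes from Chalmers):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2 p), measured in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin toss carries one bit; a certain outcome carries none.
print(shannon_entropy([0.5, 0.5]))  # 1.0
print(shannon_entropy([1.0]))       # 0.0
```

Nothing in this measure concerns meaning or experience – it is a statistic of symbol frequencies – which is perhaps one reason to be wary of giving it a large metaphysical role.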

The next few chapters, following up on the project of investigating ineffable consciousness through its effable counterparts, deal with the much-discussed search for the neural correlates of consciousness (NCC). It’s a careful and not excessively over-optimistic account. While some simple correspondences between neural activity and specific one-off experiences have long been well evidenced, I’m pessimistic myself about the possibility of NCCs in any general, useful form. I doubt whether we would get all that much out of a search for the alphabetic correlates of narrative, though we know that the alphabet is in some sense all you need, and the case of neurons and consciousness is surely no easier. Chalmers rightly suggests we need principles of interpretation: but once we’ve stopped talking about a decoding and are talking about an interpretation instead, mightn’t the essential point have slipped through our fingers?

The next step takes us on to ontology. In Chalmers’ view, the epistemic gap – the fact that knowledge about the physics does not entail knowledge of the phenomenal – is a sign that there is a real, ontological gap, too. Materialism is not enough: phenomenal experience shows that there’s more. He now gives us a fuller account of the arguments in favour of qualia, the items of phenomenal experience, being a real problem for materialism, and categorises the positions typically taken (other views are of course possible).

Type A Materialism denies the epistemic gap: all this stuff about phenomenal experience is so much nonsense.

Type B Materialism accepts the epistemic gap, but thinks it can be dealt with within a materialist framework.

Type C Materialism sees the epistemic gap as a grave problem, but holds that in the limit, when we understand things better, we’ll understand how it can be reconciled with materialism.

In the other camp we have non-materialist views.

Type D Dualism puts phenomenal experience outside the physical world, but gives it the power to influence material things.

Type E Dualism, epiphenomenalism, also puts phenomenal experience outside the physical world, but denies that it can affect material things: it is a kind of passenger.

Finally we have the option that Chalmers appears to prefer:

Type F monism (not labelled as a materialism, you notice, though arguably it is). This is the view that consciousness is constituted by the intrinsic properties of physical entities: Chalmers suggests it might be called Russellian monism.

The point, as I understand it, is that we normally only deal with the external, ‘visible’ aspects of physical things: perhaps phenomenal experience is what they are intrinsically like in themselves – inside, as it were. I like this idea, though I suspect I come at it from the opposite direction: to Chalmers, it seems to mean something like ‘those experiences you’re having – well, they’re the kind of thing that constitutes reality’, whereas to me it’s more ‘you know reality – well, that’s what you’re actually experiencing’. Chalmers’ way of looking at it has the advantage of leaving him positioned to investigate consciousness by proxy, whereas I must admit that my point of view tends to leave me with no way into the question of what intrinsic reality is, and makes mysterian scepticism (which I don’t like any more than Chalmers does) look regrettably plausible.

Now Chalmers expounds the two-dimensional argument by which he sets considerable store. This is an argument intended to help us get from an epistemic gap to an ontological one by invoking two-dimensional semantics and more sophisticated conceptions of possibility and conceivability. It is as technical as that last sentence may have suggested. To illustrate its effects, Chalmers concentrates on the conceivability argument: this is basically the point often dramatised with zombies, namely that we can conceive of a world, or people, identical to the ones we’re used to in all physical respects but completely without phenomenal experience. This shows that there is something over and above the physical account, so materialism is false. One rejoinder to this argument might be that the world is under no obligations to conform with our notions of what is conceivable; Chalmers, by distinguishing forms of conceivability and of possibility, and drawing out the relations between them, wants to say that in certain respects it is so obliged, so that either materialism is false or Russellian monism is true. (Lack of space – and let’s be honest, brains – prevents me from giving a better account of the argument at the moment.)

Up to this point the book maintains a pretty good overall coherence, although Chalmers explicitly suggests that reading it straight through is only one approach and unlikely to be the best for most readers; from here on in it becomes more clearly an anthology of related pieces.

Chalmers gives us a new version of Mary the Colour Scientist (no constraint about the old favourites in this part of the book) in Inverted Mary. When original Mary sees a tomato for the first time she discovers that it causes the phenomenal experience of redness: when inverted Mary sees a tomato (we must assume that it is the same one, not a less ripe version) she discovers that it causes the phenomenal experience of greenness. This and similar arguments have the alarming implication that the ineffability of qualia, of phenomenal experience, cannot be ring-fenced: it spills over at least into the intentionality of Mary’s knowledge and beliefs, and in fact evidently into a great deal of what we think, say and believe. This looks worrying, but on reflection I’m not sure it’s such big news as it seems; it’s inherent in the whole problem of qualia that when we both look at a tomato I have no way of being sure that what you experience – and refer to – as red is the same as the thing I’m talking about. More comfortingly Chalmers goes on to defend a certain variety of infallibility for direct phenomenal beliefs.

Further chapters provide more evidence of Chalmers’ greater interest in intentionality: he reviews several forms of representationalism, the view that phenomenal experience has some intentional character (that is, it’s about or indicates something) and defends a narrow variety. He offers us a new version of the Garden of Eden, here pressed into service as a place where our experiences are direct and perfectly veridical. Chalmers uses the notion of Edenic content as a tool to break apart the constituents of experience; in fact, he seems eventually to convince himself that Edenic content is not only possible but fundamental, possibly the basis of perceptual experience. It’s an interesting idea.

Included here too is a nice piece on the metaphysics of the Matrix (the film, that is). Chalmers entertainingly (and surely rightly) argues that the proposition that we are living in a matrix, a virtual reality world, is not sceptical, but metaphysical. It’s not, in fact, that we disbelieve in the world of the matrix, rather that we entertain some hypotheses about its ontological underpinnings. Even bits are things.

The book rounds things off with an attempt (co-authored with Tim Bayne) to sort out some of the issues surrounding the unity of consciousness, distinguishing access and phenomenal unity along the lines of Ned Block’s distinction between access and phenomenal consciousness, and upholding the necessity of phenomenal unity at least.

It’s a good, helpful book; what the content lacks in novelty it makes up in clarity. Chalmers has a persuasive style, and his expositions come across as moderate and sensible (perhaps the reduced epiphenomenalism helps a bit). It’s surprising that the denial of materialism (surely the dominant view of our time) can seem so common sense.

3. Charles Wolverton says:

Thanks to Peter for a succinct yet informative review. A few responses/questions (for anyone):

“The key point is that experience is simply not amenable to the kind of reductive explanation which science has applied elsewhere”

This issue is explored in the Ramberg essay in “Rorty and His Critics” (which I have cited before) in terms of irreducibility of one vocabulary (specifically, that of intentionality) to another (that of physics). That discussion convinced me that there is nothing of great import to be concluded from the inability to do “reductive explanation”. (Similarly for irreducibility to the vocabulary of neurology, ie, to NCCs.) Am I missing something?

Re ineffability and “what it’s like to be …”. While I agree with Peter Hacker that “what it’s like to be something” is pretty much just “to be that thing”, I am with Peter (our host) in thinking that this reduction is not helpful. However, neither do I think the original expression helps much. So, in my continuing attempt to be Sellarsian on these matters, I am leaning toward interpreting “what it’s like to be X” as meaning something like “responds to any possible stimulus in exactly the way X would respond to it; in particular, for an X with language capability, says just what X would say in response to any stimulation for which a verbal response is appropriate”. (This smacks of behaviorism, but I think the following saves it from that fate. Am I wrong?)

This view allows one to be more precise as to how “ineffability” comes into the picture. It has always bothered me that something described as ineffable has had thousands of words said/written about it. But let me generously define ineffable in this context as meaning “can’t be described even in an unlimited (ie, countably infinite) number of statements”. Since we are analog creatures, even assuming an exhaustive and incorrigible 1-POV and infinite time we could never describe every possible response to every possible stimulus because there would be an uncountably infinite number of them. However, while this means “what it’s like to be …” (my interpretation) is ineffable (in my sense), why that observation should be interpreted as having ontological, mysterian, or any other special significance continues to escape me. Even given infinite time, neither could one list all the real numbers. So?
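The cardinality claim here is just Cantor’s diagonal argument; for completeness, a sketch (my own formulation, not the commenter’s):

```latex
% Suppose the reals in (0,1) could be listed as r_1, r_2, r_3, \ldots,
% writing each by its decimal expansion r_n = 0.d_{n1}d_{n2}d_{n3}\ldots
% Define a new number x = 0.x_1 x_2 x_3 \ldots digit by digit:
\[
  x_n =
  \begin{cases}
    5 & \text{if } d_{nn} \neq 5,\\
    4 & \text{if } d_{nn} = 5.
  \end{cases}
\]
% Then x differs from every r_n at the n-th digit, so x is not on the
% list: no countable list of statements can exhaust an uncountable set.
```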

Finally, whatever inverted Mary may know a priori about the neural correlates in response to a color stimulus or the attendant phenomenal experience, she can simply ignore any such knowledge and intersubjectively learn to associate her non-standard phenomenology with what others call “red” (what I have called previously the process of “triangulating”). As Peter observes, neither she nor anyone will ever know the difference. In fact, since presumably we all experience somewhat different color phenomenology, the case of inverted Mary seems only a matter of degree.

“Chalmers goes on to defend a certain variety of infallibility for direct phenomenal beliefs.”

This sounds ominously like the myth of the given. IIRC, in “The Conscious Mind” Chalmers mentioned Sellars in only one endnote. Does anybody know if he disputes the argument of EPM?

It’s possibly a somewhat garbled thought, but I had in mind that from Chalmers’ perspective, if reality is made of phenomenal experience, it’s logical that examining ‘real world’ stuff might tell us about phenomenal experience.

Whereas my gloomier reading of a similar position is that phenomenal experience seems strange and different from your theoretical version of it because it’s real. And that’s all you can say.

Thanks for the review. But I remain puzzled by the whole Chalmers approach. What am I missing?

“we can conceive of a world, or people, identical to the ones we’re used to in all physical respects but completely without phenomenal experience. This shows that there is something over and above the physical account, so materialism is false.”

So, we can conceive of dualism, which shows there’s something over and above monism, therefore monism is false?

Being able to conceive of something doesn’t really tell us anything useful, since basically, not yet understanding consciousness, we don’t yet understand what it is to conceive of something.

The whole of Chalmers, Nagel and qualia generally seems to be built on soft sand to me.

And on reductionism… The whole point of reductionism isn’t to explain a system at that reductive level; it’s to better understand the mechanisms at the lower level in order to build better higher-level models. The black-box view of behaviourism is insufficient because the internal complexity can produce the same behaviour for different states, or different behaviour for the same states. We need to know what’s going on inside, and only a reductive approach can help. If that’s not the case then we should have stopped at alchemy, not bothered with chemistry, and saved a hell of a lot of money on the LHC. The brain is just another system to study.

7. Charles Wolverton says:

(Ron, I presume):

I completely agree with your comment on reductionism (specifically, reductive explanation), but the issue in this context is what to conclude from the (current) inability to explain “the mental” in the vocabulary of physics. Per my understanding of the essay I cite above, if one accepts the tool metaphor for language, that inability boils down to little more than the fact that the language of physics seems clearly an inappropriate tool to use in analyzing “the mental”. As you no doubt understand, notwithstanding that in principle one could do so, no rational person would try to explain the high-level functioning of any large, complex – but well-understood – system in the vocabulary of physics. Why the current inability to do so in the case of “the – not at all well-understood – mental” is seen as having ontological significance escapes me.

On conceivability:
The thesis of materialism is assumed by Chalmers to say that specifying the physical facts necessarily entails the experiential facts. If conceivability implies metaphysical possibility (a further thesis of Chalmers), then the fact that we can conceive of the failure of the entailment means it’s not a necessity. So, the fact that we can conceive of something different doesn’t make it untrue of our world — it could be true in our world, but only contingently — it just undermines the necessity of the entailment.

Type-F monism follows from thinking about why this apparent ability to conceive goes wrong: it fails because it is a mistake to see our physical theories and descriptions as full explanations of any phenomena in the world. They are descriptions of the extrinsic or relational features of the world, but leave out the intrinsic properties of physical things. First-person conscious experience depends on these intrinsic properties.

On reductionism:
The idea isn’t that there’s a general problem with reductionism; it’s just that explaining first-person experience entirely with third-person physical descriptions fails (all other scientific reductions are third-person to third-person).

9. Vicente says:

Charles,

no rational person would try to explain the high-level functioning of any large, complex – but well-understood – system in the vocabulary of physics

Absolutely false. What kind of system are you talking about: a galaxy, a city, or the stock market? You’ll find that your statement only applies well to those complex systems in which consciousness is involved – societies, or buildings…

what it’s like to be X” as meaning something like “responds to any possible stimulus in exactly the way X would respond to it; in particular, for an X with language capability, says just what X would say in response to any stimulation for which a verbal response is appropriate”.

You could have X, Y and Z responding identically to the same stimuli as a result of completely different inner experiences and processes. That defines nothing.

10. Vicente says:

What is an intrinsic property of a physical thing? Be careful… you are about to fall into substance-concept hell.

I like Type D, for formal reasons, but how is the implicit consciousness-brain interaction set up?

Regardless of the nature of consciousness, by what mechanisms could it interact with the brain? And where exactly?

Reflecting upon Peter’s “research by proxy” idea, the only way I think this could be done is by looking for places in the brain that require such interaction to make the system consistent and coherent, in the way Eccles required it. We need to understand how a process is started in the brain; Libet may have shown that a decision process is started before we are aware of it, but that says nothing about how the process is triggered.

Why do certain neuronal nuclei start firing, apparently with no prior cause? Is it a stochastic, spontaneous phenomenon? That would be weird: a stochastic process would produce stochastic behaviour, wouldn’t it, and spontaneous processes are usually random, so… there must be some rational agent behind it, unless we are an infinitely fed-back loop, moving along a cause-effect chain that ends with death.

In summary, since it seems really difficult to access the phenomenal side by physical means, maybe we could try to identify the interface between it and the physical side (the brain) by detecting anomalies or unexplained phenomena that would require the introduction of an external agent. Just to prove the need for such an agent would be an incredible success.


Yes, I know talking about ‘intrinsic’ anything is dangerous. The only intrinsic property I would claim myself is that of being real, that quality which things have when they’re not abstract or theoretical, but actual. I’m not even sure that that is properly described as intrinsic, but it seems essential in a way that sort of justifies the word.

“…but the issue in this context is what to conclude from the (current) inability to explain “the mental” in the vocabulary of physics.”

There is nothing yet to conclude about consciousness in the language of physics, other than what has already been learned about Neurons => Biology => Chemistry => Physics. There’s nothing yet observed that defies this hierarchy between levels of modelling.

What we don’t yet understand about consciousness, which is preventing us reaching some conclusion, is simply that: something we haven’t figured out yet.

Given all other phenomena we know of conform to this hierarchy of modelling there’s no reason to posit anything else (e.g. dualism) and no reason to do what Chalmers appears to do, deny some aspect of materialism simply because we don’t have an answer to this particular problem. It seems some philosophers, even non-theistic ones, still cling to the notion that there’s something spooky about consciousness.

We can challenge materialism, if we wish. But we have to challenge it for all phenomena we know about, not just consciousness. There’s no good reason to suppose something odd with regard to consciousness, while accepting materialist models for other aspects of the observed universe.

I’m not sure why you think there’s a problem in being unable to identify some specific trigger. Given how complex the interconnectivity of neurons is, and the complexity of the chemical soup they sit in, wouldn’t you think that our neurons have been busy being triggered since they formed? A particular neuronal stimulus that appears to trigger prior to some action may well have been on the way to being triggered by a vast array of pre-conditions of prior neuronal and chemical activity.

It may be that there are so many pre-conditions that a ‘start’ becomes essentially indeterminate. Any precursor to some neuronal activity will have its own precursors (connected neurons and chemical signals, as well as the internal cell state of the neuron), and so on.

There may be a stochastic element to a neuron firing. But we need to appreciate what we mean by stochastic in this sense. It may be stochastic at the level of ions – the number being available locally to contribute to an action potential. It might also be stochastic at the level of neuronal networks – the effective indeterminacy of such a vast number all interacting. There’s nothing magical about stochastic processes. Neuronal connections may be viewed as massive pseudo-random number generators.
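The point can be illustrated with a toy threshold unit (an illustration only, not a biophysical model; the function name and parameters are invented for this sketch): a deterministic synaptic sum plus a random term standing in for local ion availability produces occasional firing even with no input at all.

```python
import random

def noisy_neuron_fires(inputs, threshold=0.5, ion_noise=0.3, seed=None):
    """Toy unit: fires when the synaptic sum plus a Gaussian 'ion noise'
    term crosses threshold. The noise stands in for stochastic local
    availability of ions contributing to the action potential."""
    rng = random.Random(seed)
    potential = sum(inputs) + rng.gauss(0.0, ion_noise)
    return potential >= threshold

# With zero synaptic input, the unit still fires now and then:
# apparently 'spontaneous' activity driven entirely by the noise term.
spontaneous = sum(noisy_neuron_fires([], seed=s) for s in range(1000))
print(spontaneous)
```

A population of such units, densely interconnected, behaves much like the ‘massive pseudo-random number generator’ described above: each firing is traceable in principle, but the aggregate is effectively indeterminate.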

I might think about what it’s like to be me, or speculate about what it’s like to be a bat, but I don’t really worry myself about what it’s like to be my fridge – we’re quite happy to analyse most systems from the third person perspective.

The first person issue isn’t necessarily a problem for understanding consciousness – we don’t know that it matters. What Chalmers and others are doing is:

(1) We don’t yet understand consciousness, from a third person science perspective.
(2) I have a first person subjective experience of my consciousness, which we don’t understand.
(3) Therefore this means something spooky about consciousness.

But,
(A) (1) and (2) might be quite independent issues.
(B) It may be that a better grasp of (1) also explains (2).
(C) (2) causes a problem for (1), and hence (3).

Chalmers seems to be going with (C). I fail to see why that should be.

Ron,
If Chalmers’ position is phrased in the way you phrased it, it really does not seem problematic. But I believe this way of expressing the problem does not expose the real issue.

The key is really #(2): “I have a first person subjective experience of my consciousness”. This point is not to be understood superficially. We know that identical twins exist. If we carry the concept of identical twins to the extreme (physicists like to consider extreme cases to simplify complications, such as the frictionless surface), and consider the case of structurally identical twins, we come to the conclusion that if you were replaced by your structurally identical twin brother (how this identical brother comes about is completely irrelevant for this discussion, just as how you make a frictionless surface is irrelevant in some discussions of Newton’s laws) at time t=t_o, the world would have evolved exactly the same way as if no replacement had ever happened. That is, from the third person point of view, nothing has changed whatsoever. However, this is not the same from the first person point of view. You have stopped existing starting at t=t_o. As far as you are concerned, you die at t=t_o. You no longer have a point of view. Someone else, your structurally identical twin brother, has taken your place in this world.

Therefore, one can imagine (conceivability becomes very important here) that there can be a world, with humans roaming on a planet called earth, with a person called Ron Murp typing into his computer, while you don’t have a point of view. A Ron Murp’s existence in this world does not necessarily imply the existence of your point of view. It could well be just your “structurally identical twin brother’s point of view”. But the fact is, you do have a point of view; it is not your identical twin brother who has one.

Why?

The world in which you don’t have a point of view, while your “identical twin brother” has one, is, for all practical purposes of this argument, a Zombie World for you.

The conceivability of this Zombie World, which turned out not to be the reality, is therefore something one has to explain. One’s existence, and therefore one’s consciousness, needs to be explained. But too many smart people try to explain it in terms of “physics”, or “science” (the current understanding of science), or this sort of reductionist approach. It is clear that these are the wrong approaches. No matter how much you know about “physics”, you cannot explain why it is that you exist (you are conscious, having a point of view) instead of your identical twin existing. And Chalmers’ big gap really points out that it is the wrong approach. My description above is basically a different way of rephrasing the “hard problem” (though it looks very different from the conventional way of stating it).

I looked at your blog. I believe you are tackling a different problem, not the Hard Problem.

This still doesn’t explain why it’s a problem. We can use the other popular thought experiment – the Star Trek transporter that creates identical copies of a person at t=t_o: twin-A and twin-B.

Assume that atom for atom, at t=t_o, the two entities are identical. But from that very instant on they will diverge. There are some very real world issues for these manufactured twins.

As discussed with Vincente, any number of indeterminate processes, what we see as stochastic processes, start the divergence.

Assuming non-instantaneous building, let’s say feet first (setting aside the physiological problems with this process), then by the time they are completely built their feet will already have diverged in toenail growth, at the micro level.

Then, they are in different locations in the transporter as they are recreated at t=t_o. So they see slightly different perspectives: maybe twin-A sees a reflection of Uhura bending over to pick up her communicator and his mind instantly wanders off in that direction, while twin-B notices Scotty scratching his groin and, since the original A/B was a homophobe, recoils in disgust.

If Spock talks to twin-A and Kirk to twin-B, they are already having divergent experiences.

If Kirk and Spock ask the same question, “What do you remember of your mother?”, then in the process that is recall the twins will already be re-creating memories, with all these indeterminate processes going on. They may, through conditioning, even use the same familiar phrases to describe their mother, so there will be many very weird similarities.

But to all intents and purposes they are two people, with a ‘spookily’ similar past – i.e. identical up to t=t_o.

The only reason we don’t see the same degree of similarity in real twins is because they diverge as soon as they split in the womb. If our biological process were such that we developed as one entity until reaching 18, and then on our 18th birthday always divided into two entities, then Chalmers, and the rest of us, wouldn’t see this as a problem. At the point of the split, one personality becomes two.

So, getting back to your ‘replacement’ scenario, a similar condition applies. If I, twin-A, am replaced by my twin-B, then twin-B will be having his experiences of consciousness. It doesn’t matter that we were the same at t=t_o; we’re different from then on.

This makes the ‘other world’ of Ron Murp meaningless. Being able to invent thought experiments is only helpful if they lead to some real understanding. The ‘conceivability’ of this world becomes pointless, and deceptive. For example, the world of the twin is constructed, and then the actual fact is added in: “But the fact is, you do have a point of view, not your identical twin brother having a point of view.” This is no better than saying, “Let’s conceive of a world where 2+2=5. But the fact is 2+2=4, and therefore this is a problem for mathematics.”

Another point to consider is that thought experiments, like analogies, are sometimes helpful for describing unfamiliar concepts in terms of familiar ones. They cannot be used to prove or demonstrate something to be the case. In this context these thought experiments do not show that there is a problem with third person view of consciousness, or that the first person experience causes a problem for third person investigation of consciousness.

Ron,
My emphasis is on the possibility of a world in which there is a Ron Murp walking around while you don’t have a point of view. Presumably, this Ron Murp could be your structurally identical twin if you exist in that world. Star Trek transporter opens up another can of worms that I am unwilling to touch at this point.

Vicente (regarding intrinsic properties): I don’t endorse a substance ontology, but I think other models still could make a distinction between properties revealed in certain interactions, and those which aren’t — which is all we need I think.

Ron: Your idea that consciousness is a hard problem because of our ignorance is a good stance to take, because it’s not easy to argue that we’re not ignorant. (You might like Daniel Stoljar’s book, which takes up the thesis in your comment.)

But I think that one can make a good case that the hard problem is rooted in a distinction between first-person interactions and third person descriptions which is not merely a product of ignorance and won’t be altered as we add more knowledge about the details of the science (it’s not a question of “spookiness”).

Decades before Chalmers, Russell and Whitehead said materialists were making a mistake by taking the success of scientific theories to mean that reality (metaphysically) is as the models describe (the “fallacy of misplaced concreteness”).

20. Trond says:

Vicente: “Regardless what is the nature of consciousness, by what mechanisms it could interact with the brain? and where exactly?”

This is a very interesting question. What would it take for the brain to create a whole new color for us to see, for instance? Neurologists would say that the brain just needs to dedicate some neurons to recognising it. It doesn’t come close to explaining how that color would look.

As far as physics is concerned, the number of potential colors is infinite. We are just talking about frequencies. Colors the way we *experience* them, on the other hand, seem limited in some fundamental way. They are not sequential: you don’t have a sense of which colors come first, which are in between and which come last. To me it seems impossible to conceive what a new color would look like, but I can easily imagine a part of my brain dedicated to a new frequency.

To me, the neurological explanation is the magical, spooky one. It gives the brain an unreasonable amount of credit for our experiences. A more rational approach must be to accept some connection between the brain and a more or less similar machine in an undiscovered realm.

21. Charles Wolverton says:

It appears that you may have inferred from my comment that I disagree with some aspect of your position; I don’t. I endorse everything in your comments so far.

vicente:

“complex systems in which consciousness is involved, societies, or buildings”

If you think “consciousness is involved” in things like buildings in some way other than the seemingly irrelevant fact that so-called “conscious” beings conceive, architect, and build them, then your concept of what is “conscious” seems so broad as to make almost any denial of any positive statement about consciousness “absolutely false” (as contrasted with “relatively false”?).

“You could have X, Y and Z, responding identically to the same stimuli, as a result of completely different inner experiences and processes. That defines nothing.”

Alternatively, perhaps you don’t understand what a “definition” is. If (in the extremely unlikely event) X and Y respond to all stimuli identically, then under my definition “what it’s like to be X” and “what it’s like to be Y” would be the same. One may find that consequence disturbing for some reason, but that doesn’t nullify the definition.

22. Charles Wolverton says:

I’d like to understand precisely what this means. So, please humor me while I try to parse it.

First, let’s start with something “simple” like a definition of “phenomenal experience” (presumably subsumed under “first-person experience”). Not an explanation of how it happens, just a coherent statement of what it means. After three weeks of trying to understand how that word is used in this arena (motivated by the recent Jesse Prinz essay at the On the Human blog), I remain utterly confused.

I gather that it’s generally agreed that we start with a sensory stimulus (usually from an external source but sometimes from an internal one) and that some initial processing results in some neurological state that one might call the neural correlates of that sensory input. It’s at this point that I become unsure of what else is subsumed under the term “phenomenal experience”. I assume that creation of the mental “images” (not necessarily in the visual sense, eg, an “audible tone” is a mental “image” of an aural stimulation) of colors, sounds, tastes, etc, that accompany sensory stimulation are included. But what about discrimination like light-dark, still-moving, tone-noise, et al; or recognition of shapes, colors, objects, et al. Ie, is there a cognitive component? And are there other components of such an experience that I’m missing?

Now consider the description of whatever one decides is included in “phenomenal experience”. What does it mean to say that “third person physical descriptions” fail? If it’s that such an experience can’t be described using the vocabulary of physics, then I think this is “indeterminacy of translation” (the term used in the essay I cite in comment 3), in which case my comment 7 applies (with “analyzing” replaced by the better word “describing”). If it means merely that the explanatory cascade that Ron described in comment 13 has one or more missing steps (as he implies, near the top level of the cascade, not near the bottom – physics – level), then his opinion that it seems rather premature to declare defeat (with which I concur) seems to apply.

In any event, can “phenomenal experience” be described from a 1-POV in some appropriate vocabulary, in particular by a polymath in relevant disciplines? If not, then what is the special significance of its not being describable from a 3-POV by a team of relevant experts? And if it’s indescribable from any POV in any vocabulary, shouldn’t that give us pause as to the utility of the concept? OTOH, if it is describable from a 1-POV, in what sense is it “ineffable”?

I don’t mean to be playing word games here, just trying to understand – and wondering if Rorty’s suggestion that we simply stop using certain problematic concepts/words and see how we get along might be apropos.

My point about spookiness is that the way in which the first person perspective is presented presupposes that the first person perspective prevents analysis of consciousness from the third person perspective.

This seems comparable to the relationship between visual perspectives. If I’m standing here and you’re standing there, we have different visual perspectives. I can’t see things from your perspective, but I can deduce an awful lot about what you are seeing. I can ‘appreciate’ your perspective, and in that sense construct ideas about it.

If we’re both looking at a cube that appears red to me, and you see 3 faces I don’t, then I can’t deduce the colours of the 3 faces you see, so I grant there are limitations to actual detail from different perspectives. But I can still infer general details about what you might be seeing – enough to understand visual perspective generally.

Similarly, I might never experience the precise details of what you are thinking/feeling – what it’s like to be you at this moment. But that doesn’t rule out the possibility of third person analysis of a general model of your mental perspective, as a generalisation of consciousness.

In a much simpler system of two fridges responding to their temperature environment, they may be reacting differently, depending on internal contents, variations in heat exchange, etc., so we never know the actual perspective of any one fridge. But we can still model a fridge in principle, design them based on these models, and so on. We wouldn’t say we don’t understand fridges because we can’t know what it’s like to be a fridge.
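The fridge point can be sketched in a few lines (a hypothetical toy model, not a real controller design; all the parameters are invented): the model captures the input-output behaviour well enough to predict and design against, while the internal state of any particular fridge stays unobserved.

```python
# A toy thermostat model of a fridge (hypothetical, for illustration):
# we can predict and design against this behaviour without any access
# to what it is "like" to be any particular fridge.

class Fridge:
    def __init__(self, temp: float, setpoint: float = 4.0, band: float = 1.0):
        self.temp = temp
        self.setpoint = setpoint
        self.band = band
        self.compressor_on = False

    def step(self, heat_leak: float = 0.2, cooling_rate: float = 0.5):
        # Hysteresis: switch on above setpoint + band, off below setpoint - band.
        if self.temp > self.setpoint + self.band:
            self.compressor_on = True
        elif self.temp < self.setpoint - self.band:
            self.compressor_on = False
        self.temp += heat_leak - (cooling_rate if self.compressor_on else 0.0)

f = Fridge(temp=10.0)
for _ in range(50):
    f.step()
# The model predicts the temperature settles into the setpoint band.
print(2.5 <= f.temp <= 5.5)  # True
```

Nothing in the model depends on a fridge’s “inner perspective”, which is the sense in which third-person modelling can succeed without first-person access.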

If anything, my having consciousness too might even help me to understand consciousness better than I understand fridges, allowing for difference in complexity.

Just stating that the first person perspective is a problem doesn’t really explain why it’s a problem, or how it prevents understanding of consciousness. Is the hard problem simply a concept created in the mind of Chalmers? He can conceive of it as a hard problem, therefore it is?

Can you point to a good explanation of the hard problem that explains specifically why first person is problematic?
…

OK on Russell on materialism. I don’t see materialism claiming to be anything other than a model – not having access to any ultimate metaphysical reality, or more to the point, not knowing whether we have or not, not knowing if we’ve hit rock bottom…

But this problem with our metaphysical models applies to everything, and not just consciousness. We’d have to address the problems with materialism first and then show how that result has significance for consciousness more so than any other area of knowledge.

This is where the spookiness seems to creep in for me. The first person issue seems only slightly more credible than, say, Deepak Chopra claiming quantum effects are related to consciousness (any more than they are related to any other aspect of the material world). Both still appear to be claimed rather than explained.

27. Vicente says:

Trond: Yes and no. My understanding a few months ago was very much like what you say. But, thanks to the points M. Spenard made to me, I looked a bit into the colour vision bibliography, and colour perception is much more than just light with a certain spectrum hitting the retina. The perception seems to be really scene dependent, and each colour perception also depends on the colours around it, and other factors… It might even depend on your current mood. The senses seem to provide just part of what is needed to create each frame of the conscious phenomenal movie, in the way W. James presented it.

The question is: what does nervous tissue have to do with colours? Does it have any “intrinsic” property that we could relate to colours, or sounds, etc.?

Second, even if it had such a property (for example, when looking at a red item, some “redness” appears somewhere in the brain – I know, really silly to say), who or what is observing such a process in the brain? And this question leads to the well known infinite recursion issue of the Cartesian Theatre show.

Ron,
There is no thought experiment. I am simply pointing out that if you are made to disappear at some point in time, and a structurally identical (indistinguishable from you) twin of yours is made to appear and take your place in this world, then, from any third person point of view, nothing has changed. However, from the first person point of view (your point of view), this swap is nothing less than a personal death.

So, trying to explain this first-person fact (personal death) from a third person point of view, which is the only thing the current state of science can do, will be in vain.

29. Trond says:

Vicente: I’m not opposed to your description of the complexity of color perception; it’s just that somewhere in the brain this very processed signal is converted to consciousness. I’m nonplussed that so many people don’t see that. It’s just not likely that atoms contain consciousness intrinsically, or that the flow of information causes it, because no theory of the brain explains the *connection* between brain processes and consciousness. I have never even heard a hypothesis.

This does not have to mean there is a Cartesian theater paradox, because obviously at some point there is an observer that *is intrinsically* an observer. But the nature of the brain and our immediate universe indicates that the brain is not this ultimate observer. Do you agree?

Charles and Ron: I may not have been clear, but when I talked about explaining, I meant metaphysical explanation. That’s the context of Chalmers’s critique of materialism. He defines materialism as a metaphysical thesis about what reality consists of — that once we’ve specified the physical description of the world we say “we’ve done all our work — there are no further facts about the world”.

The point is not that the descriptions can’t be excellent and informative about all the fine-grained features of consciousness we might care about. Rather the argument is meant to say that it is a mistake to take our success at providing the third-person physical descriptions as inspiration for the metaphysical thesis that the world consists of exclusively nonexperiential material or physical facts.

Ron: as to your last comment, you don’t have to take Russell’s critique of materialism as implying panpsychism or idealism (see Peter’s first comment above). You can just be an agnostic about the rock bottom basis of reality.

Looking at your comment again Charles (#22): I don’t know if any of this has been very helpful regarding understanding the “hard problem”. I know that different people have different intuitions about this. If I say I’m talking about the problem of accounting for “the raw, subjective, qualitative character of first-person experience,” rather than any specific features or contents of consciousness, some people nod their heads and some don’t.

OK, that’s fine. I’m dead, but the rest of the world doesn’t realise I’ve been replaced? But that applies to any object that has sufficiently similar copies. Where the object being replaced and destroyed has self-awareness, then yes, the replaced object has no further realisation or experience.

If brain scientists are busy studying me, and then without their knowledge I’m replaced by my identical twin, then they can pick up where they left off – with a few caveats: if they’re cutting into my skull, marking and testing specific neurons, then whoever is doing the swapping should make sure the twins are identical to a sufficient degree.

But what is all this telling us that informs our view of consciousness?

(#30)
I’m not taking Russell as implying anything in particular, beyond saying that we can’t be sure – i.e. being agnostic. It’s still reasonable to be agnostic, but to run with a working model until something comes along to deflate that model. Currently materialism seems a pretty good model, even for the brain and consciousness. And even though we can’t explain consciousness, that alone doesn’t defeat materialism as a model.

So, the materialist model of the brain and consciousness is just a hypothesis – and a very incomplete one, since it’s not really offering any predictions yet, other than that the brain and consciousness will be explainable in materialist terms.

The first person experience doesn’t actually falsify it – OK, there’s not a lot to falsify, but the first person experience is often presented as if it does falsify materialism.

Other than pointing out the first person perspective exists, there isn’t a satisfactory alternative model to materialism that explains what it is, and which would also falsify the materialist model.

I’m one of those that doesn’t nod my head. To explain, if you don’t mind this cross-thread link…

I get the feeling that a common view of those who nod is reflected in Vicente’s comment (#8) on http://www.consciousentities.com/?p=634 : “The only explanation I have is that not anybody has the sensitivity, the capacity to “see” to understand the breach between their inner world and the outer one…There must be something that allows some to experience that fact, and not to others.”

I’d offer a different perspective. I do get the difference. I do ‘experience’ what is effectively a dualism – I don’t think normal humans can help but experience it. But I see myself as seeing through the deception that is “the raw, subjective, qualitative character of first-person experience”. And, at the risk of putting words into the mouths of others, I think many who don’t nod see it like this too.

For example, we can view any object reacting to interaction with another, as when one rock collides with another. We can view chemical reactions in the same way – just matter interacting. And we can follow this chain all the way up through varying degrees of complex biology, through simple nervous systems all the way to humans. There is no indication that ‘consciousness’ has been switched on, or has begun to interact with the material brain as some dualistic phenomenon. To see this unbroken chain of connectivity makes a far stronger challenge to our apparent dualism than does 1-POV to a materialist approach to consciousness.

So far there is no explanation for this physical, chemical and biological chain from simplicity to complexity that says anything about what consciousness is beyond Types A,B,C materialism.

I ‘get’ Types D and E, though I don’t hold to them, because the other aspect of the dualism is never explained, only posited.

I don’t get what Type F is offering, other than a mask behind which Chalmers presents qualia – and to change Peter’s comment on Type F: not labelled as a dualism, you notice, though arguably it is.

35. Charles Wolverton says:

Steve:

Thanks for the reference to Stoljar. His “Ignorance and Imagination” looks interesting. I especially like his separation of metaphysical and non-metaphysical questions. As an engineer rather than philosopher, my interest leans toward “how it all works” rather than “what it’s all about”. Hence, my disdain for phrases such as “what it’s like to be …” that seem too “airy-fairy”. If I’m to use the word “consciousness”, I need it to be definable in terms I can understand (a position to which I gather Stoljar might object, which is why I said “definable” instead of “defined”). If that isn’t feasible, then I have to add it to the list of words – eg, “truth”, “moral”, and “evil” – that I simply don’t use in serious conversation.

In any event, as long as Ron stays around, I’ll just retire from the field vis-a-vis this topic. He expresses exactly my position except with an eloquence and thoroughness I can’t (as yet, although I have hopes) achieve.

Ron[32],
“But what is all this telling us that informs our view of consciousness?”

It demonstrates that in a universe that has a physical “Ron Murp” in it, there is no logical implication that you have a point of view in such a universe, let alone that you have a point of view through that physical “Ron Murp”. Therefore, your existence in that universe, or rather the existence of your point of view in that universe, is unexplained, and the answer is unlikely to come from physics.

Regarding the two camps of people who nod and who don’t nod in Steve Esser’s comment[31], let me offer an analogy to illustrate the difference: Newton’s apple falling downward in pre-Newton times. (Apologies for the repetition to those who have seen my analogy before.)

In pre-Newton times, one group of people becomes very disturbed after seeing that apples always fall downward. They think there is something fishy about the direction “down”. They don’t know what it is. But they believe they don’t have an explanation of why apples have to fall downward. They believe this is a fundamental problem. “Why down, not up?” they ask.

Another group of people are not one bit disturbed. They are quite baffled by the behavior of the first group. “What is the problem?” they ask. “Of course apples fall downward. No one has ever seen an apple that falls upward. Have you? What is the big deal?” “I see no problem at all,” they claim. “In fact, we can use the direction that an apple falls to define DOWN.”

And the first camp of people find themselves unable to explain the “apple falling downward” problem to the second group, who find apples falling downward so natural that it requires no explanation, and who are therefore baffled by the first group being so disturbed.

In our case, those who have a “hard problem” find themselves unable to explain it to those who don’t.

38. Vicente says:

Kar Lee, I don’t believe your analogy fits our issue. In our problem there are no apples to observe, no directions to define… no Newton to serve as “common” referee… sorry. You have people telling you about seeing apples that fall, but what is it that they see? You see your apple; now come and measure the falling acceleration of mine (the one in my mind). I stick to my point: those who don’t see the problem either lie or lack some special faculty required for the occasion.

And maybe, if brain scanning and observation instrumentation develops enough, it could be possible to indirectly measure the speed and the acceleration of an object by observing the corresponding brain activity patterns of an individual observing such object. I don’t know, interesting technical problem, probably it could be done for a small trajectory and accepting a big error. But this is not what we are talking about Kar.

39. Trond says:

Vicente:
“I stick to my point, those who don’t see the problem either lie or lack some special faculty required for the occasion.”

You skipped the third possibility: Those who don’t see the problem don’t have the same conscious experience that those who see the problem do.

A related problem is whether dreams are conscious. I personally don’t remember being in a dream and pondering consciousness. But while I’m awake I do it all the time. There are two explanations: either the limited intellectual state that is typical of dreams stops me from recognising the problem, or dreams are only conscious in retrospect.

40. Vicente says:

Kar Lee, having said that (38), your example represents quite well part of the situation… and although I said I believe it does not fit the issue, I will make use of it. I am really cheeky and thick-skinned, huh ;-).

Then, taking advantage of your work:

– Why is it that for some people the fall of an apple was a problem, and it was not for others? Why were there two camps in the first place?

Dealing with your Newton analogy first. That’s OK, yes, we have two camps that have the same experience – apples falling, or consciousness 1-POV – but who see it differently.

But applying the current context to the pre-Newton case, those worrying about the apple falling are saying: yes, the apple is a material object, but there must be something in addition to the material in order for the apple to fall. The apple can’t be falling according to some natural law; it must have a will of its own. I wonder what it’s like to be an apple that decides to fall?

The other group is saying, no, it’s all part of the material world, in that the apple is obeying some aspect of the material world that we don’t understand yet. We experience it. We appreciate it. But, as yet, we just can’t give you any physical law that describes the falling down in a way that would be ‘scientific’ (allowing them some current understanding of the term) – i.e. we can’t give you a law that would predict how all objects fall. But, given that we’ve never witnessed anything in addition to the normal material world (a claim harder to back up in Newton’s more superstitious times) we have no reason to think that someone won’t figure it out soon.

On the ‘Ron Murp’ twin, I’m afraid I might be being a little thick here, because I just don’t get the point. Could you stick with it, and could you expand on “there is no logical relationship to your having a point of view in such a universe”?

There are some optical illusions where we are able to construct perceptions of 3-D situations from 2-D images – 2-D Necker cubes, or the vase and faces, for example. I’m not so interested in these because they are simply offering alternative 3-D interpretations of a 2-D image. We can’t actually say that either perception is faulty, because either could be correct had we access to the 3-D object the image represents. We would have to see the 3-D object – a wire frame cube, or two vases, for example.

But there are some illusions of an actual 3-D object, where we not only construct some perception, but where the perception is actually a faulty one, as opposed to simply an alternative one – e.g. the Ames Window or Ames Room illusions. When we change the physical perspective of the view, or rotate the object, we get a better understanding of the actual physical object.

This second case is how I see consciousness. We have a perception – our 1-POV. And this perception is so resilient, so convincing, because the only perspective we ever have of our consciousness is the 1-POV. We can’t change perspective, as we can with 3-D optical illusions, and it is this that prevents us seeing our personal consciousness in any way other than 1-POV.

What would it take to be able to do what we can with 3-D optical illusions, to see it from an alternative perspective that reveals a more complete understanding of our own consciousness?

Perhaps brain science has to get to the stage where we can effectively ‘read minds’; that is, be able to view someone else’s 1-POV. Or, in the context of my analogy above, to be able to see our own consciousness as if we were seeing it from 3-POV.

The point here is that the claims you are making for what you experience – this faculty you think you have and I don’t – aren’t as straightforward as they appear. And, if you don’t actually understand your own 1-POV the way you think you do, doesn’t that make you question the certainty with which you hold that view?

Is the 1-POV that we live with all our lives fooling us, preventing us from seeing through it, preventing us from seeing that consciousness is just a phenomenon of a material brain?

46. Trond says:

Ron Murp:
“But applying the current context to the pre-Newton case, those worrying about the apple falling are saying, yes, the apple is a material object, but there must be something in addition to the material in order for the apple to fall. The apple can’t be falling according to some natural law, it must have a will of its own. I wonder what it’s like to be an apple that decides to fall?”

The obvious difference between this case and consciousness is that gravity has to be accounted for to explain movement of matter on earth. If gravity didn’t exist, we would be floating about.

But consciousness as a property of the universe can be left out. If it didn’t exist, the world might well have been identical to this one. Evolution already explains the “side-effects” of consciousness, so whether or not it really is a non-materialistic phenomenon, it seems impossible to prove that it’s there. I am 100% certain it is a separate phenomenon, but bothered by the fact that thanks to evolution, that’s what I would say anyway. But it’s still there.

47. Vicente says:

Ron,

“And, if you don’t actually understand your own 1-POV the way you think you do, doesn’t that make you question the certainty with which you hold that view?”

Yes. I wish I could have any certainty about anything; I am basically astonished about the whole issue. I hold my view because somehow it is what I feel things are (of no value for anybody else), and because all efforts to explain consciousness in materialistic terms look like nonsense to me. For some reason the Manic Street Preachers’ song “everlasting”, whose lyrics at some point say “pathetic acts for a worthless cause…”, has popped up in my mind.

48. Vicente says:

Ron, I forgot to thank you for the links to the documentary, really interesting.

Imagine you are playing with the virtual reality mask, going through those tests and games. You could concentrate, “be fully conscious” at every moment of what is really going on; you could avoid being fooled by the tricky system. In that case, what part of the brain is superseding the ordinary sense input management (which is confused), and maintaining a logical and rational integrity? YOU. I would love to go through that experiment, because to introspectively control the sense-fraud would be a fantastic way to access, to isolate, the SELF acting. It would be like a hyperaccelerated form of meditation.

Isn’t it telling that the person undergoing the test, the person experiencing the 1-POV of the experiment, has to make that effort? Isn’t this the case, that the 1-POV can be made to doubt who they are, or where this 1-POV is residing? Nobody else in the room has to do that with regard to the person being tested, because they know who’s who.

Surely this is the benefit of 3-POV, and the whole of science. It helps us overcome the human fallibilities that are inherent in 1-POV. And it’s from this point of view that science is trying to get a better handle on what consciousness is. The idea being that this understanding is one we will be able to share and communicate to each other.

A few thousand years of philosophy have gone by and we’re not really much further on with the 1-POV. There’s nothing about the 1-POV that wasn’t really understood by the Greeks. There’s a whole lot of Rationalist philosophy still going on, even if some of it is operating under other banners. The idea that we can just dismiss a scientific understanding of consciousness because we ‘think’ it’s nonsense – and we think that because it doesn’t conform to our 1-POV – without any evidence to back up that objection, or any evidence for a counter idea, seems like pure Rationalism in action.

You can still have your 1-POV – as we all can, and do. I’m sure all (well, most) neuroscientists don’t go home at night and start talking about themselves in the third person.

But I’d still be interested to know why our personal experience of 1-POV prevents science doing what science does well, just in the context of consciousness, because we experience consciousness as 1-POV.

50. Charles Wolverton says:

“What would it take to be able to do what we can with 3-D optical illusions, to see it from an alternative perspective that reveals a more complete understanding of our own consciousness?”

On a slightly different tack, the benefit of the 3-POV is the epistemological factor of endorsement by peers (essentially the “justification” part of knowledge as “justified true belief”). From the 3-POV, we can be “wrong” but normally we’ll be “corrected” by our peers. (The quotes are to highlight that occasionally someone is right and the peers are all wrong. But that’s just a feature of non-absolutism.)

In “Empiricism and Phil of Mind”, Sellars argues for a workaround in the case of 1-POV (I think – although I’ve read it now a couple of times, I’m still shaky on the later sections in which this idea is presented.) It’s my impression that Dennett’s heterophenomenology is along the same lines, possibly (even probably, since I understand him to be a Sellars fan) inspired by EPM. Roughly (as best I can tell), the idea is that 1-POV reports are taken as evidence – with the same caveats applicable to any evidence. The concept is, of course, much more subtle than this one sentence summary.

Those who argue the “what it’s like to be …” perspective suggest that we all “know” more-or-less what that means (eg, Peter in this thread’s kick-off essay), with which I agree. The problem is that they go on to claim (as I understand it) that no one else can “know” what that “being like” is like. But until we have a more tangible definition of “what it’s like to be …” (eg, my hypothetical definition in comment 3, arguably verifiable in principle from a 3-POV), I don’t see how that claim can be justified.

Ron and Vicente,
After 18 hours of plane ride, I can barely open my eyes for 10 minutes straight… but I am typing.

“- Why is it that for some people the fall of an apple was a problem, and it was not to others. Why were there two camps in the first place?”

Vicente, this is exactly the aspect of the question I am trying to draw attention to, not whether apples have free wills.

Ron,
I think Trond’s [46] statement sums up what I want to say: “But consciousness as a property of the universe can be left out. If it didn’t exist, the world might well have been identical to this one.”

I guess the best I can do is to point out the following:
Assuming causality closure of the material universe, consider this sequence:

1.) Big bang occurs.
2.) Formation of the local group super cluster or equivalence (something that is structurally identical).
3.) Formation of the Milky Way or equivalence.
4.) Formation of the Solar System or equivalence.
5.) Formation of earth or equivalence.
6.) Appearance of the human species or equivalence.
7.) Birth of a human named Ron Murp or someone equivalence (someone structurally identical, down to the atomic quantum level, and named Ron Murp.)
8.) Existence of your consciousness.

I will say, given a set of natural laws that the material world obeys, 1.) implies 2.), 2.) implies 3.), 3.) implies 4.), 4.) implies 5.), 5.) implies 6.), and 6.) implies 7.), BUT 7.) does not imply 8.)

The “equivalence” part kills the link.

Conclusion: given that the universe exists, you don’t have to exist. I.e., you have no logical relationship with this universe.

52. Charles Wolverton says:

Ron,
Now that I have had the chance to read more carefully what you have written – “Surely this is the benefit of 3-POV, and the whole of science. It helps us overcome the human fallibilities that are inherent in 1-POV” – I realize that your implicit definition of consciousness is different from mine.

Unfortunately, consciousness (my definition) is NOT a third-person observable. The problem of other minds clearly demonstrates this point. Also, allow me to quote from Wikipedia:

Trying to look at one’s consciousness from outside necessarily fails. Standing outside of oneself, you see a regular physical human regulated by its internal detailed-balanced physiology, which results in its complex but predictable behavior, and that’s all. Studying a human from outside is not fundamentally different from studying a climate system, a cockroach, or a gemstone. These are all recognized as unconscious systems (except for the cockroach). Therefore, studying consciousness from outside adds nothing to the phenomenon of consciousness. It only adds value to the understanding of complex system behavior. Trying to study a “third person consciousness” is like studying an “odd number that is even”. Whatever you find, you are looking at something else. And that brings me back to something that I have said in this forum: people fall into different camps because they are talking about different things, using the same word “consciousness”. Consciousness thus “refuses to be defined”.

Here are more complications about 3-POV as used in this discussion. My objection to the 3-POV is not about talking to peers. We are all, after all, talking to our peers on this excellent platform Peter has built. My objection is to A talking to B about B’s consciousness, and B talking to C about C’s consciousness. My claim is that A can only talk to B about A’s own consciousness, and B to C about B’s own consciousness. A knows nothing about B’s consciousness and therefore A is not qualified to do that.

Case in point: if you are trying to tell me something about my consciousness as if you do know something about it, and in mid-sentence some almighty being instantly replaces me with my identical twin, you will still be talking to “me” about my consciousness as if it is still there. The fact is, my consciousness has already disappeared after the replacement. If you don’t even know about as big an event as the disappearance of my consciousness right in front of you, how can you know about my consciousness? How can you be qualified to talk to me about my consciousness? On the other hand, you can definitely talk to me about your own consciousness because you are the only world-renowned expert on your own consciousness, and I can learn something by comparing your description with my own experience of my consciousness. This type of 3-POV is good.

55. Vicente says:

Kar Lee: You are inspired, I have to say.

Just one detail. What you would compare is his description with your own description of your consciousness, not with your experience itself. That is a completely different case. As I was told in primary school, you can’t add pears and apples. I apologise if any ontolophobes are hurt.

56. Charles Wolverton says:

One aspect of the second video Ron cited in comment 45 that especially interested me was that the printed name “Halle Berry” and the picture evoked a similar neural activity response.

I posit that the way we respond to stimuli might be that what is stored in our long-term memories are sets of candidate responses (in the form of motor neuron activation commands, although that isn’t relevant to the issue at hand), each set being indexed on neural correlates corresponding to a stimulus. Ie, over time we learn essentially “canned” responses to familiar stimuli such as seeing the image of an object, being asked a specific question, or detecting an event (eg, an incoming tennis shot). Some of these responses we call reflexive or “unconscious”; others we call planned (“conscious”). (I actually wouldn’t be surprised were it to turn out that they’re all “reflexive”, but it suffices here to consider the more conventional view.)

Following Sellars’ doctrine that to grasp a concept is to master the use of a word, I further posit that the index into this “concept database” (CDB) comprises neural correlates corresponding specifically to the word/phrase/sentence whose use constitutes the concept. The specific response chosen from the set of candidate responses to an instance of the stimulus would be context dependent. (I don’t know enough about relational DBs to make an argument, but my guess would be that the CDB would need to be more like a relational DB than a conventional one.)

Now suppose this is all correct. Then why might the responses to seeing the printed name of a person and seeing a picture of the named person in an elaborate costume be similar? One possibility is that after processing a visual image to the point of recognition, subsequent processing is performed on the neural correlates of the vocalized word/phrase/sentence that “names” an interpretation of the image. That could make the index of the set of candidate responses consistent across seeing a picture, reading the name, or hearing the name verbalized. (In Koch’s example, one might actually recognize the cat woman image and either associate that with “Halle Berry” or use that as the index (or key) into the CDB entry for “cat woman”, which might contain response candidates related to “Halle Berry” (hence, my guess re RDBs).)
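The CDB speculation can be sketched as a toy lookup structure. Everything below is an invented illustration (the keys, responses, and function names are my assumptions, not a model anyone in the thread proposes): stimuli of different modalities resolve to a single concept key, and that key indexes a set of candidate responses from which context selects one.

```python
# Candidate responses indexed by concept key (stand-ins for stored
# "motor neuron activation commands").
cdb = {
    "halle berry": ["say the name", "recall a film role", "point at the photo"],
    "cat woman": ["say 'cat woman'", "recall Halle Berry"],
}

def recognize(stimulus):
    """Resolve any stimulus modality (image, text, speech) to one key."""
    modality, content = stimulus
    # A picture, the printed name, and the spoken name all collapse to
    # the same key, which is why the neural responses could look similar.
    return content.lower()

def respond(stimulus, context):
    """Context-dependent selection from the indexed candidate set."""
    candidates = cdb[recognize(stimulus)]
    return candidates[context % len(candidates)]

# The printed name and the picture index the same CDB entry:
print(respond(("text", "Halle Berry"), context=0))
print(respond(("image", "Halle Berry"), context=0))
```

A relational flavour, as Charles guesses, would let the “cat woman” entry cross-reference the “halle berry” entry rather than duplicating its responses; the flat dict above is just the simplest stand-in.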

Whether the “Halle Berry” situation mentioned by Koch can be construed as evidence in support of my speculation depends on how “similar” the details of the two neural responses are (he only says that the firing(s) of a (some) neuron(s) that corresponds to the “concept of Halle Berry” was the same for the two stimuli). But it seems at least suggestive.

57. Shankar says:

I am becoming more and more skeptical of anything that alludes to ‘materialism’. *Everything* in the universe that I am aware of is known only through my own phenomenal experiences. I cannot vouch for the absolute ‘material’ existence of anything other than my own consciousness.

Even if you don’t adopt a solipsist viewpoint, materialism is on shaky ground. I do not have to explain why computers made of levers or pulleys do or do not experience qualia; those levers and pulleys are nothing more than my own phenomenal states. So there you go.

These posts were really directed at theists I was debating, but they pretty much describe my view of the tentative, contingent nature of knowledge that consigns most complex epistemology to the dustbin.

We can define consciousness any way we like. If you define consciousness to specifically exclude 3-POV access, then 3-POV access to your particular definition is excluded, by definition. Who wrote the wiki definition?

“Trying to look at one’s consciousness from outside necessarily fails.” – Only if you define consciousness so as to ensure failure. By all means define consciousness in this limited way; but then we’re not talking about the same thing.

There’s no need to be so limiting. In fact I think it’s misleading to so limit what consciousness is by definition, when the whole program is not to define what consciousness is, but to discover what it is.

So, I don’t have an implicit definition as such. I recognise that we have a vague intuitive feeling for what consciousness is because we experience it, from the 1-POV perspective. But as I said in (#25), how do you know that this 1-POV perspective is all that we can have of consciousness? Where’s the evidence or reasoning that this is the case?

“Trying to look at one’s consciousness from outside necessarily fails.” – First, this hasn’t been shown to be the case, only claimed. Second, what makes you think that we are restricted to only looking from the outside? What is it to look from the outside? Does 3-POV necessarily entail looking from the outside?

There are many invasive techniques that can read neuronal activity, and also influence it. The possibility has not been ruled out that we may actually be able to share conscious experience. It might seem a bit of a stretch at the moment, but try telling a 19th-century physician that we can scan and see inside the brain.

One problem of experiencing the 1-POV so intimately is that this experience in itself seems to be limiting the thinking of some philosophers – a kind of confirmation bias: I only experience 1-POV now, so that’s all we’ll ever experience.

This was my point about Rationalism – it’s all armchair philosophy that is declaring limits to science before science has had a good try.

60. Vicente says:

Well, Shankar and Ron, I propose the following experiment: you run as fast as you can towards each other from opposite directions, and in the last second jump forward head first so that your skulls collide making a funny sound, and then you tell me whether you can vouch for any absolute material, or it was just your consciousness, a simple colliding mental state… if that one doesn’t work, a second option is to hit your thumb with a hammer; the philosophical lesson you learn is fantastic, it changes your life.

The old “I refute it thus!” ploy. But your suggestions make no difference to the problem. Taking the extreme view of solipsism, then any ensuing pain is just as imaginary as everything else that is apparently real.

But I agree that the material world is pretty convincing – enough for me to choose materialism. So convincing that, as tentative as it may be, it’s far more useful than any other world view.

Once that choice is made, then everything else can be accounted for philosophically under the materialist model, including consciousness. The fact that science hasn’t provided a good explanation yet doesn’t refute either materialism, or consciousness as a materialist phenomenon.

I’d still be interested in why you think 1-POV prevents 3-POV science being able to provide an understanding of consciousness.

63. Vicente says:

Ron, ok, you are right, it is no argument. It was just that the thread was getting a bit gloomy and overserious.

To me both worlds are convincing: phenomenal and material, and I believe we have to consider both sides to have the whole picture, at least in this “reality plane” whatever that means. We cannot understand our conscious contents without considering our body and environment, but that is clearly not enough by far to explain the conscious experience. This is the essence of dualism, “twoism”, two sides, not one. So I have two main problems on my hobbies desk, qualia and interaction. And I am giving a lot of credit to you by assuming that the material side is much better understood, he he…

64. Vicente says:

Ah, regarding your question on 1/3-POV, it is very much what Kar Lee said, or if you don’t mind checking the discussions I had in the past with Lloyd Rice, Mike Spenard, John Davies,… It is a matter of subjectivity and scientific methodology requirements. The phenomenal side of consciousness, or as Peter likes to say, the ineffable, is simply out of range for everybody except the 1-POV owner of that particular experience. To me it is evident, but as presented in KL’s analogy, you dwell in the other camp, too bad.

A different thing is the progress that can be made in neuroscience; you’ll witness incredible things in the coming years, that’s for sure. But if your visual cortex is damaged, you will never watch the sunset using mine, I am afraid.

And having said this, a few days ago there was a link in the dynamic blogroll about these Siamese twin girls who share part of their cortex, showing amazing effects; I am trying to track their progress to see how they develop. This fact is making me reconsider a few things.

What would happen if we could connect directly the cortex (and/or other parts of the brain) of two individuals?

Still, I believe there would be two different conscious entities sharing bandwidth… spooky.

65. Shankar says:

Ron Murp 62 – “But I agree that the material world is pretty convincing – enough for me to choose materialism. So convincing that, as tentative as it may be, it’s far more useful than any other world view.”

I find dreams to be totally convincing too… but being convinced of something is not a guarantee that the stuff actually exists.

Dreams may be convincing while you’re having them. But in ‘this’ world we regularly have memories of dreams, or at least of having dreams, and similar experiences are reported by most other people we talk to about dreams. We can even correlate external behaviour during sleep with dreaming, when observed 3-POV. This is a measure of the inductive weight of ‘this world’.

How often do you, in a dream, dream that ‘this world’ is the dream state? What weight of evidence does the ‘dream world’ have to make us think that that world is any more real than this?

This weight of evidence is why we think dreams are dreams and this waking experience is real.

“discussions I had in the past, with Lloyd Rice, Mike Spenard, John Davies” – I’ve not been following this blog for long. Can you tell me in which posts?

‘ineffable’ – The same applied to atomic physics, back in the middle ages. But now we have a language for that which matches our current 3-POV understanding. So, still, 1-POV remains ineffable, but there is still no clear reason as to why that’s the case. So maybe, as you suggest, 3-POV will eventually have access to your experiences.

70. Vicente says:

Ron

“But now we have a language…”

yes, and we have a mathematical model that represents experimental observations and data with more or less accuracy… fine… And that’s all folks !!

As we were commenting regarding the “intrinsic properties” and “concept of substance” issue, the ineffable remains as ineffable as it was in the middle ages… describe to me what a quark or a neutrino is… ah, so ineffable, isn’t it… this is the sort of thing I meant when I said that the material side is not much clearer either.

Well, you are right… evident is a tough term… and to say “evident to me” is incorrect; if it is evident, it has to be so for everybody. Sorry for the semantic and grammatical mistake. There we have it: science based on evidence, or, what is the same, evidence requires objectivity.

Although on reflection, a certain clue could make the solution of a case evident to one policeman and not to another, simply because the former is more intelligent than the latter, so evidence could also be observer dependent, couldn’t it? But yes, I agree, evidence has some universal flavour.

Let me rephrase it: I positively believe with the highest level of assurance I could have about anything.

Ron [68],
“This weight of evidence is why we think dreams are dreams and this waking experience is real.”

I actually had a dream inside a dream some 20 years ago. In the dream when I realized that what “happened” was actually a dream, I was very disappointed. Then I woke up and realized that even that was a dream.

Maybe we are all in a dream now. Until you get to the next level, you really cannot be sure about this level, can you?

By the way, I think the movie “Inception” is pretty good at presenting this aspect of reality, though how a group of people can share a common dream without using a Matrix is beyond me, sci-fi logic speaking.

73. Vicente says:

74. Vicente says:

Kar Lee,

“I actually had a dream inside a dream some 20 years ago. In the dream when I realized that what “happened” was actually a dream, I was very disappointed. Then I woke up and realized that even that was a dream.”

ha ha, come on… you dreamt that you were waking from a dream and that’s it. This is just what we needed, another infinite recursion: I dream that I am dreaming that I am dreaming that I am dreaming…. On top of that, how can you explain a dream inside a dream in neurological terms?

Whether life is a dream has been a recurrent topic of literature for centuries, let me share with you a pearl of universal literature…

75. Charles Wolverton says:

OK, I’m going to reenter the 1/3-POV fray by trying to describe as precisely as I can how I interpret “what it’s like to see red from a 1-POV”. Then perhaps someone can either refute my interpretation or explain why they attach so much significance to the fact that the accompanying “experience” is inherently 1-POV and unique to each individual.

First, we need to define what it means to “see red”. There are several components to that act:

1. The sensory input, a specific spectral power density impinging on the viewer V’s eye(s). Call it SPD(red).

2. A neural aspect, viz, the neural activity in V’s brain evoked by SPD(red). Call it NC(V,red), the neural correlates of seeing red.

3. A phenomenal aspect, viz, V’s subjective experience accompanying NC(V,red) – “what it’s like” for V to see red. Call it PE(V,red).

4. An epistemological aspect, viz, V’s ability to “know” that red is being seen, ie, to respond to NC(V,red) with the statement “I am seeing red”.

It is important to note that 3 and 4 are distinct. Presumably, one can have 3 without 4. The latter is learned ostensively, by being stimulated by SPD(red) and being taught that the conventional response when V has PE(V,red) is to say “I’m seeing red” (that is the process in essence, though clearly not literally).

Now it seems likely that at some point the technology will be available to identify NC(V,red), leaving only PE(V, red) – what I assume people have in mind by “what it’s like to see red” – to be explained. As nearly as I have been able to tell, almost nothing is currently known about the brain processing which leads to PE(V,red). And being an experience, it is limited to the person having it – a fact that seems implicit in the very concept of an “experience”. Ie, it is inherently a 1-POV event.
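The distinction between the steps can be made concrete with a toy sketch. This is my own illustrative assumption of how the chain might be modelled, not anything specified in the thread: the verbal report of step 4 is driven by the neural correlates alone, so PE is generated but consumed by nothing downstream.

```python
def neural_correlates(spd):
    """Step 2: sensory input evokes neural activity, NC(V, red)."""
    return {"activity": "NC(" + spd + ")"}

def phenomenal_experience(nc):
    """Step 3: PE(V, red). Generated here, but consumed by nothing below."""
    return "what it is like to undergo " + nc["activity"]

def verbal_report(nc):
    """Step 4: the learned verbal response, driven by NC alone."""
    return "I am seeing red" if "red" in nc["activity"] else "I see something"

def see(spd):
    nc = neural_correlates(spd)      # step 2
    pe = phenomenal_experience(nc)   # step 3: produced...
    report = verbal_report(nc)       # step 4: ...but not consulted here
    return report, pe

report, pe = see("SPD(red)")
print(report)  # the 3-POV observable part of the process
```

The point of the sketch is only that `see` would return the same report if the `phenomenal_experience` line were deleted, which is one way of stating the claim that steps 1, 2 and 4 form a self-contained chain.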

But what some apparently infer from all this is that we can never learn enough about PE(V, red) to explain it in a 3-POV vocabulary, and that therefore it is somehow “mysterious”, possibly even having ontological significance. So, my questions for those folks are:

1) Does my description comport with your view of “what it’s like to see red”?

2) If so, why draw those seemingly defeatist conclusions from a situation that seems dominated by current ignorance of the key brain functionality involved?

Charles,
Let me attempt to address your question.
If you leave out 3., retaining only 1., 2. and 4., that will be the complete 3-POV description. I really mean complete. 1., 2. and 4. form a self-contained, consistent 3-POV description of this phenomenon that you are describing, with nothing left out. A subject under test receives this signal in his sensory organ, he has this neural activity (not correlate, because there is nothing to correlate to) in his brain, and he has this response in the form of the statement “I am seeing red”, all done in a scientific, 3-POV manner, with nothing left out.
Here is the question: Why include 3. ?
3. has no role in a 3-POV scientific investigation.
As Chalmers writes in “The Conscious Mind”, the so-called experience or consciousness is explanatorily irrelevant.

If you are adding something extra to the already complete description of the world, how can you expect the already complete set of explanations to be able to include this extra thing that you just added? If the complete set of explanations included this extra thing, then this extra thing would not be “extra” any more; it would have already been included.

So, to answer your question, why draw a defeatist conclusion, it is like trying to learn how a duck quacks by peeling an apple: A category mistake, not a defeatist’s conclusion.

78. Trond says:

Charles Wolverton:
“2) If so, why draw those seemingly defeatist conclusions from a situation that seems dominated by current ignorance of the key brain functionality involved?”

As far as I can tell, we have a good understanding of how the brain works. We know the chemicals and processes involved. We don’t have a complete overview of the neural network in the brain, but it is not impossible that we will have that someday, just as we may decode our entire DNA.

Can you explain why you think this information will be of any help? What could happen in the brain that would explain 1-POV? Somehow, an algorithm that translates input from the eyes into output to the mouth is given this extra property that algorithms in other domains don’t have. A property that could have an infinite amount of values. Where is the value assigned?

79. Charles Wolverton says:

KL:

“If you leave out 3., retaining only 1., 2. and 4., that will be the complete 3-POV description.”

I described what I assume the complete process to be for any person, necessarily from my 3-POV for any person other than me. While in principle I think it possible that just 1,2, and 4 could be an adequate subset for the stimulus to result in the verbal response alone, we all know that step 3 accompanies the other steps.

“3. has no role in a 3-POV scientific investigation.”

Why not? Step 3 is an integral part (in humans) of the process I attempted to describe, even if an unnecessary part. It’s true that an instance of step 3 can only be a 1-POV event, but so what? It isn’t obvious to me that A’s inability to have a 1-POV of B’s phenomenal experience implies an inability for someone to explain how that PE occurs from a 3-POV, an explanation that I would welcome.

(BTW, as far as I know, “neural correlates” is standard terminology (cf. NCC). The ones to which I refer “correlate to” the sensory stimulus (the SPD(red)).)

I agree that one question is “why does the process include step 3?”, but that is an evolutionary issue. I’m interested in the answer to the question “Given that step 3 does occur, how might it be explained?”

Trond:

“What could happen in the brain that would explain 1-POV?”

The 1-POV is not the thing in need of being “explained”. The unexplained thing is the (IMO inherently 1-POV) step 3.

80. Trond says:

Charles: I don’t think I understand the difference. I’m happy to replace 1-POV with step 3: PE(V, red). I’m curious whether you have a hypothesis about what such a connection between NC(V, red) and PE(V, red) looks like.

The main reason I am reluctant to accept the idea that the brain contains both is that it seems to be in conflict with our knowledge of the brain so far, as a biological neural-network computer.

Charles,
“…Why not? Step 3 is an integral part (in humans) of the process I attempted to describe, even if an unnecessary part. It’s true that an instance of step 3 can only be a 1-POV event, but so what? It isn’t obvious to me that A’s inability to have a 1-POV of B’s phenomenal experience implies an inability for someone to explain how that PE occurs from a 3-POV, an explanation that I would welcome…”

Why do you want to describe an unnecessary part? What exactly are you describing? Before we understand what it is that you are describing, can we know that it can be explained?

Just as a devil’s advocate, I am going to take a strong materialist’s view and claim that since 3. is unnecessary (it is just a concept that is unnecessary), it does not exist. There is nothing to explain in terms of something else. Problem solved. But obviously you believe 3. exists. Then you have to explain why you believe 3. exists and requires an explanation at all.

Yes, philosophically we are on tricky ground all round. This was the point I was making earlier regarding materialism: by all means question materialism with regard to consciousness, but you have to apply that questioning to materialism in total.

We can accept the materialist model of the universe as a working model; and we can acknowledge that nothing has been found that challenges or refutes that model; we’ve only found philosophical questions as to the deep nature of that world view. But then we have the same problems with regard to all world views. Everything is contingent to some extent.

So we don’t really know what a quark is, or a neutrino. To all intents and purposes they are just models for what empirical science shows us.

But none of this means we don’t really understand the material world model in any useful sense. We’re not so stumped by the ultimate reality of matter, if there is such a thing, that it stops us flying planes, growing food, performing experiments on brains. And none of this means there is anything special about consciousness that cannot be understood to a similar degree.

You have provided no fundamental philosophical reason for making 1-POV special.

“If you are adding something extra to the already complete description of the world…” – Why is it complete without (3)? If we don’t really understand 1-POV yet, then how can you rule it out, as inaccessible?

Chalmers may be right, in that 1-POV may turn out to be irrelevant. Some possibilities haven’t been ruled out by any science or philosophy. It may be enough to ignore 1-POV as an oddity of behaviour.

But so far we don’t know any of this for sure, so it seems premature to say it’s either not relevant, or forever ineffable.

“Can you explain why you think this information will be of any help? What could happen in the brain that would explain 1-POV?”

The point is we don’t know yet. As a materialist, my view is that as yet we’ve found nothing that doesn’t conform to the materialist model. Sure, there are some unexplained aspects of the world. But there have always been unexplained aspects, and many of them have now been included in the materialist model as science has progressed.

So, what is it about consciousness that is a special case? What is it about 1-POV that categorically rules it out of having some model of it developed?

A lot of the views here, that 1-POV is special, ineffable, are views taken by people using their 1-POV. Maybe that 1-POV is preventing us assessing the 1-POV, from the 1-POV. Maybe we need the 3-POV to get further than we have so far in the last few thousand years in philosophy (or theology). Maybe 1-POV has always been inadequate for understanding 1-POV.

The materialist view, and the rigor of science, has given us models of a material world in which the boundary between life and non-life is becoming more and more vague. The evolution sub-model has shown us that there have been numerous dynamic forms of matter, life, that could navigate the world quite well without consciousness as we see it in ourselves.

Somewhere along the line these dynamic forms of matter evolved self-sensing to the point that it became a self-reflecting awareness behaviour.

But there is nothing, no evidence, that any of this does not conform to the materialist model.

All our philosophical puzzles are associated with how baffled we are by our own brains. The very fact that we construct concepts, models, in our minds is itself a puzzle. It seems less of a puzzle intuitively when we consider organisms with very few neurons – we don’t have a problem with understanding them as automata. But for us, our complexity seems to baffle us.

We have been busy, with thousands of years of philosophy and theology, trying to understand ourselves from the inside out, from 1-POV, because that’s all we had available. And we’ve been stumped because, so far, the 3-POV appears to work for most of everything else.

But there is nothing in philosophy or science that dictates that this is how it has to remain.

91. Trond says:

Vicente (90), that’s an interesting point!

Ron Murp: “The point is we don’t know yet. As a materialist, my view is that as yet we’ve found nothing that doesn’t conform to the materialist model.”

I believe we have indications that consciousness does not conform to the materialist model. We haven’t found anything in the brain that we can identify as consciousness, despite intense study. We have found an insanely complex computer, and we are constantly making progress reverse-engineering it, but that’s it. We had the hypothesis “consciousness can be found in the brain”, tested it, and have so far failed. Yet, we have had success answering other neuroscientific questions. Doesn’t that count as evidence against materialism?

That said, if you, or anyone else, can produce a reasonable hypothesis (which has been the basis of all scientific progress) about the connection between the brain and consciousness, I’m inclined to believe it, untested, if it explains why such an important part of the brain has escaped science until now.

I don’t have any. I thought you thought 1-POV presented a ‘hard-problem’ for understanding of consciousness, based on Chalmers. I don’t have a problem with 1-POV, and I don’t think it presents an insurmountable problem. Given how persuasive 1-POV is I’d hope that a thorough understanding of consciousness could account for it; but maybe it can’t; maybe it doesn’t need to. But I don’t think we’re in a position to rule it out as unnecessary either.

“that will be the complete 3-POV description. I really mean complete.” – That sounds pretty authoritative. I was wondering how you know that.

3-POV exists just as 1-POV does – they are perspectives, points of view. It’s true that all 3-POV that we are currently interested in are views of ‘sentient’ beings that each have their own 1-POV, about themselves. But their view of others is 3-POV, as is their view of the external world.

Schermer says that model-dependent realism is just talking about models, not reality, and wonders if there’s a way out of this epistemological gap. He thinks science is the answer. But I see science itself as another model – one that is about how humans find out stuff.

So I don’t think there’s any way out of the epistemological gap – only ways of building knowledge that is as reliable and consistent as we can make it.

My view is that the best we can do, for now, is model-dependent realism. But also that science is the best way of constructing reliable models. And science consists of many scientists, each with their own 1-POV, but looking at the world from a 3-POV perspective, as rigorously as they can, in order to overcome the flaws in a single person’s 3-POV of the world, with regard to knowledge generally, and in order to overcome the flaws in 1-POV when it comes to science about consciousness.

The points of view in this respect are similar to the grammatical sense, where 1-POV is the reflexive view of self and 3-POV is the detached view that is used in science. There’s no denying that one’s subjective views influence one’s 3-POV, but viewing someone else, or anything else, is considered from the third-person perspective. These points of view relate to English grammatical persons: 1st=I, 2nd=You, 3rd=He/She – though to all intents and purposes in this context 2-POV isn’t relevant, or is subsumed within 3-POV.

If this isn’t the case, then why have we been having this discussion? Why is there a hard problem at all, if all views are 1-POV?

98. Charles Wolverton says:

Trond:

“I’m curious whether you have an hypothesis about what such a connection between NC(V, red) and PE(V, red) looks like.”

Of course not. An explanation of PE is one view of the so-called “hard problem”. As far as I know, even the foremost leaders in the field either have none that has gained much popularity or claim none is even possible. Why would an amateur like me have an explanation?

“I am reluctant to the idea that the brain contains both”

As far as I know, that PE (as I define it – see below) occurs is not in dispute. The questions are how and why it’s produced.

Your comments suggest an assumed level of understanding of the brain within relevant communities that is much greater than I think is warranted. If I’m right, the position that our not currently understanding some hypothesized mental process suggests that it doesn’t exist seems premature.

KL:

If you think PE doesn’t exist, you don’t understand how I’m using the term. In visual processing, for example, light impinging on the eye(s) results in neural activity – what I mean by “neural correlates”. Those neural correlates – which are not available to us by introspection – could in principle be processed to recognize such things as movement in the FOV (eg, an approaching object) or the presence of a familiar object based on identifiable surface patterns (eg, the alternating stripes of a zebra). That we also experience a pictorial representation of the content of our FOV is what I mean by “phenomenal experience”. Such a PE is an additional feature that in a sense makes those neural correlates indirectly available to us for introspection. All I mean by saying that PE is unnecessary is that we could extract some useful info – such as the two examples just given – without the added feature of PE. But it seems almost certain that adding PE has evolutionary advantages. And in any event, it seems like something it would be desirable to understand.

A description can be “complete” only if the thing to be described is specified. I’m trying to describe our total visual processing package including the production of PE.

All:

It appears that “1-POV” has assumed an independent life of its own as a “something”, whereas I use it as Ron describes in comment 93: simply an abbreviation for “introspection” of mental events by person A, as contrasted with attempts to explain those events by person B, necessarily from a 3-POV. With that understanding, some statements being made using “1-POV” seem to make little if any sense. There are many questions about what can and can’t be achieved, using the results of introspection or from either POV, but I don’t see that there is much to be said about 1-POV per se.

99. Trond says:

Charles: “Why would an amateur like me have an explanation?”

Frankly, I’m more interested in your opinion than in those of people who claim no explanation is possible or warranted. I’m in no way claiming PE doesn’t exist. I’ve just come to realize that it is hard to imagine the structure of the brain containing it on its own. Of course, we may have missed something vital to understanding the brain, but in that case, studying neural correlates won’t bring us any closer to the answer. If electrons moving in a certain pattern are enough to create PE, then we’re talking about a very strange and ad hoc phenomenon of physics. It’s one hypothesis, but it has many weaknesses.

These are of course very difficult questions to answer, but it would be interesting to know what it would take for you to be satisfied that we knew all there is to know about how the brain works (which we presently don’t, I admit).

100. Vicente says:

Trond, that is a good summary.

If the electrons moving in a certain pattern is enough to create PE, then we’re talking about a very strange and ad hoc phenomenon of physics

I don’t think that is the problem, because it is clear that the physical properties of the brain are undoubtedly involved in the creation of PE (or at least of its contents). It is a problem of the nature of nervous tissue and the nature of PE: they just don’t fit. PE qualities (qualia) have no counterpart on the brain’s physical side, even though there is a link, a connection, between them. For this reason, I believe the brain merely acts as an interface, a bridge between two sides of reality.

The thing is, we don’t know where to look or what to look for. It is as if you were sent to the Arizona desert to fish for cod – good luck, man!

For this reason, I believe that trying to identify effects in the brain that require (or could require) the action of an external agent in order to be explained could be a first step, a strategy to sort out this dual-nature scenario. Once these effects are identified we could move ahead and look in more detail at what goes on in them.

101. Trond says:

Vicente, yes, I agree, the existence of consciousness is not a problem in itself, just like the existence of atoms is not a problem. As long as you’re in a universe, that universe is bound to have some things in it.

What is so fascinating is, as you say, the apparent connection between the behaviouristic, physics-conforming brain and the rich states of consciousness. It’s as if consciousness intercepts certain synapses, processes that information, and then throws colors, sounds and such into the mix.

If only our own awareness of consciousness were not determined (or overdetermined) by the brain’s evolved self-awareness, we would be able to identify the external agent.

102. Kar Lee says:

Charles [98],
I think we are closing in on one crucial element of our discussion: what you mean by PE. You said,
“… In visual processing, for example, light impinging on the eye(s) results in neural activity – … That we also experience a pictorial representation of the content of our FOV is what I mean by ‘phenomenal experience’.”

You seem to be introducing an interesting “we” in “we also experience a pictorial representation of the content of our FOV”. What exactly is the “we” you are referring to?

Let’s say you are describing a machine-vision system that guides a car through Los Angeles city traffic (wow, the Google car!), and you describe it like this: “In visual processing, for example, light impinging on the camera results in CPU activity … That the Google car also experiences a pictorial representation of the content of its FOV is what I mean by ‘phenomenal experience’.” Now, this statement does not make any sense, does it?

Now, let’s investigate how the camera and CPU activity produce the “phenomenal experience”. Can we? Except for people who are particularly imaginative (like those who talk to their stuffed animals), most people will say, “What are you trying to explain? What phenomenal experience?”

So, what is it that you are trying to explain in a human?

You can say, “But the Google car is not conscious and it does not have phenomenal experience.” I will say, “How do you know it does not? Because you are not a Google car!” Then how do you know a woman has phenomenal experience? After all, you are not a woman. Oh, but because women and men are very similar. But what does similar mean? Two things are similar if there is a subset of attributes that is identical. I will say a human and a Google car are very similar as well. Both are very complicated visual-signal-analyzing systems. Doesn’t this subset of attributes make a human and the Google car similar enough? Why not? If that is not similar enough, you and I are not similar enough for you to know that I have phenomenal experience.

When we refer to phenomenal experience, we necessarily refer to our own. So, what you are actually saying in the original statement is this: “… In my visual processing, for example, light impinging on MY eye(s) results in my neural activity – … That I also experience a pictorial representation of the content of MY FOV is what I mean by ‘phenomenal experience’.”

Please note that I have replaced all the descriptions that seemed to be 3-POV descriptions, but should have been 1-POV descriptions, with explicit 1-POV descriptions.

When your statement is rewritten this way, which is what it should have been in the first place, the problem is immediately clear: you are trying to explain to yourself your own experience. You know your brain works in a certain way. This comes from science, the result of investigating other brains. Then you have “phenomenal experience”. And you want to piece these two pictures together. The ultimate thing you want to explain is the existence of your phenomenal experience, which, my friend, is the same as your own existence. You are trying to explain your own existence! The existence of your phenomenal experience is the same as your own existence from the 1-POV. If you don’t have “phenomenal experience”, that is equivalent to saying you don’t exist. If you don’t experience anything, you cannot be said to exist. If you exist, you must experience something. The two statements are equivalent. So, what I want to point out is: you are trying to explain your own existence.

And your approach is to find the reason in materialism. What I am trying to argue is this: if your living body is instantaneously replaced by a different but identical physical body (which we will call your twin, which is not you), the material world will be exactly the same as before (no one will notice any difference, short of being you yourself), but your existence will have been terminated. From this I infer that the clue to the explanation of your personal existence is not within this material world, because the two are unlinked, decoupled. This problem of decoupling, if you are a materialist, is the Hard Problem.

103. Ron says:

Not necessarily. Whether it’s complete or not depends on what else is to be discovered. It could be considered contingently complete. So it depends on the particular materialist. But a materialist view can also encompass the 1-POV – it merely needs to be able to explain it in materialist terms.

And to pick up on Charles (#98), completeness depends on the bounds we set. I’m not aware of anything we know being known to an absolute degree. We always have some bounds within which we work. So for many physics problems Newton is enough, while for others we need Einstein, and so on.

So, maybe we can explain aspects of consciousness that exclude any use of or explanation of 1-POV. But other explanations may include that too.

104. Kar Lee says:

Ron [103],
You are responding to [97], I assume. If you believe “It could be considered contingently complete”, then that means it is contingent upon the correctness of materialism. And that is a very open-minded approach. I appreciate that.

105. Charles Wolverton says:

KL:

I’ve described – in the only ways I know – what I mean by “phenomenal experience” in humans (the “we”, which I would have thought obvious), and your response is to ask what PE means in the context of a camera and a CPU. I have no idea how to proceed given that big a disconnect.

106. Vicente says:

107. Vicente says:

108. Kar Lee says:

Vicente,
The double-layer dream I had was not simply that I dreamed of waking up from a lower-layer dream and that was it. The lower-layer dream had content. I remember it because it was related to a life-changing event I was going through.

109. Vicente says:

Kar Lee, there cannot be multi-layer dreams, unless you hold a position much more bizarre than the interactionist dualism I do.

Probably what you felt was due to the dream sequence (in one single layer):

1) You have an ordinary dream (what you call lower level)
2) You dream of waking up from that dream.
3) You go on with another ordinary dream.
4) You wake up for real.

With what brain was the Kar Lee in the higher-level dream dreaming? With the same one as you were in the lower level: one brain, one level. Dreaming is just a strange state of mind, not to mention lucid dreams. I remember having a very interesting talk about dreams on some other page of this blog.

Many cultures believe that dreams are a sort of window to the other side. For me, I can usually relate what happens in my dreams directly or indirectly with events of the day. I think they mainly are caused by memory arrangement processes that leak into some consciousness related brain units.

110. Kar Lee says:

Vicente,
You may be right. I did remember dreaming about waking up from a dream. It could have been that I was interpreting the waking-up part as waking up from a lower-level dream, while it was actually just two segments at the same level (the top level, anyway). But how can one differentiate between the two interpretations? I guess your reasoning is that the KL in the first-level dream does not have a “physical” brain, so the first-level dreaming KL could not dream further. But if, inside the dream, the dreaming KL dreams of waking up from a dream, is it unreasonable for “him” to interpret what he remembers before “waking up” as a dream inside his dream world?

Would that make the number of levels interpretation-dependent? Or have you seen other reasons in the literature for the incorrectness of the multi-layer interpretation?

111. Vicente says:

Kar Lee, not really, I haven’t seen anything.

What I find interesting is that in a way dreaming is like the “Brain in a Vat” scenario. To me – regarding the embodied-cognition debate, the mind out of our heads, etc. – what the body and its relation with the environment do is to fix the space-time and circumstance coordinates, so that the mind’s wandering around is minimized. But in dreams, the senses and motor output are quite disconnected. So if our brains were put and sustained in a vat, we would probably be in such a state, or a similar one, probably ending up in a horrible nightmare.

To me, one of the characteristic dream features is that you often find yourself in the middle of a situation, not knowing how you got into it. You lose your timeline and chain-of-events reference. But life in a way is like that: if you look back, turning back the years to your first memories, it is like that. From a purely conscious POV, you are here not knowing very well how you arrived, like in a dream. In the same fashion, dying could be like awakening to the next layer. In that broader sense, I like the multi-layer dreaming model.

112. Kar Lee says:

Vicente,
I think the multilayer concept has the following similarity in the software world. On top of a computing hardware architecture, we have the first layer of software, called the operating system, on top of which we have the so-called applications. If the application is an internet browser, then another layer of software can be written to run on top of that. Then we have virtualization, in which multiple operating systems can run not on the hardware itself but on a virtual machine, which emulates multiple hardware platforms.

Strictly speaking, there is only hardware and software, and all software eventually results in the hardware running something. Even though the software part is conceptualized into multiple layers, it is after all one layer only. But I think the multilayer concept is still useful. Dreaming is probably like this, at least from the POV of this reality we are in.
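The layering described above can be sketched in a few lines of code. This is only a toy illustration of the idea – each layer calls only the layer directly beneath it, and in the end only the hardware actually “runs” anything – not a description of any real operating system; all class and method names here are made up for the example.

```python
# Toy sketch of software layering: every layer only calls the layer
# directly beneath it, and only the bottom layer really executes.

class Hardware:
    def execute(self, instruction):
        # The only place where anything actually happens.
        return f"hardware executed: {instruction}"

class OperatingSystem:
    def __init__(self, hw):
        self.hw = hw

    def syscall(self, request):
        # The OS translates a service request into hardware work.
        return self.hw.execute(f"syscall {request}")

class Application:
    def __init__(self, osys):
        self.osys = osys

    def open_file(self, path):
        # The application never touches the hardware directly.
        return self.osys.syscall(f"open {path}")

app = Application(OperatingSystem(Hardware()))
print(app.open_file("/tmp/dream.txt"))
# -> hardware executed: syscall open /tmp/dream.txt
```

However many layers we conceptualize, tracing any call downward always bottoms out in the single hardware layer – which is the point of the analogy.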

113. Vicente says:

Kar Lee,

It is an interesting idea, but the multilayer SW structure is not just conceptual: each layer’s programs actually have to call functions of the API provided by the lower level, e.g. a given application has to call the open/close/read/write file functions provided by the OS. Thus, there is a real hierarchy of SW layers.

In the case of thinking (rather than dreaming), maybe we could still use that analogy (which suits the universal-mind approach pretty well ;)). It could be that more elaborate cognitive processes make use of lower-level units, or “thought bricks”, like some sort of axiomatic logic that, starting from basic logical units (a=a; a=b, b=c => a=c; …), grows and produces very complex true statements. This eventually ends in a Platonic view, with conceptual elements having a real existence in some layer of reality.
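The “thought bricks” idea can be sketched as a tiny inference loop: starting from a few given equalities, repeatedly applying the transitivity rule (a=b and b=c => a=c) produces statements that were never explicitly stated. This is only an illustrative sketch of that one rule; the symbols and starting facts are invented for the example.

```python
# Minimal sketch of growing new true statements from basic "bricks":
# take a set of equality facts and close it under symmetry and
# transitivity until nothing new can be derived.

def transitive_closure(facts):
    facts = set(facts)
    # Equality is symmetric: a=b implies b=a.
    facts |= {(b, a) for (a, b) in facts}
    changed = True
    while changed:
        changed = False
        for (a, b) in list(facts):
            for (c, d) in list(facts):
                if b == c and (a, d) not in facts:
                    facts.add((a, d))  # a=b and b=d give a=d
                    changed = True
    return facts

derived = transitive_closure([("a", "b"), ("b", "c"), ("c", "d")])
print(("a", "d") in derived)  # a=d follows, though it was never stated
```

Even three starting facts yield a whole lattice of derived equalities, which is the sense in which complex true statements “grow” out of simple logical units.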

The point is: where are the display device and the user in this model (Cartesian fallacy included)?

Or, as somebody mentioned, at the end there must be an “intrinsic observer”. Great concept!! But beyond my logic. Object and subject in one unit. I know this is the solution, but I don’t think human beings can grasp such a marvellous concept.

114. Trond says:

Can I ask what your opinion is on this? If it is true that consciousness resides in another realm, must we assume that either the agent itself or another agent has willfully connected it with the brain in this universe?