About Rationally Speaking

Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please notice that the contents of this blog can be reprinted under the standard Creative Commons license.

Thursday, November 08, 2012

Consciousness and the Internet

blog.lib.umn.edu

by Massimo Pigliucci

Here is an interesting statistic: if we multiply the (approximate) number of computers currently present on planet Earth by the (approximate) number of transistors contained in those computers we get 10^18, which is three orders of magnitude larger than the number of synapses in a typical human brain. This naturally prompted Slate magazine’s Dan Falk to ask whether the Internet is about to “wake up,” i.e., achieve something similar to human consciousness. He sought answers from neuroscientist Christof Koch, sci-fi writer Robert Sawyer, philosopher Dan Dennett, and cosmologist Sean Carroll. I think it’s worth commenting on what three of these four had to say about the question (I will skip Sawyer, partly because what he said to Falk was along the lines of Koch’s response, partly because I think sci-fi writers are creatively interesting but do not have actual expertise in the matter at hand).
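The order-of-magnitude arithmetic behind that statistic can be checked in a few lines. The figures below are rough illustrative assumptions (a billion machines, a billion transistors each, ~10^15 synapses), not measured data:

```python
import math

# Rough, illustrative figures (assumptions, not measured data):
computers = 1e9          # ~a billion networked computers circa 2012
transistors_each = 1e9   # ~a billion transistors per machine
synapses = 1e15          # ~10^15 synapses in a typical human brain

total_transistors = computers * transistors_each  # 10^18
ratio = total_transistors / synapses              # three orders of magnitude

print(f"total transistors ~ 10^{math.log10(total_transistors):.0f}")
print(f"transistors/synapses ~ 10^{math.log10(ratio):.0f}")
```

Of course, as the post goes on to argue, nothing about this ratio by itself tells us anything about consciousness.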

Koch thinks that the awakening of the Internet is a serious possibility, basing his judgment on the degree of complexity of the computer network (hence the comparison between the number of transistors and the number of synaptical connections mentioned above). Koch realizes that brains and computer networks are made of entirely different things, but says that that’s not an obstacle to consciousness as long as “the level of complexity is great enough.” I always found that to be a strange argument, as popular as it is among some scientists and a number of philosophers. If complexity is all it takes, then shouldn’t ecosystems be conscious? (And before you ask, no, I don’t believe in the so-called Gaia hypothesis, which I consider a piece of new agey fluff.)

In the interview, Koch continued: “certainly by any measure [the Internet is] a very, very complex system. Could it be conscious? In principle, yes it can.” And, pray, which principle would that be? I have started to note that a number of people prone to speculations at the border between science and science fiction, or between science and metaphysics, are quick to invoke the “in principle” argument. When pressed, though, they do not seem to be able to articulate exactly which principle they are referring to. Rather, it seems that the phrase is meant to indicate something along the lines of “I can’t think of a reason why not,” which at best is an argument from incredulity.

Koch went on speculating anyway: “Even today it might ‘feel like something’ to be the Internet,” he said, without a shred of evidence or even a suggestion of how one could possibly know that. He even commented on the possible “psychology” of the ‘net: “It may not have any of the survival instincts that we have ... It did not evolve in a world ‘red in tooth and claw,’ to use Schopenhauer’s famous expression.” Actually, that wasn’t Schopenhauer’s expression (apparently, the phrase traces back to a line in a poem by Alfred Lord Tennyson published in 1850), but at least we have an admission of the fact that psychologies are traits that evolved.

And talk about wild speculation: in the same interview Koch told Slate that he thinks that consciousness is “a fundamental property of the universe,” on par with energy, mass and space. Now let’s remember that we have — so far — precisely one example of a conscious species known in the entire universe. A rather flimsy basis on which to build claims of necessity on a cosmic scale, no?

Dennett, to his credit, was much more cautious than Koch in the interview, highlighting the fact that the architecture of the Internet is very different from the architecture of the human brain. It would seem like an obvious point, but I guess it’s worth underscoring: even on a functionalist view of the mind-brain relationship, it can’t be just about overall complexity, it has to be a particular type of complexity. Still, I don’t think Dennett distanced himself enough from Koch’s optimism:

“I agree with Koch that the Internet has the potential to serve as the physical basis for a planetary mind — it’s the right kind of stuff with the right sort of connectivity ... [But the difference in architecture] makes it unlikely in the extreme that it would have any sort of consciousness.”

The right kind of stuff with the right sort of connectivity? How so? According to which well-established principle of neuroscience or philosophy of mind? When we talk about “stuff” in this context we need to be careful. Either Dennett doesn’t think that the substrate matters — in which case there can’t be any talk of right or wrong stuff — or he thinks it does. In the latter case, we need positive arguments for why replacing biologically functional carbon-based connections with silicon-based ones would retain the functionality of the system. I am agnostic on this point, but one cannot simply assume that to be the case.

More broadly, I am inclined to think that the substrate does, in fact, matter, though there may be a variety of substrates that would do the job (if they are connected in the right way). My position stems from a degree of skepticism at the idea that minding is just a type of computing, analogous to what goes on inside electronic machines. Yes, if one defines “computing” very broadly (in terms, for instance, of universal Turing machines), then minding is a type of computing. But so is pretty much everything else in the universe, which means that the concept isn’t particularly useful for the problem at hand.

I have mentioned in other writings John Searle’s (he of the Chinese room thought experiment) analogy between consciousness as a biological process and photosynthesis. One can indeed simulate every single reaction that takes place during photosynthesis, all the way down to the quantum effects regulating electron transport. But at the end of the simulation one doesn’t get the thing that biological organisms get out of photosynthesis: sugar. That’s because there is an important distinction between a physical system and a simulation of a physical system.

My experience has been, however, that a number of people don’t find Searle’s analogy compelling (usually because they are trapped in the “it’s a computation” mindset, apparently without realizing that photosynthesis also is “computable”), so let’s try another one. How about life itself? I am no vitalist, of course, but I do think there is a qualitative difference between animate and inanimate systems, which is the whole problem that people interested in the origin of life are focused on solving (and haven’t solved yet). Now, we know enough about chemistry and biochemistry to be pretty confident that life as we know it simply could not have evolved by using radically different chemical substrates (say, inert gases, to take the extreme example) instead of carbon. That’s because carbon has a number of unusual (chemically speaking) characteristics that make it extremely versatile for use by biological systems. It may be that life could have evolved using different chemistries (silicon is the alternative frequently brought up), but there is ample room for skepticism based on our knowledge of the much more stringent limitations of non-carbon chemistry.

It is in this non-mysterian sense that, I think, substrate does matter to every biological phenomenon. And since consciousness is — until proven otherwise — a biological phenomenon, I don’t see why it would be an exception. To insist a priori that it is in fact exceptional is to, ironically, endorse a type of dualism: mind is radically different from brain matter, though not quite in the way Descartes thought.

As it turns out, cosmologist Sean Carroll was the most reasonable of the bunch interviewed by Falk at Slate. As he put it: “There’s nothing stopping the Internet from having the computational capacity of a conscious brain, but that’s a long way from actually being conscious ... Real brains have undergone millions of generations of natural selection to get where they are. I don’t see anything analogous that would be coaxing the Internet into consciousness. ... I don’t think it’s at all likely.” Thank you, Sean! Indeed, let us stress the point once more: neither complexity per se nor computational ability on its own explains consciousness. Yes, conscious brains are complex, and they are capable of computation, but they are clearly capable of something else (to feel what it is like to be an organism of a particular type), and we still don’t have a good grasp of what is missing in our account of consciousness to explain that something else. The quest continues...

I was rather impatient with all the replies until Sean Carroll's came along. While the connections and substrate may be there, they basically provide a lump of dead meat, rather than something that has come together like a brain.

If level of complexity sufficed, then the iTunes Terms and Conditions would be sentient.

I think Searle's mistake is focusing on syntax while neglecting semantics. The reason the Chinese room fails is that it fails on the semantic level.

Meaning is about sign and signified. To fix the Chinese room, you would have to have 5 languages, all inputting at the same time. Like our five senses. Then our illiterate homunculus could create meaning, via the order, and organization of how these different languages interact... causation, category and identity. Time scale is an issue, but computers can also compute much faster than the human brain. So the evolution necessary could happen faster.

The internet has nothing like our type of interaction, as you say, so any consciousness that might occur would be very different and... well... very unlikely. It might not even notice us, if it did become conscious.

I think robots are a much more likely candidate for consciousness, because they 'interact' like us.

I could be misunderstanding you, but I don't see how your method would solve the Chinese room problem. I was under the impression that to have semantics of the same kind as a human, the signs have to be grounded with real senses.

To be 'real', it would just have to be self-consistent over time. Standard brain-in-a-vat type stuff.

Imagine an apple, it feels a certain way, smells a certain way, tastes a certain way...etc..

As long as these inputs are self-consistent, your language is not nonsense and by organizing the different language inputs, the inputs become meaningful. But you need interaction (and comparison of the inputs creates meaning). Consciousness seems to simply be a high level of this, which leads to self-reflection. The abstract 'I'.

I completely agree with your characterization of the article...it was a bit of fun, but misguided play. Sean Carroll hit the nail on the head in the interview, but I don't know if I completely buy your criticisms of the concept of building consciousness.

I'd like to caveat that I've only read a couple of Dennett's books partway through, but I'm inclined to fall on a "substrate matters, kinda" position. And where I slip up and don't mention it, I'm going to be focusing on human-like consciousness. It's probably physically possible to emulate a brain on a different substrate when the substrate can support a manageable density of the fundamental computing units. I cannot see any reason why consciousness should be limited to biological systems, though I think we're a long way off even understanding enough to know how to build it (much less actually building it). But we know the substrate for consciousness can't be made of carpentry, as the engineering challenges would be literally astronomical. Also, the degrees of freedom in carbon chemistry are not going to bubble up beyond the "logical" layer (this may be a computer-science-only term, but "logical" vs. "physical" is about abstracting items in a useful way, away from their physical details).

When it comes to the emulation problem...isn't the output of conscious processes itself usually abstract symbols? Emulation seems perfectly capable of transmitting this into the "real world", or rather, we have no problem getting at it. Why can we not judge consciousness based on input/output? Anything capable of consciousness would probably need time to build a "private state" (experience) that was complex enough to behave "consciously", but after this period, the things we use to reason about human-like conscious minds should start to make sense. We would talk about degrees of consciousness based on how well reasoning about the "other mind" of the computer(s) would work (predict output).

Now, with the "uploaders" thing...I think that's the most ridiculous thing in the world. I don't see that it would be feasible at all...consciousness seems sufficiently complex that the subtleties of the medium would prevent any 1:1 transfer, and I don't know how one would capture a 100% complete brain-state snapshot.

Tononi addressed the complexity is not sufficient argument in his new book Phi (the style of which I find annoying rather than amusing - still a thought provoking read).

Also a simple correction to Unknown (above): we have many, many more senses than 5. I'm not at all sure why this is taught to children. Senses other than the top 5 include the earth's gravitational field (and acceleration more generally), balance, a sense of where our limbs are in 3D space, heat, cold, noxious substances, and levels of various chemicals in our tissues (lots of tissues can sense strain, which contributes to the sense of where our limbs are but also to sensing the bone-loading environment), blood pressure, etc., plus possibly (but not likely) pheromones, and possibly (but really not likely) the earth's, or any, magnetic field.

> Can you recommend a good handbook/companion to the philosophy of mind that discusses the latest controversies/theories of consciousness? <

We may have to crowd source this one. My specialty is philosophy of biology, so I would hesitate to make that kind of recommendation.

Alex,

I think our positions are very close, but a couple of comments:

> I cannot see any reason why consciousness should be limited to biological systems <

I never said that. I think, however, that it is limited to certain classes of complex systems with certain kinds of chemistries, in part because other kinds of chemistries just do not allow for the necessary complexity and reactivity.

> the degrees of freedom in carbon chemistry are not going to bubble-up beyond the "logical" layer <

Right, but that assumes that consciousness is only a matter of logic / symbol manipulation. It isn't, otherwise modern computers would likely already be conscious. Consciousness is about the feeling of first person experience, which is separate from whatever computational issues pertain to the logical layer (unless, again, one defines the latter as having to do with computability in the broadest sense of the word, in which case I'd like to know the difference between a conscious being and anything else that is computable in the universe).

Along these lines ... what if our definition of consciousness, like, say, SETI's search for extraterrestrial life, is too narrowly focused? Or, if not too narrow, wrongly focused? We could both overlook consciousness somewhere, wrongly, and impute it where it isn't, also wrongly?

Also, there are degrees of consciousness, based on how many "layers" of thinking a conscious entity can manipulate, as in:

1. I am thinking;
2. I know that you are thinking;
3. I know that you are thinking about what I am thinking;

Etc.

If "the Internet" is conscious, I'm sure it's not beyond the second-order level of thinking consciousness.

I'd argue these are layers of abstractions in the conscious thinking process. Which only life forms are capable of.

1. I am thinking about making conscious choices.
2. I am thinking about the conscious choices you are thinking about.
3. I am thinking about the conscious choices you may make in response to those you think I'm making.

The Internet is aware of serving human purposes - as are our human brains. (It also serves some animal purposes, although in a largely "unaware" state.) Purpose is an intelligently strategic process, and the process requires itself to ask the questions. The Internet so far can ask no questions that we humans haven't prompted it to ask.

Now you might respond that humans have also been prompted by the universe to ask questions. But we have evolved to make our own choices of and from our own questions, and the Internet and its functional computers have so far not evolved, by either their own intelligences or ours, to do so.

Hammers "serve" a purpose, but it's mostly ours. They don't have any purposes of their own that we know of. Not in the guise of being hammers, anyway. The hammer does react to its physical awareness of stress, but its reactive options were chosen for it when we arranged for and caused it to function.

Computers, by the way, can act intelligently. But again, because we've taught them to. And so I've tried to teach my hammer to anticipate things at least, but so far to no purpose.

"The Internet" is aware of nothing. Its human users, programmers, etc.? Different story. That said, I think the thrust of Dan Falk's piece is all wrong. Maybe we should take a page from Lynn Margulis and ask if (shades of Kurzweil, even, I know) we shouldn't talk about symbiosis.

I think that "complexity" is being misused much like brain volume was misused in the early 20th century as a measure of intelligence. Consciousness requires complexity, but it requires (IMHO) a particular, non-random complexity, and the internet, for all its virtues, lacks the requisite character to become conscious. It sounds cool, but it's BS.

I've held this idea in my mind for years. Not long ago, I discovered that there's actually a name for it, the "China brain" (http://en.wikipedia.org/wiki/China_brain). That's a separate thought experiment from the Chinese room -- an argument that never had much weight for me.

Falk asks if the Internet is "about to wake up"...What if it already has? Our individual neurons have no idea what we are thinking (or that we are thinking), so how would we know if the Internet is thinking?

Unless we can figure out a test for the Internet's possible consciousness, I think the answer is to remain agnostic on the question. The one thing in my mind that makes it seem unlikely is the lack of coherent input. By that, I mean that we have eyes that give us a coherent visual image of the world, feedback from touching an object that we also see, plus hearing, smell, and taste that we can correlate with what is around us. The Internet gets most of its inputs from humans who are doing stuff like tweeting about what they ate for breakfast. Yeah, perhaps the Internet can "see" through the many webcams attached to it, but without physical interaction, what can it possibly make of what it sees? And supposing it can, it knows "where" things are mainly in terms of IP addresses, so its perception of the globe we live on might have an unusual (and quite artificial) topology. Ultimately it would depend on whether the Internet is capable of self-organizing into something that can form a coherent world view. This is another way of saying that the substrate matters -- yes, I think it does, but not in a way that Searle's Chinese room captures.

> what if our definition of consciousness, like, say, SETI's search for extraterrestrial life, is too narrowly focused? Or, if not too narrow, wrongly focused? We could both overlook consciousness somewhere, wrongly, and impute it where it isn't, also wrongly? <

Yes, this is always a possibility. But, just in the case of life, we have to go by what we know, and we know only one type of consciousness. If anyone is interested in a "broader" version it is up to them to provide reasons, definitions, and evidence.

pete, denlillekemisten,

I see what you mean. I'll look into it, but it sounds like the idea is to use multi-valued logic to avoid paradoxes, which is fine. Bayesian inference comes in for practical applications (resolution of inconsistent beliefs), but, again, I don't see how a theory of probability is directly relevant to an abstract problem in logic.

I agree that the internet is almost certainly not about to reach some human-level of consciousness. But I don't agree that the substrate is as necessary for consciousness as you suggest.

First I should mention that I haven't read much of this blog, so I'm not sure if you've covered any of what I discuss below elsewhere. Please do refer me to posts that might cover these topics.

Second, I seem not to have the same narrow definition of "consciousness" as you, since you've implied that only humans are conscious. I tend to view consciousness as a scale of awareness. Practically all animals have some collection of senses of which they are aware. Higher mammals start to become aware of more complex world states and interactions, including things such as intentions, desires and feelings. Humans certainly do appear to be the best species at being able to deal with abstractions and being able to reflectively perceive and control our internal state of mind. Though there are many low-level brain processes that are beneath the level of our conscious awareness.

I think your analogy between Searle's Chinese Room and photosynthesis falls down since you don't maintain the external interaction. The Chinese Room is flawed as an argument against artificial intelligence, but it interacts through the input and output of messages in Chinese. The "photosynthesis room" should be capable of generating the same chemical output as photosynthesis. From an external perspective, it doesn't matter what is going on in the room as long as the output is the same. There could be actual photosynthetic chemicals in there, or some complex machine that includes a simulation connected to a "molecule assembler". Would you be concerned that the photosynthesis was only simulated if the physical result was still the same?

Of course, few would deny that consciousness is much more complicated than photosynthesis. The China brain, mentioned by Richard above, is something I've contemplated in one form or another in the past. It seems to me that the vast difference in the substrate there is probably a significant stumbling block for people considering such a system to be conscious. Though I don't see how the China brain producing consciousness raises questions of any greater difficulty (where and how consciousness arises) than in a biological brain.

The human brain is composed of approximately 100 billion (10^11) neurons, and each neuron is a distinct cell that can survive in a culture outside of the brain. Through the arrangement and efficacy of connections to sensory neurons and each other, the activity of collections of neurons abstractly represents awareness of the body and environment. Consciousness emerges from the collective activity of neurons, so it seems to me that consciousness is actually the combination and interaction of these abstract representations.

Scientific experiments with out-of-body experiences would seem to suggest that, while consciousness is a product of the brain, ambiguous sensory information or direct chemical manipulation can cause our consciousness to "leave" the body. This could be evidence for consciousness being the emergent abstractions of perceptual brain processes.

Having said that, the substrate is critically important for biological organisms. Just as the physical laws of the universe are critical. (How could they not be when living things are composed of matter and energy interacting under these physical laws?) But if consciousness emerges from the abstract representations resulting from perceptual brain processes, by the nature of abstraction, the physical substrate of consciousness becomes non-critical. Given this view, I don't currently see why consciousness couldn't exist "in silico".

Maybe you could give your point of view on the ideas that abstract representations can objectively exist, do exist in the brain, and that consciousness is a collection of abstract representations?

>"Scientific experiments with out-of-body experiences would seem to suggest that, while consciousness is a product of the brain, ambiguous sensory information or direct chemical manipulation can cause our consciousness to "leave" the body."

What scientific experiments with out-of-body experiences, such as those using transcranial magnetic stimulation and the like, tell us is that the 'feeling' of being outside your body is just that, a feeling, specifically the manipulation of the proprioceptive modeling of our embodied system. Neural modulation of the relevant brain areas responsible for maintaining said proprioceptive model does not prove that consciousness can leave the brain/body; it just shows that the part of the brain that maintains a model of our embodiment is manipulable, so we can induce the feeling of floating outside the body, etc. If you want to prove that our consciousness does indeed extend outside the body, then you'd need a study where the floating consciousness reports back novel information (say, a changing sequence of random numbers on an LED display somewhere the person's eyes can't see) that would be physically impossible for the person to have perceived. In that regard, there is an ongoing study where they place displays that cycle numbers or images (forget which) in operating rooms, positioned so that the patients cannot see them, but if you hover above the body and observe the operating theater like so many have reported, then they would be visible. I assign a very low probability to positive data points in these experiments; the simpler conclusion is that it is just the feeling of leaving your body, aided by your additional senses, recreating the scene from a different, imagined vantage point.

I agree. :) I was probably a bit vague with the, in quotations, "leave". I don't think that consciousness is like a spirit that can leave the body and independently experience the world.

I was working with a definition of consciousness where there is a distinction between the functions of the brain that give rise to consciousness and the awareness and subjective experience of consciousness that emerges.

It may be a stretch, but I would say that consciousness extends to wherever sensory signals come from. Of course, in normal situations that only extends to the sensory receptors in the body.

However, we could, for example, imagine a living brain in a jar receiving all of the sensory signals from, and sending all the motor signals to, a distant human body. Or, we could use the real examples of implants (cochlear and artificial retinas) that allow us to be aware of signals from sensors (sound or vision) outside of the body.

I would say that "consciousness" extends to the site of these sensors. So consciousness could, in this sense, "leave" the body.

Massimo,

The Internet reacts to our requests. It has elements that calculate at our request. I know this from logically reacting to my observations. ("Know" being a term of probability in my case, if not in yours.)

All things that react are in that sense aware of the reaction process. The Internet computers are not aware of the extent to which their masters are humans, or the extent to which humans have purposes. Your computer for example was likely taught that we have none.

And physicists may have informed you that (in their professional opinion) there is an awareness involved in every physical reaction process in the universe, but again they are speaking from a logical analysis of their observations. And of course you have logically analyzed this particular bit of knowledge and found it wanting. In your "oh boy" opinion, that is. Oh brother.

> I seem not have the same narrow definition of "consciousness" as you, since you've implied that only humans are conscious. <

No, I don't. I explicitly said in the post that consciousness is a matter of degrees, as to be expected from pretty much anything that evolved. However, that to me is even more of a reason to think that the phenomenon is substrate dependent.

> Would you be concerned that the photosynthesis was only simulated if the physical result was still the same? <

The point of that analogy is precisely that if photosynthesis were only simulated there wouldn't be any physical result. Your "molecular assembler" simply sneaks in the physical substrate.

> Though I don't see how the China brain producing consciousness raises questions of any greater difficulty (where and how consciousness arises) than in a biological brain. <

I do, since the substrate is entirely different (and so is the connectivity).

> Consciousness emerges from the collective activity of neurons, so it seems to me that consciousness is actually the combination and interaction of these abstract representations. <

We don't actually know how consciousness emerges from neural activity (though it doubtless does). I don't think I can make sense of the idea of neurons having representations.

> Scientific experiments with out-of-body experiences would seem to suggest that, while consciousness is a product of the brain, ambiguous sensory information or direct chemical manipulation can cause our consciousness to "leave" the body. <

That's a well documented illusion, there is no leaving the body, as the body seems to be necessary for consciousness (yet another argument against "mind uploading" fantasies).

> But if consciousness emerges from the abstract representations resulting from perceptual brain processes, by the nature of abstraction, the physical substrate of consciousness becomes non-critical. <

But that's begging the question. I don't think consciousness emerges from abstract representations of logical inputs / outputs. That's precisely what one needs to demonstrate (rather than assume) for the computational theory of mind to go through.

Baron,

> All things that react are in that sense aware of the reaction process. <

Nonsense on stilts, as I have argued to you many times in the past.

> physicists may have informed you that (in their professional opinion) there is an awareness involved in every physical reaction process in the universe <

Physicists have not informed me of any such thing. And what would physicists know about awareness, given that the latter is a biological phenomenon, to the best of our knowledge?

Massimo, if awareness is more than a biological phenomenon, then who better to have realized that than physicists?

And because you've persisted in arguing that only biological forms have awareness, does such persistence alone make you right?

Physicists (John Wheeler, et al) have also argued that the universe operates and evolves intelligently, and yet your argument seems to be that intelligence was only evolved by life - along with, I suppose, by life's self invented and self evolved awareness.

Of course it's long been obvious that you won't and apparently can't change your mind on these matters, but many of us feel that some forms of intelligent change are a necessity for learning. And when so many learned folks have stressed the need to look for purposes, your refusal to consider purpose as a function of all intelligent processes is problematic to say the least.

>" One can indeed simulate every single reaction that takes place during photosynthesis, all the way down to the quantum effects regulating electron transport. But at the end of the simulation one doesn’t get the thing that biological organisms get out of photosynthesis: sugar."

But you do get sugar! If you were to simulate, with appropriate resolution, yourself (forget the feasibility for the moment), then you'd be able to interact with the simulated sugar in the same way you'd interact with 'physical' sugar here in the 'real' world. I don't buy the argument that a simulation is different from the real thing.

Now, if there are some physical phenomena that are not amenable to computational simulation, either for computability or computational complexity constraints, then that is another story. But what can be simulated, at appropriate details, is not different from the 'real' thing except for the divide of the interface between the agent and the simulation.

Philosophy is hard enough to do when definitions are rigid and clear, even harder when the subject matter becomes as murky as consciousness. While it is intriguing to consider the internet's point of view due to all its underlying complexity, why bother? A car might be a useful analogy for seeing consciousness as a 'simple' thing: there might be hundreds of systems within the car allowing it to drive at 80mph, but such movement can be achieved using any number of methods. The same could be true of consciousness. Isn't the mystery of consciousness supposed to be not so much the interaction with the environment, not so much the 'awareness' of the environment (if you believe that anything in this environment even exists outside the awareness of others), but simply this: the soundless hum you feel when you wake up, barely sensing anything except your own thoughts? That's the lodestone I think we are trying to mine, and I think we do ourselves and the science a disservice by thinking this is only a property of a living thing, whatever that is supposed to mean.

I know philosophy is about humans, but the problem of consciousness is bigger, and you can play simple games to switch scope so that the internet, hammer, or thermostat get a fairer shot at it. Create a film of the real world except 40 years pass between each frame. Who is more alive, who is more reactive to the environment, who exhibits more awareness in the film - the rocks or the quark-like people? Why must it always be about us?

I saw a talk recently entitled "Computing like the brain" by Jeff Hawkins. In it you can get a feel for how different modern computers are from how a brain works. They try to mimic the way the brain works and manage to build a good classifier. Another BIG point in that talk is that building a brain simulation in software is computationally prohibitive if it is going to have anywhere close to the number of neurons we have. We need _fundamentally_ different computer architectures to make that possible.
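The "computationally prohibitive" point can be made with a rough back-of-envelope estimate. The figures below (synapse count, update rate, cost per update) are commonly cited ballpark assumptions, not numbers from the talk:

```python
# Back-of-envelope sketch of brain-simulation cost (illustrative assumptions).
synapses = 1e15            # rough estimate of synapses in a human brain
updates_per_second = 100   # assumed average update rate per synapse
flops_per_update = 10      # assumed floating-point cost of one synaptic update

required_flops = synapses * updates_per_second * flops_per_update
print(f"{required_flops:.0e} FLOPS")  # → 1e+18 FLOPS
```

Even with these charitable assumptions the requirement lands at exascale, which is why conventional von Neumann architectures struggle with whole-brain simulation.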

The internet is most probably connected in the "wrong" way to produce consciousness. "Braining" is all about how connections are made and broken.

It's a very interesting talk, and I learned some neurobiology as well, highly recommended.

> if awareness is more than a biological phenomenon, than who better to have realized that than physicists <

It isn't, as far as anyone can tell, since it is manifested only in (certain) biological systems.

> does such persistence alone make you right? <

You do realize that that is the sort of sophistic rhetoric that can be turned against anyone, yes? You are not making an argument, just substitute your own name for mine in your sentence and contemplate the effect.

Gadfly,

> maybe we need to talk about human/internet "symbiosis" rather than just the internet. <

Well, it's an intriguing suggestion, but the term refers to interactions among living organisms (as the "bio" part implies). I don't doubt that we will soon be looking to more and more *integration* between computers and biological beings, but that doesn't qualify as symbiosis, at least not in the way biologists understand the term.

nonzero,

> But you do get sugar! If you were to simulate, with appropriate resolution, yourself (forget the feasibility for the moment), then you'd be able to interact with the simulated sugar in the same way you'd interact with 'physical' sugar here in the 'real' world. <

No, you would just get a *simulated* interaction between the simulated plant and the simulated sunlight, resulting in simulated sugar. No *actual* sugar involved. Of course there is a difference between simulations and the real thing, unless you think the universe itself is a simulation (which, I admit, is a possibility - but that's a whole other story, already covered at RS).

DaveS,

> Create a film of the real world except 40 years pass between each frame. Who is more alive, who is more reactive to the environment, who exhibits more awareness in the film - the rocks or the quark-like people? <

Isn't the argument against rock consciousness that we can't see any evidence for it? What if we were to look at them at higher speeds from beginning to end, would we not see stuff that could lead us to reconsider?

Do me a favor and define consciousness or point to a definition you like. Then contemplate how human-centric that definition will have to be in order to make any sense.

I'm not saying that that is wrong, we cannot proceed with meaningful work without borders and assumptions. I am saying it is shortsighted to say consciousness in the non-living cannot exist. Far better to say, we have a hard time conceiving it, and leave it at that.

Massimo turns my question back at me: "does such persistence alone make you right?" No, Massimo, neither me nor you, but note the word "alone" in that question. You've persistently repeated, without references, that awareness is no more than a biological phenomenon "as far as anyone can tell, since it is manifested only in (certain) biological systems." My persistence in that area was not alone, yet you've ignored my references completely. What would physicists in particular know about awareness, you ask, as if they have no right to a contrary opinion where you're concerned. Not even John Archibald Wheeler. Of course he's dead, so he's no longer "anyone," I suppose. Then how about Charles Sanders Peirce, scientist AND philosopher? Well, he's even deader. I have more, but that would be persistent, I suppose, to a fault.

Here's a nice little article about the awareness of electrons: http://www.buffalo.edu/news/12678 ("At Small Scales, Tug-of-War Between Electrons Can Lead to Magnetism Under Surprising Circumstances")

Also check out this paper at http://www.cogsci.uci.edu/~ddhoff/ConsciousRealism2.pdf: "Conscious Realism and the Mind-Body Problem," Donald D. Hoffman, Department of Cognitive Sciences, University of California, Irvine, California 92697

The awareness of electrons? Really? Well, now I know that, Jain, New Ager or whatever you are, further discussion of this issue with you is a waste of time. Any additional response by Massimo is charitable.

> Isn't the argument against rock consciousness that we can't see any evidence for it? <

No, it's that we have no reason whatsoever to think that rocks are conscious, just like we have no reasons to believe that human beings are rocks.

> Do me a favor and define consciousness or point to a definition you like <

I did, in the post. Of course it looks human-centric, since that's the only experience of the phenomenon we have. That's true also for language. Do you suspect that rocks talk and we don't hear them because we are not using the right frequency?

> Here's a nice little article about the awareness of electrons <

There is absolutely nothing about awareness in that article, it is about magnetism. You see why I don't take seriously the sources you keep invoking?

> What would physicists in particular, know about awareness, you ask, as if they have no right to a contrary opinion where you're concerned. <

They have a right to their opinions, but they don't know any more about consciousness than biologists know about quantum mechanics. Just like there is no particular reason to ask a biologist about quantum mechanics there is no particular reason to listen to a physicist when he talks about consciousness.

> Then how about Charles Sanders Peirce, scientist AND philosopher <

You will always find all sorts of strange opinions being held by all sorts of smart people. That's an argument from authority, a basic logical fallacy. I see no reason to talk about consciousness outside of biological systems, and nothing you have written or linked to has provided me with a compelling reason to change my mind. But you are more than welcome to keep trying.

Massimo, the article on magnetism was about how the mobile electrons act as "magnetic messengers," using their own spins to align the spins of nearby manganese atoms. And if you've ever studied anticipatory systems, you'd perhaps see that the atoms were acting in an anticipatory fashion, which causes some of us to realize that they have to be aware of each other's application of forces to reactively make the process work as predictably as it invariably does. If atoms don't and didn't operate with this form of awareness of each other's presence, there'd be no magnetism, and in the end, no regulated universe. (Assuming you accept, of course, that the "laws" we've observed in operation are regulatory.) There is information being meaningfully exchanged here, as Wheeler would have told you, and he was far from a merely smart person with a strange opinion. (And the talk here is about awareness, not just our forms of conscious awareness, but nice try.)

And you must know that arguments from authority are not fallacies per se. They're your own primary form of argument here, are they not? We find some authorities strange because of their previous opinions, but the opinions of those like Wheeler and Peirce have made them icons as tellers of our modern truths. (Relatively and rationally speaking, of course.)

Massimo: >"No, you would just get a *simulated* interaction between the simulated plant and the simulated sunlight, resulting in simulated sugar. No *actual* sugar involved. Of course there is a difference between simulations and the real thing, unless you think the universe itself is a simulation (which, I admit, is a possibility - but that's a whole other story, already covered at RS)."

That is the point I was trying to get across, the difference between *simulation* and *real* is not as strong as you'd like to think. My thought experiment was trying to address this by showing that if you could simulate a human along with the photosynthetic plant, then the simulated human would interact with the plant (and the sugar output) just the way a *real* person would interact with *real* sugar. You do not give any rational explanation for why this would not be so.

> The article on magnetism was about how the mobile electrons act as "magnetic messengers," using their own spins to align the spins of nearby manganese atoms. <

I read the article, and nothing there implies anything at all about awareness of atoms. You are simply misreading a metaphor, as you keep doing when twisting the scientific literature to fit your preconceived panpsychist view of the world.

> the talk here is about awareness, not just our forms of conscious awareness, but nice try <

No, the talk here is about consciousness. Just re-read the title of the post, if not the post itself. But nice try.

> They're your own primary form of argument here, are they not? <

No, I try to present arguments and evidence in favor of my positions. You may or may not buy either, but it is a willful mischaracterization of this blog to say that I argue from my own authority.

nonzero,

> My thought experiment was trying to address this by showing that if you could simulate a human along with the photosynthetic plant, then the simulated human would interact with the plant (and the sugar output) just the way a *real* person would interact with *real* sugar. <

I don’t get it: you either simulate the whole thing, in which case you don’t get sugar, or you simulate part of the process and then connect the output to something that actually makes sugar, in which case you are cheating. If I’m missing something, please explain.

dmc,

> Likely the principle of organizational invariance <

Sorry, I long stopped taking seriously anything David Chalmers says. He started with zombies and recently even endorsed Singularitarianism. Besides, I was talking about an empirically-established scientific principle, not the sleight of hand of a misguided philosopher.

Massimo, it's unfortunate that you don't see a difference between awareness in general and conscious awareness in particular. Because any talk on consciousness is about awareness, but awareness is not necessarily conscious (regardless of the varied views of either you or that somewhat more nutty Chalmers). And although I'm not what you'd like to label a preconceived panpsychist (whatever the hell that really means), I'd accept the label of panprotoexperientialist, if labels help you think.

And of course you argue all the time from your own authority; it would be a mischaracterization to say otherwise. Not that I've said there's anything wrong with that per se. But your arguments make your authority questionable when you put physicists on a lower pedestal when it comes to your version of philosophy. The physical sciences sprang from philosophy, and not the other way around. You, for example, may be a better philosopher than Lawrence Krauss (but who isn't), yet Wheeler (whom you've studiously ignored here) was much better at the philosophical side of physics than you. As Peirce was about its purposes as well.

> it's unfortunate that you don't see a difference between awareness in general and conscious awareness in particular <

Where on earth did you get that? I’m perfectly aware of that distinction. I simply pointed out that *my* discussion in this post pertained to consciousness, but you highjacked it to turn it into your perennial favorite.

> I'd accept the label of panprotoexperientialist, if labels help you think. <

No, not at all, I’m afraid.

> Wheeler (who you've studiously ignored here) <

Once again, you simply drop names (Wheeler, Peirce), which amounts to arguing by authority. Tell me what you find so compelling (and pertinent!) in Wheeler and maybe we can have a conversation.

Well, Peirce wrote: "In the pragmatic way of thinking in terms of conceivable practical implications, every thing has a purpose, and its purpose is the first thing that we should try to note about it." But you knew that, and you disagree, and apparently there's no curing you on that score.

Wheeler, on the other hand, wrote something that was more to the point, where awareness is concerned.

From Wikipedia: "In 1990, Wheeler suggested that information is fundamental to the physics of the universe. According to this "it from bit" doctrine, all things physical are information-theoretic in origin."

Wheeler: "It from bit. Otherwise put, every "it" — every particle, every field of force, even the space-time continuum itself — derives its function, its meaning, its very existence entirely — even if in some contexts indirectly — from the apparatus-elicited answers to yes-or-no questions, binary choices, bits. "It from bit" symbolizes the idea that every item of the physical world has at bottom — a very deep bottom, in most instances — an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and that this is a participatory universe."

Awareness in a nutshell, or as you might say, on stilts.

You could also look at some of his books and papers, such as this one oft referenced at the SEP: Wheeler, J.A. and Zurek, W.H. (eds) (1983) Quantum Theory and Measurement (Princeton, NJ: Princeton University Press).

Re Peirce: First, there's more than one use of the word "purpose." Second, people as reductionistic as Dennett and Dawkins have used words such as "purpose" in an anthropomorphic way about evolutionary processes. I would say, rhetorically, that "surely you don't believe that means the process of evolution itself is conscious," but I'm not so sure; maybe you DO believe that. Third, what Massimo said.

Re Wheeler: If you can show me a statement by Wheeler supporting your original claim that hammers are sentient, I'll eat my hat. As for Wheeler's actual statement, as cosmologists around the world know, nobody knows the "correct" interpretation of what wavefunction collapse really means. Is it Everett's Wheeler-inspired multiverses? Greene's string theory? Some neo-Einsteinian quantum realism?

And, taking a shot at Dave while here: If this is just about "playing word games" and a change in games would let us talk about the consciousness of a hammer or thermostat, as soon as a hammer or thermostat can read Wittgenstein, understand linguistic game-playing and "deal itself in," call me back. (Or ask Nagel to write, "What Is It Like To Be a Thermostat?")

Oh, and having clicked Baron's blogger profile ... wow. If you agree with 10 percent of what this Anthony Peake claims, Massimo has been far more than generous on the amount of time he's spent responding to you.

Baron P has pointed out what is compelling about Wheeler's work. His ideas about the anthropic universe in the 60s led him to the work on information in the 90s. The centerpiece of this work was the 'delayed choice' variation on the 2-slit experiment, ostensibly proving that the way an experiment was set up not only determined the path of an electron in a beam being observed, but also paths that had already taken place. Again, what you call the most successful science to date. Worth mentioning that unlike Bohm, he avoided parapsychological pursuits (and their proponents) like the plague, and he despised pseudoscience.

The pertinence of Wheeler's ideas is, interestingly enough, that he did NOT require consciousness for stuff to exist; inanimate observers would do just as well. Well, if the internet can be said to be conscious, wouldn't we be back to square one, whereby stuff is only in our minds, and what we think about and what "is" are not much different? But different they are, according to Andrei Linde and John Smythies. While at pains to proclaim the existence of matter, they are both quite clear that any description is not complete unless consciousness is invoked. They draw on the work of Kaluza and Klein to distinguish between a physical space of more than 3 dimensions and a phenomenal space having only 3 dimensions.

To me, this stuff is both science and philosophy. It takes place in universities, not everyone agrees or follows, but it's done by some of the best in the field; ignore it if you like.

Consciousness's role in the universe is why I asked you about your definition. I do see that you wrote something about it being a biological phenomenon, but I was hoping for something a bit more substantial.

Consciousness is a biological function because biological forms make proactive choices. (I'd argue that to do so purposely is to be conscious of the purposes, but that won't fly here.) But the choices in any case are formative and it's the choice making function that evolves its forms. If these functions evolved to form our creatures, they may well have evolved elsewhere to accommodate other formations, and we would likely call these other forms life, but not necessarily find them biologically constructed. But we could find that they at the same time have the sensory capacities to consciously respond intelligently to whatever efforts we've made to contact or respond to them.

as usual we are hitting diminishing returns here, but a couple more comments before I sign off from the discussion:

> Peirce wrote: "In the pragmatic way of thinking in terms of conceivable practical implications, every thing has a purpose, and its purpose is the first thing that we should try to note about it." <

Yes, but he wrote that in the context of his 1908 essay on “A Neglected Argument for the Reality of God.” I don’t take seriously any argument that refers to the existence of supernatural entities. See? Even very smart people write nonsense on stilts, from time to time.

As for Wheeler, I suspect you are either taking him out of context or over-interpreting him, as I have grown accustomed to see you do. Even if we agree that “information is fundamental to the physics of the universe” (whatever that may mean, precisely), the book by Ladyman and Ross that I discussed several times at RS is an example of how such a statement is perfectly compatible with a standard naturalistic interpretation of reality, and one that doesn’t require everything to have purpose or awareness. Take DNA molecules: if there is anything that everyone agrees carries information that’s DNA. Yet DNA is not aware of anything, and if you maintain such a notion you will be laughed out of the room in any serious biological conference.

Dave,

> The centerpiece of [Wheeler’s] work was the 'delayed choice' variation on the 2-slit experiment, ostensibly proving that the way an experiment was set up not only determined the path of an electron in a beam being observed, but also paths that had already taken place. <

Which as far as I can tell has nothing whatsoever to do with anything being discussed here. Whatever you wrote after that is so garbled that I couldn’t get any meaning out of it.

"An international group of prominent scientists has signed The Cambridge Declaration on Consciousness in which they are proclaiming their support for the idea that animals are conscious and aware to the degree that humans are ... What's also very interesting about the declaration is the group's acknowledgement that consciousness can emerge in those animals that are very much unlike humans ... Consequently, say the signatories, the scientific evidence is increasingly indicating that humans are not unique in possessing the neurological substrates that generate consciousness ... The declaration was signed in the presence of Stephen Hawking ..."

Hawking was not only present at the declaration, but made some statements of his own (on other occasions):

"There is a real danger that computers will develop intelligence and take over. We urgently need to develop direct connections to the brain so that computers can add to human intelligence rather than be in opposition".

"I think the brain is essentially a computer and consciousness is like a computer program. It will cease to run when the computer is turned off. Theoretically, it could be re-created on a neural network, but that would be very difficult, as it would require all one's memories".

Now, I am only making a Devil's Advocate argument here (an argument from authority!).

Heck, even within natural sciences, Hawking can have "issues." He's an overly enthusiastic touter of manned space travel to Mars, having seemed to overlook things such as the problem with cosmic rays, which Moon voyagers don't have to face.

Massimo, as expected, you have found your usual excuses not to have to change your stance on the fundamental pillars of your own outdated philosophical structures. Find what you consider one mistake in someone else's philosophical house and declare the place uninhabitable. Peirce, I suppose, is no longer (or never was) one of our greatest logicians, because he was not the agnostic that in your view he should at least have been.

The God that Peirce "believed" in was not the more silly God that most of today's religionists believe was the Creator of humans and our human world. Was he wrong in any case? Yes, I'm quite sure so. Aristotle was much more wrong in his beliefs in Gods, yet you still admire his more sustainable discoveries as to how his God has worked the world. And if I'm not wrong, you still have great respect for the believer, Kant. And of course the agnostic, Russell.

Peirce wrote: "I am inclined to think (though I admit that there is no necessity of taking that view) that the process of creation has been going on for an infinite time in the past, and further, during all past time, and, further, that past time had no definite beginning, yet came about by a process which in a generalized sense, of which we cannot easily get much idea, was a development." So his God was a much different one than you'd like us to believe, and certainly from the one that your friend Kenneth Miller believes in.

And yes, people that believe in purpose would seem more likely to believe in a purposive God than those who don't. And so in your logic, as exhibited here, they should not have believed in purposes at all. Of course I'm writing this for the benefit of some in your audience to ponder, since it's long been clear you won't.

As to your dismissal of Wheeler, it's just risible. You write: "Yet DNA is not aware of anything, and if you maintain such a notion you will be laughed out of the room in any serious biological conference." Yet DNA is the quintessential site of our awareness functionality. I can name a dozen modern biologists who would laugh you out of the whole building for not knowing that.

golly, you truly are unbelievable. If you could only step back and examine the logic of what you write, and how nicely you twist what I write, you might pause for just one second and reconsider. But you won’t, which is why I — as you do — write for the benefit of the broader audience, not because I have the slightest hope of changing your mind.

On Peirce:

> Aristotle was much more wrong in his beliefs in Gods, yet you still admire his more sustainable discoveries as to how his God has worked the world. And if I'm not wrong, you still have great respect for the believer, Kant. And of course the agnostic, Russell. <

I never said, nor will ever say, that belief in gods is a litmus test for rejecting *everything* someone says, so your list is entirely irrelevant. But Peirce gets his belief that everything has a purpose from his belief in god, regardless of which particular kind of god he believed in. As such, I consider his argument flawed and irreparable. But he was still a great logician in other respects.

On Wheeler:

> Yet DNA is the quintessential site of our awareness functionality. I can name a dozen modern biologists who would laugh you out of the whole building for not knowing that. <

That is not at all what I wrote, is it? Of course DNA is the basis (I’m not sure about “quintessential”) for our capacity to be aware. But that is not even close to what we were talking about, since the discussion was about the awareness of molecules like DNA. And I repeat that in that case you are the one that would be laughed out of the room.

No, Peirce does not get his belief in purpose from belief in God. But even if he, as you'd like to imagine, had, other philosophers such as Russell haven't.

Russell wrote: "Has the universe any unity of plan or purpose, or is it a fortuitous concourse of atoms? Is consciousness a permanent part of the universe, giving hope of indefinite growth in wisdom, or is it a transitory accident on a small planet on which life must ultimately become impossible? Are good and evil of importance to the universe or only to man? Such questions are asked by philosophy, and variously answered by various philosophers. But it would seem that, whether answers be otherwise discoverable or not, the answers suggested by philosophy are none of them demonstrably true. Yet, however slight may be the hope of discovering an answer, it is part of the business of philosophy to continue the consideration of such questions, to make us aware of their importance, to examine all the approaches to them, and to keep alive that speculative interest in the universe which is apt to be killed by confining ourselves to definitely ascertainable knowledge."

As to DNA, you now say, "But that is not even close to what we were talking about, since the discussion was about the awareness of molecules like DNA."

So your argument now is that DNA is aware but its molecules aren't. Well, in fact the argument was also about atoms that make up molecules, so apparently your use of DNA as an example was only arbitrarily relevant, as you could have used the example that brains carrying information are not aware on the level of their atoms. Except that Wheeler says they, the atoms, are aware, and other than doing the name dropping you've accused me of, you haven't made an argument to refute that.

you really need to make an effort not to twist what others write. I’m ok debating things almost ad nauseam, but sometimes it looks like you are not even trying.

> Peirce does not get his belief in purpose from belief in God. <

Yes, he did, as it’s clear from the context of the quote we were discussing.

> other philosophers such as Russell haven't. <

Except that absolutely *nothing* in the Russell quote you gave hints at the conclusion that Russell himself believed in cosmic purpose. C’mon, man, that’s just reading other people through incredibly thick ideological glasses.

> So your argument now is that DNA is aware but its molecules aren't. <

NO, IT ISN’T! Please re-read what I wrote. I agreed that DNA is part of what gives *us* the ability to be aware. I never agreed that either DNA or any of its constituent molecules are themselves aware. Because that is pure nonsense, as far as the available empirical evidence and theoretical foundation of biology are concerned.

Massimo, you've agreed that Peirce was a great logician, yet you argue that his capacity for logic was somehow perverted: it did not prevent him from seeing purpose in everything, but in fact caused him to, because he illogically believed there must be a god out there in some amorphous form. Now you'll say I've twisted what you've written so that it no longer makes sense. Exactly. Although I'd call it parsing or dissecting. Because it doesn't make sense logically that a great logician based perhaps his most important logical proposition on superstition. And I've found nothing in his writings that says otherwise, even though "in context" you have.

What we usually find is that these people are prompted and confounded by their superstitions with the compulsion to make rational sense of them, but true greatness won't ordinarily ensue from that. More like Chopraesqueness. Or perhaps Dawkinsistics.

And regardless of your reading of my quote from Russell as offering no evidence that he believed in cosmic purpose, the fact is that he clearly did, and in his case it came from his disbelief in a god that he had once believed in. (He threw out the baby but not the bath, it seems.) His pal, Whitehead, was similarly inclined, as you'll recall.

And then you wrote: "Of course DNA is the basis (I’m not sure about “quintessential”) for our capacity to be aware." But now you say DNA is not in itself aware; but it gives us that ability. Does awareness thus spring from non-awareness? I find it hard to think so - it's a lot like saying that life's intelligence sprang from non-intelligence. But then you have your pure nonsense and I have mine.

Can you help me to understand Searle's "Chinese Room"? In particular, I'm puzzled by the "semantics/syntax" thing.

Searle suggests that computers just process syntax and so can never truly "know" things. But in the example of the person (in the room) who internalized the rules for manipulating Chinese characters -- can we not say that he will come to know Chinese? He can certainly reply to any question posed in Chinese. To an outside observer, he will appear to be a speaker. To say (as Searle does) that he does not understand what a "hamburger" is would surely just be a temporary obstacle. That is, as the man walked around and experienced things outside the room, he would come to note that the Chinese character (let's call it #32) is constantly associated with a hamburger. He would see pictures of hamburgers with #32 underneath it. From there, he may make associations of someone reading character #32 with a particular sound and learn to associate the sound with the character, etc. (Edgar Rice Burroughs's first "Tarzan" story describes Tarzan teaching himself English -- solely from Tarzan looking at picture books and the dictionary!)

Of course, coming to understand what a hamburger is requires making inferences and generalizing beyond what one is specifically taught. I'm not sure to what extent computers are able to do this. But the man in the Chinese room certainly could. Indeed, how else could a foreigner come to know the language of another land (other than by making inferences from a large group of experiences)? In short, I don't see the barrier of obtaining semantics from syntax. I also wonder why Searle focuses on syntax, since I am assuming here that the information received by the computer or the man in the Chinese room is not only syntactically correct, but consists of accurate and meaningful examples from which to generalize. (i.e. syntactically correct, but meaningless statements like "The water is triangular" are excluded). It seems to me not so much a question of semantics/syntax, but of receiving accurate examples/information and being able to make inferences and generalizations.

For example, if a computer is taught that a red ball is a ball, a green ball is a ball, etc., is it able to generalize that the color is irrelevant and it is the shape that gives us the category of "ball"? If so, it would seem to me that links could be made from symbols to objects that would qualify the computer as "knowing" what a hamburger is.
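That kind of generalization can be sketched as a toy program (the attribute names and examples below are my own invention, not any real learning system): given labeled examples, check which single attribute's value consistently determines the category.

```python
# Toy illustration of generalizing from examples: which single attribute
# (color or shape) reliably predicts the category label? The attribute
# names and examples are invented for illustration.

def predictive_attributes(examples):
    """Return attributes whose values map one-to-one onto the labels."""
    consistent = []
    for attr in examples[0][0]:
        mapping = {}
        for features, label in examples:
            if mapping.setdefault(features[attr], label) != label:
                break  # same attribute value, different labels: not predictive
        else:
            consistent.append(attr)
    return consistent

examples = [
    ({"color": "red",   "shape": "sphere"}, "ball"),
    ({"color": "green", "shape": "sphere"}, "ball"),
    ({"color": "red",   "shape": "cube"},   "block"),
]

print(predictive_attributes(examples))  # the shape, not the color, determines "ball"
```

Even this trivial sketch "discovers" that color is irrelevant, though of course whether such symbol-shuffling amounts to knowing is exactly what is in dispute.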

I think we are supposed to imagine that the man in the room is just like a microprocessor, taking inputs and processing a list of instructions. We aren't allowed to equate the man's own consciousness with the idea that the room + list + man is a conscious system. The room + list + man is a system that presents to the outside world as an apparently conscious system that understands the inputs it is taking. But if the man does not know Chinese, he does not have an understanding of those inputs; he just understands the language that the book of instructions is written in, just as a microprocessor in an apparently aware computer only understands its own machine code.
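The man-plus-rule-book setup can be caricatured in a few lines (the rule table below is an invented placeholder; Searle's imagined rule book would be astronomically larger): the program maps symbol strings to symbol strings by pure lookup, with nothing anywhere that touches meaning.

```python
# Caricature of the Chinese room: reply by pure table lookup. The rule
# table is an invented placeholder; the point is only that nothing in
# the lookup involves the meaning of any symbol.

RULE_BOOK = {
    "你好吗": "我很好",      # if the input matches this string, emit that one
    "你吃了吗": "吃过了",
}

def room_reply(symbols: str) -> str:
    # The "man in the room" follows the rule blindly, with no access
    # to what any character denotes.
    return RULE_BOOK.get(symbols, "请再说一遍")  # default: "please say it again"

print(room_reply("你好吗"))  # a fluent-looking reply, produced without understanding
```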

I think the argument is flawed. It is meant to imply that no simulation of consciousness can ever develop real consciousness. That might be true if our consciousness were some mystical aspect of our minds, but Searle's argument doesn't refute the idea that our own consciousness is material in its origin -- it just gives an example of a material system that presents as conscious but isn't.

Computers might become conscious if sufficient criteria are met -- richness of algorithms, amount of memory, coherence of sensory data, ability to interact with the environment (for example, to perform experiments that yield interpretable results), etc.

What I have a hard time understanding is that Searle speaks of processing the symbols without "knowing" what the symbols mean. But it seems to me that properly responding to the symbols demonstrates knowledge. What other evidence of "knowing" is there beyond behavior? How do we know that you or I understand what a hamburger is, other than that our behavior links the word hamburger to the characters or the sound or the picture -- we see the picture and say "that is a hamburger"?

Searle seems to me to cheat a bit about the man in the room processing Chinese characters for hamburger, then entering the outside world and not recognizing a real hamburger. That is unfair. It is as if I taught a group of young students about geometry, then tested them on trigonometry. I would say, "all the students failed, so that proves you cannot teach them mathematics." Unfair. You have to match the testing to the teaching. The man in the room was taught only about linking characters to characters. He was never taught about linking visual data of an object to the character, or linking the character to the word sound, taste, texture, etc. of a hamburger. If he WAS taught these things, his behavior (or a computer's behavior) would be indistinguishable from someone who "knows" what a hamburger is.

If there is some other "knowing" or "understanding", then Searle is being pretty vague about exactly what it is. I saw some talk about "intentionality", but I really didn't understand what he was getting at. Thomas Nagel writes in "What Is It Like to Be a Bat?" that it is very difficult to say what provides evidence of consciousness. I can understand that we cannot say "we have achieved consciousness in a computer" if we do not know exactly what consciousness is. But on the other hand, we also cannot say "it is impossible to achieve consciousness", as Searle seems to say. If the definition is vague, it seems to me that we are obliged to stick to observable behavior. The burden of proof is on those who claim that there is something beyond that.

Richard, thanks again for your reply. Please know that any anger/hostility/impatience that you might detect in the lines above is not directed towards you.

Tom, I didn't read your post as hostile at all. I think we both find the Chinese room argument flawed, but for different and perhaps equally valid reasons. I'm trying to be as fair to Searle as possible, as I'm sure you are. As best I can interpret his argument, the man is the only intelligent component in the room; the room "behaves" as if it understands Chinese, but the man knows no Chinese whatsoever, so any appearance the room presents of being an intelligent Chinese-speaking entity can't be true awareness. The instructions in the book might be so cryptic (if the input character looks like such-and-such, then write the number 772 in column 55 of Table 37; then, if column 91 of Table 107 has a 6 in it, go to step #5934266, else go to step #6823449) that the man has no hope of ever figuring out what he is doing. Unless he follows the instructions blindly to the letter, he fails utterly at his job. The same man, when it comes time to file his tax return, might follow the line-by-line instructions and fill out a perfectly accurate return without ever understanding a thing about the tax code.

So you are right that intelligence can't be judged on whether the man actually does know the Chinese symbol for hamburger, etc. And the Chinese room argument may have a flaw based on that issue alone, but the flaw has a dodge in the form of "but you can replace the man with a microprocessor that only knows machine code, and the instructions are so mechanical that they could be rewritten in the microprocessor's machine code. Repeat the thought experiment and you have to conclude, since the microprocessor is just an unintelligent automaton, that this time there really is no intelligence in the room, but there is still an appearance of it." I have a strong suspicion that this dodge is also flawed, but the flaw is more subtle and difficult to articulate. Perhaps you put it best by pointing out where the burden of proof should fall.

But I do think the Chinese room argument is flawed on the basis that a single example of an automaton displaying fake intelligence, if that *is* the case, does not refute the concept of intelligence being created on a non-biological substrate. There could be routes that lead to actual intelligence, which emerges from the system as a whole and not from a single component.

The reply above is more or less correct, but a simpler way of putting it is to say that human awareness is more than automatic tabulation (or computation) per se. The room is just a tabulator and no more. A human might be a tabulator in some ways of neuronal processing, but is more than that. It doesn't mean that a human isn't computational in some ways, only that we are more than an automatic tabulator of the Chinese room variety.

The obvious crucial example of how human awareness is more than automatic tabulation is that it is as much a product of our environment as of ourselves (nature and nurture, and so on). We might need some automatic tabulation in neuronal processing, but that serves an interaction between oneself and one's environment, and is adaptive. Soldier on, tech people, but a reproduction of a human is a long way off.

Which part of this exchange is the "with friends" part? I'm afraid the language of dismissiveness in the original post, compounded by the relentlessly combative tone of the comments on all sides, makes it difficult for me to see the conviviality implied in that inspiring tag line.

"They [computer scientists] reason that once computers can match the amount of parallel connections in the brain, they will have the equivalent of human intelligence. But [Jeff] Hawkins points out a fallacy in this reasoning, which he calls the hundred-step rule. He gives this example: When a human is shown a picture and asked to press a button if a cat is in the picture, it takes about a half second or less. This task is very difficult or impossible for a computer to do. We already know that neurons are much slower than a computer, and in that half second, information entering the brain can traverse only a chain of one hundred neurons. You can come up with the answer with only one hundred steps. A digital computer would take billions of steps to come up with the answer. So, how do we do it?

"And here is the crux of Hawkins's hypothesis: 'The brain doesn't "compute" the answer to problems; it retrieves the answers from memory. In essence, the answers were stored in memory a long time ago. It only takes a few steps to retrieve something from memory. Slow neurons are not only fast enough [to] do this, but they constitute the memory themselves. The entire cortex is a memory system. It isn't a computer at all.' [page 366]"
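Hawkins's contrast between computing an answer and retrieving it can be illustrated with a toy sketch (the task, primality testing, and the sizes are my own invention, not taken from Hawkins): the "memory" version answers in a single lookup step, however many steps the original computation took.

```python
# Toy contrast between computing an answer and retrieving it from memory,
# in the spirit of Hawkins's hundred-step point. The task (primality) and
# sizes are invented for illustration.

def is_prime_computed(n: int) -> bool:
    # Compute from scratch: roughly sqrt(n) trial divisions per query.
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# "Memory": answers stored once, long before any question is asked.
PRIME_MEMORY = {n: is_prime_computed(n) for n in range(1000)}

def is_prime_retrieved(n: int) -> bool:
    # Retrieval is a single lookup, however hard the original computation was.
    return PRIME_MEMORY[n]

print(is_prime_retrieved(997))  # one lookup instead of ~30 trial divisions
```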

Good reply about memory. What complex computation is needed for automatic flow through past patterns by chemical commonalities (or some such thing)? If there is automatic tabulation or pattern matching at a basic level, it might be by basic commonalities in a flow through past patterns. Maybe that can be reduced to an algorithm, or is better defined as chemical affinities, or chemical affinities can be defined algorithmically. We shall see in time.

Note that he is very clear in saying that he is NOT creating some sort of artificial consciousness.

Even so, if our brain is a memory system it is interesting to speculate on how our consciousness relates to this and how it could work as a memory system instead of as a computing system. Where does this rabbit hole go? If only I could do a couple of PhDs on that as well :).

The point that Koch misses is that complexity of a system is a necessary but not sufficient condition for the phenomenon of consciousness to arise.

He is not alone in this. Bamber and most others who concern themselves with this issue, whether agreeing or disagreeing with the proposition that the Internet can, or will, become conscious, fall into the same trap.

Along with those who consider consciousness to be mystical or metaphysical, they overlook the fact that consciousness is a product of evolutionary processes.

Within the context of modern science, particularly evolutionary biology, there can be no question but that consciousness is a feature of most, if not all, organisms.

Firstly, and most importantly, from our understanding of biological evolution by natural selection it becomes quite clear that a navigational faculty involving some degree of self-awareness is required for an organism to interact optimally with its environment.

It is a measure of its fitness for the prevailing environment and subject to selection pressure accordingly.

There is, of course, a great gulf between the level of consciousness exhibited by our species and that of any other, simply because the level of interaction with the environment required by our particular ecological niche is incomparably higher, as evidenced by the billions of artifacts and systems that have resulted from human activities.

Furthermore, there is a good case to be made for the proposition that from what we at present call the Internet we will, quite soon, have a new cognitive and conscious entity on this planet whose capacities will greatly surpass those of the human.

A product of the observed autonomous evolution of technology within the collective imagination of our species.

This topic is part of the broad evolutionary model very informally outlined in "The Goldilocks Effect: What Has Serendipity Ever Done For Us?" (free download in e-book formats from the "Unusual Perspectives" website)