Chapter 14
"CONSCIOUSNESS INVOLVES NONCOMPUTABLE INGREDIENTS"

Lee Smolin: Roger Penrose is the most important physicist to work in relativity theory except for Einstein. He's the most creative person and the person who has contributed the most ideas to what we do. He's one of the very few people I've met in my life who, without reservation, I call a genius. Roger is the kind of person who has something original to say — something you've never heard before — on almost any subject that comes up.

__________

ROGER PENROSE is a mathematical physicist; Rouse Ball Professor of Mathematics at the University of Oxford; author of Techniques of Differential Topology in Relativity (1972), Spinors and Space-Time, with W. Rindler, 2 vols. (1984, 1986), The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics (1989), Shadows of the Mind: A Search for the Missing Science of Consciousness (1994), and (with Stephen Hawking) The Nature of Space and Time (1996); coeditor, with C.J. Isham and Dennis W. Sciama, of Quantum Gravity 2: A Second Oxford Symposium (1981), and, with C.J. Isham, of Quantum Concepts in Space and Time (1986).

[Roger Penrose:] My main technical interest is in twistor theory — a radical approach to space and time — and, in particular, how to fit it in with Einstein's general relativity. There's a major problem there, in which some progress was made a few years ago, and I feel fairly excited about it. It's ultimately aimed at finding the appropriate union between general relativity and quantum theory.

When I was first seriously thinking of getting into physics, I was thinking more in terms of quantum theory and quantum electrodynamics than of relativity. I never got very far with quantum theory at that stage, but that was what I started off trying to do in physics. My Ph.D. work had been in pure mathematics. I suppose my most quoted paper from that period was on generalized inverses of matrices, which is a mathematical thing that physicists hardly ever mention. Then there were the nonperiodic tilings, which relate to quasicrystals, and therefore to solid-state physics to some degree. Then there's general relativity. What I suppose I'm best known for in that area are the singularity theorems that I worked on along with Stephen Hawking. I knew him when he was Dennis Sciama's graduate student; I've known him for a long time now. But the main things I've done in relativity apart from that have to do with spinors and with the asymptotic structure of spacetimes, relating to gravitational radiation.

I believe that general relativity will modify the structure of quantum mechanics. Whereas people usually think that in order to unite quantum theory with gravity theory you should apply quantum mechanics, unmodified, to general relativity, I believe that the rules of quantum theory must themselves be modified in order for this union to be successful.

There's a connection between this area of physics and consciousness, in my opinion, but it's a bit roundabout; the arguments are negative. I argue that we shall need to find some noncomputational physical process if we're ever to explain the effects of consciousness. But I don't see it in any existing theory. It seems to me that the only place where noncomputability can possibly enter is in what is called "quantum measurement." But we need a new theory of quantum measurement. It must be a noncomputable new theory. There is scope for this, if the new theory involves changes in the very structure of quantum theory, of the kind that could arise when it's appropriately united with general relativity. But this is something for the distant future.

Why do I believe that consciousness involves noncomputable ingredients? The reason is Gödel's theorem. I sat in on a course when I was a research student at Cambridge, given by a logician who made the point about Gödel's theorem that the very way in which you show the formal unprovability of a certain proposition also exhibits the fact that it's true. I'd vaguely heard about Gödel's theorem — that you can produce statements that you can't prove using any system of rules you've laid down ahead of time. But what was now being made clear to me was that as long as you believe in the rules you're using in the first place, then you must also believe in the truth of this proposition whose truth lies beyond those rules. This makes it clear that mathematical understanding is something you can't formulate in terms of rules. That's the view which, much later, I strongly put forward in my book The Emperor's New Mind.

There are possible loopholes to this use of Gödel's theorem, which people can pick on, and they often do. Most of these counterarguments are misunderstandings. Dan Dennett makes genuine points, though, and these need a little more work to see why they still don't get around the Gödel argument. Dennett's case rests on the contention that we use what are called "bottom-up" rather than "top-down" algorithms in our thinking — here, mathematical thinking.

A top-down algorithm is specific to the solution of some particular problem, and it provides a definite procedure that is known to solve that problem. A bottom-up algorithm is one that is not specific to any particular problem but is more loosely organized, so that it learns by experience and gradually improves, eventually giving a good solution to the problem at hand. Many people have the idea that bottom-up systems rather than top-down, programmed algorithmic systems are the way the brain works. I apply the Gödel argument to bottom-up systems too, in my most recent book, Shadows of the Mind. I make a strong case that bottom-up systems also won't get around the Gödel argument. Thus, I'm claiming, there's something in our conscious understanding that simply isn't computational; it's something different.

A lot of what the brain does you could do on a computer. I'm not saying that all the brain's action is completely different from what you do on a computer. I am claiming that the actions of consciousness are something different. I'm not saying that consciousness is beyond physics, either — although I'm saying that it's beyond the physics we know now.

The argument in my latest book is basically in two parts. The first part shows that conscious thinking, or conscious understanding, is something different from computation. I'm being as rigorous as I can about that. The second part is more exploratory and tries to find out what on earth is going on. That has two ingredients to it, basically.

My claim is that there has to be something in physics that we don't yet understand, which is very important, and which is of a noncomputational character. It's not specific to our brains; it's out there, in the physical world. But it usually plays a totally insignificant role. It would have to be in the bridge between quantum and classical levels of behavior — that is, where quantum measurement comes in.

Modern physical theory is a bit strange, because one has two levels of activity. One is the quantum level, which refers to small-scale phenomena; small energy differences are what's relevant. The other level is the classical level, where you have large-scale phenomena, where the laws of classical physics — Newton, Maxwell, Einstein — operate. People tend to think that because quantum mechanics is a more modern theory than classical physics, it must be more accurate, and therefore it must explain classical physics if only you could see how. That doesn't seem to be true. You have two scales of phenomena, and you can't deduce the classical behavior from the quantum behavior any more than the other way around.

We don't have a final quantum theory. We're a long way from that. What we have is a stopgap theory. And it's incomplete in ways that affect large-scale phenomena, not just things on the tiny scale of particles.

Current physics ideas will survive as limiting behavior, in the same sense that Newtonian mechanics survives relativity. Relativity modifies Newtonian mechanics, but it doesn't really supplant it. Newtonian mechanics is still there as a limit. In the same sense, quantum theory, as we now use it, and classical physics, which includes Einstein's general theory, are limits of some theory we don't yet have. My claim is that the theory we don't yet have will contain noncomputational ingredients. It must play its role when you magnify something from a quantum level to a classical level, which is what's involved in "measurement."
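The sense in which one theory survives as a "limit" of another can be made concrete numerically (this sketch is my illustration, not anything from the interview): the relativistic time-dilation factor reduces to the Newtonian value of 1 as speeds become small compared with light.

```python
import math

C = 299_792_458.0  # speed of light, in m/s

def lorentz_gamma(v):
    """Relativistic time-dilation factor for speed v.

    As v/c -> 0, gamma -> 1: Newtonian mechanics is recovered as a
    limit of relativity, rather than being supplanted by it.
    """
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# A car, Earth's orbital speed, and one-tenth of light speed:
for v in (30.0, 3.0e4, 3.0e7):
    print(f"v = {v:>10.0f} m/s   gamma = {lorentz_gamma(v):.12f}")
```

At everyday speeds the correction is invisibly small, which is why Newtonian mechanics is still there as a limit; only at a sizable fraction of light speed does the deviation become appreciable.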

The way you treat this nowadays, in standard quantum theory, is to introduce randomness. Since randomness comes in, quantum theory is called a probabilistic theory. But randomness only comes in when you go from the quantum to the classical level. If you stay down at the quantum level, there's no randomness. It's only when you magnify something up, and you do what people call "make a measurement." This consists of taking a small-scale quantum effect and magnifying it out to a level where you can see it. It's only in that process of magnification that probabilities come in. What I'm claiming is that whatever it is that's really happening in that process of magnification is different from our present understanding of physics, and it is not just random. It is noncomputational; it's something essentially different.

This idea grew from the time when I was a graduate student, and I felt that there must be something noncomputational going on in our thought processes. I've always had a scientific attitude, so I believed that you have to understand our thinking processes in terms of science in some way. It doesn't have to be a science that we understand now. There doesn't seem to be any place for conscious phenomena in the science that we understand today. On the other hand, people nowadays often seem to believe that if you can't put something on a computer, it's not science.

I suppose this is because so much of science is done that way these days; you simulate physical activity computationally. People don't realize that something can be noncomputational and yet perfectly scientific, perfectly mathematically describable. The fact that I'm coming into all this from a mathematical background makes it easier for me to appreciate that there are things that aren't computational but are perfectly good mathematics.

When I say "noncomputational" I don't mean random. Nor do I mean incomprehensible. There are very clear-cut things that are noncomputational and are known in mathematics. The most famous example is Hilbert's tenth problem, which has to do with solving algebraic equations in integers. You're given a family of algebraic equations and you're asked, "Can you solve them in whole numbers? That is, do the equations have integer solutions?" That question — yes or no, for an arbitrary given example — is not one a computer can be guaranteed to answer in any finite amount of time. There's a famous theorem, due to Yuri Matiyasevich, which proves that there's no computational way of answering this question in general. In particular cases, you might be able to give an answer by means of some algorithmic procedure. However, given any such algorithmic procedure, which you know doesn't give you wrong answers, you can always come up with an algebraic equation that will defeat that procedure but where you know that the equation actually has no integer solutions.
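The one-sided character of the problem can be sketched in a few lines of Python (the function name and sample equations are my own illustrative choices): a brute-force search will eventually confirm a solution whenever one exists, but no such search can ever certify, in general, that none exists — and Matiyasevich's theorem says no cleverer procedure can do so either.

```python
from itertools import count, product

def search_integer_solution(equation, n_vars, max_radius=None):
    """Brute-force search for integers making equation(...) == 0.

    Tuples are examined in expanding shells of increasing maximum
    absolute value. If a solution exists it is eventually found; if
    none exists the search runs forever (pass max_radius to bound it).
    This one-sided, semi-decidable behavior is exactly what
    Matiyasevich's theorem says cannot be improved to a general
    yes/no decision procedure.
    """
    radii = count(0) if max_radius is None else range(max_radius + 1)
    for r in radii:
        for xs in product(range(-r, r + 1), repeat=n_vars):
            # Visit only the shell of tuples whose largest entry is r.
            if max(map(abs, xs), default=0) == r and equation(*xs) == 0:
                return xs
    return None  # reachable only when max_radius is given

# x^2 + y^2 - 25 = 0 has integer solutions, and the search finds one:
print(search_integer_solution(lambda x, y: x*x + y*y - 25, 2))
# x^2 - 2 = 0 has none; without max_radius this call would never halt:
print(search_integer_solution(lambda x: x*x - 2, 1, max_radius=50))
```

The `max_radius` cutoff is the giveaway: when the bounded search returns `None`, we still don't know whether a solution lies just beyond the bound, which is why no finite search settles the "no" answer.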

Whatever understandings are available to human beings, there are — in relation particularly to Hilbert's tenth problem — things that can't be encapsulated in computational form. You could imagine a toy universe that evolved in some way according to Hilbert's tenth problem. This evolution could be completely deterministic yet not computable. In this toy model, the future would be mathematically fixed; however, a computer could not tell you what this future is. I'm not saying that this is the way the laws of physics work at some level. But the example shows you that there's an issue. I'm sure the real universe is much more subtle than that.

The Emperor's New Mind served more than one purpose. Partly I was trying to get a scientific idea across, which was that noncomputability is a feature of our conscious thinking, and that this is a perfectly reasonable scientific point of view. But the other part of it was educational, in a sense. I was trying to explain what modern physics and modern mathematics are like.

Thus, I had two quite different motivations in writing the book. One was to put a philosophical point of view across, and the other was that I felt I wanted to explain scientific things. For quite a long time, I'd felt that I did want to write a book at a semipopular level to explain certain ideas that excited me — ideas that weren't particularly unconventional — about what science is like. I had it in the back of my mind that someday I would do such a thing.

It wasn't until I saw a BBC "Horizon" program, in which Marvin Minsky and various people were making some rather extreme and outrageous statements, that I was finally moved to write the book. I felt that there was a point of view which was essentially the one I believe in, but which I had never seen expressed anywhere and which needed to be put forward. I knew that this was what I should do. I would write this book explaining a lot of things in science, but this viewpoint would give it a focus. Also it had to be a book, because it's cross-disciplinary and not something you could express very well in any particular journal.

I suppose what I was doing in that book was philosophy, but somebody complained that I hardly referred to a single philosopher — which I think is true. That's because the questions that interest philosophers tend to be rather different from those that interest scientists; philosophers tend to get involved in their own internal arguments.

When I argue that the action of the conscious brain is noncomputational, I'm not talking about quantum computers. Quantum computers are perfectly well-defined concepts, which don't involve any change in physics; they don't even perform noncomputational actions. Just by themselves, they don't explain what's going on in the conscious actions of the brain. Dan Dennett thinks of a quantum computer as a skyhook, his term for a miracle. However, it's a perfectly sensible thing. Nevertheless, I don't think it can explain the way the brain works. That's another misunderstanding of my views. But there could be some element of quantum computation in brain action. Perhaps I could say something about that.

One of the essential features of the quantum level of activity is that you have to consider the coexistence of various different alternative events. This is fundamental to quantum mechanics. If X can happen, and if Y can happen, then any combination of X and Y, weighted with complex coefficients, can also occur. According to quantum mechanics, a particle can have states in which it occupies several positions at once. When you treat a system according to quantum mechanics, you have to allow for these so-called superpositions of alternatives.

The idea of a quantum computer, as it's been put forward by David Deutsch, Richard Feynman, and various other people, is that the computations are the things that are superposed. Rather than your computer doing one computation, it does a lot of them all at once. This may be, under certain circumstances, very efficient. The problem comes at the end, when you have to get one piece of information out of the superposition of all those different computations. It's extremely difficult to have a system that does this usefully.
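The bookkeeping behind these superpositions can be sketched classically (this is my illustration of the amplitude arithmetic, not Deutsch's or Feynman's construction, and the names are mine): each alternative carries a complex amplitude, and probabilities appear only when a measurement picks one alternative out.

```python
import random

def normalize(state):
    """Scale complex amplitudes so their squared magnitudes sum to 1."""
    norm = sum(abs(a) ** 2 for a in state.values()) ** 0.5
    return {label: a / norm for label, a in state.items()}

def measure(state, rng=random.random):
    """Collapse a superposition: each outcome occurs with probability
    |amplitude|^2 -- the only step where randomness enters."""
    r, total = rng(), 0.0
    for label, amp in state.items():
        total += abs(amp) ** 2
        if r < total:
            return label
    return label  # guard against floating-point rounding at the top end

# "If X can happen, and if Y can happen, then any combination of X and
# Y, weighted with complex coefficients, can also occur":
psi = normalize({"X": 1 + 0j, "Y": 0 + 1j})
probs = {k: round(abs(v) ** 2, 3) for k, v in psi.items()}
print(probs)  # → {'X': 0.5, 'Y': 0.5}
```

Note what the sketch makes vivid about the difficulty Penrose mentions: the full dictionary of amplitudes is never observable; `measure` hands back just one label, which is why extracting useful information from a superposition of many computations is so hard.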

It's pretty radical to say that the brain works this way. My present view is that the brain isn't exactly a quantum computer. Quantum actions are important in the way the brain works, but the brain's noncomputational actions occur at the bridge from the quantum to the classical level, and that bridge is beyond our present understanding of quantum mechanics.

The most promising place by far to look for this quantum-classical borderline action is in recent work on microtubules by Stuart Hameroff and his colleagues at the University of Arizona. Eukaryotic cells have something called a cytoskeleton, and parts of the cytoskeleton consist of these microtubules. In particular, microtubules inhabit neurons in the brain. They also control one-celled animals, such as paramecia and amoebas, which don't have any neurons. These animals can swim around and do very complicated things. They apparently learn by experience, but they're not controlled by nervous systems; they're controlled by another kind of structure, which is probably the cytoskeleton and its system of microtubules.

Microtubules are long little tubes, about twenty-five nanometers in diameter. In the case of the microtubules lying within neurons, they very likely extend a good deal of the length of the axons and the dendrites. You find them from one end of the axons and dendrites to the other. They seem to be responsible for controlling the strengths of the connections between different neurons. Although at any one moment the activity of neurons could resemble that of a computer, this computer would be subject to continual change in the way it's "wired up," under the control of a deeper level of structure. This deeper level is very probably the system of microtubules within neurons.

Their action has a lot to do with the transport of neurotransmitter chemicals along axons, and the growth of dendrites. The neurotransmitter molecules are transported along the microtubules, and these molecules are critical for the behavior of the synapses. The strength of the synapse can be changed by the action of the microtubules. What interests me about the microtubules is that they're tubes, and according to Hameroff and his colleagues there's a computational action going on along the tubes themselves, on the outside.

A protein substance called tubulin forms interpenetrating spiral arrangements constituting the tubes. Each tubulin molecule can have two states of electric polarization. As with an electronic computer, we can label these states with a 1 and a 0. These produce various patterns along the microtubules, and they can go along the tubes in some form of computational action. I find this idea very intriguing.
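Hameroff's proposal treats the tubulin lattice as a kind of cellular automaton. A toy one-dimensional sketch conveys the idea (the real lattice is a spiral arrangement, and the particular update rule here is my illustrative assumption, not Hameroff's model):

```python
def step(cells, rule=110):
    """Advance a ring of two-state 'tubulin' cells by one tick.

    Each cell's next polarization state (0 or 1) depends on itself and
    its two neighbors, looked up in the bits of an elementary-automaton
    rule number -- a stand-in for whatever local interaction governs
    real tubulin dimers.
    """
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# A single flipped polarization state seeds a pattern that travels
# along the ring, the way signals are supposed to travel along a tube:
cells = [0] * 20
cells[10] = 1
for _ in range(5):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Even this toy shows how patterns of polarization states can propagate and interact along a tube, which is the sense in which the microtubule surface could be said to compute.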

By itself, a microtubule would just be a computer, but at a deeper level than neurons. You still have computational action, but it's far beyond what people are considering now. There are enormously more of these tubulins than there are neurons. What also interests me is that within the microtubules you have a plausible place for a quantum-oscillation activity that's isolated from the outside. The problem with trying to use quantum mechanics in the action of the brain is that if it were a matter of quantum nerve signals, these nerve signals would disturb the rest of the material in the brain, to the extent that the quantum coherence would get lost very quickly. You couldn't even attempt to build a quantum computer out of ordinary nerve signals, because they're just too big and in an environment that's too disorganized. Ordinary nerve signals have to be treated classically. But if you go down to the level of the microtubules, then there's an extremely good chance that you can get quantum-level activity inside them.

For my picture, I need this quantum-level activity in the microtubules; the activity has to be a large-scale thing that goes not just from one microtubule to the next but from one nerve cell to the next, across large areas of the brain. We need some kind of coherent activity of a quantum nature which is weakly coupled to the computational activity that Hameroff argues is taking place along the microtubules.

There are various avenues of attack. One is directly on the physics, on quantum theory, and there are certain experiments that people are beginning to perform, and various schemes for a modification of quantum mechanics. I don't think the experiments are sensitive enough yet to test many of these specific ideas. One could imagine experiments that might test these things, but they'd be very hard to perform.

On the biological side, one would have to think of good experiments to perform on microtubules, to see whether there's any chance that they do support any of these large-scale quantum coherent effects. When I say "quantum coherent effects," I mean things a bit like superconductivity or superfluidity, where you have quantum systems on a large scale.

Reality Club Discussion


Part of Roger's interest in relativity from the very beginning has been a skepticism about quantum mechanics. Indeed, before he was working on general relativity he was trying to understand quantum mechanics; he was thinking about ideas like hidden variables, he was thinking about Bell's theorem and the Einstein-Podolsky-Rosen paradox. His first ideas in physics grew out of techniques he had been using to try to prove the four-color theorem, and only when he met the American theoretical physicist David Finkelstein did he begin to become interested in general relativity.

David Finkelstein went to London and gave a talk on his ideas about how the topology of spacetime might be different inside black holes. David was one of the very few people to think about applying topological ideas to space and time. Topology is the science of relationships without regard to actual measures, like distance; it's the study of relationships and connectivity, taken purely. Roger was a topologist; his Ph.D. was in mathematics and algebraic topology.

David Finkelstein was applying topology to the geometry of space and time. Roger, in what he was calling spin networks, was trying to build space and time up from little discrete pieces that were purely quantum mechanical. It's always been his idea — an idea shared by many others — that space and time aren't continuous; that the continuum is an illusion that has to do with the fact that we're looking at things on a large scale.

Roger had begun trying to make models of how the geometry of space might derive from little atoms of geometry, and he called these models "spin networks." They're a very deep mathematical construction, which people have recently been studying very carefully. He listened to David's talk, and told him about spin networks, and in some sense they switched places. David went home and began trying to make models of space and time as discrete processes, which is what he's been doing ever since. Roger began to think about how to apply topological ideas to the geometry of space and time.

Having invented the discrete models of space called spin networks, Roger couldn't get them to make models of space and time and incorporate relativity theory. The attempt to do so, which he's been working on since the early 1960s, is called twistor theory, and part of the reason for Roger's isolation from the mainstream of particle physics is this preoccupation with twistor theory, his efforts to formulate a complete new theory of physics that would bring together quantum mechanics and relativity theory in a new way.

Twistor theory may be concisely defined as follows: In looking at the world, we think of points — that is, things that exist in space — as being fundamental and time as something that happens to them. The fundamental thing is the things that exist, and the secondary thing is the processes through which they change in time. In twistor theory, the fundamental things in the world are the processes. The secondary things are the things that exist. They exist only by virtue of the meetings — the intersections — of processes. In the twistor description of space and time, the fundamental entities are not events in space and time but processes, and the idea of twistor theory is to formulate the laws of physics in this space of processes and not in space and time. Space and time as we think about them emerge only at a secondary level.

Twistor theory is a beautiful mathematical thing. Roger, and a succession of his students, have devoted an enormous amount of effort to trying to make a fundamental theory of physics based on it. It's a deep and difficult problem; whether it's right or not is impossible to tell, and even though Roger is a genius, the work is still unfinished. We don't yet know the potential of twistor theory. Certainly it's something that only somebody like Roger could have created.

From the beginning, Roger has been very skeptical about quantum mechanics, and has always believed that quantum mechanics would not, in the end, be the correct theory, and that there was some more fundamental theory that unified quantum mechanics and spacetime. This sets him apart from many other people, who believe that quantum mechanics is essentially correct and what we need is a new dynamical theory of the geometry of space and time — in other words, that general relativity needs to be modified to something like supersymmetric gravity or string theory. Roger believes that gravity is important for understanding the puzzles of quantum mechanics, and that quantum mechanics must be modified to make room for the effects of gravity, rather than the reverse.

All Roger's thoughts are connected. The technical ideas he's thinking about in twistor theory, his philosophical thinking, his ideas about quantum mechanics, his ideas about the brain and the mind — all of them are connected.

You could say about Roger that in spite of the fact that he's the most influential living person in relativity theory, what he has accomplished is a small shadow, a faint shadow, of what his ambition has been, and continues to be.

Roger Penrose is known mostly for his work in the classical theory of general relativity. Penrose is a relativity physicist. There aren't that many. There are none at MIT, none at Harvard; there's Robert Wald at the University of Chicago, Kip Thorne at Caltech, Penrose at Oxford, Rovelli at the University of Pittsburgh, Ashtekar and Smolin at Penn State. Stephen Hawking is also a classical general relativist, although his more recent work has touched other areas as well. Hawking became famous in classical general relativity, just like Penrose.

The two of them established many of the fundamental theorems that we know about the general behavior of Einstein's equations. The problems there are mainly mathematical; the theory they're dealing with is Einstein's theory of general relativity, which has been unchanged at the fundamental level since Einstein first invented it, in 1916. But nonetheless, the equations of general relativity are very complicated, and the implications of general relativity are not easy to extract.

One question, for example, that Penrose and Hawking were concerned with is what happens when matter collapses under the force of gravity to very high densities. By the mid-1960s, it was already known that matter could collapse to form a black hole, a conglomeration of mass that produces such a strong gravitational field that even light can't escape it. Nonetheless, the solutions that give rise to black holes were very special — that is, the equations could be solved only by making special assumptions about the symmetry of the collapsing matter. If the matter was perfectly spherical, you could calculate exactly how it would collapse, and you could show that it would form a black hole. If the matter was in some complicated arrangement, nobody knew for sure if a black hole would form.

Since you'd never expect the matter in the real universe to find itself in a perfectly spherically symmetric distribution, this question was very important. It was Hawking and Penrose who developed the theorems by which you can prove, without actually solving the equations for nonspherical collapse, that under certain conditions a black hole will necessarily form.

Penrose is mostly known for his work on that kind of a problem. The same kind of theorems apply to closely related questions of the initial singularity of the universe. In the standard big-bang model, one assumes that the universe is perfectly symmetrical, completely homogeneous, and completely uniform in its mass density. The real universe, of course, isn't so ideal. You make these idealizations in order to obtain equations simple enough to solve. When you run those idealized equations backward in time, you find what's called a singularity — an instant at which the mass density and temperature of the universe is literally infinite.

This is called the initial singularity. Again, there's the question of what would happen if you complicated the equations by putting in the real complexities of the real universe, with the nonuniformities in mass clumped into galaxies, which are clumped into clusters. Once you do that, the equations become clearly too complicated to solve, so what instead one needs to do is to prove general theorems about how these equations have to behave, independent of the details. That's the forte of Penrose and Hawking, and there again they were able to prove theorems that guarantee that if the universe looks anything like the universe we see, you'll find a singularity if you follow it backward in time.

Roger Penrose...I'm so glad he exists, because, as someone once said of Voltaire, if he hadn't existed, God would have had to invent him. Much the same is true of Penrose; he lucidly plays a role that needs playing, just so everyone can see it's dead wrong.

When Roger confronted the field of artificial intelligence, he tells us, he had a deep and passionate negative reaction. "Somehow," he thought, "I've got to prove that this is wrong." One way of reading what he's done is as a backhanded compliment to AI, in that what he's seen — and none of the other critics of AI have seen — is that the only way you're ever going to show that the idea of strong artificial intelligence is wrong is by overthrowing all of physics and most of biology! You're going to have to deny natural selection, and you're going to have to have a revolution in physics. The fact is that artificial intelligence is a very conservative extrapolation from what we know in the rest of science, and Penrose makes this clearer than anyone has ever done before. There's absolutely no question that he'd like nothing better than to have an absolute knock-down drag-out refutation of artificial intelligence, and he's such an honest man — and he knows so much — that he realizes he's not going to be able to do this unless he can overthrow physics. Of course, he might be right. But he knows as well as everybody else that he doesn't have a theory yet.

Is Roger's quantum computer a skyhook or a crane? A crane is nonmiraculous; it just obeys good old mechanistic principles. A skyhook is something pretty darn special; it's either a miracle or something that requires a revolution in physics. I see Penrose trying desperately but ingeniously to invent a skyhook. He says that the brain is a sort of machine, but that you shouldn't call it a machine, because it involves quantum effects. Most biologists think that quantum effects all just cancel out in the brain, that there's no reason to think they're harnessed in any way. Of course they're there; quantum effects are there in your car, your watch, and your computer. But most things — most macroscopic objects — are, as it were, oblivious to quantum effects. They don't amplify them; they don't hinge on them. Roger thinks that the brain somehow exploits these quantum effects, so that they aren't just quantum effects going on in the background.

Two questions: First, why does he think this? Does he think there's empirical evidence that the brain is a quantum computer, and if so, what field does this evidence come from? My understanding is that he thinks that the evidence that the brain is a quantum computer comes from mathematics and nowhere else. He's now searching, trying to get assistance in this from people like Stuart Hameroff, at the University of Arizona, who argues that in the microtubules of the neurons we've got amplifiers of quantum effects.

Why? Physics certainly permits it. If you were looking for a place for a type of quantum amplifier — a little transducer of quantum effects in the brain — the microtubules would be a pretty good place. Let me just give him that, all right? Let's give Roger the claim that Hameroff has identified the site of transduction, or amplification, of quantum effects. The second question is, "What good does it do?" What architecture does Penrose have that could use these effects, that could parlay them, or exploit them, into a quantum computer of some sort? That's a tall order. Boy, if he can do that, we'll have something to look at.

It's annoying that you get somebody who's good at mathematics who uses his mathematical credibility to pontificate on something he's speculating about. Penrose tells a good story, but he tells a fundamentally wrong story. Penrose has committed the classical mistake of putting humans at the center of the universe. His argument is essentially that he can't imagine how the mind could be as complicated as it is without having some magic elixir brought in from some new principle of physics, so therefore it must involve that. It's a failure of Penrose's imagination.

He takes a perfectly good computational idea — the idea of uncomputability — and somehow conflates it with complex behavior in humans that he can't explain. It's true that there are unexplainable, uncomputable things, but there's no reason whatsoever to believe that the complex behavior we see in humans is in any way related to uncomputable, unexplainable things. The intelligent behavior in humans is unexplainable because it's very complicated. Penrose's argument is a little bit like the arguments that the vitalists used to make about life: that life clearly couldn't be just chemistry, so therefore there must be some vital principle. Essentially Penrose is saying the same thing about the mind: that the connection between neurons firing and intelligent behavior — thinking — must involve something beyond our current understanding. He can't make that connection; therefore he thinks there must be some vital principle that has to be added. That's all there is to his argument.

biologist; director of research at the Centre National de la Recherche Scientifique, and professor of cognitive science

Roger Penrose is the perfect example of physicists acquiring an authority to speak on just about everything and anything. Between Turing, as the ideal of computation, and quantum mechanics there's something missing — a body. For Penrose, the body has disappeared. I find it amazing that because he is a famous physicist and mathematician, and probably very rightly so, he can come up with this stuff. I would say there are no clothes on Penrose.

There is an arrogance that comes with being a physicist — particularly a mathematical physicist — which also shows up in some of the crowd at the Santa Fe Institute, including Gell-Mann. Biologists, and the public at large, share a kind of physics envy.

If I have a chance to have a discussion with Penrose, I'll press him to give me just a shred of evidence that quantum processes are relevant to describing the brain. There is none. This is the same thing that happens, say, with the psychokinesis people, or the UFO people. There are shreds of things here and there, but nothing you can put on the table and bite into.

On the other hand, there are huge amounts of evidence from neurobiology and neuropsychology to make the body a very interesting set of possible interpretations which need not be computational. Penrose discovered that the mind is not computational. I agree. Then he makes this funny leap. He says, "Then it must be quantum." That's where he loses me.

Johnstone Family Professor, Department of Psychology; Harvard University; Author, The Sense of Style

In The Emperor's New Mind, Penrose expresses some skepticism that evolution could have constructed the human mind — and is admirably clear that this aside comes more from a personal intuition than from an argument he'd be prepared to defend. It's not uncommon among certain kinds of scientists to be skeptical of Darwin and natural selection. For many physicists and mathematicians, natural selection seems a repugnant kind of explanation, because it's too kludgey. Random stochastic variation plus selection by utility seems like an ugly way to arrive at something beautiful, and for a physicist or a mathematician, or someone like Noam Chomsky, whose work has often been mathematical, the favored kind of theory is one where a conclusion can be deduced from a bunch of premises in an elegant deductive system. By the esthetic of a grammarian, or the esthetic of a physicist, natural selection seems too ugly and weak.

Penrose has a strange historical tie with the Galton Laboratory, because his father was my predecessor as the head of the department. He's spoken of by mathematicians in extremely positive terms, and I'm more than willing to take that on board. I like the patterns his tiles make.

Tiling is about how you can fill a space. It seems like an obvious question: How do you tile a bathroom floor? The obvious way is with square tiles. There's another way, with diamonds. But how many other ways are there? What happens is that as you go beyond these simple cases, you get tiles of the most unexpected shapes, and you can produce tiles none of which have the same shape but in the end make a completely consistent mathematical pattern, which fills that space in both a scientifically and esthetically satisfying fashion. There's a funny bit in Francis Crick's autobiography, What Mad Pursuit, when he talks about visiting the Galton in the early 1950s and finding Penrose and his father playing with odd cutouts made of wood, in the hope of working out the way DNA replicated. He thought it was a complete waste of time — and it was, as far as DNA was concerned; but it was the beginning of a new branch of mathematics.
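That branch of mathematics lends itself to a compact demonstration. Below is a minimal sketch, with illustrative names of my own choosing, of the standard subdivision ("inflation") rule for Penrose's rhombus tiling: each rhombus is split into two mirror-image half-triangles with complex-number vertices, and one inflation step cuts every thin half-triangle into two smaller pieces and every thick one into three, the golden ratio fixing the cut points, so the pattern refines forever without ever repeating periodically.

```python
import cmath
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio, which governs the cuts

def subdivide(triangles):
    """One inflation step: each half-rhombus triangle (kind, a, b, c)
    is cut into smaller triangles of the two kinds."""
    result = []
    for kind, a, b, c in triangles:
        if kind == 0:  # "thin" half-rhombus: two pieces
            p = a + (b - a) / PHI
            result += [(0, c, p, b), (1, p, c, a)]
        else:          # "thick" half-rhombus: three pieces
            q = b + (a - b) / PHI
            r = b + (c - b) / PHI
            result += [(1, r, c, a), (1, q, r, b), (0, r, q, a)]
    return result

# Start from a "wheel" of ten thin half-triangles around the origin.
triangles = []
for i in range(10):
    b = cmath.rect(1, (2 * i - 1) * math.pi / 10)
    c = cmath.rect(1, (2 * i + 1) * math.pi / 10)
    if i % 2 == 0:
        b, c = c, b  # mirror alternate triangles so edges match up
    triangles.append((0, 0j, b, c))

for _ in range(2):
    triangles = subdivide(triangles)
print(len(triangles))  # prints 50
```

The triangle counts grow by golden-ratio powers rather than by a whole number, one symptom of the tiling's aperiodicity; plotting the vertices with any 2-D graphics library reproduces the familiar pattern.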

Roger Penrose wrote an outrageous book on AI. It's very sad that people write books about subjects they don't understand. If you're a famous physicist, you think you have the right to comment on things that you actually don't get. There's a famous attack on AI that many people mount, based on Gödel's theorem. It has to do with how many computations you could make in a certain amount of time. The "proof" is about how too many computations need to be made to solve certain problems in certain ways for a machine to be able to do what is necessary to think. The mistake they make is in assuming that the kinds of computations they are talking about are the kind that compose thinking. Those are probably not the right assumptions; in fact, everything we have learned about human thinking says they are quite wrong assumptions. The premises for these attacks usually show the ignorance of the attackers about what intelligence is all about. This includes Penrose, who really says nothing particularly interesting about AI.

Emeritus Professor of Psychology, London School of Economics; Visiting Professor of Philosophy, New College of the Humanities; Senior Member, Darwin College, Cambridge; Author, Soul Dust

Roger Penrose gets full marks for effort. It was a good try. He thinks brains are capable of leaps of intuition which are not conceivably possible for a machine. He thinks human minds can see the truth or falsity of statements that are in principle noncomputable. I'm not impressed by his examples. Of course, people can do very clever and creative things that we can't yet begin to understand — nobody has a clue how Shakespeare could write his plays or Picasso paint his paintings or Hawking do his mathematics — but I don't think there's any real parallel between these astonishing achievements and noncomputable "Gödel sentences."

Penrose has got an interesting theory, but it's a theory in search of something to apply it to. I just don't think we need quite such a radical new theory to explain human intelligence and creativity.

In effect, it seems to me, Penrose simply assumes from the start precisely what he purports to prove. He asserts that humans can do certain things that we've proved, mathematically, that computers cannot do. Specifically, he suggests that humans can "intuitively" solve certain machine-unsolvable problems (such as Alan Turing's halting problem for Turing machines, or Kurt Gödel's problem of recognizing the consistency of arbitrary sets of axioms). The trouble, though, is that these problems are unsolvable only in the sense that there's no computer program that can do this and never be wrong, and there's absolutely no evidence that there can't be computer programs as good at intuiting — that is, guessing — as human mathematicians are. There's no reason to assume, as Penrose seems to do, that either human minds or computing machines need to be perfectly and flawlessly logical; as the child psychologist Jean Piaget showed, logical reasoning is a sophisticated skill that develops quite late, if at all, in normal human development. Perhaps it did not occur to Penrose that it's easy to write computer programs that can work with inconsistent sets of axioms by applying occasionally defective logic to them. Thus Penrose's assertion that no computer could ever think in a humanoid way is just that — an unsupported assertion. Where there's smoke, a sharp reader could only conclude, there is smoke.
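The distinction drawn here, between a decider that is never wrong and a program that merely guesses well, is easy to make concrete. The sketch below is purely illustrative (the function names and the step budget are my assumptions, not from any real library): it "decides" halting by running a computation for a bounded number of steps and guessing. Turing's theorem guarantees that any such guesser is wrong on some inputs; it does not forbid the guesser from being useful, any more than human mathematicians are forbidden from guessing.

```python
def guess_halts(computation, budget=100_000):
    """Fallibly 'decide' halting: run the computation (a generator that
    yields once per step) for at most `budget` steps, then guess."""
    for steps, _ in enumerate(computation):
        if steps >= budget:
            return False  # guess: probably runs forever (may be wrong)
    return True  # it genuinely halted within the budget

def collatz(n):
    """Halts for every n ever tried, though no general proof is known."""
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        yield n

def loop_forever():
    while True:
        yield

print(guess_halts(collatz(27)))     # True: halts well within the budget
print(guess_halts(loop_forever()))  # False: budget exhausted, so we guess
```

Raising the budget makes the guesser right on more inputs without ever making it infallible, which is exactly the gap the unsolvability theorems live in.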

I don't really know Roger Penrose. I think he and I were once at Imperial College, London, at the same time, so I must have met him, but I don't remember what he looks like. I understand he has had a distinguished career in certain kinds of mathematical physics, especially the physics of general-relativistic gravitation. But recently he has put forward in a couple of popular books some ideas that I find extremely odd.

I regard self-awareness — consciousness — as being a property, like intelligence, that may eventually evolve in complex adaptive systems when they reach certain levels of complexity. I imagine that complex adaptive systems have evolved both intelligence and consciousness on enormous numbers of planets in the universe. In fact, our human levels of intelligence and self-awareness, of which we are so proud, may not be very impressive on a cosmic scale, even though they are significantly higher than those of the other apes here on Earth. Nor do I think it impossible in principle that we humans may someday produce computers with a reasonable degree of self-awareness. Penrose seems to attribute some special quality to self-awareness that makes it unlikely to emerge from the ordinary laws of science. He's certainly not unique in that respect; some other authors seem to react that way to the challenge of understanding consciousness. But what characterizes his proposal, as far as I can tell, is the notion that consciousness is somehow connected with quantum gravity — that is to say, the incorporation of Einsteinian general-relativistic gravitation into quantum field theory. I can see absolutely no reason for imagining such a thing. Moreover, we now have, in superstring theory, a brilliant candidate for a unified theory of all the elementary particles, including the graviton, along with their interactions. The theory leads, in a suitable approximation, to Einstein's general-relativistic theory of gravitation and incorporates that theory beautifully into quantum field theory in a way that avoids all the terrible problems of infinities, which plagued previous attempts to treat general relativity in quantum mechanics. We will find out someday whether superstring theory is supported by observation, for instance in experiments done with a new high-energy accelerator. But I see no basis for engaging in mystical speculation about quantum gravity.

Penrose also revives, for some reason, the long discredited idea that Gödel's work in mathematics somehow implies a special difficulty in achieving self-awareness in a physical system. I hope that Penrose will come around eventually to the simple idea that self-awareness and intelligence emerge from biology, just as biology emerges from physics and chemistry. After all, we now understand that nuclear forces arise from quark-gluon interactions and interatomic forces from electromagnetism. Hardly anyone is left who thinks that special vital forces, apart from physics and chemistry, are needed to explain biology. Well, the idea that special physical processes are needed to explain self-awareness will soon die out as well.