You’ve eaten your polynomial-time meatloaf and your BQP brussels sprouts. So now please enjoy a special dessert lecture, which I didn’t even deliver in class except as a brief coda to Lecture 10. Watch me squander any remaining credibility, as I pontificate about Roger Penrose’s Gödel argument, strong AI, the No-Cloning Theorem, and whether or not the brain is a quantum computer. So gravitationally collapse your microtubules to the basis state |fun〉, because even a Turing machine could assent to the proposition that you’re in for a wild ride!

(Important Note: If you belong to a computer science department hiring committee, there is nothing whatsoever in this lecture that could possibly interest you.)

This entry was posted on Thursday, March 15th, 2007 at 11:53 am and is filed under Democritus, Metaphysical Spouting.

Interesting stuff. I would agree that quantum mechanics is related to consciousness only insofar as they are both mysterious. Even if physics had at last reached the bottom line, even if quantum theory had supplied the ultimate explanation and the basic components of the universe, I fail to see how that would explain that I can feel, and can only assume that you do too. The answer belongs to other fields of research. For the same reason, I'm not quite comfortable with refuting the teleportation problem by appeal to pure physical impossibility.

At this point I cannot resist the urge to tell you and the other readers of your blog that, if you like this sort of crazy talk and thought experiments about consciousness, you should seriously consider reading *The Mind's I*. Oh, and specifically about the story about cloning, I can think of a short SF story by James Patrick Kelly: *Think Like a Dinosaur*.

We argue that computation via quantum mechanical processes is irrelevant to explaining how brains produce thought, contrary to the ongoing speculations of many theorists. First, quantum effects do not have the temporal properties required for neural information processing. Second, there are substantial physical obstacles to any organic instantiation of quantum computation. Third, there is no psychological evidence that such mental phenomena as consciousness and mathematical thinking require explanation via quantum theory. We conclude that understanding brain function is unlikely to require quantum computation or similar mechanisms.

There’s lots of good stuff in this paper, including references into the literature. My favorite bit might be the quotation they give from P. S. Churchland, who said, “The want of directly relevant data is frustrating enough, but the explanatory vacuum is catastrophic. Pixie dust in the synapses is about as explanatorily powerful as quantum coherence in the microtubules.”

roland: Yes, if Con(F) is true, then F+Not(G(F)) is consistent. But if we’re allowed to take Con(F) as an axiom, then F+Con(F)+Not(G(F)) is not consistent! I agree that it’s a headache-inducing distinction.
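The distinction becomes mechanical once you recall that Gödel's second incompleteness theorem makes G(F) and Con(F) provably equivalent inside F itself. A sketch of the derivation (standard notation, for any sufficiently strong consistent F):

```latex
% Goedel's second incompleteness theorem gives:
%    F \vdash \mathrm{Con}(F) \leftrightarrow G(F)
% Now consider the extended theory F' = F + \mathrm{Con}(F) + \neg G(F):
%    F' \vdash \mathrm{Con}(F)    (axiom)
%    F' \vdash G(F)               (from the equivalence above)
%    F' \vdash \neg G(F)          (axiom)
% So F' proves a contradiction. By contrast, F + \neg G(F) alone is
% consistent whenever F is, since F cannot prove G(F).
```

So adding Con(F) as an axiom is exactly the move that turns "consistent but unprovable" into an outright contradiction.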

I enjoyed Penrose’s book very much, but I have to agree with Scott that the starting point of his arguments was ill-posed.

Folks might be interested to know that Oxford University Press publishes a book by Sunny Auyang that IMHO provides an alternative starting point for these same questions, entitled How Is Quantum Field Theory Possible?

Ms. Auyang argues, AFAICT, that beings who live in a universe whose attributes include relativity, causality, and identity are pretty much forced to accept quantum field theory as the fundamental theory of reality.

What I myself would dearly love to see is an analysis that combines the revolutionary objectives of Penrose with the intellectual discipline of Auyang.

That is to say, it would be mighty fun to see Penrose's brick-throwing attitude that Hilbert space sucks and needs to be replaced ('cuz it's just too big, maybe? Or too hard to unite with gravity?) pursued in the context of Auyang's disciplined appreciation of theories that respect relativity, causality, and identity.

Well, the atoms of our brains are constantly being replaced, aren’t they? (Feynman’s formulation was, “The atoms come into my brain, dance a dance, and then go out — there are always new atoms, but always doing the same dance, remembering what the dance was yesterday.”) We’re being slowly teleported all the time. So, doesn’t it seem likely that whatever information necessary to make our neurons do their job must be transferable to a new substrate?

Instead of using nanotech (or Calvin’s cardboard box) to create a whole new body, what if we replaced each neuron one at a time with a transceiver which communicated with a “neuron emulator” somewhere else in space? This is just a mildly more extensive operation than what biochemistry does every day. There wouldn’t even be an irritating, superfluous original to kill off, just a consciousness spread between two bodies before settling completely in the new one.

Blake: Thanks for the reference — I know Steve Weinstein, so it’s strange that I hadn’t seen this paper!

Regarding the atoms in your brain being constantly pooped out: yeah, I made the same point in the lecture. Of course, this argument only tells us it’s the information that matters, not whether it’s classical or quantum information.

Scott Aaronson wrote a very nice piece about consciousness and Quantum Mechanics. Actually, he’s just bashing some of Penrose’s ideas about consciousness and the inability of machines to achieve it.
I personally dislike these ideas of Penro…

>The obvious response is equally old: "what makes you so sure that it doesn't feel like anything to be a computer?"

If you believe that it feels like something to be a computer (yes, I know, it's not the beliefs that are important…), then does it feel like something to be a thermometer? Or a stone? Or a system composed of a stone and a thermometer?

I noticed you made the point about atom exchange in your lecture, but I think it deserved a little more attention. It seems to suggest that Nature can “clone” the necessary information; perfect cloning is impossible, but it doesn’t appear to be required.

Unless, of course, the reason I can’t remember where I left my keys has to do with imperfect quantum cloning.

To me, the keyword is: meaning. G(F) has a meaning for us, and only on that basis can we assign a logical value to it, and the value is "true". Somehow, symbols are mapped to meanings in our brain. If you write "aslfkjsdflj", is it true? Or false? Neither, because we don't identify it as a meaningful statement. So, to understand logic, we need to know what "meaning" means, and how the mapping of symbols to meanings works. Note, BTW, that Gödel's statement has two meanings: it's constructed as a statement about numbers, but has a hidden second meaning as a statement about provable statements (in other words, it's a string with two meaningful interpretations). Unfortunately, you cannot define "meaning" in computer terms (every definition of meaning would be meaningless by definition), so in order to claim human = computer (be it quantum or classical), you have to discard the notion of meaning as unimportant. This contradicts our everyday practice.

Having read both The Emperor’s New Mind and Shadows of the Mind, I don’t think you understood Penrose’s argument quite correctly. I took him to be saying this: if human beings can be simulated by an algorithm, Goedel’s theorems imply that human beings cannot prove that algorithm’s consistency. It follows that any algorithm which human beings can prove to be consistent, is not capable of simulating a human being. Since AI researchers only work with algorithms that human beings understand, simulating a human being by their methods is impossible.

Scott: it's the same as asking "how do you know you are really you?" We have to accept something without proof, or else the discussion becomes circular. I know that I have to decide whether something has meaning before I can say whether it's true or false; that's all I can say. I suspect this notion of meaning is not portable to computers. Fortunately, Kurzweil will soon resolve this issue by personally migrating to a computer. Let's wait and see.

That’s wrong, Michael: Penrose very explicitly wants humans not to be simulatable by any algorithm, not merely by any algorithm that human beings can understand. If all he wanted was an inscrutable conventional algorithm, then why would he be speculating about quantum gravity effects in microtubules?

As a side note, AI researchers do not work exclusively with algorithms that they “understand,” in the sense of being able to prove their correctness. Indeed, the fact that we can’t prove much about neural nets, genetic algorithms, survey propagation, etc. is precisely why people study these things empirically.

Scott: of course I cannot know whether I’m a computer or not, but I believe I’m not – on more or less irrational grounds. That makes me somehow different from computer – which is not known (sic!) to believe in anything.

Penrose did discuss the possibility that humans can be simulated by an inscrutable algorithm, in chapter 3 of Shadows. He didn’t actually manage to rule it out, but he did show that if that possibility holds, the hope of transferring our minds onto computer hardware will never be realized — which was the real point of the book. I don’t recall Penrose saying anything against the possibility of intelligent machines; it’s intelligent algorithms he disbelieves in.

I have a different reason for not wanting to be “teleported” than you listed.

Let's say that instead of teleporting me, you proposed to teleport a river. First, we would record the positions of all the water atoms and riverbank atoms at a time T; then we'd make an exact duplicate of those atoms somewhere else; then we'd blow the first river up with a nuke or something. Would this be the same river? I say obviously not. Yet we all know that rivers can move over time. The Colorado River used to be at surface level, but now it's at the bottom of a huge ravine, so its position is not the river. Similarly, the water molecules in a river never last for more than a few months, so the water molecules are not the river. Similarly, a river can have multiple sources, and as one dries up, another can replace it, so the source of a river is not the river. What, then, is a river?

I would propose, in a Heraclitean/Buddhist vein, that a river is a process not an object. To be a river at time T means to be caused by a river process at time T − 1. To be a part of a river means to be part of an ongoing process that continues in a “natural” way. That is, the cause of the continuing movement of the river should not be imposed by an external process, as is the case when we attempt to “clone” the river. The clone river’s existence was not caused by the original river’s existence (except in the passive sense that the original river was used as a template for constructing the clone), so the clone river cannot be said to be the continuation of the original river’s process.

(The same example can be made with cloning a fire as well.)

By now you should see where I’m going with this: a human being is a “river.” We are not our atoms. We are not the information, quantum or otherwise, that our atoms represent/are represented by. We are the ongoing process of being us. To be me at time T is to be a set of objects whose relationship with one another was caused in a particular manner by the existence of another set of objects also called me at time T − 1. So, teleporting me and shooting the original would be like creating an identical river elsewhere. If my only concern were to ensure that something identical to me at a particular time exists somewhere in the universe at all times, then I’d be satisfied by “teleporting” through destroying the original. However, if my concern was to ensure the continuation of the original process, then I would have serious misgivings about whether the clone of me could be said to be the same process or not.

(Incidentally, this reminds me of how scientists used to look for a particular “vital essence” that made living things alive. Now, scientists don’t think such an essence is necessary, since to be alive means to be the continuation of a living process, not to possess some concrete substance or essence at all times. We also don’t think fire has a substance or essence anymore, for the same reason.)

“If we liked, we could also have the lookup table encode the person’s voice, gestures, facial expressions, etc. Clearly such a table will be finite.”

Why would it clearly be finite? Are you saying one can produce only finitely many meaningful questions?
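For what it's worth, the finiteness claim only needs a bound on the length of any single question: over a finite alphabet, the number of strings up to a fixed length is finite (astronomically large, but finite), so a table with one entry per possible string is finite too. A toy calculation, with purely illustrative parameters:

```python
# Toy bound on the size of a conversation lookup table.
# Assumption (illustrative only): a "question" is any string of at most
# max_len symbols over a finite alphabet, so the table needs one entry
# per possible string.

def num_possible_questions(alphabet_size: int, max_len: int) -> int:
    """Count all strings of length 0..max_len over the alphabet."""
    return sum(alphabet_size ** n for n in range(max_len + 1))

# Even a tiny alphabet and length bound give a finite (if fast-growing) table:
small = num_possible_questions(alphabet_size=2, max_len=3)
print(small)  # 1 + 2 + 4 + 8 = 15
```

With realistic parameters (say, 128 ASCII symbols and a million-character bound) the count dwarfs the number of atoms in the observable universe, which is exactly the point: the table is absurdly large, but still finite.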

Do I understand correctly that a Turing machine can do everything a quantum computer can do? If so, then, because, as you point out, Penrose is of type C, arguing over whether the brain is quantum-mechanical isn't crucial for his most important point. I see "Minds" as Penrose's scientific attempt to add something to neuroscience that would allow his C view to be right; this is how new ideas come to science, so it's valuable, I believe.

Aside: there is a science-fiction novel by Jacek Dukaj that _convincingly_ depicts a human society where nobody cares for their own body, because everybody has a copy of their brain on HDDs (which live in different string-theoretic universes with different physics, by the way, so that it's possible to store much more data there).

This is what I appreciate very much in good books: they sometimes show _convincingly_ a perspective that I had thought of as totally weird. Another, unrelated, example is Solaris by Lem.

Carl: When I try your viewpoint on for size, I immediately run up against the problem of how you know the clone river is not the “natural continuation” of the original one. What exactly does it mean for a process to “continue in a natural way”? What if there was an artificially intelligent river that teleported itself?

More prosaically: If engineers divert the river in a different direction, is it still the same river? If a car accident destroys part of your brain and you wake up after a year-long coma, are you still the same person? If a large fire goes out except for a tiny ember, which then reignites a large fire again, is it still the same fire?

Philosophers can and do debate such questions till the cows come home — or rather, until cows appear at home at time T whose existence was caused by another set of cows in the field at time T-1…

Sorry, I don’t! I do have audio recordings for most of them, but … umm … I don’t … err … intend to … y’know … make them public. (As Nabokov said: “I think like a genius, I write like a distinguished author, and I speak like a child.” )

If all he wanted was an inscrutable conventional algorithm, then why would he be speculating about quantum gravity effects in microtubules?

The problem with your thinking, Scott, is that you don’t appreciate the possibilities of Quantum Gravity algorithms, for which Penrose no doubt understands the potential gain in complexity. The river story is a good example. A human here is not the same as a human there. Observables in quantum gravity include concepts such as position (in a shared classical universe) and these observables are related to the good old observables of QFT which govern the atoms residing in one’s brain at a given time.

The fact that the lepton and baryon masses were derived by Carl Brannen using an effective categorical Penrose twistor approach to M theory shows that the correct concept of observable is directly related to the theory of computation. E.g., your TFT Jones approximations: all about operads. But that's just boring old complex-number quantum theory. What about quaternions and octonions and so on? (The latter's 3×3 Jordan algebra gives the 26 dimensions of bosonic string theory, for example.)

That doesn’t really answer the question. We know, for example, that quaternionic quantum mechanics is equivalent in computational power to ordinary complex-number quantum mechanics. So, even if the set of observables changes in quantum gravity, why should I believe that leads to a stronger model of computation?
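The quaternionic case can be made concrete: every quaternion embeds as a 2×2 complex matrix, and the embedding preserves multiplication, which is one way to see why quaternionic amplitudes can be simulated with complex ones at essentially no cost. A minimal sketch (the encoding is the standard one; the function names are mine):

```python
# Embed the quaternion a + bi + cj + dk as the 2x2 complex matrix
#     [[ a + b*1j,  c + d*1j],
#      [-c + d*1j,  a - b*1j]]
# and check that quaternion multiplication matches matrix multiplication.

def qmul(p, q):
    """Hamilton product of quaternions given as (a, b, c, d) tuples."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def embed(q):
    """Standard embedding of a quaternion into the 2x2 complex matrices."""
    a, b, c, d = q
    return ((a + b*1j, c + d*1j),
            (-c + d*1j, a - b*1j))

def matmul2(m, n):
    """Multiply two 2x2 complex matrices given as nested tuples."""
    return tuple(tuple(sum(m[i][k] * n[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

# The embedding is a ring homomorphism: embed(p*q) == embed(p) @ embed(q).
p, q = (1, 2, 3, 4), (5, 6, 7, 8)
assert embed(qmul(p, q)) == matmul2(embed(p), embed(q))
```

So a quaternionic amplitude is just a constrained pair of complex amplitudes, and a quaternionic "qubit" can be tracked by an ordinary complex simulation with constant overhead.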

Look, the computational power of quantum gravity theories is a profound and important question, which many people have kicked around for years. But with the possible exception of the Freedman-Kitaev-Wang stuff, I don’t know of a single result concrete enough to sink my teeth into. Are there any?

Yes, of course I was referring to Freedman-Kitaev-Wang. As a physicist and not a complexity theorist, I don’t really care about the strength of models of computation. I only care about matching experimental data. And of course QG observables are VERY different to QM ones. Another way of looking at it is as a kind of generalised number theory….

Scott: here's a line of reasoning (perhaps a well-known one); please tell me where it breaks down.

Theorem 1: if the world is a computer, then God exists.
Proof: computers are known to be able to simulate any other computers. So, at some point we (being part of the computer) will write a simulation of another universe, and run it (some will even migrate to it). Being a universal computer itself, this universe will at some point create the next universe, and so on. So, we have a chain of simulated universes:
U[0] … U[m] U[m+1] …
So, with which of those universes will you identify yourself? Since the length of this chain is potentially unlimited, your probability of being personally in U[0] (as in any other particular universe) is exactly zero. So, you have to assume you are in some U[n] where n != 0. Then there exists U[n-1], which, with respect to your universe, is God, by definition.
[We can get rid of any probabilistic elements in this reasoning if we, quite logically, assume the chain to be infinite in both directions. Please note that even the anthropic principle will not save you from this theorem: intelligent beings are all over the place in every universe on this chain.]

Now we can go ahead and prove the less obvious

Theorem 2: if the Universe is a computer, then the laws of logic are inconsistent.
Proof: there exists a mathematician who believes two things simultaneously: a) the Universe is a computer, and b) God doesn't exist. [The existence of such a mathematician can be proven by producing a live model.] By definition, a mathematician is a subroutine which was specifically programmed to obey the laws of logic in all its computations. As follows from Theorem 1, God exists. Therefore, both of the following statements are correct: a) God exists; b) God doesn't exist. The contradiction proves the theorem.

Finally, we arrive at

Theorem 3: if the Universe is a computer, it doesn't exist.
Proof: as follows from Theorem 2, a logical system that includes the axiom "the Universe is a computer" is inconsistent. An inconsistent system cannot have a model. Therefore, the Universe doesn't exist.

P.S.
It's curious that some people really believe that the chain of computer simulations has an initial element U[0]. I cannot say it for sure, of course, but it seems Wolfram's message can be interpreted as follows: universal programs can be so short that they can emerge by themselves from nothing. I tried to find out who created the hardware for this first simulation, but this remains a mystery.

To me, the teleportation question isn't very interesting at the level of quantum theory. It seems likely to me that at some point in the fairly near future it will be possible to duplicate a human brain in a way that is purely classical and not at all exact, but plenty good enough to digitize minds, make arbitrary numbers of perfect copies, and simulate (or build, assuming the necessary technology) arbitrary numbers of a given mind. (Maybe even modify and tweak it too, but we may be able to copy it before we can really profitably mess with it to any great extent.) At that point the world will have to face all the philosophical, moral, ethical, etc. issues, but nothing interesting will have happened at a theoretical level in physics or computational complexity.

Of course I am just assuming that Penrose is wrong, that quantum states are not important, and even that a lot (probably the vast majority) of classical state is not important, but to me this seems as reasonable (actually more reasonable, which is why I believe it) as the idea that quantum states are critical or that “natural continuation” (whatever that means) is important.

With that in mind, I have to say that I do feel squeamish about shooting my clone, because presumably he has just as much claim to being me as the new copy does, and, well, I (as that original) don’t feel like a “meat hard drive” any more than my copy does, and I would rather not die.

But even this future (likely as I think it is) is more theoretical than is necessary to bring up most of the relevant philosophical issues about what constitutes the self and the continuation of the self. People go through all sorts of “discontinuities” of self every day that are more jarring than teleportation and are arguably as jarring as copies. Brain injuries create much bigger changes in mind and personality than even relatively crude brain duplication methods would. A traumatic experience can radically change a person. Personalities and mental capabilities vary tremendously over a lifetime. It seems to me that you can easily make a case for any of these things having more interesting effects on the self (on both philosophical and pragmatic levels) than teleportation or cloning. In other words, if teleportation or cloning bother you more than what goes on every day right now, it’s because you haven’t thought about it enough. This is ultimately why I believe that arguing over whether it is possible to copy fine details is a red herring for the purposes of actually doing this in the real world. My intuition is that even if the brain uses quantum states, it can probably recover from having them randomized without appreciable lasting effects.

In short, when it comes to teleportation thought experiments, the relatively well-defined questions about physics and computational complexity are swamped by the poorly defined idea of "self"; and the philosophers certainly can and do happily go on doing what they do, regardless of quantum theory or, when it happens, actual duplication of human brains. Of course the engineers will also go on writing software and duping brains, and the complexity theorists will go on trying to prove P != NP and building quantum computers, and they will all still be arguing about who is asking the really interesting questions and posting off-topic rants to each other's blogs. Some things just don't change!

As a physicist and not a complexity theorist, I don’t really care about the strength of models of computation.

Hey, to each his own — but then what on Earth have we been talking about?! This whole discussion started when you accused me of not understanding the potential for complexity gains with quantum gravity algorithms…

Sigh. The point is that, for a physicist, the enormous gain in physical computational power (in QG) is a strengthening of complexity measure. Now, perhaps you only like talking to fellow complexity theorists, but you were discussing Penrose’s ideas about physics….

By Theorem 2, on the other hand, you’ve descended into utter silliness…
It was intended to be so. To tell you the truth, Theorem 1 looks equally silly to me. I think we don’t understand something BIG, without which all our philosophical debates are equally silly. Looking forward to hearing your view on this…

Kea, what do you mean by “physical computational power”? What complexity measure are you referring to? And how do you know that quantum gravity theories yield an “enormous gain” in that measure?

I’ve been hanging out with physicists for a decade. When John Preskill, Dan Gottesman, etc. talk about theory A having “more computational power” than theory B, they mean the same thing I mean: that the class of problems feasibly solvable in theory A strictly contains the class of problems feasibly solvable in B. If you mean something different, then please just tell me what it is!

Carl: When I try your viewpoint on for size, I immediately run up against the problem of how you know the clone river is not the “natural continuation” of the original one. What exactly does it mean for a process to “continue in a natural way”? What if there was an artificially intelligent river that teleported itself?

I agree that there are wrinkles in my view that need to be ironed out before one can say, "Yes, let's use a process ontology for biological beings," with confidence, but I don't think your example nails it. An artificially intelligent river isn't a river at all. So perhaps you mean "river" to be interpreted metaphorically, i.e. as a human being or an AI. That has more salience. The question under that interpretation is: if the intelligence caused its body to be teleported, by what criteria can we say that this teleportation was unnatural/artificial?

That question more concretely gets at a problem I myself noticed in my explanation of the teleporter problem after I gave it. In my old explanation, I said the problem with killing the original was that the clone couldn't be said to be the continuation of the original process. However, more to the point, the gunshot comes after the creation of the clone has finished. Thus, however briefly, there are two me's in the world before the original is shot. I think the "shoot the original" part has more to do with our natural revulsion than the actual teleportation. For example, when we watch Star Trek, we aren't revolted by the thought of the crew members being disassembled and reassembled, and thus technically not alive, during the beaming process. No, I think the revulsion comes from the act of shooting the original. We have a natural disgust at the thought of our death, i.e. the end of the stream of our continued existence. So, when you shoot the original, you do kill someone permanently, much as we might say that after a river dries up, it can't come back as the same river, even if it was cloned beforehand. It can only come back as another river derived from the original.
So, the intelligence that teleports itself might be considered in one sense “the same” as the original, so long as it is the only set of objects that can be construed as the successor to the prior intelligence after the teleportation event.

I'm going to answer your next questions, but before I do, I want to add the caveat that, of course, words mean just what we want them to mean. So if you wanted to define a river as being the same only if it kept the same banks or the same atoms or whatever, you certainly could, and could use that definition. But I think that if you want a working system for rigorously defining a river/fire/living being that lines up with the sorts of things we would normally call "the same river/fire/living being," then you should accept my process-based definition, since it allows us to call the sorts of things we normally call the same "the same" without being tripped up by "philosopher's axe" type objections.*

More prosaically: If engineers divert the river in a different direction, is it still the same river?

Yes, of course, because the banks of the river aren't the river. If you cut off the arm of a starfish, the new arm is still part of the same starfish, even if it grows back crooked.

If a car accident destroys part of your brain and you wake up after a year-long coma, are you still the same person?

Depends on what you mean by "the same person." Obviously, your mind will be different from how it was before the crash, but our minds are always changing anyway, and disappear almost entirely every night when we lose consciousness. So if "same person" means "the same or a similar enough personality as usual," then perhaps the answer is no. If "same person" means "the successor of the process of being a living being for the person before the car crash," then the answer is clearly yes, since even if all your atoms turned over while you were unconscious, the biological process of your life never ceased in its cellular machinations.

If a large fire goes out except for a tiny ember, which then reignites a large fire again, is it still the same fire?

Is this a controversial case? It seems like it clearly would be, although if someone didn’t notice the small ember, they might mistakenly say it was a new fire.

Here's a much more damning example for you to use: when I was in Japan, I visited a temple that claimed to have kept "the same flame" burning for thousands of years, since the great head priest Kukai got it from somewhere (maybe it went back to the Buddha in India? I forget the details…). However, this same temple also burned down in the Middle Ages! So, presuming it's true that one monk managed to keep alive a part of the flames as the fire that ate the temple was dying down, once the fire was under control or the temple was almost completely reduced to ash, which fire did the monk preserve? Did he preserve the fire from the time of Kukai, or the new fire made in the Middle Ages that grew out of control and ate the temple? Or did he preserve both!? (I would say that he probably "preserved" both, presuming the Kukai flame managed to merge with the larger fire as the temple collapsed around it, but I can see how this might be a more controversial claim….)

Philosophers can and do debate such questions till the cows come home — or rather, until cows appear at home at time T whose existence was caused by another set of cows in the field at time T-1…

I see your point, that it's dangerous to introduce a not widely used ontology to solve a hypothetical philosophical problem, but I think that if we're serious about saying that neither matter nor the information about that matter is preserved in biological systems, then we have no choice but to say that for those systems, "the same" means something derived from the original by natural processes. What else could it mean to be the same life form?

* To deal specifically with the case of the philosopher's axe: if I say "this is my grandfather's axe," you might reasonably infer that this axe has much of the same material as when my grandfather used it; hence you would object to calling an axe with a replaced head and handle "the same," since saying that leads to a false impression about the axe. Furthermore, the replacement of the head and handle is not a natural thing that axes do on their own; it is something done to axes from without, which makes it different from the case of dynamic systems.

For example, suppose I really did have my grandfather’s axe, but you were to object that it’s not the same because the nucleons that are in the atoms of the axe are now supported by new virtual particles that just popped into (and out of) existence in order to give them volume. Surely, we would all laugh at your objection. “Virtual particles in the nucleus come and go all the time! When someone says it’s the same atom, they don’t mean the same virtual particles have been there the whole time! If we use your definition of sameness in atoms, then no atom is ever the same for any significant length of time!” Note also that virtual particles come and go on their own all the time as a part of just being a nucleus for all nuclei, unlike replacing the axe head and handle.

If I say, “this is a river my grandfather loved” or “my grandfather’s old basset hound” then you won’t infer that the physical make up of the river or dog are the same, but that the dynamic processes of the river and dog have continued in the normal way without interruption.

No, Kea, that’s not a translation of what I wrote. The fact that theory A contains theory B as a limiting case does not mean that we can solve computational problems in A that we can’t solve in B. The two theories might be equivalent in their computational power. Do you understand this distinction?

Any attempt to decisively evaluate Penrose’s argument will run aground on the notion of “seeing the truth” of something. And the inability either to reduce that concept to physics and computation, or to do without it at all, is just a symptom of the inability of natural science as presently constituted to deal with consciousness. So one more effort, metaphysicians! Quantum entanglement already tells you that atomism may be wrong. But quantum mechanics is still formulated using a deracinated naturalist ontology. Abandon the presuppositions of mathematized materialism, brave the perils of solipsism and idealism, and join Husserl in seeking the true ontology via phenomenological reflection – because in the end, you have nothing else to use.

Scott, the point is that only in QG do we actually have a proper Quantum Mechanics, or a rigorous QFT. It is a ‘theory of computation’. So when I say QG is more powerful, I mean it strictly in the sense that you mention.

Scott: There’s a slight twist to Theorem 1, which I’d like you to cover if you write on this issue. My point is: our beliefs are greatly influenced by the technology we have at hand. Currently, not too many people believe we live in a computer simulation. I don’t believe it either. But in the not-very-distant future, technology will reach a level where our computer simulations become indistinguishable from reality. You will be able to travel in 3D, see holographic views as real as they could possibly be, experience tactile sensations, etc. Artificial reality will become as real as the real one, and this will make us think along the lines of Theorem 1: the majority will start believing we live in a Matrix, because that’s where our everyday experience with 3D simulations will naturally lead us. How this new world outlook will affect our lives, I don’t know, but I don’t expect any good to come from it. Now, the funny part: it is already clear that this time will come pretty soon (maybe in 20 or 30 years; it’s not even necessary to create an absolutely realistic reality, just a couple of improvements in movie technology will do) and it will shake current beliefs. And we KNOW this will happen, for sure, but the knowledge is not enough: our views will change only when it REALLY happens. But big thinkers are averse to the beliefs of the masses, so a new generation of philosophers and scientists will start looking for new arguments to prove our reality is more real than a virtual one. What do you think these arguments will look like?

Mitchell, I completely agree with you. From this day forward, I’m going to quit proving theorems and all that other hard stuff, and just lie on my couch seeking the true ontology via phenomenological reflection. It’s certainly easier, and it ought to bring in the babes way better than deracinated naturalist ontology.

Vasily Shirin: Didn’t Kant already say that, since our phenomenal experience cannot be the same as what’s “really” underlying existence (due to the antinomies of reason), we may as well consider this reality to be a form of ‘simulation’, under the stipulation that when we say the word ‘real’ we should mean ‘really in the simulation’ and not ‘really in the world that grounds the simulation’, since we can’t know about an outermost world so long as we are experiential beings?

Because a full formulation of QG will definitely (from physics arguments) involve a proof of the Riemann hypothesis and the zeroes are related to string type amplitudes (or you can consider the usual GUE arguments) which have duality features that are wiped out by the ‘linearity’ of ordinary QM.

Kea, why would a full formulation of QG require a proof of the Riemann hypothesis? I’m surprised that I never encountered such an important result in anything I’ve read about QG.

Also, even if the zeroes of the zeta function have some connection to amplitudes in QG, why does that imply that a QG computer could efficiently find the zeroes? Can you give me any evidence that this is anything more than a speculation, or a hope?

Finally, why is finding the zeroes hard for an ordinary QC? Saying the “duality features are wiped out by the linearity of QM” doesn’t mean anything to me.

If you don’t want to explain yourself, can you at least point me to some references that would back up the striking claims you’re making with such casual confidence?

Wait a minute… Isn’t Vasily’s theorem 1 obviously problematic? The only way for a finite computer to simulate itself is by being itself. It can’t simulate itself because it would need one bit to represent every bit of its state, therefore it would need to be made twice as big, but then it is only simulating something half the size of itself… Oops!

Even a machine with unbounded memory (e.g., a theoretical Turing machine with an infinite tape) runs into a very similar problem, which is that it can’t simulate itself in real time. Given that it is only roughly another 10^130 seconds before we dissolve into a sparse electron-neutrino gas, any “nested” universes must have shorter lifespans (by some constant fraction) than our own. So you go down a few dozen levels, and you find yourself in a universe that doesn’t last long enough for stars to even form.

Mark: Yes, every universe would have to be smaller than the one simulating it. But depending on what assumptions we make, it could still be the case that most observers find themselves in one of the “simulated” universes.

(And if the top-level “root universe” was infinite, then the observers in that universe really could have zero measure compared to observers in the simulated universes! Of course, this will depend on our choice of measure.)

Greg: Yes, one can develop a whole theory of quaternionic QM, just like one can develop a theory of real-valued QM. (Admittedly several things are less nice than in the complex case, as I discussed in Lecture 9.) Steve Adler even wrote an entire book about quaternionic QM.

It all depends on the properties of the root universe, but since we don’t know anything about the root universe, it’s fruitless to speculate about it. Maybe in the root universe QM is considered simple, contradictions are true, and cats and dogs live together. Since we’re just speculating about it, we have no reason to suppose the root universe is anything like ours, any more than an AI that lived in World of Warcraft would be able to speculate about the physics of our world. (“Well, clearly the root universe will be a series of polygons with bitmap skins stretched to fit them, and poor clipping…”)

Carl: in relation to your earlier comment about the river – have you considered the notion of “genidentity”? (google, and google with “fluffy”!)

Scott: On a slightly different tack than Penrose’s, but still on computation and consciousness:

1) Assume a computer simulated brain has “real” experiences.

2) Run the simulation, dangling a simulated patch of blue within the purview of its simulated optic nerve connectors.

3) Now use your computer science expertise to determine what the patch of blue looks like to the simulation.

Could it be that the difficulty of step 3 shows there is some incommensurability, or “explanatory gap” between experience and what is computable? (Of course, the question “does it look like the blue I see?” has a simple yes-no answer, had we but the means to determine it.)

Scott again: I don’t take posting comments to eminent computational complexity blogs lightly, so in order to make the best use of my visit I’ll ask an unrelated question: could you please explain where the pure randomness theorised to be involved in quantum measurements actually comes from? Thanks!

I came up with an illustration for Godel’s theorem. It’s actually a plot for a movie. I am not even sure whether it’s my own idea, maybe I heard it somewhere.
The plot revolves around the idea that our DNA has a double meaning: it encodes proteins, but at the same time the very same sequence has a different interpretation: it contains some very important message, which we will never be able to figure out, and the hidden meaning of our life, of which we are absolutely unaware, is to deliver this message to someone. So there’s a lot of action and drama in this movie (I omit irrelevant details), but in the end the hero overcomes unimaginable difficulties, travels through parsecs and eons, and reaches the destination. In the last scene, some invisible entity (made of quantum superposition) opens the envelope, and the message reads “life is meaningless!” He says, “Old news, I already got it 5 billion years ago from another source,” and throws it away.
It’s a very sad movie, I’m not sure it will ever be screened.

Addendum (3/28): Oops, I didn’t see before what you meant. There’s no mistake in your logic. From F+Not(G(F)) you can deduce that F is inconsistent — that’s completely true! But that doesn’t mean there’s actually an inconsistency — the theory just “thinks” it’s inconsistent!

In my view, nonlinear gravity is really the key issue.
General relativity (GR) is said to be nonlinear. I have heard it argued (no refs right now) that this GR nonlinearity is not nonlinearity in the sense that contradicts the linearity of quantum superpositions (“QS nonlinearity”). Via Abrams-Lloyd, if quantum gravity (QG) is QS-nonlinear, then NP-complete problems are physically efficiently computable, although I see you (Scott) dispute this in your NP-complete Problems and Physical Reality paper.
I think there are two crucial questions here:
1. Is QG nonlinear in the requisite sense?
2. Does this nonlinearity actually allow efficient computation of NP-hard problems?
QG is still unknown, but we know GR and the Hawking semi-quantum approximation. In this theory, can you create the necessary nonlinearity, either by passing close to a black hole, or by forming and evaporating one?
Also, does forbidding this form of nonlinearity significantly restrict possible QG theories?
My first guess is yes, but I have no strong arguments other than the general improbability that everything is linear.
Finally, Scott, if simple nonlinearity is not sufficient for the Abrams-Lloyd conclusion, what more is required?

Jim: Don’t get confused by overloaded terminology! The sense in which GR is nonlinear is completely unrelated to the sense in which nonlinear QM would be nonlinear. GR is nonlinear at the level of observables, while nonlinear QM would be nonlinear at the level of amplitudes. The latter would allow NP-complete problems to be solved in polynomial time; the former, so far as we know, does not.

The computational power of quantum gravity theories is very much an open question. However, Kea’s claims notwithstanding, everything we know now (including about black holes, etc.) is consistent with the possibility that QG yields no further computational power beyond that of ordinary quantum computers.

Almost any nonlinearity in QM would suffice for the Abrams-Lloyd construction. To be more precise, you need a nonlinearity that can increase the angle between two quantum states.
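For concreteness, here’s a toy numerical sketch of what “a nonlinearity that can increase the angle between two quantum states” means. This is my own illustration under simplifying assumptions (real 2-component vectors, and a made-up map), not the actual Abrams-Lloyd gate: cubing each amplitude and renormalizing is nonlinear, and for suitable pairs of states it spreads them apart.

```python
import math

# Toy illustration (an assumption of mine, NOT the actual Abrams-Lloyd
# construction): a nonlinear map on 2-component real state vectors that
# increases the angle between a particular pair of states, which is the
# qualitative property the construction exploits.

def normalize(v):
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def nonlinear(v):
    # Cube each amplitude, then renormalize: nonlinear and non-unitary.
    return normalize((v[0] ** 3, v[1] ** 3))

def angle(u, v):
    dot = u[0] * v[0] + u[1] * v[1]
    return math.acos(max(-1.0, min(1.0, dot)))

u = normalize((0.6, 0.8))
v = normalize((0.8, 0.6))
before = angle(u, v)
after = angle(nonlinear(u), nonlinear(v))
print(after > before)   # True: this map spreads these two states apart
```

Roughly speaking, iterating a map like this can grow small separations between states quickly, which is the sort of engine the Abrams-Lloyd argument runs on.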

Because I explicitly restricted myself to questions that could be asked in at most a million years. (And if you read on to the next paragraph, you’ll see exactly why.)

Sorry, I didn’t understand – I thought You meant a computer that would answer any question within one million years. Now the next paragraph makes sense.

As I understand, You suppose that time is discrete, so that the brain may work in an input-output way? This seems to me quite a strong assumption, but even if it’s true, is it clear that such a lookup table could be contained in the universe? I mean that we don’t really know how much information the answer given by the brain in this infinitesimal amount of time would depend on.

However, I think this computational perspective is nevertheless interesting.

sirix: By holographic arguments, certainly I can’t be communicating to you more than (say) 10^120 bits per second. But the real bound will be vastly better than that — since what matters is not the total number of nuances of my voice, facial expressions, etc. that could be distinguished, but rather the number that you do in fact distinguish.

Of course, for the conversation we’re having right now, the bound is more like 10^4 bits.

“But the real bound will be vastly better than that — since what matters is not the total number of nuances of my voice, facial expressions, etc. that could be distinguished, but rather the number that you do in fact distinguish.”

Scott: I don’t argue that such a look-up table wouldn’t be good enough to fool any human – I totally agree. That should even be possible to do in the conceivable future (since here one needs much less than a 1-million-year lookup table).

BUT, if You wanted to fool all the machines that I’d in principle be able to build in order to test whether your look-up table is really Winston Churchill, then You would perhaps need much more storage space.

The non-linearity I referred to is a breaking of the closed monoidal structure underlying the Abramsky-Coecke protocols, and this very much has to do with the non-linearity in gravity, although this can hardly be explained in a few sentences. Scott, you appear to believe that you are up to date with QG thinking, despite not even being a physicist. There are plenty of references for the ideas I mention.

Kea, you seem to believe you know what’s meant by a computational speedup, despite very clear evidence to the contrary in your posts above. Yet I never once questioned your qualifications to discuss algorithms and complexity — instead I asked you eight times (!) to clarify your ideas. You responded by repeatedly changing the subject, arguing by buzzword, appealing to unnamed authorities, reminding me that you’re a physicist and I’m not, and now stooping to ad hominem attack.

I’ll give you one more chance to answer my questions as if I weren’t a five-year-old: with evidence and argument rather than name-dropping. If you still won’t do so, I’ll have no choice but to block you for trolling.

As I think about it more, I really start to doubt the look-up table idea. I claim that the brain performs a great enough multitude of, say, “computations” that although it takes in, say, 10^15 bits of input data during a whole life (80 years * 350 days * 24 hours * 60 seconds * 10^8 bits; I think it’s much more), it may well produce more than 10^80 bits of output data (and I believe that to make your point this look-up table should be able to reproduce all functions of the brain). So really all this can’t be stored in the universe, and accordingly your table would have to use some kind of algorithm, and I believe it’s crucial for this thought experiment that it doesn’t.
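As a quick sanity check on the arithmetic (all rates here are the round figures assumed in the comment, not measured values), note that the product as written omits a factor of 60 for minutes; with full seconds per year the estimate comes out nearer 10^17 than 10^15 bits:

```python
# Back-of-the-envelope lifetime sensory-input estimate; all rates here
# are assumed round figures from the comment above, not measured values.
years = 80
days_per_year = 350
bits_per_second = 10 ** 8   # assumed raw input bandwidth of the brain

seconds_per_life = years * days_per_year * 24 * 60 * 60
lifetime_bits = seconds_per_life * bits_per_second
print(f"{lifetime_bits:.1e}")   # 2.4e+17
```

Either way, the figure is dwarfed by the hypothesized 10^80 bits of output, which is the point at issue.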

Every night the brain, apparently quite randomly – but your table would have to reproduce it – creates dreams out of some of these 10^15 bits, and this is a huge amount of data. (I can almost _hear_ You say that dreams only appear to contain so much data, but are really very obscure and easy to reproduce. I totally disagree, and I have at least one authority on my side – Feynman in his semi-autobiographies claimed that he could dream of whatever he wanted and performed some experiments of his choice in dreams. I propose to believe Feynman, and so this story gives a huuuuge increase in the estimated size of the table needed to simulate Feynman.)

Consider also (though this is less important than dreams) that we don’t really know how much data processing is needed to, say, play football, as no machine can do it convincingly, and probably the best supercomputers wouldn’t be able to control all the muscles with the precision needed.

sirix, you seem to be responding to what some hypothetical person might say about lookup tables, not to what I said!

1. I was only talking about whether a computer can simulate a human — i.e. pass the Turing test. That’s why it sufficed to talk about observable inputs and outputs. Dreams and so on are irrelevant by definition.

2. Yes, the lookup table would quickly get larger than the observable universe — I said that several times! But to demonstrate that the issue is one of complexity and not computability, all I needed was that there should exist some finite lookup table, regardless of how large.

2. I’m not a physicist, but I imagine that a physicist would say that to demonstrate that the issue is one of complexity, you’d still first have to demonstrate that such a table could possibly exist.

(I’m a mathematics student and _as a mathematics student_ I’d say that if You don’t show that the table can in principle exist, then you can’t proceed meaningfully any further. But nobody cares about mathematics students these days…)

1. The whole novelty of the Gödel argument is that it tries to show, not merely that computers can’t have consciousness, but that they can’t even simulate it. So that’s the claim I was responding to.

There are plenty of AI skeptics like John Searle (the “view B” people) who make only the weaker claim about consciousness. I wasn’t talking about them here.

2. By definition, computability theory ignores resource measures like running time and circuit size. So once we’re arguing about whether a Boolean circuit can be made small enough to fit in the observable universe, for that very reason we must be talking about complexity!

1. Ok, I agree You are victorious in responding to the claim that a computer can’t pass the Turing test. I thought we were arguing about something different: that a computer can’t simulate the brain.

2. You’re the complexity theorist here :-).

(BUT, I can’t resist saying that if one wants to do a thought experiment, then one should carefully show that the things one refers to can in principle be constructed (I have the Einstein-Bohr discussion about Einstein’s box in mind). But it’s irrelevant, since You weren’t doing a thought experiment (I thought You were) but some complexity theory.)

sirix, I just don’t buy the claim that the brain can output more information than can be stored in the universe. The more generous your counting method is for getting the number of bits a brain can produce, the more generous you have to be in calculating the storage capacity of the universe. But under any method used, the universe is always FAR bigger than brains, and it seems to be doing just fine at holding the output of billions of brains.

As a rough heuristic (that may not be strictly true; I’d have to think about it more), I would guess that no physical system can output more information in time T than can be contained in its neighborhood of the universe bounded by a radius c*T. Even across the history of the whole human race, all of the “output” of all of our brains is still bounded within a very tiny portion of the known universe. (The fact that it’s computationally intractable to recover this information is a separate matter entirely.)
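To put a very rough number on that heuristic: the holographic bound caps the information content of a sphere of radius c*T at about A / (4 ln 2 l_p^2) bits, where A is the sphere’s surface area and l_p the Planck length. Treating that bound as directly applicable here is an assumption of mine, but it illustrates that the output is finite:

```python
import math

# Rough sketch of the heuristic above: information emitted in time T is
# bounded by the holographic bound for a sphere of radius c*T. Using the
# area-law form A / (4 * ln 2 * l_p**2) bits here is an assumption.
c = 2.998e8       # speed of light, m/s
l_p = 1.616e-35   # Planck length, m

def holographic_bits(T_seconds):
    r = c * T_seconds               # radius of the light sphere
    area = 4 * math.pi * r ** 2     # its surface area
    return area / (4 * math.log(2) * l_p ** 2)

# One second of "output": astronomically large, but finite.
print(f"{holographic_bits(1.0):.1e}")
```

The number that comes out (on the order of 10^87 bits for one second) is far below even the generous 10^120 figure mentioned earlier, which only strengthens the point that the bound is finite.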

I liked your line “soul or bullet,” except that while I am a strict physicalist, I don’t warmly embrace the bullet idea.

Also, wouldn’t a good CAPTCHA be to generate questions that require a small amount of interpretation? For instance, to ask questions like “2+2 is an easy math question?” and “blonde hair is not as light as white hair.” Wouldn’t it be much harder for a program to answer those sorts of questions reliably? I don’t think my examples are even that good… and maybe it’d require something too ingenious to generate and grade coherent questions of that sort. In fact, I guess I don’t know at all how to have a machine grade them.

Scott: Addendum: All I said was trivial. But since it’s plausible to think there are people who misunderstood You in the way I did :-), I’ll say just for them that You’re not claiming that it’s possible to build a machine which could pass a Turing test.

RM: I’m pretty sure that You’re right, but I don’t really know what the precise definition of information is. However, your “theorem” looks very elegant, and surely Scott could prove it.

However, when I was talking about dreams I meant that the brain can visualize a large portion of the universe (larger than the brain is). This creates problems if You want to simulate the brain by a primitive lookup table.

I believe that the situation is quite analogous to the following: I choose a random function [0,1] –> R which for simplicity takes rational values on rational points (I surely know alef_0 such functions). I call this function the temperature of a universe, and I take a computer that can compute this function on rationals of the form k/m with m

IMHO, a discussion of quantum mechanics and the brain, to be useful at all, has to have some pretty good mathematics in it. Conversely, if the discussion doesn’t have good mathematics, it might as well be about poetry.

Therefore, here’s some poetry! By Emily Dickinson:

The Brain is Wider than the Sky

The Brain — is wider than the Sky —
For — put them side by side —
The one the other will contain
With ease — and You — beside —

The Brain is deeper than the sea —
For — hold them — Blue to Blue —
The one the other will absorb —
As Sponges — Buckets — do —

The Brain is just the weight of God —
For — Heft them — Pound for Pound —
And they will differ — if they do —
As Syllable from Sound –

(This failure to bind isn’t restricted to this example. An entire class of important reactions (those catalyzed by enzymes) has this feature).

In order to capture the mechanisms responsible for binding, aspects of the binding process need to be treated with more accurate quantum mechanical models, for example coupled cluster http://en.wikipedia.org/wiki/Coupled_cluster.

Anyone who claims quantum mechanics is not directly involved in human behavior (which is of course related to consciousness, because it’s the only evidence we have for anything related to it) has to explain how a human could be conscious without the ability of any of their neurotransmitters to bind to the receptors in their brains.

Geordie, it’s also the case that without quantum interference, all the atoms that make up the brain would instantly disintegrate. So the question is not whether quantum effects are relevant to the physical and chemical substrate of the brain — obviously they are! The question, rather, is whether the quantum effects can be “abstracted away” when we consider higher-level information processing, or whether they’re relevant at that level as well.

I would argue though that there’s a difference between QM being required for the brain’s components (which it obviously is, e.g. electrons not spiraling down into nuclei) and QM being required for certain dynamical effects important for cognition to occur or not (the serotonin example).

For example QM is required to have lego blocks for the same reason. But you can easily calculate whether two lego blocks will bind using classical physics. The same isn’t true for most of the constituents of the brain.

Surely the binding issue could be dealt with via precalculated lookup tables, given that there are only a finite number of neurotransmitters.

It isn’t in the least bit obvious to me that to simulate the higher functions of a brain you would need to simulate the quantum effects which underpin the physical implementation. In fact, I would think that the success of neural networks is a very strong counterexample.

Now, suppose we reached a technological level where we can create artificial consciousness in a computer. So, let’s go ahead and propose some design principles for such a construction. Let’s call our artificial universe U, its conscious inhabitant K.
1) To start with, we don’t want to simulate parts of U (be it space or time) which no K can observe. This would be a complete waste of time. (Some further performance improvements are possible – they can be quite interesting, but I omit them for brevity)

2) The picture Ks create in their minds should be consistent with the laws of logic; in particular, we don’t want them to see any miracles, especially after they master scientific methods.

3) Granularity of simulation, in line with principle 1, should be proportional to accuracy of experiments conducted by Ks; remote galaxies can be roughly evaluated, whereas particles observed by fine tools should be given more care and computational resources.

4) since U becomes part of our own reality, it will sooner or later (rather sooner) be utilized for our own needs (after all, Ks are conscious beings, we can make use of them). However, this symbiosis will create a little bit of a headache for us: every epidemic in U is now OUR problem; every tyrant in U living too long is OUR problem; every stupid social/economic experiment that leads to mass starvation is OUR problem; etc. So, just in case civilization in U is in danger, we want to be able to rewind simulation a little bit, or as much as needed, without rebooting U. Although we can save periodic snapshots, it would be much more elegant to introduce reversible laws of nature in U, so we can always go back as much as needed, and no further.

This is just a first draft, a bit sketchy; not that I personally believe in this possibility (I don’t), but still it would be curious to develop it further; even in the first 4 principles we can recognize some familiar properties. I think it would be possible to cast this as a kind of mathematical theory, where we just go as far as we can; in particular, can we derive relativity from 1-4? Can we make some assumptions about the consciousness of the Ks so that they won’t feel anything while we are backtracking the simulation? In other words, can this myth be as (or more) productive for us as other myths we believe in (on which the whole of science is based)? What other principles can be added to make the theory more closely resemble our own world? (It looks like no one is seriously considering this subject right now; too bad – in 20 or 30 years it will become commonplace. At least Kurzweil thinks so.)

I don’t know what Kea was talking about, but this is what little I know.

IF the quantum gravity path integral includes a sum over (all possible) topologies, then nature solves a problem which no computer (classical or quantum) could solve exactly.
There can be no algorithm to decide in general whether two manifolds (specified, e.g., as simplicial complexes) have the same topology, due to the halting problem. It would still be possible to calculate reasonable approximations (James Hartle wrote papers about this), but not the full integral.

By the way, this was already a minor issue when people discussed back in the 90s whether the algorithms used in the ‘dynamical triangulation’ approach are ergodic.

Therefore one can perhaps speculate that quantum gravity might provide for a novel way of computation and I would assume that Prof. Penrose is perhaps thinking about such things.

But of course, one should not forget that we do not yet have a (fully understood) quantum theory of gravitation.

Thanks, Wolfgang! Yes, I know well that the 4-manifold isomorphism problem is undecidable. But even if we defined a quantum gravity computing model by integrating over 4-manifolds, it’s still far from obvious that undecidability would actually rear its head in that model.

Here’s a relevant analogy: even in conventional quantum mechanics, computing individual amplitudes is #P-complete. From this one could immediately conclude, by Kea-logic, that quantum computers can solve #P-complete problems. But such an argument would be mistaken, since the amplitudes aren’t directly observable. In other words, one actually has to think about it, instead of just stringing together buzzwords.

“It isn’t in the least bit obvious to me that to simulate the higher functions of a brain you would need to simulate the quantum effects which underpin the physical implementation.”

Yes I agree. However it is an interesting (to me at least) question as to why it is that quantum mechanics is relevant (in the serotonin binding sense) to the way our particular implementation functions.

Could we just as well have brains constructed out of lego-block-ish classically simulatable components? For the simple reason that we only have one physical implementation of our kind of brains, I wouldn’t discount the possibility that the details of this particular implementation might be very important to its functioning.

Just thought of something: the argument that a lookup table could in principle pass a Turing test turns on the claim that the data output by a human being is finite, and any finite amount of data can be stored explicitly. But passing a Turing test calls for giving a correct output for all possible inputs — and when the set of possible inputs is not finite, no finite table could ever store all the correct outputs.

For instance, the set of strings that match a regular expression is usually infinite. One can build a finite automaton that accepts exactly the set of strings matching a regular expression. One can then feed a sequence of strings into this automaton, and it will output a sequence of YESs and NOs. And one can prepare a finite table, listing each string in the sequence with the corresponding YES or NO from the automaton. But that table would not be a complete representation of the automaton — one could always supply a string the automaton would accept that isn’t in the table.

Now it is indisputable that human beings are more complicated than regular expressions, and that the set of inputs a human being is prepared to handle is not finite. It isn’t relevant to the Turing test that the inputs any given human does handle are finite; what matters is the set of inputs a human can handle, in principle. Since that set isn’t finite, no finite table could represent it.
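To make the automaton example concrete, here is a minimal sketch (my own toy, with the assumed regular language (ab)*): the finite automaton decides every string whatsoever, while a table recorded from any finite sample of its runs is necessarily silent on unseen inputs.

```python
# A DFA accepting exactly the (infinite) language of the regex (ab)*.
def accepts(s):
    state = 0                        # state 0: expecting 'a' (accepting)
    for ch in s:
        if state == 0 and ch == 'a':
            state = 1                # state 1: expecting 'b'
        elif state == 1 and ch == 'b':
            state = 0
        else:
            return False             # dead state: reject immediately
    return state == 0

# A finite table recorded from a finite sample of the automaton's runs...
sample = ["", "ab", "abab", "ba", "aab"]
table = {s: accepts(s) for s in sample}

# ...agrees with the automaton on the sample but is silent on new inputs.
print(table.get("ababab"))   # None: no entry in the table
print(accepts("ababab"))     # True: the automaton still decides it
```

Of course, Scott’s reply below this comment is that the Turing test itself is bounded in length, which is what makes a finite table sufficient there.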

let me not string (!) together buzzwords, instead just mention that in your presentation (www.scottaaronson.com/talks/royalsoc.ppt)
you answer with a clear NO the question of whether you are concerned about the undecidability of 4-manifolds.
But I noticed that Dave left the answer blank; I would agree with him on this (which in my case just means that I have no clue 8-).

By the way, a nice paper about this (which briefly mentions Penrose) is gr-qc/0506019.

Alright, let me try to preempt further questions about why the lookup table is finite, by repeating one of the central points from the lecture:

In real life, we regularly decide that other people think after talking to them for only a few minutes. So why should an infinite conversation be necessary to decide if a machine thinks? Isn’t this yet another example of the meatist double-standard that people constantly apply in these discussions without even noticing it?

Scott: can you propose a stronger intelligence test that can’t be passed merely with a table? E.g., let the computer generate a beautiful mathematical theory, which humans can nevertheless understand (the theory should introduce some new notions – in other words, new meanings). Isn’t that a good test?

I think the whole business about the lookup-table is a dirty trick. It is sort of a wordy way of saying “if you were talking to a machine, there is something that it could say at each juncture that would make you think it was intelligent.” (And that something is what we put in the table). When you put it that way, it is obvious – of _course_ there is something it could say that would make it appear intelligent.

Vasily: When you ask Scott for a test that can’t be passed with a table, you are falling into the trap. The table sounds stupid but it is actually very smart. Why? Because _any_ problem with a bounded input can be solved with a table. The idea of the table is just a way of saying “the answer to any question of size at most N exists, and it could be written down in theory.”

Or, as Scott has been saying, this is all just a way to get people to focus on complexity, not computability.
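The point that _any_ problem with bounded input can be solved with a table is easy to sketch. In the toy below, `oracle` is a hypothetical stand-in for however the answers were originally produced (an assumption for illustration); the alphabet and length bound are likewise made up:

```python
from itertools import product

# Replace ANY function on inputs of bounded length with an explicit table.
# 'oracle' is a hypothetical stand-in for an arbitrarily clever responder.
def oracle(question: str) -> str:
    return "yes" if "?" in question else "noted"

N = 3             # bound on input length
ALPHABET = "ab?"  # toy alphabet

table = {
    q: oracle(q)
    for n in range(N + 1)
    for chars in product(ALPHABET, repeat=n)
    for q in ["".join(chars)]
}

# The table now answers every bounded-length input with no computation:
print(table["ab?"])   # yes
print(len(table))     # 40  (3^0 + 3^1 + 3^2 + 3^3 entries)
```

The table’s size grows exponentially in N, which is exactly why the issue becomes one of complexity rather than computability.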

Again: Finite number of bits
Again, it’s not the number of bits that matters; it’s the interpretation of these bits that matters. The number of bits is not the only possible measure of complexity; it’s just a convenient measure in some theories. If a computer comes up with a beautiful theory, it passes the test in my book; counting bits in this context is as silly as counting notes in a Beethoven sonata.

For those whose state has not yet been fully collapsed into “|fun>” (which I thought was a truly delightful image, Scott), the above is a link to an article by Cristian Calude, Michael Dinneen, and Chi-Kou Shu entitled Computing a Glimpse of Randomness.

“Resistance (to Laughter) Is Futile”: the article starts out “Any attempt to compute the uncomputable or to decide the undecidable is without doubt challenging, but hardly a new endeavor” … and then immediately gets down to computing.

To be fair to the authors, I should mention that this is a serious article … one which happens to be fun to read.

OK, let’s put it this way: the string S printed by the computer is not a final result; it’s a program for our brain to interpret. Assume for the sake of argument that there’s a program P running in our head that takes S as its argument, and either runs indefinitely (still trying to figure out what S means) or halts in a “true” or “false” state. By the very fact of printing S, the computer claims (!) that P will halt on this input. That is a statement of infinite complexity; the only way for the computer to make such a claim is to run a simulation of P internally and make sure it halts before printing. But by your own assumption, P is not known to the computer.

Vasily, if you wish to participate in this discussion, you’re going to have to stop changing the rules in mid-game. You asked: is there a stronger intelligence test that can’t be passed by a lookup table? I pointed out that the answer is no. You answered that the number of bits is “not the only possible measure of complexity,” that counting bits in a beautiful theory is like counting notes in Beethoven’s sonata. Well, duh! Can you not understand why that’s completely irrelevant to the question you asked? Of course the number of bits is not the important thing about a mathematical theory — but it can be expressed with a finite number of them, and that means (trivially) that a lookup table could produce such a theory.

“In real life, we regularly decide that other people think after talking to them for only a few minutes. So why should an infinite conversation be necessary to decide if a machine thinks?”

The Turing test requires simulating, not an infinite conversation, but an infinite number of possible conversations. (Just as finite automata don’t accept infinite strings, but do accept an infinite number of strings.) And no, sirix didn’t bring up this specific distinction.

Also, many people decided that the Eliza program thinks after talking to it for only a few minutes — and Eliza isn’t intelligent at all.

The Turing test requires simulating, not an infinite conversation, but an infinite number of possible conversations.

No, that’s incorrect. In his 1950 paper, Turing explicitly said there would be a time limit (say 5 minutes) on the conversation.

Also, many people decided that the Eliza program thinks after talking to it for only a few minutes — and Eliza isn’t intelligent at all.

That’s because those people weren’t sophisticated. They should have immediately asked things like, “Which is taller: Everest or the Empire State Building?” and not accepted any artful dodging or deflection of the question, or any attempt to change the subject.

(Come to think of it, maybe the same test should be applied to commenters on blogs… )

I was the first to publish a scientific article on the likelihood that we are indeed simulated by positron-electron entities at least a googol years from what humans think is the present, in the magazine Quantum.

Years later, Nick Bostrom got great publicity by rediscovering my argument, and claiming that he was the first to publish.

It is widely believed by physicists, though neither proven nor unanimously held, that the universe is a quantum computer.

Richard Feynman had this in mind (and discussed it with me) when he became the great-grandfather of Quantum Computing. There have been some revisions to his original proposal, but there is still wide agreement with his assertion that the universe computes its own next state by real-time integration.

I’ve thought about that since he and I were at Caltech together, 1968–73. Before I graduated, I gave Post’s corollary to Feynman: “The universe is the smallest (least-action) computer that can compute or simulate the future of the entire universe.”

I stated then and still believe that we could be nested as a simulation inside a larger universe, which could in turn be nested as a simulation inside a larger universe.

I specifically suggested that we (me and the readers of the essay) were likely to be embedded in a simulation by a far far future electron-positron civilization (citing Freeman Dyson’s physics theories of the deep future).

I was the first to put this in print: “Human Destiny and the End of Time” [Quantum, No. 39, Winter 1991/1992, Thrust Publications, 8217 Langport Terrace, Gaithersburg, MD 20877; ISSN 0198-6686].

Professor Gregory Benford acknowledged to me that he drew on this theory in his novels of the galactic core: whole sentences in those novels, even paragraphs in italics, came from his notes while reading my essay, and many of my sentences and phrases were used with permission for their poetic and cosmological value. Then, years later, the philosopher Nick Bostrom (director of the Future of Humanity Institute at Oxford University) rediscovered what I’d published first, and did a better job than I did at getting mainstream PR for it.

The germ of the idea was in science fiction even before me and Feynman. “The Matrix” and “The 13th Floor” popularized the idea further.

The point is: Seth Lloyd and Roger Penrose are not intentionally writing science fiction. Their work on the limits of computing, both classical and quantum, is indeed quite exciting.

I didn’t want to write again about the Lookup Table (since it’s trivial :-), but it keeps coming up, so let me say what I think (and I strongly believe Scott agrees) this reasoning proves:

If one wanted to prove that it’s impossible to build a machine that would pass a Turing test, one would need to make nontrivial use of the fact that such a machine would be contained in a universe.

I think that at least some people think about the topic under discussion in terms similar to mine (in other words, they know s*** about complexity theory), and this kind of sentence would be much clearer for them than something along the lines of “It’s a complexity-theory question whether…”.

Notice that I don’t say it’s unimportant (only trivial). It is important, since it shows that all one-line “This sentence cannot be proved in F.” proofs are obviously wrong.

“No, that’s incorrect. In his 1950 paper, Turing explicitly said there would be a time limit (say 5 minutes) on the conversation.”

You’ve misremembered the paper. The only place where Turing mentions a time limit is where he predicts that circa 2000 computers will be able to deceive 30% of questioners after five minutes of conversation. That prediction is not part of Turing’s description of the test!

No, that is the sentence in the paper I was talking about. In any actual implementation of a Turing Test (e.g. the Loebner competition), it’s obvious that you’ll need some time limit, and Turing — not one to stress the obvious, but also not one to let it pass by him — explicitly acknowledges that in making his prediction.

In any case, there’s no need for further Turing-hermeneutics. As I tried to explain both in the lecture and here, we always have a de facto time limit when deciding whether other people are intelligent. If you want to say that, even after a machine conversed with you intelligently for 10^10 years, you’d still need to converse with it for longer before knowing if it was really intelligent, then why not say the same about humans? Why not say that, for all you know, your closest friends and family don’t really have minds, that you haven’t known them long enough to have any clue? And conversely, if knowing those people for years or decades is enough to be reasonably sure they have minds, then why isn’t it long enough for a machine?

Given the sheer contentless triviality of what I’m saying, I didn’t expect that so many commenters would have so much trouble understanding it. But here I was mistaken: I should’ve known that, when it comes to this topic, not even the trivial can be taken for granted.


Let’s say I know all the entries in the table. What will happen if I intentionally ask the machine a question whose answer is not in the table? Certainly such a question exists, since the table is finite and the number of questions I can think of is infinite. Doesn’t that invalidate the lookup-table approach?

For the question not to be in the table, it would have to exceed the table’s length bound; i.e., it would take you an arbitrarily long time to ask.

Right now, we’re potentially engaging in a dialogue with “infinite” possibilities, but if we both agree to write less than 2,000 words per comment, then there are only finitely many different things we might say using 26 lower case letters + 26 upper + 10 numerals + 11 or so punctuation markers.
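To put a number on that “finitely many”: with k = 26 + 26 + 10 + 11 = 73 symbols, the number of distinct strings of length at most n is the geometric sum k^0 + k^1 + … + k^n. A small Python sketch (the function name and the bounds are mine, purely for illustration):

```python
# Count the distinct strings of length <= n over an alphabet of k symbols:
# the geometric sum k^0 + k^1 + ... + k^n.
def num_strings(k: int, n: int) -> int:
    return sum(k**i for i in range(n + 1))

# With 26 lower + 26 upper + 10 digits + ~11 punctuation marks, k = 73.
# The count is finite, but astronomically large even for tiny "comments":
print(num_strings(73, 10))
```

Finite, yes, but already around 10^20 possibilities at ten characters, which is why “finite” and “feasible to tabulate” are very different things.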

Any attempt to decisively evaluate Penrose’s argument will run aground on the notion of “seeing the truth” of something.

Just for the record, I opposed John Lucas himself (the Oxford philosopher who originated the argument, named in /Gödel, Escher, Bach/, and a Fellow of Merton College where I was a grad student and Junior Research Fellow) in an informal Merton College debate in 1988. I came from the angle that his argument would prove that Peano Arithmetic (or any system F that can formalize and prove Gödel’s Theorems) has an immortal soul. What happened is that John quickly made a distinction between *knowledge* and *proof*, and declared himself as standing on the knowledge island. After some sparring I realized that I couldn’t bridge my prepared argument (and its careful distinction between G(F) and the implication Con(F) -> G(F)) from the proof island to the knowledge island. I didn’t even “run aground” on the latter island; instead I suggested we and the audience of 12 or so hit the port and cheese, and a good time was had by all…!

OK, if I put it this way: “based on our current knowledge, it’s impossible to provide any meaningful definition of intelligence” — will everybody agree with that?
Then there’s nothing striking here. We cannot provide a meaningful definition of any intuitive notion and prove that it really corresponds to our intuition (even if it’s as “simple” a thing as number). We cannot prove anything about intuition at all. So what are we arguing about? Had you formulated it that way, as a generic statement about intuition, we wouldn’t have this debate.

“As I tried to explain both in the lecture and here, we always have a de facto time limit when deciding if other people are intelligent.”

The arguments of Lucas and Penrose aren’t dealing with what can be done in practice, though. They’re claiming that there is something humans can do, and are doing, which is beyond the scope of any algorithm in principle. If you want to refute them, you have to accept the idealizations of computability theory — unbounded input, in particular. Objecting that inputs in practice are always finite is simply refusing to meet the arguments.

On the practical question, we humans are quite eager to ascribe the attributes of rationality to anything that can engage in sensible conversation, because in our experience nearly everything that can do this is, in fact, rational. So it doesn’t take much evidence to convince us that we’re talking to a fellow intelligence; the difficult feat is to convince us that we are not. If I were speaking to an imperfect simulation of an intelligent being, I would at first assume the simulation is intelligent, and only become suspicious as time passed and a pattern of incomprehension began to appear.

Vasily: We can certainly agree that we don’t at present have an algorithmic definition of intelligence. The point under debate is whether an algorithmic definition of intelligence is possible, even in theory. Lucas, and Penrose, tried to prove that it is not; our host thinks it is.

Scott noted that: “Almost any nonlinearity in QM would suffice for the Abrams-Lloyd construction. To be more precise, you need a nonlinearity that can increase the angle between two quantum states.”

But a much smaller set of nonlinearities exists (of which M. Czachor has characterized an example) that wouldn’t allow signalling at a distance. I wish I were a good enough mathematician to characterize this set, and then to show that there is some kind of error-correcting code that would work over a dynamics including members from that set…

I find it interesting that people are talking about theories of quantum gravity when, I believe, we don’t really have any consistent theories of quantum gravity that cohere with our observations. Showing how to consistently add in some fundamental nonlinear evolutions isn’t the same as producing even a possible candidate for a theory of quantum gravity.

If you want to refute them, you have to accept the idealizations of computability theory — unbounded input, in particular. Objecting that inputs in practice are always finite is simply refusing to meet the arguments.

Michael: OK, I’ve calmed down enough to understand you better :-), but my perspective is still very different. Here’s an analogy: as I’ve said before on this blog, if P≠NP but there were a fast algorithm to solve SAT on instances of size at most 10^30, then I would cheerfully admit that the P versus NP question had turned out to be irrelevant. In computer science, I see every asymptotic statement as just an idealized version of suitable finite statements. The difference is that with P versus NP,

(1) we know what the asymptotic statement is, and

(2) we have lots of historical evidence that thinking in terms of asymptotic statements is a good way to approach the finite questions we really care about.

In Penrose’s case, I claim that both (1) and (2) fail. Since “simulating a human being” is not mathematically well-defined to begin with, it’s not clear what it even means to define an asymptotic version of the problem. And even supposing we could somehow do this, we’d have no historical experience to indicate that such a thought-exercise would tell us anything about the finite question. So here it seems much more sensible to me just to ask the finite question directly — in which case we get a question about circuit complexity rather than computability.

Scott: You should be glad that the online version of this argument fell on a more argumentative crowd than your class. I do believe the best part of The Emperor’s New Mind, however, was the denary (sic) listing of a number for a Universal Turing Machine. I’m not sure what this was supposed to show.

Re: the asymptotics of simulating a human, I wholly agree that this is an ill-defined question, but it may be definable in some sane sort of way: they study $n \times n$ chess, after all. I mean, I’m a physically bigger system than, say, Scott; does that make me harder to simulate in a useful manner? Scott’s more brilliant than I am; does that make him harder to simulate than me? Also, does this matter?

YK: Scott’s response to your question is a fairly accurate depiction of how he speaks. It’s not all bad, though: the class wouldn’t otherwise have time to keep up with him. Everyone remembers Feynman’s Lectures on Physics, but the people who were supposed to be in the class were, for the most part, lost or not attending by the end of it. I believe Scott’s given at least one talk at the Perimeter Institute, however, and they put all their seminars online if you just want to see him speaking on a topic of some description.

Also: I don’t check the Shtetl for three days and Scott goes and posts something that gets 130-odd comments. Sheesh.

[…] Update: From deep in the comment thread, my favorite new catchphrase this year (boldface added): In real life, we regularly decide that other people think after talking to them for only a few minutes. So why should an infinite conversation be necessary to decide if a machine thinks? Isn’t this yet another example of the meatist double-standard that people constantly apply in these discussions without even noticing it? […]

Isn’t the infinite simulation within simulation argument of Vasily a form of, or at least deeply related to, the argument for the existence of God by lack of necessity in an infinite chain of causation, i.e. the cosmological argument? (which has mutated from the ancient Greeks through Ibn Sina through Maimonides to Aquinas and, as this shows, continues to evolve). Vasily’s version seems clearly to be of the _in esse_ form.

No, I don’t think they’re at all the same. The simulation argument tries to show that, if we consider the “tree” of universes simulating other universes, then we’re unlikely to just so happen to be at the root of the tree. The cosmological argument (in this context) tries to show that the tree has a root.

Among other things, this implies that if the conclusion of the cosmological argument fails, then we get the conclusion of the simulation argument for free! In other words, if the chain of universes simulating universes extends infinitely far backwards, then certainly there must be another universe simulating ours, wherever we happen to be in the chain. And that’s all the simulation argument really cares about.

So, as with any unifying principle, we ought to be able to express it in algebraic, geometric, differential, and (21st Century!) information-theoretic terms, right?

Scott’s post has (implicitly) linked the differential and information-theoretic aspects of this principle. Good.

The geometric interpretation is obvious: the state space of quantum computation must have a Kähler geometry (so that parallel transport of two states does not change their relative phase). Very good.

But what is the algebraic venue for Scott’s principle? Presumably, the Kähler manifolds appropriate to the state-spaces of quantum information theory must have a natural algebraic structure … which is … which might be … ???

“The point under debate is whether an algorithmic definition of intelligence is possible, even in theory.”

The problem is not so much to give a definition of intelligence. The problem is to prove that your definition really corresponds to your intuitive notion of intelligence: that it’s not too narrow, not too wide, but a precise fit.
I don’t think anybody knows how this can be done.
If you look at things from this viewpoint, Gödel’s theorem will not seem very deep after all. It just says that one form of definition of number (namely, in the language of an axiomatic theory) will not be a precise fit. I assume the same will be the case for any attempt to define any real thing, too: e.g., any definition of electron will be either incomplete or contradictory. Apparently, the Greeks knew this.

“The simulation argument tries to show that, if we consider the ‘tree’ of universes simulating other universes, then we’re unlikely to just so happen to be at the root of the tree.”

Exactly. Everybody who knows Math/Computer Science beyond the shallow end of the pool knows properties of infinite trees.

Heck, I took “Theory of Algorithms” from John Todd, still attending seminars at age 93. He was the man who got John von Neumann interested in computers.

Another well-known property of infinite trees of universes:

If the ‘tree’ of universes simulating other universes is infinite, then at every level there exists a universe which at ALL future times will have a descendant.

In human genealogy terms, this is to say that, if the human race lasts forever, then at every point in its history there is at least one person who has, at all future times, a living descendant.

Next, consider that the Harary graph reconstruction conjecture, though true for finite trees, is untrue for infinite trees.

If one cannot see the original graph, but only the set of all vertex-deleted subgraphs (each created by removing a single vertex and all edges incident to that vertex), the problem is to reconstruct the original graph.

But we cannot tell, given an infinite set of isomorphic infinite trees, whether the original was a single such infinite tree or an infinite forest of those isomorphic infinite trees.

Now, what is the actual topology of the multiverse, if subtrees can be isomorphic to the entire tree? One need not see it as a tree structure any more.

Err, quantum states form a Kähler manifold, right? If so, then to whatever extent the concept of “state space” makes sense in quantum gravity, I don’t see how the state space could be anything else without violating QM…