
I’m, uh, sorry about all the, you know, mumbling. Clearly I haven’t yet been media-trained.


Next time, try to symmetrize the situation: ask the interviewer to tell you the main questions in advance, so that you can think through the best answers. That way you save your bandwidth for generating polished output, while keeping the processing part off the record.

If a physicist asks you about the difference between space and time as it pertains to computation, I suggest the following glib quote:

“The biggest difference between time and space is that you can’t reuse time.”

–Merrick Furst

In all seriousness, there is a problem here that, to my mind, computational complexity theory hasn’t satisfactorily addressed. Wheeler whimsically calls it the “many-fingered nature of time.” The issue is that in special relativity, while observers in different global Lorentz frames split spacetime into hypersurfaces of constant time in different ways (so there is no universal definition of “time”), they all at least agree that each other’s hypersurfaces are flat. (In other words, Furst’s quote holds.) In general relativity, however, there are no global Lorentz frames and correspondingly no flat hypersurfaces of constant time. A family of inertially moving observers will eventually acquire relative motions due to tidal forces, and their hypersurface of simultaneity (defined by Einstein light-ray synchronization) can become quite distorted and “fingered.” This is where the problem for computational complexity comes in.
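To make the flat-versus-fingered contrast concrete (these are standard textbook formulas, not anything specific to this discussion): in special relativity, an observer boosted to velocity v slices spacetime along the hyperplanes

t = \frac{v}{c^2}\,x + \text{const},

which are tilted relative to another observer’s slices but still flat. In general relativity, the separation \xi of nearby freely falling observers obeys the geodesic deviation equation

\frac{D^2 \xi^\mu}{d\tau^2} = -R^\mu{}_{\nu\rho\sigma}\, u^\nu \xi^\rho u^\sigma,

so wherever the Riemann curvature is nonzero, an initially comoving family drifts apart and its Einstein-synchronized surfaces of simultaneity curve and “finger.”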

Because there is no global definition of time in general relativity, one could imagine foliating spacetime arbitrarily with hypersurfaces that one calls the “hypersurfaces of simultaneity.” Time is then defined in local Lorentz frames as the direction locally orthogonal to a hypersurface. Because there are many possible foliations of spacetime one could use, it is not clear what it means for a language to be in PSPACE: the choice of what “space” means globally is ambiguous. Nor can one simply declare that the “space” used by a PSPACE machine is small enough to sit inside a single local Lorentz frame, because complexity is supposed to be about asymptotic analysis, after all.
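For reference, the definition being strained here is the standard, manifestly non-relativistic one:

\mathrm{PSPACE} = \bigcup_{k \ge 1} \mathrm{SPACE}(n^k),

the class of languages decidable by a Turing machine using at most polynomially many tape cells on inputs of length n. The definition quietly assumes a single global notion of “the contents of the tape at time t,” which is exactly what an arbitrary foliation of spacetime calls into question.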

This problem could be avoided if complexity theorists accepted that computation is local, as I recently posted about on Dave Bacon’s blog (“What computer scientists should know about complexity (that physicists already do)”). With this axiom accepted, it would be clear that the class PSPACE has no meaning in general relativity.

I guess what Scott wanted to emphasize is that complexity theory is purely (if I may say so) an abstract concept. No matter which theories explain the physical world, the halting problem would always be undecidable for the Turing machine, and the machine would never reuse time even if physics allows events to move backward in time. But it would probably not depict the behaviour of computers, maybe not even classical ones, as well as we believe it does. Still, I didn’t get what is meant by computation being local. Anyway, the difference between physics and complexity theory is as great as the difference between physics and mathematics.

“I guess what Scott wanted to emphasize is that complexity theory is purely (if I may say so) an abstract concept.”

I don’t know what Scott’s position on this issue is, but I disagree that complexity theory is purely an abstract concept. Mathematics may be purely abstract, but computation is not. I would say that this is one of the most revolutionary ideas at the core of quantum information science. The concept is often summed up in Landauer’s mantra, “Information is physical.” A particularly eloquent discussion of this point can be found in one of Deutsch’s original articles on quantum computing (I can’t remember the citation, other than that it’s in Proc. R. Soc. Lond. A). In it [paraphrasing heavily], he posits that one could imagine a universe in which one could prove that, say, 2+2 had an answer, but in which it would be impossible to say with certainty that the answer was indeed four.

“But I didn’t get what is meant by computation being local.”

With the above point of view in hand, namely that physics enables computation, constraints (and, fortunately, possibilities!) in physical law become constraints (and abilities) for computation. Since all known physical law involves interactions that are fundamentally local, computation is also fundamentally local. [Actually, the way in which locality (and causality, etc.) constraints in physical law translate to computational constraints is potentially very subtle and worthy of careful thinking/experiment.]
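As a toy illustration of what “computation is local” can mean (just a sketch; the cellular automaton here is my own example, not anything from the discussion above): each cell below updates from its two immediate neighbors only, yet the update rule, Wolfram’s Rule 110, is known to be Turing-universal, so strictly local dynamics sacrifices no computational power.

# Toy illustration: a local update rule (each cell sees only its two
# nearest neighbors) that is nevertheless Turing-universal (Rule 110).
def step(cells, rule=110):
    """One synchronous update of a 1-D cellular automaton.

    cells: list of 0/1 values, with periodic boundary conditions.
    rule:  Wolfram rule number encoding the 3-bit local update table.
    """
    n = len(cells)
    # table[b] is the new state for a neighborhood whose bits encode b.
    table = [(rule >> b) & 1 for b in range(8)]
    return [table[(cells[(i - 1) % n] << 2) |
                  (cells[i] << 1) |
                  cells[(i + 1) % n]]
            for i in range(n)]

# Example: evolve a single live cell for a few steps.
row = [0] * 31
row[15] = 1
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)

Nothing here depends on Python specifics; the point is only that the per-cell update never looks beyond distance one, mirroring the locality of physical interactions.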