The Blog of Scott Aaronson

If you take just one piece of information from this blog: Quantum computers would not solve hard search problems instantaneously by simply trying all the possible solutions at once.

While I rummage around the brain for something more controversial to blog (that’s nevertheless not too controversial), here, for your reading pleasure, is a talk I gave a couple weeks ago at Google Cambridge. Hardcore Shtetl-Optimized fans will find little here to surprise them, but for new or occasional readers, this is about the clearest statement I’ve written of my religio-ethico-complexity-theoretic beliefs.

What Google Won’t Find

As I probably mentioned when I spoke at your Mountain View location two years ago, it’s a funny feeling when an entity that knows everything that ever can be known or has been known or will be known invites you to give a talk — what are you supposed to say?

Well, I thought I’d talk about “What Google Won’t Find.” In other words, what have we learned over the last 15 years or so about the ultimate physical limits of search — whether it’s search of a physical database like Google’s, or of the more abstract space of solutions to a combinatorial problem?

On the spectrum of computer science, I’m about as theoretical as you can get. One way to put it is that I got through CS grad school at Berkeley without really learning any programming language other than QBASIC. So it might surprise you that earlier this year, I was spending much of my time talking to business reporters. Why? Because there was this company near Vancouver called D-Wave Systems, which was announcing to the press that it had built the world’s first commercial quantum computer.

Let’s ignore the “commercial” part, because I don’t really understand economics — these days, you can apparently make billions of dollars giving away some service for free! Let’s instead focus on the question: did D-Wave actually build a quantum computer? Well, they apparently built a device with 16 very noisy superconducting quantum bits (or qubits), which they say they’ve used to help solve extremely small Sudoku puzzles.

The trouble is, we’ve known for years that if qubits are sufficiently noisy — if they leak a sufficient amount of information into their environment — then they behave essentially like classical bits. Furthermore, D-Wave has refused to answer extremely basic technical questions about how high their noise rates are and so forth — they care about serving their customers, not answering nosy questions from academics. (Recently D-Wave founder Geordie Rose offered to answer my questions if I was interested in buying one of his machines. I replied that I was interested — my offer was $10 US — and I now await his answers as a prospective customer.)

To make a long story short, it’s consistent with the evidence that what D-Wave actually built would best be described as a 16-bit classical computer. I don’t mean 16 bits in terms of the architecture; I mean sixteen actual bits. And there’s some prior art for that.

But that’s actually not what annoyed me the most about the D-Wave announcement. What annoyed me were all the articles in the popular press — including places as reputable as The Economist — that said, what D-Wave has built is a machine that can try every possible solution in parallel and instantly pick the right one. This is what a quantum computer is; this is how it works.

It’s amazing to me how, as soon as the word “quantum” is mentioned, all the ordinary rules of journalism go out the window. No one thinks to ask: is that really what a quantum computer could do?

It turns out that, even though we don’t yet have scalable quantum computers, we do know something about what they could do if we did have them.

A quantum computer is a device that would exploit the laws of quantum mechanics to solve certain computational problems asymptotically faster than we know how to solve them with any computer today. Quantum mechanics — which has been our basic framework for physics for the last 80 years — is a theory that’s like probability theory, except that instead of real numbers called probabilities, you now have complex numbers called amplitudes. And the interesting thing about these complex numbers is that they can “interfere” with each other: they can cancel each other out.

In particular, to find the probability of something happening, you have to add the amplitudes for all the possible ways it could have happened, and then take the square of the absolute value of the result. And if some of the ways an event could happen have positive amplitude and others have negative amplitude, then the amplitudes can cancel out, so that the event doesn’t happen at all. This is exactly what’s going on in the famous double-slit experiment: at certain spots on a screen, the different paths a photon could’ve taken to get to that spot interfere destructively and cancel each other out, and as a result no photon is seen.

Now, the idea of quantum computing is to set up a massive double-slit experiment with exponentially many paths — and to try to arrange things so that the paths leading to wrong answers interfere destructively and cancel each other out, while the paths leading to right answers interfere constructively and are therefore observed with high probability.
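The amplitude rules above fit in a couple of lines. Here is a toy illustration (plain Python, with made-up amplitudes; not a simulation of any particular experiment):

```python
# Probability of an event = squared absolute value of the SUM of the
# amplitudes for all the ways it could happen.
def detection_probability(amplitudes):
    return abs(sum(amplitudes)) ** 2

# Two paths, in phase: constructive interference, the photon is seen.
bright_spot = detection_probability([0.5 + 0j, 0.5 + 0j])

# Two paths, opposite phase: the amplitudes cancel, no photon at all,
# even though each path alone would have given probability 0.25.
dark_spot = detection_probability([0.5 + 0j, -0.5 + 0j])

print(bright_spot, dark_spot)  # -> 1.0 0.0
```

Note that the probabilities don't simply add: each path by itself contributes 0.25, yet together they can give anything from 0 to 1 depending on relative phase.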

You can see it’s a subtle effect that we’re aiming for. And indeed, it’s only for a few specific problems that people have figured out how to choreograph an interference pattern to solve the problem efficiently — that is, in polynomial time.

One of these problems happens to be that of factoring integers. Thirteen years ago, Peter Shor discovered that a quantum computer could efficiently apply Fourier transforms over exponentially large abelian groups, and thereby find the periods of exponentially long periodic sequences, and thereby factor integers, and thereby break the RSA cryptosystem, and thereby snarf people’s credit card numbers. So that’s one application of quantum computers.
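The classical skeleton of Shor's reduction, from period-finding down to factoring, is short enough to sketch. In the toy below, `order` does by brute force the one step the quantum Fourier transform does exponentially faster; 15 and 7 are just a tiny example instance:

```python
from math import gcd

def order(a, n):
    # Brute-force period finding: the step Shor's algorithm speeds up.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_style_factor(n, a):
    # Classical skeleton of Shor's reduction: period -> nontrivial factor.
    if gcd(a, n) > 1:
        return gcd(a, n)          # lucky: a already shares a factor with n
    r = order(a, n)
    if r % 2 == 1:
        return None               # odd period: try another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None               # trivial square root of 1: try another a
    return gcd(y - 1, n)

# The order of 7 mod 15 is 4, so y = 7**2 mod 15 = 4 and
# gcd(4 - 1, 15) = 3 is a factor of 15.
print(shor_style_factor(15, 7))   # -> 3
```

Everything here runs in polynomial time except `order`; knowing the period is what unlocks the factorization.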

On the other hand — and this is the most common misconception about quantum computing I’ve encountered — we do not, repeat do not, know a quantum algorithm to solve NP-complete problems in polynomial time. For “generic” problems of finding a needle in a haystack, most of us believe that quantum computers will give at most a polynomial advantage over classical ones.

At this point I should step back. How many of you have heard of the following question: Does P=NP?

Yeah, this is a problem so profound that it’s appeared on at least two TV shows (The Simpsons and NUMB3RS). It’s also one of the seven (now six) problems for which the Clay Math Institute is offering a million-dollar prize for a solution.

Apparently the mathematicians had to debate whether P vs. NP was “deep” enough to include in their list. Personally, I take it as obvious that it’s the deepest of them all. And the reason is this: if you had a fast algorithm for solving NP-complete problems, then not only could you solve P vs. NP, you could presumably also solve the other six problems. You’d simply program your computer to search through all possible proofs of at most (say) a billion symbols, in some formal system like Zermelo-Fraenkel set theory. If such a proof existed, you’d find it in a reasonable amount of time. (And if the proof had more than a billion symbols, it’s not clear you’d even want to see it!)
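The "search through all possible proofs" strategy is the same brute-force loop that works for any NP problem. Here it is for a toy subset-sum instance, with a polynomial-time verifier standing in for the proof checker (the instance and the helper names are made up for illustration):

```python
from itertools import product

def verify(instance, certificate):
    # Polynomial-time check: does the chosen subset hit the target sum?
    nums, target = instance
    return sum(x for x, used in zip(nums, certificate) if used) == target

def brute_force_search(instance, n):
    # Try every candidate certificate -- the 2**n loop that a fast
    # algorithm for NP-complete problems would have to avoid.
    for certificate in product([0, 1], repeat=n):
        if verify(instance, certificate):
            return certificate
    return None

# Is there a subset of [3, 7, 12, 5] summing to 19? (Yes: 7 + 12.)
print(brute_force_search(([3, 7, 12, 5], 19), 4))
```

Replace `verify` with "is this string a valid ZF proof of the target theorem?" and you have exactly the proof-search procedure described above: easy to check, exponentially expensive to search.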

This raises an important point: many people — even computer scientists — don’t appreciate just how profound the consequences would be if P=NP. They think it’s about scheduling airline flights better, or packing more boxes in your truck. Of course, it is about those things — but the point is that you can have a set of boxes such that if you could pack them into your truck, then you would also have proved the Riemann Hypothesis!

Of course, while the proof eludes us, we believe that P≠NP. We believe there’s no algorithm to solve NP-complete problems in deterministic polynomial time. But personally, I would actually make a stronger conjecture:

There is no physical means to solve NP-complete problems in polynomial time — not with classical computers, not with quantum computers, not with anything else.

You could call this the “No SuperSearch Principle.” It says that, if you’re going to find a needle in a haystack, then you’ve got to expend at least some computational effort sifting through the hay.

I see this principle as analogous to the Second Law of Thermodynamics or the impossibility of superluminal signalling. That is, it’s not merely a technological limitation, but a pretty fundamental fact about the laws of physics. Like those other principles, it could always be falsified by experiment, but after a while it seems manifestly more useful to assume it’s true and then see what the consequences are for other things.

OK, so what do we actually know about the ability of quantum computers to solve NP-complete problems efficiently? Well, of course we can’t prove it’s impossible, since we can’t even prove it’s impossible for classical computers — that’s the P vs. NP problem! We might hope to at least prove that quantum computers can’t solve NP-complete problems in polynomial time unless classical computers can also — but even that, alas, seems far beyond our ability to prove.

What we can prove is this: suppose you throw away the structure of an NP-complete problem, and just consider it as an abstract, featureless space of 2^n possible solutions, where the only thing you can do is guess a solution and check whether it’s right or not. In that case it’s obvious that a classical computer will need ~2^n steps to find a solution. But what if you used a quantum computer, which could “guess” all possible solutions in superposition? Well, even then, you’d still need at least ~2^(n/2) steps to find a solution. This is called the BBBV Theorem, and was one of the first things learned about the power of quantum computers.

Intuitively, even though a quantum computer in some sense involves exponentially many paths or “parallel universes,” the single universe that happened on the answer can’t shout above all the other universes: “hey, over here!” It can only gradually make the others aware of its presence.

As it turns out, the 2^(n/2) bound is actually achievable. For in 1996, Lov Grover showed that a quantum computer can search a list of N items using only ~√N steps. It seems to me that this result should clearly feature in Google’s business plan.
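A statevector sketch of Grover's algorithm makes the √N step count concrete. This is a classical simulation (so it uses exponential memory; the point is only to count iterations), and the marked index 199 is arbitrary:

```python
import numpy as np

def grover_search(n_items, marked):
    # Start in the uniform superposition over all n_items indices.
    state = np.full(n_items, 1 / np.sqrt(n_items))
    iterations = int((np.pi / 4) * np.sqrt(n_items))   # ~sqrt(N) queries
    for _ in range(iterations):
        state[marked] *= -1                # oracle: flip the marked amplitude
        state = 2 * state.mean() - state   # diffusion: invert about the mean
    return int(np.argmax(state ** 2)), iterations

index, steps = grover_search(256, marked=199)
print(index, steps)  # -> 199 12: found with 12 queries, vs ~128 classically
```

Each iteration nudges a little amplitude from the unmarked items to the marked one; that gradual transfer is exactly why the speedup is √N rather than the exponential one the popular accounts imagine.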

Of course in real life, NP-complete problems do have structure, and algorithms like local search and backtrack search exploit that structure. Because of this, the BBBV theorem can’t rule out a fast quantum algorithm for NP-complete problems. It merely shows that, if such an algorithm existed, then it couldn’t work the way 99% of everyone who’s ever heard of quantum computing thinks it would!

You might wonder whether there’s any proposal for a quantum algorithm that would exploit the structure of NP-complete problems. As it turns out, there’s one such proposal: the “quantum adiabatic algorithm” of Farhi et al., which can be seen as the quantum version of simulated annealing. Intriguingly, Farhi and his collaborators proved that, on some problem instances where classical simulated annealing would take exponential time, the quantum adiabatic algorithm takes only polynomial time. Alas, we also know of problem instances where the adiabatic algorithm takes exponential time just as simulated annealing does. So while this is still an active research area, right now the adiabatic algorithm does not look like a magic bullet for solving NP-complete problems.
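For reference, here is classical simulated annealing, the algorithm the adiabatic algorithm is the quantum version of, in minimal form. The one-dimensional landscape below is a made-up toy, not one of Farhi et al.'s instances:

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, steps=20000, t0=2.0):
    # Accept any downhill move; accept uphill moves with probability
    # exp(-dE/T), and cool T toward zero so the walk eventually settles.
    x = x0
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9
        y = neighbor(x)
        d = energy(y) - energy(x)
        if d <= 0 or random.random() < math.exp(-d / t):
            x = y
    return x

random.seed(0)
f = lambda x: x ** 4 - 4 * x ** 2 + x   # double well: minima near x = ±1.4
best = simulated_annealing(f, lambda x: x + random.uniform(-0.5, 0.5), x0=4.0)
print(round(f(best), 2))                # a deep minimum, far below f(4.0)=196
```

The adiabatic algorithm replaces the thermal uphill moves with quantum tunneling through the barriers, which is why it can beat annealing on some instances yet still stall on others.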

If quantum computers can’t solve NP-complete problems in polynomial time, it raises an extremely interesting question: is there any physical means to solve NP-complete problems in polynomial time?

Well, there have been lots of proposals. One of my favorites involves taking two glass plates with pegs between them, and dipping the resulting contraption into a tub of soapy water. The idea is that the soap bubbles that form between the pegs should trace out the minimum Steiner tree — that is, the configuration of line segments of minimum total length connecting the pegs, where the segments can meet at points other than the pegs themselves. Now, this is known to be an NP-hard optimization problem. So, it looks like Nature is solving NP-hard problems in polynomial time!

You might say there’s an obvious difficulty: the soap bubbles could get trapped in a local optimum that’s different from the global optimum. By analogy, a rock in a mountain crevice could reach a lower state of potential energy by rolling up first and then down … but is rarely observed to do so!

And if you said that, you’d be absolutely right. But that didn’t stop two guys a few years ago from writing a paper in which they claimed, not only that soap bubbles solve NP-complete problems in polynomial time, but that that fact proves P=NP! In debates about this paper on newsgroups, several posters raised the duh-obvious point that soap bubbles can get trapped at local optima. But then another poster opined that that’s just an academic “party line,” and that he’d be willing to bet that no one had actually done an experiment to prove it.

Long story short, I went to the hardware store, bought some glass plates, liquid soap, etc., and found that, while Nature does often find a minimum Steiner tree with 4 or 5 pegs, it tends to get stuck at local optima with larger numbers of pegs. Indeed, often the soap bubbles settle down to a configuration which is not even a tree (i.e. contains “cycles of soap”), and thus provably can’t be optimal.
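The local-optimum trap is easy to reproduce in a few lines. Below, a greedy "roll downhill" search on a toy double-well landscape stands in for the soap-film dynamics (an illustrative assumption, not a Steiner-tree solver):

```python
def greedy_descent(f, x, step=0.01):
    # Like the rock in the crevice: move downhill only, and stop at the
    # first point where both neighbors are higher.
    while f(x - step) < f(x) or f(x + step) < f(x):
        x = x - step if f(x - step) < f(x + step) else x + step
    return x

f = lambda x: x ** 4 - 4 * x ** 2 + x   # global min near x = -1.47 (f ~ -5.4)
stuck = greedy_descent(f, x=2.0)        # settles near x = 1.35 (f ~ -2.6)
print(round(stuck, 2), round(f(stuck), 2))
```

Starting from x = 2, the search slides into the shallower well and stays there: to reach the global minimum it would have to roll uphill first, which is precisely what neither rocks nor soap films do.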

The situation is similar for protein folding. Again, people have said that Nature seems to be solving an NP-hard optimization problem in every cell of your body, by letting the proteins fold into their minimum-energy configurations. But there are two problems with this claim. The first problem is that proteins, just like soap bubbles, sometimes get stuck in suboptimal configurations — indeed, it’s believed that’s exactly what happens with Mad Cow Disease. The second problem is that, to the extent that proteins do usually fold into their optimal configurations, there’s an obvious reason why they would: natural selection! If there were a protein that could only be folded by proving the Riemann Hypothesis, the gene that coded for it would quickly get weeded out of the gene pool.

So: quantum computers, soap bubbles, proteins … if we want to solve NP-complete problems in polynomial time in the physical world, what’s left? Well, we can try going to more exotic physics. For example, since we don’t yet have a quantum theory of gravity, people have felt free to speculate that if we did have one, it would give us an efficient way to solve NP-complete problems. For example, maybe the theory would allow closed timelike curves, which would let us solve NP-complete and even harder problems by (in some sense) sending the answer back in time to before we started.

In my view, though, it’s more likely that a quantum theory of gravity will do the exact opposite: that is, it will limit our computational powers, relative to what they would’ve been in a universe without gravity. To see why, consider one of the oldest “extravagant” computing proposals: the Zeno computer. This is a computer that runs the first step of a program in one second, the second step in half a second, the third step in a quarter second, the fourth step in an eighth second, and so on, so that after two seconds it’s run infinitely many steps. (It reminds me of the old joke about the supercomputer that was so fast, it could do an infinite loop in 2.5 seconds.)
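The step times form a geometric series, which is where the "two seconds" comes from:

```python
# The Zeno computer's step times: 1 + 1/2 + 1/4 + ...
# The partial sums approach 2 seconds, which is why "infinitely many
# steps" allegedly finish in finite time.
total = sum(1 / 2 ** k for k in range(60))
print(total)  # -> 2.0 (the infinite series converges to exactly 2)
```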

Question from the floor: In what sense is this even a “proposal”?

Answer: Well, it’s a proposal in the sense that people actually write papers about it! (Google “hypercomputation.”) Whether they should be writing those papers is a separate question…

Now, the Zeno computer strikes most computer scientists — me included — as a joke. But why is it a joke? Can we say anything better than that it feels absurd to us?

As it turns out, this question takes us straight into some of the frontier issues in theoretical physics. In particular, one of the few things physicists think they know about quantum gravity — one of the few things both the string theorists and their critics largely agree on — is that, at the so-called “Planck scale” of about 10^-33 centimeters or 10^-43 seconds, our usual notions of space and time are going to break down. As one manifestation of this, if you tried to build a clock that ticked more than about 10^43 times per second, that clock would use so much energy that it would collapse to a black hole. Ditto for a computer that performed more than about 10^43 operations per second, or for a hard disk that stored more than about 10^69 bits per square meter of surface area. (Together with the finiteness of the speed of light and the exponential expansion of the universe, this implies that, contrary to what you might have thought, there is a fundamental physical limit on how much disk space Gmail will ever be able to offer its subscribers…)
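A back-of-envelope version of the disk-space remark, taking the quoted ~10^69 bits per square meter at face value and, as a deliberately silly assumption, letting Gmail pave the entire surface of the Earth:

```python
import math

BITS_PER_SQUARE_METER = 1e69     # the Planck-scale storage bound quoted above
earth_radius_m = 6.371e6
earth_surface_m2 = 4 * math.pi * earth_radius_m ** 2   # ~5.1e14 m^2

max_bits = BITS_PER_SQUARE_METER * earth_surface_m2
print(f"{max_bits:.1e} bits")    # ~5.1e83 bits: enormous, but finite
```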

To summarize: while I believe what I called the “No SuperSearch Principle” — that is, while I believe there are fundamental physical limits to efficient computer search — I hope I’ve convinced you that understanding why these limits exist takes us straight into some of the deepest issues in math and physics. To me that’s so much the better — since it suggests that not only are the limits correct, but (more importantly) they’re also nontrivial.

60 Responses to “What Google Won’t Find”

“It’s amazing to me how, as soon as the word “quantum” is mentioned, all the ordinary rules of journalism go out the window. No one thinks to ask: is that really what a quantum computer could do?”

Actually, journalists never follow “the ordinary rules of journalism”. Everyone is outraged by how awful the journalistic coverage is for whatever subject they are expert in, but everyone assumes that this is just how journalists treat that one, two, three, or four hundred and eighteen subjects and that for all other subjects they are perfectly objective high-caliber rationalists. My guess is that this phenomenon is related to base rate neglect and the failure to regress to the mean which so poisons most thinking on so many subjects.

This was actually really useful to me, thanks. I think a lot of the confusion about whether a quantum computer can solve NP-complete problems in polynomial time comes from the existence of Shor’s algorithm; I had been under the impression that integer factorization was NP-complete, and thus Shor’s algorithm would solve all NP-complete problems. On reading this essay I did some Googling (perhaps appropriately), and came up with this:

One can test a factorization quickly (just multiply the factors back together again), so the problem is in NP. Finding a factorization seems to be difficult, and we think it may not be in P. However there is some strong evidence that it is not NP-complete either; it seems to be one of the (very rare) examples of problems between P and NP-complete in difficulty.
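The easy half of that claim — factoring is in NP because a factorization is quick to check — can be seen in a couple of lines (a toy check, not part of the quoted source):

```python
def verify_factorization(n, factors):
    # Polynomial-time verification: just multiply the claimed factors
    # back together and require each to be nontrivial.
    product = 1
    for f in factors:
        product *= f
    return product == n and all(f > 1 for f in factors)

print(verify_factorization(35, [5, 7]))   # -> True
print(verify_factorization(35, [3, 11]))  # -> False
```

Finding the factors in the first place is the direction nobody knows how to do classically in polynomial time.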

With regards to your “No SuperSearch Principle”, in the context of parallel classical computing, what is a good paper describing the limits for evaluating NP problems in parallel? Can you get a O(f(n)/p) runtime, is it more like O(log(p)*f(n)/p), or is there nothing better than serial O(f(n))?

Well quantum computing, in terms of the way pop culture and the media treat it, seems no different today than neural networks and AI did in the 80s. It was going to be the NEXT BIG THING, but without any solid applications the funding dried up and nobody really talks about neural networks anymore — and even AI has taken a backseat in pop culture to the larger question of whether it can even be done. There is an occasional popular book or two and there are many applications of the technology, but popular and media interest isn’t there anymore. I see quantum computing falling in the same realm. Eventually there will be quantum computers and eventually they will solve problems, and when popular culture and media recognize that it doesn’t make a damn bit of difference in their lives as far as they can tell, interest and funding will dry up… Until then, enjoy the hype.

Yes, but we can prune the solution space at least a little. Monien’s ’85 paper showed we can do serial SAT in O(1.619^n), well under O(2^n). How much redundant pruning do we incur for parallel implementations? Can we prove it is constant?

Excellent post: I am not sure if you saw the NY times article a few weeks ago about the matrix within a matrix ideas.. that we are all a simulation, or that the universe is a computer.. not necessarily a new idea.. but it is interesting to ask if the universe is a computer…

Now, if the universe is a “computer” it means that it has to be constructed from some set of computable functions. Quantum mechanics is a set of such rules and can be seen as the superset of such functions… Yet, now a paradox.. by this definition our brain is also a computer since it is fundamentally built from particles, yet we can define and reason about non-computable functions… I think Godel showed this..

So.. that suggests that either brain operates on functions defined below the Planck constant and if so, the universe does… and then there must be some sort of new physics at this level that is important… and perhaps these apparent non-computable functions are actually computable at a higher level. But I can’t imagine that because it wouldn’t address Turing equivalence.. for which the definition of P = NP is sort of necessary.

For me, just like we cannot solve our own halting problem.. yet we can affect it by interaction.

Sami, I try to choose which research areas to study based on (1) how fundamental the questions are and (2) how much I can contribute, not on the level of media hype. And when I do talk to journalists about quantum computing, it’s almost always for the purpose of fighting irresponsible hype. So I don’t know to whom your comment is addressed.

Chad: In principle, there can indeed be heuristic algorithms for NP-complete problems that are hard to parallelize. But in practice, most heuristic algorithms are parallelizable as well. For example, if you want to use simulated annealing, you’ll probably do almost T times as well if you run it on T processors, each with a different random starting point. This is not a theorem — but then again, there are very few theorems about the performance of simulated annealing even on a single processor. As for parallelizing those local search algorithms that have provable performance guarantees, I dunno — does anyone know a good reference?
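The "almost T times as well" heuristic comes down to independent trials; the per-run success probability below is made up for illustration:

```python
# If a single annealing run reaches the optimum with probability p, then
# T independent runs from random starting points ALL fail with
# probability (1 - p)**T -- roughly a T-fold boost while p*T is small.
p = 0.01
for t in (1, 10, 100):
    print(t, round(1 - (1 - p) ** t, 3))
# -> 1 0.01 / 10 0.096 / 100 0.634
```

This is the embarrassingly parallel case; the open question in the comment above is whether anything provable can be said when the runs share information.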

i enjoyed this post a lot, despite being a regular reader. just one question, i can’t tell: is the line, “It seems to me that this result should clearly feature in Google’s business plan. Of course in real life…” a bit of a joke?

Sorry, that was kind of insincere of me, I was actually addressing the comment to Michael and generally agreeing that the journalist rules go out of the window in regards to Quantum computing and was dwelling on the past vs. the present — how it’s always been the same. I did not mean to imply that you enjoy the hype ;). Anyhow, great article, it was a good read even for someone not all that familiar with the material.

1. Is protein folding actually an NP-complete problem?

2. Are you saying that nature has solved this (possibly) NP-complete problem, but that it has taken nature a damn long time to do it?

3. If you hooked together 100 random amino acids and then let them fold up, how often would the artificial protein form a minimum energy configuration?

4. If the answer to 1 is “yes” and the answer to 3 is “99% of the time”, then putting the two together suggests that someone should make a reduction from the traveling salesman problem to protein folding.

1. Under a common idealization of it (called the “hydrophobic model”), yes.

2. No. I’m saying something very different: that because the problem is hard, nature (in the form of natural selection) has focused on easy instances of it.

3. I don’t know! In any case, for ordinary NP-completeness reductions, we care about the worst-case rather than the average-case. (Reductions typically produce instances that are anything but random, even if you started with a random instance of the original problem.)

Greg: I actually discussed that at the Google Cambridge talk, although I forgot to include it in the notes. Philosophically, it’s pretty much the same as the special-relativistic hypercomputer, which chugs away at your problem while you board a spaceship that accelerates to near the speed of light, then decelerates and returns to Earth. When you get back, all your friends are long dead, but the computer’s solved your 3SAT instance!

The fundamental problem with this proposal is the amount of energy needed to accelerate to relativistic speed (or in the GR case, to get really close to an event horizon without falling in). One can calculate that, if you want an exponential speedup, then you have to get exponentially close to the speed of light or to the event horizon — which in both cases requires an exponential amount of energy. Therefore your fuel tank, or whatever else is powering your spaceship, will have to be exponentially large. Therefore you’ll again need an exponential amount of time, just for the fuel in the far parts of the tank to affect you!
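In rough numbers (assumed 1000 kg ship, kinetic energy (gamma − 1)mc²): the time-dilation factor gamma you would need grows exponentially with the desired speedup, and the energy bill grows right along with it:

```python
def kinetic_energy_joules(gamma, mass_kg=1000.0):
    # Relativistic kinetic energy (gamma - 1) * m * c**2 is linear in
    # gamma, so an exponentially large time-dilation factor costs
    # exponentially much fuel.
    c = 3e8  # speed of light in m/s (rounded)
    return (gamma - 1) * mass_kg * c ** 2

for gamma in (1e1, 1e6, 1e12):
    print(f"gamma = {gamma:.0e}: {kinetic_energy_joules(gamma):.1e} J")
```

For comparison, gamma = 10 already costs on the order of the world's annual energy consumption for a one-ton ship, and each extra factor in the speedup multiplies the bill accordingly.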

I think the question of commercial viability is how often a system avoids getting stuck in local minima. Are the probabilities of finding the optimal solution usable for sufficiently complex problems?

If the best alternative systems only come up with an optimal solution for N=4, and special cases of N=5, but the new system can fairly frequently get optimal solutions up to N=100, or less frequently N=10000, then that could be a commercial advance.

Nature’s protein folding may not be perfect for an NP-hard problem, but it is sufficiently successful to allow life to form: a commercially useful solution capability.

The comment on the Clay problems is a bit off. I agree that if P=NP then one has a shot at algorithmically solving the other problems but, as you yourself argue, P is not NP, so solving this problem doesn’t help solve the others. Which one is most important is a tricky question.

Interestingly, the only Clay problem for which you get the prize for a counterexample is P vs NP. Finding a zero of the zeta function off the line (unlikely) or a counterexample to the Hodge conjecture (very likely in higher dimensions and codimensions) gets you nothing.

Exploit Moore’s Law by waiting an amount of time proportional to N such that exponentially increasing computing power is so much cheaper that you can now solve the problem at hand in a reasonable amount of time.
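Sketching the arithmetic behind that (assuming, say, a doubling of speed every 1.5 years and one step per second today): the waiting time for a 2^n-step search is only linear in n, which is why the trick would work if the doubling never stopped.

```python
# To run 2**n steps in one second after waiting t years, you need speed
# 2**(t / doubling_years) steps/sec, i.e. t = doubling_years * n.
def years_to_wait(n, doubling_years=1.5):
    return doubling_years * n

print(years_to_wait(100))  # -> 150.0 years for a 2**100-step search
```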

If Google really does want to organize a googol bits, we’re going to need several solar systems, or perhaps galaxies worth of atoms, just to store the data. Now you’re saying we’re going to need several supernovas just to power each search? With so many searches per second, I foresee massive scalability problems for Google!

Ray: Your proposal would actually work, if not for the fact that Moore’s “Law” is running up against fundamental physical limits, and therefore has to break down soon! (Indeed, it’s already slowed significantly.)

On the spectrum of computer science, I’m about as theoretical as you can get. One way to put it is that I got through CS grad school at Berkeley without really learning any programming language other than QBASIC.

That’s the single most devastating critique of the current state of computer “science” education ever penned.

Interestingly, the only Clay problem for which you get the prize for a counterexample is P vs NP.

Eh? When the rumor that the Clay Navier-Stokes problem was solved did the rounds, people discussed that the 4 statements paired up 2 and 2 against each other: either prove existence and smoothness of solutions (on 2 common spaces) or prove breakdown (on the same spaces).

If you hooked together 100 random amino acids and then let them fold up, how often would the artificial protein form a minimum energy configuration?

IANAB. (I am not a biologist.)

But protein folding seems to be researched a lot, in the evolutionary context where an easily foldable protein evolves to another easily foldable one.

It’s somewhat ironic that in a post where you discuss quantum mechanics being misrepresented, you do a bit of it yourself. I am sure you know that what you said wasn’t quite right, but I’ll elucidate for readers who may not be versed in quantum mechanics:

You said: “And if some of the ways an event could happen have positive amplitude and others have negative amplitude, then the amplitudes can cancel out, so that the event doesn’t happen at all.”

Complex numbers don’t have magnitudes like positive and negative. We can add them vectorially (tip-to-tail), and if the tip of the last vector points to the tail of the first, then the event doesn’t happen at all.

[/pedantic]

I too am surprised that reputable journals misreported the significance of the D-Wave system. I was shocked that these major journals didn’t have a decent physics graduate on their editorial team.

Masud: Every verbal account of quantum mechanics is a lie; the key is knowing what’s OK to lie about and what isn’t!

In this case, it’s known that complex numbers are inessential for quantum computing: one can assume without loss of generality that all amplitudes are real and all transformations are orthogonal, in which case my description of interference would be correct.

i’m interested in knowing your opinion of the recent slashdot article regarding a couple of german university students who’d drummed up a solution to the TSP using optics. their claim was that it was a way to show P = NP.

their article talked a bit about noise, and i wonder if your comment about noisy qubits acting like normal bits might apply.

Scott: Thanks for the link, but no-fair comparing coverage of events to coverage of fields. The former is much easier. Coverage of economics, for instance, “The dow closed down 2 points today on…” oh god, make it stop…the pain.

Scott: The second problem is that, to the extent that proteins do usually fold into their optimal configurations, there’s an obvious reason why they would: natural selection! If there were a protein that could only be folded by proving the Riemann Hypothesis, the gene that coded for it would quickly get weeded out of the gene pool.

travis: If you hooked together 100 random amino acids and then let them fold up, how often would the artificial protein form a minimum energy configuration

To my best understanding, the hypothesis presented by Scott is that proteins have evolved to fold into the minimum energy configuration, which seems to make a lot of sense – but do you know of evidence for that? (e.g. data comparing the fraction of times a true protein folds into the minimum energy configuration or close to it with that of a random amino acid chain)

On second thought, why is it so important that the protein folds into the lowest-energy configuration? What if it folds (rapidly) to a poor local minimum? Is it clear that it will have a selection disadvantage? (Will these proteins be less stable? But I can imagine that this could be good in some scenarios, e.g. the immune system.)

What about the colored string method of solving the traveling salesman problem — where you connect pins on the map with tied colored strings, pull the strings off the map into a “straight bundle,” and then see the longest single colored line?

oz: The main thing you want is for the protein to fold reliably — i.e. into more-or-less the same configuration every time. And that becomes harder if there are lots of local minima. Alas, I’m not a biologist and I don’t know any relevant experiments — any biologists care to comment?

Ray: Your proposal would actually work, if not for the fact that Moore’s “Law” is running up against fundamental physical limits, and therefore has to break down soon! (Indeed, it’s already slowed significantly.)

I disagree. The trend of increasing the number of transistors at the same geometric rate has not leveled off appreciably, despite the power (and thereby frequency) ceiling demanded by the requirement to keep the Si substrate in pristine condition (i.e., not melted into a puddle of grey goo). If anything, the underlying physics is not the problem; it’s the design side that seems to hold up the works. But process has always given design types an embarrassment of riches in that regard.

I guess what matters is the structure of the energy landscape – I’d appreciate it if anyone has a reference. (I could imagine designing the solution space so that there are many local minima, yet whenever you start from the same initial conditions – e.g. amino acids produced in a chain one by one – and simulate the laws of physics, you’re attracted to the same local minimum almost every time.)
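That intuition is easy to see in a toy model. The sketch below assumes nothing about real proteins – `energy` is just a made-up 1-D landscape (cosine wells on a shallow quadratic background), and “the laws of physics” are replaced by plain deterministic gradient descent. The point is only that noise-free dynamics started from the same initial condition always relax into the same local minimum, even though the landscape has several:

```python
import math

def energy(x):
    # Hypothetical 1-D "energy landscape" with several local minima
    # (cosine wells on a shallow quadratic background); a toy, not a
    # model of any real protein.
    return math.cos(3 * x) + 0.1 * x * x

def grad(x, h=1e-6):
    # Numerical derivative of the energy (central difference).
    return (energy(x + h) - energy(x - h)) / (2 * h)

def relax(x0, steps=5000, lr=0.01):
    # Deterministic gradient descent: no noise, so the final
    # configuration is a pure function of the starting point.
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# The same initial condition always relaxes into the same well...
assert relax(0.5) == relax(0.5)
# ...while a sufficiently different start can land in a different well.
print(relax(0.5), relax(2.5))
```

With any randomness (thermal noise, a crowded cellular environment), reliability would instead depend on how deep and well-separated the wells are – which is presumably where the real biological question lives.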

“… if you want an exponential speedup, then you have to get exponentially close to the speed of light or to the event horizon — which in both cases requires an exponential amount of energy. Therefore your fuel tank, or whatever else is powering your spaceship, will have to be exponentially large. Therefore you’ll again need an exponential amount of time, just for the fuel in the far parts of the tank to affect you!”

This is very interesting. It implies that time symmetry (the basis for energy conservation) is an assumption underlying standard complexity theory; or, turning it around, if you could violate energy conservation (i.e., time symmetry as a law of nature), you could also get exponential speedups in computation. I wonder if there are arguments like this for the other big symmetries (momentum/position, angular momentum/rotation)? This feels like what a contradiction in a logical system feels like: all of a sudden you can do anything.
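For reference, the quoted energy argument follows from nothing but the textbook special-relativity formulas (the asymptotics here are my own back-of-the-envelope gloss, not a claim from the talk):

```latex
% Time dilation and relativistic kinetic energy:
\[
  \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},
  \qquad
  E_{\mathrm{kin}} = (\gamma - 1)\,m c^2 .
\]
% An exponential speedup means a dilation factor \gamma \sim 2^n, so
\[
  E_{\mathrm{kin}} \approx 2^n\, m c^2 ,
\]
% which is exponential in n. A fuel tank with bounded energy density
% holding that much energy has radius \Omega(2^{n/3}), so even light
% from its far side takes time exponential in n to reach the ship.
```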

proteins:
Nature doesn’t just select for proteins that fold the way it wants; it also gives them hints. Synonymous DNA sequences – those that code for identical amino acid chains – can produce different configurations. One theory (I don’t know how well it’s been tested) is that rare codons slow down protein synthesis, letting part of the chain get a head start on folding. A cursory Google search turned up this from 1999, where an artificial mutation seems to produce higher rates of misfolding, and from 2007 the claim that an antibiotic resistance may be due to such a substitution.

Is it clear that it will have a selection disadvantage? (Will these proteins be less stable?)

It is late, so I’m not up to looking for many references, but IIRC the post about chaperones and its references discuss the incredibly chemically crowded environment inside a cell, which is one other reason chaperones are used. (And generally, one of the reasons why most cells have a lot of compartments.) You may get different “non-biological” foldings in other (freer) environments.

All seven Clay Millennium Prizes are still outstanding; none have been awarded. It is widely believed that when CMI convenes an advisory board to review Perelman’s proof of the Poincaré conjecture (which, under the rules of the competition, they will be able to do no sooner than June of 2008), they will declare the proof correct and offer the prize to Grigoriy Perelman, who, it is widely believed, will refuse it. However, none of these things have happened yet.

[OT] Thanks for the link. Scanning the paper, it looks legit, and while they can only claim a correlation between efficiency and speciation, impressively the independent cladogram based on other traits concurs. Quite a find you made. [/OT]

I’m having difficulty getting a clearer conception of the “protein folding as computation” notion. Scott, you talked about certain protein-folding goals as encoding a theorem – I know this relates to complexity classes, solution spaces, and the like, but I’m not making the connection between physical computation and function optimization. Any links/tips?