a personal view of the theory of computation

Quantum Groundhog Day

Happy Groundhog Day! This doesn’t actually count as a holiday, but at the very least it is the one day each year that every American can devote to the contemplation of quantum physics. Can there be any bigger fun than that?

Well, we hope the comments flying today in our own recent item on quantum computers are edifying as well as fun. That item is the first of a series, but before it recurs we thought we’d summarize some of what has been said. We fear the pith may otherwise get “lost in translation,” as in the famous film in which Bill Murray delivered an Oscar-nominated performance in a serious role. Murray charges a $100,000 speaking fee, which a bet posted here would just cover, so we should get serious too. Here goes…

Roadmap to Comments

Cristopher Moore of the Santa Fe Institute was the first to give an evaluation of Gil Kalai’s conjectures. He began with the position that, given known theoretical results, arguing limitations on quantum computers (QC’s) entails arguing against quantum mechanics itself; his main request was that arguments focus on the underlying physics. Gil Kalai responded by positing some limits on QC’s that everyone might agree on, though this was not clearly so to Steve Flammia and Boaz Barak, who joined that thread. John Sidles gave supplementary considerations and recalled a Theoretical Physics StackExchange thread opened by Kalai last September.

Barak then asked whether Gil’s conjectures are intended to knock the power of BQP down to the classical BPP. Gil thought yes, but said this is not yet the central issue. Gil then opened a second thread responding to Moore and Flammia. One point of contention became whether noise can really be separated from the main physical process being analyzed, insofar as noise arises from the same ambit of physical processes. Sidles noted that “noise” in plasma physics has frustrated efforts at net-gain fusion energy.

Physicist Robert Alicki, who was referenced in the post itself, then weighed in here and here with details on the physics underlying the objections to FTQC. Geordie Rose, CTO of D-Wave Systems, meanwhile noted progress in building QC’s, though he criticized the standard quantum circuit model by which the feasibility of QC’s is mainly argued.

Some people are much more ready to accept a reality in which QC’s are possible in principle but will take a hundred years to build, or even will never be built at all, than a reality in which there is a 10% chance that a principle explaining why quantum computers are infeasible will be discovered, but otherwise they will be built within 20 years.

Scott’s analogy of human flight to Mars is a good one for how I think about it.

Will we send a human to Mars in the next 20 or 100 years? I’d assign some low probabilities to this, like 1% or so.

Is doing so absolutely impossible, like faster-than-light travel? I’d assign probability 0.000001% to this. This probability is pretty much the floor that I assign to possibilities like our universe being a simulation, or that I’m insane and not able to rationally make predictions, or that there’s some other reason that most of my fundamental beliefs about the world are totally wrong.

I am slightly less confident in the in-principle feasibility of QC. But I am confident enough to argue for it. :)

Ok, I’ll toss in my 0.2 cents. My first point is that I have no idea what I am talking about. I’ve only ever had a quick romp through QM, and I’ve seen a bit of stuff about QC but never looked at it in detail. But it’s Friday, and I figured I’d just chime in with the totally naive observer perspective.

It was implied earlier that not believing in QM would mean not believing in QC. That’s fundamentally my position, although it is not quite that simple. I keep zeroing in on the noise aspect. I see the underlying probabilities in QM as an artifact of the boundary between two discrete formal systems. Ignoring duality for the moment, a universe composed of particles that inhabit discrete locations, behaving in a repeatable manner, forms a deterministic system. A larger non-aligned system overlaid on top of this would induce a probabilistic mismatch between the two systems.

By way of analogy, I see it as looking through a camera at the world, but one where the lens is distorted and blurry. If that’s your only perspective, you could easily conclude that what you are seeing in the viewfinder does indeed match reality. If you see a blurry cat in front of you, you can reach out and touch the blurry cat, so it would confirm your conviction that you are seeing the world as it is, which to some degree you are.

But what you are also getting in that picture is a significant number of artifacts — noise — that make it difficult to precisely find the boundaries of the cat. You could ignore this noise, or work it into your world philosophy, but the noise itself is an artifact of the lens you are using, not of the world around you.

Getting back to quantum computers, if the fundamental underlying issue is that the noise is significant in preventing one from utilizing these constructs for deterministic calculation, it seems to me that the noise might not be coming from the underlying physics, but rather from the model being used to describe the underlying physics. And it would also seem to me that the ability to build a QC is intrinsically tied to the accuracy of QM. The model might be workable, but that doesn’t mean that it’s the best or most accurate model of the world around us. A much better lens for the camera might eliminate the probability of reaching out for the cat, but missing …

I liked your post a lot, it was clear and well-articulated. But I disagree. :)

Like it or not, nature just isn’t deterministic. No deterministic theory can explain phenomena like this without getting lots of other things wrong. QM may not be the final word, but it’s certainly right about probabilities.

To put it another way, it happens to be a fact about our world that a maximally informative set of information about the state of a system (even an ‘ideal’ system) does not completely determine the outcomes of all possible measurements performed on that system. If you perform one of these undeterminable measurements, you will indeed obtain a value, but in doing so you will destroy the OTHER information you previously had about the system. In other words, the outcomes of experiments which otherwise would have been predictable with certainty no longer are. One of the triumphs of quantum theory is packaging this (and other) strangeness in a remarkably concise framework.
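This point can be made concrete with a toy simulation. The following is my own minimal sketch, not part of the comment: a single qubit prepared in the |+> state, for which an X measurement is predictable with certainty, loses that certainty the moment a Z measurement is performed.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(state, basis):
    """Born-rule measurement: pick a basis vector with probability
    |<b|state>|^2 and collapse the state onto it."""
    probs = np.array([abs(np.vdot(b, state)) ** 2 for b in basis])
    k = rng.choice(len(basis), p=probs / probs.sum())
    return k, basis[k]

z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # |0>, |1>
plus  = np.array([1.0,  1.0]) / np.sqrt(2)               # |+>
minus = np.array([1.0, -1.0]) / np.sqrt(2)               # |->
x_basis = [plus, minus]

# Prepared in |+>, the X outcome is predictable with certainty:
before = {measure(plus, x_basis)[0] for _ in range(200)}   # always outcome 0

# A Z measurement yields a random bit, and afterward the X outcome
# is no longer certain: the previously held information is destroyed.
_, collapsed = measure(plus, z_basis)
after = {measure(collapsed, x_basis)[0] for _ in range(200)}
```

Repeated X measurements on fresh |+> states give the same outcome every time, while after the intervening Z measurement both X outcomes occur.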

You say, “[quantum mechanics] might be workable, but that doesn’t mean that it’s the best or most accurate model of the world around us.” You’re certainly right that QM doesn’t have to be exactly right. But certain essential features are here to stay. For example, Newton was wrong about gravity. But clearly any theory of gravity had to explain the success of Newton’s theory – for example, it had to predict that massive objects should experience an attractive force that is, in ordinary regimes, at least roughly proportional to r^(-2).

And quantum theory has been tested with far more precision than any other theory in the history of science. Certain parameters in QED have been shown to agree with experiment to more than 10 significant figures. That’s like measuring the distance to the moon and worrying about millimeters… And since so little is put into the theory, these successes reflect an accurate theoretical structure, rather than successful fine-tuning. It is truly remarkable. So my point is this: while it’s possible, or even likely, that improvements to this theory may be found, these modifications will have to respect the profound successes that the theory has had. And this necessarily implies that the new theory will also be probabilistic.
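The moon comparison is easy to sanity-check. Here is a quick back-of-envelope calculation of my own (not part of the comment), applying a relative precision of 1e-11 (eleven significant figures, i.e. “more than 10”) to the mean Earth-Moon distance of about 384,400 km:

```python
# Back-of-envelope check of the moon analogy (illustrative only).
moon_distance_m = 3.844e8        # mean Earth-Moon distance, ~384,400 km
relative_precision = 1e-11       # "more than 10 significant figures"

uncertainty_m = moon_distance_m * relative_precision
uncertainty_mm = uncertainty_m * 1000
print(uncertainty_mm)            # roughly 3.8 mm: millimeter scale indeed
```

So a measurement good to eleven significant figures really does pin the moon’s distance down to a few millimeters.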

At least, the theory has to be probabilistic several orders of magnitude below what we can directly see (so a thousand times smaller than an electron or more). But conceivably there could be an immense collection of deterministic events operating at scales much much smaller than what we can measure, as long as the behavior of this hidden assembly produces quantum phenomena in the aggregate. This would not be unlike the emergence of thermodynamics from statistical mechanics.
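The thermodynamics analogy can be illustrated with a toy example of my own (not the commenter’s): the logistic map is strictly deterministic, given the seed, yet coarse-grained observations of its trajectory behave statistically like coin flips.

```python
# Deterministic dynamics producing random-looking aggregate statistics:
# the chaotic logistic map x -> 4x(1-x).
def logistic_bits(x, n):
    """Iterate the map n times, recording 1 whenever the state exceeds 1/2."""
    bits = []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

bits = logistic_bits(0.3, 10_000)
frequency = sum(bits) / len(bits)   # hovers near 1/2, like a fair coin
```

Every bit is fully determined by the starting value 0.3, but an observer who sees only the bit stream would naturally model it probabilistically.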

But the important point is that even if this rather fantastical picture turns out to be true, the behavior of the hidden assembly must be PRECISELY (well, to 10 sig figs) that which produces the quantum behavior we are familiar with. Thus, discovering that this is the case would not change the fact that phenomena at quantum scales are unavoidably probabilistic. And also note that this scenario would be quite different from the noise coming “from the model” rather than from the physics. The noise comes from the physics, and if our model did not reflect that noise, it would be wrong.

For mathematicians, yes. For scientists however, Bill Murray’s most soulful role is the aging-yet-still-committed Steve Zissou in the meta-eponymous film The Life Aquatic with Steve Zissou (2004), in which the elusive “jaguar shark” stands as a metaphor for quantum factoring engines (and many another scientific quest too).

The evidence for quantum mechanics is overwhelming, and no one has a physical theory that permits quantum mechanics but not quantum computing. Gil is trying to develop one, but it’s not yet the kind of theory like the Standard Model that you can use to make precise physical predictions.

Aram, that is like saying that no one has a physical theory that permits conventional physics but not time travel. The Standard Model makes precise predictions, but it does not make any predictions that support either time travel or quantum computing.

Please let me express the hope (and the expectation) that this wonderful GLL-hosted QM / QC / QIT debate will escape the dreadful quagmire into which the climate-change debates have fallen; a quagmire in which the weakest forms of “gotcha” criticism assail the weakest forms of “not even wrong” science.

It is sobering to reflect that the dogmatically polarized climate-change debate has been characterized largely by invective and disputed wagers rather than by creative math-and-science. The resulting dialogs surely are heated, yet regrettably not illuminating.

We all prosper when the strongest and most innovative (yet kindly and collegial) criticisms assail the strongest and most successful (yet flexible) scientific theories, and these are the posts that I have most enjoyed reading here on GLL.

Fortunately, there have been plenty of illuminating posts on both sides of the debate … more please! :):):)