The Blog of Scott Aaronson

If you take just one piece of information from this blog: Quantum computers would not solve hard search problems instantaneously by simply trying all the possible solutions at once.

Archive for October, 2012

Update (10/31): While I continue to engage in surreal arguments in the comments section—Scott, I’m profoundly disappointed that a scientist like you, who surely knows better, would be so sloppy as to assert without any real proof that just because it has tusks and a trunk, and looks and sounds like an elephant, and is the size of an elephant, that it therefore is an elephant, completely ignoring the blah blah blah blah blah—while I do that, there are a few glimmerings that the rest of the world is finally starting to get it. A new story from The Onion, which I regard as almost the only real newspaper left:

I’m writing from the abstract, hypothetical future that climate-change alarmists talk about—the one where huge tropical storms batter the northeastern US, coastal cities are flooded, hundreds of thousands are evacuated from their homes, etc. I always imagined that, when this future finally showed up, at least I’d have the satisfaction of seeing the deniers admit they were grievously wrong, and that I and those who think similarly were right. Which, for an academic, is a satisfaction that has to be balanced carefully against the possible destruction of the world. I don’t think I had the imagination to foresee that the prophesied future would actually arrive, and that climate change would simultaneously disappear as a political issue—with the forces of know-nothingism bolder than ever, pressing their advantage into questions like whether or not raped women can get pregnant, as the President weakly pleads that he too favors more oil drilling. I should have known from years of blogging that, if you hope for the consolation of seeing those who are wrong admit to being wrong, you hope for a form of happiness all but unattainable in this world.

Yet, if the transformation of the eastern seaboard into something out of the Jurassic hasn’t brought me that satisfaction, it has brought a different, completely unanticipated benefit. Trapped in my apartment, with the campus closed and all meetings cancelled, I’ve found, for the first time in months, that I actually have some time to write papers. (And, well, blog posts.) Because of this, part of me wishes that the hurricane would continue all week, even a month or two (minus, of course, the power outages, evacuations, and other nasty side effects). I could learn to like this future.

At this point in the post, I was going to transition cleverly into an almost (but not completely) unrelated question about the nature of causality. But I now realize that the mention of hurricanes and (especially) climate change will overshadow anything I have to say about more abstract matters. So I’ll save the causality stuff for tomorrow or Wednesday. Hopefully the hurricane will still be here, and I’ll have time to write.

Update (10/10): In case anyone is interested, here’s a comment I posted over at Cosmic Variance, responding to a question about the relevance of Haroche and Wineland’s work for the interpretation of quantum mechanics.

The experiments of Haroche and Wineland, phenomenal as they are, have zero implications one way or the other for the MWI/Copenhagen debate (nor, for that matter, for third-party candidates like Bohm 🙂 ). In other words, while doing these experiments is a tremendous challenge requiring lots of new ideas, no sane proponent of any interpretation would have made predictions for their outcomes other than the ones that were observed. To do an experiment about which the proponents of different interpretations might conceivably diverge, it would be necessary to try to demonstrate quantum interference in a much, much larger system — for example, a brain or an artificially-intelligent quantum computer. And even then, the different interpretations arguably don’t make differing predictions about what the published results of such an experiment would be. If they differ at all, it’s in what they claim, or refuse to claim, about the experiences of the subject of the experiment, while the experiment is underway. But if quantum mechanics is right, then the subject would necessarily have forgotten those experiences by the end of the experiment — since otherwise, no interference could be observed!

So, yeah, barring any change to the framework of quantum mechanics itself, it seems likely that people will be arguing about its interpretation forever. Sorry about that. 🙂

Where is he? So many wild claims being leveled, so many opportunities to set the record straight, and yet he completely fails to respond. Where’s the passion he showed just four years ago? Doesn’t he realize that having the facts on his side isn’t enough, has never been enough? It’s as if his mind is off somewhere else, or as if he’s tired of his role as a public communicator and no longer feels like performing it. Is his silence part of some devious master plan? Is he simply suffering from a lack of oxygen in the brain? What’s going on?

Yeah, yeah, I know. I should blog more. I’ll have more coming soon, but for now, two big announcements related to quantum computing.

Today the 2012 Nobel Prize in Physics was awarded jointly to Serge Haroche and David Wineland, “for ground-breaking experimental methods that enable measuring and manipulation of individual quantum systems.” I’m not very familiar with Haroche’s work, but I’ve known of Wineland for a long time as possibly the top quantum computing experimentalist in the business, setting one record after another in trapped-ion experiments. In awarding this prize, the Swedes have recognized the phenomenal advances in atomic, molecular, and optical physics that have already happened over the last two decades, largely motivated by the goal of building a scalable quantum computer (along with other, not entirely unrelated goals, like more accurate atomic clocks). In so doing, they’ve given what’s arguably the first-ever “Nobel Prize for quantum computing research,” without violating their policy to reward only work that’s been directly confirmed by experiment. Huge congratulations to Haroche and Wineland!!

In other quantum computing developments: yes, I’m aware of the latest news from D-Wave, which includes millions of dollars in new funding from Jeff Bezos (the founder of Amazon.com, recipient of a large fraction of my salary). Despite having officially retired as Chief D-Wave Skeptic, I posted a comment on Tom Simonite’s article in MIT Technology Review, and also sent the following email to a journalist.

I’m probably not a good person to comment on the “business” aspects of D-Wave. They’ve been extremely successful raising money in the past, so it’s not surprising to me that they continue to be successful. For me, three crucial points to keep in mind are:

(1) D-Wave still hasn’t demonstrated 2-qubit entanglement, which I see as one of the non-negotiable “sanity checks” for scalable quantum computing. In other words: if you’re producing entanglement, then you might or might not be getting quantum speedups, but if you’re not producing entanglement, then our current understanding fails to explain how you could possibly be getting quantum speedups.
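As an aside not in my original email: for pure two-qubit states, one standard way to make the “are you producing entanglement?” sanity check quantitative is the concurrence, which is 0 for product states and 1 for maximally entangled ones. Here’s a minimal numpy sketch (illustrative only; certifying entanglement in real hardware requires tomography or witness measurements on mixed states, which is harder):

```python
import numpy as np

# Concurrence of a pure 2-qubit state |psi> (a 4-component vector):
#   C = |<psi| (sigma_y (x) sigma_y) |psi*>|
# C = 0 for product states; C = 1 for maximally entangled states.
sy = np.array([[0, -1j], [1j, 0]])
yy = np.kron(sy, sy)

def concurrence(psi):
    psi = np.asarray(psi, dtype=complex)
    psi = psi / np.linalg.norm(psi)
    return abs(psi.conj() @ yy @ psi.conj())

bell = [1, 0, 0, 1]       # (|00> + |11>)/sqrt(2) after normalization
product = [1, 0, 0, 0]    # |00>, a product state

print(concurrence(bell))     # ≈ 1.0: maximally entangled
print(concurrence(product))  # ≈ 0.0: no entanglement
```

The point of the sanity check is exactly this dichotomy: a device whose states all look like `product` has no known route to a quantum speedup.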

(2) Unfortunately, the fact that D-Wave’s machine solves some particular problem in some amount of time, while a specific classical computer running (say) simulated annealing takes more time, is not (by itself) good evidence that D-Wave is achieving its speedup because of quantum effects. Keep in mind that D-Wave has now spent ~$100 million and ~10 years of effort on a highly-optimized, special-purpose computer for solving one specific optimization problem. So, as I like to put it, quantum effects could be playing the role of “the stone in a stone soup”: attracting interest, investment, talented people, etc. to build a device that performs quite well at its specialized task, but not ultimately because of quantum coherence in that device.

(3) The quantum algorithm on which D-Wave’s business model is based — namely, the quantum adiabatic algorithm — has the property that it “degrades gracefully” to classical simulated annealing when the decoherence rate goes up. This, fundamentally, is the thing that makes it difficult to know what role, if any, quantum coherence is playing in the performance of their device. If they were trying to use Shor’s algorithm to factor numbers, the situation would be much more clear-cut: a decoherent version of Shor’s algorithm just gives you random garbage. But a decoherent version of the adiabatic algorithm still gives you a pretty good (but now essentially “classical”) algorithm, and that’s what makes it hard to understand what’s going on here.
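To make the comparison concrete, here is a minimal classical simulated-annealing sketch for an Ising-type optimization problem, the kind of problem D-Wave’s machine targets. This is a toy illustration, not D-Wave’s actual software, and the instance and parameters are made up for the example:

```python
import math
import random

def ising_energy(h, J, s):
    """E(s) = sum_i h[i]*s[i] + sum_{(i,j)} J[(i,j)]*s[i]*s[j], spins s[i] in {-1,+1}."""
    e = sum(hi * si for hi, si in zip(h, s))
    e += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e

def simulated_annealing(h, J, steps=20000, T0=2.0, seed=0):
    rng = random.Random(seed)
    n = len(h)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    E = ising_energy(h, J, s)
    for t in range(steps):
        T = T0 * (1 - t / steps) + 1e-9       # linear cooling schedule
        i = rng.randrange(n)
        s[i] = -s[i]                          # propose a single spin flip
        dE = ising_energy(h, J, s) - E
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            E += dE                           # accept the flip (Metropolis rule)
        else:
            s[i] = -s[i]                      # reject: undo the flip
    return s, E

# Toy instance: two ferromagnetic couplings plus small local fields.
h = [0.1, 0.0, -0.2]
J = {(0, 1): -1.0, (1, 2): -1.0}
spins, E = simulated_annealing(h, J)
```

The “graceful degradation” point is that a fully decoherent adiabatic computation behaves essentially like the thermal hill-climbing above, so outperforming this kind of baseline on wall-clock time doesn’t by itself tell you whether coherence contributed anything.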

As I’ve said before, I no longer feel like playing an adversarial role. I really, genuinely hope D-Wave succeeds. But the burden is on them to demonstrate that their device uses quantum effects to obtain a speedup, and they still haven’t met that burden. When and if the situation changes, I’ll be happy to say so. Until then, though, I seem to have the unenviable task of repeating the same observation over and over, for 6+ years, and confirming that, no, the latest sale, VC round, announcement of another “application” (which, once again, might or might not exploit quantum effects), etc., hasn’t changed the truth of that observation.