A quantum optimizer folds its first proteins

In 2007, D-Wave announced with great fanfare that it had developed the world's first commercial quantum computer. Unfortunately, details were rather scarce, and it was hard to confirm that anything quantum was going on in the company's device. In the intervening time, D-Wave has backed away from its initial claims somewhat, now calling its device a quantum optimizer, and claiming that, while its device doesn't meet all the criteria to be called a quantum computer, it still offers benefits over a classical computer.

In a recent publication, researchers from D-Wave and Harvard University teamed up to use D-Wave's quantum optimizer to solve a protein folding problem. That demonstration, combined with a simulation of the device's performance, goes a long way to convincing me that D-Wave's optimizer may indeed be a quantum optimizer after all.

Folding proteins

The protein folding problem is a very difficult and very important one. Proteins are strings of amino acids, which, as they are joined up, can flop around and fold up in a huge number of ways. But—and this is the kicker—the final folded shape of the protein is what allows it to perform its function. Proteins that end up folded the wrong way don't work as well as correctly folded ones or don't work at all, and they can even be harmful. At first glance it seems highly improbable that a protein with a virtually infinite number of potential configurations should, with near-certainty, fold itself correctly every time.

Current thought is that the correct configuration for a functional protein is the one that requires the least energy to hold it in place. This seems to be an eminently sensible idea, since every time it's knocked around by the environment, it's likely to refold into shapes that allow it to give up energy. Over the long run, any functional protein that could be knocked out of shape and not return to its functional form would be replaced.

To test this idea, and to learn more about protein shapes generally, researchers spend a lot of time calculating protein shapes, searching for the lowest energy form. But this is a long and tedious process, requiring many computer cycles per protein.

Folding proteins using magnets

One way to solve a protein folding problem is to place each amino acid randomly on a 3D grid and let them jump around. Each jump requires a certain amount of energy to get started, but that energy and more may be given up if the new location requires less energy—that is, if an amino acid interacts more strongly with those that have folded up next to it. The probability of a jump into any particular configuration depends on these energy calculations.

If you compute the energy for a large number of random jumps, you can find where the protein will have settled into a low energy configuration. But, is it the lowest energy solution? Maybe, maybe not. So, you start again, but with the amino acids in new starting positions.
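To make this concrete, here is a minimal Python sketch of that procedure. It is my own toy, not the method from the paper: a short, made-up chain of "amino acids" on a 2D grid (the real calculations use 3D), with single-bead jumps accepted or rejected according to the energy change, and a handful of random restarts at the end. The sequence, contact energy, and move set are all illustrative assumptions.

```python
import math
import random

# Hypothetical sequence: 1 = hydrophobic (attracts), 0 = polar.
SEQ = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1]
E_CONTACT = -1.0  # energy gained per non-bonded hydrophobic contact

def energy(coords):
    """Sum contact energies between hydrophobic beads that sit on
    adjacent grid sites but are not neighbors along the chain."""
    e = 0.0
    for i in range(len(coords)):
        for j in range(i + 2, len(coords)):  # skip bonded neighbors
            if SEQ[i] and SEQ[j]:
                (xi, yi), (xj, yj) = coords[i], coords[j]
                if abs(xi - xj) + abs(yi - yj) == 1:
                    e += E_CONTACT
    return e

def valid(coords):
    """The chain must be self-avoiding, with consecutive beads adjacent."""
    if len(set(coords)) != len(coords):
        return False
    return all(abs(ax - bx) + abs(ay - by) == 1
               for (ax, ay), (bx, by) in zip(coords, coords[1:]))

def fold(steps=20000, temperature=0.5):
    coords = [(i, 0) for i in range(len(SEQ))]  # start fully stretched out
    e = energy(coords)
    for _ in range(steps):
        i = random.randrange(len(coords))
        x, y = coords[i]
        # Jump one bead to a nearby site; the validity check keeps only
        # legal corner flips and end moves.
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1),
                                (1, 1), (1, -1), (-1, 1), (-1, -1)])
        trial = coords[:i] + [(x + dx, y + dy)] + coords[i + 1:]
        if not valid(trial):
            continue  # jump would break or tangle the chain
        e_trial = energy(trial)
        # Metropolis rule: always accept downhill jumps, sometimes uphill ones.
        if e_trial <= e or random.random() < math.exp(-(e_trial - e) / temperature):
            coords, e = trial, e_trial
    return coords, e

# Restart several times and keep the best answer, as described above.
best_coords, best_e = min((fold() for _ in range(10)), key=lambda r: r[1])
print("lowest energy found:", best_e)
```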

This way of calculating protein folding is very similar to how magnets arrange their orientations on a 2D grid. If you control how strongly the magnets feel each other, then you can mimic the different bonding strengths between different amino acids, and the 3D nature of the protein. Once you have the magnets set up—setting this up is not easy, and it's a remarkable technical achievement on its own—you use it to find a low energy state.

When the magnets are hot, they have a lot of energy, and can flip their orientation. As they flip, they change the magnetic field around the other magnets, causing some of them to flip. This causes more magnets to flip, and so it carries on. However, you can slowly cool the magnets so there is less and less energy available to allow them to flip. With enough cooling, they tend to get locked into a configuration. If you have cooled slowly enough, then that configuration is likely to be the lowest energy configuration. Read that out, and you have the lowest energy 3D configuration of the protein that the magnets were modeling.
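In code, that cooling procedure looks something like the sketch below. The grid size, couplings, and cooling schedule here are placeholders I made up; a real protein calculation would set the couplings deliberately to encode the amino acid interactions.

```python
import math
import random

N = 8  # an 8x8 grid of "magnets" (spins that point up or down: +1 or -1)
random.seed(1)

# J[(a, b)]: coupling strength between neighboring grid sites a and b.
# Random values stand in for a deliberate protein encoding.
J = {}
for x in range(N):
    for y in range(N):
        if x + 1 < N:
            J[((x, y), (x + 1, y))] = random.uniform(-1, 1)
        if y + 1 < N:
            J[((x, y), (x, y + 1))] = random.uniform(-1, 1)

spins = {(x, y): random.choice([-1, 1]) for x in range(N) for y in range(N)}

def local_field(site):
    """Net field felt by one spin from the neighbors it is coupled to."""
    h = 0.0
    for (a, b), coupling in J.items():
        if a == site:
            h += coupling * spins[b]
        elif b == site:
            h += coupling * spins[a]
    return h

T = 5.0
while T > 0.01:  # slowly remove the thermal energy
    for _ in range(N * N):
        s = random.choice(list(spins))
        dE = 2 * spins[s] * local_field(s)  # energy cost of flipping spin s
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[s] *= -1  # the flip happens
    T *= 0.97  # cool a little and repeat

final_energy = -sum(c * spins[a] * spins[b] for (a, b), c in J.items())
print("locked-in configuration energy:", final_energy)
```

The key line is the acceptance test: as T drops, math.exp(-dE / T) vanishes for any uphill flip, and the spins lock into place.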

Now, in real life, the magnets are superconducting rings (superconducting quantum interference devices, or SQUIDs). The direction of the magnet is set by the direction in which the current in the ring circulates, and the coupling between the different magnets is not directly through the magnetic fields, but indirectly through capacitors, inductors, and other SQUIDs. This intervening hardware allows the coupling to be controlled. This method of calculating is called simulated annealing, and it works extremely well. It is, however, no faster than any other way of calculating the configuration of a protein: it is still a classical computer.

Riding a quantum horse to the rescue

So how does the quantum nature of a SQUID help? The trick is in the coupling between the different magnets. In the description I gave above, each magnet experiences the average of all the surrounding fields, so individual flips have an almost negligible effect on any other magnet. In a fully quantum description, the currents are added up, taking their phase into account, so interference between different SQUIDs can lead to cancellation or addition of their contributions, or anything in between.

Normally, we would discount phase, because the currents in each SQUID would have no fixed relationship to one another. In other words, there is no coherence. But if the SQUID array is coherent, then the interference between the different SQUIDs drives them toward the overall lowest energy solution faster than you would expect from a classical description.
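A quick toy calculation shows the difference. When the phases are fixed, contributions of equal size can fully reinforce or fully cancel; when the phases are random, the total only grows like the square root of the number of contributions. This illustrates the interference arithmetic only; it is not a model of the SQUID hardware.

```python
import cmath
import math
import random

def combined_magnitude(phases):
    """Magnitude of a sum of unit-amplitude contributions with given phases."""
    return abs(sum(cmath.exp(1j * p) for p in phases))

n = 100
all_in_phase = combined_magnitude([0.0] * n)
paired_opposites = combined_magnitude([0.0, math.pi] * (n // 2))
random_phases = combined_magnitude(
    [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)])

print(all_in_phase)      # n: everything adds up (full constructive interference)
print(paired_opposites)  # ~0: contributions cancel pairwise
print(random_phases)     # ~sqrt(n) on average: no fixed phase relationship
```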

And that brings us back to D-Wave, which has produced hardware that does simulated annealing, and claims that it is a quantum system. But it has been difficult to verify that claim.

The remarkable thing about this latest bit of work is not the protein folding—it was a rather small demonstration—but that the researchers could use it to show that there may well be something quantum going on. The SQUID array was too small to directly simulate the six-amino-acid protein—instead, the researchers broke the problem up into bits and combined them at the end. One of those bits was small enough that the SQUID array, quantum behavior included, could be fully simulated on a classical computer. The researchers found that the SQUID array behaved exactly as expected if the quantum aspects were contributing, solving the problem in the time expected. Experiment and theory agree, and all is well in the world.

But, as with all things, the picture is still incomplete. What I had hoped the paper would contain was a comparison between a full quantum simulation and a classical simulation. Let's imagine for a moment that the operating device loses coherence within a few nanoseconds, and after that, everything is classical. If the quantum simulation is accurate, it will reflect the loss of coherence, give the classical results, and agree with experimental results. A simulation that intrinsically assumes a lack of coherence (in other words, a classical model) will only agree if coherence is lost.
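To illustrate, here is a toy quantum annealing simulation of my own devising—a two-qubit transverse-field Ising problem, not the model from the paper—where the coherences of the density matrix decay on an assumed timescale T2. With a long T2, the anneal reaches the ground state; when coherence dies almost immediately, it does not. The Hamiltonians, anneal time, and T2 values are all illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and a tiny two-qubit Ising problem (a made-up example).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

H_driver = -(np.kron(sx, I2) + np.kron(I2, sx))      # transverse field
H_problem = np.kron(sz, sz) - 0.5 * np.kron(sz, I2)  # ground state is |01>
GROUND = 1  # index of |01> in the computational basis

def anneal(t2, total_time=20.0, steps=400):
    """Linear sweep from H_driver to H_problem, with coherences (the
    off-diagonal density matrix elements) decaying on a timescale t2."""
    dt = total_time / steps
    psi = np.ones(4, dtype=complex) / 2.0  # ground state of the driver
    rho = np.outer(psi, psi.conj())
    damp = np.exp(-dt / t2)
    for k in range(steps):
        s = (k + 0.5) / steps
        U = expm(-1j * dt * ((1 - s) * H_driver + s * H_problem))
        rho = U @ rho @ U.conj().T
        diag = np.diag(np.diag(rho))
        rho = diag + damp * (rho - diag)  # dephasing: shrink coherences
    return rho[GROUND, GROUND].real      # probability of the right answer

for t2 in (1000.0, 1.0, 0.01):  # from nearly coherent to "loses it in a flash"
    print(f"T2 = {t2:7.2f}  P(ground state) = {anneal(t2):.3f}")
```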

By comparing these two simulations with experiments, we would be able to be sure that the quantum part of the quantum optimizer was important. More interestingly, we would be able to make estimates of how the coherence of the array was decaying with time and distance.

Nevertheless, I have to say that I am largely convinced that D-Wave has produced evidence that its SQUID arrays might behave as a quantum optimizer. In the past, I have been less than convinced, and rather critical of D-Wave. I still think there is more to be learned about the degree of coherence in the SQUID arrays. This demonstration shows that there is much more to be done in practical terms before the optimizer is ready for larger problems. But progress has been rapid.

Promoted Comments

The limitation of this kind of protein modeling is that a "correctly" folded protein may not always be at the lowest possible energy state. For most proteins, there are probably multiple low-energy states in which the folding would be stable.

However, if you want to ever be able to consistently model protein folding, you do need to develop a system that can find the "nearest" stable state given a starting configuration. The holy grail of folding would be to develop a system that models the protein during and after translation, including key intermediate steps. This would have to include interaction with the cytoplasm and nearby chaperone molecules.


Chris Lee
Chris writes for Ars Technica's science section. A physicist by day and science writer by night, he specializes in quantum physics and optics. He lives and works in Eindhoven, the Netherlands. Email: chris.lee@arstechnica.com // Twitter: @exMamaku

So...how long until this can be used to effectively neutralize all current cryptography techniques? To me it seems that quantum computing technology is one of those extreme double-edged swords: sure, it has the potential to carry us leaps and bounds into the future, but it also carries many inherent risks to everything from security to economies around the world. I imagine our government would do its best to keep a tight lid on such a project if it could, to prevent what essentially amounts to a new arms race with rival governments and organized groups with various interests around the world. It would be interesting to see an expert analysis showing the positive and negative potentials of this technology breaking into the market.

Until they're more forthcoming about what they have running inside that giant box of theirs, I'm going to assume it's just a massive container for hundreds of nVidia's Kepler graphics cards running parallel operations.

I'm always a little wary of any concept where some amount of vagueness is involved.


I had similar thoughts as I read this article. When I got to the parts about magnets emulating proteins folding in a 3D grid, all I could think about was "If she weighs the same as a duck, she's made of wood and therefore a witch" and also "...and that, my liege, is how we know the Earth to be banana shaped".

Instead of a SQUID, a BOSE could be used. The BOSE lattice would line up in sync with an EM field. A fractal representation of the BOSE lattice could be created, and this could generate a fractal representation of a qubit.

Run the process in reverse and the BOSE lattice could line up via a qubit gate.


So a 6 amino acid protein is too big for them? They have to break it down, do the problem in parts, and then add the results back together. Something like GFP is 238 amino acid residues, and it is considered small enough that people add it to other proteins for tracking purposes.

Quote:

I'm always a little wary of any concept where some amount of vagueness is involved.


It's so simple it should be obvious. The internet is made up of computers, but the internet is also said to be made up of cat videos. Therefore, a computer is a box with a cat inside. If you happen to use Schrödinger's cat, this makes the box a quantum computer.

Admittedly, this was only meant as a "proof of concept" exercise. Still, it's worth mentioning that these 2D lattice models for protein folding were cutting edge... 20 to 30 years ago. I actually remember running a very similar calculation for a longer peptide on our cutting-edge 350 MHz DEC Alpha workstations back in the late nineties... and even then it was a "let's tell the undergrads to reproduce a historical experiment in the literature as a class project" sort of thing.

Folding@Home uses fully off-lattice, all-atom models with long-range interactions that are at least six orders of magnitude more complex than this model. And even F@H gets eaten for breakfast by D.E. Shaw's special-purpose supercomputers: http://www.sciencemag.org/content/334/6055/517.abstract

I can't comment on the QA aspect of this paper, but there's a reason it's in an archival publication (Scientific Reports) as opposed to even an archival protein folding journal: the entire notability of the paper is that it was at all possible and apparently must have involved some non-classical computing. But no one in the protein folding community is getting excited, much less holding their breath, over results like these.

So I invented this car powered by cold fusion. It works really well, but I won't let you look inside to see how it works. Sure, you'd love to at least see it drive side by side with a normal car under controlled conditions, but I'm going to race it against a horse and buggy. Oh, and the horse is kinda sick, but that doesn't affect the notability of the results. And just ignore the fact that we put this 100 kWh lead-acid battery inside our car; that's just until we work out the efficiency kinks...


They never claimed to be racing current methods. They certainly never claimed to be fastest. Nor do they claim to be using the most advanced model, though they are certainly using a model that is a little more advanced than the one I described. Even worse, the method they use for translating the protein to the 2D SQUID lattice has exponential scaling rather than polynomial scaling (there are methods with polynomial scaling, but for small proteins, the exponential version is more efficient).

No, this is nothing more than a proof of concept. Considering how new the field is, it is a remarkable achievement, should it be working as D-Wave claims it is.


Yes, and if D-Wave were an academic quantum computing group I'd say "good on you" and pat them on the back. But this is a corporation claiming to have a 128-qubit device for sale right now, if you have a big enough pocketbook, while details on exactly how the innards work, or how much useful computation you can do with this, have been less than forthcoming.

Have they managed to lower the bar so very much for their $10 million black box that "actually performed a quantum computation" is to be lauded?

I'm just saying, if this were a video game company taking orders for the next "Duke Nukem Forever" at $100 a pop, would everyone be so generous and forgiving when all they can show us are some static screenshots that look like Wolfenstein 3D and lots of promises that they are, you know, for really realz making an awesome game? Would we just take their word for it?

You are making a major mistake in focusing on comparing the utility of this machine for protein folding against other ways of doing protein folding. Your mistake is that this whole article is not really about protein folding at all, nor about the best way of simulating protein folding. It is not fundamentally even about molecular biology, at all.

This article is really about a different field entirely: *theoretical computer science*. We are talking about a major area of basic research in computer science, which is demonstrating on increasingly macroscopic scales that it is possible to do computations using quantum mechanical techniques with a computational efficiency that exceeds that of a classical computer. The question at hand, which is an extremely important one, is simply whether a device of this sort exceeds the efficiency of a classical computer *AT ALL*.

If you think that is a minor result, or an irrelevant one, it is simply because you don't understand the field in question. The use of protein folding is simply a convenient *application* of these techniques used as an example, and focusing on comparison of runtime or problem sizes is simply evidence that you don't understand how computational complexity is studied.

Alan Chen wrote:

fb39ca4 wrote:

But can it run Folding@Home?

No. And that's a very valid question.


Scott Aaronson used to be the "Chief D-Wave Skeptic", but retired once they showed quantum effects and wants experimental physicists to take over.

This is the latest post of his on D-Wave that I can find, from a February visit:

Quote:

In summary, while the observed speedup is certainly interesting, it remains unclear exactly what to make of it, and especially, whether or not quantum coherence is playing a role.

Which brings me to Point #2. It remains true, as I’ve reiterated here for years, that we have no direct evidence that quantum coherence is playing a role in the observed speedup, or indeed that entanglement between qubits is ever present in the system. (Note that, if there’s no entanglement, then it becomes extremely implausible that quantum coherence could be playing a role in a speedup. For while separable-mixed-state quantum computers are not yet known to be efficiently simulable classically, we certainly don’t have any examples where they give a speedup.)

Quote:

I understand that, given their knowledge of decoherence mechanisms, some physicists are extremely skeptical that you could have rapid decoherence in the energy basis without getting decoherence in the computational basis also. So certainly the burden is on D-Wave to demonstrate that they maintain coherence “where it counts.” But at least I now understand what they’re claiming, and how it would be compatible (if true) with a quantum speedup.

Quote:

To see the obtuseness of this question, consider a simple thought experiment: suppose D-Wave were marketing a classical, special-purpose, $10-million computer designed to perform simulated annealing, for 90-bit Ising spin glass problems with a certain fixed topology, somewhat better than an off-the-shelf computing cluster. Would there be even 5% of the public interest that there is now? I think D-Wave itself would be the first to admit the answer is no. Indeed, Geordie Rose spoke explicitly in his presentation about the compelling nature of (as he put it) “the quantum computing story,” and how it was key to attracting investment. People don’t care about this stuff because they want to find the ground states of Ising spin systems a bit faster; they care because they want to know whether or not the human race has finally achieved a new form of computing. So characterizing the device matters, goddammit!

Quote:

If you think that is a minor result, or an irrelevant one, it is simply because you don't understand the field in question. The use of protein folding is simply a convenient *application* of these techniques used as an example, and focusing on comparison of runtime or problem sizes is simply evidence that you don't understand how computational complexity is studied.

What I caught was that D-Wave might have demonstrated, on a toy problem, that they did a quantum computation with such a high error rate that it can only be used for quantum annealing, and even then it only got the right answer 13 out of 10,000 times. So all of that computational complexity stuff, like factoring large numbers or simulating many-body systems: forget it! This is a hammer that can find the global minimum of an Ising-type glassy system. Period. There is no obvious route to a general-purpose qubit from the types of qubits that have been described. Hence the protein example, apparently using an algorithm that scales exponentially, as that's the sexiest-sounding Ising-type model they could find.

And exactly how is this supposed to make me believe that the supposed 128-qubit system they supposedly have for sale is actually capable of solving non-trivial problems and is not being over-sold or over-hyped? Extraordinary claims, like a 128-qubit machine that will "revolutionize computing", require extraordinary proof. I do not see it, so please enlighten me.

Quote:

You are making a major mistake in focusing on comparing the utility of this machine for protein folding against other ways of doing protein folding. Your mistake is that this whole article is not really about protein folding at all, nor about the best way of simulating protein folding. It is not fundamentally even about molecular biology, at all.

This article is really about a different field entirely: *theoretical computer science*. We are talking about a major area of basic research in computer science, which is demonstrating on increasingly macroscopic scales that it is possible to do computations using quantum mechanical techniques with a computational efficiency that exceeds that of a classical computer. The question at hand, which is an extremely important one, is simply whether a device of this sort exceeds the efficiency of a classical computer *AT ALL*.

If you think that is a minor result, or an irrelevant one, it is simply because you don't understand the field in question. The use of protein folding is simply a convenient *application* of these techniques used as an example, and focusing on comparison of runtime or problem sizes is simply evidence that you don't understand how computational complexity is studied.

Well, the paper says it is about protein folding. Now, why would you call it protein folding instead of arranging sticky balls connected with rigid sticks on a grid, unless you are trying to use "protein folding" to push up the "impact"? It certainly seems like the spherical cow version of protein folding.

Looking at the paper, I wonder if generating their Hamiltonian and doing all the conversions of the coordinate system really just amounts to computing all the possible arrangements anyway, given that the Hamiltonian for a sub-experiment involving just the positions of two of the balls is 11 terms long. Not to mention solving the problem of mapping this onto the qubit array (which depends on the device in question, since the device appears to have a 10% rate of bad qubits).

They also do not specifically show any evidence that the quantum nature is required for the operation of the device (maybe I just missed it). I do not see any reference to doing the simulation classically and comparing speed or accuracy against the device.


Well, no. They don't show that quantum effects are required for operation, and they don't do a hypothesis test of the quantum explanation against a classical null hypothesis, I take it.

What they show, in fig. 2 and in the text, is a prediction of the computation time that their result follows robustly and in detail, and then a hand wave that this is because of the device's quantum nature. It is a good hand wave, and I would think they have made their conclusion likely. But how likely, it seems we don't know.

The observed speedup is from an analogous physical system, where a magnet could be adjusted between classical and quantum annealing of domains, and "results indicate that quantum annealing hastens convergence to the optimum state." [From the abstract of their ref 19.]

This has been known since 1999, and it may be the only experiment for all I know, if that is Aaronson's "observed speedup". It should be repeated by D-Wave early, if that is what interests them or their investors.

As for the generalization of this, I assume scaling up will help. The 2-factor connectivity is probably also an artifact of early demonstrators, or at least there should be no inherent problem. But as long as their production quality sucks, this won't go anywhere.

Even if they solve the problem, I have never understood the use of analog computing after the discovery of digital computing. Setting up the problem and forcing it through the bottleneck of the analog computing specialization, here finding local minima, should take much more resources than throwing cheap bits and cheap algorithms at the problem. It is the whole point of the digital revolution, and it has been a good one ever since.

To sum up, D-Wave need to push scalability by demonstrating the necessary production quality for really large systems. Then they need to demonstrate a speedup that makes it worthwhile to use their system over others.

Meanwhile, I'm sure the protein folding problem will be solved in principle, albeit likely not in all instances; there are many natural proteins out there, and there will always be synthetic ones. Maybe they will conquer the protein world some day. But I don't think there will be anyone riding a global D-Wave into the future of computing.

Not in general, I think; it is D-Wave's specific solution to putting an early system out there that is.

But maybe, in a digital qubit implementation, the qubit's somewhat analog nature makes the mapping from digital to analog and back somewhat non-trivial. (A quantum bit is inherently a Hamiltonian 2D structure, like the spin of a particle.) Do we have any computer scientists in the house?

So are you saying that they have demonstrated quantum effects, but people are skeptical about the quantum effect role in the speedup? I'm still a bit confused.

I am _very_ confused; I can barely map Lee's article to the paper, and the same goes for Aaronson's text (which was on the status before the paper). They should be renamed "cubits", for "curious bits".

I _think_ a speedup is in question: D-Wave made it likely, though didn't quantifiably test, that they have apparent quantum effects in their system. Lee claims there is no speedup; I don't know if that is because it has not yet been demonstrated or because it has been demonstrated that there is none.

Reading the paper didn't answer that question. The authors hint that there is an old, likely never repeated, demonstration of speedup on a system analogous to theirs.

Even if it is a demonstrable speedup, according to Aaronson the absence of entanglement should mean it isn't a large one. What do you others think?