A quantum optimizer folds its first proteins

In 2007, D-Wave announced with great fanfare that it had developed the world's first commercial quantum computer. Unfortunately, details were rather scarce, and it was hard to confirm that anything quantum was going on in the company's device. Since then, D-Wave has backed away from its initial claims somewhat: it now calls the device a quantum optimizer, and argues that, while it doesn't meet all the criteria to be called a quantum computer, it still offers benefits over a classical computer.

In a recent publication, researchers from D-Wave and Harvard University teamed up to use D-Wave's hardware to solve a protein folding problem. That demonstration, combined with a simulation of the device's performance, goes a long way toward convincing me that D-Wave's optimizer may indeed be quantum after all.

Folding proteins

The protein folding problem is a very difficult and very important one. Proteins are strings of amino acids, which, as they are joined up, can flop around and fold up in a huge number of ways. But—and this is the kicker—the final folded shape of the protein is what allows it to perform its function. Proteins that end up folded the wrong way don't work as well as a correctly folded protein or don't work at all, and they can even be harmful. At first glance it seems highly improbable that a protein with a virtually infinite number of potential configurations should, with near-certainty, fold itself correctly every time.

Current thought is that the correct configuration for a functional protein is the one that requires the least energy to hold it in place. This seems to be an eminently sensible idea, since every time it's knocked around by the environment, it's likely to refold into shapes that allow it to give up energy. Over the long run, any functional protein that could be knocked out of shape and not return to its functional form would be replaced.

To test this idea, and to learn more about protein shapes generally, researchers spend a lot of time calculating protein shapes, searching for the lowest energy form. But this is a long and tedious process, requiring many computer cycles per protein.

Folding proteins using magnets

One way to solve a protein folding problem is to place the amino acids randomly on a 3D grid and let them jump around. Each jump requires a certain amount of energy to get started, but that energy and more may be given up if the new location requires less energy—that is, if an amino acid interacts more strongly with those that have folded up next to it. The probability of a jump into any particular configuration depends on these energy calculations.
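
To make that last sentence concrete: in the standard Metropolis scheme (my formalization, not anything spelled out in the paper), a jump that changes the energy by $\Delta E$ is accepted with probability

$$P_\text{accept} = \min\!\left(1,\; e^{-\Delta E / k_B T}\right),$$

so energy-lowering jumps are always taken, while energy-raising jumps become exponentially rarer as the temperature $T$ drops.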

If you compute the energy for a large number of random jumps, you can find where the protein will have settled into a low energy configuration. But, is it the lowest energy solution? Maybe, maybe not. So, you start again, but with the amino acids in new starting positions.
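
Here is a minimal sketch of that restart-and-jump search in Python. The energy function, starting configuration, and jump move are toy placeholders standing in for a real amino-acid interaction model; nothing here is the paper's actual method.

```python
import math
import random

def metropolis_search(energy, random_config, propose_jump,
                      n_steps=10_000, temperature=1.0, n_restarts=5):
    """Metropolis-style search with random restarts.

    energy(config)       -> float, energy of a configuration
    random_config()      -> a fresh random starting configuration
    propose_jump(config) -> a configuration one "jump" away
    """
    best, best_e = None, float("inf")
    for _ in range(n_restarts):               # new starting positions each time
        config = random_config()
        e = energy(config)
        for _ in range(n_steps):
            candidate = propose_jump(config)
            delta = energy(candidate) - e
            # Downhill jumps are always taken; uphill jumps are taken with
            # probability exp(-delta/T), so the walk can escape shallow traps.
            if delta <= 0 or random.random() < math.exp(-delta / temperature):
                config, e = candidate, e + delta
        if e < best_e:                         # keep the best of all restarts
            best, best_e = config, e
    return best, best_e

# Toy usage: a 10-element integer chain whose "energy" is just a sum of squares.
E = lambda c: float(sum(x * x for x in c))
start = lambda: [random.randint(-5, 5) for _ in range(10)]
jump = lambda c: [x + random.choice([-1, 0, 1]) for x in c]
print(metropolis_search(E, start, jump))
```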

This way of calculating protein folding is very similar to how magnets arrange their orientations on a 2D grid. If you control how strongly the magnets feel each other, you can mimic the different bonding strengths between different amino acids, and the 3D nature of the protein. Once you have the magnets set up—and setting this up is not easy; it's a remarkable technical achievement on its own—you can use the array to find a low energy state.
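
Written out, this magnets-on-a-grid picture is the Ising model: each magnet is a spin $s_i = \pm 1$, and the tunable interactions become couplings $J_{ij}$ (how the paper actually encodes amino-acid pairs into couplings is more involved; this is just the generic form):

$$E(s) = -\sum_{i<j} J_{ij}\, s_i s_j - \sum_i h_i\, s_i, \qquad s_i \in \{-1, +1\}.$$

Finding the protein's lowest energy fold then amounts to finding the spin configuration that minimizes $E(s)$.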

When the magnets are hot, they have a lot of energy, and can flip their orientation. As they flip, they change the magnetic field around the other magnets, causing some of them to flip. This causes more magnets to flip, and so it carries on. However, you can slowly cool the magnets so there is less and less energy available to allow them to flip. With enough cooling, they tend to get locked into a configuration. If you have cooled slowly enough, then that configuration is likely to be the lowest energy configuration. Read that out, and you have the lowest energy 3D configuration of the protein that the magnets were modeling.
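
As a sketch, classical simulated annealing of the Ising energy above looks something like the following Python. The geometric cooling schedule and the random couplings are placeholder choices for illustration, not anything taken from the paper.

```python
import math
import random

def anneal(J, h, t_start=5.0, t_end=0.01, cooling=0.999):
    """Simulated annealing of E(s) = -sum_{i<j} J_ij s_i s_j - sum_i h_i s_i."""
    n = len(h)
    spins = [random.choice([-1, 1]) for _ in range(n)]
    t = t_start
    while t > t_end:
        i = random.randrange(n)               # pick one magnet to try flipping
        # Energy change from flipping spin i: dE = 2 s_i (h_i + sum_j J_ij s_j)
        field = h[i] + sum(J[i][j] * spins[j] for j in range(n) if j != i)
        dE = 2 * spins[i] * field
        if dE <= 0 or random.random() < math.exp(-dE / t):
            spins[i] = -spins[i]              # accept the flip
        t *= cooling                          # slowly remove thermal energy
    return spins

# Placeholder problem: random couplings standing in for a protein encoding.
n = 8
J = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        J[i][j] = J[j][i] = random.uniform(-1.0, 1.0)
h = [random.uniform(-0.5, 0.5) for _ in range(n)]
print(anneal(J, h))
```

If the cooling constant is close enough to 1 (slow cooling), the returned configuration is very likely to be the ground state; cool too fast and it gets locked into a higher-energy trap.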

Now, in real life, the magnets are superconducting rings (superconducting quantum interference devices, or SQUIDs). The direction of the magnet is set by the direction in which the current in the ring circulates, and the coupling between the different magnets is not direct, through their magnetic fields, but indirect, through capacitors, inductors, and other SQUIDs. This intervening hardware allows the coupling to be controlled. This method of calculating is called simulated annealing, and it works extremely well. It is, however, no faster than any other way of calculating the configuration of a protein: it is still a classical computer.

Riding a quantum horse to the rescue

So how does the quantum nature of a SQUID help? The trick is in the coupling between the different magnets. In the description I gave above, each magnet experiences the average of all the surrounding fields, so individual flips have an almost negligible effect on any other magnet. In a fully quantum description, the currents are added up, taking their phase into account, so interference between different SQUIDs can lead to cancellation or addition of their contributions, or anything in between.
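
As a rough illustration of the difference (my notation, not the paper's): two current contributions with amplitudes $A_1, A_2$ and phases $\varphi_1, \varphi_2$ combine as

$$\left| A_1 e^{i\varphi_1} + A_2 e^{i\varphi_2} \right|^2 = A_1^2 + A_2^2 + 2 A_1 A_2 \cos(\varphi_1 - \varphi_2).$$

The cross term is the interference: averaged over random, uncorrelated phases it vanishes, which recovers the classical sum, while a fixed phase relationship lets contributions reinforce or cancel outright.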

Normally, we would discount phase, because the currents in each SQUID would have no fixed relationship to one another. In other words, there is no coherence. But if the SQUID array is coherent, then the interference between the different SQUIDs drives them toward the overall lowest energy solution faster than you would expect from a classical description.

And that brings us back to D-Wave, which has produced hardware that does simulated annealing, and claims that it is a quantum system. But it has been difficult to verify that claim.

The remarkable thing about this latest bit of work is not the protein folding—it was a rather small demonstration—but that the researchers could use it to show that there may well be something quantum going on. The SQUID array was too small to directly simulate the six-amino acid protein—instead, the researchers broke the problem up into pieces and combined them at the end. One of those pieces was small enough that the SQUID array could be fully simulated, including its quantum behavior, on a classical computer. The researchers found that the SQUID array behaved exactly as expected if the quantum aspects were contributing, solving the problem in the time expected. Experiment and theory agree, and all is well in the world.

But, as with all things, the picture is still incomplete. What I had hoped the paper would contain was a comparison between a full quantum simulation and a classical simulation. Let's imagine for a moment that the operating device loses coherence within a few nanoseconds, and after that, everything is classical. If the quantum simulation is accurate, it will reflect the loss of coherence, give the classical results, and agree with experimental results. A simulation that intrinsically assumes a lack of coherence (in other words, a classical model) will only agree if coherence is lost.

By comparing these two simulations with experiments, we would be able to be sure that the quantum part of the quantum optimizer was important. More interestingly, we would be able to make estimates of how the coherence of the array was decaying with time and distance.

Nevertheless, I have to say that I am largely convinced that D-Wave has produced evidence that its SQUID arrays might behave as a quantum optimizer. In the past, I have been less than convinced, and rather critical of D-Wave. I still think there is more to be learned about the degree of coherence in the SQUID arrays. This demonstration shows that there is much more to be done in practical terms before the optimizer is ready for larger problems. But progress has been rapid.

Promoted Comments

The limitation of this kind of protein modeling is that a "correctly" folded protein may not always be at the lowest possible energy state. For most proteins, there are probably multiple low-energy states in which the folding would be stable.

However, if you want to ever be able to consistently model protein folding, you do need to develop a system that can find the "nearest" stable state given a starting configuration. The holy grail of folding would be to develop a system that models the protein during and after translation, including key intermediate steps. This would have to include interaction with the cytoplasm and nearby chaperone molecules.

So are you saying that they have demonstrated quantum effects, but people are skeptical about the quantum effect role in the speedup? I'm still a bit confused.

I am _very_ confused: I can barely map Lee's article to the paper, and the same goes for Aaronson's text (which was about the status before the paper). They should be renamed "cubits", for "curious bits".

I _think_ a speedup is in question: D-Wave made it likely (but didn't quantifiably test, I think) that they have apparent quantum effects in their system. Lee claims there is no speedup; I don't know if that is because it has not yet been demonstrated or because it has been demonstrated that there is none.

Reading the paper didn't answer that question. The authors hint that there is an old, likely never repeated, demonstration of speedup on a system analogous to theirs.

Even if it is a demonstrable speedup, according to Aaronson the absence of entanglement should mean it isn't a large one. What do you others think?

It's a rather subtle thing, I think. The SQUID array would solve the protein problem they threw at it even if everything were operating in the classical regime. But, D-Wave claims, because their simulations of the machine agree with its actual operation, and the simulations contain the quantum aspects (namely the coherence between the different SQUIDs), the coherence must be playing a role.

But, on its face, that is simply not true. If the coherence faded very fast and everything were actually classical, the simulations would reflect this. (In this case, think of coherence as the ability to predict the phase of the current in one SQUID at some time in the future by measuring the phase of the current in a different SQUID now.)

The first step would be to delve into the simulation, which explicitly contains the coherences, and examine how they change as a function of time. Show that, at least in simulation, the coherence is there, and then leverage experimental agreement to say that it is there in the machine. Even then, you haven't shown that the coherence speeds things up. To do that, you need to run a second simulation with artificially shortened dephasing times (so the coherence does vanish), and show that the annealing process is much slower. Then, I would say, the claim is on very strong ground.

As it stands, I am giving them the benefit of the doubt that the first step was taken (they did look into the model and saw that there was coherence and saw that it was good), and that they thought reporting on it would detract from the focus of the paper.

Thanks for an incredibly informative and insightful analysis; that makes a lot more sense to me now. I do think it should have been in the article, IMHO, as the context I was trying to inject is that one shouldn't give scrutiny of the quantum computing aspect a free pass, thinking the primary significance of this work is the potential application to folding. I totally get that it's not; it could have been solving Sudoku squares and that's fine too. But the popular press has been covering this as "D-Wave solves the protein folding problem with quantum computing," which frankly makes my insides turn a little, so you can see where I'm coming from.

The limitation of this kind of protein modeling is that a "correctly" folded protein may not always be at the lowest possible energy state. For most proteins, there are probably multiple low-energy states in which the folding would be stable.

However, if you want to ever be able to consistently model protein folding, you do need to develop a system that can find the "nearest" stable state given a starting configuration. The holy grail of folding would be to develop a system that models the protein during and after translation, including key intermediate steps. This would have to include interaction with the cytoplasm and nearby chaperone molecules.

Right. How you got to the folded state is as important as how the final shape turns out. Fortunately, this is what Folding@home does (aided by our Ars Technica team #14). F@h also studies the misfolding of proteins that causes several diseases, so the final shape is not always certain.

Over the long run, any functional protein that could be knocked out of shape and not return to its functional form would be replaced.

Not strictly true. See: CJD, Mad Cow.

Oh, that protein is *very* functional. Just not so good for the host, but it functions just fine for the protein's replication.

There aren't many agents that require this sort of thing to make sure they're not coming back...

Don't prions make the case that functional proteins don't have to be in the lowest energy state? Introduce a prion, and suddenly all the cool proteins want to change state... and they aren't changing back any time soon.

As for the quantum nature (or not) of the machine, I'm doubtful. Or rather, they seem to have moved the goalposts for what counts as a 'quantum computer' quite a lot (it seems they have admitted as much). And as long as their hardware is hidden in a black box, they won't be getting the benefit of the doubt. I assume the secrecy is at least in part to protect their designs, and they're very welcome to prove all us sceptics wrong when they feel like it... but so far it seems more like a publicity hunt than a hunt for new knowledge.

So you don't see developing scientific knowledge to have any value beyond what can be quantified today? You'd never have bothered settling the New World, would you?

The New World used to be contiguous to the Old World. Same climate zones, atmosphere, water and species. Your analogy is flawed.

Besides the obvious (putting a rover on Mars feeds a lot of technological developments that aren't always immediately obvious - MRI scanners are courtesy of NASA/Apollo, for instance), it seems rather shortsighted not to recognize the value of knowledge. By your broken logic (Mars and Earth are incredibly similar), is the exploration of space ever "worth it"? Apollo, ISS, Voyager... waste of money?

Over the long run, any functional protein that could be knocked out of shape and not return to its functional form would be replaced.

Not strictly true. See: CJD, Mad Cow.

Oh, that protein is *very* functional. Just not so good for the host, but it functions just fine for the protein's replication.

There aren't many agents that require this sort of thing to make sure they're not coming back...

Don't prions make the case that functional proteins don't have to be in the lowest energy state? Introduce a prion, and suddenly all the cool proteins want to change state... and they aren't changing back any time soon.

As is typical in biology, all of the above are simultaneously true and any absolute statement I make will probably be proven wrong in short order!

But some context: all proteins have a natural turnover rate; that is, at some point they spontaneously misfold or are damaged by the environment, at which point they are tagged for garbage disposal (i.e. enzymatic breakdown) by the proteasome, and a new copy is synthesized to replace it. Some proteins last only minutes; others, like the crystallin in your eye lens, last your entire lifetime without being recycled, but the "average" protein has a half life of about 20 hours. The breakdown machinery itself has a finite capacity and processing rate that can't be exceeded if the cell is not to die from all the accumulated damage, so as long as your proteins are not breaking down faster than normal operating parameters, you don't even notice it, other than needing calories and amino acids in your diet to keep things running.

That said, it should be pointed out that proteins, left to their own devices, adopt the lowest free energy state, not just the lowest energy state as in these lattice models, which is in fact one of the models' largest drawbacks (they are essentially only correct at zero temperature). Since free energy has entropy in it and depends on temperature, if you raise the temperature enough, the lowest free energy state may actually be a misfolded/unfolded state, because when you have lots of thermal energy randomly buffeting you, staying in a weakly stable folded state (proteins are only weakly stable) becomes very entropically unlikely. So protein folding is extremely temperature sensitive, which is one reason heat kills living organisms so easily. We have some defenses, such as heat shock proteins, which are basically tiny tupperware containers that valiantly try to refold misfolded proteins with TLC, but they too can be overwhelmed if the temperature is just too hot.
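
(For reference, the distinction being drawn is between minimizing the energy $E$ and minimizing the free energy $F = E - TS$, where $S$ is the entropy and $T$ the temperature; as $T$ rises, the entropy term increasingly favors the many disordered configurations over the few folded ones.)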

Not all proteins spontaneously fold with any appreciable efficiency on their own. Some only fold when some other machine comes in and injects some sort of special stabilizing core or adds chemical accessories (i.e. heme groups, posttranslational modifications). Some don't fold in water but do fold in the cell membrane (a very chemically different environment). Some can only find their proper fold when they are paired up with other molecules that tango with them in a sort of handshake protocol: these intrinsically disordered proteins therefore actually need to be unstructured to function properly. And even absent heat or environmental stress, some proteins just don't fold on short enough timescales to be useful to the host, so they get external assistance from "chaperone" proteins (of which the heat shock proteins I mentioned earlier are a subclass) - they sort of look like the planet killer from ST:TOS and require an energy source to finish folding the protein (that is, they do work on it, so forget the limitations of "free" energy). Therefore, we could not really hope to understand anything I described in this paragraph from a folding@home type of single-molecule simulation, at least until we have *ahem* quantum computers that will let us simulate an entire cell at once.

Lastly, the prion exception is simply this: the lowest free energy state of the pathogenic prion form is only possible in the context of being next to a misfolded seed, which adds to itself and grows to become a large aggregate. In isolation, a single prion protein would not prefer to be in that state, so it is already in its free energy minimum - but its default behavior changes under the influence of bad apples (don't we all know someone like that?). Since the proteasome garbage collecting machinery I alluded to earlier can only handle individual misfolded proteins, not large aggregates, it ultimately can't undo the damage (the same goes for Alzheimer's and other amyloid diseases).

...Lastly, the prion exception is simply this: the lowest free energy state of the pathogenic prion form is only possible in the context of being next to a misfolded seed, which adds to itself and grows to become a large aggregate. In isolation, a single prion protein would not prefer to be in that state, so it is already in its free energy minimum - but its default behavior changes under the influence of bad apples (don't we all know someone like that?). Since the proteasome garbage collecting machinery I alluded to earlier can only handle individual misfolded proteins, not large aggregates, it ultimately can't undo the damage (the same goes for Alzheimer's and other amyloid diseases).

I would like to add that certain mutations make prion precursors more prone to misfolding, probably by lowering the energy of the misfolded form as compared to the native. So while some turn bad by association, others are bad by birth.

It's a rather subtle thing, I think. The SQUID array would solve the protein problem they threw at it even if everything were operating in the classical regime. But, D-Wave claims, because their simulations of the machine agree with its actual operation, and the simulations contain the quantum aspects (namely the coherence between the different SQUIDs), the coherence must be playing a role.

But, on its face, that is simply not true. If the coherence faded very fast and everything were actually classical, the simulations would reflect this. (In this case, think of coherence as the ability to predict the phase of the current in one SQUID at some time in the future by measuring the phase of the current in a different SQUID now.)

The first step would be to delve into the simulation, which explicitly contains the coherences, and examine how they change as a function of time. Show that, at least in simulation, the coherence is there, and then leverage experimental agreement to say that it is there in the machine. Even then, you haven't shown that the coherence speeds things up. To do that, you need to run a second simulation with artificially shortened dephasing times (so the coherence does vanish), and show that the annealing process is much slower. Then, I would say, the claim is on very strong ground.

As it stands, I am giving them the benefit of the doubt that the first step was taken (they did look into the model and saw that there was coherence and saw that it was good), and that they thought reporting on it would detract from the focus of the paper.

Thanks for an incredibly informative and insightful analysis; that makes a lot more sense to me now. I do think it should have been in the article, IMHO, as the context I was trying to inject is that one shouldn't give scrutiny of the quantum computing aspect a free pass, thinking the primary significance of this work is the potential application to folding. I totally get that it's not; it could have been solving Sudoku squares and that's fine too. But the popular press has been covering this as "D-Wave solves the protein folding problem with quantum computing," which frankly makes my insides turn a little, so you can see where I'm coming from.

Well, I did put a shortened version of this in the article:

"But, as with all things, the picture is still incomplete. What I had hoped the paper would contain was a comparison between a full quantum simulation and a classical simulation. Let's imagine for a moment that the operating device loses coherence within a few nanoseconds, and after that, everything is classical. If the quantum simulation is accurate, it will reflect the loss of coherence, give the classical results, and agree with experimental results. A simulation that intrinsically assumes a lack of coherence (in other words, a classical model) will only agree if coherence is lost.

By comparing these two simulations with experiments, we would be able to be sure that the quantum part of the quantum optimizer was important. More interestingly, we would be able to make estimates of how the coherence of the array was decaying with time and distance."

Unfortunately, that passage didn't come out as clearly as what I managed in the comments.