Could this quantum computer be the real deal?

Researchers demonstrate a circuit that has all the components required for a …

I have been writing about quantum computing for a while now. If you look at my recent writing, though, you won't find much about quantum computing. Why? Well, it all felt a little repetitive. The publications were still coming, but each new one seemed very much like the previous one. I'm not being cynical here; sometimes you just burn out on a subject.

In that light, it takes something special to attract my attention. It turns out that making something that looks and feels like a complete quantum computer—albeit on the smallest of scales—will definitely attract my attention. What we have here, ladies and gentlemen, is nothing more or less than the first quantum microprocessor.

Quantum computing has turned out to be a challenge because it relies on encoding information in quantum bits (qubits) that have two fundamental properties. The first is coherence, which allows qubit states to naturally change in a synchronized manner. The second is quantum entanglement, which correlates the states of different qubits with one another. When we perform operations and measurements on a qubit that is entangled with another qubit, we automatically learn about and modify the state of its partner. This provides a sort of quasi-parallelism that allows a quantum system to perform some calculations faster than a classical computer.

But a computer is more than its bits. You need a register to hold qubits and perform operations on them. You need a memory, so that you can store qubits between operations. And you need to be able to initialize and read out the qubits so that you can begin and end a calculation. Now, there are groups of researchers who have done all of these separately. And, using trapped ions, some groups can even claim to have done the whole lot together. But I don't think anyone seriously thinks that tables full of optics, lasers, and vacuum systems are the way to quantum computing nirvana.

No, quantum computing nirvana is firmly in the realm of solid-state physics. Unfortunately, this is where the problems begin. Qubits don't last long in the solid state. Entanglement lasts a few hundred nanoseconds and coherence decays away faster than a banking regulation. Yet despite these problems, a group of researchers have managed to make an entire quantum microprocessor out of superconducting qubits.

Admittedly, the computer is rather simple: a two-qubit register made from SQUIDs (superconducting quantum interference devices), two additional SQUIDs that can be used to zero the register (and act as readout), and microwave resonator striplines, which act as memory. The most significant part, however, is a bus that couples the two register qubits together. This bus enables the researchers to program the register to perform different logic operations. That is what makes this something I am willing to call a microprocessor—though it can't load up a sequential set of instructions into a memory element and execute them.

This all works through the magic of magnetic fields. (What, do we understand magnets now?) The microwave frequency that a SQUID likes to operate at depends on the magnetic field it is exposed to. The resonators have a fixed geometry that will only resonate at one microwave frequency. So a memory can be read or written by tuning the magnetic field until the SQUID's frequency matches that of the resonator. The same is true of the zeroing SQUIDs.

As a result, operations are really just a case of ramping magnetic fields up and down. Operations between qubits are performed by applying microwave pulses on the bus between them.
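To make the frequency-matching idea concrete, here is a toy sketch. The square-root-of-cosine flux dependence is the textbook form for a symmetric SQUID-based qubit; the specific frequencies and the brute-force scan are purely illustrative, not taken from the paper.

```python
import math

PHI0 = 1.0     # flux quantum, normalized units (assumption)
F_MAX = 6.0e9  # qubit frequency at zero flux, Hz (illustrative value)

def squid_frequency(flux):
    """Textbook flux dependence of a symmetric SQUID-based qubit:
    f(Phi) = f_max * sqrt(|cos(pi * Phi / Phi0)|)."""
    return F_MAX * math.sqrt(abs(math.cos(math.pi * flux / PHI0)))

def flux_for_frequency(f_target, steps=200_000):
    """Naive scan over bias flux in [0, Phi0/2) for the point where the
    qubit is tuned onto a fixed-frequency resonator -- conceptually,
    this is how a particular memory element gets selected."""
    best = min((abs(squid_frequency(0.5 * PHI0 * k / steps) - f_target),
                0.5 * PHI0 * k / steps) for k in range(steps))
    return best[1]

resonator = 5.0e9  # fixed stripline resonator frequency, Hz (illustrative)
bias = flux_for_frequency(resonator)
print(bias, squid_frequency(bias))
```

Ramping the bias flux to `bias` brings the qubit into resonance with the memory element; ramping it away again detunes them, which is the "ramping magnetic fields up and down" picture described above.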

Conceptually, it is all very simple. It is also likely to scale well, since you just need to be able to choose different operating frequencies for each memory element and qubit.

Of course, the qubits still don't last very long. Their entangled states last 400ns, and the memory holds its value for four times longer. But the microwave pulses required to perform a logic operation are only on the order of 30ns long, so that 400ns is an absolute age.
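A quick back-of-envelope calculation from those two numbers shows why 400ns counts as "an age" here: it leaves room for roughly a dozen sequential gate operations.

```python
entanglement_lifetime = 400e-9               # seconds, from the article
memory_lifetime = 4 * entanglement_lifetime  # "four times longer"
gate_time = 30e-9                            # microwave pulse length, seconds

# How many back-to-back logic operations fit inside one coherence window?
gates_per_coherence_window = int(entanglement_lifetime / gate_time)
print(gates_per_coherence_window)  # 13
```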

No doubt there are plenty of steps, pratfalls, and other interesting hiccups along the way, but this bit of work shows how incremental improvements can come together into something that looks quite spectacular.


Chris Lee
Chris writes for Ars Technica's science section. A physicist by day and science writer by night, he specializes in quantum physics and optics. He is delocalised, living and working in Eindhoven and Enschede, the Netherlands. Email: chris.lee@arstechnica.com // Twitter: @exMamaku

I was given to understand that quantum operations can't actually control which state an entangled partner is in and that the only way to know that the states were correlated is to compare the measurements of each after the fact. I'm assuming this must not be fully accurate...

What does the hardware look like right now? And does the hardware lend itself to miniaturization?

I read an article ages ago about a working quantum computer, but like everything that seems awesome (I was gonna say cool) It needs to be cryogenically frozen and it is the size of a table tennis table or something huge.

So, not at the stage of consumer electronics.

Like everything though, I'm sure it will be miniaturised soon, but I can't imagine people needing processing power 1000s of times faster than today's normal electronics.

What does the hardware look like right now? And does the hardware lend itself to miniaturization?

The basic unit, very very roughly equivalent to an individual transistor, is a single SQUID element. They're thin-film devices consisting of a superconductor (on a quick read-through, this paper didn't mention what material was used) on an insulating substrate. Size scale is on the order of a micron or so across, with a couple of features that are multiple tens of nanometers in width.

Add to that small device, of course, a cryostat capable of cooling to 15 mK and a rack full of microwave gear.

I was given to understand that quantum operations can't actually control which state an entangled partner is in and that the only way to know that the states were correlated is to compare the measurements of each after the fact. I'm assuming this must not be fully accurate...

Partially correct; you really can't "control" the specific state of an entangled partner via entanglement. However, if you're assuming only two states ("A" and "B"), then you'll know which state both partners are in simply by taking a measurement of either one. Increase the number of available states, and I believe it turns into a game of probabilities.
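A toy Monte Carlo of that two-outcome case (a maximally correlated pair measured in the same basis) makes the point: each individual result looks random, but measuring one partner always tells you the other. This is a classical sketch of the correlation, not a full quantum simulation.

```python
import random

def measure_entangled_pair():
    """Toy model of a maximally correlated pair measured in the same
    basis: each shot yields one random shared outcome, identical for
    both partners."""
    outcome = random.choice(["A", "B"])
    return outcome, outcome  # (partner 1, partner 2)

shots = [measure_entangled_pair() for _ in range(10_000)]
assert all(a == b for a, b in shots)  # one measurement reveals both
frac_A = sum(1 for a, _ in shots if a == "A") / len(shots)
print(frac_A)  # near 0.5: individual outcomes still look random
```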

Dumb question. If the qubits only last 400ns, does that mean the whole computer only works for 400ns?

Theoretically, how would a practical, everyday quantum computer work in the future (say in a consumer product)? Will that be possible even?

To your first question: yes, without some kind of error correction. This is one of the problems that plagues superconducting qubit architectures right now. There are sophisticated schemes, called dynamical decoupling protocols, that allow you to stretch this time out by quite a bit, but these involve more operations, so there is always some tradeoff.

For your second question, probably not in a purely superconducting architecture. More likely will be some hybrid architecture with superconducting elements to handle non-local qubit interactions, atomic/quantum dot architectures to handle data storage, and an optical setup to do long-range transmission.

Admittedly, the computer is rather simple: a two-qubit register made from SQUIDs (superconducting quantum interference devices), two additional SQUIDs that can be used to zero the register (and act as readout), and microwave resonator striplines, which act as memory.

John Martinis' group at UCSB -- the people behind this paper -- are at the forefront of superconducting quantum computing. They do a lot of neat things, are making good progress towards a real quantum computer, and are definitely worth watching. That said, I wouldn't hold your breath for a working quantum computer.

There are a host of possible quantum computing methods, each with strengths and weaknesses. The biggest strength of the superconducting approach is ease of coupling qubits together (the other big strength is that it mostly uses standard semiconductor industry materials and techniques to make its devices, at least for the moment); its biggest weakness is short coherence times. The situation is basically reversed with optical quantum computers using trapped ions. Other methods (like quantum dots) have other strengths and weaknesses.

Still, there is hope. I agree that solid state devices are ultimately the way to go. The short coherence times for superconducting qubits are mostly due to materials problems (e.g. lossy dielectrics in capacitors), so switching to different materials could be the solution. However, that usually means fabrication is more difficult (it means using less standard semiconductor industry stuff). The Martinis group and others are working hard on these new materials, so stay tuned.

Can anyone comment on D-Wave's hardware? I remember a lot of hooplah about their 'quantum computer' a few years back, not heard a lot since.

I was thinking the same thing, according to wiki they are selling systems to Lockheed Martin. They are no doubt more secretive about the actual inner workings, since they are trying to make money on this. But surely at this point someone should be able to at least confirm they have a quantum computer! Someone at Lockheed maybe??

Can anyone comment on D-Wave's hardware? I remember a lot of hooplah about their 'quantum computer' a few years back, not heard a lot since.

Their hardware is more accurately called an adiabatic quantum optimization (or annealing) device - it is probably not a universal quantum computer, but it can still solve certain non-trivial problems. There have been a lot of recent arguments (including some Ars stories) principally between Neil Dickson (D-Wave) and Boris Altshuler regarding its capabilities.

Can anyone comment on D-Wave's hardware? I remember a lot of hooplah about their 'quantum computer' a few years back, not heard a lot since.

There's been some question as to whether or not they ever really built a "quantum computer" in the first place. D-Wave claimed to have produced a computer capable of processing 128 qubits at a time; no one else has been able to build a system that can handle more than 10-12.

Basically, D-Wave made a bunch of incredible, yet unsubstantiated claims.

It is exciting how quickly noise rates have been dropping in these systems, it has been following an exponential curve. (I'd have to look it up, maybe a factor of 10 decrease every year or two?) If this can continue, then in the next three to five years the noise rates will be fantastic.

Also, this seems to be a group that is interested in scaling their devices, not just in the fundamental physics. There are plenty of interesting experiments to run on these, and the bigger the better.

Hey so could a 2-bit quantum computer be faster and more powerful than a 64-bit conventional electrical one? Just curious if maybe researchers are more interested in very fast smaller byte sized quantum processors at incredible speeds than larger more expensive larger byte sized processors.... I imagine a 1 THz 2-bit processor is better than a 1 GHz 64-bit processor?

Hey so could a 2-bit quantum computer be faster and more powerful than a 64-bit conventional electrical one? Just curious if maybe researchers are more interested in very fast smaller byte sized quantum processors at incredible speeds than larger more expensive larger byte sized processors.... I imagine a 1 THz 2-bit processor is better than a 1 GHz 64-bit processor?

Recall that they said the SQUID could perform an operation in around 30 ns. That means the "computer" is only capable of running at about 33 MHz, not 1 THz. Also realize that you're not going to be hooking a monitor up to your quantum computer and both playing and not playing Crysis at the same time. These things need to be controlled and measured by conventional processors, and so by necessity will have to operate several times slower than those processors. Even the best GaAs transistors can only operate at speeds around 100 GHz.
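That 33 MHz figure is just the reciprocal of the pulse length; a one-line sanity check:

```python
gate_time = 30e-9          # seconds per operation, from the article
max_rate_hz = 1 / gate_time
print(max_rate_hz / 1e6)   # ~33.3 MHz -- nowhere near 1 THz
```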

Can anyone comment on D-Wave's hardware? I remember a lot of hooplah about their 'quantum computer' a few years back, not heard a lot since.

Well D-Wave sold a single '12 qubit' system at $10 million in 2009 and I think that's about it. So far nobody has confirmed that their machine is a 'real' quantum computer. They've been pretty quiet about it recently too. All this makes me believe that their system is not a 'real' quantum computer.

And even if the D-Wave machine really is a 128 qubit adiabatic QC, there's still some uncertainty about what exactly the theoretical capabilities of an adiabatic system really are. It's not clear if you can run, for example, Shor's algorithm on an adiabatic QC. I'm no good at quantum logic and math, so correct me if I'm wrong please.

No. A fully coherent and fully entangled adiabatic quantum computer is identical to a gate-based quantum computer. D-wave has stepped away from claiming they have that, and, rather, say that their machine is likely partially coherent and partially entangled. No one knows if such a machine has any benefit over a classical computer. I don't think it is very clear how one would test to see if you had such a machine and how you would demonstrate that it is faster than a classical equivalent.

Does 'entangled qubits' imply instantaneous correlation? Is this actually measurable? It kind of sounds like the scientists don't as yet have a complete explanation for 'spooky interaction at a distance' but the engineers are going ahead and building the darn thing anyway. I think that just about qualifies for 'cool' - go engineers.

Does 'entangled qubits' imply instantaneous correlation? Is this actually measurable? It kind of sounds like the scientists don't as yet have a complete explanation for 'spooky interaction at a distance' but the engineers are going ahead and building the darn thing anyway. I think that just about qualifies for 'cool' - go engineers.

Entanglement implies a certain kind of correlation. The fact that it works even if the measurements are done simultaneously is what a diminishing number of modern physicists find unsettling, but entanglement does not intrinsically depend on any sort of time ordering of the measurements. The existence of entanglement or the way to measure it -- at least for simple systems like two qubits -- is not controversial at all. Even the skeptics understand that entanglement has been repeatedly demonstrated experimentally, they just think there is also something more subtle going on.

But how much material does it take to get to such a ridiculously cold temperature here, and how much less might be needed with much colder ambient?

Certainly this is not saying, "don't build it on earth dumbasses!" It would be foolish to build something untested after having blasted it into space on a Russian rocket or something. Once we're up to speed on how to get it working, though, wouldn't every advantage be of massive benefit?

I'm just thinking in the macro. If it takes 50ft^2 of cooling equipment per 1in^2 of computer here, that's wildly inefficient for space and would probably rule out a large-scale operation depending on geometry - i.e. getting the cold to the center of the device. However, if you can reduce that space use via an exotic but known factor, it would seem to make the whole thing more feasible. Expensive as all shit, but feasible.

But how much material does it take to get to such a ridiculously cold temperature here, and how much less might be needed with much colder ambient?

It isn't that big in the grand scheme of things. Their cryostat is fairly big, but still room-sized, not building-sized. There are systems that get almost as cold but are much smaller: like two 19-inch racks.

Running in space wouldn't help you. The CMB is around 2 K, but the cooling power available is tiny -- in the vacuum of space the only way to get rid of heat is black-body radiation. There are a lot of conventional electronics to run this stuff, and they need to run near room temperature, and they generate heat of their own. One of the biggest heat loads on a cryostat like this is cooling all the wires connecting your SQUIDs to the room temperature control electronics. So you wouldn't actually be able to start much colder.
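The "tiny cooling power" claim is easy to check with a Stefan-Boltzmann estimate. The radiator area and temperatures below are illustrative assumptions, not figures from the paper, but they show how little heat a cryogenic surface can dump into space.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def net_radiated_power(area_m2, t_hot, t_cold):
    """Net black-body power radiated by a surface at t_hot facing an
    environment at t_cold (emissivity 1 assumed)."""
    return SIGMA * area_m2 * (t_hot**4 - t_cold**4)

# A 1 m^2 radiator at 4 K facing the ~2.7 K microwave background:
p = net_radiated_power(1.0, 4.0, 2.7)
print(p)  # on the order of 1e-5 W -- microwatts of cooling power
```

Compare that to the watts dissipated by room-temperature control electronics and conducted down the wiring: radiating into space simply cannot carry the load.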

There are good reasons to put experiments in space, but not needing to be cold or in high vacuum (the vacuum of space is better than any laboratory vacuum, but inside a satellite is much worse). You can buy off the shelf systems that do what you need, and run them in the comfort of your local physics department.

Does 'entangled qubits' imply instantaneous correlation? Is this actually measurable? It kind of sounds like the scientists don't as yet have a complete explanation for 'spooky interaction at a distance' but the engineers are going ahead and building the darn thing anyway. I think that just about qualifies for 'cool' - go engineers.

Entanglement implies a certain kind of correlation. The fact that it works even if the measurements are done simultaneously is what a diminishing number of modern physicists find unsettling, but entanglement does not intrinsically depend on any sort of time ordering of the measurements. The existence of entanglement or the way to measure it -- at least for simple systems like two qubits -- is not controversial at all. Even the skeptics understand that entanglement has been repeatedly demonstrated experimentally, they just think there is also something more subtle going on.

Understood; and computers work despite quantum mechanics being at odds with general relativity. I guess the point I was - albeit clumsily - making is that in these areas practical implementations advance regardless of grey areas in theory. I bet one of your local units of currency (but only one!) that right now an engineer in a bank somewhere is thinking: 'hmmm, high energy neutrinos skipping through the bulk, great way to speed up communications for algorithmic trading' (hf traders are apparently getting cheesed off with that pesky speed of light thing).

Please correct me if I'm wrong. I thought I read that taking measurements, or observing in anyway, effectively breaks the quantum entanglement state between two particles.

Is this no longer (or never was) the case?

Generally yes. To measure entanglement you have to do one of two things: make a joint measurement on both entangled particles (that is, bring them back together and allow them to interact in a way that depends on their joint entangled state), or make statistical measurements on an ensemble of identically prepared systems. The latter is the most common way to simply show you have entanglement between two systems. You just repeat the experiment a few thousand times and you can get a statistical reproduction of the quantum state. This is exactly equivalent to flipping a coin 100 times to find out what fraction of the time it gives heads or tails.

The former is actually what you have to do in a quantum computer, because those 'entanglement dependent interactions' are exactly the type of thing that makes fundamental gate operations for a quantum computer. For instance, the controlled-NOT gate (one of the universal 2-qubit gates) will turn a specific entangled state into a specific non-entangled state. Then you simply measure that you are in that specific unentangled state, and you have shown that the original was entangled. In the end, this is still done statistically, but it can be much easier. If I try to stretch the coin flipping analogy a little too far, now you have two coins that will always come up heads if the original state was entangled. If the state was not entangled, my coins will be random, so they will come up both heads 25% of the time, but if I get 10 heads in a row I can be confident that the intermediate state was very close to the entangled state I wanted.
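This disentangling trick can be checked with a minimal state-vector calculation: a CNOT maps the Bell state (|00>+|11>)/sqrt(2) to a product state, and a Hadamard on the control then makes the measurement outcome deterministic. This is a generic textbook sketch, not the specific pulse sequence used in the paper.

```python
import numpy as np

# Two-qubit basis order: |00>, |01>, |10>, |11>
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

CNOT = np.array([[1, 0, 0, 0],               # control = first qubit
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2) # Hadamard
H_on_control = np.kron(H, np.eye(2))

# CNOT disentangles the Bell state into (|0>+|1>)/sqrt(2) (x) |0> ...
product = CNOT @ bell
# ... and a Hadamard on the control then gives exactly |00>:
final = H_on_control @ product
print(np.round(final, 6))  # [1. 0. 0. 0.] -- measuring yields 00 every time
```

If the input had not been the entangled Bell state, the final measurement would not come up 00 every shot, which is exactly the statistical signature described above.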

Of course the qubits still don't last very long. Their entangled states last 400ns, and the memory holds its value for four times longer. But the length of microwave pulses required to perform a logic operation are on the order of 30ns, so that 400ns is an absolute age.

so no, not the real deal and definitely not scalable. They need another 3-4 orders of magnitude to do error correction.

John Martinis' group at UCSB -- the people behind this paper -- are at the forefront of superconducting quantum computing. They do a lot of neat things, are making good progress towards a real quantum computer, and are definitely worth watching. That said, I wouldn't hold your breath for a working quantum computer.

There are a host of possible quantum computing methods, each with strengths and weaknesses. The biggest strength of the superconducting approach is ease of coupling qubits together (the other big strength is that it mostly uses standard semiconductor industry materials and techniques to make its devices, at least for the moment); its biggest weakness is short coherence times. The situation is basically reversed with optical quantum computers using trapped ions. Other methods (like quantum dots) have other strengths and weaknesses.

Still, there is hope. I agree that solid state devices are ultimately the way to go. The short coherence times for superconducting qubits are mostly due to materials problems (e.g. lossy dielectrics in capacitors), so switching to different materials could be the solution. However, that usually means fabrication is more difficult (it means using less standard semiconductor industry stuff). The Martinis group and others are working hard on these new materials, so stay tuned.

In the "near term" we're looking at hybrid approaches. Something like NV centers for storage, QDs for gating and ions for readout. Try to play to the strengths of each approach. The main problem there is getting the qubits in and out of the system and doing up/down conversion.

No. A fully coherent and fully entangled adiabatic quantum computer is identical to a gate-based quantum computer. D-wave has stepped away from claiming they have that, and, rather, say that their machine is likely partially coherent and partially entangled. No one knows if such a machine has any benefit over a classical computer. I don't think it is very clear how one would test to see if you had such a machine and how you would demonstrate that it is faster than a classical equivalent.

Actually, it's been proven that the closer together the possible quantum states you're trying to distinguish are, the more slowly the energy of the system must be changed, or the computer will "skip" like a scratched record and screw up your answer. So an adiabatic quantum computer isn't exponentially faster than a classical computer: the larger your search space, the slower it goes, just like a classical computer.