Majorana fermions will not be the perfect "decoherence-free" qubit.

Everyone loves a more powerful computer, right? This is probably the underlying motivation that drives much of modern quantum computing research. "I've got a superBogus™ gen 2 quantum processor, how about you?" Okay, maybe not the primary motivation. Along the way to quantum computing geek nirvana, scientists are learning an awful lot about quantum mechanics. These are not discoveries that shake the foundations of physics; instead, we are learning about the practicalities of manipulating quantum properties.

One of the new kids on the block is called topologically protected quantum computing. The basic idea is to create a setup where the shape or layout of a quantum system self-stabilizes, making it impossible for the environment to affect it. Even at a distance, pairs of particles can link up, creating something called a Majorana fermion. This was thought to be immune to something called decoherence, making it the perfect object for a qubit. Unfortunately, a bit of thought shows that this is entirely untrue.

Majorana fermion?

Before we get started I should admit that I know very little about the Majorana fermion, and not a great deal about topological quantum computing either, so the details will be a little sparse in what follows.

Back in the late 1920s and early 1930s, quantum mechanics was already established with respect to atomic structure. But Einstein had shown that mass and energy were interchangeable, and that space and time were inseparably linked. Quantum mechanics, at the time, couldn't cope with such things.

Dirac came up with a solution that involved a soup of particles with negative energy, with every known particle having an anti-particle. It was all rather confusing, but it had one redeeming feature: it worked. A young physicist named Ettore Majorana played around with Dirac's work, showing that it implied the existence of a fermion that was its own anti-particle. (A fermion is a particle that obeys Pauli's exclusion principle.)

People have been searching for the Majorana fermion ever since, to no avail. At least, not in the form that Majorana probably thought of it: a single fundamental particle.

However, solid state physics allows the existence of quasi-particles. These are composite creatures where multiple objects take on the characteristics of a single, unique particle. A plasmon is an example: light and electrons in a metal couple to create a slow-moving excitation with some of the character of light and some of the character of charge.

For our story, Cooper pairs are the quasi-particle we are interested in. Superconductors support currents without resistance through the formation of Cooper pairs by their electrons. These are composites of two electrons that behave as a single particle.

I'm wearing my topological protection, Sir

So, supercurrents consist of pairs of electrons that can move smoothly through a material. Now, imagine making a one-dimensional wire of superconducting atoms. The electrons have no choice but to pair up with their nearest neighbor and move in single file along the wire. Except this wire isn't connected to anything: the pairs can form, but they can't go anywhere. After a bit of jostling, everything settles down, with all the electrons paired up from the center outwards, leaving a single, unpaired electron at each end of the wire.

These unpaired electrons, despite their large separation, form a composite particle that behaves exactly like a Majorana fermion. Because the electrons in the wire cannot modify the state of the Majorana fermion, the state is protected. Electrons cannot be scattered into or out of that state.
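
If you'd like to see this in numbers, the standard idealization of such a wire is Kitaev's toy model of a one-dimensional p-wave superconductor. Here is a minimal numerical sketch of that model; the chain length and the hopping, pairing, and chemical potential values below are illustrative choices of mine, not anything taken from the paper.

```python
import numpy as np

# Kitaev's toy model of a spinless, p-wave superconducting wire.
# Illustrative parameters at the exactly solvable "sweet spot"
# (mu = 0, t = delta), deep inside the topological phase.
N, t, delta, mu = 40, 1.0, 1.0, 0.0

h = -mu * np.eye(N)          # on-site energy and hopping block
d = np.zeros((N, N))         # superconducting pairing block
for j in range(N - 1):
    h[j, j + 1] = h[j + 1, j] = -t
    d[j, j + 1], d[j + 1, j] = delta, -delta   # pairing is antisymmetric

# Bogoliubov-de Gennes Hamiltonian in the (c, c-dagger) basis.
# Everything here is real, so -d.conj() is -d and -h.T is -h.
H = np.block([[h, d], [-d, -h]])
energies, modes = np.linalg.eigh(H)

# The two eigenvalues closest to zero are the Majorana end modes.
idx = np.argsort(np.abs(energies))[:2]
print("near-zero energies:", energies[idx])
for i in idx:
    weight = modes[:N, i] ** 2 + modes[N:, i] ** 2   # per-site weight
    print("fraction of mode on the two end sites:", weight[0] + weight[-1])
```

Diagonalizing this turns up exactly two modes at (numerically) zero energy, with all of their weight on the first and last sites of the chain, while every other mode sits at finite energy. That energy gap is the protection described above.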

It seems like the perfect system for a qubit. Here is a quantum system that has well-defined states that won't disappear, no matter how long you leave them. All you need to do is maintain the superconducting state.

Nature, it doesn't like quantum physicists

Unfortunately, this is not true after all, say a trio of German physicists. Only the existence of the Majorana fermion is protected—its internal state is subject to everything outside the wire.

To understand the implications of this, let's take a quick step back. A qubit is the basic unit of quantum computing. The simplest implementation of a qubit is a quantum system that has two states. A quantum system doesn't have to be in either of those states; it can be in a mixture of both. It will only choose a single state when we force it to by making a measurement. The qubit represents the probability of obtaining either state one or state two when we make a measurement. Computing is simply the modification of a qubit's probabilities in a controlled fashion.
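
To make that concrete, here is a toy numpy sketch of the same description (my own illustration, not any real quantum hardware API): the qubit is a pair of complex amplitudes, the measurement probabilities are their squared magnitudes, and a computation step is a unitary matrix that reshuffles those amplitudes.

```python
import numpy as np

# A qubit: complex amplitudes for the two states |0> and |1>.
# Squared magnitudes give the probability of each measurement outcome.
ket0 = np.array([1.0, 0.0], dtype=complex)   # definitely "state one"

# A computing step: a unitary matrix that reshuffles the amplitudes.
# The Hadamard gate turns |0> into an equal mixture of both outcomes.
hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

psi = hadamard @ ket0
probs = np.abs(psi) ** 2
print("P(state one), P(state two):", probs)    # [0.5, 0.5]

# Forcing the system to choose: simulate a single measurement.
print("measured:", np.random.choice([0, 1], p=probs))
```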

The problem with this idea is that these probabilities are modified by practically everything. So, although you might be able to set the qubit's state, you cannot be sure that it will evolve in a predictable way. This loss of predictability is called decoherence, and it is one of the big limiting factors in most quantum computer implementations.
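
Here's what that looks like in the same toy numpy language (again, an illustration of the idea, not of any specific hardware): we prepare a qubit so that a later step should return it to a definite state with certainty, but random phase kicks from the environment erode that certainty toward a coin flip.

```python
import numpy as np

rng = np.random.default_rng(0)
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def predictability(noise, trials=5000):
    """Average probability of recovering |0> after an environmental phase kick."""
    p0 = 0.0
    for _ in range(trials):
        psi = hadamard @ np.array([1.0, 0.0])       # a definite superposition
        kick = np.exp(1j * rng.normal(0.0, noise))  # uncontrolled phase noise
        psi = hadamard @ np.array([psi[0], kick * psi[1]])
        p0 += abs(psi[0]) ** 2                      # should be 1 without noise
    return p0 / trials

print("no noise:    P(0) =", predictability(0.0))   # 1.0: fully predictable
print("phase noise: P(0) =", predictability(3.0))   # ~0.5: decohered
```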

Our Majorana fermion was thought to be protected from decoherence, because the other electrons in the superconducting wire cannot influence the state of the two electrons at the end. Hence, no decoherence.

Amazingly, though, no one had considered the influence of electrons outside of the wire. This latest paper, using nothing but a series of simple arguments, shows that this is a problem. Although electrons outside of the wire do not destroy the Majorana fermion, they can certainly modify its state. In other words, it is no more immune to noise than any other implementation of a qubit.

I must admit that I was not surprised by this finding. Why? Well, consider this: if you want to set or read the state of a qubit, then you have to be able to get hold of the qubit and tell it what to do. The same channel used to talk to the qubit will have noise in it, and that noise will cause the qubit to lose its coherence over time. In other words, the qubit that is absolutely immune to noise is also the qubit we cannot set or read or, in any conceivable way, play with.

Does this mean that the last few years of research were useless? Certainly not. In learning about these systems, we learn an awful lot about how quantum objects behave in a practical way. Since we are touting nano-everything these days, this sort of research will find an application (directly or indirectly) somewhere. More specifically, even if we accept the argument that no useful qubit is immune to noise, we still need to find convenient ways of creating and using qubits. Majorana fermions may still work out in that respect.

Am I the only person who won't be the least bit surprised if quantum computing offers no more than proportional gains in speed over traditional computing machinery?

Then prepare to be surprised. It is already an established fact that factoring large numbers is fundamentally, radically faster on a quantum computer (edit: short of a fundamental breakthrough in classical factoring algorithms). Shor's algorithm isn't conjecture.

As an aside, isn't a one dimensional anything a point? Aren't we theoretically talking about a two dimensional object here which would be a line with no third dimension, therefore a line with no thickness? Been a while since I've pondered anything Euclidean so forgive me.

I thought that the great hope for topological quantum computing wasn't these Majorana fermions, but in braiding non-abelian anyons to encode qubits. Does this result have any impact on that prospect? I don't quite see how this particular result has much to do with topological quantum computing in the first place.

They discuss that as well towards the end of the paper. They claim that these will only offer protection provided that the inevitable imperfections do not result in additional low-lying states.

Noise is annoying, but it's not an impossible obstacle by any means. Noisy communication channels are no problem if you're using the right code to add the needed level of redundancy and error correction. But you'll need more coherent qubits than anyone has yet achieved...

Sure, but the specific claim was that these things are protected from the influence of noise by their very nature. Discovering that this is untrue is worth noting.
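
For anyone curious what "the right code" means at its very simplest, here is a toy classical repetition code in Python (quantum error correction is subtler, since a qubit can't simply be copied, but the trade of redundancy for reliability is the same idea):

```python
import random

def send(bit, flip_prob):
    """Send three noisy copies of a bit and decode by majority vote."""
    copies = [bit if random.random() > flip_prob else 1 - bit for _ in range(3)]
    return int(sum(copies) >= 2)

trials, p = 100_000, 0.1
errors = sum(send(0, p) != 0 for _ in range(trials))

# The raw channel corrupts 10% of bits; the coded channel only fails when
# at least two of the three copies flip: 3p^2(1-p) + p^3 = 2.8%.
print("coded error rate:", errors / trials)
```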

Then prepare to be surprised. It is already an established fact that factoring large numbers is fundamentally radically faster using a quantum computer

This bears clarification.

Is a UQC (universal quantum computer) faster than a UTM (universal Turing machine)?

Yes. That much is proven and not in dispute. But the word "quantum computer" isn't generally taken to mean a UQC, but an actual working computer system which implements a UQC.

People forget that FOR THE PURPOSE FOR WHICH IT WAS BUILT... Colossus was faster than most '90s-era 32-bit processors.

There are two ways to create a UQC. The first is to use qbits. The second is to zoom out so that instead of 64-bit machines, you have 64-byte machines with each byte representing a qbit. You could purpose-build a qbit computer in this way to take advantage of the mathematical strengths of a UQC over a UTM.

But the complexity would be absurd. Just like the old computers that used decimal or trinary computations instead of binary.

There are two ways to create a UQC. The first is to use qbits. The second is to zoom out so that instead of 64-bit machines, you have 64-byte machines with each byte representing a qbit. You could purpose-build a qbit computer in this way to take advantage of the mathematical strengths of a UQC over a UTM.

But the complexity would be absurd. Just like the old computers that used decimal or trinary computations instead of binary.

The interesting part of a quantum computer is the entanglement, not just the superposition. What you've described is just a computer that's got some likelihood of being wrong! Actually, quantum computers are exponentially slower than classical ones for most operations, since you have to run the same code multiple times and statistically analyze the answers. They are only faster for algorithms that can exploit the *entanglement*, e.g. quantum Fourier transforms.
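
To illustrate the distinction the commenter is drawing, here is a toy numpy sketch of an entangled pair (an illustration only): each qubit on its own looks like a fair coin, but the joint outcomes are perfectly correlated, and it is correlations like these, holding across different measurement bases, that quantum algorithms exploit.

```python
import numpy as np

rng = np.random.default_rng(1)

# A Bell state (|00> + |11>)/sqrt(2): amplitudes over outcomes 00, 01, 10, 11.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
probs = np.abs(bell) ** 2

# Each qubit alone is 50/50, but only '00' and '11' ever appear together.
samples = rng.choice(4, size=10, p=probs)
print([format(s, "02b") for s in samples])
```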

Laserboy, can you link to the arXiv version as well as the DOI? Paywall and all that. (Thanks tie.)

While this is an interesting paper, it does not, as far as I can see, affect the implementation of topological QC in the one-way model (e.g., using a 3D cluster state). A fault tolerance threshold for this system was proved a while ago that made allowable error probabilities as high as three quarters of a percent (at least an order of magnitude better than non-measurement-based QC models). This includes initialization, entangling gates and readout. Correlated errors of higher order (not included in the threshold) can be made arbitrarily small because cluster states can be naturally distributed over large distances with good single-qubit isolation.

There are two ways to create a UQC. The first is to use qbits. The second is to zoom out so that instead of 64-bit machines, you have 64-byte machines with each byte representing a qbit. You could purpose-build a qbit computer in this way to take advantage of the mathematical strengths of a UQC over a UTM.

Perhaps I am misunderstanding what you mean, but this statement is deeply wrong. A qubit is fundamentally different from a classical bit, and no amount of bits can give you a qubit. It's apples and pears. So the type of computing that you can do with qubits is fundamentally different from the type of computing you can do with classical bits. Practically that translates into the ability to solve "hard" classical problems on a quantum computer, such as factoring. John Preskill wrote a nice overview, available on the arXiv today.

Am I the only person who won't be the least bit surprised if quantum computing offers no more than proportional gains in speed over traditional computing machinery?

It's certainly possible that standard computers are so fast (in terms of operations per second) that, for most tasks, it will take longer to run the polynomial time quantum algorithm than the exponential time standard algorithm.

Still, it can take months or years to factor 1000+ bit integers on a standard computer, and we can be pretty confident that even a "slow" quantum computer would offer at least some significant speedup (whether it would be the instantaneous calculation that polynomial algorithms achieve on standard computers remains to be seen). The Fourier transform speedup (O(N^2) compared to O(N * 2^N)) is just too good. For 1000 bits, that's 1 million compared to 10^304.
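
Those figures check out; here's a quick sanity check of the arithmetic using Python's arbitrary-precision integers:

```python
# Gate counts for a Fourier transform on n = 1000 (qu)bits: the quantum
# QFT needs O(n^2) gates, a classical FFT over the 2^n amplitudes O(n * 2^n).
n = 1000
quantum, classical = n ** 2, n * 2 ** n
print(f"quantum:   ~10^{len(str(quantum)) - 1}")    # ~10^6
print(f"classical: ~10^{len(str(classical)) - 1}")  # ~10^304
```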

Second, I think your instincts were correct and the paper is not particularly important. Topologically protected states are interesting because they are robust to local perturbations and offer longer-lived entangled states. As you said, they cannot be totally immune to decoherence; otherwise, they would be unusable for measurement.

The paper's approach is to assume a large external perturbation, which is not valid in most active research to my knowledge, e.g., fermions in optical lattices. Most experimental systems where these states are being realized are rather isolated from external environments. Cheers, great write-up.

As someone who spent about an hour arguing with my physics TA that the damn cat is dead regardless of whether you look in the box, I bow to your superior knowledge. But this was the most understandable article on quantum physics I've read on Ars yet. I think the editors should put significant papers in the hands of the least knowledgeable people more often: it seems to force them to distill it down to layman's terms.

Chris Lee / Chris writes for Ars Technica's science section. A physicist by day and science writer by night, he specializes in quantum physics and optics. He lives and works in Eindhoven, the Netherlands.