Quantum cryptography: yesterday, today, and tomorrow

Does it have a future? Classical cryptology isn't budging, so it all depends on QKD.

The 'unhackable' QKD system

The main vulnerability in a QKD system is that we never detect a quantum event directly; instead, a detector absorbs a photon and generates a macroscopic current. But two or more photons generate the same current. To make matters worse, detectors never quite behave in an ideal fashion. This means that you can, with care, spoof the detectors into generating a bit sequence of your choosing: think of it as Eve providing Alice and Bob with her preferred key.

These are known as side-channel attacks. In QKD, a side-channel attack is one that exploits differences between the model that guarantees security and the physical implementation of that model. The model assumes, for instance, that the detectors behave in a nearly ideal fashion. In reality, they cannot distinguish between single-photon and multi-photon pulses, and they are blind for a moment right after a detection. By combining these two flaws, an attacker can selectively blind and trigger the detectors to generate the bits of her choosing. All known side-channel attacks on QKD have been based on spoofing the detectors in some way.
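To make that concrete, here is a toy simulation in the spirit of the published detector-blinding (faked-state) attacks. It is a sketch under strong assumptions, not a model of any real product: the energy thresholds, the pulse-splitting behavior, and all the helper names are invented for illustration.

```python
import random

# Toy detector-blinding attack on a BB84-style link. Assumption: bright
# light has driven Bob's detectors into a "linear" mode where they click
# only if the incoming pulse energy exceeds a threshold.

CLICK_THRESHOLD = 1.0    # illustrative energy needed to force a click

def blinded_detectors(energy_0, energy_1):
    """Return Bob's bit, or None if no (or an ambiguous) click occurs."""
    click_0 = energy_0 > CLICK_THRESHOLD
    click_1 = energy_1 > CLICK_THRESHOLD
    if click_0 == click_1:
        return None               # no click or double click: discarded
    return 0 if click_0 else 1

def one_round():
    # Eve resends a bright "faked state" encoding her bit in her basis.
    eve_basis, eve_bit = random.randint(0, 1), random.randint(0, 1)
    bob_basis = random.randint(0, 1)
    if bob_basis == eve_basis:
        # Matching basis: all the pulse energy lands on one detector.
        energies = (2.0, 0.0) if eve_bit == 0 else (0.0, 2.0)
    else:
        # Mismatched basis: the energy splits and neither detector fires.
        energies = (1.0, 1.0)
    return eve_bit, blinded_detectors(*energies)

clicks = agree = 0
for _ in range(100_000):
    eve_bit, bob_bit = one_round()
    if bob_bit is not None:
        clicks += 1
        agree += (bob_bit == eve_bit)

print(f"{clicks} clicks, {agree} agree with Eve")  # every click agrees
```

Because Bob only ever registers a click when his basis happens to match Eve's, the sifted key he builds is exactly the key Eve chose, and nothing in the protocol itself flags the intrusion.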

"One of the reasons that the traditional mathematicians got irritated by quantum physicists was that we claimed that this was perfect security. And, of course, it's provably perfect security, but the implementation matters," says Richdale. "We work a lot with the hacking community ... help them with hacking, help them find potential vulnerabilities, because that is the only way you can really try and test it."

When I brought up some of the recent QKD side-channel attacks, Richdale pointed out that these were carried out under ideal circumstances: researchers could break into the box and hook wires up to the detectors, something unlikely in practice. Nevertheless, these attempts reinforce the idea that, for QKD, the physical implementation matters as much as the software implementation does for a classical cryptography system.

Legré is less defensive, pointing to several cases where different approaches managed to obtain between 98 and 100 percent of the secret key. "But," he says, "once you know the attack it is possible to find a counter measure." It is probably more accurate to say that these attacks show where the implementation differs from the model; once those differences are found, the implementation can be modified to sit closer to the model. Nevertheless, this emphasizes that the battle has not been given up, only that the battleground has shifted from algorithms to the physical implementation of equipment.

QKD systems will come under much more scrutiny in the future, says Legré. "Device independent QKD would be the optimal solution, but..." Legré pauses for a moment. The problem, he says, is the detector loophole: even if you share entanglement resources, under the right circumstances you can fake the correlations you would expect from a pair of entangled particles. Once the loss in the connecting fiber exceeds a certain threshold, you can no longer guarantee that the correlations you observe are due to entanglement. The upshot is that even device independent QKD is not a perfect solution, and it may be even more limited than current QKD systems, in the sense that it cannot be safely used over long distances.

The future of QKD research

The University of Geneva has been one of the leading lights in the development of QKD. Appropriately, Professor Nicolas Gisin is housed in an isolated building that looks like it was built to withstand a French cannonade. Security through the ages, indeed. Although Gisin does work on the practical side, he is more interested in fundamental ideas. But to give all of this a practical flavor, consider the problem of extending QKD to distances beyond 100km.

All current implementations require a direct fiber link between the two end-points of the system, which pretty much limits QKD to point-to-point backups between data centers separated by no more than 100km. The reason is that the probability of a single photon being absorbed increases with distance, so the time it takes to generate a key grows, and with it the time it takes to secure your data transfer. To put it in perspective, at a pulse rate of 10GHz it would take several centuries of trying to successfully transmit a single photon over a 1000km fiber link.
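The arithmetic behind that claim is easy to sketch. Assuming a typical telecom-fiber attenuation of 0.2dB per kilometer (a standard textbook figure, not one quoted in the interview), the transmission probability falls off exponentially with distance:

```python
LOSS_DB_PER_KM = 0.2     # typical telecom fiber attenuation at 1550nm
PULSE_RATE_HZ = 10e9     # the 10GHz source from the example above

def transmission(distance_km):
    """Probability that a single photon survives the fiber."""
    return 10 ** (-LOSS_DB_PER_KM * distance_km / 10)

for d_km in (100, 500, 1000):
    p = transmission(d_km)
    wait_s = 1 / (p * PULSE_RATE_HZ)   # expected wait for one photon
    print(f"{d_km:4d} km: p = {p:.1e}, expected wait = {wait_s:.1e} s "
          f"(~{wait_s / 3.15e7:.1e} years)")
```

At 100km the expected wait is a hundredth of a microsecond; at 1000km it comes out around 10^10 seconds, or roughly three centuries, which is where the figure above comes from.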

In classical communications, the same losses apply, but we can compensate with amplification. There is a limit to this, though, because amplification is a stimulated process: the presence of a photon causes an excited ion to emit a photon in an identical state. But that excited ion can also emit spontaneously. You might think, "well, I've lost a single photon, but there are many more where that came from; after all, there are trillions of excited ions in an amplifier." Unfortunately, it's not that simple: a spontaneously emitted photon can travel through the amplifier, causing stimulated emission of its own. The result is that we amplify the noise. In a standard fiber link, we place the amplifiers such that the incoming light signal is strong enough to dominate as the source of stimulated emission. But in QKD we only have a single photon, so amplified spontaneous emission overwhelms the signal.
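A rough way to see this is to compare the signal with the amplified spontaneous emission (ASE) at the amplifier output. The sketch below uses the standard single-mode estimate that an amplifier with gain G adds on the order of n_sp(G - 1) noise photons per mode; the particular numbers are illustrative assumptions.

```python
# Signal vs amplified spontaneous emission (ASE) at an amplifier output.
# Standard single-mode estimate: ASE contributes ~ N_SP * (G - 1) noise
# photons per mode, while the signal grows from n_in to G * n_in.

G = 100.0     # amplifier gain (20 dB), an illustrative value
N_SP = 1.5    # spontaneous emission factor; >= 1 in any real amplifier

for n_in in (1e6, 1.0):   # a bright classical pulse vs a single photon
    signal = G * n_in
    ase = N_SP * (G - 1)
    print(f"input {n_in:7.0e} photons -> signal {signal:.1e}, "
          f"ASE {ase:.1e}, signal/ASE {signal / ase:.1e}")
```

With a bright input, the amplified signal dwarfs the noise; with a single photon at the input, the ASE photons outnumber the signal, which is exactly the problem described above.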

Clearly, a different approach is required, and Gisin goes on to describe the quantum repeater. The idea is to use quantum teleportation to transport the photon state over the full fiber length, but to achieve that, we need a pair of entangled photons: one at the beginning of the fiber and one at the end.

This requires something called entanglement swapping. Imagine that we have two light sources that each emit pairs of entangled photons. One photon from each source is brought together on a beamsplitter, where each photon can either be transmitted or reflected. We never know which went where, so the two have to be described as a single quantum state. But each of these photons was already entangled with a partner. Consequently, the two partner photons, which never met and were previously not entangled, are now entangled with each other.

This action can be repeated, so imagine a chain of sources emitting pairs of entangled photons. Each photon gets mixed with one from its neighbor, spreading the entanglement to the left and to the right, until the photons at the two ends of the chain are entangled with each other. This is a great idea in principle, but entanglement swapping is probabilistic: half the time, both photons exit the same port of the beamsplitter and no entanglement is created. So, for two sources, it works half the time; for four sources, a quarter of the time. For a 1000km link, you might want 20 sources, meaning that you would expect to have a one in 2,500 chance of sharing an entangled resource between the two ends of the fiber.
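As a back-of-the-envelope sketch (assuming, purely for illustration, that each swap succeeds independently with probability 1/2, and ignoring fiber loss and detector efficiency), the odds of an entire chain succeeding fall off exponentially with the number of swaps:

```python
# Odds that every swap in a repeater chain succeeds, assuming each swap
# works independently with probability 1/2 (fiber loss and detector
# efficiency ignored). How many swaps a given number of sources needs
# depends on the architecture.

P_SWAP = 0.5

def chain_success(n_swaps):
    """Probability that all n_swaps entanglement swaps succeed."""
    return P_SWAP ** n_swaps

for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} swaps: 1 chance in {1 / chain_success(n):,.0f}")
```

However you count the swaps, the message is the exponential decay; that is what quantum memories, which hold a successful swap's result until the neighboring links catch up, are meant to tame.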

"So developing quantum memories, sources of entangled photons, ways of distributing this entanglement, and all of that over long distances with enough reliability and fidelity—so good quality. This is really the major experimental challenge today" says Gisin. "In our group, we have already demonstrated entanglement over quite long distances. Not 50km inside the lab, and 20km outside the lab."

Gisin's focus, however, has moved to quantum memory. A quantum memory is more often associated with a quantum computer, where you need the coherence and the entanglement to last for long periods of time. "We have one advantage compared to the quantum computer, which is that we don't need so many qubits to interact. Essentially, it's two qubits," says Gisin. This simplicity is important, because nearly every operation on a quantum memory has a certain probability of success, so the fewer qubits you have, the better the odds of overall success.

Gisin is optimistic that, using the rare-earth-doped crystals he works with (crystals that contain a small amount of europium or neodymium), it will be possible to obtain coherence times of a second. At present, the state of the art is milliseconds, but when you consider that the normal lifetimes of the electronic states are on the order of microseconds, you realize what a magnificent achievement even that is. To extend the coherence for such a long time, Gisin and others in the field are importing techniques used in magnetic resonance imaging (MRI).

Essentially, you can view the coherence as the population oscillating between two states in some characteristic way; it is important to realize that this is not physical motion, but changes in the electronic state of the ions. And, just like a swing, if you give the population a kick at the right moment, you keep it swinging. In this case, though, the kick is a light pulse that drives population from one state into another. These techniques, perfected with radio-frequency pulses in the development of nuclear magnetic resonance spectroscopy (the less famous cousin of MRI), are still in their infancy here.

In the optical regime, the electronic states are much more strongly influenced by things like stray magnetic fields, so the pulses that maintain the coherence need to be applied much more frequently. But, on the flip side, each pulse has to carry a certain amount of energy, so if you want lots of closely spaced pulses, they have to be short and intense. And at high intensities, things can go seriously wrong.
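The kick-at-the-right-moment idea is, in essence, dynamical decoupling: periodic pulses flip the system so that slowly varying environmental shifts cancel out. Here is a minimal numerical sketch of that effect; the ensemble size, noise model, and pulse spacings are all illustrative assumptions.

```python
import numpy as np

# Toy dynamical-decoupling simulation: an ensemble of ions with random,
# slowly drifting frequency shifts. Each pi pulse flips the sign of the
# phase the ions accumulate afterwards, so slow shifts cancel out.

rng = np.random.default_rng(1)
N_IONS, N_STEPS, T = 5000, 2000, 1.0
t = (np.arange(N_STEPS) + 0.5) * (T / N_STEPS)
dt = T / N_STEPS

static = rng.normal(0, 20, (N_IONS, 1))  # fixed shift per ion
drift = rng.normal(0, 5, (N_IONS, 1))    # slow drift rate per ion
detuning = static + drift * t            # shape (N_IONS, N_STEPS)

def coherence(n_pulses):
    """|ensemble-average phase factor| after n evenly spaced pi pulses."""
    sign = np.ones(N_STEPS)
    for k in range(1, n_pulses + 1):
        sign[t >= k * T / (n_pulses + 1)] *= -1   # the k-th pi pulse
    phase = (detuning * sign * dt).sum(axis=1)
    return abs(np.exp(1j * phase).mean())

for n in (0, 1, 3, 7, 15):
    print(f"{n:2d} pulses: coherence = {coherence(n):.3f}")
```

With no pulses, the random shifts scramble the ensemble phase almost immediately; packing in more pulses cancels more of the slow drift, which is the sense in which frequent, well-timed kicks keep the swing going.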

Gisin went on to discuss the efficiency of writing to and reading from a quantum memory. In his lab, it is around 20 percent: you have a one in five chance of successfully storing a qubit in the memory and then reading it out again. The record, he tells me, is around 70 percent, but even that is not high enough. As he says, we know how to do it on paper, but the experimental realization is proving to be challenging.

A more subtle point: because the process of distributing entanglement is probabilistic, you won't know which photon is the one you want until after some classical communication has occurred. That means you need to constantly store photons, and then retrieve the photon that represents the qubit that is useful. So, although you may only need to work with pairs of qubits, you still need to store hundreds or thousands of qubits for a little while. In Gisin's case, this is possible because his group works with crystals, and the photon state is stored across billions of ions.

The way this works is very cool, but a bit complicated to explain. The crystals they use are doped with ions, but each ion sits in its own electromagnetic environment, so the color of light one ion absorbs is not quite the same as its neighbor's. The goal, however, is to store a single photon across the electronic states of billions of ions. To do this, a pulse of light consisting of many sharply defined and evenly spaced frequencies (colors), called a frequency comb, is sent into the medium. This excites the ions, creating an atomic frequency comb, in which certain populations, sitting in well-defined electromagnetic environments, end up in a particular electronic state. With the medium prepared, it is time to store our photon.

Our photon comes from a light source that emits over a range of colors, so any particular photon has a frequency range that covers two or more lines of the atomic frequency comb. As a result, when the photon enters the crystal, it finds a few distinct populations that could absorb it, and they all absorb the photon as a collective. These ions enter an excited state. To complete the storage process, a second, control pulse of light grabs the ions in the excited state and parks them in a new state. This process is similar in spirit (although Gisin would not agree with this) to electromagnetically induced transparency.

The point is that the combination of the single photon and the control pulse sets up a coherence that stores the qubit across the billions of ions. Exactly where this occurs within the crystal depends on the timing between the single photon and the control pulse, so you can stack photons up in the crystal. On read out, again through the application of control light fields, the photons leave in the order in which they arrived, so you can simply select the photon you want. Gisin's group has managed to store 64 photons this way, but this is where all the problems of probabilistic storage and limited coherence times start to limit how much you can store, and for how long.
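The collective nature of the storage has a neat signature worth sketching. If the absorbers form a comb with teeth spaced Δ apart, their phases drift apart after absorbing the photon and realign at multiples of 1/Δ, at which point the photon can be re-emitted. A minimal numerical illustration, with made-up comb parameters:

```python
import numpy as np

# Collective rephasing in an atomic frequency comb (AFC). Absorbers at
# frequencies spaced DELTA apart dephase after absorbing a photon and
# realign at multiples of 1/DELTA, re-emitting the stored light. The
# parameters are made up for illustration.

DELTA = 1.0        # comb tooth spacing (arbitrary frequency units)
N_TEETH = 10       # comb lines covered by the photon's bandwidth

freqs = DELTA * np.arange(N_TEETH)

def collective_signal(t):
    """Normalised |sum of tooth phase factors|^2: 1 when all in phase."""
    return abs(np.exp(2j * np.pi * freqs * t).sum()) ** 2 / N_TEETH ** 2

for t in (0.0, 0.25, 0.5, 0.75, 1.0, 2.0):
    print(f"t = {t:4.2f} (units of 1/DELTA): signal = {collective_signal(t):.3f}")
```

The peak at t = 1/Δ is the collective re-emission; in a real memory, the control pulses park the excitation in a longer-lived state in between, which is what makes the on-demand, first-in-first-out read out described above possible.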

On the theory side, Gisin is very excited by some of the implications of quantum mechanics. "To be able to prove the security directly on nonlocality—which means the violation of a Bell inequality—this is called device independent QKD. There are even some vague ideas of how one could do it experimentally but it is still very far from being experimentally feasible," says Gisin.

Even more exciting, though, are the more general findings. It seems that any theory that has nonlocality (which the experimental violations of Bell inequalities support) and no signaling (the requirement that sending a message involves transmitting energy or matter) can be shown to support a form of intrinsically secure communication.

It is quite awe-inspiring to realize that the idea of QKD can trace its origin to the arguments between Bohr and Einstein in the very early days of quantum mechanics. What began as a deeply philosophical argument has, through the work of many, become more and more concrete. Today, people like Gisin are talking seriously about the applied side of these ideas.

Chris Lee / Chris writes for Ars Technica's science section. A physicist by day and science writer by night, he specializes in quantum physics and optics. He lives and works in Eindhoven, the Netherlands.