Quantum cryptography: yesterday, today, and tomorrow

Does it have a future? Classical cryptography isn't budging, but it all depends on QKD.

Quantum cryptography is one of those amazing tools that came along before anyone really asked for it. Somehow, there are companies out there selling very high-end, "provably secure" cryptography gear, all based on fundamental principles of quantum mechanics. Yet, despite this fundamental unbreakability, there have been quite a few publications on more-or-less practical ways for Eve to eavesdrop on people whispering quantum sweet-nothings in darkened rooms.

As a bemused onlooker, I jumped on the TGV from Paris to journey to the heart of quantum key distribution (QKD): Geneva. Geneva is where QKD was deployed in real-world demonstrations; it is the base of Id Quantique, a company that specializes in security products based on quantum physics; and it is home to the University of Geneva's GAP-Optique, a powerhouse of quantum optics research.

My goal was to discover what all the fuss was about. Who buys a QKD system? Why do they bother? If QKD becomes ubiquitous, where will the battle between white and black hats be played out, and how will that battle change? What is the future of QKD research?

A quantum eavesdropper

As the train pulled out of the Gare de Lyon, I contemplated, with a certain amount of reluctance, the ins and outs of current cryptographic methods. Well, actually, there are myriad ways to secure data, so I was only thinking about current commercial asymmetric key systems. In broad brush strokes, we can divide the world of encryption into two classes: secret keys and public/private key pairs. In a public/private key system, I have two keys, one of which I keep at home under the pillow, while the other is public. Now, if someone wants to send me an encrypted message, they use my public key to scramble the data, and I use my private key to unscramble said data.

The prime recipe (Warning: math ahead)

The main requirement for a good public/private key system is that one should not be able to derive the private key from the public key. The RSA algorithm (named for the initials of its trio of inventors: Rivest, Shamir, and Adleman) is one tool for creating such keys, so let's take a look at how it works. First, take two prime numbers, say p = 13 and q = 17, and multiply them together: 13 × 17 = 221.

We also need a second number, given by the product of p - 1 and q - 1: (p - 1)(q - 1) = 12 × 16 = 192. Now, in the range of 1-192, choose any number under the condition that its greatest common divisor with 192 is one. Let's choose seven.

Now it's time to calculate our private key using these numbers. To do so, repeatedly calculate (p - 1)(q - 1)k + 1 for k = 1, 2, 3,... until you get a number that is divisible by our chosen number (seven in our case). Filling in k = 1, 2, 3, etc., we get 193, 385, 577,... In our case, 385 is divisible by seven, giving 55.

We then have two keys: {7, 221} and {55, 221}. Without the values of the prime numbers used to generate 221, it is not possible to use either key to derive the other. You do, however, know the product of the two primes (221 in this case), so it is possible to recover them by simply trying to factorize the product.
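To see the whole recipe in action, here is a minimal sketch in Python using the toy keys above. (Real RSA implementations use enormous primes and add padding schemes; treat this purely as an illustration of the arithmetic.)

```python
p, q = 13, 17
n = p * q                      # 221, the shared part of both keys
e = 7                          # our chosen public number

# Search for the private number: the first (p-1)(q-1)*k + 1 divisible by e.
k = 1
while ((p - 1) * (q - 1) * k + 1) % e != 0:
    k += 1
d = ((p - 1) * (q - 1) * k + 1) // e
print(d)                       # → 55

# Encrypt with the public key {e, n}, decrypt with the private key {d, n}.
message = 42
ciphertext = pow(message, e, n)
print(pow(ciphertext, d, n))   # → 42
```

The three-argument form of Python's built-in `pow` does the "wrap-around" modular arithmetic for us, which is exactly the operation both legitimate users and attackers must perform.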

It turns out that this is no easy task. I wrote a simple script to test how finding prime factors scales with the size of the factor. It's a simple brute force calculation, which is not optimized, and the absolute times are probably dominated by the time it takes to load up Python and the required libraries. But, the actual time is not important. The thing to note is how fast the time increases with the size of the product.
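The original script isn't reproduced here, but a minimal brute-force version might look like the following: plain trial division, deliberately unoptimized. The semiprimes and timings are illustrative; absolute times will vary by machine, but the scaling trend is the point.

```python
import time

def smallest_prime_factor(n):
    """Deliberately dumb trial division: test every candidate up to sqrt(n)."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n itself was prime

# Semiprimes built from progressively larger pairs of primes.
for n in [13 * 17, 104723 * 104729, 999983 * 1000003]:
    start = time.perf_counter()
    f = smallest_prime_factor(n)
    print(f"{n} = {f} x {n // f}, found in {time.perf_counter() - start:.5f}s")
```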

Naively, as the size of the product increases by an order of magnitude, you might expect the time taken to find the key to increase by at least an order of magnitude. In fact, even for my dumb script, it takes an order-of-magnitude increase in each prime (two orders of magnitude in the product, if you will) to add a single order of magnitude to the time it takes to find the prime factors; trial division only has to search up to the square root of the product. And, of course, factorizing is only the first step: the attacker still needs to work out the second number, but that is, in general, a much simpler task.

Where we win is that the time taken to generate and use a key grows only modestly with the size of the primes, while the time to factorize the product explodes. So, we can always choose primes large enough to make it impractical to factorize the product. And this is why the bit length of keys used by asymmetric systems is so large.
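A quick sketch makes that asymmetry concrete: using a key is a single modular exponentiation, which stays fast even at sizes where trial division would outlast the universe. (The modulus below is just a random 2048-bit odd number standing in for a real key, and 65537 is a commonly used public exponent.)

```python
import random
import time

random.seed(0)
n = random.getrandbits(2048) | 1   # stand-in modulus, not a real RSA key
e = 65537                          # a typical public exponent
message = 42

start = time.perf_counter()
ciphertext = pow(message, e, n)    # the entire cost of using the key
print(f"encrypted in {time.perf_counter() - start:.6f} seconds")
```

On an ordinary laptop this completes in well under a millisecond, while factorizing a genuine 2048-bit product is far beyond any known classical attack.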

Shor's in da house

Okay, so my script was really dumb; others have no doubt found far better approaches. In the end, though, no optimization can win against a sufficiently large pair of prime numbers. This is where quantum information technology plays its role. Peter Shor discovered that the game of finding prime factors is one that a quantum computer may be able to play efficiently. Ever since, Shor's algorithm has been the bugbear driving both QKD technology and new classical cryptography approaches.

So, how does Shor's algorithm work? To be honest, that is really hard to describe, a situation not helped by the fact that, although I know a fair bit about quantum mechanics, I come at it from an entirely different direction than those studying quantum information technology. Suffice it to say, the following description will be fairly basic.

Imagine you have the product of two prime numbers, say, 221. Now, we set that number to be an endpoint: for the purposes of our game, there are no higher integers. If we multiply two numbers together and get a number larger than 221, it wraps around, so 15 times 15 results in 225 - 221 = 4. If we multiply two by itself, we only get four, which doesn't wrap, and we can keep doubling up to 2⁷ = 128 before it wraps. But 2⁸ results in 256 - 221 = 35. Got that? Great.
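This wrap-around game matters because, under the hood, Shor's algorithm hunts for the period of exactly these sequences: how many multiplications it takes before the wrapped value returns to one. The sketch below (with the quantum speed-up replaced by a plain loop, so it gains nothing over brute force) shows how knowing the period hands you the factors:

```python
from math import gcd

N = 221   # our endpoint: the product of two unknown primes
a = 2     # the number whose powers we watch wrap around

# Find the period r: the smallest r > 0 with a**r wrapping back to 1.
value, r = a % N, 1
while value != 1:
    value = (value * a) % N
    r += 1
print(r)                                    # → 24

# Classical post-processing: an even period splits N into its factors.
half = pow(a, r // 2, N)
print(gcd(half - 1, N), gcd(half + 1, N))   # → 13 17
```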

That is the numbers game, but in terms of physics, this looks like how waves fit in an optical cavity. The idea is that an integer number of half wavelengths must fit between two mirrors that face each other for that wavelength of light to be a mode. If the wavelength is a little too long or too short, then, when it travels from one mirror, to the other, and back to the start, it has a slightly different phase than when it began. The result is the wave does not add up in phase with itself, and, to some extent, destructively interferes with itself. In the end, these wavelengths fade out and don't stay between the two mirrors, while those that fit are reinforced and build up.

Put another way, every wavelength that is not precisely right gathers a small amount of extra phase every time it travels around the cavity. Shor's algorithm takes the job of finding the factors of large numbers and turns it into the problem of estimating how much phase is accumulated by a wave traveling back and forth between two mirrors. If two numbers multiplied together are too large or too small, they produce an error value; in the physical implementation, this is a phase error, resulting in destructive interference.

Quantum superposition

Superposition is nothing more than addition for waves. Let's say we have two sets of waves that overlap in space and time. At any given point, a trough may line up with a peak, their peaks may line up, or anything in between. Superposition tells us how to add up these waves so that the result reconstructs the patterns that we observe in nature.
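That addition is literal. Representing each wave as a complex amplitude (a phasor), a few lines show the two extremes:

```python
import cmath

# Two unit-amplitude waves, represented as complex phasors.
peak_on_peak = abs(cmath.exp(0j) + cmath.exp(0j))               # in phase
peak_on_trough = abs(cmath.exp(0j) + cmath.exp(1j * cmath.pi))  # half a wave apart
print(peak_on_peak)    # → 2.0 (constructive: peaks line up)
print(peak_on_trough)  # essentially zero (destructive: peak meets trough)
```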

In practice (or actually, not, since no one has more than a toy model), one uses a classical algorithm to produce a list of potential factors. So, for 221, you eliminate pairs like 1 and 221, leaving a bunch of candidates. The quantum part relies on the fact that a quantum bit (qubit) can be in a superposition of different values. Instead of holding a definite logical one or zero, the qubit takes on a value between zero and one that represents the probability of reading the qubit as a logic one when it is measured.

Quantum operations then modify the probability of each qubit being a logic one. A string of eight qubits represents every value from 0-255 in parallel. But if you were to measure the value of the qubit register, you would get just one value, with the chance of each value determined by the probability amplitudes of the qubits in the register.
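As a toy model of that measurement step (real qubits also carry phases and can be entangled, which this deliberately ignores), you can think of reading out the register as sampling each qubit according to its probability:

```python
import random

random.seed(4)
# Probability of each of eight qubits reading as logic one when measured.
qubit_probs = [0.5] * 8   # a uniform superposition over 0-255

def measure(probs):
    """Collapse each qubit to 0 or 1 and assemble the register's value."""
    bits = [1 if random.random() < p else 0 for p in probs]
    return sum(bit << i for i, bit in enumerate(bits))

print(measure(qubit_probs))   # one value out of the 256 possibilities
```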

As we run Shor's algorithm, the qubits go through a series of operations that lead to their states interfering. The nature of that interference—constructive or destructive interference—depends on whether the value held by the register is a factor of, in our case, 221. Destructive interference reduces the probability of the register returning that value when it is measured, while constructive interference increases the probability. Because we examine all possible factors simultaneously, this process has the potential to be much faster than existing methods for finding factors.

Let's consider a consequence of using phase to calculate prime factors: 221 has prime factors 17 and 13, plus the trivial factors 1 and 221. We can eliminate the latter in the classical part of our algorithm. But what about two and 111? "Wait," you say. "That is not a factor pair. The product is 222." Nevertheless, we need to think about it, because quantum algorithms are probabilistic. The pair 17 and 13 has the highest probability, but two and 111 has a phase error of only about 0.5 percent, so the probability of Shor's algorithm returning this incorrect result is rather high. Fortunately, such a near miss is easy to spot, since it is very quick to check that 2 × 111 = 222, not 221. But a wrong answer is of no use for decrypting a message, so we need to do something to increase the chance of getting the correct one.

This can be done in two ways. You can run the same calculation many times and use the statistics of the results to determine the most probable, and therefore correct, answer. Or, equivalently, you can take the unmeasured result of the first calculation and use it as the starting point for a repeat calculation. Think of our nearly-right answer (two and 111). This has a phase error of one part in 221 after one iteration of the calculation. But, if we perform the calculation a second time, the phase error accumulates, increasing to two parts in 221. Essentially, with every iteration, the probability of the correct answer increases, while the probabilities of the close-but-no-cigar results shrink.
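A toy model of that accumulation (the formula here is illustrative, not a real simulation of Shor's algorithm): give each candidate pair a per-round phase error proportional to how far its product misses 221, add the rounds up as complex amplitudes, and watch the near miss fade while the true factors survive.

```python
import cmath

N = 221

def surviving_amplitude(p, q, rounds):
    """Fraction of amplitude left after each round adds a phase error
    proportional to how far p*q misses N (0 means fully cancelled)."""
    error = (p * q - N) / N   # fractional phase error per round
    total = sum(cmath.exp(2j * cmath.pi * error * k) for k in range(rounds))
    return abs(total) / rounds

print(surviving_amplitude(13, 17, 200))   # → 1.0 (exact factors: no error)
print(surviving_amplitude(2, 111, 200))   # near miss: mostly cancelled
```

With 200 rounds, the two-and-111 candidate retains only about a tenth of its amplitude, which is the "close-but-no-cigar results shrink" effect in miniature.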

Chris Lee
Chris writes for Ars Technica's science section. A physicist by day and science writer by night, he specializes in quantum physics and optics. He lives and works in Eindhoven, the Netherlands. Email: chris.lee@arstechnica.com // Twitter: @exMamaku