Despite all the hype, we still don't know whether a quantum computer can ever be built that is worth anything. Now we have something that might provide a shortcut - a test that doesn't require building a full machine: boson sampling.

We have made a lot of progress with quantum computing, but there is still a nagging doubt at the back of every researcher's mind that there might be a "no-go" principle in operation. The problem with building a quantum computer is keeping the quantum effects operating properly as the device grows larger. Roughly speaking, you can build a quantum computer that works with a few bits, but extending it to enough bits to do something beyond what can be achieved using a classical computer is a tough practical problem.

This is fine as long as it really is just a tough practical problem and there is no basic theoretical result limiting what a quantum computer can do. If the obstacle is merely practical, then we need to push on and refine our technology until we reach the point where a quantum computer can beat a classical computer. If there is a "no-go" theorem, then we might as well give up.

To be clear about this - no quantum computer to date has computed anything that isn't well within the ability of a classical computer.

The quantum computers of today have done things like factoring two-digit numbers in a time that can be easily beaten by a classical computer.

If the number of bits a quantum computer works with could be increased, then in comparable time it could factor numbers that would take a classical computer the rest of the life of the universe.

You can see that it would not be unreasonable for there to be a deep principle that says something like - "no quantum computer can ever compute something that is beyond the reach of a classical computer".

At the moment no one knows if there is such a theorem, although Scott Aaronson, a well-known MIT computer scientist, has offered a prize of $100,000 for a proof that scalable quantum computing is impossible. He is also responsible for devising a test that might be simple enough to carry out and that would at least show that there is no such "no-go" theorem.

Image courtesy of Alisha Toft

The experiment he proposed is quite simple. You set up a quantum state with n bosons (integer-spin particles, such as photons of light) in a particular configuration. You then allow the system to evolve through devices such as beam splitters and phase shifters and observe the final state. In an experimental setup this corresponds to sending the photons from n sources through standard optical components and seeing where they exit the machine. This is fairly easy to set up, and there are lots of quantum optics labs doing this sort of thing every day.
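To make the optical components concrete, here is a minimal sketch in Python with NumPy - the parameter values are illustrative, not from any particular lab setup - of the unitary matrices that a beam splitter and a phase shifter apply to a pair of optical modes:

```python
import numpy as np

def beam_splitter(theta):
    # 2x2 unitary mixing two optical modes; theta = pi/4 gives a 50:50 splitter
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def phase_shifter(phi):
    # Adds a phase phi to the first mode and leaves the second untouched
    return np.diag([np.exp(1j * phi), 1.0])

# A small interferometer is just a product of such elements
U = phase_shifter(0.3) @ beam_splitter(np.pi / 4)

# Any lossless optical network must be unitary
assert np.allclose(U @ U.conj().T, np.eye(2))
```

Chaining enough of these two-mode elements can, in principle, build up any unitary on m modes, which is why such a simple optical table can encode an arbitrary linear-optics computation.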

The whole point of the boson sampling setup is that, as soon as you have more than a few input photons, the quantum interference becomes too expensive to simulate classically: the probability of each output configuration is given by the permanent of a submatrix of the optical network's unitary, and computing the permanent is #P-complete. A boson sampling machine could beat a classical computer when it reaches, say, 20 photons. Unfortunately, at the moment the number of photons actually used in the machines is around three or four. Currently the researchers are investigating how it all works and what can go wrong.
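To see where the exponential cost comes from, here is a minimal sketch - a hypothetical helper, not code from any of the experiments - of the permanent computed with Ryser's formula, the fastest known family of exact methods, which as written costs O(2^n · n²) operations:

```python
import numpy as np

def permanent(A):
    """Permanent of a square matrix via Ryser's formula.

    The permanent looks like the determinant with all signs made
    positive, but unlike the determinant it has no known
    polynomial-time algorithm: this loop visits every non-empty
    subset of columns, so each extra row/column doubles the work.
    """
    n = A.shape[0]
    total = 0.0
    for mask in range(1, 1 << n):
        cols = [j for j in range(n) if (mask >> j) & 1]
        prod = 1.0
        for i in range(n):
            prod *= sum(A[i, j] for j in cols)
        total += (-1) ** (n - len(cols)) * prod
    return total

# Sanity checks: perm(I) = 1, and perm of the all-ones n x n matrix is n!
assert abs(permanent(np.eye(3)) - 1) < 1e-9
assert abs(permanent(np.ones((4, 4))) - 24) < 1e-9
```

Each additional photon doubles the number of terms, so at 20 photons a single permanent already involves about a million subsets, and sampling the full output distribution requires many such permanents.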

At least four groups have published papers showing that the idea works - for small numbers of bosons. They are all of the opinion that boson sampling is easy and see no problems in scaling up their work. The only real question is: if it really is easy, why only four photons? Perhaps the coming year will reveal whether or not there is a "no-go" theorem, but it is worth noting that there are a few ways things could go wrong.

The first is that a boson sampling device isn't a universal computational machine - perhaps a specialized machine is allowed to be faster than a classical computer, and the "no-go" theorem could take the form that no universal quantum Turing machine can deliver results beyond a classical Turing machine. There is also the small matter of proving that a classical machine cannot do the boson sampling calculation as fast as the "real thing".
