The Blog of Scott Aaronson

If you take just one piece of information from this blog: Quantum computers would not solve hard search problems instantaneously by simply trying all the possible solutions at once.

In Science, a group led by Anthony Laing at Bristol has now reported BosonSampling with 6 photons, beating their own previous record of 5 photons, as well as the earlier record of 4 photons achieved a few years ago by the Walmsley group at Oxford (as well as the 3-photon experiments done by groups around the world). I only learned the big news from a commenter on this blog, after the paper was already published (protip: if you’ve pushed forward the BosonSampling frontier, feel free to shoot me an email about it).

As several people explain in the comments, the main advance in the paper is arguably not increasing the number of photons, but rather the fact that the device is completely reconfigurable: you can try hundreds of different unitary transformations with the same chip. In addition, the 3-photon results have an unprecedentedly high fidelity (about 95%).

The 6-photon results are, of course, consistent with quantum mechanics: the transition amplitudes are indeed given by permanents of 6×6 complex matrices. Key sentence:

After collecting 15 sixfold coincidence events, a confidence of P = 0.998 was determined that these are drawn from a quantum (not classical) distribution.

No one said scaling BosonSampling would be easy: I’m guessing that it took weeks of data-gathering to get those 15 coincidence events. Scaling up further will probably require improvements to the sources.

There’s also a caveat: their initial state consisted of 2 modes with 3 photons each, as opposed to what we really want, which is 6 modes with 1 photon each. (Likewise, in the Walmsley group’s 4-photon experiments, the initial state consisted of 2 modes with 2 photons each.) If the number of modes stayed 2 forever, then the output distributions would remain easy to sample with a classical computer no matter how many photons we had, since we’d then get permanents of matrices with only 2 distinct rows. So “scaling up” needs to mean increasing not only the number of photons, but also the number of sources.
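To make the 2-mode caveat concrete: when a matrix has only two distinct rows, its permanent can be read off from the coefficients of a single-variable polynomial, in polynomial time. Here's a minimal Python sketch of that trick (the function names and example matrices are my own illustration, not anything from the paper):

```python
from math import factorial, prod
from itertools import permutations

def perm_two_distinct_rows(a, b, p):
    """Permanent of the n x n matrix with p copies of row a and n-p copies of
    row b: it equals p! * (n-p)! * [x^p] prod_j (a_j * x + b_j), an O(n^2) job."""
    n = len(a)
    coeffs = [1]  # coefficients of the polynomial in x, constant term first
    for j in range(n):
        nxt = [0] * (len(coeffs) + 1)
        for d, c in enumerate(coeffs):
            nxt[d] += c * b[j]      # pick the b_j term of factor j
            nxt[d + 1] += c * a[j]  # pick the a_j * x term of factor j
        coeffs = nxt
    return factorial(p) * factorial(n - p) * coeffs[p]

def perm_naive(M):
    """Brute-force n! definition of the permanent, for checking."""
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n)) for s in permutations(range(n)))

a, b = [1, 2, 3, 4], [5, 6, 7, 8]
assert perm_two_distinct_rows(a, b, 2) == perm_naive([a, a, b, b])
```

The polynomial expansion takes O(n²) arithmetic, versus the exponential cost of a general permanent, which is exactly why 2-mode experiments stay classically simulable no matter how many photons go in.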

Nevertheless, this is an obvious step forward, and it came sooner than I expected. Huge congratulations to the authors on their accomplishment!

But you might ask: given that 6×6 permanents are still pretty easy for a classical computer (the more so when the matrices have only 2 distinct rows), why should anyone care? Well, the new result has major implications for what I’ve always regarded as the central goal of quantum computing research, much more important than breaking RSA or Grover search or even quantum simulation: namely, getting Gil Kalai to admit he was wrong. Gil is on record, repeatedly, on this blog as well as his own (see for example here), as saying that he doesn’t think BosonSampling will ever be possible even with 7 or 8 photons. I don’t know whether the 6-photon result is giving him second thoughts (or sixth thoughts?) about that prediction.

This entry was posted
on Wednesday, August 19th, 2015 at 5:44 am and is filed under Announcements, Complexity, Quantum.

79 Responses to “6-photon BosonSampling”

Slow horses For me (and many), the rate-of-progress in experimental quantum information theory — including linear-optics quantum computing and BosonSampling — has been like watching a horse-race in which, during the early going, the horses have taken \(2^n\) minutes to run the \(n\)-th furlong.

Competing explanations It’s logically consistent to say “horses are very unpredictable” and “the track is very muddy”, and to be optimistic that, later on in the race, the track will dry and the race pace will be sustained.

But it’s also logically consistent to wonder (as Gil Kalai wonders) whether the nature of quantum horses and quantum racetracks is such that the quantum track is never going to dry, and no finite quantum race pace can ever be sustained.

Unexpected news Unexpected good news for young folks working at the racetrack is that, despite the muddy track, semiclassical track-conditioning devices are (unexpectedly) running faster and faster. In consequence, semiclassical track-conditioning enterprises are prospering and hiring, while purebred quantum horse-stables encounter considerable difficulties in sustaining their economic viability.

New performative enterprises Hmmm … maybe new forms of quantum racing are emerging, that show reasonable prospects of sustaining family-supporting jobs in ample numbers for young quantum racetrack workers?

Conclusion Scientifically speaking, the gap is steadily widening between the (unexpectedly) slow purebred quantum horses, and the (unexpectedly) fast semiclassical track-conditioners, and the reasons for this widening gap are incompletely understood.

Kalai appreciation If Gil Kalai’s ideas help to explain this widening quantum gap, then further development of Kalai’s inspiring ideas will have considerable value, both fundamentally and in concrete terms of practical engineering and medical research.

Broader appreciation These are exciting times at the quantum racetrack, and the O’Brien group, and also Shtetl Optimized, and Gil Kalai too, all have contributed greatly to sustaining this excitement, for which they richly deserve appreciation and thanks from everyone.

Because after all

“It were not best that we should all think alike; it is difference of opinion that makes horse races.”

So now that we are up to 6-photon events, do we know how the scaling behavior works?

i.e. How many weeks of data gathering did it take to get these 15 6-fold photon coincidence events? Versus 5-photon, 4-photon etc. studies.

i.e. Are we at a point now where we can extrapolate the curve to how long a 45-photon BosonSampling run would take? I use 45 photons because, based on a recent analysis by John Sidles on another thread, it seems like 45 photons might be the magic case where the BosonSampling device might sample a distribution inaccessible to any conventional computer of today.

Rahul #2: You can try doing extrapolations, but all they’ll tell you is the boring information of what happens if you scale up with the sources, detectors, etc. that are available today (hint: it’s not pretty). What you really want to know, but what’s harder to predict, is how the technology will improve. E.g., getting 15 coincidences took a long time with the technology of today, but it presumably would’ve taken much longer with even the technology of 2012 (or else it would’ve been done then).

And no, there’s no “magic number” of photons where BosonSampling becomes classically hard. If our complexity conjectures are correct, then it just gradually gets harder, probably roughly like 2 to the number of photons.

Personally, what I find awesome about this paper is the “universal linear optics” part.

Sure, it was an experiment involving 6 photons; the number of photons really is the current bottleneck for BosonSampling, and this is pretty cool. As Scott pointed out, though, the way these photons are created pushes the physical “number of photons” boundary, but not really the complexity boundary. I suppose Gil Kalai could fall back on saying that this didn’t really push the “high complexity regime” much, and that what will be really hard is controlling the noise when producing 6 photons from 6 different sources (which is also a qualitatively different technological challenge; otherwise I guess the authors would have done it).

Now, what I think is an important highlight of this paper is not only that they have 6 photons, but that they have a *completely reconfigurable*, on-chip, 6-mode interferometer. Most previous BosonSampling papers used some restricted type of linear-optical network – either a quantum-walk-like network using only 50:50 beam splitters, or one which had all beam splitters parameters arbitrary but fixed (i.e. you could build whatever Haar-random interferometer you wanted, but you had to build a new chip for each new one), or at most only a few parameters were reconfigurable. This paper, though, did hundreds of different unitaries (both for BosonSampling and other linear-optical tasks) with the same piece of glass, just by changing the voltage on different parts of the glass. I think this bodes well not only for BosonSampling, but also for other linear-optical tasks (e.g. the measurement adaptations needed in KLM quantum computing).

“Well, the new result has major implications for what I’ve always regarded as the central goal of quantum computing research, much more important than breaking RSA or Grover search or even quantum simulation: namely, getting Gil Kalai to admit he was wrong.”

By magic number I merely meant the number of photons where you can BosonSample without being able to calculate the distribution on state-of-the-art classical computing hardware in a reasonable time.

Is it fair to say this happens at somewhere around n=45 today? i.e. That’s the limit of a classically verifiable permanent calculation?

If I interpret things correctly, that’s a very crucial target, because it is at that point that one would consider the Extended Church–Turing Thesis overthrown?

“You can try doing extrapolations, but all they’ll tell you is the boring information of what happens if you scale up with the sources, detectors, etc. that are available today”

I think this isn’t “boring” information at all. It’s like saying the whole of technology & implementations is boring and the fundamentals underneath are the only interesting area.

e.g. Sure, we do not know how fast technology will change in the future, but at least we can see how progress went over the last few years? Is attacking the ECT via BosonSampling a target we can reasonably try to reach in the next few years, or is it likely to take decades?

I for one would love to know whether going from n=5 to n=6 was as hard or harder or easier than going from n=4 to n=5.

Especially in the context of a “universal linear optics” setup that this paper develops it should be less messy to study the n-scale-up keeping all other setup details constant.

Yes, the fact that they are using adjustable devices looks really neat and possibly exploitable. It makes one wonder whether we are getting to the point where BosonSampling can be used for practical ends.

Completely speculating for a moment: one thing that comes to mind is the context of understanding BCS superconductivity, where the electrons in Cooper pairs act essentially like bosons. Trying to understand when, and at what temperatures and pressures, the transition to superconductivity occurs is a major ongoing matter. See especially the recent work with sulfur hydride becoming a conventional superconductor at record high temperatures.

Rahul #7: There are many numbers that you could consider reasonable targets. Even at 20-25 photons, you could argue that BosonSampling was sampling from the permanental probability distribution “faster” (in some appropriate sense) than a classical simulation would be—thus putting strain on any theory that invokes an efficient classical computation behind the scenes to account for our observations. Of course 45 photons would be more impressive still—though too far beyond that, and you couldn’t even verify the results with a classical computer. (I just back-of-the-enveloped it, and got that at one billion arithmetic operations per second, computing a 45×45 permanent would take about a month.)
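For anyone who wants to redo that back-of-the-envelope: the fastest general-purpose method is Ryser's inclusion-exclusion formula, which needs on the order of n·2ⁿ arithmetic operations. Here is my own rough Python sketch (the plain version below costs an extra factor of n; a serious implementation would iterate the subsets in Gray-code order):

```python
from itertools import combinations
from math import factorial, prod

def ryser_permanent(A):
    """Ryser's inclusion-exclusion formula:
    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^|S| * prod_i sum_{j in S} A[i][j].
    This plain version is O(n^2 * 2^n); Gray-code ordering gets it to O(n * 2^n)."""
    n = len(A)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            total += (-1) ** k * prod(sum(row[j] for j in cols) for row in A)
    return (-1) ** n * total

# sanity checks: perm([[1,2],[3,4]]) = 1*4 + 2*3, and perm of all-ones is n!
assert ryser_permanent([[1, 2], [3, 4]]) == 10
assert ryser_permanent([[1] * 5 for _ in range(5)]) == factorial(5)

# back-of-envelope for n = 45, counting ~n * 2^n ops at 10^9 ops/sec:
seconds = 45 * 2 ** 45 / 1e9
print(f"{seconds / 86400:.0f} days")  # prints "18 days" -- the same order as "about a month"
```

Running the toy estimate gives roughly 18 days of arithmetic at a gigaflop, consistent with Scott's "about a month" once constant factors are in the picture.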

In any case, I doubt there will be a single crucial moment when the Extended Church-Turing Thesis gets overthrown. More likely that better and better experiments will simply put more and more strain on it, until we reach a point where only Leonid Levin still believes it (even Gil having jumped ship).

“I doubt there will be a single crucial moment when the Extended Church-Turing Thesis gets overthrown. More likely that better and better experiments will simply put more and more strain on it, until we reach a point where only Leonid Levin still believes it”

Easy, easy, we are still at n=6. 🙂

For all we know the ECT might still have some tricks up its sleeve? Wait & watch I guess.

With this new universal optics setup I’m really excited & optimistic that the scale up to larger n will be fast.

1) Of course, this BosonSampling achievement is very impressive, and I congratulate the researchers very warmly.

2) My detailed prediction regarding BosonSampling is that (noisy) distributions witnessed by systems of photons can be approximated by low-degree polynomials. And that physical computing devices whose computational power is so low (it is well below bounded-depth computation) will not, in practice, be comparable (not to say superior) to classical computers.

A similar prediction refers to quantum circuits with a small number of qubits. In short, with this primitive computational power you will not be able to manifest quantum supremacy, or the very basic building blocks of quantum error-correction. The only mechanism to go above this primitive computation is a mechanism for creating classical computation.

3) Of course, this prediction, like any prediction based on asymptotic reasoning, does depend on the constants. Those will be determined by experiment. But again, our experience with the way computational complexity interacts with reality is that we don’t expect very, very low-level computational devices (in terms of their asymptotic behavior) to perform interesting computation in reality.

If one understands this position, she (or he) does not really need me to make the call. She can examine the next progress and make the call for me. (If this position makes no sense to you, or you simply disagree, you are welcome to discuss it with me if you wish.)

4)
The two options we have (and I think Scott and I agree on that) are roughly

A) The bad news is that quantum computers will not be built and quantum supremacy is not a real phenomenon. The good news is that even without quantum computers, there is no in-principle (computational-complexity) obstacle to our ability to simulate nature and robust experimental outcomes.

B) The bad news is that there are in principle obstacles for simulating quantum physics on digital computers: We can witness robust behavior in nature or in experiments that we will never be able to reproduce on our digital computers. The good news is that quantum computers will sometime be built.

Both these options are stated in a fairly optimistic way. (And there are caveats for both) I think that A) is more plausible and that it will prevail. (And also that it will be important.)

Of course, as ever, the idea that understanding nature requires referring to some superior capabilities is tempting and surely has explanatory power. On the *scientific* side our instincts should be to try to find a way around it.

Rahul observes (#11) “For all we know the ECT might still have some tricks up its sleeve?”

One such ECT-affirming trick being, the prediction and optimization of superconducting transition temperatures in the new hydride superconductors, per arXiv:1506.08751v1, arXiv:1502.03017, and many more recent theoretical studies (with a nod to Joshua Zelinsky’s comment #9).

The ECT paradox The engineering of devices to ingeniously refute the ECT depends crucially upon algorithms that ingeniously affirm the ECT.

This paradoxical mutual affirmation is one reason why many folks (including me) do not foresee a near-term resolution of the ECT/Kalai-Harrow debate.

My understanding is that Moore’s law is purely based on observations from years of classical electronics engineering techniques in scaling.
For quantum electronics, the arguing sides don’t seem quite sure if the limitations are purely engineering or theoretical (unforeseen limitations in QM).
I think that the intellectual jump made by the complexity theorists in going from “bit” to “qubit” has created a lot of unfortunate expectations.

Is there some reason you see Levin as more likely to be intractable about his position than Gil?

Gil says that he thinks scalable QC is impossible (for reasons I’ve never understood), but he admits that maybe he’s wrong, and he says that in any case the science being done to find out is wonderful—so much so that Gil is now an active member of Hebrew University’s quantum computing center.

Leonid is absolutely certain scalable quantum computing is impossible, believes he’s proven it based on an argument about “metric vs. topology” that no one besides him has ever understood, and considers (I think) almost all research in quantum computing either incompetent or fraudulent, not even justified by the scientific interest of figuring out why it won’t work.

With Gil, the idea that the universe is a huge purposeful conspiracy to prevent QC (while still allowing classical computation) is left simmering beneath the surface—you need to talk to him for a while, before you realize that lots of individually-puzzling statements make sense only under that assumption. With Leonid, by contrast, the idea of an omnipotent, diabolical, Occam’s-Razor-flouting, QC-thwarting conspiracy is loudly and colorfully stated to anyone who will listen.

Hmm, I didn’t realize that Levin’s position was that much more extreme than Gil’s, but I’ve only read about both their positions rather than discussed them in person, so it is possible some amount of nuance gets lost that way. (I’m pretty sure that when I was an undergrad, Gil and I never discussed this. That was a long time ago, and quantum computing was definitely not on my radar screen at the time.)

I’m also relieved that not understanding Levin’s metric v. topology claim is not a failing on my part, or if it is, it’s a failing that I share with good company.

Job #14: These are all questions answered in our paper and PowerPoint slides and other blog posts, but briefly…

BosonSampling doesn’t let you calculate the permanent of a matrix of your choice; it lets you sample from a probability distribution over matrices that’s biased toward matrices with large permanents. But using the #P-completeness of the permanent, one can argue that that sampling task is already probably hard for a classical computer.
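For readers who want to see the sampling task concretely, here is a tiny brute-force illustration (entirely my own sketch, not the paper's method): for a couple of photons one can enumerate every output configuration, compute its probability as a squared permanent divided by the output multiplicities' factorials, and confirm the numbers form a valid distribution. I use a 3-mode Fourier interferometer purely as an arbitrary example unitary:

```python
import cmath
from itertools import combinations_with_replacement, permutations
from math import factorial, prod

def permanent(M):
    """Naive n! definition of the permanent -- fine for tiny matrices."""
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n)) for s in permutations(range(n)))

m = 3  # number of modes; the 3x3 discrete-Fourier matrix is just a convenient unitary
U = [[cmath.exp(2j * cmath.pi * j * k / m) / m ** 0.5 for k in range(m)]
     for j in range(m)]

inputs = (0, 1)  # one photon in mode 0 and one in mode 1
probs = {}
for out in combinations_with_replacement(range(m), len(inputs)):
    # submatrix: rows indexed by input modes, columns by output modes (with repetition)
    A = [[U[i][o] for o in out] for i in inputs]
    mu = prod(factorial(out.count(t)) for t in set(out))  # multiplicity factor s_1!...s_m!
    probs[out] = abs(permanent(A)) ** 2 / mu

# the squared permanents, suitably normalized, really do form a probability distribution
assert abs(sum(probs.values()) - 1) < 1e-9
```

Of course this brute-force enumeration is exactly what becomes infeasible classically as the photon number grows, which is the whole point of the hardness argument.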

Yes, there’s a danger of BosonSampling results getting overhyped, just like with quantum annealing, and one should carefully guard against it. At least with BosonSampling, (1) we have theoretical confidence that the underlying problem is something that admits an exponential quantum speedup (which isn’t the case for the kinds of optimization problems quantum annealing is used for), and (2) there’s not the huge distorting influence that comes from the claim of direct practical applications.

While there are lots of possibilities for the operator, my and Arkhipov’s proposal was to choose a Haar-random unitary transformation U. One can then prove that small submatrices of U will look like i.i.d. Gaussian matrices, and one can then at least formulate a clean conjecture (as we did in the paper), that approximating the permanents of i.i.d. Gaussian matrices is #P-hard.
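Here's a minimal way to generate such a Haar-random U in pure Python (a standard construction, not code from our paper): fill a matrix with i.i.d. complex Gaussians and Gram-Schmidt its rows. The resulting orthonormal frame is Haar-distributed, and the entries of any small submatrix are approximately i.i.d. Gaussians of variance 1/n:

```python
import random

def haar_random_unitary(n, seed=None):
    """QR-of-Gaussians construction: orthonormalize the rows of a matrix of
    i.i.d. complex Gaussians; the resulting unitary is Haar-distributed."""
    rng = random.Random(seed)
    rows = [[complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n)]
            for _ in range(n)]
    ortho = []
    for v in rows:
        for u in ortho:  # subtract projections onto the earlier rows
            ip = sum(uk.conjugate() * vk for uk, vk in zip(u, v))
            v = [vk - ip * uk for uk, vk in zip(u, v)]
        norm = sum(abs(vk) ** 2 for vk in v) ** 0.5
        ortho.append([vk / norm for vk in v])
    return ortho

U = haar_random_unitary(6, seed=2015)
# sanity check: the rows really are orthonormal, i.e. U is unitary
for i in range(6):
    for j in range(6):
        ip = sum(U[i][k] * U[j][k].conjugate() for k in range(6))
        assert abs(ip - (1 if i == j else 0)) < 1e-9
```

Plain Gram-Schmidt (positive norms on the diagonal of R) is enough here; the subtlety about fixing phases only matters if you take Q from a library QR routine with arbitrary sign conventions.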

BosonSampling is not based on qubits at all, so the basic “gates” that one uses to construct U are not things like Hadamard or CNOT. Instead they’re 1- and 2-mode beamsplitters. FWIW, in this paper Adam Bouland and I prove that any nontrivial 2-mode beamsplitter is universal for linear optics; there are no non-universal cases there analogous to Hadamard+CNOT.

If both you and Gil are rational scientists, then an Aumannian dialog between the two of you should quickly lead to agreement about the probability that scalable QC is achievable.
Furthermore, this is certainly a purely technical discussion, because the question is independent of philosophical questions about the interpretation problem.

So then why is such agreement *not* expected anytime soon (a Bayesian estimate on my part)?

Perhaps there is utility in not agreeing: after all, even in academia, one gets more attention for not agreeing with the majority view than for being just one of many members of the mainstream.

Scott (#19): “Gil says that he thinks scalable QC is impossible (for reasons I’ve never understood)…” Hmm, let me double-check that I understand Scott’s view and reasons, then. Here is a summary of Scott’s view as I see/understand it:

1) Failure, in principle, of quantum computers will be much more surprising and interesting than building them.

2) Failure in principle for QC amounts to a breakdown of QM, or to major changes of similar magnitude in our understanding of QM.

3) All that is needed for building quantum computers is that the noise rate be brought below the threshold. It will take time to achieve this, but there is no reason to think it will not happen.

4) That noise cannot be an in-principle obstacle for quantum computing is already a consequence of an observation of Bernstein and Vazirani that quantum computers are robust against one-over-polynomial error.

5) There is no reason that, even without quantum fault-tolerance, Bosonsampling could not be scaled to 20-50 bosons, thus giving a convincing demonstration for quantum supremacy.

I think quantum computers aren’t feasible, but for relativistic reasons rather than quantum ones. Indeed, the analogy I had forged between integer factoring and atomic nuclear fission seems more and more relevant to me. If you managed to show that the Feynman diagram of nuclear fission is isomorphic to the flow chart of Shor’s algorithm, that would represent the missing connection that proves the energy expenditure required to run Shor’s algorithm. Thus, the roadblock to running Shor’s algorithm on large inputs would be the algorithmic side of Einstein’s equation E=mc².

However, the existence of an efficient quantum algorithm for factoring is perhaps also the reason why no super-polynomial lower bounds have been found for classical integer factoring. Because, once you’ve translated Shor’s algorithm into quantum logic, the predicate “classical” becomes meaningless (because undefinable). Indeed, is it possible to make a formal distinction between classical and quantum Turing machines within quantum logic? Therefore, if you had a proof that no polynomial classical algorithm can exist, then you’d also have a quantum proof that no polynomial algorithm at all can exist, be it classical or quantum. And this is notoriously wrong thanks to Shor.

Let’s end with quoting Leonid Levin:
“It is worth noting that the reasons why QC must fail are by no means clear; they merit thorough investigation. The answer may bring much greater benefits in the understanding of basic physical concepts than any benefits factoring devices could promise.”
So I’d bet that Levin, just like you Scott, considers at least some of the research in quantum computing to be amply justified by the scientific interest of figuring out why it won’t work… 😉

Can someone explain why this is usually referred to as “Linear Optics”, but a key phenomenon seem to be that Hong-Ou-Mandel effect, which is described as a “non-linearity”.

It’s called “linear optics” because, if you write everything in terms of photon creation operators, those transform in a strictly linear way (i.e., by a Bogoliubov transformation). In this context, a “nonlinearity” would mean a force that coupled photons—or equivalently, a term in the Hamiltonian that was quadratic or higher-order in the creation operators. (Correction: Cubic or higher-order, as Daniel points out below.)

I don’t know who calls the Hong-Ou-Mandel dip a “nonlinearity,” or why, but I would never describe it that way. I’d just call it a manifestation of quantum interference.

One way that I like to think of it is that linear optics is any physical transformation that is completely defined by the way it affects single photons—the Hong-Ou-Mandel effect may look weird, and may look like it’s doing something different when two photons are present compared to when there’s just one, but it really isn’t. When you describe what a 50:50 beam splitter does to a single photon, the HOM effect just pops out automatically.

Of course, this mental picture of mine is really just a rephrasing of what Scott said 🙂
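Daniel's point can be checked in a few lines: feed one photon into each port of a 50:50 beamsplitter, and the two-photon amplitudes, computed as permanents of the relevant submatrices, reproduce the HOM dip with no extra physics. (This is my own toy calculation, using one common beamsplitter convention.)

```python
s = 2 ** -0.5
B = [[s, s], [s, -s]]  # one common 50:50 beamsplitter convention (my choice)

def perm2(M):
    """Permanent of a 2x2 matrix."""
    return M[0][0] * M[1][1] + M[0][1] * M[1][0]

# One photon in each input port. The coincidence amplitude (one photon in
# each output port) is the permanent of B itself -- and it vanishes:
assert abs(perm2(B)) < 1e-12  # the Hong-Ou-Mandel dip

# Both photons exiting port 0 (or port 1) each happen with probability 1/2:
for port in (0, 1):
    A = [[B[0][port]] * 2, [B[1][port]] * 2]  # both output columns = that port
    p = abs(perm2(A)) ** 2 / 2  # divide by 2! for the doubly occupied mode
    assert abs(p - 0.5) < 1e-12
```

Nothing here goes beyond the single-photon description of the beamsplitter plus bosonic symmetrization, which is exactly the sense in which HOM "pops out automatically."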

Tiny correction, though: linear optics are those transformations where the Hamiltonian is at most quadratic (e.g., has terms like a_1†a_2) in the creation and annihilation operators. These quadratic terms act linearly on the creation operators as “hopping terms” (e.g., a term that annihilates a photon in mode 1 and creates another in mode 2). An actual interaction Hamiltonian would need cubic or higher-order powers.

“One final caveat – in almost all papers on the topic of boson-sampling, the interferometer is described as a passive linear device with non-interacting bosons (photons in this case). However, the Hong-Ou-Mandel effect (followed by a projective measurement) imparts an effective nonlinearity and hence an effective interaction at each beamsplitter. The presence or absence of a photon in one input mode radically changes the output state of a second photon in another input mode. This ‘interaction’ between indistinguishable particles, known as the exchange interaction, arises simply from the demand that the multi-particle wavefunction be properly symmetrized. While not a ‘force’ in the usual sense, it can give rise to quite noticeable effects. […] It is therefore a misnomer to describe these interferometers as linear devices with non-interacting bosons.”

I think this wasn’t a very good choice of words, to be honest, I think it generates even more confusion than what the authors wanted to dispel. What I think is key is the parenthesis in “the Hong-Ou-Mandel effect (followed by a projective measurement) imparts an effective nonlinearity”.

The Hong-Ou-Mandel effect itself is a perfectly linear-optical transformation, described by a quadratic (i.e. noninteracting) Hamiltonian, and is only a manifestation of interference, as Scott mentioned. However, projective measurements are *not* linear in that sense. You can, in fact, talk about “measurement-induced nonlinearities,” where you can show that by using extra photons + linear optics + projective measurements you can in fact implement a nonlinear (e.g. quartic in creation operators) Hamiltonian (see e.g. http://arxiv.org/abs/quant-ph/0305082). In my opinion this is, implicitly, the main content of Knill, Laflamme and Milburn’s seminal paper, where they first showed that adaptive measurements could replace nonlinear interactions in quantum optical computing.

So my understanding of what the authors meant to say is that you shouldn’t underestimate linear optics since you can use exchange “interaction” – i.e. the bosonic statistics – together with measurements to implement effective nonlinearities. But notice that BosonSampling, for example, explicitly forbids adaptive measurements in the first place, so none of what I said shows up there anywhere.

————Article I “Ultimate low system dark count rate for superconducting nanowire single-photon detector” (arXiv:1507.01743, 2015) reminds us that in high-efficiency BosonSampling applications, the photon-source modifies an outgoing vacuum-state that couples (hopefully with near-perfect fidelity) to the incoming vacuum-state of the detector, such that source-noise is coupled to detector-noise (necessarily with the same near-perfect fidelity).

The preprint describes how to “tune” the incoming vacuum-state so as to reduce the detector dark-noise (which is a trick we MRFMers have routinely used too). Otherwise it’s all-too-commonly observed that detector-fluctuations destabilize source-fluctuations.

Conclusion I Textbook formalisms that dynamically decouple source-noise from detector-noise — commonly by throwing-away plenty of photons — aren’t all that helpful in designing scalable BosonSampling experiments, in which source-currents and detector-currents necessarily are near-perfectly correlated.

————Article II Speaking of detector noise-currents, what’s going on in high-efficiency photon-detectors at the microscale level? “Vortex assisted mechanism of photon counting in superconducting nanowire single photon detector revealed by external magnetic field” (arXiv:1507.01743, 2015) reminds us that a great deal of photon-detector/photon-source photonic engineering at present is empirical cut-and-try.

Conclusion II Given the appreciable prevalence (for the present) of cut-and-try design methods, there’s little ground to be confident, and ample ground to be cautious, in predicting the feasibility (or not) of scalable BosonSampling technologies.

The particular method used for calculation of the 0.998 confidence is not quite clear, due to some ambiguity in the rather short description. I’ve also looked at the reference they mention, but there is some ambiguity there too, because it suggests using the same experimental equipment to compare with the “classical” case. The term “classical” is used for the same experiment with distinguishable photons. Such a method of experimental proof indeed looks simplest, but it is not clear whether it was used for the mentioned experiment with 15 events, because they also wrote about an alternative method of checking: comparison with some calculated values.

Alexander Vlasov observes (charitably) “The particular method used [by the Bristol group] for calculation of the 0.998 confidence is not quite clear due to some ambiguity.”

Yes, it takes considerable scientific chutzpah to ascribe such a high confidence-level to such a scantily described hypothesis-test.

To mention one crucial point of ambiguity, in models of quantum computation it is entirely feasible (even generically feasible) to specify noise-levels sufficiently large to allow PTIME simulation, and yet the observed dynamics is still grossly non-classical. The semi-obvious implication is that “0.998 confidence” does not mean “0.998 confidence that high-order permanental quantum amplitudes are necessary to simulate the data”.

And yet to be fair, to describe any realistic hypothesis-test in sufficient detail to enable replication of the confidence-level calculation (or even critical assessment of it) would require a longish article in itself (or even a thesis).

Conclusion Hopefully there are longer articles in the works, in accord with the Mephistophelian duality principle that cumulative advances in methods for simulating BosonSampling experiments are of comparable scientific interest to cumulative advances in methods for performing BosonSampling experiments.

“Gil is on record, repeatedly, on this blog as well as his (see for example here), as saying that he doesn’t think BosonSampling will ever be possible even with 7 or 8 photons.”

Yes! But this prediction refers to k bosons with 2k modes (and not with 2 modes). Here is the relevant paper.

So for k=8 the dimension of the Hilbert space in my prediction is roughly 500,000, and not 9 as in the case of two modes.

Certainly, understanding the envelope of what is experimentally possible, in general, and the 2-mode case, in particular, is very interesting. For the reasoning explained in this comment (second item), I expect that the envelope of what is possible for experimental BosonSampling will fall well below the “quantum supremacy” territory. (My prediction also applies to physical implementations of FermionSampling, although it is in P.)

Gil #42, the output anyway has 6 modes and dimension 6! = 720. That is bigger than the dimension for 9 qubits. I am not sure the input is so crucial – it is similar to complaining about why we use the state |0000…0> as the input of many quantum algorithms.
And finally, due to noise the initial space may spread into a bigger dimension, confirming my doubts about overcoming the ECT using noise.

PS The conclusion of comment #43 should have noted that Gil Kalai too deserves great credit, for demonstrating that the quantum-skeptical literature can aspire to comparable mathematical and scientific quality to the quantum-optimistic literature … and also for unfaltering good humor.

And yet, a key tenet of the Church of the Larger Hilbert Space — specifically, Nielsen and Chuang’s “Theorem 8.2: Unitary freedom in the operator-sum representation” — affirms concretely that Lindblad processes realized as dimension-spreading noise can indistinguishably be realized as dimension-shrinking measurement processes.
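For readers who want to see Theorem 8.2 in action, here is a pure-Python sketch (all helper names are mine) checking that two Kraus representations related by a unitary remixing implement the same single-qubit phase-flip channel:

```python
import math

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_dag(a):
    return [[a[j][i].conjugate() for j in range(2)] for i in range(2)]

def mat_add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def scale(c, a):
    return [[c * x for x in row] for row in a]

def apply_channel(kraus_ops, rho):
    """rho -> sum_i K_i rho K_i^dagger."""
    out = [[0j, 0j], [0j, 0j]]
    for k in kraus_ops:
        out = mat_add(out, mat_mul(mat_mul(k, rho), mat_dag(k)))
    return out

I = [[1 + 0j, 0j], [0j, 1 + 0j]]
Z = [[1 + 0j, 0j], [0j, -1 + 0j]]

# Kraus set A for the phase-flip channel rho -> (1-p) rho + p Z rho Z
p = 0.3
E = [scale(math.sqrt(1 - p), I), scale(math.sqrt(p), Z)]

# Kraus set B: F_i = sum_j u[i][j] E_j for an arbitrary (here real, orthogonal)
# unitary u; Theorem 8.2 says any such remixing yields the same channel.
t = 0.7
u = [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]
F = [mat_add(scale(u[i][0], E[0]), scale(u[i][1], E[1])) for i in range(2)]

rho = [[0.6 + 0j, 0.2 + 0.1j], [0.2 - 0.1j, 0.4 + 0j]]  # a test density matrix
outA, outB = apply_channel(E, rho), apply_channel(F, rho)
assert all(abs(outA[i][j] - outB[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

The two Kraus sets agree on every input state, even though one “looks like” noise and the other “looks like” a measurement process — which is exactly the duality being invoked.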

In other words, the Quantum Scriptures ingeniously accommodate Mephistophelean Duality, by spurring optimistic racehorses and skeptical racehorses with equal efficacy.

Conclusion Quantum heaven intrinsically resembles the paradise of Mark Twain’s Extract from Captain Stormfield’s Visit to Heaven (1907): it’s a paradise that is divinely, even hilariously, arranged so as to finely balance the strivings of quantum optimists and quantum skeptics.

Which is (semi-obviously) the very best kind of quantum paradise to which we mortals can reasonably aspire.

Amusingly, back in 2013 D-Wave was claiming x3600. Now they claim x600, with twice as many “qbits” and half as much noise. It’s not clear whether they should further decrease the number of qbits or further increase the noise to improve performance. 🙂

Well, if one thinks that the level of noise is going to skyrocket very near the point where classical simulations become impractical, simulations of smaller instances aren’t necessarily going to tell you that much.

The problem with quantum computers is that you must build a significantly more powerful machine each time you increase the size of your input data by one unit. The power complexity of the quantum model is perhaps linear, but with a proportionality constant of the order of the speed of light squared, due to the mass-energy equivalence. In this, quantum computers are reminiscent of particle colliders. Like those, they’re unlikely to leave the laboratory anytime soon. The polynomial-time behavior of quantum factoring is surely a theoretical marvel, but it’s also a practical rip-off. The nice time behavior is obtained at the expense of a prohibitive energy model.

I think they don’t know the exact Hamiltonian governing the evolution of the BosonSampling system – you can assume an ideal QED model or similar, but even that would be ridiculously complicated to simulate to sufficient accuracy, and even that probably isn’t exactly how Nature works.

So adding a random noise element isn’t even close to being possible in a useful manner for a simulation of this type of physics.

Serge #50: I’m not going to change your mind, but maybe I should say for the benefit of others reading that the idea that QCs have a “power requirement of order the speed of light squared” is complete nonsense. Firstly, the units aren’t right! And secondly, a QC is not a nuclear reactor! It’s not converting mass to energy or vice versa in any relevant way. The energy requirements come from much more prosaic sources—it’s just whatever amount of energy you need to cool the qubits down, manipulate them, etc. (In practice, I believe the refrigeration costs typically dominate.)

And yet on the other hand, it’s true too that not every energy/entropy cost associated to BosonSampling is scientifically prosaic.

In standard photonic practice, the photon-emitter current fluctuations are decoupled from the photon-absorber current fluctuations by either (or even both) of two methods, the first method being thermodynamically wasteful and the second method being thermodynamically subtle.

The wasteful isolation method Make the photon-source bright, and dump most of the photons. In the original Hanbury Brown and Twiss experiment, the photon-emitter current was the entire output of the star Sirius, while the photon-absorber current was a few picoamperes (at an estimate) of photoelectrons.

The subtle isolation method Interpose a non-reciprocal optical isolator between the photon-source and the photon-sink. This provides two more tunable/designable BosonSampling elements, namely the (optionally quantum-squeezed) vacuum states injected at the isolator ports.

MRFM experiments commonly use both wasteful and subtle isolation methods simultaneously, and ultra-advanced experiments like LIGO are beginning to experiment with sophisticated vacuum-squeezing isolation.

If these isolation/decoupling physics-challenges aren’t easy for a highly experienced (and outstandingly talented) researcher like Carlton Caves, then how difficult are these challenges for less-experienced mortals?

That’s why it’s far from clear — far from clear to me at least, and perhaps far from clear to anyone — whether the extra modes associated to optical decouplers will turn out to be a blessing or a curse in regard to scalable BosonSampling; the principle of Mephistophelean duality affirms both eventualities with equal weight.

So very challenging — and yet so very revealing — are realistic simulations of realistic BosonSampling experiments, that further progress would be heartily welcomed by researchers of all persuasions and temperaments.

a #54: Too many negatives there, but no, there’s no reason to believe the brain can do BosonSampling, and every reason to believe it can’t. I mean, can you sample probability distributions defined in terms of permanents? I can’t even sample ones defined in terms of determinants—and it gives me a lot of trouble to do 2-digit multiplication. Why would anyone even imagine BosonSampling was the kind of thing people would be good at?
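To make the asymmetry concrete: even computing a single permanent classically takes exponential time by the best known general methods, whereas determinants take polynomial time. A minimal sketch of Ryser’s inclusion–exclusion formula (without the usual Gray-code speedup):

```python
from itertools import combinations

def permanent(a):
    """Permanent of an n x n matrix via Ryser's inclusion-exclusion
    formula -- a plain O(2^n * n^2) version, without the Gray-code
    trick that saves a further factor of n."""
    n = len(a)
    total = 0.0
    for r in range(1, n + 1):          # sum over nonempty column subsets
        for cols in combinations(range(n), r):
            prod = 1.0
            for row in a:              # product of restricted row sums
                prod *= sum(row[c] for c in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

print(permanent([[1, 1], [1, 1]]))  # 2.0
```

Even for the 6×6 matrices in the Bristol experiment this is quick, of course; the point is that the 2^n cost is believed to be essentially unavoidable in general.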

John #46, in my comment I was just responding to the idea that the dimension is not enough. In fact, in a previous paper of the same group (also mentioned in the post) they directly mention a Hilbert-space dimension > 50000.
Your “Mephistophelean Duality” sounds like yet another manifestation of “observer solipsism” – if some system violates the ECT, but we cannot measure that (due to noise or other reasons), then the system does not violate the ECT. But I have already complained about that particular interpretation.

Alexander: “Your “Mephistophelean Duality” sounds like yet another manifestation of “observer solipsism” – if some system violates ECT, but we may not measure that (due to noise or other reasons), the system does not violate ECT. But I just complained about that particular interpretation.”

Dear Alexander, I think this is an important issue, and we discussed it in the past.

One aspect of it is this: suppose you have a noisy state that is easy to prepare (so on its own it does not violate the ECT), but which is a mixture of pure states each of which demonstrates quantum supremacy (or violates the ECT). Should we regard that as a violation of the ECT?

Another aspect of it, related to BosonSampling, is the following: Suppose you can prove that BosonSampling with n bosons, with the property that every boson vanishes with probability 1/log n, say, violates the ECT. (And suppose that 1/log n is technologically achievable for large n’s.) Does this mean that noisy BosonSampling in such a regime would violate the ECT?

Scott #52: my units aren’t right only because I was equating power and energy due to the tiny data thus far processed. Indeed, who cares how much longer it takes to factor 21 than 15? I came up with a comparison of Feynman diagrams with flowcharts after reading John Baez’s intriguing paper “Physics, Topology, Logic and Computation: a Rosetta Stone”. I admit the absence of energy-to-mass conversion during quantum processes, but wouldn’t it be beautiful if a new information-energy equivalence were discovered? There isn’t such a thing as a prosaic source of energy. Refrigeration costs dominate in QCs based on superconductivity, but I suppose other types of energy costs dominate in silicon-based QCs, for example…

Alexander, I was referring to a model that Scott often talks about (I hope I get it right): you start with an n by m matrix (representing n bosons), then you delete t rows (so the error rate is t/n), and you average the bosonic distributions over all possible ways to delete t rows of the matrix. Of course, for my question you can instead take one of the noise models Guy and I were studying, where the number of bosons remains fixed.
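The row-deletion model just described can be sketched directly (here `probs_of` is a stand-in for whatever routine computes the bosonic output distribution of a submatrix; the function names are mine):

```python
from itertools import combinations

def lost_boson_average(probs_of, matrix, t):
    """Average an outcome distribution over all ways of deleting t of
    the n rows of `matrix`, as in the lost-boson noise model described
    above. `probs_of` maps a submatrix to its output distribution
    (a list of probabilities) and is supplied by the caller."""
    n = len(matrix)
    kept_sets = list(combinations(range(n), n - t))
    acc = None
    for kept in kept_sets:
        p = probs_of([matrix[i] for i in kept])
        if acc is None:
            acc = [0.0] * len(p)
        for i, x in enumerate(p):
            acc[i] += x / len(kept_sets)
    return acc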

Fred (#48): “I’m not clear why ppl who think that noise/error levels are gonna be the limiting factor can’t just write a simulation to demonstrate this (with various models of noise/error).”

Fred, we do talk about simulation in the paper linked above, and after it was arXived we did some simulations in subsequent work with Maya Leshkowitz. There are intensive efforts by quite a few people toward simulations of BosonSampling (and noise) on various levels. The most simplified models can be studied analytically, slightly more general models need simulations, and simulating the realistic experimental process toward BosonSampling requires a much harder and more detailed simulation. (Often FermionSampling behaves like BosonSampling, and you can push simulation further for FermionSampling.) Of course, there is no difference in this respect between people who think that noise/error levels are gonna be the limiting factor and those who don’t.
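One reason FermionSampling simulations can be pushed further: fermionic amplitudes are determinants rather than permanents, and determinants are computable in polynomial time. A minimal Gaussian-elimination sketch:

```python
def det(a):
    """Determinant via Gaussian elimination with partial pivoting,
    O(n^3) -- contrast with the exponential cost of the permanent."""
    n = len(a)
    m = [row[:] for row in a]  # work on a copy
    d = 1.0
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[piv][i]) < 1e-12:
            return 0.0                 # (numerically) singular
        if piv != i:
            m[i], m[piv] = m[piv], m[i]
            d = -d                     # each row swap flips the sign
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

print(det([[1, 2], [3, 4]]))  # -2.0 (up to float rounding)
```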

Fred (#26) : The debater who has fully understood and integrated the other side’s arguments in its own reasoning is automatically more likely to be right?

Not at all, of course. My disposition is to try to understand other people’s views as much as I can. (And then often to discuss and argue with them, if they wish, or occasionally to tease them a little.) It is not even something I recommend for others. Many times it is better to go on with your program and ideas and not to be overly distracted by others. But I really like to understand the line of thought and reasoning of other people, and especially to identify the interesting ideas and stronger arguments.

“The [squeezed-light interferometry formalism] is a good example of why physicists are really obliged to give every explanation they can think of. The particle perspective, though intuitive and right, is sterile. The less intuitive wave perspective is the royal road [to better designs]”

For a synoptic view of sociological aspects of gravity-wave detection, Harry Collins’ book-length survey Gravity’s Shadow: the Search for Gravitational Waves is also commended (by me, and by pretty much everyone who was not offended by Collins’ sociological candor).

Triumphs The point is that gravity-wave detection, if and when it occurs — soon we hope per arXiv:1411.4547 — will be every bit as much a sociological triumph as a technical triumph; a triumph of Kalai-style/Caves-style collegial values, every bit as much as Kalai-style/Caves-style technical competence.

“The [large-n BosonSampling formalism] is a good example of why physicists are really obliged to give every explanation they can think of. The Hilbert-space perspective, though intuitive and right, is sterile. The less intuitive low-dimension polynomial perspective has turned out to be the royal road [to viable BosonSampling designs]”

Conclusion Large-n BosonSampling demonstrations are proving to be every bit as fundamentally important, difficult, uncertain, contentious, and thrilling, as astronomical gravity-wave detection.

“I’m not going to change your mind, but maybe I should say for the benefit of others reading that the idea that QCs have a “power requirement of order the speed of light squared” is complete nonsense. “

Isn’t that a bit like arguing about the daily dietary calorific requirement of a Unicorn? 🙂

Jokes apart, I do agree about the speed-of-light-squared assertion being wacky.

Assuming we are going to see n=7 BosonSampling experiments in the near future, are there any conceivable features such experiments might observe that would make you less cheerful about the ability of BosonSampling to overthrow the ECT?

I.e., if you had to play devil’s advocate, is there anything that, you think, might possibly derail the BosonSampling scaling effort?

Rahul #64: Well, there are basically three such possibilities, mirroring the analogous possibilities for conventional quantum computing.

It could just turn out to be too laborious to scale—maybe no new ideas will ever be discovered to improve the reliability of the sources (and hence the coincidence detection rates), so things will stall where they are. That would be disappointing.

Or, there could be some dramatic new discovery that overthrows our current understanding of how identical bosons behave, or even of quantum mechanics itself. I regard that as fantastically unlikely—and if it happened, it would of course be a million times more interesting and worthwhile than a “mere success” at scaling BosonSampling.

Or, it could turn out that the permanents of Gaussian matrices can be approximated in BPP^NP, and that all the other hardness conjectures one might use here are also false. In that case, at least approximate BosonSampling would be easy classically. I, too, would find that a surprising and fascinating outcome, but admittedly it would probably be more interesting to complexity theorists than to the rest of the world.

I feel I must reply, because my original comment was actually “the power complexity of the quantum model is perhaps linear, but with a proportionality constant of the order of the speed of light squared, due to the mass-energy equivalence”, which Scott shortened to “QCs have a power requirement of order the speed of light squared” before commenting that the units weren’t right! 🙂 That’s like Einstein mistakenly claiming E=c² because of a copy-pasting mistake…

I had honestly come to believe there was some creation of output data during a quantum process. Tiny as this might have been, after being multiplied by c² it could have explained why it always takes more power to process data that’s one bit longer. I still believe that whatever energy you spend to prevent the decoherence issue is directly related to the complexity of the computational problem being solved. I really hope the matter will be sorted out sooner or later.

So far Scott has presented four BosonSampling outcomes in all, namely three negative outcomes (of comment #64) and one positive outcome: “getting Gil Kalai to admit he was wrong”. This calls to mind a gently humorous passage in Robert Heinlein’s young-adult science-fiction novel Tunnel in the Sky (1955)

Psychologists once locked an ape in a room, for which they had arranged only four ways of escaping. Then they spied on him to see which of the four he would find. The ape escaped a fifth way.

“Fifth ways” in BosonSampling One plausible candidate for a “fifth way out” in regard to BosonSampling would be a PTIME algorithm for scattershot sampling from all but an exponentially small fraction of iid Gaussian matrices.

Algorithmic humor In a more humorous vein, the cover of Forman S. Acton’s textbook Numerical Methods That Work (1970) hilariously embossed the ink-free word “usually” between “That” and “Work”.

Conclusion “Fifth way” algorithms that usually solve simplex, 3SAT, and drug-binding/protein-folding problems are exceedingly valuable both fundamentally and practically … and complexity theory provides no very compelling reason (known to me at least) to foresee that BosonSamplers will conceive no similarly valuable “fifth way” algorithms.

Question Would such “fifth way” algorithms for BosonSampling prove that “Gil Kalai’s postulates are right”? Even the logical possibility of such algorithms suggests that “Gil Kalai’s postulates are not obviously wrong”.

John #67: Your “fifth possibility” that you introduce with such fanfare is essentially equivalent to my third one. The only reason to specify that a BPP^NP algorithm is for Gaussian matrices is that such an algorithm would only need to work for most matrices drawn from the iid Gaussian ensemble, not for all of them. (The latter is known to be #P-hard.)

Jay #47: the initial x3600 test was a comparison between two systems that were specified by a customer (Google) and run by an independent third party (Cathy McGeoch, who is a world-leading expert in this sort of test). It wasn’t D-Wave that either set the competition or ran the tests.

The x600 number is between two completely different systems. Not only is the quantum computer totally different, the classical solvers used are completely different. As far as I can tell these tests were done by D-Wave and aren’t connected to acceptance tests for customers.

The motivation (possibly mistaken?) for this proviso is the proof in Aaronson/Arkhipov “The Computational Complexity of Linear Optics” (2011) of Theorem 4.3: Hardness of Approximating Per(X)^2.

The proof stipulates “an input [permanental] matrix, which we assume for simplicity consists only of 0s and 1s.” These (crucially?) restricted 0-or-1 elements, even allowing finite imprecision, are of course only a tiny subset of iid Gaussian matrix elements, and moreover the “scattershot” proviso mixes these elements so thoroughly that (seemingly?) oracular assistance is required to sample a set of explicitly reducible 0-1 matrices.

The ensuing web of oracle-dependent complexity-theory implications is plenty tricky, for me at least, and plausibly tricky for the great majority of Shtetl Optimized readers too … there is no shortage here of inferences in which loopholes can reasonably be sought.

Closing confections Here are four quotes reflecting the time-honored interest in these difficult matters:

“A work should have either intelligibility or correctness; to combine the two is impossible, but to lack both is to be unworthy.”
— Bertrand Russell (1901)

“The virtue of a logical proof is not that it compels belief but that it suggests doubts.”
— Henry George Forder (1927)

“Beware of bugs in the above code; I have only proved it correct, not tried it.”
— Donald Ervin Knuth (1977)

Conclusion A loophole-closing Shtetl Optimized column that clarified the question “What would it take to really collapse the polynomial hierarchy?” would be gratefully and enthusiastically read by many (most definitely including me).

Best of all, and most importantly, there’s zero margin for doubt that Shtetl Optimized is an elite web-forum for ideas and information relating to quantum information and complexity theory.

John #70: No, sorry, you’re confused. The thing with the 0/1 matrices was only in our hardness result for exact BosonSampling, not in the result for approximate sampling (where we use iid Gaussian matrices throughout, embedding them as submatrices of Haar unitaries). And the complexity situation for scattershot BosonSampling is provably the same as that for ordinary BosonSampling—in the sense that both of them are hard to do, even approximately, assuming there’s no way to approximate the permanents of most matrices drawn from the iid Gaussian ensemble in BPP^NP.

Scott, thank you for your gracious reply (#71), which is entirely correct (as it seems to me), yet surely too, cannot be the last word.

Perhaps the nearest remark that we have to the last word is this passage from your own Quantum Computing Since Democritus:

Something inside me [and, I suspect, inside many other computer scientists!] is suspicious of those parts of mathematics that bear the obvious imprint of physics, such as partial differential equations, differential geometry, Lie groups, or anything else that’s ‘too continuous.’

So instead, I start with some of the most ‘physics-free’ parts of math yet discovered: set theory, logic, and computability.

This passage presupposes, implicitly at least, a converse community of researchers that embraces the implications of partial differential equations, differential geometry, and Lie groups, and is suspicious of (or at least struggles to grasp) the implications of set theory, logic, and computability. That converse community includes me, needless to say!

This division is at least as old as the atomic (hence discrete) worldview of Democritus, versus the geometric (hence continuous) worldview of Pythagoras. Bridging this division isn’t easy, even with modern mathematical tools that marvelously unite the discrete with the continuous. As Joseph Landsberg remarks in the introduction to his Tensors: Geometry and Applications (2012):

In the course of preparing this book I have been fortunate to have had many discussions with computer scientists, applied mathematicians, engineers, physicists, and chemists. Often the beginnings of these conversations were very stressful to all involved. […]

This difference of cultures is particularly pronounced when discussing tensors: for some practitioners these are just multi-way arrays that one is allowed to perform certain manipulations on [while] for geometers these are spaces equipped with certain group actions.

In view of these divisions, the single most enjoyable passage in Quantum Computing Since Democritus (for me) was its zeroth passage, namely its cover image, Rembrandt’s portrait Democritus the Laughing Philosopher.

In this wonderful painting, Rembrandt portrays laughter in its most inspirational and philosophically sophisticated form, which is (as it seems to me) Spinoza’s hilaritas. And in turn, the philosophical notion of hilaritas beckons us to consider all of the mathematical disciplines that bear upon quantum phenomena — both geometric and atomic, both continuous and discrete — as best appreciated hilariously (sensu stricto) to be universal, natural, and performative (also sensu stricto).

Conclusion The challenges of 21st century quantum research require all the hilarity that we can summon.

Erratum There are dozens of marvelous “laughing philosopher” portraits; it turns out that the cover of Quantum Computing Since Democritus is the portrait by Hendrick ter Brugghen, not the one by Rembrandt van Rijn. Needless to say, it’s still a terrific cover choice for an outstanding book.

Gil #60, I am not sure what you are talking about. It resembles something like, e.g., Eq. (*) in sec. 2.2 of arXiv:1109.1674, but in that case I would not associate it with vanishing bosons; it is rather the usual representation theory – the irreducible (?) representation of the unitary group on the space of symmetric tensors/polynomials (e.g., Weyl, “The Theory of Groups and Quantum Mechanics”?).

This passage presupposes, implicitly at least, a converse community of researchers that embraces the implications of partial differential equations, differential geometry, and Lie groups, and is suspicious of (or at least struggles to grasp) the implications of set theory, logic, and computability. That converse community includes me, needless to say!

As far as I’m concerned, I’m not suspicious of any parts of mathematics. I’m just convinced that the counter-natural efforts that have systematically been trying to keep logic, computability and complexity away from physics are now reaching a dead-end. I foresee that a deep connection between relativity, quantum mechanics and complexity theory is about to get uncovered as an unwanted side-effect of the quantum computing adventure.

[ABET-certified Computer Science (CS)] students must have one year of science and mathematics:

1. Mathematics: At least one half year that must include discrete mathematics. The additional mathematics might consist of courses in areas such as calculus, linear algebra, numerical methods, probability, statistics, number theory, geometry, or symbolic logic. [CS]

2. Science: A science component that develops an understanding of the scientific method and provides students with an opportunity to experience this mode of inquiry in courses for science or engineering majors that provide some exposure to laboratory work. [CS]

One year of math, for any STEM degree, isn’t very much. And yet it is fundamentally infeasible to strengthen undergraduate mathematics curricula and still graduate students within four years.

As Scott’s senior MIT colleagues Gerald Sussman, Jack Wisdom, and Meinhard Mayer say in the introduction to their (excellent but immensely long) textbook Structure and Interpretation of Classical Mechanics:

Classical mechanics is deceptively simple. It is surprisingly easy to get the right answer with fallacious reasoning or without real understanding.

If the Sussman/Wisdom/Mayer lament is true for classical mechanics, how much more true is it for quantum mechanics, as learnt by students pushed to study subjects too soon and too fast, by reason of a relentlessly fast-paced four-year curriculum?

Conclusion At the undergraduate level, the leisurely maturing of undergraduate mathematical capabilities, to foster a more universal, natural, and performative appreciation of STEM enterprises, is infeasible not by reason of obtuseness or conspiracy, but simply by reason of time.
