Hard questions about quantum crypto and quantum computing

We’ve been assured for 29 years that quantum crypto is secure, and for 19 years that quantum computing is set to make public-key cryptography obsolete. Yet despite immense research funding, attempts to build a quantum computer that scales beyond a few qubits have failed. What’s going on?

The soliton model challenges the Bell tests which purport to show that the wavefunctions of entangled particles are nonlocal. It also challenges the assumption that the physical state of a quantum system is entirely captured by its wavefunction Ψ. It follows that local hidden-variable theories of quantum mechanics are not excluded by the Bell tests, and that in consequence we do not have to believe the security proofs offered for EPR-based quantum cryptography. We gave a talk on this at the theoretical physics seminar at Warwick on January 31st; here are the slides and here’s the video, parts 1, 2, 3, 4 and 5.

Which “security proofs offered for EPR-based quantum cryptography” exactly do you think may be affected by anything you say in that paper, and how exactly?

Oddly, you provide not a single literature reference to any such security proof. I found only a single, extremely vague paragraph talking about quantum cryptography at all (end of page 5), but nothing resembling a sound argument.

This paper discusses implications of Robert Brady’s new “sonon” interpretation of quantum mechanics. The latter sounds indeed very interesting, but its verification is outside my area of expertise, something I’ll happily leave to professional physicists. The main claim following from his sonon model in this paper is that it would be geometrically and physically impossible to entangle more than 4 qubits with each other simultaneously. If that limit were indeed in place for our universe, it would instantly kill any attempt at quantum computing: any practically useful quantum computer would have to entangle thousands of qubits simultaneously, and for thousands of operations. So I think I have understood the argument of the paper regarding the first half of its title: why quantum computing may be impossible.

But what about the quantum cryptography half of the title?

Quantum cryptography is a very different endeavour from quantum computing. Most practically implemented protocols are quantum key distribution systems based on the original BB84 scheme, which does not involve any entanglement, but merely exploits the fact that nobody can measure the exact polarization angle of a single photon. I see nothing in your paper affecting that assumption.
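To make that last point concrete, here is a toy sketch of my own (not any deployed system) of BB84 with an intercept-resend eavesdropper. The point is that Eve, unable to measure the polarization without picking a basis, unavoidably corrupts about a quarter of the sifted key:

```python
import random

def bb84_round(eavesdrop):
    """One BB84 qubit: Alice encodes a bit in a random basis (0 = rectilinear,
    1 = diagonal); Bob measures in his own random basis. Measuring in the
    wrong basis yields a uniformly random bit."""
    alice_bit, alice_basis = random.randint(0, 1), random.randint(0, 1)
    bit, basis = alice_bit, alice_basis
    if eavesdrop:                         # intercept-resend: Eve measures, then re-sends
        eve_basis = random.randint(0, 1)
        if eve_basis != basis:
            bit = random.randint(0, 1)    # wrong basis randomises the bit
        basis = eve_basis
    bob_basis = random.randint(0, 1)
    bob_bit = bit if bob_basis == basis else random.randint(0, 1)
    return alice_bit, alice_basis, bob_bit, bob_basis

def error_rate(eavesdrop, n=20000):
    """Error rate on the sifted key (rounds where Alice's and Bob's bases match)."""
    errors = matches = 0
    for _ in range(n):
        a_bit, a_basis, b_bit, b_basis = bb84_round(eavesdrop)
        if a_basis == b_basis:
            matches += 1
            errors += (a_bit != b_bit)
    return errors / matches

print(error_rate(False))  # 0.0: without Eve, sifted bits always agree
print(error_rate(True))   # ~0.25: intercept-resend betrays the eavesdropper
```

The 25% disturbance is what Alice and Bob look for when they compare a sample of their sifted bits; note this detection argument nowhere appeals to entanglement or Bell violations.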

If you refer to any other quantum-crypto proposals involving entanglement (which?), do these rely anywhere on Alice and Bob being able to entangle more than 4 qubits simultaneously? If not, what exactly does your paper say about these schemes?

Traditional interpretations of quantum mechanics appear to be extremely weird, because of the “spooky interaction” in the EPR paradox, and Bell’s claim that quantum mechanics cannot be explained more sanely by local hidden variables, judging from experimental evidence. Your coauthor’s work tries to show that Bell might be wrong and that the universe might well run on hidden variables after all, and offers a model to do at least quantum-electrodynamics (electrons, photons). That is extremely interesting, but I am not sure where the implication for quantum cryptography are. You can still implement quantum cryptographic protocols on top of hidden variables.

Your paper did not even hint at any new attacks against any quantum cryptographic protocol.

Dear Ross,
Hope things are well, and glad to see your interest in quantum information theory.

Let me say though, that there are several significant misapprehensions in the paper:

1) You espouse a local hidden variable model (LHV) of quantum theory. However, you won’t find a single reasonable physicist who believes such models, because they have all been ruled out by hundreds of experiments since 1982 where quantum theory is shown to violate a Bell inequality. All that’s required for Bell’s theorem to kick in is for the measurements to be carried out at space-like separated points. That’s it. What the measurements are being carried out on, how long the photons travelled, or how long some other wave travelled is irrelevant. So, you seem to dismiss a number of such experiments, but the reasons you give are irrelevant to Bell’s theorem. The reasons also seem somewhat opaque to me (information doesn’t travel faster than light in quantum theory; which arm the photons went through doesn’t matter; how long the photons travel doesn’t matter for Bell’s theorem; etc.). So, the point is that all we need is for the measurements to be space-like separated. If you’re going to claim that the entire experimental physics community doesn’t know how to make a space-like separated measurement, you’re going to have to provide more than a few lines of explanation!
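To see how little Bell’s theorem assumes, here is a minimal sketch (illustrative only). Any local hidden variable model is a convex mixture of deterministic strategies, where the hidden variable fixes each side’s outcomes for each setting independently of the other side; enumerating all of them bounds the CHSH quantity at 2, while the singlet state predicts 2√2:

```python
import itertools, math

# CHSH combination: S = E(a0,b0) - E(a0,b1) + E(a1,b0) + E(a1,b1).
# In an LHV model the hidden variable fixes Alice's outcomes (A0, A1) and
# Bob's (B0, B1) in {-1, +1}, independent of the distant setting.
# Enumerate all 16 deterministic strategies:
best = max(abs(a0*b0 - a0*b1 + a1*b0 + a1*b1)
           for a0, a1, b0, b1 in itertools.product([-1, 1], repeat=4))
print(best)  # 2 -- the CHSH bound for any local hidden variable model

# Quantum mechanics on a singlet predicts E(a,b) = -cos(a - b); with the
# standard settings this reaches 2*sqrt(2), violating the bound.
a = [0, math.pi / 2]
b = [math.pi / 4, 3 * math.pi / 4]
E = lambda x, y: -math.cos(x - y)
S = E(a[0], b[0]) - E(a[0], b[1]) + E(a[1], b[0]) + E(a[1], b[1])
print(round(abs(S), 3))  # 2.828
```

The derivation makes no reference to what is being measured or how far anything travelled, only to the outcomes being fixed locally, which is why the space-like separation condition is the whole story.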

2) As mentioned by the previous commenter, security of quantum cryptography doesn’t rely on a violation of Bell’s theorem. The first, and major, protocols (BB84, E91) use measurements which admit an LHV, and their first security proofs (Mayers, Lo-Chau, Shor-Preskill) don’t use a violation of Bell’s theorem to prove security.

3) Perhaps more to the point than (2): what conclusions you can draw on quantum cryptography or computation depends on the extent to which you believe quantum theory or your theory correctly reproduces the results of measurements, and where they differ. It’s a bit unclear from your paper where you stand on this. At one point, you say that your theory reproduces quantum mechanics, and I get the impression you believe it correctly reproduces three qubits (since you seem to agree that we’ve demonstrated complete control, computation, and quantum mechanics for three qubits), but presumably you don’t believe that it reproduces the results of measurements on two qubits (e.g. as in Bell’s theorem). So I’m not sure what conclusions to draw here. You also claim that your theory is a local hidden variable version of de Broglie-Bohm. But the whole point of de Broglie-Bohm is that it’s a non-local hidden variable theory which reproduces quantum theory. So, are you saying that your theory reproduces quantum theory? In which case, you might want to instead claim that you’ve found a classical poly-time algorithm for factoring 🙂

There’s a number of other perplexing parts of your paper. E.g. time-reversal symmetry does not imply a violation of micro-causality. All classical deterministic theories have this property, and they don’t violate micro-causality. Same with most QFTs (or all, depending on your meaning of time-reversal symmetry). Perhaps these are all better discussed off-line.

Incidentally, the state of quantum computing is pretty exciting, and it’s not the case at all that we’re stuck at three qubits. See David DiVincenzo’s overview at this year’s QIP (the slides are there, and the video should be up soon).

Hello,
If you agree with the experimental fact that not all the photons are detected (whatever the abstracts may claim), there are efficient simulations with hidden variables, e.g. this one: http://www.q-crypt.com/animhtml5.html

Jonathan, you are right to say we have a local hidden-variable model of quantum mechanics. When Robert came up with his soliton model of the electron I was extremely impressed; we were not at the time aware of Yves Couder’s work, and I thought it was striking that a completely classical model could not only give us quantum electrodynamics but also lead to a calculation of the fine structure constant. Like you, my immediate next reaction was “What about Bell’s theorem?” and then “Maybe that explains why quantum computing is stuck at three qubits.” We spent several months discussing these issues and, as we explain in our joint paper, we no longer believe that the Bell tests prove what most quantum theorists claim they do. The standard analysis assumes that the wavefunction ψ expresses all the physical state of the system, while in the soliton model there is the χ wave as well, on which ψ is modulated. Once that assumption is challenged, the whole game is open once more. This leaves two possibilities: that there still is spooky action at a distance, just less of it (what we call the weak soliton hypothesis, which is consistent with Cramer’s transactional interpretation of quantum mechanics and Mead’s version of quantum electrodynamics), or that we can have an underlying classical model of the world (the strong soliton hypothesis). Even if the soliton model turns out to be wrong in both its weak and strong forms, it makes an important contribution by forcing us to clarify the assumptions which underlie foundational arguments.

Of course “most physicists” believe orthodox quantum mechanics, but then “most physicists” believe that action is local and causal. The apparent contradiction between these popular beliefs intrigued Bell and is what makes Bell tests important. But it’s excessive to claim that I “won’t find a single reasonable physicist who believes such models”; they are explored in fora such as the Emergent Quantum Mechanics workshop, with some of whose attendees we’re having interesting discussions. Incidentally, a number of the other theories being kicked around there also have significant implications for quantum computing, quantum crypto or both.

As for the Bennett-Brassard quantum cryptosystem, I’m fully aware that it doesn’t use entanglement. In fact Markus and I had a long discussion at our security group meeting on Friday after I presented this work. It may well be that a particular implementation of quantum cryptography is secure (though watch out for this sort of thing); however the security case will be very different. Once there is a local hidden-variables theory of quantum mechanics, or even an emergent theory based on other assumptions, you will no longer be able to invoke a “proof” from physical principles. It will be more like making a Tempest security case for a cryptographic device; you’ll have to analyse the electrodynamics of a particular design carefully and test it thoroughly.

Incidentally, that discussion threw up yet another possible explanation, contributed by Markus, of why we’re stuck at three qubits: the MIMO model. The gist is that there are only three independent degrees of freedom in radio from a single location in a scattering environment. Even though a full description of electromagnetic wave scattering in a building might involve a lot of apparent degrees of freedom (as with the conventional explanation of the quantum mechanical wavefunction) it ultimately reduces to a transformation of the coupling between sender and receiver, given by a matrix that is usually not degenerate. This provides another reason to question the conventional view that since the de Broglie-Bohm guiding waves of two entangled particles formally have six dimensions, they have no interpretation in the geometry of physical space. That doesn’t necessarily follow.

Ross, I’m trying to understand your last comment, but I’m lost as soon as you mention the wave function in relation to Bell’s theorem. The fact of the matter is that Bell’s theorem makes no reference to the wave function, since it is not a theorem about quantum mechanics at all, but rather about local hidden variable models. The notion of a wave function should not appear in either the statement or the proof of the theorem.

Ross, forget multi-qubit arguments: are you aware of the many sophisticated tests of quantum mechanics that have been carried out in the last few decades, which prove that something like your classical soliton model cannot possibly work?

Dear Ross,
Joe is right. Like I said in my original post, the only thing that’s required for a Bell violation, besides the statistics themselves, is that the measurements be space-like separated. That’s it. You need to avoid the situation where Alice’s measurement setting/outcome can be transmitted to Bob’s device before he makes his measurement.

There are certainly classical theories which can mimic aspects of quantum mechanics (Stochastic Electrodynamics, Spekkens models etc.), and some of them are interesting for various reasons, but if they don’t violate a Bell inequality, then they’ve been falsified by experiment.

I’m not sure what you mean by “less spooky-action-at-a-distance”, or why this is a desirable feature for a theory to have (sounds like being a little bit pregnant), but if you mean that you want to alter your theory just slightly so that it now has a small non-local hidden variable, then you have to contend with this theorem: http://arxiv.org/abs/0801.2218

The link you give lists a number of researchers, but I know of none of them who would suggest that a theory of nature doesn’t need to violate a Bell inequality. Even ‘t Hooft.

Regarding Bell’s inequality, I hope the following clarification is helpful.

It is recognised that Cramer’s transactional interpretation of quantum mechanics is consistent with experiments on Bell’s inequality, as is Mead’s adaptation of it. Both of these approaches are referenced in the sonon paper, together with a discussion of why the motion of sonons is consistent with them.

The underlying reason for the consistency is that Euler’s equation is time reversal symmetric. Cramer’s model makes use of the same time reversal symmetry. It might be thought unsatisfactory to rely on this property, which is why we suggest the alternative hypothesis, that it may be possible to interpret the sum of the advanced and retarded solutions in terms of a spacelike function. When I presented the paper to fluid dynamicists at Warwick recently, they were surprised there was any problem with there being non-local motions as a result of Euler’s equation (which in turn surprised me because the equation itself is, strictly speaking, completely local). Nevertheless, this is an open suggestion regarding an interpretation. The motion itself is consistent with experiments on Bell’s inequality for the reason described above.

Classical models, however beautiful, have no capacity to approach any problem that quantum mechanics solves using the notion of entangled states (aka many-particle superpositions), regardless of interpretation. At the behest of my quantum information colleagues, I have made an explicit list of well known physical phenomena that invalidate your approach:

Seriously, though, I read this blog post five times and I’m completely lost. Is there any kind of layman’s explanation for what’s going on? I’m reasonably conversant in scientific terminology, but this is really obscure.

In case you didn’t get the point of my post above, even before you get to Bell inequalities and multi particle arguments your “classical” model of QM already fails according to modern experiments.

I posted above an example of an experiment where a SINGLE photon is shown to exhibit behavior that cannot possibly be explained by a local classical model. This should be your first concern – forget anything more complicated.

In fact, if you can refute the claim of the Aspect et al 2006 paper that no local hidden variable model could explain it then that in itself would be something pretty astonishing.

First of all, I do agree that Bell experiments up to now are not entirely conclusive. No single experiment rules out all the “loopholes” (locality, efficiency, …). However, I would also agree with Jonathan that the experiments together form a strong basis for argument. Any hidden-variable mechanism that gives the appropriate behavior for all of these experiments would have to be complicated. But I like to keep an open mind. I’d like to understand, not just postulate a Hilbert space and calculate.

I really like Couder’s experiments, but one has to keep in mind that it is a classical-physics example of a system that behaves almost like Bohmian mechanics. The important difference is that Couder’s fluid has a finite propagation speed, while Bohmian mechanics has infinite propagation speed of the “quantum potential.” Bohmian mechanics is manifestly nonlocal (and this has nothing to do with dimensionality, I might add). You seem to be saying that the soliton model is local, since it is Lorentz invariant. At low amplitude, you say. Does this mean it is really only approximately Lorentz invariant? This would be a downer.

My work on improving Bell tests has led me to look closely at quite a few local hidden variable proposals. They mainly fall in two categories: those that are in fact nonlocal, and those that cannot give the quantum predictions. A properly local hidden variable model is bounded by the Bell inequality, full stop. I might add here that the Bell inequality is not related to quantum mechanics as such; it is a statement about classical models (of a particular kind). Your model would be of the latter kind, unless it is really nonlocal because of the approximation I asked about. This means it cannot give all the quantum predictions, like long-distance Bell violations. How fast do the correlations drop in your model? At what distance is there no violation anymore?

I’ve also been looking at a single spin-1 system to determine what a hidden-variable model would look like for that system. No locality issues arise, but it turns out that the quantum-mechanical predictions can only be had if the model is – for lack of a better word – ugly. Robert’s paper is about a spin-1/2 particle. Any attempts to go to higher spins?

Slava #9: I answered this question when you posed it on Scott Aaronson’s blog. Briefly, you think we must do a lot more work before we convince the mainstream physics community to take sonon theory seriously as an interpretation of quantum mechanics, like (say) the Copenhagen, transactional or many-worlds interpretations. We agree. However I think it’s unreasonable for you to ask us to rework the entire standard model, not to mention the exchange interaction, the gyromagnetic ratio, superconductivity and much else. That sets a much higher bar for us than for other people who have come up with novel interpretations. As a starter, would you be prepared to take sonon theory more seriously if we came up with further non-trivial results, such as on superconductivity or the weak interaction?

James #11: In the 2006 paper, Aspect sends photons along two 48m fibres, combines them, and they display an interference fringe – even if the light source is dimmed to single photons. Carver Mead’s model explains this.

Jan-Åke #12: This is a really interesting point, and one of the strong points of the soliton model rather than a “downer”. The nonlinear effects explain the Bose behaviour of light which is exploited all around us in lasers, while the fact that they average out over a cycle ensures that the expectation values (which are all you can measure) are all Lorentz covariant.

We don’t have anything on spin-1 particles ready for publication.

I’ll leave it to Robert to discuss the details of correlated spins and the standing wave between particles, but as a possibly useful analogy, consider a world in which no information travels faster than the speed of sound in water. A tsunami sets off from Indonesia and many hours later arrives simultaneously at Madras and Madagascar. The fact that the carrier wave is still phase coherent doesn’t mean that information can be transmitted instantaneously between these two places.

Let’s refer to the correlated spins in Figure 5b as |ud>. As you would expect from Bell, most of the fluid energy is not localised near the sonons. It is in the standing wave between them, which has an antinode on the mirror line.

The standing wave is mathematically equivalent to an advanced wave and a retarded one, as in the Feynman/Mead models of QED. That doesn’t mean information can be transmitted faster than c (whether the speed of sound, or of light, depending on the model) – See Ross #13.

The usual spin superpositions are also valid solutions. For example, |ud> + |du>, which has two separate standing waves. Following Mead’s model, suppose another device resonates with the wave of, say, |ud>. It will parasitically drain energy from |du>. This corresponds to “measuring” |ud> and removing the |du> state. The measurement is nonlocal in the sense described above.

Ross, I’m afraid I am again having difficulty parsing your comments. In #13 you refer to “sonon theory” as an interpretation of quantum mechanics, yet your present preprint is based on the fact that it diverges from the predictions of quantum mechanics in certain regimes. These are mutually exclusive, since any interpretation must yield identical predictions to quantum mechanics in all regimes, as otherwise it would be an entirely distinct theory and not an interpretation at all.

You forgot to mention that the Aspect et al 2006 experiment includes a spacelike-separated quantum random number generator which “decides” the configuration of the interferometer.

However, I assumed you were emphasizing a true local classical model, like the ones in fluids you discuss in the paper – but if you are saying that’s not possible (after all) and you do in fact require Mead’s time-travelling wave model then I’m not sure your conclusions on bounding entanglement to a few qubits are at all justified.

All the quantum experiments only indicate that Bell’s inequalities are violated, and that is all. They don’t actually do anything to prove that Bell’s Theorem is correct. BT is all about probabilities on the real line. As soon as you go to topological arguments involving parallelized 3-spheres, you can get probabilities that match quantum mechanics. Joy Christian’s local realistic model does just that, as you can find in his book. And more generally, Dr. Christian has shown that all quantum correlations can be explained by parallelized 7-sphere topology.

Now, since entanglement is shown to be just an illusion by Dr. Christian, if quantum computing is relying on that, then Ross and Robert are correct that it is going to be extremely difficult if not impossible to have scalable quantum computers. However, quantum probabilities do in fact beat Bell’s linear probabilities so perhaps some advantage could be had from that. Dr. Christian doesn’t think so.

Well… a cursory glance at your paper shows that you are only using Maxwell and classical mechanics. That means that your system is only accurate assuming an infinite speed of light.

As the speed of light is experimentally shown to be finite and constant in a vacuum, your system cannot model wave-function collapse correctly (again as measured in experiments in Space-like separated experiments).

On the plus side, you could be right and the Bell inequality doesn’t hold in reality. That means Quantum Mechanics is axiomatically incorrect, and 80 years of Physics go out of the window. Basically, I may be telling the next in the line of Newton->Einstein->Dirac->Anderson that he’s wrong.

I’m not even sure if cracking RSA actually needs quantum effects – its security is based around computers being built solely from NAND gates. I’m pretty sure you just need to find some physical experiment that can do Fourier transforms. Light-wave interference probably would do the trick if you could get the wavelengths accurate enough. I’m sure that the NSA have already thought of something – as a backup if they don’t just hack one of the root CAs, or just get a job as a cleaner and walk in with a USB key…
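For what it’s worth, a Fourier transform is indeed the heart of Shor’s algorithm, but the catch is the size of the transform. Here is the classical skeleton (a brute-force sketch of my own, not Shor’s actual quantum circuit): everything is cheap except the period-finding loop, and that loop, exponential in the bit length of N, is exactly the step the quantum Fourier transform replaces. A physical analogue FT would need a resolution exponential in the key size, which is why interference alone doesn’t obviously do the trick:

```python
import math, random

def factor_via_period(N):
    """Shor's algorithm minus the quantum part: find the period r of
    a^x mod N by brute force (the step the quantum Fourier transform
    speeds up), then read factors off gcd(a^(r/2) +/- 1, N)."""
    while True:
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g, N // g            # lucky guess already shares a factor
        # classical period finding: exponential in the bit length of N
        r, y = 1, a % N
        while y != 1:
            y = (y * a) % N
            r += 1
        if r % 2 == 0:
            x = pow(a, r // 2, N)
            if x != N - 1:              # need x^2 = 1 with x != +/-1 (mod N)
                p = math.gcd(x - 1, N)
                if 1 < p < N:
                    return p, N // p

print(factor_via_period(15))    # (3, 5) in some order
print(factor_via_period(8051))  # (83, 97) in some order
```

On RSA itself: its security rests on the assumed hardness of factoring for any efficient classical computation (not specifically NAND gates), so a physical device that did this FT at scale would indeed break it.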

Maxwell derived his equations using classical mechanics, assuming that magnetic lines of force are what we’d nowadays call phase vortices in a fluid-like medium. Then the constant c is just the speed of sound in the fluid. See our more recent paper which spells all this out, and the associated press coverage. These explain how the Bell inequality is violated even in classical fluid mechanics.

Agreed – that fluid was called the Ether. Unfortunately, the speed of light turned out to be “constant in a vacuum”, and independent of the motion of the Ether and observer. That’s why *after* Maxwell, Einstein came along with Special relativity (with time dilation and all that jazz).

The problem is, two events in “space-time” do not have a well-ordered before/after if there isn’t time for light to travel between them. A bit like two threads in a race condition – if you don’t put a lock in, you can’t guarantee integrity of the data that they modify.

That’s why wave-function collapse is interesting – somehow entangled photons with opposite spin always are measured as being opposite, even though there is not enough time for any information to travel between them.

This is why you’d need infinite c – your system can’t guarantee that two measurements far apart will always come up with the right answer without communication.

WRT Quantum Computers – I don’t think that factorization is a killer application. Superposition – using one piece of circuitry for multiple calculations in parallel – is what I want. You can do that with optical computers though…

If the fundamental particles are quasiparticles in a fluid, then they are solutions to the wave equation (to first order) and the Lorentz contraction applies to them. This was all worked out by Fitzgerald, Lorentz and others in the late 19th century. You get the observed effects, including the measured constancy of c, the speed of sound in the fluid.

Where most 19th-century physicists got confused is that they believed the ether must be an elastic solid, or a gel of some kind, rather than a fluid, as they could not understand how a fluid could support polarised waves. Then the Michelson-Morley experiment excluded the hypothesis of a non-fluid ether, so Einstein simply proposed turning the problem round and taking the observed constancy of c as an axiom.

Nowadays, however, we know of lots of ways in which polarised waves propagate in fluids, from atmospheric waves where the symmetry is broken by Coriolis forces, to waves in superfluids where superflows perform this function. It’s time to revisit the whole question, and other physicists are looking at this too. See for example Volovik’s hypothesis that the quantum vacuum is a fermionic superfluid; this is linked from here.

In the standard interpretation of quantum mechanics, there appears to be faster-than-light synchronisation between two entangled particles, but the no-signalling theorem stops this being used for the transmission of actual information. In our model, that isn’t an issue, as entangled particles are already synchronised using external mechanisms. Einstein always disliked the “spooky action at a distance.” An even more extreme modern example is when you entangle photon A with B, then B with C, then C with D, so that D is entangled with A despite the fact that they did not exist at the same time. Now when you measure D, you cannot signal back in time to A; the no-signalling theorem becomes a no-tardis theorem, so you can’t use it to order the assassination of your grandfather. Nonetheless it is a very strange interpretation to put on things. Viewing entanglement as pre-existing synchronisation is much more natural.

“Viewing entanglement as pre-existing synchronisation is much more natural.”

Natural, yes – but experimentally shown to be false, as predicted mathematically by the axioms of QM and the EPR experiments/Bell inequality.

This is why the “many worlds” QM interpretation exists. It is a completely crazy/unnatural concept, but it is the only way of explaining why a closed Quantum system Hamiltonian is Unitary (reversible computing), but for some reason a measurement is a projection (non-reversible).

Except, of course, if that observer is inside a closed system being observed from the *outside*, in which case the projection operation is still Unitary and thus reversible entanglement from the perspective of the external observer.

This is the basis of many worlds – there is no wave-function projection during collapse, the observer merely becomes irreversibly entangled with the system (s)he is observing (and doesn’t have enough state to reverse it).

From the perspective of an ultimate external “God” viewer, the measurements being done inside the system are not a projection at all, and everything is reversible. You also nicely solve the Entropy problem like this.
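The unitary-versus-projection distinction above can be made concrete with 2×2 matrices (a minimal numerical sketch, using the Hadamard gate as the unitary and the projector onto |0⟩ as the measurement):

```python
import math

def matmul(A, B):
    """2x2 matrix product, enough for this illustration."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

h = 1 / math.sqrt(2)
U = [[h, h], [h, -h]]          # Hadamard: real and symmetric, its own inverse
P = [[1, 0], [0, 0]]           # projector onto |0>

UU = matmul(U, U)              # unitary evolution is reversible: U*U = I
print(UU)                      # numerically the identity matrix

PP = matmul(P, P)
print(PP == P)                 # True: P is idempotent (P*P = P)

det_P = P[0][0] * P[1][1] - P[0][1] * P[1][0]
print(det_P)                   # 0: P is singular, so projection cannot be undone
```

The determinant-zero point is the whole irreversibility story in miniature: a closed-system Hamiltonian generates invertible evolution, while a projection throws information away, which is exactly the tension many-worlds resolves by denying that projection happens globally.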

I disagree. Our paper on Maxwell’s fluid model of magnetism shows that a wave packet propagating along a magnetic line of force, modelled as a phase vortex, behaves exactly as a photon does; it obeys Maxwell’s equations and also breaks the CHSH inequality in precisely the same way as is predicted by quantum mechanics and measured in the Bell tests. The line of force provides the pre-existing synchronisation.

There is no need to postulate multiple universes, or information flowing faster than light or backwards in time.

FWIW – the A->B->C->D of your example in Multiverse would be described as follows:

Every time you perform a measurement, and therefore appear to perform a Projection of the wave function, the “Universe Computer” performs a fork() and you get entangled with one of possible outcomes.

As the observer inside one of these processes, all you see is “electron up” or “electron down”. You can’t reverse this measurement because you have lost a load of state that ended up in the forked process and have no IPC.

From the OS’s point of view, this process is reversible, because it is tracking the state of both pids… which is necessary because a closed system must be reversible in QM.

FWIW – many people think that our perception of time is purely a result of a random selection of one of the fork()s. Dunno if you could ever prove this.

This fork() system of course makes no sense from a CS perspective because your computer would grind to a halt. Fortunately, as the Universe is a purely a mathematical construct that exists for exactly the same reason that 1+1=2 (there is no other option but for this to be true), in maths we can just say: “it’s an infinite fractal”.
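The fork() picture can even be sketched in code (a toy of my own, with illustrative names): each branch carries an amplitude, every measurement forks every branch, and while a single branch looks like an irreversible projection from the inside, the global bookkeeping stays normalised:

```python
import math

h = 1 / math.sqrt(2)
universe = {"": 1.0}   # one branch, amplitude 1

def measure_qubit_in_superposition(universe):
    """Each existing branch forks into an 'up' and a 'down' child,
    splitting its amplitude as in (|u> + |d>)/sqrt(2)."""
    return {branch + outcome: amp * h
            for branch, amp in universe.items()
            for outcome in ("u", "d")}

for _ in range(3):      # three successive measurements
    universe = measure_qubit_in_superposition(universe)

print(len(universe))                                    # 8 branches after 3 forks
print(round(sum(a * a for a in universe.values()), 10)) # 1.0: probability conserved
```

From inside one branch (say "uud") you only ever see your own outcomes and lack the state to reverse the fork; from the "OS" view the whole dictionary evolves deterministically, which is the reversibility the closed-system picture demands.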

I would personally be uncomfortable applying fluid dynamics equations to Maxwell equations, mostly because the derivations of the Navier-Stokes that I have seen kind-of imply things like mass and have a u(x) representing motion of stuff that (I traditionally assumed) doesn’t exist in Maxwell Equations.

These are the non-relativistic versions of the N/S and Euler equations, so I’d be careful in applying them to relativistic speed equations. There is mention of invariance over a Lorentz transform, which would not apply to my understanding of the (non-relativistic) fluid dynamic equations, so at some level this must have been modified for astrophysical-style fluid dynamics.

Having said that, this is the sort of thing that a skilled Physicist could “just look at” and tell you whether it was right or wrong.

The basic tenet seems to be the “lines of force”, which seem to be Bernoulli flow lines traditionally associated with constant pressure and particle velocity. There is an implication that the entanglement is related to photons that travel on these flows. This would imply that any two wave packets that happened to end up in the same place would be entangled.

The non-locality implies that the wave polarization is “classically decided” before the wave reaches the polarized detectors to which ordinarily I would say: “no it’s in a superposition of possible quantum states”.

Traditionally you would try to modify the experiment to recombine the photons after they had gone through the polarizers to “prove” that a single photon passed through both polarizers unless observed at one or the other. I am not sure what this would mean for your wave model; presumably the observation would affect only one wave path, and at the point of recombination the interference pattern would match the wavefunction. Except for probability renormalization maybe? Can your model correctly predict the normalized probability pattern for observed photons?

The CHSH calculation doesn’t look familiar (I remember 1/sqrt(2) in the Bell equations), but again a real Physicist could just look at that.
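That 1/sqrt(2) is exactly what appears in the textbook quantum CHSH prediction. A minimal sketch, using the standard singlet correlation E(a,b) = -cos(a-b) and the usual analyser angles (none of this is taken from the paper under discussion):

```python
import math

def E(a, b):
    # quantum correlation for the spin-singlet state at analyser angles a, b
    return -math.cos(a - b)

a1, a2 = 0.0, math.pi / 2            # Alice's two settings
b1, b2 = math.pi / 4, -math.pi / 4   # Bob's two settings

# Each term has magnitude cos(pi/4) = 1/sqrt(2); four of them add up coherently.
S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)

print(abs(S))   # 2*sqrt(2) ~ 2.828, above the local-hidden-variable bound of 2
```

Any local hidden-variable model is constrained to |S| <= 2, which is what the Bell-test argument turns on.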

Meh – I don’t know 🙂 I’m sure that someone in the Cavendish could say if this is feasible. If the Ether is back, I need to rethink my understanding of reality (especially Special Relativity).

OK – I had a think about this for a bit. The fluid dynamics model described by this paper is ultimately flawed.

Maxwell and indeed Schrodinger equations produce fluid dynamics-like waves, and the profusion of [d/dt + d²/dx²] operators makes it very tempting to reuse equations from other disciplines, but what actually happens internally in those systems differs at the level of phase constants.

In fluid dynamics, there will never be complex numbers in the system, and indeed even if you were to extend to a 6 dimensional fluid, the interactions between real and imaginary parts would not be correctly modeled by the Navier-Stokes equations. In QM, internally wave functions are modeled by complex numbers, but when mapped back to the real world at boundaries, all complex numbers disappear because they are either zero, or multiplied by their complex conjugates.
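A toy example of that last point, with an arbitrary made-up amplitude: the amplitude itself is complex, but the measurable quantity psi * conj(psi) always comes out real.

```python
import cmath

# Arbitrary complex amplitude: magnitude 1, with a nonzero phase attached.
psi = complex(0.6, 0.8) * cmath.exp(1j * 1.234)

# Born-rule probability density: psi multiplied by its complex conjugate.
prob = psi * psi.conjugate()

print(prob.real)   # 1.0, since |0.6 + 0.8i| = 1 and the phase factor has magnitude 1
print(prob.imag)   # 0.0: the imaginary parts cancel identically
```

The phase is still physically meaningful (it drives interference), but it never survives into a single probability on its own.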

This means that if you are careful in selecting your experiments and boundary conditions, the wave equations become mathematically identical and, yes, many QM dynamic systems will map onto equivalent fluid systems. This is because at the boundaries you can engineer the QM equations to contain no imaginary components. If however you stray from calculations where the imaginary components of your boundary conditions are 0, the fluid dynamics equations will fail to correctly model their QM counterparts.

In short, what you have actually done is show that in some cases a Quantum Mechanical system can emulate a Fluid Dynamical equivalent if you arrange your QM functions to not have any imaginary components at the points where you measure.

For example, a QM standing wave mathematically behaves in exactly the same way as a water standing wave.

Dear asdfasdf, Yes, indeed. Waves have an amplitude and a phase, which are described mathematically by a complex function R e^iS where R is the amplitude and S the phase. The same representation is used for the wavefunction in quantum mechanics as for waves in a fluid.
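For readers who want the explicit correspondence: substituting R e^{iS} into the Schrodinger equation is the standard Madelung transformation (a well-known result, not something specific to this paper). Writing the density as rho = R^2 and the velocity as v = grad(S)/m gives

```latex
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{v}) = 0,
\qquad
\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v}\cdot\nabla)\mathbf{v}
  = -\frac{1}{m}\,\nabla\!\left(V - \frac{\hbar^2}{2m}\,\frac{\nabla^2 R}{R}\right).
```

That is a continuity equation plus an Euler-like equation, but with an extra “quantum potential” term that an ordinary fluid does not have, which is precisely where the analogy stops being exact.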

In quantum mechanics you are not restricted to functions without “imaginary components at the points you measure”. For example, the phase of the wavefunction in a superconductor is measured using a Josephson junction.

Fully agreed – this is a very useful mathematical way of representing a wave. Yet my bath water never time travels, or however you would want to interpret that, because all the way through the real-world liquid only the Real part of “a+ib” is of relevance. You can reformulate Fluid Dynamics in a more complicated form that doesn’t appeal to imaginary values.

…but compare the Schrodinger and Dirac equations. These both *require* the imaginary component to exist as a separately tracked value, either as a complex number or as a separate dimension with interactions that have the same effect (which FD can’t simulate).
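That requirement is easy to check numerically. Below is a small finite-difference sketch (in units with hbar = m = 1; the values of k and the step size are my own arbitrary choices): the complex plane wave satisfies the free Schrodinger equation i d(psi)/dt = -(1/2) d2(psi)/dx2 to within discretization error, while the same wave with its imaginary part thrown away leaves a residual of order one.

```python
import cmath
import math

k = 2.0
omega = k * k / 2.0   # free-particle dispersion relation (hbar = m = 1)
h = 1e-4              # step for central finite differences

def residual(f, x, t):
    """|i df/dt + (1/2) d2f/dx2| estimated by central differences."""
    dfdt = (f(x, t + h) - f(x, t - h)) / (2 * h)
    d2fdx2 = (f(x + h, t) - 2 * f(x, t) + f(x - h, t)) / h ** 2
    return abs(1j * dfdt + 0.5 * d2fdx2)

def plane_wave(x, t):
    return cmath.exp(1j * (k * x - omega * t))   # genuinely complex solution

def real_part_only(x, t):
    return math.cos(k * x - omega * t)           # imaginary part discarded

print(residual(plane_wave, 0.3, 0.7))       # ~1e-7: solves the equation
print(residual(real_part_only, 0.3, 0.7))   # ~2: does not
```

The real-valued wave fails because d/dt turns a cosine into a sine, so the i in front produces an imaginary term that the purely real d2/dx2 term can never cancel.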

Similarly, going back to boundary values. At no boundary in Fluid Dynamics would I need to specify an imaginary flow of water, because in reality liquids do not move like that. If you carefully choose your QM boundaries at points where there is no imaginary part, then these equations are mathematically the same and do have the same solutions.

If you were to try to use a QM boundary where the imaginary component is important, then you’ll find that the FD and QM formulae produce very different results.

…and, for example, in equation (1) of your paper, where you use the Euler equation describing the motion of an incompressible liquid, you’ll need to explain why the Euler equation holds when describing a liquid moving in an “imaginary direction”.

So back to the basic response: yes, in some cases QM has equations that are superficially similar to FD ones. The underlying mechanisms leading to the superficially similar mathematics can’t be the same though (for a start, Navier-Stokes models less state than Schrodinger/Dirac), and you can’t apply FD equations to cases where you can apply Schrodinger/Dirac without a careful mathematical rederivation.

There are only three axioms of Quantum Mechanics. You should try to show that your system makes each of these either exactly true or a good approximation. The result will mean one of:
(1) Your system is mathematically equivalent to QM (and so not particularly interesting)
(2) Your system predicts stuff that QM doesn’t and so can be measured experimentally
(3) Your system fails to even remotely predict QM, and you are in trouble.

I’m guessing you’ll find (2). Your bouncing-liquid model is an approximation that is accurate (as with Newton’s equations) as long as the speed of light is large compared with the speeds involved at the timescales/distances you try to measure. Then we can put together an experiment to see which is right.

P.S. Every undergraduate first learns Maxwell, Schrodinger and Fluid Dynamics in the same year. Every undergraduate has at some point used the wrong equation in the wrong discipline. So far, no-one has gotten the correct result at the end.