Consensus Quantum Reality Revisited

My last post was a little ranty, perhaps. So let's be fair to the physicists. What physicists mean by randomness is that when they run an experiment, unpredictable results are seen. Furthermore, when viewed in aggregate, these unpredictable results perfectly match probability distributions of a certain sort. And given that there are no parameters one can control in these experiments to predict what the answers will be, the reasoning goes that we might as well consider them random, and build our theory accordingly.

This is fine, IMO, so long as you’re not trying to build an ultimate theory of physics. It’s a good idea, even, in the same way that spherical cows are a good idea. However, if you’re trying to get the answer right, and describe the smallest levels of physical existence, then, by definition, mere approximations won’t cut it.

However, this assertion, on its own, probably doesn't say or explain enough. For instance, what about Bell's Inequality? Bell's Inequality experiments absolutely rule out local realism. Local hidden variable theories simply can't work. Isn't that a strong indicator that there is inherent randomness in the universe?

In short, no. This is because I can simulate Bell’s Inequality results in the comfort of my own home without resorting to quantum randomness once. This is doable because Bell’s Inequality says nothing about non-local hidden variable theories.
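To make that claim concrete, here is one minimal sketch of what such a home simulation could look like. This is my own illustration, not a serious physical model: the hidden variable comes from an ordinary seeded PRNG, so every run is fully deterministic, and the model is explicitly non-local because Bob's outcome is computed with knowledge of both detector settings. The function names (`run_pair`, `chsh`) and the choice of CHSH settings are assumptions for the example.

```python
import math
import random

def run_pair(a, b, trials, rng):
    """Simulate one detector-setting pair with a deterministic,
    explicitly non-local hidden-variable model."""
    total = 0
    for _ in range(trials):
        # Hidden variable from an ordinary PRNG: deterministic given the seed.
        lam = rng.random()
        A = 1 if lam < 0.5 else -1
        # Non-local step: B's outcome depends on BOTH settings a and b.
        # Anti-correlate with probability cos^2((a-b)/2), which reproduces
        # the singlet-state prediction E(a,b) = -cos(a-b).
        B = -A if rng.random() < math.cos((a - b) / 2) ** 2 else A
        total += A * B
    return total / trials

def chsh(trials=200_000, seed=42):
    """Estimate the CHSH quantity S; local realism requires |S| <= 2."""
    rng = random.Random(seed)             # fully deterministic pseudo-randomness
    a, ap = 0.0, math.pi / 2              # Alice's two settings
    b, bp = math.pi / 4, 3 * math.pi / 4  # Bob's two settings
    return (run_pair(a, b, trials, rng) - run_pair(a, bp, trials, rng)
            + run_pair(ap, b, trials, rng) + run_pair(ap, bp, trials, rng))

if __name__ == "__main__":
    S = chsh()
    print(f"CHSH S = {S:.3f}  (local realism requires |S| <= 2)")
```

Run it and |S| comes out near 2√2 ≈ 2.83, the quantum-mechanical maximum, with no "true" randomness anywhere: the whole thing is a deterministic computation once the seed is fixed. The violation is bought entirely by the non-local step.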

The most well known of these is Bohmian mechanics, a pilot-wave approach first presented by de Broglie in 1927 and later rediscovered and extended by Bohm in 1952. This method has been thoroughly explored by physicists, but most of them walk away from it fairly unsatisfied, because it requires that every point in the universe can have instantaneous interactions with any other. The math of Bohmian mechanics is set up to ensure that the answer comes out exactly as it does for standard QM, while keeping the system deterministic. But, given that this doesn't add any expressive power, and makes the model non-local, that feels like a fairly poor compromise.

Fair enough. But Bohmian mechanics isn’t the only way to build a non-local theory. As we’ve pointed out on this blog, if you’re looking for a background independent model of physics, you have to start thinking carefully about how spatial points are associated with each other. And if you follow this reasoning in a discretist direction, you generally end up building networks, whether you’re into causal set theory, loop quantum gravity, quantum graphity, or any of the other variants currently being explored.

And, as soon as you start looking at networks, it’s clear that there are perfectly decent ways of non-locally connecting bits of the universe that are not only self-consistent, but provide you with tools that you can use to examine other difficult problems in physics.

If I seemed to be disparaging physicists for not considering hidden determinism in the universe in my last post, that was not my intention. I certainly don't mean to point the finger at any specific individuals, but I do believe that pointing it at the culture of physics in this regard is important.

We have experimental evidence of the non-locality of physical systems. However, we have no evidence that the universe runs on a kind of non-computable, non-definable randomness that flies in the face of what we know about information and the mathematics of the real numbers. Doesn’t that mean that we should be working a little harder to put together some modern deterministic non-local theories? Is it really better to hide under the blankets of the Copenhagen interpretation because this problem is hard?

After all, while issues of interpretation are broadly irrelevant to most of the day-to-day business of doing physics research, there is the small matter of quantum mechanics and relativity remaining unreconciled for the last hundred years. I would venture to propose that if we ever want to close that gap, having the right interpretation of quantum mechanics is going to be an important part of the solution.

Your article pushes the point that some extensions make quantum mechanics deterministic. You say that Bohmian mechanics does this, but only with the non-local quantum potential. However, your statement about Bell's theorem,

“Bell’s Inequality experiments absolutely rule out local realism.”

I do not accept. First, the word "absolutely" is a bit strong. I find the evidence unconvincing, and I do not mean the experiments, but Bell's conclusion. The quantum correlation that leads to the violation is assumed to occur because of the breakdown of Bell's locality assumption, but what if that is not the case? I believe, and have shown with simulation, that a different definition of spin in the absence of a field (no probe) can give the EPR correlation. The model is local and real. I am ready to submit the paper to Phys Rev A.

I agree that Nature should be deterministic (my spin model is, by the way), but also local and real. Non-locality defies a logical explanation and makes no physical sense.

Hi Bryan,
Thank you for your comment, and sorry for the very slow reply. I've been learning to be a dad and doing three jobs at once, which has made my interaction with the blogging world rather patchy. Your achievement with regard to Bell's Inequality sounds totally fascinating. I've started watching your videos. I'm only 5 minutes in so far, but will watch the lot. Your lecture style is both clear and approachable, and I'm looking forward to the rest. What you've found, if it holds up, is very important, I think. Some months have passed since you made your comment. Did Phys Rev A accept the paper, and if so, have you received much interest?

I’d like to pick up on your comment about non-locality. Please note that the kind of non-locality I think makes most sense is not the kind that people generally toss around in physics circles. If you see real problems with the way that non-locality is often discussed, then I’d agree with you. However, I don’t think that reduces the importance of the idea.

I say this because if we are building a discrete representation of spacetime out of something like a network, the notion of the physical position of *anything* requires revisiting. The location of a node in a network can only be defined in terms of the other nodes that are its neighbors. We can hook networks up in a huge number of different ways, only a very few of which look smooth and manifold-like. Even among those cases, there will usually still be short paths of some sort across the spatial graph. Hence, for discrete physics that uses networks, locality has to be either an emergent property of the system, or an unlikely property that we force in by hand. This means that, far from making no sense, some amount of non-locality in a discrete model of the universe seems inevitable.
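The short-path point above is easy to demonstrate with a toy computation. The sketch below (my own illustration, with made-up names like `ring_lattice` and `avg_path_length`, and no claim to physical realism) builds a smooth-looking ring lattice, then adds a handful of random long-range edges and measures how the mean graph distance collapses:

```python
from collections import deque
import random

def avg_path_length(adj):
    """Mean shortest-path distance over all node pairs, via BFS from each node."""
    n = len(adj)
    total, pairs = 0, 0
    for src in range(n):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring_lattice(n, k=2):
    """Ring of n nodes, each linked to its k nearest neighbours on each side."""
    adj = [set() for _ in range(n)]
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    return adj

rng = random.Random(0)          # deterministic for reproducibility
net = ring_lattice(200)
before = avg_path_length(net)
for _ in range(10):             # sprinkle in a few non-local shortcut edges
    u, v = rng.sample(range(200), 2)
    net[u].add(v)
    net[v].add(u)
after = avg_path_length(net)
print(f"mean path length: {before:.1f} -> {after:.1f}")
```

A mere ten shortcuts among two hundred nodes drops the average distance sharply. In other words, "mostly local, with a sprinkling of non-local connections" is not an exotic structure for a network; it is what you get unless you work hard to exclude it.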

This does not mean, of course, that it is a *requirement* for resolving the EPR paradox. Far from it. There may be many different ways to skin that particular half-dead cat. It’s just that non-locality is a property of a network-based universe that we can be confident exists, and we can also build simple models to show that it does the job.

Of course, you may have a different take on the fine-scale structure of space. If you think that networks aren't the way to go, and have another candidate representation, I'd love to hear about it. I like networks because they're isotropic in bulk, flexible, and minimally complex. I struggle to think of a more natural representation. However, that doesn't mean that one doesn't exist.