Instead of generating such eye-catching hype, science should do some soul-searching: {What has gone wrong?}

A) What went wrong?

The obviously WRONG conclusion is built on two speculations.

One, matter (especially the proton) and antimatter (the antiproton) were created in equal amounts at the Big Bang.

Two, the FACT that THIS universe is today dominated by matter is explained by claiming that the antimatter has almost ALL been annihilated.

These two speculations lead to a new speculated conclusion: there must be a process which annihilates antimatter while preserving matter.

This speculated conclusion then leads to a fourth speculation: there must be some difference between matter and antimatter beyond their defining property of opposite electric charge.

Yet, the recent data shows that there is virtually NO difference between the two.

B) Righting the wrong

Instead of making an eye-catching joke, science must conclude that at least one of the two original speculations is wrong.

In G-theory, matter and antimatter are not mirror counterparts but are woven together by one string and one anti-string. That is, antimatter is a necessary partner that co-exists with matter simultaneously, and there was no antimatter-annihilation massacre right after the Big Bang.

One, as antimatter is a co-existing partner of matter, the dark-mass calculation must account for the antimatter together with the matter in the equation, and that calculation fits the Planck data perfectly.

Two, there are zillions of antimatter particles (antiquarks) inside every proton or neutron; antimatter has not disappeared from this matter-dominated universe.

C) Additional issues

Yet, the two facts above cannot escape the fact that matter (such as the proton, neutron, and electron) is after all DIFFERENT from its anti-partners (the antiproton, antineutron, and positron). That is, why is THIS universe dominated by matter, not antimatter?

This last question was addressed in G-theory long ago in terms of “Cyclic multiverse”.

On the other hand, Gerard ‘t Hooft (a physics Nobel laureate) published a book {The Cellular Automaton Interpretation of Quantum Mechanics (Springer, 2016)} and followed up with a new article {Free Will in the Theory of Everything (September 2017)} to propose a completely new FRAMEWORK for QM.

A) The ‘t Hooft/Maudlin debate

However, ‘t Hooft’s new QM has been fiercely attacked by many, such as Tim Maudlin. The center of the battlefield is still the EPR argument, especially its derivative, Bell’s theorem.

Bell’s theorem: {No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics}; rules out local hidden variables as a viable explanation of quantum mechanics (though it still leaves the door open for non-local hidden variables).
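
The quantitative content of the theorem can be illustrated with the CHSH form of Bell's inequality. The sketch below is an illustration added for concreteness; it assumes only the textbook singlet-state correlation E(a, b) = −cos(a − b), while any local hidden-variable theory must satisfy |S| ≤ 2:

```python
import math

# Quantum correlation for the spin-singlet state at analyzer angles a and b.
def E(a, b):
    return -math.cos(a - b)

# CHSH combination S = E(a,b) - E(a,b2) + E(a2,b) + E(a2,b2).
# Local hidden-variable theories require |S| <= 2.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(abs(S))  # 2*sqrt(2) ≈ 2.828, exceeding the classical bound of 2
```

The quantum prediction reaches 2√2 (the Tsirelson bound), which is what the Aspect-type experiments test.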

By the general consensus, Bell’s theorem has now been verified by the Alain Aspect (1981) and Hensen (2015) experiments.

However, even John Stewart Bell admitted that Bell’s theorem can be invalidated under the condition of superdeterminism.

Superdeterminism: the apparent freedom of choice of an agent (Alice or Bob) is in fact the reenactment of a predetermined screenplay; that is, there is no true free will. Thus, Bell’s theorem depends on the assumption of “free will”, which does not apply to deterministic theories.

Now, the battle line is very clear:

For Maudlin:

One, Bell’s theorem has been verified.

Two, the automata are 1) following deterministic rules and 2) reacting at any time to only local inputs. That is, cellular automata lying on a grid are updated according to laws that involve only nearest neighbors, nothing else, so that deserves to be called “local”.

Three, so I hope we agree that neither the local indeterministic automata nor the local deterministic automata of this sort could be used in an empirically acceptable theory, even though producing the right empirical results is logically possible in each case.

Four (conclusion): cellular automaton QM is totally wrong.
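
The “local” update rule at issue in Maudlin's points can be made concrete with an elementary one-dimensional cellular automaton. The sketch below is an illustration (Rule 110 is chosen arbitrarily): each cell's next state is computed from nothing but its nearest neighbors, and the same input always produces the same output.

```python
RULE = 110  # any elementary CA rule number would do; 110 is illustrative

def step(cells):
    """One synchronous update of a 1-D cellular automaton (periodic ends)."""
    n = len(cells)
    out = []
    for i in range(n):
        # Strictly local: only cells i-1, i, i+1 are consulted.
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((RULE >> neighborhood) & 1)
    return out

row = [0] * 15
row[7] = 1                     # a single live cell in the middle
start = list(row)
for _ in range(5):
    row = step(row)

# Deterministic: rerunning from the same start reproduces the same history.
assert step(start) == step(start)
```

This is exactly the kind of dynamics Maudlin grants is deterministic and local, and which he argues cannot reproduce the quantum predictions.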

For ‘t Hooft:

One, my findings are very different from Bell’s. The core ingredient of my views is the existence of mappings of the states of a local, deterministic system onto orthonormal sets of basis elements of Hilbert space. QFT is a local indeterministic theory that obviously predicts violations of Bell’s inequality, and it was described by Bell himself as “not just inanimate nature running on behind-the-scenes clockwork, but with our behaviour, including our belief that we are free to choose to do one experiment rather than another, absolutely predetermined”.

Two, ‘t Hooft’s CA is a *quantum* cellular automaton: “the local indeterministic automata should produce behavior that is indistinguishable from local deterministic automata that are all running different deterministic pseudo-random number generators; that is, there exists an automaton-like theory with quantum evolution laws, mimicking the Standard Model at large distances, that yields the same predictions as a deterministic automaton.”
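
The pseudo-random-generator point can be sketched in a few lines. This is a toy illustration (the seed and counts are arbitrary): a seeded generator is fully deterministic, identical on every run, yet its output passes a simple frequency test just as fair coin flips would.

```python
import random

# A "deterministic automaton": a seeded pseudo-random generator.
def deterministic_flips(seed, n):
    rng = random.Random(seed)          # fixed seed => fixed history
    return [rng.randint(0, 1) for _ in range(n)]

run1 = deterministic_flips(42, 10_000)
run2 = deterministic_flips(42, 10_000)
assert run1 == run2                    # fully deterministic, reproducible

# Yet the output looks like fair coin flips to a simple frequency test.
mean = sum(run1) / len(run1)
print(round(mean, 3))                  # close to 0.5
```

Determinism at the level of the rules is thus compatible with apparent randomness at the level of the outputs, which is the heart of 't Hooft's reply.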

With the superdeterminism loophole remaining open, the above exchange is like a chicken talking to a duck: each singing its own song, without any meaningful conversation.

B) The verdict

So, ‘t Hooft concluded: {I still feel the burden of producing more precise models, ones that generate more precisely systems of particles resembling the SM. As long as that hasn’t been completed, you can continue shouting at me.}

In Prequark Chromodynamics, both the proton and the neutron have cellular automaton descriptions (as gliders in Conway’s Game of Life, the basis for a Turing-complete computer); see http://www.prequark.org/Biolife.htm . And this is now widely known via Twitter.
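
The glider's behavior itself is easy to verify. The sketch below is a generic Game of Life stepper (an illustration, not the code from prequark.org); it confirms the glider's well-known period-4 motion, reappearing shifted one cell diagonally.

```python
from collections import Counter

# Conway's Game of Life on an unbounded plane: the state is just the
# set of live-cell coordinates, so no fixed grid boundary is needed.
def life_step(live):
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The standard 5-cell glider.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)

# Period 4: the same shape reappears, shifted one cell diagonally.
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

The glider is the classic example of a persistent, moving structure emerging from purely local, deterministic rules.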

With Prequark Chromodynamics, the ‘t Hooft/Maudlin debate can now be settled. But I do not agree with the view that superdeterminism plays a major role in QM. Thus, I will revisit this ‘Bell’s theorem’ issue.

In addition to the superdeterminism loophole, there are two issues with the experimental verification of the theorem.

One, there are loopholes in the experiments, and some of them are intrinsic, spawning new loopholes ad infinitum.

Two, all experiments are theory-based (biased). That is, experimental verification will not guarantee that the intended theory is CORRECT. The two best examples are GR (general relativity) and the SM (standard model of particles). GR has passed ALL experimental tests which we humans can throw at it, but it is now known to be an ‘effective theory’ at best, if not all the way wrong (as a gravity theory). The SM has also passed all the tests which we humans can throw at it, but no one in the whole world believes that it is a complete theory.

On the other hand, a theorem (not a law) could be disproved logically or linguistically.

Bell’s theorem: {No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics}; rules out local hidden variables as a viable explanation of quantum mechanics (though it still leaves the door open for non-local hidden variables).

Is this theorem logically or linguistically sound?

It consists of only two linguistic (logic) terms: {local hidden variables theory} and {quantum mechanics}.

“Local hidden variables” = “local realism”

Locality: means that reality in one location is not influenced by measurements performed simultaneously at a distant location; that is, no instantaneous (“spooky”) action at a distance.

Realism: means that the moon is there even when not being observed; that is, microscopic objects have real properties determining the outcomes of quantum mechanical measurements.

Yet, violation of Bell’s inequality implies that at least one of the two assumptions (locality or realism) must be false.

Freedom refers to the physical possibility of determining settings on measurement devices independently of the internal state of the physical system being measured.

Non-locality: the signal involved must propagate instantaneously (or superluminally), so that such a theory could not be Lorentz invariant.

If we can show that QM is totally local and real, then Bell’s theorem is invalid or simply moot.

QM differs from a local/real theory in only two major attributes: quantum uncertainty and superposition (Schrödinger’s cat).

One, quantum uncertainty: means that two noncommuting observables (such as position/momentum or time/energy) can never have completely well-defined values simultaneously, and this uncertainty is intrinsic, irremovable by the improvement of the measurements.

Two, superposition: the fate of Schrödinger’s cat.
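
The intrinsic minimum in the uncertainty relation can be checked numerically. The sketch below is a toy illustration (hbar = 1 and the grid parameters are arbitrary choices): it evaluates Δx·Δp for a Gaussian wavepacket, which saturates the Heisenberg bound hbar/2.

```python
import math

hbar, sigma = 1.0, 1.0
N, L = 4000, 20.0                       # grid of N+1 points on [-L, L]
dx = 2 * L / N
xs = [-L + i * dx for i in range(N + 1)]

# Gaussian wavepacket, normalized on the grid.
psi = [math.exp(-x * x / (4 * sigma ** 2)) for x in xs]
norm = math.sqrt(sum(p * p for p in psi) * dx)
psi = [p / norm for p in psi]

# Position spread: <x> = 0 by symmetry, so (delta_x)^2 = <x^2>.
delta_x = math.sqrt(sum(x * x * p * p for x, p in zip(xs, psi)) * dx)

# Momentum spread via <p^2> = hbar^2 * integral of |psi'(x)|^2
# (central finite differences; <p> = 0 for a real wavefunction).
dpsi = [(psi[i + 1] - psi[i - 1]) / (2 * dx) for i in range(1, N)]
delta_p = hbar * math.sqrt(sum(d * d for d in dpsi) * dx)

print(delta_x * delta_p)                # ≈ 0.5, i.e. hbar/2
```

No refinement of the grid pushes the product below hbar/2; the bound is a property of the state, not of the measurement resolution, which is what "intrinsic" means above.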

In G-theory, these two mysterious QM wonders are totally deterministic.

First, QM is emergent, not fundamental. The QM uncertainty equation is the result of dark energy (the expansion of the universe).

In fact, all the Alain Aspect-type experiments show only that quantum particles have a special attribute, entanglement, and entanglement is 100% deterministic. There is no superluminal signal between the entangled particles, as their states are superdetermined.

However, the superdeterministic feature of entanglement does not imply that the entire QM is superdeterministic. QM is completely deterministic (local and real) for three reasons.

One, the QM uncertainty is only the apparent effect of the expansion of the universe.

Two, the superposition is erased by the deterministic attractor.

Three, the entanglement is superdetermined.

Now, Bell’s theorem can be mooted for three reasons.

One, there is a loophole (superdeterminism).

Two, all the experimental tests which support Bell’s theorem cannot and will not guarantee its validity (the same fate as GR and the SM).

Three, G-theory shows that 1) the proton and neutron are gliders (cellular automata), 2) the expansion of the universe is 100% deterministic while QM uncertainty is emergent from it, and 3) superposition is erased by the deterministic attractor.

C) Clarifying the differences

I do agree with ‘t Hooft’s cellular automaton QM in principle, as G-theory (with the proton/neutron as gliders) was developed 30 years before ‘t Hooft’s book (Springer, 2016). However, I do not agree with him that ‘superdeterminism’ plays a MAJOR role to the point of completely excluding ‘free will’.

Here, Mickey Mouse is an undefined term, understood in a sociological sense. However, it has at least two attributes.

One, Mickey Mouse has no biological correspondence to the ‘word’ mouse. That is, it is not real as a biological mouse.

Two, Mickey Mouse is observable as it is.

So, anything which encompasses the two attributes above will be a Mickey Mouse-like entity.

Example: if the rhinoceros (or Saola, Narwhal, Unicornfish, Texas unicorn mantis, Okapi, Goblin spiders, Helmeted curassows, Unicorn shrimp, Arabian oryx, etc.) is clearly defined as not being a Unicorn, then the Unicorn has no biological base, similar to Mickey Mouse, and it is a Mickey Mouse-like entity.

Yet, the Unicorn is of course REAL in accordance with the “Mickey Mouse principle”, as it is observable in many places in the arts (paintings, sculptures, animations, etc.).

‘Free will’ is the backbone of the legal system (a subsystem of nature). Without it, the entire legal system collapses. So, ‘free will’ is at least a Mickey Mouse-like entity, and thus no law can exclude it.

By the same token, ‘superdeterminism’ cannot be excluded, as it is the backbone of entanglement.

Of course, we cannot exclude Bell’s theorem, although it is totally useless in the REAL world.