The Multiverse was invented after the discovery of the String Landscape, which has zillions of different string vacua. Thus, by definition, each bubble of the multiverse has a different vacuum and thus different natural constants and physical laws.

There are two immediate consequences from this definition:

One, THIS universe of ours is just a happenstance, one out of the zillions of randomly floating bubbles. That is, the natural constants of THIS universe are not a consequence of any law or logic but a happenstance, and thus they cannot be DERIVED.

Two, those bubbles are disconnected by DEFINITION. The chance for any one of them to collide is very small. {Note: no evidence of bubble collisions has been detected.} And this leads to the conclusion that the Multiverse is not falsifiable, which starts the debate on Popperianism.

In fact, the Multiverse can be easily killed by showing two points.

One, all natural constants of THIS universe can be derived THEORETICALLY.

Two, the DERIVATIONS of these natural constants are bubble-independent.

On January 17, 2018 (almost five years after my article), Sean Carroll (theoretical physicist at Caltech) wrote an article, “Beyond Falsifiability”, saying: {Cosmological models that invoke a multiverse – a collection of unobservable regions of space where conditions are very different from the region around us – are controversial, on the grounds that unobservable phenomena shouldn’t play a crucial role in legitimate scientific theories. I argue that the way we evaluate multiverse models is precisely the same as the way we evaluate any other models, on the basis of abduction, Bayesian inference, and empirical success. There is no scientifically respectable way to do cosmology without taking into account different possibilities for what the universe might be like outside our horizon. Multiverse theories are utterly conventionally scientific, even if evaluating them can be difficult in practice.

…

cosmological multiverse, …. It’s not the best name, as the idea is that there is only one “universe,” in the sense of a connected region of space, but of course, in an expanding universe, there will be a horizon past which it is impossible to see. If conditions in far-away unobservable regions are very different from conditions nearby, we call the collection of all such regions “the multiverse.”}

It is very nice to see that the diehard Multiverse devotee (Sean Carroll) has finally come around.

First, Carroll’s multiverse is no longer disconnected bubbles but is merely a bad name for the regions which hide beyond the horizon of THIS universe.

Second, the stupidity of Popperianism has finally been challenged. Truth by definition cannot be falsified.

Massimo Pigliucci (the K.D. Irani Professor of Philosophy at the City College of New York) still tried to defend Popper, and he wrote a comment (on January 22, 2018) on Carroll’s article; he wrote, {Setting aside that falsificationism is not a scientific theory, but rather a notion in philosophy of science (after all, how would you falsify Popper’s account?), Sean admits that he hasn’t gone over the nuances of what Popper actually wrote. That’s unfortunate, because Popper was a bit more of a sophisticated philosopher than he is usually given credit for. Even though his ideas are no longer current in philosophy of science (you know, philosophy does make progress!), if one invokes him to dismiss a scientific theory (as Ellis and Silk do), or, conversely, rejects his insight in order to deflect criticism against one’s favorite theory (as Sean does), it would be good to take a look at what the men actually wrote.

Without going into too much detail (for an in-depth discussion and pertinent quotes see my Aeon article mentioned above), Popper realized that falsification is not a sharp blade capable of neatly cutting off science from non-science. He was also aware of, and discussed at length, the fact that legitimate scientific theories do include ad hoc explanations that are used by scientists as place holders until (and if) they figure out what is wrong with the theory they are working on. Nobody has ever rejected a scientific theory because all its statements were not immediately falsifiable, nor did Popper suggest such a crude practice in the first place.}

It is nice to know that Popperianism is now a dinosaur in the Philosophy of Science, but the stupidity of Popperianism has done science great harm and is still misleading zillions of people. There are two simple points on this Popperianism.

One, Popperianism is conceptually wrong, although it might have some practical use, as all the crap theories can easily be falsified. But truth by definition cannot be falsified, while all truths can be verified.

Two, the damning concept of Popperianism about science was widely accepted by Western society while Popper did not come out to denounce it; so, all his other good or great ideas on science cannot be an excuse for the wrongness of that naive Popperianism, which is not only wrong but totally stupid.

The tide of eradicating the two tumors of physics {the Parallel-Multiverse and naive Popperianism} is now pushed by many mainstream physicists, such as Sabine Hossenfelder.

I must praise and compliment Witten’s courage in finally giving the death sentence to M-string theory, as it is, after all, his hallmark.

Of course, Wolchover’s article itself is heavily camouflaged to uphold Witten’s dignity, with many side-attractors. However, the following direct quotes of Witten’s statements from the article reveal his true intention clearly.

A) The direct quotes

{Now, Nati Seiberg [a theoretical physicist who works down the hall] would possibly tell you that he has faith that there’s a better formulation of quantum field theory that we don’t know about that would make everything clearer. I’m not sure how much you should expect that to exist. That would be a dream, but it might be too much to hope for; I really don’t know.

…

Physics in quantum field theory and string theory somehow has a lot of mathematical secrets in it, which we don’t know how to extract in a systematic way.

…

I could point to theories where the standard approach really seems inadequate, so at least for those classes of quantum field theories, you could hope for a new formulation. But I really can’t imagine what it would be.

…

I think our understanding of what it (M-theory) is, though, is still very hazy. AdS/CFT and whatever’s come from it is the main new perspective compared to 22 years ago, but I think it’s perfectly possible that AdS/CFT is only one side of a multifaceted story. There might be other equally important facets.

…

Maybe a bulk description of the quantum properties of space-time itself, rather than a holographic boundary description. There hasn’t been much progress in a long time in getting a better bulk description. And I think that might be because the answer is of a different kind than anything we’re used to. That would be my guess.

…

I guess I suspect that there’s an extra layer of abstractness compared to what we’re used to. … But I can’t say anything useful.}

The above statements clearly show four points.

One, QFT is a failed program for describing nature.

Two, {M-string theory + AdS/CFT + hologram} fails to describe nature.

Three, he suspects that there is an extra layer of abstractness beyond the two above.

Four, he simply does not know what that extra layer of abstractness could be.

These four points pronounce a death sentence not only on M-string theory but also on all the other theoretical cornerstones (QFT, AdS/CFT, and the hologram) of the mainstream paradigm of the past 50 years.

B) A brief history

Is his finally accepting total defeat the result of the no-show of SUSY at the LHC?

For many SUSY devotees, the no-show of SUSY at the LHC is just a great reason for building a bigger collider, such as the proposed 100 TeV Chinese Super Collider, which has been pushed by the entire West; the most notable prominent physicists include David Gross, Witten, Steven Weinberg, Sheldon Glashow, Hawking, and countless others (Nima Arkani-Hamed, Tommaso Dorigo, etc.).

[Figure 1 and Figure 1a]

However, my protégé Dr. Li Xiaojian (Professor at North China University of Technology, Beijing, China) talked to David Gross at Strings 2016 about G-theory.

Finally, on October 17, 2017, Steven Weinberg gave a video presentation for the ‘Int’l Centre for Theoretical Physics’ and revealed that both Witten and Nima have given up on M-string theory (see https://www.youtube.com/watch?v=mX2R8-nJhLQ at the one-hour-32-minute mark).

I, however, must congratulate Witten for his courage in finally admitting that M-string theory was a total failure. Only a hero has this kind of courage.

Note (added on December 4, 2017): on December 1, 2017, Scientific Controversies (Sci Con; a series of conversations between scientists hosted by PW Director of Sciences Janna Levin) held a public discussion with the title {Scientific Controversies: String Theory}, featuring two prominent physicists {David Gross (Nobel Laureate in Physics) and Clifford Johnson}.

Levin began the discussion by asking the two of them where they stood on string theory: pro, con, or agnostic? This flustered Gross a bit (he’s one of the world’s best-known and most vigorous proponents of string theory), and Levin somehow took this as meaning that he was agnostic. Finally, Gross clarified things by saying something like “I’ve been married to string theory for 50 years, not going to leave her now”.

[Figure 10]

Obviously, however wrong the M-string theory is, Gross cannot abandon her after a 50-year marriage. Although he lacks the courage of Witten, Gross’s loyalty for LOVE must also be praised.

One, Higgs naturalness has now failed, even if SUSY existed at the GUT scale.

Two, the Majorana neutrino is almost completely ruled out.

First, a very strong hint shows that the neutrino is different from its antiparticle.

Second, the observation of ‘Big Bang Nucleosynthesis’ very much rules out the Majorana neutrino.

[Figure 26]

D) The lingering hallucinations are cured

Two physics hallucinations happened at about the same time, and they converge on the same delusive wonderland, the Multiverse.

The M-string theory produces zillions of ‘string vacua’, which lead to the multiverse.

The ‘inflation scenario’ (without a guiding principle for the initial conditions) leads to ‘eternal inflation’, which in turn leads to the multiverse.

The wonder drug for these hallucinations is to show that the Multiverse is a delusion, with two points.

One, the soul of the multiverse is that the structure constants of THIS universe are just happenstances (the result of a Boltzmann brain). That is, even nature (or God) does not know how to calculate the structure constants of this universe. So, by showing the ways of calculating them, that hallucination is cured.

Two, by showing that those calculations are not bubble-dependent, it further bursts the delusion bubble.

The SM (Standard Model of particle physics) has passed every test that we can throw at it, but no one believes that the SM is the correct final theory.

On the other hand, everyone still sees GR (General Relativity) as the Gospel of gravity, especially after the LIGO announcement on October 16, 2017.

Indeed, GR has also passed all the tests that we can throw at it. Indeed, LIGO could be a great tool for viewing the Cosmos in a different way. But these will not change the fundamental FACT that GR is a totally wrong description of gravity.

The most important damning FACT on GR is that GR plays no role at ALL in the Heavenly Father’s description (HFD) of THIS universe.

In HFD, this universe is ruled by a Structure-Function which consists of {G (energy, dark energy) + G (mass; dark and visible)}.

The G (energy) leads to the acceleration of the expansion of this universe. But, most importantly, it also leads to ‘quantum-ness’.

The G (mass), of course, leads to Newtonian gravity, while GR is just an attribute of this G (mass).

Nature has been moving nicely minute by minute for the past 14 billion years and is playing its predetermined dance toward its predetermined destiny with grace and joy.

On the contrary, human mainstream physics is now in a hellfire nightmare after the discovery of a new boson in 2012. Did it suddenly fall into this hellfire nightmare unexpectedly? Or had many hellfire demons already plagued mainstream physics since its beginning 100 years ago? Logically, the latter must be the case. That is, the cause of today’s nightmare can be traced through its history.

The brief history

One, in 1925–1927, the Copenhagen doctrine DECLARED that ‘quantum uncertainty’ is an intrinsic attribute of nature which cannot, in principle, be removed by any improvement of measurement, and this led to the ‘measurement mystery’.

Soon, Schrödinger came up with the Cat riddle, and it CREATED the ‘superposition mystery’, the omnipresence of the ‘Quantum God’.

Two, in early 1954, a general gauge symmetry theory was developed by Chen Ning Yang and Robert Mills. Then, in the first part of the 1960s, Murray Gell-Mann discovered the “Eightfold Way representation” from the experimental data. Yang-Mills theory is a mathematically beautiful tool for describing some symmetries, while the ‘Eightfold Way’ obviously encompasses some beautiful symmetry. However, the Yang–Mills field must be massless in order to maintain gauge invariance (a one-line illustration of why follows this list).

Three, in order for the Yang-Mills gauge to make contact with the real world (the Eightfold Way), it must be spontaneously broken. In 1964, Higgs et al. came up with a ‘tar-lake-like field’ (the Higgs mechanism) to break the SU gauge symmetry spontaneously.

Four, in 1967, Steven Weinberg and others combined an SU(2) gauge (a special Yang-Mills gauge) and the Higgs mechanism to construct the EWT (Electroweak Theory). And this EWT works beautifully for a two-quark model (with up and down quarks).

Five, in the November Revolution of 1974, Samuel Ting discovered the charm quark via the J/ψ meson; the original two-quark model was thus expanded to a four-quark model.

Six, in 1973, Maskawa and Kobayashi introduced the “CP Violation in the Renormalizable Theory of Weak Interaction”. Together with the idea of the Cabibbo angle (θc), introduced by Nicola Cabibbo in 1963, the ‘Cabibbo–Kobayashi–Maskawa (CKM) matrix’ was constructed. As this CKM matrix demands AT LEAST ‘3 generations of quarks’ (a parameter-counting note follows this list), a six-quark model was constructed: the SM (Standard Model). The SM further predicts the charged weak currents (the Ws) and the neutral current (the Z). The tau (τ) lepton was discovered in 1975.

Seven, in 1983, the Ws were discovered, and the Z soon after. Then, the top quark was finally discovered in 1995.
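As a footnote to point Two above, here is the standard textbook reason the Yang–Mills field must be massless; this is conventional gauge theory, sketched for definiteness, not something specific to this article:

```latex
% Yang-Mills Lagrangian and field strength:
\mathcal{L}_{\mathrm{YM}} = -\tfrac{1}{4}\, F^{a}_{\mu\nu} F^{a\,\mu\nu},
\qquad
F^{a}_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu
               + g\, f^{abc} A^b_\mu A^c_\nu .
% A mass term (1/2) m^2 A^a_\mu A^{a\,\mu} is NOT invariant under the
% gauge transformation A -> A + D(alpha), so gauge invariance forces
% m = 0 -- hence the need for spontaneous breaking in point Three.
```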
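And as a footnote to point Six, the standard parameter counting behind “the CKM matrix demands AT LEAST 3 generations” (the Kobayashi–Maskawa argument, again quoted here for definiteness):

```latex
% An N x N unitary quark-mixing matrix has N^2 real parameters;
% 2N - 1 of them are absorbed by rephasing the quark fields, leaving
\text{mixing angles} = \frac{N(N-1)}{2},
\qquad
\text{physical phases} = \frac{(N-1)(N-2)}{2}.
% N = 2: 1 angle (the Cabibbo angle), 0 phases -> no CP violation.
% N = 3: 3 angles and 1 physical phase -> CP violation becomes possible.
```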

At this point, the SM was basically confirmed. However, the Higgs mechanism also predicted a field boson. As the Higgs mechanism is the KEY cornerstone of the SM, it (the SM) will not be complete if the Higgs field boson is not discovered.

The brief history of BSMs

With the great success of the SM, a few BSMs (Beyond-Standard-Model theories) quickly emerged.

One, the GUT (Grand Unified Theory), with a higher symmetry {SU(5) ⊃ SU(3) × SU(2) × U(1), at about the 10^16 GeV energy scale}. This work was mainly done by Georgi and Glashow in 1974. The key prediction of GUT is proton decay. From the early 1980s, a major effort was launched to detect proton decay. But the proton-decay half-life is now firmly set at over 10^33 years, much longer than the age of this universe. To date, all attempts to observe new phenomena predicted by GUTs (like proton decay or the existence of magnetic monopoles) have failed. With these results, Glashow basically went into hibernation, hoping that the ‘sterile neutrino’ would come to his rescue.

Two, the Preon model (by Abdus Salam), which was expanded into the Rishon model (mainly by Haim Harari). It has sub-quarks (T, V): {T (Tohu, which means “unformed” in the Hebrew of Genesis) and V (Vohu, which means “void” in the Hebrew of Genesis)}.

Rishons (T or V) carry hypercolor to reproduce the quark colors, but this setup renders the model non-renormalizable. So, it was almost abandoned on day one.

Three, M-string theory began as a bosonic string theory. In order to produce fermions, it had to incorporate the idea of SUSY. That is, M-string theory and SUSY must be dicephalic parapagus twins.

In the 1960s–1970s, Vera Rubin and Kent Ford confirmed the existence of dark mass (not dark matter). SUSY was claimed as the best candidate to provide this dark mass. Thus, M-string theory has dominated BSM physics for the past 40 years.

The awakening of the demons

In 2012, a Higgs-boson-like particle was discovered, with a measured mass of 125.26 GeV, which is trillions upon trillions of times smaller than the expected value.
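For a sense of the scale gap, here is a back-of-envelope sketch, assuming (as in the standard naturalness argument) that the “expected value” is of order the Planck scale; the numbers are illustrative only:

```python
# Rough size of the naturalness gap for the 2012 boson.
m_higgs_gev = 125.26    # measured mass of the 2012 boson (GeV)
m_planck_gev = 1.22e19  # Planck scale (GeV), the naive cutoff

ratio = m_planck_gev / m_higgs_gev          # ~1e17
tuning = (m_higgs_gev / m_planck_gev) ** 2  # required m^2 cancellation

print(f"Planck/Higgs mass ratio : {ratio:.1e}")
print(f"implied m^2 fine-tuning : {tuning:.1e}")  # ~1e-34
```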

The only way out of this predicament is to have a hidden massive partner cancel (balance) out the huge mass corrections. This massive partner could be a SUSY particle or a twin-Higgs. By March 2017, neither a twin-Higgs nor any SUSY particle had been discovered below the 2 TeV range. Even if SUSY existed at a higher energy scale, it (SUSY) would no longer be a solution for this Higgs-naturalness issue.

Furthermore, the b/b-bar channel should account for over 60% of the Higgs boson’s decays. But as of now (November 2017), this channel is still not confirmed. The best number was 4.5 sigma, from a report a year ago, which is not enough for a confirmation. Most importantly, even if the channel were confirmed, it cannot meet this 60% mark.

Thus, many physicists are now open to the possibility that this 2012 boson might not be the Higgs boson per se.

Yet, this Higgs demon does not stop its dance with the above issues.

The neutrino’s mass by definition cannot be accounted for by the Higgs mechanism (a tar-lake-like field that slows down a massless particle so that it gains an apparent mass), as neutrinos do not slow down in the Higgs field at all. Thus, neutrinos must be Majorana fermions.

Yet, the Majorana angel has never been observed.

One, by definition, a Majorana particle must be its own antiparticle. But much data now shows that the neutrino is different from its antiparticle.

Two, a Majorana neutrino should induce ‘neutrinoless double beta decay’, but the half-life for that decay is now set at over 10^25 years, much longer than the age of this universe.

Three, by definition again, a Majorana particle’s mass must come from the ‘seesaw’ mechanism, that is, be balanced by a massive partner, such as a sterile neutrino or something else (SUSY or whatnot). But the ‘sterile neutrino’ is now almost completely ruled out by much data (IceCube, etc.).

Four, the most recent analysis of ‘Big Bang Nucleosynthesis’ fits well if the neutrino is a Dirac fermion (without a massive partner). If the neutrino is viewed as a Majorana particle (with a hidden massive partner), ‘Big Bang Nucleosynthesis’ can no longer fit the observational data.

Without a Majorana neutrino, the Higgs mechanism is DEAD. With a dead Higgs mechanism, the SM is fundamentally wrong as the correct model, although it remains an effective theory.

This Higgs demon is now killing the SM, pushing the mainstream physics into the hellfire dungeon.

Of course, Weinberg and many prominent physicists still hope for a rescue from one of the BSMs, especially from M-string theory. But SUSY (a major component of M-string theory) is now totally ruled out as an EFFECTIVE rescue. And many of the most prominent string theorists are now abandoning M-string theory; see Steven Weinberg’s video presentation for the ‘Int’l Centre for Theoretical Physics’ on October 17, 2017, at the one-hour-32-minute mark. The video is available at https://www.youtube.com/watch?v=mX2R8-nJhLQ . A brief quote of his remarks is available at http://www.math.columbia.edu/~woit/wordpress/?p=9657 .

The rescuing angels

While theoretical physics falls into the hellfire dungeon step by step, the experimental physics angels are descending on Earth with sincerity and kindness.

One, dark mass (not dark matter) was firmly confirmed by the 1970s.

Two, acceleration of the expansion of the universe was discovered in 1997.

Three, a good estimate of the CC (Cosmological Constant), ~3×10^−122 (in Planck units), was reached in the 2000s.

Two, INVENTING almost unlimited ghost universes by using the dominant cosmological theory, ‘inflation cosmology’.

“Inflation” was a piece of reverse-engineering work for resolving some cosmological observations, such as the flatness, horizon, and homogeneity facts. As reverse-engineering, it (inflation) of course fits almost all the old data and many NEW observations. But almost all reverse-engineering is constrained only by the THEN-observed data, without any ‘guiding principle’.

That is, the ‘initial condition’ of ‘inflation’ cannot be specified or determined. This guidance-less fact allows unlimited ‘inflation models’ to be invented. Of course, it leads to ‘eternal inflation’, having unlimited bubble universes.

At the same time, M-string theory also reached its final destination, the ‘String Landscape’, also having unlimited string vacua, again yielding unlimited bubble universes (the Multiverse). That is,

“Eternal inflation” = ‘string landscape’ = multiverse

Now, there is a CONVERGENCE coming from two independent pathways, and this could be a great justification for its validity.

With the superweapon of the Multiverse, ‘the Higgs-mechanism-led army’ is no longer besieged by the angel of facts. Those facts (natural constants, etc.) of this universe are just random happenstances, and even Nature does not know how to calculate them.

Furthermore, the G-theory universe is all about ‘computation’; that is, there must be a computing device in the laws of physics. And, of course, there is. In G-theory, both the proton and the neutron form the base of a Turing computer, see http://www.prequark.org/Biolife.htm .
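For readers unfamiliar with the glider, here is a minimal sketch of the standard Conway's Game of Life rules showing the glider translating itself; the mapping from this pattern to the proton/neutron is G-theory's claim (see the linked page), not demonstrated here:

```python
from collections import Counter

def step(cells):
    """One Game-of-Life step on a set of live (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):  # a glider repeats every 4 steps, shifted by (1, 1)
    g = step(g)
print({(x - 1, y - 1) for (x, y) in g} == glider)  # True
```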

In addition to ‘computation’, THIS universe (not some other-verse) is all about energy and mass. So, the Structure-Function of THIS universe can be defined as:

S (universe) = S (energy, mass)

= S (dark energy, dark mass, visible relativistic mass/energy)

As both Newtonian gravity and GR are related to the structure of this universe, gravity can be defined by the S-function, as:

Gravity = G (S) = G (dark energy, dark mass, visible mass)

= G (dark energy) @ G (mass)

For G (mass), there is only one parameter, mass. This FACT shows that every ‘mass’ must interact with ALL other masses in THIS universe. That is, the Simultaneity Function can be defined by G (mass), that is,

G (mass) = Si (mass); G (mass) is a simultaneity function.

This Si function can be renormalized only if the gravitational interaction transmits instantaneously. In fact, if the gravity of the Sun reached Earth at light speed, it would not fit the reality. The Sun/Earth gravitational interaction is precisely described by the Newtonian gravity law, which encompasses instantaneity.
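The classic estimate behind this claim (Laplace's aberration argument) can be sketched in a few lines; this is only the order-of-magnitude version, assuming a naive Newtonian force aimed at the Sun's light-delayed position:

```python
# If gravity aimed at the retarded (light-delayed) position of the Sun,
# the force on Earth would pick up a tangential component of order v/c.
v_earth = 29.78e3   # Earth's mean orbital speed (m/s)
c = 2.998e8         # speed of light (m/s)

aberration = v_earth / c  # angle between true and retarded directions
print(f"aberration angle ~ {aberration:.1e} rad")  # ~1e-4 rad
# A persistent tangential force of this relative size would measurably
# change Earth's orbit over thousands of orbits, contrary to observation.
```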

So, for Sun/Earth gravity at least (if not for other cases), G (mass) should be a function of both {simultaneity and instantaneity}. Thus, we can define:

G (Sun/Earth) = G (mass, simultaneity, instantaneity)

For Newtonian gravity, the ‘masses’ are wrapped into two points (the ‘centers of mass’), while the simultaneity and instantaneity are an innate part of the equation.

For GR, the simultaneity and instantaneity are wrapped into the ‘spacetime sheet’. When mass interacts with the GR spacetime sheet, it transmits both simultaneously and instantaneously.

This kind of wrapping makes both gravity theories automatically incomplete, effective theories at best. Newtonian gravity is now viewed as wrong in terms of Occam’s razor, and thus it does modern physics no harm. On the other hand, GR is still viewed as the Gospel of gravity, and it has become the greatest hindrance to getting a correct gravity theory.

If GR did provide us some insights before, that is long-ago past tense. The recent promotion of the greatness of the LIGO discovery will only drag us further down into the hellfire dungeon. LIGO indeed might provide some additional data to confirm what we already know, but it cannot rescue GR’s fate as total trash. The following is just a short summary of GR’s shortcomings.

Yes, GR is, of course, a very EFFECTIVE gravity theory (a great piece of reverse-engineering work) but is definitely the wrong one as the correct theory. The GR wrapping, which hides the essences of gravity (simultaneity and instantaneity), renders it unsalvageable and unamendable. That is, it is, in fact, the greatest hindrance to getting a correct gravity theory. So, GR is the other Arch-Demon of modern physics.

Here is the ArchAngel

All the calculations for those angel facts (of section D) are done in G-theory (Prequark Chromodynamics).

Superficially, the Prequark model is similar to the Preon (Rishon) model, but there are at least four major differences between them.

One, the Rishon model has sub-quarks (T, V): {T (Tohu, which means “unformed” in the Hebrew of Genesis) and V (Vohu, which means “void” in the Hebrew of Genesis)}. But Harari did not know what T is (just ‘unformed’). On the other hand, the A (Angultron) is an innate angle, a base from which to calculate the Weinberg angle and Alpha; see http://prebabel.blogspot.com/2012/04/alpha-fine-structure-constant-mystery.html .

Two, the choice of (T, V) as the bottom in the Rishon model was ad hoc, a result of reverse-engineering. On the contrary, there is a very strong theoretical reason for where the BOTTOM is in G-theory.

In G-theory, the universe is ALL about computation, computable or non-computable. For the computable, there is a TWO-code theorem. For the non-computable, there are the 4-color and 7-color theorems (a short note on these two theorems follows this list).

That is, the BOTTOM must be at two codes. Any level lower than the two-code level becomes TAUTOLOGY, just repeating itself.

Anything with more than two codes (such as 6 quarks + 6 leptons) cannot be the BOTTOM.

Three, rishons (T or V) carry hypercolor to reproduce the quark colors, but this setup renders the model non-renormalizable, quickly going into a big mess. So, it was abandoned almost on day one. On the other hand, prequarks (V or A) carry no color, and the quark colors arise from the “prequark SEATs”. In short, the Rishon model cannot work out a {neutron decay process} different from the SM process.
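As noted in point Two above, the 4-color and 7-color theorems are real results of map coloring: they are the genus-0 (plane/sphere) and genus-1 (torus) cases of the Heawood bound. The sketch below only grounds those two names; their role as markers of the non-computable is G-theory's own claim:

```python
from math import floor, sqrt

def heawood(genus: int) -> int:
    """Heawood bound: colors sufficient for maps on a genus-g surface."""
    return floor((7 + sqrt(1 + 48 * genus)) / 2)

print(heawood(0), heawood(1))  # 4 7: the 4-color and 7-color theorems
```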

In addition to being the theory that describes particles, G-theory also resolves ALL cosmological issues, of which there are only three:

One, the initial condition of THIS universe

Two, the final fate of THIS universe

Three, the BaryonGenesis mystery

BaryonGenesis determines the STRUCTURE of THIS universe, that is,

G (S) = G (dark energy, dark mass, visible mass)

= G (dark energy) @ G (mass)

So, BaryonGenesis must be a function of G (S), which is described as:

(dark energy = 69.2%; dark matter = 25.8%; and visible matter = 4.82%)

The calculation of this Planck CMB data in G-theory uses the ‘mass-LAND-charge’; that is, all 48 fermions (24 matter and 24 antimatter) carry the same mass-land-charge while their apparent masses are different. And the MASS-pie of THIS universe is evenly divided among those 48 fermions. That is, the antimatter does not in fact disappear (is not annihilated); it is merely invisible. See the calculation below. For more details, see https://tienzengong.wordpress.com/2017/10/26/science-is-not-some-eye-catching-headlines/ .
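As a simple arithmetic companion (this only checks the quoted Planck fractions and the “even division among 48 fermions” premise, not the full G-theory derivation, which is in the linked posts):

```python
# Planck-quoted composition of the universe, in percent.
dark_energy, dark_matter, visible = 69.2, 25.8, 4.82

print(f"total: {dark_energy + dark_matter + visible:.2f}%")  # ~99.8%

# If the MASS-pie is divided evenly among 48 fermions, each carries:
share = 100.0 / 48
print(f"per-fermion share: {share:.3f}% of the pie")  # ~2.083%
```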

This BaryonGenesis calculation must also link to the issues of the {initial condition and the final fate}. And indeed, it does.

BaryonGenesis, in fact, has two issues.

One, where is the antimatter in THIS universe?

Two, why is THIS universe dominated by matter while not by antimatter?

The ‘One’ was answered with the above calculation.

The ‘Two’ can only be answered by ‘Cyclic Multiverse’.

However, for THIS universe to go into a ‘big crunch’ state, the omega (Ω) must be larger than 1, while it is currently smaller than 1. That is, there must be a mechanism to move (evolve) Ω from less than 1 to bigger than 1.
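For reference, the standard bookkeeping behind this Ω criterion (the textbook Friedmann definition, quoted here for definiteness; the ‘mechanism’ itself is G-theory’s own):

```latex
% Density parameter and critical density:
\Omega \;\equiv\; \frac{\rho}{\rho_c},
\qquad
\rho_c \;=\; \frac{3 H^2}{8 \pi G}.
% In a matter-dominated universe, Omega > 1 means a closed geometry
% that recollapses (big crunch); Omega < 1 means eternal expansion.
```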

Again, only G-theory has such a mechanism, and it is not separately invented but is a part of the BaryonGenesis calculation: the ‘Dark Flow, W’.

The death of Naturalness is a precursor to the death of Higgs Mechanism.

Steven Weinberg just revealed the death of M-string theory in his October 2017 video lecture.

Paul J. Steinhardt announced the death of ‘inflation cosmology’ in 2016.

Conclusion

The current hellfire nightmare of mainstream physics did not start in 2012 but is the result of three demons {the Copenhagen doctrine, GR, and the Higgs mechanism}, which began 100 years ago. Fortunately, many angel facts (experimental data) have revealed their demon faces. Finally, the ArchAngel (G-theory) has come to the rescue. With the growing army of the ArchAngel, the salvation of human physics is now secured.

Instead of making such eye-catching hype, science should do some soul-searching: {What has gone wrong?}

A) What went wrong?

The obvious WRONG conclusion is based on two speculations.

One, matter (especially the proton) and antimatter (the antiproton) were created in equal amounts at the Big Bang.

Two, the FACT that THIS universe today is dominated by matter is explained by claiming that the antimatter has almost ALL been annihilated.

These two speculations lead to a new, speculated conclusion: there must be a process which annihilates antimatter while preserving matter.

Then, this speculated conclusion leads to a fourth speculation: there must be some difference between matter and antimatter in addition to the defining one of opposite electric charge.

Yet, recent data show that there is virtually NO difference between the two.

B) Righting the wrong

Instead of making an eye-catching joke, science must conclude that at least one of the two original speculations is wrong.

In G-theory, matter and antimatter are not mirroring counterparts but are woven together by one string and one anti-string. That is, antimatter is the necessary partner co-existing with matter simultaneously, and there was no antimatter-annihilation massacre right after the Big Bang.

One, as antimatter is a co-existing partner of matter, the dark mass calculation must account for the antimatter together with the matter in the equation, and that calculation fits the Planck data perfectly.

Two, there are zillions of antimatter particles (anti-quarks) inside every proton and neutron; the antimatter does not disappear from this matter-dominated universe.

D) Additional issues

Yet, the two facts above cannot escape the fact that matter (the proton, neutron, and electron) is, after all, DIFFERENT from its anti-partners (the anti-proton, anti-neutron, and positron). That is, why is THIS universe dominated by matter, not antimatter?

This last question was addressed in G-theory long ago in terms of “Cyclic multiverse”.

On the other hand, Gerard ‘t Hooft (a Nobel Laureate in physics) published a book, {The Cellular Automaton Interpretation of Quantum Mechanics} (Springer, 2016), and followed it up with a new article, {Free Will in the Theory of Everything} (September 2017), proposing a completely new FRAMEWORK for QM.

A) The ‘t Hooft/Maudlin debate

However, ‘t Hooft’s new QM has been violently attacked by many, such as Tim Maudlin. The center of the battlefield is still the EPR argument, especially its derivative, Bell’s theorem.

Bell’s theorem: {No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics}; it rules out local hidden variables as a viable explanation of quantum mechanics (though it still leaves the door open for non-local hidden variables).
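In its usual experimental (CHSH) form, the theorem says any local-hidden-variable account obeys S ≤ 2, while quantum mechanics predicts up to 2√2. A minimal sketch, using the standard singlet correlation E(a, b) = −cos(a − b) and the textbook optimal angles:

```python
from math import cos, pi, sqrt

def E(a, b):
    """QM correlation for spin-singlet measurements at angles a, b."""
    return -cos(a - b)

a, ap, b, bp = 0, pi / 2, pi / 4, 3 * pi / 4  # optimal CHSH settings
S = abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))
print(f"S = {S:.4f}; local bound = 2; QM (Tsirelson) bound = {2*sqrt(2):.4f}")
```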

In the general consensus, Bell’s theorem has been verified by the Alain Aspect (1981) and Hensen (2015) experiments.

However, even John Stewart Bell admitted that Bell’s theorem can be invalidated under the condition of superdeterminism.

Superdeterminism: the apparent freedom of choice of an agent (Alice or Bob) is in fact the reenactment of a predetermined screenplay; that is, there is no true free will. Thus, Bell’s theorem depends on the assumption of “free will”, which does not apply to deterministic theories.

Now, the battle line is very clear:

For Maudlin:

One, Bell’s theorem has been verified.

Two, the automata are 1) following deterministic rules and 2) reacting at any time only to local inputs. That is, cellular automata lying on a grid are updated according to laws that involve only nearest neighbors, nothing else, so that deserves to be called “local”.

Three (in Maudlin’s words): {so I hope we agree that neither the local indeterministic automata nor the local deterministic automata of this sort could be used in an empirically acceptable theory, even though producing the right empirical results is logically possible in each case.}

Four (conclusion): cellular automaton QM is totally wrong.

For ‘t Hooft:

One, {my findings are so different from Bell’s. The core ingredient of my views is the existence of mappings of the states of a local, deterministic system onto orthonormal sets of basis elements of Hilbert space.} QFT is a local indeterministic theory that obviously predicts violations of Bell’s inequality; the superdeterministic alternative was described by Bell himself as “not just inanimate nature running on behind-the-scenes clockwork, but with our behaviour, including our belief that we are free to choose to do one experiment rather than another, absolutely predetermined”.

Two, ‘t Hooft’s CA is a *quantum* cellular automaton: {the local indeterministic automata should produce behavior that is indistinguishable from local deterministic automata that are all running different deterministic pseudo-random number generators}; that is, there exists an automaton-like theory with quantum evolution laws, mimicking the Standard Model at large distances, that yields the same predictions as a deterministic automaton.

With the superdeterminism loophole remaining open, the above exchange is just a chicken talking to a duck: each sings its own song, without any meaningful conversation.

B) The verdict

So, ‘t Hooft concluded: {I still feel the burden of producing more precise models, ones that generate more precisely systems of particles resembling the SM. As long as that hasn’t been completed, you can continue shouting at me.}

In Prequark Chromodynamics, both the proton and the neutron have cellular automaton descriptions (as the glider of Conway’s Game of Life, the base for a Turing computer), see http://www.prequark.org/Biolife.htm . And this is now widely known via Twitter.

With Prequark Chromodynamics, the ‘t Hooft/Maudlin debate can now be settled. But, I do not agree with the view that superdeterminism plays a major role in QM. Thus, I will revisit this ‘Bell’s theorem’ issue.

In addition to the superdeterminism loophole, there are two issues with the experimental verification of the theorem.

One, there are loopholes in the experiments, and some of them are intrinsic, spawning new loopholes ad infinitum.

Two, all experiments are theory-based (biased). That is, experimental verification will not guarantee that the intended theory is CORRECT. The two best examples are GR (general relativity) and the SM (standard model of particle physics). GR has passed ALL the experimental tests we humans can throw at it, but it is now known to be an ‘effective theory’ at best, if not wrong all the way down (as a gravity theory). The SM has also passed all the tests we humans can throw at it, but no one in the whole world believes that it is a complete theory.

On the other hand, a theorem (not a law) can be disproved logically or linguistically.

Bell’s theorem: {No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics}; it rules out local hidden variables as a viable explanation of quantum mechanics (though it still leaves the door open for non-local hidden variables).

Is this theorem logically or linguistically sound?

It consists of only two linguistic (logic) terms: {local hidden variables theory} and {quantum mechanics}.

“Local hidden variables” = “local realism”

Locality: reality in one location is not influenced by measurements performed simultaneously at a distant location; that is, there is no instantaneous (“spooky”) action at a distance.

Realism: the moon is there even when not being observed; that is, microscopic objects have real properties that determine the outcomes of quantum mechanical measurements.

Yet, violation of Bell’s inequality implies that at least one of the two assumptions (locality or realism) must be false.

Freedom refers to the physical possibility of determining settings on measurement devices independently of the internal state of the physical system being measured.

Non-locality: the signal involved must propagate instantaneously (or at least superluminally), so that such a theory cannot be Lorentz invariant.

If we can show that QM is totally local and real, then Bell’s theorem is invalid or simply moot.

QM differs from a local/real theory in only two major attributes: quantum uncertainty and superposition (Schrödinger’s cat).

One, quantum uncertainty: two noncommuting observables (such as position/momentum or time/energy) can never have completely well-defined values simultaneously, and this uncertainty is intrinsic, irremovable by any improvement of the measurements (the textbook relation is sketched after this list).

Two, superposition: the fate of Schrödinger’s cat.
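For definiteness, the textbook relation behind point One (standard quantum mechanics, independent of any interpretation):

```latex
% The uncertainty bound follows from the commutator, not from any
% clumsiness of measurement -- that is the "intrinsic" part:
[\hat{x}, \hat{p}] = i\hbar
\quad\Longrightarrow\quad
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}.
```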

In G-theory, these two mysterious QM wonders are totally deterministic.

First, QM is emergent, not fundamental. The QM uncertainty equation is the result of dark energy (the expansion of the universe).

In fact, all the Alain Aspect-type experiments show only that quantum particles have a special attribute, entanglement, while entanglement is 100% deterministic. There is no superluminal signal between the entangled particles, as their states are superdetermined.

However, the superdeterministic feature of entanglement does not imply that the entirety of QM is superdeterministic. QM is completely deterministic (local and real) for three reasons.

One, the QM uncertainty is only the apparent effect of the expansion of the universe.

Two, the superposition is erased by the deterministic attractor.

Three, the entanglement is superdetermined.

Now, Bell’s theorem can be mooted for three reasons.

One, there is a loophole (superdeterminism).

Two, all the experimental tests which support Bell’s theorem cannot and will not guarantee its validity (the same fate as GR and SM).

Three, G-theory shows that 1) the proton and neutron are Gliders (cellular automata), 2) the expansion of the universe is 100% deterministic while QM uncertainty emerges from it, and 3) the superposition is erased by the deterministic attractor.

D) Clarifying the differences

I do agree with ‘t Hooft’s Cellular Automaton QM in principle, as G-theory (with the proton/neutron as Glider) was developed 30 years before ‘t Hooft’s book (Springer, 2016). I do not, however, agree with him that ‘superdeterminism’ plays a MAJOR role to the point of completely excluding ‘free will’.

Here, Mickey Mouse is an undefined term, understood in a sociological sense. However, it has at least two attributes.

One, Mickey Mouse has no biological correspondent for the word ‘mouse’. That is, it is not real as a biological mouse.

Two, Mickey Mouse is observable as it is.

So, anything which encompasses the two attributes above is a Mickey Mouse-like entity.

Example: if the rhinoceros (or the Saola, Narwhal, Unicornfish, Texas unicorn mantis, Okapi, Goblin spiders, Helmeted curassows, Unicorn shrimp, Arabian oryx, etc.) is clearly defined as not a Unicorn, then the Unicorn has no biological base, similar to Mickey Mouse, and it is a Mickey Mouse-like entity.

Yet, the Unicorn is of course REAL in accordance with the “Mickey Mouse principle”, as it is observable in many places, in the arts (paintings, sculptures, animations, etc.).

‘Free will’ is the backbone of the legal system (a subsystem of nature). Without IT, the entire legal system collapses. So, ‘free will’ is at least a Mickey Mouse-like entity, and thus no law can exclude it.

By the same token, ‘superdeterminism’ cannot be excluded, as it is the backbone of entanglement.

Of course, we cannot exclude Bell’s theorem, although it is totally useless in the REAL world.