Baer, Barger, Salam, Serce, and Sinha (Oklahoma+Wisconsin) argue that the \(125\GeV\) Higgs boson – together with the absence of superpartners at the LHC so far and the null results of the dark matter direct search experiments – is exactly what the most conventional string theory scenario, equipped with a natural-sounding refreshed notion of naturalness and a seemingly conservative type of anthropic veto, has always predicted.

From the beginning, BBSSS make it clear that they belong to Team Stanford – or, given their admiration for Michael Douglas' stringy adjustments to naturalness considerations, Team Rutgers-Stanford (although by current locations, I should say Team Stony Brook-Stanford). They surely believe in a vast landscape of de Sitter vacua.

OK, how does their theory of everything work and what methods and assumptions does it use?

They need some anthropic selection. BBSSS think that the anthropic explanation of the small cosmological constant (promoted with some full quotes of Steven Weinberg) – as embedded in string theory – is a great discovery of late 20th century physics, if not one of the most profound discoveries of the first 14 billion years after the Big Bang. They believe that this success should naturally extend to other parameters of Nature.

Vacua that don't allow life are vetoed. This veto is applied in a rather binary way: while some other statistical effects – to be discussed soon – may push some parameters into extreme corners, the regions of the parameter spaces that don't allow life are simply killed without any compassion.

In this way, they embrace the elimination of the vacua whose cosmological constant isn't tiny enough. Also, they eliminate the vacua whose electroweak scale is insufficiently low – insufficiently separated from the Planck scale – because these lead to some life-killing nuclear physics. Also, vacua with qualitatively undesirable traits – such as those without electroweak symmetry breaking or those with charge- or color-breaking minima – are vetoed on anthropic grounds.

This anthropic veto alone wouldn't get them too far; many possibilities would still survive. So they look for low-energy effective field theories that include the MSSM, the Minimal Supersymmetric Standard Model, and that arise from string compactifications. Among those, the preferred traits are chosen by their preferred version of naturalness.

They use two versions of naturalness:

Practical naturalness:
When a quantity is calculable as a sum of many terms, none of the terms should be significantly larger than the total sum.

Douglas' stringy naturalness:
A region of the effective field theories' parameter space is more natural than a competitor if it is derivable from a larger number of string vacua.
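The first definition is easy to make concrete. Here is a minimal toy sketch of a "practical naturalness" check – the function name and the threshold factor are my own illustrative choices, not anything from the paper:

```python
# Toy check of "practical naturalness" as defined above: a quantity computed
# as a sum of several contributions is "practically natural" if no single
# term is much larger in magnitude than the total. The max_ratio threshold
# is an arbitrary illustrative choice (not from the BBSSS paper).

def practically_natural(terms, max_ratio=4.0):
    """Return True if no |term| exceeds max_ratio * |sum(terms)|."""
    total = sum(terms)
    if total == 0:
        return False  # a perfect cancellation is maximally fine-tuned
    return max(abs(t) for t in terms) <= max_ratio * abs(total)

# A mild cancellation passes; a fine-tuned near-cancellation does not:
print(practically_natural([1.0, 2.0, -0.5]))       # True
print(practically_natural([1000.0, -999.0, 0.1]))  # False: huge terms, tiny sum
```

The second, fine-tuned example is exactly the kind of configuration that the hierarchy-problem discussions are about: large individual contributions conspiring to produce a small total.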

Great (although I have doubts about both principles above which I will avoid in this text). Under some assumptions about the stringy physics, they believe that these two principles are basically equivalent to one another. What is their "not so widely appreciated" twist to naturalness? How do they use their naturalness?

Well, they write down the usual formula for the vacuum energy in the MSSM – which includes the hidden sector contribution, \(F\)-terms, and \(D\)-terms. There may be several \(F\)-term and \(D\)-term SUSY breaking fields – \(n_F\) and \(n_D\) of them, respectively – and they figure out, using some previous papers, that due to practical naturalness, it's "harder than expected" to make many SUSY-breaking terms simultaneously small when these numbers are larger.

For this reason, practical naturalness predicts some pressure that prefers larger soft terms – they are favored because of an extra factor in the probability distributions\[
f_{SUSY} \sim m_{soft}^n, \quad n=2n_F+n_D-1.
\] The power law becomes more impactful if there is a greater number of mass-like parameters to tune – I have actually expressed the same opinion for years.
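To see numerically how the power law pulls the soft terms upward, one may sample from a density \(p(m)\propto m^n\) on a finite interval by inverse-transform sampling. The cutoff \(M\) below is an arbitrary illustrative unit, not a value from the paper:

```python
# Numerical illustration of the power-law pull f_SUSY ~ m_soft^n described
# above. For a density p(m) ∝ m^n on [0, M], inverse-transform sampling
# gives m = M * u**(1/(n+1)) with u uniform in (0,1). Larger n visibly
# drags the typical soft mass toward the upper end of the allowed range.
import random

def sample_soft_mass(n, M=1.0, rng=random):
    """Draw one m from p(m) ∝ m^n on [0, M] by inverse transform."""
    u = rng.random()
    return M * u ** (1.0 / (n + 1))

random.seed(0)
for n in (0, 1, 2, 4):
    mean = sum(sample_soft_mass(n) for _ in range(100_000)) / 100_000
    # the analytic mean is (n+1)/(n+2): 0.5, 0.667, 0.75, 0.833, ...
    print(f"n = {n}: <m_soft>/M ≈ {mean:.3f}")
```

The mean creeps toward the cutoff as \(n\) grows, which is the whole point: more mass-like parameters to tune means a stronger statistical pull toward heavy soft terms.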

OK, so the underlying practical naturalness actually wants to make the electroweak scale high (close to the Planck scale) as well, but at some point, the possibilities are killed by the anthropic veto. When these considerations are quantified, BBSSS conclude that \(n\in\{1,2\}\) are realistic values. Within the stringy MSSM parameter space, they find that Higgs masses between \(120\) and \(126\GeV\) are heavily favored, and those around \(125\GeV\) are really the most likely ones.
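The interplay of the two effects – the statistical pull upward and the hard anthropic cutoff – is what produces a distribution peaked just below the boundary. A crude cartoon of this mechanism, with all scales in arbitrary units of my own choosing (the veto threshold is purely illustrative):

```python
# Toy model of the mechanism described above: a power-law prior ∝ m^n pulls
# the soft terms (and hence the derived weak scale) upward, while a hard,
# binary anthropic veto kills vacua whose scale exceeds a life-compatible
# bound. The survivors then pile up just below the anthropic boundary.
# All numbers here are arbitrary illustrative units, not BBSSS values.
import random

def surviving_masses(n, veto_above=0.3, M=1.0, trials=200_000, seed=1):
    rng = random.Random(seed)
    kept = []
    for _ in range(trials):
        m = M * rng.random() ** (1.0 / (n + 1))  # draw from p(m) ∝ m^n
        if m <= veto_above:                      # binary anthropic veto
            kept.append(m)
    return kept

kept = surviving_masses(n=2)
# Most survivors sit in the upper half of the allowed window [0, 0.3];
# for p ∝ m^2 the expected fraction above the midpoint is 1 - (1/2)^3 = 7/8.
frac_upper = sum(m > 0.15 for m in kept) / len(kept)
print(f"fraction of survivors above mid-window: {frac_upper:.2f}")
```

So even though the veto is binary, the surviving distribution is far from flat – it is squeezed against the anthropic wall, which is the qualitative shape behind the peaked Higgs-mass histograms.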

On top of that, they make predictions for the superpartner masses and related parameters:

\(m_{\tilde g} \sim 4\pm 2 \TeV\)

\(m_{\tilde t_1}\sim 1.5\pm 0.5\TeV\)

\(m_A\sim 3\pm 2 \TeV\)

\(\tan\beta\sim 13\pm 7\)

\(m_{\tilde \chi^\pm_1,\tilde \chi^0_{1,2}}\sim 200\pm 100\GeV\)

\(m_{\tilde \chi^0_2}-m_{\tilde \chi^0_1} \sim 7\pm 3 \GeV\)

\(m_{\tilde q,\tilde \ell}\sim 20\pm 10\TeV\)

Those are nice values – consistent with the non-detection of the superpartners at the LHC but potentially discoverable at the HE-LHC (high energy LHC) or even already at the HL-LHC (high luminosity LHC) in some cases. They humbly don't talk about any really new collider such as the FCC but if they did, they could have calculated the IQ of those who want to veto such a collider:\[
IQ \sim 60\pm 5,
\] especially because such a next-generation collider would be rather likely to see the squarks and sleptons, too. At any rate, they draw lots of histograms in various two-dimensional planes depicting the parameter space and conclude that their picture makes sense.

They think that the high-luminosity LHC has the greatest chance to discover supersymmetry in the SDLJMET channel (soft dilepton plus jet plus missing energy). On the other hand, the next-generation SI DD (spin-independent direct detection) experiments could find the WIMP dark matter. The other HL-LHC channels and dark matter searches are less promising.

I think that there's some wishful thinking included in this picture and the conclusions depend on many explicit and hidden assumptions as well as covert and overt biases. You know, the successful Higgs mass prediction managed to reduce the a priori allowed range between \(50\) and \(1000\GeV\) by a factor of \(500\) or \(512\) or so – that's like predicting nine bits of information. It seems fair to estimate that their choices amount to at least "nine bits of parameters" that they could have adjusted, so the "real amount of prediction" could be zero.
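The bit-counting above is just the base-2 logarithm of the reduction factor:

```python
# The back-of-the-envelope information count from the paragraph above:
# narrowing an a priori Higgs-mass window of 50-1000 GeV down to a band a
# few GeV wide shrinks the allowed range by a factor of roughly 500, i.e.
# about log2(500) bits of predicted information.
import math

reduction_factor = 500
bits = math.log2(reduction_factor)
print(f"log2({reduction_factor}) ≈ {bits:.2f} bits")  # ≈ 8.97, i.e. ~9 bits
```

And \(2^9 = 512\), which is where the "\(500\) or \(512\)" in the text comes from.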

In the end, the predictions by BBSSS for the parameters of new physics could end up being BS (times a factor of 2-3; that was a stupid pun). But they could also be completely right or almost right. Similar playing with these modest and stringy variations of naturalness is highly desirable. For years, I have believed that in the anthropic-vs-conventional wars, progress would only be achieved once some people got their hands dirty and played with some viable combinations of anthropic and old-fashioned criteria. The anthropic selection is tautologically true to some extent and it is probably helpful to explain something – on the other hand, it can't be the only or final principle in this part of physics because the anthropic probability calculus isn't even internally consistent. The old-fashioned criteria of naturalness must still hold in one way or another and the remaining task is to figure out how these two very different classes of paradigms co-exist.

Some people may want to ban all the methodology used in particle physics for decades (e.g. any kind of naturalness) because of the absence of Beyond the Standard Model physics at the LHC so far. But real physicists won't do it, of course. There exist plausible scenarios implying that the observations so far are compatible with the conventional picture and that new physics should be expected in the upgraded or next-generation experiments; real physicists won't ever pretend that this scenario has been refuted, because it hasn't been.

And that's the memo.

Off-topic: Why it's so wonderful to sell your famous brewery to the Japanese (Pilsner Urquell to Asahi). In recent days, I was accidentally served this ad about Gambrinus, the main mass-market beer brand produced in Pilsen (the name is borrowed from a legendary demigod of beer), before I watched random unrelated YouTube videos.

I think that someone at Asahi must have directed the RUR production team of Jan Všetíček (well, under McCann Prague) because the sci-fi style of this ad is rather unusual in our ad market. So all Pilsner patriots recognize every piece of the streets and parks in the video – which are augmented with skyscrapers, robots, fashion (which may be changed in milliseconds while on the street), and new effective means of transportation from the year 2169 AD. I actually hope that this is what Pilsen's gonna look like already in 2050 AD! :-) The punch line of the video is obviously that while many things will change, Gambrinus will not.

Because it was much more intriguing than the average ad, I had to find the video and analyze it very slowly. Many of the brewery's ads were situated in the 19th century, so this time they just jumped 300 years in the other direction. ;-) Just to be sure, Gambrinus was founded in 1869, i.e. 150 years ago, so we're in the middle now.