Monday, February 11, 2019

Nima, the latest target of the critics of physics

Randall is famous for some clever and (even now) intriguing scenarios in particle physics while Hossenfelder is famous for persuading crackpots that physics is bad. That's a difference that Hossenfelder and her readers couldn't forgive Randall – and if Lisa Sundrum (as they romantically renamed her) were capable of giving a damn about what a bunch of irrelevant aßholes write on the Internet, they would have given her a hard time.

As you must agree, it would be discrimination if the female big shot Lisa Randall were the only target. So Peter Woit secured the minimum amount of fairness and political correctness when he (along with his readers) chose Nima Arkani-Hamed as a man who deserves some criticism today:

In Summer 2017, Nima pointed out that some algorithms looking for the global maximum of a function (such as the maximum of the accuracy of a physical theory) that are based on "small adjustments" may fail because they get stuck around a local maximum which is not the global one (i.e. not the correct theory) – and a bolder jump towards the right basin of attraction is needed to find the correct solution.

In plain English: bold thinkers and courageous steps are sometimes necessary for the paradigm shifts that may be needed.

Nima surely still agrees with the comments as described above – and I would guess that he still thinks that the observations may be relevant for the search for a better (or final) theory in fundamental physics. After all, many of us have played with these algorithms to search for the global maximum. One typical approach is to jump around and prefer the jumps that increase the function – a jump is more likely to be "approved" if it looks like an improvement – or to jump in the direction of some gradient. But these algorithms also add some noise, which plays the role of the "experiments" that give us a chance to jump into another basin. Once we're sufficiently sure that we're near the right basin, we may reduce the noise – i.e. lower the "temperature" that acts as a variable in this algorithm – and quickly converge to the global extremum.

Incidentally, I want to emphasize that "too high a temperature" – jumping everywhere almost randomly – is no good, either. If you don't care about the local improvements at all, you can't converge to the truth: you are just randomly jumping around the whole configuration space, which is probably very large, and the probability of hitting the maximum is infinitesimal.
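For concreteness, the strategy above – biased jumps plus a gradually lowered temperature, i.e. simulated annealing – may be sketched in a few lines. The landscape, the cooling schedule, and all the numbers below are made up purely for illustration:

```python
import math
import random

def anneal(f, x0, steps=20000, t_hi=2.0, t_lo=1e-3, scale=1.0):
    """Look for the global MAXIMUM of f: random jumps, a preference for
    improvements, and noise ("temperature") that is gradually reduced."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for i in range(steps):
        t = t_hi * (t_lo / t_hi) ** (i / steps)   # geometric cooling
        y = x + random.gauss(0.0, scale)          # a proposed jump
        fy = f(y)
        # improvements are always approved; worsenings are approved with
        # a Boltzmann-like probability, which lets us escape local maxima
        if fy >= fx or random.random() < math.exp((fy - fx) / t):
            x, fx = y, fy
            if fx > fbest:
                best, fbest = x, fx
    return best, fbest

# A toy landscape: a deceptive local maximum near x = -2 and the
# slightly taller global maximum near x = +2.
def f(x):
    return 1.2 * math.exp(-(x - 2.0) ** 2) + 1.0 * math.exp(-(x + 2.0) ** 2)

random.seed(0)
x_best, f_best = anneal(f, x0=-2.0)  # deliberately start in the wrong basin
```

A greedy searcher started at \(x=-2\) would stay in that basin forever; with enough noise early on, the annealer has a chance to hop over to the right basin before the temperature drops.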

But you know, Woit was annoyed that Nima "changed his mind" in 2019. Around 1:09:00 of the January 2019 talk given in front of some very bright young folks in Princeton, Nima said that the explanations of why we should continue to research certain classes of theories of new (particle) physics may look like excuses for a paradigm that has failed. In fact, even the theories that weren't confirmed by the LHC may already be considered tweaks or excuses of some simpler earlier theories that had failed.

However, Nima says, people shouldn't give up trying to tweak what they have because almost no promising theories of this type have ever completely failed in the history of physics. Some tweaks or reinterpretations were what was needed when the theory really looked promising. Nima suggested that people may be inclined to be bottom-up model builders or top-down theorists – and especially the latter should keep on tweaking, combining, and recombining the toolkit that they have developed or mastered.

As you can see, this is an immensely wise recommendation.

The process of finding and establishing better theories of particle physics resembles the boring of a new tunnel. It's a tunnel between the everyday life of the doable experiments such as those on the LHC or the FCC on one side; and the nearly philosophical, Platonic, idealist realm of very precise, principled, and powerful equations, mathematical structures, and ideas that most naturally work in the regime that is experimentally inaccessible.

To one extent or another, all fundamental physicists who care about the empirical truth at all are digging a tunnel between the \(1\TeV\) energy scale of the LHC and the \(10^{16}\TeV\) Planck (energy) scale. The tunnel is being dug from two directions and people on both sides must have some idea where they want to get. It's plausible that the two teams of workers will meet in the middle. It's also plausible that one of the two teams will be almost useless and the other, successful team will just dig the whole tunnel from their side to the other side! ;-)

Boring is a boring activity and you shouldn't imagine that, just as the Ejpovice tunnel was extended by 15 meters every day, physicists add one order of magnitude to the energy every month (or year). Instead, the construction of the tunnel may be very non-uniform in time. In particular, the top-down theorists have prepared some potentially promising thermonuclear bombs that may help to dig a hole going in the right direction within milliseconds.

We don't really know – and we have never known – which of the teams is more likely to dig the whole damn tunnel. And there are some obvious differences between the two teams. The team digging from low energies, i.e. from the \(1\TeV\) LHC scale, cares about the ongoing experiments a lot and is affected by their results. The other team – which mentally lives near the Planck scale – doesn't care about the latest experimental twists too much. It needs to care about the problems with the rocks on the Planckian side of the mountain, and about the approximate aiming needed to reach the LHC throat of the mountain.

People following or contributing to the hep-th archive – such as string theorists – are those on the top-down or Planckian side of the tunnel; people following or contributing to the hep-ph archive live on the low-energy, bottom-up, LHC side of the future tunnel.

OK, the tunnel is being built inside a mountain that rather clearly has some precious metals in it such as the gold of supersymmetry. It's almost fair to say that the gold of supersymmetry – I mean the supersymmetry breaking scale – is hiding somewhere in the bulk of the mountain. The bottom-up and top-down people have a different way of thinking about the location of that gold. Needless to say, the existence of the two approaches (and archives) – which was hinted at by Nima's comments – was completely overlooked by Woit and the other cranks. They probably don't understand the concept of hep-th and hep-ph at all.

For the bottom-up people who mentally live around the LHC, there are good reasons to think that the gold shouldn't be too far away. Gold is useful for circuits, golden teeth, jewels, coins, and other things – and supersymmetry is good for explaining why the Higgs boson isn't much heavier than it is. So supersymmetry should be rather close, it shouldn't be too badly broken.

Well, the top-down people also understand that but they mentally live at much higher energies and \(1\TeV\) or \(10\TeV\) are rather close to each other – they're energy scales much smaller than the Planck scale (by some 15 orders of magnitude). So top-down people – well, at least your humble correspondent – were just never carried away by the idea that the superpartner masses "have to be" \(1\TeV\) instead of \(10\TeV\). The supersymmetric gold seems to be a mechanism that pushes the Higgs boson to the opposite, low-energy side of the mountain. But something must push supersymmetry itself to low enough energies as well – which is arguably "easier" and "more natural" than to make the Higgs boson light – and this mechanism is responsible for most of the lightness of the Higgs.

Well, it doesn't have to be responsible for 100% of the lightness of the Higgs. There is some physics somewhere in the \(100\GeV\)–\(10\TeV\) energy range. The Higgs may very well be accidentally 10 times lighter than the average Standard Model superpartner. That's been my view for a long time, which is why, in 2007, I estimated the probability of a SUSY discovery at the LHC to be 50%. I still made a bet against Adam Falkowski, who felt sure it was just 1% or less; my 50% indicated an agnosticism that was surely more widespread among the top-down people.

The fine-structure constant is \(\alpha\approx 1/137.036\) – it's also rather far from one. Now, yes, I can give you some explanations of why the constant defined in this way isn't quite of order one. But it's possible that we don't quite understand the right logic (the right formulae based on the relevant mechanism of supersymmetry breaking) to estimate the ratio of the Higgs and gluino masses and if that ratio were comparable to something like \(1/137.036\) as well, I wouldn't be "totally" shocked.

And I have always "accused" many bottom-up phenomenologists of a bias preferring early discovery and testability – which leads them to wishful thinking. You know, if you "believe" that the superpartners are light enough to be discovered by 2018, it has the advantage that it's exciting; and if you make the exact prediction and it happens to be correct, you will also get the big prizes in 2018 or soon afterwards, without having to apply the discount rate too much.

Note that this bias – which is obviously "diverging away from the objective arguments for the truth" – is almost equivalent to the "increased testability" preferences. People on the hep-ph side who really care about ongoing experiments may simply prefer "more (easily) testable theories" over "less (easily) testable theories". In particular, they prefer theories with lighter new particles over theories with heavier new particles.

This bias is good for them if the particles are there – and it backfires and becomes a disadvantage when the new particles aren't observed. As a top-down theorist, I look at these developments from a distance. I don't get passionate about these hopes which are irrational. The conclusion that many superpartners should be lighter than \(1\TeV\) was never justified by terribly strong arguments. My view is that "more (easily) testable theories" simply aren't more likely to be true than "less (easily) testable theories". As long as a theory is testable in principle, it's scientifically meaningful – and only actual material (theoretical or empirical) evidence for validity, not the "ease of testability", may help theories to beat others! I think that this is implicitly how other top-down physicists think as well but I think that almost no one explains these things as clearly as I do.

The LHC has found no such new particle, which proves that according to some measures, the degree of "experimentally proven" fine-tuning is already something like \(\Delta \geq 100\). That's a large number but it is not insanely large. Even if naturalness and SUSY had predicted that superpartners should have been seen by the LHC by now with 95% probability – and I think it's less than 95% – the absence of such superpartners is still just a 2-sigma deficit of new physics! I just translated the 95% confidence level to the number of standard deviations, using the usual dictionary. For the 99% that we may need, we would get 2.5 sigma or so.
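The "usual dictionary" is just the inverse CDF of the normal distribution; here is the two-sided translation in a few lines (the function name is mine, chosen for illustration):

```python
from statistics import NormalDist

def confidence_to_sigma(confidence):
    """Two-sided translation of a confidence level (e.g. 0.95) into the
    equivalent number of standard deviations of a normal variable."""
    tail = (1.0 - confidence) / 2.0          # split between the two tails
    return NormalDist().inv_cdf(1.0 - tail)

# confidence_to_sigma(0.95) is roughly 1.96, i.e. "2 sigma";
# confidence_to_sigma(0.99) is roughly 2.58, i.e. "2.5 sigma or so".
```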

Well, a 2-sigma deficit may be said to be curious but it is not insanely curious. We saw a 4-sigma excess of the \(750\GeV\) diphoton and it was just a statistical fluke. So why couldn't a milder, 2- or 2.5-sigma deficit of new physics at the LHC be a fluke? Of course it can be a fluke. As far as I am concerned, nothing has dramatically changed about the reasons to expect the discovery of supersymmetry in doable experiments. It's a non-rigorous but perfectly rational reasoning, especially for a top-down theorist who mentally digs on the Planckian side of the future tunnel.

Nima says that people should keep on playing with – and tweaking and reinterpreting – the very promising models of physics beyond the Standard Model. There are two basic reasons why it's completely sensible in practice:

the absence of alternatives (hep-ph view)

the existence of many known alternatives (hep-th view)

These two reasons are perfectly complementary to each other because they contradict one another! ;-) So what do I fudging mean? Well, the first reason, "the absence of alternatives", describes the fact that among the effective field theories as understood by the bottom-up phenomenologists – who see how some mysterious complete theory ultimately reduces to the Standard Model or its supersymmetric extension – the pictures that were considered most promising, such as those with the MSSM, are still the most promising.

I think that good physicists are eager to jump into a better "basin of attraction" as discussed at the beginning. But for this paradigm shift to make sense, such a basin of attraction must first be found; and it must be shown to be at least as promising as the known one(s). This hasn't really taken place, which is why it's really nonsensical in practice to expect sane physicists to completely abandon their pictures. They would have nowhere to go. Their job is not to be satisfied with the currently known approximate theory. They are trying to learn more and among the candidate theories, they simply choose the most promising one.

The second reason I mention is the opposite one – the existence of many alternatives. Well, what I actually mean is the landscape of ideas available to a top-down theorist such as a string theorist. You know, these people will also refuse to totally abandon what they have – because what they have is everything that is mathematically consistent and known to mankind.

My point is that almost independently of events at the LHC or other experiments, folks like string theorists are constantly enriching their brains with all the theories, systems of equations etc. that make any sense and that have a chance to be relevant for fundamental high-energy physics and quantum gravity. They already work with all of them, at least as a community. The criticism that they're too narrow-minded – e.g. focusing on the same kind of vacua, models, or mathematical methods – is self-evidently wrong. They are already using extremely diverse methods, descriptions, \(10^{500}\) semi-realistic vacua in many classes, and many other things.

To summarize, of course the absence of new physics at the LHC so far cannot lead rational people to any jump, because the "destination" of such a jump is either impossible to guess, or doesn't look better than what we have, or is already being investigated by some theorists. Whether you find it emotionally pleasing or not, the confirmation of a null hypothesis gives us very little information and very little reason to make any qualitative shifts.

You know, what some emotional laymen might prefer would be for physicists to say: Physics has failed, now I accept Allah or loop quantum gravity (or any other crackpottery) as my savior and surrender. But that's exactly what a competent and rational physicist won't do. Physics cannot really fail. And even the relatively big qualitative ideas – which are "less than physics" but still pretty important – haven't been falsified.

People were combining, recombining, tweaking, and reinterpreting their ideas and models before the LHC runs and they will do so now, too. There is no other rational way to proceed. And of course they will push the goal posts. That's what scientists do when they accumulate some new data – improved lower bounds on the masses etc. Improved lower bounds on the masses mean that the broad classes of theories and strategies have to be adjusted and goal posts have to be shifted. The latter is just a negative-sounding description of the correct fact that "a scientist should care about the empirical facts"! "Shifting the goal posts" is a phrase automatically persuading the listener that it describes a sin or a crime – but when this "shifting" is a reaction to some experimental data, it's just a synonym for "Bayesian inference"! It is a good thing.

The people at Woit's blog who dislike modern physics have understood that Woit wanted them to write variations of his own attack on Nima and they provided Woit with many copies of it. For example, Marshall Eubanks wrote:

... Phlogiston was abandoned too soon? What history of physics is he talking about? ...

It's very funny but if you listen to Nima's actual talk, you will see that he is aware of phlogiston – he explicitly discussed it along with some other examples. It's very obvious that Nima simply considers the existing promising pictures such as split SUSY and/or the MSSM to be analogous to the theories that we already know to be successful (although they needed time, work, and tweaks to fully mature – e.g. the atomic hypothesis or continental drift), and not to the likes of phlogiston, which has been refuted.

Nima also mentioned Ptolemy who "wasn't too far from wrong". Maybe he wanted to say "from right", maybe not. At any rate, I am sure he wanted to say that even the Copernican viewpoint may be viewed as a "twist" or "tweak" to Ptolemaic astronomy and I surely agree with it. Epicycles are a parameterization of the Fourier series for the orbits (which is always possible assuming periodicity) and Copernicus, Brahe, Kepler, and Newton gradually developed a framework to predict relationships between the Ptolemaic Fourier coefficients, while allowing the orbits to change from one year to another, and while encouraging a switch to a more modern, heliocentric system of coordinates. Copernicus and his followers faced huge troubles with the Church but that doesn't mean that physics itself started from a blank slate (the Church harassed people because they were inconvenient for its religious framework, not because of precisely quantified differences in physical theories). Copernicus, Brahe, Kepler, and Newton didn't have to declare Ptolemy a "failed loser". Real physicists may always see how they built something on the shoulders of giants (thanks to Newton for these words) and I've also read essays by Einstein who painted his own "revolutionary" work as a twist on top of Newton's, Maxwell's, and Lorentz's work.
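The claim that epicycles are just a Fourier parameterization is easy to check numerically. The epicycle coefficients below are invented purely for illustration – they don't come from any real ephemeris:

```python
import cmath

TWO_PI = 2.0 * cmath.pi

# Hypothetical epicycles: the planet's position in the complex plane is a
# sum of circles riding on circles, z(t) = sum_k c_k * exp(i*k*TWO_PI*t).
coeffs = {1: 1.0 + 0.0j, 2: 0.3 + 0.1j, -1: 0.0 + 0.05j}

def orbit(t):
    """A periodic orbit as a finite sum of epicycles = a Fourier series."""
    return sum(c * cmath.exp(1j * k * TWO_PI * t) for k, c in coeffs.items())

# Recovering the coefficients with a discrete Fourier transform over one
# period shows that the epicycles are nothing but Fourier coefficients.
N = 64
samples = [orbit(n / N) for n in range(N)]

def fourier_coeff(k):
    return sum(z * cmath.exp(-1j * k * TWO_PI * n / N)
               for n, z in enumerate(samples)) / N
```

Any periodic orbit, however complicated, can be matched by enough such circles – which is exactly why epicycles by themselves could "fit anything", and why the real progress consisted of predicting relations among the coefficients.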

The question is what the legitimate analogies are for the theories that (BSM) particle physicists work with today. No analogy is perfect and no one can even rigorously prove that some analogy is right. If you could find reliable analogies between cooking a lunch and supersymmetric models, cooks would be enough to answer all important questions about supersymmetry – and they could join the kooks who already think that they are doing so. ;-)

So you know, there is a Not Even Wrong happening involving "monster minds" such as Peter Woit, David Levitt, Bob, Sabine Hossenfelder, Ayloka, Marshall Eubanks, Quentin Ruyant, and RGT. (RGT probably mostly agrees with Nima, see the comments – sorry for including him in this list.) All of them share the general point which leads Peter Woit to a conclusion:

There seems to be a consensus that Arkani-Hamed’s argument from history doesn’t hold up…

That would be a great lesson if important questions could be answered in this way. The only problem with this methodology is that a consensus among a bunch of brain-dead crackpots is uncorrelated with the truth in fundamental physics – and if the correlation coefficient is nonzero, its sign is negative. Why don't you focus all your limited intellectual abilities and notice, Frau and Gentlemen, that you're just kooks whose opinions are completely worthless relative to Nima's?

Thank you in advance!

I just quoted from Woit's most recent comment as of now. But the last paragraph of his actual blog post above says:

If you had to pick the single most influential theorist out there on these issues, it would probably be Arkani-Hamed. This kind of refusal to face reality is I think a significant factor in what has caused Sabine Hossenfelder to go on her anti-new-collider campaign. While I disagree with her and would like to see a new collider project, the prospect of having to spend the decades of my golden years listening to the argument “we were always right about SUSY, it just needs a tweak, and we’ll see it at the FCC” is almost enough to make me change my mind…

Note that he has only "almost" changed his opinion about the FCC. Whether he "fully" changes it probably depends on his getting at least as nice a treatment as he received from the Inference journal.

At any rate, just think about the "logic" that led Woit to "almost change" his opinions about the FCC. Woit basically brags that his opinions about the FCC (and it's totally analogous in the case of theoretical physics) aren't determined by any arguments about physics, its knowledge, or the collider itself. Instead, he would like to use the survival or cancellation of a collider (or a whole subfield of physics) as a tool to say "f*ck you" to Nima or someone else. Everything that Mr Woit has ever written was driven by his desire for revenge and to calm his inferiority complex. He is just a malicious man and I despise everyone who has some tolerance for him.

At least he could entertain us with the phrase about the "decades of his golden years". Even "minutes when he was more than a pile of waste" would be too much to ask.