A goal is to determine the interactions of the newly found Higgs boson. If its spin equals two, Nima will quit physics, and if the particle is a techni-dilaton, he will kill himself.

I share these sentiments – but won't repeat Nima's pledges. The possibility that the spin is two is already being excluded experimentally but I have never considered it realistically possible a priori. Every spin-two particle has to be either a member of a stringy (or similar) infinite tower of states, or a mode of the graviton of some sort, so that the unphysical modes can be removed by a gauge symmetry.

But if it had something to do with the graviton, its couplings would have to be closer to the gravitational ones as well and one would still need something to make the WW scattering unitary. All these things just fail to make any sense from my perspective. If Nature were behaving in this way, it would reveal such fundamental, two-decades-long errors in all my (and others') thinking about Her that I don't think it would make much sense to contribute to the field. Well, maybe I would change my mind if some sensible explanation of the bizarre result emerged.
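For the record, the unitarity problem can be quantified by a standard textbook estimate (my addition, not a claim from the talk): without a Higgs, the amplitude for the scattering of longitudinally polarized W-bosons grows with the energy,

\[
\mathcal{A}(W_L W_L \to W_L W_L) \sim \frac{E^2}{v^2}, \qquad v \approx 246\GeV,
\]

so the tree-level amplitude would violate unitarity around \(E \sim \sqrt{8\pi}\,v \approx 1.2\TeV\). Something Higgs-like, or a tower of new resonances, has to enter at or below that scale.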

Technicolor is silly. 't Hooft's influential paper on naturalness presented it as a sort of "default option" – supported by the desire to find new physics of the most generic type at the relevant energy scale – except that none of it seems compatible with Nature's patterns or explanatory when we look a bit more deeply. Technicolor or compositeness theories typically predict lots of other deviations from the Standard Model that we don't see and they would also like to make the particle too light or too heavy, not to speak about many other particles.

Needless to say, my main reason to dismiss "techni-dilaton" and similar explanations is that technicolor doesn't seem to arise naturally in the stringy vacua I know of. Moreover, the whole technicolor industry is partly driven by an emotional hostility towards fundamental scalar fields which seems completely irrational to me, because of the insights in string theory and other considerations. Empirically, the new particle seems to be as Higgs-boson-like as we can get so it's a safe bet to think that its deviations from the Standard Model Higgs boson, if they exist, may be considered "small perturbations", so we're not dealing with a qualitatively different particle incorporated into a network of new physical phenomena right at \(100\GeV\) or so.

In 2012, we may know the couplings of the Higgs up to 40%. It will get to 10% in some years. Some people are proposing Higgs factories, new machines that could measure the couplings down to 3%. Nima raises exactly the same objection I have written and said many times: even if a deviation from the Standard Model is found by these Higgs factories, we may still be suspicious that the deviation is due to some error. It won't convince us and it may be too much to pay billions for an experiment that is guaranteed to be either "not new" or "inconclusive" from the beginning. One had better find the new particles directly. Focusing on a few numbers – the interaction strengths – may be telling us too little about the new physics. Producing particles and resonances tells us not just about some numbers but about some "qualitative things" in new physics. We get "whole curves" of data, which is simply better than some precision measurements of numbers within a qualitatively predetermined theory.

Nima says that in half of the cases, the Higgs could decay invisibly and we may be missing that. So some search should be made. Nima is utterly unimpressed by the CoGeNT-like coalition claiming that "dark matter is seen" around \(7\GeV\). Still, he presents a model generating the light dark matter out of higher-dimension operators.

He says that people outside HEP physics should be told that an elementary Higgs is responsible for the electroweak symmetry breaking. In superconductivity, strong dynamics does it. Technicolor should have been true, he thinks. Well, I find the contrast between these assertions and his desire to commit suicide if technicolor is true a bit inconsistent. I am conservative but I don't think this conservativeness implies that strong dynamics should be expected. Strong dynamics is important in metals and other complex materials of squalid state physics because it's squalid. It's pretty much guaranteed that something in squalid systems has to be strongly coupled. But the vacuum in particle physics may be purer and it therefore makes sense to consider more minimal, more economical theories here. And in one S-dual description or another, pretty much any interaction is weakly coupled. Because we know how to write down Higgs-based weakly coupled theories that break the electroweak symmetry and we don't know a truly different alternative, it's a huge argument that Nature does it in the Higgs way – and it seems to be the case. And yes, as Nima says, the "superconductivity is the role model" argument is also weakened by the observation that the generic superconductivity/technicolor model behind the electroweak symmetry breaking breaks the Lorentz symmetry, too. We don't want to lose it.

Arkani-Hamed says that "new technicolor-like physics at the same scale" has already solved 3 out of 3 major naturalness-like problems in 20th century physics. The electron self-energy in classical physics, which diverged and which people attempted to fix with silly models of a shell-like electron, was finally explained in a conceptually similar way with totally different technical details, Nima says: relativity and quantum mechanics. I disagree with that. Even in relativistic quantum field theory, there's a divergent self-energy that pretty much mimics the classical divergence, regardless of the modifications of physics that start to dominate at the Compton wavelength. Renormalization and the RG are what shows that this divergence shouldn't be paid attention to. And why the finite leftover mass is so small actually does require things like chiral symmetry that go well beyond just relativity and quanta.
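To make the comparison explicit (standard one-loop results, my addition): the classical self-energy diverges linearly with the inverse electron radius, while the QED correction is only logarithmically divergent and, crucially, proportional to the electron mass itself,

\[
\delta m_{\rm classical} \sim \frac{e^2}{4\pi\epsilon_0 r c^2} \xrightarrow{\,r\to 0\,} \infty,
\qquad
\delta m_{\rm QED} \sim \frac{3\alpha}{4\pi}\, m_e \ln\frac{\Lambda^2}{m_e^2}.
\]

The proportionality to \(m_e\) is guaranteed by the chiral symmetry that emerges at \(m_e\to 0\) – the extra ingredient beyond just relativity and quanta.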

Nima's second example of the technicolor-like resolution is the recognition that the pions that mediated the nuclear force in the old-fashioned Yukawa way were strongly bound composites of quarks. The pions' unnatural divergences look analogous to the Higgs' ones. OK, that's nice but there have always been reasons to think that the strongly coupled particles were composite. The mess (the zoo of hadrons) was seen (almost) first. If we don't see any zoo like that near the W-bosons and Z-bosons, it's a reason to think differently.

His third example is \(K\)-\(\bar K\) mixing. OK, I see it just as another technicality copying the previous point. The strongly interacting particles – hadrons – are composite. It seems strange to count it twice and I disagree that "strong coupling" incorporating relativistic and quantum effects already regulates the classically divergent electron's self-energy. So as far as I can say, there is only one example in which strongly coupled theories saved the day – it was when they described observations in QCD. But in the electroweak case, all the other arguments we had in the QCD case were absent. We didn't observe any strongly coupled mess and zoos; and we didn't have any nice theories of how to construct the observed particles out of (techni)colored quark-like constituents. On the contrary, in the electroweak case, we had a nice renormalizable theory based on elementary scalars, something that we didn't have in the QCD case. So the situations are very different, I would say.

At any rate, I agree with Nima that in the QCD case, people didn't have to clarify what fine-tuning meant simply because the theory with the compositeness at the same scale worked and there was no reason to think of it as a fine-tuned one. In the Higgs case, some fine-tuning seems to be there so there's room for whining and excuses here.

Nima admits that the history of physics may be abused to support any polemical point you want, which is why it's dangerous – he thanked Ann for that. He gives other examples that seem to be a bit tuned – the deuteron binding energy involves up to a 5% tuning, nuclear physicists usually tell Nima. Two neutrons are unbound just by \(60\keV\). He thinks this small number is a coincidence without a technical explanation. I can't give you one, rigorously, but I have some doubts that there's no non-anthropic explanation for this smallness.

My first guess would be that the smallness is due to some kind of a non-relativistic approximation for the neutrons where \(p^2/2m\) is rather small because the residual force between the neutrons becomes rather weak at those distances, too. So the binding energies are comparable to kinetic energies which are suppressed by the rather large \(m\gg |p|\). We understand why the hydrogen atom has the binding energy \(-13.6\eV\) which is much smaller than the proton (and electron) mass. It boils down to two powers of the fine-structure constant and there's an effective rather small (although not so small) fine-structure constant for the residual neutron-neutron interactions which makes the binding energies small, too. The residual interaction between the nucleons is weak because it's some multipole leftover of the colorful interactions after the colors are neutralized. I can't make this reasoning quite exact and rigorous but I do think it may be dangerous to immediately scream that some small value is unnatural just because you don't "immediately" see an explanation of the smallness.
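The hydrogen part of this argument is easy to check numerically (a textbook back-of-the-envelope calculation, my addition):

```python
# The hydrogen binding energy is (1/2) alpha^2 m_e c^2, i.e. suppressed
# relative to the electron rest energy by two powers of the fine-structure
# constant -- a "small" number that needs no fine-tuning to explain.
alpha = 1 / 137.036          # fine-structure constant
m_e_c2_eV = 510998.95        # electron rest energy in eV

E_binding_eV = 0.5 * alpha**2 * m_e_c2_eV
print(round(E_binding_eV, 1))      # -> 13.6 (eV)
print(E_binding_eV / m_e_c2_eV)    # ~ 2.7e-5, "small" for a structural reason
```

An analogous (if much rougher) suppression by an effective small coupling is what I have in mind for the residual neutron-neutron interaction.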

He enumerates some fun "pro-anthropic" coincidences. For somewhat different scales, all atoms except for hydrogen would be unstable. A universe full of hydrogen. It could have happened. Or at some other point of the parameter space, sphalerons would guarantee that the universe is full of neutrinos, the main enemies of Nima among the particle species.

At any rate, the SM-like LHC results so far indicate that Nature is either simple and unnatural; or natural but fine-tuned by discrete choices determining viable non-minimal models.

I think that there's one important point that must be taken into account when you decide whether some fine-tuning is too much: the look-elsewhere effect. If you have many (\(N\)) quantities and quantity pairs that a priori may display some fine-tuning, the probability that at least one of them displays a fine-tuning is roughly \(N\) times greater. In complicated physical systems, there are lots of such opportunities.
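A minimal numerical illustration of this effect (my own, not from the talk): if each of \(N\) independent quantities has a small probability \(p\) of looking "tuned", the chance that at least one of them does is \(1-(1-p)^N\approx Np\) for small \(p\).

```python
# With 50 quantities that could each show a 1% coincidence, finding at
# least one such coincidence somewhere is not surprising at all.
p = 0.01   # probability that a single quantity looks fine-tuned at the 1% level
N = 50     # number of quantities and quantity pairs one could have examined

p_at_least_one = 1 - (1 - p) ** N
print(round(p_at_least_one, 3))   # -> 0.395: a "1% coincidence" becomes likely
```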

Before 40:00, Nima talks about various ways to arrive at the heavy scalars scenario and/or split SUSY and/or anomaly mediation. It's motivated from the top-down perspective, for example. The heavy scalars solve the SUSY flavor problem. In response to a question, he also says that all the tunings become related if one assumes an anthropic character of the answer. He discusses the complaints about some physicists' alleged practice of moving the goalposts – "we will discover it next time because it wasn't there last time," and so on. He says that theories, and not theorists, have to be consistent. Right! If something isn't known or calculable, like the superpartner masses, it's not only OK but a duty for a theorist to be affected by the newest observations. A theorist simply can't dogmatically defend some claims that have no justification, especially not against the evidence. And what the experiments can falsify are well-defined theories, not theorists who may think about many theories during their career. Nima is among those who surely depend on this freedom. ;-)

Moreover, as Nima says, there have been theorists who haven't changed their predictions since the 1980s. Their picture has already passed a small test.

Nima defends scenarios in which the problems are solved safely, without any risk, any need to rely on good luck or cancellations, without any need to be clever or ingenious, with a lot of "overhead". He requires the models to be "findable" by people as stupid as generic Milner Prize winners like himself. ;-) So some particles are better sent to \(1,000\TeV\), he says. I am not so sure it's the right assumption. Some problems may only be solved "marginally". If there are many problems and constraints in physics, it's guaranteed by the look-elsewhere effect that some of them will only be obeyed "barely". Not all of them have to be obeyed "barely" but some of them may.

He describes some spectacular signatures of the models he favors, with gluinos decaying to 4 tops, 2 Higgses, and missing energy – the Standard Model background is zero. You see the events and you're done.

Nima emphasizes that all his favorite theories predict precise SM-like couplings of the Higgs, within conceivable error margins. A clear falsification is possible. It's because such deviations would require additional light charged particles but those would make the Higgs potential indisputably unstable and, by assumption, the instability couldn't be cured or compensated by any light scalars.

He also says that natural theories actually have much more room to be adjusted than the unnatural ones – because they are full of promiscuous scalars. Nima admires the Standard Model for its robust consistency and ability to suppress its perturbations. The Higgs partial decay widths could have been millions of times larger or smaller but they came out right within a factor of two.

A \(125\GeV\) Higgs smells very supersymmetric; Nima would be shocked if it were composite. It smells so partly because its mass is closer to the W/Z-bosons' than to the top quark's, where you would expect it without SUSY. But it's not in the most smelly place, so there are some new things to be learned. Nima also says that certain processes and observed decays or higher-dimension operators would immediately tell you there is something interesting to pursue near \(100\TeV\), justifying 70 more years in particle physics.

After a question, we hear about a project Nima assigned to Jared Kaplan: to work out the cosmological consequences of those models. They're fine. The dark matter still has to be there, in fact a bit lighter than otherwise, and it's not a thermal relic. Around 95:00, he reiterates the point that the small tunings and strokes of good luck should be avoided and replaced by something like one major tuning that, however, has a well-defined task to fulfill. Two minutes later, he promotes a model of baryogenesis in which moduli decay to one proton, with some reasonable odds.

Have a look at slide 22 of the link. It has the same luminosity for the Higgs as the ILC (at 1/10th of the cost) and a number of other advantages. There is even a proposal for an 80 km ring called TLEP – for later!

Dear Dilaton, the main advantages are physics advantages because it will be easier to accumulate high statistics than at the linear collider, which throws away the beams with each collision. Here the beams will be circulating and continually used at the interaction points, while their energy losses from synchrotron radiation will be compensated. Cost also has to do with statistics, and the higher the statistics, the better the accumulated knowledge (smaller errors). For example, the LHC will not be able to reach errors better than 5% for crucial parameters due to the systematics of the complex collisions, no matter how long it runs (and it will be cheaper to keep running it than to build a new accelerator).

That's of course a good question. There is a technical definition people are familiar with: avoid couplings and similar parameters much smaller than one, or forced to sit in a very narrow band relative to the preferred value, unless there is an enhancement of symmetry etc. at this special place.

However, there are various imperfections of this definition. For example, it doesn't acknowledge that one may effectively fine-tune things by adding lots of new fields and couplings that randomly give what is needed. So some people implicitly demand the probability to be significant even if the probability includes the probability of choosing one particular spectrum (discrete choices for fields etc.).

That's a point Nima mentions at one moment of the talk because natural theories usually have more room to be adjusted when they have many scalars that are unrestricted. So perhaps there is a complementarity and the game is completely different if one pays attention to many sources of fine-tuning.

Then, "natural" is also being used as a word without any strict definition.

The Peccei-Quinn symmetry is a new U(1) symmetry used in models that try to explain why the coefficient of the CP-odd "instanton counting" term \(F\wedge F\) in QCD is so tiny. The coefficient is reinterpreted as a dynamical field, an axion, and if this axion is charged under a U(1) symmetry, the PQ symmetry, one gets a natural explanation why its vev and mass are so low (it's the Goldstone boson of the broken PQ symmetry).
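The mechanism can be summarized by one formula (standard axion physics, added for completeness): the coefficient \(\theta\) of the \(F\wedge F\) term is promoted to \(\theta + a/f_a\) with a dynamical axion field \(a\), and QCD instanton effects generate a potential

\[
V(a) \approx \Lambda_{\rm QCD}^4 \left[ 1 - \cos\left(\theta + \frac{a}{f_a}\right) \right],
\]

whose minimum sits exactly at the CP-conserving point \(\theta + \langle a\rangle/f_a = 0\), so the observable CP-odd coefficient relaxes to zero dynamically.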

These models - QFTs - may also be represented by various stringy vacua. The PQ charge is carried by a new brane in the braneworlds. In QFT, PQ models have various consequences for the nonrenormalizable, higher-order interactions etc. Nima mentioned one correlation between PQ and some other quantities.

It's rather technical stuff that's familiar to most phenomenologists.

I don't know if it's because he is young, or because he is not an Anglo-Saxon, or because unlike many others he is actually really smart, but the guy actually looks and sounds really smart.

So many of them pretend to be much smarter than they actually are and, like I've said before, try too hard to talk poetically.

By Anglo-Saxon I don't mean race but the sort of university physics culture that has been dominant in universities in Western countries. I wonder if his style will change with age and become like the others'. This is not a guy who is extremely good at math but has problems with common sense and reason like many others in physics. It is very common for many Iranians to try to show that the things they have are better than others' and that they are smarter (Greeks do it too but they are usually less ridiculous) and they are proud of Nima and Vafa. Does anyone know if Nima ever talks against the regime like he should?

What I don't understand is: now that naturalness is given up, do we have any reasons to expect SUSY at the LHC? I know that it is required to remove the tachyon from string theory, but the Planck scale is far away. So is there a low-energy argument for SUSY? Maybe something about dark matter?

In principle, relying on guaranteed insights only, SUSY may indeed kick in at arbitrarily high energies. That doesn't mean that there are no reasons to think that it could be within reach.

If SUSY gives the dark matter, a big portion of the viable models implies that for cosmological reasons, the dark matter particle - a superpartner if SUSY is relevant here - isn't heavier than a few TeV.

Also, the grand unification of couplings works nicely in the MSSM - much more accurately than in the Standard Model, where it's close but demonstrably fails - but this advantage of the MSSM over the SM only holds if the superpartners are much closer to the electroweak scale than to the Planck scale. It's not a strict proximity argument but it prefers near-electroweak superpartners, too.
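The contrast can be seen in a few lines of one-loop running (a rough sketch with approximate inputs at \(M_Z\), my addition): computing the pairwise "crossing scales" where two of the three gauge couplings meet, the SM crossings are spread over several orders of magnitude while the MSSM ones cluster near \(2\times 10^{16}\GeV\).

```python
import math

# One-loop running: 1/alpha_i(mu) = 1/alpha_i(M_Z) - (b_i / 2pi) ln(mu/M_Z),
# with the standard one-loop beta coefficients (GUT-normalized U(1)).
M_Z = 91.19                       # GeV
inv_alpha = [59.0, 29.6, 8.47]    # approximate 1/alpha_i at M_Z
b_SM = [41/10, -19/6, -7]
b_MSSM = [33/5, 1, -3]

def crossing_scales(b):
    """Return t = ln(mu/M_Z) where couplings i and j meet, for each pair."""
    pairs = [(0, 1), (1, 2), (0, 2)]
    return [2 * math.pi * (inv_alpha[i] - inv_alpha[j]) / (b[i] - b[j])
            for i, j in pairs]

for name, b in [("SM", b_SM), ("MSSM", b_MSSM)]:
    scales = [M_Z * math.exp(t) for t in crossing_scales(b)]
    # SM: crossings spread from ~1e13 to ~1e17 GeV (no unification);
    # MSSM: all three cluster near 2e16 GeV.
    print(name, ["%.1e" % s for s in scales])
```

Pushing the superpartners towards the Planck scale would make the running SM-like over most of the energy range, destroying exactly this clustering.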

Most importantly, the hierarchy problem indicates that something stabilizes the Higgs mass, which would otherwise look unnaturally tiny relative to what it could get from quantum corrections - up to the Planck scale. So it seems that the spectrum of new particles keeping the Higgs light should be "rather close" to the Higgs mass; the probability that the lightest superpartners (as guarantors of a light Higgs) are heavier than M should decrease with M so that it becomes substantially smaller when M is an order of magnitude above the Higgs mass.
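The statement can be made quantitative with the textbook top-loop estimate (my addition): the largest quantum correction to the Higgs mass squared is

\[
\delta m_H^2 \sim -\frac{3 y_t^2}{8\pi^2}\,\Lambda^2,
\]

which exceeds \((125\GeV)^2\) by dozens of orders of magnitude if the cutoff \(\Lambda\) is near the Planck scale. With superpartners, the quadratic pieces cancel between the top and stop loops and the leftover grows only like \(m_{\tilde t}^2 \ln(\Lambda/m_{\tilde t})\) - which is why the stabilization works best when the superpartners are not too far above the Higgs mass.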

None of these arguments is strict. We don't have a universal solid method to calculate the masses of the superpartners in Nature. Still, the arguments above inevitably influence the expectations of a rational person, much like the experimental data that have so far influenced them in the opposite direction. ;-)