While experimentalists gathered in Grenoble present the latest results on High-Energy Physics searches and measurements, phenomenologists like Sven Heinemeyer are working 24/7 to update the picture of the breathing space left for Supersymmetry, in the light of the most recent searches.

You of course do not need to be reminded that Supersymmetry is not a theory but a framework, within which a host of possible manifestations of subnuclear physics are configurable based on the value of 120-or-so free parameters. Because of that, if one wants to discuss in detail which versions of SUSY are the most likely ones left on the table, and what the values of the most representative and critical theory parameters are, one needs more than paper and pencil.

A 30-page article is a better shot if one wants to give the flavour of how the recent experimental constraints combine with earlier data to provide best-fit values for tan(beta), M_A, M_0 and M_1/2, etcetera; and a constantly updated web page is still better. So let us see what Sven's group has produced for us. Okay, okay - I will try not to be lazy and give recognition straight off: the authors of the study I am discussing are O. Buchmueller, R. Cavanaugh, D. Colling, A. De Roeck, M.J. Dolan, J.R. Ellis, H. Flacher, S. Heinemeyer, G. Isidori, D. Martínez Santos, K.A. Olive, S. Rogerson, F.J. Ronga, and G. Weiglein.

The paper is titled "Supersymmetry and Dark Matter in Light of LHC 2010 and XENON 100 Data". Yes, I know. 2010. But keep reading and you'll be rewarded.

The analysis focuses on minimal-minimal versions of the MSSM, to facilitate the discussion. And the MSSM is of course the minimal supersymmetric extension of the Standard Model, i.e. the one which introduces the smallest possible number of extra parameters (about 100). Of course: when you have such a beast to deal with, you are forced to simplify matters a bit. The paper considers four specific versions of the MSSM: a constrained version, the CMSSM, where the soft SUSY-breaking mass parameters are assumed to be universal at a very high energy scale where theories unify; the NUHM1, which relaxes the previous constraint to allow freedom in the Higgs boson masses; a variation of the former called VCMSSM (I know, we're running out of acronyms with these theories!); and the good old mSUGRA model. Details in the paper.

For each model considered, the group analyzed the effect of the new LHC measurements based on the 2010 data sample, as well as a few additional results obtained with 2011 data; they also included in the analysis the results obtained by the XENON 100 collaboration, which is searching for a direct signal of dark matter particles and has produced constraints on the spin-independent cross section of these particles on nuclei.

The method of investigation chosen is a frequentist analysis, whereby the data are combined into a likelihood function which is then sampled in the interesting subspaces of the parameter space, obtaining nice coloured two-dimensional figures which describe where the bulk of the probability lies. This is performed with a tool called "MasterCode", which employs a Markov Chain Monte Carlo to sample millions of points in parameter space. MasterCode allows one to incorporate new constraints in older samplings by evaluating their impact on the likelihood, sizably simplifying the work.
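To make the idea concrete, here is a minimal, self-contained sketch of how a Markov Chain Monte Carlo samples a chi-square surface in a two-parameter space. This is emphatically not MasterCode itself: the chi2 function below, the minimum at (300, 500), the starting point and the step size are all invented purely for illustration.

```python
import math
import random

def chi2(m0, m12):
    # Invented quadratic chi-square surface with minimum at (300, 500);
    # a real fit would build this from dozens of experimental constraints.
    return ((m0 - 300.0) / 100.0) ** 2 + ((m12 - 500.0) / 150.0) ** 2

def metropolis(n_steps, step=30.0, seed=1):
    random.seed(seed)
    m0, m12 = 400.0, 400.0          # arbitrary starting point
    c_old = chi2(m0, m12)
    chain = []
    for _ in range(n_steps):
        t0 = m0 + random.gauss(0.0, step)
        t12 = m12 + random.gauss(0.0, step)
        c_new = chi2(t0, t12)
        # Metropolis rule: accept with probability min(1, exp(-delta_chi2/2))
        if c_new < c_old or random.random() < math.exp(-(c_new - c_old) / 2.0):
            m0, m12, c_old = t0, t12, c_new
        chain.append((m0, m12, c_old))
    return chain

chain = metropolis(20000)
chi2_min = min(c for _, _, c in chain)
# delta-chi2 relative to the best-fit point, the quantity plotted in the paper
delta = [(m0, m12, c - chi2_min) for m0, m12, c in chain]
```

The key trick MasterCode exploits is visible in the last line: once a chain exists, a new constraint only requires re-evaluating the likelihood on the stored points rather than re-running the sampling.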

The paper is quite extensive and describes each input in detail, examining what impact the new measurements have on the relative probability of different points in the parameter space. I have no chance to provide an exhaustive summary, so let me peek at the results for the mass of the gluino in the four considered theories. A picture is shown below.

From left to right and from top to bottom, the four theories considered in the graphs are the CMSSM, the NUHM1, the VCMSSM, and mSUGRA. The dashed lines are previous best-fit curves, the full lines include the new LHC results. Bear in mind that what is plotted is always the delta-chisquare from the minimum (i.e., relative to the maximum-likelihood point): of course, the "likelihood" that these parameter values describe the reality of Nature has decreased given the null search results. And as expected, the preferred mass value of the gluino generally moves up as new LHC results are included in the fit. All theories considered are now pointing at gluinos with masses exceeding one TeV.
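For readers less used to delta-chisquare curves, here is a toy example of how a confidence interval is read off one: for a single parameter, delta-chi2 <= 1 bounds the approximate 68% CL interval and delta-chi2 <= 4 the 95% CL one. The parabola below, with a best fit at 1200 GeV and a 200 GeV "one sigma", is invented; the paper's curves come from the full fit.

```python
def delta_chi2(m_gluino):
    # Hypothetical delta-chi2 profile: best fit at 1200 GeV, 200 GeV width.
    return ((m_gluino - 1200.0) / 200.0) ** 2

# Scan a mass range and collect the points inside the 68% CL band.
masses = range(400, 2401, 10)
in_68 = [m for m in masses if delta_chi2(m) <= 1.0]
interval_68 = (min(in_68), max(in_68))   # (1000, 1400) for this toy parabola
```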

Despite this upward shift of the gluino masses, the best-fit points overall do not move dramatically after the inclusion of the new inputs. As the authors comment in their summary,

The negative results of the searches to date are not in serious tension with the ranges of parameter spaces favoured pre-LHC in the models we have studied. The favoured regions yet to be explored offer good prospects for the SUSY searches during the LHC run in 2011/12. However, it is worthwhile to consider whether the exclusion by the LHC of very light squark and gluino masses may already have messages for future experimental studies of supersymmetry (if it exists).

Wow. Die-hard SUSY aficionados such as my friends Sven and Albert are now adding disclaimer brackets to their summaries! I am impressed. Recently I commented that the failed 2010 searches for SUSY at the LHC were already starting to worry theorists. Now I receive positive confirmation in print. Indeed, the authors continue their summary by examining the impact of these results on the possible design of a next linear collider. I prefer to keep myself on the ground and claim that if the LHC sees nothing, I doubt we will be discussing whether 500+500 GeV are enough or 600+600 would be better, and similar angels-on-a-pinhead issues. But I congratulate the authors of the study for a very extensive survey.

Of course, those of you who frowned when I first mentioned 2010 data are now wondering what they are still doing here. That's right: you should visit the MasterCode web site, where updated results are shown and compared to the latest 2011 limits! I decided to discuss the paper here, but maybe there's space for one more figure:

This is the M_0 - M_1/2 parameter space of the CMSSM after the inclusion of the 2011 results shown at EPS! The colours provide the fit probability, and the green line encompasses the areas with a fit probability better than 5%. The best-fit probability is 11%!! Does this mean that the CMSSM is practically excluded at 90% CL over all the parameter space? I turn this question to Sven.
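For the curious: the "fit probability" is just the p-value of the chi-square at the best-fit point, P(chi2 >= chi2_min) for the fit's number of degrees of freedom. A minimal sketch of that computation, with an invented chi2_min and dof (the 11% quoted above comes from the actual MasterCode fit, not from these numbers):

```python
import math

def chi2_sf(x, dof):
    """Survival function of the chi-square distribution, P(chi2 >= x).

    Closed-form series valid for even dof:
    exp(-x/2) * sum_{k=0}^{dof/2 - 1} (x/2)^k / k!
    """
    assert dof % 2 == 0, "this simple formula only covers even dof"
    term = 1.0
    total = 1.0
    for k in range(1, dof // 2):
        term *= (x / 2.0) / k
        total += term
    return math.exp(-x / 2.0) * total

# Hypothetical fit outcome: chi2_min = 28.8 with 20 degrees of freedom
p = chi2_sf(28.8, 20)   # roughly 9%, i.e. a mediocre but not excluded fit
```

A model is "practically excluded at 90% CL" in this language when even its best-fit point has a p-value below 10%, which is what makes the 11% figure interesting.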

Comments

Actually I was wondering about this 'fit probability' when I first saw this paper. It is not a term I am familiar with. Is it just the p-value of the fit or something similar at each point? I cannot find the term used elsewhere nor a definition anywhere.

The reason for the decreasing fit probability is the following: 'low-energy data' favors relatively light SUSY masses, where mostly uncolored particles are concerned. ATLAS/CMS data disfavors light SUSY masses, where, however, only colored particles are concerned. In a constrained model such as the CMSSM, where colored and uncolored masses are tightly connected, a tension between low-energy and high-energy data is slowly growing, resulting in a reduced fit probability.

On the other hand, there are many (equally constrained) SUSY models (e.g. GMSB, AMSB, NUHM2, ...) in which the connection between the colored and uncolored sector naturally results in a larger mass gap between these two sectors. If the LHC continues to find no SUSY particles, those models will naturally attract more attention.
Even in the NUHM1 (which is effectively the CMSSM with M_A or mu as an additional free parameter) the fit probability is substantially higher than in the CMSSM after the inclusion of the 1/fb LHC data, as can be seen in fig. 6 here: http://mastercode.web.cern.ch/mastercode/news.php

Dear Sven, the proposed gap between the gluino masses and the other gaugino masses looks awkward to me. If the gluinos are many TeVs, I would kind of expect the other superpartners not to be much lighter, either - and the preference for low-mass particles is a coincidence caused by other things. It's just looking strange why Nature would make some superpartners so much lighter - and why it would be exactly those that can't be easily produced by the available collider. ;-)

But before I converge to this conclusion, a question: why are there the islands - surviving in 3 of the 4 theories - near 400 GeV gluino where the chi-squared graphs make pretty sharp dips to the "realistic" bottom realm? Are those signals inevitable for some a priori reasons, and if they're not, what kind of experiments or events drive the existence of these islands?

This is a completely irrational, Nazi-style propaganda, Garrett. The LHC has excluded *all* models that predicted any new physics in the first inverse femtobarn, and SUSY is not any more "punished" than any other theoretical approach that claimed to have "observable predictions" at this point. For the same reason, SUSY obviously remains the top candidate for the new physics that the LHC may find but doesn't have to find in the future.

Aha. For eight years I have heard it quoted as the single most influential pro-SUSY experimental result (from "strong evidence" to "clear indication"), and now that it is not so much in line with the others any more, we think of removing it? No sir. Please let us not forget that post-data inferences are always going to produce biased conclusions.

Dear Tommaso, for example, I have defended SUSY by hundreds of arguments - sometimes even by experimental deviations - but (g-2) of the muon was never among them. At this moment, when one is satisfied with similar-size "anomalies", he can exclude both the SM and the MSSM. And indeed, it is very plausible that both are excluded. After all, Nature has no reason to be minimal. At the same moment, it's totally plausible that one or some of the input data used to make the exclusion is wrong.

When experiments lead to potential contradictions, one *always* has to recheck and/or revise his reasoning. Not doing so would mean not listening to the experiments.

Oh, I'm sorry Tommaso, don't get me wrong.
I don't want to get involved in history, I was just saying that the fits are strongly driven by (g-2)_mu. And lately it seems that it might not be a "signal".
I'm not defending the MSSM nor trying to do post-data inferences (I'm not saying "the result is ugly, let's forget about (g-2)").
And they are right to include it. It's just curiosity: what changes if it is removed?
I think it's fair, like asking what changes if DM (another controversial field) is removed?

The best option would obviously be to fire all the people who work in high energy physics and force them to do more productive things with their time, or by all means continue to create metaphysical theories in their spare-time.

I liked the points made by David Gross in the closing talk. He said that to model supersymmetry with CMSSM and other limited versions of SUSY would be like testing for the standard model using a version where all the quark masses were equal.

The MSSM without any unification assumptions has more than 100 parameters. It is simply hard to imagine that any theory with 100 free parameters is really a theory realized in nature. Therefore, some GUT assumptions (together with the fact that the coupling constants become equal at around 2 x 10^16 GeV) are very reasonable. We just do not know *which* GUT-based model is realized. The CMSSM is one of the simplest assumptions, but others (GMSB, AMSB, ...) are equally simple.

If one (or some) of these models has a larger gap (one does not have to put the gluino at 3 TeV yet...) which fits all the data better, that model will be preferred in the near future.

There is absolutely no reason to leave out (g-2)_mu from any analysis. This is 'picking the results', but not physics.
Many smart people worked very hard on this observable, from the theory side and from the experimental side, and there is quite good agreement between different evaluations. Most recently also the issue with the tau data may have been fully resolved (http://arxiv.org/abs/arXiv:1101.2872).
It is correct that (g-2)_mu has a strong impact on SUSY fits, but this is because there is a discrepancy between the SM and the experimental data, and another model can easily fix it. :-)

It is indeed spam Daniel. You cannot see it but I do. Apparently the filter blocks it from view if you're not an admin, but still warns you that they arrived. Sorry. Hope the team of Science20 fixes this glitch.

Spammers will always be ahead of spam blocking but in this case I manually blocked the IP range to try and stop it, since they were hammering this particular article since last night.

I will change it so that only approved comments send out notifications. When we first created this, only registered users could leave comments so email notification was accurate. Thanks for pointing out it was sending emails even if comments were unmoderated, Daniel.

We're not Facebook, we don't have $500 million for programmers. :) And we are working around the world standard in Akismet so we want it all to play nicely together but there will be a few hiccups before it is all done.