Wednesday, 16 May 2018

In the particle world the LHC still attracts the most attention, but in parallel there is ongoing progress at the low-energy frontier. A new episode in that story is the Qweak experiment at Jefferson Lab in the US, which just published its final results. Qweak was shooting a beam of 1 GeV electrons at a hydrogen (so basically proton) target to determine how the scattering rate depends on the electron's polarization. Electrons and protons interact with each other via the electromagnetic and weak forces. The former is much stronger, but it is parity-invariant, i.e. it does not care about the direction of polarization. The weak force, on the other hand, has been known to violate parity since the classic Wu experiment in 1956. Indeed, the Standard Model postulates that the Z boson, which mediates the weak force, couples with different strengths to left- and right-handed particles. The resulting asymmetry between the low-energy electron-proton scattering cross sections for left- and right-handed polarized electrons is predicted to be at the 10^-7 level. That has been observed experimentally many times before, but Qweak was able to measure it with the best precision to date (4% relative), and at a lower momentum transfer than previous experiments.
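For orientation, the 10^-7 figure can be checked on the back of an envelope with the leading-order formula A_PV ≈ G_F Q²/(4√2 π α) × QW; the Q² value below is only a rough stand-in for Qweak's kinematics, so treat this as an order-of-magnitude sketch rather than the experiment's actual analysis:

```python
import math

# Back-of-the-envelope size of the parity-violating asymmetry in elastic e-p
# scattering: A_PV ~ G_F Q^2 / (4*sqrt(2)*pi*alpha) * Q_W.
GF = 1.166e-5        # Fermi constant [GeV^-2]
alpha = 1 / 137.0    # fine-structure constant
Q2 = 0.025           # momentum transfer squared [GeV^2] (illustrative value)
QW = 0.071           # proton weak charge

A_PV = GF * Q2 / (4 * math.sqrt(2) * math.pi * alpha) * QW
print(f"A_PV ~ {A_PV:.1e}")   # of order 10^-7
```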

What is the point of this exercise? Low-energy parity violation experiments are often sold as precision measurements of the so-called Weinberg angle, which is a function of the electroweak gauge couplings - the fundamental parameters of the Standard Model. I don't much like that perspective, because the electroweak couplings, and thus the Weinberg angle, can be more precisely determined from other observables, and Qweak is far from achieving a competitive accuracy. The utility of Qweak is better seen in the effective theory picture. At low energies one can parameterize the relevant parity-violating interactions between protons and electrons by the contact term
(QW/4v²) · (ē γ^μ γ₅ e)(p̄ γ_μ p)
where v ≈ 246 GeV, and QW is the so-called weak charge of the proton. Such interactions arise in the Standard Model from the exchange of a Z boson between the electron and the quarks that make up the proton. At low energies, the exchange diagram is well approximated by the contact term above with QW = 0.0708 (somewhat smaller than the "natural" value QW ~ 1, due to numerical accidents making the Z boson effectively protophobic). The measured polarization asymmetry in electron-proton scattering can be re-interpreted as a determination of the proton weak charge: QW = 0.0719 ± 0.0045, in perfect agreement with the Standard Model prediction.

New physics may affect the magnitude of the proton weak charge in two distinct ways. One is by altering the strength with which the Z boson couples to matter. This happens for example when light quarks mix with their heavier exotic cousins with different quantum numbers, as is often the case in models from the Randall-Sundrum family. More generally, modified couplings to the Z boson could be a sign of quark compositeness. Another way is by generating new parity-violating contact interactions between electrons and quarks. This can be a result of as-yet-unknown short-range forces which distinguish left- and right-handed electrons. Note that the apparent violation of lepton flavor universality in B-meson decays can be interpreted as a hint of the existence of such forces (although for that purpose the new force carriers do not need to couple to 1st generation quarks). Qweak's measurement puts novel limits on such broad scenarios. Whatever the origin, simple dimensional analysis allows one to estimate the possible change of the proton weak charge as
ΔQW ～ g*² v² / M*²
where M* is the mass scale of new particles beyond the Standard Model, and g* is their coupling strength to matter. Thus, Qweak can constrain new weakly coupled particles with masses up to a few TeV, or even 50 TeV particles if they are strongly coupled to matter (g*～4π).
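The quoted numbers follow from inverting the dimensional-analysis estimate, ΔQW ~ g*² v²/M*², for the scale M* that Qweak's precision can reach. A minimal sketch, with all O(1) factors dropped:

```python
import math

# Back-of-the-envelope reach: invert delta(Q_W) ~ g*^2 v^2 / M*^2 for M*.
# dQW is Qweak's quoted absolute precision on the proton weak charge.
v = 246.0      # electroweak scale [GeV]
dQW = 0.0045   # uncertainty on the proton weak charge

for gstar in (1.0, 4 * math.pi):
    Mstar = gstar * v / math.sqrt(dQW)
    print(f"g* = {gstar:5.2f}: sensitive to M* up to ~{Mstar / 1e3:.0f} TeV")
```

The weakly coupled case lands at a few TeV, and the strongly coupled one (g* ~ 4π) near 50 TeV, as stated above.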

What is the place of Qweak in the larger landscape of precision experiments? One can illustrate it by considering a simple example where heavy new physics modifies only the vector couplings of the Z boson to up and down quarks. The best existing constraints on such a scenario are displayed in this plot:

From the size of the rotten egg region you can see that the Z boson couplings to light quarks are currently known with per-mille accuracy. Somewhat surprisingly, the LEP collider, which back in the 1990s produced tens of millions of Z bosons to precisely study their couplings, is not at all the leader in this field. In fact, better constraints come from precision measurements at very low energies: pion, kaon, and neutron decays, parity-violating transitions in cesium atoms, and the latest Qweak results, which make a difference too. The importance of Qweak is even more pronounced in more complex scenarios where the parameter space is multi-dimensional.

Qweak is certainly not the last salvo on the low-energy frontier. Similar but more precise experiments are being prepared as we speak (I wish the follow-up were called SuperQweak, or SQweak for short). Who knows, maybe quarks are made of more fundamental building blocks at the scale of ~100 TeV, and we'll first find out thanks to parity violation at very low energies.

Monday, 7 May 2018

It must have been great to be a particle physicist in the 1990s. Everything was simple and clear then. They knew that, at the most fundamental level, nature was described by one of the five superstring theories which, at low energies, reduced to the Minimal Supersymmetric Standard Model. Dark matter also had a firm place in this narrative, being identified with the lightest neutralino of the MSSM. This simple-minded picture strongly influenced the experimental program of dark matter detection, which was almost entirely focused on the so-called WIMPs in the 1 GeV - 1 TeV mass range. Most of the detectors, including the current leaders XENON and LUX, are blind to sub-GeV dark matter, as slow and light incoming particles are unable to transfer a detectable amount of energy to the target nuclei.
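The kinematic blindness is easy to quantify: the maximum energy a dark matter particle of mass m_chi can deposit on a nucleus of mass m_N is E_R = 2μ²v²/m_N, with μ the reduced mass. A quick illustration (masses and velocity below are round illustrative numbers, not a detector-specific calculation):

```python
def max_recoil_eV(m_chi, m_N=122.0, v=1e-3):
    """Max nuclear recoil energy [eV] for dark matter of mass m_chi [GeV]
    hitting a nucleus of mass m_N [GeV] (~xenon) at velocity v (units of c,
    v ~ 1e-3 for galactic dark matter)."""
    mu = m_chi * m_N / (m_chi + m_N)   # reduced mass [GeV]
    return 2 * mu**2 * v**2 / m_N * 1e9  # convert GeV -> eV

print(max_recoil_eV(100.0))  # ~50 keV: a 100 GeV WIMP clears keV thresholds
print(max_recoil_eV(0.1))    # well below 1 eV: a 100 MeV particle is invisible
```

The quadratic dependence on the (small) reduced mass is what kills the sensitivity of ton-scale noble-liquid detectors to light particles.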

Sometimes progress consists in realizing that you know nothing, Jon Snow. The lack of new physics at the LHC invalidates most of the historical motivations for WIMPs. Theoretically, the mass of the dark matter particle could be anywhere between 10^-30 GeV and 10^19 GeV. There are myriads of models positioned anywhere in that range, and it's hard to argue with a straight face that any particular one is favored. We now know that we don't know what dark matter is, and that we had better search in many places. If anything, the small-scale problems of the 𝞚CDM cosmological model can be interpreted as a hint against the boring WIMPs and in favor of light dark matter. For example, if it turns out that dark matter has significant (nuclear-size) self-interactions, that can only be realized with sub-GeV particles.

It takes some time for experiment to catch up with theory, but the process is already well in motion. There is some fascinating progress on the front of ultra-light axion dark matter, which deserves a separate post. Here I want to highlight the ongoing developments in direct detection of dark matter particles with masses between MeV and GeV. Until recently, the only available constraint in that regime was obtained by recasting data from the XENON10 experiment - the grandfather of the currently operating XENON1T. In XENON detectors there are two ingredients of the signal generated when a target nucleus is struck: ionization electrons and scintillation photons. WIMP searches require both to discriminate signal from background. But MeV dark matter interacting with electrons could eject electrons from xenon atoms without producing scintillation. In the standard analysis, such events would be discarded as background. However, this paper showed that, recycling the available XENON10 data on ionization-only events, one can exclude dark matter in the 100 MeV ballpark with the cross section for scattering on electrons larger than ~0.01 picobarn (10^-38 cm^2). This already has non-trivial consequences for concrete models; for example, a part of the parameter space of milli-charged dark matter is currently best constrained by XENON10.

It is remarkable that so much useful information can be extracted by basically misusing data collected for another purpose (earlier this year DarkSide-50 recast their own data in the same manner, excluding another chunk of the parameter space). Nevertheless, dedicated experiments will soon be taking over. Recently, two collaborations published first results from their prototype detectors: one is SENSEI, which uses 0.1 gram of silicon CCDs, and the other is SuperCDMS, which uses 1 gram of silicon semiconductor. Both are sensitive to eV-scale energy depositions, thanks to which they can extend the search to lower dark matter masses, setting novel limits in the virgin territory between 0.5 and 5 MeV. A compilation of the existing direct detection limits is shown in the plot. As you can see, above 5 MeV the tiny prototypes cannot yet beat the XENON10 recast. But that will certainly change as soon as full-blown detectors are constructed, after which the XENON10 sensitivity should be improved by several orders of magnitude.

Should we be restless waiting for these results? Well, for any single experiment the chance of finding nothing is immensely larger than that of finding something. Nevertheless, the technical progress and the widening scope of searches offer some hope that the dark matter puzzle may be solved soon.

Thursday, 19 April 2018

Proving Einstein wrong is the ultimate ambition of every crackpot and physicist alike. In particular, Einstein's theory of gravitation - general relativity - has been a victim of constant harassment. That is harder than it sounds: it is trivial to modify gravity at high energies (short distances), for example by embedding it in string theory, but it is notoriously difficult to change its long-distance behavior. At the same time, the motivation to keep trying goes beyond intellectual gymnastics. For example, the accelerated expansion of the universe may be a manifestation of modified gravity (rather than of a small cosmological constant).

In Einstein's general relativity, gravitational interactions are mediated by a massless spin-2 particle - the so-called graviton. This is what gives gravity its hallmark properties: long range and universality. One obvious way to screw with Einstein is to add a mass to the graviton, as entertained already in 1939 by Fierz and Pauli. The Particle Data Group quotes the constraint m ≤ 6*10^−32 eV, so we are talking about a Compton wavelength comparable to the size of the observable universe. Yet even that teeny mass may cause massive troubles. In 1970 the Fierz-Pauli theory was killed by the van Dam-Veltman-Zakharov (vDVZ) discontinuity. The problem stems from the fact that a massive spin-2 particle has 5 polarization states (0, ±1, ±2), unlike a massless one, which has only two (±2). It turns out that the polarization-0 state couples to matter with a similar strength as the usual polarization-±2 modes, even in the limit where the mass goes to zero, and thus mediates an additional force which differs from the usual gravity. One finds that, in massive gravity, light bending would be 25% smaller, in conflict with the very precise observations of light deflection by the Sun. Van Dam and Veltman concluded that "the graviton has rigorously zero mass". Dead for the first time...
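The origin of the 25% deficit can be sketched in two lines. In one-graviton exchange between energy-momentum sources T1 and T2, the only difference between the massless and massive cases (beyond the propagator pole) is the coefficient of the trace term, and that coefficient does not go back to 1/2 as m → 0:

```latex
% One-graviton exchange between sources T_1, T_2 (schematic, normalizations suppressed):
\mathcal{A}_{m=0} \;\propto\; \frac{1}{q^2}\left(T_1^{\mu\nu}T_{2\,\mu\nu}
  - \tfrac{1}{2}\,T_1 T_2\right),
\qquad
\mathcal{A}_{m\neq 0} \;\propto\; \frac{1}{q^2-m^2}\left(T_1^{\mu\nu}T_{2\,\mu\nu}
  - \tfrac{1}{3}\,T_1 T_2\right).
```

For two static masses the brackets give ½ρ₁ρ₂ versus ⅔ρ₁ρ₂, so matching the observed Newton's constant rescales the massive-graviton coupling by 3/4. For light, the energy-momentum tensor is traceless and the second term drops out entirely, so the bending angle comes out 3/4 of the GR value - 25% smaller - no matter how small m is.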

The second coming was heralded soon after by Vainshtein, who noticed that the troublesome polarization-0 mode can be shut off in the proximity of stars and planets. This can happen in the presence of graviton self-interactions of a certain type. Technically, what happens is that the polarization-0 mode develops a background value around massive sources which, through the derivative self-interactions, renormalizes its kinetic term and effectively diminishes its interaction strength with matter. See here for a nice review and more technical details. Thanks to the Vainshtein mechanism, the usual predictions of general relativity are recovered around large massive sources, which is exactly where we can best measure gravitational effects. The possible self-interactions leading to a healthy theory without ghosts have been classified, and go under the name of dRGT massive gravity.

There is however one inevitable consequence of the Vainshtein mechanism. The graviton self-interaction strength grows with energy, and at some point becomes inconsistent with the unitarity limits that every quantum theory should obey. This means that massive gravity is necessarily an effective theory with a limited validity range and has to be replaced by a more fundamental theory at some cutoff scale 𝞚. This is of course nothing new for gravity: the usual Einstein gravity is also an effective theory valid at most up to the Planck scale MPl～10^19 GeV. But for massive gravity the cutoff depends on the graviton mass and is much smaller for realistic theories. At best,
𝞚max ～ (m² MPl)^1/3
So the massive gravity theory in its usual form cannot be used at distance scales shorter than ～300 km. For particle physicists that would be a disaster, but for cosmologists this is fine, as one can still predict the behavior of galaxies, stars, and planets. While the theory certainly cannot be used to describe the results of tabletop experiments, it is relevant for the motion of celestial bodies in the Solar System. Indeed, lunar laser ranging experiments or precision studies of Jupiter's orbit are interesting probes of the graviton mass.
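For the record, here is the arithmetic behind the ~300 km, using the dRGT cutoff 𝞚max ~ (m² MPl)^1/3 with the graviton mass set at its experimental upper limit and all O(1) factors dropped:

```python
# Numerical check of the dRGT cutoff and the corresponding distance scale.
# O(1) factors are dropped throughout, so trust only the order of magnitude.
m = 1e-32          # graviton mass at its rough upper limit [eV]
M_Pl = 2.4e27      # reduced Planck mass [eV]
hbar_c = 1.97e-7   # hbar*c [eV*m]

Lambda3 = (m**2 * M_Pl) ** (1 / 3)   # the dRGT cutoff scale
distance = hbar_c / Lambda3          # breakdown distance, a few hundred km
print(f"Lambda_max ~ {Lambda3:.1e} eV  ->  {distance / 1e3:.0f} km")
```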

Now comes the latest twist in the story. Some time ago this paper showed that not everything is allowed in effective theories. Assuming that the full theory is unitary, causal, and local implies non-trivial constraints on the possible interactions in the low-energy effective theory. These techniques are suitable for constraining, via dispersion relations, derivative interactions of the kind required by the Vainshtein mechanism. Applying them to dRGT gravity, one finds that it is inconsistent to assume the theory is valid all the way up to 𝞚max. Instead, it must be replaced by a more fundamental theory already at a much lower cutoff scale, parameterized as 𝞚 = g*^1/3 𝞚max (the parameter g* is interpreted as the coupling strength of the more fundamental theory). The allowed parameter space in the g*-m plane is shown in this plot:

Massive gravity must live in the lower left corner, outside the gray area excluded theoretically and where the graviton mass satisfies the experimental upper limit m～10^−32 eV. This implies g* ≼ 10^-10, and thus a validity range some 3 orders of magnitude below 𝞚max. In other words, massive gravity is not a consistent effective theory at distance scales below ～1 million km, and thus cannot be used to describe the motion of falling apples, GPS satellites, or even the Moon. In this sense, it's not much of a competitor to, say, Newton. Dead for the second time.
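Again as a rough cross-check, plugging g* ~ 10^-10 into 𝞚 = g*^1/3 𝞚max (O(1) factors dropped, graviton mass at its upper limit):

```python
# How far down the positivity bounds push the breakdown distance.
m, M_Pl, hbar_c = 1e-32, 2.4e27, 1.97e-7   # [eV], [eV], [eV*m]
gstar = 1e-10                               # upper limit from the plot

Lambda_max = (m**2 * M_Pl) ** (1 / 3)       # naive dRGT cutoff
Lambda_new = gstar ** (1 / 3) * Lambda_max  # cutoff after the dispersion bounds
distance_km = hbar_c / Lambda_new / 1e3
print(f"breakdown distance ~ {distance_km:.1e} km")   # of order a million km
```

The g*^1/3 scaling is why a 10-orders-of-magnitude bound on the coupling translates into "only" 3 orders of magnitude on the cutoff.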

Is this the end of the story? For the third coming we would need a more general theory, with additional light particles beyond the massive graviton, which is theoretically consistent over a larger energy range, realizes the Vainshtein mechanism, and agrees with current experimental observations. This is hard, but not impossible to imagine. Whatever the outcome, what I like about this story is the role of theory in driving the progress, which is rarely seen these days. In the process, we have understood a lot of interesting physics whose relevance goes well beyond one specific theory. So the trip was certainly worth it, even if we find ourselves back at the departure point.

Monday, 9 April 2018

NA62 is a precision experiment at CERN. From their name you wouldn't suspect that they're doing anything noteworthy: the collaboration was running in the contest for the most unimaginative name, only narrowly losing to CMS... NA62 employs an intense beam of charged kaons to search for the very rare decay K+ → 𝝿+ 𝜈 𝜈. The Standard Model predicts the branching fraction BR(K+ → 𝝿+ 𝜈 𝜈) = 8.4x10^-11 with a small, 10% theoretical uncertainty (precious stuff in the flavor business). The previous measurement by the BNL-E949 experiment reported BR(K+ → 𝝿+ 𝜈 𝜈) = (1.7 ± 1.1)x10^-10, consistent with the Standard Model but still leaving room for large deviations. NA62 is expected to pinpoint the decay and measure the branching fraction with 10% accuracy, thus severely constraining new physics contributions. The wires, pipes, and gory details of the analysis were nicely summarized by Tommaso. Let me jump directly to explaining what it is good for from the theory point of view.

To this end it is useful to adopt the effective theory perspective. At a more fundamental level, the decay occurs because the strange antiquark inside the kaon undergoes the transformation sbar → dbar 𝜈 𝜈bar. In the Standard Model, the amplitude for that process is dominated by one-loop diagrams with W/Z bosons and heavy quarks. But kaons live at low energies and do not really see the fine details of the loop amplitude. Instead, they effectively see the 4-fermion contact interaction:
(1/𝞚²) (s̄L γ^μ dL)(𝜈̄L γ_μ 𝜈L) + h.c.
The mass scale suppressing this interaction is quite large, more than 1000 times larger than the W boson mass, which is due to the loop factor and small CKM matrix elements entering the amplitude. The strong suppression is the reason why the K+ → 𝝿+ 𝜈 𝜈 decay is so rare in the first place. The corollary is that even a small new physics effect inducing that effective interaction may dramatically change the branching fraction. Even a particle with a mass as large as 1 PeV coupled to the quarks and leptons with order one strength could produce an observable shift of the decay rate. In this sense, NA62 is a microscope probing physics down to 10^-20 cm distances, or up to PeV energies, well beyond the reach of the LHC or other colliders in this century. If the new particle is lighter, say order TeV mass, NA62 can be sensitive to a tiny milli-coupling of that particle to quarks and leptons.
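The "1000 times the W boson mass" can be verified on the back of an envelope: the effective scale is roughly m_W enhanced by the inverse square root of the loop factor times the CKM suppression. The inputs below are illustrative round numbers:

```python
import math

# Rough estimate of the scale suppressing the effective s -> d nu nu operator
# in the Standard Model: Lambda ~ m_W / sqrt(loop factor x CKM suppression).
g = 0.65                   # SU(2) gauge coupling
mW = 80.4                  # W boson mass [GeV]
Vts, Vtd = 0.040, 0.0087   # relevant CKM magnitudes

loop = g**2 / (16 * math.pi**2)
Lambda_eff = mW / math.sqrt(loop * Vts * Vtd)
print(f"Lambda_eff ~ {Lambda_eff / 1e3:.0f} TeV, i.e. ~{Lambda_eff / mW:.0f} x m_W")
```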

So, from a model-independent perspective, the advantages of studying the K+ → 𝝿+ 𝜈 𝜈 decay are quite clear. A less trivial question is what the future NA62 measurements can teach us about our cherished models of new physics. One interesting application is in the industry of explaining the apparent violation of lepton flavor universality in B → K l+ l- and B → D l 𝜈 decays. Those anomalies involve the 3rd-generation bottom quark, so a priori they need not have anything to do with kaon decays. However, many of the existing models introduce flavor symmetries controlling the couplings of the new particles to matter (instead of just ad-hoc interactions to address the anomalies). The flavor symmetries may then relate the couplings of different quark generations, and thus predict correlations between new physics contributions to B-meson and kaon decays. One nice example is illustrated in this plot:

The observable RD(*) parametrizes the preference for B → D 𝜏 𝜈 over similar decays with electrons and muons, and its measurement by the BaBar collaboration deviates from the Standard Model prediction by roughly 3 sigma. The plot shows that, in a model based on U(2)xU(2) flavor symmetry, a significant contribution to RD(*) generically implies a large enhancement of BR(K+ → 𝝿+ 𝜈 𝜈), unless the model parameters are tuned to avoid that. The anomalies in the B → K(*) 𝜇 𝜇 decays can also be correlated with large effects in K+ → 𝝿+ 𝜈 𝜈, see here for an example. Finally, in the presence of new light invisible particles, such as axions, the NA62 observations can be polluted by exotic decay channels, e.g. K+ → 𝝿+ + axion.

The K+ → 𝝿+ 𝜈 𝜈 decay is by no means the magic bullet that will inevitably break the Standard Model. It should be seen as one piece of a larger puzzle that may or may not provide crucial hints about new physics. For the moment, NA62 has analyzed only a small batch of data collected in 2016, and their error bars are still larger than those of BNL-E949. That should change soon when the 2017 dataset is analyzed. More data will be acquired this year, with 20 signal events expected before the long LHC shutdown. Simultaneously, another experiment called KOTO studies an even more rare process where neutral kaons undergo the CP-violating decay KL → 𝝿0 𝜈 𝜈, which probes the imaginary part of the effective operator written above. As I wrote recently, my feeling is that low-energy precision experiments are currently our best hope for a better understanding of fundamental interactions, and I'm glad to see a good pace of progress on this front.

Sunday, 1 April 2018

Artificial intelligence (AI) is entering our lives. It's been over 20 years now since the watershed moment of Deep Blue versus Garry Kasparov. Today, people study the games of AlphaGo against itself to get a glimpse of what a superior intelligence would be like. But at the same time AI is getting better at copying human behavior. Many Apple users have grown emotionally attached to Siri. Computers have not only learnt to drive cars, but also not to slow down when a pedestrian is crossing the road. The progress is very visible to the blogging community. Bots commenting under my posts have evolved well past the !!!buy!!!viagra!!!cialis!!!hot!!!naked!!! sort of thing. Now they refer to the topic of the post, drop an informed comment, an interesting remark, or a relevant question, before pasting a link to a revenge porn website. Sometimes it's really a pity to delete those comments, as they can be more to-the-point than those written by human readers.

AI is also entering the field of science at an accelerated pace, and particle physics is, as usual, in the avant-garde. It's not a secret that physics analyses for the LHC papers (even if finally signed by 1000s of humans) are in reality performed by neural networks, which are just beefed-up versions of Alexa developed at CERN. The hottest topic in experimental high-energy physics is now machine learning, where computers teach humans the optimal way of clustering jets, or telling quarks from gluons. The question is when, not if, AI will become sophisticated enough to perform the creative work of theoreticians.

It seems that the answer is now.

Some of you might have noticed a certain Alan Irvine, affiliated with the Los Alamos National Laboratory, regularly posting on arXiv single-author theoretical papers on fashionable topics such as the ATLAS diphoton excess, the LHCb B-meson anomalies, the DAMPE spectral feature, etc. Many of us have received emails from this author requesting citations. Recently I got one myself; it seemed overly polite, but otherwise it didn't differ in relevance or substance from other similar requests. During the last two and a half years, A. Irvine has accumulated a decent h-factor of 18. His papers have been submitted to prestigious journals in the field, such as PRL, JHEP, or PRD, and some of them were even accepted after revisions. The scandal broke out a week ago when a JHEP editor noticed that an extensive revision, together with a long cover letter, was submitted within 10 seconds of receiving the referee's comments. Upon investigation, it turned out that A. Irvine never worked at Los Alamos, that nobody in the field has ever met him in person, and that the IP from which the paper was submitted was that of the well-known Ragnarok Thor server. A closer analysis of his past papers showed that, although linguistically and logically correct, they were merely a compilation of equations and text from the previous literature without any original addition.

Incidentally, arXiv administrators have been aware that, for a few years now, all source files in the daily hep-ph listings have been downloaded for an unknown purpose by automated bots. When you have excluded the impossible, whatever remains, however improbable, must be the truth. There is no doubt that A. Irvine is an AI bot that was trained on real hep-ph input to produce genuine-looking particle theory papers.

The works of A. Irvine have been quietly removed from arXiv and journals, but difficult questions remain. What was the purpose of it? Was it a spoof? A parody? A social experiment? A Facebook research project? A Russian provocation? And how could it pass unnoticed for so long within the theoretical particle community? What's most troubling is that, if there was one, there can easily be more. Which other papers on arXiv are written by AI? How can we recognize them? Should we even try, or maybe the dam is already broken and we have to accept the inevitable? Is Résonaances written by a real person? How can you be sure that you are real?

Update: obviously, this post is an April Fools' prank. It is absolutely unthinkable that the creative process of writing modern particle theory papers can ever be automatized. Also, the neural network referred to in the LHC papers is nothing like Alexa; it's simply a codename for PhD students. Finally, I assure you that Résonaances is written by a hum 00105e0 e6b0 343b 9c74 0804 e7bc 0804 e7d5 0804[core dump]

Wednesday, 21 March 2018

The EDGES discovery of the 21cm absorption line at the cosmic dawn has been widely discussed on blogs and in popular press. Quite deservedly so. The observation opens a new window on the epoch when the universe as we know it was just beginning. We expect a treasure trove of information about the standard processes happening in the early universe, as well as novel constraints on hypothetical particles that might have been present then. It is not a very long shot to speculate that, if confirmed, the EDGES discovery will be awarded a Nobel prize. On the other hand, the bold claim bundled with their experimental result - that the unexpectedly large strength of the signal is an indication of interaction between the ordinary matter and cold dark matter - is very controversial.

But before jumping to dark matter it is worth reviewing the standard physics leading to the EDGES signal. In its lowest-energy (singlet) state, hydrogen may absorb a photon and jump to a slightly excited (triplet) state, which differs from the true ground state just by the arrangement of the proton and electron spins. Such transitions are induced by photons of wavelength 21cm, or frequency 1.4 GHz, or energy 5.9 𝜇eV, and they may routinely occur at the cosmic dawn when Cosmic Microwave Background (CMB) photons of the right energy hit neutral hydrogen atoms hovering in the universe. The evolution of the CMB and hydrogen temperatures is shown in the picture here as a function of the cosmological redshift z (large z is early time, z=0 is today). The CMB temperature is in red, and it decreases with time as (1+z) due to the expansion of the universe. The hydrogen temperature, in blue, is a bit more tricky. At the recombination time, around z=1100, most protons and electrons combine to form neutral atoms; however, a small fraction of free electrons and protons survives. Interactions between the electrons and CMB photons via Compton scattering are strong enough to keep the two (and consequently the hydrogen as well) at equal temperatures for some time. However, around z=200 the CMB and hydrogen temperatures decouple, and the latter subsequently decreases much faster with time, as (1+z)^2. At the cosmic dawn, z～17, the hydrogen gas is already 7 times colder than the CMB, after which light from the first stars heats it up and ionizes it again.
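The factor of 7 follows from the two scaling laws. In the sketch below the gradual thermal decoupling is replaced by a sharp effective redshift z_dec, chosen by hand so that the simple power laws reproduce the standard result - treat it as an illustration, not a calculation:

```python
# Cosmic-dawn temperatures in the standard picture.
T0 = 2.725       # CMB temperature today [K]
z_dec = 125      # effective decoupling redshift (assumption; the real
                 # decoupling is gradual, happening around z ~ 150-250)
z = 17           # cosmic dawn

T_cmb = T0 * (1 + z)                                    # scales as (1+z)
T_gas = T0 * (1 + z_dec) * ((1 + z) / (1 + z_dec))**2   # (1+z)^2 after decoupling
print(f"T_CMB ~ {T_cmb:.0f} K, T_gas ~ {T_gas:.1f} K, ratio ~ {T_cmb / T_gas:.1f}")
```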

The quantity directly relevant for the 21cm absorption signal is the so-called spin temperature Ts, which is a measure of the relative occupation numbers of the singlet and triplet hydrogen states. Just before the cosmic dawn, the spin temperature equals the CMB one, and as a result there is no net absorption or emission of 21cm photons. However, it is believed that light from the first stars initially couples the spin temperature to the hydrogen one, lowering it (the so-called Wouthuysen-Field effect). Therefore, there should be absorption of 21cm CMB photons by the hydrogen in the epoch between z～20 and z～15. After taking into account the cosmological redshift, one should now observe a dip at radio frequencies between 70 and 90 MHz. This is roughly what EDGES finds. The depth of the dip is described by the formula:
T21 ≈ 0.027 K × [(1+z)/10 × 0.15/(Ωm h²)]^1/2 × (Ωb h²/0.023) × (1 − TCMB/Ts)
As the spin temperature cannot be lower than that of the hydrogen, standard physics predicts TCMB/Ts ≼ 7, corresponding to T21 ≽ -0.2 K. The surprise is that EDGES observes a larger dip, T21 ≈ -0.5 K, 3.8 astrosigma away from the predicted value, as if TCMB/Ts were of order 15.
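These numbers can be cross-checked by plugging values into the standard expression for the dip depth; the 0.027 K normalization and the Planck-like Ωm h², Ωb h² inputs below are assumptions of this sketch:

```python
import math

# Evaluating a standard 21cm brightness-temperature formula with
# illustrative Planck-like cosmological parameters.
Om_h2, Ob_h2, z = 0.142, 0.0224, 17.0
prefac = 0.027 * math.sqrt((1 + z) / 10 * 0.15 / Om_h2) * (Ob_h2 / 0.023)  # [K]

T21_standard = prefac * (1 - 7.0)   # Ts as low as the gas temperature allows
ratio_edges = 1 + 0.5 / prefac      # TCMB/Ts needed to get T21 = -0.5 K
print(f"standard floor: {T21_standard:.2f} K; EDGES needs TCMB/Ts ~ {ratio_edges:.0f}")
```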

If the EDGES result is taken at face value, it means that TCMB/Ts at the cosmic dawn was much larger than predicted in the standard scenario. Either there was a lot more photon radiation at the relevant wavelengths, or the hydrogen gas was much colder than predicted. Focusing on the latter possibility, one could imagine that the hydrogen was cooled due to interactions with cold dark matter made of relatively light (less than a GeV) particles. However, this idea is very difficult to realize in practice, because it requires the interaction cross section to be thousands of barns at the relevant epoch! Not the picobarns typical for WIMPs. Many orders of magnitude more than the total proton-proton cross section at the LHC. Even in nuclear processes such values are rarely seen. And we are talking here about dark matter, whose trademark is interacting weakly. Obviously, the idea runs into all sorts of constraints that have been laboriously accumulated over the years.

One can try to save this idea by a series of evasive tricks. If the interaction cross section scales as 1/v^4, where v is the relative velocity between the colliding matter and dark matter particles, it could be enhanced at the cosmic dawn, when the typical velocities were at their minimum. The 1/v^4 behavior is not unfamiliar, as it is characteristic of electromagnetic forces in the non-relativistic limit. Thus, one could envisage a model where dark matter has a minuscule electric charge, one thousandth or less that of the proton. This trick buys some mileage, but the obstacles remain enormous. The cross section is still large enough for the dark and ordinary matter to couple strongly during the recombination epoch, contrary to what is concluded from precision observations of the CMB. Therefore the milli-charged particles can constitute only a small fraction of dark matter, less than 1 percent. Finally, one needs to avoid constraints from direct detection, colliders, and emission by stars and supernovae. A plot borrowed from this paper shows that a tiny region of viable parameter space remains around 100 MeV mass and 10^-5 charge, though my guess is that this will also go away upon more careful analysis.

So, milli-charge dark matter cooling hydrogen does not stand scrutiny as an explanation for the EDGES anomaly. This does not mean that all exotic explanations must be so implausible. Better models are being and will be proposed, and one of them could even be correct. For example, models where new particles lead to an injection of additional 21cm photons at early times seem to be more encouraging. My bet? Future observations will confirm the 21cm absorption signal, but the amplitude and other features will turn out to be consistent with the standard 𝞚CDM predictions. Given the number of competing experiments in the starting blocks, the issue should be clarified within the next few years. What is certain is that, this time, we will learn a lot whether or not the anomalous signal persists :)

Wednesday, 14 March 2018

Last time this blog was active, particle physics was entering a sharp curve. That the infamous 750 GeV resonance had petered out was not a big deal in itself - one expects these things to happen every now and then. But the lack of any new physics at the LHC when it had already collected a significant chunk of data was a reason to worry. We know that we don't know everything yet about the fundamental interactions, and that there is a deeper layer of reality that needs to be uncovered (at least to explain dark matter, neutrino masses, baryogenesis, inflation, and physics at energies above the Planck scale). For a hundred years, increasing the energy of particle collisions has been the best way to increase our understanding of the basic constituents of nature. However, with nothing at the LHC and the next higher energy collider decades away, a feeling was growing that the progress might stall.

In this respect, nothing much has changed during the time when the blog was dormant, except that these sentiments are now firmly established. Crisis is no longer a whispered word, but one openly discussed in corridors, on blogs, on arXiv, and in color magazines. The clear message from the LHC is that the dominant paradigms about physics at the weak scale were completely misguided. The Standard Model seems to be a perfect effective theory at least up to a few TeV, and there is no indication at what energy scale new particles have to show up. While everyone goes through the five stages of grief at their own pace, my impression is that most are already well past denial. The open question is what the next steps should be to make sure that the exploration of fundamental interactions does not halt.

One possible reaction to a crisis is more of the same. Historically, such an approach has often been efficient; for example, it worked for a long time in the case of the Soviet economy. In our case one could easily go on with more models, more epicycles, more parameter space, more speculations. But the driving force behind this whole SusyWarpedCompositeStringBlackHairyHole enterprise has always been the (small, but non-zero) possibility of being vindicated by the LHC. Without serious prospects of experimental verification, model building is reduced to intellectual gymnastics that can hardly stir the imagination. Thus business as usual is not an option in the long run: it could not elicit any enthusiasm among physicists or the public, it would not attract new bright students, and it would therefore be a straight path to irrelevance.

So, particle physics has to change. On the experimental side we will inevitably see, if only for economic reasons, less focus on high-energy colliders and more on smaller experiments. Theoretical particle physics will also have to evolve to remain relevant. Certainly, the emphasis needs to shift away from empty speculation in favor of more solid research. I don't pretend to know all the answers or to have a clear vision of the optimal strategy, but I see three promising directions.

One is astrophysics, where the prospects for experimental progress are much better. The cosmos is a natural collider that is constantly testing fundamental interactions, independently of current fashions or funding agencies. This gives us an opportunity to learn more about dark matter and neutrinos, and also about various hypothetical particles like axions or milli-charged matter. The recent story of the 21cm absorption signal shows that there are still treasure troves of data waiting for us out there. Moreover, new observational windows keep opening up, as illustrated by the nascent gravitational-wave astronomy. This avenue is of course a no-brainer, long explored by particle theorists, but I expect it to gain further importance in the coming years.

Another direction is precision physics. This, too, has been an integral part of particle physics research for quite some time, but it should grow in relevance. The point is that one can probe very heavy particles, often beyond the reach of present colliders, by precisely measuring low-energy observables. In the most spectacular example, studying proton decay may give insight into new particles with masses of order 10^16 GeV - unlikely to ever be attainable directly. There is a whole array of observables that can probe new physics well beyond the direct LHC reach: a myriad of rare flavor processes, electric dipole moments of the electron and the neutron, atomic parity violation, neutrino scattering, and so on. This road may be long and tedious, but it is bound to succeed: at some point some experiment somewhere must observe a phenomenon that does not fit into the Standard Model. If we are very lucky, the anomalies currently observed by LHCb in certain rare B-meson decays may already be the first harbingers of a breakdown of the Standard Model at higher energies.
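To see how proton decay reaches such enormous scales, one can use the standard dimensional estimate (order-of-magnitude only, with a GUT-like coupling α ∼ 1/25 assumed for illustration): baryon-number-violating dimension-6 operators suppressed by a heavy mass M induce a decay rate

```latex
\Gamma_p \;\sim\; \alpha^2 \frac{m_p^5}{M^4}\,,
\qquad
\tau_p \;\sim\; 10^{31}\,\mathrm{yr} \times \left(\frac{M}{10^{15}\,\mathrm{GeV}}\right)^4 .
```

Since experimental bounds on the proton lifetime exceed 10^33 years, this crude estimate already pushes M toward the 10^16 GeV ballpark quoted in the text, far beyond the reach of any conceivable collider.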

Finally, I should mention formal theoretical developments. The naturalness problems of the cosmological constant and of the Higgs mass may suggest some fundamental misunderstanding of quantum field theory on our part. Perhaps this should not be too surprising. In many ways we have reached an amazing proficiency in QFT when applied to certain precision observables, or even to LHC processes. Yet at the same time QFT is often used and taught the way magic is at Hogwarts: mechanically, blindly following prescriptions from old dusty books, without a deeper understanding of their sense and meaning. Recent years have seen a brisk development of alternative approaches: a revival of the old S-matrix techniques, new amplitude-calculation methods based on recursion relations, and also complete reformulations of the basics of QFT, demoting sacred cows like fields, Lagrangians, and gauge symmetry. Theory alone rarely leads to progress, but it may help us make more sense of the data we already have. Could a better understanding, or a complete reformulation, of QFT bring new answers to the old questions? I think that is not impossible.

All in all, there are good reasons to worry, but also tons of new data in store and lots of fascinating questions to answer. How will the B-meson anomalies pan out? What shall we do after we hit the neutrino floor? Will the 21cm observations allow us to understand what dark matter is? Will China build a 100 TeV collider? Or maybe a radio telescope on the Moon instead? Are experimentalists still needed now that we have machine learning? How will physics change with the centre of gravity moving to Asia? I will tell you my take on these and other questions, and highlight old and new ideas that could help us understand nature better. Let's see how far I'll get this time ;)

About Résonaances

Résonaances is a particle physics blog from Paris, covering the latest news and gossip in particle physics and astrophysics. The main goal is to make you laugh; if it makes you think too, that's entirely your own responsibility...