Category: arxiv

Looks like the good folks at the BaBar experiment at SLAC, feeling that my attention has been distracted by the Higgs boson, decided that they might be able to slip a pet peeve of mine past an unsuspecting public without drawing my ire. Not so fast, good folks at BaBar!

They are good folks, actually, and they’ve carried out an extremely impressive bit of experimental virtuosity: obtaining a direct measurement of the asymmetry between a particle-physics process and its time-reverse, thereby establishing very direct evidence that the time-reversal operation “T” is not a good symmetry of nature. Here’s the technical paper, the SLAC press release, and a semi-popular explanation by the APS. (I could link you to the Physical Review Letters journal server rather than the arxiv, but the former is behind a paywall while the latter is free, and they’re the same content, so why would I do that? [Update: the PRL version is available free here, but not from the PRL page directly.])

The reason why it’s an impressive experiment is that it’s very difficult to directly compare the rate of one process to its precise time-reverse. You can measure the lifetime of a muon, for example, as it decays into an electron, a neutrino, and an anti-neutrino. But it’s very difficult (utterly impractical, actually) to shoot a neutrino and an anti-neutrino directly at an electron and measure the probability that it all turns into a muon. So what you want to look at are oscillations: one particle turning into another, which can also convert back. That usually doesn’t happen — electrons can’t convert into positrons because charge is conserved, and they can’t convert into negatively-charged pions because energy and lepton number are conserved, etc. But you can get the trick to work with certain quark-antiquark pairs, like neutral kaons or neutral B mesons, where the particle and its antiparticle can oscillate back and forth into each other. If you can somehow distinguish between the particle and antiparticle, for example if they decay into different things, you can in principle measure the oscillation rates in each direction. If the rates are different, we say that we have measured a violation of T reversal symmetry, or T-violation for short.
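As a toy illustration of the logic (this is not BaBar's actual analysis, and the event counts below are invented), the measured quantity is just a rate asymmetry between a process and its time-reverse:

```python
# Toy T-violation test: compare how often a transition happens in each
# time direction. Counts are made up for illustration only.

def t_asymmetry(n_forward, n_reverse):
    """Rate asymmetry A_T = (R_rev - R_fwd) / (R_rev + R_fwd)."""
    return (n_reverse - n_forward) / (n_reverse + n_forward)

# Equal rates in both directions -> T is a good symmetry
print(t_asymmetry(1000, 1000))  # 0.0

# Unequal rates -> nonzero asymmetry, i.e. T-violation
print(t_asymmetry(900, 1100))  # 0.1
```

A real measurement is of course harder: you have to tag which state you started in and which you ended in, which is why the decays of the particle and antiparticle into distinguishable final states matter so much.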

As I discuss in From Eternity to Here, this kind of phenomenon has been measured before, for example by the CPLEAR experiment at CERN in 1998. They used kaons and anti-kaons, and watched them decay into different offspring particles. If the BaBar press release is to be believed, there is some controversy over whether that was “really” measuring T-violation. I didn’t know about that, but in any event it’s always good to do a completely independent measurement.

So BaBar looked at B mesons. I won’t go into the details (see the explainer here), but they were able to precisely time the oscillations between one kind of neutral B meson, and the exact reverse of that operation. (Okay, tiny detail: one kind was an eigenstate of CP, the other was an eigenstate of flavor. Happy now?)

They found that T is indeed violated. This is a great result, although it surprises absolutely nobody. There is a famous result called the CPT theorem, which says that whenever you have an ordinary quantum field theory (“ordinary” means “local and Lorentz-invariant”), the combined operations of time-reversal T, parity P, and particle/antiparticle switching C will always be a good symmetry of the theory. And we know that CP is violated in nature; that won the Nobel Prize for Cronin and Fitch in 1980. So T has to be violated, to cancel out the fact that CP is violated and make the combination CPT a good symmetry. Either that, or the universe does not run according to an ordinary quantum field theory, and that would be big news indeed.
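The argument is a one-liner, shown schematically below (treating a "good symmetry" as an operator commuting with the Hamiltonian, and glossing over the fact that T is antiunitary):

```latex
% CPT theorem: for a local, Lorentz-invariant QFT, \Theta \equiv CPT satisfies
[\Theta, H] = 0 .
% Suppose T were a good symmetry, [T, H] = 0. Then, since CP = \Theta\,T^{-1},
[CP, H] = [\Theta\,T^{-1}, H] = 0 ,
% contradicting the observed CP violation (Cronin & Fitch). Hence
[T, H] \neq 0 .
```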

All perfectly fine and glorious. The pet peeve only comes up in the sub-headline of the SLAC press release: “Time’s quantum arrow has a preferred direction, new analysis shows.” Colorful language rather than precise statement, to be sure, but colorful language that is extremely misleading.

The South Pole Telescope is a wonderful instrument, a ten-meter radio telescope that has been operating at the South Pole since 2007. Its primary target is the cosmic microwave background (CMB), but a lot of the science comes from observations of the Sunyaev-Zeldovich effect due to clusters of galaxies — a distortion of the frequency of CMB photons as they travel through the hot gas of the cluster. We learn a lot about galaxy clusters this way, and as a bonus we have a great way of looking for small-scale structure in the CMB itself.

Now the collaboration has released new results on using SPT observations to constrain cosmological parameters.

We present a measurement of the cosmic microwave background (CMB) temperature power spectrum using data from the recently completed South Pole Telescope Sunyaev-Zel’dovich (SPT-SZ) survey. This measurement is made from observations of 2540 deg^2 of sky with arcminute resolution at 150 GHz, and improves upon previous measurements using the SPT by tripling the sky area. We report CMB temperature anisotropy power over the multipole range 650<ell<3000. We fit the SPT bandpowers, combined with the results from the seven-year Wilkinson Microwave Anisotropy Probe (WMAP7) data release, with a six-parameter LCDM cosmological model and find that the two datasets are consistent and well fit by the model. Adding SPT measurements significantly improves LCDM parameter constraints, and in particular tightens the constraint on the angular sound horizon theta_s by a factor of 2.7…[abridged]
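A handy rule of thumb for reading multipole ranges like this: multipole ℓ corresponds roughly to an angular scale of 180°/ℓ (a crude approximation, but good enough for orientation). A quick sketch of what 650 < ℓ < 3000 means on the sky:

```python
def multipole_to_degrees(ell):
    """Rough angular scale (degrees) probed by multipole ell: theta ~ 180/ell."""
    return 180.0 / ell

for ell in (650, 3000):
    theta_deg = multipole_to_degrees(ell)
    print(f"ell = {ell}: ~{theta_deg:.3f} deg = {theta_deg * 60:.1f} arcmin")
```

So the top of the range, ℓ = 3000, is a few arcminutes, which is why arcminute resolution is exactly what's needed.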

Here is the first plot anyone should look for in a paper like this:

Tom Banks has long been skeptical of the popular picture of the string theory landscape — the idea that there is some extremely large (10^500 or more) number of phases of string theory, representing different ways to compactify the extra dimensions, and that all these phases are dynamically connected to each other, possibly by cosmological transitions during eternal inflation. Tom’s reasons aren’t of the curmudgeonly you-kids-get-off-my-lawn sort, but arise from his views about how quantum gravity works. (He thinks different cosmological boundary conditions represent truly different quantum theories, not just different regions of one big spacetime.) Well worth considering, if only because it’s too easy to run off in the direction of conventional wisdom when you’re far away from the realm of experimental testing.

The String Landscape is a fantasy. We actually have a plausible landscape of minimally supersymmetric $AdS_4$ solutions of supergravity modified by an exponential superpotential. None of these solutions is accessible to world sheet perturbation theory. If they exist as models of quantum gravity, they are defined by conformal field theories, and each is an independent quantum system, which makes no transitions to any of the others. This landscape has nothing to do with CDL tunneling or eternal inflation.

A proper understanding of CDL transitions in QFT on a fixed background dS space shows that the EI picture of this system is not justified within the approximation of low energy effective field theory. The cutoff independent physics, defined by the Euclidean functional integral over the 4-sphere, admits only a finite number of instantons. Plausible extensions of these ideas to a quantum theory of gravity obeying the holographic principle explain all of the actual facts about CDL transitions in dS space, and lead to a picture radically different from eternal inflation.

Theories of Eternal Inflation (EI) have to rely too heavily on the anthropic principle to be consistent with experiment. Given the vast array of effective low energy field theories that could be produced by the conventional picture of the string landscape one is forced to conclude that the most numerous anthropically allowed theories will disagree with experiment violently.

We were all transfixed by the Higgs seminars on July 4, but the work was nowhere near over for the experimentalists — they had to actually write up papers describing the results. And of course taking the opportunity to do a little more analysis along the way.

A search for the Standard Model Higgs boson in proton-proton collisions with the ATLAS detector at the LHC is presented. The datasets used correspond to integrated luminosities of approximately 4.8 fb^-1 collected at sqrt(s) = 7 TeV in 2011 and 5.8 fb^-1 at sqrt(s) = 8 TeV in 2012. Individual searches in the channels H->ZZ^(*)->llll, H->gamma gamma and H->WW->e nu mu nu in the 8 TeV data are combined with previously published results of searches for H->ZZ^(*), WW^(*), bbbar and tau^+tau^- in the 7 TeV data and results from improved analyses of the H->ZZ^(*)->llll and H->gamma gamma channels in the 7 TeV data. Clear evidence for the production of a neutral boson with a measured mass of 126.0 +/- 0.4(stat) +/- 0.4(sys) GeV is presented. This observation, which has a significance of 5.9 standard deviations, corresponding to a background fluctuation probability of 1.7×10^-9, is compatible with the production and decay of the Standard Model Higgs boson.
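The quoted significance and background-fluctuation probability are two versions of the same number: the one-sided Gaussian tail. A quick sanity check, using only the standard library (the small mismatch with the quoted 1.7×10^-9 is just rounding of "5.9"):

```python
import math

def one_sided_p(z):
    """One-sided Gaussian tail probability for a z-sigma excess."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

p = one_sided_p(5.9)
print(f"p(5.9 sigma) ~ {p:.2e}")  # ~1.8e-9
```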

The CMS Collaboration
(Submitted on 31 Jul 2012)
Results are presented from searches for the standard model Higgs boson in proton-proton collisions at sqrt(s)=7 and 8 TeV in the CMS experiment at the LHC, using data samples corresponding to integrated luminosities of up to 5.1 inverse femtobarns at 7 TeV and 5.3 inverse femtobarns at 8 TeV. The search is performed in five decay modes: gamma gamma, ZZ, WW, tau tau, and b b-bar. An excess of events is observed above the expected background, with a local significance of 5.0 standard deviations, at a mass near 125 GeV, signalling the production of a new particle. The expected significance for a standard model Higgs boson of that mass is 5.8 standard deviations. The excess is most significant in the two decay modes with the best mass resolution, gamma gamma and ZZ; a fit to these signals gives a mass of 125.3 +/- 0.4 (stat.) +/- 0.5 (syst.) GeV. The decay to two photons indicates that the new particle is a boson with spin different from one.

For the three of you reading this who haven’t yet heard about it, the OPERA experiment in Italy recently announced a genuinely surprising result. They create a beam of muon neutrinos at CERN in Geneva, point them under the Alps (through which they zip largely unimpeded, because that’s what neutrinos do), and then detect a few of them in the Gran Sasso underground laboratory 732 kilometers away. The whole thing is timed by stopwatch (or the modern high-tech version thereof, using GPS-synchronized clocks), and you solve for the velocity by dividing distance by time. And the answer they get is: just a teensy bit faster than the speed of light, by about a factor of 10^-5. Here’s the technical paper, which already lists 20 links to blogs and news reports.
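The arithmetic behind the claim is simple enough to sketch. The fractional excess below is a rounded version of OPERA's reported value, so the answer is only approximate:

```python
# Back-of-the-envelope OPERA: a fractional speed excess of a few times
# 10^-5 over 732 km means neutrinos arriving tens of nanoseconds early.
C = 299_792_458.0           # speed of light, m/s
BASELINE = 732e3            # CERN to Gran Sasso, m
FRACTIONAL_EXCESS = 2.5e-5  # (v - c)/c, roughly OPERA's reported number

light_time = BASELINE / C                       # ~2.44 ms
early_arrival = light_time * FRACTIONAL_EXCESS  # ~61 ns
print(f"light travel time: {light_time * 1e3:.3f} ms")
print(f"early arrival: {early_arrival * 1e9:.0f} ns")
```

Sixty-odd nanoseconds is why the GPS-synchronized timing has to be so good, and why a hidden timing glitch is the first thing everyone worries about.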

The things you need to know about this result are:

1. It’s enormously interesting if it’s right.

2. It’s probably not right.

By the latter point I don’t mean to impugn the abilities or honesty of the experimenters, who are by all accounts top-notch people trying to do something very difficult. It’s just a very difficult experiment, and given that the result is so completely contrary to our expectations, it’s much easier at this point to believe there is a hidden glitch than to take it at face value. All that would instantly change, of course, if it were independently verified by another experiment; at that point the gleeful jumping up and down will justifiably commence.

This isn’t one of those annoying “three-sigma” results that sits at the tantalizing boundary of statistical significance. The OPERA folks are claiming a six-sigma deviation from the speed of light.

Here’s an interesting paper from yesterday’s hep-th batch of abstracts. One of the exciting prospects in observational cosmology over the next several years is finding signals of gravitational waves in the cosmic microwave background. These can be produced by inflation, and indeed simple models (one scalar field, no funny tricks) predict a “consistency relation” between characteristics of ordinary density perturbations and the gravitational waves. If signatures of the waves were detected (typically, by finding “tensor modes” in CMB polarization) and shown to be consistent with the simple prediction, it would be a huge boost for inflation.

The world isn’t always so simple, of course. It’s not too hard to think of models that violate the consistency relation. Now Senatore, Silverstein, and Zaldarriaga are pointing to a new mechanism: gravitational waves produced directly by particle or string sources during inflation. (Silverstein is of course well-known as a string theorist, but her recent work in early-universe cosmology has been as good as anybody’s in the field.) The usual mechanism is simply quantum fluctuations of fields in their vacuum state; these folks are imagining that particular objects — particles or strings — are produced during inflation, which then act as sources for gravitational waves. In a simple model you would expect that this effect might be important at a fixed scale corresponding to the energy of the sources, but they argue that it’s not hard to come up with models where gravitational waves are produced at all scales.

We point out that detectable inflationary tensor modes can be generated by particle or string sources produced during inflation, consistently with the requirements for inflation and constraints from scalar fluctuations. We show via examples that this effect can dominate over the contribution from quantum fluctuations of the metric, occurring even when the inflationary potential energy is too low to produce a comparable signal. Thus a detection of tensor modes from inflation does not automatically constitute a determination of the inflationary Hubble scale.

The bad news here is in the last line — in simple models (there’s that word again) the amplitude of gravitational waves is simply proportional to the expansion rate of the universe during inflation. Now that’s no longer so obvious. But research isn’t about finding good news or bad news, it’s about finding the right answer. Being unreasonably confident in the predictions we make from our models is just as dangerous as not having compelling models at all.
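For reference, here is the textbook relation the paper undermines: in the simplest single-field models the tensor amplitude is A_t ≈ 2H²/(π²M_pl²), so a measured tensor-to-scalar ratio r = A_t/A_s would pin down H. A sketch, assuming the roughly-measured scalar amplitude A_s ≈ 2.2×10^-9 (the exact numbers here are illustrative, not from the paper):

```python
import math

M_PL = 2.435e18   # reduced Planck mass, GeV
A_S = 2.2e-9      # scalar perturbation amplitude, roughly the measured value

def hubble_from_r(r):
    """Inflationary Hubble scale implied by tensor-to-scalar ratio r,
    in the simplest models: r * A_s = A_t = 2 H^2 / (pi^2 M_pl^2)."""
    return math.pi * M_PL * math.sqrt(r * A_S / 2.0)

print(f"r = 0.1  ->  H ~ {hubble_from_r(0.1):.1e} GeV")  # ~8e13 GeV
```

The Senatore–Silverstein–Zaldarriaga point is that extra sources of gravitational waves break this one-to-one map, so the inference from r to H is no longer automatic.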

One of the benefits of being a Master of Time and Space is that I get to see the future. For example, cosmologically-inclined folks have been wondering for a few weeks about this press release from the TAUP conference in Munich, which includes the lines “Latest results from the CRESST Experiment provide an indication of dark matter. The press conference will be held on 6. September 2011 starting at 2:00 pm.” Suspense! But I know what they’re going to say.

As Neal Weiner points out, we don’t have to wait for the press conference; the friendly folks at the CRESST experiment have ambitiously decided to write a paper as well as giving a press conference, and that paper appeared on the arxiv this evening. (You remember Neal as a distinguished guest blogger; re-read that post to get your bearings in this complicated game.) Very short version: they claim to see some signal that is statistically significantly greater than background, consistent with a WIMP dark-matter particle with a mass of around 20-40 GeV. Slightly longer version:

Kris Stanek alerted me to this fun idea from the astronomers at Ohio State: when they submit a paper to arxiv, they accompany it by a simple YouTube video that explains the basic idea. Called “Coffee Briefs,” there is only one such video so far, for a paper by Jennifer van Saders and Mark Pinsonneault. But they hope to make it a regular series.

The base of the convection zone is a source of acoustic glitches in the asteroseismic frequency spectra of solar-like oscillators, allowing one to precisely measure the acoustic depth to the feature. We examine the sensitivity of the depth of the convection zone to mass, stellar abundances, and input physics, and in particular, the use of a measurement of the acoustic depth to the CZ as an atmosphere-independent, absolute measure of stellar metallicities. We find that for low mass stars on the main sequence with $0.4\,M_\odot \le M \le 1.6\,M_\odot$, the acoustic depth to the base of the convection zone, normalized by the acoustic depth to the center of the star, $\tau_{cz,n}$, is both a strong function of mass, and varies at the 0.5-1% per 0.1 dex level in [Z/X], and is therefore also a sensitive probe of the composition. We estimate the theoretical uncertainties in the stellar models, and show that combined with reasonable observational uncertainties, we can expect to measure the metallicity to within 0.15 – 0.3 dex for solar-like stars. We discuss the applications of this work to rotational mixing, particularly in the context of the observed mid F star Li dip, and to distinguishing between different mixtures of heavy elements.

This example might not be immediately accessible to non-experts, but I think the idea is to pitch the video at the level of astronomy grad students. Certainly the participants deserve a lot of credit for trying out an innovative way to talk about their research.

The key to the ambition of making this a regular event is keeping it simple and easy. If it takes a couple of hours to put it together, no problem; if it takes a couple of days, enthusiasm will flag. I’m not sure what software was used to make the video and the simple graphics — iMovie, maybe? For the DNA computer video we showed some time back, it was quite an elaborate job, and you would worry that it would be onerous to do something like that for every paper one writes.

How did the universe come to be? We don’t know yet, of course, but we know enough about cosmology, gravitation, and quantum mechanics to put together models that stand a fighting chance of capturing some of the truth.

Stephen Hawking‘s favorite idea is that the universe came out of “nothing” — it arose (although that’s not really the right word) as a quantum fluctuation with literally no pre-existing state. No space, no time, no anything. But there’s another idea that’s at least as plausible: that the universe arose out of something, but that “something” was simply “chaos,” whatever that means in the context of quantum gravity. Space, time, and energy, yes; but no order, no particular arrangement.

It’s an old idea, going back at least to Lucretius, and contemplated by David Hume as well as by Ludwig Boltzmann. None of those guys, of course, knew very much of our modern understanding of cosmology, gravitation, and quantum mechanics. So what would the modern version look like?

Despite the importance of the Second Law of Thermodynamics, it is not absolute. Statistical mechanics implies that, given sufficient time, systems near equilibrium will spontaneously fluctuate into lower-entropy states, locally reversing the thermodynamic arrow of time. We study the time development of such fluctuations, especially the very large fluctuations relevant to cosmology. Under fairly general assumptions, the most likely history of a fluctuation out of equilibrium is simply the CPT conjugate of the most likely way a system relaxes back to equilibrium. We use this idea to elucidate the spacetime structure of various fluctuations in (stable and metastable) de Sitter space and thermal anti-de Sitter space.

It was Boltzmann who long ago realized that the Second Law, which says that the entropy of a closed system never decreases, isn’t quite an absolute “law.” It’s just a statement of overwhelming probability: there are so many more ways to be high-entropy (chaotic, disorderly) than to be low-entropy (arranged, orderly) that almost anything a system might do will move it toward higher entropy. But not absolutely anything; we can imagine very, very unlikely events in which entropy actually goes down.
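You can watch this happen in a toy model. The sketch below (the standard Ehrenfest urn, with invented parameters) lets particles hop randomly between two halves of a box and tracks the entropy; lopsided, lower-entropy configurations do occur, but only rarely and briefly:

```python
import math
import random

# Ehrenfest urn: N particles hop at random between two halves of a box.
# Entropy is maximal at an even split; the dynamics occasionally wanders
# to lower-entropy (lopsided) configurations, illustrating Boltzmann's point.
random.seed(0)
N = 20
n_left = N // 2  # start at equilibrium
entropies = []

def entropy(n):
    """Log of the number of microstates with n particles on the left."""
    return math.log(math.comb(N, n))

for _ in range(100_000):
    # pick a random particle and move it to the other side
    if random.random() < n_left / N:
        n_left -= 1
    else:
        n_left += 1
    entropies.append(entropy(n_left))

s_max = entropy(N // 2)
frac_below = sum(s < s_max - 1.0 for s in entropies) / len(entropies)
print(f"fraction of time more than 1 unit below max entropy: {frac_below:.3f}")
```

With only 20 particles the dips are common; scale N up to Avogadro's number and the waiting time for any macroscopic dip becomes absurdly long, which is why the Second Law looks absolute in everyday life.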

The single most interesting feature of attempts to replace dark matter with a modification of gravity is Milgrom’s discovery that in a wide variety of galaxies, there’s a unique place where ordinary gravity plus ordinary matter stops working: when the acceleration due to gravity (as Newton would have calculated it) drops below a fixed value a0 ≈ 10^-10 m/s^2. This is the basis of MOND, but the pattern itself is arguably more interesting than any current attempt to account for it. Very possibly it can be explained by the complicated dynamics of baryons and dark matter in galaxies — but in any event it should be explained somehow.

The existence of this feature gives a strong motivation for testing gravity in the regime of very tiny accelerations. Note that this isn’t even a statement that makes sense in general relativity; particles move on geodesics, and the “acceleration due to gravity” is always exactly zero. So implicitly we’re imagining some global inertial frame with respect to which such acceleration can be measured. That’s a job for a future theory to make sense of; for the moment we’re forgetting that we know GR and thinking like Newton would have.
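To get a feel for where this regime lives, here is the back-of-the-envelope number for a solar-mass system, using Milgrom's canonical value a0 ≈ 1.2×10^-10 m/s² (slightly more specific than the ~10^-10 quoted above):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
A0 = 1.2e-10       # Milgrom's acceleration scale, m/s^2
AU = 1.496e11      # astronomical unit, m

# Newtonian acceleration GM/r^2 falls below a0 beyond r = sqrt(GM/a0)
r = math.sqrt(G * M_SUN / A0)
print(f"r ~ {r / AU:.0f} AU")  # ~7000 AU
```

That several-thousand-AU scale is why wide binary stars are such a natural laboratory for this question.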

So now Hernandez, Jimenez, and Allen have tried to test gravity in this weak-acceleration regime — and they claim it fails!

Assuming Newton’s gravity and GR to be valid at all scales leads to the dark matter hypothesis as a forced requirement demanded by the observed dynamics and measured baryonic content at galactic and extra galactic scales. Alternatively, one can propose a contrasting scenario where gravity exhibits a change of regime at acceleration scales less than $a_{0}$, and obtain just as good a fit to observations across astrophysical scales. A critical experiment in this debate is offered by wide orbit binary stars. Since for $1\,M_\odot$ systems the acceleration drops below $a_{0}$ at scales of around 7000 AU, a statistical survey of relative velocities and binary separations reaching beyond $10^{4}$ AU should yield a conclusive answer to the above debate. By performing such a study we show Kepler’s third law to fail precisely beyond $a \approx a_{0}$ scales, precisely as predicted by modified gravity theories designed not to require any dark matter at galactic scales and beyond.

Color me dubious, but interested in seeing further studies. It’s very hard to collect this kind of data, and note that it’s just a statistical survey of velocities, not a precise measurement of individual systems. In principle a statistical survey is fine; in practice, it opens up the possibility of hidden subtle systematic effects.

Still, intriguing and worth checking out. Any time you have the chance to overthrow Sir Isaac Newton, you go for it.