
Last year, when the Large Hadron Collider shut down at the end of August, its detectors had recorded enough data to produce a suggestive bump, hinting that the Higgs boson might be lurking somewhere around 125 GeV. At the time, CERN's director indicated that the facility would do what it could to make sure it had a definitive answer to the question by the end of this year's run. And, over the winter, the people running the collider picked parameters that should get us there: a slight bump in energy and a high number of proton bunches circulating at once.

So far, it's all gone nicely according to plan. Last year, when proton collisions wrapped up in October, the ATLAS detector had recorded 5.25 inverse femtobarns of collisions. This year, halfway through June, ATLAS has already recorded over 5.5 inverse femtobarns. (It was below that number as of the last count, shown above, but there's been a long run of collisions already this morning.)

Something could still go wrong, perhaps even catastrophically wrong. But if the machine continues at this pace, by the time it switches to lead-ion collisions in the fall, it will have gathered as much data this year as the Tevatron did in decades of operation.

Dan, normally I'd be reluctant to agree with you, since I encourage people to learn new things. However, this small amount of information isn't enough to do anything with unless you are already familiar with the Large Hadron Collider. That being said, I think explaining acronyms the first time they're used in EVERY article is good practice, and one that should be emphasized as the number of acronyms grows.

By "catastrophically wrong", do you mean we'd stop getting useful data, or that the LHC could still open up a time vortex and zOMG DOOM US ALL?!!!111 (or whatever the complaint was before it was switched on...)

To be honest, I'm truly delighted that for once it's just solid science being reported and not yet another emotional debate about which phone brand loves you better or whatever facepalm-worthy software patent is being played out against a competitor. Those articles tick me off.

I'm no particle physicist, but I never get tired of bleeding-edge science like the LHC or the ITER project.


Actually, every test destroys the world. They just have it set up to automatically send a message back in time with the test results, so we get the results without having to actually run the test and destroy the world.

How many inverse femtobarns are expected by October? How will that affect the "sigma significance" of the "bump" if we assume the same outcome / distribution for this run?

This is done at higher energy, so they have some 40% better resolution, IIRC (more of the searched-for events versus noise).

But they will start all over, I read somewhere (3 Quarks Daily?), to get away from the "look elsewhere" effect. That is, they will test only the narrower energy ranges where they haven't already excluded a Higgs (between 115-135 GeV, and the less likely 650-850 GeV, IIRC). By doing so they will get better certainty.

But someone will surely combine the old data with the new, just as they combine experiments when they should actually be testing each other's observations independently, to get an "observation" early. Just not a very good or definitive one.
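To make that concrete, here's a minimal sketch of why combining datasets gets you an "observation" early: in a simple counting experiment where expected signal and background both grow linearly with integrated luminosity, the naive significance S/sqrt(B) grows as the square root of the data. The signal and background rates below are invented purely for illustration; the real analyses use full likelihood fits, not this formula.

import math

def naive_significance(s_per_fb, b_per_fb, lumi_fb):
    """Naive counting-experiment significance S / sqrt(B), with expected
    signal and background rates given per inverse femtobarn."""
    s = s_per_fb * lumi_fb
    b = b_per_fb * lumi_fb
    return s / math.sqrt(b)

# Invented rates, chosen only so last year's data lands near 3 sigma.
s_rate, b_rate = 4.0, 9.0

print(naive_significance(s_rate, b_rate, 5.25))         # 2011 alone: ~3.1 sigma
print(naive_significance(s_rate, b_rate, 5.25 + 15.0))  # combined:   ~6.0 sigma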

They want to get the signal to 5 sigma to claim a (theory-dependent) observation, because of "look elsewhere". [ http://en.wikipedia.org/wiki/Look-elsewhere_effect : "The look-elsewhere effect is a phenomenon in the statistical analysis of scientific experiments, particularly in complex particle physics experiments, where an apparently statistically significant observation may have actually arisen by chance because of the size of the parameter space to be searched."]

It seems they will get there, just about.
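For a feel for what the look-elsewhere correction actually does to a claimed significance, here's a rough Python sketch. It treats the search as some number of independent mass bins and inflates the local p-value by that trials factor; the 3.5 sigma and 20 bins below are made-up numbers, and real analyses estimate the correction with pseudo-experiments rather than this back-of-the-envelope formula.

from statistics import NormalDist

def local_to_global_sigma(z_local, trials):
    """Crude look-elsewhere correction: inflate a one-sided local p-value
    by a trials factor, then convert back to a significance in sigmas."""
    nd = NormalDist()
    p_local = 1.0 - nd.cdf(z_local)                 # chance of this bump here
    p_global = 1.0 - (1.0 - p_local) ** trials      # chance of a bump anywhere
    return nd.inv_cdf(1.0 - p_global)

# Hypothetical: a 3.5-sigma local bump found while scanning ~20 mass bins.
print(local_to_global_sigma(3.5, 20))  # ~2.6 sigma once "elsewhere" is counted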

Tempor wrote:

"5.25 inverse femtobarns of collisions."

And here people are blaming Star Trek for all their "techno-babble".

It is a measure.

"Fine, it is 5.25." "5.25 what? Pears or apples?" "Why do you care, do you want techno-babble? It is 5.25, not 5.24 or 5.26, 5.25 I tell you!"

Ironically, as it looks now, the LHC is a "doomsday" machine. A 125 GeV standard Higgs, if that is what is observed and tested for within the next couple of years (instead of some 125 GeV non-Standard Model particle, or some other mass), makes for a quasi-stable vacuum.

Remember when protons were tested for instability? Matter would eventually decay, leaving an empty and cold universe after some 10^30 years. But the Standard Model prevailed over Grand Unification theories.

Next up was the "Big Rip" doomsday: spacetime would eventually be torn apart and the universe would decay over some 10^100 years. But standard cosmology prevailed over Big Rip theories.

Now it is the Higgs doomsday. The quantum vacuum with all its fields will eventually decay over some 10^100 years.

Our universe is a mere 10^10 years old, but life could hypothetically live off feeding mass to supermassive black holes (SMBHs) for energy until they evaporate, in some 10^80-10^100 years. Hence a 125 GeV Higgs is a nice test for anthropic selection setting some important parameters of our universe, exactly as the cosmological constant was: parameters giving a lifetime not too short for observers to arise, but not much longer than life could possibly be around, and so squarely within the likelihood peak of the distribution of universes with potential life.

The alternative to such environmental selection would be a theory nailing all such parameters down, and that seems long in the making. Testing for multiverses looks good so far! I can't wait for the Planck probe data this year to start telling us whether eternal inflation is a possibility, which would begin to nail this exciting physics theory down as durable, rather than merely the most-tested alternative of the moment.

"How many inverse femtobarns are expected by October? How will that affect the 'sigma significance' of the 'bump' if we assume the same outcome / distribution for this run?"

I think they're aiming for at least 15 inverse femtobarns ( http://en.wikipedia.org/wiki/Barn_(unit) ) by the end of the year, but obviously they will just take as much as they can get. They're currently running at nearly twice last year's record luminosity, which means twice as much data per hour as during the best period last year. My personal guess would be somewhere between 15 and 20 if everything goes smoothly.

By October I wouldn't know, but you could make an extrapolation and expect a bit more than 10...
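For what it's worth, that extrapolation is just straight-line arithmetic; here's a quick sketch, where the week counts are my own guesses rather than anything from the official schedule:

recorded_fb = 5.5        # fb^-1 recorded by mid-June (from the article)
weeks_so_far = 11        # guess: roughly April through mid-June
weeks_to_october = 15    # guess: mid-June until the switch to lead ions

rate = recorded_fb / weeks_so_far              # average fb^-1 per week so far
projection = recorded_fb + rate * weeks_to_october
print(projection)                              # ~13 fb^-1 at a flat rate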