Life as a Physicist

If you follow newspapers, facebook, or twitter, you’ve undoubtedly seen these already – but the LHC has done it – managed to collide two beams of protons! They never made it that far last year. Here is an event from ATLAS:

That isn’t to say there isn’t a lot of work left to do. These collisions are at an energy of 900 GeV, which is much less than the 7,000 GeV they plan to get up to by the end of running in 2010. And the beams are not very intense yet. Still!!!

I’m currently in Seattle – I wish I could have been there for this, in or around the control room – though I would have been mostly in the way during this phase. Unlike at the Tevatron, I wasn’t really responsible for any bit of the detector or DAQ at ATLAS – and those are the people who need to be there right now. Still, I would have loved to have been there.

Ironically, I first heard about these collisions from facebook. People in the control room that I’m friends with were posting status updates as the LHC tuned up its beam. Press release, twitter, etc., all lagged behind that. And of the people I’m friends with, a theorist posted the news second (!) – a theorist who isn’t a member of any collaboration. Ahhh… new media! 😉

So, taking that sentiment to the limit, I must now ignore the LHC and get back to preparing for class! Must. Not. Look. At. Accelerator. Status. <said in best Capt’n Kirk voice>.

The biggest, most expensive physics machine in the world is riddled with thousands of bad electrical connections.

Ouch.

So starts a mostly accurate article in the New York Times about the current state of the LHC. There is good news and bad news in this sentence. To paraphrase a famous politician currently sight-seeing north of South Korea, it really depends on your definition of the word bad. To most people, if someone says that the electrical connection between your light and the wall socket is bad, then that means your light won’t work. That is the normal definition of bad. We High Energy Physicists have a different definition of bad. 🙂

For us, bad means that the connection isn’t going to conduct as much current as it could (I had a blog post about this a while back – but this article contains an excellent explanation – well worth registering, if you have to, to read it). And this is the reason behind the timing of this article. As I mentioned in that post, it would not be until the beginning of August that the LHC scientists would have finished measuring all those connections – all those splices – and know exactly how bad they were. Tomorrow the LHC and CERN will announce exactly what energy they will run the LHC at initially.

But scientists say it could be years, if ever, before the collider runs at full strength, stretching out the time it should take to achieve the collider’s main goals…

And that is the bad part of the news. The bad connections mean that we can’t run at the full 14 TeV energy – we will run something short of that (I’m betting it will be 7.5 TeV – if I get it right it isn’t because I have inside information from the accelerator group!). The article is correct that running at this reduced energy won’t give us access to the science we’d all expected and hoped for if we were running at 14 TeV.

But another thing to keep in mind is: we need data. Any data. And not just to discover something new – we need it to tune up and commission our detectors! We’ve never run these things in anything but a simulated collider environment or looking for cosmic rays. We would probably be able to keep ourselves busy for almost a year with two months of data.

Indeed, these are birthing problems – no one has ever run a machine like this before. Which brings me to the one spot in the article that got my hackles up:

“I’ve waited 15 years,” said Nima Arkani-Hamed, a leading particle theorist at the Institute for Advanced Study in Princeton. “I want it to get up running. We can’t tolerate another disaster. It has to run smoothly from now.”

Nima, whom I also know (and like), is a theorist. If an experimentalist said this, we would all make them run outside, turn around three times, and spit to the north to cancel the jinx they would have just placed on the machine. I think we can all guarantee that there are going to be other failures and problems. We hope none of them are as bad as this last one. But if they are, we will do exactly what we’ve done up to now: pick up the bits, study them, figure out exactly what we did wrong, fix it better than it was originally made, and try again.

There was one last quote in that article I would have liked to have seen more of a back story to:

Some physicists are deserting the European project, at least temporarily, to work at a smaller, rival machine across the ocean.

The story behind this is fascinating because it is where science meets humanity. The machine across the ocean is the Tevatron at Fermilab (I’m on one of the experiments there, DZERO). There is plenty of science still there, and the race for the Higgs is very much alive – more so with each delay in the LHC. So scientifically it is attractive. But there is also the fact that a graduate student in the USA must use real data in their thesis. Thus the delays in the LHC mean that it will take longer and longer for the graduate students to graduate. In the ATLAS LHC experiment, the canonical number of graduate students I hear quoted is about 800. Think of that – 800 Ph.D.’s all getting ready to graduate – about 1/3rd or more of them waiting for the first data (talk about a “big bang”). Unfortunately, you can’t be a graduate student forever – so at some point the LHC delays get long enough that you have to move back to the USA in order to finish a timely thesis. Similar pressures exist for post-docs and professors trying to get tenure.

UPDATE: Just announced earlier today: they will start with 3.5×3.5 – that is, 7 TeV center of mass. This is exactly half the design energy of the LHC. The hope is that if all runs well at that energy they can slowly ramp up to 4×4, or 8 TeV. At 8 TeV things start to get interesting, as a decent amount of data at that energy will provide access to things that the Fermilab Tevatron can’t reach. Fingers crossed all goes well!

I don’t know where this idea came from, but it was brilliant – CERNVM. It solves two big problems, all at once. And in such an elegant way.

Here is the basic problem: you want to build and run ATLAS (or CMS, ALICE, LHCb, etc.) software on your local university machine or laptop. This is painful for two reasons. The first is that you are probably not running the right kind of Linux. Scientific Linux was designed to run science and particle physics code, not to handle email or have a nice GUI (i.e. it isn’t a Mac, or Windows, or Ubuntu). There are ways around this, of course – start up a virtual machine and install Scientific Linux on it, etc. But then you hit the second problem: you have to install the ATLAS software. I’m not sure about the other experiments, but for ATLAS this is a 6 gig affair. And each time a new version of the release comes out you have to install it all over again. In a virtual machine this can be painful (taking hours of your time).

This is where CERNVM comes in. The first realization was that when you compile, build, and test-run, you need only about 10% of that 6 gigs of software. Of course, everyone needs a different 10%, but in general only a small fraction of the release is needed. So why download everything? The second realization was: automatically install each release – or each individual file – only when it is needed, and automatically publish each new release. This second bit means that I, as an end user, never have to install another ATLAS release. Ever! How sweet is that!?!? I can just use it. As soon as a new release is out and CERN publishes it, I can access it!

CERNVM is a virtual machine. Folks who have been around CERN a while will know that CERNVM refers to the venerable IBM system that CERN used to have as one of its mainframes. According to Predrag Buncic, who I believe is the lead on the modern CERNVM project, the choice of name was deliberate. See Predrag’s talk slides from CHEP for some more details.

CERNVM accomplishes its magic with a FUSE file system. This brilliant open source project creates a virtual file system: whenever you try to access a file in it, FUSE hands the request off to some user-written code. In CERNVM’s case, this code looks the file up on a master server at CERN and then downloads it locally. Once it has been cached locally, the file is accessible like any other file. So CERN can publish a whole ATLAS release on their master servers, and when I try to set up that release, CERNVM will automatically bring down exactly the files I need to get my work done. Better yet, if I’ve already got the files locally, I can hop on an airplane and everything will still work! Not too shabby!
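The download-on-first-access idea is simple enough to sketch. Here is a toy Python version of the caching logic (my own illustration only – the real CERNVM code works at the file-system level via FUSE, and the cache directory and server URL here are made up):

```python
import os
import urllib.request

# Toy sketch of the CERNVM lazy-download idea (NOT the real code):
# look for a file in a local cache; if it's missing, fetch it from the
# central server and cache it. Only the files you actually touch ever
# get downloaded, and cached files keep working offline.

CACHE_DIR = "/tmp/cernvm-cache"          # hypothetical local cache
SERVER = "http://example.cern.ch/repo"   # hypothetical master server

def open_release_file(relpath, fetch=None):
    """Return a local path for `relpath`, downloading it on first use."""
    local = os.path.join(CACHE_DIR, relpath)
    if not os.path.exists(local):
        os.makedirs(os.path.dirname(local), exist_ok=True)
        # default fetch: pull the file from the master server
        fetch = fetch or (lambda p: urllib.request.urlopen(f"{SERVER}/{p}").read())
        with open(local, "wb") as f:
            f.write(fetch(relpath))      # first access: download and cache
    return local                          # later accesses: served from cache
```

The key property is in the `if`: the second and every subsequent access never touches the network at all, which is what makes the airplane scenario work.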

The virtual machine part gets around the wrong-OS problem – the machine is based on Scientific Linux. Thus, the two main problems with building and running the code locally are solved! Very nice, eh!?

The thing I spent half my sabbatical on is finally public. It started as an ATLAS-wide exercise to make sure our computing environment could handle physics analysis – and what better way than to actually do a “dry run” of a physics analysis? It grew from there. So much work was invested in it (years, by some people’s accounting) that it seemed crazy not to make the results public. That took a year: my colleagues and I stopped doing real analysis work for this note over a year ago.

Well, today it was put out on arXiv: Expected Performance of the ATLAS Experiment – Detector, Trigger, and Physics. It is 1852 pages long, so I can’t exactly see everyone rushing to print this out for bedtime reading. No fears; I won’t be disappointed. But it is the PDF to download if you want to know what ATLAS thinks it will be able to do. We even do our best to simulate initial data conditions (misalignment, small data sets). It has chapters on all the big physics topics (Higgs, Top, Exotics, Standard Model, B, etc.) and performance expectations too (electron, muon, tau, b-tagging, etc.).

We’ve made it through the first day of 2009. I have mixed feelings about this coming year.

Federal Science Funding Levels. The economy is crashing down around our ears. Business responds quickly (layoffs :() – government is a bit slower. If things followed their natural course, that would mean science funding, along with everything else, would take yet another hit. However, the incoming Obama administration seems committed to spending the USA’s way out of this recession, so in the end funding might not change very much. I am hopeful that hard sciences funding will remain at least stable.

Federal Science Funding Directions. Climate change is what the Obama administration is focused on. There is a good chance that if you are researching something connected with climate change you will have access to increased funding opportunities. I would expect a funding profile similar to NIH’s funding during its years of increase. I would like to think that funding will spill over into the physical sciences – it should, because there are connections between the physical sciences and clean air technologies. All of this is applied scientific research. I hope that pure research funding gets an increase as well, as an investment in this country’s future (particle physics is pure research, of course). I’m feeling neutral here.

Federal Science. Obama’s science team is just a BLAST of fresh air when compared to the current administration’s. After all, his DOE nominee is a Nobel prize winning experimental physicist. Even if the science advisor isn’t elevated to a cabinet position (PDF), there will be someone in the room who knows a great deal about science, research, and how it is done. Even if there are cuts to science funding, I’m very hopeful there will be intelligent cuts rather than unscientifically motivated cuts. I’m very hopeful in this respect.

State Universities. The economy in the states is depressing. Some states, like my own (Washington), that rely on sales tax are being hit hard and very fast. State universities can’t escape that, obviously, and my university is no exception. Unfortunately, this usually translates to reduced raises, an inability to counter offers from outside, reduced support for research, etc. In our own department I wouldn’t be surprised if some people left for other universities that, for whatever reason, were able to make good offers in this awful climate. There is, in fact, already evidence this is happening. The only consolation is that most universities are in the same boat, and so most of them are having similar problems. I know less about private universities, but I do know many of their endowments are also in difficulty. I’m very downbeat about this: it will be a rough two years at least, I think.

My Science. When it comes to the Tevatron and the LHC… Well, I see no reason that the Tevatron shouldn’t continue to break records in luminosity (they just broke one earlier this week). And the experiments will continue to be flooded with data. While it is possible for one experiment or the other to have a catastrophic failure, I doubt that will happen. And they should continue to produce papers and science at a furious rate. I also am looking forward to real LHC collision data this year. While I hope it will be at the full 14 TeV, I suspect it is more likely to be at 2 TeV, just a hair above the Tevatron’s energy. We’ll hopefully know what the machine scientists think about that sometime in February. I’m really hopeful about this.

New Year’s Resolutions. Well, I made only one. That way I have a hope of keeping it: make bread more often. 🙂 I think there is a chance that I will keep this one. Especially now that I’ve said it publicly. 🙂

Of course, this should also be a fun year, as noted by the Beacon News:

Frustrated with their failed attempt to destroy the world in 2008, the scientists at Fermilab and their counterparts at Switzerland’s CERN physics lab resolve to perfect their new device, the Large Planet-Sucking Black-Hole-o-Tron.

Here is to another great year of data collection and science at the Tevatron and first collision data at the LHC!

I was looking at some predictions for the size of the Trigger Processing farm for ATLAS. This farm is basically racks and racks of computers that will decide, in real time, if the data should be kept or discarded as it rolls off the ATLAS detector.

Back in 2003 (see powerpoint slides, page 23, for example) we were predicting that the computer industry would be making 8 GHz processors by now. 🙂 Of course, due to power and heat problems, we are now sitting around 3 GHz, but with many cores.

Probably (for ATLAS, new predictions on this should be released in a few months). But in the context of the Tevatron–LHC Higgs race, that isn’t really what is important.

The ATLAS prediction that it might take 3 years to reach the 5 sigma level for a low mass Higgs discovery got a lot of airplay. It got me thinking. Let’s say the two accelerators are in close competition for the Higgs. The Tevatron can really only speak to the 3 sigma level; it isn’t ever going to get to the 5 sigma level. Further, at the Tevatron the CDF and DZERO experiments will have to combine their results even to reach that 3 sigma level. So I find it highly unlikely that the LHC will sit back and let the Tevatron get away with this. I certainly wouldn’t (and I’m on an LHC experiment). So what to do? Obvious – beat the Tevatron at its own game: combine results from CMS and ATLAS, and the 3 sigma level will be reached much more quickly. At that point the LHC has stolen the thunder from the Tevatron, and CMS and ATLAS can race each other to individual discoveries of the Higgs at the 5 sigma level.
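The back-of-the-envelope reason a combination helps so much: for two independent experiments with roughly equal sensitivity, Gaussian significances combine approximately in quadrature. A quick sketch of the arithmetic (my own rough illustration, not an official combination procedure – real combinations use full likelihoods):

```python
import math

def combined_significance(z_values):
    """Rough combination of independent, equal-weight Gaussian significances.

    For counting-style experiments, significance grows like sqrt(luminosity),
    so pooling two independent datasets of equal sensitivity behaves like
    adding the individual significances in quadrature: sqrt(z1^2 + z2^2).
    """
    return math.sqrt(sum(z * z for z in z_values))

# Two experiments each sitting at a mere ~2.2 sigma individually...
z = combined_significance([2.2, 2.2])
print(round(z, 2))  # ~3.11 - together they are already past the 3-sigma bar
```

So two results that individually wouldn’t merit a press release can, combined, clear the evidence threshold – exactly the game CDF and DZERO are playing, and the one I’m suggesting CMS and ATLAS could play too.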

I don’t expect the experiments to combine for the 5 sigma discovery (I could well be wrong, of course – I know of no plans either way!). There are many forces at play driving each experiment to make the first paper submission of a 5 sigma signal. This may, indeed, be what gives the Tevatron space to slip in with a 3 sigma evidence paper. And in the grand scheme of things – the Tevatron goes out with 3 sigma evidence and the LHC with a 5 sigma discovery – that doesn’t seem like a bad “split”. But who has ever heard of the free market working like that!?

As a member of DZERO I want to push as hard as possible to nail a low mass Higgs. As a member of ATLAS, I want the experiment to scramble as fast as possible to get the Higgs – evidence and discovery. After all, that is one of the LHC’s main points.

There was an unspoken theme at the DZERO workshop this week. Stick with the Tevatron for a huge, but iffy, payoff. Or switch to the LHC now because it is a “sure” bet (as sure as anything gets in research).

This is all about the Standard Model Higgs search at the two accelerators. If such a Higgs does exist, the LHC is bound to discover it. The LHC has some “difficulty” at low Higgs mass (below about 125 GeV or so). Difficulty for the LHC means it could take up to 3 years for a single experiment to declare a 5 sigma discovery, the gold standard of “discovery”.

At the Tevatron the Higgs analysis is all about difficulty. Each new Higgs result you hear or read about is a tour-de-force of new techniques and new methods of extracting every last bit of signal out of the experiments. I don’t remember techniques this sophisticated from when I was a graduate student. And the LHC’s pre-data analyses are not as sophisticated either (on the other hand, they don’t need to be).

Global fits to the Standard Model currently predict the Higgs to be low mass – between 114 GeV and 120 or 125 GeV. The Tevatron is currently a factor of two away from being sensitive to this mass range. By doubling our dataset to 6 fb-1 and making a number of improvements to our analyses, we expect that we should get there. These improvements are not easy – they will require a lot of work and a lot of people. Nor are they assured. At best, if the Higgs is there and we aren’t unlucky, we should be able to see it at the 3 sigma level – but never at the 5 sigma discovery level. That will have to be left to the LHC in any case.
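The reason doubling the dataset isn’t enough on its own: statistical significance grows only like the square root of the integrated luminosity, so the analysis improvements have to supply the rest of the factor of two. A rough sketch of the arithmetic (my own illustration; the function name and the clean factorization into "luminosity × analysis" are simplifications):

```python
import math

def sensitivity_gain(lumi_ratio, analysis_gain=1.0):
    """Rough scaling: significance ~ sqrt(integrated luminosity),
    with analysis improvements multiplying on top of that."""
    return math.sqrt(lumi_ratio) * analysis_gain

# Doubling the dataset alone buys only ~1.41x in sensitivity...
print(round(sensitivity_gain(2.0), 2))  # ~1.41

# ...so closing a full factor-of-2 gap needs roughly another 40%
# squeezed out of the analyses themselves:
needed = 2.0 / math.sqrt(2.0)
print(round(sensitivity_gain(2.0, analysis_gain=needed), 2))  # 2.0
```

This is why every extra bit of signal extraction matters so much at the Tevatron: the accelerator can only deliver half of the required improvement.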

So is it worth sticking with the Tevatron? Well… the payoff would be huge to see something at the 3 sigma level. So it is like a lottery with high stakes. The chance of winning is not all that sure, but the jackpot is big!

Me? Well, I’m working on both the LHC and the Tevatron (as are many US physicists). I have a student working on the Higgs search at Fermilab, for example. I’m deeply involved in a number of topics at the LHC as well.

What will happen? Hard to tell. Things to watch? Well, that is easy. There are only two things that really matter here – the performance of the Tevatron and the performance of the LHC. Each physicist who is on both collaborations is performing some complex calculus to optimize their time on the two experiments depending on the chances of success.

A few posts back some folks were wondering where to watch for LHC news as the startup nears. The “LHC First Beam” website seems like a pretty good place to start. For example, recently posted:

After a period of optimization, one bunch was kicked up from the transfer line into the LHC beam pipe and steered about 3 kilometres around the LHC itself on the first attempt. On Saturday, the test was repeated several times to optimize the transfer before the operations group handed the machine back for hardware commissioning to resume on Sunday.

Hey! There has been beam in the LHC!! The website contains a “count-down” clock too. 28 days…

Now I have a stupid question for the folks putting this website together: Why isn’t there an RSS feed!?!?! So old-skool!

The space agency wanted to make sure its long-awaited and astronomically expensive telescope — soon to be launched into orbit above the turbulent fog of the atmosphere — made an appropriately cosmic splash. The advice from those of us in the press peanut gallery was always the same and simple: pictures — cosmic postcards like the live pictures of other planets being transmitted from the Viking and Voyager spacecraft — early and often.

This is PR 101 — everyone, including us scientists, is easily captured by pictures. Especially stunning ones. Sure — they may not be the best way to convey accurate scientific measurements – but they are very easy to relate to. Are we doing the right thing for the start of the LHC? Do we know what pictures – science pictures – we are going to be pushing to the public?

ATLAS has a whole outreach group (as does CMS, I’m sure). We have the ATLAS Book. We have a movie. But what cool picture are we going to give the press when the science starts? Another picture of our detector – like the one attached to this blog posting? Surely we can do better. Our event displays – most are tuned for us to look at as scientists, not for the press or the public. Do we have anything?

Enough of my ideas. What should we have ready when science starts to roll out? At the Tevatron we write these plain-English summaries. They aren’t totally plain, unfortunately. But perhaps we should get someone from the PR office to work with every published analysis on something like that?