First, a bit of background on this paper. It was authored by two theorists who analyzed publicly released Fermi-LAT (GLAST) data. Fermi is a NASA-funded project, and one of its stipulations is that all data it collects must be made publicly available 6 months after it has been collected. The authors of the paper downloaded the data, used a simple background model, added in their dark matter theory, and did a fit. And pow:

The red points are the data from Fermi, the dash-dot line and the dotted line are backgrounds (galactic diffuse, and a single TeV source), and the dashed line is their model. Nice fit, eh? Yep – looking at this my first reaction is “Wow – is this right? This is big – how did Fermi miss this?” and then I run across the hall to find someone who actually knows this data well.
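To make the procedure concrete, here is a minimal sketch of the kind of fit described above: a smooth background component plus a bump for the new-physics signal, fit to a binned spectrum. This is not the authors' actual model – the power-law background, Gaussian bump, and all the numbers here are made up purely for illustration.

```python
# Hypothetical sketch: fit background + signal to a fake binned spectrum.
# Power-law background and Gaussian bump are stand-ins, not the real models.
import numpy as np
from scipy.optimize import curve_fit

def model(E, A, gamma, S, E0, width):
    """Power-law background plus a Gaussian bump centered at E0."""
    background = A * E ** (-gamma)
    signal = S * np.exp(-0.5 * ((E - E0) / width) ** 2)
    return background + signal

# Fake "data": energies (arbitrary units) with Poisson-fluctuated counts
rng = np.random.default_rng(42)
E = np.linspace(1.0, 10.0, 40)
truth = model(E, 1000.0, 2.0, 50.0, 4.0, 0.5)
counts = rng.poisson(truth).astype(float)

# Least-squares fit of the combined background + signal model
popt, pcov = curve_fit(model, E, counts,
                       p0=[800.0, 1.8, 30.0, 3.5, 0.7],
                       sigma=np.sqrt(counts + 1.0))
A, gamma, S, E0, width = popt
```

The point of the story, of course, is that the result of a fit like this is only as good as the background model: leave a real source out of `model` and the fit will happily assign its flux to the "signal" term.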

It turns out the basic problem with this analysis is that not all sources of background are included. This is the galactic center, and, as one would imagine, there are lots of sources there – not just the one TeV source modeled above. My impression from hallway conversations is that when you take all of these sources into account there is much less (if any) room left for the dark matter model. I don’t think that Fermi has published a paper on this yet, but I suspect they will at some point soon.

Ok, so all’s well. Fermi will publish the paper and everyone will know the right way to do this non-trivial analysis. Except that things got away from them. Nature News has picked it up and written a short update, which is pretty widely read. Now Fermi has a PR problem on its hands – people are running around talking about their data and they’ve not really had a voice yet (the science coordinator for Fermi was interviewed for the piece, but her comments were relegated to the end of the post). Fermi is a big collaboration (yes, not the size of the LHC), and even if their paper is close to publication it would probably be at least a month or more before the collaboration could agree on a response. So what to do?

There are a lot of issues surrounding making data public. To first order, it is the taxpayers who are paying for these experiments, so the data should be public. On the other hand, you can already see that besides the work and infrastructure of making the data public (which costs real $$ – especially for a big experiment like Fermi or one of the LHC experiments), you have to respond to other folks who analyze your data – basically pointing out their mistakes and trying to help them along, even when they might be in competition with some of your internal analyses. In NASA’s case all the data has to be made public – it is written into every grant submission, and NASA even provides money for it. This is not currently the case for particle physics. In many of these advanced experiments the data is quite complex – and someone who can’t depend on the large infrastructure of the experiment to help interpret it is bound to have some difficulties.

One only wishes that the authors had gotten in contact with some Fermi folks before submitting their note to the archive…

It was a fascinating listen. Hans-Joachim Popp is the head of computing (CIO) at Deutsches Zentrum für Luft- und Raumfahrt (DLR), the German Aerospace Center, which writes software for German spacecraft. Some of the things that he said were jaw-dropping in the context of the ATLAS software (and DZERO software). For example, averaged over the whole project, about 0.6 lines of code are written per hour. They have code reviews where the developer stands in a room and has to defend every single line of code. Their test code is about 12 times as long as the actual code they write. They scan all of their code with static analysis tools to look for “dumb” bugs.
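For a sense of what those static analysis scans catch, here is a small example of a classic “dumb” bug that tools like pylint flag immediately (the function name is made up for illustration; DLR's actual tools and languages are surely different):

```python
# A classic "dumb" bug that static analyzers flag: in Python, a mutable
# default argument is created once, at definition time, and then shared
# across every call to the function.
def record_hit(channel, hits=[]):  # pylint warns: dangerous-default-value
    """Append a channel name to a hit list (buggy version)."""
    hits.append(channel)
    return hits

first = record_hit("tracker")
second = record_hit("calorimeter")
# Surprise: second == ["tracker", "calorimeter"] -- the list from the
# first call leaked into the second. The fix is a default of None, with
# "if hits is None: hits = []" inside the function body.
```

Bugs like this pass casual testing and code-read-throughs all the time, which is exactly why a mechanical scan of every line is worth the effort on safety-critical code.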

Apparently the programming environment is similar for devices that run in intensive care units in hospitals. Comforting, I suppose. Modern critical airplane software has dueling versions of the code written (much like the space shuttle). In the case of the A380, apparently, one is written in C and the other in Ada* (other than this podcast this slide show was the only reference I could find).

Of course, the standard modern-day project is only 20,000 lines of code. Voyager, which has been up there for 30 years now, has only 4 KB of memory for programming – so only about 4000 assembly language instructions. He also mentioned that the ability to update the code on Voyager has been crucial to keeping the mission going this long.

I wonder how much of this formal methodology was followed for the GLAST LAT software? The trigger, for example? I remember Toby talking about how it was impossible to change because of all the paperwork and reviews involved in changing a single line of code.

One thing I’ll miss about returning to Seattle is that I’ll lose this commute. Yeah, backwards – I’d rather be without the commute. But one advantage is I get to listen to all these podcasts (normally I listen to news and politics from the USA).

* I was in love with the idea of Ada when it first came out. I never wrote any code in it, but I thought it was the coolest programming language – heck, it has multi-threading built in. It was over-designed to catch errors early. Now, I’d hate it as being too restrictive.

If you are online right now (about 11:45am Eastern Time), you can watch the GLAST launch realtime video here!

11:59: It sounds like they are working in the D0 control room! They have disabled an alarm! We do that all the time. 🙂

12:08: Already up 35 miles! And fast — 7000 miles/hour.

12:12: The room they track the progress from looks just like one of our movable counting houses in a particle physics experiment. Lots and lots of racks of electronics and displays. I’d feel at home. One difference: the guys I can see are wearing ties! Not tee-shirts!

12:14: 100 miles up there. I’m in an office at CERN watching this on my computer. The office is crammed full of people — but they are all working on various ATLAS things. I basically had to yell at them to get them to watch the launch, they were so intensely involved in the problems they were solving. 🙂

12:16: Ha — the guy watching those banks of displays and electronics is getting supplemental information from what looks like a laptop! 🙂 That is so just like D0!

12:19: Now it is coasting in a low orbit – so it will sit there for an hour. That was cool! I’ve never seen a launch before – so this is the closest I’ve ever come. Sounds like an hour or two from now that will all be done and the solar arrays will open and GLAST infrastructure will start to power up.

1:40pm: Hey — they got it in orbit and the solar panels out and the thing is under power. That was 90 minutes from launch pad to orbit. It was, oh, about 10 years from design to launch pad. And it will be another 2 weeks before they start to power up the instrument (the next two weeks will be spent making sure the spacecraft that carries GLAST is in good shape). 90 minutes. Wow.

And congrats to my fellow prof at UW, Toby Burnett, who has been working on this from before my arrival at UW. He must be dancing in the streets by now!

My friend and colleague, Toby Burnett, is right now down in Florida for a GLAST collaboration meeting. This is a funny place for a GLAST meeting… until you realize the launch date for the satellite is “any day now!”. The last I heard it was supposed to be June 7th (see the official page for news releases). But apparently the members of GLAST have been told that Saturday is now out.

I’ve been urging him to blow off university duties and stay down there if he can. After all, he has been working, along with everyone else in the collaboration, towards this goal for the last 10 years!