Wednesday, April 14, 2010

Plenary round up

Catching up on the technical plenaries from yesterday and today: yesterday we had a talk within the area of bioinformatics followed by autonomic computing; today the opening plenary was on fusion, then WLCG - the computing side of the Large Hadron Collider at CERN.

Modesto Orozco started off yesterday by pointing out that he wasn't a computing person, claiming he could tell the difference between a computer and a toaster - but that more subtle distinctions weren't quite so clear. Possibly a bit self-deprecating, given that he went on to show the impact that computers have had within bioinformatics, and that we can expect them to have in the future. It was interesting to hear how the problem domains are tackled, and how most real tasks consist of a number of subtasks - simulation, database lookup and refinement procedures - which swap between HTC and HPC segments.

The second talk, on autonomic computing, focused on a few techniques the speaker had used and extolled general principles. The one area I felt was missing was some hint as to how we might go about using these ideas - indeed, one of the questions was about toolkits and so on, to which the response was that there aren't any. Overall, it felt a bit flat in the end.

This morning started with a talk on the EUFORIA work, simulation of fusion processes, leading towards the construction of ITER - a Tokamak-style fusion reactor. As with the bioinformatics yesterday, there are several different scales of looking at the same thing: proteins in the bio case, plasmas in the fusion case. Ideally, we'd use first principles for everything, but that's intractable for real problems, so there is a collection of models at different length and time scales, with some steps to integrate the ensemble into a single picture.

Finally, the LHC and CERN. In some respects this was one of the first 'customers' of the Grid, and after such a long run-up it's nice to see the benefit of the Grid. Interestingly, the time scale of LHC experiments from collision until the data are available for physics analysis is a matter of hours - and that's sustained at 4 GB/s. One point made was that the Grid was born out of political and social concerns - it would be much simpler to put all the computers in one room, but that's not an option (power, sociological reasons, etc.). Sustainability is the main thread of thinking here - looking at using mainstream tools rather than HEP specials.