
ATLAS uses its cloud structure for communications
- Every Tier-2 is coupled to a Tier-1
- 5 national clouds; the others have foreign members (e.g. "Germany" includes Krakow, Prague, Switzerland; Netherlands includes Russia, Israel, Turkey)
- Each cloud has a Tier-2 coordinator

Regional organizations, such as:
+ France Tier-2/3 technical group:
- coordinates with the Tier-1 and with the experiments
- monthly meetings
- coordinates procurement and site management
+ GRIF: Tier-2 federation of 5 labs around Paris
+ Canada: weekly teleconferences of technical personnel (T1 & T2) to share information and prepare for upgrades, large production, etc.
+ Many others exist, e.g. in the US and the UK

Tier-2 Overview Board reps: Michel Jouvin and Atul Gurtu have just been appointed to the OB to give the Tier-2s a voice there.

Tier-2 mailing list: it actually exists and is being reviewed for completeness & accuracy

Tier-2 GDB: the October GDB was dedicated to Tier-2 issues
+ reports from the experiments: role of the T2s; communications
+ talks on regional organizations
+ discussion of accounting
+ technical talks on storage, batch systems, middleware
Seems to have been a success; repeat a couple of times per year?

But how much of this is a problem of under-use rather than under-contribution? A task force has been set up to extract installed capacities from the GLUE schema.
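As a rough illustration of what "extracting installed capacities from the GLUE schema" involves, here is a hypothetical sketch that aggregates CPU and storage per site from GLUE-1.3-style records as published in an information system such as the BDII. The attribute names follow GLUE 1.3; the task force's actual tooling is not described here, so the record format and helper are assumptions.

```python
# Hypothetical sketch: aggregate installed capacity per site from
# GLUE-1.3-style records (attribute names per the GLUE 1.3 schema;
# the task force's real tooling is not shown in the source).

def installed_capacity(records):
    """Sum logical CPUs and online storage (TB) per site."""
    totals = {}
    for rec in records:
        site = rec["GlueSiteUniqueID"]
        cpus, disk_tb = totals.get(site, (0, 0.0))
        cpus += int(rec.get("GlueSubClusterLogicalCPUs", 0))
        # GlueSATotalOnlineSize is published in GB
        disk_tb += float(rec.get("GlueSATotalOnlineSize", 0)) / 1000.0
        totals[site] = (cpus, disk_tb)
    return totals

# Example: a CE record and an SE record for the same (hypothetical) site
caps = installed_capacity([
    {"GlueSiteUniqueID": "GRIF", "GlueSubClusterLogicalCPUs": "512"},
    {"GlueSiteUniqueID": "GRIF", "GlueSATotalOnlineSize": "250000"},
])
print(caps)   # {'GRIF': (512, 250.0)}
```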

Monthly APEL reports still undergo significant modifications from the first draft.
+ Good, because communication with the T2s is better
+ Bad, because APEL accounting still has problems
Accounting seems to be very finicky; it breaks when the CE or MON box is upgraded.

How does the LHC delay affect the requirements and pledges for 2009?
+ We are told to go ahead and buy what was planned, but we have already seen some under-use of CPU capacity, and we are now starting to see this for storage as well

We need to use something other than SpecInt2000!
+ this benchmark is totally out of date and useless for new CPUs
+ continued delays in SpecHEP can lead to sub-optimal decisions

Networking to the worker nodes is now an issue.
+ with 8 cores per node, a 1 GigE connection gives ≈ 16.8 MB/sec/core
+ Tier-2 analysis jobs run on reduced data sets and can do rather simple operations; ATLAS has seen 7.5 MB/sec per job, and it could be much more (×10?)
+ Do we need to go to InfiniBand?
+ We certainly need increased capability for the uplinks; we should have, at a minimum, fully non-blocking GigE to the worker nodes.
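The arithmetic behind the per-core figure can be checked as follows. This sketch assumes the slide treats 1 GigE as 2^30 bits/s; with decimal gigabits (10^9 bits/s) the share would be about 15.6 MB/s/core instead. The observed 7.5 MB/s per job is taken from the bullet above.

```python
# Back-of-envelope check of the per-core bandwidth figure quoted above.
# Assumption: 1 GigE counted as 2**30 bits/s (binary gigabit).

GIGE_BYTES_PER_S = 2**30 / 8        # link capacity in bytes per second
CORES_PER_NODE = 8

share_mb = GIGE_BYTES_PER_S / CORES_PER_NODE / 1e6
print(f"per-core share: {share_mb:.1f} MB/s")          # ~16.8 MB/s

observed = 7.5                      # MB/s seen per ATLAS analysis job
headroom = share_mb / observed
print(f"headroom over observed rate: {headroom:.1f}x") # ~2.2x
```

With only ~2.2× headroom per core, a ×10 increase in per-job I/O would saturate the node link several times over, which is why the uplink and node-network questions above arise.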

We need more guidance from the experiments. The next round of purchases is now!