Abstract

The CMS experiment expects to manage several petabytes of data each year during the LHC programme, distributing them over many computing sites around the world and enabling data access at those centres for analysis. CMS has identified the distributed sites as the primary location for physics analysis, in order to support a wide community with thousands of potential users. This represents an unprecedented experimental challenge in terms of the scale of distributed computing resources and the number of users. An overview of the computing architecture, the software tools and the distributed infrastructure is given. Summaries of the experience in establishing efficient and scalable operations in preparation for CMS distributed analysis are presented, followed by the users' experience in their current analysis activities.