connect communicate collaborate

Introduction
- Describe some of the changes in the computing models of the LHC experiments.
- Demonstrate the importance and usage of the network.
- Show the relation between LHCONE and LHCOPN.
- Bring together and present the user requirements for future LHC physics analysis.
- Provide the information to facilitate the presentations on the Architecture and the Implementation of LHCONE.

A Little History
- Requirements paper from K. Bos (ATLAS) and I. Fisk (CMS) in autumn 2010: the experiments had devised new compute and data models for LHC data evaluation, essentially assuming a high-speed network connecting the Tier2s worldwide.
- Ideas & proposals were discussed at a workshop held at CERN in Jan 2011, which gathered input from the networking community.
- An "LHCONE Architecture" document was finalised in Lyon in Feb 2011, where K. Bos proposed to start with a prototype based on the commonly agreed architecture.
- K. Bos and I. Fisk produced a "Use Case" note with a list of sites for the prototype.
- In Rome in late Feb 2011, some NRENs & DANTE formed ideas for the "LHCONE prototype planning" document.

LHC: Changing Data Models (1)
- The LHC computing model based on MONARC has served well for more than 10 years: ATLAS strictly hierarchical, CMS less so.
- The successful operation of the LHC accelerator and the start of data analysis brought a re-evaluation of the computing and data models.
- Flatter hierarchy: any site might in the future pull data from any other site hosting it.
[Figure: LHCOPN topology]
Artur Barczyk

LHC: Changing Data Models (2)
- Data caching: a bit like web caching. Analysis sites will pull datasets from other sites on demand, including from Tier2s in other regions, then make them available for others.
- Possible strategic pre-placement of datasets: datasets placed close to the physicists studying that data and/or to suitable CPU power; use of continental replicas.
- Remote data access: jobs executing locally, using data cached at a remote site in quasi-real time.
- Traffic patterns are changing: more direct inter-country data transfers.
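The data-caching idea above can be illustrated with a minimal pull-through cache sketch. This is a toy model only: the site and dataset names are hypothetical, and real WLCG transfers use dedicated tools (e.g. GridFTP), not code like this.

```python
# Toy sketch of the pull-through ("data caching") model described above.
# Site and dataset names are illustrative, not real WLCG endpoints.

class Site:
    def __init__(self, name, datasets=None):
        self.name = name
        self.cache = dict(datasets or {})   # dataset name -> contents

    def fetch(self, dataset, peers):
        """Return a dataset, pulling it from a peer site on demand."""
        if dataset in self.cache:           # local hit: no network transfer
            return self.cache[dataset]
        for peer in peers:                  # miss: pull from any site holding it
            if dataset in peer.cache:
                self.cache[dataset] = peer.cache[dataset]  # keep a copy for others
                return self.cache[dataset]
        raise KeyError(f"{dataset} not found at any peer of {self.name}")

# A Tier2 pulls a dataset from a Tier2 in another region on demand,
# then itself becomes a source for further sites (the "flatter hierarchy").
t2_europe = Site("T2-Europe", {"ESD-sample": b"event data"})
t2_us = Site("T2-US")
t2_asia = Site("T2-Asia")

t2_us.fetch("ESD-sample", [t2_europe])        # T2 -> T2 transfer across regions
data = t2_asia.fetch("ESD-sample", [t2_us])   # served from the newly cached copy
print(data)  # b'event data'
```

The point of the sketch is that once a site has pulled a dataset, any later consumer can fetch it from the nearest cached copy, which is what shifts traffic from strict T1→T2 hierarchies toward direct T2→T2 flows.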

Data Flow EU – US: ATLAS Tier 2s
- Example above is from US Tier 2 sites.
- Exponential rise in April and May, after LHC start.
- Changed data distribution model at the end of June: caching ESD and DESD.
- Much slower rise since July, even as luminosity grows rapidly.
Kors Bos

LHC: Evolving Traffic Patterns
- One example of data coming from the US: 4 Gbit/s for ~1.5 days (11 Jan 2011), seen on the transatlantic link, the GÉANT backbone and the NREN access link.
- Not an isolated case; often made up of many data flows.
- Users are getting good at running gridftp.

CMS Data Transfers: Data Placement for Physics Analysis
- Once data is on the WLCG, it must be made accessible to analysis applications.
- The largest fraction of analysis computing at the LHC is at the Tier2s.
- New flexibility reduces latency for end users.
- T1→T2 traffic dominates; T2→T2 traffic emerges.
Daniele Bonacorsi

Requirements for LHCONE
- Sites are free to choose the way they wish to connect.
- Flexibility & extensibility required: T2s change, and the analysis usage pattern is more chaotic – dynamic networks are of interest.
- Worldwide connectivity required for LHC sites.
- There is concern about LHC traffic swamping other disciplines.
- Monitoring & fault-finding support should be built in.
- A cost-effective solution is required; this may influence the architecture.
- No isolation of sites must occur, and no interruption of the data-taking or physics analysis.
- A prototype is needed.

Requirements: Fitting in with LHC 2011 Data Taking
- Machine development periods & technical stops provide pauses in the data taking, but this does not mean there is plenty of time.
- The LHCONE prototype might grow in phases.