
HEPiX Spring 2011 Workshop

Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

Michel Jouvin (LAL / IN2P3), Sandy Philpott (JLAB), Walter Schön

Description

HEPiX meetings bring together IT system support engineers from the High Energy Physics (HEP) laboratories, institutes, and universities, such as BNL, CERN, DESY, FNAL, IN2P3, INFN, JLAB, NIKHEF, RAL, SLAC, TRIUMF and others.

Meetings have been held regularly since 1991, and are an excellent source of information for IT specialists in scientific high-performance and data-intensive computing disciplines. We welcome participation from related scientific domains for the cross-fertilization of ideas.

The hepix.org website provides links to information from previous meetings.


In grid computing we use an X509 PKI security infrastructure. This infrastructure is used to establish secure connections between hosts to deliver payloads, which often leads to scalability and reliability issues. This talk presents the alternative approach of signing messages for asynchronous handling, authenticating the payload rather than the connection.
The implications of this approach will be illustrated by showing how service interdependency can be reduced and clustering simplified. AMQP (RabbitMQ) is used as the transport mechanism to illustrate these concepts. Both the openssl command line and a Python library can be used to authenticate signed messages, making scalable, secure authentication between sites' resources practical for administrators.

Speaker:
Owen Synge
(DESY (HH))

Slides
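The signed-message pattern described in the abstract above can be illustrated with a short sketch. The talk uses X509 signatures via the openssl command line or a Python library; the stdlib-only sketch below substitutes a shared-secret HMAC purely to show the core idea of authenticating the payload rather than the connection (all names and the message format are invented):

```python
# Illustration only: HMAC stands in for the X509 signatures used in the
# talk. The signed blob can travel over any broker (e.g. AMQP); the
# consumer trusts the signature, not the transport connection.
import hashlib
import hmac
import json

SECRET = b"site-shared-secret"  # hypothetical pre-shared key

def sign_message(payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "sig": sig}

def verify_message(msg: dict) -> bool:
    expected = hmac.new(SECRET, msg["body"].encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison; a tampered body or signature fails.
    return hmac.compare_digest(expected, msg["sig"])

msg = sign_message({"job": 42, "state": "done"})
assert verify_message(msg)
tampered = dict(msg, body=msg["body"].replace("42", "43"))
assert not verify_message(tampered)
```

With real X509 material the shape is the same: the producer attaches a signature computed with its private key, and any consumer holding the site certificate can verify the payload offline.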

14:30

Computer security update (30m)

This presentation provides an update on the security landscape since the last meeting.
It describes the main vectors of compromise in the academic community and discusses security risk management in general, as well as the security aspects of current hot topics in computing, such as identity federation and virtualisation.

Speaker:
Mr Romain Wartel
(CERN)

Slides

15:00

Host-based intrusion detection with OSSEC (30m)

In this talk the open source host-based intrusion detection system OSSEC is described.
Besides an overview of its features, it will be explained how to use it for non-security-related monitoring and notification. Furthermore, several possible real-life scenarios will be demonstrated and some of the current drawbacks will be discussed.

Speaker:
Bastian Neuburger
(GSI)

Slides

15:30
→
16:00

Coffee Break
30m
Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

16:00
→
17:30

Computing

Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

Convener:
Dr Michele Michelotto
(Univ. + INFN)

16:00

Batch Monitoring and Testing (30m)

In order to improve its batch service for local and Grid users, development is ongoing at CERN to design a batch monitoring system and set up a test instance. The goal is to enhance the batch service by investigating new scheduler features, fine-tuning the already used ones and decreasing the time spent in problem identification and fault resolution.

Speaker:
Mr Jerome Belleman
(CERN)

Slides

16:30

Selecting a new batch system at CC-IN2P3 (30m)

Two years ago, CC-IN2P3 decided to replace its home-made batch system (BQS) with a new product.
This presentation will describe the selection process we set up and explain our choice.

Speaker:
Mr Bernard CHAMBON
(CC-IN2P3 /CNRS)

Slides

17:00

Grid Engine setup at CC-IN2P3 (30m)

As you know, we chose Grid Engine as the next batch system for CC-IN2P3.
This presentation will focus on two aspects we have examined during the last months:
1) scalability and robustness testing;
2) specific requirements at CC-IN2P3: problems and solutions.

This talk will describe the updated status of the computing infrastructure of the High Energy Physics Division (HEPD): the LAN (400 hosts), the mail service for the Institute, other centralized servers, and the computing cluster. A number of updated topics are covered: security and SPAM, cluster virtualization, WiFi, and video conferencing.

Overview of computing systems at Diamond, including current status and planned future developments.

Speaker:
Ms Tina Friedrich
(Diamond Light Source Ltd)

Slides

10:30

BNL Site report (15m)

A report on the current status of the RHIC/ATLAS Computing Facility at BNL with an emphasis on developments and updates since the last Fall Hepix meeting.

Speaker:
Dr Ofer Rind
(Brookhaven National Laboratory)

Slides

10:45
→
11:15

Coffee Break
30m
Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

11:15
→
11:45

Site Reports

Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

Conveners:
Mr Alan Silverman
(CERN), Philippe Olivero
(CC-IN2P3)

11:15

ASGC site report (15m)

ASGC current status.

Speaker:
Mr Felix Lee
(Academia Sinica)

Slides

11:30

PSI - Site report (15m)

Site report for the Paul Scherrer Institut.

Speaker:
Dr Derek Feichtinger
(PSI)

Slides

11:45
→
13:15

IT Infrastructure

Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

Convener:
Dr Helge Meinhard
(CERN-IT)

11:45

Drupal at CERN (30m)

Drupal is an open source content management platform used worldwide. CERN has chosen Drupal for building multilingual, content-managed web sites and applications. The infrastructure is based on a cluster of Apache web servers, MySQL database servers and storage servers, running the SLC6 operating system. The high-availability configuration is achieved with the Red Hat Cluster Suite. The talk will present the details of the Drupal configuration at CERN, the current status of the project, and the integration with existing CERN services: e-groups, CERN Authentication and the CERN Document Server.

Speaker:
Mr Juraj Sucik
(CERN)

Slides

12:15

Indico - Present and future (30m)

Indico (Integrated Digital Conference: http://indico.cern.ch) is a web-based, multi-platform conference lifecycle management system and agenda. It has also become the long-term archiving tool for documents and metadata related to all kinds of events that take place at CERN. The software is used in production at CERN (hosting more than 114,000 events, 580,000 presentations and 770,000 files, with around 10,000 visitors per day) and is installed in more than 90 institutes world-wide.
Indico has changed considerably over the last three years; we will review these changes and new features, and give an overview of Indico's future.

Speaker:
Mr Jose Benito Gonzalez Lopez
(CERN)

Slides

12:45

Invenio at CERN (30m)

Invenio <http://invenio-software.org/> is a software suite that enables running a digital library or document repository on the web. The technology offered by the software covers all aspects of digital library management, from document ingestion through classification, indexing and curation to dissemination. Invenio was originally developed at CERN to run the CERN Document Server (CDS), managing over 1,000,000 bibliographic records in high-energy physics since 2002, covering articles, books, journals, photos, videos and more. Invenio is nowadays co-developed by an international collaboration comprising institutes such as CERN, DESY, EPFL, FNAL and SLAC, and is used by about thirty scientific institutions worldwide.
The presentation will focus on the current and future usage of Invenio at CERN: integration with other CERN IT services (Drupal, GRID, Indico, MediaArchive, AIS, etc.) as well as other HEP-related information systems, newly introduced features and workflows, usage statistics, etc. The software development strategy, including planned future developments, as well as insight into the underlying technologies, will be covered.

Speaker:
Mr Jerome Caffaro
(CERN)

Slides

13:15
→
14:15

Lunch
1h
Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

14:15
→
15:45

Computing

Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

14:15

OpenMP Performance on Virtual Machines (30m)

Virtualization technology has been applied to a variety of areas including server consolidation, High Performance Computing, and Grid and Cloud computing. Because applications do not run directly on the hardware of the host machine, virtualization generally causes a performance loss for both sequential and parallel applications.
This talk studies OpenMP applications running on a virtualized multicore machine. It shows the overhead of parallelization and compares the parallel performance on virtual machines with that of native executions. In one interesting scenario, an application runs much slower in parallel than sequentially. A performance analysis tool is applied to investigate the cause of this abnormal behavior, and the talk demonstrates the resulting performance optimization.

Speaker:
Dr Jie Tao

14:45

CMS 64-bit transition and multicore plans (30m)

CMS has ported its complete software stack to run natively on 64-bit Linux and is using it for all its computing workflows, from data acquisition to final analysis. In this talk we'll present our experience with this transition, both in terms of deployment issues and actual performance gains. Moreover, we'll give insight into what we consider our present and future challenges, focusing in particular on how we plan to exploit multi-core architectures.

Speaker:
Mr Giulio Eulisse
(Fermilab)

Slides

15:15

Performance Comparison of Multi- and Many-Core Batch Nodes (30m)

The compute power of batch nodes is measured in units of HEP-SPEC06 which is based on the industry standard SPEC CPU2006 benchmark suite.
In this talk I will compare the HEP-SPEC06 scores of multi-core worker nodes with accounting data taken from the batch system.

Speaker:
Mr Manfred Alef
(Karlsruhe Institute of Technology (KIT))

Slides
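The kind of comparison described in the abstract above amounts to simple arithmetic: scale a node's published HEP-SPEC06 score by wall-clock time and compare it with what the batch accounting recorded. All numbers and function names below are invented for illustration and are not taken from the talk:

```python
# Hypothetical sketch: compare the nominal HEP-SPEC06 capacity of a
# worker node with the work recorded by batch accounting.
def nominal_hs06_hours(hs06_score, wall_hours, occupancy=1.0):
    """Capacity the node should deliver, in HS06-hours."""
    return hs06_score * wall_hours * occupancy

def accounted_hs06_hours(cpu_hours, hs06_per_core):
    """Work the batch system recorded, scaled to HS06-hours."""
    return cpu_hours * hs06_per_core

nominal = nominal_hs06_hours(hs06_score=160.0, wall_hours=24.0)
accounted = accounted_hs06_hours(cpu_hours=320.0, hs06_per_core=10.0)
efficiency = accounted / nominal
print(f"{efficiency:.0%}")  # prints "83%" for these invented numbers
```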

15:45
→
16:15

Coffee Break
30m
Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

16:15
→
17:30

IT Infrastructure

Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

Convener:
Dr Helge Meinhard
(CERN-IT)

16:15

FAIR 3D Tier-0 Green-IT Cube (1h)

FAIR computing presents computing requirements for the first-level processing of experiment data that exceed those at CERN. All computing resources, including the first-level event selectors, will be hosted in one data center, which is currently being planned. It sets new standards with respect to energy density, implementing more than 100 kW/sqm, and energy efficiency, requiring less than 10% of the power for data-center cooling, while allowing the use of general-purpose computer servers. The overall FAIR computing concept is presented, as well as the FAIR Tier-0 data center architecture.

Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

09:15

Evaluation of the Gluster file system at IHEP (30m)

GlusterFS is an open source, clustered file system capable of scaling to several petabytes and handling thousands of clients. At IHEP, we set up a testbed to evaluate the file system, including functionality, performance and current status. The advantages and disadvantages for HEP data processing are also discussed.

The talk will show the benefits of grouping a number of heterogeneous tape libraries into one virtual container of tape media and drives. The backup and archive applications send their data to this huge container, which has all the necessary mechanisms to control and access the tape resources (cartridges, drives, physical libraries). The implementation is based on the IBM software "Enterprise Removable Media Manager".

We will present the performance achieved during data taking for the 2010 LHC run, including the heavy-ion run. The operational benefits reaped from the deployed improvements, as well as the roadmap for further developments to consolidate the system and lower its deployment cost, will be introduced. Our performance assessment of the new generation of Oracle tape drives, the T10000C, will also be shown.

Speaker:
Eric Cano
(CERN)

Slides

10:45
→
11:15

Coffee Break
30m
Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

11:15
→
12:45

Storage & File Systems

Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

11:15

The DESY Grid-Lab, a detailed 'local access protocol' evaluation (30m)

Since mid-2010, DESY IT has been operating a performance evaluation facility the size of a small gLite Tier II, the DESY Grid-Lab. Regular gLite software is deployed, allowing commonly used LHC analysis jobs as well as applications provided by other communities to be executed. This presentation focuses on the comparison of different implementations of XROOTD and dCap, as well as of the NFS 4.1/pNFS dCache implementation. The evaluation scenarios include real-world analysis jobs of LHC VOs, including standard HammerCloud jobs, I/O-intensive jobs provided by the ROOT team, and examples from non-HEP communities.

Speaker:
Dmitry Ozerov
(DESY)

Slides

11:45

Lustre at GSI (30m)

Lustre has been employed with great success as the general-purpose distributed file system for all experimental and theory groups at GSI.
Currently there are 100 million files stored on Lustre, and between batch nodes, interactive nodes and desktops there are about 500 clients with access to Lustre.
Past issues with stability have been overcome by running Lustre version 1.8. Hardware upgrades of the metadata servers and OSSs are under way. The total file space will soon increase to more than 2 PB.

Speaker:
Thomas Roth
(GSI)

Slides

12:15

Evaluation of distributed file systems using a trace and replay mechanism (30m)

Reliable benchmarking of file systems is a complex and time-consuming task when one has to test against a production environment to achieve relevant results.
In the case of the HEP community, this eventually leads to setting up a particular experiment's software environment, which can be a rather complicated task for a system administrator.
To simplify this task, we developed an application for exact replaying of IO requests, reliably replicating the IO behavior of the original applications without the need to install the whole working environment.
Using this application, we present a performance comparison of the Lustre, GPFS and Hadoop file systems by replaying traces of LHCb, CMS and ATLAS jobs.

Speaker:
Mr Jiri Horky
(Institute of Physics of Acad. of Sciences of the Czech Rep. (ASCR))

Slides
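A minimal sketch of the trace-and-replay idea from the abstract above, assuming a simple hypothetical trace format (one read or write operation per line); this is an illustration, not the authors' tool:

```python
# Replay a recorded IO pattern without the original application.
# Assumed trace format: "read <offset> <size>" or "write <offset> <size>",
# issued against a single data file.
import os
import tempfile

def replay_trace(trace_lines, data_path):
    """Re-issue the recorded read/write pattern against data_path."""
    stats = {"read": 0, "write": 0}
    with open(data_path, "r+b") as f:
        for line in trace_lines:
            op, offset, size = line.split()
            f.seek(int(offset))
            if op == "read":
                stats["read"] += len(f.read(int(size)))
            else:
                f.write(b"\0" * int(size))
                stats["write"] += int(size)
    return stats

# Usage: replay a toy trace against a scratch file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"x" * 4096)
stats = replay_trace(["read 0 1024", "write 1024 512", "read 2048 1024"], tmp.name)
os.unlink(tmp.name)
# stats == {"read": 2048, "write": 512}
```

A real replayer would also reproduce timing, concurrency and file layout, which is where the benchmarking value lies.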

12:45
→
13:15

Networking & Security

Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

Convener:
Dr David Kelsey
(RAL)

12:45

HEPiX IPv6 Working Group (30m)

A new working group on IPv6 in HEP was discussed and agreed at the previous HEPiX meeting. This working group has recently been created and work is just starting. This talk will present the status and plans of the working group for the year ahead.

Speaker:
Dr David Kelsey
(RAL)

Slides

13:15
→
14:15

Lunch
1h
Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

14:15
→
15:45

IT Infrastructure

Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

14:15

Overview of the new computing room at CC-IN2P3 (30m)

The presentation will give an overview of the newly commissioned infrastructure and computing room at CC-IN2P3.
It will give an update on the technical infrastructure project and improvements, focus on the major achievements, and describe the future capacity offered up to 2019.
Topics to be reviewed: building, cooling system, power distribution and confined racks, future capacity, projects and scheduling.

Speaker:
Mr Pascal Trouve
(CC-IN2P3)

Slides

14:45

Evolution of CERN's Computing Facilities (30m)

CERN is currently evolving its computing facilities through a number of projects. This presentation will give an overview of the various projects and their current status.

Speaker:
Mr Wayne Salter
(CERN)

Slides

15:15

Implementing Service Management processes with Service-Now (30m)

The choice of Service-Now as the tool for handling the request-fulfilment and incident-management ITIL processes in the IT and General Services Departments at CERN has led to several months of intensive development. Besides the implementation of these two standardized ITIL processes, modelling the CERN Service Catalogue in the tool has been a very interesting task. The integration with third-party systems and workflows, such as SSO, GGUS, organization data and the knowledge base, has started and will remain an ongoing task for the next couple of years. The biggest challenge will be the transition of existing non-ITIL processes, implemented in other tools, into Service-Now.

Speaker:
Zhechka Toteva
(CERN)

Slides

15:45
→
16:15

Coffee Break
30m
Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

16:15
→
17:00

IT Infrastructure

Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

16:15

Scientific Linux Status Report + Discussion (45m)

Progress of Scientific Linux over the past 6 months. What we are currently working on. What we see in the future for Scientific Linux.

Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

09:15

Moving virtual machine images securely between sites (30m)

A Grid service allows applications to run on many sites without modification. Virtualization provides the potential for deploying the same customized operating system at many sites.
This talk will present one of many possible security infrastructures and models, developed within the HEPiX virtualization working group, that allows sharing and deployment of virtual machine images while meeting the objectives of secure non-repudiation of images, auditing and fault tolerance. The talk will focus on the metadata describing the virtual machines, how to share this metadata and the images it describes, how to verify an image and its metadata, the packaging and deployment, and how to audit the approach.

Virtualization at CERN: a status report

We present updates to the virtualization services provided by CERN IT.
- CERN's internal cloud was moved into full production mode in December 2010 and has been providing virtualized batch resources since then. We will report on operational experiences as well as further developments made since the last meeting in Cornell, including benchmark results, OpenNebula and ISF experiences, and a first view of SLC6.
- The CVI Self-Service continues to grow rapidly (>1200 VMs on >200 hypervisors), and so do the usage requirements. We describe the service evolution of CVI 2, with a focus on Linux VMs.
We will also present the plans to evaluate OpenStack at CERN.

Speaker:
Dr Ulrich Schwickerath
(CERN)

Slides

10:15

StratusLab Marketplace for Sharing Virtual Machine Images (30m)

StratusLab (http://stratuslab.eu/) provides a complete, open-source solution for deploying an "Infrastructure as a Service" cloud. Using the cloud requires prepared machine and disk images, yet preparing correct, secure images remains difficult and represents a significant barrier to the adoption of cloud technologies.
The StratusLab Marketplace is an image registry containing cryptographically signed metadata associated with shared images. It simultaneously allows end-users to search for existing images, image creators to publicize their images, and cloud administrators to evaluate the trustworthiness of an image. The image files themselves are stored elsewhere, either in cloud storage or in web-accessible repositories.
The StratusLab Marketplace facilitates the sharing of images and use of IaaS cloud infrastructures, allowing users access to a diverse set of existing images and providing cloud administrators with the confidence to allow them to run. Its integration with the StratusLab distribution makes use of registered images easy, further reducing barriers to adoption.

Speaker:
Cal Loomis
(CNRS/LAL)

Slides

10:45
→
11:15

Coffee Break
30m
Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

11:15
→
12:45

Cloud, grid and virtualization

Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

11:15

Operating a distributed IaaS Cloud for BaBar MC production and user analysis (30m)

In the last year we have established a system which replicates a standard Condor HTC environment across multiple distinct IaaS clouds of different types including EC2, Nimbus and Eucalyptus. Users simply submit batch jobs to a Condor queue containing a custom attribute which is a pointer to the Virtual Machine Image they would like booted to service their job. The system automatically boots instances of the requested machine type on one of the available clouds and contextualizes them to connect to the batch system. The system is being used on a continual basis for astronomy and HEP jobs. We report on our experience operating this system which has booted over 30 000 VMs and completed over 250 000 jobs.

Speaker:
Ian Gable
(University of Victoria)

Slides
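The placement logic described above can be sketched in toy form: each job names the VM image it needs, and the scheduler boots it on the first cloud with free capacity. The cloud names, capacities and job tuples below are invented for illustration; the real system uses Condor job attributes and the clouds' IaaS APIs:

```python
# Toy sketch of image-aware job placement across multiple IaaS clouds.
# All names and capacities are hypothetical.
from dataclasses import dataclass

@dataclass
class Cloud:
    name: str
    free_slots: int

def schedule(jobs, clouds):
    """Return (job_id, image, cloud_name) placements, first-fit."""
    placements = []
    for job_id, image in jobs:
        for cloud in clouds:
            if cloud.free_slots > 0:
                cloud.free_slots -= 1  # boot one VM of the requested image
                placements.append((job_id, image, cloud.name))
                break
    return placements

clouds = [Cloud("nimbus", 1), Cloud("ec2", 2)]
jobs = [(1, "babar-mc.img"), (2, "babar-mc.img"), (3, "astro.img")]
print(schedule(jobs, clouds))
# [(1, 'babar-mc.img', 'nimbus'), (2, 'babar-mc.img', 'ec2'), (3, 'astro.img', 'ec2')]
```

In the real system the booted VM then contextualizes itself to join the Condor pool, and the job runs on it as on any batch worker.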

11:45

FermiGrid Scalability and Reliability Improvements (30m)

The Fermilab Campus Grid (FermiGrid) is a meta-facility that provides grid infrastructure for scientific computing at Fermilab. It provides highly available centralized authorization and authentication services, a site portal for Globus job submission, coordination for interoperability among the various stakeholders, and grid-enabled mass storage interfaces. We currently support approximately 25000 batch processing slots. This presentation will describe the current structure of FermiGrid and recent improvements in scalability and reliability of our authorization and authentication services. These improvements include orders of magnitude improvement in our web services based Site AuthoriZation service (SAZ). We will also describe recent enhancements to the information system and matchmaking algorithm of our site job gateway. Finally we will describe the FermiGrid HA2 project currently under way which distributes our services across two buildings, making us resilient in the case of major building outages.

Speaker:
Dr Keith Chadwick
(Fermilab)

Paper

Slides

12:15

Adopting Infrastructure as Code to run HEP applications (30m)

GSI is a German national laboratory for heavy-ion beams, planning to build the new accelerator complex "Facility for Antiproton and Ion Research" (FAIR). In preparation for the Tier-0 computing center for FAIR, different Infrastructure as a Service (IaaS) cloud technologies have been compared in order to construct a private cloud. Simultaneously, effort has been invested in learning how to efficiently execute HEP applications in a virtual environment. The result is a private cloud testbed, called SCLab, built with the help of the OpenNebula toolkit. The concept of Infrastructure as Code (IaC), based on the Chef configuration management system, has been adopted for the deployment and operation of HEP applications in clouds. Tools have been developed to start virtual clusters in any IaaS cloud on demand. The first successful applications are a completely virtual AliEn grid site for the ALICE experiment at LHC and simulations for radiation protection studies for FAIR. The talk will present the design decisions and the experience gained in running HEP applications in IaaS clouds.

Speaker:
Mykhaylo Zynovyev
(GSI)

Slides

12:45
→
14:00

Lunch
1h 15m
Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

14:00
→
16:00

Oracle

Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

Discussion with Oracle

14:00

An introduction to Oracle Linux (40m)

In this presentation, Lenz will provide an overview of Oracle Linux, Oracle's enterprise Linux distribution, and the Oracle Unbreakable Enterprise Kernel (UEK). The session will cover the technical highlights and improvements, as well as the support offerings that complement them.

Speaker:
Mr Lenz Grimmer
(Oracle)

Slides

14:40

Open Source at Oracle (30m)

In this presentation, Oracle will go over its major open source products, and their future directions.

The European Open File Systems society is a non-profit organisation coordinating the future development of Lustre. Founding members of the organisation are universities, supercomputing centers and industry partners. The next Lustre release is scheduled for summer 2011.

Seminarraum Theorie (room no. SB3 3.170)

GSI

Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

Convener:
Dr Helge Meinhard
(CERN-IT)

09:15

Version Control Services at CERN (30m)

CERN offers three version control services: one using SVN and two older services using CVS. The older CVS service is to be closed by Q2 2011 and will be merged into the high-availability CVS service on AFS, where performance has been improved to suit the needs of all users. The main SVN service has expanded a great deal in users, commits and repositories since it started in 2009. Our future plans include new tools for users, internal software upgrades, and improved statistics and monitoring.

Speaker:
Mr Alvaro Gonzalez Alvarez
(CERN)

Slides

09:45

CernVM-FS Production Service and Deployment (30m)

CernVM-FS is now a production service supported at CERN distributing VO software to sites/worker nodes. This talk will describe the production service as well as give details on the deployment and management required to use CVMFS at sites.

Speaker:
Mr Ian Peter Collier
(STFC RAL Tier1)

Slides

10:15
→
10:45

Coffee Break
30m
Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

Planckstr. 1, 64291 Darmstadt, Germany

10:45
→
11:45

Cloud, grid and virtualization

Hörsaal / lecture hall

GSI Helmholtzzentrum für Schwerionenforschung GmbH

The UK's National Grid Service is investigating how it can best make use of cloud technologies in the future. The focus is on users, not only those who want to perform computationally intensive research, but also others in the wider academic setting. The usefulness of Infrastructure as a Service clouds to this community is crucial in determining future cloud provision in this area. To examine this question, Eucalyptus-based clouds were deployed at the Universities of Edinburgh and Oxford to gain real experience from the users' perspective.

Speaker:
Dr Steve Thorn
(University of Edinburgh)

Slides

11:15

HEPiX VWG Status Report (30m)

This presentation will give an update of the activities of the HEPiX Virtualisation Working Group over the past few months, describe the current status and give an outlook on future progress.