BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CERN//INDICO//EN
BEGIN:VEVENT
SUMMARY:Welcome Address
DTSTART;VALUE=DATE-TIME:20040927T070000Z
DTEND;VALUE=DATE-TIME:20040927T073000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294157@indico.cern.ch
DESCRIPTION:Speakers: Wolfgang von Rueden (CERN)\nhttps://indico.cern.ch/e
vent/0/contributions/1294157/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294157/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Conference Closeout
DTSTART;VALUE=DATE-TIME:20041001T102500Z
DTEND;VALUE=DATE-TIME:20041001T105500Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294291@indico.cern.ch
DESCRIPTION:Speakers: Wolfgang von Rueden (CERN/ALE)\nhttps://indico.cern.
ch/event/0/contributions/1294291/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294291/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Conference Conclusions
DTSTART;VALUE=DATE-TIME:20041001T095500Z
DTEND;VALUE=DATE-TIME:20041001T102500Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294292@indico.cern.ch
DESCRIPTION:Speakers: L. BAUERDICK (FNAL)\nhttps://indico.cern.ch/event/0/
contributions/1294292/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294292/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Summary - Online Computing
DTSTART;VALUE=DATE-TIME:20041001T063000Z
DTEND;VALUE=DATE-TIME:20041001T065500Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294293@indico.cern.ch
DESCRIPTION:Speakers: Pierre Vande Vyvre (CERN)\nhttps://indico.cern.ch/ev
ent/0/contributions/1294293/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294293/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Summary - Core Software
DTSTART;VALUE=DATE-TIME:20041001T072000Z
DTEND;VALUE=DATE-TIME:20041001T074500Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294295@indico.cern.ch
DESCRIPTION:Speakers: Philippe Canal (FNAL)\nhttps://indico.cern.ch/event/
0/contributions/1294295/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294295/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Summary - Event Processing
DTSTART;VALUE=DATE-TIME:20041001T065500Z
DTEND;VALUE=DATE-TIME:20041001T072000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294296@indico.cern.ch
DESCRIPTION:Speakers: Stephen Gowdy (SLAC)\nhttps://indico.cern.ch/event/0
/contributions/1294296/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294296/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Summary - Distributed Computing Systems and Experiences
DTSTART;VALUE=DATE-TIME:20041001T084000Z
DTEND;VALUE=DATE-TIME:20041001T090500Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294297@indico.cern.ch
DESCRIPTION:Speakers: Douglas OLSON ()\nhttps://indico.cern.ch/event/0/con
tributions/1294297/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294297/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Summary - Distributed Computing Services
DTSTART;VALUE=DATE-TIME:20041001T074500Z
DTEND;VALUE=DATE-TIME:20041001T081000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294298@indico.cern.ch
DESCRIPTION:Speakers: Massimo LAMANNA (CERN)\nhttps://indico.cern.ch/event
/0/contributions/1294298/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294298/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Summary - Wide Area Networking
DTSTART;VALUE=DATE-TIME:20041001T093000Z
DTEND;VALUE=DATE-TIME:20041001T095500Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294299@indico.cern.ch
DESCRIPTION:Speakers: Peter CLARKE ()\nhttps://indico.cern.ch/event/0/cont
ributions/1294299/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294299/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Summary - Computer Fabrics
DTSTART;VALUE=DATE-TIME:20041001T090500Z
DTEND;VALUE=DATE-TIME:20041001T093000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294300@indico.cern.ch
DESCRIPTION:Speakers: Tim Smith (CERN)\nhttps://indico.cern.ch/event/0/con
tributions/1294300/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294300/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Offline Framework of the Pierre Auger Observatory
DTSTART;VALUE=DATE-TIME:20040927T161000Z
DTEND;VALUE=DATE-TIME:20040927T163000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294452@indico.cern.ch
DESCRIPTION:Speakers: L. Nellen (I. DE CIENCIAS NUCLEARES\, UNAM)\nThe Pie
rre Auger Observatory is designed to unveil the nature and the\norigin of
the highest energy cosmic rays. Two sites\, one currently\nunder construct
ion in Argentina\, and another pending in the Northern\nhemisphere\, will
observe extensive air showers using a hybrid detector\ncomprising a ground
array of 1600 water Cerenkov tanks overlooked by\nfour atmospheric fluore
scence detectors. Though the computing demands\nof the experiment are les
s severe than those of traditional high\nenergy physics experiments in ter
ms of data volume and detector\ncomplexity\, the large geographically disp
ersed collaboration and the\nheterogeneous set of simulation and reconstru
ction requirements\nconfront the offline software with some special chall
enges.\n\nWe have designed and implemented a framework to allow collaborat
ors to\ncontribute algorithms and sequencing instructions to build up the\
nvariety of applications they require. The framework includes\nmachinery
to manage these user codes\, to organize the abundance of\nuser-contribute
d configuration files\, to facilitate multi-format file\nhandling\, and to
provide access to event and time-dependent detector\ninformation which ca
n reside in various data sources. A number of\nutilities are also provide
d\, including a novel geometry package which\nallows manipulation of abstr
act geometrical objects independent of\ncoordinate system choice. The fram
ework is implemented in C++\, follows\nan object oriented paradigm\, and t
akes advantage of some of the more\nwidespread tools that the open source
community offers\, while keeping\nthe user-side simple enough for C++ non-
experts to learn in a\nreasonable time. The distribution system includes
unit and acceptance\ntesting in order to support rapid development of both
the core\nframework and contributed user code. Great attention has been
paid to\nthe ease of installation.\n\nhttps://indico.cern.ch/event/0/contr
ibutions/1294452/
LOCATION:Interlaken\, Switzerland Brunig
URL:https://indico.cern.ch/event/0/contributions/1294452/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Huge Memory systems for data-intensive science
DTSTART;VALUE=DATE-TIME:20040929T151000Z
DTEND;VALUE=DATE-TIME:20040929T153000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294493@indico.cern.ch
DESCRIPTION:Speakers: Richard Mount (SLAC)\nhttps://indico.cern.ch/event/0
/contributions/1294493/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294493/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The CMS User Analysis Farm at Fermilab
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294136@indico.cern.ch
DESCRIPTION:Speakers: Ian FISK (FNAL)\nUS-CMS is building up expertise at
regional centers in preparation for analysis of LHC data. The User Analysi
s \nFarm (UAF) is part of the Tier 1 facility at Fermilab. The UAF is bein
g developed to support the efforts of the \nFermilab LHC Physics Center (L
PC) and to enable efficient analysis of CMS data in the US.\n\nThe support\
, infrastructure\, and services to enable a local analysis community at a
computing center which is \nremote from the physical detector and the majo
rity of the collaboration present unique challenges.\n\nThe current UAF is
a farm running the LINUX operating system providing interactive and batch
computing for \nusers. Load balancing\, resource and process management a
re realized with FBSNG\, the batch system \ndeveloped at Fermilab. Over th
e course of the next three years the UAF must grow in size and functionali
ty\, \nwhile continuing to support simulated analysis activities and test
beam applications.\n\nIn this presentation we will describe the developmen
t of the current cluster\, the technology choices made\, the \nservices re
quired to support regional analysis activities\, and plans for the future.
\n\nhttps://indico.cern.ch/event/0/contributions/1294136/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294136/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Production mode Data-Replication framework in STAR using the HRM G
rid
DTSTART;VALUE=DATE-TIME:20040927T145000Z
DTEND;VALUE=DATE-TIME:20040927T151000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294203@indico.cern.ch
DESCRIPTION:Speakers: E. Hjort (LAWRENCE BERKELEY LABORATORY)\nThe STAR ex
periment utilizes two major computing facilities for its data processing \
nneeds - the RCF at Brookhaven and the PDSF at LBNL/NERSC. The sharing of
data \nbetween these facilities utilizes data grid services for file repl
ication\, and the \ndeployment of these services was accomplished in conju
nction with the Particle \nPhysics Data Grid (PPDG). For STAR's 2004 run
it will be necessary to replicate \n~100 TB. The file replication is base
d on Hierarchical Resource Managers (HRMs) \nalong with Globus tools for s
ecurity (GSI) and data transport (GridFTP). HRMs are \ngrid middleware de
veloped by the Scientific Data Management group at LBNL\, and STAR \nfile
replication consists of an HRM interfaced to HPSS at each site with GridFT
P \ntransfers between the HRMs. Each site also has its own installation
of the STAR \nfile and metadata catalog\, which is implemented in MySQL.
Queries to the catalogs \nare used to generate file transfer requests. Si
ngle requests typically consist of \nmany thousands of files with a volume
of hundreds of GBs. The HRMs implement a \nplugin to a Replica Registrat
ion Service (or RRS) which is utilized for automatic \nregistration of new
files as they are successfully transferred across sites. This \nallows ST
AR users immediate use of the distributed data. Data transfer statistics \
nand system architecture will be presented.\n\nhttps://indico.cern.ch/even
t/0/contributions/1294203/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294203/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Status of the alignment calibrations in the ATLAS-Muon experiment
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294365@indico.cern.ch
DESCRIPTION:Speakers: V. GAUTARD (CEA-SACLAY)\nATLAS is a particle detecto
r which is being built at CERN in \nGeneva. The muon detection system is ma
de up\, among other things\, of \n600 chambers measuring 2 to 6 m2 in area an
d 30 cm in thickness. The \nchambers' positions must be known with an accura
cy of +/-30 microns for \ntranslations and +/-100 microradians for rotation
s\, over a range of \n+/-5mm and +/-5mrad. \nIn order to fulfill these requir
ements\, we have designed different \noptical sensors.\nDue to (i) the very hi
gh accuracy required\, (ii) the number of \nsensors (over 1000) and (iii) th
e different types of sensors\, we \ndeveloped one user interface which manag
es\, among other things\, \nseveral command-and-control software packages. Ea
ch of these packages \nis associated with an accurate calibration bench. In t
his conference\, \nwe will present only the most complex one\, which combine
s command \ncontrol\, an analysis module\, real-time processing and databas
e \naccess. This software is now used for sensor \ncalibration.\n\nhttps://indico.cer
n.ch/event/0/contributions/1294365/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294365/
END:VEVENT
BEGIN:VEVENT
SUMMARY:CLHEP Infrastructure Improvements
DTSTART;VALUE=DATE-TIME:20040930T145000Z
DTEND;VALUE=DATE-TIME:20040930T151000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294375@indico.cern.ch
DESCRIPTION:Speakers: Andreas PFEIFFER (CERN)\nCLHEP is a set of HEP-speci
fic foundation and utility classes such as\nrandom number generators\, phy
sics vectors\, and particle data tables.\nAlthough CLHEP has traditionally
been distributed as one large library\, \nthe user community has long wan
ted to build and use CLHEP packages separately.\n\nWith the release of CLH
EP 1.9\, CLHEP has been reorganized and enhanced\nto enable building and u
sing CLHEP packages individually as well as\ncollectively. The revised bu
ild strategy employs all the components of\nthe standard autotools suite:
automake\, autoconf\, and libtool. In\ncombination with the reorganization
\, the use of these components makes\nit easy not only to rebuild any sing
le package (e.g.\, when that package\nchanges)\, but also to add new packa
ges.\n\nThis presentation will discuss the new CLHEP structure\, illustrat
e the\nrole and use of the autotools\, and describe how other packages wit
h\nsimilar organization can be seamlessly integrated with the CLHEP\nlibra
ries.\n\nhttps://indico.cern.ch/event/0/contributions/1294375/
LOCATION:Interlaken\, Switzerland Brunig 1+2
URL:https://indico.cern.ch/event/0/contributions/1294375/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The ZOOM Minimization Package
DTSTART;VALUE=DATE-TIME:20040930T134000Z
DTEND;VALUE=DATE-TIME:20040930T140000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294449@indico.cern.ch
DESCRIPTION:Speakers: M. Fischler (FERMILAB)\nA new object-oriented Minimi
zation package is available via the ZOOM cvs \nrepository. This package\
, designed for use in HEP applications\, has all \nthe capabilities of Mi
nuit\, but is a re-write from scratch\, adhering to modern \nC++ design pr
inciples. \n \nA primary goal of this package is extensibility in severa
l directions\, so that \nits capabilities can be kept fresh with as little
maintenance effort as \npossible. These flexibility goals have been met\
, as demonstrated by extensions \nof the package to add new types of termi
nation conditions\, new domains and \nrestrictions on the solution space\,
and a new minimization algorithm. Each \nof these extensions was strai
ghtforward to implement. \n \nThe object-oriented design style also has se
veral advantages at the user level. \nOne such advantage is the ability to
consider several problems simultaneously \nwithin a single program. Anot
her is that it is easy to coordinate the Mini
mizer with the use of other products. To verify \nthat this goal is met\
, we demonstrate examples of using the Minimizer in the \ncontext of a Ro
ot application\, and in an application using the "R" statistical \nanalys
is environment and language. \n \nWe compare and contrast this package
with other free C++ Minimization packages \nsuitable for HEP (most of whi
ch have origins in Minuit). Following Minuit \noverly precisely makes it
difficult to design an object oriented package \nwithout undue distortio
ns. This package is distinguished by the priority \nthat was assigned to
C++ design issues\, and the focus on producing an \nextensible system that
will resist becoming obsolete.\n\nhttps://indico.cern.ch/event/0/contribu
tions/1294449/
LOCATION:Interlaken\, Switzerland Brunig 1+2
URL:https://indico.cern.ch/event/0/contributions/1294449/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Geant4: status and recent developments
DTSTART;VALUE=DATE-TIME:20040927T132000Z
DTEND;VALUE=DATE-TIME:20040927T134000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294381@indico.cern.ch
DESCRIPTION:Speakers: J. Apostolakis (CERN)\nGeant4 is relied upon in prod
uction for an increasing number of HEP \nexperiments and for applications in
several other fields. Its \ncapabilities continue to be extended\, as its
performance and \nmodelling are enhanced. \n\nThis presentation will giv
e an overview of recent developments in \ndiverse areas of the toolkit. T
hese will include\, amongst others\, \nthe optimisation for complex setups
using different production \nthresholds\, improvements in the propagation
in fields\, and highlights \nfrom the physics processes and event biasing
. \n\nIn addition it will note the physics validation effort undertaken in
\ncollaboration with a number of experiments\, groups and users.\n\nhttps
://indico.cern.ch/event/0/contributions/1294381/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294381/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mis-use Cases for the Grid
DTSTART;VALUE=DATE-TIME:20040929T122000Z
DTEND;VALUE=DATE-TIME:20040929T124000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294435@indico.cern.ch
DESCRIPTION:Speakers: D. Skow (FERMILAB)\nThere have been a number of effo
rts to develop use cases for the Grid\nto guide development and usability
testing. This talk examines the\nvalue of "mis-use cases" for guiding th
e development of operational\ncontrols and error handling. A couple of the
more common current\nnetwork attack patterns will be extrapolated to a gl
obal Grid\nenvironment. The talk will walk through the various activities\
nnecessary for incident response and recovery and strive to be\ntechnology
neutral.\n\nGedanken incident response exercises are being discussed amon
g the HEP\nPKI infrastructure specialists\, but a systems-wide approach to
the\nissues and necessary tools is needed. Determining scope of incidents
\,\nperforming forensics and containing the spread requires a much more\nd
istributed approach than our previous experiences. A new set of tools\nand
communication patterns are likely to be needed. This talk will be\naimed
at applications and middleware developers as well as operations\nteams for
grids.\n\nAs time allows\, the talk will survey current grid testbed midd
leware\,\nidentify the current control points and responsibilities and sug
gest\nplaces where extensions or modifications would be beneficial.\n\nhtt
ps://indico.cern.ch/event/0/contributions/1294435/
LOCATION:Interlaken\, Switzerland Brunig 3
URL:https://indico.cern.ch/event/0/contributions/1294435/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Monitoring CMS Tracker construction and data quality using a grid/
web service based on a visualization tool
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294407@indico.cern.ch
DESCRIPTION:Speakers: G. Zito (INFN BARI)\nThe complexity of the CMS Track
er (more than 50 million channels to monitor)\, now under \nconstruction i
n ten laboratories worldwide with hundreds of interested people\, will \nrequi
re new tools for monitoring both the hardware and the software. In our app
roach \nwe use both visualization tools and Grid services to make this mon
itoring possible. \nThe use of visualization enables us to represent in a
single computer screen all \nthose million channels at once. The Grid will
make it possible to get enough data \nand computing power in order to che
ck every channel and also to reach the experts \neverywhere in the world a
llowing the early discovery of problems.\n\nWe report here on a first pro
totype developed using the Grid environment already \navailable now in CMS
\, i.e. LCG2. This prototype consists of a Java client which \nimplements th
e GUI for Tracker Visualization and a few data servers connected to the \n
tracker construction database\, to Grid catalogs of event datasets or dire
ctly to \ntest beam setup data acquisition. All the communication betwe
en client and servers \nis done using data encoded in xml and standard Int
ernet protocols.\n\nWe will report on the experience acquired developing t
his prototype and on possible \nfuture developments in the framework of an
interactive Grid and a virtual counting \nroom allowing complete detector
control from everywhere in the world.\n\nhttps://indico.cern.ch/event/0/c
ontributions/1294407/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294407/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Usage statistics and usage patterns on the NorduGrid: Analyzing th
e logging information collected on one of the largest production Grids of
the world.
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294174@indico.cern.ch
DESCRIPTION:Speakers: O. SMIRNOVA (Lund University\, Sweden)\nThe Nordic G
rid facility (NorduGrid) came into production operation during\nthe summer
of 2002 when the Scandinavian Atlas HEP group started to use\nthe Grid fo
r the Atlas Data Challenges and was thus the first Grid ever\ncontributing
to an Atlas production. Since then\, the Grid facility has\nbeen in conti
nuous 24/7 operation offering an increasing number of\nresources to a grow
ing set of active users coming from various scientific\nareas including ch
emistry\, biology\, informatics. As of today the Grid has\ngrown into one
of the largest production Grids of the world continuously\nrunning Grid jo
bs on the more than 30 Grid-connected sites which offer\nover 2000 CPUs.\n
\nThis article will start with a short overview of the design and\nimpleme
ntation of the Advanced Resource Connector (ARC)\, the NorduGrid\nmiddlewa
re\, which delivers reliable Grid services to the NorduGrid\nproduction fa
cility. This will be followed by a presentation of the\nlogging facility o
f NorduGrid\, describing the logging service and the\ncollected informatio
n. The main part of the talk will focus on the\nanalysis of the collected
logging information: usage statistics\, usage\npatterns (what does a typic
al grid job on the NorduGrid look like?). Use\ncases from different applica
tion domains will also be discussed.\n\nReferences:\n-NorduGrid live: www.
nordugrid.org -> Grid Monitor\n-Atlas Data-Challenge 1 on NorduGrid: http:
//arxiv.org/abs/physics/0306013\n\nhttps://indico.cern.ch/event/0/contribu
tions/1294174/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294174/
END:VEVENT
BEGIN:VEVENT
SUMMARY:CERN Modular Physics Screensaver or Using spare CPU cycles of CERN
's Desktop PCs
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294557@indico.cern.ch
DESCRIPTION:Speakers: A. Wagner (CERN)\nCERN has about 5500 Desktop PCs. T
hese computers offer a large pool of resources \nthat can be used for phys
ics calculations outside office hours.\nThe paper describes a project to m
ake use of the spare CPU cycles of these PCs for \nLHC tracking studies. T
he client server application is implemented as a lightweight\, \nmodular s
creensaver and a Web Application containing the physics job repository. Th
e \ninformation exchange between client and server is done using the HTTP
protocol. The \ndesign and implementation is presented together with resul
ts of performance and \nscalability studies. A typical LHC tracking study
involves some 1500 jobs\, each over \n100\,000 turns\, requiring about 1 h
our of CPU on a modern PC. A reliable and easy to \nuse Linux interface to
the CPSS Web application has been provided. It has been used \nfor a prod
uction run of 15\,000 jobs\, using some 50 desktop Windows PCs\, which \nu
ncovered a numerical incompatibility between Windows 2000 and XP. It is e
xpected \nto make available up to two orders of magnitude more computing p
ower for these \nstudies at zero cost.\n\nhttps://indico.cern.ch/event/0/c
ontributions/1294557/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294557/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Implementation of a reliable and expandable on-line storage for co
mpute clusters
DTSTART;VALUE=DATE-TIME:20040927T145000Z
DTEND;VALUE=DATE-TIME:20040927T151000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294334@indico.cern.ch
DESCRIPTION:Speakers: J. VanWezel (FORSCHUNGZENTRUM KARLSRUHE)\nThe HEP ex
periments that use the regional center GridKa will handle\nlarge amounts o
f data. Traditional access methods via local disks or\nlarge network stora
ge servers show limitations in size\, throughput or\ndata management flexi
bility.\n\nHigh speed interconnects like Fibre Channel\, iSCSI or Infiniba
nd as\nwell as parallel file systems are becoming increasingly important i
n\nlarge cluster installations to offer the scalable size and throughput\n
needed for PetaByte storage. At the same time the reliable and proven\nNFS
protocol allows local area storage access via traditional Ethernet\nvery
cost effectively.\n\nThe cluster at GridKa uses the General Parallel File
System (GPFS) on\na 20 node file server farm that connects to over 1000 FC
disks via a\nStorage Area Network. The 130 TB on-line storage is distribu
ted to the\n390 node cluster via NFS. A load balancing system ensures an e
ven load\ndistribution and additionally allows for on-line file server exc
hange.\n\nDiscussed are the components of the storage area network\, speci
fic\nLinux tools\, and the construction and optimisation of the cluster fi
le\nsystem along with the RAID groups. High availability is obtained\, and
\nmeasurements prove high throughput under different conditions. The use o
f\nthe file system administration and management possibilities is presente
d\nas is the implementation and effectiveness of the load balancing system
.\n\nhttps://indico.cern.ch/event/0/contributions/1294334/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294334/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Role of Tier-0\, Tier-1 and Tier-2 Regional Centres in CMS DC04
DTSTART;VALUE=DATE-TIME:20040929T122000Z
DTEND;VALUE=DATE-TIME:20040929T124000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294368@indico.cern.ch
DESCRIPTION:The CMS 2004 Data Challenge (DC04) was devised to test several
key \naspects of the CMS Computing Model in three ways: by trying to \nsu
stain a 25 Hz reconstruction rate at the Tier-0\; by distributing \nthe re
constructed data to six Tier-1 Regional Centers (FNAL in US\, \nFZK in Ger
many\, Lyon in France\, CNAF in Italy\, PIC in Spain\, RAL in\nUK) and han
dling catalogue issues\; by redistributing data to Tier-2 \ncenters for an
alysis. Simulated events\, up to the digitization step\, \nwere produced p
rior to the DC as input for the reconstruction in the \nPre-Challenge Prod
uction (PCP04).\n\nIn this paper\, the model of the Tier-0 implementation
used in DC04 is \ndescribed\, as well as the experience gained in using th
e newly \ndeveloped data distribution management layer\, which allowed CMS
to \nsuccessfully direct the distribution of data from Tier-0 to Tier-1 \
nsites by loosely integrating a number of available Grid components. \nWhi
le developing and testing this system\, CMS explored the overall \nfunctio
nality and limits of each component\, in any of the different \nimplementa
tions which were deployed within DC04.\n\nThe role of Tier-1's is presente
d and discussed\, from the import of \nreconstructed \ndata from Tier-0\,
to the archiving on to the local mass storage \nsystem and the data \ndist
ribution management to Tier-2's for analysis. Participating Tier-\n1's dif
fered in \navailable resources\, set-up and configuration: a critical eval
uation \nof the results \nand performances achieved adopting different st
rategies in the \norganization and \nmanagement of each Tier-1 center to s
upport CMS DC04 is presented.\n\nhttps://indico.cern.ch/event/0/contributi
ons/1294368/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294368/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Porting LCG Applications
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294415@indico.cern.ch
DESCRIPTION:Speakers: I. Reguero (CERN\, IT DEPARTMENT)\, J A. Lopez-Perez
 (CERN\, IT DEPARTMENT)\nOur goal is twofold. On one hand\, we wanted to add
ress the interest of\nCMS users in having the LCG Physics analysis environm
ent on Solaris. On the\nother hand we wanted to assess the difficulty of porti
ng code written in\nLinux without particular attention to portability to o
ther Unix\nimplementations. Our initial assumption was that the difficulty
would be\nmanageable even for a very small team. This is because the impl
icit\nrespect by Linux of most Unix interfaces and standards such as the I
EEE\n(PASC) 1003.1 1003.2 specifications.\n\nWe started with the LCG Exter
nal software\n(http://spi.web.cern.ch/spi/extsoft/platform.html)\nin order
to use it to build the LCG applications such as POOL and SEAL\n(http://lc
gapp.cern.ch/project/) .\nWe will discuss the main problems found with the
system interfaces as well\nas the advantages and disadvantages of using t
he GNU compilers and\ndevelopment environment versus the vendor provided o
nes.\n\nhttps://indico.cern.ch/event/0/contributions/1294415/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294415/
END:VEVENT
BEGIN:VEVENT
SUMMARY:New distributed offline processing scheme at Belle
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294212@indico.cern.ch
DESCRIPTION:Speakers: I. Adachi (KEK)\nThe Belle experiment has accumulate
d an integrated\nluminosity of more than 240fb-1 so far\, and a daily logg
ed\nluminosity has exceeded 800pb-1. This requires a more\nefficient and re
liable way of processing events. To meet\nthis requirement\, a new offline p
rocessing scheme has been\nconstructed\, based upon the technique employed f
or the Belle\nonline reconstruction farm. Event processing is performed\nat P
C farms\, which consist of 60 quad (0.7GHz) and 225\ndual (1.3GHz or 3.2GHz
) CPU PC nodes. Raw event data are\nread from a Solaris tape server connect
ed to a DTF2 tape\ndrive\, and they are distributed over all PC nodes.\nReco
nstructed events are recorded onto 8 file servers\,\nwhich were newly insta
lled last year. To maximize\nprocessing capabilities\, various optimization
s such as PC\nclustering\, job control\, output data management and so on\nh
ave been done. As a result\, the processing power of this\nscheme has been m
ore than doubled\, which means that\nmore than 3 fb-1 of beam data per day c
an be\nprocessed. In this talk\, stable operation of our n
ew\nsystem\, together with a description of the Belle offline\ncomputing m
odel\, will be demonstrated by showing computing\nperformance obtained fro
m experience in processing beam\ndata.\n\nhttps://indico.cern.ch/event/0/c
ontributions/1294212/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294212/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Status and Plans of the LCG PI Project
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294286@indico.cern.ch
DESCRIPTION:Speakers: A. Pfeiffer (CERN\, PH/SFT)\nIn the context of the L
HC Computing Grid (LCG) project\, the Applications Area\ndevelops and main
tains that part of the physics applications software and\nassociated infra
structure that is shared among the LHC experiments.\n\nThe Physicist Inter
face (PI) project of the LCG Application Area encompasses\nthe interfaces
and tools by which physicists will directly use the software.\nIn collabor
ation with users from the experiments\, work has concentrated on\nthe Anal
ysis Services subsystem\, where implementations of the AIDA interfaces\nfo
r (binned and unbinned) histogramming\, fitting and minimization as well a
s\nmanipulation of tuples have been developed and adapted. In addition\, b
indings\nof these interfaces to the Python interpreted language have been
done using\nthe dictionary subsystem of the SEAL project.\n\nThe current st
atus and future plans of the project will be presented.\n\nhttps://
indico.cern.ch/event/0/contributions/1294286/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294286/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Evolution of LCG-2 Data Management
DTSTART;VALUE=DATE-TIME:20040927T134000Z
DTEND;VALUE=DATE-TIME:20040927T140000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294151@indico.cern.ch
DESCRIPTION:Speakers: J-P. Baud (CERN)\nLCG-2 is the collective name for t
he set of middleware released for \nuse on the LHC Computing Grid in Decem
ber 2003. This middleware\, \nbased on LCG-1\, had already several improv
ements in the Data \nManagement area. These included the introduction of t
he Grid File \nAccess Library (GFAL)\, a POSIX-like I/O Interface\, along w
ith MSS \nintegration via the Storage Resource Manager (SRM) interface.\n\nL
CG-2 was used in the Spring 2004 data challenges by all four LHC\nexperime
nts. This produced the first useful feedback on scalability \nand functio
nality problems in the middleware\, especially with regards \nto data mana
gement.\n\nOne of the key goals for the Data Challenges in 2004 is to show
that \nthe LCG can handle the data for the LHC\, even if the computing mo
del \nis still quite simple. In light of the feedback from the data \nchal
lenges\, and in conjunction with the LHC experiments\, a strategy \nfor th
e improvements required in the data management area was \ndeveloped. The a
im of these improvements was to allow both easier \ninteraction and better
performance from the experiment frameworks and \nother middleware such as
POOL.\n\nIn this talk\, we will first introduce the design of the current
data\nmanagement solution in LCG-2. We will cover the problems and issues
\nhighlighted by the data challenges\, as well as the strategy for the \nr
equired improvements to allow LCG-2 to effectively handle data \nmanagemen
t at LCG volumes. In particular\, we will highlight the new \nAPIs provid
ed\, and the integration of GFAL and the EDG Replica \nManager functionali
ty with ROOT.\n\nhttps://indico.cern.ch/event/0/contributions/1294151/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294151/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Condor based CDF CAF
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294267@indico.cern.ch
DESCRIPTION:The CDF Analysis Facility (CAF) has been in use since April 20
02\nand has successfully served 100s of users on 1000s of CPUs.\nThe origi
nal CAF used FBSNG as a batch manager. \nIn the current trend toward multi
site deployment\, \nFBSNG was found to be a limiting factor\, \nso the CAF
has been reimplemented to use Condor instead.\nCondor is a more widely us
ed batch system and \nis well integrated with the emerging grid tools.\nOn
e of the most useful being the ability to run seamlessly\non top of other
batch systems.\nThe transition has brought us a lot of additional benefits
\,\nsuch as ease of installation\, fault tolerance and \nincreased managea
bility of the cluster.\nThe CAF infrastructure has also been simplified a
lot\nsince Condor implements a number of features we had to \nimplement ou
rselves with FBSNG.\nIn addition\, our users have found that Condor's fair
share mechanism\nprovides a more equitable and predictable distribution o
f resources.\nIn this talk the Condor based CAF will be presented\, \nwith
particular emphasis on the changes needed to run with Condor\,\nthe probl
ems found during\, and the advantages gained by\, the transition.\nSome backgr
ound and the plans for the future\, as well as results \nfrom Condor scala
bility tests will also be presented.\n\nhttps://indico.cern.ch/event/0/con
tributions/1294267/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294267/
END:VEVENT
BEGIN:VEVENT
SUMMARY:IceTray: a Software Framework for IceCube
DTSTART;VALUE=DATE-TIME:20040927T155000Z
DTEND;VALUE=DATE-TIME:20040927T161000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294130@indico.cern.ch
DESCRIPTION:Speakers: T. DeYoung (UNIVERSITY OF MARYLAND)\nIceCube is a cu
bic kilometer-scale neutrino telescope under construction at the South\nPo
le. The minimalistic nature of the instrument poses several challenges fo
r the\nsoftware framework. Events occur at random times\, and frequently
overlap\, requiring\nsome modifications of the standard event-based proces
sing paradigm. Computational\nrequirements related to modeling the detect
or medium necessitate the ability for\nsoftware components to defer proces
sing events. With minimal information from the\ndetector\, events must be
reconstructed many times with different hypotheses or\nmethods\, and the
results compared. The appropriate series of software components\nrequired
to process an event varies considerably\, and can be determined only at r
un\ntime. Finally\, reconstruction algorithms are constantly evolving\, w
ith development\ntaking place throughout the collaboration\, so it is esse
ntial that conversion of\nprivate analysis code to online production softw
are be simple and\, given the\ninaccessibility of the experimental site\,
robust. The IceCube collaboration has\ndeveloped the IceTray framework\,
which meets these needs by blending aspects of push-\nand pull-based archi
tectures to produce a highly modular system which nevertheless\nallows eac
h software component a significant degree of control over the execution fl
ow.\n\nhttps://indico.cern.ch/event/0/contributions/1294130/
LOCATION:Interlaken\, Switzerland Brunig
URL:https://indico.cern.ch/event/0/contributions/1294130/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Optimizing Selection Performance on Scientific Data by utilizing B
itmap Indices
DTSTART;VALUE=DATE-TIME:20040930T130000Z
DTEND;VALUE=DATE-TIME:20040930T132000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294442@indico.cern.ch
DESCRIPTION:Speakers: Vincenzo Innocente (CERN)\nBitmap indices have gaine
d wide acceptance in data warehouse\n applications handling large amounts
of read only data. High\n dimensional ad hoc queries can be efficiently pe
rformed by utilizing\n bitmap indices\, especially if the queries cover on
ly a subset of the\n attributes stored in the database. Such access patter
ns are common\n use in HEP analysis. Bitmap indices have been implemented
by several\n commercial database management systems. However\, the provid
ed query\n algorithms focus on typical business applications\, which are b
ased on\n discrete attributes with low cardinality. HEP data\, which are m
ostly\n characterized by non discrete attributes\, cannot be queried\n eff
iciently by these implementations.\n \n Support for selections on continu
ously distributed data can be added\n to the bitmap index technique by ext
ending it with an adaptive\n binning mechanism. Following this approach a
prototype has been\n implemented\, which provides the infrastructure to pe
rform index based\n selections on HEP analysis data stored in ROOT trees/t
uples. For the\n indices a range encoded design with multiple components h
as been\n chosen. This design concept allows one to realize a very fine binnin
g\n granularity\, which is crucial to selection performance\, with an inde
x\n of reasonable size. Systematic performance tests have shown that the\n
query processing time and the disk-I/O can be significantly reduced\n com
pared to a conventional scan of the data. This especially applies\n to opt
imization scenarios in HEP analysis\, where selections are\n slightly vari
ed and performed repetitively on one and the same data\n sample.\n\nhttps://in
dico.cern.ch/event/0/contributions/1294442/
LOCATION:Interlaken\, Switzerland Brunig 1+2
URL:https://indico.cern.ch/event/0/contributions/1294442/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Automatic Procedures as Generated Analysis Tool
DTSTART;VALUE=DATE-TIME:20040930T143000Z
DTEND;VALUE=DATE-TIME:20040930T145000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294133@indico.cern.ch
DESCRIPTION:Speakers: G. Asova (DESY ZEUTHEN)\nThe photo injector test fac
ility at DESY Zeuthen (PITZ) was built to \ndevelop\, operate and optimize
photo injectors for future free \nelectron lasers and linear colliders. I
n PITZ we use a DAQ system \nthat stores data as a collection of ROOT file
s\, forming our database \nfor offline analysis. Consequently\, the offlin
e analysis will be \nperformed by a ROOT application\, written at least pa
rtly by the user \n(a physicist). To help the user to develop safe filters
and data \nvisualisation (graphs\, histograms) with minimal effort in an
\nexisting ROOT framework application\, we provide a GUI that generates \n
C++ source files\, compiles and links them to the rest of the \napplicatio
n. We call these C++ routines "Automatic Procedures" (AP).\nStandard filte
r conditions and data visualisation can be generated \nby click or drag-an
d-drop\, while more complex tasks may be \nexpressed as small pieces of C+
+ code. Once compiled by ACLiC \n(ROOT's Automatic Compiler Linker)\, an A
utomatic Procedure may be \nreused without repeated compilation. E.g.\, th
e injector shift \ncrew will run a number of ROOT applications\, controlle
d by APs at \nregular intervals. Alternatively\, every AP can be read in an
d loaded \nto the GUI for further improvement. A number of APs can run in a \nlogi
cal sequence\, and parameters can be transferred from one AP to \nanother. T
hey can be selected by picking a point from a graph.\nThe GUI was co
nstructed with Qt\, because that offers a comprehensive \nGUI programming
toolkit.\n\nKeywords: Automatic Proce
dure\, ROOT\, ACLiC\, Data Analysis\, Data \nVisualisation\, GUI\, Qt\n\nh
ttps://indico.cern.ch/event/0/contributions/1294133/
LOCATION:Interlaken\, Switzerland Brunig 1+2
URL:https://indico.cern.ch/event/0/contributions/1294133/
END:VEVENT
BEGIN:VEVENT
SUMMARY:InfiniBand for High Energy Physics
DTSTART;VALUE=DATE-TIME:20040929T134000Z
DTEND;VALUE=DATE-TIME:20040929T140000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294558@indico.cern.ch
DESCRIPTION:Speakers: A. Heiss (FORSCHUNGSZENTRUM KARLSRUHE)\nDistributed
physics analysis techniques as provided by the rootd \nand proofd concepts
require a fast and efficient interconnect between\nthe nodes. Apart from
the required bandwidth the latency of message \ntransfers is important\, i
n particular in environments with many nodes. \nEthernet is known to have
 large latencies\, between 30 and 60 microseconds for\nthe common Gigabit
Ethernet. \nThe InfiniBand architecture is a relatively new\, open indust
ry standard. \nIt defines a switched high-speed\, low-latency fabric desig
ned to connect compute \nnodes\nand I/O nodes with copper or fibre cables.
The theoretical bandwidth\nis up to 30 Gbit/s. The Institute for Scientif
ic Computing (IWR) at the \nForschungszentrum Karlsruhe has been testing In
finiBand technology since \nthe beginning of 2003\, and has a cluster of\ndual Xeon nod
es using the 4X (10 Gbit/s) version of the interconnect. \nBringing the RF
IO protocol - which is part of the CERN CASTOR \nfacilities for sequential
file transfers - to InfiniBand has been \na big success\, allowing signif
icant reduction of CPU consumption \nand increase of file transfer speed.
\nA first prototype of a direct interface to InfiniBand for the root\ntoo
lkit has been designed and implemented. \nExperiences with hard- and softw
are\, in particular MPI performance results\, will be \nreported.\nThe met
hods and first performance results on rfio and root will be\nshown and com
pared to other fabric technologies like Ethernet.\n\nhttps://indico.cern.c
h/event/0/contributions/1294558/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294558/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A Level-2 trigger algorithm for the identification of muons in the
Atlas Muon Spectrometer
DTSTART;VALUE=DATE-TIME:20040927T124000Z
DTEND;VALUE=DATE-TIME:20040927T130000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294563@indico.cern.ch
DESCRIPTION:Speakers: A. Di Mattia (INFN)\nThe Atlas Level-2 trigger provi
des a software-based event selection \nafter the initial Level-1 hardware
 trigger. For the muon events\, the \nselection is decomposed into a number o
f broad steps: first\, the Muon \nSpectrometer data are processed to give
physics quantities \nassociated to the muon track (standalone features ext
raction) then\, \nother detector data are used to refine the extracted fea
tures. \nThe "muFast" algorithm performs the standalone feature extraction
\, \nproviding a first reduction of the muon event rate from Level-1. It \
nconfirms muon track candidates with a precise measurement of the \nmuon m
omentum. The algorithm is designed to be both conceptually \nsimple and fa
st so as to be readily implemented in the demanding \nonline environment i
n which the Level-2 selection code will run. \nNevertheless\, its physics pe
rformance approaches\, in some cases\, \nthose of the offline reconstruc
tion algorithms. This paper describes \nthe implemented algorithm together
with the software techniques \nemployed to increase its timing performanc
e.\n\nhttps://indico.cern.ch/event/0/contributions/1294563/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294563/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Building Global HEP Systems on Kerberos
DTSTART;VALUE=DATE-TIME:20040929T143000Z
DTEND;VALUE=DATE-TIME:20040929T145000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294189@indico.cern.ch
DESCRIPTION:Speakers: M. Crawford (FERMILAB)\nAs an underpinning of AFS an
d Windows 2000\, and as a formally proven\nsecurity protocol in its own ri
ght\, Kerberos is ubiquitous among HEP\nsites. Fermilab and users from oth
er sites have taken advantage of this\nand built a diversity of distribute
d applications over Kerberos v5. We\npresent several projects in which thi
s security infrastructure has been\nleveraged to meet the requirements of
far-flung collaborations. These\nrange from straightforward "Kerberization
" of applications such as\ndatabase and batch services\, to quick tricks l
ike simulating a\nuser-authenticated web service with AFS and the "file:"
scheme\, to more\ncomplex systems. Examples of the latter include experime
nt control room\noperations and the Central Analysis Farm (CAF).\n\nWe pre
sent several use cases and their security models\, and examine how\nthey a
ttempt to address some of the outstanding problems of secure\ndistributed
computing: delegation of the least necessary privilege\;\nestablishment of
trust between a user and a remote processing facility\;\ncredentials for
long-queued or long-running processes\, and automated\nprocesses running w
ithout any user's instigation\; security of\nremotely-stored credentials\;
and ability to scale to the numbers of\nsites\, machines and users expect
ed in the collaborations of the coming\ndecade.\n\nhttps://indico.cern.ch/
event/0/contributions/1294189/
LOCATION:Interlaken\, Switzerland Brunig 3
URL:https://indico.cern.ch/event/0/contributions/1294189/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Dynamic Matched Filters for Gravitational Waves Detection
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294377@indico.cern.ch
DESCRIPTION:Speakers: S. Pardi (DIPARTIMENTO DI MATEMATICA ED APPLICAZIONI
"R.CACCIOPPOLI")\nThe algorithms for the detection of gravitational waves
are usually very complex \ndue to the low signal to noise ratio. In parti
cular the search for signals coming \nfrom coalescing binary systems can b
e very demanding in terms of computing power\, \nlike in the case of the c
lassical Standard Matched Filter Technique. To overcome \nthis problem\, w
e tested a Dynamic Matched Filter Technique\, still based on Matched \nFil
ters\, whose main advantage is the requirement of a lower computing power.
In \nthis work this technique is described\, together with its possible a
pplication as a \npre-data analysis algorithm. Also the results on simulat
ed data are reported.\n\nhttps://indico.cern.ch/event/0/contributions/1294
377/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294377/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Middleware for the next generation Grid infrastructure
DTSTART;VALUE=DATE-TIME:20040929T122000Z
DTEND;VALUE=DATE-TIME:20040929T124000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294283@indico.cern.ch
DESCRIPTION:Speakers: E. Laure (CERN)\nThe aim of the EGEE (Enabling Grids
 for E-Science in Europe) project is to\ncreate a reliable and dependable European
Grid infrastructure for\ne-Science. The objective of the Middleware Re-en
gineering and Integration\nResearch Activity is to provide robust middlewa
re components\,\ndeployable on several platforms and operating systems\, c
orresponding\nto the core Grid services for resource access\, data managem
ent\,\ninformation collection\, authentication & authorization\, resource\
nmatchmaking and brokering\, and monitoring and accounting. \n\nFor achiev
ing this objective\, we developed an architecture and\ndesign of the next
generation Grid middleware leveraging experiences\nand existing components
mainly from AliEn\, EDG\, and VDT. The\narchitecture follows the service
breakdown developed by the LCG ARDA\nRTAG. Our goal is to do as little ori
ginal development as possible but\nrather re-engineer and harden existing
Grid services. The evolution of\nthese middleware components towards a Ser
vice Oriented Architecture\n(SOA) adopting existing standards (and followi
ng emerging ones) as\nmuch as possible is another major goal of our activi
ty.\n\nA rapid prototyping approach has been adopted\, providing a sequenc
e of\nmore sophisticated prototypes to the EGEE candidate applications\nco
ming from the LHC HEP experiments and the Biomedical field. The\nclose fee
dback loop with applications via these prototypes is\nindispensable for ac
hieving our ultimate goals of providing a reliable\nand dependable Grid in
frastructure.\n\nIn this paper we will report on the architecture and desi
gn of the\nmain Grid components and report on our experiences with early\n
prototype systems.\n\nhttps://indico.cern.ch/event/0/contributions/1294283
/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294283/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Virtual Geometry Model
DTSTART;VALUE=DATE-TIME:20040930T120000Z
DTEND;VALUE=DATE-TIME:20040930T122000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294400@indico.cern.ch
DESCRIPTION:Speakers: I. Hrivnacova (IPN\, ORSAY\, FRANCE)\nIn order for p
hysicists to easily benefit from the different existing \ngeometry tools us
ed within the community\, the Virtual Geometry Model\n(VGM) has been desig
ned. In the VGM we introduce the abstract interfaces \nto geometry objects
and an abstract factory for geometry construction\,\nimport and export. T
he interfaces to geometry objects were defined to be\nsuitable to describe
"geant-like" geometries with a hierarchical volume\nstructure.\nThe imple
mentation of the VGM for a concrete geometry model represents\na small lay
er between the VGM and the particular native geometry.\nAt the present tim
e this implementation is provided for the Geant4 and\nthe Root TGeo geomet
ry models.\nUsing the VGM factory\, geometry can first be defined independ
ently from\na concrete geometry model\, and then built by choosing a concr
ete\ninstantiation of it. Alternatively\, the import function of the VGM f
actory\nmakes it possible to use VGM directly with native geometries (Gean
t4\,\nTGeo). The export functions provide conversion into other native \ng
eometries or the XML format.\nIn this way\, the VGM surpasses one-directio
nal geometry converters\nwithin Geant4 VMC (Virtual Monte Carlo): roottog4
and g4toxml\, and\nautomatically provides missing directions: g4toroot\,
 roottoxml. To port\na third geometry model\, providing the VGM layer for i
t is sufficient\nto obtain all the converters between this third geometry a
nd the already\nported geometries (Geant4\, Root).\nThe design and imple
mentation of the VGM classes\, the status of existing\nimplementations for
Geant4 and TGeo\, and simple examples of usage will be\npresented.\n\nhtt
ps://indico.cern.ch/event/0/contributions/1294400/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294400/
END:VEVENT
BEGIN:VEVENT
SUMMARY:INTAS Discussion
DTSTART;VALUE=DATE-TIME:20040930T143000Z
DTEND;VALUE=DATE-TIME:20040930T163000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294511@indico.cern.ch
DESCRIPTION:INTAS ( http://www.intas.be): International Association for th
e \npromotion of co-operation with scientists from the New Independent \nS
tates of the former Soviet Union (NIS). INTAS encourages joint \nactivitie
s between its INTAS Members and the NIS in all exact and \nnatural science
s\, economics\, human and social sciences.\n\nINTAS supports a number of N
IS participants to attend the 2004 \nComputing in High Energy Physics Conf
erence (CHEP'04). \nDuring CHEP'04\, this discussion has been organised so
that NIS \ndelegates can meet specifically with their physicists and comp
uter \nscientists counterparts to discuss topics of mutual interest.\n\nPr
ovisional agenda\n - Welcome message from Wolfgang Von Rueden\n - Present
status about East-West science cooperation inside the High\n Energy Phys
ics Environment\n - Main directions to go\n - New developments and technol
ogies needed to achieve the goals\,\n especially in the network infrastr
ucture area\n - Time and financial schedules\n - Needed man power\n - Fina
ncial scenarios including funding scenarios (INTAS will need \nthis as the
y plan for their future activities to focus on selected \nthematic)\n\nhtt
ps://indico.cern.ch/event/0/contributions/1294511/
LOCATION:Interlaken\, Switzerland Brunig
URL:https://indico.cern.ch/event/0/contributions/1294511/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Evolution of Computing: Slowing down? Not Yet!
DTSTART;VALUE=DATE-TIME:20040929T070000Z
DTEND;VALUE=DATE-TIME:20040929T073000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294154@indico.cern.ch
DESCRIPTION:Speakers: Andrew Sutherland (ORACLE)\nDr Sutherland will revie
w the evolution of computing over the past \ndecade\, focusing particularl
y on the development of the database and \nmiddleware from client server t
o Internet computing. \n\nBut what are the next steps from the perspective
of a software \ncompany? Dr Sutherland will discuss the development of Gr
id as well \nas the future applications revolving around collaborative wor
king\, \nwhich are appearing as the next wave of computing applications.\n
\nhttps://indico.cern.ch/event/0/contributions/1294154/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294154/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Experience with Deployment and Operation of the ATLAS Production S
ystem and the Grid3+ Infrastructure at Brookhaven National Lab
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294322@indico.cern.ch
DESCRIPTION:Speakers: X. Zhao (Brookhaven National Laboratory)\nThis paper
describes the deployment and configuration of the\nproduction system for
ATLAS Data Challenge 2 starting in May 2004\,\nat Brookhaven National Labo
ratory\, which is the Tier1 center in\nthe United States for the Internati
onal ATLAS experiment. We will\ndiscuss the installation of Windmill (supe
rvisor) and Capone (executor)\nsoftware packages on the submission host an
d the relevant security\nissues. The Grid3+ infrastructure and information
service are used\nfor the deployment of grid-enabled ATLAS transformation
s on the Grid3+\ncomputing elements. The Tier 1 hardware configuration inc
ludes 95\ndual processor Linux compute nodes\, 24 TB of NFS disk and an HP
SS\nmass storage system. A VOMS server maintains both VO services for US\nAT
LAS and BNL local site policies. This paper describes the work of\noptimiz
ing the performance and efficiency of this configuration.\n\nhttps://indic
o.cern.ch/event/0/contributions/1294322/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294322/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Networking for High Energy and Nuclear Physics as Global E-Science
DTSTART;VALUE=DATE-TIME:20040930T120000Z
DTEND;VALUE=DATE-TIME:20040930T122000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294354@indico.cern.ch
DESCRIPTION:Speakers: H. Newman (Caltech)\nWide area networks of sufficien
t and rapidly increasing end-to-end\ncapability are vital for every pha
se of high energy physicists' work. \nOur bandwidth usage\, and the typica
l capacity of the major national \nbackbones and intercontinental links us
ed by our field have \nprogressed by a factor of more than 1000 over the p
ast decade\, and the\noutlook is for a similar increase over the next deca
de\, as we enter \nthe era of LHC physics served by Grids on a global scal
e. Responding \nto these trends\, and the emerging need to provide rapid a
ccess and \ndistribution of Petabyte-scale datasets\, physicists working w
ith \nnetwork engineers and computer scientists are learning to use\nnetwo
rks effectively in the 1-10 Gigabit/s range\, placing them among\nthe leadi
ng developers of global networks.\n\nIn this talk I review the network req
uirements and usage trends\, and \npresent a bandwidth roadmap for HEP and
other fields of "data \nintensive" science. I give an overview of the sta
tus and outlook for \nthe world's research networks\, technology advances\
, and the problem \nof the Digital Divide\, based on the recent work of IC
FA's\nStanding Committee on Inter-regional Connectivity (SCIC).\nFinally\,
I discuss the role of high speed networks in the next \ngeneration of Gri
d systems that are now being constructed to support \ndata analysis for th
e LHC experiments.\n\n[This is a candidate Plenary Presentation.]\n\nhttps
://indico.cern.ch/event/0/contributions/1294354/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294354/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Online Monitoring and online calibration/reconstruction for the PH
ENIX experiment
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294432@indico.cern.ch
DESCRIPTION:Speakers: Martin Purschke ()\nThe PHENIX experiment consists o
f many different detectors and \ndetector types\, each one with its own ne
eds concerning the \nmonitoring of the data quality and the calibration. T
o ease the task \nfor the shift crew to monitor the performance and status
of each\nsubsystem in PHENIX we developed a general client-server based
\nframework which delivers events at a rate in excess of 100Hz. \n\nThis m
odel was chosen to minimize the possibility of accidental \ninterference w
ith the monitoring tasks themselves. The user only \ninteracts with the cl
ient which can be restarted any time without \nloss or alteration of infor
mation on the server side. It also \nenables multiple people to check simu
ltaneously the same detector - \nif need be even from remote locations. Th
e information is\ntransferred in the form of histograms which are processed b
y the client. \nThese histograms are saved for each run and some html outp
ut is \ngenerated which is used later on to remove problematic runs from t
he\noffline analysis. An additional interface to a database is provided\nto enable the display of long-term trends.\n\nThis framework was augmente
d to perform an immediate calibration \npass and a quick reconstruction of
rare signals in the counting \nhouse. This is achieved by filtering out i
nteresting triggers and \nprocessing them on a local Linux cluster. That e
nabled PHENIX to \ne.g. keep track of the number of J/Psi's which could be
expected \nwhile still taking data.\n\nhttps://indico.cern.ch/event/0/con
tributions/1294432/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294432/
END:VEVENT
BEGIN:VEVENT
SUMMARY:H1OO - an analysis framework for H1
DTSTART;VALUE=DATE-TIME:20040929T155000Z
DTEND;VALUE=DATE-TIME:20040929T161000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294219@indico.cern.ch
DESCRIPTION:Speakers: J. Katzy (DESY\, HAMBURG)\nDuring the years 2000 and
2001 the HERA machine and the H1 \nexperiment performed substantial lumin
osity upgrades. To cope with \nthe increased demands on data handling an e
ffort was made to \nredesign and modernize the analysis software. Main goa
ls were to \nlower turn-around time for physics analysis by providing a si
ngle \nframework for data storage\, event selection\, physics analysis and
\nevent display. The new object-oriented analysis environment uses\n
C++ and is based on the RooT framework. Data layers with a high \nlevel of
abstraction are defined\, i.e. physics particles\, event \nsummary inform
ation and user specific information.\n \nA generic interface makes the use
of reconstruction output stored \nin BOS format transparent to the user.
Links between all data layers \nand partial event reading allow correlatin
g quantities of different \nabstraction levels with high performance. De
tailed physics \nanalysis is performed by passing transient data between d
ifferent\nanalysis modules. On-demand binding of existing Fortran-based libraries\nallows the use of existing utility functions and interfacing to\nthe existing database. On this basis\, tools with enhanced\nfunctionality are provided. This framework has become the standard for\ndata analysis of t
he previously and currently collected data.\n\nhttps://indico.cern.ch/even
t/0/contributions/1294219/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294219/
END:VEVENT
BEGIN:VEVENT
SUMMARY:INDICO - the software behind CHEP 2004
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294471@indico.cern.ch
DESCRIPTION:Speakers: T. Baron (CERN)\nThe CHEP 2004 conference is using the Integrated Digital Conferencing\nproduct to manage part of its web site and the processes to run the\nconference.\n\nThis software has been built within the framework of the InDiCo European\nProject. It is designed to be generic and extensible\, with the goal of\nsupporting single seminars as well as the management of large\nconferences. Partly developed at CERN within the Document Server (CDS)\nteam\, it focuses on supporting future events in the HEP domain and it will\nbe distributed with an open source license.\n\nThe presentation will explain the main application features before\ngoing into the details of its object-oriented development. It will be\ndemonstrated how InDiCo can be used as a platform for scheduling events\nin a large institution like CERN and how it can give conference\nchairpersons a solid basis to set up\, run and archive the content of meetings.\n\nhttps://indico.cern.ch/event/0/contributions/1294471/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294471/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Data management services of NorduGrid
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294128@indico.cern.ch
DESCRIPTION:Speakers: O. Smirnova (Lund University\, Sweden)\nIn common grid installations\, the services responsible for storing big data\nchunks\, replicating those data and indexing their availability are usually\ncompletely decoupled\, and the task of synchronizing data is passed either to\nuser-level tools or to separate services (like spiders) which are subject to\nfailure and usually cannot perform properly if one of the underlying services\nfails too.\n The NorduGrid Smart Storage Element (SSE) was designed to try to\novercome those problems by combining the most desirable features into one\nservice. It uses HTTPS/G for secure data transfer\, Web Services for\ncontrol (through the same HTTPS/G channel) and can provide information to\nindexing services used in middlewares based on the Globus Toolkit (TM). At\nthe moment\, those are the Replica Catalog and the Replica Location\nService. The modular internal design of the SSE and the power of C++\nobject programming make it easy to add support for other indexing\nservices.\n There are plans to complement it with a Smart Indexing Service capable of\nresolving inconsistencies\, thus creating a robust distributed data storage\nsystem.\n\nhttps://indico.cern.ch/event/0/contributions/1294128/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294128/
END:VEVENT
BEGIN:VEVENT
SUMMARY:FroNtier: High Performance Database Access Using Standard Web Comp
onents in a Scalable Multi-tier Architecture
DTSTART;VALUE=DATE-TIME:20040927T124000Z
DTEND;VALUE=DATE-TIME:20040927T130000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294338@indico.cern.ch
DESCRIPTION:Speakers: L. Lueking (FERMILAB)\nA high performance system has
been assembled using standard web components to deliver\ndatabase informa
tion to a large number (thousands?) of broadly distributed clients. \nThe
CDF Experiment at Fermilab is building processing centers around the world
\nimposing a high demand load on their database repository. For deliverin
g read-only\ndata\, such as calibrations\, trigger information and run con
ditions data\, we have\nabstracted the interface that clients use to retri
eve database objects. A middle tier\nis deployed that translates client re
quests into database specific queries and\nreturns the data to the client
as HTTP datagrams. The database connection management\,\nrequest translati
on\, and data encoding are accomplished in servlets running under\nTomcat.
Squid Proxy caching layers are deployed near the Tomcat servers as wel
l as\nclose to the clients to significantly reduce the load on the databas
e and provide a\nscalable deployment model. This system is highly scalabl
e\, readily deployable\, and\nhas a very low administrative overhead for d
ata delivery to a large\, distributed\naudience. Details of how the system
is built and used will be presented including its\narchitecture\, design\
, interfaces\, administration\, and performance measurements.\n\nhttps://i
ndico.cern.ch/event/0/contributions/1294338/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294338/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The evolution of the distributed Event Reconstruction Control Syst
em in BaBar
DTSTART;VALUE=DATE-TIME:20040927T120000Z
DTEND;VALUE=DATE-TIME:20040927T122000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294462@indico.cern.ch
DESCRIPTION:Speakers: A. Ceseracciu (SLAC / INFN PADOVA)\nThe Event Recons
truction Control System of the BaBar experiment was redesigned in\n2002\,
to satisfy the following major requirements: flexibility and scalability.\
n\nBecause of its very nature\, this system is continuously maintained to
implement the\nchanging policies\, typical of a complex\, distributed prod
uction environment.\nIn 2003\, a major revolution in the BaBar computing m
odel\, the Computing Model 2\,\nbrought a particularly vast set of new req
uirements in various respects\, many of\nwhich had to be discovered during
the early production effort\, and promptly dealt\nwith. Particularly\, th
e reconstruction pipeline was expanded with the addition of a\nthird stage
. The first fast calibration stage was kept running at SLAC\, USA\, while\
nthe two stages doing most of the computation were moved to the ~400 CPU\n
reconstruction facility of INFN\, Italy.\n\nIn this paper\, we summarize t
he extent and nature of the evolution of the Control\nSystem\, and we demo
nstrate how the modular\, well-engineered architecture of the\nsystem allowed us to efficiently adapt and expand it\, while making great reuse of\nexis
ting code\, leaving virtually intact the core layer\, and exploiting the\n
"engineering for flexibility" philosophy.\n\nhttps://indico.cern.ch/event/
0/contributions/1294462/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294462/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Genetic Programming and its application to HEP
DTSTART;VALUE=DATE-TIME:20040930T161000Z
DTEND;VALUE=DATE-TIME:20040930T163000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294553@indico.cern.ch
DESCRIPTION:Speakers: E. Vaandering (VANDERBILT UNIVERSITY)\nGenetic progr
amming is a machine learning technique\, popularized by Koza in\n1992\, in
which computer programs that solve user-posed problems are\nautomaticall
y discovered. Populations of programs are evaluated for their\nfitness of
solving a particular problem. New populations of ever increasing\nfitness
are generated by mimicking the biological processes underlying\nevolution.
These processes are principally genetic recombination\, mutation\,\nand s
urvival of the fittest. \n\nGenetic programming has potential advantages o
ver other machine learning\ntechniques such as neural networks and genetic
algorithms in that the form of\nthe solution is not specified in advance
and the program can grow as large as\nnecessary to adequately solve the po
sed problem.\n\nThis talk will give an overview and demonstration of the g
enetic programming\ntechnique and show a successful application in high en
ergy physics: the\nautomatic construction of an event filter for FOCUS whi
ch is more powerful than\nthe experiment's usual methods of event selectio
n. We have applied this method\nto the study of doubly Cabibbo suppressed
decays of charmed hadrons ($D^+$\,\n$D_s^+$\, and $\\Lambda_c^+$).\n\nhttp
s://indico.cern.ch/event/0/contributions/1294553/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294553/
END:VEVENT
BEGIN:VEVENT
SUMMARY:ScotGrid: A prototype Tier 2 centre
DTSTART;VALUE=DATE-TIME:20040927T130000Z
DTEND;VALUE=DATE-TIME:20040927T132000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294339@indico.cern.ch
DESCRIPTION:Speakers: S. Thorn ()\nScotGrid is a prototype regional comput
ing centre formed as a collaboration between\nthe universities of Durham\
, Edinburgh and Glasgow as part of the UK's national\nparticle physics gri
d\, GridPP. We outline the resources available at the three core\nsites an
d our optimisation efforts for our user communities. We discuss the work\n
which has been conducted in extending the centre to embrace new projects b
oth from\nparticle physics and new user communities and explain our method
ology for doing this.\n\nhttps://indico.cern.ch/event/0/contributions/1294
339/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294339/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Recent evolutions of CMT. Multi-project and activity management.
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294310@indico.cern.ch
DESCRIPTION:Speakers: C. ARNAULT (CNRS)\nSince its introduction in 1999\,
CMT has become a production tool\nin many large software projects for
physics research (ATLAS\, LHCb\,\nVirgo\, Auger\, Planck). Although its ba
sic concepts remain unchanged\nsince the beginning\, proving their viabili
ty\, it is still improving\nand increasing its coverage of the configurati
on management\nmechanisms. Two important evolutions have recently been int
roduced\,\none for explicitly supporting multi-project environments\, and
the\nother to specify and manage configuration activities.\n\nThe existing
concept of package area is now extended to cover the\nsupport of sub-proj
ects structuring\, with the possibility of assigning\nconfiguration manage
ment properties (typically strategies) to each sub-project\, allowing\,\nfor instance\, installation area mechanisms to be\napplicable only to some o
f them.\n\nIt is also possible to specify parameterized activities that wi
ll be\nrun on demand either through make or through an explicit activation
\ncommand\, which ensures that the runtime environment is properly setup.\
n\nhttps://indico.cern.ch/event/0/contributions/1294310/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294310/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Control in the ATLAS TDAQ System
DTSTART;VALUE=DATE-TIME:20040929T151000Z
DTEND;VALUE=DATE-TIME:20040929T153000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294525@indico.cern.ch
DESCRIPTION:Speakers: D. Liko (CERN)\nThe unprecedented size and complexity of the ATLAS TDAQ system\nrequires a comprehensive and flexible control system. Its role\nranges from the so-called run control\, e.g. starting and stopping\nthe data taking\, to error handling and fault tolerance. It also\nincludes initialisation and verification of the overall system.\nFollowing the traditional approach\, a hierarchical system of\ncustomizable controllers has been proposed. For the final system all\nfunctionality would therefore be available in a distributed manner\,\nwith the possibility of local customisation.\n\nAfter a technology survey\, the open source expert system CLIPS has\nbeen chosen as a basis for the implementation of the supervision and\nthe verification system. The CLIPS interpreter has been extended to\nprovide a general control framework. Other ATLAS Online software\ncomponents have been integrated as plugins and provide the mechanism\nfor configuration and communication.\nSeveral components that share this technology have been implemented.\nThe dynamic behaviour of the individual component is fully described\nby the rules\, while the framework is based on a common\nimplementation. During this year these components have been the\nsubject of scalability tests up to the full system size.\nEncouraging results are presented and validate the technology choice.\n\nhttps://indico.cern.ch/event/0/contributions/1294525/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294525/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Full Event Reconstruction in Java
DTSTART;VALUE=DATE-TIME:20040930T124000Z
DTEND;VALUE=DATE-TIME:20040930T130000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294248@indico.cern.ch
DESCRIPTION:Speakers: N. Graf (SLAC)\nWe describe a Java toolkit for full
event reconstruction and analysis. The toolkit \nis currently being used f
or detector design and physics analysis for a future\ne+ e- linear
collider. The components are fully modular and are available \nfor tasks
from digitization of tracking detector signals through to cluster \nfindin
g\, pattern recognition\, fitting\, jet finding\, and analysis. We discuss
the \narchitecture as well as the implementation for several candidate det
ector designs.\n\nhttps://indico.cern.ch/event/0/contributions/1294248/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294248/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Production Experience of the Storage Resource Broker in the BaBar
Experiment
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294281@indico.cern.ch
DESCRIPTION:Speakers: A. Hasan (SLAC)\nWe describe the production experience gained from implementing and\nexclusively using the Storage Resource Broker (SRB)\, developed by the San\nDiego Supercomputer Center\, to distribute the BaBar experiment's production\nevent data\, stored in ROOT files\, from the experiment center at SLAC\,\nCalifornia\, USA to a Tier A computing center at CC-IN2P3\, Lyon\, France.\nIn addition we outline how the system can be readily expanded to\ninclude more sites.\n\nhttps://indico.cern.ch/event/0/contributions/1294281/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294281/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Domain Specific Visual Query Language for HEP analysis or How far
can we go with user friendliness?
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294217@indico.cern.ch
DESCRIPTION:Speakers: V M. Moreira do Amaral (UNIVERSITY OF MANNHEIM)\nThere is a permanent quest for user friendliness in HEP analysis. This\ngrowing need is directly proportional to the complexity of the analysis\nframeworks' interfaces. In fact\, the user is provided with an analysis\nframework that makes use of a General Purpose Language to program the\nquery algorithms. Usually the user finds this overwhelming\, since he\nor she is presented with the intricacies of the system. In this way the\nend user of HEP experiments becomes a forced programmer or an application\ndeveloper.\n\nIn our opinion this impacts\, directly or indirectly\, the performance of the\nquery systems. For this reason we have decided to invest in a\nline of research to find a solution that balances the complexity and\nvariability of the analysis queries with the need for simpler query\nsystem interfaces. The ultimate goal is to save time on the production of\nquery algorithms and to have a way to increase efficiency.\n\nIn this communication we are going to present how we explored the\nhypothesis of generating a visual query language specific to the HEP\nhigh-level analysis domain. The framework prototyped so far\,\nPHEASANT\, supports our arguments for the feasibility of this\napproach. Therefore\, as in any young human-centric development\nproject\, this raises the need for a broad discussion in order to\nvalidate it. We believe we are opening a new and fruitful research topic\nwithin the community and we expect to motivate both computer science\nand physics experts to join the same discussion.\n\nhttps://indico.cern.ch/event/0/contributions/1294217/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294217/
END:VEVENT
BEGIN:VEVENT
SUMMARY:OpenPAW
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294461@indico.cern.ch
DESCRIPTION:Speakers: G B. Barrand (CNRS / IN2P3 / LAL)\nOpenPAW is for people that definitely do not want\n to quit the PAW command prompt\, but nevertheless seek\n an implementation based on more modern technologies.\n We shall present the OpenScientist/Lab/opaw program\n that offers a PAW command prompt by using the OpenScientist\n tools (C++\, Inventor for the graphics\, Rio for\n the IO\, OnX for the GUI\, etc.). The OpenScientist/Lab package being\n also AIDA compliant\, we shall show that it is possible\n to marry AIDA with a PAW command prompt.\n\nhttps://indico.cern.ch/event/0/contributions/1294461/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294461/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The FEDRA - Framework for Emulsion Data Reconstruction and Analysi
s in OPERA experiment.
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294266@indico.cern.ch
DESCRIPTION:Speakers: V. Tioukov (INFN NAPOLI)\nOPERA is a massive lead/emulsion target for a long-baseline neutrino\noscillation search. More than 90% of the useful experimental data in OPERA\nwill be produced by the scanning of emulsion plates with automatic microscopes.\nThe main goal of the data processing in OPERA will be the search\, analysis and\nidentification of primary and secondary vertices produced by neutrinos in the\nlead-emulsion target.\n\nThe volume of middle- and high-level data to be analysed and stored\nis expected to be of the order of several Gb per event. The storage\,\ncalibration\, reconstruction\, analysis and visualization of these data is\nthe task of FEDRA\, a system written in C++ and based on the ROOT framework.\nThe system is now actively used for processing of test beam and simulation\ndata. Several interesting algorithmic solutions permit us to produce very\nefficient code for fast pattern recognition in heavy signal/noise conditions.\nThe system consists of the storage part\, intercalibration and segment-linking\npart\, track finding and fitting\, vertex finding and fitting\, and kinematical\nanalysis parts. The Kalman filtering technique is used for track and vertex\nfitting. A ROOT-based event display is used for interactive analysis\nof special events.\n\nhttps://indico.cern.ch/event/0/contributions/1294266/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294266/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Next Generation Root File Server
DTSTART;VALUE=DATE-TIME:20040927T143000Z
DTEND;VALUE=DATE-TIME:20040927T145000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294332@indico.cern.ch
DESCRIPTION:Speakers: A. Hanushevsky (SLAC)\nAs the BaBar experiment shift
ed its computing model to a ROOT-based\nframework\, we undertook the devel
opment of a high-performance file server\nas the basis for a fault-toleran
t storage environment whose ultimate goal\nwas to minimize job failures du
e to server failures. Capitalizing on our\nfive years of experience with e
xtending Objectivity's Advanced\nMultithreaded Server (AMS)\, elements wer
e added to remove as many\nobstacles to server performance and fault-toler
ance as possible. The final\noutcome was xrootd\, upwardly and downwardly
compatible with the current\nfile server\, rootd. This paper describes the
essential protocol elements\nthat make high performance and fault-toleran
ce possible\, including\nasynchronous parallel requests\, stream multiplex
ing\, data pre-fetch\,\nautomatic data segmenting\, and the framework for
a structured peer-to-peer\nstorage model that allows massive server scalin
g and client recovery from\nmultiple failures. The internal architecture o
f the server is also\ndescribed to explain how high performance was mainta
ined and full\ncompatibility was achieved. Now in production at Stanford
Linear\nAccelerator Center\, Rutherford Appleton Laboratory (RAL)\, INFN\,
and IN2P3\;\nxrootd has shown that our design provides what we set out to
achieve. The\nxrootd server is now part of the standard ROOT distributio
n so that other \nexperiments can benefit from this data serving model wit
hin a standard HEP\nevent analysis framework.\n\nhttps://indico.cern.ch/ev
ent/0/contributions/1294332/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294332/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The DZERO Run II Level 3 Trigger and Data Acquisition System
DTSTART;VALUE=DATE-TIME:20040927T151000Z
DTEND;VALUE=DATE-TIME:20040927T153000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294570@indico.cern.ch
DESCRIPTION:Speakers: D Chapin (Brown University)\nThe DZERO Level 3 Trigg
er and Data Acquisition (L3DAQ) system has been\nrunning continuously since\nSpring 2002. DZERO is located at one of the\ntwo interaction points in t
he Fermilab Tevatron Collider. The L3DAQ\nmoves front-end readout data fro
m VME crates to a trigger processor\nfarm. It is built upon a Cisco 6509 E
thernet switch\, standard PCs\, and\ncommodity VME single board computers.
We will report on operating\nexperience\, performance\, and upgrades. In
particular\, issues related to\nhardware quality\, networking and security
\, and an expansion of the\ntrigger farm will be discussed.\n\nhttps://ind
ico.cern.ch/event/0/contributions/1294570/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294570/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Using Nagios for intrusion detection
DTSTART;VALUE=DATE-TIME:20040929T124000Z
DTEND;VALUE=DATE-TIME:20040929T130000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294439@indico.cern.ch
DESCRIPTION:Speakers: M. Cardenas Montes (CIEMAT)\nImplementing strategies for secured access to widely accessible\nclusters is a basic requirement of these services\, in particular if\nGRID integration is sought. This issue has two complementary\nlines to be considered: the security perimeter and intrusion detection\nsystems. In this paper we address aspects of the second one.\n\nCompared to classical intrusion detection mechanisms\, close monitoring of\ncomputer services can substantially help to detect signs of intrusion.\nHaving alarms indicating the presence of an intrusion into the system\nallows system administrators to take fast action to minimize damage\nand stop propagation towards other critical systems.\n\nOne possible monitoring tool is Nagios (www.nagios.org)\, a powerful GNU tool\nwith the capacity to observe and collect information about a variety of\nservices\, and to trigger alerts.\n\nIn this paper we present the work done at CIEMAT\, where we have applied\nthese directives to our local cluster. We have implemented a system\nto monitor the hardware and sensitive system information.\nWe describe the process and show\, through different simulated security\nthreats\, how our implementation responds to them.\n\nhttps://indico.cern.ch/event/0/contributions/1294439/
LOCATION:Interlaken\, Switzerland Brunig 3
URL:https://indico.cern.ch/event/0/contributions/1294439/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Using the reconstruction software\, ORCA\, in the CMS data challenge
DTSTART;VALUE=DATE-TIME:20040929T153000Z
DTEND;VALUE=DATE-TIME:20040929T155000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294342@indico.cern.ch
DESCRIPTION:Speakers: S. Wynhoff (PRINCETON UNIVERSITY)\nWe report on the
software for Object-oriented Reconstruction for CMS \nAnalysis\, ORCA. It
is based on the Coherent Object-oriented Base for \nReconstruction\, Analy
sis and simulation (COBRA) and used for \ndigitization and reconstruction
of simulated Monte-Carlo events as \nwell as testbeam data. \n \nFor the
2004 data challenge the functionality of the software has \nbeen extended
to store collections of reconstructed objects (DST) as \nwell as the previ
ously storable quantities (Digis) in multiple\, \nparallel streams. \n \nW
e describe the structure of the DST\, the way to ensure and store \nthe co
nfiguration of reconstruction algorithms that fill the \ncollections of re
constructed objects as well as the relations \nbetween them. Also the hand
ling of multiple streams to store parts \nof selected events is discussed.
The experience from the \nimplementation used early 2004 and the modifica
tions for future \noptimization of reconstruction and analysis are present
ed.\n\nhttps://indico.cern.ch/event/0/contributions/1294342/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294342/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Testing the CDF Distributed Computing Framework
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294507@indico.cern.ch
DESCRIPTION:Speakers: V. Bartsch (OXFORD UNIVERSITY)\nTo distribute comput
ing for CDF (Collider Detector at Fermilab) a system managing \nlocal comp
ute and storage resources is needed. For this purpose CDF will use the \nD
CAF (Decentralized CDF Analysis Farms) system which is already in use at Fermilab
. DCAF \nhas to work with the data handling system SAM (Sequential Access
to data via \nMetadata). However\, both DCAF and SAM are mature systems wh
ich have not yet been \nused in combination\, and on top of this DCAF has
only been installed at Fermilab and \nnot on local sites. Therefore tests
of the systems are necessary to verify the\ninterplay of the data handling
with the farms\, the behaviour of the off-site DCAFs \nand the user friend
liness of the whole system. The tests are focussed on the main \ntasks of
the DCAFs\, like Monte Carlo generation and stores\, as well as the readou
t \nof data files and connected data handling. To achieve user friendlines
s the SAM \nstation environment has to be common to all stations and adapt
ations to the \nenvironment have to be made.\n\nhttps://indico.cern.ch/eve
nt/0/contributions/1294507/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294507/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Remote Shifting at the CLEO Experiment
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294374@indico.cern.ch
DESCRIPTION:The CLEO III data acquisition system was designed from the beginning\,\nin the late 90's\, to allow remote operation and monitoring of the experiment.\nSince changes in the coordination and operation of the CLEO experiment\ntwo years ago enabled us to separate the tasks of the shift crew into an\noperational and a physics task\, the existing remote capabilities have\nbeen revisited. In 2002/03 CLEO started to deploy its remote monitoring\ntasks for performing remote shifts and evaluated various communication\ntools\, e.g. video conferencing and remote desktop sharing. Remote\,\ncollaborating institutions were allowed to perform the physicist shift\npart from their home institutions\, keeping only the professional operator\nof the CLEO experiment on site. After a one-year-long testing and\nevaluation phase\, remote shifting for physicists is now in production\nmode.\n\nThis talk reports on the experience gained when evaluating and deploying\nvarious options and technologies used for remote control\, operation\nand monitoring\, e.g. CORBA's IIOP\, X11 and VNC\, in the CLEO experiment.\nFurthermore\, some aspects of the usage of video conferencing tools by\ndistributed shift crews are discussed.\n\nhttps://indico.cern.ch/event/0/contributions/1294374/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294374/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The ROOT Linear Algebra
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294527@indico.cern.ch
DESCRIPTION:Speakers: R. Brun (CERN)\nThe ROOT linear algebra package has been invigorated. The\nhierarchical structure has been improved\, allowing different flavors of\nmatrices\, like dense and symmetric. A fairly complete set of matrix\ndecompositions has been added to support matrix inversions and solving\nlinear equations.\nThe package has been extensively compared to other algorithms for its\naccuracy and performance.\n\nIn this poster we will describe the structure of the package and several\nbenchmarks obtained with typical linear algebra applications.\n\nhttps://indico.cern.ch/event/0/contributions/1294527/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294527/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Rio
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294540@indico.cern.ch
DESCRIPTION:Speakers: G B. Barrand (CNRS / IN2P3 / LAL)\nRio (for ROOT IO) is a rewriting of the file IO system of ROOT.\n We shall present our strong motivations for doing this\n tedious work. We shall present the main choices made\n in the Rio implementation (in opposition to what we\n do not like in ROOT). For example\, we shall say why\n we believe that an IO package is not a drawing package (no TClass::Draw)\;\n why someone should use pure abstract interfaces in such a package\n (for example to open cleanly to various dictionaries)\; and how we can have\n a more reliable system than ROOT (for example\, by simply protecting\n against the various buffer overflows).\n\n We shall cover the current role of Rio within OpenScientist to store\n histograms and tuples. We shall present the effort done around Gaudi\,\n at the beginning of 2003\, to read LHCb events with Rio\n (then in the "before POOL" system).\n\n We shall present our views about the LCG proposed solution\n for storage\, that is to say POOL over ROOT\, and why the author\n believes that this coarse-grained assembly is simply poor software\n engineering. We shall explain why CERN\, due to its fermionic sociology\,\n is going to miss an essential target: an appealing open source\n object-oriented database for HEP. We shall then explain how to do it\n without this lab\, then passing from Rio to RioGrande...\n\nhttps://indico.cern.ch/event/0/contributions/1294540/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294540/
END:VEVENT
BEGIN:VEVENT
SUMMARY:LCIO persistency and data model for LC simulation and reconstructi
on
DTSTART;VALUE=DATE-TIME:20040929T120000Z
DTEND;VALUE=DATE-TIME:20040929T122000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294290@indico.cern.ch
DESCRIPTION:Speakers: F. Gaede (DESY IT)\nLCIO is a persistency framework
and data model for the next linear\n collider. Its original implementation
\, as presented at CHEP 2003\, \n was focused on simulation studies. Since
then the data model has \n been extended to also incorporate prototype te
st beam data\,\n reconstruction and analysis. The design of the interface
has also \n been simplified. LCIO defines a common abstract user interface
(API) \n in Java\, C++ and Fortran in order to fulfill the needs of the g
lobal \n linear collider community. It is designed to be lightweight and \
nflexible without introducing additional dependencies on other software \n
packages.\n User code is completely separated from the concrete persistenc
y\n implementation. SIO\, a simple binary format that supports data \ncomp
ression and pointer retrieval\, is the current choice. LCIO is\nimplemented
in such a way that it can also be used as the transient\ndata model in any
linear collider application\, e.g. a modular\nreconstruction program \nca
n use the LCIO event class (LCEvent) as the container for the\nmodules' in
put and output data. As LCIO offers a common API for three\nlanguages it i
s also possible to construct a multi-language\nreconstruction framework th
at would facilitate the integration of\nalready existing algorithms.\nA nu
mber of groups have already incorporated LCIO in their software\nframeworks
and others plan to do so. \nWe present the design and implementation of L
CIO\, focusing on new\n developments and uses.\n\nhttps://indico.cern.ch/e
vent/0/contributions/1294290/
LOCATION:Interlaken\, Switzerland Brunig 1 + 2
URL:https://indico.cern.ch/event/0/contributions/1294290/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Global Distributed Parallel Analysis using PROOF and AliEn
DTSTART;VALUE=DATE-TIME:20040929T132000Z
DTEND;VALUE=DATE-TIME:20040929T134000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294198@indico.cern.ch
DESCRIPTION:Speakers: F. Rademakers (CERN)\nThe ALICE experiment and the R
OOT team have developed a Grid-enabled version of PROOF that allows \neffi
cient parallel processing of large and distributed data samples. This syst
em has been integrated with the \nALICE-developed AliEn middleware. Parall
elism is implemented at the level of each local cluster for efficient \npr
ocessing and at the Grid level\, for optimal workload management of distri
buted resources. This system \nallows harnessing large Computing on Demand
capacity during an interactive session. Remote parallel \ncomputations ar
e spawned close to the data\, minimising network traffic. If several copie
s of the data are \navailable\, a workload management system decides autom
atically where to send the task. Results are \nautomatically merged and di
splayed at the user workstation. The talk will describe the different comp
onents \nof the system (PROOF\, the parallel ROOT engine\, and the AliEn m
iddleware)\, the present status and future \nplans for the development and
deployment and the consequences for the ALICE computing model.\n\nhttps:/
/indico.cern.ch/event/0/contributions/1294198/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294198/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ion transport simulation using Geant4 hadronic physics
DTSTART;VALUE=DATE-TIME:20040927T151000Z
DTEND;VALUE=DATE-TIME:20040927T153000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294417@indico.cern.ch
DESCRIPTION:Speakers: T. Koi (SLAC)\nThe transportation of ions in matter is a subject of much interest not only\nin high-energy ion-ion collider experiments such as RHIC and LHC but also in\nmany other fields of science\, engineering and medical applications. Geant4 is\na toolkit for the simulation of the passage of particles through matter\, and\nits OO design makes it easy to extend its capabilities to ion transport. To\nsimulate ion interactions\, we had to add two major functionalities to Geant4.\nOne is cross section calculators and the other is final state generators for\nion-ion interactions. For the cross section calculators\, several empirical\nformulas for the total reaction cross section of ion-ion interactions were\ninvestigated. For the final state generators\, the binary cascade and\nquark-gluon string models of Geant4 were improved so that reactions of ions\nwith matter can also be calculated. Having successfully developed both\nfunctionalities\, Geant4 can be applied to ion transportation problems. In\nthe presentation we will explain the cross section calculators and final\nstate generators in detail and show comparisons with experimental data.\n\nhttps://indico.cern.ch/event/0/contributions/1294417/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294417/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Multi-Terabyte EIDE Disk Arrays running Linux RAID5
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294554@indico.cern.ch
DESCRIPTION:Speakers: D. Sanders (UNIVERSITY OF MISSISSIPPI)\nHigh-energy
physics experiments are currently recording large amounts of data and in a
few years will be \nrecording prodigious quantities of data. New methods
must be developed to handle this data and make \nanalysis at universities
possible. Grid Computing is one method\; however\, the data must be cache
d at the \nvarious Grid nodes. We examine some storage techniques that ex
ploit recent developments in commodity \nhardware. Disk arrays using RAID
level 5 (RAID5) include both parity and striping. The striping improves
\naccess speed. The parity protects data in the event of a single disk fa
ilure\, but not in the case of multiple disk \nfailures.\n We report on
tests of dual-processor Linux Software RAID5 arrays and Hardware RAID5 arr
ays using the 12-\ndisk 3ware controller\, in conjunction with 300 GB disk
s\, for use in offline high-energy physics data analysis. \nThe price of
IDE disks is now less than $1/GB. These RAID5 disk arrays can be scaled t
o sizes affordable to \nsmall institutions and used when fast random acces
s at low cost is important.\n\nhttps://indico.cern.ch/event/0/contribution
s/1294554/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294554/
END:VEVENT
BEGIN:VEVENT
SUMMARY:CDB - Distributed Conditions Database of BaBar Experiment
DTSTART;VALUE=DATE-TIME:20040929T155000Z
DTEND;VALUE=DATE-TIME:20040929T161000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294379@indico.cern.ch
DESCRIPTION:Speakers: I. Gaponenko (LAWRENCE BERKELEY NATIONAL LABORATORY)\nA new\, completely redesigned Conditions Database (CDB) was deployed in BaBar in\nOctober 2002. It replaced the old database software used through the first three\nand a half years of data taking.\nThe new software addresses the performance and scalability limitations of the\noriginal database. However\, this major redesign brought in a new model of the\nmetadata\, a brand new technology- and implementation-independent API\, flexible\nconfigurability and extended functionality.\nOne of the greatest strengths of the new CDB is that it has been designed as a\ndistributed database from the ground up\, to facilitate the propagation and\nexchange of conditions (calibrations\, detector alignments\, etc.) in the realm\nof the international HEP collaboration.\nThe first implementation of CDB uses Objectivity/DB as its underlying persistent\ntechnology. There is an ongoing study to understand how to implement CDB on top of\nother persistent technologies.\nThe talk will cover the whole spectrum of topics ranging from the basic conceptual\nmodel of the new database\, through the way CDB is currently exploited in BaBar\,\nto the directions of further developments.\n\nhttps://indico.cern.ch/event/0/contributions/1294379/
LOCATION:Interlaken\, Switzerland Brunig 1 + 2
URL:https://indico.cern.ch/event/0/contributions/1294379/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Evolving Wide Area Network Infrastructure in the LHC era
DTSTART;VALUE=DATE-TIME:20040930T093000Z
DTEND;VALUE=DATE-TIME:20040930T100000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294419@indico.cern.ch
DESCRIPTION:Speakers: Peter Clarke ()\nThe global network is more than eve
r taking its role as the \ngreat "enabler" for many branches of science an
d research. Foremost \namongst such science drivers is of course the LHC/
LCG programme\, \nalthough there are several other sectors with growing de
mands on the \nnetwork.\nCommon to all of these is the realisation that a
straightforward \nover provisioned best efforts wide area IP service is p
robably not \nenough for the future.\n\nThis talk will summarise the needs
of several science sectors\, and \nthe advances being made to exploit the
current best efforts \ninfrastructure. It will then describe current proj
ects aimed at\nprovisioning "better than best efforts" services (such as ban
dwidth on \ndemand)\, the global optical R&D testbeds and the strategy of
the \nresearch network providers to move towards hybrid multi-service \nne
tworks for the next generation of the global wide area production \nnetwor
k.\n\nhttps://indico.cern.ch/event/0/contributions/1294419/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294419/
END:VEVENT
BEGIN:VEVENT
SUMMARY:JIM Deployment for the CDF Experiment
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294131@indico.cern.ch
DESCRIPTION:Speakers: M. Burgon-Lyon (UNIVERSITY OF GLASGOW)\nJIM (Job and
Information Management) is a grid extension to the mature data handling\n
system called SAM (Sequential Access via Metadata) used by the CDF\, DZero
and Minos\nExperiments based at Fermilab. JIM uses a thin client to allo
w job submissions from\nany computer with Internet access\, provided the u
ser has a valid certificate or\nKerberos ticket. On completion the job ou
tput can be downloaded using a web\ninterface. The JIM execution site sof
tware can be installed on shared resources\,\nsuch as ScotGRID\, as it may
be configured for any batch system and does not require\nexclusive contro
l of the hardware. Resources that do not belong entirely to CDF and\nthus
cannot run DCAF (Decentralised CDF Analysis Farm)\, may therefore be acce
ssed\nusing JIM. We will report on the initial deployment of JIM for CDF
and the steps\ntaken to integrate JIM with DCAF.\n\nhttps://indico.cern.ch
/event/0/contributions/1294131/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294131/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Physics validation of the simulation packages in a LHC-wide effort
DTSTART;VALUE=DATE-TIME:20040927T134000Z
DTEND;VALUE=DATE-TIME:20040927T140000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294246@indico.cern.ch
DESCRIPTION:Speakers: A. Ribon (CERN)\nIn the framework of the LCG Simulat
ion Physics Validation Project\, we present \ncomparison studies between t
he GEANT4 and FLUKA shower packages and LHC sub-detector \ntest-beam data.
Emphasis is given to the response of LHC calorimeters to electrons\, \nph
otons\, muons and pions. Results of "simple-benchmark" studies\, where the
above \nsimulation packages are compared to data from nuclear facilities\
, are also shown.\n\nhttps://indico.cern.ch/event/0/contributions/1294246/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294246/
END:VEVENT
BEGIN:VEVENT
SUMMARY:On the Management of Certification Authority in Large Scale GRID I
nfrastructure
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294390@indico.cern.ch
DESCRIPTION:Speakers: E. Berdnikov (INSTITUTE FOR HIGH ENERGY PHYSICS\, PROTVINO\, RUSSIA)\nThe scope of this work is the study of the scalability limits of a\nCertification Authority (CA) running for large scale GRID environments.\n\nThe operation of a Certification Authority is analyzed from the point of view\nof the rate of incoming requests\, the complexity of authentication procedures\,\nLCG security restrictions and other limiting factors. It is shown that the\nstandard CA operational model has some native "bottlenecks"\, which\ncan be resolved with proper management and technical tools.\n\nThe central point is the discussion of a "decentralized" scheme with a\nsingle CA and multiple authentication agents\, called Registration\nAuthorities (RA). The single CA retains the role of a technical center\,\nresponsible for the support of the GRID security infrastructure\, while the\ngeneral role of the RAs is the verification of requests from end-users.\n\nA practical implementation of this scheme (including the development\nand installation of end-user software) was done at CERN in 2002\n(http://service-grid-ca.web.cern.ch/service-grid-ca/help/RA.html).\nA second implementation of the same ideas was the GRID project of the\nRussian Ministry of Atomic Energy\, 2003 (http://grid.ihep.su/MAG/).\nThese two implementations are compared in terms of security\nand functionality.\n\nhttps://indico.cern.ch/event/0/contributions/1294390/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294390/
END:VEVENT
BEGIN:VEVENT
SUMMARY:GROSS: an end user tool for carrying out batch analysis of CMS dat
a on the LCG-2 Grid.
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294232@indico.cern.ch
DESCRIPTION:Speakers: H. Tallini (IMPERIAL COLLEGE LONDON)\nGROSS (GRidifi
ed Orca Submission System) has been developed to provide CMS\nend users wi
th a single interface for running batch analysis tasks over\nthe LCG-2 Gri
d. The main purpose of the tool is to carry out job\nsplitting\, preparati
on\, submission\, monitoring and archiving in a\ntransparent way which is
simple to use for the end user. Central to its\ndesign has been the requir
ement for allowing multi-user analyses\, and to\naccomplish this all persi
stent information is stored on a backend MySQL\ndatabase. This database is
additionally shared with BOSS\, to which GROSS\ninterfaces in order to pr
ovide job submission and real time monitoring\ncapability.\n\nIn this pape
r we present an overview of GROSS's architecture and\nfunctionality and re
port on first user tests of the system using CMS Data\nChallenge 2004 data
(DC04).\n\nhttps://indico.cern.ch/event/0/contributions/1294232/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294232/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Raw Ethernet based hybrid control system for the automatic control
of suspended masses in gravitational waves interferometric detectors
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294279@indico.cern.ch
DESCRIPTION:Speakers: A. Eleuteri (DIPARTIMENTO DI SCIENZE FISICHE - UNIVE
RSITà DI NAPOLI FEDERICO II)\nIn this paper we examine the performance of
the raw Ethernet \nprotocol in deterministic\, low-cost\, real-time comm
unication. Very \nfew applications have been reported until now\, and they
focus on the\nuse of the TCP and UDP protocols\, which however add a noticeable\noverhead to the communication and reduce the useful bandwidth. We
\nshow how low-level Ethernet access can be used for peer-to-peer\, \nshor
t distance communication\, and how it allows the writing of \napplications
requiring large bandwidth. We show some examples \nrunning on the Lynx re
al-time OS and on Linux\, both in mixed and \nhomogeneous environments. As
an example of application of \nthis technique\, we describe the architect
ure of a hybrid Ethernet-based\nreal-time control system prototype we implemented in Napoli\,\ndiscussing its characteristics and performance. F
inally we discuss \nits application to the real-time control of a suspende
d mass of the \nmode cleaner of the 3m prototype optical interferometer fo
r \ngravitational wave detection operational in Napoli.\n\nhttps://indico.
cern.ch/event/0/contributions/1294279/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294279/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The ALICE Data Challenge 2004 and the ALICE distributed analysis p
rototype
DTSTART;VALUE=DATE-TIME:20040929T134000Z
DTEND;VALUE=DATE-TIME:20040929T140000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294420@indico.cern.ch
DESCRIPTION:Speakers: A. Peters (ce)\nDuring the first half of 2004 the AL
ICE experiment has performed a large distributed \ncomputing exercise with
two major objectives: to test the ALICE computing model\,\nincluding distributed analysis\, and to provide a data sample for a refinement of the\nALICE Jet physics Monte-Carlo studies. Simulation\, reconstruction and analysi
s of \nseveral hundred thousand events were performed\, using the heteroge
neous resources of \ntens of computer centres worldwide. These resources b
elong to different GRID systems \nand were steered by the AliEn (ALICE Env
ironment) framework\, acting as a meta-GRID. \nThis has been a very thorou
gh test of the middleware of AliEn and LCG (LCG-2 and \ngrid.it resources)
and their compatibility. During the Data Challenge more than \n1\,500 job
s ran in parallel for several weeks. More than 50 TB of data have been \np
roduced and analysed worldwide in one of the major exercises of this kind
run to \ndate. ALICE has developed an analysis system based on AliEn and R
OOT. This system \nstarts with a metadata selection in the AliEn file cata
logue\, followed by a \ncomputation phase. Analysis jobs are sent where th
e data is\, thus minimising data \nmovement. The control is performed by a
n intelligent workload management system. The \nanalysis can be done eithe
r via batch or interactive jobs. The latter are "spawned" \non remote syst
ems and report the results back to the user workstation. The talk will \nd
escribe the ALICE experience with this large-scale use of the Grid\, the m
ajor \nlessons learned and the consequences for the ALICE computing model.
\n\nhttps://indico.cern.ch/event/0/contributions/1294420/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294420/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Central Reconstruction System on the RHIC Linux Farm in Brookhaven
Laboratory
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294183@indico.cern.ch
DESCRIPTION:Speakers: T. Wlodek (Brookhaven National Lab)\nA description o
f a Condor-based\, Grid-aware batch \nsoftware system configured to functi
on asynchronously\nwith a mass storage system is presented. The software \
nis currently used in a large Linux Farm (2700+ \nprocessors) at the RHIC
and ATLAS Tier 1 Computing \nFacility at Brookhaven Lab. Design\, scalabil
ity\, \nreliability\, features and support issues with a \ncomplex Condor-
based batch system are addressed within\nthe context of a Grid-like\, dist
ributed computing \nenvironment.\n\nhttps://indico.cern.ch/event/0/contrib
utions/1294183/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294183/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Deployment of SAM for the CDF Experiment
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294428@indico.cern.ch
DESCRIPTION:Speakers: S. Stonjek (Fermi National Accelerator Laboratory /
University of Oxford)\nCDF is an experiment at the Tevatron at Fermilab. O
ne dominating\nfactor of the experiment's computing model is the high volu
me of raw\,\nreconstructed and generated data. The distributed data handli
ng\nservices within SAM move these data to physics analysis\napplications.
The SAM system was already in use at the D-Zero\nexperiment. Due to diffe
rences in the computing models of the two\nexperiments\, some aspects of the S
AM system had to be adapted. We will\npresent experiences from the adaptat
ion and the deployment phase. This\nincludes the behavior of the SAM syste
m on batch systems of very\ndifferent sizes and types\, as well as the intera
ction between the\ndata handling and the storage systems\, ranging from dis
k pools to tape\nsystems. In particular we will cover the problems faced o
n large scale\ncompute farms. To accommodate the needs of Grid computing\,
 CDF deployed\ninstallations consisting of SAM for data handling and CAF fo
r high\nthroughput batch processing. The CDF experiment already had experi
ence\nwith the CAF system. We will report on the deployment of the combin
ed\nsystem.\n\nhttps://indico.cern.ch/event/0/contributions/1294428/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294428/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Experiences Building a Distributed Monitoring System
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294571@indico.cern.ch
DESCRIPTION:Speakers: J. Fromm (Fermilab)\nThe NGOP Monitoring Project at
FNAL has developed a package which has demonstrated \nthe capability to ef
ficiently monitor tens of thousands of entities on thousands of \nhosts\,
and has been in operation for over 4 years. The project has met the majori
ty \nof its initial requirements\, and also the majority of the requirement
s discovered \nalong the way. This paper will describe what worked\, and w
hat did not\, in the first \n4 years of the NGOP Project at Fermilab\; and
 we hope it will provide valuable lessons \nfor others considering undertakin
g even larger (GRID-scale) monitoring projects.\n\nhttps://indico.cern.ch/
event/0/contributions/1294571/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294571/
END:VEVENT
BEGIN:VEVENT
SUMMARY:AliEn Web Portal
DTSTART;VALUE=DATE-TIME:20040930T122000Z
DTEND;VALUE=DATE-TIME:20040930T124000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294165@indico.cern.ch
DESCRIPTION:Speakers: P E. Tissot-Daguette (CERN)\nThe AliEn system\, an i
mplementation of the Grid paradigm developed by \nthe ALICE Offline Projec
t\, is currently being used to produce and \nanalyse Monte Carlo data at o
ver 30 sites on four continents. The \nAliEn Web Portal is built around Op
en Source components with a \nbackend based on Grid Services and compliant
with the OGSA model.\nAn easy and intuitive presentation layer gives the
 user the \nopportunity to access information from multiple sources in a
transparent and \nconvenient way. Users can browse job provenance and ac
cess \nmonitoring information from the MonaLisa repository.\nThe presentation
layer is separated from the content layer which is \nimplemented via Grid
and Web Services serving one or more users or \nVirtual Organizations.\nSe
curity and authentication of the portal are based on the Globus \nGrid Sec
urity infrastructure\, OGSI::Lite and MyProxy online \ncredentials reposit
ory.\nIn this presentation the architecture and functionality of the AliEn \nP
ortal implementation will be presented.\n\nhttps://indico.cern.ch/event/0/
contributions/1294165/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294165/
END:VEVENT
BEGIN:VEVENT
SUMMARY:"RecPack"\, a general reconstruction tool-kit
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294481@indico.cern.ch
DESCRIPTION:Speakers: A. CERVERA VILLANUEVA (University of Geneva)\nWe hav
e developed a C++ software package\, called "RecPack"\, \nwhich allows the
reconstruction of dynamic trajectories in any experimental setup. \nThe b
asic utility of the package is the fitting of trajectories in the presence
\nof random and systematic perturbations to the system \n(multiple scatte
ring\, energy loss\, inhomogeneous magnetic fields\, etc) \nvia a Kalman
Filter fit. It also includes an analytical navigator \nwhich allows: extra
polation of the trajectory parameters \n(and their covariance matrix) to a
ny surface\, path length computations\, \nmatching functions (trajectory-
trajectory\, trajectory-measurement\, etc) \nand much more. The RecPack t
ool-kit also includes the algorithms \nfor vertex fitting via Kalman Filte
r\, and the necessary tools for easily \ncoding pattern recognition algori
thms. \nIn summary\, "RecPack" provides all the necessary tools and algori
thms \nthat are common to any reconstruction program.\nIn addition\, a toy
 simulator is provided. This is very useful for debugging new \nreconstructio
n algorithms and also to perform simple physics analysis.\nThe modularity
of the package allows extensions in any direction: new \npropagation model
s\, measurement types\, volume and surface types\, \nfitting algorithms\,
 etc.\n\nhttps://indico.cern.ch/event/0/contributions/1294481/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294481/
END:VEVENT
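As background for the Kalman-filter fitting that RecPack provides, a minimal sketch of a single measurement update may help (illustrative only, not RecPack code; the measurement model and numbers are invented):

# Minimal sketch of a Kalman-filter measurement update for track fitting;
# not RecPack code, all names and values are illustrative.
import numpy as np

def kalman_update(x, P, z, H, R):
    """Combine a predicted state (x, P) with a measurement z, model z = H x + noise(R)."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - H @ x)            # updated state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P   # updated covariance
    return x_new, P_new

# Example: a straight-line track (intercept, slope) measured at plane y = 10.
x = np.array([0.0, 0.0])          # predicted (intercept, slope)
P = np.diag([1.0, 0.1])           # covariance of the prediction
H = np.array([[1.0, 10.0]])       # measured position = intercept + 10 * slope
R = np.array([[0.01]])            # measurement variance
z = np.array([0.45])              # measured position
x, P = kalman_update(x, P, z, H, R)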
BEGIN:VEVENT
SUMMARY:Distributed Testing Infrastructure and Processes for the EGEE Grid
Middleware
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294149@indico.cern.ch
DESCRIPTION:Speakers: L. Guy (CERN)\nExtensive and thorough testing of the
EGEE middleware is essential to ensure that\na production quality Grid ca
n be deployed on a large scale as well as\nacross the broad range of heter
ogeneous resources that make up the hundreds of\nGrid computing centres bo
th in Europe and worldwide. \n \nTesting of the EGEE middleware encompass
es the tasks of both verification and\nvalidation. In addition\, we test the
integrated middleware for\nstability\, platform independence\, stress resi
lience\, scalability and\nperformance. \n\nThe EGEE testing infrastructure
is distributed across three\nmajor EGEE grid centres in three countries:
CERN\, NIKHEF and RAL.\nAs much as is possible the testing procedures are
automated and\nintegrated with the EGEE build system. This allows for cont
inuous\ntesting together with the incremental daily code builds\, fast and
\nearly feedback to developers on bugs\, and for the easy inclusion of\nreg
ression tests. \n\nThis paper will report on the initial results of the te
sting\nprocedures\, frameworks and automation techniques adopted by the EG
EE project\, \nthe advantages and disadvantages of test automation and the
\nissues involved in testing a complex distributed middleware system in a\
ndistributed environment.\n\nhttps://indico.cern.ch/event/0/contributions/
1294149/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294149/
END:VEVENT
BEGIN:VEVENT
SUMMARY:D0 data processing within EDG/LCG
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294166@indico.cern.ch
DESCRIPTION:Speakers: T. Harenberg (UNIVERSITY OF WUPPERTAL)\nThe D0 exper
iment at the Tevatron is collecting some 100 Terabytes of data\neach year
and has a very high need of computing resources for the various\nparts of
 the physics program. D0 meets these demands by establishing a\nworld-wide
 computing infrastructure\, increasingly based on GRID technologies.\n\nDistributed resources are used fo
r D0 MC production and data\nreprocessing of 1 billion events\, requiring
250 TB to be transported over\nWANs. While in 2003 most of this computing
at remote sites was\ndistributed manually\, some data reprocessing was per
formed with the EDG.\nIn 2004 GRID tools are increasingly and successfully
employed.\n\nWe will report on performing MC production and data reproces
sing using\nEDG and LCG. We will explain how the D0 computing\nenvironment
was linked to these GRID platforms\, and will discuss some lessons \nlear
ned (for both Grid computing and preparing applications for\ndistributed o
peration) from the D0 reprocessing on EDG\, subjecting a\ngeneric Grid inf
rastructure to real data for the first time.\n\nAn outlook on plans for ap
plying LCG within D0 is given.\n\nhttps://indico.cern.ch/event/0/contribut
ions/1294166/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294166/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Storage Resource Managers at Brookhaven
DTSTART;VALUE=DATE-TIME:20040927T151000Z
DTEND;VALUE=DATE-TIME:20040927T153000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294116@indico.cern.ch
DESCRIPTION:Speakers: Ofer RIND ()\nProviding Grid applications with effec
tive access to large volumes of data residing \non a multitude of storage
systems with very different characteristics prompted the \nintroduction of
storage resource managers (SRM). Their purpose is to provide \nconsistent
and efficient wide-area access to storage resources unconstrained by \nth
eir particular implementation (tape\, large disk arrays\, dispersed small
disks). To \nassess their viability in the context of the US Atlas Tier 1
facility at Brookhaven\, \ntwo implementations of SRM were tested: dCache
(FNAL/DESY joint project) and HRM/DRM \n(NERSC Berkeley). Both systems inc
luded a connection to the local HPSS mass data \nstore providing Grid acce
ss to the main tape repository. In addition\, dCache offered \nstorage agg
regation of dispersed small disks (local drives on computing farm nodes).
\nAn overview of our experience with both systems will be presented\, incl
uding details \nabout configurations\, performance\, inter-site transfers\
, interoperability and \nlimitations.\n\nhttps://indico.cern.ch/event/0/co
ntributions/1294116/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294116/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Grid Deployment Experiences: The path to a production quality LDAP
based grid information system
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294358@indico.cern.ch
DESCRIPTION:Speakers: L. Field (CERN)\nThis paper reports on the deploymen
t experience of the de facto grid \ninformation system\, Globus MDS\, in a
large scale production grid. The \nresults of this experience led to the d
evelopment of an information \ncaching system based on a standard openLDAP
database. The paper then \ndescribes how this caching system was develope
d further into\na production quality information system including a generi
c framework \nfor information providers. This includes the deployment and
operation \nexperience and the results from performance tests on the infor
mation \nsystem to assess its scalability limits.\n\nhttps://indico.
cern.ch/event/0/contributions/1294358/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294358/
END:VEVENT
BEGIN:VEVENT
SUMMARY:CMD-3 Project Offline Software Development
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294491@indico.cern.ch
DESCRIPTION:Speakers: A. Zaytsev (BUDKER INSTITUTE OF NUCLEAR PHYSICS)\nCM
D-3 is the general-purpose cryogenic magnetic detector for the VEPP-2000\nelec
tron-positron collider\, which is being commissioned at Budker Institute o
f\nNuclear Physics (BINP\, Novosibirsk\, Russia). The main aspects of the \n
physics program of the experiment are the study of known and the search \n
for new vector mesons\, the study of the ppbar and nnbar production \n
cross sections in the vicinity of the threshold\, and the search for \n
exotic hadrons in the region of center-of-mass energy below 2 GeV. An \n
essential upgrade of the CMD-2 detector (designed for the VEPP-2M \n
collider at BINP) farm and distributed data storage management software \n
is required to satisfy the new detector's needs and is scheduled for \n
the near future.\nIn this talk I will present the general d
esign overview and status of implementation\nof CMD-3 offline software for
reconstruction\, visualization\, data farm management and\nuser interface
s. Software design standards for this project are object oriented\nprogram
ming techniques\, C++ as the main language\, Geant4 as the only simulation to
ol\,\nGeant4 and GDML based detector geometry description\, WIRED and HepR
ep based\nvisualization\, CLHEP library based primary generators and Linux
 as the main platform.\nThe dedicated software development framework (Cmd3Fw
k) was implemented in order to serve as\nthe basic software integration solution
 and persistency manager. We also aim\nto achieve a high level of i
ntegration with the ROOT framework and the Geant4 toolkit.\n\nhttps://indico.cern.
ch/event/0/contributions/1294491/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294491/
END:VEVENT
BEGIN:VEVENT
SUMMARY:New compact hierarchical mass storage system at Belle realizing a
 peta-scale system with inexpensive IDE-RAID disks and an S-AIT tape librar
y
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294463@indico.cern.ch
DESCRIPTION:Speakers: N. Katayama (KEK)\nThe Belle experiment has accumula
ted an integrated luminosity of more \nthan 240fb-1 so far\, and a daily l
ogged luminosity now exceeds 800pb-\n1. These numbers correspond to more t
han 1PB of raw and processed \ndata stored on tape and an accumulation of
the raw data at the rate \nof 1TB/day. To meet these storage demands\, a n
ew cost effective\, \ncompact hierarchical mass storage system has been co
nstructed. The \nsystem consists of commodity RAID systems using IDE disks
and Linux \nPC servers as the front-end and a tape library system using t
he new \nhigh density SONY S-AIT tape as the back-end. The SONY Peta Serv \
nsoftware manages migration and restoration of the files between tapes \na
nd disks. The capacity of the tape library is\, at the moment\, 500 TB \ni
n three 19 inch racks and the RAID system\, 64 TB in two 19 inch \nracks.
 An extension of the system to a 1.2 PB tape library in eight \nracks with 15
0 TB RAID in four racks is planned. In this talk\, \nexperiences with the
new system will be discussed and the performance \nof the system when used
for data processing and physics analysis of \nthe Belle experiment will b
e demonstrated.\n\nhttps://indico.cern.ch/event/0/contributions/1294463/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294463/
END:VEVENT
BEGIN:VEVENT
SUMMARY:CHIPS based hadronization of quark-gluon strings
DTSTART;VALUE=DATE-TIME:20040927T155000Z
DTEND;VALUE=DATE-TIME:20040927T161000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294476@indico.cern.ch
DESCRIPTION:Speakers: M. Kosov (CERN)\nQuark-gluon strings are usually fra
gmented on the light cone into hadrons\n(PYTHIA\, JETSET) or into small hadron
ic clusters which decay into hadrons\n(HERWIG). In both cases the transverse
momentum distribution is\nparameterized as an unknown function. In CHIPS
the colliding hadrons\nstretch Pomeron ladders to each other and\, when th
e Pomeron ladders meet\nin the rapidity space\, they create Quasmons (hadr
onic clusters bigger\nthan Amati-Veneziano clusters of HERWIG). The Quasmo
n size and the\ncorresponding transverse momentum distributions are tuned
by the\nDrell-Yan mu+mu- pairs. The final Quasmon fragmentation in CHIPS i
s\ntuned by the e+e- and proton-antiproton annihilation\, which is already
\npublished.\n\nhttps://indico.cern.ch/event/0/contributions/1294476/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294476/
END:VEVENT
BEGIN:VEVENT
SUMMARY:OpenScientist. Status of the project.
DTSTART;VALUE=DATE-TIME:20040927T143000Z
DTEND;VALUE=DATE-TIME:20040927T145000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294376@indico.cern.ch
DESCRIPTION:Speakers: G B. Barrand (CNRS / IN2P3 / LAL)\nWe want to presen
t the status of this project.\n After quickly recalling the basic choice
s around GUI\, visualization\n and scripting\, we would like to describe wha
t had been done in order to \n have an AIDA-3.2.1 compliant system\, to vi
sualize Geant4 data (G4Lab module)\, \n to visualize ROOT data (Mangrove m
odule)\, to have a hippodraw module\n and what had been done in order to
 run on MacOSX by using the native \nNextStep (Cocoa) environment.\n\nhttps
://indico.cern.ch/event/0/contributions/1294376/
LOCATION:Interlaken\, Switzerland Brunig
URL:https://indico.cern.ch/event/0/contributions/1294376/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A Gaussian-sum filter for vertex reconstruction
DTSTART;VALUE=DATE-TIME:20040930T151000Z
DTEND;VALUE=DATE-TIME:20040930T153000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294176@indico.cern.ch
DESCRIPTION:Speakers: T. Speer (UNIVERSITY OF ZURICH\, SWITZERLAND)\nA ver
tex fit algorithm was developed based on the Gaussian-sum filter\n(GSF) an
d implemented in the framework of the CMS reconstruction \nprogram. While
linear least-squares estimators are optimal in case \nall observation erro
rs are Gaussian distributed\, the GSF offers a \nbetter treatment of the n
on-Gaussian distribution of track parameter \nerrors when these are modele
d by Gaussian mixtures. \nIn addition\, when using electron tracks reconst
ructed with an \nelectron-reconstruction Gaussian-sum filter\, the full mi
xture can be \nused rather than the approximation by a single Gaussian.\nP
roperties\, results and performance of this filter with simulated \ndata w
ill be shown\, and compared to the Kalman filter and to robust \nfilters.\
n\nhttps://indico.cern.ch/event/0/contributions/1294176/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294176/
END:VEVENT
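As background for the Gaussian-sum filter described above, the usual way a Gaussian mixture of estimates is reduced to a single estimate is moment matching; a minimal sketch follows (illustrative only, not CMS code; the weights, means and variances are invented):

# Minimal sketch: collapse a 1D Gaussian mixture to one mean and variance by
# moment matching; not CMS GSF code, the numbers are invented.
import numpy as np

def collapse_mixture(weights, means, variances):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                             # normalise component weights
    m = np.asarray(means, dtype=float)
    v = np.asarray(variances, dtype=float)
    mean = np.sum(w * m)                        # mixture mean
    var = np.sum(w * (v + (m - mean) ** 2))     # mixture variance
    return mean, var

# e.g. a track parameter modelled by a two-component mixture
print(collapse_mixture([0.8, 0.2], [1.00, 1.30], [0.02, 0.10]))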
BEGIN:VEVENT
SUMMARY:The Configurations Database Challenge in the ATLAS DAQ System
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294234@indico.cern.ch
DESCRIPTION:Speakers: I. Soloviev (CERN/PNPI)\nThe ATLAS data acquisition
system uses the database to describe configurations\nfor different types o
f data taking runs and different sub-detectors. Such\nconfigurations are c
omposed of complex data objects with many inter-relations.\nDuring the DAQ
system initialisation phase the configurations database is\nsimultaneousl
y accessed by a large number of processes. It is also required that\nsuch
processes be notified about database changes that happen during or between
\ndata-taking runs.\n\n The paper describes the architecture of the confi
gurations database. It presents\nthe set of graphical tools which are avai
lable for the database schema design and\nthe data editing. The automatic
generation of data access libraries for C++ and Java\nlanguages is also de
scribed. They provide the programming interfaces to access the\ndatabase e
ither via a common file system or via remote database servers\, and the\nn
otification mechanism on data changes.\n\n The paper presents results of
recent performance and scalability tests\, which\nallow a conclusion to be
drawn about the applicability of the current configurations\ndatabase imp
lementation in the future DAQ system.\n\nhttps://indico.cern.ch/event/0/co
ntributions/1294234/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294234/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Distributed Filesystem Evaluation and Deployment at the US-CMS Tie
r-1 Center
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294320@indico.cern.ch
DESCRIPTION:Speakers: L. Lisa Giacchetti (FERMILAB)\nThe scalable serving
of shared filesystems across large clusters of computing resources continu
es to be a \ndifficult problem in high energy physics computing. The US C
MS group at Fermilab has performed a detailed \nevaluation of hardware and
 software solutions to allow filesystem access to data from computing sy
stems.\n\nThe goal of the evaluation was to arrive at a solution that was
able to meet the growing needs of the US-CMS \nTier-1 facility. The syste
m needed to be scalable and be able to grow with the increasing size of th
e facility\, \nload balanced and with high performance for data access\, r
eliable and redundant with protection against \nfailures\, and manageable
and supportable given a reasonable level of effort.\n\nOver the course of
 a one-year evaluation the group developed a suite of tools to analyze per
formance and \nreliability under load conditions\, and then applied these
 tools to evaluation systems at Fermilab. In this \npresentation we will
describe the suite of tools developed\, the results of the evaluation proc
ess\, the system \nand architecture that were eventually chosen\, and the
experience so far supporting a user community.\n\nhttps://indico.cern.ch/e
vent/0/contributions/1294320/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294320/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Production data export and archiving system for new data format of
the BaBar experiment.
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294209@indico.cern.ch
DESCRIPTION:For The BaBar Computing Group\n\nBaBar has recently moved away
 from using Objectivity/DB for its event\nstore towards a ROOT-based even
t store. Data in the new format is\nproduced at about 20 institutions worl
dwide as well as at SLAC. Among \nnew challenges are the organization of d
ata export from remote \ninstitutions\, archival at SLAC and making the da
ta visible to users \nfor analysis and import to their own institutions.\n
\nThe new system is designed to be scalable\, easily configurable on the\n
client and server side and adaptive to server load. It is integrated \nto
work with SLAC's mass storage system (HPSS) and with the xrootd \nservice.
 Design\, implementation and experience with the new system\, as \nwell as fut
ure development\, are discussed in this article.\n\nhttps://indico.cern.ch/ev
ent/0/contributions/1294209/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294209/
END:VEVENT
BEGIN:VEVENT
SUMMARY:SRB system at Belle/KEK
DTSTART;VALUE=DATE-TIME:20040929T153000Z
DTEND;VALUE=DATE-TIME:20040929T155000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294458@indico.cern.ch
DESCRIPTION:Speakers: Y. Iida (HIGH ENERGY ACCELERATOR RESEARCH ORGANIZATI
ON)\nThe Belle experiment has accumulated an integrated luminosity of more
than 240fb-1 \nso far\, and a daily logged luminosity now exceeds 800pb-1
. These numbers correspond \nto more than 1PB of raw and processed data st
ored on tape and an accumulation of \nthe raw data at the rate of 1TB/day.
The processed\, compactified data\, together \nwith Monte Carlo simulatio
n data for the final physics analyses amounts to more \nthan 100TB. The Be
lle collaboration consists of more than 55 institutes in 14 \ncountries an
d at most of the collaborating institutions\, active physics data \nanalys
is programs are being undertaken. To meet these storage and data distribut
ion \ndemands\, we have tried to adopt a resource broker\, SRB. We have i
nstalled the SRB \nsystem at KEK\, Australia\, and other collaborating ins
titutions and have started to \nshare data. In this talk\, experiences wit
h the SRB system will be discussed and the \nperformance of the system whe
n used for data processing and physics analysis of the \nBelle experiment
will be demonstrated.\n\nhttps://indico.cern.ch/event/0/contributions/1294
458/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294458/
END:VEVENT
BEGIN:VEVENT
SUMMARY:LCG Conditions Database Project Overview
DTSTART;VALUE=DATE-TIME:20040929T161000Z
DTEND;VALUE=DATE-TIME:20040929T163000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294168@indico.cern.ch
DESCRIPTION:Speakers: A. Valassi (CERN)\nThe Conditions Database project h
as been launched\nto implement a common persistency solution for experimen
t conditions data\nin the context of the LHC Computing Grid (LCG) Persiste
ncy Framework.\nConditions data\, such as calibration\, alignment or slow
control data\, \nare non-event experiment data characterized by the fact \
nthat they vary in time and may have different versions.\nThe LCG project
draws on preexisting projects which have led \nto the definition of a gene
ric C++ API for condition data access\nand its implementation using differ
ent storage technologies\,\nsuch as Objectivity\, MySQL or Oracle. \nThe p
roject is assigned the task of delivering a production release \nof the softw
are including implementation libraries for several \ntechnologies and high
level tools for data management.\nThe presentation will review the curren
t status of the LCG common project \nat the time of the conference and the
plans for its evolution.\n\nhttps://indico.cern.ch/event/0/contributions/
1294168/
LOCATION:Interlaken\, Switzerland Brunig 1 + 2
URL:https://indico.cern.ch/event/0/contributions/1294168/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Quality Assurance and Testing in LCG
DTSTART;VALUE=DATE-TIME:20040930T155000Z
DTEND;VALUE=DATE-TIME:20040930T161000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294392@indico.cern.ch
DESCRIPTION:Speakers: M. GALLAS (CERN)\nSoftware Quality Assurance is an integral part of the software \n
development process of the LCG Project and includes several activities \n
such as automatic testing\, test coverage reports\, static software \n
metrics reports\, a bug tracker\, usage statistics and compliance with \n
build\, code and release policies.\n\n
Since all levels of software testing should be run as part of an \n
automatic process\, the SPI project delivers a general test-framework \n
solution based on open source software\, together with test document \n
templates and software testing policies. The test-framework solution is \n
built on QMtest\, Oval and the X-Unit family (CppUnit\, PyUnit\, JUnit). \n
Language-specific testing features are covered at the unit-testing \n
level with the X-Unit family\, while validation testing can be done \n
through Oval. QMtest offers a way to integrate all the tests and to \n
write custom Python tests\, and provides a web interface for running \n
tests and browsing the results.\n\n
Test coverage reports make it possible to understand to what extent \n
software products are tested\; they are based on the approach used by \n
the Linux Testing Project. Code size and development effort are \n
estimated using the sloccount utility\, based on standard development \n
models. Statistics are automatically extracted from the Savannah bug \n
tracker\, which makes it possible to analyse the evolution of the \n
quality\, the amount of user feedback\, etc. Finally\, compliance with \n
the standard LCG policies is verified\; this includes the build and CVS \n
repository structure and the standard release procedure.\n\nhttps://indico.cern.ch/e
vent/0/contributions/1294392/
LOCATION:Interlaken\, Switzerland Brunig 1+2
URL:https://indico.cern.ch/event/0/contributions/1294392/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A Multidimensional Approach to the Analysis of Grid Monitoring Dat
a
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294126@indico.cern.ch
DESCRIPTION:Speakers: G. Rubini (INFN-CNAF)\nAnalyzing Grid monitoring dat
a requires the capability of dealing with\nmultidimensional concepts intri
nsic to Grid systems. The meaningful \ndimensions identified in recent wor
ks are the physical dimension \nreferring to geographical location of reso
urces\, the Virtual \nOrganization (VO) dimension\, the time dimension and
the monitoring \nmetrics dimension. In this paper\, we discuss the applic
ation of\nOn-Line Analytical Processing (OLAP)\, an approach to the fast
\nanalysis of shared multidimensional information\, to the mentioned \npro
blem. OLAP relies on structures called `OLAP cubes'\, that are \ncreated b
y a reorganization of data contained inside a\nrelational database\, thus
transforming operational data into \ndimensional data.\nOur OLAP model is
a four-dimension cuboid based on time\, geographic\, \nVirtual Organizatio
n (VO)\, and monitoring metric. Time and geographic \ndimensions have tota
l order relation and form two concept \nhierarchies\, respectively hours\n
\nhttps://indico.cern.ch/event/0/contributions/1294126/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294126/
END:VEVENT
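The four-dimensional cuboid described above can be pictured with a small sketch (illustrative only; the record fields, sites, VOs and metric names are assumptions): flat monitoring records are grouped by (hour, site, VO, metric) and aggregated, which is the reorganisation an OLAP cube performs on the relational data.

# Minimal sketch of building a (hour, site, VO, metric) cuboid from flat
# monitoring records; field names and values are invented.
from collections import defaultdict

records = [
    {"time": "2004-09-29T10:17", "site": "CNAF", "vo": "cms",   "metric": "cpu_load",  "value": 0.71},
    {"time": "2004-09-29T10:42", "site": "CNAF", "vo": "cms",   "metric": "cpu_load",  "value": 0.65},
    {"time": "2004-09-29T10:05", "site": "RAL",  "vo": "atlas", "metric": "free_disk", "value": 120.0},
]

cube = defaultdict(list)
for r in records:
    hour = r["time"][:13]                  # roll the time dimension up to hours
    cube[(hour, r["site"], r["vo"], r["metric"])].append(r["value"])

# Aggregate each cell (here by averaging); slice, dice and roll-up then
# correspond to fixing or merging some of the four dimensions.
aggregated = {cell: sum(v) / len(v) for cell, v in cube.items()}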
BEGIN:VEVENT
SUMMARY:EventStore: Managing Event Versioning and Data Partitioning using
Legacy Data Formats
DTSTART;VALUE=DATE-TIME:20040929T143000Z
DTEND;VALUE=DATE-TIME:20040929T145000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294134@indico.cern.ch
DESCRIPTION:Speakers: C. Jones (CORNELL UNIVERSITY)\nHEP analysis is an it
erative process. It is critical that in each iteration the physicist's an
alysis job accesses the \nsame information as previous iterations (unless
explicitly told to do otherwise). This becomes problematic \nafter the da
ta has been reconstructed several times. In addition\, when starting a ne
w analysis\, physicists \nnormally want to use the most recent version of
reconstruction. Such version control is useful for data \nmanaged by a si
ngle physicist using a laptop or small groups of physicists at a remote in
stitution in addition \nto the collaboration wide managed data.\n\nIn this
presentation we will discuss our implementation of the EventStore which u
ses a data location\, indexing \nand versioning service to manage legacy d
ata formats (e.g. an experiment's existing proprietary file format or \nRo
ot files). A plug-in architecture is used to support adding additional fi
le formats. The core of the system is \nused to implement three different
sizes of services: personal\, group and collaboration.\n\nhttps://indico.c
ern.ch/event/0/contributions/1294134/
LOCATION:Interlaken\, Switzerland Brunig 1 + 2
URL:https://indico.cern.ch/event/0/contributions/1294134/
END:VEVENT
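The plug-in architecture mentioned above can be sketched as follows (illustrative only, not the EventStore implementation; the class names, format tags and index layout are assumptions): a reader is registered per legacy file format, and a versioned index maps an event key to a (format, file, offset) location served by the matching plug-in.

# Minimal sketch of a file-format plug-in registry for an event store;
# not the implementation described above, all names are invented.
READERS = {}

def register_reader(fmt):
    """Class decorator registering a reader plug-in for one file format."""
    def wrap(cls):
        READERS[fmt] = cls
        return cls
    return wrap

@register_reader("root")
class RootReader:
    def read(self, path, offset):
        return f"event from ROOT file {path} at offset {offset}"

@register_reader("pds")
class LegacyReader:
    def read(self, path, offset):
        return f"event from legacy file {path} at offset {offset}"

# A versioned index entry locates an event in a concrete file; the store only
# looks up the plug-in matching the recorded format.
index = {("run42", "event7", "v2"): ("pds", "/data/run42.pds", 123456)}
fmt, path, offset = index[("run42", "event7", "v2")]
event = READERS[fmt]().read(path, offset)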
BEGIN:VEVENT
SUMMARY:Production Management Software for the CMS Data Challenge
DTSTART;VALUE=DATE-TIME:20040927T122000Z
DTEND;VALUE=DATE-TIME:20040927T124000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294226@indico.cern.ch
DESCRIPTION:Speakers: J. Andreeva (UC Riverside)\nOne of the goals of the CMS
 Data Challenge in March-April 2004 (DC04) was to run\nreconstruction for a s
ustained period at a 25 Hz input rate\, with distribution of the\nproduced dat
a to CMS T1 centers for further analysis.\n\nThe reconstruction was run at
the T0 using CMS production software\, of which the main\ncomponents are
RefDB (CMS Monte Carlo 'Reference Database' with Web interface) and\nMcRun
job (a framework for creation and submission of large numbers of Monte Car
lo\njobs).\n\nThis paper presents an overview of the CMS production cycle\,
describing production\ntools\, covering data processing\, bookkeeping and
publishing issues\, in the context of\ntheir use during the T0 reconstruct
ion part of DC04.\n\nhttps://indico.cern.ch/event/0/contributions/1294226/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294226/
END:VEVENT
BEGIN:VEVENT
SUMMARY:New specific solids definitions in the Geant4 geometry modeller
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294127@indico.cern.ch
DESCRIPTION:Speakers: O. Link (CERN\, PH/SFT)\nTwisted trapezoids are impo
rtant components in the LAr end cap calorimeter of the \nAtlas detector.
A similar solid\, the so-called twisted tubs consists of two end \nplanes
\, inner and outer hyperboloidal surfaces\, and twisted surfaces\, and is
an \nindispensable component for cylindrical drift chambers (see K. Hoshin
a et al\, \nComputer Physics Communications 153 (2003) 373-391). In Geant
 3 a general \nversion of a twisted trapezoid exists\; however\, the implemen
tation puts very strong \nrestrictions on its use. \nIn the Geant4 toolkit
no solids have been available to date to describe twisted \nobjects. The
design and realisation of new twisted solids within the framework of \nGea
nt4 will be presented together with the algorithmic details\, followed by
a \ndiscussion of the performance and accuracy test results.\n\nhttps://in
dico.cern.ch/event/0/contributions/1294127/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294127/
END:VEVENT
BEGIN:VEVENT
SUMMARY:POOL Development Status and Plans
DTSTART;VALUE=DATE-TIME:20040929T122000Z
DTEND;VALUE=DATE-TIME:20040929T124000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294394@indico.cern.ch
DESCRIPTION:Speakers: D. Duellmann (CERN IT/DB & LCG POOL PROJECT)\nThe LC
G POOL project is now entering the third year of active development. The b
asic functionality of the \nproject is provided but some functional extens
ions will move into the POOL system this year. This \npresentation will gi
ve a summary of the main functionality provided by POOL\, which is used in ph
ysics \nproductions today. We will then present the design and implementat
ion of the main new interfaces and \ncomponents planned such as the POOL R
DBMS abstraction layer and the RDBMS based Storage Manager back-\nend.\n\n
https://indico.cern.ch/event/0/contributions/1294394/
LOCATION:Interlaken\, Switzerland Brunig 1 + 2
URL:https://indico.cern.ch/event/0/contributions/1294394/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A New STAR Event Reconstruction Chain
DTSTART;VALUE=DATE-TIME:20040929T161000Z
DTEND;VALUE=DATE-TIME:20040929T163000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294427@indico.cern.ch
DESCRIPTION:Speakers: C. Pruneau (WAYNE STATE UNIVERSITY)\nWe present the
design and performance analysis of a new event reconstruction chain deploy
ed for analysis of \nSTAR data acquired during the 2004 run and beyond. Th
e creation of this new chain involved the elimination \nof obsolete FORTRA
N components\, and the development of equivalent or superior modules writt
en in C++. \nThe new reconstruction chain features a new and fast TPC clu
ster finder\, a new track reconstruction software \n(ITTF discussed at CHE
P2003)\, which seamlessly integrates all detector components of the experim
ent\, a new \nvertex finder\, and various post-tracking analysis modules i
ncluding a V0 finder\, and a track kink finder. The \nnew chain is the cul
mination of a large software development effort involving in excess of ten
FTEs.\n\nhttps://indico.cern.ch/event/0/contributions/1294427/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294427/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A Control Software for the ALICE High Level Trigger
DTSTART;VALUE=DATE-TIME:20040929T161000Z
DTEND;VALUE=DATE-TIME:20040929T163000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294416@indico.cern.ch
DESCRIPTION:Speakers: T.M. Steinbeck (KIRCHHOFF INSTITUTE OF PHYSICS\, RUP
RECHT-KARLS-UNIVERSITY HEIDELBERG\, for the Alice Collaboration)\nThe Alic
e High Level Trigger (HLT) cluster is foreseen to consist of \n400 to 500
dual SMP PCs at the start-up of the experiment. The \nsoftware running on
these PCs will consist of components \ncommunicating via a defined interfa
ce\, allowing flexible software \nconfigurations. During Alice's operation
the HLT has to be \ncontinuously active to avoid detector dead time. To e
nsure that the \nseveral hundred software components\, distributed through
out the \ncluster\, operate and interact properly\, a control software was
\nwritten that is presented here. It was designed to run distributed \nov
er the cluster and to support control program hierarchies. \nDistributed o
peration avoids central performance bottlenecks and \nsingle points of fai
lure. The last point is of particular \nimportance\, as the commo
dity-type PCs in the HLT cluster \ncannot be relied upon to operate contin
ously. Control hierarchies in \nturn are relevant for scalability over the
required number of nodes. \nThe software makes use of existing and widely
used technologies: \nConfigurations of programs to be controlled are save
d in XML\, while \nPython is used as a scripting language and to specify a
ctions\nto execute. Interface libraries are used to access the controlled
\ncomponents\, presenting a uniform interface to the control program. \nUs
ing these mechanisms the control software remains generic and can \nbe use
d for other purposes as well. It is being used for HLT data \nchallenges i
n Heidelberg and is planned for use during upcoming beam \ntests.\n\nhttps
://indico.cern.ch/event/0/contributions/1294416/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294416/
END:VEVENT
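The configuration and scripting approach mentioned above (component configurations saved in XML, actions specified in Python) can be pictured with a small sketch; the XML layout, component names and actions are assumptions for illustration, not the actual HLT control software.

# Minimal sketch: read an XML description of components and dispatch Python
# actions for each; not the ALICE HLT control software, names are invented.
import xml.etree.ElementTree as ET

CONFIG = """
<cluster>
  <component name="cluster-finder" host="node001" action="start"/>
  <component name="event-merger"   host="node002" action="stop"/>
</cluster>
"""

def start(name, host): print(f"starting {name} on {host}")
def stop(name, host):  print(f"stopping {name} on {host}")

ACTIONS = {"start": start, "stop": stop}

for comp in ET.fromstring(CONFIG).findall("component"):
    ACTIONS[comp.get("action")](comp.get("name"), comp.get("host"))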
BEGIN:VEVENT
SUMMARY:Globally Distributed User Analysis Computing at CDF
DTSTART;VALUE=DATE-TIME:20040930T161000Z
DTEND;VALUE=DATE-TIME:20040930T163000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294385@indico.cern.ch
DESCRIPTION:Speakers: A. Sill (TEXAS TECH UNIVERSITY)\nTo maximize the phy
sics potential of the data currently being taken\, the CDF collaboration a
t Fermi National \nAccelerator Laboratory has started to deploy user analy
sis computing facilities at several locations throughout \nthe world. Ove
r 600 users are signed up and able to submit their physics analysis and si
mulation applications \ndirectly from their desktop or laptop computers to
these facilities. These resources consist of a mix of \ncustomized compu
ting centers and a decentralized version of our Central Analysis Facility
(CAF) initially used \nat Fermilab\, which we have designated Decentralize
d CDF Analysis Facilities (DCAFs).\n\nWe report on experience gained durin
g the initial deployment and use of these resources for the summer \nconfe
rence season 2004. During this period\, we allowed MC generation as well a
s data analysis of selected \ndata samples at several globally distributed
centers. In addition\, we discuss a migration path from this first \ngene
ration distributed computing infrastructure towards a more open implementa
tion that will be \ninteroperable with LCG\, OSG and other general-purpose
grid installations at the participating sites.\n\nhttps://indico.cern.ch/
event/0/contributions/1294385/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294385/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Job Interactivity using a Steering Service in an Interactive Grid
Analysis Environment
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294144@indico.cern.ch
DESCRIPTION:Speakers: A. Anjum (NIIT)\nIn the context of Interactive Grid-
Enabled Analysis Environment \n(GAE)\, physicists desire bi-directional in
teraction with the job \nthey submitted. In one direction\, monitoring inf
ormation about the \njob and hence a “progress bar” should be provided
 to them. In the other \ndirection\, physicists should be able to control their
jobs. Before \nsubmission\, they may direct the job to some specified res
ource or \ncomputing element. Before execution\, its parameters may be chan
ged or \nit may be moved to another location. During execution\, its \nint
ermediate results should be fetched or it may be moved to another \nlocati
on. Also\, physicists should be able to kill\, restart\, hold and \nresume
their jobs.\n\nInteractive job execution requires that at each step\, the
user must \nmake choices between alternative application components\, fil
es\, or \nlocations. So a dead end may be reached where no solution can be
\nfound\, which would require backtracking to undo some previous \nchoice
. Another desire is reliable and optimal execution of the job\; \nthe Grid shou
ld take some decisions regarding job execution to help \nachieve this. Reliab
ility can be \nachieved using the j
ob recovery mechanism. When a job on the grid fails\, \nthe recovery mechanism
 should resubmit the job on either the same \nresource or on a different res
ource. Check-pointing the job will \nreduce resource utilization when re
covering the job from failure. \n\nIn this paper the architecture and desi
gn of an autonomous grid \nservice is described that fulfills the above st
ated requirements for \ninteractivity in Grid-enabled data analysis.\n\nht
tps://indico.cern.ch/event/0/contributions/1294144/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294144/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Track Extrapolation package in the new ATLAS Tracking Realm
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294173@indico.cern.ch
DESCRIPTION:Speakers: A. Salzburger (UNIVERSITY OF INNSBRUCK)\nThe ATLAS r
econstruction software requires extrapolation to arbitrarily oriented\nsurfa
ces of different types inside a non-uniform magnetic field. In addition\, mu
ltiple\nscattering and energy loss effects along the propagated trajectori
es have to be taken\ninto account. Good performance with respect to computi
ng time consumption is crucial\ndue to hit and track multiplicity in high
luminosity events at the LHC and the small\ntime window of the ATLAS high
 level trigger.\nTherefore\, stable and fast algorithms for the propagation of
the track parameters and\ntheir associated covariance matrices in specifi
c representations to different\nsurfaces in the detector are required.\nTh
e recently developed track extrapolation package inside the new ATLAS off
line\ntracking software is presented. Timing performance studies\, integrat
ion tests with\nclient algorithms and results on ATLAS 2004 Combined Test
Beam data are given.\n\nhttps://indico.cern.ch/event/0/contributions/12941
73/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294173/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Paths: Specifying Multiple Job Outputs via Filter Expressions
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294335@indico.cern.ch
DESCRIPTION:Speakers: C. Jones (CORNELL UNIVERSITY)\nA common task for a r
econstruction/analysis system is to be able to\noutput different sets of e
vents to different permanent data stores\n(e.g. files). This allows multi
ple related logical jobs to be grouped\ninto one process and run using the
same input data (read from a\npermanent data store and/or created from an
algorithm). In our\nsystem\, physicists can specify multiple output 'pat
hs'\, where \neach path contains a group of filters followed by output 'op
erations'.\n The filters are combined using a physicist-specified boolean\
nexpression\; only if the expression evaluates to true will the output\nop
eration be performed for that event. \nPaths do not explicitly contain th
e order in which data objects should be\ncreated\, as our system uses a 'data on
demand' mechanism which causes\ndata to be created the first time the dat
a is requested. Separating\nthe data dependencies from the event selectio
n criteria vastly\nsimplifies the task of creating a path\, thereby making
the facility\nmore accessible to physicists.\n\nhttps://indico.cern.ch/ev
ent/0/contributions/1294335/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294335/
END:VEVENT
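A minimal sketch of the path mechanism described above may help (illustrative only; the filters, the expression form and the output operations are assumptions, not the experiment's actual API): each path pairs a boolean combination of filters with an output operation, and the operation runs only for events on which the expression evaluates to true.

# Minimal sketch of output paths gated by boolean filter expressions;
# not the actual framework, all names are invented.
def has_two_tracks(event):  return event["ntracks"] >= 2
def high_energy(event):     return event["energy"] > 5.0

# Each path: a physicist-specified boolean expression plus an output operation.
paths = [
    {"expr": lambda e: has_two_tracks(e) and high_energy(e),
     "output": lambda e: print("write to skim_A:", e)},
    {"expr": lambda e: not high_energy(e),
     "output": lambda e: print("write to skim_B:", e)},
]

events = [{"ntracks": 3, "energy": 7.2}, {"ntracks": 1, "energy": 0.9}]
for event in events:
    for path in paths:
        if path["expr"](event):     # evaluate the boolean filter expression
            path["output"](event)   # run the output operation only if true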
BEGIN:VEVENT
SUMMARY:An Embedded Linux System Based on PowerPC
DTSTART;VALUE=DATE-TIME:20040927T130000Z
DTEND;VALUE=DATE-TIME:20040927T132000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294403@indico.cern.ch
DESCRIPTION:Speakers: M. Ye (INSTITUTE OF HIGH ENERGY PHYSICS\, ACADEMIA S
INICA)\nThis article introduces an embedded Linux system based on VME-series \n
PowerPC boards\, as well as the basic method for establishing the system. \n
The goal of the system is to build a test system for VMEbus devices. \n
It can also be used to set up data acquisition and control systems. \n
Two types of compiler are provided by the development system\, according \n
to the features of the system and of the PowerPC. At the beginning of \n
the article some typical embedded operating systems are introduced and \n
the features of the different systems are described. Then the method \n
for building an embedded Linux system\, as well as the key techniques\, \n
is discussed in detail. Finally a successful data acquisition example \n
is given based on the test system.\n\nhttps://indico.cern.ch/event/0/contributions/12944
03/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294403/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Extending EGS with SVG for Track Visualization
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294325@indico.cern.ch
DESCRIPTION:Speakers: B. White (STANFORD LINEAR ACCELERATOR CENTER (SLAC))
\nThe Electron Gamma Shower (EGS) Code System at SLAC is designed to simul
ate the flow \nof electrons\, positrons and photons through matter at a wi
de range of energies. It \nhas a large user base among the high-energy phy
sics community and is often used as a \nteaching tool through a Web interf
ace that allows program input and output. Our work \naims to improve the u
ser interaction and shower visualization model of the EGS Web \ninterface.
Currently\, manipulation of the graphical output (a GIF file) is limited
\nto simple operations like panning and zooming\, and each such operation
requires \nserver-side calculations. We use SVG (Scalable Vector Graphics)
to allow a much \nricher set of operations\, letting users select a track
and visualize it with the aid \nof 3-D rotations\, adjustable particle di
splay intensities\, and interactive display \nof the interactions happenin
g over time. A considerable advantage of our method is \nthat once a track
is selected for visualization\, all further manipulations on that \ntrack
can be done client-side without requiring server-side calculations. We he
nce \ncombine the advantages of the SVG format (powerful interaction model
s over the Web) \nwith those of conventional image formats (file size inde
pendent of scene complexity) \nto allow a composite set of operations for
users\, and enhance the value of EGS as a \npedagogical tool.\n\nhttps://i
ndico.cern.ch/event/0/contributions/1294325/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294325/
END:VEVENT
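The client-side vector output described above can be pictured with a short sketch (illustrative only, not the EGS web-interface code; the track points are invented): a particle track written as an SVG polyline can be rotated, rescaled or restyled in the browser without further server-side calculations.

# Minimal sketch: emit a particle track as an SVG polyline; not EGS code,
# the points are invented.
track = [(10, 80), (30, 55), (55, 40), (90, 32)]      # (x, y) points along a track

points = " ".join(f"{x},{y}" for x, y in track)
svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">\n'
    f'  <polyline points="{points}" fill="none" stroke="red" stroke-width="1"/>\n'
    '</svg>\n'
)

with open("track.svg", "w") as f:
    f.write(svg)
# Once loaded, transforms such as rotation or zoom can be applied to the
# <polyline> element entirely on the client side.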
BEGIN:VEVENT
SUMMARY:The deployment mechanisms for the ATLAS software.
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294441@indico.cern.ch
DESCRIPTION:Speakers: C. ARNAULT (CNRS)\nOne of the most important problem
s in software management of a very\nlarge and complex project such as Atla
s is how to deploy the software\non the running sites. By running sites we
include computer sites\nranging from computing centers in the usual sense
down to individual\nlaptops but also the computer elements of a computing
grid\norganization. The deployment activity consists in constructing a we
ll\ndefined representation of the states of the working software (known as
\nreleases)\, and transporting them to the target sites\, in such a way\nt
hat the installation process can be entirely automated and can take\ncare
of discovering the context and adapting itself to it. A set of\ntools base
d on both CMT - the basic configuration management tool of\nATLAS - and Pa
cman has been developed. The resulting mechanism now\nsupports the systema
tic production of distribution kits for various\nbinary conditions of ever
y release\, the partial or complete automatic\ninstallation of kits on any
site and the running of test suites to\nvalidate the installed kits. This
mechanism is meant to be fully\ncompliant with the Grid requirements and
has been tested in the\ncontext of LCG. Several issues related with the c
onstraints on the\ntarget system\, or with the incremental updates of the
installation\nstill need to be studied and will be discussed.\n\nhttps://i
ndico.cern.ch/event/0/contributions/1294441/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294441/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Project CampusGrid
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294240@indico.cern.ch
DESCRIPTION:Speakers: O. Schneider (FZK)\nA central idea of Grid Computing
is the virtualization of \nheterogeneous resources. To meet this challeng
e the Institute for \nScientific Computing\, IWR\, has started the project
CampusGrid. Its \nmedium term goal is to provide a seamless IT environmen
t \nsupporting the on-site research activities in physics\, \nbioinformati
cs\, nanotechnology and meteorology. The environment \nwill include all ki
nds of HPC resources: vector computers\, shared \nmemory SMP servers and c
lusters of commodity components as well as \na shared high-performance sto
rage solution. After introducing the \ngeneral ideas the talk will inform
about the current project \nstatus and scheduled development tasks. This i
s associated with reports on other \nactivities in the fields of Grid comp
uting and \nhigh performance computing at IWR.\n\nhttps://indico.cern.ch/e
vent/0/contributions/1294240/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294240/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Disk storage technology for the LHC T0/T1 centre at CERN
DTSTART;VALUE=DATE-TIME:20040929T124000Z
DTEND;VALUE=DATE-TIME:20040929T130000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294331@indico.cern.ch
DESCRIPTION:Speakers: H. Meinhard (CERN-IT)\nBy 2008\, the T0/T1 centre fo
r the LHC at CERN is estimated to use\nabout 5000 TB of disk storage. This
 is a very significant increase\nover the roughly 250 TB in use now. In ord
er to be affordable\, the\nchosen technology must provide the required per
formance and at the\nsame time be cost-effective and easy to operate and u
se.\n\nWe will present an analysis of the cost (both in terms of material\
nand personnel) of the current implementation (network-attached\nstorage)\
, and then describe detailed performance studies with hardware\ncurrently
in use at CERN in different configurations of filesystems\non software or
hardware RAID arrays over disks. Alternative\ntechnologies that have been
evaluated by CERN in varying depth (such\nas arrays of SATA disks with a F
iber Channel uplink\, distributed disk\nstorage across worker nodes\, iSCS
I solutions\, SANFS\, ...) will be\ndiscussed. We will conclude with an ou
tlook of the next steps to be\ntaken at CERN towards defining the future d
isk storage model.\n\nhttps://indico.cern.ch/event/0/contributions/1294331
/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294331/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Federating Grids: LCG meets Canadian HEPGrid
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294485@indico.cern.ch
DESCRIPTION:Speakers: R. Walker (Simon Fraser University)\nA large number
of Grids have been developed\, motivated by\ngeo-political or application
requirements. Despite being mostly based\non the same underlying middlewar
e\, the Globus Toolkit\, they are\ngenerally not inter-operable for a vari
ety of reasons. We present a\nmethod of federating those disparate grids w
hich are based on the\nGlobus Toolkit\, together with a concrete example o
f interfacing the\nLHC grid(LCG) with HEPGrid. HEPGrid consists of shared
resources\, at\nseveral Canadian research institutes\, which are exposed
via Globus\ngatekeepers\, and makes use of Condor-G for resource advertise
ment\,\nmatchmaking and job submission. An LCG Computing Element(CE) based
at\nthe TRIUMF Laboratory hosts a HEPGrid User Interface(UI) which is\nco
ntained within a custom jobmanager. This jobmanager appears in the\nLCG i
nformation system as a normal CE publishing an aggregation of the\nHEPGrid
resources. The interface interprets the incoming job in terms\nof HEPGrid
UI usage\, submits it onto HEPGrid\, and implements the\njobmanager 'poll
' and 'remove' methods\, thus enabling monitoring and\ncontrol across the
grids. In this way non-LCG resources are\nintegrated into LCG\, without t
he need for LCG middleware on those\nresources. The same method can be us
ed to create interfaces between\nother grids\, with the details of the chi
ld-Grid being fully abstracted\ninto the interface layer. The LCG-HEPGrid
 interface is operational\,\nand has been used to federate 1300 CPUs at 4
sites into LCG for the\nAtlas Data Challenge (DC2).\n\nhttps://indico.cer
n.ch/event/0/contributions/1294485/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294485/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Monitoring a Petabyte Scale Storage System
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294568@indico.cern.ch
DESCRIPTION:Speakers: E. Berman (FERMILAB)\nFermilab operates a petabyte s
cale storage system\, Enstore\, which is the\nprimary data store for exper
iments' large data sets. The Enstore system\nregularly transfers greater
than 15 Terabytes of data each day. It is designed using a\nclient-server
architecture providing sufficient modularity to allow easy addition and\n
replacement of hardware and software components. Monitoring of this syste
m is\nessential to ensure the integrity of the data that is stored in it a
nd to maintain\nthe high volume access that this system supports.\n\nThe m
onitoring of this distributed system is accomplished\nusing a variety of t
ools and techniques that present information for use\nby a variety of role
s (operator\, storage system administrator\, storage software\ndeveloper\,
user).\nAll elements of the system are monitored: performance\, hardware\
,\nfirmware\, software\, network\, data integrity.\nWe will present detail
s of the deployed monitoring tools with an\nemphasis on the different tech
niques that have proved useful\nto each role. Experience with the monito
ring tools and techniques\,\nwhat worked and what did not\, will be presente
d.\n\nhttps://indico.cern.ch/event/0/contributions/1294568/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294568/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Lattice QCD Clusters at Fermilab
DTSTART;VALUE=DATE-TIME:20040927T124000Z
DTEND;VALUE=DATE-TIME:20040927T130000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294117@indico.cern.ch
DESCRIPTION:Speakers: Don Petravick ()\nAs part of the DOE SciDAC "Nationa
l Infrastructure for Lattice Gauge\nComputing" project\, Fermilab builds a
nd operates production clusters for\nlattice QCD simulations. We currentl
y operate three clusters: a 128-node dual\nXeon Myrinet cluster\, a 128-no
de Pentium 4E Myrinet cluster\, and a 32-node\ndual Xeon Infiniband cluste
r. We will discuss the operation of these systems\nand examine their perf
ormance in detail. We will describe the uniform user\nruntime environment
emerging from the SciDAC collaboration.\n\nThe design of lattice QCD clus
ters requires careful attention towards\nbalancing memory bandwidth\, floa
ting point throughput\, and network\nperformance. We will discuss our inv
estigations of various commodity\nprocessors\, including Pentium 4E\, Xeon
\, Itanium2\, Opteron\, and PPC970\, in\nterms of their suitability for bu
ilding balanced QCD clusters. We will also\ndiscuss our early experiences
with the emerging Infiniband and PCI Express\narchitectures. Finally\, w
e will examine historical trends in price to\nperformance ratios of lattic
e QCD clusters\, and we will present our\npredictions and plans for future
clusters.\n\nhttps://indico.cern.ch/event/0/contributions/1294117/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294117/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Job Submission & Monitoring on the PHENIX Grid*
DTSTART;VALUE=DATE-TIME:20040927T161000Z
DTEND;VALUE=DATE-TIME:20040927T163000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294277@indico.cern.ch
DESCRIPTION:Speakers: A. Shevel (STATE UNIVERSITY OF NEW YORK AT STONY BRO
OK)\nThe PHENIX collaboration records large volumes of data for each \nexp
erimental run (now about 1/4 PB/year). Efficient and timely \nanalysis of
this data can benefit from a framework for distributed \nanalysis via a gr
owing number of remote computing facilities in the \ncollaboration. The gr
id architecture has been\, or is being deployed \nat most of these facilit
ies.\nThe experience being obtained in the transition to the Grid \ninfras
tructure with a minimum of manpower is presented with particular \nemphasis
 on job monitoring and job submission in a multi-cluster \nenvironment. The i
ntegration of the existing subsystems\n(from Globus project\, from several
HEP collaborations)\, large \napplication libraries\, and other software
tools to render the \nresulting architecture stable\, robust\, and useful
for the end user is \nalso discussed.\n\nhttps://indico.cern.ch/event/0/co
ntributions/1294277/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294277/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Atlantis event visualisation program for the ATLAS experiment
DTSTART;VALUE=DATE-TIME:20040930T132000Z
DTEND;VALUE=DATE-TIME:20040930T134000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294437@indico.cern.ch
DESCRIPTION:Speakers: J. Drohan (University College London)\nWe describe t
he philosophy and design of Atlantis\, an event visualisation\nprogram for
the ATLAS experiment at CERN. Written in Java\, it employs the\nSwing API
to provide an easily configurable Graphical User Interface.\n\nAtlantis i
mplements a collection of intuitive\, data-orientated 2D\nprojections\, wh
ich enable the user to quickly understand and visually\ninvestigate comple
te ATLAS events. Event data is read in from XML files\nproduced by a dedic
ated algorithm running in the ATLAS software framework\nATHENA\, and trans
lated into internal data objects. Within the same main\ncanvas area\, mult
iple views of the data can be displayed with varying size\nand position. I
nteractions such as zoom\, selection and query can occur\nbetween these vi
ews using Drag and Drop.\n\nAssociations between data objects as well as t
he values of their member\nvariables provide criteria upon which the Atlan
tis user may filter a full\nAtlas event. By choosing whether or not to sho
w certain data and\, if so\,\nin what colour\, a more personalised and use
ful display may be obtained.\nThe user can dynamically create and manage t
heir own associations and\nperform context dependent operations upon them.
\n\nhttps://indico.cern.ch/event/0/contributions/1294437/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294437/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Parallel compilation of CMS software
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294467@indico.cern.ch
DESCRIPTION:Speakers: S. Schmid (ETH Zurich)\nLHC experiments have large a
mounts of software to build. CMS has\nstudied ways to shorten project buil
d times using parallel and\ndistributed builds as well as improved ways to
decide what to rebuild.\nWe have experimented with making idle desktop an
d server machines\neasily available as a virtual build cluster using distc
c and zeroconf.\nWe have also tested variations of ccache and more traditi
onal make\ndependency analysis. \nWe report on our test results\, with ana
lysis of the factors that most\nimprove or limit build performance.\n\nhtt
ps://indico.cern.ch/event/0/contributions/1294467/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294467/
END:VEVENT
BEGIN:VEVENT
SUMMARY:HEP Applications Experience with the European DataGrid Middleware
and Testbed
DTSTART;VALUE=DATE-TIME:20040927T143000Z
DTEND;VALUE=DATE-TIME:20040927T145000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294228@indico.cern.ch
DESCRIPTION:Speakers: S. Burke (Rutherford Appleton Laboratory)\nThe Europ
ean DataGrid (EDG) project ran from 2001 to 2004\, with the aim of produci
ng \nmiddleware which could form the basis of a production Grid\, and of r
unning a testbed \nto demonstrate the middleware. HEP experiments (initial
ly the four LHC experiments \nand subsequently BaBar and D0) were involved
from the start in specifying \nrequirements\, and subsequently in evaluat
ing the performance of the middleware\, both \nwith generic tests and thro
ugh increasingly complex data challenges. A lot of \nexperience has theref
ore been gained which may be valuable to future Grid projects\, \nin parti
cular LCG and EGEE which are using a substantial amount of the middleware
\ndeveloped in EDG. We report our experiences with job submission\, data m
anagement and \nmass storage\, information and monitoring systems\, Virtua
l Organisation management \nand Grid operations\, and compare them with so
me typical Use Cases defined in the \ncontext of LCG. We also describe som
e of the main lessons learnt from the project\, \nin particular in relatio
n to configuration\, fault-tolerance\, interoperability and \nscalability\
, as well as the software development process itself\, and point out some
\nareas where further work is needed. We also make some comments on how th
ese issues \nare being addressed in LCG and EGEE.\n\nhttps://indico.cern.c
h/event/0/contributions/1294228/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294228/
END:VEVENT
BEGIN:VEVENT
SUMMARY:EU Grid Research - Projects and Vision
DTSTART;VALUE=DATE-TIME:20040928T103000Z
DTEND;VALUE=DATE-TIME:20040928T104500Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294494@indico.cern.ch
DESCRIPTION:Speakers: Max Lemke ()\nThe European Grid Research vision as s
et out in the Information\nSociety Technologies Work Programmes of the EU'
s Sixth Research\nFramework Programme is to advance\, consolidate and matu
re Grid\ntechnologies for widespread e-science\, industrial\, business and
\nsocietal use. A batch of Grid research projects with 52 Million EUR EU\n
support was launched during the European Grid Technology Days 15 - 17\nSep
tember 2004. The portfolio of projects has the potential for\nturning Euro
pe's strong competence and critical mass in Grid Research\ninto competitiv
e advantages. In this presentation\, the Grid research\nvision of the prog
ramme and the new project portfolio will be\nintroduced. More information:
www.cordis.lu/ist/grids.\n\nhttps://indico.cern.ch/event/0/contributions/
1294494/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294494/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Go4 analysis design
DTSTART;VALUE=DATE-TIME:20040927T134000Z
DTEND;VALUE=DATE-TIME:20040927T140000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294369@indico.cern.ch
DESCRIPTION:Speakers: H. Essel (GSI)\nThe GSI online-offline analysis syst
em Go4 is a ROOT based framework for medium \nenergy ion- and nuclear phys
ics experiments. Its main features are a multithreaded \nonline mode with
a non-blocking Qt GUI\, and abstract user interface classes to set \nup th
e analysis process itself which is organised as a list of subsequent analy
sis \nsteps. Each step has its own event objects and a processor instance.
It can handle \nits event i/o independently. It can be set up by macros o
r by a generic GUI. With \nrespect to the more complex experiments planned
at GSI\, a configurable network of \nsteps is required. Multiple IO chann
els per step and multiple references to steps \ncan be set up by macros or
via generic GUI. The required mechanisms are provided by \nan upgrade of
the Go4 analysis step manager using the new ROOT TTasks. Support for \nIO
configuration and references across the task tree is provided.\n\nhttps://
indico.cern.ch/event/0/contributions/1294369/
LOCATION:Interlaken\, Switzerland Brunig
URL:https://indico.cern.ch/event/0/contributions/1294369/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A statistical toolkit for data analysis
DTSTART;VALUE=DATE-TIME:20040930T153000Z
DTEND;VALUE=DATE-TIME:20040930T155000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294208@indico.cern.ch
DESCRIPTION:Speakers: M.G. Pia (INFN GENOVA)\nStatistical methods play a s
ignificant role throughout the life-\ncycle of HEP experiments\, being an
essential component of physics \nanalysis. We present a project in progres
s for the development of \nan object-oriented software toolkit for statis
tical data analysis. \nIn particular\, the Statistical Comparison com
ponent of the \ntoolkit provides algorithms for the comparison of data dis
tributions \nin a variety of use cases typical of HEP experiments\, such as reg
ression \ntesting (in various phases of the software life-cycle)\, validat
ion \nof simulation through comparison to experimental data\, comparison o
f \nexpected versus reconstructed distributions\, comparison of data from
\ndifferent sources - such as different sets of experimental data\, or \ne
xperimental with respect to theoretical distributions. The toolkit \nconta
ins a variety of goodness-of-fit tests\, from chi-squared to \nKolmogorov-
Smirnov\, to less known\, but generally much more powerful \ntests such as
Anderson-Darling\, Cramer-von Mises\, Kuiper\, Tiku etc.\n\nThanks to the
component-based design and the usage of the standard \nAIDA interfaces\,
this tool can be used by other data analysis \nsystems or integrated in e
xperimental software frameworks. We \npresent the architecture of the syst
em\, the statistics methods \nimplemented and some results of its applica
tions to the comparison \nof Geant4 simulations with respect to experiment
.\n\nhttps://indico.cern.ch/event/0/contributions/1294208/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294208/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Operating the LCG and EGEE Production Grids for HEP
DTSTART;VALUE=DATE-TIME:20040928T070000Z
DTEND;VALUE=DATE-TIME:20040928T073000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294319@indico.cern.ch
DESCRIPTION:Speakers: I. Bird (CERN)\nIn September 2003 the first LCG-1 se
rvice was put into production at most of the \nlarge Tier 1 sites and was
quickly expanded up to 30 Tier 1 and Tier 2 sites by the \nend of the year
. Several software upgrades were made and the LCG-2 service was put \nint
o production in time for the experiment data challenges that began in Febr
uary \n2004 and continued for several months. In particular LCG-2 introdu
ced transparent \naccess to mass storage and managed disk-only storage ele
ments\, and a first release \nof the Grid File Access library. Much valua
ble experience was gained during the \ndata challenges in all aspects from
the functionality and use of the middleware\, to \nthe deployment\, maint
enance\, and operation of the services at many sites. Based on \nthis exp
erience a program of work to address the functional and operational issues
\nis being implemented. The goal is to focus on essential areas such as
data \nmanagement and to build by the end of 2004 a basic grid system capa
ble of handling \nthe basic needs of LHC computing\, providing direction f
or future middleware and \nservice development. \n\nThe LCG-2 infrastruct
ure also forms the production service of EGEE. This involves \nsupporting
new application communities\, bringing in new sites not associated with \
nHEP and evolving a full scale 24x7 user and operational support structure
. We will \ndescribe the EGEE infrastructure\, how it supports and intera
cts with LCG\, and how \nwe expect the infrastructure to evolve over the n
ext year of the EGEE project.\n\nhttps://indico.cern.ch/event/0/contributi
ons/1294319/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294319/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The GeoModel Toolkit for Detector Description
DTSTART;VALUE=DATE-TIME:20040930T124000Z
DTEND;VALUE=DATE-TIME:20040930T130000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294152@indico.cern.ch
DESCRIPTION:Speakers: V. Tsulaia (UNIVERSITY OF PITTSBURGH)\nThe GeoModel
toolkit is a library of geometrical primitives that can be\nused to descri
be detector geometries. The toolkit is designed as a data\nlayer\, and e
specially optimized in order to be able to describe large and\ncomplex det
ector systems with minimum memory consumption. Some of the\ntechniques us
ed to minimize the memory consumption are: shared instancing\nwith refere
nce counting\, compressed representations of Euclidean\ntransformations\,
special nodes which encode the naming of volumes without storing \nname-st
rings\, and\, especially\, parameterization through embedded symbolic expre
ssions \nof transformation fields. A faithful representation of a GeoMode
l description \ncan be transferred to Geant4\, and\, we predict\, to other
engines that simulate the \ninteraction of particles with matter. GeoMod
el comes with native capabilities for \ngeometry clash detection and for m
aterial integration. Its only external \ndependencies are upon CLHEP.\nTh
is talk describes this toolkit for the first time in a public forum.\n\nht
tps://indico.cern.ch/event/0/contributions/1294152/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294152/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Transparently managing time varying conditions and detector data o
n ATLAS.
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294352@indico.cern.ch
DESCRIPTION:Speakers: C. Leggett (LAWRENCE BERKELEY NATIONAL LABORATORY)\n
It is essential to provide users transparent access to time varying\ndata\
, such as detector misalignments\, calibration parameters and the\nlike. T
his data should be automatically updated\, without user\nintervention\, wh
enever it changes. Furthermore\, the user should be\nable to be notified w
henever a particular datum is updated\, so as to\nperform actions such as
re-caching of compound results\, or performing\ncomputationally intensive
 tasks only when necessary. The user should\nonly have to select a particul
ar calibration scheme or time interval\,\nwithout having to worry about ex
plicitly updating data on an event by\nevent basis. In order to minimize d
atabase activity\, it is important\nthat the system only manage the parame
ters that are actively used in \na particular job\, making updates only on
demand. For certain\nsituations however\, such as testbeam environments\,
pre-caching of data \nis essential\, so the system must also be able to p
re-load all relevant\ndata at the start of a run\, and avoid further updat
es to the data.\n\n\nIn this talk we present the scheme for managing time
varying data and\ntheir associated intervals of validity\, as used in the
Athena framework\non ATLAS\, which features automatic updating of conditio
ns data\noccurring invisibly to the user\; automatic and explicit registra
tion\nof objects of interest\; callback function hierarchies\; and abstrac
t\nconditions database interfaces.\n\nhttps://indico.cern.ch/event/0/contr
ibutions/1294352/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294352/
END:VEVENT
BEGIN:VEVENT
SUMMARY:LHC data files meet mass storage and networks: going after the los
t performance
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294466@indico.cern.ch
DESCRIPTION:Speakers: L. Tuura (NORTHEASTERN UNIVERSITY\, BOSTON\, MA\, US
A)\nExperiments frequently produce many small data files for reasons beyon
d their control\, such as output \nsplitting into physics data streams\, p
arallel processing on large farms\, database technology incapable of \ncon
current writes into a single file\, and constraints from running farms rel
iably. Resulting data file size is \noften far from ideal for network tran
sfer and mass storage performance. Provided that time to analysis does \nn
ot significantly deteriorate\, files arriving from a farm could easily be
merged into larger logical chunks\, for \nexample by physics stream and fi
le type within a configurable time and size window.\n\nUncompressed zip ar
chives seem an attractive candidate for such file merging and are currentl
y tested by the \nCMS experiment. We describe the main components now in u
se: the merging tools\, tools to read and write zip \nfiles directly from
C++\, plug-ins to the database system\, mass-storage access optimisation\,
consistent \nhandling of application and replica metadata\, and integrati
on with catalogues and other grid tools. We report \non the file size rati
o obtained in the CMS 2004 data challenge and observations and analysis on
changes to \ndata access as well as estimated impact on network usage.\n\
nhttps://indico.cern.ch/event/0/contributions/1294466/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294466/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Griding The Nordic Supercomputing Infrastructure
DTSTART;VALUE=DATE-TIME:20040930T090000Z
DTEND;VALUE=DATE-TIME:20040930T093000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294499@indico.cern.ch
DESCRIPTION:Speakers: Bo Anders Ynnerman (Linköping)\nThis talk gives a b
rief overview of recent development of high\nperformance computing and Gri
d initiatives in the Nordic region. Emphasis\nwill be placed on the techno
logy and policy demands posed by the integration\nof general purpose super
computing centers into Grid environments. Some of\nthe early experiences
of bridging national eBorders in the Nordic region\nwill also be presented
.\nRather than giving an exhaustive presentation of all projects in the No
rdic\ncountries the presentation uses selected examples of Grid projects t
o show\nthe potential as well as some of the current limitations of Grids.
\n\nPlans for a common Nordic Grid Core Facility are currently being made
. The\npresentation gives an overview of these plans and the status of the
project.\nIt will also cover a few examples of Nordic Grid initiatives in
more detail\nsuch as the recently launched SweGrid test bed for productio
n.\n\nhttps://indico.cern.ch/event/0/contributions/1294499/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294499/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Methodologies and techniques for analysis of network flow data
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294258@indico.cern.ch
DESCRIPTION:Speakers: A. Bobyshev (FERMILAB)\nNetwork flow data gathered o
n border routers and core network switch/routers is used\nat Fermilab for
statistical analysis of traffic patterns\, passive network monitoring\,\na
nd estimation of network performance characteristics. Flow data is also a
critical\ntool in the investigation of computer security incidents. Devel
opment and enhancement\nof flow-based tools is an on-going effort. The curre
nt state of flow analysis is based\non the open source Flow-Tools package.
This paper describes the most recent\ndevelopments in flow analysis at Fe
rmilab. Our goal is to provide a multidimensional\nview of network traffi
c patterns\, with a detailed breakdown based on site\,\nexperiment\, domai
n\, subnet\, hosts\, protocol\, or application. The latest analysis\ntool
provides a descriptive and graphical representation of network traffic b
roken\ndown by combinations of experiment and DNS domain. The tool can be
utilized in\nreal-time mode\, as well as to provide a historical view. A
nother tool analyzes flow\ndata to provide performance characteristics of
completed multistream GridFTP data\ntransfers. The current prototype provi
des a web interface for dynamic administration\nof the flow reports. We w
ill describe and discuss the new features that we plan on\ndeveloping in f
uture enhancements to our flow analysis tool set.\n\nhttps://indico.cern.c
h/event/0/contributions/1294258/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294258/
END:VEVENT
BEGIN:VEVENT
SUMMARY:POOL Integration into three Experiment Software Frameworks
DTSTART;VALUE=DATE-TIME:20040929T124000Z
DTEND;VALUE=DATE-TIME:20040929T130000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294395@indico.cern.ch
DESCRIPTION:Speakers: Giacomo Govi ()\nThe POOL software package has been
successfully integrated with the three large experiment software \nframewo
rks of ATLAS\, CMS and LHCb. This presentation will summarise the experien
ce gained during these \nintegration efforts and will try to highlight the
commonalities and the main differences between the \nintegration approach
es. In particular we’ll discuss the role of the POOL object cache\, the
choice of the main \nstorage technology in ROOT (tree or named objects) an
d approaches to collection and catalogue integration.\n\nhttps://indico.ce
rn.ch/event/0/contributions/1294395/
LOCATION:Interlaken\, Switzerland Brunig 1 + 2
URL:https://indico.cern.ch/event/0/contributions/1294395/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Grid Collector: Using an Event Catalog to Speed up User Analysis i
n Distributed Environment
DTSTART;VALUE=DATE-TIME:20040930T151000Z
DTEND;VALUE=DATE-TIME:20040930T153000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294372@indico.cern.ch
DESCRIPTION:Speakers: K. Wu (LAWRENCE BERKELEY NATIONAL LAB)\nNuclear and
High Energy Physics experiments such as STAR at BNL are \ngenerating milli
ons of files with PetaBytes of data each year. In \nmost cases\, analysis
programs have to read all events in a file in \norder to find the interes
ting ones. \nSince most analyses are only interested in some subsets of e
vents in \na number of files\, a significant portion of the computer time
is \nwasted on reading the unwanted events. To address this issue\, we \n
developed a software system called the Grid Collector. The core of \nthe
Grid Collector is an "Event Catalog". \nThis catalog can be efficiently s
earched with compressed bitmap \nindices. Tests show that it can index an
d search STAR event data \nmuch faster than database systems. \nIt is ful
ly integrated with an existing analysis framework so that a \nminimal effo
rt is required to use the Grid Collector in an analysis \nprogram. In add
ition\, by taking advantage of existing file catalogs\, \nStorage Resource
Managers (SRMs) and GridFTP\, the Grid Collector \nautomatically download
s the needed files anywhere on the Grid without \nuser intervention.\n\nTh
e Grid Collector can significantly improve user productivity. The \nimpro
vement in productivity is more significant as users converge \ntoward sear
ching for rare events\, because only the rare events are \nread into memor
y and the necessary files are automatically located \nand downloaded throu
gh the best available route. For a user that \ntypically performs computa
tion on 50% of the events\, using the Grid \nCollector could reduce the tu
rnaround time by half.\n\nhttps://indico.cern.ch/event/0/contributions/
1294372/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294372/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Reflection-Based Python-C++ Bindings
DTSTART;VALUE=DATE-TIME:20040927T130000Z
DTEND;VALUE=DATE-TIME:20040927T132000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294239@indico.cern.ch
DESCRIPTION:Speakers: W. LAVRIJSEN (LBNL)\nPython is a flexible\, powerful
\, high-level language with excellent \ninteractive and introspective capa
bilities and a very clean syntax. As \nsuch it can be a very effective too
l for driving physics analysis.\n \nPython is designed to be extensible in
low-level C-like languages\, and \nits use as a scientific steering langu
age has become quite widespread. \nTo this end\, existing and custom-writt
en C or C++ libraries are bound \nto the Python environment as so-called e
xtension modules. A number of \ntools for easing the process of creating s
uch bindings exist\, such as \nSWIG or Boost.Python. Yet\, the process
still requires a \nconsiderable amount of effort and expertise.\n \nThe C
++ language has little built-in introspective capabilities\, but \ntools s
uch as LCGDict and CINT add this by providing so-called\ndictionaries: lib
raries that contain information about the names\, \nentry points\, argumen
t types\, etc. of other libraries.\nThe reflection information from these
dictionaries can be used for the \ncreation of bindings and so the process
can be fully automated\, as \ndictionaries are already provided for many
end-user libraries for \nother purposes\, such as object persistency.\n \n
PyLCGDict is a Python extension module that uses LCG dictionaries\, as \nP
yROOT uses CINT reflection information\, to allow Python users to \naccess
C++ libraries with essentially no preparation on the users' \nbehalf. In
addition\, and in a similar way\, PyROOT gives ROOT users \naccess to Pyth
on libraries.\n\nhttps://indico.cern.ch/event/0/contributions/1294239/
LOCATION:Interlaken\, Switzerland Brunig
URL:https://indico.cern.ch/event/0/contributions/1294239/
END:VEVENT
BEGIN:VEVENT
SUMMARY:MonALISA: An Agent Based\, Dynamic Service System to Monitor\, Con
trol and Optimize Grid based Applications.
DTSTART;VALUE=DATE-TIME:20040930T143000Z
DTEND;VALUE=DATE-TIME:20040930T145000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294542@indico.cern.ch
DESCRIPTION:Speakers: I. Legrand (CALTECH)\nThe MonALISA (MONitoring Agent
s in A Large Integrated Services Architecture) system\nis a scalable Dynam
ic Distributed Services Architecture which is based on the mobile\ncode pa
radigm.\n\nAn essential part of managing a global system\, like the Grids\
, is a monitoring system\nthat is able to monitor and track the many site
 facilities\, networks\, and all the\ntasks in progress\, in real time. MonA
LISA is designed to easily integrate existing\nmonitoring tools and proced
ures and to provide this information in a dynamic\, self\ndescribing way t
o any other services or clients.\n\nThe monitoring information gathered is
essential for developing higher level\nservices that provide decision su
pport\, and eventually some degree of automated\ndecisions\, to help maint
ain and optimize workflow through the Grid.\n\nMonALISA is an ensemble of
autonomous multi-threaded\, agent-based subsystems which\nare registered a
s dynamic services and are able to collaborate and cooperate in\nperformin
g a wide range of monitoring\, data processing and control tasks in large\
nscale distributed applications. We also present the development of specia
lized higher\nlevel services\, implemented as distributed mobile agents in
the MonALISA framework to\ncontrol and globally optimize tasks as grid sc
heduling\, real-time data streaming or\neffective file replication.\n\nTh
e system is currently used to monitor several large scale systems and prov
ides\ndetailed information for computing nodes\, LAN and WAN network compo
nents\, job\nexecution and application-specific parameters. This distribu
ted system proved to be\nreliable\, able to correctly handle connectivity
problems and is running around the\nclock on more than 120 sites.\n\nhttps
://indico.cern.ch/event/0/contributions/1294542/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294542/
END:VEVENT
BEGIN:VEVENT
SUMMARY:DZERO Data Acquisition Monitoring and History Gathering
DTSTART;VALUE=DATE-TIME:20040929T153000Z
DTEND;VALUE=DATE-TIME:20040929T155000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294163@indico.cern.ch
DESCRIPTION:Speakers: G. Watts (UNIVERSITY OF WASHINGTON)\nThe DZERO Colli
der Experiment logs many of its Data Acquisition Monitoring \nInformation i
n long term storage. This information is most frequently used to \nunderst
and shift history and efficiency. Approximately two kilobytes of informati
on \nis stored every 15 seconds. We describe this system and the web interf
ace provided. \nThe current system is distributed\, running on Linux for t
he back end and Windows \nfor the web interface front end and data logging
. We also discuss the development \npath we have taken for the database ba
ckend\, from use of root\, to Oracle\, and back \nto root.\n\nhttps://indi
co.cern.ch/event/0/contributions/1294163/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294163/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Network Security Protection System at IHEP-Net
DTSTART;VALUE=DATE-TIME:20040930T161000Z
DTEND;VALUE=DATE-TIME:20040930T163000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294218@indico.cern.ch
DESCRIPTION:Speakers: L. Ma (INSTITUTE OF HIGH ENERGY PHYSICS)\nNetwork se
curity at IHEP is becoming one of the most important issues \nof computing
environment. To protect its computing and network \nresources against att
acks and viruses from outside of the institute\, \nsecurity measures to co
mbat these are implemented. To enforce \nsecurity policy the network infra
structure was re-configured \nto one intranet and two DMZ areas. New rules
to control the access \nbetween intranet and DMZ areas are applied. All h
osts at IHEP are \ndivided into three types according to their security le
vels. Hosts of \nthe first type are isolated in the institute and can just
access the \nhosts inside of IHEP. The second type hosts access Internet
\nthrough NAT. The third type hosts will directly connect to outside. \nAn
intrusion detection system works with firewall so that all packets \nfrom
outside IHEP are checked and filtered. Access from outside will \ngo thro
ugh firewall or VPN. In order to prevent virus spread at IHEP \nand reduce
the number of spam mail we installed a virus filter and \nspam filter sys
tem. All of these measures make the network at IHEP \nmore secure. Attacks
\, virus and spam mails decrease dramatically.\n\nhttps://indico.cern.ch/e
vent/0/contributions/1294218/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294218/
END:VEVENT
BEGIN:VEVENT
SUMMARY:An Object-Oriented Simulation Program for CMS
DTSTART;VALUE=DATE-TIME:20040929T122000Z
DTEND;VALUE=DATE-TIME:20040929T124000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294544@indico.cern.ch
DESCRIPTION:Speakers: M. Stavrianakou (FNAL)\nThe CMS detector simulation
package\, OSCAR\, is based on the Geant4 simulation toolkit\nand the CMS o
bject-oriented framework for simulation and reconstruction.\nGeant4 provid
es a rich set of physics processes describing in detail electro-magnetic\n
and hadronic interactions. It also provides the tools for the implementati
on of the\nfull CMS detector geometry and the interfaces required for reco
vering information\nfrom the particle tracking in the detectors.\nThis fun
ctionality is interfaced to the CMS framework\, which\, via its "action on
\ndemand" mechanisms\, allows the user to selectively load desired modules
and to\nconfigure and tune the final application.\nThe complete CMS detec
tor is rather complex with more than 12 million readout\nchannels and more
than 1 million geometrical volumes. \nOSCAR has been validated by compari
ng its results with test beam data and with\nresults from simulation with
 a GEANT3-based program.\nIt has been successfully deployed in the 2004 data
challenge for CMS\, where ~20\nmillion events for various LHC physics cha
nnels were simulated and analysed.\n\nAuthors: \nS. Abdulline\, V. Andreev
\, P. Arce\, S. Arcelli\, S. Banerjee\, T. Boccali\, \nM. Case\, A. De Roe
ck\, S. Dutta\, G. Eulisse\, D. Elvira\, A. Fanfani\, F. Ferro\, \nM. Lien
dl\, S. Muzaffar\, A. Nikitenko\, K. Lassila-Perini\, I. Osborne\, \nM. St
avrianakou\, T. Todorov\, L. Tuura\, H.P. Wellisch\, T. Wildish\, S. Wynho
ff\, \nM. Zanetti\, A. Zhokin\, P. Zych\n\nhttps://indico.cern.ch/event/0/
contributions/1294544/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294544/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The new BaBar Analysis Model
DTSTART;VALUE=DATE-TIME:20040930T134000Z
DTEND;VALUE=DATE-TIME:20040930T140000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294118@indico.cern.ch
DESCRIPTION:Speakers: D. Brown (LAWRENCE BERKELEY NATIONAL LAB)\nThis talk
will describe the new analysis computing model deployed by\nBaBar over th
e past year. The new model was designed to better \nsupport the current a
nd future needs of physicists analyzing data\, \nand to improve BaBar's an
alysis computing efficiency.\nThe use of RootIO in the new model is descri
bed in other talks.\nBabar's new analysis data content format contains bot
h high and low \nlevel information\, allowing physicists to pick a tradeof
f between \nspeed and precision/flexibility appropriate to their analysis.
\nThe new format is customizable\, allowing physicists to create\nanalysis
-specific content using simple and familiar tools.\n\nBabar's new analysis
processing model involves selecting events \naccording to their physics c
ontent\, and writing them together with \nanalysis-customized content to d
edicated output streams.\nCurrently 120 such 'skims' are written as part o
f a periodic central \nprocessing cycle. Skims can be further reduced and
customized as \nwell as queried interactively in root-based applications.
Skims and \nsubskims retain links back to the original full event inform
ation. \nThis processing model eliminates the need for large tuple produc
tion \nefforts by physicists.\n\nThe entire BaBar data sample is available
in the new format\, and the \nnew model has been used to produce physics
results presented at the \nsummer HEP conferences.\nWe will also present r
eactions from the BaBar analysis community\, \nand describe the issues tha
t arose in deploying the new model.\n\nhttps://indico.cern.ch/event/0/cont
ributions/1294118/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294118/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Storage Resource Manager
DTSTART;VALUE=DATE-TIME:20040929T151000Z
DTEND;VALUE=DATE-TIME:20040929T153000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294179@indico.cern.ch
DESCRIPTION:Speakers: T. Perelmutov (FERMI NATIONAL ACCELERATOR LABORATORY
)\nStorage Resource Managers (SRMs) are middleware components whose functi
on is to\nprovide dynamic space allocation and file management on shared s
torage components on\nthe Grid. SRMs support protocol negotiation and rel
iable replication mechanism. The\nSRM standard allows independent institu
tions to implement their own SRMs\, thus\nallowing for a uniform access to
heterogeneous storage elements. SRMs leave the\npolicy decision to be mad
e independently by each implementation at each site.\nResource Reservati
ons made through SRMs have limited lifetimes and allow for\nautomatic coll
ection of unused resources thus preventing clogging of storage systems\nwi
th "forgotten" files.\n\nThe storage systems can be classified on the basis o
f their longevity and persistence of\ntheir data. Data can also be tempora
ry or permanent. To support these notions\, SRM\ndefines Volatile\, Dura
ble and Permanent types of files and spaces. Volatile files can\nbe remove
d by the system to make space for new files upon the expiration of their\n
lifetimes. Permanent files are expected to exist in the storage system for
the\nlifetime of the storage system. Finally Durable files have both the
lifetime\nassociated with them and a mechanism of notification of owners
and administrators of\nlifetime expiration\, but cannot be deleted automat
ically by the system and require\nexplicit removal.\n\nFermilab's data han
dling system uses the SRM management interface\, the dCache\nDistributed D
isk Cache and the Enstore Tape Storage System as key components to\nsatisf
y current and future user requests.\n\nStorage Resource Manager specificat
ion is a result of international collaborative\neffort by representatives
of JLAB\, LBNL\, FNAL\, EDG-WP2 and EDG-WP5.\n\nhttps://indico.cern.ch/
event/0/contributions/1294179/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294179/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Test of the ATLAS Inner Detector reconstruction software using com
bined test beam data
DTSTART;VALUE=DATE-TIME:20040930T130000Z
DTEND;VALUE=DATE-TIME:20040930T132000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294506@indico.cern.ch
DESCRIPTION:Speakers: W. Liebig (CERN)\nThe athena software framework for
event reconstruction in ATLAS\nwill be employed to analyse the data from t
he 2004 combined test beam.\nIn this combined test beam\, a slice of the A
TLAS detector is operated\nand read out under conditions similar to future
LHC running\,\nthus providing a test-bed for the complete reconstruction
chain.\nFirst results for the ATLAS InnerDetector will be presented.\n\nIn
particular\, the reading of the bytestream data inside athena\, the\nmoni
toring tasks\, the alignment techniques and all the different online\nand
offline reconstruction algorithms will be fully tested with real data.\nTh
eir performance will be studied and results compared to simulated data\,\n
which has been generated specifically for the test beam layout.\n\nhttps:/
/indico.cern.ch/event/0/contributions/1294506/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294506/
END:VEVENT
BEGIN:VEVENT
SUMMARY:AutoBlocker: A system for detecting and blocking of network scanni
ng based on analysis of netflow data.
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294510@indico.cern.ch
DESCRIPTION:Speakers: A. Bobyshev (FERMILAB)\nIn a large campus network\,
such as Fermilab's ten thousand nodes\, scanning initiated\nfrom either ou
tside of or within the campus network raises security concerns\, may\nhave
 a very serious impact on network performance\, and even disrupt normal oper
ation of\nmany services. In this paper we introduce a system for detecting
 and automatic\nblocking of excessive traffic of various kinds\, such as scanni
ng\, DoS attacks\, and virus-infected\ncomputers. The system\, called AutoBloc
ker\, is a distributed computing system\nbased on quasi-real time analysis
of network flow data collected from the border\nrouter and core routers.
AutoBlocker also has an interface to accept alerts from the\nIDS systems (
e.g. BRO\, SNORT) that are based on other technologies. The system has\nmu
ltiple configurable alert levels for the detection of anomalous behavior a
nd\nconfigurable trigger criteria for automated blocking of the scans at t
he core or\nborder routers. It has been in use at Fermilab for about 2 yea
rs\, and has become a very\nvaluable tool to curtail scan activity within the
Fermilab campus network.\n\nhttps://indico.cern.ch/event/0/contributions/1
294510/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294510/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Synergia: A Modern Tool for Accelerator Physics Simulation
DTSTART;VALUE=DATE-TIME:20040927T161000Z
DTEND;VALUE=DATE-TIME:20040927T163000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294380@indico.cern.ch
DESCRIPTION:Speakers: P. Spentzouris (FERMI NATIONAL ACCELERATOR LABORATOR
Y)\nComputer simulations play a crucial role in both the design and\nopera
tion of particle accelerators. General tools for modeling\nsingle-particle
accelerator dynamics have been in wide use for many\nyears. Multi-particl
e dynamics are much more computationally \ndemanding than single-particle
dynamics\, requiring supercomputers or \nparallel clusters of PCs. Because
of this\, simulations of multi-\nparticle dynamics have been much more sp
ecialized. Although several \nmulti-particle simulation tools are now avai
lable\, they tend to \ncover a narrow range of topics. Most also present d
ifficulties for \nthe end user ranging from platform portability to arcane
interfaces.\n\nIn this presentation\, we discuss Synergia\, a multi-parti
cle \naccelerator simulation tool developed at Fermilab\, funded by the DO
E \nSciDAC program. Synergia was designed to cover a variety of physics \n
processes while presenting a flexible and humane interface to the \nend us
er. It is a hybrid application\, primarily based on the \nexisting package
s mxyzptlk/beamline and Impact. Our presentation \ncovers Synergia's physi
cs capabilities and human interface. We focus \non the computational probl
ems we encountered and solved in the \nprocess of building an application
out of codes written in Fortran \n90\, C++\, and wrapped with a Python fro
nt-end. We also discuss some \napproaches we have used in the visualizatio
n of the high-dimensional \ndata that comes out of a particle accelerator
simulations\, \nespecially our work with OpenDX.\n\nhttps://indico.cern.ch
/event/0/contributions/1294380/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294380/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Job-monitoring over the Grid with GridIce infrastructure.
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294271@indico.cern.ch
DESCRIPTION:Speakers: G. Donvito (UNIVERSITà DEGLI STUDI DI BARI)\, G. To
rtone (INFN Napoli)\nIn a wide-area distributed and heterogeneous grid env
ironment\, monitoring\nrepresents an important and crucial task. It includ
es system status checking\, \nperformance tuning\, bottleneck detection\,
 troubleshooting\, and fault notification. In\nparticular\, a good monitoring infra
structure must provide the information to\ntrack down the current status o
f a job in order to locate any problems. Job\nmonitoring requires interop
eration between the monitoring system and other grid\nservices.\nCurrently\,
 development and deployment LCG testbeds integrate the GridICE monitoring\nsys
tem\, which measures and publishes the state of a grid resource at a particula
r\npoint in time. In this paper we present the efforts to integrate into the
 current\nGridICE infrastructure additional useful information about job
 status\, e.g. the\nname of the job\, the virtual organization to which it bel
ongs\, possibly the real and\nmapped user who has submitted the job\, the ef
fective CPU time consumed and its\nexit status.\n\nhttps://indico.cern.ch/
event/0/contributions/1294271/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294271/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The SAMGrid Database Server Component: Its Upgraded Infrastructure
and \nFuture Development Path
DTSTART;VALUE=DATE-TIME:20040929T161000Z
DTEND;VALUE=DATE-TIME:20040929T163000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294436@indico.cern.ch
DESCRIPTION:Speakers: S. Veseli (Fermilab)\nThe SAMGrid Database Server en
capsulates several important services\, such as\naccessing file metadata a
nd replica catalog\, keeping track of the processing \ninformation\, as we
ll as providing the runtime support for SAMGrid station \nservices. Recent
deployment of the SAMGrid system for CDF has resulted in \nunification of
the database schema used by CDF and D0\, and the complexity\nof changes r
equired for the unified metadata catalog has warranted a \ncomplete redesi
gn of the DB Server.\n\nWe describe here the architecture and features of
the new server. In particular\,\nwe discuss the new CORBA infrastructure t
hat utilizes python wrapper classes\naround IDL structs and exceptions. Su
ch infrastructure allows us to\nuse the same code on both server and clien
t sides\, which in turn results\nin significantly improved code maintainab
ility and easier development.\n\nWe also discuss future integration of the
new server with an SBIR II \nproject which is directed toward allowing th
e dbserver to access distributed\ndatabases\, implemented in different DB
systems and possibly using different\nschema.\n\nhttps://indico.cern.ch/ev
ent/0/contributions/1294436/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294436/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Virtual Organization Membership Service eXtension project (VOX
)
DTSTART;VALUE=DATE-TIME:20040929T153000Z
DTEND;VALUE=DATE-TIME:20040929T155000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294140@indico.cern.ch
DESCRIPTION:Speakers: Ian FISK (FNAL)\nCurrent grid development projects a
re being designed such that they \nrequire end users to be authenticated u
nder the auspices of \na "recognized" organization\, called a Virtual Orga
nization (VO). A VO \nmust establish resource-usage agreements with grid r
esource \nproviders. The VO is responsible for authorizing its members for
grid \ncomputing privileges. The individual sites and resources typically
\nenforce additional layers of authorization. \n\nThe VOX project develop
ed at Fermilab is an extension of VOMS\, \ndeveloped jointly for DataTAG b
y INFN and for DataGrid by CERN. \nThe Virtual Organization Membership Reg
istration Service (VOMRS) is a \nmajor component of the VOX project. VOMRS
is a service that provides \nthe means for registering members of a VO\,
and coordination of this \nprocess among the various VO and grid administr
ators. It consists of \na database to maintain user registration and insti
tutional \ninformation\, a server to handle members' notification and \nsy
nchronization with various interfaces\, web services and a \nweb user inte
rface for the input of data into the database and \nmanipulation of that d
ata. \nThe VOX project also includes a component for the Site AuthoriZatio
n \n(SAZ)\, which allows security authorities at a site to control access
\nto site resources and a component for the Local Resource \nAdministratio
n (LRAS)\, which associates the VO member with the local \naccount and loc
al resources on a grid cluster. \nThe current state of deployment and futu
re steps to improve the \nprototype and implement some new features will b
e presented.\n\nhttps://indico.cern.ch/event/0/contributions/1294140/
LOCATION:Interlaken\, Switzerland Brunig 3
URL:https://indico.cern.ch/event/0/contributions/1294140/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The LCG-AliEn interface\, a realization of a MetaGrid system
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294190@indico.cern.ch
DESCRIPTION:Speakers: S. Bagnasco (INFN Torino)\nAliEn (ALICE Environment)
is a GRID middleware developed and used in the context of ALICE\, the CER
N LHC \nheavy-ion experiment. In order to run Data Challenges exploiting b
oth AliEn “native” resources and any \ninfrastructure based on EDG-der
ived middleware (such as the LCG and the Italian GRID.IT)\, an interface \
nsystem was designed and implemented\; some details of a prototype were al
ready presented at CHEP2003. In \nthe spring of 2004 an ALICE Data Challen
ge began with the simulated data production on this multiple \ninfrastruct
ure\, thus qualifying as the first large production carried out transparen
tly making use of very \ndifferent middleware systems. This system is a pra
ctical realisation of the “federated” or “meta-” grid concept\, \n
and it has been successfully tested in a very large production. This talk
reports about new developments of \nthe interface system\, the successful
DC running experience\, the advantages and limitations of this concept\, \
nthe plans for the future and some lessons learned.\n\nhttps://indico.cern
.ch/event/0/contributions/1294190/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294190/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Computing for Belle
DTSTART;VALUE=DATE-TIME:20040927T090000Z
DTEND;VALUE=DATE-TIME:20040927T093000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294245@indico.cern.ch
DESCRIPTION:Speakers: N. KATAYAMA (KEK)\nThe Belle experiment operates at
the KEKB accelerator\, a high \nluminosity asymmetric energy e+ e- machine
. KEKB has achieved the \nworld highest luminosity of 1.39 times 10^34 cm-
2s-1. Belle \naccumulates more than 1 million B Bbar pairs in one good day
. \nThis corresponds to about 1.2 TB of raw data per day. The amount of \n
the raw and processed data accumulated so far exceeds 1.4 PB. \nBelle's co
mputing model has been a traditional one and very \nsuccessful so far. The
computing has been managed by minimal number \nof people using cost effec
tive solutions. Looking at the future\, \nKEKB/Belle plans to improve th
e luminosity to a few times 10^35 cm-\n2s-1\, 10 times as much as we obtai
n now. This presentation \ndescribes Belle's efficient computing operatio
ns\, struggles to \nmanage large amount of raw and physics data\, and plan
s for \nBelle computing for Super KEKB/Belle.\n\nhttps://indico.cern.ch/ev
ent/0/contributions/1294245/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294245/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Detector-independent vertex reconstruction toolkit (VERTIGO)
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294185@indico.cern.ch
DESCRIPTION:Speakers: W. Waltenberger (Austrian Academy of Sciences // Ins
titute of High Energy Physics)\nA proposal is made for the design and impl
ementation of a detector-independent vertex \nreconstruction toolkit and i
nterface to generic objects (VERTIGO). The first stage aims at re-\nusing
existing state-of-the-art algorithms for geometric vertex finding and fitt
ing by both linear \n(Kalman filter) and robust estimation methods. Protot
ype candidates for the latter are a wide \nrange of adaptive filter algori
thms being developed for LHC/CMS\, as well as proven ones (like \nZVTOP of
 SLC/SLD). In a second stage\, kinematic constraints will also be included
for the \nbenefit of complex multi-vertex topologies.\n\nThe design is bas
ed on modern object-oriented techniques. A core (RAVE) is surrounded by a
\nshell of abstract interfaces (using adaptors for access from/to the part
icular environment) and a \nset of analysis and debugging tools. The imple
mentation follows an open source approach \nand is easily adaptable to fut
ure standards.\n\nWork has started with the development of a specialized v
isualisation tool\, following the model-\nview-controller (MVC) paradigm\;
it is based on COIN3D and may also include interactivity by \nPYTHON scr
ipting. A persistency storage solution\, intended to provide a general dat
a \nstructure\, was originally based on top of ROOT and is currently being
extended for AIDA and \nXML compliance\; interfaces to existing or future
event reconstruction packages are easily \nimplementable. Flexible linkin
g to a math library is an important requirement\; at present we \nuse CLHE
P\, which could be replaced by a generic product.\n\nhttps://indico.cern.c
h/event/0/contributions/1294185/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294185/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Gfarm v2: A Grid file system that supports high-performance distri
buted and parallel data computing
DTSTART;VALUE=DATE-TIME:20040927T151000Z
DTEND;VALUE=DATE-TIME:20040927T153000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294301@indico.cern.ch
DESCRIPTION:Speakers: O. Tatebe (GRID TECHNOLOGY RESEARCH CENTER\, AIST)\n
Gfarm v2 is designed for facilitating reliable file sharing and\nhigh-perf
ormance distributed and parallel data computing in a Grid\nacross administ
rative domains by providing a Grid file system. A \nGrid\nfile system is
a virtual file system that federates multiple file\nsystems. It is possib
le to share files or data by mounting the\nvirtual file system. This pape
r discusses the design and\nimplementation of a secure\, robust\, scalable a
nd high-performance Grid\nfile system.\n\nThe most time-consuming\, but al
so the most typical\, task in data\ncomputing such as high energy physics\
, astronomy\, space exploration\,\nhuman genome analysis\, is to process a
set of files in the same way.\nSuch a process can be typically performed
independently on every file\nin parallel\, or at least have good locality.
Gfarm v2\nsupports high-performance distributed and parallel computing f
or such\na process by introducing a "Gfarm file"\, a new "file-affinity" \
nprocess\nscheduling based on file locations\, and new parallel file acces
s\nsemantics. An arbitrary group of files possibly dispersed across\nadmi
nistrative domains can be managed as a single Gfarm file. Each\nmember fi
le will be accessed in parallel in a new file view called\n"local file vie
w" by a parallel process possibly allocated by\nfile-affinity scheduling b
ased on replica locations of the member\nfiles. File-affinity scheduling
and new file view enable the ``owner\ncomputes'' strategy\, or ``move the
computation to data'' approach for\nparallel and distributed data computin
g of member files of a Gfarm\nfile in a single system image.\n\nhttps://in
dico.cern.ch/event/0/contributions/1294301/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294301/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A Regional Analysis Center at the University of Florida
DTSTART;VALUE=DATE-TIME:20040927T132000Z
DTEND;VALUE=DATE-TIME:20040927T134000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294207@indico.cern.ch
DESCRIPTION:Speakers: J. Rodriguez (UNIVERSITY OF FLORIDA)\nThe High Energ
y Physics Group at the University of Florida is involved in a variety \nof
projects ranging from High Energy Experiments at hadron and electron posi
tron \ncolliders to cutting edge computer science experiments focused on g
rid computing. \nIn support of these activities members of the Florida gro
up have developed and \ndeployed a local computational facility which cons
ists of several service nodes\, \ncomputational clusters and disk storage
services. The resources contribute \ncollectively or individually to a var
iety of production and development activities \nsuch as the UFlorida Tier2
center for the CMS experiment at the Large Hadron \nCollider (LHC)\, Mont
e Carlo production for the CDF experiment at Fermi Lab\, the \nCLEO experi
ment\, and research on grid computing for the GriPhyN and iVDGL projects.
\nThe entire collection of servers\, clusters and storage services is mana
ged as a \nsingle facility using the ROCKS cluster management system. Mana
ging the facility as \na single centrally managed system enhances our abil
ity to relocate and reconfigure \nthe resources as necessary in support of
both research and production activities. \nIn this paper we describe the
architecture deployed\, including details on our local \nimplementation of
the ROCKS systems\, how this simplifies the maintenance and \nadministrat
ion of the facility and finally the advantages and disadvantages of \nusin
g such a scheme to manage a modest size facility.\n\nhttps://indico.cern.c
h/event/0/contributions/1294207/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294207/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Managing third-party software for the LCG
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294150@indico.cern.ch
DESCRIPTION:Speakers: E. Poinsignon (CERN)\nThe External Software Service
of the LCG SPI project provides\nopen source and public domain packages
required by the LCG\nprojects and experiments. Presently\, more than 50
libraries\nand tools are provided for a set of platforms decided by the\
narchitect forum. All packages are installed following a standard\nprocedu
re and are documented on the web.\nA set of scripts has been developed to
ease new installations.\n\nIn addition to providing these packages\, a sof
tware configuration\nmanagement "toolbox" is provided\, containing a coher
ent set of\npackage-version combinations for each release of a project\, a
s well\nas a distribution script which manages the dependencies of the LCG
\nprojects such that users can easily download and install a release\nof a
 project including its dependent packages. Emphasis here has\nbeen put on e
ase of use for the end-user.\n\nhttps://indico.cern.ch/event/0/contributio
ns/1294150/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294150/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The role of scientific middleware in the future of HEP computing
DTSTART;VALUE=DATE-TIME:20040929T063000Z
DTEND;VALUE=DATE-TIME:20040929T070000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294153@indico.cern.ch
DESCRIPTION:Speakers: Miron Livny (Wisconsin)\nIn the 18 months since the
CHEP03 meeting in San Diego\, the HEP community deployed \nthe current gen
eration of grid technologies in a variety of settings. Legacy \nsoftware a
s well as recently developed applications was interfaced with middleware
\ntools to deliver end-to-end capabilities to HEP experiments in different
stages of \ntheir life cycles. In a series of data challenges\, reproces
sing efforts and data \ndistribution activities the community demonstrated
the benefits distributed \ncomputing can offer and the power a range of m
iddleware tools can deliver. After \nrunning millions of jobs\, moving ter
abytes of data\, creating millions of files and \nresolving hundreds of b
ug reports\, the community also exposed the limitations of \nthese middlew
are tools. As we move to the next level of challenges\, requirements \nan
d expectations\, we must also examine the methods and procedures we employ
to \ndevelop\, implement and maintain our common suite of middleware tool
s. The talk will \nfocus on the role common middleware developed by the sc
ientific community can and \nshould play in the software stack of current
and future HEP experiments.\n\nhttps://indico.cern.ch/event/0/contribution
s/1294153/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294153/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Deploying and operating LHC Computing Grid 2 (LCG2) During Data Ch
allenges
DTSTART;VALUE=DATE-TIME:20040927T145000Z
DTEND;VALUE=DATE-TIME:20040927T151000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294402@indico.cern.ch
DESCRIPTION:Speakers: M. Schulz (CERN)\nLCG2 is a large scale production g
rid formed by more than 40 worldwide distributed sites.\nThe aggregated nu
mber of CPUs exceeds 3000\, and several MSS systems are integrated in the system
. Almost \nall sites form an independent administrative domain.\nOn most
of the larger sites the local computing resources have been integrated int
o the grid. \n\nThe system has been used for large scale production by LH
C experiments\nfor several months.\n\nDuring the operation the software wen
t through several versions and had to \nbe upgraded\, including non-backward-
compatible upgrades. \n\nWe report on the experience gained setting up th
e service\, integrating sites and operating it under the \nload of the pro
duction.\n\nhttps://indico.cern.ch/event/0/contributions/1294402/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294402/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The CEDAR Project
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294350@indico.cern.ch
DESCRIPTION:Speakers: M. Whalley (IPPP\, UNIVERSITY OF DURHAM)\nWe will de
scribe the plans and objectives of the recently funded PPARC(UK) e-science
\nproject\, the Combined E-Science Data Analysis Resource for High Energy
Physics\n(CEDAR)\, which will combine the strengths of the well establishe
d and widely used\nHEPDATA library of HEP data and the innovative JETWEB D
ata/Monte Carlo comparison\nfacility built on the HZTOOL package and which
exploits developing grid technology.\nThe current status and future plans
of both of these individual sub-projects within\nthe CEDAR framework are
described showing how they will cohesively provide a) an\nextensive archiv
e of Reaction Data\, b) validation and tuning of Monte Carlo\nprogrammes
against the Reaction Data sets\, and c) a validated code repository for a\
nwide range of HEP code such as parton distribution functions and other\nc
alculation codes used by particle physicists. Once established\, it is envis
aged CEDAR\nwill become an important GRID tool used by LHC experimentalist
s in their analyses and\nmay well serve as a model in other branches of sc
ience which need to compare\ndata and complex simulations.\n\nhttps://
indico.cern.ch/event/0/contributions/1294350/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294350/
END:VEVENT
BEGIN:VEVENT
SUMMARY:GraXML
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294516@indico.cern.ch
DESCRIPTION:Speakers: J. Hrivnac (LAL)\nGraXML is a framework for the manipu
lation and visualization of 3D geometrical\nobjects in space. The full fra
mework consists of the GraXML toolkit\, libraries\nimplementing Generic an
d Geometric Models and end-user interactive front-ends.\nGraXML Toolkit pr
ovides a foundation for operations on 3D objects (both detector\nelements
and events). Each external source of 3D data is automatically\ntranslated
 into a Generic Model\, which is then analyzed and translated into\na Geometric M
odel using GraXML modules. The construction of this Geometric\nModel is pa
rametrised by several parameters (optimization level\, quality\nlevel\, ..
.) so that it can be used in applications with different requirements\n(gr
aphical or not). Two visualization applications are provided in the GraXML
\nframework: GraXML Interactive Display and GraXML Converter into various
3D\ngeometry formats. Other applications can be easily developed.\nThe pre
sentation will concentrate on GraXML graphical capabilities and its relation\n
with geometric data providers. The difference between specific GraXML feat
ures\nand properties of other similar tools will be highlighted. The quest
ions of\ndifferent visualization needs and possibilities for different kin
ds of\ngeometrical data will also be explained.\n\nhttps://indico.cern.ch/
event/0/contributions/1294516/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294516/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Vertex finding and B-tagging algorithms for the ATLAS Inner Detect
or
DTSTART;VALUE=DATE-TIME:20040930T153000Z
DTEND;VALUE=DATE-TIME:20040930T155000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294178@indico.cern.ch
DESCRIPTION:Speakers: A. Wildauer (UNIVERSITY OF INNSBRUCK)\nFor physics a
nalysis in ATLAS\, reliable vertex finding and\nfitting algorithms are imp
ortant. In the harsh environment\nof the LHC (~ 23 inelastic collisions ev
ery 25 ns) this task\nturns out to be particularly challenging. One of th
e guiding\nprinciples in developing the vertexing packages is a strong\nfo
cus on modularity and defined interfaces using the advantages\nof object o
riented C++. The benefit is the easy expandability\nof the vertexing with
additional fitting strategies integrated\nin the Athena framework.\n\nVari
ous implementations of algorithms and strategies dedicated\nto primary and
secondary vertex reconstruction using the full\nreconstruction of simulat
ed ATLAS events are presented.\n\nPrimary and secondary vertex finding is
essential for the\nidentification of b-jets in a reconstructed event. Resu
lts from\na modular and expandable b-tagging algorithm are shown using\nth
e presented strategies for vertexing.\n\nhttps://indico.cern.ch/event/0/co
ntributions/1294178/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294178/
END:VEVENT
BEGIN:VEVENT
SUMMARY:XML I/O in ROOT
DTSTART;VALUE=DATE-TIME:20040929T132000Z
DTEND;VALUE=DATE-TIME:20040929T134000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294367@indico.cern.ch
DESCRIPTION:Speakers: S. Linev (GSI)\nUntil now\, ROOT objects could be store
d only in a binary\, ROOT-specific file format. \nWithout the ROOT environme
nt the data stored in such files are not directly \naccessible. Storing ob
jects in XML format makes it easy to view and edit (with some \nrestrictio
n) the object data directly. It is also possible to use XML as an exchange \
nformat with other applications. Therefore XML streaming has been implemen
ted in \nROOT. Any object which is in the ROOT dictionary can be stored/re
trieved in XML \nformat. Two layouts of object representation in XML are s
upported: class-dependent \nand generic. In the first case all XML tag nam
es are derived from class and member \nnames. To avoid name intersections\
, XML namespaces for each class are used. A \nDocument Type Definition (DT
D) file is automatically generated for each class (or \nset of classes). I
t can be used to validate the structure of the XML document. The \ngeneric
layout of XML files includes tag names like "Object"\, "Member"\, "Item"
and \nso on. In this case the DTD is common for all produced XML files. Fu
rther \ndevelopment is required to provide tools for accessing created XML
files from other \napplications like: pure C++ code without ROOT librarie
s and dictionaries\, Java and \nso on.\n\nhttps://indico.cern.ch/event/0/c
ontributions/1294367/
LOCATION:Interlaken\, Switzerland Brunig 1 + 2
URL:https://indico.cern.ch/event/0/contributions/1294367/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Bayesian Approach for Combined Particle Identification in ALICE Ex
periment at LHC
DTSTART;VALUE=DATE-TIME:20040930T155000Z
DTEND;VALUE=DATE-TIME:20040930T161000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294170@indico.cern.ch
DESCRIPTION:Speakers: I. Belikov (CERN)\nOne of the main features of the A
LICE detector at LHC is the capability to identify particles in a very bro
ad \nmomentum range from 0.1 GeV/c up to 10 GeV/c. This can be achieved on
ly by combining\, within a common \nsetup\, several detecting systems that
are efficient in some narrower and complementary momentum sub-\nranges. T
he situation is further complicated by the amount of data to be processed
(about 10^7 events with \nabout 10^4 tracks in each). Thus\, the particle
identification (PID) procedure should satisfy the following \nrequirements
:\n1) It should be as automatic as possible. \n2) It should be able t
o combine PID signals of different nature (e.g. dE/dx and TOF measurements
).\n3) When several detectors contribute to the PID\, the procedure must p
rofit from this situation by providing an \nimproved PID.\n4) When only so
me detectors identify a particle\, the signals from the other detectors mu
st not affect the \ncombined PID.\n5) It should take into account the fact
that the PID depends\, due to different track selection\, on the kind of
\nanalysis.\n In this report we will demonstrate how combining the sing
le detector PID signals in the Bayesian way \nsatisfies these requirements
. We will also discuss how one can obtain the needed probability distribut
ion \nfunctions and a priori probability from the experimental data. The a
pproach has been implemented within the \nALICE offline framework\, and th
e algorithm efficiency and PID contamination have been estimated using the
\nALICE simulation.\n\nhttps://indico.cern.ch/event/0/contributions/12941
70/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294170/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Defining a Semantic Web Initiative for High Energy Physics
DTSTART;VALUE=DATE-TIME:20040930T120000Z
DTEND;VALUE=DATE-TIME:20040930T140000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294496@indico.cern.ch
DESCRIPTION:Speakers: B. White (SLAC)\nDuring a recent visit to SLAC\, Tim
Berners-Lee challenged the High \nEnergy Physics community to identify an
d implement HEP resources to \nwhich Semantic Web technologies could be ap
plied. This challenge \ncomes at a time when a number of other scientific
disciplines (for \nexample\, bioinformatics and chemistry) have taken a s
trong \ninitiative in making information resources compatible with Semanti
c \nWeb technologies and in the development of associated tools and \nappl
ications. \n\nThe CHEP conference series has a strong history of identifyi
ng and \nencouraging adoption of new technologies. The most notable of the
se \ntechnologies include the Web itself and Grid computing. The Semantic
\nWeb could have a similar potential.\n\nTopics of discussion in this BoF
include (but are not limited to):\nDefinition of the Semantic Web\; Semant
ic Web component technologies\; Review of \ncurrent Semantic Web-related e
fforts in HEP\; Semantic Web resources that are \npublicly available\; Wha
t needs to be done next.\n\nhttps://indico.cern.ch/event/0/contributions/1
294496/
LOCATION:Interlaken\, Switzerland Brunig 3
URL:https://indico.cern.ch/event/0/contributions/1294496/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Participation of Russian sites in the Data Challenge of ALICE expe
riment in 2004
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294188@indico.cern.ch
DESCRIPTION:Speakers: G. Shabratova (Joint Institute for Nuclear Research
(JINR))\nThe report presents an analysis of the Alice Data Challenge 2004.
\nThis Data Challenge has been performed on two different distributed \nc
omputing environments. The first one is the Alice Environment for \ndistri
buted computing (AliEn) used standalone. Presently this \nenvironment allo
ws ALICE physicists to obtain results on simulation\, \nreconstruction and
analysis of data in ESD format for AA and pp \ncollisions at LHC energies
. The second environment is the LCG-2 \nmiddleware accessed via AliEn with
the help of an interface\, \ndeveloped at INFN. Three Russian sites have
been configured as AliEn \nnodes for the Data Challenge. These sites (IHEP
 at Protvino\, ITEP in \nMoscow and JINR at Dubna) could run a maximum o
f 86 jobs. The initial \nanalysis shows that the architecture of one site wa
s not adequate for \ndistributed computing. Another farm had nodes with in
sufficient RAM \nfor efficient job processing. All these problems have bee
n cured in \nsubsequent DC phases. \nActions have also been taken to reduce t
he downtime due to wrong site \nconfiguration. The local AliEn server ins
talled at the JINR site has \nbeen used as a standard configuration for th
e other Russian sites. \nThe total number of jobs processed in Russia cons
titutes ~2% of the total \nrun in the ALICE DC 2004.\n\nhttps://indico.cern.ch/
event/0/contributions/1294188/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294188/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Athena Control Framework in Production\, New Developments and
Lessons Learned
DTSTART;VALUE=DATE-TIME:20040927T145000Z
DTEND;VALUE=DATE-TIME:20040927T151000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294172@indico.cern.ch
DESCRIPTION:Speakers: P. Calafiura (LBNL)\nAthena is the Atlas Control Fra
mework\, based on the common Gaudi architecture\,\noriginally developed by
LHCb. In 2004 two major production efforts\, the Data\nChallenge 2 and th
e Combined Test-beam reconstruction and analysis were structured as\nAthen
a applications. To support the production work we have added new features
to\nboth Athena and Gaudi: an "Interval of Validity" service to manage tim
e-varying\nconditions and detector data\; a History service\, to manage th
e provenance information\nof each event data object\; and a toolkit to sim
ulate and analyze the overlay of\nmultiple collisions during the detector
sensitive time (pile-up). To support the\nanalysis of simulated and test-b
eam data in Athena\, we have introduced a Python-based\nscripting interface\
, based on the CERN LCG tools PyLCGDict\, PyRoot and PyBus. The\nscripting
 interface allows one to fully configure any Athena component\, interactively\
nbrowse and modify this configuration\, as well as examine the content of
any data\nobject in the event or detector store.\n\nhttps://indico.cern.ch
/event/0/contributions/1294172/
LOCATION:Interlaken\, Switzerland Brunig
URL:https://indico.cern.ch/event/0/contributions/1294172/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The IBM Research Global Technology Outlook
DTSTART;VALUE=DATE-TIME:20040929T100000Z
DTEND;VALUE=DATE-TIME:20040929T103000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294161@indico.cern.ch
DESCRIPTION:Speakers: Dave McQueeney (IBM)\nThe Global Technology Outlook
(GTO) is IBM Research’s projection of the\nfuture for information techno
logy (IT). The GTO identifies progress and\ntrends in key indicators such
as raw computing speed\, bandwidth\, storage\,\nsoftware technology\, and
business modeling. These new technologies have the\npotential to radically
transform the performance and utility of tomorrow's\ninformation processi
ng systems and devices\, ultimately creating new levels\nof business value
.\n\nhttps://indico.cern.ch/event/0/contributions/1294161/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294161/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Specifying Selection Criteria using C++ Expression Templates
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294138@indico.cern.ch
DESCRIPTION:Speakers: C. Jones (CORNELL UNIVERSITY)\nGeneric programming a
s exemplified by the C++ standard library makes\nuse of functions or funct
ion objects (objects that accept function\nsyntax) to specialize generic a
lgorithms for particular uses. Such\nseparation improves code reuse witho
ut sacrificing efficiency. We\nemployed this same technique in our combin
atoric engine: DChain. In\nDChain\, physicists combine lists of child par
ticles to form a \nlist of parent hypotheses. E.g.\, d0 = pi.plus() * K
.minus(). The\nselection criteria for the hypothesis are defined in a func
tion or\nfunction object that is passed to the list's constructor.\n\nHowe
ver\, C++ requires that functions and class declarations be defined\noutsi
de the scope of a function. \nTherefore physicists are forced to separate
the code that defines the\ncombinatorics from the code that sets the sele
ction criteria. We will\ndiscuss a technique using C++ expression templat
es to allow users to\ndefine function objects using a mathematical express
ion directly in\ntheir main function\, e.g.\, \nfunc = (sqrt( beamEnergy*b
eamEnergy - vPMag*vPMag) >= 5.1*k_GeV). \n\nUse of such techniques can gr
eatly decrease the coding 'excess' needed\nto perform an analysis.\n\nhttp
s://indico.cern.ch/event/0/contributions/1294138/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294138/
END:VEVENT
BEGIN:VEVENT
SUMMARY:CMS Software Installation
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294235@indico.cern.ch
DESCRIPTION:Speakers: K. Rabbertz (UNIVERSITY OF KARLSRUHE)\nFor data anal
ysis in an international collaboration it is important\nto have an efficie
nt procedure to distribute\, install and update the\ncentrally maintained
software. This is even more true when not only\nlocally but also grid acce
ssible resources are to be exploited.\nA practical solution will be presen
ted that has been successfully employed\nfor CMS software installations on
systems ranging from\nphysicists' notebooks up to LCG2 enabled clusters.\
nIt is based on perl for an automated production of rpms\nand xcmsi\, a t
ool written in perl and perl/Tk\, to facilitate\ninstalling\, updating and
 verifying our rpm-packaged software.\n\nhttps://indico.cern.ch/event/0/co
ntributions/1294235/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294235/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The status of Fermilab Enstore Data Storage System
DTSTART;VALUE=DATE-TIME:20040929T122000Z
DTEND;VALUE=DATE-TIME:20040929T124000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294430@indico.cern.ch
DESCRIPTION:Speakers: A. Moibenko (FERMI NATIONAL ACCELERATOR LABORATORY\,
 USA)\nFermilab has developed and successfully uses the Enstore Data Storage\n
System. It is a primary data store for the Run II Collider Experiments\, \
nas well as for others. It provides data storage in robotic tape libra
ries\naccording to requirements of the experiments. High fault tolerance a
nd\navailability\, as well as multilevel priority based request processing
\nallows experiments to effectively store and access data stored in the\nE
nstore\, including storing raw data from data acquisition systems.\nThe di
stributed structure and modularity of Enstore allow the system to be scal
ed\nand more storage equipment to be added as the requirements grow.\nCur
rently the Fermilab data storage system Enstore includes\n5 robotic tape l
ibraries and 96 tape drives of different types. The amount of data\nstore
d in the system is ~1.7 petabytes. Users access Enstore directly using\na sp
ecial command. They can also use ftp\, grid ftp and SRM interfaces to the dC
ache\nsystem\, which uses Enstore as its lower-layer storage.\n\nhttps://indico.cern.ch/
event/0/contributions/1294430/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294430/
END:VEVENT
BEGIN:VEVENT
SUMMARY:PyBus -- A Python Software Bus
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294187@indico.cern.ch
DESCRIPTION:Speakers: W. Lavrijsen (LBNL)\nA software bus\, just like its
hardware equivalent\, allows for the discovery\,\ninstallation\, configura
tion\, loading\, unloading\, and run-time replacement of software\ncompone
nts\, as well as channeling of inter-component communication.\nPython\, a
 popular open-source programming language\, encourages a modular design of\
nsoftware written in it\, but it offers little or no component functionali
ty. However\,\nthe language and its interpreter provide sufficient hooks t
o implement a thin\,\nintegral layer of component support. This functional
ity can be presented to the\ndeveloper in the form of a module\, making it
very easy to use.\nThis paper describes a Python module\, PyBus\, with wh
ich the concept of a 'software\nbus' can be realised in Python. It demonst
rates\, within the context of the Atlas\nsoftware framework Athena\, how P
yBus can be used for the installation and (run-time)\nconfiguration of sof
tware\, not necessarily Python modules\, from a Python application\nin a w
ay that is transparent to the end-user.\n\nhttps://indico.cern.ch/event/0/
contributions/1294187/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294187/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Enterasys - Networks that Know
DTSTART;VALUE=DATE-TIME:20040929T093000Z
DTEND;VALUE=DATE-TIME:20040929T100000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294560@indico.cern.ch
DESCRIPTION:Speakers: J. ROESE ()\nToday and in the future businesses need
an intelligent network. \nAnd Enterasys has the smarter solution. Our act
ive network uses a combination of \ncontext-based and embedded security te
chnologies -\nas well as the industry’s first automated response capabil
ity\n- so it can manage who is using your network.\nOur solution also prot
ects the entire enterprise - from the\nedge\, through the distribution lay
er\, and into the core of\nthe network. Threats are recognized and isolate
d at the\nuser level\, rather than taking your entire network down.\nIt ev
en has the ability to coexist with and enhance your\nlegacy data networkin
g infrastructure and existing security\nappliances - regardless of the ven
dor. By continually offering\na context-based analysis of network traffic\
, our solution\nallows you to see not only what the problem is\, but also\
nwhere it is\, and who caused it. And\, with the industry's most\nadvanced
controls\, we're the first solution that's able to\nresolve threats acros
s the entire network - dynamically\nor on demand.\n\nhttps://indico.cern.c
h/event/0/contributions/1294560/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294560/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Physics Validation of the LHC Software
DTSTART;VALUE=DATE-TIME:20040930T070000Z
DTEND;VALUE=DATE-TIME:20040930T073000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294294@indico.cern.ch
DESCRIPTION:Speakers: Fabiola Gianotti (CERN)\nThe LHC Software will be co
nfronted with unprecedented challenges as \n soon as the LHC turns on.\n
We summarize the main Software requirements coming from the LHC \n detect
ors\, triggers and physics\, and we discuss several examples of \n Softwa
re components developed by the experiments and the LCG project \n (simulat
ion\, reconstruction\, etc.)\, their validation\, and their \n adequacy fo
r LHC physics.\n\nhttps://indico.cern.ch/event/0/contributions/1294294/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294294/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Simulation and reconstruction of heavy ion collisions in the ATLAS
detector.
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294505@indico.cern.ch
DESCRIPTION:Speakers: P. Nevski (BROOKHAVEN NATIONAL LABORATORY)\nThe ATLA
S detector is a sophisticated multi-purpose detector with \nover 10 millio
n electronics channels designed to study high-pT \nphysics at LHC. Due to
 their high multiplicity\, reaching almost a\nhundred thousand particles per
event\, heavy ion collisions pose a \nformidable computational challenge.
 A set of tools has been created \nto realistically simulate and fully re
construct the most difficult \ncase of central Pb-Pb collisions (impact pa
rameter \n\nhttps://indico.cern.ch/event/0/contributions/1294505/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294505/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Secure Grid Data Management Technologies in ATLAS
DTSTART;VALUE=DATE-TIME:20040929T130000Z
DTEND;VALUE=DATE-TIME:20040929T132000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294324@indico.cern.ch
DESCRIPTION:Speakers: M. Branco (CERN)\nIn a resource-sharing environment
on the grid both grid users and grid\nproduction managers call for securit
y and data protection from\nunauthorized access. To secure data management
several novel grid\ntechnologies were introduced in ATLAS data management
. Our presentation\nwill review new grid technologies introduced in HEP pr
oduction environment\nfor database access through the Grid Security Infras
tructure (GSI): secure\nGSI channel mechanisms for database services deliv
ery for reconstruction\non grid clusters behind closed firewalls\; grid ce
rtificate authorization\ntechnologies for production database access contr
ol and scalable locking\ntechnologies for the chaotic 'on-demand' producti
on mode. We address the\nseparation of file transfer process from the file
 catalog interaction\nprocess (file location registration\, file metadata
querying\, etc.)\,\ndatabase transactions capturing data integrity and the
high availability\nfault-tolerant database solutions for the core data ma
nagement tasks. We\ndiscuss the complementarities of the security model fo
r the online and the\noffline computing environments\; best practices (and
realities) of the\ndatabase users' roles: administrators\, developers\, d
ata writers\, data\nreplicators and data readers\, need for elimination of
the clear-text\npasswords\; stateless and stateful protocols for the bina
ry data transfers\nover secure grid data transport channels in heterogeneo
us grids. We\npresent the security policies and technologies integrated in
the ATLAS\nProduction Data Management System - Don Quijote (GSI-enabled s
ervices\noriented architecture\, GSI proxy certificate delegation) and app
roaches\nfor seamless integration of Don Quijote with POOL event collectio
ns and\ntag databases - while making the system non-intrusive to end-users
.\n\nhttps://indico.cern.ch/event/0/contributions/1294324/
LOCATION:Interlaken\, Switzerland Brunig 3
URL:https://indico.cern.ch/event/0/contributions/1294324/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Boosting the data logging rates in run 4 of the PHENIX experiment
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294431@indico.cern.ch
DESCRIPTION:Speakers: Martin purschke ()\nWith the improvements in CPU and
disk speed over the past years\, we\nwere able to exceed the original des
ign data logging rate of 40MB/s by\na factor of 3 already for the Run 3 in
2002. For the Run 4 in 2003\, we\nincreased the raw disk logging capacit
y further to about 400MB/s.\n\nAnother major improvement was the implement
ation of compressed data\nlogging. The PHENIX raw data\, after application
of the standard data\nreduction techniques\, were found to be further com
pressible by\nutilities like gzip by almost a factor of 2\, and we defined
a PHENIX\nstandard of a compressed raw data format. The buffers that make
 up a\nraw data file are compressed\, and the\
nresulting smaller data volume is written out to disk. For a long time\,\nthi
s proved to be much too slow to be usable in the DAQ\, until we\ncould shi
ft the compression to the event builder machines and so\ndistributed the l
oad over many fast CPUs. We also selected a\ndifferent compression algori
thm\, LZO\, which is about a factor of 4\nfaster than the "compress2" algo
rithm used internally in gzip. With\nthe compression\, the raw data volume
shrinks to about 60% of the\noriginal size\, boosting the original data r
ate before compression to\nmore than 700MB/s.\n\nWe will then present the t
echniques and architecture\, and the impact\nthis has had on the data taki
ng in Run 4.\n\nhttps://indico.cern.ch/event/0/contributions/1294431/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294431/
END:VEVENT
BEGIN:VEVENT
SUMMARY:On Distributed Database Deployment for the LHC Experiments
DTSTART;VALUE=DATE-TIME:20040927T130000Z
DTEND;VALUE=DATE-TIME:20040927T132000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294464@indico.cern.ch
DESCRIPTION:Speakers: Dirk Duellmann ()\nWhile there are differences among
the LHC experiments in their views of the role of \ndatabases and their d
eployment\, there is relatively widespread agreement on a number \nof prin
ciples:\n\n1. Physics codes will need access to database-resident data.
The need for database \naccess is not confined to middleware and services:
physics-related data will reside \nin databases. \n\n2. Database-reside
nt data will be distributed\, and replicated. A single\, \ncentralized da
tabase\, at CERN or elsewhere\, does not suffice. \n\n3. Distributed depl
oyment infrastructure should be open to the use of different \ntechnologie
s as appropriate at the various Tier N sites. \n\nA variety of approaches
to distributed deployment have been explored in the context \nof individua
l experiments\; indeed\, a degree of distributed deployment has been \nint
egral to the computing model tests of some experiments (cf. ATLAS) in thei
r 2004 \ndata challenges. Approaches to replication have also been invest
igated in the \ncontext of specific databases\, often with vendor-specific
replication tools (e.g.\, \nOracle Replication via Streams for the LCG Fi
le Catalog and the Oracle \ninstantiation of the LCG conditions database\;
MySQL tools for replication in the \nMySQL instantiation of the LCG cond
itions database). XML exchange mechanisms have \nalso been discussed. Di
stributed database deployment\, though\, is more than a \nmiddleware and a
pplications software issue—a successful strategy must involve those \nwh
o will be responsible for systems deployment and administration at LHC gri
d \nsites. \n\nWe describe the status of ongoing work in this area\, and
discuss the prospects for \ncomponents of a common approach to distributed
deployment in the time frame of the \n2005 LHC data challenges.\n\nhttps:
//indico.cern.ch/event/0/contributions/1294464/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294464/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Validation of the GEANT4 Bertini Cascade model and data analysis u
sing the Parallel ROOT Facility on a Linux cluster
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294282@indico.cern.ch
DESCRIPTION:Validation of hadronic physics processes of the Geant4 simulat
ion\ntoolkit is a very important task to ensure adequate physics results f
or\nthe experiments being built at the Large Hadron Collider. We report on
\nsimulation results obtained using the Geant4 Bertini cascade\ndouble-dif
ferential production cross-sections for various target\nmaterials and inci
dent hadron kinetic energies between 0.1-10 GeV [1\, 2].\n\nThe cross-sect
ion benchmark study in this work has been performed using\na Linux cluster
set up with the Red Hat Linux based NPACI Rocks Cluster\nDistribution. Fo
r analysis of the validation data we have used the\nParallel ROOT Facility
(PROOF). PROOF has been designed for setting up a\nparallel data analysis
environment in an inhomogeneous computing\nenvironment. Here we use a hom
ogeneous Rocks cluster and automatic class\ngeneration for PROOF event dat
a-analysis [3].\n\n[1] J. Beringer\, "(p\, xn) Production Cross Sections:
 A benchmark Study\n for the Validation of Hadronic Physics Simulation at
LHC"\,\n CERN-LCGAPP-2003-18.\n\n[2] A. Heikkinen\, N. Stepanov\, and
J.P. Wellisch\, "Bertini intra-nuclear\n cascade implementation in Gean
t4"\, arXiv: nucl-th/0306008\n\n[3] F. Rademakers\, M. Goto\, P. Canal\, R
. Brun\, "ROOT Status and Future\n Developments"\, arXiv: cs.SE/0306078
\n\nhttps://indico.cern.ch/event/0/contributions/1294282/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294282/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Geometry Package for the Pierre Auger Observatory
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294451@indico.cern.ch
DESCRIPTION:Speakers: L. Nellen (I. DE CIENCIAS NUCLEARES\, UNAM)\nThe Pie
rre Auger Observatory consists of two sites with several\nsemi-autonomous
detection systems. Each component\, and in some cases\neach event\, provid
es a preferred coordinate system for simulation and\nanalysis. To avoid a
proliferation of coordinate systems in the\noffline software of the Pierr
e Auger Observatory\, we have developed a\ngeometry package that allows th
e treatment of fundamental geometrical\nobjects in a coordinate-independen
t way. This package makes\ntransformations between coordinate systems tran
sparent to the user\,\nwithout completely taking control of the internal repre
sentation\naway from the user. \n\nThe geometry package allows easy
combination of the results from\ndifferent sub-detectors\, at the same tim
e as ensuring that effects\nlike the earth curvature\, which is non-neglig
ible on the scale of a\nsingle Auger site\, are dealt with properly. \n\nT
he internal representations used are Cartesian. For interfacing\,\nincludi
ng I/O\, the package includes support for Cartesian coordinates\,\ngeodeti
c (latitude/longitude and UTM)\, and astrophysical coordinate\nsystems.\n\
nhttps://indico.cern.ch/event/0/contributions/1294451/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294451/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Python-based physics analysis environment for LHCb
DTSTART;VALUE=DATE-TIME:20040930T151000Z
DTEND;VALUE=DATE-TIME:20040930T153000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294255@indico.cern.ch
DESCRIPTION:Speakers: P. MATO (CERN)\nBender\, the Python-based physics an
alysis application for LHCb\, combines the best\nfeatures of the underlying Gaud
i C++ software architecture with the flexibility of the Python\nscripting lang
uage and provides end-users with a friendly\, physics-analysis-oriented\nenvir
onment. It is based\, on the one hand\, on the generic Python bindings for the G
audi\nframework\, called GaudiPython\, and\, on the other hand\, on an efficie
nt C++ physics\nanalysis toolkit called LoKi. Bender and LoKi use the tool
s from the physics analysis\nframework\, called DaVinci. Bender achieves a
clear separation between the technical\ndetails and the physical contents
of end-user physicist's code. The usage of Python\,\nAIDA abstract interf
aces and standard LCG reflection techniques allows an easy\nintegration of
Bender's analysis environment with third party products like the\ninterac
tive event display and visualization tools like Panoramix/LaJoconde\, ROOT
and\nHippoDraw. We'll present the overall design and capabilities of the
system\, its\nstatus and prospects.\n\nhttps://indico.cern.ch/event/0/cont
ributions/1294255/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294255/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Muon Reconstruction Software in CMS
DTSTART;VALUE=DATE-TIME:20040930T120000Z
DTEND;VALUE=DATE-TIME:20040930T122000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294533@indico.cern.ch
DESCRIPTION:Speakers: N. Neumeister (CERN / HEPHY VIENNA)\nThe CMS detecto
r has a sophisticated four-station muon system made up of tracking chamber
s (Drift Tubes\, \nCathode Strip Chambers) and dedicated trigger chambers.
A muon reconstruction software based on Kalman \nfilter techniques has be
en developed which reconstructs muons in the standalone muon system\, usin
g \ninformation from all three types of muon detectors\, and links the res
ulting muon tracks with tracks \nreconstructed in the silicon tracker. The
 software is designed to work both for offline reconstruction and for \n
online event selection within the CMS High-Level Trigger (HLT). Since the
quality of the selection algorithms \nused in the HLT system is of utmost
 importance\, the software has been designed using modern object-\noriented s
oftware techniques and is implemented within the CMS reconstruction softwa
re framework. The \nsystem should be able to select events with final-stat
e muons\, indicating interesting physics. The design\, \nimplementation an
d performance of the CMS muon reconstruction software are presented. We wil
l show that \noffline code with few modifications can be used in the HLT sy
stem\, by making use of the concepts of regional \nand conditional recon
struction. The implementation and performance of possible HLT selection al
gorithms are \nillustrated.\n\nhttps://indico.cern.ch/event/0/contribution
s/1294533/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294533/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Monte Carlo Event Generation in a Multilanguage\, Multiplatform En
vironment
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294247@indico.cern.ch
DESCRIPTION:Speakers: N. Graf (SLAC)\nWe discuss techniques used to access
legacy event generators from modern simulation \nenvironments. Examples w
ill be given of our experience within the linear collider \ncommunity acce
ssing various FORTRAN-based generators from within a Java \nenvironment. C
oding to a standard interface and use of shared object libraries \nenables
runtime selection of generators\, and allows for extension of the suite o
f \navailable generators without having to rewrite core code.\n\nhttps://i
ndico.cern.ch/event/0/contributions/1294247/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294247/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Performance analysis of Cluster File System on Linux
DTSTART;VALUE=DATE-TIME:20040927T153000Z
DTEND;VALUE=DATE-TIME:20040927T155000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294347@indico.cern.ch
DESCRIPTION:Speakers: Y. CHENG (COMPUTING CENTER\,INSTITUTE OF HIGH ENERGY
PHYSICS\,CHINESE ACADEMY OF SCIENCES)\nWith the development of Linux and
improvement of PC performance\, PC clusters used \nas high performance co
mputing systems are becoming very popular. The performance of \nI/O subsyste
m and cluster file system is critical to a high performance computing \nsy
stem. In this work the basic characteristics of cluster file systems and t
heir \nperformance are reviewed. The performance of four distributed clust
er file systems\, \nAFS\, NFS\, PVFS and CASTOR\, was measured. The measu
rements were carried out on CERN \nversion RedHat 7.3.3 Linux using standa
rd I/O performance benchmarks. Measurements \nshow that for a single-server\,
 single-client configuration\, NFS\, CASTOR and PVFS have \nbetter performa
nce and write rate slightly increases while the record length becomes \nla
rger. CASTOR has the best throughput when the number of write processes in
creases. \nPVFS and CASTOR are tested on multi-server and multi-client sys
tem. The two file \nsystems nicely distribute data I/O to all servers. CAS
TOR RFIO protocol shows the \nbest utilization of network bandwidth and is op
timized for large files. CASTOR \nalso has better scalability
as a cluster file system. Based on the test some \nmethods are proposed t
o improve the performance of cluster file systems.\n\nhttps://indico.cern.c
h/event/0/contributions/1294347/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294347/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Description of the Atlas Detector
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294470@indico.cern.ch
DESCRIPTION:Speakers: Vakhtang tsulaia ()\nThe ATLAS Detector consists of
 several major subsystems: an inner detector composed of\npixels\, microstri
p detectors and a transition radiation tracker\; electromagnetic and\nhadr
onic calorimetry\, and a muon spectrometer. Over the last year\, these sys
tems have\nbeen described in terms of a set of geometrical primitives know
n as GeoModel.\nSoftware components for detector description interpret str
uctured data from a\nrelational database and build from that a complete de
scription of the detector. This\ndescription is now used in the Geant-4 b
ased simulation program and also for\nreconstruction. Detector-specific se
rvices that are not handled in a generic way (e.g.\nstrip pitches and calor
imetric tower boundaries) are added as an additional layer\nwhich is synch
ed to the raw geometry. Detector misalignments may also be fed \nthrough
the model to both simulation and reconstruction. Visualization of the\nde
tector geometry is accomplished through Open Inventor and its HEPVis exten
sions.\nThe ATLAS geometry system in the last year has undergone extensive
visual debugging\,\nand experience with the new system has been gained no
t only through the data challenge\nbut also through the combined test beam.
This talk gives an overview of the ATLAS\ndetector description and discu
sses operational experience with the system in the data\nchallenges and co
mbined test beam.\n\nhttps://indico.cern.ch/event/0/contributions/1294470/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294470/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Integration of ATLAS Software in the Combined Beam Test
DTSTART;VALUE=DATE-TIME:20040927T155000Z
DTEND;VALUE=DATE-TIME:20040927T161000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294206@indico.cern.ch
DESCRIPTION:Speakers: M. Dobson (CERN)\nThe ATLAS collaboration had a Comb
ined Beam Test from May until \nOctober 2004. Collection and analysis of d
ata required integration \nof several software systems that are developed
as prototypes for \nthe ATLAS experiment\, due to start in 2007. Eleven di
fferent detector \ntechnologies were integrated with the Data Acquisition
system and were\ntaking data synchronously. The DAQ was integrated with th
e High Level \nTrigger software\, which will perform online selection of A
TLAS events. \nThe data quality was monitored at various stages of the Tri
gger and \nDAQ chain. The data was stored in a format foreseen for ATLAS a
nd was \nanalyzed using a prototype of the experiment's offline software\,
using \nthe Athena framework. Parameters recorded by the Detector Control
System \nwere recorded in a prototype of the ATLAS Conditions Data Base a
nd were \nmade available for the offline analysis of the collected event d
ata. \nThe combined beam test provided a unique opportunity to integrate a
nd \nto test the prototype of ATLAS online and offline software in its com
plete \nfunctionality.\n\nhttps://indico.cern.ch/event/0/contributions/129
4206/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294206/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Applications of the FLUKA Monte Carlo code in High Energy and Acce
lerator Physics
DTSTART;VALUE=DATE-TIME:20040927T124000Z
DTEND;VALUE=DATE-TIME:20040927T130000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294175@indico.cern.ch
DESCRIPTION:Speakers: G. Battistoni (INFN Milano\, Italy)\nThe FLUKA Monte
Carlo transport code is being used for different\napplications in High En
ergy\, Cosmic Ray and Accelerator Physics.\nHere we review some of the ong
oing projects which are\nbased on this simulation tool. \nIn particular\,
as far as accelerator physics is concerned\, we wish\nto summarize the wor
k in progress for the LHC and the CNGS project.\nFrom the point of view of
 experimental activity\, apart from the activity \ngoing on\nin the framework of
LHC detectors\, we wish to discuss\nas a major example the application o
f FLUKA to the ICARUS Liquid \nArgon TPC.\nUpgrades in cosmic ray calculat
ions\, to demonstrate the capability\nof FLUKA to reproduce existing exper
imental data\, are also presented.\n\nhttps://indico.cern.ch/event/0/contri
butions/1294175/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294175/
END:VEVENT
BEGIN:VEVENT
SUMMARY:RDBC: ROOT DataBase Connectivity
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294211@indico.cern.ch
DESCRIPTION:Speakers: V. Onuchin (CERN\, IHEP)\nThe RDBC (ROOT DataBase Co
nnectivity) library is a C++ implementation \nof the Java Database Con
nectivity Application Programming Interface.\nIt provides a DBMS-independe
nt interface to relational databases from \nROOT as well as a generic SQL
database access framework. \nRDBC also extends the ROOT TSQL abstract inte
rface.\nCurrently it is used in two large experiments: \n - in Minos as an i
nterface to MySQL and Oracle databases\n - in Phenix as an interface to the Postg
reSQL database.\nIn this paper we will describe the main features and app
licability of \nthis library.\n\nhttps://indico.cern.ch/event/0/contributi
ons/1294211/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294211/
END:VEVENT
BEGIN:VEVENT
SUMMARY:DIRAC Lightweight information and monitoring services using XML-RP
C and Instant Messaging
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294383@indico.cern.ch
DESCRIPTION:Speakers: I. Stokes-Rees (UNIVERSITY OF OXFORD PARTICLE PHYSIC
S)\nThe DIRAC system developed for the CERN LHCb experiment is a grid \nin
frastructure for managing generic simulation and analysis jobs. It \nenabl
es jobs to be distributed across a variety of computing \nresources\, such
as PBS\, LSF\, BQS\, Condor\, Globus\, LCG\, and individual \nworkstation
s.\n\nA key challenge of distributed service architectures is that there i
s \nno single point of control over all components. DIRAC addresses this \
nvia two complementary features:\na distributed Information System\, and a
n XMPP (Extensible Messaging \nand Presence Protocol) Instant Messaging fr
amework.\n\nThe Information System provides a concept of local and remote
\ninformation sources.\nAny information which is not found locally will be
fetched from \nremote sources. This allows a component to define its own
state\, \nwhile fetching the state of other components directly from those
\ncomponents\, or via a central Information Service. We will present the
\narchitecture\, features\, and performance of this system.\n\nXMPP has pr
ovided DIRAC with numerous advantages. As an \nauthenticated\, robust\, lig
htweight\, and scalable asynchronous message \npassing system\, XMPP is us
ed\, in addition to XML-RPC\, for inter-\nService communication\, making D
IRAC very fault-tolerant\, a critical \nfeature when using Service Oriente
d Architectures. XMPP\nis also used for monitoring real-time behaviour of
the various DIRAC \ncomponents.\nFinally\, XMPP provides XML-RPC like fac
ilities which are being \ndeveloped to provide control channels direct to
Services\, Agents\, and \nJobs. We will describe our novel use of Instant
Messaging in DIRAC \nand discuss directions for the future.\n\nhttps://ind
ico.cern.ch/event/0/contributions/1294383/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294383/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Grand Challenges facing Storage Systems
DTSTART;VALUE=DATE-TIME:20040929T073000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294155@indico.cern.ch
DESCRIPTION:Speakers: Jai Menon (IBM)\nIn this talk\, we will discuss the
future of storage systems. In particular\, we will \nfocus on several big
challenges which we are facing in storage\, such as being able \nto build\
, manage and backup really massive storage systems\, being able to find \n
information of interest\, being able to do long-term archival of data\, an
d so on. We \nalso present ideas and research being done to address these
challenges\, and provide \na perspective on how we expect these challenges
to be resolved as we go forward.\n\nhttps://indico.cern.ch/event/0/contri
butions/1294155/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294155/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Generic logging layer for the distributed computing
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294565@indico.cern.ch
DESCRIPTION:Speakers: V. Fine (BROOKHAVEN NATIONAL LABORATORY)\nMost HENP
experiment software includes a logging or tracing API allowing for \ndispl
aying in a particular format important feedback coming from the core \napp
lication. However\, inserting log statements into the code is a low-tech
method \nfor tracing the program execution flow and often leads to a flood
of messages in \nwhich the relevant ones are occluded. In a distributed c
omputing environment\, \naccessing the information via a log-file is no lo
nger applicable and the approach \nfails to provide runtime tracing.\nRun
ning a job involves a chain of events where many components are involved
often \nwritten in diverse languages and not offering a consistent and eas
ily adaptable\ninterface for logging important events.\nWe will present an
approach based on a new generic layer built on top of a logger \nfamily d
erived from the Jakarta log4j project that includes log4cxx\, log4c\, log4
perl \npackages. This provides consistency across packages and frameworks.
\nAdditionally\, the power of using log4j is the possibility to enable l
ogging (or \nfeatures) at runtime without modifying the application binary
or the wrapper layers.\nWe provide a C++ abstract class library that serv
es as a proxy between the \napplication framework and the distributed envi
ronment. The approach is designed so \nthat the debugging statements can r
emain in shipped code without incurring a heavy \nperformance cost. Loggin
g equips the developer with as detailed context as necessary\nfor applicat
ion failures\, from testing and quality assurance to a production mode with a \nli
mited amount of information. We will explain and show its implementation
in the \nSTAR production environment.\n\nhttps://indico.cern.ch/event/0/co
ntributions/1294565/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294565/
END:VEVENT
BEGIN:VEVENT
SUMMARY:LCG Generator
DTSTART;VALUE=DATE-TIME:20040927T120000Z
DTEND;VALUE=DATE-TIME:20040927T122000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294254@indico.cern.ch
DESCRIPTION:Speakers: P. Bartalini (CERN)\nIn the framework of the LCG Sim
ulation Project\, we present the Generator\nServices Sub-project\, launche
d in 2003 under the oversight of the LHC Monte\nCarlo steering group (MC4L
HC). The goal of the Generator Services Subproject\nis to guarantee the ph
ysics generator support for the LHC experiments. Work is\ndivided into fou
r work packages: Generator library\; Storage\, event\ninterfaces and parti
cle services\; Public event files and event database\;\nValidation and tun
ing. The current status and the future plans in the four\ndifferent work p
ackages are presented. Some emphasis is put on the Monte\nCarlo Generator
Library (GENSER) and on the Monte Carlo Generator Database\n(MCDB).\n\nGE
NSER is the central code repository for Monte Carlo generators and\ngenera
tor tools. It was the first CVS repository in the LCG Simulation\nproject and it
is currently distributed via AFS. GENSER comprises release and\nbuilding tools
for librarians and end users. GENSER is going to gradually\nreplace the obsolete
CERN library in Monte Carlo generator support.\n\nMCDB
is a public database for the configuration\, book-keeping and storage of\n
the generator level event files. The generator events often need to be\npr
epared and documented by Monte Carlo experts. MCDB aims at facilitating\nt
he communication between Monte-Carlo experts and end-users. Its use can be
\noptionally extended to the official event production of the LHC experime
nts.\n\nhttps://indico.cern.ch/event/0/contributions/1294254/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294254/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Beyond Persistence: Developments and Directions in ATLAS Data Man
agement
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294534@indico.cern.ch
DESCRIPTION:Speakers: D. Malon (ANL)\nAs ATLAS begins validation of its co
mputing model in 2004\, requirements\nimposed upon ATLAS data management s
oftware move well beyond simple persistence\,\nand beyond the "read a file
\, write a file" operational model that has sufficed for\nmost simulation
production. New functionality is required to support the\nATLAS Tier 0 mo
del\, and to support deployment in a globally distributed environment\nin
which the preponderance of computing resources--not only CPU cycles but\nd
ata services as well--reside outside the host laboratory.\nThis paper take
s an architectural perspective in describing new developments in ATLAS\nda
ta management software\, including the ATLAS event-level metadata system a
nd related\ninfrastructure\, and the mediation services that allow one to
distinguish writing from\nregistration and selection from retrieval\, in a
manner that is consistent both for\nevent data and for time-varying condi
tions. The ever-broader role of databases and\ncatalogs\, and issues rela
ted to the distributed deployment thereof\, are also \naddressed.\n\nhttps:
//indico.cern.ch/event/0/contributions/1294534/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294534/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Experiences with Data Indexing services supported by the NorduGrid
middleware
DTSTART;VALUE=DATE-TIME:20040927T132000Z
DTEND;VALUE=DATE-TIME:20040927T134000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294508@indico.cern.ch
DESCRIPTION:Speakers: O. Smirnova (Lund University\, Sweden)\nThe NorduGri
d middleware\, ARC\, has integrated support for querying and\nregistering
to Data Indexing services such as the Globus Replica Catalog\nand Globus R
eplica Location Server. This support allows one to use these\nData Indexin
g services for\, for example\, brokering during job submission\,\nautomatic re
gistration of files and many other things. This\nintegrated support is com
plemented by a set of command-line tools for\nregistering to and querying
these Data Indexing services.\n\nIn this talk we will describe experiences
with these Data Indexing\nservices both from a daily work point of view a
nd in production\nenvironments such as the Atlas Data-Challenges 1 and 2.
We will describe\nthe advantages of such Data Indexing services as well as
their\nshortcomings. Finally we will present a proposal for an extended D
ata\nIndexing service which should deal with the shortcomings described. T
he\ndevelopment of such a Data Indexing service is being planned at the\nm
oment.\n\nhttps://indico.cern.ch/event/0/contributions/1294508/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294508/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A high-level language for specifying detector coincidences
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294562@indico.cern.ch
DESCRIPTION:Speakers: F. Gray (UNIVERSITY OF CALIFORNIA\, BERKELEY)\nThe m
uCap experiment at the Paul Scherrer Institut (PSI) will measure the rate
of \nmuon capture on the proton to a precision of 1% by comparing the appa
rent lifetimes \nof positive and negative muons in hydrogen. This rate may
be related to the induced \npseudoscalar weak form factor of the proton.\
n\nSuperficially\, the muCap apparatus looks something like a miniature mo
del of a \ncollider detector. Muons pass through several beam counters bef
ore reaching a \nhydrogen-filled time projection chamber (TPC) at its core
\, which acts as both a \nstopping target and the primary muon detector. I
t is surrounded by cylindrical wire \nchambers and a scintillator hodoscop
e to observe the Michel electrons that emerge \nfrom muon decay. The first
key step in the analysis of our data is the proper \ndefinition of coinci
dence events across these many detector layers\, maximizing the \nsignal s
ignificance by suppressing accidental and pileup backgrounds. Part of our
\nanalysis software is written in a special-purpose high-level language\,
called ``muon \nquery language'' (MQL)\, in which these coincidences may b
e specified cleanly. It \nuses a variant of the relational model\, represe
nting the data as a set of tables \nupon which selection and join operatio
ns may be performed. ROOT histograms and trees \nare defined based on the
contents of tables. A preprocessor generates optimized C++ \ncode that imp
lements the operations described in the MQL file\, which is suitable for \
nincorporation into our analyzer framework. This talk will describe the MQ
L approach \nand our collaboration's experience with it.\n\nhttps://indico
.cern.ch/event/0/contributions/1294562/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294562/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The role of legacy services within ATLAS DC2
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294221@indico.cern.ch
DESCRIPTION:Speakers: J. Kennedy (LMU Munich)\nThis paper presents an over
view of the legacy interface provided for\nthe ATLAS DC2 production system
. The term legacy refers to any\nnon-grid system which may be deployed for
use within DC2. The\nreasoning behind providing such a service for DC2 is
twofold in\nnature. Firstly\, the legacy interface provides a backup solu
tion\nshould unforeseen problems occur while developing the grid based\nin
terfaces. Secondly\, this system allows DC2 to use resources which have ye
t to\ndeploy grid software\, thus increasing the available computing power
\nfor the Data Challenge.\n\nThe aim of the legacy system is to provide a
simple framework which is\neasily adaptable to any given computing system.
Here the term\ncomputing system refers to the batch system provided at a
given site\nand also to the structure of the computing and storage systems
at that\nsite. The legacy interface provides the same functionality as th
e grid\nbased interfaces and is deployed transparently within the DC2\npro
duction system. Following the push-pull model implemented for DC2\,\nthe sys
tem pulls jobs from a production database and pushes them onto\na given co
mputing/batch system.\n\nIn a world which is becoming increasingly grid or
iented\, this project\nallows us to evaluate the role of non-grid solution
s in dedicated\nproduction environments. Experiences\, both good and bad\,
gained during\nDC2 are presented and the future of such systems is discus
sed.\n\nhttps://indico.cern.ch/event/0/contributions/1294221/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294221/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The D0 Virtual Center and planning for large scale computing
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294260@indico.cern.ch
DESCRIPTION:Speakers: A. Boehnlein (FERMI NATIONAL ACCELERATOR LABORATORY)
\nThe D0 experiment relies on large scale computing systems to achieve its
\nphysics goals. As the experiment lifetime spans multiple generations of
\ncomputing hardware\, it is fundamental to make projective models of how to use
\navailable resources to meet the anticipated needs. In addition\, computing
\nresources can be supplied as in-kind contributions by collaborating
\ninstitutions and countries\; however\, such resources typically require
\nscheduling\, thus adding another dimension for planning. In addition\, to
\navoid over-subscription of the resources\, the experiment has to be educated
\non the limitations and trade-offs for various computing activities to enable
\nthe management to prioritize. We present the metrics and mechanisms used for
\nplanning and discuss the uncertainties and unknowns\, as well as some of the
\nmechanisms for communicating the resource load to the stakeholders.\n\nIn
order to correctly account for in-kind contributions of remote computing\,
\nD0 uses the concept of a Virtual Center\, in which all of the costs are
\nestimated as if the computing were located solely at FNAL. In contrast
\nto other such models in common use\, D0 accounts for contributions based on
\ncomputer usage rather than strictly on money spent on hardware. This gives
\nan incentive to achieve the maximum efficiency of the systems as well as
\nencouraging active participation in the computing model by collaborating
\ninstitutions. This method of operation leverages a common tool and
\ninfrastructure base for all production-type activities.
\n\nhttps://indico.cern.ch/event/0/contributions/1294260/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294260/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The impact of e-science
DTSTART;VALUE=DATE-TIME:20040928T100000Z
DTEND;VALUE=DATE-TIME:20040928T103000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294160@indico.cern.ch
DESCRIPTION:Speakers: Ken Peach (RAL)\nJust as the development of the Worl
d Wide Web has had its greatest \nimpact outside particle physics\, so it
will be with the development \nof the Grid.\nE-science\, of which the Grid
is just a small part\, is already making \na big impact upon many scienti
fic disciplines\, and facilitating new \nscientific discoveries that would
be difficult to achieve in any \nother way. Key to this is the definition
and use of metadata.\n\nhttps://indico.cern.ch/event/0/contributions/1294
160/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294160/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Recent Developments in the ROOT I/O
DTSTART;VALUE=DATE-TIME:20040929T130000Z
DTEND;VALUE=DATE-TIME:20040929T132000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294280@indico.cern.ch
DESCRIPTION:Speakers: P. Canal (FERMILAB)\nSince version 3.05/02\, the ROO
T I/O System has gone through \nsignificant enhancements.\nIn particular\,
the STL container I/O has been upgraded to support \nsplitting\, reading
without existing libraries and using directly from\nTTreeFormula (TTree qu
eries). \nThis upgrade to the I/O system is such that it can be easily ext
ended \n(even by the users) to support the splitting and querying of almos
t\nany collection. The ROOT TTree queries engine has also been enhanced\
nin many ways\, including increased performance\, better support for\narra
y printing and histogramming\, and the ability to call any\nexterna
l C or C++ functions\, etc.\nWe improved the I/O support for classes not i
nheriting from TObject\, \nincluding support for automatic schema evolutio
n without using an\nexplicit class version. ROOT now supports generating fi
les larger than\n2 GB. We also added plugins for several of the mass stora
ge servers\n(Castor\, dCache\, Chirp\, etc.). We will describe in deta
il these new features and their implementation.\n\nhttps://indico.cern.ch
/event/0/contributions/1294280/
LOCATION:Interlaken\, Switzerland Brunig 1 + 2
URL:https://indico.cern.ch/event/0/contributions/1294280/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The ZEUS Global Tracking Trigger Barrel Algorithm
DTSTART;VALUE=DATE-TIME:20040930T122000Z
DTEND;VALUE=DATE-TIME:20040930T124000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294512@indico.cern.ch
DESCRIPTION:Speakers: M. Sutton (UNIVERSITY COLLEGE LONDON)\nThe current d
esign\, implementation and performance of the ZEUS global\ntracking trigge
r barrel algorithm are described. The ZEUS global\ntracking trigger integ
rates track information from the ZEUS central\ntracking chamber (CTD) and
micro vertex detector (MVD) to obtain a\nglobal picture of the track topol
ogy in the ZEUS detector at the\nsecond level trigger stage. Algorithm pr
ocessing is performed on a\nfarm of Linux PCs and\, to avoid unacceptable
deadtime in the ZEUS\nreadout system\, must be completed within the strict
requirements of\nthe ZEUS trigger system. The GTT plays a vital role in t
he selection\nof good physics events and the rejection of non-physics back
ground\nwithin the very harsh trigger environment provided by the upgraded
\nHERA collider. The GTT barrel algorithm greatly improves the vertex\nre
solution and the track finding efficiency of the ZEUS second level\ntrigge
r while the mean event processing latency and throughput are\nwell within
the trigger requirements. Recent running experience with\nHERA production
luminosity is briefly discussed.\n\nhttps://indico.cern.ch/event/0/contrib
utions/1294512/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294512/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The LCG Project - Preparing for Startup
DTSTART;VALUE=DATE-TIME:20040928T063000Z
DTEND;VALUE=DATE-TIME:20040928T070000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294159@indico.cern.ch
DESCRIPTION:Speakers: Les Robertson (CERN)\nThe talk will cover briefly th
e current status of the LHC Computing Grid project \nand will discuss the
main challenges facing us as we prepare for the startup of LHC.\n\nhttps:/
/indico.cern.ch/event/0/contributions/1294159/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294159/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The GridSite authorization system
DTSTART;VALUE=DATE-TIME:20040929T132000Z
DTEND;VALUE=DATE-TIME:20040929T134000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294289@indico.cern.ch
DESCRIPTION:Speakers: A. McNab (UNIVERSITY OF MANCHESTER)\nWe describe the
GridSite authorization system\, developed by GridPP and the\nEU DataGrid
project for access control in High Energy Physics grid\nenvironments with
distributed virtual organizations. This system provides a\ngeneral toolkit
of common functions\, including the evaluation of access\npolicies (in GA
CL or XACML)\, the manipulation of digital credentials\n(X.509\, GSI Proxi
es or VOMS attribute certificates) and utility functions\nfor protocols su
ch as HTTP.\nGridSite also provides a set of extensions\nto the Apache web
server to permit it to function in a Grid security\nenvironment\, includi
ng access control\, fileserver / webserver management and\na lightweight V
irtual Organization service.\nUsing Apache as an example\, we explain how
Grid security can be\nadded to an existing service using our toolkit. We t
hen outline some of the\nother uses to which components have been put in t
he deployed Grids of GridPP\, the EU\nDataGrid and the LHC Computing Grid.
\n\nhttps://indico.cern.ch/event/0/contributions/1294289/
LOCATION:Interlaken\, Switzerland Brunig 3
URL:https://indico.cern.ch/event/0/contributions/1294289/
END:VEVENT
BEGIN:VEVENT
SUMMARY:G-PBox: a Policy Framework for Grid Environments
DTSTART;VALUE=DATE-TIME:20040929T155000Z
DTEND;VALUE=DATE-TIME:20040929T161000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294137@indico.cern.ch
DESCRIPTION:A key feature of Grid systems is the sharing of their resources
among \nmultiple Virtual Organizations (VOs). The sharing process needs a
\npolicy framework to manage resource access and usage. Policy \nframeworks
generally exist for farms or local systems only\, but for \nGrid
environments a general\, distributed policy system is now \nnecessary.
\nVOs and local systems generally have contracts that regulate \nresource
usage\, hence complex relationships among these entities\, \nimplying
different kinds of policies\, may exist: VO oriented\, local \nsystem
oriented\, and a mix of these. We propose an approach to \nthe
representation and management of such policies: \nthe Grid Policy Box
(G-PBox) framework. The approach is based on a \nset of databases belonging
to hierarchically-organised levels \ndistributed over the Grid and VO
structures. Each \nlevel contains only policies regarding itself. These
levels have to \ncommunicate among themselves to accommodate mixed
policies\, \ngiving rise to the need for a secure communication service
framework\, \nfor privacy reasons\, with the ability to sort and dispatch
the various \nkinds of policies to the involved parties.\nIn this paper we
present our first implementation of the G-PBox and \nits architecture
details\, and we discuss the plans for G-PBox-related \napplications and
research.\n\nhttps://indico.cern
.ch/event/0/contributions/1294137/
LOCATION:Interlaken\, Switzerland Brunig 3
URL:https://indico.cern.ch/event/0/contributions/1294137/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Integrating Multiple PC Farms into a Uniform Computing System with
Maui
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294539@indico.cern.ch
DESCRIPTION:Speakers: G. Sun (INSTITUTE OF HIGH ENERGY PHYSICS)\nThere are
several on-going experiments at IHEP\, such as BES\, YBJ\, and the CMS \ncolla
boration with CERN. Each experiment has its own computing system\, and these \
ncomputing systems run separately. This leads to a very low CPU utilization due
\nto the different usage periods of each experiment. Grid technology is a very
\ngood candidate for integrating these separate computing systems into a "single
\nimage"\, but it is too early to put it into a production system as it is not
\nyet stable and user-friendly enough. A realistic choice is to implement such
\nan integration and sharing with Maui\, an advanced scheduler. Each PC farm
\nis treated as a partition\, whose owner users are assigned high priority\,
\nwith a preemption feature. This paper will describe the details of the
implementation \nwith the Maui scheduler\, as well as the entire system
architecture and configuration \nand functions.\n\nhttps
://indico.cern.ch/event/0/contributions/1294539/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294539/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Run II computing
DTSTART;VALUE=DATE-TIME:20040927T080000Z
DTEND;VALUE=DATE-TIME:20040927T083000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294259@indico.cern.ch
DESCRIPTION:Speakers: A. Boehnlein (FERMI NATIONAL ACCELERATOR LABORATORY)
\nIn support of the Tevatron physics program\, the Run II experiments have
\ndeveloped computing models and hardware facilities to support data sets
at\nthe petabyte scale\, currently corresponding to 500 pb-1 of data and o
ver 2\nyears of production operations. The systems are complete from online
\ndata collection to user analysis\, and make extensive use of central
services\nand common solutions developed with the FNAL CD and experiment
collaborating\ninstitutions\, and make use of global facilities to meet th
e computing needs.\nWe describe the similarities and differences between
computing on CDF and\nD0 while describing solutions for database and datab
ase servers\, data\nhandling\, movement and storage and job submission mec
hanisms. The\nfacilities for production computing and analysis and the u
se of commodity\nfileservers will also be described. Much of the knowledg
e gained from\nproviding computing at this scale can be abstracted and app
lied to design\nand planning for future experiments with large scale compu
ting.\n\nhttps://indico.cern.ch/event/0/contributions/1294259/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294259/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A kinematic and a decay chain reconstruction library
DTSTART;VALUE=DATE-TIME:20040930T145000Z
DTEND;VALUE=DATE-TIME:20040930T151000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294344@indico.cern.ch
DESCRIPTION:A kinematic fit package was developed based on least-squares
\nminimization with Lagrange multipliers and Kalman filter techniques
\nand implemented in the framework of the CMS reconstruction program. \nT
he package allows full decay chain reconstruction from final state \nto pr
imary vertex according to the given decay model. The class \nframework all
owing decay tree description on every reconstruction \nstep will be descri
bed in detail. Package extension to any type of \nphysics object reconstr
ucted in CMS\, integration into the general CMS \nreconstruction framework and r
elated questions will be discussed. \nExamples of decay chain models\, con
straints and their application\non Bs reconstruction will be presented.\n\
nhttps://indico.cern.ch/event/0/contributions/1294344/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294344/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Super scaling PROOF to very large clusters
DTSTART;VALUE=DATE-TIME:20040930T130000Z
DTEND;VALUE=DATE-TIME:20040930T132000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294473@indico.cern.ch
DESCRIPTION:Speakers: M. Ballintijn (MIT)\nThe Parallel ROOT Facility\, PR
OOF\, enables a physicist to analyze and\nunderstand very large data sets
on an interactive time scale. It makes use\nof the inherent parallelism in
event data and implements an architecture\nthat optimizes I/O and CPU uti
lization in heterogeneous clusters with\ndistributed storage. Scaling to m
any hundreds of servers is essential\nto process tens or hundreds of gigab
ytes of data interactively. This is\nsupported by the industry trend to pa
ck more CPUs into single systems and\nto create bigger clusters by increa
sing the number of systems per rack. We\nwill describe the latest developm
ents in PROOF and the development of a\nstandardized benchmark for PROOF c
lusters. The benchmark is self-contained\nand measures the network\, the I
/O and the processing characteristics of\na cluster. We will present the c
omprehensive results of the benchmark for\nseveral clusters\, demonstratin
g the performance and scalability of PROOF\non very large clusters.\n\nhtt
ps://indico.cern.ch/event/0/contributions/1294473/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294473/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Offline Software for the ATLAS Combined Test Beam
DTSTART;VALUE=DATE-TIME:20040930T143000Z
DTEND;VALUE=DATE-TIME:20040930T145000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294148@indico.cern.ch
DESCRIPTION:Speakers: A. FARILLA (I.N.F.N. ROMA3)\nA full slice of the bar
rel detector of the ATLAS experiment at the LHC\nis being tested this year
with beams of pions\, muons\, electrons and\nphotons in the energy range
1-300 GeV in the H8 area of the CERN\nSPS. It is a challenging exercise si
nce\, for the first time\, the\ncomplete software suite developed for the
full ATLAS experiment\nhas been extended for use with real detector data\,
including\nsimulation\, reconstruction\, online and offline conditions da
tabases\,\ndetector and physics monitoring\, and distributed analysis.\nIm
portant integration issues like combined\nsimulation\, combined reconstruc
tion\, connection with the online\nservices and management of many differe
nt types of conditions data are\nbeing addressed for the first time\, with
the goal of both gaining\nexperience with such integration aspects and of
performing physics\nstudies requiring the combined analysis of simultaneo
us data coming\nfrom different subdetectors. It is a unique opportunity t
o test\, \nwith real data\, new algorithms for pattern recognition\, parti
cle \ntracking and identification and High Level Trigger strategies. \nA
relevant outcome of this combined test\nbeam will be a detailed comparison
of Monte Carlo - based on\nGeant4 - with real data. In the talk the main
components of the\nsoftware suite are described\, together with some preli
minary results\nobtained both with simulated and real data.\n\nhttps://ind
ico.cern.ch/event/0/contributions/1294148/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294148/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Harp data and software migration from Objectivity to Oracle
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294164@indico.cern.ch
DESCRIPTION:Speakers: A. Valassi (CERN)\nThe migration of the Harp data an
d software from an Objectivity-\nbased \nto an Oracle-based data storage s
olution is reviewed in this \npresentation.\nThe project\, which was succe
ssfully completed in January 2004\,\ninvolved three distinct phases. In th
e first phase\, which profited\nsignificantly from the previous COMPASS da
ta migration project\,\n30 TB of Harp raw event data were migrated in two
weeks to a hybrid\npersistency solution\, storing raw event records in sta
ndard "flat"\nfiles and the corresponding metadata in Oracle as relational
tables.\nIn the second phase\, the longest to achieve in spite of the \nr
elatively \nlimited data volume to migrate\, the complex data model of Har
p event \ncollections was reimplemented for the Oracle-based solution. The
\nrelational schema design and the implementation of read-only \nnavigati
onal access to event collections in the Harp software \nframework using Or
acle are reviewed in detail in the presentation.\nThe third phase was the
easiest\, as it involved the migration of\nconditions data (time-varying n
on-event data) from the Objectivity\nto the Oracle implementation of the sam
e C++ API\, which acted as\na screening layer between the data model and i
ts implementation.\n\nhttps://indico.cern.ch/event/0/contributions/1294164
/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294164/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Pixel Reconstruction in the CMS High-Level Trigger
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294256@indico.cern.ch
DESCRIPTION:Speakers: S. Cucciarelli (CERN)\nThe Pixel Detector is the inn
ermost one in the tracking system of the\nCompact Muon Solenoid (CMS) expe
riment. It provides the most precise\nmeasurements\, not only supporting the
full track reconstruction but \nalso allowing standalone reconstruction\,
which is especially useful for \nonline event selection at the High-Level
Trigger (HLT). The \nperformance of the Pixel Detector is given. The HLT
algorithms using \nthe Pixel Detector are presented\, including pixel track
reconstruction\, \nprimary vertex finding\, tau identification\, isolation
and track\nseeding.\n\nhttps://indico.cern.ch/event/0/contributions/1294256/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294256/
END:VEVENT
BEGIN:VEVENT
SUMMARY:File-Metadata Management System for the LHCb Experiment
DTSTART;VALUE=DATE-TIME:20040927T153000Z
DTEND;VALUE=DATE-TIME:20040927T155000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294265@indico.cern.ch
DESCRIPTION:Speakers: C. CIOFFI (Oxford University)\nThe LHCb experiment n
eeds to store all the information about the datasets and the \nprocessing
history of data recorded from particle collisions at the LHC \ncollider at
CERN\, as well as of simulated data.\n\nTo achieve this functio
nality a design based on data warehousing techniques was \nchosen\, where
several user-services can be implemented and optimized individually \nwith
out losing functionality nor performance. This approach results in an expe
riment-\nindependent and flexible system. It allows fast access to the cat
alogue of available \ndata\, to detailed history information and to the ca
talogue of data replicas. Queries \ncan be made based on these three sets
of information. A flexible underlying database \nschema allows the impleme
ntation and evolution of these services without the need to \nchange the b
asic database schema. The consequent implementation of interfaces based \n
on XML-RPC allows one to access and to modify the stored information using a
\nwell-defined encapsulating API.\n\nhttps://indico.cern.ch/event/0/contrib
utions/1294265/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294265/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tracking of long lived hyperons in silicon detector at CDF.
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294349@indico.cern.ch
DESCRIPTION:Speakers: E. Gerchtein (CMU)\nLong-lived charged hyperons\, $\\
Xi$ and $\\Omega$\, are capable of travelling significant\ndistances\,
producing hits in the silicon detector before decaying into \n$\\Lambda^0 \\p
i$ and $\\Lambda^0 K$ pairs\, respectively. This gives a unique\nopportunity
to reconstruct hyperon tracks. We have developed a dedicated\n"outsid
e-in" tracking algorithm that is seeded by the 4-momentum and decay vertex of\
nthe long-lived hyperon reconstructed from its decay products.\nThe tracking
of hyperons in the silicon detector results in a dramatic \nreduction of
the combinatorial background and an improvement of the momentum\nresolutio
n compared with the standard reconstruction using final decay \nproducts.
\n\nUsing a super-clean sample of $\\Xi$ hyperons\, CDF observed the charmed-str
ange baryon \nisodoublet $\\Xi^0_c$ and $\\Xi^+_c$ for the first time in $p
\\bar{p}$ collisions.\n$\\Xi$ hyperons were used in the search for exotic
$S=-2$ baryons decaying\ninto $\\Xi \\pi$.\n\nhttps://indico.cern.ch/ev
ent/0/contributions/1294349/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294349/
END:VEVENT
BEGIN:VEVENT
SUMMARY:ATLAS Metadata Interfaces (AMI) and ATLAS Metadata Catalogs
DTSTART;VALUE=DATE-TIME:20040929T145000Z
DTEND;VALUE=DATE-TIME:20040929T151000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294238@indico.cern.ch
DESCRIPTION:Speakers: S. Albrand (LPSC)\nThe ATLAS Metadata Interface (AMI
) project provides a set of generic \ntools for managing database applicat
ions. AMI has a three-tier \narchitecture with a core that supports a conn
ection to any RDBMS \nusing JDBC and SQL. The middle layer assumes that th
e databases have \nan AMI compliant self-describing structure. It provides
a generic\nweb interface and a generic command line interface. The top la
yer \ncontains application specific features. The principal uses of AMI \n
are the ATLAS Data Challenge dataset bookkeeping catalogs\, and Tag \nColl
ector\, a tool for release management. \nThe first AMI Web service client
was introduced in early 2004. It \noffers many advantages over earlier cli
ents because:\n- Web services permit multi-language and multi-operating sy
stem \nsupport\n- The user interface is very effectively de-coupled from t
he \nimplementation. \n\nMost upgrades can be implemented on the server si
de\; no \nredistribution of client software is needed. In 2004 this client
\nwill be used for the ATLAS Data Challenge 2\, for the ATLAS\ncombined t
est beam offline bookkeeping\, and also in the first \nprototypes of ARDA
compliant analysis interfaces.\n\nhttps://indico.cern.ch/event/0/contribut
ions/1294238/
LOCATION:Interlaken\, Switzerland Brunig 1 + 2
URL:https://indico.cern.ch/event/0/contributions/1294238/
END:VEVENT
BEGIN:VEVENT
SUMMARY:ATLAS Production System in ATLAS Data Challenge 2
DTSTART;VALUE=DATE-TIME:20040927T124000Z
DTEND;VALUE=DATE-TIME:20040927T130000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294426@indico.cern.ch
DESCRIPTION:Speakers: L. GOOSSENS (CERN)\nIn order to validate the Offline
Computing Model and the\ncomplete software suite\, ATLAS is running a ser
ies of Data\nChallenges (DC). The main goals of DC1 (July 2002 to April\n
2003) were the preparation and the deployment of the\nsoftware required f
or the production of large event samples\,\nand the production of those sa
mples as a worldwide\ndistributed activity.\n\nDC2 (May 2004 until October
2004) is divided into three\nphases: (i) Monte Carlo data are produced us
ing GEANT4 on\nthree different Grids\, LCG\, Grid3 and NorduGrid\; (ii)\ns
imulate the first pass reconstruction of data expected in\n2007\, also cal
led Tier0 exercise\, using the MC sample\; and\n(iii) test the Distributed
Analysis model.\n\nA new automated data production system has been develo
ped\nfor DC2. The major design objectives are minimal human\ninvolvement\
, maximal robustness\, and interoperability with\nseveral grid flavors and
legacy systems. A central\ncomponent of the production system is the pro
duction\ndatabase holding information about all jobs. Multiple\ninstances
of a 'supervisor' component pick up unprocessed\njobs from this database\,
distribute them to 'executor'\nprocesses\, and verify them after executio
n. The 'executor'\ncomponents interface to a particular grid or legacy fla
vour.\nThe job distribution model is a combination of push and\npull. A d
ata management system keeps track of all produced\ndata and allows for fil
e transfers.\n\nThe basic elements of the production system are described.
\nExperience with the use of the system in world-wide DC2\nproduction of t
en million events will be presented. We also\npresent how the three Grid f
lavors are operated and\nmonitored. Finally we discuss the first attempts
at using\nthe Distributed Analysis system.\n\nhttps://indico.cern.ch/even
t/0/contributions/1294426/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294426/
END:VEVENT
BEGIN:VEVENT
SUMMARY:CMS Tracker Visualisation Tools
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294408@indico.cern.ch
DESCRIPTION:Speakers: M.S. Mennea (UNIVERSITY & INFN BARI)\nThis document
will review the design considerations\, implementations \nand performance
of the CMS Tracker Visualization tools. In view of \nthe great complexity
of this subdetector (more than 50 million \nchannels organized in 17000 m
odules\, each one of these being a \ncomplete detector)\, the standard CMS v
isualisation tools (IGUANA and \nIGUANACMS)\, which provide basic 3D capabili
ties and integration within the \nCMS framework\, respectively\, have been complem
ented with additional 2D \ngraphics objects and a detailed object model of
the tracker.\nBased on the experience acquired by using this software to
debug and \nunderstand both hardware and software during the construction
phase\, \nwe will propose possible future improvements to cope with online
\nmonitoring and event analysis during data taking.\n\nhttps://indico.cer
n.ch/event/0/contributions/1294408/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294408/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A Dynamically Reconfigurable Data Stream Processing System
DTSTART;VALUE=DATE-TIME:20040927T120000Z
DTEND;VALUE=DATE-TIME:20040927T122000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294119@indico.cern.ch
DESCRIPTION:Speakers: J. Nogiec (FERMI NATIONAL ACCELERATOR LABORATORY)\nT
he paper describes a component-based framework for data stream processing
that \nallows for configuration\, tailoring\, and run-time system reconfig
uration. The \nsystem’s architecture is based on a pipes and filters pa
ttern\, where data is passed \nthrough routes between components. Componen
ts process data and add\, substitute\, \nand/or remove named data items fr
om a data stream. They can also manipulate data \nstreams by buffering dat
a\, compressing/decompressing individual streams\, and \ncombining\, split
ting\, or synchronizing multiple data streams. Configurable general-\npurp
ose filters for manipulating streams\, visualizing data\, persisting data\
, and \nreading data from various standard data sources are supplemented w
ith many \napplication specific filters\, such as DSP\, scripting\, or ins
trumentation-specific \ncomponents. A network of pipes and filters can be
dynamically reconfigured at run-\ntime\, in response to a preplanned seque
nce of processing steps\, operator \nintervention\, or a change in one or
more data streams. Four distinctive methods \nsupporting reconfiguration a
re provided by the framework: modification of data \nroutes\, management o
f components’ activity states\, triggering processing based on \nthe con
tent of the data\, or the use of source addressing in components. The \nfr
amework can be used to build static data stream processing applications su
ch as \nmonitoring or data acquisition systems as well as self-adjusting s
ystems that would \nadapt their processing algorithm\, presentation layer\
, or data persistency layer in \nresponse to changes in input data streams
.\n\nhttps://indico.cern.ch/event/0/contributions/1294119/
LOCATION:Interlaken\, Switzerland Brunig
URL:https://indico.cern.ch/event/0/contributions/1294119/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Patriot: Physics Archives and Tools required to Investigate Our T
heories
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294120@indico.cern.ch
DESCRIPTION:Speakers: S. Mrenna (FERMILAB)\nPATRIOT is a project that aims
to provide better predictions of \nphysics events for the high-Pt physics
program of Run2 at the \nTevatron collider.\n\nCentral to Patriot is an e
nstore or mass storage repository for files \ndescribing the high-Pt phys
ics predictions. These are typically \nstored as StdHep files which can b
e handled by CDF and\nD0 and run through detector and triggering simulatio
ns. The \ndefinition of these datasets in the CDF and D0 data handling sy
stem \nSAM is under way.\n\nPatriot relies heavily on a new generation of
Monte Carlo tools \n(such as MadEvent\, Alpgen\, Grappa\, CompHEP\, etc.)
to calculate the \nhard structure of high-Pt events and the more venerable
event \ngenerators (Pythia and Herwig) to make particle level predictions
.\n\nAn early informational database\, describing the types of data files\
nstored in Patriot\, already exists. A new database is under \ndevelopmen
t.\n\nIn parallel with PATRIOT\, we wish to develop the QCD tools that \nd
escribe the detailed properties of high-Pt events. Some of the \nessentia
l features of particle-level events must be described by non-\nperturbativ
e functions\, whose form is often constrained by theory\, \nbut which must
be ultimately tuned to data.\n\nhttps://indico.cern.ch/event/0/contributi
ons/1294120/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294120/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Precision validation of Geant4 electromagnetic physics
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294121@indico.cern.ch
DESCRIPTION:Speakers: M.G. Pia (INFN GENOVA)\nThe Geant4 Toolkit provides
an ample set of alternative and complementary physics \nmodels to handle t
he electromagnetic interactions of leptons\, \nphotons\, charged hadrons a
nd ions. \nBecause of the critical role often played by simulation in the
experimental design \nand physics analysis\, an accurate validation of the
physics \nmodels implemented in Geant4 is essential\, down to the quantit
ative understanding of \nthe accuracy of their microscopic features.\nResu
lts from a series of detailed tests with respect to well established refer
ence \ndata sources and experiments are presented\, focusing in \nparticul
ar \non the precision validation of the microscopic components of Geant4 p
hysics\, such as \ncross sections and angular distributions\, provided in
the \nvarious alternative physics models of Geant4 electromagnetic package
s.\nThe validation of Geant4 physics is performed by means of quantitative
evaluations \nof the comparison of Geant4 models to reference data are \
npresented\, making use of statistical analysis algorithms to estimate the
\ncompatibility of simulated and experimental distributions.\nSuch precis
ion tests are especially relevant for critical applications\nof simulation
models\, such as tracking detectors\, neutrino and other astroparticle \n
experiments\, medical physics\n\nhttps://indico.cern.ch/event/0/contributi
ons/1294121/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294121/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chimera - a new\, fast\, extensible and Grid enabled namespace ser
vice
DTSTART;VALUE=DATE-TIME:20040927T155000Z
DTEND;VALUE=DATE-TIME:20040927T161000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294122@indico.cern.ch
DESCRIPTION:Speakers: T. Mkrtchyan (DESY)\nAfter the successful implementation
and deployment of the dCache system\nover the last years\, one of the add
itional required services\, the\nnamespace service\, is faced with additional
and completely new\nrequirements. Most of these are caused by scaling the s
ystem\, the\nintegration with Grid services and the need for redundant (hi
gh\navailability) configurations. The existing system\, having only an\nNF
Sv2 access path\, is easy to understand and well accepted by the\nusers. T
his single 'access path' limits data management tasks to the\nuse of class
ical tools like 'find'\, 'ls' and others. This is intuitive\nfor most users
\, but fails when dealing with millions of entries\n(files) and more sop
histicated organizational schemes (metadata). The\nnew system should suppo
rt a native programmable interface (deeply\ncoupled\, but fast)\, the 'class
ical' NFS path (now version 3 at \nleast)\,\nnative dCache access and an SQL
path allowing any type of metadata\nto be used in complex queries. E
xtensions with other 'access paths'\nwill be possible. Based on the experi
ence with the current system we\nhighlight the following requirements:\
n  - large file support (64 bit) + large number of files (> 10^8)\n  - fas
t\n  - platform independence (runtime + persistent objects)\n  - Grid name
 service integration\n  - custom dCache integration\n  - redundant\, highly
 available runtime configurations (concurrent\n  backup etc.)\n  - user-u
sable metadata (store and query)\n  - ACL support\n  - pluggable authentic
ation (e.g. GSSAPI)\n  - external processes can register for namespace eve
nts (e.g. \nremoval/creation of\n  files)\n\nThe presentation will show a
detailed analysis of the requirements\,\nthe chosen design and the selection
of existing components. The current\nschedule should allow us to show the fi
rst prototype results.\n\nhttps://indico.cern.ch/event/0/contributions/129
4122/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294122/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Development of algorithms for cluster finding and track reconstruc
tion in the forward muon spectrometer of the ALICE experiment
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294184@indico.cern.ch
DESCRIPTION:A simultaneous track finding / fitting procedure based on the Kalm
an\nfiltering approach has been developed for the forward muon spectrometer of\n
the ALICE experiment. \n   In order to improve the performance of the me
thod in the high-background \nconditions of heavy ion collisions\, the "cano
nical" Kalman filter has \nbeen modified and supplemented by a "smoother"
part. It is shown that \nthe resulting "extended" Kalman filter gives bett
er tracking results and \noffers higher flexibility. \n   To further
improve the tracking performance in a high occupancy environment\,\na new al
gorithm for cluster / hit finding in the cathode pad chambers of the\nmuon spe
ctrometer has been developed. It is based on the expectation \nmaximizatio
n procedure for shape deconvolution of overlapping clusters.\nIt is demon
strated that the proposed method reduces the loss of \ncoordi
nate reconstruction accuracy at high hit multiplicities and \nachieves bet
ter tracking results. \n   Both the hit finding and track reconstruction
algorithms have been\nimplemented within the AliRoot software framework.\
n\nhttps://indico.cern.ch/event/0/contributions/1294184/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294184/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Wide Area Network Monitoring system for HEP experiments at Fermila
b
DTSTART;VALUE=DATE-TIME:20040930T134000Z
DTEND;VALUE=DATE-TIME:20040930T140000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294123@indico.cern.ch
DESCRIPTION:Speakers: M. Grigoriev (FERMILAB\, USA)\nLarge\, distributed H
EP collaborations\, such as D0\, CDF and US-CMS\, \ndepend on stable and r
obust network paths between major world\nresearch centers. The evolving em
phasis on data and compute Grids\nincreases the reliance on network perfor
mance.\nFermilab's experimental groups and network support personnel \nide
ntified a critical need for WAN monitoring to ensure the quality\nand effi
cient utilization of such network paths. This has led to the \ndevelopmen
t of the Network Monitoring system we will present in this \npaper.\nThe s
ystem evolved from the IEPM-BW project\, started at SLAC two \nyears ago.
\nAt Fermilab it has developed into a fully functional infrastructure \nwi
th bi-directional active network probes and path characterizations.\nIt is
based on the Iperf achievable throughput tool\, Ping and Synack \nto test
ICMP/TCP connectivity\, Pipechar and Traceroute to test\, \ncompare and
report hop-by-hop network path characterization\, and \nreal file transfer
performance by BBFTP and GridFTP. The Monitoring \nsystem has an extensi
ve web-interface and all the data is available \nthrough standalone SOAP w
eb services or by a MonaLISA client.\nAlso in this paper we will present a
case study of network path\nasymmetry and abnormal performance between FN
AL and SDSC which was\ndiscovered and resolved by utilizing the Network
Monitoring system.\n\nhttps://indico.cern.ch/event/0/contributions/1294123
/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294123/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Distributed computing and oncological radiotherapy: technology tra
nsfer from HEP and experience with prototype systems
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294124@indico.cern.ch
DESCRIPTION:Speakers: M.G. Pia (INFN GENOVA)\nWe show how nowadays it is p
ossible to achieve the goal of accuracy and fast computation response in r
adiotherapic dosimetry using Monte Carlo \nmethods\, together with a distr
ibuted computing model. \nMonte Carlo methods have never been used in clin
ical practice because\, even if they are more accurate than available comm
ercial software\, the \ncalculation time needed to accumulate sufficient s
tatistics is too long for a realistic use in radiotherapic treatment.\nWe
present a complete\, fully functional prototype dosimetric system for radi
otherapy\, integrating various components based on HEP software \nsystems:
a Geant4-based simulation\, an AIDA-based dosimetric analysis\, a web-bas
ed user interface\, and distributed processing either on a local \ncomputi
ng farm or on geographically spread nodes. \nThe performance of the dosime
tric system has been studied in three execution modes: sequential on a sin
gle dedicated machine\, parallel on a \ndedicated computing farm\, paralle
l on a grid test-bed. An intermediate software layer\, the DIANE system\,
makes the three execution modes \ncompletely transparent to the user\, all
owing the same code to be used in any of the three configurations. \nThanks to
 the integration in a grid environment\, any hospital\, even small ones o
r those in less wealthy countries\, that could not afford the high costs of \nco
mmercial treatment planning software\, may get the chance of using advance
d software tools for oncological therapy\, by accessing distributed \ncomp
uting resources\, shared with other hospitals and institutes belonging to
the same virtual organization.\n\nhttps://indico.cern.ch/event/0/contributi
ons/1294124/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294124/
END:VEVENT
BEGIN:VEVENT
SUMMARY:ROOT : detector visualization
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294125@indico.cern.ch
DESCRIPTION:The ROOT geometry package is a tool designed for building\, br
owsing\,\ntracking and visualizing a detector geometry. The code is \ninde
pendent of external MC simulation packages\; therefore it does \nnot co
ntain any constraints related to physics. However\, the package \ndefines
a number of hooks for tracking\, such as media\, materials\, \nmagnetic fi
eld or track state flags\, in order to allow interfacing \nto tracking MC'
s. The final goal is to be able to use the same \ngeometry for several pur
poses\, such as tracking\, reconstruction or \nvisualization\, taking adva
ntage of the ROOT features related to \nbookkeeping\, I/O\, histograming\,
browsing and GUI's.\n\nIn this poster\, we will show the various graphics
tools to render \ncomplex geometries\, from ray tracing tools that have t
he advantage \nof testing the real geometry as when tracking particles\, to
\nsophisticated 3-D dynamic graphics with the OpenGL\, X3D\, Coin3D or \n
OpenInventor viewers. An abstract interface has been defined and it \nis c
ommon to all the viewers.\n\nhttps://indico.cern.ch/event/0/contributions/
1294125/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294125/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Realization of a stable network flow with high performance communi
cation in high bandwidth-delay product network
DTSTART;VALUE=DATE-TIME:20040930T132000Z
DTEND;VALUE=DATE-TIME:20040930T134000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294129@indico.cern.ch
DESCRIPTION:Speakers: Y. Kodama (NATIONAL INSTITUTE OF ADVANCED INDUSTRIAL
SCIENCE AND TECHNOLOGY (AIST))\nIt is important that the total bandwidth
of the multiple streams should\nnot exceed the network bandwidth in order
to achieve a stable network\nflow with high performance in high bandwidth-
delay product networks.\nSoftware control of the bandwidth of each stream som
etimes exceeds the\nspecified bandwidth. We propose a hardware control
technique for the\ntotal bandwidth of multiple streams with high accuracy.\n\n
GNET-1 is the hardware gigabit network testbed that we developed. It\nprov
ides functions such as wide area network emulation\, network\ninstrumentat
ion\, and traffic generation at gigabit Ethernet wire\nspeeds. GNET-1 is a
powerful tool for developing network-aware grid\nsoftware. It can control
the total bandwidth of the multiple streams with\nhigh accuracy by adjust
ing the interframe gap (IFG).\n\nTo see the effect of the highly accurate
bandwidth control by GNET-1\,\nthe file exchange of large-scale data was d
one on a Trans-Pacific Grid\nDatafarm testbed between Japan and the U.S. We used
three trans-Pacific\nnetworks: the APAN/TransPAC Los Angeles line\, its Chi
cago line and the\nSuperSINET New York line. The total bandwidth that could be
used was 3.9\nGbps. In this feasibility study\, GNET-1 controlled five gigab
it Ethernet\nports and achieved a total bandwidth of 3.78 Gbps\, stab
le for about\none hour. The bandwidth was 97% of the peak bandwidth of the
networks used.\n\nhttps://indico.cern.ch/event/0/contributions/1294129/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294129/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Lattice QCD Data and Metadata Archives at Fermilab and the Interna
tional Lattice Data Grid
DTSTART;VALUE=DATE-TIME:20040929T145000Z
DTEND;VALUE=DATE-TIME:20040929T151000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294139@indico.cern.ch
DESCRIPTION:Speakers: E. Neilsen (FERMI NATIONAL ACCELERATOR LABORATORY)\n
The lattice gauge theory community produces large volumes of\ndata. Becaus
e the data produced by completed computations form the\nbasis for future w
ork\, the maintenance of archives of existing data\nand metadata describin
g the provenance\, generation parameters\, and\nderived characteristics of
that data is essential not only as a\nreference\, but also as a basis for
future work. Development of these\narchives according to uniform standard
s both in the data and metadata\nformats provided and in the software inte
rfaces to the component\nservices could greatly simplify collaborations be
tween institutions\nand enable the dissemination of meaningful results.\n\
nThis paper describes the progress made in the development of a set of\nsu
ch archives at the Fermilab lattice QCD facility. We are\ncoordinating the
development of the interfaces to these facilities\nand the formats of the
data and metadata they provide with the efforts\nof the international lat
tice data grid (ILDG) metadata and middleware\nworking groups\, whose goal
s are to develop standard formats for\nlattice QCD data and metadata and a
uniform interface to archive\nfacilities that store them. Services under
development include those\ncommonly associated with data grids: a service r
egistry\, a metadata\ndatabase\, a replica catalog\, and an interface to a
mass storage\nsystem. All services provide GSI authenticated web service
interfaces\nfollowing modern standards\, including WSDL and SOAP\, and acc
ept and\nprovide data and metadata following recent XML based formats prop
osed\nby the ILDG metadata working group.\n\nhttps://indico.cern.ch/event/
0/contributions/1294139/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294139/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Managed Data Storage and Data Access Services for Data Grids
DTSTART;VALUE=DATE-TIME:20040927T122000Z
DTEND;VALUE=DATE-TIME:20040927T124000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294141@indico.cern.ch
DESCRIPTION:Speakers: M. Ernst (DESY)\nThe LHC needs to achieve reliable h
igh performance access to vastly distributed storage resources across the
\nnetwork. USCMS has worked with Fermilab-CD and DESY-IT on a storage serv
ice that was deployed at several \nsites. It provides Grid access to hete
rogeneous mass storage systems and synchronization between them. It \nincr
eases resiliency by insulating clients from storage and network failures\,
and facilitates file sharing and \nnetwork traffic shaping.\n\nThis new s
torage service is implemented as a Grid Storage Element (SE). It consists
of dCache as the core \nstorage system and an implementation of the Storag
e Resource Manager (SRM)\, that together allow both local \nand Grid based
access to the mass storage facilities. It provides advanced functionaliti
es for managing\, \naccessing and distributing collaboration data.\n\nUSCM
S is using this system both as Disk Resource Manager at Tier-1 and Tier-2
sites\, and as Hierarchical \nResource Manager with Enstore as tape back-e
nd at the Fermilab Tier-1. It is used for providing shared \nmanaged disk
pools at sites and for streaming data between the CERN Tier-0\, the Fermil
ab Tier-1 and U.S. \nTier-2 centers.\n\nApplications can reserve space for
a time period\, ensuring space availability when the application runs. Wor
ker \nnodes without WAN connection can trigger data replication to the SE
and then access data via the LAN. Moving \nthe SE functionality off the wo
rker nodes reduces load and improves reliability of the compute farm eleme
nts \nsignificantly.\n\nWe describe architecture\, components\, and experi
ence gained in CMS production and the DC04 Data \nChallenge.\n\nhttps://in
dico.cern.ch/event/0/contributions/1294141/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294141/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Open Science Grid (OSG)
DTSTART;VALUE=DATE-TIME:20040927T151000Z
DTEND;VALUE=DATE-TIME:20040927T153000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294142@indico.cern.ch
DESCRIPTION:Speakers: R. Pordes (FERMILAB)\nThe U.S.LHC Tier-1 and Tier-2
laboratories and universities are developing production Grids to support L
HC \napplications running across a worldwide Grid computing system. Togeth
er with partners in computer science\, \nphysics grid projects and running
experiments\, we will build a common national production grid \ninfrastru
cture which is open in its architecture\, implementation and use.\n\nThe O
SG model builds upon the successful approach of last year's joint Grid20
03 project. The Grid3 shared \ninfrastructure has for over eight months gi
ven significant computational resources and throughput to more \nthan six
applications\, including ATLAS and CMS data challenges\, SDSS\, LIGO and B
iology analyses and \ncomputer science demonstrators.\n\nTo move towards L
HC-scale data management\, access and analysis capabilities\, we will need
to increase the \nscale\, services\, and sustainability of the current in
frastructure by an order of magnitude. This requires a \nsignificant upgra
de in its functionalities and technologies.\n\nThe OSG roadmap is a strate
gy and work plan to build the U.S.LHC computing enterprise as a fully usab
le\, \nsustainable and robust grid\, which is part of the LHC global compu
ting infrastructure and open to partners. \nThe approach is to federate wi
th other application communities in the U.S. to build a shared infrastruct
ure \nopen to other sciences and capable of being modified and improved to
respond to needs of other \napplications\, including CDF\, D0\, BaBar and
RHIC experiments.\n\nWe describe the application driven engineered servic
es of the OSG\, short term plans and status\, and the \nroadmap for a cons
ortium\, its partnerships and national focus.\n\nhttps://indico.cern.ch/ev
ent/0/contributions/1294142/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294142/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A Pattern-based Continuous Integration Framework for Distributed E
GEE Grid Middleware Development
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294143@indico.cern.ch
DESCRIPTION:Speakers: A. Di Meglio (CERN)\nSoftware Configuration Manageme
nt (SCM) Patterns and the Continuous Integration\nmethod are recent and po
werful techniques to enforce a common software\nengineering process across
large\, heterogeneous\, rapidly changing development\nprojects where a ra
pid release lifecycle is required. In particular the Continuous\nIntegrati
on method allows tracking and addressing problems in the software\ncompone
nts integration as early as possible in the release cycle. Since new\nincr
emental code builds are done several times per day\, only small amounts of
new\ncode are built and integrated at relatively short intervals. Develope
rs are \nimmediately\nnotified of arising problems and integrators can pin
point configuration and build\nproblems to the level of single files withi
n any given software component. This \npaper\npresents the implementation
and the initial results of the application of such\ntechniques in the SCM
and Integration of the EGEE Grid Middleware software. The\nsoftware is bas
ed on a Service Oriented Architecture model where services are\ndeveloped
in different programming languages by development groups in several\nEurop
ean locations under stringent quality requirements. A number of basic SCM\
npatterns\, such as the Workspace\, the Active Line\, the Repository\, are
introduced and\nthe Continuous Integration tools used in the project are
presented with a \ndiscussion of\nthe advantages and disadvantages of usin
g the method.\n\nhttps://indico.cern.ch/event/0/contributions/1294143/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294143/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Job Monitoring in Interactive Grid Analysis Environment
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294145@indico.cern.ch
DESCRIPTION:Speakers: A. Anjum (NIIT)\nThe Grid is emerging as a great computational
resource\, but its dynamic behaviour makes \nthe Grid environment unpredictable. System
or network failures can occur\, or \nsystem performance can degrade. Once a job has been
submitted\, monitoring \ntherefore becomes essential for the user to ensure that the job
completes \nefficiently. In current environments\, once a user submits a job he loses
direct control over \nthe job: the system behaves like a batch system\, in which the user
submits the job and gets the \nresult back. The only information a user can obtain about
a job is whether it is \nscheduled\, running\, cancelled or finished. This information is
sufficient from the Grid \nmanagement point of view but not from the point of view of the
user. The user wants an \ninteractive environment in which he can check the progress of
the job\, obtain \nintermediate results\, terminate the job based on the progress of the
job or intermediate \nresults\, steer the job to other nodes to achieve better
performance\, and check the \nresources consumed by the job. A mechanism is therefore
needed that can provide the user with \nsecure access to information about the different
attributes of a job. In this paper we \ndescribe a monitoring service\, a Java-based web
service that provides secure \naccess to the different attributes of a job once it has
been submitted to the Interactive \nGrid Analysis Environment.\n\nhttps://indico.cern.ch/event/0/contributions/1294145/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294145/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Software Management in the HARP experiment
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294363@indico.cern.ch
DESCRIPTION:Speakers: E. Tcherniaev (CERN)\nThis paper discusses some key
points in the organization\nof the HARP software. In particular it describ
es the configuration of\nthe packages\, data and code management\, testing
and release procedures.\n\nDevelopment of the HARP software is based on i
ncremental\nreleases with strict respect for the design structure.\nThis po
ses serious challenges to the software management\,\nwhich has gone throug
h essential evolution during the life\nof the experiment.\n\nA progressive
ly better understanding of the organizational issues\, like\nthe environme
nt settings\, package versioning\, release procedures\, etc.\,\nwas achiev
ed.\n\nMastering of the CVS and CMT tools\, plus an essential reduction of
\nthe manual work allowed us to reach a situation where one software\ndevel
opment iteration (compilation from scratch\, full testing\,\ninstallation
in the official area) takes only a few hours.\n\nhttps://indico.cern.ch/ev
ent/0/contributions/1294363/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294363/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Predicting Resource Requirements of a Job Submission
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294146@indico.cern.ch
DESCRIPTION:Speakers: A. Anjum (NIIT)\nGrid computing provides key infrast
ructure for distributed problem solving in \ndynamic virtual organizations
. However\, Grids are still the domain of a few highly \ntrained programme
rs with expertise in networking\, high-performance computing\, and \nopera
ting systems. \nOne of the big issues in the full-scale usage of a grid is
the matching of the \nresource requirements of a job submission to availa
ble resources. In order for \nresource brokers/job schedulers to ensure e
fficient use of grid resources\, an \ninitial estimate of the likely resou
rce usage of a submission must be made. In the \ncontext of the Grid Enabl
ed Analysis Environment (GAE)\, physicists want the ability \nto discover\
, acquire\, and reliably manage computational resources dynamically\, in \
nthe course of their everyday activities. They do not want to be bothered
with the \nlocation of these resources\, the mechanisms that are required
to use them\, keeping \ntrack of the status of computational tasks operati
ng on these resources\, or with \nreacting to failure. They do care about
how long their tasks are likely to run and \nhow much these tasks will cos
t. \nSo the grid scheduler must have the capability to estimate before job
submission\, \nhow much time and resources the job will consume on execut
ion site. Our proposed \nmodule\, the Prediction Engine\, will be part of the scheduler
and will provide estimates of \nresource use along with the duration of use. This will
enable the scheduler to choose \nthe optimum site for job execution. \nThis paper
presents a survey of existing grid schedulers and then\, based on this \nsurvey\, states
the need for resource usage estimation. The architecture and \ndesign of the "grid
prediction engine" that predicts the resource requirements of a job \nsubmission are
also discussed.\n\nhttps://indico.cern.ch/event/0/contributions/1294146/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294146/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Design and Implementation of a Notification Model for Grid Monitor
ing Events
DTSTART;VALUE=DATE-TIME:20040930T145000Z
DTEND;VALUE=DATE-TIME:20040930T151000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294147@indico.cern.ch
DESCRIPTION:Speakers: N. De Bortoli (INFN - NAPLES (ITALY))\nGridICE is a
monitoring service for the Grid: it measures\nsignificant Grid-related resource
parameters in order to analyze\nusage\, behavior and performance o
f the Grid and/or to detect and\nnotify fault situations\, contract violat
ions\, and user-defined\nevents. In its first implementation\, the notific
ation service\nrelies on a simple model based on a pre-defined set of even
ts.\n\nThe growing interest in more flexible and scalable notification\ncapabilities
from several LHC experiments has led us to study a\nmore suitable solution satisfying
their requirements. \nIn this paper we present both the model and the design of a
notification \nservice whose main functionalities are filtering\, transformation\,\nand
routing\nof data. It basically collects a large number of incoming streams\nof data
items from monitored resources (events)\, filters them\naccording to user profiles or
queries describing users' information\npreferences (subscriptions) and finally\, after
a customization of\nmatched data items\, notifies users whose interests a
re satisfied\n(event consumers).\n\nOur proposal significantly improves th
e notification capabilities in\ncurrent Grid systems by providing flexible
means for specifying\nboth topic- and content-based subscriptions\; moreover\, it
provides\nan efficient matchmaking engine. The new component has bee
n\ndeveloped and integrated in the GridICE service based on users\nexpress
ed interests.\n\nhttps://indico.cern.ch/event/0/contributions/1294147/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294147/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Evolution and Revolution in the Design of Computers Based on Nanoe
lectronics
DTSTART;VALUE=DATE-TIME:20040929T090000Z
DTEND;VALUE=DATE-TIME:20040929T093000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294156@indico.cern.ch
DESCRIPTION:Speakers: Stan Williams (HP)\nToday's computers are roughly a
factor of one billion less efficient at doing their \njob than the laws of
fundamental physics state that they could be. How much of this \nefficie
ncy gain will we actually be able to harvest? What are the biggest obstac
les \nto achieving many orders of magnitude improvement in our computing h
ardware\, rather \nthan the roughly factor of two we are used to seeing wi
th each new generation of \nchip? Shrinking components to the nanoscale o
ffers both potential advantages and \nsevere challenges. The transition f
rom classical mechanics to quantum mechanics is \na major issue. Others a
re the problems of defect and fault tolerance: defects are \nmanufacturi
ng mistakes or components that irreversibly break over time and faults \na
re transient interruptions that occur during operation. Both of these issu
es become \nbigger problems as component sizes shrink and the number of co
mponents scales up \nmassively. In 1955\, John von Neumann showed that a
completely general approach to \nbuilding a reliable machine from unreliab
le components would require a redundancy \noverhead of at least 10\,000 -
this would completely negate any advantages of \nbuilding at the nanoscale
. We have been examining a variety of defect and fault \ntolerant techniq
ues that are specific to particular structures or functions\, and are \nva
stly more efficient for their particular task than the general approach of
von \nNeumann. Our strategy is to layer these techniques on top of each
other to achieve \nhigh system reliability even with component reliability
of no more than 97% or so\, \nand a total redundancy of less than 3. This
strategy preserves the advantages of \nnanoscale electronics with a relat
ively modest overhead.\n\nhttps://indico.cern.ch/event/0/contributions/129
4156/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294156/
END:VEVENT
BEGIN:VEVENT
SUMMARY:50 years of Computing at CERN
DTSTART;VALUE=DATE-TIME:20040927T073000Z
DTEND;VALUE=DATE-TIME:20040927T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294158@indico.cern.ch
DESCRIPTION:Speakers: David Williams ()\n"Where are your Wares"\n\nComputi
ng in the broadest sense has a long history\, and Babbage (1791-1871)\, \n
Hollerith (1860-1929)\, Zuse (1910-1995)\, many other early pioneers\, and t
he wartime \ncode breakers\, all made important breakthroughs. CERN was f
ounded as the first \nvalve-based digital computers were coming onto the m
arket.\n\nI will consider 50 years of Computing at CERN from the following
viewpoints:-\n\nWhere did we come from? What happened? Who was involved
? Which wares (hardware\, \nsoftware\, netware\, peopleware and now middl
eware) were important? Where did \ncomputers (not) end up in a physics la
b? What has been the impact of computing on \nparticle physics? What abo
ut the impact of particle physics computing on other \nsciences? And the
impact of our computing outside the scientific realm? \n\nI hope to conclu
de by looking at where we are going\, and by reflecting on why \ncomputing
is likely to remain challenging for a long time yet.\n\nThe topic is so v
ast that my remarks are likely to be either prejudiced or trivial\, \nor b
oth.\n\nhttps://indico.cern.ch/event/0/contributions/1294158/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294158/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The architecture of the AliEn system
DTSTART;VALUE=DATE-TIME:20040927T132000Z
DTEND;VALUE=DATE-TIME:20040927T134000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294167@indico.cern.ch
DESCRIPTION:Speakers: P. Buncic (CERN)\nAliEn (ALICE Environment) is a Gri
d framework developed by the Alice Collaboration and used in production \n
for almost 3 years. From the beginning\, the system was constructed using
Web Services and standard \nnetwork protocols and Open Source components.
The main thrust of the development was on the design and \nimplementation
of an open and modular architecture. A large part of the components came f
rom state-of-the-\nart modules available in the Open Source domain. Thus\,
in a very short time\, the ALICE experiment had a \nprototype Grid that\,
while constantly evolving\, has allowed large distributed simulation and
reconstruction \nvital to the design of the experiment hardware and softwa
re to be performed with very limited manpower. \nThis proved to be the cor
rect path to which many Grid projects and initiatives are now converging. T
he \narchitecture of AliEn inspired the ARDA report and subsequently AliEn
provided the foundation of components \nfor the first EGEE prototype. Thi
s talk presents the architecture of the original AliEn system and describes
its \nevolution. A critical review of the major technology choices\, their
implementation and the development \nprocess is also presented.\n\nhttps:
//indico.cern.ch/event/0/contributions/1294167/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294167/
END:VEVENT
BEGIN:VEVENT
SUMMARY:ALICE Multi-site Data Transfer Tests on a Wide Area Network
DTSTART;VALUE=DATE-TIME:20040930T122000Z
DTEND;VALUE=DATE-TIME:20040930T124000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294169@indico.cern.ch
DESCRIPTION:Speakers: G. Lo Re (INFN & CNAF Bologna)\nNext generation high
energy physics experiments planned at the CERN \nLarge Hadron Collider are so demanding
in terms of both computing \npower and mass storage that data and CPUs cannot be
concentrated in \na single site and will be distributed on a computational Grid
\naccording to a "multi-tier" model. \nLHC experiments are made up of several thousand
people from a few \nhundred insti
tutes spread out all over the world. These people\, \naccording to their c
ollaborations on specific physics analysis \ntopics\, can constitute highl
y dynamic Virtual Organizations rapidly \nchanging as a function of both t
ime and topology. The impact \nof future experiments on Wide Area Networks
(WAN) will be non-\nnegligible\, especially as concerns the capillari
ty of bandwidths \n(down to the "last mile")\, quality of service\, adapti
vity and \nconfigurability.\nIn this paper we report on a series of multi-
site data transfer tests \nperformed within the ALICE Experiment on a wide
area network test-bed \nin order to spot possible bottlenecks and pin dow
n critical elements \nand parameters of actual research networks.\nIn orde
r to make the tests as realistic as possible\, reflecting the \nreal use c
ases foreseen in the near future\, we have taken into \naccount all the as
pects of the elements involved in the transfer of \na file:\n- Local di
sk Input/Output (I/O) performance\;\n- I/O block size\;\n- TCP param
eters and number of parallel streams\;\n- Bandwidth Delay Product (BDP)
expressed as the product of the \nBandwidth (BW) \ntimes the Round Trip \
nTime (RTT).\n\nhttps://indico.cern.ch/event/0/contributions/1294169/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294169/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The High Level Trigger software for the CMS experiment
DTSTART;VALUE=DATE-TIME:20040929T132000Z
DTEND;VALUE=DATE-TIME:20040929T134000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294171@indico.cern.ch
DESCRIPTION:Speakers: O. van der Aa (INSTITUT DE PHYSIQUE NUCLEAIRE\, UNIV
ERSITE CATHOLIQUE DE LOUVAIN)\nThe observation of Higgs bosons predicted i
n supersymmetric theories\nwill be a challenging task for the CMS experime
nt at the LHC\, in\nparticular for its High Level trigger (HLT). A prototy
pe of the\nHigh Level Trigger software to be used in the filter farm of th
e CMS \nexperiment and for the filtering of Monte Carlo samples will be \n
presented. The implemented prototype heavily uses recursive \nprocessing o
f a HLT tree and allows dynamic trigger definition.\nFirstly the general a
rchitecture and design choices as well\nas the timing performance of the s
ystem will be reviewed in the \nlight of the DAQ constraints. Secondly\, sp
ecific trigger \nimplementations in the context of the object-oriented Rec
onstruction \nfor CMS Analysis (ORCA) software will be detailed.\nFinally\
, the analysis for the selection of a CP-even Higgs decaying \nin tau pairs will be
presented. The aforementioned analysis will \nillustrate the importance of the trigger
strategies required to \nachieve the various physics analyses in
CMS.\n\nhttps://indico.cern.ch/event/0/contributions/1294171
/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294171/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Authentication/Security services in the ROOT framework
DTSTART;VALUE=DATE-TIME:20040929T145000Z
DTEND;VALUE=DATE-TIME:20040929T151000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294180@indico.cern.ch
DESCRIPTION:Speakers: G. GANIS (CERN)\nThe new authentication and security
services available in the ROOT framework\nfor client/server applications
will be described.\n\nThe authentication scheme has been designed with the
purpose of making the\nsystem complete and flexible\, to fit the needs of t
he coming clusters and\nfacilities.\nThree authentication methods have bee
n made available: Globus/GSI\,\nfor GRID-awareness\; SSH\, to allow using
a secure and very popular protocol\;\na fast identification method for int
rinsically secure situations.\nA mechanism to allow server access control
has been implemented\, allowing\nthe authorization schemes to be modelled according to
the needs.\nA lightweight mechanism for client/server method negoti
ation has been\nintroduced\, to adapt to heterogeneous situations.\nThe fo
rwarding of the authentication credentials in the PROOF system has been\nfully
automated.\nThe modularity of the code has been improved to ease maint
enance and reuse\nin new ROOT modules. In particular\, a plug-in library f
or the new Xrootd file\nserver daemon has been designed and implemented.\n
Authentication support has been extended to the main socket server\nclass\
, allowing a ROOT interactive session to be run as a full-featured daemon.\n\
nSecurity services have also been added to ROOT. The exchange of sensitive
\ninformation\, e.g. passwords\, has been secured.\nNew socket classes sup
porting SSL-secured connections have been provided for\nencryption of all
the information exchanged with the remote host.\n\nhttps://indico.cern.ch/
event/0/contributions/1294180/
LOCATION:Interlaken\, Switzerland Brunig 3
URL:https://indico.cern.ch/event/0/contributions/1294180/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Event Data Model in ATLAS
DTSTART;VALUE=DATE-TIME:20040929T145000Z
DTEND;VALUE=DATE-TIME:20040929T151000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294181@indico.cern.ch
DESCRIPTION:Speakers: Edward Moyse ()\nThe event data model (EDM) of the A
TLAS experiment is presented. For large \ncollaborations like the ATLAS e
xperiment\, common interfaces and data objects \nare a necessity to ensure easy
maintenance and coherence of the experiment's \nsoftware platform over
a long period of time. The ATLAS EDM improves \ncommonality across the det
ector subsystems and subgroups such as trigger\, test \nbeam reconstructio
n\, combined event reconstruction\, and physics analysis. The \nobject ori
ented approach in the description of the detector data allows the \npossib
ility to have one common raw data flow. Furthermore the EDM allows the \nu
se of common software between online data processing and offline \nreconst
ruction. One important component of the ATLAS EDM is a common track \ncla
ss which is used for combined track reconstruction across the innermost \n
tracking subdetectors and is also used for tracking in the muon detectors.
The \nstructure of the track object and the variety of track parameters a
re \npresented. For the combined event reconstruction a common particle c
lass is \nintroduced which serves as the interface between event reconstru
ction and \nphysics analysis.\n\nhttps://indico.cern.ch/event/0/contributi
ons/1294181/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294181/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Experience producing simulated events for the DZero experiment on
the SAM-Grid
DTSTART;VALUE=DATE-TIME:20040929T132000Z
DTEND;VALUE=DATE-TIME:20040929T134000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294182@indico.cern.ch
DESCRIPTION:Speakers: Rob KENNEDY (FNAL)\nMost of the simulated events for
the DZero experiment at Fermilab have been\nhistorically produced by the
"remote" collaborating institutions. One of the\nprincipal challenges
reported concerns the maintenance of the local software\ninfrastructure\,
which is generally different from site to site. As the community's understanding\nof
distributed computing over distributively owned and shared\nresources progresses\, it
becomes increasingly interesting to adopt grid\ntechnologies to address the production
of Monte Carlo events for high energy physics\nexperiments. The SAM-Grid is a software
system develop
ed at Fermilab\, which\nintegrates standard grid technologies for job and
information management with SAM\,\nthe data handling system of the DZero a
nd CDF experiments. During the past few\nmonths\, this grid system has bee
n tailored for the Monte Carlo production of DZero.\nSince the initial phas
e of deployment\, this experience has exposed an interesting\nseries of re
quirements to the SAM-Grid services\, the standard middleware\, the\nresou
rces and their management and to the analysis framework of the experiment.
As of\ntoday\, the inefficiency due to the grid infrastructure has been r
educed to as little\nas 1%. In this paper\, we present our statistics and
the "lesson learned" in running\nlarge high energy physics applications on
a grid infrastructure.\n\nhttps://indico.cern.ch/event/0/contributions/12
94182/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294182/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Simulations and Prototyping of the LHCb L1 and HLT Triggers
DTSTART;VALUE=DATE-TIME:20040929T134000Z
DTEND;VALUE=DATE-TIME:20040929T140000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294191@indico.cern.ch
DESCRIPTION:Speakers: T. Shears (University of Liverpool)\nThe Level 1 and
High Level triggers for the LHCb experiment are \nsoftware triggers which
will be implemented on a farm of about 1800 \nCPUs\, connected to the det
ector read-out system by a large Gigabit \nEthernet LAN with a capacity of
8 Gigabyte/s and some 500 Gigabit \nEthernet links. The architecture of
the readout network must be \ndesigned to maximise data throughput\, contr
ol data flow\, allow load \nbalancing between the nodes and be proven to p
erform at scale. \nIssues of stability\, robustness and fault tolerance ar
e vital to the \neffective operation of the trigger. We report on the deve
lopment and \nresults of two independent software simulations which allow
us to \nevaluate the performance of various network configurations and \nt
o specify the switch parameters. In order to validate the results \nof t
he simulation and to experimentally test the performance of the \nreadout
network in conditions similar to those expected at the LHC\, \nwe have con
structed a hardware prototype of the LHCb Level 1 and \nHigh Level trigger
s. This prototype allows a scaled evaluation of \nour design\, soak-testin
g\, and an evaluation of the overall system \nresponse to the deliberate i
ntroduction of faults. The performance \nof this test-bed is described and
the results compared to simulation.\n\nhttps://indico.cern.ch/event/0/con
tributions/1294191/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294191/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The ALICE High Level Trigger
DTSTART;VALUE=DATE-TIME:20040929T122000Z
DTEND;VALUE=DATE-TIME:20040929T124000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294192@indico.cern.ch
DESCRIPTION:Speakers: M. Richter (Department of Physics and Technology\, U
niversity of Bergen\, Norway)\nThe ALICE experiment at LHC will implement
a High Level Trigger \nSystem\, where the information from all major detec
tors are combined\, \nincluding the TPC\, TRD\, DIMUON\, ITS etc. The larg
est computing \nchallenge is imposed by the TPC\, requiring realtime patte
rn \nrecognition. The main task is to reconstruct the tracks in the TPC\,
\nand in a final stage combine the tracking information from all \ndetecto
rs. Based on the physics observables selective readout is done \nby gener
ation of a software trigger (High Level Trigger)\, capable of \nselecting
interesting (sub)events from the input data stream.\nDepending on the phys
ics program\, various processing options are \ncurrently being developed\, i
ncluding region of interest processing\, \nrejecting events based on softw
are trigger and data compression \nschemes. Examples of such triggers are
verification of candidates for \nhigh-pt dielectron heavy-quarkonium decay
s\, momentum filter to enhance \nthe open-charm signal\, high-pt jets sele
ction etc.\n \nTechnically the HLT system entails a very large scale proce
ssing farm \nwith about 1000 active processors. The input data stream is d
esigned \nfor 25 GB/sec. The system nodes will be interfaced to the local
data \nconcentrators of the DAQ system via optical fibers receiving a copy
of \nthe raw data.\nThe optical fibers will be connected to the PCI-bus o
f HLT nodes using \na custom PCI card. These cards provide a co-processor
functionality \nfor the first steps of the pattern recognition.\n \nThe ta
lk will give an overview of the HLT project and will focus on \nthe latest
results regarding efficient data compression and trigger \nperformance.\n
\nhttps://indico.cern.ch/event/0/contributions/1294192/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294192/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Track reconstruction in high density environment
DTSTART;VALUE=DATE-TIME:20040930T132000Z
DTEND;VALUE=DATE-TIME:20040930T134000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294194@indico.cern.ch
DESCRIPTION:Speakers: M. Ivanov (CERN)\nThe track finding and fitting algorithms in the
ALICE Time Projection Chamber (TPC) and Inner Tracking System (ITS)\,\nbased on Kalman
filtering\, are presented. The filtering algorithm is
able to cope with non-Gaussian noise \nand ambiguous measurements in high-
density environments. The tracking algorithm consists of two parts: \none
for the TPC and one for the prolongation into the ITS. The occupancy in th
e TPC can reach up to 40 %. \nUsually\, due to the overlaps\, a number of
points along the track are lost or significantly displaced.\nAt first the
clusters are found and the space points are reconstructed. The shape of a
cluster provides \ninformation about the overlap. An unfolding algorithm i
s applied for points with distorted shapes. Then\, the \nexpected space po
int error is estimated using information about the cluster shape and track
parameters. \nFurther\, the available information about local track overl
ap is used.\nIn the TPC-ITS matching\, the distance between the TPC and th
e ITS sensitive volume is rather large and the \ntrack density inside the
ITS is so high that the straightforward continuation of the tracking proce
dure is \nineffective. Using only chi2 minimisation there is a high probab
ility of assigning a wrong hit to the track.\nTherefore for each TPC track
a candidate tree of possible track prolongations in the ITS is built. Finally\, the
\nmost probable track candidates are chosen.\nThe approach has been implemented within
the ALICE simulation/reconstruction framework (ALIROOT)\, and \nthe algorithm's
efficiency has been estimated using the ALIROOT Monte Carlo
data.\n\nhttps://indico.cern.ch/event/0/contributions/1294
194/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294194/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Fault Tolerance and Fault Adaption for High Performance Large Scal
e Embedded Systems
DTSTART;VALUE=DATE-TIME:20040929T124000Z
DTEND;VALUE=DATE-TIME:20040929T130000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294196@indico.cern.ch
DESCRIPTION:Speakers: P. Sheldon (VANDERBILT UNIVERSITY)\nThe BTeV experim
ent\, a proton/antiproton collider experiment at the Fermi National\nAccel
erator Laboratory\, will have a trigger that will perform complex computat
ions\n(to reconstruct vertices\, for example) on every collision (as oppos
ed to the more\ntraditional approach of employing a first level hardware b
ased trigger). This\ntrigger requires large-scale fault adaptive embedded
software: with thousands of\nprocessors involved in performing event fil
tering in the trigger farm\, fault\nconditions must be given proper treatmen
t. Without fault mitigation\, it is\nconceivable that the trigger system
will experience failures at a high enough rate to\nhave an unacceptable ne
gative impact on BTeV's physics goals. The RTES (Real Time\nEmbedded Syste
ms) collaboration is a group of physicists\, engineers\, and computer\nsci
entists working to address the problem of reliability in large-scale clust
ers with\nreal-time constraints such as this. The resulting infrastructure mus
t be highly scalable\,\nverifiable\, extensible by users\, and dynamically
changeable. An initial prototype has\nbeen built to test design ideas and
methods for the final system\, and a larger scale\nand more ambitious pro
totype is currently under construction. I will discuss the\nlessons learn
ed from these prototypes as well as the overall design and deliverables\nf
or the BTeV experiment.\n\nhttps://indico.cern.ch/event/0/contributions/12
94196/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294196/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Virtual MonteCarlo : status and applications
DTSTART;VALUE=DATE-TIME:20040929T124000Z
DTEND;VALUE=DATE-TIME:20040929T130000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294197@indico.cern.ch
DESCRIPTION:Speakers: A. Gheata (CERN)\nThe current major detector simulat
ion programs\, i.e. GEANT3\, GEANT4 \nand FLUKA\, have largely incompatible
environments. This forces the \nphysicists wishing to make comparisons bet
ween the different \ntransport Monte Carlos to develop entirely different
programs. \nMoreover\, migration from one program to the other is usually
\nvery expensive\, in manpower and time\, for an experiment offline \nenvi
ronment\, as it implies substantial changes in the simulation \ncode. To s
olve this problem\, the ALICE Offline project has developed \na virtual in
terface to these three programs allowing their seamless \nuse without any
change in the framework\, the geometry description or \nthe scoring code.
Moreover a new geometrical modeller has been \ndeveloped in collaboration
with the ROOT team\, and successfully \ninterfaced to the three programs.
This allows the use of one \ndescription of the geometry\, which can be us
ed also during \nreconstruction and visualisation. The talk will describe
the present \nstatus and future plans for the Virtual Monte Carlo. It will
also \npresent the capabilities and performance of the geometrical modell
er.\n\nhttps://indico.cern.ch/event/0/contributions/1294197/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294197/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Usage of ALICE Grid middleware for medical applications
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294199@indico.cern.ch
DESCRIPTION:Speakers: P. Cerello (INFN Torino)\nBreast cancer screening pr
ograms require managing and accessing a \nhuge amount of data\, intrinsica
lly distributed\, as they are collected \nin different Hospitals. The deve
lopment of an application based on \nComputer Assisted Detection algorithm
s for the analysis of digitised \nmammograms in a distributed environment
is a typical GRID use case. \nIn particular\, AliEn (ALICE Environment) se
rvices\, whose development \nwas carried on by the ALICE Collaboration\, w
ere used to configure a \ndedicated Virtual Organisation\; a PERL-based in
terface to AliEn \ncommands allows the registration of new patients and ma
mmograms in \nthe AliEn Data Catalogue as well as queries to retrieve imag
es \nassociated to selected patients. The analysis of selected mammograms
\ncan be performed interactively\, making use of PROOF services\, or \ntak
ing advantage of the AliEn capabilities to generate "sub-jobs"\; \neach of
them analyzes the fraction of the selected sample stored on a \nsite\, an
d the results are merged. All the required functionality is \navailable: b
y the end of 2004 a working prototype is foreseen\, with \nan AliEn Client
installed in each of the Hospitals participating in \nthe INFN-funded MAG
IC-5 project.\nThe same approach will be applied in the near future in two
other \napplication areas: \n- Lung cancer screening\, equivalent to the
mammographic screening \nfrom the middleware point of view\, where Compute
r Assisted Detection \nalgorithms are being developed\; \n- Diagnosis of t
he Alzheimer disease\, where the application is \nintrinsically distribute
d: it should\, in fact\, compare the PET-\ngenerated image to a set of ref
erence images which are scattered on \nmany sites and merge the results.\n
\nhttps://indico.cern.ch/event/0/contributions/1294199/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294199/
END:VEVENT
BEGIN:VEVENT
SUMMARY:BaBar simulation production - A millennium of work in under a year
DTSTART;VALUE=DATE-TIME:20040929T120000Z
DTEND;VALUE=DATE-TIME:20040929T122000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294200@indico.cern.ch
DESCRIPTION:Speakers: D. Smith (STANFORD LINEAR ACCELERATOR CENTER)\nfor t
he BaBar Computing Group. \n \nThe analysis of the BaBar experiment requir
es many times the measured \ndata to be produced in simulation. This requ
irement has resulted in \none of the largest distributed computing project
s ever completed. \nThe latest round of simulation for BaBar started in e
arly 2003\, \ncompleted in early 2004\, and encompassed over 1 million
jobs\, and \nover 2.2 billion events. By the end of the production cycle
over 2 \ndozen different computing centers and nearly 1.5 thousand CPUs w
ere \nin constant use in North America and Europe. The whole effort was \
nmanaged from a central database at SLAC\, with real-time updates of \nthe
status of all jobs. Utilities were developed to tie together \nproductio
n with many different batch systems\, and with different \nneeds for secur
ity. \nThe produced data was automatically transferred to SLAC for use and
\ndistribution to analysis sites. The system developed to manage this \n
effort was a combination of web and database applications\, and \ncommand
line utilities. The technologies used to complete this \neffort along wit
h its complete scope will be presented.\n\nhttps://indico.cern.ch/event/0/
contributions/1294200/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294200/
END:VEVENT
BEGIN:VEVENT
SUMMARY:BaBar Bookkeeping - a distributed meta-data catalog of the BaBar e
vent store.
DTSTART;VALUE=DATE-TIME:20040930T151000Z
DTEND;VALUE=DATE-TIME:20040930T153000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294201@indico.cern.ch
DESCRIPTION:Speakers: D. Smith (STANFORD LINEAR ACCELERATOR CENTER)\nThe B
aBar experiment has migrated its event store from an \nObjectivity-based s
ystem to a system using ROOT-files\, and along \nwith this has developed a
new bookkeeping design. This bookkeeping \nnow combines data production\
, quality control\, event store \ninventory\, distribution of BaBar data t
o sites and user analysis in \none central place\, and is based on collect
ions of data stored as \nROOT-files. These collections are grouped into p
re-determined \ndatasets\, which define subsets of BaBar data to be used i
n \nanalysis. Datasets are updated automatically to contain at any \ntime
the most up-to-date BaBar data. Local mirrors \nof the bookkeeping dat
abase can be used with the data distribution \nfeatures to import collect
ions and maintain local event stores \ncontaining subsets of the available
BaBar data. The bookkeeping \nsystem is scalable and supports sites cont
aining all available data \nand hundreds of users down to the single user
with a laptop. Oracle \nand MySQL relational databases are supported in i
ts use\, and sites \ncan choose which to support. Database mirrors in th
e bookkeeping \nsystem can be accessed over the network\, which allows local
\ninventories to be browsed from remote sites. This bookkeeping system has be
en in \nactive use in BaBar since early this year\, and the scope of its u
se \nalong with technologies developed to keep it working will be \npresen
ted.\n\nhttps://indico.cern.ch/event/0/contributions/1294201/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294201/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tools for GRID deployment of CDF offline and SAM data handling sys
tems for Summer 2004 computing.
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294202@indico.cern.ch
DESCRIPTION:Speakers: A. Kreymer (FERMILAB)\nThe Fermilab CDF Run-II exper
iment is now providing official support for\nremote computing\, expanding
this to about 1/4 of the total CDF computing\nduring the Summer of 2004.\n
\nI will discuss in detail the extensions to CDF software distribution\nan
d configuration tools and procedures\, in support of CDF GRID/DCAF\ncomput
ing for Summer 2004. We face the challenge of unreliable networks\,\ntime
differences\, and remote managers with little experience with\nthis partic
ular software.\n\nWe have made the first deployment of the SAM data handli
ng system \noutside its original home in the D0 experiment.\nWe have deplo
yed to about 20 remote CDF sites.\nWe have created lightweight testing an
d monitoring tools\nto assure that these sites are in fact functional when
installed.\n\nWe are distributing and configuring both client code within
CDF code releases\,\nand the SAM servers to which the clients connect.\nP
rocedures which once took days are now performed in minutes.\nThese tools
can be used to install SAM servers for D0 and other experiments.\nNetworks
permitting\, we will give a live SAM installation demonstration.\n\nWe ha
ve separated the data handling components from the main CDF offline\ncode
releases by means of shared libraries\, permitting live upgrades\nto other
wise frozen code.\nWe now use a special 'development lite' release to ensu
re that all sites\nhave the latest tools available.\n\nWe have put substant
ial effort into revision control\,\nso that essentially all active CDF sit
es are running exactly the same code.\n\nhttps://indico.cern.ch/event/0/co
ntributions/1294202/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294202/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alibaba: A heterogeneous grid-based job submission system used by
the BaBar experiment
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294204@indico.cern.ch
DESCRIPTION:Speakers: M. Jones (Manchester University)\nThe BaBar experime
nt has accumulated many terabytes of data on \nparticle physics reactions\
, accessed by a community of hundreds of \nusers. \nTypical analysis tasks
are C++ programs\, individually written by the \nuser\, using shared temp
lates and libraries. The resources have \noutgrown a single platform and
a distributed computing model is \nneeded. The grid provides the natural
toolset. However\, in contrast \nto the LHC experiments\, BaBar has an exi
sting user community with an \nexisting non-Grid usage pattern\, and provi
ding users with an \nacceptable evolution presents a challenge.\n\nThe 'Alibaba'
system\, developed as part of the UK GridPP project\, \npro
vides the user with a familiar command line environment. It draws \non the
existing global file systems employed and understood by the \ncurrent use
r base. The main difference is that they submit jobs with \na 'gsub' comma
nd that looks and feels like the familiar'qsub'. \nHowever it enables them
to submit jobs to computer systems at\ndifferent institutions\, with mini
mal requirements on the remote \nsites. Web based job monitoring is also p
rovided. The problems and \nfeatures (the input and output sandboxes\, au
thentication\, data \nlocation) and their solutions are described.\n\nhttp
s://indico.cern.ch/event/0/contributions/1294204/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294204/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Comparative study of the power of goodness-of-fit algorithms
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294205@indico.cern.ch
DESCRIPTION:Speakers: M.G. Pia (INFN GENOVA)\nA Toolkit for Statistical Da
ta Analysis has been recently released. \nThanks to this novel software sy
stem\, for the first time an ample \nset of sophisticated algorithms for t
he comparison of data \ndistributions (goodness of fit tests) is made avai
lable to the High \nEnergy Physics community in an open source product. Th
e statistical \nalgorithms implemented belong to two sets\, for the compar
ison of \nbinned and unbinned distributions respectively\; they include t
he Chi-\nsquared Test\, the Kolmogorov-Smirnov Test\, the Kuiper Test\, th
e \nGoodman Test\, the Anderson-Darling Test\, the Fisz-Cramer-von Mises \
ntest\, the Tiku Test.\nSince the Toolkit provides the user with a wide choice
of algorithms\, it \nis important to evaluate them comparatively and to es
timate their \npower\, to provide guidance to the users about the selectio
n of the \nmost appropriate algorithm for a given use case.\nWe present a
study of the power of a variety of mathematical \nalgorithms implemented i
n the Toolkit. The study is performed by \nevaluating the behaviour of the
various tests in a set of well \nidentified use cases relevant to data an
alysis applications. To our \nknowledge\, such a comparative study of the
power of goodness of fit \nalgorithms has never been performed previously
.\n\nhttps://indico.cern.ch/event/0/contributions/1294205/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294205/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Carrot ROOT Apache Module
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294210@indico.cern.ch
DESCRIPTION:Speakers: V. Onuchin (CERN\, IHEP)\nCarrot is a scripting modu
le for the Apache webserver. Based on the \nROOT framework\, it has a numb
er of powerful features\, including the \nability to embed C++ code into H
TML pages\, run interpreted and \ncompiled C++ macros\, send and execute C
++ code on remote web \nservers\, browse and analyse the remote data locat
ed in ROOT files \nwith the web browser\, access and manipulate databases\
, and generate \ngraphics on-the-fly\, among many others.\nIn this talk we
will describe and demonstrate the main features of \nCarrot. We will also
discuss the future development of this module\nin context of GRID and int
egration with PROOF and xrootd/rootd.\n \nMore information about Carrot is
available from the Carrot website \nat: http://carrot.cern.ch\n\nhttps://
indico.cern.ch/event/0/contributions/1294210/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294210/
END:VEVENT
BEGIN:VEVENT
SUMMARY:MONARC2: A Processes Oriented\, Discrete Event Simulation Framewor
k for Modelling and Design of Large Scale Distributed Systems.
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294213@indico.cern.ch
DESCRIPTION:Speakers: I. Legrand (CALTECH)\nThe design and optimization of
the Computing Models for the future LHC experiments\,\nbased on the Grid
technologies\, requires a realistic and effective modeling and\nsimulation
of the data access patterns\, the data flow across the local and wide are
a\nnetworks\, and the scheduling and workflow created by many concurrent\,
data intensive\njobs on large scale distributed systems.\n\nThis paper pr
esents the latest generation of the MONARC (MOdels of Networked Analysis\n
at Regional Centers) simulation framework\, as a design and modelling tool
for large\nscale distributed systems applied to HEP experiments. A proces
s-oriented approach for\ndiscrete event simulation is used for describing
concurrent running programs\, as well\nas the stochastic arrival patterns
that characterize how such systems are used. The\nsimulation engine is bas
ed on Threaded Objects (or Active Objects)\, which offer great\nflexibilit
y in simulating the complex behavior of distributed data processing\nprogr
ams. The engine provides an appropriate scheduling mechanism for the Activ
e\nObjects with efficient support for interrupts.\nThe framework provides a
complete set of basic components (processing nodes\, data\nservers\, netw
ork components) together with dynamically loadable decision units\n(schedu
ling or data replication modules) for easily building complex Computing Mo
del\nsimulations.\nExamples of simulating complex data processing systems\
, specific to the LHC\nexperiments (production tasks associated with data
replication and interactive\nanalysis on distributed farms) are presented
\, together with the way the framework is used to\ncompare different decision-making
algorithms or to optimize the overall Grid\narchitecture.\n\nhttps://indi
co.cern.ch/event/0/contributions/1294213/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294213/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Managing software licences for a large research laboratory
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294214@indico.cern.ch
DESCRIPTION:Speakers: N. Hoeimyr (CERN IT)\nThe Product Support (PS) group
of the IT department at CERN distributes and \nsupports more than one hun
dred different software packages\, ranging from tools\n for computer aided
design\, field calculations\, mathematical and structural \nanalysis to s
oftware development. Most of these tools\, which are used on \na variety o
f Unix and Windows platforms by different user populations\, are \ncommerc
ial packages requiring a licence. The group is also charged with \nlicens
e negotiations with the software vendors.\n\nKeeping track of the large number
and variety of licences is no easy task\, so in \norder to provide a more
automated and more efficient service\, the PS group has \ndeveloped a dat
abase system to both track detailed licence configurations \nand to monito
r their use. The system is called PSLicmon (PS Licence \nMonitor) and
is based on an earlier development from the former CE group.\n\nPSLicmon c
onsists of four main components: report generation\, data loader\, \nOracl
e product database and a PHP-based Web-interface. The license log \nparser
/loader is implemented in Perl and loads reports from the different \nlice
nse managers into the Oracle database. The database contains information \
nabout products\, licenses and suppliers and is linked to CERN's human res
ource \ndatabase. The web-interface allows for on the fly generation of st
atistics \nplots as well as data entry and updates. The system also includ
es an alarm \nsystem for licence expiry.\n\nThanks to PSLicmon\, the suppo
rt team is able to better match licence \nacquisitions with the diverse nee
ds of its user community\, and to be in \ncontrol of migration and phaseou
t scenarios between different products \nand/or product versions. The tool
has proved to be a useful aid when making \ndecisions regarding product s
upport policy and licence acquisitions\, in \nparticular ensuring the provi
sion of the correct number of often expensive \nsoftware licences to match
CERN's needs.\n\nhttps://indico.cern.ch/event/0/contributions/1294214/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294214/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The introduction to BES computing environment
DTSTART;VALUE=DATE-TIME:20040927T132000Z
DTEND;VALUE=DATE-TIME:20040927T134000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294216@indico.cern.ch
DESCRIPTION:Speakers: G. CHEN (COMPUTING CENTER\,INSTITUTE OF HIGH ENERGY
PHYSICS\,CHINESE ACADEMY OF SCIENCES)\nBES is an experiment on Beijing Ele
ctron-Positron Collider (BEPC). \nThe BES computing environment consists of a PC/Linux
cluster and mainly relies on free \nsoftware. OpenPBS and Ganglia are used as the job
scheduling and monitoring systems. With \nhelp from the CERN IT Division\, CASTOR was
implemented as the storage management system. \nBEPC is being upgraded and its
luminosity will increase one hundred times compared to the\ncurrent machine. The data
produced by the new BES-III detector will be abou
t 700 \nTerabytes per year. To meet the computing demand\, we proposed a s
olution based on \nPC/Linux/Cluster and SAN technology. CASTOR will be use
d to manage the storage \nresources of SAN. We started to develop a graphi
cal interface for CASTOR. Some tests \non data transmission performance of
the SAN environment were carried out. The result \nshows that the I/O performance of SAN
is better than that of traditional storage \nconnection methods including IDE and SCSI\,
and that it can satisfy the BESIII experiment's \ndemand fo
r data processing.\n\nhttps://indico.cern.ch/event/0/contributions/1294216
/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294216/
END:VEVENT
BEGIN:VEVENT
SUMMARY:CERN's openlab for Datagrid applications
DTSTART;VALUE=DATE-TIME:20040929T132000Z
DTEND;VALUE=DATE-TIME:20040929T134000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294220@indico.cern.ch
DESCRIPTION:Speakers: S. Jarp (CERN)\nFor the last 18 months CERN has coll
aborated closely with several industrial partners\nto evaluate\, through t
he opencluster project\, technology that may (and hopefully\nwill) play a
strong role in the future computing solutions\, primarily for LHC but\npos
sibly also for other HEP computing environments. Unlike conventional field
testing\nwhere solutions from industry are evaluated rather independently
\, the openlab\nprinciple is based on active collaboration between all par
tners\, with the common goal\nof constructing a coherent system.\nThe talk
will discuss our experience to date with the following hardware:\n- 64-
bit computing (in our case represented by the Itanium processor). This \nw
ill also\ninclude the porting of applications and Grid software to 64 bits
.\n- Rack mounted servers\n- The use of 10 Gbps Ethernet for both LA
N and WAN connectivity\n- An iSCSI-based Storage System that promises t
o scale to Petabyte dimensions\n- The use of 10 Gbps Infiniband as a cl
uster interconnect\nOn the software side we will review our experience wit
h the latest grid-enabled\nrelease of Oracle\, the so-called release "10g"
.\nThe talk will review the results obtained so far\, either in stand-alon
e tests or as\npart of the larger LCG testbed\, and it will describe the p
lans for the future in this\nthree-year collaboration with industry.\n\nht
tps://indico.cern.ch/event/0/contributions/1294220/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294220/
END:VEVENT
BEGIN:VEVENT
SUMMARY:CMS Detector Description: New Developments
DTSTART;VALUE=DATE-TIME:20040929T151000Z
DTEND;VALUE=DATE-TIME:20040929T153000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294222@indico.cern.ch
DESCRIPTION:Speakers: M. Case (UNIVERSITY OF CALIFORNIA\, DAVIS)\nThe CMS
Detector Description Database (DDD) consists of a C++ API and an XML based
\ndetector description language. DDD is used by the CMS simulation (OSCAR)
\,\nreconstruction (ORCA)\, and visualization (IGUANA) as well as by test bea
m software that\nrelies on those systems. The DDD is a sub-system within t
he COBRA framework of the\nCMS Core Software. Management of the XML is cur
rently done using a separate Geometry\nproject in CVS.\n\nWe give an overv
iew of the DDD integration and report on recent developments\nconcerning d
etector description in CMS software:\n\n* The ability of client software t
o describe sub-detectors by providing an algorithm\nplug-in in C++ based o
n SEAL plug-in facilities. A typical algorithm plug-in makes\nuse of the D
DD API to describe detector properties. Through the API seamless access\nt
o data defined via the XML description language is ensured.\n\n* An Oracle
schema was recently developed and the database populated by a DDD\napplic
ation. The geometrical structure of the detector is seen as a skeleton to
which\nconditions or configuration data can be attached.\n\n* A C++ stream
ing mechanism to output the geometry as binary files was developed.\nThis
representation can be read into memory much more rapidly than the XML file
s can\nbe parsed.\n\nThe DDD API shields clients from each of the possible
input sources. Even the\nsimultaneous use of several different input sour
ces is possible through various\nconfiguration options in the framework CO
BRA.\n\nhttps://indico.cern.ch/event/0/contributions/1294222/
LOCATION:Interlaken\, Switzerland Brunig 1 + 2
URL:https://indico.cern.ch/event/0/contributions/1294222/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Future processors: What is on the horizon for HEP farms?
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294224@indico.cern.ch
DESCRIPTION:Speakers: S. Jarp (CERN)\nIn 1995 I predicted that the dual-pr
ocessor PC would start invading HEP computing and\na couple of years later
the x86-based PC was omnipresent in our computing facilities.\nToday\, we
cannot imagine HEP computing without thousands of PCs at the heart.\nThis
talk will look at some of the reasons why we may one day be forced to lea
ve this\nsweet-spot. This would not be because we (the HEP community) want
to\, but rather\nbecause other market forces may pull in different direct
ions. Amongst such forces\, I\nwill review the new generation of powerful
game consoles where IBM's Power processor\nis currently making strong inro
ads. Then I will look at the huge mobile market where\nlow-powered process
ing rules rather than power-hungry DP Xeon/Xeon-like processors\,\nand thi
rdly I will explore in my talk the promise of enterprise servers with a la
rge\nnumber of processors on each die (so-called Core Multi-Processors). F
or all the\nscenarios\, we must\, of course\, keep in mind that HEP can on
ly move when the\nprice-performance ratio is right.\n\nhttps://indico.cern
.ch/event/0/contributions/1294224/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294224/
END:VEVENT
BEGIN:VEVENT
SUMMARY:CASTOR: Operational issues and new Developments
DTSTART;VALUE=DATE-TIME:20040929T143000Z
DTEND;VALUE=DATE-TIME:20040929T145000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294225@indico.cern.ch
DESCRIPTION:Speakers: J-D. Durand (CERN)\nThe Cern Advanced STORage (CASTO
R) system is a scalable high throughput\nhierarchical storage system devel
oped at CERN. CASTOR was first deployed\nfor full production use in 2001 a
nd has expanded to now manage around two\nPetaBytes and almost 20 million
files. CASTOR is a modular system\,\nproviding a distributed disk cache\,
a stager\, and a back end tape archive\,\naccessible via a global logical
name-space.\n\nThis paper focuses on the operational issues of the system
currently in\nproduction\, and first experiences with the new CASTOR stage
r which has\nundergone a significant redesign in order to cope with the da
ta handling\nchallenges posed by the LHC\, which will be commissioned in 2
007.\n\nThe design target for the new stager was to scale to another order
of\nmagnitude above the current CASTOR\, namely to be able to sustain pea
k\nrates of the order of 1000 file open requests per second for a PetaByte
\ndisk pool. The new developments have been inspired by the problems whic
h\narose managing massive installations of commodity storage hardware. The
\nfarming of disk servers poses new challenges to the disk cache managemen
t:\nrequest scheduling\; resource sharing and partitioning\; automated\nco
nfiguration and monitoring\; and fault tolerance of unreliable hardware.
\n\nManagement of the distributed component-based CASTOR system across a
large\nfarm provides an ideal example of the driving forces for the
development\nof automated management suites. The Quattor and Lemon
frameworks naturally\naddress CASTOR's operational requirements\, and we
will conclude by\ndescribing their deployment on the mass storage systems
at CERN.\n\nhttps://in
dico.cern.ch/event/0/contributions/1294225/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294225/
END:VEVENT
BEGIN:VEVENT
SUMMARY:dCache\, LCG Storage Element and enhanced use cases
DTSTART;VALUE=DATE-TIME:20040929T145000Z
DTEND;VALUE=DATE-TIME:20040929T151000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294227@indico.cern.ch
DESCRIPTION:Speakers: P. Fuhrmann (DESY)\nThe dCache software system has
been designed to manage a huge number of individual disk storage nodes and
let them\nappear under a single file system root. Besides a variety\nof
other features\, it supports the GridFtp dialect\, implements\nthe Storage
Resource Manager interface (SRM V1) and can be linked\nagainst the CERN
GFAL software layer. These abilities make\ndCache a perfect Storage Element
in the context of LCG and\npossibly of future grid initiatives as
well.\nDuring the last year\, dCache has been deployed at dozens
of\nTier-I and Tier-II centers for the CMS and CDF experiments in\nthe US
and Europe\, including Fermilab\, Brookhaven\, San Diego\,\nKarlsruhe and
CERN. The largest implementation\, the CDF system\nat FERMI\, provides 150
TeraBytes of disk space and delivers up\nto 50 TeraBytes/day to its
clients.\nSites using the LCG dCache distribution more or less
operate\nthe cache as a black box\, and little knowledge is\navailable
about customization and enhanced features.\nThis presentation is therefore
intended to make non-dCache users\ncurious and to enable dCache users to
better integrate dCache into\ntheir site-specific environment. Among many
other topics\,\nthe paper will touch on the ability of dCache to cooperate
closely\nwith tertiary storage systems\, such as Enstore\, TSM and HPSS.
It\nwill describe the way dCache can be configured to attach\ndifferent
pool nodes to different user groups while letting them all\nuse the same
set of fall-back pools. We will explain how dCache\ntakes care of dataset
replication\, either by configuration or by\nautomatic detection of data
access hot spots. Finally we will\nreport on ongoing development
plans.\n\nhttps://indico.cern.ch/event/0/contributions/1294227/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294227/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Network Architecture: lessons from the past\, vision for the futur
e
DTSTART;VALUE=DATE-TIME:20040930T100000Z
DTEND;VALUE=DATE-TIME:20040930T103000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294229@indico.cern.ch
DESCRIPTION:Speakers: F. Fluckiger (CERN)\nThe Architectural Principles of
the Internet have dominated the past decade. \nOrthogonal to the telecomm
unications industry principles\, they dramatically changed \nthe networkin
g landscape because they relied on iconoclastic ideas. First\, the \nInter
net end-to-end principle\, which stipulates that the network should interv
ene \nminimally on the end-to-end traffic\, pushing the complexity to the
end-systems. \nSecond\, the ban of centralized functions: all the Internet
techniques (routing\, DNS\, \nmanagement) are based on distributed\, dece
ntralized mechanisms. Third\, the absolute \ndomination of connectionless
(stateless) protocols (as with IP\, HTTP).\n\nHowever\, when facing new r
equirements: multimedia traffic\, security\, Grid \napplications\, these p
rinciples appear sometimes as architectural barriers. \nMultimedia require
s QoS guarantees\, but stateless systems are not good at QoS. \nSecurity r
equires active\, intelligent networks\, but dumb routers or plain end-to-e
nd \nmail systems are insufficient. Grid applications require middleware o
verlay \nnetworks\, often with centralized functions.\n\nAttempts to overc
ome these deficiencies may lead to excessively complicated hybrid \nsoluti
ons\, distorting the initial principles (the QoS Pandora box). Middleware
\nsolutions are sometimes difficult to deploy (e.g for large scale PKI \nd
eployment). “Lambda on-demand” technologies are conceptually nothing e
lse than old \nswitched circuits\, that we never managed to satisfactorily
integrate with IP \nnetworks. \n\nWhere is all this going? To help formin
g a vision of the future\, the paper will \nrefer to several observations
that the author has formulated over the past 30 years: \nthe “breathing
law” (a succession of decentralization and recentralization phases)\, \n
the perpetual and oscillating mismatch of the bandwidth offer-demand\, th
e \nconceptual antagonisms between resource level and complexity\, between
scaling and \nQoS.\n\nhttps://indico.cern.ch/event/0/contributions/129422
9/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294229/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Don Quijote - Data Management for the ATLAS Automatic Production S
ystem
DTSTART;VALUE=DATE-TIME:20040927T120000Z
DTEND;VALUE=DATE-TIME:20040927T122000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294230@indico.cern.ch
DESCRIPTION:Speakers: M. Branco (CERN)\nAs part of the ATLAS Data Challeng
es 2 (DC2)\, an automatic production system was\nintroduced and with it a
new data management component.\n\nThe data management tools used for previ
ous Data Challenges were built as separate\ncomponents from the existing G
rid middleware. These tools relied on a database of its\nown which acted a
s a replica catalog.\n\nWith the extensive use of Grid technology expected
for the most part of the DC2\nproduction\, no longer can a data managemen
t tool be independent of the Grid\nmiddleware. Each Grid relies on its own
replica catalog and not on an ATLAS specific\ntool.\n\nATLAS DC will atte
mpt to use uniformly the resources provided by three Grids:\nNorduGrid\, U
S Grid3 and LCG-2. Lecagy system will be supported as well.\n\nThe propose
d solution was to build a data management proxy system which consists of a
\ncommon high-level interface\, whose implementation depends on each Grid'
s replica and\nmetadata catalog as well as the storage backend (mainly "cl
assic" GridFTP servers and\nSRM).\n\nDon Quijote provides management of re
plicas in a service-oriented architecture\,\nacross the several "flavours
" of Grid middleware used by ATLAS DC.\n\nWith a higher-level interface co
mmon across several Grids (and legacy systems) a user\n(such as the new au
tomatic production system) can seamlessly manage replicas\nindependently o
f their hosting environment. Given the services-based architecture\, a\nli
ghtweight command line tool is capable of interacting uniformly within eac
h Grid\nand between Grids (e.g. moving files from LCG-2 to US Grid 3 while
maintaining\nattributes such as the Global Unique Identifier).\n\nhttps:/
/indico.cern.ch/event/0/contributions/1294230/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294230/
END:VEVENT
BEGIN:VEVENT
SUMMARY:FAMOS\, a FAst MOnte-Carlo Simulation for CMS
DTSTART;VALUE=DATE-TIME:20040927T122000Z
DTEND;VALUE=DATE-TIME:20040927T124000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294231@indico.cern.ch
DESCRIPTION:Speakers: F. Beaudette (CERN)\nAn object-oriented FAst MOnte-C
arlo Simulation (FAMOS) has recently been developed\nfor CMS to allow rapi
d analyses of all final states envisioned at the LHC while\nkeeping a high
degree of accuracy for the detector material description and the\nrelated
particle interactions. For example\, the simulation of the material effec
ts in\nthe tracker layers includes charged particle energy loss by ionizat
ion and multiple\nscattering\, electron Bremsstrahlung and photon conversi
on. The particle showers are\ndeveloped in the calorimeters with an emulat
ion of GFLASH\, finely interfaced with the\ncalorimeter geometry (e.g.\, c
rystal positions\, cracks\, rear leakage\, etc).\n\nAs the same software f
ramework is used for FAMOS and ORCA (the full Object-oriented\nReconstruct
ion software for CMS Analysis)\, the various Physics Objects (electrons\,\
nphotons\, muons\, taus\, jets\, missing ET\, charged particle tracks\, ..
.) can be accessed\nwith a similar code with both fast and full simulation
\, thus allowing any analysis\nalgorithm to be transported from FAMOS to O
RCA (and later\, to data analysis and DST\nreading) or vice-versa without
any additional work.\n\nAltogether\, a gain of about a factor of a hundred
in CPU time can be achieved with respect to the\nfull simulation\, with
little loss i
n precision.\n\nhttps://indico.cern.ch/event/0/contributions/1294231/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294231/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The TAG Collector - A Tool for Atlas Code Release Management
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294233@indico.cern.ch
DESCRIPTION:Speakers: S. Albrand (LPSC)\nThe Tag Collector is a web interf
aced database application for release management.\nThe tool is tightly cou
pled to CVS\, and also to CMT\, the configuration management\ntool. Develo
pers can interactively select the CVS tags to be included in a build\, and
\nthe complete build commands are produced automatically. Other features a
re provided\nsuch as verification of package CMT requirements files\, and
direct links to the\npackage documentation\, making it a useful tool for a
ll ATLAS users.\nThe software for the Atlas experiment contains about 1 MS
LOC. It is organized in over\n50 container packages containing about 500 s
ource code packages. One or several\ndevelopers maintain each package. ATL
AS developers are widely distributed \ngeographically.\nThe Tag Collector
was designed and implemented during the summer of 2001\, in response\nto a
near crisis situation. It has been in use since September 2001. Until thi
s time\nthe ATLAS librarian constructed a build of the software release af
ter a cascade of\ne-mails from developers communicating the correct CVS
code\nrepository version tag of their respective packages. This was subjec
t to all sorts of human errors\, and\ninefficient in our multi-time zone e
nvironment. In addition\, it was difficult to\nmanage the contents of a re
lease. It was all too easy for a prolific developer to\nintroduce a well-i
ntentioned change in his package just before a build\, often with
\nunsuspected side effects. Developers were also asking for regular
and\nfrequent developer builds.\nThe tool has proved extremely
successful\, and featu
res that are outside the scope of\nthe original design have been requested
. Requirements for a new version were\ncollected during 2003\, culminating
in a formal review in December 2003. The new\nversion is currently being
designed. It will be more flexible and easier to maintain.\n\nhttps://indi
co.cern.ch/event/0/contributions/1294233/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294233/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Architecture of the ZEUS Second Level Global Tracking Trigger
DTSTART;VALUE=DATE-TIME:20040927T122000Z
DTEND;VALUE=DATE-TIME:20040927T124000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294515@indico.cern.ch
DESCRIPTION:Speakers: M. Sutton (UNIVERSITY COLLEGE LONDON)\nThe architect
ure and performance of the ZEUS Global Track Trigger \n(GTT) are described
. Data from the ZEUS silicon Micro Vertex\ndetector's HELIX readout chips\
, corresponding to 200k channels\, are\ndigitized by 3 crates of ADCs\, and
PowerPC VME board computers push\ncluster data for second level trigger
processing and strip data for\nevent building via Fast and GigaEthernet
network connections.\nAdditional tracking information from the central
tracking chamber\nand forward straw tube tracker is interfaced into the 12
dual CPU\nPC farm of the global track trigger\, where track and vertex
finding\nis performed by separately threaded algorithms.\nThe system is
data driven at the ZEUS first level trigger rates.\n\nhttps://indico.cern.c
h/even
t/0/contributions/1294515/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294515/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tier-1 and Tier-2 Real-time Analysis experience in CMS Data Challe
nge 04
DTSTART;VALUE=DATE-TIME:20040930T145000Z
DTEND;VALUE=DATE-TIME:20040930T151000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294236@indico.cern.ch
DESCRIPTION:Speakers: N. De Filippis (UNIVERSITA' DEGLI STUDI DI BARI AND
INFN)\nDuring the CMS Data Challenge 2004 a realtime analysis was attempte
d \nat INFN and PIC Tier-1 and Tier-2s in order to test the ability of \nt
he instrumented methods to quickly process the data.\n\nSeveral agents and
automatic procedures were implemented to perform \nthe analysis at the Ti
er-1/2 synchronously with the data transfer \nfrom Tier-0 at CERN. The sys
tem was implemented in the Grid LCG-2 \nenvironment and allowed on-the-fly
job preparation and subsequent \nsubmission to the Resource Broker as new
data came along.\nRunning jobs accessed data from the Storage Elements
through POOL via\nremote file protocol\, whenever possible\, or copied them
locally with\ngridftp.\nJob monitoring and bookkeeping was performed us
ing BOSS.\nDetails of the procedures adopted to run the analysis jobs and
the \nexpected results are described.\n\nAn evaluation of the ability of t
he system to maintain an analysis \nrate at Tier-1 and Tier-2 comparable wi
th the data transfer rate is \nalso presented.\nThe results on the analysi
s timeline\, the statistics of submitted \njobs\, the overall efficiency o
f the GRID services and the overhead \nintroduced by the agents/procedures
are reported. Performance and \npossible bottlenecks of the whole proced
ure are discussed.\n\nhttps://indico.cern.ch/event/0/contributions/1294236
/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294236/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Panoramix
DTSTART;VALUE=DATE-TIME:20040930T134000Z
DTEND;VALUE=DATE-TIME:20040930T140000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294237@indico.cern.ch
DESCRIPTION:Speakers: G B. Barrand (CNRS / IN2P3 / LAL)\nPanoramix is an
event display for LHCb. LaJoconde is an interactive environment over
DaVinci\,\nthe analysis software layer for LHCb. We shall present the
global\ntechnological choices behind these two packages: GUI\, graphics\,
scripting\,\nplotting. We shall present the connection to the framework
(Gaudi)\nand how we can integrate other tools like hippodraw. We shall
present\nthe overall capabilities of these systems and their current
status.\nWe shall also outline how a good part of the chosen technologies
may\nbe reused to build the same kind of interactive environments for
ATLAS\n(the LAL Agora prototype).\n\nhttps://indico.cern.ch/event/0/contrib
utions/1294237/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294237/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Developing & Managing a large Linux farm - the Brookhaven Experien
ce
DTSTART;VALUE=DATE-TIME:20040927T122000Z
DTEND;VALUE=DATE-TIME:20040927T124000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294241@indico.cern.ch
DESCRIPTION:Speakers: Tomasz WLODEK (BNL)\nThis presentation describes the
experiences and the \nlessons learned by the RHIC/ATLAS Computing Facilit
y \n(RACF) in building and managing its 2\,700+ CPU (and growing) \nLinux
Farm over the past 6+ years. We describe how \nhardware cost\, end-user ne
eds\, infrastructure\, \nfootprint\, hardware configuration\, vendor selec
tion\, \nsoftware support and other considerations have \nplayed a role in
the process of steering the growth \nof the RACF Linux Farm\, and how the
y help shape our \nfuture hardware purchase decisions. A detailed
description is also provided \nof the challenges encountered and of the
solutions used in managing \nand configuring a large\, heterogeneous Linux
Farm \n(2\,700+ CPUs) in the midst of an ongoing transition \nfrom a
generally local resource to a global\, \nGrid-aware resource within a
larger\, distributed \ncomputing environment.\n\nhttps://indico.cern.ch/ev
ent/0/
contributions/1294241/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294241/
END:VEVENT
BEGIN:VEVENT
SUMMARY:64-Bit Opteron systems in High Energy and Astroparticle Physics
DTSTART;VALUE=DATE-TIME:20040929T130000Z
DTEND;VALUE=DATE-TIME:20040929T132000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294242@indico.cern.ch
DESCRIPTION:Speakers: S. Wiesand (DESY)\n64-Bit commodity clusters and
farms based on AMD technology have meanwhile\nbeen proven to deliver high
computing power in many scientific applications.\nThis report first gives
a short introduction to the special features of the\namd64 architecture
and the characteristics of two-way Opteron systems.\nThen results from
measurin
g the performance and the behavior of such systems in\nvarious Particle Ph
ysics applications as compared to the classical 32-Bit systems\nare prese
nted. The investigations cover analysis tools like ROOT\, Astrophysics\ns
imulations based on CORSIKA and event reconstruction programs. Another
\nfield of investigation is parallel high-performance clusters for Lattice
\nQCD calculations\, and n-loop calculations based on perturbative metho
ds in quantum field\ntheory using the formula manipulation program FORM. \
nIn addition to the performance results the compatibility of 32- and 64-Bi
t\narchitectures and Linux operating system issues\, as well as the impact
on fabric\nmanagement are discussed. \nIt is shown that for most of the c
onsidered applications the recently available\n64-bit commodity computers
from AMD are a viable alternative to comparable 32-Bit\nsystems.\n\nhttps
://indico.cern.ch/event/0/contributions/1294242/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294242/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Grid2003 Monitoring\, Metrics\, and Grid Cataloging System
DTSTART;VALUE=DATE-TIME:20040930T134000Z
DTEND;VALUE=DATE-TIME:20040930T140000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294243@indico.cern.ch
DESCRIPTION:Speakers: B K. Kim (UNIVERSITY OF FLORIDA)\, M. Mambelli (Univ
ersity of Chicago)\nGrid computing involves the close coordination of many
different sites which offer \ndistinct computational and storage resource
s to the Grid user community. The \nresources at each site need to be moni
tored continuously. Static and dynamic site \ninformation needs to be prese
nted to the user community in a simple and efficient \nmanner.\n\nThis pap
er will present both the design and implementation of the Grid3 monitoring
\ninfrastructure and the design details and the functionalities of a new
application \ncalled the Gridcat.\n\nThe Grid3 monitoring architecture fol
lows a user-oriented design that specifies \nstandard metrics and uses dif
ferent underlying monitoring tools to collect them and \nbuild a very dive
rsified framework. In the monitoring framework we integrated \nexisting to
ols\, extended their functionality and developed original new tools. The \
nmain tools used include ACDC Job Monitoring from University of Buffalo\,
Ganglia\, a \npreliminary version of Gridcat\, Globus MDS\, the University
of Chicago Grid telemetry \nMDViewer\, and US CMS MonALISA. From the coll
ected data\, information of interest for the VOs participating in the Grid
\nis extracted\, for example the resources provided and used by all VOs
and\nthe jobs submitted by each VO.\n\nThe Gridcat shows site status using
a web int
erface that is simple and powerful \nenough to determine the site's readin
ess to accept grid applications by collecting \nand storing dynamic site i
nformation in a database. The status information displayed \nby the protot
ype Gridcat was used extensively by the Grid2003 project as a \ncoordinati
on point for the grid operations center.\n\nhttps://indico.cern.ch/event/0
/contributions/1294243/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294243/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The BIRN Project: Distributed Information Infrastructure and Mult
i-scale Imaging of the Nervous System (BIRN = Biomedical Informatics Resea
rch Network)
DTSTART;VALUE=DATE-TIME:20040928T090000Z
DTEND;VALUE=DATE-TIME:20040928T093000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294244@indico.cern.ch
DESCRIPTION:Speakers: M. Ellisman (National Center for Microscopy and Imag
ing Research of the Center for Research in Biological Systems - The Depart
ment of Neurosciences\, University of California San Diego School of Medic
ine - La Jolla\, California - USA)\nThe grand goal in neuroscience researc
h is to understand how the interplay of \nstructural\, chemical and electr
ical signals in nervous tissue gives rise to \nbehavior. Experimental adv
ances of the past decades have given the individual \nneuroscientist an in
creasingly powerful arsenal for obtaining data\, from the level \nof molec
ules to nervous systems. Scientists have begun the arduous and challenging
\nprocess of adapting and assembling neuroscience data at all scales of r
esolution and \nacross disciplines into computerized databases and other e
asily accessed sources. \nThese databases will complement the vast struct
ural and sequence databases created \nto catalogue\, organize and analyze
gene sequences and protein products. The general \npremise of the neurosci
ence goal is simple\; namely that with "complete" knowledge of \nthe genom
e and protein structures accruing rapidly we next need to assemble an \nin
frastructure that will facilitate an understanding of how
\nfunctional complexes operate in their cell and tissue contexts. Our U.C
. San Diego-\nbased group is leading several interdisciplinary projects ar
ound this grand \nchallenge. We are evolving a shared infrastructure that
allows for mapping \nmolecular and cellular brain anatomy in the context
of a shared multi-scale mouse \nbrain atlas system\, the Cell-Centered Dat
abase (CCDB). Complementary to these \nneuroinformatics activities at the
National Center for Microscopy and Imaging \nResearch in San Diego (NCMIR
) we have developed new molecular labeling methods \ncompatible with advan
ced ultra-wide field laser-scanning light microscopy and multi-\nresolutio
n 3 dimensional electron microscopy. These new labeling and imaging \nmet
hods are being used to populate the CCDB\, using as a driver mouse models
of \nneurological and neuropsychiatric disorders. The informatics framewor
k is \nfacilitating cooperative work by distributed teams of scientists en
gaged in focused \ncollaborations aimed to deliver new fundamental underst
anding of structures on the \nscale of 1 nm3 to 10's of µm3\, a dimension
al range that encompasses macromolecular \ncomplexes\, organelles\, and mu
lti-component structures like synapses and the cellular \ninteractions in
the context of the complex organization of the entire nervous \nsystem. Th
is is a unique and pioneering effort that links new neuroscience \ntechniq
ues and revolutionary advances in information technology. Database \nfede
ration tools are critical to the scalability of these efforts and future \
ndevelopment plans will be described in the context of the NIH-supported p
roject to \ncreate a new framework for collaboration and data integration
in the Biomedical \nInformatics Research Network (BIRN). BIRN is the lead
ing example of a virtual \ndatabase effort that is using the challenge of
federating multi-scale distributed \ndata about the nervous systems to hel
p guide the evolution of an International \nCyberinfrastructure serving al
l science disciplines\, including biomedicine.\n\nhttps://indico.cern.ch/e
vent/0/contributions/1294244/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294244/
END:VEVENT
BEGIN:VEVENT
SUMMARY:LEXOR\, the LCG-2 Executor for the ATLAS DC2 Production System
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294387@indico.cern.ch
DESCRIPTION:Speakers: D. Rebatto (INFN - MILANO)\nIn this paper we present
an overview of the implementation of the LCG \ninterface for the ATLAS pr
oduction system. In order to take advantage \nof the features provided by
DataGRID software\, on which LCG is based\, \nwe implemented a Python
module\, seamlessly integrated into the Workload \nManagement System\,
which can be used as an object-oriented API to the \nsubmission services.
On top of it we implemented Lexor\, an executor \ncomponent conforming to
the pull/pu
sh model designed by the DC2\nproduction system team. It pulls job descrip
tions from the supervisor \ncomponent and uses them to create job objects\
, which in turn are \nsubmitted to the Grid. All the typical Grid operatio
ns (match-making \nwith respect to input data location\, registration of o
utput data in \nthe replica catalog\, workload balancing) are performed by
the \nunderlying middleware\, while interactions with ATLAS metadata \nca
talog and the production database are handled through the integration \nwith th
e Data Management System (Don Quijote) client module and via \nXML message
s to the production supervisor (Windmill).\n\nhttps://indico.cern.ch/event
/0/contributions/1294387/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294387/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Visualisation of the ROOT geometries (TVolume and TGeoVolume) wi
th Coin-based model.
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294249@indico.cern.ch
DESCRIPTION:Speakers: A. Kulikov (Joint Institute for Nuclear Research\, D
ubna\, Russia.)\nUsing modern 3D visualization software and hardware to
represent\nthe object models of HEP detectors makes it possible to create
impressive\npictures of events and detailed views of the detectors\,
facilitating\nthe design\, simulation and data analysis and the
representation of the\nhuge amount of information flooding modern HEP
experiments. In this\npaper we present the work done by members of the STAR
collaboration\nfrom the Laboratory of High Energy Physics\, JINR Dubna\,
devoted to the\nvisualisation of the STAR detector geometry. Initially the
detector\ngeometry is described by means of AGE\, a specific geometry
specification\nlanguage\, and it can be converted to either a TVolume or a
TGeoVolume\ntype object of the ROOT environment using specially developed
software.\nWe created a class library for the conversion of the ROOT OO
model of the\ndetector from the ROOT environment to a text "iv" file. Our
class library\nprovides the conversion of ROOT OO models to a Coin-based
C++ representation\nand a Coin-based 3D viewer with cutting / highlighting
/ selecting of\npieces of the image. Since the class library implementation
is free of\nSTAR experiment specifics\, it can be used to visualize any
detector\ngeometry represented in the ROOT environment. The results of our
work\ncan be downloaded from the LHE web server.\n\nhttps://indico.cern.ch/
event/0/contributions
/1294249/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294249/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Results of the LHCb experiment Data Challenge 2004
DTSTART;VALUE=DATE-TIME:20040929T143000Z
DTEND;VALUE=DATE-TIME:20040929T145000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294250@indico.cern.ch
DESCRIPTION:Speakers: J. Closier (CERN)\nThe LHCb experiment performed its
latest Data Challenge (DC) in May-July 2004.\nThe main goal was to demons
trate the ability of the LHCb grid system to carry out \nmassive productio
n and efficient distributed analysis of the simulation data.\n\nThe LHCb p
roduction system called DIRAC provided all the necessary services for the
\nDC: Production and Bookkeeping Databases\, File catalogs\, Workload and
Data \nManagement systems\, Monitoring and Accounting tools. It made it
possible \nto combine in a consistent way the resources of more than 20
LHCb production \nsites as well as the LCG2 grid resources. 200M events
constituting 90 TB \nof data were produced and stored in 6 Tier 1
centers.\nThe subsequent analysis was carried out at CERN as well as in all
\nthe Tier 1 centers to which preselected datasets were distributed. The
GANGA User \nInterface was used to assist users in the preparation of their
analysis jobs and \nin running them on the local and remote computing
resources.\n\nWe will present the DC
results\, the experience gained utilising DIRAC and\nLCG2 grids as well a
s further developments necessary to achieve the scalability \nlevel of the
real running LHCb experiment.\n\nhttps://indico.cern.ch/event/0/contribut
ions/1294250/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294250/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Writing Extension Modules (Plug-ins) for JAS3
DTSTART;VALUE=DATE-TIME:20040930T161000Z
DTEND;VALUE=DATE-TIME:20040930T163000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294251@indico.cern.ch
DESCRIPTION:Speakers: Mark DONSZELMANN (Extensions to JAS)\nJAS3 is a gene
ral purpose\, experiment independent\, open-source\, data analysis tool. \
nJAS3 includes a variety of features\, including histogramming\, plotting\,
fitting\, \ndata access\, tuple analysis\, spreadsheet and event display
capabilities. More \ncomplex analysis can be performed using several scrip
ting languages (pnuts\, jython\, \netc.)\, or by writing Java analysis cla
sses. All of these features are provided by \nloosely coupled "plug-in" mo
dules which are installed into the JAS3 base \napplication framework.\n\nI
n this presentation we will describe the JAS3 plug-in architecture\, and e
xplain \nhow different plug-ins can interact via service interfaces and ev
ent dispatch \nmechanisms. We will demonstrate how this architecture makes
it possible for \nindividual plug-ins to be added\, removed or upgraded t
o customize the application. \nWe will then give an overview of how to des
ign new experiment or domain specific \nplug-ins to extend the functionali
ty of JAS3 for your own requirements\, or to \nprovide general purpose com
ponents for use by others.\n\nhttps://indico.cern.ch/event/0/contributions
/1294251/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294251/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Interactive Data Analysis on the Grid using Globus 3 and JAS3
DTSTART;VALUE=DATE-TIME:20040930T153000Z
DTEND;VALUE=DATE-TIME:20040930T155000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294252@indico.cern.ch
DESCRIPTION:Speakers: T. Johnson (SLAC)\nThe aim of the service is to allo
w fully distributed analysis of large volumes of \ndata while maintaining
true (sub-second) interactivity. All the Grid related \ncomponents are ba
sed on OGSA style Grid services\, and to the maximum extent use \nexisting
Globus Toolkit 3.0 (GT3) services. All transactions are authenticated and
\nauthorized using the GSI (Grid Security Infrastructure) mechanism - part
of GT3. JAS3\, \nan experiment-independent data analysis tool\, is used as
the interactive analysis \nclient.\n\nThe system consists of three main ser
vice components: \n\nDataset Catalog Service:\nThe Dataset Catalog support
s browsing for an interesting dataset\, or searching for \ndata using a qu
ery language which operates on metadata stored in the catalog. The \ncatal
og makes few assumptions about the metadata stored in the catalog\, except
that \nthe metadata consists of key-value pairs\, stored in a hierarchica
l tree. The \nDataset Catalog Service is designed to allow easy interfaci
ng to existing data \ncatalog back-ends.\n\nDataset Analysis Grid Service:
\nThis service is responsible for resolving the dataset id from the catalo
g service\, \nand transferring chunks of data to worker nodes for analysis
processing. This \nservice also manages the worker nodes\, distributes an
alysis code to the worker \nnodes and retrieves intermediate results from
the worker nodes before sending \nmerged results back to the analysis clie
nt. \n\nWorker Execution Services:\nThis service runs on each worker node
and is responsible for processing analysis \nrequests. \n\nIn this presen
tation we will demonstrate the current system\, and will describe some \no
f the choices made in architecting the system\, in particular the challeng
es of \nobtaining interactive response times from GT3.\n\nhttps://indico.c
ern.ch/event/0/contributions/1294252/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294252/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The FreeHEP Java Library Root IO package
DTSTART;VALUE=DATE-TIME:20040929T134000Z
DTEND;VALUE=DATE-TIME:20040929T140000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294253@indico.cern.ch
DESCRIPTION:Speakers: T. Johnson (SLAC)\nThe FreeHEP Java library contains
a complete implementation \nof Root IO for Java. The library uses the "St
reamer Info" embedded in files created \nby Root 3.x to dynamically create
high performance Java proxies for Root objects\, \nmaking it possible to
read any Root file\, including files with user defined \nobjects. In this
presentation we will discuss the status of this code\, explain its \nimple
mentation and demonstrate performance using benchmark comparisons to stand
ard \nRoot IO. We will also describe recently added support for reading fi
les remotely \nusing rootd and xrootd protocols.\n\nWe will also show some
uses of this library\, including using JAS3 to analyze Root \ndata\, usin
g the WIRED event display to visualize data from Root files and using \nro
otd and Java servlet technology to make live plots web accessible - with e
xamples \nfrom GLAST and BaBar. We will also explain how you can trivially
make your own root \ndata web-accessible using the AIDA Tag Library and J
akarta Tomcat.\n\nhttps://indico.cern.ch/event/0/contributions/1294253/
LOCATION:Interlaken\, Switzerland Brunig 1 + 2
URL:https://indico.cern.ch/event/0/contributions/1294253/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Operation of the CERN Managed Storage environment\; current status
and future directions
DTSTART;VALUE=DATE-TIME:20040929T120000Z
DTEND;VALUE=DATE-TIME:20040929T122000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294261@indico.cern.ch
DESCRIPTION:Speakers: T. Smith (CERN)\nThis paper discusses the challenges
in maintaining a stable Managed Storage Service \nfor users built upon dy
namic underlying disk and tape layers.\n\nEarly in 2004 the tools and tech
niques used to manage disk\, tape\, and stage servers \nwere refreshed by
adopting the QUATTOR tool set. This has markedly increased the \ncoherency
and efficiency of the configuration of data servers. The LEMON \nmonitor
ing suite was deployed to raise alarms and gather performance metrics. \nE
xploiting this foundation\, higher level service displays are being added\
, giving \ncomprehensive and near-real-time views of operations. The scope
of our monitoring \nhas been broadened to include low-level machine senso
rs such as thermometer\, IPMI \nand SMART readings\, improving our ability
to detect impending hardware failure.\n\nIn terms of operations\, widespr
ead disk reliability problems\, which were manpower-intensive to chase\, w
ere overcome by exchanging a bad batch of 1200 disks. Recent \nLHC data c
hallenges have ventured into new operating domains for the CASTOR system\,
\nwith massive disk resident file catalogues requiring special handling.
The tape \nlayer has focused on STK 9940 drives for bulk recording capacit
y: a large scale \ndata migration to this media permitted old drive techno
logies to be retired. \nRepacking 9940A data to 9940B high density media a
llows us to recycle tapes\, giving \nsubstantial savings by avoiding acqui
sition of new media.\n\nIn addition to more robust software\, hardware dev
elopments are required for LHC era \nservices. We are moving from EIDE to
SATA based disk storage and envisage a tape \ndrive technology refresh. De
tails will be provided of our investigations in these \nareas.\n\nhttps://
indico.cern.ch/event/0/contributions/1294261/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294261/
END:VEVENT
BEGIN:VEVENT
SUMMARY:DIRAC - The Distributed MC Production and Analysis for LHCb
DTSTART;VALUE=DATE-TIME:20040930T161000Z
DTEND;VALUE=DATE-TIME:20040930T163000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294262@indico.cern.ch
DESCRIPTION:Speakers: A. TSAREGORODTSEV (CNRS-IN2P3-CPPM\, MARSEILLE)\nDIR
AC is the LHCb distributed computing grid infrastructure for MC\nproductio
n and analysis. Its architecture is based on a set of distributed\ncollabo
rating services. The service decomposition broadly follows the ARDA \nproj
ect proposal\, allowing for the possibility of interchanging the EGEE/ARDA
\nand DIRAC components in the future. Some components developed outside t
he\nDIRAC project are already in use as services\, for example the File Ca
talog\ndeveloped by the AliEn project.\n\nAn overview of the DIRAC archite
cture will be given\, in particular the\nrecent developments to support us
er analysis. The main design choices will\nbe presented. One of the main d
esign goals of DIRAC is the simplicity\nof installation\, configuration and
operation of various services. This allows\nall the DIRAC resources to be
easily managed by a single Production Manager.\n\nThe modular design of th
e DIRAC components allows its functionality to be\neasily extended to incl
ude new computing and storage elements or to\nhandle new tasks. The DIRAC
system already uses different types of computing\nresources - from single
PCs through a variety of batch systems to the Grid\nenvironment. In
particular\
, the use of the LCG2 environment will be\npresented. Different ways to ut
ilise LCG2 resources will be examined as well\nas the issue of interoperab
ility between the LCG2 and DIRAC sites.\n\nhttps://indico.cern.ch/event/0/
contributions/1294262/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294262/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Geant4 as Simulation Toolkit addressed to interplanetary manned mi
ssions studies: required developments and improvements
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294263@indico.cern.ch
DESCRIPTION:Speakers: S. Guatelli (INFN Genova\, Italy)\nThe study of the
effects of space radiation on astronauts is an important concern of\nspac
e missions for the exploration of the Solar System. The radiation hazard t
o the crew\nis critical to the feasibility of interplanetary manned missions.
\nTo protect the crew\, shielding must be designed\, the environment must
be \nanticipated and monitored\, and a warning system must be put in plac
e. \nA Geant4 simulation has been developed for a preliminary quantitativ
e study of\nvehicle concepts and Moon surface habitat designs\, and the ra
diation exposure of\ncrews therein. This project is defined in the context
of the European AURORA\nprogramme\, whose primary objective is to study solu
tions for the robotic and human\nexploration of the Solar System\, with Ma
rs\, the Moon and the asteroids as the most\nlikely objects. \nThis study
intends to evaluate whether the energy range typical of the radiation\nenv
ironment of interplanetary missions is adequately treated in Geant4 physic
s\npackages\, for all the major types of particles involved\, identifying
the availability\nof appropriate electromagnetic and hadronic physics mode
ls and verifying the status of\ntheir validation. Recommendations for fur
ther Geant4 developments or improvements\nand of further validation tests\
, necessary for the interplanetary manned missions\, \nare issued as a res
ult of this study.\n\nhttps://indico.cern.ch/event/0/contributions/1294263
/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294263/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Development and use of MonALISA high level monitoring services for
the star unified Meta-Scheduler
DTSTART;VALUE=DATE-TIME:20040930T155000Z
DTEND;VALUE=DATE-TIME:20040930T161000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294264@indico.cern.ch
DESCRIPTION:Speakers: E. Efstathiadis (BROOKHAVEN NATIONAL LABORATORY)\nAs
a PPDG cross-team joint project\, we proposed to study\, develop\, \nimpl
ement and evaluate a set of tools that allow Meta-Schedulers to \ntake adv
antage of consistent information (such as information needed \nfor complex
decision making mechanisms) across both local and/or Grid \nResource Mana
gement Systems (RMS). \n\nWe will present and define the requirements and
schema by which one \ncan consistently provide queue attributes for the mo
st common batch \nsystems (PBS\, LSF\, Condor\, SGE\, etc). We evaluate th
e best scalable \nand lightweight approach to access the monitored paramet
ers from a \nclient perspective and\, in particular\, the feasibility of \
naccessing real-time and aggregate information using the MonALISA \nmonito
ring framework. Client programs are envisioned to function in a \nnon-cent
ralized\, fault \ntolerant fashion. Inherent delays as well as scalability
issues of \neach approach (implementing it at a large number of sites) wi
ll be \ndiscussed.\n\nThe MonALISA monitoring framework\, being an ensembl
e of autonomous \nmulti-threaded\, agent based systems which are registere
d as dynamic \nservices and are able to collaborate and cooperate in perfo
rming a \nwide range of monitoring tasks in large scale distributed \nap
plications\, is a natural choice for such a project. MonALISA is \ndesigne
d to easily integrate existing monitoring tools and procedures \nand provi
de information in a dynamic self-describing way to any other \nservice or
client. We intend to demonstrate the usefulness of this \nconsistent appro
ach for queue monitoring by implementing a monitoring \nagent within the S
TAR Unified Meta-Scheduler (SUMS) framework.\n\nWe believe that such devel
opments could highly benefit Grid \nlaboratory efforts such as the Grid3+
and the Open Science Grid (OSG).\n\nhttps://indico.cern.ch/event/0/contribu
tions/1294264/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294264/
END:VEVENT
BEGIN:VEVENT
SUMMARY:AIDA\, JAIDA and AIDAJNI: Data Analysis using interfaces
DTSTART;VALUE=DATE-TIME:20040927T132000Z
DTEND;VALUE=DATE-TIME:20040927T134000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294269@indico.cern.ch
DESCRIPTION:Speakers: Victor SERBO (AIDA)\nAIDA\, Abstract Interfaces for
Data Analysis\, is a set of abstract interfaces\nfor data analysis compone
nts: Histograms\, Ntuples\, Functions\, Fitter\,\nPlotter and other typica
l analysis categories. The interfaces are currently\ndefined in Java\, C++
and Python and implementations exist in the form of\nlibraries and tools
using C++ (Anaphe/Lizard\, OpenScientist)\, Java (Java\nAnalysis Studio) a
nd Python (PAIDA).\n\nJAIDA is the full implementation of AIDA in Java. It
is used internally by\nJAS3 as its analysis core but it can also be used
independently for either\nbatch or interactive processing\, or for web app
lications to access data\,\nmake plots and perform simple data analysis
through a browser. Some of the JAIDA\nfeatures are the ability to open
AIDA\, ROOT a
nd PAW files and the support of\nan extensible set of fit methods (chi-squ
are\, least squares\, binned/unbinned\nlikelihood\, etc) to be matched wit
h an extensible set of optimizers\nincluding Minuit and Uncmin.\n\nAIDAJNI
is glue code between C++ and Java that allows any C++ code to access\nany
Java implementation of the AIDA interfaces. For example AIDAJNI is used\n
with Geant4 to access the JAIDA implementation of AIDA.\n\nThis paper give
s an update on the AIDA 3.2.1 interfaces and its\ncorresponding JAIDA impl
ementation. Examples will be provided on how to use\nJAIDA within JAS3\, a
s a standalone library and from C++ using AIDAJNI.\n\nReferences:\n\nhttp:
//aida.freehep.org/\nhttp://java.freehep.org/jaida\nhttp://java.freehep.or
g/aidajni\nhttp://jas.freehep.org/jas3\n\nhttps://indico.cern.ch/event/0/c
ontributions/1294269/
LOCATION:Interlaken\, Switzerland Brunig
URL:https://indico.cern.ch/event/0/contributions/1294269/
END:VEVENT
BEGIN:VEVENT
SUMMARY:WIRED 4 - A generic Event Display plugin for JAS 3
DTSTART;VALUE=DATE-TIME:20040930T143000Z
DTEND;VALUE=DATE-TIME:20040930T145000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294272@indico.cern.ch
DESCRIPTION:Speakers: M. Donszelmann (SLAC)\nWIRED 4 is an
experiment-independent event display plugin module\nfor the JAS 3 (Java
Analysis Studio) generic analysis framework.\nBoth WIRED and JAS are
written in Java.\n\nWIRED\, which uses HepRep (HEP Representables for Event
Display) as its input \nformat\, supports viewing of events using either
conventional 3D projections \nor specialized projections such as a
fish-eye or a rho-Z proje
ction.\nProjections allow the user to scale\, rotate\, position or change
parameters \non the plot as he wishes. All interactions are handled as sep
arate edits \nwhich can be undone and/or redone\, so the user can try out
things and \nget back to a previous situation. All edits are scriptable by
any of \nthe scripting languages supported by JAS\, such as pnuts\, jytho
n or java.\nHits and tracks can be picked to display physics information a
nd \ncuts can be made on physics parameters to allow the user to filter \n
the number of objects drawn into the plot. Multiple event display plots \n
can be laid out on pages combined with histograms and other plots\,\navail
able from JAS itself or from other plugin modules. \nConfiguration informa
tion on the state of all plots can be saved and restored \nallowing the us
er to save his session\, share it with others or later continue \nwhere he
left off.\n\nThis version of WIRED is written to be easily extensible by
the user/developer. \nProjections\, representations\, interaction handlers
and edits are all services \nand new ones can be added by writing additio
nal plugins.\nBoth JAS 3 and WIRED 4 are built on top of the FreeHEP Java
Libraries\, \nwhich support a multitude of vector graphics output formats\
, such as \nPostScript\, PDF\, SVG\, SWF and EMF\, allowing document quali
ty output of \nevent display plots and histograms.\n\nReferences: \n\nhttp
://wired.freehep.org\nhttp://jas.freehep.org/jas3 \nhttp://java.freehep.or
g\n\nhttps://indico.cern.ch/event/0/contributions/1294272/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294272/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Using HEP Systems to Provide Storage for Biologists
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294273@indico.cern.ch
DESCRIPTION:Speakers: Alan Tackett ()\nProtein analysis\, imaging\, and DN
A sequencing are some of the branches\nof biology where growth has been en
abled by the availability of\ncomputational resources. With this growth\,
biologists face an\nassociated need for reliable\, flexible storage syste
ms. For decades\nthe HEP community has been driving the development of su
ch storage\nsystems to meet their own needs. Two of these systems - the d
Cache\ndisk caching system and the Enstore hierarchical storage manager -
are\nviable candidates for addressing the storage needs of biologists.\nBo
th incorporate considerable experience from the HEP community.\n\nWhile bi
ologists have much to gain from the HEP community's experience\nwith stora
ge systems\, they face several issues that are unique to the\nbiological s
ciences. There is a wider diversity in experiments\, in\nnumber and size
of datafiles\, and in client operating systems in\nbiology than there is i
n HEP. Patient information must be kept\nconfidential. Disparate IT depa
rtments set up firewalls that separate\nclient systems and the storage sys
tem.\n\nVanderbilt University is developing a storage system with the goal
of\nmeeting biologists' needs. This system will use Enstore for its\nrob
ustness and reliability\, and will use the flexible door-based\narchitectu
re of dCache to provide storage services to biologists via\nweb-portal\, t
he dCache copy command\, and custom applications. This\nsystem will be de
ployed using an automated tape library\, several\nsecure central servers\,
and nodes placed near biologists' existing\ncompute infrastructure to ens
ure locality of caches and secure data\nchannels between researchers and t
he central servers.\n\nhttps://indico.cern.ch/event/0/contributions/129427
3/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294273/
END:VEVENT
BEGIN:VEVENT
SUMMARY:JASSimApp plugin for JAS3: Interactive Geant4 GUI
DTSTART;VALUE=DATE-TIME:20040930T155000Z
DTEND;VALUE=DATE-TIME:20040930T161000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294276@indico.cern.ch
DESCRIPTION:Speakers: V. Serbo (SLAC)\nJASSimApp is a joint project of
SLAC\, KEK\, and Naruto University to create an integrated \nGUI for
Geant4\, based on the JAS3 framework\, with the ability to
interactively:\n\n - Edit Geant4 geometry\, materials\, and physics
processes \n - Control Geant4 execution\, local and remote: pass commands
and \n receive output\, control event loop \n - Access AIDA histograms
defined in Geant4 \n - Show generated Geant4 events in an integrated event
display \n\nJAS3 is the latest development of JAS\, a general-purpose data
analysis tool. \nIt employs a highly modular component-based framework and
allows flexible \nand powerful customized plugin modules. JASSimApp is a
concrete implementation \nof its design concept\, with Geant4 as the
problem domain. It is composed of \nexisting interactive tools like GAG\,
Gain\, Momo\, WIRED\, etc. A new C++ class \nof
the Geant4 interfaces category was developed to exploit multi-threaded \nc
ontrol over Geant4 execution. The plugin modules of JAS3 reused existing \
nclasses with little modification. \n\nReferences: \n\nhttp://erpc1.naruto
-u.ac.jp/~geant4/index.html \; http://jas.freehep.org/jas3 \; \nhttp://wir
ed.freehep.org \; http://wwwasd.web.cern.ch/wwwasd/geant4/geant4.html\n\nh
ttps://indico.cern.ch/event/0/contributions/1294276/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294276/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The AliRoot framework\, status and perspectives
DTSTART;VALUE=DATE-TIME:20040927T151000Z
DTEND;VALUE=DATE-TIME:20040927T153000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294517@indico.cern.ch
DESCRIPTION:Speakers: F. Carminati (CERN)\nThe ALICE collaboration at the
LHC has been developing an OO offline framework\, written entirely in
C++\, since 1998. \nIn 2001 a GRID system (AliEn - ALICE Environment) was
added and successfully integrated with ROOT \nand the offline. The
resulting comb
ination allows ALICE to do most of the design of the detector and test the
\nvalidity of its computing model by performing large scale Data Challeng
es\, using OO technology in a \ndistributed framework. The early migration
of all ALICE users to C++ and the adoption of advanced software \ndevelop
ment techniques are two of the strong points of the ALICE offline strategy
. The offline framework is \nheavily based on virtual interfaces\, which a
llows the use of different generators and even different Monte-\nCarlo tra
nsport codes with no change in the framework or the scoring\, reconstructi
on and analysis code. This \ntalk presents a review of the development pat
h\, current status and future perspectives of the ALICE Offline \nenvironm
ent.\n\nhttps://indico.cern.ch/event/0/contributions/1294517/
LOCATION:Interlaken\, Switzerland Brunig
URL:https://indico.cern.ch/event/0/contributions/1294517/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Aspect-Oriented Extensions to HEP Frameworks
DTSTART;VALUE=DATE-TIME:20040930T124000Z
DTEND;VALUE=DATE-TIME:20040930T130000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294278@indico.cern.ch
DESCRIPTION:Speakers: C. Tull (LBNL/ATLAS)\nIn this paper we will discuss
how Aspect-Oriented Programming (AOP) can be used to\nimplement and extend
the functionality of HEP architectures in areas such as\nperformance moni
toring\, constraint checking\, debugging and memory management. AOP is\nt
he latest evolution in the line of technology for functional decomposition
which\nincludes Structured Programming (SP) and Object-Oriented Programmi
ng (OOP). In AOP\,\nan Aspect can contribute to the implementation of a n
umber of procedures and objects\nand is used to capture a concern such as
logging\, memory allocation or thread\nsynchronization that crosscuts mult
iple modules and/or types. We have chosen Gaudi\nas a representative HEP
architecture because it is a component architecture and has\nbeen successf
ully adopted by several HEP experiments. Since most HEP frameworks are\nc
urrently implemented in C++\, for our study we have used AspectC++\, an ex
tension to\nC++ that allows the use of AOP techniques without adversely af
fecting software\nperformance. We integrated AspectC++ in the development envi
ronment of the Atlas\nexperiment\, and we will discuss some of the con
figuration management issues that may\narise in a mixed C++/AspectC++ envi
ronment. In this study we have focused on\n"Development Aspects"\, i.e. a
spects that are intended to facilitate program\ndevelopment but can be tra
nsparently removed from the production code\, such as\nexecution tracing\,
constraint checking and object lifetime monitoring. We will\nbriefly dis
cuss possible "Production Aspects" related to cache management and object\
ncreation. For each of the concerns we have examined we will discuss how
traditional\nSP or OOP techniques compare to the AOP solution we developed
. We will conclude\ndiscussing the short and medium term feasibility of i
ntroducing AOP\, and AspectC++ in\nparticular\, in the complex software sy
stems of the LHC experiments.\n\nhttps://indico.cern.ch/event/0/contributi
ons/1294278/
LOCATION:Interlaken\, Switzerland Brunig 1+2
URL:https://indico.cern.ch/event/0/contributions/1294278/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Volume-based representation of the magnetic field
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294284@indico.cern.ch
DESCRIPTION:Speakers: T. Todorov (CERN/IReS)\nThe simulation\, reconstruct
ion and analysis software access to the \nmagnetic field has a large impact bo
th on CPU performance and on \naccuracy. \n \nAn approach based on a vol
ume geometry is described. The volumes are \nconstructed in such a way tha
t their boundaries correspond to field \ndiscontinuities\, which are due t
o changes in magnetic permeability \nof the materials. The field in each v
olume is continuous. \n \nThe field in each volume is interpolated from a regu
lar grid of \nvalues resulting from a TOSCA calculation. In case a \np
arameterization is available for some volumes it is used instead of \nthe
grid interpolation. \n \nGlobal access to the magnetic field values requir
es efficient search \nfor the volume that contains a global point. An algo
rithm that \nexploits explicitly the layout and the symmetries of the dete
ctor is \npresented. \n \nThe main clients of the magnetic field\, which a
re the simulation \n(geant4) and propagation of track parameters and error
s in the \nreconstruction\, can be made aware of the magnetic field volume
s by \nconnecting the per-volume magnetic field providers in their \nrespe
ctive geometries to the corresponding volume in the magnetic \nfield geome
try. In this way the global volume search is \nby-passed and the access to
the field is sped up significantly.\n\nhttps://indico.cern.ch/event/0/con
tributions/1294284/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294284/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Software management infrastructure in the LCG Application Area
DTSTART;VALUE=DATE-TIME:20040930T153000Z
DTEND;VALUE=DATE-TIME:20040930T155000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294285@indico.cern.ch
DESCRIPTION:Speakers: A. Pfeiffer (CERN\, PH/SFT)\nIn the context of the S
PI project in the LCG Application Area\,\na centralized s/w management inf
rastructure has been deployed.\nIt comprises a suite of scripts handling the b
uilding and\nvalidation of the releases of the various projects\, as well as\np
roviding a customized packaging of the released s/w. Emphasis\nw
as put on the flexibility of the packaging and distribution\nsolution as i
t should cover a broad range of use-cases and needs\,\nranging from full p
ackages for developers in the projects and\nexperiments to a minimal set o
f libraries and binaries for specific\napplications running\, e.g.\, on gr
id nodes. In addition\, regular\nreviews of the QA analysis of the releas
es of the projects are\nperformed and fed back to the project leaders to i
mprove the\noverall quality of the software produced. The present status a
nd\nfuture perspectives of this activity will be presented and we\nwill sh
ow examples of quality improvement in the projects.\n\nhttps://indico.cern
.ch/event/0/contributions/1294285/
LOCATION:Interlaken\, Switzerland Brunig 1+2
URL:https://indico.cern.ch/event/0/contributions/1294285/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A general and flexible framework for virtual organization applicat
ion tests in a grid system
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294287@indico.cern.ch
DESCRIPTION:Speakers: T. Coviello (INFN Via E. Orabona 4 I - 70126 Ba
ri Italy)\nA grid system is a set of heterogeneous computational and stora
ge \nresources\, distributed on a large geographic scale\, which belong to
\ndifferent administrative domains and serve several different \nscientif
ic communities named Virtual Organizations (VOs). A virtual \norganization
is a group of people or institutions which collaborate \nto achieve commo
n objectives. Therefore such a system has to guarantee \nthe coexistence of th
e different VOs' applications\, providing them with a \nsuitable run-time envi
ronment. Hence tools are needed at both the local \nand the central level for te
sting and detecting possible bad software \nconfigurations on a grid site.\nIn t
his paper we present a web-based tool which permits a Grid \nOperational Centre (G
OC) or a Site Manager to test a grid site from \nthe VO viewpoint. \nThe aim is t
o create a central repository for collecting both \nexisting and emerging VO tes
ts. Each VO test may include one or more \nspecific application tests\, and ea
ch test could include one or more \nsubtests\, arranged in a hierarchic structu
re.\nA general and flexible framework is presented\, capable of including VO \nte
sts straightforwardly by means of a description file. Submission of \na batch o
f tests to a particular grid site is made available through \na web portal. On t
he same portal\, past and current results and logs \ncan be browsed.\n\nhttps://indico.cern.ch/e
vent/0/contributions/1294287/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294287/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Bringing High-Performance Networking to HEP Users
DTSTART;VALUE=DATE-TIME:20040930T151000Z
DTEND;VALUE=DATE-TIME:20040930T153000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294302@indico.cern.ch
DESCRIPTION:Speakers: R. Hughes-Jones (THE UNIVERSITY OF MANCHESTER)\nHow
do we get High Throughput data transport to real users? The MB-NG project
is a \nmajor collaboration which brings together expertise from users\, in
dustry\, equipment \nproviders and leading edge e-science application deve
lopers. Major successes in the \nareas of Quality of Service (QoS) and man
aged bandwidth have provided a leading edge \nU.K. Diffserv enabled networ
k running at 2.5 Gbit/s. One of the central aims of MB-\nNG is the investi
gation of high performance data transport mechanisms for Grid data \ntrans
fer across heterogeneous networks.\n\nNew transport stacks implement sende
r side modifications to the TCP algorithm which \nenable increased bandwid
th utilisation in long-delay high-bandwidth environments. \nThis allows a
single stream of a modified TCP stack to transmit at rates that would \not
herwise require multiple streams of standard RENO TCP. This paper reports
on \ninvestigations of the performance of these TCP stacks and their use w
ith data \ntransfer applications such as GridFTP\, BBFTP\, BBCP and APACHE
. End-host performance \nbehaviour was also examined in order to determine
effects of the Network Interface\, \nPCI bus performance\, and disk and R
AID sub-systems.\n\nIn a Collaboration between the BaBar experiment and MB
-NG we demonstrated high \nperformance data transport using these new TCP/
IP transport protocol stacks and QoS \nprovisioning. We report on the bene
fits of this introduction of high speed networks \nand advanced TCP stacks
together with various levels of QoS to the BaBar computing \nenvironment.
The benefits achieved are contrasted with network behaviour and \napplica
tion performance using today's "production" network.\n\nhttps://indico.cer
n.ch/event/0/contributions/1294302/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294302/
END:VEVENT
BEGIN:VEVENT
SUMMARY:SAMGrid Monitoring Service and its Integration with MonALisa
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294303@indico.cern.ch
DESCRIPTION:Speakers: A. Lyon (FERMI NATIONAL ACCELERATOR LABORATORY)\nThe
SAMGrid team is in the process of implementing a monitoring and \ninforma
tion service\, which fulfills several important roles in the \noperation o
f the SAMGrid system\, and will replace the first \ngeneration of monitori
ng tools in the current deployments. The first \ngeneration tools are in g
eneral based on text logfiles and\nrepresent solutions which are not scala
ble or maintainable. The roles \nof the monitoring and information service
are: 1) providing \ndiagnostics for troubleshooting the operation of SAM
Grid services\; 2) \nproviding support for monitoring at the level of user jo
bs\; 3) \nproviding runtime support for local configuration \nand other infor
mation which currently must be stored \ncentrally (thus moving the system towa
rd greater autonomy for the SAM \nstation services\, wh
ich include cache management and job management \nservices)\; 4) providing
intelligent collection of statistics in order \nto enable performance mon
itoring and tuning. The architecture of\nthis service is quite flexible\,
permitting input from any \ninstrumented SAM application or service. It w
ill allow multiple \nstorage backends for archiving of (possibly) filtered mon
itoring \nevents\, as well as real-time information displays and active \
nnotification service for alarm conditions. This service will \nbe able to
export\, in a configurable manner\, information to higher \nlevel Grid mo
nitoring services\, such as MonALisa. We describe our \nexperience to dat
e with using a prototype version together with \nMonAlisa.\n\nhttps://indi
co.cern.ch/event/0/contributions/1294303/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294303/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Application of the SAMGrid Test-Harness for Performance Evaluation
and Tuning of a Distributed Cluster Implementation of Data Handling Servi
ces
DTSTART;VALUE=DATE-TIME:20040927T155000Z
DTEND;VALUE=DATE-TIME:20040927T161000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294304@indico.cern.ch
DESCRIPTION:Speakers: A. Lyon (FERMI NATIONAL ACCELERATOR LABORATORY)\nThe
SAMGrid team has recently refactored its test harness suite for \ngreater
flexibility and easier configuration. This makes possible \nmore interest
ing applications of the test harness\, for component \ntests\, integration
tests\, and stress tests. We report on the \narchitecture of the test har
ness and its recent application\nto stress tests of a new analysis cluster
at Fermilab\, to explore the \nextremes of analysis use cases and the rel
evant parameters for tuning \nin the SAMGrid station services. This reimpl
ementation of the test \nharness is a Python framework which uses XML for c
onfiguration and \nsmall plug-in python modules for specific test purposes
.\nOne current testing application is running on a 128-CPU analysis \nclus
ter with access to a 6 TB distributed cache and also to a 2 TB \ncentralized ca
che\, permitting studies of different cache strategies. \nWe have studi
ed the service parameters which affect the\nperformance of retrieving data
from tape storage as well. The use \ncases studied vary from those which
will require rapid file delivery \nwith short processing time per file\, t
o the opposite extreme of long \nprocessing time per file. We also show ho
w the same harness can be \nused to run regular unit tests on a production sys
tem to aid\nearly fault detection and diagnosis. These results are inte
resting for \ntheir implications with regard to Grid operations\, and illu
strate the \ntype of monitoring and test facilities required to accomplish
such \nperformance tuning.\n\nhttps://indico.cern.ch/event/0/contribution
s/1294304/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294304/
END:VEVENT
BEGIN:VEVENT
SUMMARY:K5 @ INFN.IT: an infrastructure for the INFN cross REALM & AFS c
ell authentication.
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294305@indico.cern.ch
DESCRIPTION:Speakers: E.M.V. Fasanelli (I.N.F.N.)\nThe infn.it AFS cell ha
s been providing a useful single file-space and authentication mechanism f
or the whole \nINFN\, but the lack of a distributed management system has led sev
eral INFN sections and LABs to set up local \nAFS cells. The hierarchical transi
tive cross-realm authentication introduced in the Kerberos 5 protocol and the \nn
ew versions of OpenAFS and of the MIT implementation of Kerberos 5 make it possib
le to set up AFS cross-cell \nauthentication in a transparent way\, using the Ker
beros 5 cross-realm one. The goal of the K5 @ INFN.IT \nproject is to provide a K
erberos 5 authentication infras
tructure for the INFN and cross-realm authentication \nto be used for the
cross cell AFS authentication. In this work we describe the scenario\, the
results of various \ntests performed\, the solution chosen and the status
of the K5 @ INFN.IT project.\n\nhttps://indico.cern.ch/event/0/contributi
ons/1294305/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294305/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Breaking the 1 GByte/sec Barrier? High speed WAN data transfers fo
r science
DTSTART;VALUE=DATE-TIME:20040930T124000Z
DTEND;VALUE=DATE-TIME:20040930T130000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294307@indico.cern.ch
DESCRIPTION:Speakers: S. Ravot (Caltech)\nIn this paper we describe the cu
rrent state of the art in equipment\, software and \nmethods for transferr
ing large scientific datasets at high speed around the globe. \nWe first p
resent a short introductory history of the use of networking in HEP\, some
\ndetails on the evolution\, current status and plans for the Caltech/CER
N/DataTAG \ntransAtlantic link\, and a description of the topology and cap
abilities of the \nresearch networks between CERN and HEP institutes in th
e USA. We follow this with \nsome detailed material on the hardware and so
ftware environments we have used in \ncollaboration with international par
tners (including CERN and DataTAG) to break \nseveral Internet2 land speed
records over the last couple of years. Finally we \ndescribe our recent d
evelopments in collaboration with Microsoft\, Newisys\, AMD\, \nCisco and
other industrial partners\, in which we are attempting to transfer HEP dat
a \nfiles from disk servers at CERN via a 10Gbit network path to disk serv
ers at \nCaltech's Center for Advanced Computing Research (a total distanc
e of over 11\,000 \nkilometres)\, at a rate exceeding 1 GByte per second.
We describe some solutions \nbeing used to overcome networking and hardwar
e performance issues. Whilst such \ntransfers represent the bleeding edge
of what is possible today\, they are expected \nto be commonplace at the s
tart of LHC operations in 2007.\n\nhttps://indico.cern.ch/event/0/contribu
tions/1294307/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294307/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The SEAL Component Model
DTSTART;VALUE=DATE-TIME:20040927T122000Z
DTEND;VALUE=DATE-TIME:20040927T124000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294309@indico.cern.ch
DESCRIPTION:Speakers: R. Chytracek (CERN)\nThis paper describes the compon
ent model that has been developed in the context of \nthe LCG/SEAL project
. This component model is an attempt to handle the increasing \ncomplexity
in the current data processing applications of LHC experiments. In \naddi
tion\, it should facilitate software re-use by the integration of software \ncom
ponents from LCG and non-LCG into the experiments' applications. The component \nmod
el provides the basic mechanisms and base classes that facilitate the \ndecompos
ition of the whole C++ object-oriented applicati
on into a number of run-time \npluggable software modules with well define
d generic behavior\, inter-component \ninteraction protocols\, run-time co
nfiguration and user customization.\nThis new development is based on the
ideas and practical experiences of the various \nsoftware frameworks in us
e by the different LHC experiments for several years. The \ndesign and imp
lementation choices will be described and the practical experiences \nand diffi
culties in adapting this model to existing experiment software system
s will \nbe outlined.\n\nhttps://indico.cern.ch/event/0/contributions/1294
309/
LOCATION:Interlaken\, Switzerland Brunig
URL:https://indico.cern.ch/event/0/contributions/1294309/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Fast tracking for the ATLAS LVL2 Trigger
DTSTART;VALUE=DATE-TIME:20040929T134000Z
DTEND;VALUE=DATE-TIME:20040929T140000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294311@indico.cern.ch
DESCRIPTION:Speakers: N. Konstantinidis (UNIVERSITY COLLEGE LONDON)\nWe pr
esent a set of algorithms for fast pattern recognition and track\nreconstr
uction using 3D space points\, aimed at the High Level \nTriggers (HLT) of mult
i-collision hadron collider environments. At \nthe LHC there are sever
al interactions per bunch crossing separated \nalong the beam direction\,
z. The strategy we follow is to (a) \nidentify the z-position of the inter
esting interaction prior to any \ntrack reconstruction\; (b) select groups
of space points pointing \nback to this z-position\, using a histogrammin
g technique which \navoids performing any combinatorics\; and (c) proceed
to the \ncombinatorial tracking only within the individual groups of space
\npoints. The validity of this strategy will be demonstrated with \nresul
ts in terms of timing and physics performance for the LVL2 \ntrigger of AT
LAS at the LHC\, although the strategy is generic and \ncan be applied to
any multi-collision hadron collider experiment. \n\nIn addition\, the algo
rithms are conceptually simple\, flexible and \nrobust and hence appropria
te for use in demanding\, online \nenvironments. We will also make qualita
tive comparisons with an \nalternative\, complementary strategy\, based on the u
se of look-up \ntables for handling combinatorics\, that has been de
veloped for the \nATLAS LVL2 trigger. These algorithms have been used for
the results \nthat appear in the ATLAS HLT\, DAQ and Controls Technical De
sign \nReport\, which was recently approved by the LHC Committee.\n\nhttps
://indico.cern.ch/event/0/contributions/1294311/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294311/
END:VEVENT
BEGIN:VEVENT
SUMMARY:How to build an event store - the new Kanga Event Store for BaBar
DTSTART;VALUE=DATE-TIME:20040929T151000Z
DTEND;VALUE=DATE-TIME:20040929T153000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294312@indico.cern.ch
DESCRIPTION:Speakers: M. Steinke (Ruhr Universitaet Bochum)\nIn the past y
ear\, BaBar has shifted from using Objectivity to using ROOT I/O\nas the b
asis for our primary event store. This shift required a total\nreworking
of Kanga\, our ROOT-based data storage format. We took advantage\nof this
opportunity to ease the use of the data by supporting multiple\naccess mod
es that make use of many of the analysis tools available in\nROOT.\n\nSpec
ifically\, our new event store supports: 1) the pre-existing separated\ntr
ansient + persistent model\, 2) a transient based load-on-demand model\ncu
rrently being developed\, 3) direct access to persistent data classes in\n
compiled code\, 4) fully interactive access to persistent data classes fro
m\neither the ROOT prompt or via interpreted macros.\n\nWe will describe key fe
atures of Kanga including: 1) the separation and\nmanagement of tran
sient and persistent representations of data\, 2) the\nimplementation of r
ead on demand references in ROOT\, 3) the modular and\nextensible persiste
nt event design\, 4) the implementation of schema\nevolution and 5) BaBar
specific extensions to core ROOT classes that we\nused to preserve the end
-user "feel" of ROOT.\n\nhttps://indico.cern.ch/event/0/contributions/1294
312/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294312/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Grid3: An Application Grid Laboratory for Science
DTSTART;VALUE=DATE-TIME:20040928T073000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294313@indico.cern.ch
DESCRIPTION:The U.S. Trillium Grid projects in collaboration with High Ene
rgy Physics experiment groups \nfrom the Large Hadron Collider (LHC)\, ATLAS an
d CMS\, Fermilab's BTeV\, members of \nthe LIGO and SDSS collaborations and g
roups from other scientific disciplines and \ncomputational centers have d
eployed a multi-VO\, application-driven grid laboratory \n("Grid3"). The g
rid laboratory has sustained for several months the production-\nlevel ser
vices required by the participating experiments. The deployed \ninfrastruc
ture has been operating since November 2003 with 27 sites\, a peak of 2800
\nprocessors\, workloads from 10 different applications exceeding 1300 s
imultaneous \njobs\, and data transfers among sites of greater than 2 TB/d
ay.\n\nThe Grid3 infrastructure was deployed from grid level services prov
ided by groups \nand applications within the collaboration. The services w
ere organized into four \ndistinct "grid level services" including: Grid3
Packaging\, Monitoring and \nInformation systems\, User Authentication and
the iGOC Grid Operations Center. In \nthis paper we describe the Grid3 op
erational model\, deployment strategies\, and site \ninstallation and conf
iguration procedures. We describe the grid middleware \ncomponents used\,
how the components were packaged and deployed on sites each under \nits ow
n local administrative domain\, and how the pieces fit together to form th
e \nGrid3 grid infrastructure.\n\nhttps://indico.cern.ch/event/0/contribut
ions/1294313/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294313/
END:VEVENT
BEGIN:VEVENT
SUMMARY:ROOT Graphical User Interface
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294314@indico.cern.ch
DESCRIPTION:Speakers: I. Antcheva (CERN)\nThe GUI is a very important comp
onent of the ROOT framework. Its \nmain purpose is to improve the usabilit
y and end-user perception. In\nthis paper\, we present two main projects i
n this direction: the ROOT\ngraphics editor and the ROOT GUI builder.\n\nT
he ROOT graphics editor is a recent addition to the framework. It \nprovid
es a state-of-the-art and intuitive way to create or edit\nobjects in t
he canvas.\n\nThe ROOT GUI builder greatly facilitates the design\, the de
velopment \nand the maintenance of any interactive application based on th
e ROOT \nframework. GUI objects can be selected\, dragged/dropped in the\n
widgets. An automatic code generator can be activated to save the code\nco
rresponding to any complex layout. This code can be executed via the\nCINT
interpreter or directly compiled with the user application.\n\nPast surve
ys indicate that the development of a GUI is a significant \nundertaking a
nd that the GUI's source code is a substantial portion of\nthe program's o
verall source base. The new GUI builder in ROOT will\nenable the rapid con
struction of simple and complex GUIs.\n\nhttps://indico.cern.ch/event/0/co
ntributions/1294314/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294314/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Grid Enabled Analysis : Architecture\, prototype and status
DTSTART;VALUE=DATE-TIME:20040930T134000Z
DTEND;VALUE=DATE-TIME:20040930T140000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294315@indico.cern.ch
DESCRIPTION:Speakers: F. van Lingen (CALIFORNIA INSTITUTE OF TECHNOLOGY)\n
In this paper we report on the implementation of an early prototype \nof d
istributed high-level services supporting grid-enabled data \nanalysis wit
hin the LHC physics community as part of the ARDA project \nwithin the con
text of the GAE (Grid Analysis Environment) and begin \nto investigate the
associated complex behaviour of such an \nend-to-end system. In particula
r\, the prototype integrates a typical \nphysics user interface client (RO
OT)\, a uniform web-services \ninterface to grid services (Clarens)\, a vi
rtual data service \n(Chimera)\, a request scheduling service (Sphinx)\, a
monitoring \nservice (MonALISA)\, a workflow execution service (Virtual D
ata \nToolkit Client)\, a remote data file service (Clarens)\, a grid \nre
source service (Virtual Data Toolkit Server)\, a replica location \nservic
e/meta data catalog (RLS/POOL)\, an \nanalysis session management system (
CAVES) and a fine grain monitor \nsystem for job submission (BOSS).\n\n\nF
or testing and evaluation purposes\, the prototype is deployed across \na
modest sized U.S. regional CMS Grid Test-bed (consisting of sites \nin Cal
ifornia\, Florida\, Fermilab) and is in the early stages of \nexhibiting i
nteractive remote data access\, demonstrating interactive \nworkflow generat
ion and collaborative data analysis using \nvirtual data and data provenan
ce\, as well as showing non-trivial \nexamples of policy based scheduling
of requests in a resource \nconstrained grid environment. In addition\, th
e prototype is used to \ncharacterize the system performance as a whole\,
including the \ndetermination of request-response latencies in a distribut
ed service \nmodel and the classification of high-level failure modes in a
complex \nsystem.\n\nhttps://indico.cern.ch/event/0/contributions/1294315
/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294315/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Porting CLEO software to Linux
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294316@indico.cern.ch
DESCRIPTION:Speakers: V. Kuznetsov (CORNELL UNIVERSITY)\nThe Linux operating s
ystem has become the platform of choice in the HEP community.\nHowever\, the mi
gration process from another operating system to Linux can\nbe a tremendous eff
ort for developers and system administrators.\nThe ultimate goal of such a tran
sition is to maximize agreement between the\nfinal results of identical calcula
tions on the different platforms.\nApart from the fine tuning of the existing s
oftware\, the following issues need to be\nresolved: choice of Linux distributi
on\, development tools\n(compiler\, debugger\, profilers\, etc.)\, compatibilit
y with 3rd-party\nsoftware\, and deployment strategy. It would be ideal to\ndev
elop\, run and test software using office desktops\, local farm systems\,\nor a p
ersonal laptop\, regardless of the Linux distribution chosen. To accomplish\nthi
s task one needs a flexible package management system which is capable of\ninsta
lling/upgrading/verifying/uninstalling the necessary software components without
\nparticular knowledge of the remote system configuration and user privileges.\nW
e discuss how Linux became the third official computing platform of the CLEO\nco
llaboration\, outlining the details of the transition from the OSF and Solaris \no
perating systems to Linux\, and the software model and deployment strategy emplo
yed.\n\nhttps://indico.cern.ch/event/0/contributions/1294316/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294316/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A database prototype for managing computer systems configurations
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294317@indico.cern.ch
DESCRIPTION:Speakers: Z. Toteva (Sofia University/CERN/CMS)\nWe describe a
database solution in a web application to centrally\nmanage the configura
tion information of computer systems. It extends the\nmodular cluster mana
gement tool Quattor with a user friendly web interface.\n\nSystem configur
ations managed by Quattor are described with the aid of PAN\, a \ndeclarat
ive language with a command line and a compiler interface. Using a \nrelat
ional schema\, we are able to build a database for efficient data storage
and \nconfiguration data processing. The relational schema ensures the\nco
nsistency of the described model while the standard database interface\nen
sures the fast retrieval of configuration information and statistical data.\
n\nThe web interface simplifies the typical administration and routine ope
rations \ntasks\, e.g. definition of new types\, configuration comparisons
and updates etc.\nWe present a prototype built on the above ideas and use
d to manage a cluster of \ndeveloper workstations and specialised services
in CMS.\n\nhttps://indico.cern.ch/event/0/contributions/1294317/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294317/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Clarens Grid-enabled Web Services Framework: Services and Impl
ementation
DTSTART;VALUE=DATE-TIME:20040929T124000Z
DTEND;VALUE=DATE-TIME:20040929T130000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294321@indico.cern.ch
DESCRIPTION:Speakers: C. Steenberg (California Institute of Technology)\nC
larens enables distributed\, secure and high-performance access to the\nwo
rldwide data storage\, compute\, and information Grids being constructed i
n\nanticipation of the needs of the Large Hadron Collider at CERN. We repo
rt on\nthe rapid progress in the development of a second server implementa
tion in\nthe Java language\, the evolution of a peer-to-peer network of Cl
arens\nservers\, and general improvements in client and server implementat
ions.\n\nServices that are implemented at this time include read/write fil
e access\,\nservice lookup and discovery\, configuration management\, job
execution\,\nVirtual Organization Management\, an LHCb Information Service
\, as well as web\nservice interfaces to POOL replica location and metadat
a catalogs\, MonaLISA\nmonitoring information\, CMS MCRunjob workflow mana
gement\, BOSS job\nmonitoring and bookkeeping\, Sphinx job scheduler and C
himera virtual data\nsystems.\n\nCommodity web service protocols allows a
wide variety of computing\nplatoforms and applications to be used to secur
ely access Clarens services\,\nincluding a standard web browser\, Java app
lets and stand-alone applications\,\nthe ROOT data analysis package\, as w
ell as libraries that provide\nprogrammatic access from the Python\, C/C++
and Java languages.\n\nhttps://indico.cern.ch/event/0/contributions/12943
21/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294321/
END:VEVENT
BEGIN:VEVENT
SUMMARY:From Geant 3 to Virtual Monte Carlo: Approach and Experience
DTSTART;VALUE=DATE-TIME:20040929T130000Z
DTEND;VALUE=DATE-TIME:20040929T132000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294326@indico.cern.ch
DESCRIPTION:Speakers: M. POTEKHIN (BROOKHAVEN NATIONAL LABORATORY)\nThe ST
AR Collaboration is currently using simulation software\nbased on Geant 3. The e
mergence of the new Monte Carlo\nsimulation packages\, coupled with the evoluti
on of both the STAR\ndetector and its software\, requires a drastic cha
nge of\nthe simulation framework.\n\nWe see the Virtual Monte Carlo (VMC)
approach as providing\na layer of abstraction that facilitates such transi
tion.\nThe VMC platform is a candidate to replace the present legacy\nsoft
ware\, and help avoid certain of its shortcomings\, such as\nthe use of a par
ticular algorithmic language to describe the\ndetector geometry. It will a
lso allow us to introduce a more\nflexible in-memory representation of the
geometry.\n\nThe Virtual Monte Carlo concept includes a platform-neutral\
nkernel of the application\, to the highest degree possible.\nThis kernel
is then equipped with interfaces to the modules\nresponsible for simulatin
g the physics of particle propagation\,\nand tracking.\n\nWe consider the
geometry description classes in the ROOT\nsystem (in its latest form known
as TGeo classes) as a good\nchoice for the in-memory geometry representat
ion.\n\nWe present an application design based on the Virtual Monte Carlo\
,\nalong with the results of testing\, benchmarking and comparison\nto Gea
nt 3. The internal event representation and IO model will\nalso be discussed.\
n\nhttps://indico.cern.ch/event/0/contributions/1294326/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294326/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Overview and new developments in Geant4 electromagnetic physics
DTSTART;VALUE=DATE-TIME:20040927T143000Z
DTEND;VALUE=DATE-TIME:20040927T145000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294327@indico.cern.ch
DESCRIPTION:Speakers: V. Ivantchenko (CERN\, ESA)\nWe will summarize the recen
t and current activities of the Geant4 \nworking group responsible for the sta
ndard package of electromagnetic \nphysics. The major recent activities includ
e a design iteration in \nthe energy loss and multiple scattering domain provid
ing a "process versus \nmodels" approach\, and the development of the followin
g physics models: \nmultiple scattering\, ultra-relativistic muon physics\, ph
oto-\nabsorption-ionisation model\, ion ionisation\, and optical processes. An \na
utomatic acceptance suite for the validation of physics is under \ndevelopmen
t. We will also comment on the evolution of the concept of the \nphysics list.\
n\nhttps://indico.cern.ch/event/0/contributions/1294327/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294327/
END:VEVENT
BEGIN:VEVENT
SUMMARY:SPHINX: A Scheduling Middleware for Data Intensive Applications on
a Grid
DTSTART;VALUE=DATE-TIME:20040930T124000Z
DTEND;VALUE=DATE-TIME:20040930T130000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294328@indico.cern.ch
DESCRIPTION:Speakers: R. Cavanaugh (UNIVERSITY OF FLORIDA)\nA grid consist
s of high-end computational\, storage\, and network resources that\, \nwhi
le known a priori\, are dynamic with respect to activity and availability.
\nEfficient co-scheduling of requests to use grid resources must adapt t
o this \ndynamic environment while meeting administrative policies. We di
scuss \nthe necessary requirements of such a scheduler and introduce a d
istributed \nframework called SPHINX that schedules complex\, data intensi
ve High Energy \nPhysics and Data Mining applications in a grid environmen
t\, respecting local and \nglobal policies along with a specified level of
quality of service. The SPHINX \ndesign allows for a number of functiona
l modules and/or distributed services to \nflexibly schedule workflows rep
resenting multiple applications on grids. We \npresent experimental resul
ts for SPHINX that effectively utilize existing grid \nmiddleware such as
monitoring and workflow management/execution systems. These \nresults dem
onstrate that SPHINX can successfully schedule work across a large \nnumbe
r of grid sites that are owned by multiple units in a virtual organization
.\n\nhttps://indico.cern.ch/event/0/contributions/1294328/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294328/
END:VEVENT
BEGIN:VEVENT
SUMMARY:An intelligent resource selection system based on neural network f
or optimal application performance in a grid environment
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294329@indico.cern.ch
DESCRIPTION:Speakers: T. Coviello (DEE – POLITECNICO DI BARI\, V. ORABON
A\, 4\, 70125 – BARI\,ITALY)\nGrid computing is a large scale geographic
ally distributed and \nheterogeneous system that provides a common platfor
m for running \ndifferent grid enabled applications. As each application h
as \ndifferent characteristics and requirements\, it is a difficult\ntask
to develop a scheduling strategy able to achieve optimal \nperformance be
cause application-specific and dynamic system status \nhave to be taken in
to account.\nMoreover it may be possible to obtain optimal performance for
\nmultiple applications simultaneously using a single scheduler. Hence \nin man
y cases the application scheduling strategy is assigned to \nan e
xpert application user who provides a ranking criterion for \nselecting th
e best computational element from a set of \navailable resources. Such crite
ria are based on user perception of \nsystem capabilities and knowledge ab
out the features and requirements \nof his application. \nIn this paper an
intelligent mechanism has been both implemented and \nevaluated to select
the best computational resource in a grid \nenvironment from the applicat
ion viewpoint. \nA neural network based system has been used to capture au
tomatically \nthe knowledge of a grid application expert user. The system
\nscalability problem is also tackled and a preliminary solution based \no
n a sorting algorithm is discussed. The aim is to allow a\ncommon grid applicat
ion user to benefit from this expertise.\n\nhttps://indico.cern.ch/event/
0/contributions/1294329/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294329/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Improving Standard C++ for the Physics Community
DTSTART;VALUE=DATE-TIME:20040930T063000Z
DTEND;VALUE=DATE-TIME:20040930T070000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294330@indico.cern.ch
DESCRIPTION:Speakers: M. Paterno (FERMILAB)\nAs Fermilab's representatives
to the C++ standardization effort\, we have\nbeen promoting directions of
special interest to the physics community.\nWe here report on selected re
cent developments toward the next revision\nof the C++ Standard. Topics wi
ll include standardization of random\nnumber and special function librarie
s\, as well as core language issues\npromoting improved run-time performan
ce.\n\nThe random number library provides an extensible framework for rand
om\nnumber generators. It includes a handful of widely-used and high-quali
ty\nrandom number engines\, as well as some of the most widely-used random
\nnumber distributions. The modular design makes it easy for users to add\
ntheir own engines\, and perhaps more importantly their own distributions\
,\non an equal footing with those in the library.\n\nThe special functions
library contains many of the commonly-used\nfunctions of mathematical phy
sics. These include a variety of\ncylindrical and spherical Bessel functio
ns\, Legendre and associated\nLegendre functions\, hypergeometric and conf
luent hypergeometric\nfunctions\, among others.\n\nWe also report on an on
going analysis\, and proposal for core language\nadditions\, with the goal
of improved run-time performance. Current\ncompilers routinely perform in
ter-procedural flow analysis within a\ncompilation unit. These additions w
ould allow compilers to perform\ncomparable analysis between compilation u
nits\, and to optimize code\nbased on their findings.\n\nhttps://indico.ce
rn.ch/event/0/contributions/1294330/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294330/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The PHENIX Event Builder
DTSTART;VALUE=DATE-TIME:20040927T145000Z
DTEND;VALUE=DATE-TIME:20040927T151000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294333@indico.cern.ch
DESCRIPTION:Speakers: D. Winter (COLUMBIA UNIVERSITY)\nThe PHENIX detector
consists of 14 detector subsystems. It is designed such\nthat individual
subsystems can be read out independently in parallel as well\nas a single
unit. The DAQ used to read the detector is a highly-pipelined\nparallel
system. Because PHENIX is interested in rare physics events\, the DAQ\nis
required to have a fast trigger\, deep buffering\, and very high bandwidt
h.\n\nThe PHENIX Event Builder is a critical part of the back-end of the P
HENIX DAQ.\nIt is responsible for assembling event fragments from each subs
ystem into\ncomplete events ready for archiving. It allows subsystems to
be read out\neither in parallel or simultaneously and supports a high rate
of archiving.\nIn addition\, it implements an environment where Level-2 t
rigger algorithms may\nbe optionally executed\, providing the ability to t
ag and/or filter rare\nphysics events.\n\nThe Event Builder is a set of th
ree Windows NT/2000 multithreaded executables\nthat run on a farm of over
100 dual-cpu 1U servers. All control and data\nmessaging is transported o
ver a Foundry Layer2/3 Gigabit switch. Capable of\nrecording a wide range
of event sizes from central Au-Au to p-p interactions\,\ndata archiving r
ates of over 400 MB/s at 2 KHz event rates have been achieved\nin the rece
nt Run 4 at RHIC. Further improvements in performance are expected\nfrom
migrating to Linux for Run 5.\n\nThe PHENIX Event Builder design and imple
mentation\, as well as performance and\nplans for future development\, wil
l be discussed.\n\nhttps://indico.cern.ch/event/0/contributions/1294333/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294333/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Parallel implementation of Parton String Model event generator
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294336@indico.cern.ch
DESCRIPTION:Speakers: S. Nemnyugin (ASSOCIATE PROFESSOR)\nWe report the result
s of parallelization and tests of the Parton \nString Model event generator at t
he parallel cluster of the St. Petersburg \nState University Telecommunication C
enter.\nTwo schemes of parallelization were studied. In the first approach\, a \nmas
ter process coordinates the work of slave processes\, gathers and \nanalyzes dat
a. Results of MC calculations are saved in local files. \nLocal files are sent t
o the host computer on which the program for \ndata processing is started. The s
econd approach uses parallel \nwrites to a common file shared between all proces
ses. In this case \nthe load on the communication subsystem of the cluster grow
s. Both \napproaches are realized with the MPICH library. Some problems\, includ
ing \npseudorandom number generation in parallel computations\, were \nsolved.\n\nT
he modified parallel version of the PSM code includes a number of \nadditional p
ossibilities: a selection of the impact parameter \nwindows\, the account of th
e acceptance of the experimental setup and \ntrigger selection data\, and the c
alculation of various long-range \ncorrelations between such observables as mea
n transverse momentum and \ncharged particle multiplicity.\n\nhttps://indico.ce
rn.ch/event/0/contributions/1294336/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294336/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Database Usage and Performance for the Fermilab Run II Experiments
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294337@indico.cern.ch
DESCRIPTION:Speakers: L. Lueking (FERMILAB)\nThe Run II experiments at Fer
milab\, CDF and D0\, have extensive database needs\ncovering many areas of
their online and offline operations. Delivery of the data to\nusers and p
rocessing farms based around the world has represented major challenges to
\nboth experiments. The range of applications employing databases includes
data\nmanagement\, calibration (conditions)\, trigger information\, run c
onfiguration\, run\nquality\, luminosity\, and others. Oracle is the prim
ary database product being used\nfor these applications at Fermilab and so
me of its advanced features have been\nemployed\, such as table partitioni
ng and replication. There is also experience with\nopen source database p
roducts such as MySQL for secondary databases. A general\noverview of the
operation\, access patterns\, and transaction rates is examined and the\np
otential for growth in the next year presented. The two experiments\, whil
e having\nsimilar requirements for availability and performance\, employ d
ifferent architectures\nfor database access. Details of the experience for
these approaches will be compared\nand contrasted\, as well as the evolut
ion of the delivery systems throughout the run.\n Tools employed for moni
toring the operation and diagnosing problems will also be\ndescribed.\n\nh
ttps://indico.cern.ch/event/0/contributions/1294337/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294337/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Experience with Real Time Event Reconstruction Farm for Belle Expe
riment
DTSTART;VALUE=DATE-TIME:20040929T120000Z
DTEND;VALUE=DATE-TIME:20040929T122000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294340@indico.cern.ch
DESCRIPTION:Speakers: R. Itoh (KEK)\nA sizeable increase in the machine lu
minosity of the KEKB accelerator is expected in\ncoming years. This may result i
n a shortage in the data storage resource for the Belle\nexperi
he near future and it is desired to reduce the data flow as much as\npossi
ble before writing the data to the storage device.\n\nFor this purpose\, a
realtime event reconstruction farm has been installed in the\nBelle DAQ s
ystem. The farm consists of 60 linux-operated PC servers with dual CPUs.\n
Every event from the event builder is distributed to one of the servers th
rough a\nsocket connection. A full event reconstruction is done on each se
rver so that a\nsophisticated event selection can be performed to reduce t
he data flow. The same\nevent reconstruction program as that used in the o
ffline DST production runs on each\nfarm server. Selected events are colle
cted through socket connections and written to\na fast disk array.\n\nThe farm h
as been operating in the beam runs since the beginning of this year\,\nprocessing t
he data at an average L1 trigger rate of 450 Hz. The e
xperience of the\noperation is reported at the conference. In particular\,
the performance of the full\nevent reconstruction and selection is discus
sed in detail. A scheme to monitor the\nquality of processed data in real
time is also described.\n\nhttps://indico.cern.ch/event/0/contributions/12
94340/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294340/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mass Storage Management and the Grid
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294341@indico.cern.ch
DESCRIPTION:Speakers: S. Thorn ()\nThe University of Edinburgh has a sign
ificant interest in mass storage systems as it\nis one of the core groups
tasked with the roll out of storage software for the UK's \nparticle physi
cs grid\, GridPP. We present the results of a development project to\nprov
ide software interfaces between the SDSC Storage Resource Broker\, the EU
DataGrid\nand the Storage Resource Manager. This project was undertaken in
association with the\neDikt group at the National eScience Centre\, the U
niversities of Bristol and Glasgow\,\nRutherford Appleton Laboratory and t
he San Diego Supercomputing Center.\n\nhttps://indico.cern.ch/event/0/cont
ributions/1294341/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294341/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Deterministic Annealing for Vertex Finding at CMS
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294343@indico.cern.ch
DESCRIPTION:Speakers: E. Chabanat (IN2P3)\nCMS and other LHC experiments offer a n
ew challenge for vertex reconstruction:\nthe elaboration of efficient algorithms f
or high-luminosity beam collisions. We\npresent here a new algorithm in the verte
x finding field: Deterministic Annealing\n(DA). This algorithm comes from informat
ion theory by analogy to statistical physics\nand has already been used in cluster
ing and classification problems. For our purpose\,\nthe main job is to encode the i
nformation of a set of tracks into prototypes which will be\nour vertices at the e
nd of the process. The advantage of such a technique is that it\nglobally searches f
or all vertices at one time and a priori knowledge of the expected\nnumber of verti
ces is not required: the algorithm creates new vertices by a phase\ntransition mech
anism which will be described in this contribution. Thus\, the first\npart of this t
alk is devoted to a short description of the DA algorithm and to the\nnecessary intr
oduction of the concept of apex points\, which stand for tracks in this\nmethod \; t
hen a discussion of vertex reconstruction efficiencies follows\, consisting of\nfind
ing DA's internal parameters and making a comparison between DA and the most\npopul
ar vertex finding algorithm. This comparison is done considering 4000 bbar\nevents g
enerated in the detector central region without pile-up in a first approach \;\nprim
ary and secondary vertex reconstruction results are shown. Then the performance of\nD
A in regional vertex search with regional track reconstruction is also presented\nan
d leads to a short study of 500 bbar events with pile-up at low luminosity.\n\nhttps
://indico.cern.ch/event/0/contributions/1294343/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294343/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The LHCb Configuration Database
DTSTART;VALUE=DATE-TIME:20040929T155000Z
DTEND;VALUE=DATE-TIME:20040929T161000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294345@indico.cern.ch
DESCRIPTION:Speakers: L. Abadie (CERN)\nThe aim of the LHCb configuration
database is to store all the\ncontrollable devices of the detector. The ex
periment’s control system\n(that uses PVSS) will configure\, start up an
d monitor the detector\nfrom the information in the configuration database
. The database will\ncontain devices with their properties\, connectivity
and hierarchy. The\nability to rapidly store and retrieve huge amounts of
data\, and the\nnavigability between devices are important requirements. W
e have\ncollected use cases to ensure the completeness of the design.\nUsi
ng the entity relationship modeling technique we describe the use\ncases a
s classes with attributes and links. We designed the schema of\nthe tables
using the relational diagrams. This methodology has been\napplied to desc
ribe and store the connectivity of the devices in the\nTFC (switches) and
DAQ system. Other parts of the detector will follow\nlater.\n\nThe databas
e has been implemented using Oracle to benefit from \ncentral CERN databas
e support. The project also foresees the creation\nof tools to populate\,
maintain\, and configure the configuration\ndatabase. To communicate betwe
en the control system and the database\nwe have developed a system which s
ends queries to the database and\ndisplays the results in PVSS. This datab
ase will be used in\nconjunction with the configuration database developed
by the CERN JCOP\nproject for PVSS.\n\nhttps://indico.cern.ch/event/0/con
tributions/1294345/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294345/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The application of PowerPC/VxWorks to the read-out subsystem of th
e BESIII DAQ
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294346@indico.cern.ch
DESCRIPTION:Speakers: Mei Ye ()\nThis article describes the simulation of the r
ead-out subsystem \nthat will be part of the BESIII data acquisition system. \nAc
cording to the requirements of BESIII\, the event rate will be about \n4000 Hz\, a
nd the data rate up to 50 Mbytes/sec after the Level 1 trigger. \nThe read-out su
bsystem consists of several read-out crates and a read-out \ncomputer whose princ
ipal function is to collect event data from the \nfront-end electronics after th
e Level 1 trigger and to transfer data \nfragments from each VME read-out crate t
o the online computer farm \nthrough two levels of computer pre-processing and hi
gh-speed network \ntransmission. The read-out implementation is based on the comm
ercial \nsingle-board computer MVME5100 running the VxWorks operating system.\n\nT
he article outlines the structure of the simulation platform\, \nwhich includes h
ardware and software components. It puts \nemphasis on the framework of the read-
out subsystem\, the data processing \nflow and the test method. \nIn particular\, i
t enumerates key technologies in the design process \nand analyses the test resul
ts. In addition\, results which \nsummarize the performance of the single-board c
omputer from the \ndata transfer point of view will be presented.\n\nKey words: BE
SIII read-out subsystem MVME5100 VxWorks VMEbus DMA \nread-out computer\n\nhttps:
//indico.cern.ch/event/0/contributions/1294346/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294346/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Toward a Grid Technology Independent Programming Interface for HEP
Applications
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294351@indico.cern.ch
DESCRIPTION:In the High Energy Physics (HEP) community\, Grid technologies
have \nbeen accepted as solutions to the distributed computing problem. \
nSeveral Grid projects have provided software in the last years. Among \no
f all them\, the LCG - especially aimed at HEP applications - \nprovides a
set of services and respective client interfaces\, both in\nthe form of c
ommand line tools as well as programming language APIs \nin C\, C++\, Java
\, etc.\n\nUnfortunately\, the programming interface presented to the end
user \n(the physicist) is often not uniform or provides different levels o
f \nabstractions. In addition\, Grid technologies undergo constant change\n
and improvement\, and it is of major importance to shield the end users \nfrom
 changes in the underlying technology. As services\nevolve and new
ones are introduced\, the way users interact with them \nalso changes.\nT
hese new interfaces are often designed to work at a different level \nand
with a different focus than the original ones. This makes it hard \nfor th
e end user to build Grid applications.\n\n We have analyzed the existin
g LCG programming environment and \nidentified several ways to provide hig
h-level technology independent \ninterfaces. In this article\, we describe
 the use cases presented to us \nby the LCG experiments and the specific
problems we \nencountered in documenting existing APIs and providing \nus
age examples. As a main contribution\, we also propose a prototype \nhigh-
level interface for the information\, authentication and \nauthorization s
ystems that is now under test on the LCG EIS testbed \nby the LHC experime
nts.\n\nhttps://indico.cern.ch/event/0/contributions/1294351/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294351/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A Scalable Grid User Management System for Large Virtual Organizat
ion
DTSTART;VALUE=DATE-TIME:20040929T151000Z
DTEND;VALUE=DATE-TIME:20040929T153000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294356@indico.cern.ch
DESCRIPTION:Speakers: G. Carcassi (BROOKHAVEN NATIONAL LABORATORY)\nWe pre
sent a work-in-progress system\, called GUMS\, which automates\nthe proces
ses of Grid user registration and management and supports\npolicy-aware au
thorization as well. GUMS builds on existing VO\nmanagement tools (LDAP V
O\, VOMS and VOMRS) with a local grid user\nmanagement system and a site d
atabase which stores user credentials\,\naccounting history and policies i
n XML format. We use VOMRS\, being\ndeveloped by Fermilab\, to collect us
er information and register\nlegitimate users into the VOMS server.\nOur l
ocal grid user management system jointly retrieves user\ninformation and V
O policies from multiple VO databases based on site\nsecurity policies. A
uthorization can be done by mapping the user's\ncredential to local accoun
ts. Four different mapping schemes have\nbeen implemented: user's existin
g account\, recyclable pool account\,\nnon-recyclable pool account and gro
up shared account. The mapping\nselection is determined by the type of ta
rget resource and its usage\npolicies. We already deployed our automatic
grid mapfile generators\non the BNL Grid Gatekeeper\, GridFtp server and H
PSS mass storage\nsystem. Work is in progress to enable ``single-sign-on'
'\nbased upon X509 certificate credential for job execution and access\nto
both disk and tape storage resources.\n\nhttps://indico.cern.ch/event/0/c
ontributions/1294356/
LOCATION:Interlaken\, Switzerland Brunig 3
URL:https://indico.cern.ch/event/0/contributions/1294356/
END:VEVENT
BEGIN:VEVENT
SUMMARY:XTNetFile\, a fault tolerant extension of ROOT TNetFile
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294359@indico.cern.ch
DESCRIPTION:Speakers: F. Furano (INFN Padova)\nThis paper describes XTNetF
ile\, the client side of a project \nconceived to address the high demand
data access needs of modern \nphysics experiments such as BaBar using the
ROOT framework. In this \ncontext\, a highly scalable and fault tolerant c
lient/server \narchitecture for data access has been designed and deployed
which \nallows thousands of batch jobs and interactive sessions to \neffe
ctively access the data repositories based on the XROOTD data \nserver\,
a complex extension of the rootd daemon. The majority of the \ncommunicati
on problems are handled by the design of the client/server \nmechanism and
the communication protocol.\n\nThis allows us to build distributed data a
ccess systems which are \nhighly robust\, load balanced and scalable to an
extent which \nallows 'no jobs to fail'. \nFurthermore XTNetFile ensures
 backward compatibility with the 'old' \nrootd server by using the same API as
 the existing ROOT TFile/TNetFile \nclasses. \nThe code is designed with a
 high degree of modularity that allows one to \nbuild other interfaces\, such a
s administrative tools\, based on the \nsame communication layer. In addit
ion\, the client plugin can also be \nused to read other types of (non-ROOT I
/O) data files\, providing the \nsame benefits.\n\nhttps://indico.cern.ch/
event/0/contributions/1294359/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294359/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Production of simulated events for the BaBar experiment by using L
CG
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294360@indico.cern.ch
DESCRIPTION:Speakers: D. Andreotti (INFN Sezione di Ferrara)\nThe BaBar ex
periment has been taking data since 1999. In 2001 the computing group\nsta
rted to evaluate the possibility to evolve toward a distributed computing
model in\na Grid environment. In 2003\, a new computing model\, described
in other talks\, was\nimplemented\, and ROOT I/O is now being used as the
 Event Store. We implemented a\nsystem\, based on the LHC Computing Grid (LC
G) tools\, to submit full-scale Monte Carlo\nsimulation jobs in this new Ba
Bar computing model framework. More specifically\, the\nresources of the L
CG implementation in Italy\, grid.it\, are used as computing\nelements (C
E) and Worker Nodes (WN). A Resource Broker (RB) specific for the Babar\nc
omputing needs was installed. Other BaBar requirements\, such as the insta
llation and\nusage of an object-oriented (Objectivity) Database to read de
tector conditions and\ncalibration constants\, were accommodated by using n
on-gridified hardware in a subset\nof grid.it sites. The BaBar simulation
software was packed and installation on Grid\nelements was centrally manag
ed with LCG tools. Sites were geographically mapped to\nObjectivity databa
ses\, and conditions were read by the WN either locally or remotely.\nAn L
CG User Interface (UI) has been used to submit simulation tests by using s
tandard\nJDL commands. The ROOT I/O output files were retrieved from the W
N and stored in the\nclosest Storage Element (SE). Standard BaBar simulati
on production tools were then\ninstalled on the UI and configured such tha
t the resulting simulated events can be\nmerged and shipped to SLAC\, as
in the standard BaBar simulation production setup.\nFinal validation of t
he system is being completed. This gridified approach results in\nthe prod
uction of simulated events on geographically distributed resources with a\
nlarge throughput and minimal\, centralized system maintenance.\n\nhttps:/
/indico.cern.ch/event/0/contributions/1294360/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294360/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The simulation for the ATLAS experiment: present status and outloo
k
DTSTART;VALUE=DATE-TIME:20040929T120000Z
DTEND;VALUE=DATE-TIME:20040929T122000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294409@indico.cern.ch
DESCRIPTION:Speakers: A. Rimoldi (PAVIA UNIVERSITY & INFN)\nThe simulation
for the ATLAS experiment is presently operational in a full OO\nenvironme
nt and it is presented here in terms of successful solutions to the problems\n
posed by an application used by a wide community within a common framework. The
\nATLAS experiment is the perfect scenario in which to test all applications
 able to satisfy the\ndifferent needs of a big community. Following a well-
stated strategy of transition\nfrom the GEANT3- to the GEANT4-based simulat
ion\, a good validation programme during\nthe last months confirmed the ch
aracteristics of reliability\, performance and \nrobustness of this new to
ol in comparison with the results of the previous\nsimulation. Generation\
, simulation and digitization steps on different full sets of\nphysics eve
nts were tested in terms of performance and robustness in comparison with
\nthe same samples undergoing the old GEANT3-based simulation. The simulat
ion program\nis simultaneously tested on all different testbeam setups cha
racterizing the R&D\nprogramme of all subsystems belonging to the ATLAS de
tector with comparison to real\ndata in order to validate the physics cont
ent and the reliability in the detector\ndescription of each component.\n\
nhttps://indico.cern.ch/event/0/contributions/1294409/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294409/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Data Management in EGEE
DTSTART;VALUE=DATE-TIME:20040927T155000Z
DTEND;VALUE=DATE-TIME:20040927T161000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294361@indico.cern.ch
DESCRIPTION:Speakers: K. Nienartowicz (CERN)\nData management is one of th
e cornerstones in the distributed production computing \nenvironment that
the EGEE project aims to provide for a European e-Science \ninfrastructure
. We have designed a set of services based on previous experience in \noth
er Grid projects\, trying to address the requirements of our user communit
ies.\n\nIn this paper we summarize the most fundamental requirements and c
onstraints as well \nas the security\, reliability\, stability and robustn
ess considerations that have \ndriven the architecture and the particular
choice for service decomposition in our \nservice-oriented architecture. W
e discuss the interaction of our services with each \nother\, their deploy
ment models and how failures are being managed.\n\nThe three service group
s for data management services are the Storage Element\, \nthe Data Schedu
ling and the Catalog services. The Storage Element exposes interfaces \nto
Grid managed storage\, with the appropriate semantics in the Grid distrib
uted \nenvironment. The Catalog services contain all the metadata related
to data: The File \nCatalog maintains a file-system-like view of the files
in the Grid in a logical user \nnamespace\, the Replica Catalog keeps tra
ck of identical copies of the files \ndistributed in different Storage Ele
ments and the Metadata Catalog keeps application \nspecific information ab
out the files. The Data Scheduling services take care of \ncontrolled data
transfer and keep the information in the Catalog services consistent \nwi
th what is actually available in the Storage Elements\, acting as the bind
ing \nbetween the two.\n\nWe conclude with first experiences and examples
of use-cases for High Energy Physics \napplications.\n\nhttps://indico.cer
n.ch/event/0/contributions/1294361/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294361/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Information and Monitoring Services within a Grid Environment
DTSTART;VALUE=DATE-TIME:20040930T130000Z
DTEND;VALUE=DATE-TIME:20040930T132000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294364@indico.cern.ch
DESCRIPTION:The R-GMA (Relational Grid Monitoring Architecture) was develo
ped within the EU \nDataGrid project\, to bring the power of SQL to an inf
ormation and monitoring system \nfor the grid. It provides producer and c
onsumer services to both publish and \nretrieve information from anywhere
within a grid environment. Users within a \nVirtual Organization may defi
ne their own tables dynamically into which to publish \ndata.\n\nWithin th
e DataGrid project R-GMA was used for the information system\, making \nde
tails about grid resources available for use by other middleware component
s.\nR-GMA has also been used for monitoring grid jobs by members of the 
CMS and D0 \ncollaborations where information about jobs is published from
within a job wrapper\, \ntransported across the grid by R-GMA and made av
ailable to users. An accounting \npackage for processing PBS logging data
and sending it to one or more Grid \nOperation Centres using R-GMA has be
en written and is being deployed within LCG. \nThere are many other exist
ing and potential applications.\n\nR-GMA is currently being re-engineered
to fit into a Web Service environment as \npart of the EU EGEE project. I
mprovements being developed include fine grained \nauthorization\, an impr
oved user interface and measures to ensure superior scaling \nbehaviour.\n
\nhttps://indico.cern.ch/event/0/contributions/1294364/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294364/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The High Level Filter of the H1 Experiment at HERA
DTSTART;VALUE=DATE-TIME:20040929T132000Z
DTEND;VALUE=DATE-TIME:20040929T134000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294366@indico.cern.ch
DESCRIPTION:Speakers: A. Campbell (DESY)\nWe present the scheme in use for
online high level\nfiltering\, event reconstruction and classification\ni
n the H1 experiment at HERA since 2001.\n\nThe Data Flow framework (prese
nted at CHEP2001) will\nbe reviewed. It is based on CORBA for all data 
transfer\,\nmulti-threaded C++ code to handle the data flow and\nsynchroni
sation\, and Fortran code for reconstruction and\nevent selection. A control
ler written in Python provides\nsetup\, initialisation and process manageme
nt. Specialised\nJava programs provide run control and online access to an
d display of\nhistograms. A C++ logger program provides central logging of
\nstandard printout from all processes. \n\nWe show how the system handles
online preparation and update\nof detector calibration and beam parameter
data.\nNewer features are the selection of rare events for the online\nev
ent display and the extension to multiple input sources\nand output channe
ls.\n\nWe discuss how the system design provides automatic recovery from\nv
arious failures and show the overall and long term performance.\n\nIn addi
tion we present the framework of event\nselection and classification and t
he features it provides.\n\nhttps://indico.cern.ch/event/0/contributions/1
294366/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294366/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Distributed Tracking\, Storage\, and Re-use of Job State Informati
on on the Grid
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294370@indico.cern.ch
DESCRIPTION:Speakers: L. Matyska (CESNET\, CZECH REPUBLIC)\nThe Logging an
d Bookkeeping service tracks jobs passing through the Grid. It collects\nim
portant events generated by both the grid middleware components and\nappli
cations\, and processes them at a chosen L&B server to provide the job\nst
ate. The events are transported through secure reliable channels. Job\ntr
acking is fully distributed and does not depend on a single information\ns
ource\; the robustness is achieved through speculative job state computati
on in\ncase of reordered\, delayed or lost events. The state computation
is easily\nadaptable to modified job control flow.\n\nThe events are also
 passed to the related Job Provenance service. Its purpose\nis the long-term
storage of information on job execution\, environment\, and the\nexecutab
le and input sandbox files. The data can be used for debugging\,\npost-mor
tem analysis\, or re-running jobs. The data are kept by the\njob-provenanc
e storage service in a compressed format\, accessible on a\nper-job basis. A
complementary index service is able to find particular jobs\naccording to
configurable criteria\, e.g. submission time or "tags" assigned by\nthe u
ser. A user client to support job re-execution is planned.\n\nBoth the L&B
and Job Provenance index server provide web-service interfaces\nfor query
ing. Those interfaces comply with the On-demand producer specification\nof
the R-GMA infrastructure. Hence R-GMA capabilities can be utilized to\npe
rform complex distributed queries across multiple servers. Also\,\naggrega
te information about job collections can be easily provided.\n\nThe L&B se
rvice was deployed in the EU DataGrid and CERN LCG projects\;\nthe Job Pro
venance will be deployed in the EGEE project.\n\nhttps://indico.cern.ch/ev
ent/0/contributions/1294370/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294370/
END:VEVENT
BEGIN:VEVENT
SUMMARY:New experiences with the ALICE High Level Trigger Data Transport F
ramework
DTSTART;VALUE=DATE-TIME:20040927T120000Z
DTEND;VALUE=DATE-TIME:20040927T122000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294410@indico.cern.ch
DESCRIPTION:Speakers: T.M. Steinbeck (KIRCHHOFF INSTITUTE OF PHYSICS\, RUP
RECHT-KARLS-UNIVERSITY HEIDELBERG\, for the Alice Collaboration)\nThe Alic
e High Level Trigger (HLT) is foreseen to consist of a \ncluster of 400 to
 500 dual SMP PCs at the start-up of the \nexperiment. Its input data rat
e can be up to 25 GB/s. This has to be \nreduced to at most 1.2 GB/s before
the data is sent to DAQ through \nevent selection\, filtering\, and data
compression. For these \nprocessing purposes\, the data is passed through
the cluster in \nseveral stages and groups for successive merging until\,
at the last \nstage\, fully processed complete events are available. For t
he \ntransport of the data through the stages of the cluster\, a\nsoftware
framework is being developed consisting of multiple \ncomponents. These c
omponents can be connected via a common interface \nto form complex config
urations that define the data flow in the \ncluster. For the framework\, n
ew benchmark results are available as \nwell as experience from tests and
data challenges run in Heidelberg.\nThe framework is scheduled to be used
during upcoming testbeam \nexperiments.\n\nhttps://indico.cern.ch/event/0/
contributions/1294410/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294410/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Performance of the NorduGrid ARC and the Dulcinea Executor in ATLA
S Data Challenge 2
DTSTART;VALUE=DATE-TIME:20040929T151000Z
DTEND;VALUE=DATE-TIME:20040929T153000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294371@indico.cern.ch
DESCRIPTION:This talk describes the various stages of ATLAS Data Challenge
 2 (DC2)\nwith respect to the usage of resources deployed via NorduGrid's Adv
anced\nResource Connector (ARC). It also describes the integration of thes
e\nresources with the ATLAS production system using the Dulcinea\nexecutor
.\n\nATLAS Data Challenge 2 (DC2)\, run in 2004\, was designed to be a ste
p\nforward in the distributed data processing. In particular\, much\ncoord
ination of task assignment to resources was planned to be\ndelegated to Gr
id in its different flavours. An automatic production\nmanagement system w
as designed\, to direct the tasks to Grids and\nconventional resources.\n\
nThe Dulcinea executor is a part of this system that provides interface\nt
o the information system and resource brokering capabilities of the\nARC m
iddleware. The executor translates the job definitions received\nfrom the 
supervisor to the extended resource specification language\n(XRSL) used by
the ARC middleware. It also takes advantage of the ARC\nmiddleware's buil
t-in support for the Globus Replica Location Server\n(RLS) for file regist
ration and lookup.\n\nNorduGrid's ARC has been deployed on many ATLAS-dedi
cated resources\nacross the world in order to enable effective participati
on in ATLAS\nDC2. This was the first attempt to harness large amounts of s
trongly\nheterogeneous resources in various countries for a single\ncollab
orative exercise using Grid tools. This talk addresses various\nissues tha
t arose during different stages of DC2 in this environment:\npreparation\,
such as ATLAS software installation\; deployment of the\nmiddleware\; and
processing. The results and lessons are summarized as\nwell.\n\nhttps://i
ndico.cern.ch/event/0/contributions/1294371/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294371/
END:VEVENT
BEGIN:VEVENT
SUMMARY:SAMGrid Integration of SRMs
DTSTART;VALUE=DATE-TIME:20040927T161000Z
DTEND;VALUE=DATE-TIME:20040927T163000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294434@indico.cern.ch
DESCRIPTION:Speakers: R. Kennedy (FERMI NATIONAL ACCELERATOR LABORATORY)\n
SAMGrid is the shared data handling framework of the two large Fermilab\nR
un II collider experiments: DZero and CDF. In production since 1999 at D0\
, and\nsince mid-2004 at CDF\, the SAMGrid framework has been adapted over
time to\naccommodate a variety of storage solutions and configurations\,
as well as the\ndiffering data processing models of these two experiments.
This has been very\nsuccessful for both experiments. Backed by primary da
ta repositories of\napproximately 1 PB in size for each experiment\, the S
AMGrid framework delivers\nover 100 TB/day to DZero and CDF analyses at Fe
rmilab and around the world. \nEach of the storage systems used with SAMGr
id\, however\, has distinct\ninterfaces\, protocols\, and behaviors. This
led to different levels of\nintegration of the various storage devices int
o the framework\, which\ncomplicated the exploitation of their functionali
ty and limited in some cases\nSAMGrid expansion across the experiments' Gr
id.\n\n In an effort to simplify the SAMGrid storage interfaces\, SAMGr
id has\nadopted the Storage Resource Manager (SRM) concept as the universa
l interface\nto all storage devices. This has simplified the SAMGrid frame
work\, especially\nthe implementation of storage device interactions. It p
repares the SAMGrid\nframework for future storage solutions equipped with
SRM interfaces\, without\nthe need for long and risky software integration
projects. In principle\, any\nstorage device with an SRM interface can be
used now with the SAMGrid\nframework. The integration of SRMs is an impor
tant further step towards\nevolving the SAMGrid framework into a co-operat
ing collection of distinct\,\nmodular grid-oriented services. To date\, SR
Ms for Enstore\, dCache\, local\ncaches\, and permanent disk locations are
tested and in production use. This\nreport outlines how the SRMs were int
egrated into the existing SAMGrid\nframework without disturbing on-going o
perations\, and describes our operational\nexperience with SAMGrid and SRM
s in the field.\n\nhttps://indico.cern.ch/event/0/contributions/1294434/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294434/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The STAR Unified Meta-Scheduler project\, a front end around evolvi
ng technologies for user analysis and data production.
DTSTART;VALUE=DATE-TIME:20040930T122000Z
DTEND;VALUE=DATE-TIME:20040930T124000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294373@indico.cern.ch
DESCRIPTION:Speakers: J. Lauret (Brookhaven National Laboratory)\nWhile ma
ny success stories can be told as a product of the Grid middleware\ndevelo
pments\, most of the existing systems relying on workflow and job executio
n are\nbased on integration of self-contained production systems interfaci
ng with a given\nscheduling component or portal\, or directly uses the bas
e component of the Grid\nmiddleware (globus-job-run\, globus-job-submit).
However\, such systems usually do not\ntake advantage of the presence of R
esource Manager System (RMS)\; they hardly allow\nfor a mix of local RMS a
nd are either Grid or non-grid enabled. We intend to present\nan approach
taking advantage of both worlds.\nThe STAR Unified Meta-Scheduler (SUMS) p
roject provides users a way to submit jobs on\na farm\, at a site (multipl
e pools or farms) or on the Grid without the need to know\nor adapt to the
diversity of technologies and knowledge involved while using multiple\nLR
MS and their specificities. The strategy was adopted in 2002 to shield the
users\nagainst changes in technologies inherent to the emerging Grid infr
astructure and\ndevelopments.\nJava based and taking as input a simple use
r job description language (U-JDL)\, SUMS\nallows connection with multiple
(overlapping or not) LRMS and Grid job submission\n(Condor-G\, grid-job-s
ubmit\, …) without the need for changing the U-JDL. Fully\nintegrated wi
th the STAR File and Replica Catalog\, information providers (load and\nqu
eue information)\, SUMS provides a single point of reference for users to
migrate\nfrom a traditional to a distributed computing environment. Resul
ts and the\nevolving architecture of SUMS will be presented\, and it
s future\, improvements\nand evolution will be discussed.\n\nhttps://indic
o.cern.ch/event/0/contributions/1294373/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294373/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Linux for the CLEO-c Online system
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294378@indico.cern.ch
DESCRIPTION:Speakers: H. Schwarthoff (CORNELL UNIVERSITY)\nThe CLEO collab
oration at the Cornell electron positron storage ring \nCESR has completed
its transition to the CLEO-c experiment. This new \nprogram contains a wi
de array of Physics studies of $e^+e^-$ \ncollisions at center of mass ene
rgies between 3 GeV and 5 GeV.\n\nNew challenges await the CLEO-c Online c
omputing system\, as the \ntrigger rates are expected to rise from \n\nhtt
ps://indico.cern.ch/event/0/contributions/1294378/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294378/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Network Information and Management Infrastructure Project
DTSTART;VALUE=DATE-TIME:20040927T143000Z
DTEND;VALUE=DATE-TIME:20040927T145000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294429@indico.cern.ch
DESCRIPTION:Speakers: P. DeMar (FNAL)\nManagement of a large site network su
ch as the FNAL LAN presents many\ntechnical and organizational challenges. Thi
s highly dynamic network\nconsists of around 10 thousand network nodes. Th
e nature of the\nactivities FNAL is involved in and its computing policy\n
require that the network remains as open as reasonably possible\, both in t
erms of connectivity to outside networks and with\nrespect to the proce
dural simplicity of joining the network for temporary\nnetwork participants
 such as visitors' notebook computers.\nThe goal of the Network Information
and Management Infrastructure\nproject at FNAL is to build software infra
structure which would help\nnetwork management and computer security teams
organize monitoring\nand management of the network\, simplify communicati
on between\nthese entities and users\, and integrate network management into\n
FNAL computer center management infrastructure.\n\nPrimary authors: Phil D
eMar (FNAL)\, Igor Mandrichenko (FNAL)\,\n Don Petravick (FNAL)\, Dane S
kow (FNAL)\n\nhttps://indico.cern.ch/event/0/contributions/1294429/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294429/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Distributed Computing Grid Experiences in CMS DC04
DTSTART;VALUE=DATE-TIME:20040929T130000Z
DTEND;VALUE=DATE-TIME:20040929T132000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294382@indico.cern.ch
DESCRIPTION:Speakers: A. Fanfani (INFN-BOLOGNA (ITALY))\nIn March-April 20
04 the CMS experiment undertook a Data Challenge (DC04). \nDuring the previ
ous 8 months CMS undertook a large simulated event\nproduction. The goal 
of the challenge was to run CMS reconstruction for a\nsustained period at a 2
5 Hz input rate\, distribute the data to the CMS Tier-1\ncenters and analyz
e them at remote sites. Grid environments developed\nby the LH
C Computing Grid (LCG) in Europe and by Grid2003 in the US were \nutiliz
ed to complete the aspects of the challenge.\n\nDuring the simulation phas
e\, US-CMS utilized Grid2003 to simulate and\nprocess approximately 17 mil
lion events. Simultaneous usage of CPU\nresources peaked at 1200 CPUs\,
controlled by a single FTE. Using Grid3 was a \nmilestone for CMS computi
ng in reaching a new magnitude in the\nnumber of autonomously cooperating
computing sites for production. The\nuse of Grid-based job execution res
ulted in reducing the overall support effort \nrequired to submit and moni
tor jobs by a factor of two.\n\nDuring the challenge itself\, the CMS grou
ps from Italy and Spain used the LCG Grid \nEnvironment to satisfy challen
ge requirements. The LCG Replica \nManager was used to transfer the data
. The CERN RLS provided the needed \nreplica catalogue functionality. The
LCG submission system based on the \nResource Broker was used to submit an
alysis jobs to the sites hosting the \ndata. A CMS-dedicated GridICE monit
oring was activated to monitor both \nservices and resources.\n\nA descrip
tion of the experiences\, successes and lessons learned from both \ngrid
 environments is presented.\n\nhttps://indico.cern.ch/eve
nt/0/contributions/1294382/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294382/
END:VEVENT
BEGIN:VEVENT
SUMMARY:WAN Emulation Development and Testing at Fermilab
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294384@indico.cern.ch
DESCRIPTION:Speakers: A. Bobyshev (FERMILAB)\nThe Compact Muon Solenoid (C
MS) experiment at CERN's Large Hadron Collider (LHC) is\nscheduled to come
on-line in 2007. Fermilab will act as the CMS Tier-1 center for the\nUS a
nd make experiment data available to more than 400 researchers in the US\n
participating in the CMS experiment. The US CMS Users Facility group\, ba
sed at\nFermilab\, has initiated a project to develop a model for optimizi
ng movement of CMS\nexperiment data between CERN and the various tiers of
US CMS data centers. Fermilab\nhas initiated a project to design a WAN emu
lation facility which will enable\ncontrolled testing of unmodified or mod
ified CMS applications and TCP \nimplementations locally under conditions
that emulate WAN connectivity. The WAN\nemulator facility is configurable
for latency\, jitter\, and packet loss. The initial\nimplementation is b
ased on the NISTnet software product. In this paper we will\ndescribe the
status of this project to date\, the results of validation and comparison\
nof performance measurements obtained in emulated and real environments for
 different\napplications including multi-stream GridFTP. We will also intr
oduce future short term\nand intermediate term plans\, as well as outstand
ing problems and issues.\n\nhttps://indico.cern.ch/event/0/contributions/1
294384/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294384/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Supporting the Development Process of the DataGrid Workload Manage
ment System Software with GNU autotools\, CVS and RPM
DTSTART;VALUE=DATE-TIME:20040930T151000Z
DTEND;VALUE=DATE-TIME:20040930T153000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294386@indico.cern.ch
DESCRIPTION:Speakers: E. Ronchieri (INFN CNAF)\nWe describe the process f
or handling software builds and releases for the Workload\nManagement pac
kage of the DataGrid project. The software development in the project\nwas
shared among nine contractual partners\, in seven different countries\, a
nd was\norganized in work-packages covering different areas.\n\nIn this pa
per\, we discuss how a combination of the Concurrent Versions System\, GNU\naut
otools and other tools and practices was organised to allow the developmen
t\,\nbuild\, test and distribution of the DataGrid Workload Management Sys
tem. This is not\nonly characterised by a rather high internal geographic
and administrative dispersion\n(four institutions with developers at nine
 different locations in three countries)\,\nbut also by the fact that we had to integ
rate and interface to a dozen third-party code\npackages coming from di
fferent sources\, and to the software products coming from\nthree other de
velopment work-packages internal to the project. \n\nA high level of centr
al co-ordination needed to be maintained for project-wide\nsteering\, and
this had also to be reflected in the software development\ninfrastructure\
, while maintaining ease-of-use for distributed developers and\nautomated
procedures wherever possible.\n\nhttps://indico.cern.ch/event/0/contributi
ons/1294386/
LOCATION:Interlaken\, Switzerland Brunig 1+2
URL:https://indico.cern.ch/event/0/contributions/1294386/
END:VEVENT
BEGIN:VEVENT
SUMMARY:DIRAC Workload Management System
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294388@indico.cern.ch
DESCRIPTION:Speakers: V. Garonne (CPPM-IN2P3 MARSEILLE)\nThe Workload Mana
gement System (WMS) is the core component of the\nDIRAC distributed MC pro
duction and analysis grid of the LHCb\nexperiment. It uses a central Task
database which is accessed via\na set of central Services with Agents runn
ing on each of the LHCb\nsites. DIRAC uses a 'pull' paradigm where Agents
request tasks\nwhenever they detect their local resources are available.\n
The collaborating central Services allow new components to be\nplugged in
easily. These Services can perform functions such as\nscheduling optimizat
ion\, task prioritization\, job splitting and merging\,\nto name a few. Th
ey also provide job status information for various\nmonitoring clients. We
will discuss the services deployment and operation\nwith particular empha
sis on the robustness and scalability issues.\n\nThe distributed Agents ha
ve a modular design which allows easy functionality\nextensions to adapt to 
the needs of a particular site. The Agent\ninstallation has only basic pr
e-requisites\, which makes it easy for new\nsites to be incorporated. An Age
nt can be deployed on a gatekeeper of a\nlarge cluster or just on a singl
e worker node of the LCG grid. PBS\, LSF\, BQS\,\nCondor\, LCG and Globus can be
used as the DIRAC computing resources.\n\nThe WMS components use XML-RPC
and instant messaging Jabber protocols\nfor communication which increases
the overall reliability of the\nsystem. The jobs handled by the WMS are de
scribed using the ClassAd library\,\nwhich facilitates the interoperability with
other grids.\n\nhttps://indico.cern.ch/event/0/contributions/1294388/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294388/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Data Rereprocessing on Worldwide Distributed Systems
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294389@indico.cern.ch
DESCRIPTION:Speakers: D. Wicke (Fermilab)\nThe D0 experiment fa
ces many challenges enabling access to large\ndatasets for physicists on 4
 continents. The strategy of solving these\nproblems on worldwide distribut
ed computing clusters is followed.\n\nSince the beginning of Tevatron 
Run II (March 2001)\, all Monte Carlo\nsimulations have been produced outside of Fe
rmilab on remote systems. For\nanalyses\, a system of regional analysis ce
nters (RACs) was established which\nsupplies the associated institutes with 
the data. This structure\, which\nis similar to the Tier structure foreseen
 for the LHC\, was used in autumn\n2003 to rereprocess all D0 data with the up-to-
date and \nmuch improved reconstruction software.\n\nAs the first running e
xperiment\, D0 has implemented and operated all \nimportant computing tasks o
f a high energy physics experiment on\nworldwide distributed systems. \n\n
The experiences gained in D0 can be applied to judge the LHC\ncomputing mo
del.\n\nhttps://indico.cern.ch/event/0/contributions/1294389/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294389/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Muon Event Filter Software for the ATLAS Experiment at LHC
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294391@indico.cern.ch
DESCRIPTION:Speakers: M. Biglietti (UNIVERSITY OF MICHIGAN)\nAt LHC the 40
MHz bunch crossing rate dictates a high selectivity of the\nATLAS Trigger
system\, which has to keep the full physics potential of the \nexperiment
in spite of a limited storage capability.\nThe level-1 trigger\, implemen
ted in custom hardware\, will reduce the \ninitial rate to 75 kHz and is
 followed by the software-based level-2 \nand Event Filter\, usually refer
red to as High Level Triggers (HLT)\, \nwhich further reduce the rate to abou
t 100 Hz. \nIn this paper an overview of the implementation of the offline
 muon\n reconstruction algorithms MOORE (Muon Object Oriented REconstructio
n) and \n MuId (Muon Identification) as Event Filter in the ATLAS online f
ramework \n is given.\nThe MOORE algorithm performs the reconstruction ins
ide the Muon Spectrometer\nproviding a precise measurement of the muon tra
ck parameters outside the \ncalorimeters\; MuId combines the measurements
of all ATLAS sub-detectors\nin order to identify muons and provides the be
st estimate of their \nmomentum at the production vertex. \nIn the HLT i
mplementation the muon reconstruction can be executed in the \n"full scan
mode"\, performing pattern recognition in the whole muon spectrometer\, \n
or in the "seeded mode"\, taking advantage of the results of the earlier t
rigger \nlevels.\nAn estimate of the execution time will be presented alon
g \nwith the performance in terms of efficiency\, momentum resolution \na
nd rejection power for muons coming from hadron decays and for fake muon t
racks\, \ndue to accidental hit correlations in the high background enviro
nment of the \nexperiment.\n\nhttps://indico.cern.ch/event/0/contributions
/1294391/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294391/
END:VEVENT
BEGIN:VEVENT
SUMMARY:GILDA: a Grid for dissemination activities
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294393@indico.cern.ch
DESCRIPTION:Speakers: R. Barbera (Univ. Catania and INFN Catania)\nComputa
tional and data grids are now entering a more mature phase where experimen
tal\ntest-beds are turned into production quality infrastructures operatin
g around the\nclock. All this is becoming true both at national level\, wh
ere an example is the\nItalian INFN production grid (http://grid-it.cnaf.i
nfn.it)\, and at the continental\nlevel\, where the most striking example
is the European Union EGEE Project\nInfrastructure (http://www.eu-egee.or
g). \nHowever\, the impact of grid technologies on the future way of
doing e-science\nand research in Europe will be proportional to the capabi
lity of National and\nEuropean Grid Infrastructures to attract and serve\
nmany diverse scientific and industrial communities through serious and de
tailed\ndissemination and tutoring programs.\nIn this contribution we pres
ent GILDA\, the Grid Infn Laboratory for Dissemination\nActivities (http:/
/gilda.ct.infn.it). GILDA is a complete suite of grid elements\n(Certifica
tion Authority\, Virtual Organization\, Distributed Test-bed\, Grid\nDemon
strator\, etc.) completely devoted to dissemination activities. GILDA can
 also\nact as a fast-prototyping test-bed in which to start the porting/interf
acing of new\napplications with the grid middleware. The use and exploita
tion of GILDA in the\ncontext of the Network Activities of the EGEE Projec
t will be discussed.\n\nhttps://indico.cern.ch/event/0/contributions/12943
93/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294393/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Experience with POOL from the LCG Data Challenges of the three LHC
experiments
DTSTART;VALUE=DATE-TIME:20040929T120000Z
DTEND;VALUE=DATE-TIME:20040929T122000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294396@indico.cern.ch
DESCRIPTION:Speakers: Maria Girone ()\nThis presentation will summarise th
e deployment experience gained \nwith POOL during the first larger data challenges \n
performed by the LHC experiments. In particular we discuss the storage
access \nperformance and optimisations\, the integration issues with grid
\nmiddleware services such as the LCG Replica Location Service \n(RLS) an
d the LCG Replica Manager\, and experience with the POOL-\nproposed way of e
xchanging metadata (such as File Catalog \nentries) in a de-cou
pled production system.\n\nhttps://indico.cern.ch/event/0/contributions/12
94396/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294396/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Guidelines for Developing a Good GUI
DTSTART;VALUE=DATE-TIME:20040930T120000Z
DTEND;VALUE=DATE-TIME:20040930T122000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294438@indico.cern.ch
DESCRIPTION:Speakers: I. Antcheva (CERN)\nDesigning a usable\, visually-at
tractive GUI is somewhat more difficult than it\nappears at first glance
. The users\, the GUI designers and the programmers are the three\nimportant p
arties involved in this process\, and each has a comprehensive view of the
\naspects of the application goals\, as well as the steps that have to be 
taken to\nsuccessfully meet the application requirements. The fundamental
GUI design principles and\nthe main programming aspects are discussed in t
his paper. \n\nKey topics include:\n - User requirements: identifying us
ers and supporting different user profiles -\nfrom beginners to advanced user
s\n -Close relationship between the GUI widgets\, user actions\, tasks a
nd user goals\n -Task-analysis methods\n -Prototype development and t
esting\n -General design considerations\n -Effective GUI de
sign keys\, guidelines and style guides\n\nhttps://indico.cern.ch/event/0/
contributions/1294438/
LOCATION:Interlaken\, Switzerland Brunig 1+2
URL:https://indico.cern.ch/event/0/contributions/1294438/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The LCG Savannah software development portal
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294397@indico.cern.ch
DESCRIPTION:Speakers: Y. Perrin (CERN)\nA web portal has been developed\,
in the context of the LCG/SPI project\, in\norder to coordinate workflow a
nd manage information in large software\n projects. It is a development of
the GNU Savannah package and offers a range\n of services to every hosted
project: Bug / support / patch trackers\, a\n simple task planning system
\, news threads\, and a download area for software\n releases. Features an
d functionality can be fine-tuned on a per project\n basis and the system
displays content and grants permissions according to\n the user's status (
project member\, other Savannah user\, or visitor). A\n highly configurabl
e notification system is able to channel tracker\n submissions to develope
rs in charge of specific project modules.\n\nThe portal is based on the GN
U Savannah package which is now developed as\n'Savane' by the Free Softwar
e Foundation of France. It is a descendant of the\nwell known SourceForge-
2.0 software. The original trackers were contributed\nto the open source c
ommunity by XEROX\, which uses a similar system for their\ninternal softwa
re development. Several features and extensions were\nintroduced in a coll
aboration of LCG/SPI with the current main developer of\nSavannah to adapt
the software for use at CERN and the results were given\nback to the open
 source community. CERN Savannah currently provides services to more\n than 600 user
s in 90 projects.\n\nhttps://indico.cern.ch/event/0/contributions/1294397/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294397/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Cross Experiment Workflow Management: The Runjob Project
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294398@indico.cern.ch
DESCRIPTION:Speakers: P. Love (Lancaster University)\nBuilding on several
 years of success with the MCRunjob projects at \nDZero and CMS\, the Fermil
ab-sponsored joint Runjob project aims to \nprovide a Workflow description
language common to three experiments: \nDZero\, CMS and CDF. This proje
ct will encapsulate the remote \nprocessing experiences of the three exper
iments in an extensible \nsoftware architecture using web services as a\nc
ommunication medium. The core of the Runjob project will be the \nShahkar
software packages that provide services for describing jobs \nand targeti
ng them at different execution environments. A common \ninterface to multi
ple storage and compute grid elements will be \nprovided\, allowing the t
hree experiments to share hardware resources \nin a transparent manner. S
everal tools provided by Shahkar are \ndiscussed including FileMetaBrokers
\, which provide a \nuniform way to handle files and metadata over a distri
buted cluster\, \nthe ShREEK runtime execution environment that allows exe
cutable jobs \nto provide a real time monitoring and control interface to
any \nsystem\, the scriptObject generic task encapsulation objects and \nX
MLProcessor object persistency tool.\n\nhttps://indico.cern.ch/event/0/con
tributions/1294398/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294398/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Self-Filling Histograms: An object-oriented analysis framework
DTSTART;VALUE=DATE-TIME:20040930T145000Z
DTEND;VALUE=DATE-TIME:20040930T151000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294399@indico.cern.ch
DESCRIPTION:Speakers: J. LIST (University of Wuppertal)\nAnalyses in high-
energy physics often involve the filling of large \nnumbers of histograms
 from n-tuple-like data structures\, e.g. RooT \ntrees. Even when using an
 object-oriented framework like RooT\, the \nuser code often follows a fu
nctional programming approach\, where \nbooking\, application of cuts\, ca
lculation of weights and \nhistogrammed quantities and finally the filling
 of the histogram are \nperformed separately in different places in the pro
gram. \n\nWe will present a set of RooT-based histogram classes that allow
 one to \ndefine the histogrammed quantity\, its weight and the cuts to be \na
pplied at the time of booking.\nWe use lightweight function object classes
to define plotted \nquantities and cut conditions\; the "self-filling" hi
stograms hold \nreferences to these objects\, and evaluate them in a fill
method that \nthus needs no parameters. The use of function objects rather
than \nstrings to define plotted quantities and cuts permits error \ndete
ction at compile rather than run time\, and allows the \nimplementation of
caching mechanisms if costly computations are to \nbe performed. Arithmet
ic and logical expressions are implemented by \noperator overloading. Hist
ograms can be grouped in collections. We \napply the visitor pattern to pe
rform operations like filling\, \nwriting\, fitting or attribute setting o
n such a group\, without \nhaving to extend the collection class each time
a new functionality \nis needed.\n\nAlthough developed within the object
oriented analysis framework of \nthe H1 experiment\, this toolkit can be u
sed on any RooT tree.\n\nhttps://indico.cern.ch/event/0/contributions/1294
399/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294399/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A Lightweight Monitoring and Accounting System for LHCb DC'04 Prod
uction
DTSTART;VALUE=DATE-TIME:20040930T153000Z
DTEND;VALUE=DATE-TIME:20040930T155000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294401@indico.cern.ch
DESCRIPTION:Speakers: M. Sanchez-Garcia (UNIVERSITY OF SANTIAGO DE COMPOST
ELA)\nThe LHCb Data Challenge 04 includes the simulation of over 200 M\nevents
 using distributed computing resources on N sites and\nexten
ding over 3 months. To achieve this goal\, a dedicated Production\ngrid (DI
RAC) has been deployed. We will present the Job\nMonitoring and Accounting
services developed to follow the status of\nthe production along its way
and to evaluate the results at the end of\nthe Data Challenge.\n\nThe end
user connects with a web browser to\nWEB-SERVER applications showing dynam
ic reports for a whole set of\npossible queries. These applications in tur
n interrogate the Job \nMonitoring\nService of the DIRAC Workload Manageme
nt system and\nAccounting Database service by means of dedicated XML-RPC i
nterfaces\,\nquerying for the information requested by the user. The repor
ts \nprovide\na uniform view of the usage of the computing resources avai
lable. All\nthe system components are implemented as a set of cooperating
 Python\nclasses following the design choice of LHCb. The different service
s \nare distributed\nover a number of independent machines. This allows the
 system to reach a\nscalability level of multiple thousands of concurrently
 monitored \njobs.\n\nhttps://indico.cern.ch/event/0/contributions/1
294401/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294401/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Control and state logging for the PHENIX DAQ System
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294404@indico.cern.ch
DESCRIPTION:Speakers: Martin Purschke ()\nThe PHENIX DAQ system is managed
by a control system responsible for\nthe configuration and monitoring of
 the PHENIX detector hardware and\nreadout software. At its core\, the cont
rol system\, called Runcontrol\,\nis a set of processes that manages virtually a
ll detector \ncomponents through a distributed architecture based on CORBA.
\nA key aspect of the distributed control system\, the messaging \nsystem\
, is the ability to access critical detector state \ninformation\, and del
iver it to operators and applications of the \ncontrol system. The goal of
the system is to concentrate all output \nmessages of the distributed pro
cesses\, which would normally end up \nin log files or on a terminal\, in
a central place. The messages may \noriginate from or be received by appli
cations running on any of the \nmultiple platforms which are in use includ
ing Linux\, Windows\, \nSolaris\, and VxWorks. Listener applications allow
the DAQ operators \nto get a comprehensive overview of all messages they
are interested \nin\, and also allow scripts or other programs to take au
tomated \naction in response to certain messages.\nMessages are formatted
to contain information about the source of the\nmessage\, the message type
\, and its severity. Applications written to\nprovide filtering of message
s by the DAQ operators by type\, severity\nand source will be presented.\n
We will discuss the mechanism underlying this system\, present \nexamples
of the use\, and discuss performance and reliability issues.\n\nhttps://in
dico.cern.ch/event/0/contributions/1294404/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294404/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Adding Kaons to the Bertini Cascade Model
DTSTART;VALUE=DATE-TIME:20040927T153000Z
DTEND;VALUE=DATE-TIME:20040927T155000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294405@indico.cern.ch
DESCRIPTION:A version of the Bertini cascade model for hadronic interactio
ns is part of \nthe Geant4 toolkit\, and may be used to simulate pion-\, p
roton-\, and \nneutron-induced reactions in nuclei. It is typically valid
for incident \nenergies of 10 GeV and below\, making it especially useful
for the simulation of\nhadronic calorimeters. In order to generate the i
ntra-nuclear cascade\, the \ncode depends on tabulations of exclusive chan
nel cross section data\, \nparameterized angular distributions and phase-s
pace generation of \nmulti-particle final states. To provide a more detai
led treatment of hadronic \ncalorimetry\, and kaon interactions in general
\, this model is being extended to \ninclude incident kaons up to an energ
y of 15 GeV. Exclusive channel cross \nsections\, up to and including six
-body final states\, will be included for K+\, \nK-\, K0\, K0bar\, lambda\
, sigma+\, sigma0\, sigma-\, xi0 and xi-. K+nucleon and \nK-nucleon cross
sections are taken from various cross section catalogs\, while \nmost of
the cross sections for incident K0\, K0bar and hyperons are estimated \nfr
om isospin and strangeness considerations. Because there is little data fo
r \nincident hyperon cross sections\, use of the extended model will be re
stricted \nto incident K+\, K-\, K0S and K0L. Hyperon cross sections are
included only to \nhandle the secondary interactions of hyperons created i
n the intra-nuclear \ncascade.\n\nhttps://indico.cern.ch/event/0/contribut
ions/1294405/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294405/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Composite Framework for CMS Applications
DTSTART;VALUE=DATE-TIME:20040927T153000Z
DTEND;VALUE=DATE-TIME:20040927T155000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294406@indico.cern.ch
DESCRIPTION:Speakers: I. Osborne (Northeastern University\, Boston\, USA)\
nWe present a composite framework which exploits the advantages of \nthe C
MS data model and uses a novel approach for building CMS \nsimulation\, re
construction\, visualisation and future analysis \napplications. The frame
work exploits LCG SEAL and CMS COBRA plug-ins \nand extends the COBRA fram
ework to pass communications between the \nGUI and event threads\, using S
EAL callbacks to navigate through the \nmetadata and event data interactiv
ely in a distributed environment.\n\nWe give examples of current applicati
ons based on this framework\, \nincluding CMS test-beams\, geometry descri
ption debugging\, GEANT4 \nsimulation\, event reconstruction\, and the ver
ification of \nreconstruction and higher level trigger algorithms.\n\nhttp
s://indico.cern.ch/event/0/contributions/1294406/
LOCATION:Interlaken\, Switzerland Brunig
URL:https://indico.cern.ch/event/0/contributions/1294406/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Resource Predictors in HEP Applications
DTSTART;VALUE=DATE-TIME:20040930T120000Z
DTEND;VALUE=DATE-TIME:20040930T122000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294411@indico.cern.ch
DESCRIPTION:The ATLAS experiment uses a tiered data Grid architecture that
enables\npossibly overlapping subsets\, or replicas\, of the original set
to be\nlocated across the ATLAS collaboration. The full set of experiment
\ndata is located at a single Tier 0 site\, and then subsets of the data\n
are located at national Tier 1 sites\, smaller subsets at smaller\nregiona
l Tier 2 sites\, and so on. In order to understand the data\nneeds\,
in terms of access\, replication policy\, and storage\ncapacity\, we need
good estimations of resource needs for data\nmanipulation. Specifically\,
we envision a time when a user will want\nto determine which is more exped
ient\, downloading a replica from a\nsite or recreating it from scratch.\n
\nThis paper presents our technique to predict the behavior of ATLAS\nappl
ications\, and then to combine this information with Internet link\nbandwi
dth estimation to improve resource usage in the ATLAS Grid\nenvironment. W
e studied the parameters that affect the execution time\nperformance of ev
ent generation\, detector simulation\, and event\nreconstruction. Our resu
lts show that we can achieve predictions\nwithin 10-40% of the execution t
ime (depending on the application)\,\nbetter than many other pragmatic pre
diction techniques. We implemented\na software package to provide data tra
nsfer bandwidth estimation and\nexecution time prediction that can be used
with the Chimera software\nto aid in managing application execution and t
o improve resource usage\nfor ATLAS.\n\nhttps://indico.cern.ch/event/0/con
tributions/1294411/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294411/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Installing and Operating a Grid Infrastructure at DESY
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294412@indico.cern.ch
DESCRIPTION:Speakers: A. Gellrich (DESY)\nDESY is one of the world-wide le
ading centers for research with particle\naccelerators and a center for re
search with synchrotron light.\nThe hadron-electron collider HERA houses f
our experiments which are taking \ndata and will be operated until 2006 at lea
st.\n\nThe computer center manages a data volume of order 1 PB and is the ho
me\nfor around 1000 CPUs.\n\nIn 2003 DESY started to set up a Grid infrastru
cture on site.\nMonte Carlo production is the prime HEP applicat
ion candidate for the Grid\nat DESY. The experiments have started major te
sts.\n\nA first Grid Testbed was based on EDG 1.4.\nSome effort was taken
to install the binary distribution of the middleware\non SuSE based Linux
systems at DESY.\nWith the first fixed LCG-2 release in spring 2004\, the
Grid Testbed2 was\ninstalled\, which serves as the basis for all further D
ESY activities.\n\nThe contribution to CHEP2004 will start by briefly summ
arizing the status of the\nGrid activities at DESY in the context of EGEE
and D-GRID\, in which DESY\ntakes a leading role.\nIn the following\, we w
ill discuss the integration of Grid components in\nthe infrastructure of t
he DESY computer center.\nThis includes technical aspects of the operating
system\, such as\nSuSE versus RedHat Linux\, the interaction with the mas
s storage system\, and\nthe management of Virtual Organizations.\nWe will
finish with discussing installation and operation experiences\nof Grid mid
dleware at DESY\, also having in mind HEP and future synchrotron\nlight ex
periments in the X-FEL era.\n\nhttps://indico.cern.ch/event/0/contribution
s/1294412/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294412/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Portable Gathering System for Monitoring and Online Calibration at
Atlas
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294413@indico.cern.ch
DESCRIPTION:Speakers: P. Conde MUINO (CERN)\nDuring the runtime of any exp
eriment\, a central monitoring system that\ndetects problems as soon as th
ey appear has an essential role. In a large\nexperiment\, like Atlas\, the
online data acquisition system is\ndistributed across the nodes of large
farms\, each of them running several\nprocesses that analyse a fraction of
the events. In this architecture\, it is\nnecessary to have a central pro
cess that collects all the monitoring data from the\ndifferent nodes\, pro
duces full statistics histograms and analyses them. \nIn this paper we pre
sent the design of such a system\, called the "gatherer". It\nallows the coll
ection of any monitoring object\, such as histograms\, from the farm nodes\,\n
from any process in the DAQ\, trigger and reconstruction chain. It also ad
ds up the\nstatistics\, if required\, and processes user defined algorithm
s in order\nto analyse the monitoring data. The results are sent to a cent
ralized display\, that\nshows the information online\, and to the archivin
g system\, triggering alarms in case\nof problems. \n\nThe innovation of o
ur approach is that conceptually it abstracts the\nseveral communication p
rotocols underneath\, being able to talk with different\nprocesses using d
ifferent protocols at the same time and\, therefore\, providing\nmaximum
flexibility. The software is easily adaptable to any trigger-DAQ system. \
n\nThe first prototype of the gathering system has been implemented for At
las and will\nbe running during this year's combined test beam. \nAn evalu
ation of this first prototype will also be presented.\n\nhttps://indico.ce
rn.ch/event/0/contributions/1294413/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294413/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Binary Cascade
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294414@indico.cern.ch
DESCRIPTION:Speakers: G. Folger (CERN)\nGeant4 is a toolkit for the simula
tion of the passage of particles \nthrough matter. Amongst its application
s are hadronic calorimeters \nof LHC detectors and simulation of radiation
environments. For these \ntypes of simulation\, a good description of sec
ondaries generated by \ninelastic interactions of primary nucleons and pio
ns is particularly \nimportant. \n\nThe Geant4 Binary Cascade is a hybrid
between a classical intra-\nnuclear cascade and a QMD model for the simula
tion of inelastic \nscattering of pions\, protons and neutrons\, and light
ions \nof intermediate energies off nuclei. The nucleus is modeled by \ni
ndividual nucleons bound in the nuclear potential. Binary \ncollisions of pro
jectiles or projectile constituents and secondaries \nwith single nucle
ons\, resonance production\, and decay are simulated \naccording to measur
ed\, parametrised or calculated cross sections. \nPauli's exclusion princi
ple\, i.e. blocking of interactions due to \nFermi statistics\, reduces th
e free cross section to an effective \nintra-nuclear cross section. Second
ary particles are allowed to \nfurther interact with remaining nucleons. \
n\nWe will describe the modeling\, and give an overview of the \ncomponent
s of the model\, their object oriented design\, and \nimplementation.\n\nh
ttps://indico.cern.ch/event/0/contributions/1294414/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294414/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The GANGA user interface for physics analysis on distrib
uted resources
DTSTART;VALUE=DATE-TIME:20040930T143000Z
DTEND;VALUE=DATE-TIME:20040930T145000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294448@indico.cern.ch
DESCRIPTION:Any physicist who will analyse data from the LHC experiments w
ill have to deal with\ndata and computing resources which are distributed
across multiple locations and with\ndifferent access methods. GANGA helps
the end user by tying in specifically to the\nsolutions for a given experi
ment ranging from specification of data to retrieval and\npost-processing
of produced output. For LHCb and ATLAS the main goal is to assist in\nrunn
ing jobs based on the Gaudi/Athena C++ framework. GANGA is written in Pyth
on and\npresents the user with a single GUI rather than a set of different app
lications. It\ninteracts with external resources like experiment bookkeeping da
tabases\, job\nconfiguration\, and Grid submission systems through pluggable mo
dules. The user is\npresented upon start-up with a list of templates for com
mon analysis tasks\, and GANGA\npersists information about ongoing tasks betw
een invocations. GANGA can also be used\nthrough a command line interface th
at has a tight connection to the GUI to ease the\ntransition from one to the o
ther. Examples will be presented that demonstrate the\nintegration into the di
stributed analysis systems of the LHCb and AT
LAS experiments\nas used during their 2004 data challenges.\n\nhttps://ind
ico.cern.ch/event/0/contributions/1294448/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294448/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Global Grid User Support for LCG
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294421@indico.cern.ch
DESCRIPTION:Speakers: T. ANTONI (GGUS)\nFor very large projects like the L
HC Computing Grid Project (LCG) involving 8\,000 \nscientists from all aro
und the world\, it is an indispensable requirement to have a \nwell organi
zed user support. The Institute for Scientific Computing at the \nForschun
gszentrum Karlsruhe started implementing a Global Grid User Support (GGUS)
\nafter official assignment of the Grid Deployment Board in March 2003. F
or this \npurpose a web portal and a helpdesk application have been develo
ped. As a single \nentry point for all Grid related issues and problems GG
US follows the objectives of \nproviding news\, documentation and status i
nformation about Grid resources. The user \nwill find forms to submit and
track service requests. GGUS collaborates with \ndifferent support teams i
n the Grid environment like the Grid Operations Center and \nthe Experimen
t Specific Support. They can access the helpdesk system via web \ninterfac
e. GGUS stores all the incoming trouble tickets and outgoing solutions in
a \ncentral database and plans to build up a knowledge base where all the
information \ncan be offered in a structured manner. \nAs a prototype GGUS
started operation at the Forschungszentrum Karlsruhe in October \n2003 an
d supported local user groups of the German Tier 1 Computing Center\, call
ed \nGridKa. 4 month later the GGUS system was opened for the LCG communit
y. \nThe GGUS system will be explained and demonstrated. The present statu
s of GGUS \nwithin the LCG environment will be discussed.\n\nhttps://indic
o.cern.ch/event/0/contributions/1294421/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294421/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Concepts and technologies used in contemporary DAQ systems
DTSTART;VALUE=DATE-TIME:20040927T100000Z
DTEND;VALUE=DATE-TIME:20040927T103000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294422@indico.cern.ch
DESCRIPTION:Speakers: M. Purschke (Brookhaven National Laboratory)\nThe co
ncepts and technologies applied in data acquisition systems have changed \
ndramatically over the past 15 years. Generic DAQ components and standards su
ch as \nCAMAC and VME have largely been replaced by dedicated FPGA and ASIC bo
ards\, and \ndedicated real-time operating systems like OS9 or VxWo
rks have given way to Linux-\nbased trigger processor and event building f
arms. We have also seen a shift from \nstandard or proprietary bus systems
used in event building to GigaBit networks and \ncommodity components\, s
uch as PCs. With the advances in processing power\, network \nthroughput\, an
d storage technologies\, today's data rates in large experiments \nrouti
nely reach hundreds of MegaBytes/s.\n\nWe will present examples of contemp
orary DAQ systems from different experiments\, try \nto identify or catego
rize new approaches\, and will compare the performance and \nthroughput of
existing DAQ systems with the projected data rates of the LHC \nexperimen
ts to see how close we have come to accomplish these goals. We will also \
ntry to look beyond the field of High-Energy Physics and see if there are
trends and \ntechnologies out there which are worth keeping an eye on.\n\n
https://indico.cern.ch/event/0/contributions/1294422/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294422/
END:VEVENT
BEGIN:VEVENT
SUMMARY:BaBar computing - From collisions to physics results
DTSTART;VALUE=DATE-TIME:20040927T093000Z
DTEND;VALUE=DATE-TIME:20040927T100000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294423@indico.cern.ch
DESCRIPTION:Speakers: P. ELMER (Princeton University)\nThe BaBar experimen
t at SLAC studies B-physics at the Upsilon(4S) resonance using \nthe high-
luminosity e+e- collider PEP-II at the Stanford Linear Accelerator Center
\n(SLAC). Taking\, processing and analyzing the very large data samples is
a \nsignificant computing challenge.\n\n This presentation will describe
the entire BaBar computing chain and illustrate \nthe solutions chosen as
well as their evolution with the ever higher luminosity \nbeing delivered
by PEP-II. This will include data acquisition and software \ntriggering i
n a high availability\, low-deadtime online environment\, a prompt\, \naut
omated calibration pass through the data at SLAC and then the full reconstruc
tion of \nthe data that takes place at INFN-Padova within 24 hours. Monte
Carlo production \ntakes place in a highly automated fashion in 25+ sites.
The resulting real and \nsimulated data is distributed and made available
at SLAC and other computing centers.\n\n For analysis a much more sophis
ticated skimming pass has been introduced in the \npast year\, along with
a reworked eventstore. This allows 120 highly customized \nanalysis-specif
ic skims to be produced for direct use by the analysis groups. This \nskim da
ta format is the same eventstore data as that produced directly by the data \nan
d Monte Carlo productions and can be handled and distributed in t
he same way.\n\n The total data volume in BaBar is about 1.5PB.\n\nhttps:
//indico.cern.ch/event/0/contributions/1294423/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294423/
END:VEVENT
BEGIN:VEVENT
SUMMARY:ATLAS Data Challenge Production on Grid3
DTSTART;VALUE=DATE-TIME:20040929T145000Z
DTEND;VALUE=DATE-TIME:20040929T151000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294424@indico.cern.ch
DESCRIPTION:Speakers: M. Mambelli (UNIVERSITY OF CHICAGO)\nWe describe the
design and operational experience of the ATLAS production system as \nimp
lemented for execution on Grid3 resources. The execution environment cons
isted \nof a number of grid-based tools: Pacman for installation of VDT-ba
sed Grid3 services \nand ATLAS software releases\, the Capone execution se
rvice built from the \nChimera/Pegasus virtual data system for directed ac
yclic graph (DAG) generation\, \nDAGMan/Condor-G for job submission and
management \, and the Windmill production \nsupervisor which provides the
messaging system for distributing production tasks to \nCapone. Produced
datasets were registered into a distributed replica location \nservice (Gl
obus RLS) that was integrated with the Don Quixote proxy service for \nint
eroperability with other Grids used by ATLAS. We discuss performance\, \ns
calability\, and fault handling during the first phase of ATLAS Data Chall
enge 2.\n\nhttps://indico.cern.ch/event/0/contributions/1294424/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294424/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Housing Metadata for the Common Physicist Using a Relational Datab
ase
DTSTART;VALUE=DATE-TIME:20040929T143000Z
DTEND;VALUE=DATE-TIME:20040929T145000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294425@indico.cern.ch
DESCRIPTION:SAM was developed as a data handling system for Run II at Ferm
ilab. SAM is a \ncollection of services\, each described by metadata. The
metadata are modeled on a \nrelational database\, and implemented in ORACL
E. SAM\, originally deployed in \nproduction for the D0 Run II experiment\
, has now been also deployed at CDF and is \nbeing commissioned at MINOS.
This illustrates that the metadata decomposition of its \nservices has a b
roader applicability than just one experiment. A joint working group \non
metadata with representatives from ATLAS\, BaBar\, CDF\, CMS\, D0\, and LH
Cb in \ncooperation with EGEE has examined this metadata decomposition in th
e light of \ngeneral HEP user requirements.\nGreater understanding of th
e required services of a performant data handling system \nhas emerged fro
m Run II experience. This experience is being merged with the \nunderstand
ing being developed in the course of LHC experience with data challenges \
nand use case discussions. We describe the SAM schema and the commonalit
ies of \nfunction and service support between this schema and proposals fo
r the LHC \nexperiments. We describe the support structure required for S
AM schema updates\, the \nuse of development\, integration\, and productio
n instances. We are also looking at \nthe LHC proposals for the evolution
of schema using keyword-value pairs that are \nthen transformed into a nor
malized\, performant database schema.\n\nhttps://indico.cern.ch/event/0/co
ntributions/1294425/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294425/
END:VEVENT
BEGIN:VEVENT
SUMMARY:StoRM: grid middleware for disk resource management
DTSTART;VALUE=DATE-TIME:20040929T155000Z
DTEND;VALUE=DATE-TIME:20040929T161000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294433@indico.cern.ch
DESCRIPTION:Speakers: L. Magnoni (INFN-CNAF)\nWithin a Grid the possibilit
y of managing storage space is fundamental\, in\nparticular\, before and d
uring application execution. On the other hand\, the\nincreasing availabil
ity of highly performant computing resources raises the need for\nfast and
efficient I/O operations and drives the development of parallel distribut
ed\nfile systems able to satisfy these needs granting access to distribute
d storage.\nThe demand of POSIX compliant access to storage and the need t
o have a uniform\ninterface for both Grid integrated and pure vanilla appl
ications stimulate developers\nto investigate the possibility to integrate
already existing filesystems into a Grid\ninfrastructure\, allowing users
to take advantage of storage resources without being\nforced to change th
eir applications.\nThis paper describes the design and implementation of S
toRM\, a storage resource\nmanager (SRM) for disk only. Through StoRM an a
pplication can reserve and manage\nspace on disk storage systems. It can t
hen access the space either in a Grid\nenvironment or locally in a transpa
rent way via classic POSIX calls.\nThe StoRM architecture is based on a pl
uggable model in order to easily add new\nfunctionalities. The StoRM imple
mentation currently uses filesystems such as GPFS or\nLUSTRE. The StoRM prototyp
e includes space reservation functionalities that\ncomplement SRM space re
servation to allow applications to directly access/use the\nmanaged space thr
ough POSIX calls. Moreover\, StoRM includes quota management\nand a spac
e guard. StoRM will serve as policy enforcement point (PEP) for the Grid\n
Policy Management System over disk resources. The experimental results obt
ained are\npromising.\n\nhttps://indico.cern.ch/event/0/contributions/1294
433/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294433/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Experience with CORBA communication middleware in the ATLAS DAQ
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294440@indico.cern.ch
DESCRIPTION:Speakers: S. Kolos (CERN)\nAs modern High Energy Physics (HEP)
experiments require more \ndistributed computing power to fulfill their d
emands\, the need for \nefficient distributed online services for contr
ol\, configuration \nand monitoring in such experiments becomes increasing
ly important. \nThis paper describes the experience of using standard Comm
on Object \nRequest Broker Architecture (CORBA) middleware for providing a
high \nperformance and scalable software\, which will be used for the onl
ine \ncontrol\, configuration and monitoring in the ATLAS Data Acquisition
\n(DAQ) system. It also presents the experience\, which was gained from \
nusing several CORBA implementations and replacing one CORBA broker \nwith
another.\nFinally the paper introduces results of the large scale tests\,
which \nhave been done on the cluster of more then 300 nodes\, demonstrat
ing \nthe performance and scalability of the ATLAS DAQ online services. \n
These results show that the CORBA standard is truly appropriate for \nthe
highly efficient online distributed computing in the area of\nmodern HEP e
xperiments.\n\nhttps://indico.cern.ch/event/0/contributions/1294440/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294440/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The ATLAS Computing Model
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294444@indico.cern.ch
DESCRIPTION:Speakers: R. JONES (LANCAS)\nThe ATLAS Computing Model is unde
r continuous active development. \nPrevious exercises focussed on the Tier
-0/Tier-1 interactions\, with \nan emphasis on the resource implications a
nd only a high-level view \nof the data and workflow. The work presented h
ere considerably \nrevises the resource implications\, and attempts to des
cribe in some \ndetail the data and control flow from the High Level Trigg
er farms \nall the way through to the physics user. The model draws from t
he \nexperience of previous and running experiments\, but will be tested i
n \nthe ATLAS Data Challenge 2 (DC2\, described in other abstracts) and in
\nthe ATLAS Combined Testbeam exercises. \nAn important part of the work
is to devise the measurements and \ntests to be run during DC2. DC2 will b
e nearing completion in \nSeptember 2004\, and the first assessments of th
e performance of the \ncomputing model in scaled slice tests will be prese
nted.\n\nhttps://indico.cern.ch/event/0/contributions/1294444/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294444/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Software agents in data and workflow management
DTSTART;VALUE=DATE-TIME:20040929T134000Z
DTEND;VALUE=DATE-TIME:20040929T140000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294446@indico.cern.ch
DESCRIPTION:Speakers: T. Barrass (CMS\, UNIVERSITY OF BRISTOL)\nCMS curren
tly uses a number of tools to transfer data which\, taken together\, form
\nthe basis of a heterogeneous datagrid. The range of tools used\, and the dire
cted\, \nrather than optimised nature of CMS's recent large scale data c
hallenge required the \ncreation of a simple infrastructure that allowed a
range of tools to operate in a \ncomplementary way.\n\nThe system created
comprises a hierarchy of simple processes (named agents) that \npropagate
files through a number of transfer states. File locations and some \nappl
ication metadata were stored in POOL file catalogues\, with LCG LRC or MyS
QL \nbackends. Agents were assigned limited responsibilities\, and were re
stricted to \ncommunicating state in a well-defined\, indirect fashion thr
ough a central transfer \nmanagement database. In this way\, the task of d
istributing data was easily divided \nbetween different groups for impleme
ntation.\n\nThe prototype system was developed rapidly\, and achieved the
required sustained \ntransfer rate of ~10 MBps\, with O(10^6) files distri
buted to 6 sites from CERN. \nExperience with the system during the data c
hallenge raised issues with underlying \ntechnology (MSS write/read\, stab
ility of the LRC\, maintenance of file catalogues\, \nsynchronisation of f
ilespaces ...) which have been successfully identified and \nhandled. The de
velopment of this prototype infrastructure allows us to plan the \nevoluti
on of backbone CMS data distribution from a simple hierarchy to a more \na
utonomous\, scalable model drawing on emerging agent and grid technology.\
n\nhttps://indico.cern.ch/event/0/contributions/1294446/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294446/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Using Tripwire to check cluster system integrity
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294447@indico.cern.ch
DESCRIPTION:Speakers: E. Perez-Calle (CIEMAT)\nThe expansion of large computin
g fabrics/clusters throughout the world\ncreates a need for stricter securit
y. Otherwise any system could suffer damage \nsuch as data loss\, data falsif
ication or misuse.\n\nPerimeter security and intrusion detection systems (IDS
) are the two main \naspects that must be taken into account in order to achi
eve system security. \n\nThe main target of an intrusion detection system is e
arly detection in\nthe previously mentioned cases\, as a way to minimize any d
amage to data\ncontained in the system.\n\nTripwire is one of the most powerfu
l IDSs and is widely used as a \nsecurity tool by the community of network adm
inistrators. Tripwire is\noriented towards monitoring the status of files an
d directories\, being\nable to detect the slightest change suffered by them.\n
\nAt Ciemat\, Tripwire has been used to monitor our local clusters\, involved
\nin GRID projects such as the implementation of LCG prototypes\, to\nguarant
ee the integrity of data generated and stored there. It is\nalso used to moni
tor any modification of operating system files and\nany other scientific co
re software.\n\nhttps://indico
.cern.ch/event/0/contributions/1294447/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294447/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Experience with the Unified Process
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294453@indico.cern.ch
DESCRIPTION:Speakers: M.G. Pia (INFN GENOVA)\nThe adoption of a rigorous s
oftware process is well known to represent a key factor \nfor the quality
of the software product and the most effective \nusage of the human resour
ces available to a software project.\nThe Unified Process\, in particular
its commercial packaging known as the RUP \n(Rational Unified Process) has
been one of the most widely used \nsoftware process models in the softwar
e industry for a number of years. \nWe present the application of the Unif
ied Process and of the RUP to a variety of \nsoftware projects in the High
Energy Physics environment. We \nillustrate how the UP/RUP provide a flex
ible process framework\, that can be tailored \nto the different needs of
individual software projects. We \ndescribe the experience with different
approaches (top-down and bottom-up) to the \nimplementation of the process
in software organizations.\nWe document a critical analysis of the effect
s of the adoption of the UP/RUP\, and \ndiscuss the relative benefits of
the public (UP) and commercial \n(RUP) versions of the process.\nFinally\,
we discuss the curious results of the effects of applying the RUP to a \n
software development environment that is not aware of adopting it.\n\nhttp
s://indico.cern.ch/event/0/contributions/1294453/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294453/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Precision electromagnetic physics in Geant4: the atomic relaxation
models
DTSTART;VALUE=DATE-TIME:20040927T145000Z
DTEND;VALUE=DATE-TIME:20040927T151000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294454@indico.cern.ch
DESCRIPTION:Speakers: M.G. Pia (INFN GENOVA)\nVarious experimental configu
rations - such as\, for instance\, some \ngaseous detectors - require a hi
gh precision simulation of \nelectromagnetic physics processes\, accountin
g not only for the \nprimary interactions of particles with matter\, but a
lso capable of \ndescribing the secondary effects deriving from the de-exc
itation of \natoms\, where primary collisions may have created vacancies.\
nThe Geant4 Simulation Toolkit encompasses a set of models to handle \nthe
atomic relaxation induced by the photoelectric effect\, Compton \nscatter
ing and ionization\, with the production of X-ray fluorescence \nand of Au
ger electrons. \nWe describe the physics models implemented in Geant4 to h
andle the \natomic relaxation\, the object-oriented design of the software
and \nthe validation of the models with respect to test beam data.\nIn pa
rticular\, we present a novel development of an original model \nfor parti
cle induced X-ray emission\, to be released for the first \ntime in the su
mmer of 2004. \nWe illustrate applications of Geant4 atomic relaxation mod
els for \nphysics reach studies in a real-life experimental context.\n\nhtt
ps://indico.cern.ch/event/0/contributions/1294454/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294454/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The BABAR Analysis Task Manager
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294455@indico.cern.ch
DESCRIPTION:Speakers: Douglas Smith (Stanford Linear Accelerator Center)\n
The new BaBar bookkeeping system comes with tools to directly support\ndat
a analysis tasks. This Task Manager system acts as an interface\nbetween
datasets defined in the bookkeeping system\, which are used as\ninput to a
nalyzes\, and the offline analysis framework. The Task\nManager organizes
the processing of the data by creating specific jobs\nto be either submit
ted to a batch system\, or run in the background on a\nlocal desktop\, or
laptop. The current system has been designed to\nsupport pbs and lsf batc
h systems. Changes to defined datasets due\nproduction is directly suppor
ted by the Task Manager\, where new\ncollections that add to a dataset or
replace other collections are\nautomatically detected\, allowing an analys
is at any time to be\nup-to-date with the latest available data. The outp
ut of tasks\,\nwhether new data collections\, ntuple/hbook files\, or text
files\, can be\nput back into a collections bookkeeping system or stored
in the private\nTask Manager database. Currently MySQL and Oracle relatio
nal databases\nare supported. The BABAR Task Manager has been in use for
data\nproduction since January this year\, and the schema of the working\n
system will be presented.\n\nhttps://indico.cern.ch/event/0/contributions/
1294455/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294455/
END:VEVENT
BEGIN:VEVENT
SUMMARY:LambdaStation: A forwarding and admission control service to inte
rface production network facilities with advanced research network paths
DTSTART;VALUE=DATE-TIME:20040930T145000Z
DTEND;VALUE=DATE-TIME:20040930T151000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294456@indico.cern.ch
DESCRIPTION:Speakers: P. DeMar (FERMILAB)\nAdvanced optical-based networks
have the capacity and capability to meet the \nextremely large data movem
ent requirements of particle physics collaborations. To \ndate\, research
efforts in the advanced network area have been primarily been focused \no
n provisioning\, dynamically configuring\, and monitoring the wide area op
tical \nnetwork infrastructure itself. Application use of these facilitie
s has been largely \nlimited to demonstrations using prototype high perfor
mance computing systems. \nFermilab has initiated a project to enable our
production network facilities to \nexploit these advanced research networ
ks. Our objective is to selectively forward \ndesignated data transfers\,
on a per-flow basis\, between capacious production-use \nstorage systems
on local campus networks\, using a dynamically provisioned alternate \npat
h on a wide area advanced research network. To accomplish this\, it is ne
cessary \nto develop the capability to dynamically reconfigure forwarding
of specific flows \nwithin our local production-use routers\, provide an i
nterface that enables \napplications to utilize the service\, and dynamica
lly implement appropriate access \ncontrol on the alternate network path.
Our project involves developing that \ninfrastructure. We call it Lambda
Station. If one envisions wide area optical \nnetwork paths as high bandw
idth data railways\, then LambdaStation would functionally \nbe the railro
ad terminal that regulates which flows at the local site get directed \non
to the high bandwidth data railways. LambdaStation is in a very early sta
ge of \ndevelopment. Our paper will discuss its design\, early deployment
experiences\, and \nfuture directions for the project.\n\nhttps://indico.
cern.ch/event/0/contributions/1294456/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294456/
END:VEVENT
BEGIN:VEVENT
SUMMARY:SRM AND GFAL TESTING FOR LCG2
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294457@indico.cern.ch
DESCRIPTION:Speakers: E. Slabospitskaya (Institute for High Energy Physics
\,Protvino\,Russia)\nStorage Resource Manager (SRM) and Grid File Access L
ibrary (GFAL) are GRID\nmiddleware components used for transparent access
to Storage Elements. SRM provides a\ncommon interface (WEB service) to bac
kend systems giving dynamic space allocation and\nfile management. GFAL pr
ovides a mechanism whereby application software can access\na file at a si
te without having to know which transport mechanism to use or at which\
nsite it is running.\n Two separate Test Suites have been developed for
testing of SRM interface v 1.1 and\ntesting against the GFAL file system.
Test Suites are written in C and Perl languages.\n SRM test suite: a s
cript in Perl generates files and their replicas. These files are\ncopied
to the local SE and registered (published). Replicas of files are made to
the\nspecified SRM site. All replicas are used by the C-program. The SRM f
unctions\, such\nas get\, put\, pin\, unPin etc. are tested using a progra
m written in C. As SRMs do not\nperform file movement operations\, the C-p
rogram transfers files using\n"globus-url-copy". It then compares the data
files before and after transfer.\n GFAL test suite: as GFAL allows use
rs to access a file in a Storage Element directly\n(read and write) withou
t copying it locally\, a C-program tests the implementation of\nPOSIX I/O
functions such as open/seek/read/write. A Perl script executes almost all\
nUnix based commands: dd\, cat\, cp\, mkdir and so on. Also the Perl scrip
t launches a\nstress test\, creating many small files (~5000)\, nested dir
ectories and huge files. \n The investigation of interactions between the Re
plica Manager\, the SRM and the file\naccess mechanism will help to improve th
e Data Management software.\n\nhttps://indico.cern.ch/event/0/con
tributions/1294457/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294457/
END:VEVENT
BEGIN:VEVENT
SUMMARY:GoToGrid - A Web-Oriented Tool in Support to Sites for LCG Install
ations
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294477@indico.cern.ch
DESCRIPTION:Speakers: A. Retico (CERN)\nThe installation and configuration
of LCG middleware\, as it is currently being done\, \nis complex and deli
cate.\nAn "accurate" configuration of all the services of LCG middlewa
re requires a deep \nknowledge of the inside dynamics and hundreds of para
meters to be dealt with. On the \nother hand\, the number of parameters an
d flags that are strictly needed in order to \nrun a working "default" confi
guration of the middleware is relatively small\, due to \nthe fact t
hat the values to be set mainly deal with environment configuration and \n
with a limited set of possible operation scenarios.\nThis "default" configur
ation appears to be the most suitable for sites joining LCG \nfor the first ti
me. \nThe GoToGrid system aims to support Site Administrators in easily perfo
rming such a \nconfiguration.\nG2G combines the gathering of configuration in
formation\, provided by sites\, with the \ndynamic adapti
ve creation of customized documentation and installation tools.\nBy using
a web interface and being requested only for the relevant configuration \n
information\, site Administrators will be able to design the desired confi
guration of \ntheir own LCG site.\nSite configuration data is collected an
d stored in a well-defined format suitable to \nbe used as the interface to d
ifferent configuration management tools.\n\nhttps://indico.cern.ch/event/
0/contributions/1294477/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294477/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Performance of the ATLAS DAQ DataFlow system
DTSTART;VALUE=DATE-TIME:20040927T161000Z
DTEND;VALUE=DATE-TIME:20040927T163000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294459@indico.cern.ch
DESCRIPTION:Speakers: G. unel (UNIVERSITY OF CALIFORNIA AT IRVINE AND CERN
)\nThe ATLAS Trigger and DAQ system is designed to use the Region of \nInt
erest (RoI) mechanism to reduce the initial Level 1 trigger rate of \n100 k
Hz down to about 3.3 kHz Event Building rate.\nThe DataFlow component of t
he ATLAS TDAQ system is responsible\nfor the reading of the detector speci
fic electronics via 1600 point \nto point readout links\, the collection a
nd provision of RoI to the \nLevel 2 trigger\, the building of events acce
pted by the Level 2 \ntrigger and their subsequent input to the Event Filt
er\nsystem where they are subject to further selection criteria.\n\nTo val
idate the design and implementation of the DAQ DataFlow system\, \na proto
type setup representing 20% of the final system has been put \ntogether at CE
RN. This baseline prototype contains 68 PCs running \nLinux\, which exchange da
ta via a 64-port and a 31-port Gigabit \nEthernet switch for Ev
ent Building and RoI Collection. The\nsystem performance is measured by pl
aying back simulated data through \nthe system and running prototype algori
thms in the Level 2 trigger. In \nparallel a full discrete event model of
the system has been developed \nand tuned to the testbed results as an aid
to studying the system \nperformance at and beyond the size of the protot
ype setup.\n\nMeasurements will be presented on the performance of the pro
totype \nsetup\, showing that the components of the current integrated sy
stem \nimplementation can already sustain their nominal ATLAS \nrequir
ements using existing hardware and Gigabit network technology: \n20 kHz Ro
I Collection rate per readout link\, 3 kHz Event Building\nrate and 70 Mby
te/s throughput per event building node. The use of \nthese results to cal
ibrate the model will also be presented along \nwith the model prediction
s for the performance of the final DAQ \nDataFlow system.\n\nhttps://indic
o.cern.ch/event/0/contributions/1294459/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294459/
END:VEVENT
BEGIN:VEVENT
SUMMARY:ATLAS Detector Description Database Architecture
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294460@indico.cern.ch
DESCRIPTION:In addition to the well-known challenges of computing and data
handling at LHC \nscales\, LHC experiments have also approached the scala
bility limit of manual \nmanagement and control of the steering parameters
("primary numbers") provided to \ntheir software systems. The laborious
task of detector description benefits from \nthe implementation of a scala
ble relational database approach. We have created and \nextensively exerc
ised in the ATLAS production environment a primary numbers database \nutil
izing NOVA relational database technologies. In our report we describe th
e \narchitecture of the relational database deployed for the storage\, man
agement\, and \nuniform treatment of primary numbers in ATLAS detector des
cription. We describe the \nbenefits of the ATLAS software framework (Ath
ena) on-demand data access \narchitecture\, and an automatic system for co
de generation of more than 300 classes \n(about 10% of ATLAS offline code)
for primary numbers access from the Athena \nframework. Integration with
the LHC Interval-of-Validity database infrastructure\, \nmeasures for tig
hter primary numbers database input control\, experience with ATLAS \nComb
ined Testbeam geometry and conditions payload storage using NOVA technolog
ies \nintegrated with the LHC ConditionsDB implementation\, methods for ap
plication-side \nresource pooling\, new user tools for knowledge discovery
\, navigation and browsing\, \nand plans for new primary numbers database
developments\, are also described.\n\nhttps://indico.cern.ch/event/0/contr
ibutions/1294460/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294460/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The ATLAS DAQ system
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294465@indico.cern.ch
DESCRIPTION:Speakers: G. unel (UNIVERSITY OF CALIFORNIA AT IRVINE AND CERN
)\nThe 40 MHz collision rate at the LHC produces ~25 interactions per bunc
h crossing\nwithin the ATLAS detector\, resulting in terabytes of data pe
r second to be handled\nby the detector electronics and the trigger and DA
Q system. A Level 1 trigger system\nbased on custom designed and built ele
ctronics will reduce the event rate to 100 kHz. \n\nThe DAQ system is resp
onsible for the readout of the detector specific electronics\nvia 1600 poi
nt to point links hosted by Readout Subsystems\, the collection and\nprovi
sion of ''Region of Interest data'' to the Level 2 trigger\, the building
of\nevents accepted by the Level 2 trigger and their subsequent input to t
he Event Filter\nsystem where they are subject to further selection criter
ia. Also the DAQ provides\nthe functionality for the configuration\, contr
ol\, information exchange and monitoring\nof the whole ATLAS detector.\n\n
The baseline ATLAS DAQ architecture and its implementation will be introdu
ced. In\nthis implementation\, the configuration\, control\, information
exchange and monitoring\n functionalities are provided with CORBA\; the\nc
ontrol aspects are handled by an expert system based on CLIPS and the data
\nconnection between 150 Readout Subsystems\, up to 500 Level 2 Processing Un
its and to\n80 Event building nodes is done using Gigabit Ethernet network te
chnology.\n\nThe experience from using the DAQ system in a combined test beam en
vironment where\nall ATLAS subdetectors are participating will be presented. Th
e current performance\nof some DAQ components as measured in the laboratory en
vironment will be summarized.\nSome results from the large scale functionality te
sts\, on a system of 300 nodes\,\naimed at unde
rstanding the scalability of the current implementation will also be shown
.\n\nhttps://indico.cern.ch/event/0/contributions/1294465/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294465/
END:VEVENT
BEGIN:VEVENT
SUMMARY:ARDA Project Status Report
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294520@indico.cern.ch
DESCRIPTION:Speakers: The ARDA Team ()\nThe ARDA project was started in Ap
ril 2004 to support\nthe four LHC experiments (ALICE\, ATLAS\, CMS and LHC
b) \nin the implementation of individual\nproduction and analysis environm
ents based on the EGEE middleware.\n\nThe main goal of the project is to a
llow a fast feedback between the \nexperiment and the middleware developme
nt teams via the\nconstruction and the usage of end-to-end prototypes\nall
owing users to perform analyses of the present \ndata sets from recent Monte Ca
rlo productions.\n\nIn this talk the project is presented with hig
hlights of the \nfirst results and lessons learnt so far.\nThe relations o
f the project with similar initiatives within\nand outside the High Energy
Physics community are reviewed \n(notably in the EGEE application identif
ication and support).\n\nhttps://indico.cern.ch/event/0/contributions/1294
520/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294520/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Migrating PHENIX databases from object to relational model
DTSTART;VALUE=DATE-TIME:20040927T143000Z
DTEND;VALUE=DATE-TIME:20040927T145000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294468@indico.cern.ch
DESCRIPTION:Speakers: I. Sourikova (BROOKHAVEN NATIONAL LABORATORY)\nTo be
nefit from substantial advancements in Open Source database\n technology a
nd ease deployment and development concerns with\n Objectivity/DB\, the Ph
enix experiment at RHIC is migrating its principal\n databases from Object
ivity to a relational database management system\n (RDBMS). The challenge
of designing a relational DB schema to store a\n wide variety of calibrat
ion classes was\n solved by using ROOT I/O and storing each calibration ob
ject opaquely as\n a BLOB ( Binary Large OBject ). Calibration metadata is
stored as\n built-in types to allow fast index-based database search. To
avoid a\n database back-end dependency the application\n was made ODBC-com
pliant (Open DataBase Connectivity is a standard\n database interface). An ex
isting\, well-designed calibration DB API\nallowed users to be shielded f
rom the underlying database technology\nchange. Design choices and experie
nce with transferring a large amount\n of Objectivity data into relational
DB will be presented.\n\nhttps://indico.cern.ch/event/0/contributions/129
4468/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294468/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Testbed Management for the ATLAS TDAQ
DTSTART;VALUE=DATE-TIME:20040927T153000Z
DTEND;VALUE=DATE-TIME:20040927T155000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294478@indico.cern.ch
DESCRIPTION:Speakers: M. ZUREK (CERN\, IFJ KRAKOW)\nThe talk presents the ex
perience gathered during the testbed\nadministration (~100 PCs and 15+ switch
es) for the ATLAS Experiment at\nCERN.\n\nIt covers the tec
hniques used to resolve the HW/SW conflicts\, network \nrelated problems\
, automatic installation and configuration of the \ncluster nodes as
well as system/service monitoring in the heterogeneous\ndynamically changi
ng cluster environment.\n\nTechniques range from manual actions to the ful
ly automated procedures\nbased on tools like Kickstart\, SystemImage
r\, Nagios\, MRTG and\nSpectrum. Booting diskless nodes using EtherBoot or PXE
boot is also\ninvestigated as a possible technique for managing Atlas Producti
on\nFarms. \n\nKernel customization techniques (buil
ding\, deploying\, distribution\npolicy) allow users to freely choose pro
ffered kernel flavors without\nsysadmin intervention. At the same time the ad
ministrator retains full\ncontrol over the entire testbed.\n\nThe overall exp
erience has shown that the proper use of the\nopen-source too
ls addresses very well the needs of the ATLAS Trigger\nDAQ community.
This approach may also be interesting for addressing\ncertain aspects of
GRID Farm Management.\n\nhttps://indico.cern.ch/event/0/contributions/1294
478/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294478/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Update On the Status of the FLUKA Monte Carlo Transport Code
DTSTART;VALUE=DATE-TIME:20040927T130000Z
DTEND;VALUE=DATE-TIME:20040927T132000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294469@indico.cern.ch
DESCRIPTION:Speakers: L. Pinsky (UNIVERSITY OF HOUSTON)\nThe FLUKA Monte C
arlo transport code is a well-known simulation tool in High Energy \nPhysi
cs. FLUKA is a dynamic tool in the sense that it is being continually upd
ated \nand improved by the authors. Here we review the progresses achieve
d in the last \nyear on the physics models. From the point of view of hadr
onic physics\, most of the \neffort is still in the field of nucleus--nucl
eus interactions. The currently \navailable version of FLUKA already incl
udes the internal capability to simulate \ninelastic nuclear interactions beg
inning with lab kinetic energies of 100 MeV/A up \nto the highest acce
ssible energies by means of the DPMJET-II.5 event generator to \nhandle th
e interactions for >5 GeV/A and rQMD for energies below that. The new \nd
evelopments concern\, at high energy\, the embedding of the DPMJET-III gen
erator\, \nwhich represents a major change with respect to the DPMJET-II struc
ture. This will \nalso allow a better consistency to be achieved between the nu
cleus-nucleus section and \nthe original FLUKA model for hadron-nucleus collis
ions. Work is also in progress \nto implement a third event gene
rator model based on the Master Boltzmann Equation \napproach\, in order t
o extend the energy capability from 100 MeV/A down to the \nthreshold for
these reactions. In addition to these extended physics capabilities\, \ns
tructural changes to the program's input and scoring capabilities are conti
nually \nbeing upgraded. In particular we want to mention the upgrades in
the geometry \npackages\, now capable of reaching higher levels of abstrac
tion. Work is also \nproceeding to provide direct import into ROOT of the
FLUKA output files for \nanalysis and to deploy a user-friendly GUI input
interface.\n\nhttps://indico.cern.ch/event/0/contributions/1294469/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294469/
END:VEVENT
BEGIN:VEVENT
SUMMARY:On-demand Layer 2 VPN Support for Grid Applications
DTSTART;VALUE=DATE-TIME:20040930T143000Z
DTEND;VALUE=DATE-TIME:20040930T145000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294472@indico.cern.ch
DESCRIPTION:Speakers: E. Ronchieri (INFN CNAF)\nThe problem of finding the
best match between jobs and computing \nresources is critical for an effi
cient workload distribution in \nGrids. Very often jobs are preferably ru
n on the Computing Elements \n(CEs) that can retrieve a copy of the input
files from a local \nStorage Element (SE). This requires that multiple fil
e copies are\ngenerated and managed by a data replication system.\n\nWe pr
opose the use of scheduled on-demand Layer 2 Virtual Private \nNetworks (L
2 VPNs) for an alternative data access model based on the \npossibility to
connect to the same virtual LAN both CEs and SEs from \nremote Grid domai
ns. The L2 VPN members are "close" to each other. In \nthis way a CE can b
e selected by a Resource Broker without requiring \nthe presence of a loca
l file replica. This simplifies the data \nmanagement and allows a more ef
ficient use of the network resources \non the links connecting the Grid to
its main data sources.\n\nIn this paper we detail how L2 VPNs are dynamic
ally provisioned \nthrough the Grid Network Agreement Service. We propose a h
ierarchical \nnetwork resource abstraction\, the Path\, and we show how it ca
n be \nintegrated in the Grid Information Service to perform network
\nresource discovery and matchmaking. We then describe the User\nInterface
through which the Path negotiation terms are specified by \nthe user and
we propose a path management approach that integrates \ndifferent network
technologies\, namely MPLS and the Differentiated \nServices architecture.
The implementation details of a system \nprototype are described together
with some initial experimental \nresults.\n\nhttps://indico.cern.ch/event
/0/contributions/1294472/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294472/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Developments of Mathematical Software Libraries for the LHC experi
ments
DTSTART;VALUE=DATE-TIME:20040930T132000Z
DTEND;VALUE=DATE-TIME:20040930T134000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294482@indico.cern.ch
DESCRIPTION:Speakers: L. Moneta (CERN)\nThe main objective of the MathLib
project is to give expertise and support to the LHC\nexperiments on mathem
atical and statistical computational methods. The aim \nis to provide a co
herent set of mathematical libraries. Users of this set of\nlibraries are
developers of experiment reconstruction and simulation software\, \nof ana
lysis tools frameworks\, such as ROOT\, and physicists performing data ana
lysis.\n \nAfter having performed a detailed evaluation of the existing f
unctionality present in\nGSL\, a general purpose mathematical library\, and i
n more HEP specific libraries such\nas CLHEP\, CERNLIB and ROOT\, the develop
ment of a new object oriented library has been\nstarted. The new library inco
rporates or uses most of the functions and algorithms\nof the alread
y existing libraries. Examples of these functions and algorithms are\nmath
ematical special functions\, linear algebra\, minimization and any other r
equired\nnumerical algorithms. Wrappers to these are written in C++ and integr
ated in a\ncoherent object oriented framework. Interfaces to the Python inter
active environment\nare provided as well. \n\nAn overview of the project acti
vities will be presented\, describing in detail the\ncurrent f
unctionality of the library and its design. Furthermore\, the object orien
ted\nimplementation of Minuit\, a fitting and minimization framework\, wil
l be covered in\nthe presentation.\n\nhttps://indico.cern.ch/event/0/contr
ibutions/1294482/
LOCATION:Interlaken\, Switzerland Brunig 1+2
URL:https://indico.cern.ch/event/0/contributions/1294482/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The WEB interface for the ATLAS/LCG MySQL Conditions Databases and
performance constraints in the visualisation of extensive scientific/te
chnical data
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294474@indico.cern.ch
DESCRIPTION:Speakers: D. Klose (Universidade de Lisboa\, Portugal)\nA comm
on LCG architecture for the Conditions Database for the time\nevolving dat
a enables the possibility to separate the interval-of-\nvalidity (IOV) inf
ormation from the conditions data payload. The two\napproaches can be bene
ficial in different cases and separation\npresents challenges for efficien
t knowledge discovery\, navigation and\ndata visualization. In our paper w
e describe the conditions data\nbrowser - CondDBrowser - a tool deployed i
n ATLAS for scientific\nanalysis and visualization of this data.\n\nA wide
availability and access to the overall distributed conditions \ndata repo
sitory was achieved through a seamless integration of the IOV\nand the payloa
d data\, presenting to the user a unifying web interface that hides\nthe pers
istency storage details. Another user-friendly feature of\nthe tool is a simplifi
ed querying language similar to QBE (Query by\nExample).\n\nOur case study
is based on the web interface developed for the \nATLAS/LCG ConditionsDB.
The interaction with other payload storage\ntechnologies\, external to th
e ConditionsDB\, will also be presented. In\nparticular\, the integration
of the NOVA database technologies.\n\nWe will discuss how the information
is gathered from the ConditionsDB \nand the corresponding extensions neede
d to enable data browsing in the\nexternal repositories\, how it is organi
zed\, and what kind of\noperations (search and visualization) are allowed.
We'll also present\nhow this interface uses the C++ API extending it to a
similar PHP\ninterface\, that can be used to browse data collected using
any of the\nConditionsDB implementations.\n\nPerformance constraints are a
lso presented and will be discussed in \ndetail.\n\nhttps://indico.cern.ch
/event/0/contributions/1294474/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294474/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Addressing the persistency patterns of the time evolving HEP data
in\n the ATLAS/LCG MySQL Conditions Databases
DTSTART;VALUE=DATE-TIME:20040929T153000Z
DTEND;VALUE=DATE-TIME:20040929T155000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294475@indico.cern.ch
DESCRIPTION:Speakers: A. Amorim (FACULTY OF SCIENCES OF THE UNIVERSITY OF LI
SBON)\nThe size and complexity of the present HEP experiments\nrepresent an e
normous effort in the persistency of data. These efforts imply\na tr
emendous investment in the databases field not only for the event data\nbu
t also for data that is needed to qualify this one - the Conditions Data.\
n\nIn the present document we'll describe the strategy for addressing the\
nConditions data problem in the ATLAS experiment\, focusing on the Conditi
onsDB\nMySQL for the ATLAS/LCG project. The need for a persistent engine f
or\nstructured conditions data has motivated the studies for a relational back
end\nthat maps transient structured objects in the relational databas
e persistent\nengine. This paper illustrates the proposal for the storage o
f Conditions data\nin the LCG framework using it both to store only the In
terval Of Validity\n(IOV) and a reference that represents the 'path' to an
external\npersistent storage mechanism\, and to archive the IOV and the d
ata in\nrelational tables mapping the costumizable CondDBTable objects. Th
is allow to\ntake advantages of all the relational features and also to\nd
irectly map between transient objects and tables in the database server.\n
\nThe issue of distributed data storage and partitioning\, is also analyze
d in\nthis paper\, taking into account the different levels of indirection
that are\nprovided by the ConditionsDB MySQL implementation. These featur
es represent a\nvery important built in functionality in terms of scalabil
ity\, data balance\nin a system that aims to be completely distributed and
with a very high\nperformance for hundreds of users.\n\nhttps://indico.ce
rn.ch/event/0/contributions/1294475/
LOCATION:Interlaken\, Switzerland Brunig 1 + 2
URL:https://indico.cern.ch/event/0/contributions/1294475/
END:VEVENT
BEGIN:VEVENT
SUMMARY:New Applications of PAX in Physics Analyses at Hadron Colliders
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294479@indico.cern.ch
DESCRIPTION:Speakers: A. Schmidt (Institut fuer Experimentelle Kernphysik\
, Karlsruhe University\, Germany)\nAt CHEP03 we introduced "Physics Analys
is eXpert" (PAX)\, a C++ toolkit\nfor advanced physics analyses in High En
ergy Physics (HEP)\nexperiments. PAX introduces a new level of abstraction
beyond detector\nreconstruction and provides a general\, persistent conta
iner model for\nHEP events. Physics objects like four-vectors\, vertices an
d collisions\ncan easily be stored\, accessed and manipulated. Bookkeepin
g of\nrelations between these objects (like decay trees\, vertex and\ncoll
ision separation\, including deep copies etc.) is fully provided\nby a "re
lation manager". Event container and associated objects\nrepresent a unifo
rm interface for algorithms and facilitate the\nparallel development and e
valuation of different physics\ninterpretations of individual events. So-c
alled "analysis factories"\,\nwhich actively identify and distinguish diff
erent physics processes\,\ncan easily be constructed with the PAX toolkit.
\n\nPAX has been officially released to the experiments CDF (Tevatron) an
d\nCMS (LHC) during the last year. It is being explored by a growing user\
ncommunity and applied in various complex physics analyses. We report\nabo
ut the successful application in studies of ttbar production at\nthe Tevat
ron and Higgs searches in the channel ttH at the LHC.\n\nhttps://indico.ce
rn.ch/event/0/contributions/1294479/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294479/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Conditions Databases: the interfaces between the different ATLAS s
ystems
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294480@indico.cern.ch
DESCRIPTION:Speakers: D. KLOSE (Universidade de Lisboa\, Portugal)\nCondit
ions Databases are beginning to be widely used in the ATLAS\nexperiment. C
onditions data are time-varying data describing the state of the\ndetector
used to reconstruct the event data. This includes all sorts of slowly \ne
volving data like detector alignment\, calibration\, monitoring and data f
rom Detector\nControl System (DCS).\n\nIn this paper we'll present the int
erfaces between the ConditionsDB and\nthe DCS\, Trigger and Data Acquisiti
on (TDAQ) and the offline control framework (Athena).\n\nIn the DCS case\, a PV
SS API Manager was developed based on the C++ interface for the\nCondition
sDB. The Manager links to a selection of datapoints and stores any value\n
change in the ConditionsDB. The structure associated to each datapoint is
mapped to a\ntable that reflects this structure and is stored in the datab
ase.\n\nThe ConditionsDB Interface to the TDAQ (CDI) is a service provided
by\nthe Online Software that acts as an intermediary between TDAQ\nproduc
ers and consumers of conditions data. CDI provides the pathway\nto the Con
ditionsDB information regarding the present or past condition of\nthe dete
ctor and trigger system as well as all the operational and monitoring\ndat
a. It will provide the link between the Information Service (IS) and the \
nConditionsDB.\n\nConditions database integration into the ATLAS Athena fra
mework is also described\,\nincluding connections to Athena's transient int
erval-of-validity management\,\nconversion services to support conditions
data I/O into Athena transient stores\, and\nmechanisms by which the condi
tions database may be used for timestamp-mediated access\n to data stored i
n other technologies such as NOVA and POOL.\n\nhttps://indico.cern.ch/even
t/0/contributions/1294480/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294480/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Adaptive Multi-vertex fitting
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294483@indico.cern.ch
DESCRIPTION:Speakers: W. Waltenberger (HEPHY VIENNA)\nThe state of the art i
n the field of fitting particle tracks to one \nvertex is the Kalman techniq
ue. This least-squares (LS) estimator is \nknown to be ideal in the case o
f perfect assignment of tracks to \nvertices and perfectly known Gaussian
errors. Experimental data and \ndetailed simulations always depart from th
is perfect model. The \nimperfections can be expected to be larger in high
luminosity \nexperiments like at the LHC. In such a context vertex fittin
g \nalgorithms will have to be able to deal with mis-associated tracks \na
nd mis-estimated or non-Gaussian track errors. We present a vertex \nfitti
ng technique that is insensitive to outlying observations and \nmis-estima
ted track errors\, while it retains close-to-optimal \nresults in the case
of perfect data\; it adapts to the data. This is \nrealized by introducin
g weights that are associated to the tracks \nand reflect the compatibilit
y of the tracks with the vertex. \nOutliers are no longer simply discarded
- as is done in most \nof the classical robustification techniques - but
rather \ndownweighted. The algorithm will be presented in detail\, and \nc
omparisons with classical methods will be shown in relevant physics \ncase
s.\n\nhttps://indico.cern.ch/event/0/contributions/1294483/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294483/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The ALICE Experiment Control System
DTSTART;VALUE=DATE-TIME:20040929T145000Z
DTEND;VALUE=DATE-TIME:20040929T151000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294490@indico.cern.ch
DESCRIPTION:Speakers: F. Carena (CERN)\nThe Experiment Control System (ECS
) is the top level of control of the ALICE \nexperiment.\nRunning an exper
iment implies performing a set of activities on the online systems \nthat
control the operation of the detectors. In ALICE\, online systems are the
\nTrigger\, the Detector Control Systems (DCS)\, the Data-Acquisition Syst
em (DAQ) and \nthe High-Level Trigger (HLT).\nThe ECS provides a framework
in which the operator can have a unified view of all \nthe online systems
and perform operations on the experiment seen as a set of \ndetectors.\nA
LICE has adopted a hierarchical -yet loose- architecture\, in which the EC
S is a \nlayer sitting above the online systems\, still preserving their a
utonomy to operate \nindependently. The interface between the ECS and the
online systems applies a \npowerful paradigm based on inter-communicating
objects. The behavioural aspects of \nthe ECS are described using a finite
-state machine model.\nThe ALICE experiment must be able to run either as
a whole (during the physics \nproduction) or as a set of independent detec
tors (for installation and \ncommissioning). The ECS provides all the feat
ures necessary to split the experiment \ninto partitions\, containing one
or more detectors\, which can be operated \nindependently and concurrently
. \nThis paper will present the architecture of the ALICE ECS\, its curren
t status and \nthe practical experience acquired at the test beams.\n\nhtt
ps://indico.cern.ch/event/0/contributions/1294490/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294490/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Performance of an operating High Energy Physics Data grid\, D0SAR-
grid
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294484@indico.cern.ch
DESCRIPTION:Speakers: B. Quinn (The University of Mississippi)\nThe D0 exp
eriment at Fermilab's Tevatron will record several petabytes of data over
\nthe next five years in pursuing the goals of understanding nature and se
arching for \nthe origin of mass. Computing resources required to analyze
these data far exceed \nthe capabilities of any one institution. Moreove
r\, the widely scattered \ngeographical distribution of collaborators pose
s further serious difficulties for \noptimal use of human and computing re
sources. These difficulties will be \nexacerbated in future high energy p
hysics experiments\, like those at the LHC. The \ncomputing grid has long
been recognized as a solution to these problems. This \ntechnology is bei
ng made a more immediate reality to end users by developing a \nfully real
ized grid in the D0 Southern Analysis Region (D0SAR). \nD0SAR consists of
eleven universities in the Southern US\, Brazil\, Mexico and India. \nThe
centerpiece of D0SAR is a data and resource hub\, a Regional Analysis Cent
er \n(RAC). Each D0SAR member institution constructs an Institutional Ana
lysis Center \n(IAC)\, which acts as a gateway to the grid for users withi
n that institution. \nThese IACs combine dedicated rack-mounted servers
and personal desktop computers \ninto a local physics analysis cluster.
D0SAR has been working on establishing an \noperational regional grid\, D0
SAR-Grid\, using all available resources within it and \na home-grown loca
l task manager\, McFarm.\nIn this talk\, we will describe the architecture
of the D0SAR-Grid implementation\, \nthe use and functionality of the gri
d\, and the experiences of operating the grid \nfor simulation\, reprocess
ing and analysis of data from a currently running HEP \nexperiment.\n\nhtt
ps://indico.cern.ch/event/0/contributions/1294484/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294484/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Automated Tests in NICOS Nightly Control System
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294486@indico.cern.ch
DESCRIPTION:Speakers: A. Undrus (BROOKHAVEN NATIONAL LABORATORY\, USA)\nSo
ftware testing is a difficult\, time-consuming process that\nrequires tech
nical sophistication and proper planning. This\nis especially true for the
large-scale software projects of\nHigh Energy Physics where constant modi
fications and\nenhancements are typical. The automated nightly testing is\
nan important component of NICOS\, the NIghtly COntrol System\,\nwhich manages
the multi-platform nightly builds based on the\nrecent versions of softwa
re packages. It facilitates collective\nwork in a collaborative environmen
t and provides four benefits\nto developers: repeatability (tests can be exe
cuted more than\nonce)\, accumulation (results are stored and reflected on
\nNICOS web pages)\, feedback (automatic e-mail notifications\nabout test
failures)\, user-friendly setup (configuration\nparameters can be encrypte
d in the body of test scripts).\nThe modular structure of NICOS allows plu
gging in other\nvalidation and organization tools\, such as QMTest\nand Cp
pUNIT. NICOS classifies tests according to their\ngranularity level and pu
rpose. The low level structural tests\nreveal compilation problems\, incon
sistencies in package\nconfiguration\, such as circular dependencies\, and
simple\nisolated bugs. The results for these three groups of tests are\np
ublished for each package of the software project. The\nintegrated (or be
havioral) tests find bugs at the level of user\nscenarios\, and NICOS generate
s a special web page with\ntheir results. The NICOS tool is currently us
ed to coordinate\nthe efforts of more than 100 developers for the ATLAS\n
project at CERN and included in the tool library of the LHC\ncomputing pro
ject.\n\nhttps://indico.cern.ch/event/0/contributions/1294486/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294486/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Grid Security
DTSTART;VALUE=DATE-TIME:20040928T093000Z
DTEND;VALUE=DATE-TIME:20040928T100000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294498@indico.cern.ch
DESCRIPTION:Speakers: David Kelsey (RAL)\nThe aim of Grid computing is to
enable the easy and open sharing of resources \nbetween large and highly d
istributed communities of scientists and institutes across \nmany independ
ent administrative domains. Convincing site security officers and \ncomput
er centre managers to allow this to happen in view of today's ever-increas
ing \nInternet security problems is a major challenge. Convincing users an
d application \ndevelopers to take security seriously is equally difficult
. This paper will describe \nthe main Grid security issues\, both in terms
of technology and policy\, that have \nbeen tackled over recent years in
LCG and related Grid projects. Achievements to \ndate will be described an
d opportunities for future improvements will be addressed.\n\nhttps://indi
co.cern.ch/event/0/contributions/1294498/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294498/
END:VEVENT
BEGIN:VEVENT
SUMMARY:IGUANA Interactive Graphics Project: Recent Developments
DTSTART;VALUE=DATE-TIME:20040930T130000Z
DTEND;VALUE=DATE-TIME:20040930T132000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294487@indico.cern.ch
DESCRIPTION:Speakers: S. MUZAFFAR (NorthEastern University\, Boston\, USA)
\nThis paper describes recent developments in the IGUANA (Interactive Grap
hics for User\nANAlysis) project. IGUANA is a generic framework and toolki
t\, used by CMS and D0\, to\nbuild a variety of interactive applications s
uch as detector and event visualisation\nand interactive GEANT3 and GEANT4
browsers.\n\nIGUANA is a freely available toolkit based on open-source co
mponents including Qt\,\nOpenInventor (Coin3D) and OpenGL and LCG services
.\n\nNew features we describe since the last CHEP conference include:\n   m
ulti-document architecture\;\n   user interface to Python scripting\;\n   2
D visualisation with auto-generation of slices/projections from 3D data\;
\n   per-object actions such as clipping\, slicing\, lighting or animation
\;\n   correlated actions (e.g. picking) for multiple views\;\n   producti
on of high-quality and compact vector postscript output from any\nOpenGL d
isplay\, with surface shading and invisible surface culling (together\nwit
h the gl2ps project).\n\nWe compare the IG
UANA rendering\, memory performance\, and porting issues for various\nplat
forms including: Linux on x86\, Windows\, and Mac OSX with its native\nQua
rtz-Extreme rendering system.\n\nhttps://indico.cern.ch/event/0/contributi
ons/1294487/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294487/
END:VEVENT
BEGIN:VEVENT
SUMMARY:OptorSim: a Simulation Tool for Scheduling and Replica Optimisatio
n in Data Grids
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294488@indico.cern.ch
DESCRIPTION:Speakers: C. Nicholson (UNIVERSITY OF GLASGOW)\nIn large-scale
Grids\, the replication of files to different sites is an important\ndata
management mechanism which can reduce access latencies and give improved
usage\nof resources such as network bandwidth\, storage and computing powe
r.\nIn the search for an optimal data replication strategy\, the Grid simu
lator OptorSim\nwas developed as part of the European DataGrid project. Si
mulations of various HEP\nGrid scenarios have been undertaken using differ
ent job scheduling and file\nreplication algorithms\, with the experimenta
l emphasis being on physics analysis\nuse-cases. Previously\, the CMS Data
Challenge 2002 testbed and UK GridPP testbed were\namong those simulated\
; recently\, our focus has been on the LCG testbed. A novel\neconomy-based
strategy has been investigated as well as more traditional methods\,\nwit
h the economic models showing distinct advantages in terms of improved res
ource\nusage. \nHere\, an overview of OptorSim's design and implementation
is presented with a\nselection of recent results\, showing its usefulness
as a Grid simulator both in its\ncurrent features and in the ease of exte
nsibility to new scheduling and replication\nalgorithms.\n\nhttps://indico
.cern.ch/event/0/contributions/1294488/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294488/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Test of data transfer over an international network with a large R
TT
DTSTART;VALUE=DATE-TIME:20040930T130000Z
DTEND;VALUE=DATE-TIME:20040930T132000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294489@indico.cern.ch
DESCRIPTION:Speakers: J. Tanaka (ICEPP\, UNIVERSITY OF TOKYO)\nWe have mea
sured the performance of data transfer between CERN\nand our laboratory\,
ICEPP\, at the University of Tokyo in Japan.\nThe ICEPP will be one of the
so-called regional centers for handling\nthe data from the ATLAS experime
nt which will start data taking in 2007.\nPetabytes of data are expected t
o be generated by the experiment\neach year. It is therefore e
ssential to achieve a high throughput of data\ntransfer over the long-dist
ance network connection between CERN and ICEPP.\nA connection with several
gigabits per second is now available between\nthe two sites. The round tr
ip time\, however\, reaches about 300 msec.\nMoreover the connection is no
t dedicated to us.\nDue to the large latency and other traffic on the same
network\,\nit is not easy to fully exploit the available bandwidth.\nWe h
ave measured the performance of the network connection using\ntools such a
s iperf\, bbftp\, and gridftp with various TCP parameters\,\nLinux kernel
versions and so on.\nWe have examined factors limiting the speed and tried
to improve\nthe throughput of the data transfer.\nIn this talk we report
on the results of our measurements and\ninvestigations.\n\nhttps://indico.
cern.ch/event/0/contributions/1294489/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294489/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Building the LCG: from middleware integration to production qualit
y software
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294492@indico.cern.ch
DESCRIPTION:Speakers: L. Poncet (LAL-IN2p3)\nIn the last few years grid so
ftware (middleware) has become available \nfrom various sources. However\,
there are no standards yet which \nallow for an easy integration of diffe
rent services. \nMoreover\, middleware was produced by different projects
with the main \ngoal of developing new functionalities rather than product
ion quality \nsoftware. \nIn the context of the LHC Computing Grid project
(LCG) an integration\,\ntesting and certification activity is ongoing whi
ch aims at producing \na stable coherent set of services. \nHere we report
on the processes employed to produce the LCG middleware \nrelease and rel
ated activities\, including the infrastructures used\, the \nactivities ne
eded to integrate the various components and the \ncertification process.\
nOur certification process consists of a continuous iterative cycle that \
nalso involves feedback from the LCG production system and input from \nth
e software providers.\nThe architecture of the LCG middleware is described
\, including \nadditional components developed by LCG to improve scalabili
ty and \nperformance. \nOther associated activities include packaging for
deployment\, porting \nto different platforms\, debugging and patching of
the software. \nFunctionality and stress tests are performed via a large t
est-bed \ninfrastructure that allows for benchmarking of different configu
rations. \nWe describe also the results of our tests and our experience \n
collected during the building of the LCG infrastructure.\n\nhttps://indico
.cern.ch/event/0/contributions/1294492/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294492/
END:VEVENT
BEGIN:VEVENT
SUMMARY:IgProf profiling tool
DTSTART;VALUE=DATE-TIME:20040930T161000Z
DTEND;VALUE=DATE-TIME:20040930T163000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294495@indico.cern.ch
DESCRIPTION:Speakers: G. Eulisse (NORTHEASTERN UNIVERSITY OF BOSTON (MA) U
.S.A.)\nA fundamental part of software development is to detect and analys
e weak spots of the programs to guide \noptimisation efforts. We present a
brief overview and usage experience on some of the most valuable open-\ns
ource tools such as valgrind and oprofile. We describe their main strength
s and weaknesses as experienced \nby the CMS experiment.\n\nAs we have fou
nd that these tools do not satisfy all our needs\, CMS has also developed
a tool of its own called \n"igprof". It complements the other tools\, allo
wing us to profile memory usage\, CPU usage\, memory leaks and \nfile desc
riptor usage of large complex applications such as the CMS reconstruction
and analysis software. It\nrequires no instrumentation and works with mul
ti-threaded programs and with all\nshared libraries\, including dynamicall
y loaded ones.\n\nWe describe this new tool\, its features and output\, an
d our experience with\nit\, including improvements gained in CMS.\n\nhttps
://indico.
cern.ch/event/0/contributions/1294495/
LOCATION:Interlaken\, Switzerland Brunig 1+2
URL:https://indico.cern.ch/event/0/contributions/1294495/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Computing Models and Data Challenges of the LHC experiments
DTSTART;VALUE=DATE-TIME:20040930T073000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294497@indico.cern.ch
DESCRIPTION:Speakers: David Stickland (CERN)\nThe LHC experiments are unde
rtaking various data-challenges in the\nrun-up to completion of their comp
uting models and the submission of\nthe experiment and LHC Computing Gri
d (LCG) Technical Design\nReports (TDRs) in 2005. In this talk we summariz
e the current working\nmodels\, identifying their similarities and\ndiffer
ences. We summarize the results and status of the data challenges\nand ide
ntify critical areas still to be tested.\n
\nhttps://indico.cern.ch/event/0/contributions/1294497/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294497/
END:VEVENT
BEGIN:VEVENT
SUMMARY:ATLAS Distributed Analysis
DTSTART;VALUE=DATE-TIME:20040930T132000Z
DTEND;VALUE=DATE-TIME:20040930T134000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294501@indico.cern.ch
DESCRIPTION:Speakers: D. Adams (BNL)\nThe ATLAS distributed analysis (ADA)
 system is described. The ATLAS\nexperiment has more than 2000 physicists f
rom 150 institutions in\n34 countries. Users\, data and processing are d
istributed over these\nsites. ADA makes use of a collection of high-level
web services\nwhose interfaces are expressed in terms of AJDL (abstract jo
b\ndefinition language) which includes descriptions of datasets\,\ntransfo
rmations and jobs. The high-level services are implemented\nusing generic
parts of these objects while clients and endpoint\napplications additional
ly make use of experiment-specific\nextensions. The key high-level service
is the analysis service\nwhich receives a generic job request and creates
 and runs a\ncorresponding job\, typically as a collection of sub-jobs ea
ch\nhandling a subset of the input dataset. The submitting client is\nable
to monitor the progress of the job including partial results.\nThe system
is capable of running a wide range of applications but\nthe emphasis is o
n event processing\, in particular simulation\,\nreconstruction and analys
is of ATLAS data. Other high-level services\ninclude catalogs and dataset
splitters and mergers. The ATLAS\nproduction system has been used to const
ruct an analysis service\nthat makes production activities available to AT
LAS users. An\nanalysis service with interactive response is provided by D
IAL.\nAnother analysis service based on the EGEE middleware is being\ncons
tructed in the context of the ARDA project. All are accessible\nfrom ROOT
and python command lines and from the user-friendly\ngraphical interface p
rovided by GANGA.\n\nhttps://indico.cern.ch/event/0/contributions/1294501/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294501/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Experience integrating a General Information System API in LCG Job
Management and Monitoring Services
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294502@indico.cern.ch
DESCRIPTION:Speakers: P. Mendez Lorenzo (CERN IT/GD)\nIn a Grid environmen
t\, the access to information on system resources is a necessity \nin orde
r to perform common tasks such as matching job requirements with available
\nresources\, accessing files or presenting monitoring information. Thus
 both\nmiddleware services\, like workload and data management\, and applic
ations\, like\nmonitoring tools\, require an interface to the Grid infor
mation service which \nprovides that data. \nEven though a unique schema f
or the published information is defined\, actual \nimplementations use dif
ferent data models\, and define different access protocols. \nApplications
interacting with the information service must therefore deal with \nsever
al APIs\, and be aware of the underlying technology in order to use the \n
appropriate syntax for their queries or to publish new information. \nWe h
ave produced a new high-level C++ API that accommodates several existing \n
implementations of the information service such as Globus MDS (LDAP based)
\, \nMDS3 (XML based) and R-GMA (SQL based). It allows applications to acc
ess \ninformation in a transparent manner\, loading the needed implementat
ion-specific \nlibrary on demand. Features allowing for the addition and r
emoval of dynamic \ninformation have been included as well. A general quer
y language has been used \nto make the API compatible with future protocol
s. \nIn this paper we describe the design of this API and the results obta
ined \ni
ntegrating this API in the Workload Management system and in the GridIce m
onitoring \nsystem of LCG.\n\nhttps://indico.cern.ch/event/0/contributions
/1294502/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294502/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Implementation and Performance of the High-Level Trigger electron
and photon selection for the ATLAS experiment at the LHC
DTSTART;VALUE=DATE-TIME:20040929T143000Z
DTEND;VALUE=DATE-TIME:20040929T145000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294503@indico.cern.ch
DESCRIPTION:Speakers: Manuel Dias-Gomez (University of Geneva\, Switzerlan
d)\nThe ATLAS experiment at the Large Hadron Collider (LHC) will face the
challenge of \nefficiently selecting interesting candidate events in pp co
llisions at 14 TeV center-\nof-mass energy\, whilst rejecting the enormous
number of background events\, stemming \nfrom an interaction rate of abou
t 10^9 Hz. The Level-1 trigger will reduce the \nincoming rate to around
O(100 kHz). Subsequently\, the High-Level Triggers (HLT)\, \nwhich are com
prised of the second level trigger and the event filter\, will need to \nr
educe this rate further by a factor of O(10^3). The HLT selection is softw
are based \nand will be implemented on commercial CPUs using a common fram
ework\, which is based \non the standard ATLAS object-oriented software ar
chitecture. In this talk an \noverview of the current implementation of th
e selection for electrons and photons in \nthe trigger is given. The perfo
rmance of this implementation has been evaluated \nusing Monte Carlo simul
ations in terms of the efficiency for the signal channels\, \nthe rate exp
ected for the selection\, the data preparation times\, and the algorithm \
nexecution times. Besides the efficiency and rate estimates\, some physics
examples \nwill be discussed\, showing that the triggers are well adapted
for the physics \nprogramme envisaged at LHC. The electron/gamma trigger
software has been also \nintegrated in the ATLAS 2004 combined test-beam\
, to validate the chosen selection \narchitecture in a real on-line enviro
nment.\n\nhttps://indico.cern.ch/event/0/contributions/1294503/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294503/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Experiment Software Installation experience in LCG-2
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294504@indico.cern.ch
DESCRIPTION:Speakers: R. santinelli (CERN/IT/GD)\nThe management of Applic
ation and Experiment Software represents a very\ncommon issue in emerging
grid-aware computing infrastructures.\nWhile the middleware is often insta
lled by system administrators at a site\nvia customized tools that serve a
lso for the centralized management of\nthe entire computing facility\, the
problem of installing\, configuring and\nvalidating Gigabytes of Virtual
Organization (VO) specific software or \nfrequently changing user applicat
ions remains an open issue.\nFollowing the requirements imposed by the exp
eriments\, in the LHC Computing\nGrid (LCG) Experiment Software Managers (
ESM) are designated people\nwith privileges of installing\, removing and
validating software for a \nspecific VO on a per site basis.\nThey can ma
nage univocally identifying tags in the LCG Information\nSystem to announc
e the availability of a specific software version.\nUsers of a VO can then
select\, via the published tag\, sites to run their jobs. \nThe solution
adopted by LCG has mainly served its purpose but it presents many problems
.\nThe requirement imposed by the present solution for the existence of a\
nshared file-system in a computing farm poses performance\,\nreliability a
nd scalability issues for large installations.\nWith this work we present
a more flexible service based on P2P\ntechnology that has been designed to
tackle the limitation of the current system.\nThis service allows the ESM
 to propagate the installation occurring in a given WN to\nthe rest of the f
arm elements.\nWe illustrate the deployment\, the design\, preliminary re
sults obtained and the\nfeedback from the LHC experiments and sites that h
ave adopted it.\n\nhttps://indico.cern.ch/event/0/contributions/1294504/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294504/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The DAQ system for the Fluorescence Detectors of the Pierre Auger
Observatory
DTSTART;VALUE=DATE-TIME:20040927T134000Z
DTEND;VALUE=DATE-TIME:20040927T140000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294509@indico.cern.ch
DESCRIPTION:Speakers: H-J. Mathes (FORSCHUNGSZENTRUM KARLSRUHE\, INSTITUT
FüR KERNPHYSIK)\nS.Argiro`(1)\, A. Kopmann (2)\, O.Martineau (2)\, H.-J.
Mathes (2) \nfor the Pierre Auger Collaboration\n\n(1) INFN\, Sezione Tori
no\n(2) Forschungszentrum Karlsruhe\n\nThe Pierre Auger Observatory curren
tly under construction in Argentina will\ninvestigate extensive air shower
s at energies above 10^18 eV. It\nconsists of a ground array of 1600 Chere
nkov water detectors and 24 \nfluorescence telescopes to discover the natu
re and origin of cosmic rays \nat these ultra-high energies.\n\nThe ground
array is overlooked by 4 different fluorescence buildings which are \nequ
ipped with 6 telescopes each. An independent local data acquisition (DAQ)
is \nrunning in each building to readout 480 channels per telescope. In ad
dition\, a \ncentral DAQ merges data coming from the water detectors and a
ll fluorescence \nbuildings.\n\nThe system architecture follows the object
oriented paradigm and has been\nimplemented using several of the most wid
espread open source tools for \ninterprocess communication\, data storage
and user interfaces.\n\nEach local DAQ is connected with further sub-syste
ms for calibration\,\nfor monitoring of atmospheric parameters and slow co
ntrol. The latter is\nresponsible for general safety functions and the exp
eriment control.\n\nAfter a prototype phase to validate the system concept
\, the Observatory has\nbeen taking data in the final setup since September 2003.
The data taking will \ncontinue during the construction phase and the int
egration of all sub-systems.\n\nWe present the design and the present stat
us of the system currently running \nin two different buildings with a tot
al of 8 telescopes installed.\n\nhttps://indico.cern.ch/event/0/contributi
ons/1294509/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294509/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Practical approaches to Grid workload and resource management in th
e EGEE project
DTSTART;VALUE=DATE-TIME:20040930T132000Z
DTEND;VALUE=DATE-TIME:20040930T134000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294513@indico.cern.ch
DESCRIPTION:Speakers: M. Sgaravatto (INFN Padova)\nResource management and
scheduling of distributed\, data-driven\napplications in a Grid environme
nt are challenging problems. Although\nsignificant results were achieved i
n the past few years\, the\ndevelopment and the proper deployment of gener
ic\, reliable\, standard\ncomponents present issues that still need to be
completely\nsolved. Domains of interest include workload management\, resou
rce\ndiscovery\, resource matchmaking and brokering\, accounting\,\nauthor
ization policies\, resource access\, reliability and\ndependability. The e
volution towards a service-oriented architecture\,\nsupported by emerging
standards\, is another activity that will demand\nattention.\nAll these is
sues are being tackled within the EU-funded EGEE project\n(Enabling Grids
for E-science in Europe)\, whose primary goals are the\nprovision of robus
t middleware components and the creation of a\nreliable and dependable Gri
d infrastructure to support e-Science\napplications.\nIn this paper we pre
sent the plans and the preliminary activities\naiming at providing adequat
e workload and resource management\ncomponents\, suitable to be deployed i
n a production-quality Grid.\n\nhttps://indico.cern.ch/event/0/contributio
ns/1294513/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294513/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The ZEUS Global Tracking Trigger
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294514@indico.cern.ch
DESCRIPTION:Speakers: Dimitri gladkov ()\nThe design\, implementation and
performance of the ZEUS Global \nTracking Trigger (GTT) Forward Algorithm
is described. The ZEUS GTT \nForward Algorithm integrates track informati
on from the ZEUS Micro \nVertex Detector (MVD) and forward Straw Tube Tra
cker (STT) to \nprovide a picture of the event topology in the forward di
rection \n($1.5\n\nhttps://indico.cern.ch/event/0/contributions/1294514/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294514/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Persistence for Analysis Objects
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294518@indico.cern.ch
DESCRIPTION:Speakers: J. Hrivnac (LAL)\nThere are two kinds of analysis ob
jects with respect to their \npersistent requirements:\n* Objects\, which
need direct access to the persistency service only \nfor their IO operatio
ns (read/write/update/...): histograms\, clouds\, \nprofiles\, ...\nAll Pe
rsistency requirements for those objects can be implemented\nby standard T
ransient-Persistent Separation techniques like JDO\, \nSerialization\, \ne
tc.\n* Objects\, which need direct access to the persistency service for \
nsome of their standard operations: NTuples\, Tags\,.... It is not\nfeasib
le to completely separate Transient and Persistent form of those\nobjects.
Their Persistency should be tightly interfaced with their\ntransient form
. One possibility is to directly implement a persistent\nextension of thos
e objects for each persistency mechanism.\nThe SQLTuple has been developed
to deliver efficient SQL persistency \nfor AIDA standard NTuple objects.
The implementation is based on\nFreeHEP AIDA implementation and is complet
ely inter-operable with\nother FreeHEP components as well as with other AI
DA implementations.\nSQLTuple dependency on SQL database implementation is
handled at\nrun-time by textual configuration. In principle all mainstrea
m SQL\ndatabases are supported. The default mapping layer can be \ncustomi
zed so that\, for example\, LCG Pool Tag databases can be\ntransparently s
upported. This customization is used to implement\nhigher level management
utilities for Pool Tag databases - package\nColMan. ColMan utilities are
accessible also from the C++ environment\nand via standard Web Service.\nT
he presentation will cover both the SQLTuple and ColMan packages and \ntheir
inter-operability with other tools. Performance assessment of\nvarious av
ailable technologies will be covered as well.\n\nhttps://indico.cern.ch/ev
ent/0/contributions/1294518/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294518/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Aspects
DTSTART;VALUE=DATE-TIME:20040930T122000Z
DTEND;VALUE=DATE-TIME:20040930T124000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294519@indico.cern.ch
DESCRIPTION:Speakers: J. Hrivnac (LAL)\nAspect-Oriented Programming (AOP)
is a new paradigm promising to allow \nfurther modularization of large sof
tware frameworks\, like those developed\nin HEP. Such frameworks often man
ifest several orthogonal axes of contracts\n(Crosscutting Concerns - CC) l
eading to complex multidepenencies. Currently\nused programing languages a
nd development methodologies don't allow to easily\nidentify and encapsula
te such CC. AOP offers ways to solve CC problems by\nidentifying places wh
ere they appear (Joint Points) and specifying actions to\nbe applied at th
ose places (Advices). While Aspects can be added in principle\nto any pro
gramming paradigm\, they are mostly used in Object-Oriented\nenvironments.
Thanks to wide acceptance and rich object model\, most\nAspect-Oriented t
oolkits have been developed for Java language. Probably the\nmost used AOP
language is AspectJ.\nThe presentation will demonstrate using AspectJ lan
guage to solve several common\nHEP Crosscutting Concerns from simple cases
(like logging or debugging) to\ncomplex ones (like object persistency\, d
ata analysis or graphics).\n\nhttps://indico.cern.ch/event/0/contributions
/1294519/
LOCATION:Interlaken\, Switzerland Brunig 1+2
URL:https://indico.cern.ch/event/0/contributions/1294519/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A GRID approach for Gravitational Waves Signal Analysis with a Mul
ti-Standard Farm Prototype
DTSTART;VALUE=DATE-TIME:20040927T130000Z
DTEND;VALUE=DATE-TIME:20040927T132000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294521@indico.cern.ch
DESCRIPTION:Speakers: S. Pardi (DIPARTIMENTO DI MATEMATICA ED APPLICAZIONI
"R.CACCIOPPOLI")\nThe standard procedures for the extraction of gravitati
onal wave signals coming \nfrom coalescing binaries provided by the output
signal of an interferometric \nantenna may require computing powers gener
ally not available in a single computing \ncentre or laboratory. A way to
overcome this problem consists in using the \ncomputing power available in
different places as a single geographically \ndistributed computing syste
m. This solution is now effective within the GRID \nenvironment\, that all
ows distributing the required computing effort for specific \ndata analysi
s procedure among different sites according to the available computing \np
ower. \nWithin this environment we developed a system prototype with appli
cation software \nfor the experimental tests of a geographically distribut
ed computing system for the \nanalysis of gravitational wave signals from c
oalescing binary systems. The facility \nhas been developed as a general p
urpose system that uses only standard hardware and \nsoftware components\,
so that it can be easily upgraded and configured. In fact\, it \ncan be p
artially or totally configured as a GRID farm\, as a MOSIX farm or as an MPI\n
farm. All these three configurations may coexist since the facility can be
split \ninto configuration subsets. A full description of this farm is re
ported\, together \nwith the results of the performance tests and planned
developments.\n\nhttps://indico.cern.ch/event/0/contributions/1294521/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294521/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The ARDA Prototypes
DTSTART;VALUE=DATE-TIME:20040930T124000Z
DTEND;VALUE=DATE-TIME:20040930T130000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294522@indico.cern.ch
DESCRIPTION:Speakers: Julia ANDREEVA (CERN)\nThe ARDA project was started
in April 2004 to support\nthe four LHC experiments (ALICE\, ATLAS\, CMS an
d LHCb)\nin the implementation of individual\nproduction and analysis envi
ronments based on the EGEE middleware.\n\nThe main goal of the project is
to allow a fast feedback between the \nexperiment and the middleware devel
opment teams via the\nconstruction and the usage of end-to-end prototypes\
nallowing users to perform analyses out of the present \ndata sets from re
cent Monte Carlo productions.\n\nWe present the status of the integration o
f the EGEE\nprototype Grid middleware into the analysis environment of the
\nfour LHC experiments. First an overview is given on the individual\narch
itectures of the four experiments' prototypes with a strong focus\non how
the EGEE middleware is incorporated into the framework. We\noutline common
points in the usage of the middleware and try to point\nout differences i
n the decisions taken by the experiments on the\ninclusion of different pa
rts of the EGEE software. We will conclude\nby presenting the first feedba
ck from the usage of these analysis\nenvironments.\n\nhttps://indico.cern.
ch/event/0/contributions/1294522/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294522/
END:VEVENT
BEGIN:VEVENT
SUMMARY:HEPBook - A Personal Collaborative HEP notebook
DTSTART;VALUE=DATE-TIME:20040930T153000Z
DTEND;VALUE=DATE-TIME:20040930T155000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294523@indico.cern.ch
DESCRIPTION:Speakers: G. Roediger (CORPORATE COMPUTER SERVICES INC. - F
ERMILAB)\nA High Energy Physics experiment has between 200 and 1000 collab
orating physicists \nfrom nations spanning the entire globe. Each collabor
ator brings a unique \ncombination of interests\, and each has to search t
hrough the same huge heap of \nmessages\, research results\, and other com
munication to find what is useful.\n\nToo much scientific information is a
s useless as too little. It is time consuming\, \ntedious\, and difficult
to sift and search for the pertinent bits. Often\, the exact \nwords to se
arch for are unknown\, or the information is badly organized\, and the \np
ertinent bits are not found. The search is abandoned\, the time is lost\,
and \nvaluable information is never communicated as it was intended.\n\nMu
ch of a collaboration's information is in the individual physicists' paper
\nlogbooks. The physicists record important and pertinent information for
their \nresearch. They save the log books to refer to them later\, copy page
s\, and distribute \nthem to their collaborators who share their interest
and research. \n\nElectronic Logbooks are now used in the control room of
large detectors during the \nacquisition phase. They have proven useful fo
r communicating the status of the \ndetector and to keep the history of la
b sessions in a format that can be queried and \nretrieved quickly. It has
enabled remote monitoring of the detector and remote \nemergency help.\n\
nWe have implemented an electronic Control Room Logbook\, called CRL. It
is used in \nthe D0 experiment's detector control room for the Run II acqu
isition. As of mid \nApril 2004 there are over 305\,000 entries in the D0
logbook\, all viewable and able \nto be annotated from the web. Other exp
eriments such as CMS\, MiniBoone\, and Minos \nhave also adapted the CRL.
These experiments all have very different needs\, so they \nall configure
d and customized the CRL in many different ways. The HEPBook will move \nt
he logbook from the control room to the personal and collaborative HEP not
ebook. In \nthis paper we will review the HEPBook technology and capabilit
ies and discuss the \nnew HEPBook architecture. Among the topics discusse
d will be the use of Java \nreflection to recursively produce an XML repre
sentation of an entry\, the ability to \nsave personal entries as well as
share entries among a collaboration through \nmultiple repositories which
incorporate software agent technology\, interface with \nthe GRID\, and im
plement multiple security models. The HEPBook runs on all Java \nplatform
s including Apple\, Win32\, and Linux. A brief demo will be given of the \
nHEPBook.\n\nhttps://indico.cern.ch/event/0/contributions/1294523/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294523/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Design\, Installation and Management of a Tera-Scale High Thro
ughput Cluster for Particle Physics Research
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294528@indico.cern.ch
DESCRIPTION:Speakers: A. Martin (QUEEN MARY\, UNIVERSITY OF LONDON)\nWe de
scribe our experience in building a cost efficient High Throughput Cluster
(HTC)\nusing commodity hardware and free software within a university env
ironment.\nOur HTC has a modular system architecture and is designed to be
 upgradable. \nThe current\, second-phase configuration consists of 344
processors and 20 Tbyte of \nRAID storage.\n\nIn order to rapidly install
and upgrade software\, we have developed\nautomatic remote system installa
tion and configuration tools to deploy standard\nsoftware configurations o
n individual machines. To efficiently manage machines we \nhave written a
custom cluster configuration database. This database is used to track \nal
l hardware components in the cluster\, the network and power distribution
and the \nsoftware configuration. Access to this database and the cluster
performance and \nmonitoring systems is provided by a web portal\, which a
llows efficient remote \nmanagement in our low-manpower environment.\n\nWe
describe the performance of our system under a mixed load of scalar and p
arallel \ntasks and discuss future possible improvements.\n\nhttps://indic
o.cern.ch/event/0/contributions/1294528/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294528/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Experiences with the gLite Grid Middleware
DTSTART;VALUE=DATE-TIME:20040929T130000Z
DTEND;VALUE=DATE-TIME:20040929T132000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294529@indico.cern.ch
DESCRIPTION:Speakers: Birger KOBLITZ (CERN)\nThe ARDA project was started
in April 2004 to support\nthe four LHC experiments (ALICE\, ATLAS\, CMS an
d LHCb)\nin the implementation of individual\nproduction and analysis envi
ronments based on the EGEE middleware.\n\nThe main goal of the project is
to allow a fast feedback between the \nexperiment and the middleware devel
opment teams via the\nconstruction and the usage of end-to-end prototypes\
nallowing users to perform analyses out of the present \ndata sets from re
cent Monte Carlo productions.\n\nThe LCG ARDA project is contributing to th
e development\nof the new EGEE Grid middleware by exercising it with reali
stic\nanalysis systems developed within the four LHC experiments. We will\
npresent our experiences in using the EGEE middleware in first\nprototypes
developed by the experiments together with the ARDA\nproject. We will cov
er aspects such as the usability of individual \ncomponents of the middlew
are and give an overview on which\ncomponents are used by which experiment
s.\n\nhttps://indico.cern.ch/event/0/contributions/1294529/
LOCATION:Interlaken\, Switzerland Theatersaal
URL:https://indico.cern.ch/event/0/contributions/1294529/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Use of Condor and GLOW for CMS Simulation Production
DTSTART;VALUE=DATE-TIME:20040927T153000Z
DTEND;VALUE=DATE-TIME:20040927T155000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294530@indico.cern.ch
DESCRIPTION:Speakers: S. Dasu (UNIVERSITY OF WISCONSIN)\nThe University of
Wisconsin distributed computing research groups\ndeveloped a software sys
tem called Condor for high throughput computing\nusing commodity hardware.
 An adaptation of this software\, Condor-G\, is\npart of the Globus grid c
omputing toolkit. However\, the original Condor has\nadditional features t
hat allow building an enterprise-level grid.\nSeveral UW departments have Cond
or computing pools that are integrated\nin such a way as to flock jobs fro
m one pool to another as resources\nbecome available. An interdisciplinary
team of UW researchers recently\nbuilt a new distributed computing facili
ty\, the Grid Laboratory of\nWisconsin (GLOW). In total Condor pools in th
e UW have about 2000 Intel\nCPUs (P-III and Xeon) which are available for
scientific computation.\nBy exploiting special features of Condor such as
checkpointing and\nremote IO we have generated over 10 million fully simul
ated CMS events.\nWe were able to harness about 260 CPU-days per day for a
 period of 2\nmonths when we were operational late in the fall. We have sc
aled to using\n500 CPUs concurrently when the opportunity arose to exploit u
nused resources
in\nlaboratories on our campus. We have built a scalable job submission a
nd\ntracking system called Jug using Python and mySQL which enabled us to\
nscale to run hundreds of jobs simultaneously. Jug also ensured that the\n
data generated is transferred to US Tier-I center at Fermilab. We have\nal
so built a portal to our resources and participated in Grid2003\nproject.
We are currently adapting our environment for providing\nanalysis resource
s. In this paper we will discuss our experience and\nobservations regardin
g the use of opportunistic resources\, and\ngeneralize them to wider grid
computing context.\n\nhttps://indico.cern.ch/event/0/contributions/1294530
/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294530/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Fast reconstruction of tracks in the inner tracker of the CBM expe
riment
DTSTART;VALUE=DATE-TIME:20040930T122000Z
DTEND;VALUE=DATE-TIME:20040930T124000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294531@indico.cern.ch
DESCRIPTION:Speakers: I. Kisel (UNIVERSITY OF HEIDELBERG\, KIRCHHOFF INSTI
TUTE OF PHYSICS)\nA typical central Au-Au collision in the CBM experiment (G
SI\, Germany) will produce up\nto 700 tracks in the inner tracker. The lar
ge track multiplicity\, together with\nthe presence of a nonhomogeneous ma
gnetic field\, makes event reconstruction\ncomplicated.\n\nA cellular auto
maton meth
od is used to reconstruct tracks in the inner tracker. The \ncellular auto
maton algorithm creates short track segments in neighbouring detector \npla
nes and links them into tracks. Being essentially local and parallel the c
ellular \nautomaton avoids exhaustive combinatorial search\, even when imp
lemented on \nconventional computers. Since the cellular automaton operate
s with highly structured \ninformation\, the amount of data to be processe
d in the course of the track search is \nsignificantly reduced. The method
employs a very simple track model which leads to \nutmost computational s
implicity and a fast algorithm.\n\nThe efficiency of track reconstruction for p
articles detected in at least three stations \nis presented. Tracks of hig
h momentum particles are reconstructed very well with \nan efficiency of about 9
8%\, while multiple scattering in detector material leads to lower \nrecon
struction efficiency of slow particles.\n\nhttps://indico.cern.ch/event/0/
contributions/1294531/
LOCATION:Interlaken\, Switzerland Kongress-Saal
URL:https://indico.cern.ch/event/0/contributions/1294531/
END:VEVENT
BEGIN:VEVENT
SUMMARY:HEP@HOME - A distributed computing system based on BOINC
DTSTART;VALUE=DATE-TIME:20040930T120000Z
DTEND;VALUE=DATE-TIME:20040930T122000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294532@indico.cern.ch
DESCRIPTION:Project SETI@HOME has proven to be one of the biggest successe
s of\ndistributed computing during the last years. With a quite simple\nap
proach SETI manages to process huge amounts of data using a vast\namount o
f distributed computer power.\n\nTo extend the generic usage of these kind
s of distributed computing\ntools\, BOINC (Berkeley Open Infrastructure fo
r Network Computing) is\nbeing developed. In this communication we propose
a BOINC version\ntailored to the specific requirements of the High Energy
Physics (HEP)\ncommunity - the HEP@HOME\n \nThe HEP@HOME will be able to
process large amounts of data\nusing virtually unlimited computing power\,
as BOINC does\, and it should be\nable to work according to HEP specifica
tions.\n\nOne of the main applications of distributed computing is distrib
uted data\nanalysis. In HEP the amounts of data to be analyzed are extreme
ly large. Therefore\, one of the design principles of \nthis tool is to av
oid data transfer - computation is done where data is \nstore
d. This will allow scientists to run their analysis applications even \nif
they do not have a local copy of the data to be analyzed\, taking \nadvan
tage of either very large farms of dedicated computers or \nusing their co
lleagues desktop PCs. This tool also satisfies other \nimportant requireme
nts in HEP\, namely\, security\, fault-tolerance \nand monitoring.\n\nhttp
s://indico.cern.ch/event/0/contributions/1294532/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294532/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A database perspective on CMS detector data
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294535@indico.cern.ch
DESCRIPTION:Building a state of the art high energy physics detector like
CMS \nrequires strict interoperability and coherency in the design and \nc
onstruction of all sub-systems comprising the detector. This issue \nis es
pecially critical for the many database components that are \nplanned for
storage of the various categories of data related to\nthe construction\, o
peration\, and maintenance of the detector\, like \nevent data\, slow contr
ol data\, conditions data\, calibration data\, \nevent meta data\, etc
. The data structures needed to operate the \ndetector as a whole need to
be present in the database before the \ndata is entered. Changing these s
tructures for a database system\nthat already contains a substantial amoun
t of data is a very time \nand labour consuming exercise that needs to be
avoided. Cases where \nthe detector needs to be treated as a whole are det
ector operation \n(control\, error tracking\, conditions) and the interfac
ing of there \nconstruction and simulation software.\n\nIn this paper we p
ropose to use the detector geometry as the \nstructure connecting the vari
ous elements. The design and \nimplementation of a relational database tha
t captures the CMS \ndetector geometry and the detector components is disc
ussed. The\ndetector geometry can serve as a core component in several oth
er \ndatabases in order to make them interoperable. It also provides a \nc
ommon viewpoint between the physical detector and its image in the \nrecon
struction software. Some of the necessary extensions to the \ndetector des
cription are discussed.\n\nhttps://indico.cern.ch/event/0/contributions/12
94535/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294535/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Evaluation of Grid Security Solutions using Common Criteria
DTSTART;VALUE=DATE-TIME:20040929T120000Z
DTEND;VALUE=DATE-TIME:20040929T122000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294536@indico.cern.ch
DESCRIPTION:Speakers: S. NAQVI (TELECOM PARIS)\nIn the evolution of comput
ational grids\, security threats were overlooked in the \ndesire to implem
ent a high performance distributed computational system. But now \nthe gro
wing size and profile of the grid require comprehensive security solutions
\nas they are critical to the success of the endeavour. A comprehensive s
ecurity \nsystem\, capable of responding to any attack on grid resources\,
is indispensable to \nguarantee its anticipated adoption by both the user
s and the resource providers. \nSome security teams have started working o
n establishing in-depth security \nsolutions. The evaluation of their grid
security solutions requires excellent \ncriteria to assure sufficient sec
urity to meet the needs of its users and resource \nproviders. The grid commun
ity's lack of experience in the exercise of the Common \nCriteria (CC)\, w
hich was adopted in 1999 as an international standard for security \nprodu
ct evaluation\, makes it imperative that efforts be exerted to investigate
the \nprospective influence of the CC in advancing the state of grid secu
rity. This \narticle highlights the contribution of the CC to establishing
confidence in grid \nsecurity\, which is still in need of considerable at
tention from its designers. The \nprocess of security evaluation is outlin
ed and the roles each part of the \nevaluation may play in obtaining confi
dence are examined.\n\nhttps://indico.cern.ch/event/0/contributions/129453
6/
LOCATION:Interlaken\, Switzerland Brunig 3
URL:https://indico.cern.ch/event/0/contributions/1294536/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A Hardware Based Cluster Control And Management System
DTSTART;VALUE=DATE-TIME:20040929T130000Z
DTEND;VALUE=DATE-TIME:20040929T132000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294537@indico.cern.ch
DESCRIPTION:Speakers: R. Panse (KIRCHHOFF INSTITUTE FOR PHYSICS - UNIVERSI
TY OF HEIDELBERG)\nSupercomputers are increasingly being replaced by PC\n
cluster systems\, and future LHC experiments will also use large PC\n
clusters. These clusters will consist of off-the-shelf PCs\, which in\n
general are not built to run in a PC farm. Configuring\, monitoring and\n
controlling such clusters requires a serious amount of time-consuming\n
administrative effort.\n
We propose a cheap and easy hardware solution for this issue. The main\n
item of our cluster control system is the Cluster Interface Agent card\n
(CIA).\n
The CIA card is a low-cost PCI expansion card equipped with a network\n
interface. With the aid of the CIA card the computer can be fully\n
controlled remotely\, independent of the state of the node itself. The\n
card combines a number of features needed for this remote control\,\n
including power management and reset. The card operates entirely\n
independently of the PC and can remain powered even when the PC is\n
powered down. It offers a wide range of automation features\, including\n
automatic installation of the operating system\, changing BIOS settings\,\n
booting a rescue disk\, and monitoring and debugging the node. With the\n
aid of PCI scans and hardware tests\, errors and pending failures can be\n
easily detected at an early stage.\n
Working prototypes exist. The presentation will outline the status of\n
the project and the first implementation results of the preproduction\n
devices\, currently being built.\n\n
https://indico.cern.ch/event/0/contributions/1294537/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294537/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The SEAL C++ Reflection System
DTSTART;VALUE=DATE-TIME:20040927T124000Z
DTEND;VALUE=DATE-TIME:20040927T130000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294538@indico.cern.ch
DESCRIPTION:Speakers: S. Roiser (CERN)\nThe C++ programming language has\n
very limited capabilities for providing reflection information about its\n
objects. In this paper a new reflection system will be presented\, which\n
allows complete introspection of C++ objects and has been developed in\n
the context of the CERN/LCG/SEAL project in collaboration with the ROOT\n
project.\n\n
The reflection system consists of two different parts. The first part is\n
a code generator that automatically produces reflection information from\n
existing C++ classes. This generation of the reflection information is\n
done in a non-intrusive way\, which means that the original C++ class\n
definitions do not need to be changed or instrumented. The second part\n
of the reflection system is able to load/build this information in\n
memory and provides an API to the user.\n
The user can query reflection information from any C++ class and also\n
interact generically with the objects\, for example by invoking\n
functions\, setting and getting data members\, or constructing and\n
deleting objects. When designing the different packages\, care was taken\n
to have minimal dependencies on external software and to keep the\n
software portable to different platforms/compilers.\n\n
A quick overview of the current implementation in use by the LCG SEAL\n
and POOL projects will be given. A more detailed description of the new\n
model\, which aims to reflect the complete C++ language and to be a\n
common reflection system used also by the ROOT framework\, will be given.\n\n
https://indico.cern.ch/event/0/contributions/1294538/
LOCATION:Interlaken\, Switzerland Brunig
URL:https://indico.cern.ch/event/0/contributions/1294538/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A Globally Distributed Real Time Infrastructure for World Wide Coll
aborations
DTSTART;VALUE=DATE-TIME:20040930T155000Z
DTEND;VALUE=DATE-TIME:20040930T161000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294541@indico.cern.ch
DESCRIPTION:Speakers: P. Galvez (CALTECH)\nVRVS (Virtual Room Videoconfere
ncing System) is a unique\, globally\nscalable next-generation system for
real-time collaboration by small\nworkgroups\, medium and large teams enga
ged in research\, education and\noutreach. VRVS operates over an ensemble
of national and international\nnetworks. Since it went into production ser
vice in early 1997\, VRVS has\nbecome a standard part of the toolset used
daily by a large sector of\nHENP\, and it is used increasingly for other D
oE/NSF-supported programs.\nToday\, the VRVS Web-based system is regularly
accessed by more than\n30\,000 registered hosts running the VRVS software
in more than 103\ncountries. There are currently 78 VRVS "reflectors" tha
t create the\ninterconnections and manage the traffic flow\, in the Americ
as\, Europe\nand Asia. New reflectors recently have been installed in Braz
il\, China\,\nPakistan\, Australia and Slovakia.\n\n VRVS is global in sco
pe: it covers the full range of existing and\nemerging protocols and the f
ull range of client devices for\ncollaboration\, from mobile systems throu
gh desktops to installations in\nlarge auditoria. VRVS will be integrated
with the Grid-enabled Analysis\nEnvironment (GAE) now under development at
Caltech in partnership with\nthe GriPhyN\, iVDGL and PPDG projects in the
US\, and Grid projects in Europe.\n\nA major architectural change is curr
ently in development. The new\nversion\, v4.0\, is expected to be deployed i
n early 2005. We will describe\nthe current operational state of the VRVS
service and provide a\ndescription of the new architecture including all t
he new and advanced\nfunctionalities that will be added.\n\nhttps://indico
.cern.ch/event/0/contributions/1294541/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294541/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Simplified deployment of an EDG/LCG cluster via LCFG-UML
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294543@indico.cern.ch
DESCRIPTION:Speakers: A. Garcia (KARLSRUHE RESEARCH CENTER (FZK))\nThe clu
sters using DataGrid middleware are usually installed and \nmanaged by mea
ns of an "LCFG" server. Originally developed by the \nUniv. of Edinburgh a
nd extended by DataGrid\, this is a complex piece \nof software. It allows
for automated installation and configuration of \na complete grid site. H
owever\, installation of the "LCFG" server takes most of the time\, thus\n
hindering widespread use.\n \n
Our approach was to set up and preconfigure the LCFG server inside a\n
"User Mode Linux" (UML) instance in order to make deployment faster. The\n
result is the "UML-LCFG-Server". It is provided as a prebuilt\n
root-filesystem image which can be up and running with only a few\n
configuration steps. Detailed instructions and experience are also\n
provided on the basis of tests within the CrossGrid project. Altogether\,\n
UML-LCFG makes it easier for a new site to join an EDG/LCG based Grid by\n
bypassing most of the LCFG server installation.\n\n
https://indico.cern.ch/event/0/contributions/1294543/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294543/
END:VEVENT
BEGIN:VEVENT
SUMMARY:InGRID - Installing GRID
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294545@indico.cern.ch
DESCRIPTION:Speakers: F.M. Taurino (INFM - INFN)\nThe "gridification" of a\n
computing farm is usually a complex and time-consuming task. Operating\n
system installation\, grid-specific software and configuration file\n
customization can turn into a large problem for site managers.\n
This poster introduces InGRID\, a solution used to install and maintain\n
grid software on small/medium size computing farms.\n
Grid element installation with InGRID consists of three steps.\n
In the first step\, nodes are installed using RedHat Kickstart\, an\n
installation method that automates most of a Linux distribution\n
installation\, including disk partitioning\, boot loader configuration\,\n
network configuration and base package selection.\n
Grid-specific software is then integrated using apt4rpm\, a package\n
management wrapper over the rpm commands. Apt automatically manages\n
package dependencies\, and is able to download\, install and upgrade RPMs\n
from a central software repository.\n
Finally\, grid configuration files are customized through LCFGng\, a\n
system to set up and maintain Unix machines that can configure many\n
system files\, execute scripts\, create users\, etc.\n\n
https://indico.cern.ch/event/0/contributions/1294545/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294545/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mantis: the Geant4-based simulation specialization of the CMS COBR
A framework
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294546@indico.cern.ch
DESCRIPTION:Speakers: M. Stavrianakou (FNAL)\nThe CMS Geant4-based Simulat
ion Framework\, Mantis\, is a specialization of the COBRA\nframework\, whi
ch implements the CMS OO architecture. Mantis\, which is the basis for\nth
e CMS-specific simulation program OSCAR\, provides the infrastructure for
the\nselection\, configuration and tuning of all essential simulation elem
ents: geometry\nconstruction\, sensitive detector and magnetic field manag
ement\, event generation and\nMonte Carlo truth\, physics\, particle propa
gation and tracking\, run and event\nmanagement\, and user monitoring acti
ons.\nThe experimental setup is built by Mantis using the COBRA Detector D
escription\nDatabase\, DDD\, which allows transparent instantiation of any
layout (full or partial\nCMS simulation\, test beam setups etc).\nPersist
ency\, histogramming and other important services are available using the\
nstandard COBRA infrastructure and are transparent to user applications.\n
\nAuthors: \nM. Stavrianakou\, P. Arce\, S. Banerjee\, T. Boccali\, A. De
Roeck\, V. Innocente\, \nM. Liendl\, T. Todorov\n\nhttps://indico.cern.ch/
event/0/contributions/1294546/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294546/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jefferson Lab Data Acquisition Run Control System
DTSTART;VALUE=DATE-TIME:20040929T143000Z
DTEND;VALUE=DATE-TIME:20040929T145000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294548@indico.cern.ch
DESCRIPTION:Speakers: V. Gyurjyan (Jefferson Lab)\nA general overview of t
he Jefferson Lab data acquisition run control system is presented.\nThis r
un control system is designed to operate the configuration\, control\, and
\nmonitoring of all Jefferson Lab experiments. It controls data-taking act
ivities by\ncoordinating the operation of DAQ sub-systems\, online softwar
e components and\nthird-party software such as external slow control syste
ms.\nThe main\, unique feature which sets this system apart from conventio
nal systems\nis its incorporation of intelligent agent concepts. Intellige
nt agents are autonomous\nprograms which interact with each other through
certain protocols on a peer-to-peer\nlevel. In this case\, the protocols a
nd standards used come from the\ndomain-independent Foundation for Intelli
gent Physical Agents (FIPA)\, and the\nimplementation used is the Java Age
nt Development Framework (JADE).\nA lightweight\, RDF (Resource Descriptio
n Framework) based language was developed to\nstandardize the description
of the run control system for configuration purposes.\nFault tolerance and
recovery issues are addressed. \nKey features of the system include: subs
ystem state management\, configuration\nmanagement\, agent communication\,
multiple simultaneous run management\nand synchronization\, and user inte
rfaces. A user interface allowing web-wide\nmonitoring was developed which
incorporates a JAS/AIDA data server extensible through\nJava servlets.\n\
nhttps://indico.cern.ch/event/0/contributions/1294548/
LOCATION:Interlaken\, Switzerland Jungfrau
URL:https://indico.cern.ch/event/0/contributions/1294548/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Cluster architectures used to provide CERN central CVS services
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294550@indico.cern.ch
DESCRIPTION:Speakers: M. Guijarro (CERN)\nThere are two cluster architectu
re approaches used at CERN to provide central CVS \nservices. The first on
e (http://cern.ch/cvs) depends on AFS for central storage of \nrepositorie
s and offers automatic load-balancing and fail-over mechanisms.\n\nThe sec
ond one (http://cern.ch/lcgcvs) is an N + 1 cluster based on local file \n
systems\, using data replication and not relying on AFS. It does not provi
de either \ndynamic load-balancing or automatic fail-over. Instead a serie
s of tools were \ndeveloped for repository relocation in case of fail-over
and for manual load-\nbalancing.\n\nBoth architectures are used in produc
tion at CERN and project managers can choose one \nor the other\, depending
on their needs. If\, eventually\, one architecture proves to \nbe signif
icantly better\, the other one may be phased out. This paper presents in \
ndetail both approaches and describes their relative advantages and drawba
cks\, as \nwell as some data about them (number of repositories\, average
repository size\, etc).\n\nhttps://indico.cern.ch/event/0/contributions/12
94550/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294550/
END:VEVENT
BEGIN:VEVENT
SUMMARY:BFD: A software management & deployment tool for mixed-language di
stributed projects
DTSTART;VALUE=DATE-TIME:20040930T080000Z
DTEND;VALUE=DATE-TIME:20040930T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294551@indico.cern.ch
DESCRIPTION:Speakers: M. Stoufer (LAWRENCE BERKELEY NATIONAL LAB)\nAs any
software project grows in both its collaborative and mixed codebase nature
\,\ncurrent tools like CVS and Maven start to sag under the pressure of co
mplex\nsub-project dependencies and versioning. A developer-wide failure i
n mastery of these\ntools will inevitably lead to an unrecoverable instabi
lity of a project. Even keeping\na single software project stable in a lar
ge collaborative environment has proved a\ndifficult venture in which nume
rous home-spun and commercial tools have yet to fully\nsucceed.\n\n BFD lo
oks to solve the problems inherent in large-scale software projects that\n
span multiple mixed-language software projects. This is accomplished\n
two-fold. BFD extends the versioning methodology of CVS or Maven by\n
enforcing a rich data type format for its version tags. BFD also improves\n
on the naive build ideologies of the developer's IDE by being able to\n
resolve complex dependencies between non-related projects as well as\n
knowing when incompatible dependencies cannot be resolved. The concept of\n
the Meta project has also been introduced to allow projects to be grouped\n
together in a logical manner\, thus allowing varying versions of said\n
projects to be tracked by an overarching framework.\n\n
https://indico.cern.ch/event/0/contributions/1294551/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294551/
END:VEVENT
BEGIN:VEVENT
SUMMARY:AMS-02 Computing and Ground Data Handling
DTSTART;VALUE=DATE-TIME:20040929T124000Z
DTEND;VALUE=DATE-TIME:20040929T130000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294552@indico.cern.ch
DESCRIPTION:Speakers: A. Klimentov (A)\nAMS-02 Computing and Ground Data H
andling. \n \n V.Choutko (MIT\, Cambridge)\, A.Klimentov (MIT\, Cambridge
) and\n M.Pohl (Geneva University)\n \n AMS (Alpha
Magnetic Spectrometer) is an experiment to search in \nspace for dark mat
ter and antimatter on the International Space \n Station (ISS). The AMS d
etector had a precursor flight in 1998 (STS-\n91\, June 2-12\, 1998). Mor
e than 100M events were collected and \nanalyzed. \n The final detector (A
MS-02) will be installed on ISS in the fall of \n2007 for at least 3 years
. The data will be transmitted from ISS to \nNASA Marshall Space Flight Ce
nter (MSFC\, Huntsville\, Alabama) and transferred to CERN (Geneva\,\n
Switzerland) for processing and analysis.\n\n
We are presenting the AMS-02 Ground Data Handling scenario and the\n
requirements for the AMS ground centers: the Payload Operation and\n
Control Center (POCC) and the Science Operation Center (SOC).\n\n
The Payload Operation and Control Center is where AMS operations take\n
place\, including commanding\, storage and analysis of housekeeping data\,\n
and partial science data analysis for rapid quality control and\n
feedback.\n\n
The AMS Science Data Center receives and stores all AMS science and\n
housekeeping data\, as well as ancillary data from NASA. It ensures full\n
science data reconstruction\, calibration and alignment\; it keeps data\n
available for physics analysis and archives all data.\n\n
We also discuss the AMS-02 distributed MC production currently running\n
in 15 universities and labs in Europe\, the USA and Asia\, with automatic\n
job submission and control from one central place (CERN). The software\n
uses CORBA technology to control and monitor MC production\, and an\n
ORACLE relational database to keep catalogues\, event descriptions\, and\n
production and monitoring information.\n\n
https://indico.cern.ch/event/0/contributions/1294552/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294552/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A distributed\, Grid-based analysis system for the MAGIC telescope
DTSTART;VALUE=DATE-TIME:20040927T134000Z
DTEND;VALUE=DATE-TIME:20040927T140000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294555@indico.cern.ch
DESCRIPTION:Speakers: H. Kornmayer (FORSCHUNGSZENTRUM KARLSRUHE (FZK))\nTh
e observation of high-energy gamma-rays with ground-based air Cherenkov\n
telescopes is one of the most exciting areas in modern astroparticle\n
physics. At the end of 2003 the MAGIC telescope started operation. The\n
low energy threshold for gamma-rays\, together with different background\n
sources\, leads to a considerable amount of data. The analysis will be\n
done in different institutes spread over Europe. The production of Monte\n
Carlo events\, including the simulation of Cherenkov light in the\n
atmosphere\, is very computing intensive and another challenge for a\n
collaboration like MAGIC.\n
Therefore the MAGIC telescope collaboration will take the opportunity to\n
use Grid technology to set up a distributed computational and\n
data-intensive analysis system with the technology available today. The\n
basic architecture of such a distributed\, Europe-wide Grid system will\n
be presented. First implementation results will be shown. This Grid\n
might be the starting point for a wider distributed astroparticle Grid\n
in Europe.\n\n
https://indico.cern.ch/event/0/contributions/1294555/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294555/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Current Status of Fabric Management at CERN
DTSTART;VALUE=DATE-TIME:20040927T120000Z
DTEND;VALUE=DATE-TIME:20040927T122000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294556@indico.cern.ch
DESCRIPTION:Speakers: G. Cancio (CERN)\nThis paper describes the evolution
of fabric management at CERN's T0/T1 Computing \nCenter\, from the select
ion and adoption of prototypes produced by the European \nDataGrid (EDG) p
roject[1] to enhancements made to them.\nIn the last year of the EDG proje
ct\, developers and service managers have been \nworking to understand and
solve operational and scalability issues.\n\nCERN has adopted and strengt
hened Quattor[2]\, EDG's installation and configuration \nmanagement tools
uite\, for managing all Linux clusters and servers in the Computing \nCent
er\, replacing existing legacy management systems. Enhancements to the ori
ginal \nprototype include a redundant and scalable server architecture usi
ng proxy \ntechnology and producing plug-in components for configuring sys
tem and LHC computing \nservices.\nCERN now coordinates the maintenance of
Quattor\, making it available to other sites.\n\nLemon[3]\, the EDG fabri
c monitoring framework\, has been progressively deployed onto \nall manage
d Linux nodes. We have developed sensors to instrument fabric nodes to \np
rovide us with complete performance and exception monitoring information.
\nPerformance visualization displays and interfaces to the existing alarm
system have \nalso been provided.\n\nLEAF[4]\, the LHC-Era Automated Fabri
c toolset\, comprises the State Management \nSystem\, a tool to enable hig
h-level configuration commands to be issued to sets of \nnodes during both
hardware and service management Use Cases\, and the Hardware \nManagement
System\, a tool for administering hardware workflows and for visualizing
\nand locating equipment.\n\nFinally\, we will describe issues currently b
eing addressed and planned future \ndevelopments.\n\nhttps://indico.cern.c
h/event/0/contributions/1294556/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294556/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A Data Grid for the Analysis of Data from the Belle Experiment
DTSTART;VALUE=DATE-TIME:20040930T155000Z
DTEND;VALUE=DATE-TIME:20040930T161000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294559@indico.cern.ch
DESCRIPTION:Speakers: G R. Moloney ()\nWe have developed and deployed a da
ta grid for the processing of data \nfrom the Belle experiment\, and for t
he production of simulated Belle \ndata. The Belle Analysis Data Grid brin
gs together compute and storage \nresources across five separate partners
in Australia\, and the \nComputing Research Centre at the KEK laboratory i
n Tsukuba\, Japan.\n\nThe data processing resouces are general purpose\, s
hared use\, compute \nclusters at the Universities of Melbourne and Sydney
\, the Australian \nPartnership for Advanced Computing (APAC)\, the Victor
ian Partnership \nfor Advanced Computing (VPAC) and the Australian Centre
for Advanced \nComputing and Communications (AC3).\n\nThis system is in us
e for the Australian contribution to the \nproduction of simulated data fo
r the Belle experiment\, and for physics \nanalyses.\n\nThe Storage Resour
ce Broker (SRB)\, from the San Diego Supercomputing \nCentre\, is used to
provide a robust underlying data repository. A \nfederation of SRB servers
has been established to share and manage \nBelle data between the KEK lab
oratory\, the mass data store at the \nAustralian National University (ANU
) and satellite storage at each of \nthe compute clusters.\n\nThe Globus t
oolkit is the underlying technology for the management of \nthe computing
resources\, and the despatching of jobs. A network aware \njob scheduler h
as been developed. The scheduler queries the SRB \nservers for location of
data replicas\, and arranges scheduling of processing and \nproduction j
obs on the compute resources according to a \nstatic model of the network
connectivity and dynamic assessment of the \nrelative system loads.\n\nhtt
ps://indico.cern.ch/event/0/contributions/1294559/
LOCATION:Interlaken\, Switzerland Ballsaal
URL:https://indico.cern.ch/event/0/contributions/1294559/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Monitoring the CDF distributed computing farms
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294561@indico.cern.ch
DESCRIPTION:Speakers: I. Sfiligoi (INFN Frascati)\nCDF is deploying a vers
ion of its analysis facility (CAF) at several globally \ndistributed sites
. On top of the hardware at each of these sites is either an FBSNG \nor Co
ndor batch manager and a SAM data handling system which in some cases also
\nmakes use of dCache.\nThe jobs which run at these sites also make use o
f a central database located at\n
Fermilab. Each of these systems has its own monitoring.\n
In order to maintain and effectively use the distributed system\, it is\n
important that both the administrators and the users can g
et a complete global view of the system. \nWe will present a system which
integrates the monitoring of all of these services \ninto one globally acc
essible system based on the Monalisa product. This system is \nintended fo
r administrators to monitor the system status and service level and for \n
users to better locate resources and monitor job progress.\nIn addition\,
it is meant to satisfy the request by the CDF International Finance \nComm
ittee that global computing resource usage by CDF can be audited.\n\nhttps
://indico.cern.ch/event/0/contributions/1294561/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294561/
END:VEVENT
BEGIN:VEVENT
SUMMARY:SAMGrid Experiences with the Condor Technology in Run II Computing
DTSTART;VALUE=DATE-TIME:20040929T080000Z
DTEND;VALUE=DATE-TIME:20040929T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294564@indico.cern.ch
DESCRIPTION:Speakers: I. Terekhov (FERMI NATIONAL ACCELERATOR LABORATORY)\
nSAMGrid is a globally distributed system for data handling and job manage
ment\,\ndeveloped at Fermilab for the D0 and CDF experiments in Run II. Th
e Condor\nsystem is being developed at the University of Wisconsin for man
agement\nof distributed resources\, computational and otherwise. We briefl
y review the\nSAMGrid architecture and its interaction with Condor\, which
was presented\nearlier. We then present our experiences using the system
in production\,\nwhich have two distinct aspects.\n\nAt the global level\,
we deployed Condor-G\, the Grid-extended Condor\, for\nthe resource broke
ring and global scheduling of our jobs. At the heart of\nthe system is Con
dor's Matchmaking Service. As a more recent work at the \ncomputing elemen
t level\, we have been benefitting from the large computing \ncluster at t
he University of Wisconsin campus. The architecture of \nthe computing fac
ility and the philosophy of Condor's resource management \nhave prompted u
s to improve the application infrastructure for D0 and CDF\,\nin aspects s
uch as parting with the shared file system or reliance on\nresources being
dedicated. As a result\, we have increased productivity\nand made our app
lications more portable and Grid-ready. We include some\nstatistics gather
ed from our experience. Our fruitful collaboration \nwith the Condor team
has been made possible by the Particle Physics Data Grid.\n\nhttps://indic
o.cern.ch/event/0/contributions/1294564/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294564/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Designing a Useful Email System
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294567@indico.cern.ch
DESCRIPTION:Speakers: J. Schmidt (Fermilab)\nEmail is an essential part of
daily work. The FNAL gateways process in excess of \n700\,000 messages pe
r week. Among those messages are many containing viruses and \nunwanted s
pam. This paper outlines the FNAL email system configuration. We will \ndi
scuss how we have defined our systems to provide optimum uptime as well as
\nprotection against viruses\, spam and unauthorized users.\n\nhttps://in
dico.cern.ch/event/0/contributions/1294567/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294567/
END:VEVENT
BEGIN:VEVENT
SUMMARY:CHOS\, a method for concurrently supporting multiple operating sys
tems
DTSTART;VALUE=DATE-TIME:20040927T134000Z
DTEND;VALUE=DATE-TIME:20040927T140000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294569@indico.cern.ch
DESCRIPTION:Speakers: S. Canon (NATIONAL ENERGY RESEARCH SCIENTIFIC COMPUT
ING CENTER)\nSupporting multiple large collaborations on shared compute\nf
arms has typically resulted in divergent requirements from the\nusers on t
he configuration of these farms. As the frameworks used\nby these collabo
rations are adapted to use Grids\, this issue will likely\nhave a signific
ant impact on the effectiveness of Grids.\nTo address these issues\, a met
hod was developed at Lawrence Berkeley National\nLab and is being used in
production on the PDSF cluster. This method\, termed\nCHOS\, uses a combi
nation of a Linux kernel module\, the change\nroot system call\, and sever
al utilities to provide access to\nmultiple Linux distributions and versio
ns concurrently on a\nsingle system. This method will be presented\, alon
g with an explanation\non how it is integrated into the login process\, gr
id services\,\nand batch scheduler systems. We will also describe how a d
istribution\nis installed and configured to run in this environment and ex
plore\nsome common problems that arise. Finally\, we will relate our expe
rience\nin deploying this framework on a production cluster used by severa
l\nhigh energy and nuclear physics collaborations.\n\nhttps://indico.cern.
ch/event/0/contributions/1294569/
LOCATION:Interlaken\, Switzerland Harder
URL:https://indico.cern.ch/event/0/contributions/1294569/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Patching PCs
DTSTART;VALUE=DATE-TIME:20040928T080000Z
DTEND;VALUE=DATE-TIME:20040928T080000Z
DTSTAMP;VALUE=DATE-TIME:20190121T225501Z
UID:indico-contribution-0-1294572@indico.cern.ch
DESCRIPTION:Speakers: J. Schmidt (Fermilab)\nFNAL has over 5000 PCs runnin
g either Linux or Windows software. Protecting these \nsystems efficiently
against the latest vulnerabilities that arise has prompted FNAL \nto take
a more central approach to patching systems. We outline the lab support \
nstructure for each OS and how we have provided a central solution that wo
rks within \nexisting support boundaries. The paper will cover how we iden
tify what patches are \nconsidered crucial for a system on the FNAL networ
k and how we verify that systems \nare appropriately patched.\n\nhttps://i
ndico.cern.ch/event/0/contributions/1294572/
LOCATION:Interlaken\, Switzerland Coffee
URL:https://indico.cern.ch/event/0/contributions/1294572/
END:VEVENT
END:VCALENDAR