THOR: A Versatile Commodity Component of Supercomputer Development

CERN continues to use Linux as its OS of choice for modeling and simulation studies.

The world's highest-energy particle
accelerator, the Large Hadron Collider (LHC), is presently being
constructed at the European Laboratory for Particle Physics
(CERN) near Geneva, Switzerland. The planned date for first
collisions is 2005. Since the demise of the US Superconducting
Super Collider (SSC) in 1993, CERN has essentially become a world
laboratory where American, African, European, Asian and Australian
physicists work side by side. The LHC will penetrate deeper than
ever into the microcosm, recreating the conditions that prevailed
in the universe just a millionth of a millionth of a second after
the big bang, when the temperature was ten thousand million
million degrees.

Our group is a small part of the team of approximately 1500
physicists, from over 100 institutions around the world, engaged in
the construction of the ATLAS (A Toroidal LHC ApparatuS)
experiment, one of two general-purpose detectors preparing to take
data at the LHC. The experimental environment of ATLAS is
punishing. For example, ATLAS has hundreds of thousands of detector
channels and must keep up with a collision rate that can give rise
to approximately 30 new events every 25 nanoseconds. Also,
detectors and their accompanying electronics often must operate in
high-radiation environments. The computing requirements in such an
arena are, to say the least, demanding.
CERN is no stranger to the software development required to solve
the unique problems of international particle physics. For
example, the World Wide Web was originally designed at CERN to aid
communication among collaborations of several hundred physicists
scattered across research institutes and universities worldwide.

Design Considerations

The particle physicists in our group are involved in two
areas that pose large computing problems. The first is
time-critical computing, where a raw event rate of around 1GHz
must be reduced to about 100Hz by a three-stage, real-time data
selection process called triggering. We are involved, along with
groups from CERN, France, Italy and Switzerland, in the final
stage of triggering, called the Event Filter, which reduces the
data rate from 1GB/s to 100MB/s, fully reconstructs the data for
the first time and writes the data to a storage medium. It is
estimated that this last stage of processing will require on the
order of a thousand “Pentium”-class processors, assuming current
trends in processor speed continue.

We are also actively involved in simulating the response of
the ATLAS detector to the physics processes that will be, or might
be, present. This second task is not time-critical, but it
requires large simulation programs and often many hundreds of
thousands of fully simulated events. Neither of these applications
requires nodes to communicate during processing: each event can be
handled independently of all the others.
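
Because each event is independent, a run can simply be cut into
blocks and each node handed its own. The listing below is a
minimal sketch of the kind of static decomposition such an event
farm can use; it is an illustration under our own assumptions, not
THOR production code, and process_event() is a hypothetical
placeholder for a full simulation or reconstruction pass.

Listing 1. A Sketch of Static Event Decomposition

#include <stdio.h>

/* Hypothetical stand-in for a full simulation or
 * reconstruction pass on one event. */
static void process_event(long event)
{
    (void)event;  /* real work would go here */
}

/* Divide n_events statically among n_nodes; each node works
 * through its own contiguous block with no inter-node
 * communication.  The arithmetic spreads any remainder over
 * the low-numbered nodes. */
void process_block(long n_events, int n_nodes, int node_id)
{
    long first = (long)node_id * n_events / n_nodes;
    long last  = (long)(node_id + 1) * n_events / n_nodes;

    for (long e = first; e < last; e++)
        process_event(e);

    printf("node %d: events %ld through %ld\n",
           node_id, first, last - 1);
}

int main(void)
{
    /* Show the split of 10 events over 4 nodes. */
    for (int node = 0; node < 4; node++)
        process_block(10, 4, node);
    return 0;
}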

In order to pursue our research aims in these two areas, we
had to develop a versatile system that could function as a
real-time prototype of the ATLAS Event Filter and also be able to
generate large amounts of Monte Carlo data for modeling and
simulation. We needed a cost-effective solution that was scalable
and modular, as well as compatible with existing technology and
software. Also, because of the time scale of the project, we
required a solution with a well-defined and economical upgrade
path. These constraints led us inevitably toward a “Beowulf-type”
commodity-component multiprocessor with a Linux operating system.
The machine we finally developed was called THOR, in keeping with
the Nordic flavor of the names of similar systems, such as NASA's
Beowulf machine and LOKI at Los Alamos National Laboratory.

During our design discussions on THOR, it soon became clear
to us that scalability, modularity, cost-effectiveness,
flexibility and access to a commercial upgrade path make the
commodity-component multiprocessor an effective approach to
high-performance computing for a myriad of scientific and
commercial tasks, from time-critical data acquisition to off-line
analysis. The combination of commodity Intel processors with
conventional Fast Ethernet and a high-speed network/backplane
fabric, the Scalable Coherent Interface (SCI) from Dolphin
Interconnect Solutions Inc., enables the THOR machine to run as a
cluster of serial processors or as a fully parallel multiprocessor
using MPI. The machine can also be rapidly reconfigured between
fully parallel, all-serial and mixed parallel-serial modes.
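
As a rough illustration of how one program can cover both modes,
the following C sketch runs an independent event loop under MPI:
launched on a single processor it reduces to the serial case,
while across the cluster each rank simply takes its share of the
events. The event count is an arbitrary placeholder, and the
single MPI_Reduce at the end is mere bookkeeping rather than
inter-event communication.

Listing 2. An Independent Event Loop under MPI

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    long n_events = 100000;   /* arbitrary placeholder */
    long done = 0, total = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank takes every size-th event; no communication
     * is needed while events are being processed. */
    for (long e = rank; e < n_events; e += size) {
        /* process_event(e) would go here */
        done++;
    }

    /* One collective at the end to confirm the bookkeeping. */
    MPI_Reduce(&done, &total, 1, MPI_LONG, MPI_SUM,
               0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("%d rank(s) processed %ld events\n", size, total);

    MPI_Finalize();
    return 0;
}

With a typical MPI implementation, the same binary can be started
on one node or launched across the cluster with a command such as
mpirun -np 32.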

The THOR Prototype

In order to demonstrate the basic ideas of the THOR project,
a prototype has been constructed. A photograph of a slightly
earlier incarnation of THOR is shown in Figure 1. The prototype at
present consists of 42 dual-processor Pentium II/III machines (40
450MHz and 44 600MHz processors), each with 256MB of RAM. The
nodes are connected via a 48-way 100Mb/s Fast Ethernet switch. A
450MHz dual
Pentium II computer provides the gateway into the THOR prototype.
The prototype has access to 150 gigabytes of disk space via a
fast/wide SCSI interface and a 42-slot DDS2 tape robot capable of
storing approximately half a terabyte of data. The THOR prototype
currently runs under Red Hat Linux 6.1.

Figure 1. The THOR Commodity Component
Multiprocessor

Sixteen of the nodes have been connected into a
two-dimensional 4x4 torus using SCI, which allows a maximum
bi-directional link speed of 800MB/s. We have measured the
throughput of the SCI at 91MB/s, close to the PCI bus maximum of
133MB/s. This ceiling will rise when the 64-bit version of the SCI
hardware, together with 64-bit PCI buses, becomes available. The
use of SCI allows these THOR nodes to be classified as a
Cache-Coherent Non-Uniform Memory Access (CC-NUMA) machine. This
16-node (32-processor)
subdivision of the THOR prototype was implemented and tested as a
fully parallel machine by a joint team from Dolphin Interconnect
Solutions Inc. and THOR in the summer of 1999. A schematic diagram
of the THOR Linux cluster is shown in Figure 2.

Figure 2. Schematic Diagram of the THOR Linux
Cluster
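
Link figures such as the 91MB/s quoted above are commonly obtained
with a simple two-rank ping-pong test. The following C sketch
shows one minimal way to make such a measurement with standard MPI
calls; the 1MB message size and repetition count are arbitrary
illustrative choices, not the parameters of the actual THOR
measurement.

Listing 3. A Minimal MPI Ping-Pong Throughput Test

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_BYTES (1 << 20)   /* 1MB messages; arbitrary */
#define REPS      100

/* Run with exactly two ranks.  Rank 0 sends a buffer to rank 1
 * and waits for it to come back, so each repetition moves
 * 2 * MSG_BYTES across the link. */
int main(int argc, char **argv)
{
    int rank;
    char *buf = malloc(MSG_BYTES);
    double t0, t1;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0,
                     MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0,
                     MPI_COMM_WORLD, &st);
        } else {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0,
                     MPI_COMM_WORLD, &st);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0,
                     MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("throughput: %.1f MB/s\n",
               2.0 * REPS * MSG_BYTES / (t1 - t0) / 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}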

The THOR prototype described above is now being benchmarked
in both parallel and serial modes, as well as being used for
active physics research. Researchers have access to full C, C++
and FORTRAN compilers, the CERN and NAG numerical libraries and
MPI parallel libraries. We also plan to acquire the recent Linux
release of IRIS Explorer for THOR in the near future. PBS (the
Portable Batch System, developed at NASA) has been running on THOR
since March 1999.