DOE flips switch on Titan, world’s newest fastest supercomputer

Titan is set to topple the current world leader in supercomputing power—DOE's Sequoia supercomputer, an IBM BlueGene/Q system.

Oak Ridge National Laboratory

The Department of Energy's Oak Ridge National Laboratory today powered up Titan, a new supercomputer with 299,008 CPU cores, 18,688 GPUs, and more than 700 terabytes of memory. Titan is capable of a peak speed of 27 quadrillion calculations per second (27 petaflops)—ten times the processing power of its predecessor at Oak Ridge—and will likely unseat DOE's Sequoia supercomputer (an IBM BlueGene/Q system at Lawrence Livermore National Laboratory) as the fastest in the world.

Based on the Cray XK7 system, Titan consists of 18,688 computing nodes, each with an AMD Opteron 6274 processor and an NVIDIA Tesla K20 GPU accelerator. The NVIDIA GPUs provide most of the computing horsepower for simulations, with the Opteron cores managing them. True to its name, Titan is big—it takes up 4,352 square feet of floorspace in ORNL's National Center for Computational Sciences.
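For readers unfamiliar with that offload model, here is a minimal, purely illustrative CUDA sketch of the division of labor described above: the CPU stages data and launches work, while the GPU does the bulk of the arithmetic. The kernel and variable names are hypothetical and are not drawn from Titan's actual application codes.

// Illustrative only: a toy CUDA program showing the hybrid offload pattern.
// The host CPU (an Opteron, in Titan's case) sets up the problem and
// orchestrates; the GPU performs the heavy numerical work.
#include <cstdio>
#include <cuda_runtime.h>

// GPU kernel: each thread updates one element (y = a*x + y), standing in
// for one step of a much larger simulation.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host side: allocate and initialize the problem data.
    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Stage data onto the GPU, launch the kernel, then collect results.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaDeviceSynchronize();

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);  // expect 4.0 (2*1 + 2)

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}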

The combination of GPUs and CPUs dramatically reduces the electrical power needed to deliver that much computing capacity. "Combining GPUs and CPUs in a single system requires less power than CPUs alone," said Jeff Nichols, ORNL's Associate Laboratory Director for computing and computational sciences. In his written statement on the launch, he called Titan a "responsible move toward lowering our carbon footprint."

Titan is an upgrade to Jaguar, a Cray XK6 system that as of June was the sixth-fastest supercomputer in the world, drawing seven megawatts at its 2.3-petaflop peak performance. Titan will provide about 10 times that performance while drawing nine megawatts. According to NVIDIA officials, achieving the same performance with Opteron CPUs alone would have required a system four times as large, consuming over 30 megawatts of power. The move to a hybrid CPU/GPU architecture is another step down the road toward "exascale" computing systems, with a goal of achieving 1,000 quadrillion (1 quintillion) calculations per second.
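A back-of-the-envelope comparison using the figures above (peak numbers, not measured benchmark results) shows the scale of the efficiency gain:

Jaguar: 2.3 petaflops / 7 megawatts ≈ 0.33 petaflops per megawatt
Titan: 27 petaflops / 9 megawatts = 3 petaflops per megawatt, roughly a ninefold improvement
Hypothetical CPU-only Titan: 27 petaflops / 30+ megawatts ≤ 0.9 petaflops per megawatt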

ORNL researchers have been preparing for the shift to Titan's hybrid architecture for the past two years as the upgrade from Jaguar was planned, and several projects are already set to run on the new architecture. James Hack, Director of ORNL's National Center for Computational Sciences, said "Titan will allow scientists to simulate physical systems more realistically and in far greater detail. The improvements in simulation fidelity will accelerate progress in a wide range of research areas such as alternative energy and energy efficiency, the identification and development of novel and useful materials and the opportunity for more advanced climate projections."

Who knows what the actual top dog super computer really is these days. Old Cray guys will tell stories of driving a semi-truck loaded with a tricked out super computer to edge of a generic grocery store and leaving it there for some unnamed agency to pick up. There's a lot of computing power we'll never hear about.

What exactly will this be used for? Since it is part of the DoE, I'd assume nuke simulation?

No, not really. This is not a classified machine like Sequoia; this is an Open Science machine run by the Office of Science. The major applications are S3D (combustion simulation), LAMMPS (molecular dynamics), CSM (climate), NAMR (nuclear physics), physics simulations like Chimera and Maya, and fusion codes like GTC, GTS, XGC, and Pixie.

But there will be very, very few classified simulations (if any, and probably only with a full-system reservation).

How many iterations of "we need this huge, monstrously fast machine to perform stewardship of our nuclear arsenal" crap do we need to hear, when it probably costs more to build these computers and program the damn things than it does to just build new nukes, for cryin' out loud!

It's much easier to get Greenpeace and the rest of the world to agree with you simulating a few dozen nuclear explosions than it is to get them to agree with you doing it IRL.

Cave Johnson here. The boys down at the lab told me 10 petaflops was a pipe dream, that 20 petaflops was impossible under the laws of physics. Well, I fired the lot of them and hired some engineers that aren't afraid to break a few rules. Ha! 30 petaflops! They said it couldn't be done, but we here at Aperture don't let petty things like the laws of thermodynamics get between us and some good ol' fashioned science. Now, I know what you're thinking: Cave, how did you get AMD and Nvidia to cooperate long enough to build a supercomputer? Well, we couldn't. Bought 'em both out and made 'em do it anyways! When ol' Cave's signing the paychecks, those goobers are a lot more motivated to get the job done, let me tell you. Now, let's figure out how to turn the damn thing on!


Is there any speculation as to what's being installed in the NSA's gigantic datacenter out in Utah? I'd guess it will be storage-based rather than CPU-based, but it still has to be in the same family as something like this.


Titan isn't used to perform stewardship. Instead it's used to make better engines, predict ITER design directions, maintain safe reactors, study fundamental problems, design new drugs, even make better trucks!

The one thing Titan will never be used for is stewardship. That's what Sequoia is for.

Sean Gallagher / Sean is Ars Technica's IT Editor. A former Navy officer, systems administrator, and network systems integrator with 20 years of IT journalism experience, he lives and works in Baltimore, Maryland.