When the Summit supercomputer opens for business next year at the Oak Ridge National Laboratory in Tennessee, it will be the most powerful supercomputer in the US, and perhaps the world.


If you want to do big, serious science, you'll need a serious machine. You know, like a giant water-cooled computer that's 200,000 times more powerful than a top-of-the-line laptop and that sucks up enough energy to power 12,000 homes.

You'll need Summit, a supercomputer nearing completion at the Oak Ridge National Laboratory in Tennessee. When it opens for business next year, it'll be the United States' most powerful supercomputer and perhaps the most powerful in the world. Because as science gets bigger, so too must its machines, requiring ever more awesome engineering, both for the computer itself and the building that has to house it without melting. Modeling the astounding number of variables that affect climate change, for instance, is no task for desktop computers in labs. Same goes for genomics work and drug discovery and materials science. If it's wildly complex, it'll soon course through Summit's circuits.

Summit will be five to 10 times more powerful than its predecessor, Oak Ridge's Titan supercomputer, which will continue running its science for about a year after Summit comes online. (Not that there's anything wrong with Titan. It's just that at 5 years old, the machine is getting on in years by supercomputer standards.) But Summit will be pieced together in much the same way: cabinet after cabinet of so-called nodes. While each of Titan's 18,688 nodes pairs one CPU with one GPU, each Summit node will have two CPUs working with six GPUs.
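A quick back-of-the-envelope comparison, using only the node counts quoted in this article (Titan: 18,688 nodes of one CPU and one GPU; Summit: 4,600 nodes of two CPUs and six GPUs), shows how the newer machine packs more accelerators into far fewer cabinets:

```python
# Node makeup of the two machines, per the figures quoted in the article.
titan_nodes, titan_gpus_per_node = 18_688, 1
summit_nodes, summit_gpus_per_node = 4_600, 6

titan_gpus = titan_nodes * titan_gpus_per_node    # 18,688 GPUs in Titan
summit_gpus = summit_nodes * summit_gpus_per_node  # 27,600 GPUs in Summit

print(titan_gpus, summit_gpus)  # prints: 18688 27600
```

So even with roughly a quarter as many nodes, Summit ends up with nearly 50 percent more GPUs than Titan.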

Think of the GPU as a turbocharger for the CPU in this relationship. While not all supercomputers use this setup, known as a heterogeneous architecture, those that do get a boost: each of Summit's 4,600 nodes can manage 40 teraflops. So at peak performance, Summit will hit 200 petaflops, a petaflop being one million billion operations a second. "So we envision research teams using all of those GPUs on every single node when they run, that's sort of our mission as a facility," says Stephen McNally, operations manager.
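The arithmetic behind that headline number is simple to check with the figures given above (4,600 nodes at roughly 40 teraflops each), and it lands in the right neighborhood of the quoted 200-petaflop peak:

```python
# Sanity check on Summit's quoted peak, using the article's figures.
nodes = 4_600
teraflops_per_node = 40  # approximate per-node throughput

peak_teraflops = nodes * teraflops_per_node
peak_petaflops = peak_teraflops / 1_000  # 1 petaflop = 1,000 teraflops

print(peak_petaflops)  # prints: 184.0 — in line with the ~200-petaflop peak
```

The small gap between 184 and 200 petaflops just reflects that the 40-teraflop-per-node figure is a round number.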