PNNL advancing the frontiers of computing: the march toward exascale

As part of the Department of Energy's Exascale Computing Project, PNNL is leading the ExaGraph co-design center. Researcher Mahantesh Halappanavar and his colleagues on the project are developing methods and techniques to efficiently implement key combinatorial algorithms. These algorithms play a critical enabling role in numerous scientific applications, but they are among the hardest to implement efficiently on the parallel computing systems that will form the backbone of exascale computing.

Matt Macduff, a researcher in PNNL's High Performance Computing group, examines the inner workings of the Seapearl cluster, used to research computing power consumption as part of PNNL's Center for Advanced Technology Evaluation, known as CENATE. In 2016, CENATE received an HPCwire Editors' Choice Award for "Best HPC Collaboration between Government & Industry," which acknowledged CENATE's integrated evaluation of early technologies from industry collaborators Micron Technology, Mellanox Technologies, Penguin Computing, NVIDIA, IBM, and Data Vortex Technologies.

The Seapearl compute cluster, instrumented with hundreds of power and temperature sensors, provides researchers with a unique testbed for studying power consumption and thermal behavior with great precision on a large scale.

We often hear how supercomputers are used to tackle incredibly complex problems. They have played an essential role in the design of automobiles and aircraft, including the Boeing 787 Dreamliner; the creation of new anti-cancer drugs; and discoveries related to the origin of the universe. Despite these amazing successes, and the astonishing advancements in computing over the past several decades, there are still limits to what today's most powerful computers can do.

Computational scientists at the Department of Energy's Pacific Northwest National Laboratory are collaborating with colleagues from around the world, in government, academia and industry, to overcome these limits. They are working together, under leadership from DOE, to take supercomputers to the next level of performance. PNNL researchers are among the experts helping design tomorrow's supercomputers and write the software codes that will run on them.

This national quest is called the Exascale Computing Project and it seeks to deliver an "exascale-capable" computer by 2021. Exascale means that the computer can perform one quintillion calculations per second — that's a "1" with 18 zeroes behind it. This computer would be 10 times faster than the current Chinese record holder.

To grasp the difficulty of this task, suppose that we gave every person in the United States a PC and asked them to harness the collective power of those 300 million machines to solve a single problem. If they could do this, they would be operating at the exascale. But how would you program so many machines? How would they store and share results? And how would they be powered? The team is focused on answering these questions, as well as ensuring the resulting system is reliable and efficient.
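The arithmetic behind this thought experiment is easy to check. A short back-of-envelope sketch (the per-machine figure below is derived from the numbers in the text, not a measured benchmark):

```python
# Back-of-envelope arithmetic for the 300-million-PC thought experiment.
EXASCALE_OPS = 10**18      # one quintillion calculations per second
NUM_PCS = 300_000_000      # roughly one PC per U.S. resident

# Sustained rate each PC would need to contribute to reach exascale together
per_pc_ops = EXASCALE_OPS / NUM_PCS
print(f"Each PC must sustain ~{per_pc_ops / 1e9:.1f} billion calculations per second")
```

Each machine would need to sustain a few billion calculations per second, which is within reach of a modern desktop processor. The raw arithmetic works out; the hard part, as the questions above suggest, is coordinating, powering, and feeding data to all of those machines at once.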

Researchers are looking at how to redesign and reinvent the hardware, system software and applications that would be used for an exascale computer. This requires a testbed for systems evaluation. PNNL is taking the lead here by providing a first-of-its-kind computing proving ground, much like test tracks for automobiles. It provides access to advanced instruments that measure performance, power, reliability and thermal effects. It also provides modeling and simulation tools that assess individual technologies as well as the impact of combining them into a complete system. Results are broadly shared within the high-performance computing community.

PNNL researchers also are involved in developing applications for these future computing systems. It might seem a bit like a chicken-and-egg situation, but code development and system development are proceeding in parallel so that both will be ready to work together as intended.

For example, researchers are enhancing PNNL's world-leading computational chemistry capability to take full advantage of exascale computing technologies. This will allow researchers to design new drugs, develop new energy sources and invent new materials.

In another project, PNNL power engineers and mathematicians are creating a code that will simulate the electric grid. This code will be used to improve grid operations, manage fluctuating renewable power sources and avoid costly blackouts.

At the same time, PNNL computer scientists are advancing the frontiers of machine learning to glean insight from supercomputer simulations. The computer "learns" on its own, without explicit programming, to find patterns of interest in the massive output generated by the supercomputer. This speeds discovery and frees researchers for other tasks.

We often take technological innovation for granted, expecting each new laptop or cell phone to be faster than the last one. But the leap to exascale will require revolutionary — not evolutionary — advances in computing. The national labs are up to this challenge, and when we deliver, it will improve our nation's economic competitiveness, advance scientific discovery and strengthen our national security.

Steven Ashby, director of Pacific Northwest National Laboratory, writes this column monthly. His other columns and opinion pieces are available here.