ISC HPC Blog

All Change!

Posted: 07-10-2012 09:00

One of the reasons that working with the high-performance computing (HPC) community is so interesting is that no sooner does it achieve one target than it turns to the next goal. In the November 2010 TOP500 list (the 'pop chart' for supercomputers), only 10 systems boasted a peak performance in excess of one petaflop/s (that's ten raised to the 15th power – or one thousand trillion – floating point operations per second). Of these 10 systems, only seven exceeded one petaflop/s for the Linpack benchmark, and it is unlikely that any real applications have actually gotten close to that performance level. However, the issue at the forefront of the HPC community today is how it will get to exaflop/s performance levels by the end of this decade.

For HPC systems, increasing performance by three orders of magnitude every decade is a way of life. Historically, this has been achieved through a combination of architectural advances and more of the same, just bigger and faster. The problem with the 'bigger and faster' route now is that the average power consumption of the top 10 systems exceeds four megawatts, with the highest at 12.6 megawatts. 'Bigger' increases power linearly with system size, while 'faster' increases it superlinearly, at a rate greater than the square of the clock rate. 'Bigger and faster' together is therefore no longer an option.
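That scaling argument can be sketched with a toy model. The cubic clock term below is a common approximation (dynamic power scales roughly with frequency times voltage squared, and supply voltage rises roughly with frequency), consistent with the "faster than the square" claim above; the function name and numbers are illustrative, not from any real system.

```python
def relative_power(node_scale, clock_scale):
    """Toy model of HPC power scaling.

    'Bigger' (more nodes) scales power linearly; 'faster' (higher
    clock) scales it roughly with the cube of the clock rate.
    """
    return node_scale * clock_scale ** 3

# Doubling the machine size at the same clock doubles power...
print(relative_power(2, 1))    # prints 2
# ...but doubling the clock alone multiplies power by eight.
print(relative_power(1, 2))    # prints 8
# 10x the nodes AND 2x the clock: an 80-fold power increase.
print(relative_power(10, 2))   # prints 80
```

The asymmetry is the point: scaling out is merely expensive, while scaling clock rate up is ruinous, which is why clock speeds have stalled while core counts climb.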

If you run your eye down the “Power” column at the top end of the current TOP500 list, you see the numbers jumping all over the place: from 12.6 megawatts down to 4, back up to 7, down to 1.4 and up to 4 again. These big, non-linear changes are due to system architecture: machines that use GPUs to deliver their performance consume much less power. This points the way to the future.

The next generation of HPC systems must be built differently. Processors will be based on low-power technologies used in mobile devices today. Systems will use a variety of processors and accelerators to deliver floating point performance. Many HPC systems today use GPUs to augment their peak performance. The resulting systems have much lower power consumption but are more difficult to program due to their complexity. Perhaps future systems will be built from a combination of high-speed nodes, big-memory nodes, GPUs, vector nodes and special-purpose devices – maybe based on FPGAs. In order to minimize the power consumed passing data between the components in a system, computer chips will be built in three dimensions, with memory directly interfaced to processors. This increased on-chip density will require innovative cooling approaches.

To deliver a 1,000-fold performance increase during this decade with only a modest increase in power consumption (perhaps a factor of three), high-performance computer design will have to change dramatically. Clock speeds may slow, but component density will increase significantly, raising the power consumed by a fully loaded rack, and system packaging will also change.
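The arithmetic behind that constraint is worth making explicit. Using only the two figures from the paragraph above (1,000× performance, 3× power), the required gain in energy efficiency, in flops per watt, falls straight out of the ratio:

```python
performance_gain = 1000  # 1,000-fold performance increase this decade
power_growth = 3         # "modest" power budget growth: a factor of three

# Flops per watt must improve by performance divided by power.
efficiency_gain = performance_gain / power_growth
print(round(efficiency_gain))  # prints 333
```

In other words, every joule must do roughly 333 times more useful work, which is why the low-power mobile technologies and denser packaging described above are unavoidable rather than optional.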

The bottom line is that if you are building a datacenter today, whether as a dedicated HPC facility or an HPC facility in the Cloud, you must think not only about the system you plan to install this year but also about the styles of system you may be installing in 5-10 years, because they will be fundamentally different from today's systems.

About the author:

John covers IT early adoption and innovation in high-performance computing at 451 Research. He is also responsible for the company's research activities within the European Commission Framework Program. John has over 30 years of experience in the IT industry, initially writing compilers and development tools for HPC platforms. The bulk of his career has been spent in a variety of technical roles at HPC systems vendors, delivering training, running benchmarks, and providing pre- and post-sales customer support. John's core technical skills are in application performance analysis, optimization and parallelization.