IDC: Searching for Dark Energy in the HPC Universe

Editor’s Note: In this guest commentary, Bob Sorensen, research vice president in IDC's High Performance Computing group, argues that high performance computing is undergoing basic changes in how we should think about it and define it. Advanced scale computing was once mostly the domain of government labs and large academic research centers. Today, the HPC universe is expanding, to use Sorensen’s metaphor, and many more forces are at play and becoming visible and must be taken into account. This isn’t a new idea (see IDC: The Changing Face of HPC, HPCwire) but it is one that is increasingly crystallizing and being embraced. No doubt we will hear more at the annual IDC HPC Update Breakfast held at SC16 in November. – John Russell

The latest scientific evidence indicates that the universe is expanding at an accelerating rate and that so-called dark energy is the driver behind this growth. Even though it comprises roughly two-thirds of the universe, little is known about dark energy because it cannot be directly observed. A similar kind of dark energy exists in HPC. Simply put, the HPC universe is expanding in ways that are not directly observable using traditional HPC definitions, and new definitions may be needed to accurately capture this phenomenon.

Potential dark energy in the HPC universe encompasses a number of emerging and distinct elements, each of which in its own way adds to the collective technology and market dynamics of the HPC sector. They include:

New hardware to support deep learning applications that, with their emphasis on high computational capability, large memory capacity, and strong interconnect schemes, can rightly be called HPC systems. Examples here include the NVIDIA DGX-1 supercomputer in a box, the Facebook Big Sur rack, and the Google Tensor Processing Unit. Even Intel is moving into the field with its recent acquisition of Nervana, a cloud-based deep learning provider that will demonstrate next year its custom-designed ASIC with 32 GB of on-chip storage and six bi-directional high-bandwidth links. IDC projects that global spending on cognitive systems – of which deep learning is an integral component – will reach nearly $31.3 billion in 2019, a five-year compound annual growth rate (CAGR) of 55%. For perspective, IDC estimates that the total HPC server market that same year will be about $14 billion.

HPC-in-the-cloud offerings that are increasingly providing HPC capabilities outside the traditional HPC vendor/user relationships, such as those from AWS, Google, and Microsoft Azure. These HPC-in-the-cloud providers are offering both the hardware and software needed to attract traditional HPC users to their services, and many expect that once the pricing models for these services settle down, more and more traditional HPC workloads will be pushed out into a cloud environment. Many see this not as a zero-sum game, but as a way to grow the total HPC market. In addition, as many traditional HPC users are looking to cloud-based computation as a way to complement their in-house capabilities, vendors will need to offer seamless application migration between cloud and on-prem hardware or risk finding themselves locked out of the market. Some project that cloud-based HPC could grow to over $10 billion by 2020.

New big data applications that are running in non-traditional HPC environments but that use HPC hardware, such as in the finance or cyber security sectors. For example, Cray and Deloitte recently announced the first commercially available supercomputing-based threat analytics service offered on a subscription basis. Across the board, commercial firms that currently engage in traditional enterprise business analytics are increasingly turning to HPCs to address some of their more complex, time-sensitive, or data-rich problems. Despite this, many of these users likely will not strongly identify with, or be strongly identified by, the traditional HPC sector as part of the HPC universe. The process whereby these ‘new’ users enter the HPC universe will be an interesting one to watch, as they will bring their own unique experiences, expectations, and requirements into the mix.

As no credible theory can go forward without identifying validating experiments, it is instructive to look at what is already happening in the sector, as seen in the Top 500 HPC list. For example, in the most recent Top 500 list, there were 138 entries that simply did not fit into the traditional HPC categories. Here is how those sites self-identify instead:

68 Internet Companies

39 IT Service Providers

14 Telecommunications Companies

12 Hosting Companies

5 Cloud Companies

Although one could argue that many of these HPCs are being used for traditional HPC workloads, it is clear that something interesting is going on in the sector. Does the ability of these systems to qualify for the Top 500 list – a list that does not expressly claim to be a measure of technical HPC computing, but does use a traditional scientific calculation as its performance gatekeeper – mean that they are running scientific workloads? Or is it more likely that systems capable of qualifying for the Top 500 are increasingly not being used in traditional HPC environments, but are instead finding use in a broader range of applications?

Ultimately, if it is important to identify the dark energy in HPC, the sector needs to consider what exactly an HPC is. Can systems that run CFD calculations for an automaker, drive real-time decision making for credit card fraud detection, and self-learn to do highly accurate photo image identification all be considered HPC? If one defines HPC as the embodiment of some of the most advanced developments in hardware and software – developments that enable new scientific discoveries, underwrite innovation in engineering and manufacturing, and create significant economic return – then the answer is a clear yes. And maybe little else matters.

Perhaps it’s time for the HPC sector to expand its perspective and embrace the dark energy out there that offers significant promise for a renaissance of the HPC sector writ large. It’s either that or get left behind by these new fields that look to be key drivers of HPC-related technologies – as well as a source of financial growth – for the foreseeable future. Are we looking at a missing 68% of the HPC universe, much as we are at the cosmological level? It’s hard to say right now, but it is clear that as time passes these new HPC use cases will only grow more prevalent.

Author Bio:

Bob Sorensen, IDC

Bob Sorensen, Research Vice President in IDC's High Performance Computing group, is part of the HPC technical computing team, driving research and consulting efforts in the U.S., European, and Asia-Pacific markets for technical servers, supercomputers, clouds, and high performance data analysis. Prior to joining IDC, Mr. Sorensen worked 33 years for the U.S. Federal Government. There he served as a Senior Science and Technology analyst covering global competitive and technical HPC and related advanced computing developments in support of senior-level U.S. policy makers, including those in the White House, Department of Defense, and Treasury. Mr. Sorensen holds a bachelor’s degree in electrical engineering from the University of Rochester and a master’s degree in computer science from the George Washington University.