Personal supercomputing, anyone?

Personal supercomputing may be a contradiction in terms, but that does not mean it cannot exist, says Andrew Jones.

High-performance computing (HPC) is becoming mainstream. Supercomputing is moving out of the labs and universities into industry. HPC on a desktop. Your personal supercomputer. These are all typical headlines or slogans supposedly made to show the democratisation of HPC. Yet are any of these statements true?

I have found myself on both sides of this debate at times, occasionally even at the same time. However, let me start with something categorical: supercomputing is not, and cannot be, going mainstream — simply because the most encompassing definition of supercomputing is "significantly more powerful computing than is widely used".

No-one really agrees on the precise definition of a supercomputer, but few would deny that it represents the class of computers that are at least a couple of orders of magnitude more capable than a prospective user's desktop machine.

Processing power

So, while greater amounts of processing power are becoming cheaply available to individual users — for example, the use of graphics processing units (GPUs) for scientific computing, or deskside clusters — these resources are just more powerful forms of personal computing, not supercomputing.

And this is the point where my competing opinions emerge, because I will also state that the use of extra processing solutions — whether a 16-core PC, GPUs, mini-clusters, or whatever — does constitute HPC.

So, how do I resolve this self-conflict? It's all relative. For a researcher who has only used desktop computers, experiencing a 10-fold increase in speed on one of these cheap 'personal HPC' platforms is a step-change.

And that is the root of HPC — enabling a step-change in the time to solution, or in the size of problem that can be investigated. The higher your starting point — already using clusters? — the more your step-change needs to deliver: multi-thousand-node clusters, for example.

Significant speed increase

That neatly brings me to my favourite soapbox subject. If HPC is really about a step-change in performance and, by re-engineering your software on the same hardware, you achieve a significant increase in capability or speed, then that is HPC. Yes, really. HPC is not just about the hardware.

Of course, HPC needs powerful hardware. But if your application does not run any faster or enable a larger dataset, then buying that more powerful box does not mean you are doing HPC.

I often break this down as: HPC = high-performance computer + system software + application software + user skills. I've written about this subject before, so I'll step off now, before the soapbox dissolves into a rant.

Closer to reality

Strangely, personal supercomputing is closer to reality when looking at large shared supercomputer facilities, such as national ones. Very often, a single user or research group will dominate the usage of the resource at any one time.

Perhaps this week Professor A is consuming most of the cycles; next week it might be Professor B. This pattern is a natural consequence of the rise and fall of researchers' related work outside the supercomputer — preparing simulations, post-processing, or other duties such as lecturing.

During their active phase, each user might be considered as having a pseudo-personal supercomputer. In fact, many major supercomputer centres can identify a small group of users who consume most of the resource over the course of a year.

However, there are occasionally stories of real personal supercomputers — single users who have a majority share of a facility that is unambiguously a supercomputer, maybe among the top 50 supercomputers in the world. This situation may occur because they take the lead for the modelling activities of their company, or because the nature of their work can justify such a dedicated resource.

Which loosely leads me to my final point. HPC is not just now moving into industry from the labs; it has been in active use by industry for many years. A quick study of historical Top500 lists will support this assertion.

Sure, the largest supercomputers in the world will almost always be in national labs or national academic services. But many companies do not talk about their HPC, not because they are not doing it, but because the scale of their HPC use is one critical element of their competitive capability.

For them, that step-change enabled by HPC — hardware and software — is supporting a better bottom line.

As vice-president of HPC at the Numerical Algorithms Group, Andrew Jones leads the company's HPC services and consulting business, providing expertise in parallel, scalable and robust software development. Jones is well known in the supercomputing community. He is a former head of HPC at the University of Manchester and has more than 10 years' experience in HPC as an end user.