Thursday, 4 February 2010

Having just signed up for Twitter (as HPCnotes), I've realised that the space I previously had to get my point across was nothing short of luxurious (e.g. my ZDNet columns). It's like the traditional challenge of the elevator pitch - can you make your point about High Performance Computing (HPC) in the 140 character limit of a tweet? It might even be a challenge to state what HPC is in 140 characters. Can we sum up our profession that simply? To a non-HPC person?

The inspired John West of InsideHPC fame wrote about the need to explain HPC some time ago in HPCwire. It's not an abstract problem. As multicore processors (whether CPUs or GPUs) become the default for scientific computing, the parallel programming technologies and methods of HPC are becoming important for all numerical computing users - even if they don't identify themselves as HPC users. In turn, of course, HPC benefits in sustainability and usability from the mass market use of parallel programming skills and technologies.

I'll try to put it in 140 characters (less space for a link): Multicore CPUs promise extra performance but software must be optimised to take advantage. HPC methods can help.

It's not good - can you say it better? Add a comment to this blog post to try ...

For those of you finding this blog post from the short catch line above, hoping to find the answer to how HPC methods can help - well that's what my future posts and those of my colleagues here will address.

Brian Eno said that the discipline of having to build something in such a tiny space (the agency had a list of about a hundred and fifty adjectives in their specification of the sound they wanted, which finished with "and it must be 3.8 seconds long") was liberating and inspiring; when he went back to working with pieces of music that were more like three minutes long, "it seemed like oceans of time".

Perhaps you'll experience a similar effect when you're allowed to use more than 140 characters?

It's good and short, although not all HPC is developing new hardware or software - it might just be using established HPC solutions, for example. Maybe an evolution of your slogan: "HPC helps get answers to your problems faster by using powerful hardware and software."

In my own effort, I was trying not so much to define HPC as to show the link between HPC and desktop multicore.

The key thing of course is that everyone doing technical computing is encountering multicore - whether desktop modelling, trying GPUs, or using the world's fastest supercomputer. It is only through this pervasive spread of parallel hardware that the techniques previously left to HPC programmers are becoming directly relevant for the wider market. (Even if only through tools or libraries rather than explicitly learnt and used.)
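To give a flavour of what that looks like on the desktop, here is a minimal sketch (my own illustration, not anything from the post or from NAG's libraries) of a classic HPC-style pattern - splitting a numerical integration across worker processes - using nothing more than Python's standard library. The function names and the choice of integrand (whose integral over [0,1] is pi) are mine.

```python
# A minimal sketch of desktop multicore parallelism: midpoint-rule
# integration of 4/(1+x^2) over [0,1], split across worker processes.
from multiprocessing import Pool

def f(x):
    return 4.0 / (1.0 + x * x)  # integral over [0,1] is pi

def partial_sum(args):
    # Sum the midpoint-rule contributions for subintervals [start, stop).
    start, stop, n = args
    h = 1.0 / n
    return sum(f((i + 0.5) * h) for i in range(start, stop)) * h

def parallel_pi(n=1_000_000, workers=4):
    # Divide the n subintervals into one contiguous chunk per worker.
    chunk = n // workers
    ranges = [(i * chunk, (i + 1) * chunk if i < workers - 1 else n, n)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, ranges))

if __name__ == "__main__":
    print(parallel_pi())
```

The point is exactly the one above: the decomposition-and-reduction pattern is pure HPC thinking, but here it arrives on an ordinary desktop through a standard library rather than through anything the user would call supercomputing.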

Perhaps my earlier pointer to Brian Eno's work in small spaces was a bit too off-topic. As compensation, here's something that might be slightly more relevant: a few attempts to wedge Euclid's proof about the infinity of primes into Twitter space. It's still nothing to do with HPC, I'm afraid, but it has that same fixation on terseness (that is perhaps ironic, given the proliferation of space and speed in modern computing).
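For flavour, here is one possible compression of my own (not one of the linked attempts), which fits comfortably under 140 characters:

```latex
% "If p1,...,pn were all the primes, then p1*...*pn + 1 has a prime
%  factor, yet none of the pi divide it. Contradiction."
\[
  N = p_1 p_2 \cdots p_n + 1
  \;\implies\;
  \exists\, q \text{ prime: } q \mid N,\; q \notin \{p_1,\dots,p_n\}.
\]
```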

" ... HPC is used to solve problems where an algorithm suitable for a PC is yet to be discovered."

This implies that a standard PC with multicore processors or GPUs is not an HPC system.

It also highlights the need to think about algorithms and not just turn to brute force number crunching.

In his book, "The Algorithm Design Manual", Steven Skiena gives a "war story" where he manages to speed up a computation algorithmically (for a PC) in less time than his colleague spent parallelising the original algorithm. He achieved better performance, too.
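As a tiny illustration of the same lesson (my own toy example, not Skiena's actual war story): a better algorithm can beat brute force before any parallelism enters the picture. Here the quadratic pair-search and the linear hash-set version answer the same question, but the second would outrun even a parallelised version of the first for large inputs.

```python
# Does any pair of values sum to target? Two algorithms, same answer.

def has_pair_sum_bruteforce(values, target):
    # O(n^2): compare every pair of elements.
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):
            if values[i] + values[j] == target:
                return True
    return False

def has_pair_sum_fast(values, target):
    # O(n): one pass, remembering values seen so far in a hash set.
    seen = set()
    for v in values:
        if target - v in seen:
            return True
        seen.add(v)
    return False
```

Parallelising the brute-force loop buys at best a constant factor; switching algorithms changes the complexity class - which is the point of the war story.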

(Brian Eno is an inspirational character, not just for his music, but for his passion and his dedication to his work. Thank you Jeremy for the anecdote.)

About NAG

The Numerical Algorithms Group (NAG) is dedicated to applying its unique expertise in numerical engineering to delivering high-quality computational software and high performance computing services. For 40 years NAG experts have worked closely with world-leading researchers in academia and industry to create powerful, reliable and flexible software which today is relied on by tens of thousands of individual users, as well as numerous independent software vendors. NAG serves its customers from offices in Oxford, Manchester, Chicago, Tokyo and Taipei, through local staff in France and Germany, as well as via a global network of distributors.