Distance Learning for the Next Generation of Computational Scientists

In this special guest feature from Scientific Computing World, Dr. James Osborne from HPC Wales writes that distance learning techniques may help train the next generation of computational scientists.

Dr. James Osborne

Simulation and modelling are now widely seen as the third pillar of science, alongside theory and experimentation. The ability to harness today's high-performance computers is crucial for a wide range of endeavours. From developing the next generation of cars we drive, to the medication we take to combat life-threatening diseases, and the feature films that entertain us, high-performance computing is playing a key role in helping us to unravel the science that sits just beyond the horizon of our understanding.

To carry out that science requires experts in a particular domain – be that biology, chemistry, physics or engineering – not only to develop specialities within their respective fields, but also to acquire the skills required to maximize the potential of the computing systems available to them.

For the most part, it has been sufficient to run some simulations on desktop computers because, until around 2006 and the advent of multi-core processors, the amount of serial computing power available on the desktop had been doubling roughly every 18 months in accordance with Moore’s law.

Although the basic tenet of transistor density doubling still holds, we have reached the electrical and thermal limits of silicon. Nowadays therefore, instead of increasingly faster clock speeds, we have ever-larger numbers of cores per processor. The amount of parallel computing power available on the desktop continues to keep pace with Moore’s law, yet the multi-core era is forcing domain specialists to become experts in writing parallel code, in order to make their simulations run faster or at higher resolution.

The good news is that, in the field of high-performance computing, standards such as MPI (Message Passing Interface) and OpenMP (Open Multi-Processing) have been developed and continue to evolve. These standards have allowed domain specialists to develop parallel codes that run across multiple processors: since around 1992 in the case of MPI, and around 1997 in the case of OpenMP. For the types of simulation that have always run on multi-processor high-performance computing platforms, the multi-core era required only a small change in working practices on the path to ever more powerful systems.

So, how do we train the next generation of domain specialists to become computational scientists as well, able to think in parallel?

The first challenge is that parallel programming has not been taught to the majority of computer scientists over the past 20 years, let alone the wider range of science and engineering graduates across the globe. There are a few notable exceptions, but because the topic is so complex and the demand for graduates capable of building databases, maintaining servers, and developing serial desktop applications has been far greater, we have a high-performance computing skills gap.

The second issue is that one of the traditional languages of high-performance computing, with an enormous existing code-base, Fortran, is not generally taught to computer-science undergraduates. Often, more modern, object-oriented languages such as Java and C# are now in favor. Both run on managed virtual machines with features such as garbage collection, which makes them more expressive and portable, but this comes at a significant cost in computational performance for numerical work compared with compiled languages such as Fortran and C.

The good news is that a small number of domain specialists have been taught Fortran in order to maintain and extend the existing scientific code-base. While it is certainly better to learn Fortran than to try to reinvent the wheel in the language du jour, it is imperative that more students are taught this critical language.

How do we address these shortcomings?

With my computer science background, I have taught a range of programming languages, beginning with Java and C#, to computer science students. However, over the past seven years that has broadened out to teaching Fortran to groups of physics researchers, financial modelers, and even architecture graduates. Alongside this, I also teach OpenMP and MPI, in collaboration with Intel and NAG, along with basic Linux skills, so domain specialists can learn how to get the most out of high-performance computing.

As a result of the logistical difficulties of running courses over large distances, I am beginning to use a technique called 'blended learning' to support training at a distance. By bringing cohorts of domain specialists together in this way to cover more specialist topics such as OpenMP and MPI, we are starting to turn face-to-face training sessions into ones we can run across different geographical locations. Using technologies such as Moodle, and recording screencasts with a voice-over narrative – scripted and refined over a number of iterations – it is possible to train domain specialists in some of the key concepts required for high-performance computing from the comfort of their own offices.

Together with colleagues, I have developed the syllabus for a Postgraduate Certificate in HPC targeted at domain specialists in both business and academia, covering a range of key topics while teaching good practices for the development of parallel codes from the outset. Topics range from an introduction to Linux and HPC, through basic programming in Fortran or C/C++, to debugging, profiling, and optimizing codes, developing codes using OpenMP and MPI, and the basics of visualization.

My vision for the future is that we can train the next generation of computational scientists to get the most out of high-performance computing without exposing them to bad practices: teaching them the basics in a structured way, and allowing learners to progress at their own pace and engage with the topics they need from a single accessible resource.
