Review: Parallel Processing for Scientific Computing

I just finished reading Parallel Processing for Scientific Computing, one of the most recent volumes to join SIAM’s Software, Environments, and Tools series of scientific computing books (Jack Dongarra is the editor in chief of the series). Parallel is organized around the themes and problems presented at the Eleventh SIAM Conference on Parallel Processing for Scientific Computing, held in San Francisco in 2004 (as fate would have it, I’m writing this review in a hotel in San Francisco); even though 2004 seems like a long time ago, the editors and contributors took care in the creation of this book, and it remains timely today.

The book includes 20 articles from 91 contributors organized into four sections. The authors are a computational who’s who — you are sure to recognize names like Simon, Gropp, Lumsdaine, Snavely, Stevens, Bader, Foster, Bailey, and more — and each section includes a mix of both practical and theoretical articles.

For example, the first section, Performance Modeling, Analysis, and Optimization, opens with an article by Jesús Labarta and Judit Gimenez on the changes in structure and implementation that are needed to move performance analysis from an art to a first-class science. This article takes a step back and looks at the big picture, but still manages to stay grounded via the authors’ references to their attempts to implement some of their ideas in real software. It is followed by a survey article that covers much of the state of the practice in architecture-aware scientific computation, written as a collection of mini-articles on specific projects. The section is rounded out by a chapter on specific experiences getting to high performance on an early IBM Blue Gene, and then looks forward with a chapter on application performance modeling for ultra-scale systems.

The entire book follows this structure, with each section featuring a mix of the pragmatic and the theoretical, the strategic and the practical.

Section 2, Parallel Algorithms and Enabling Technologies, covers partitioning and load balancing (with a great section on partitioning in parallel contact/impact applications), non-PDE based computations, adaptive mesh refinement, multigrid, solvers, and fault tolerance. The fault tolerance chapter was of particular interest to me in this section. It is well-written, and a great place to start if you are just beginning to think about one of the main problems facing the practical use of exascale systems in the near future.

Section 3, Tools and Frameworks for Parallel Applications, opens with a well-written survey by William Gropp and Andrew Lumsdaine that would serve as an excellent primer for a scientist wanting to stand up a cluster and get busy using it to run codes. Other articles in this section include a survey of parallel linear algebra software by Eijkhout, Langou, and Dongarra, as well as two chapters that point to a possible future for HPC software development in component-based software systems and frameworks for scientific computing.

Finally, the last section, Applications of Parallel Computing, walks through several broad categories of challenging HPC applications. The chapters here include a treatment of PDE-constrained optimization, parallel mixed-integer programming, multicomponent simulations, and computational biology, all with an emphasis on parallel aspects.

The text closes with a capstone article by the editors that looks at the challenges and opportunities for computational science.

The last word

I think the editors have done an excellent job of shaping a collective view of scientific computing out of what would otherwise have been just another collection of articles. Even though the book is four years old now, and even though the conference that inspired it was held six years ago, Parallel remains quite up to date in some aspects of its outline of the state of the art in computing. Even in those areas where it is beginning to show its age (for example, the Blue Gene performance tuning chapter), the book remains an excellent starting point for further research.

The clear, jargon-free writing style makes for an easy read, and the references alone make exploring this text well worth your time. They are often quite complete: I found several chapters with 100 or more citations that readers can explore to develop a fuller understanding of a topic of particular interest. If you are just starting graduate studies in HPC and want to get a broad overview of the many facets of research in our field, then this book is an outstanding starting place. And if you are a seasoned practitioner, I think you’ll find the text provides a valuable point of view on a broad range of topics, with references that should keep you busy well into many sleepless summer nights.

Be sure to check out the other book reviews we’ve done here at insideHPC.
