Structured Parallel Programming

The University of Oregon has developed a parallel curriculum for a 4xx/5xx-level course and has shared its lecture slides and lab assignments from the course. The course uses our book Structured Parallel Programming as a key text.

The Intel Parallel Computing Center at the University of Oregon aims to develop an undergraduate parallel computing course to be offered each year in the Department of Computer and Information Science. The larger objective, however, is to share our experiences and materials with the wider parallel computing community. In that spirit, the lecture and lab curriculum components of a prototypical 10-week course are offered below.

Sixty-nine people contributed to this book of high-performance programming examples. Optimization work on real-world examples from around the world is shown with code (which can also be downloaded) and explanations. The examples run on processors and Intel Xeon Phi coprocessors using the same code. An outstanding read for anyone interested in how to optimize code for today's machines.

(James Reinders and Jim Jeffers, editors and contributors, with the book.)

I have a copy of my latest book (with 6 wonderful co-authors)! Based on the SIGGRAPH tutorial we did last year, it reviews successful techniques for parallel programming in visual effects applications (think: animated movies!). The most-referenced technique is TBB, although other methods, including OpenCL, are also discussed.

Piper is an experimental prototype of Intel® Cilk™ Plus that provides library headers and runtime support for pipe-while loops. A pipe-while loop is a new parallel loop construct described in the paper "On-the-fly pipeline parallelism," published in July 2013 in collaboration with researchers at MIT. It generalizes an ordinary while loop to allow pipeline parallelism between iterations.

ABSTRACT:
Parallel programming is important for performance, and developers need a comprehensive set of strategies and technologies for tackling it. This tutorial is intended for C++ programmers who want to better grasp how to envision, describe, and write efficient parallel algorithms at the single shared-memory node level. This tutorial will present a set of algorithmic patterns for parallel programming. Patterns describe best-known methods for solving recurring design problems. Algorithmic patterns in particular are the building blocks of algorithms. Using these patterns to develop parallel algorithms will lead to better-structured, more scalable, and more maintainable programs. This course will discuss when and where to use a core set of parallel patterns, how to best implement them, and how to analyze the performance of algorithms built using them. Patterns to be presented include map, reduce, scan, pipeline, fork-join, stencil, tiling, and recurrence. Each pattern will be demonstrated using working code in one or more of Cilk Plus, Threading Building Blocks, OpenMP, or OpenCL. Attendees will also have the opportunity to test the provided examples themselves on an HPC cluster for the duration of the SC13 conference.

This book fills a need for learning and teaching parallel programming, using an approach based on structured patterns which should make the subject accessible to every software developer. It is appropriate for classroom use as well as individual study.

We took the approach of teaching parallel programming as programming first, without requiring deep prior knowledge of computer architecture. In other words, we approached the problem of teaching parallel programming as we would approach teaching programming traditionally: we start with basic concepts and show common usage modes (also known as patterns). Every parallel programmer should know what a stencil operation is, just as every programmer should know what a stack or a queue is. Knowing which programming structures are in widespread use shapes how we think about algorithms and coding. These programming patterns are what should be foremost in our minds. Computer architecture is very important, and we dearly love to talk about it, but we believe that these universal patterns are the key to teaching. We do not shy away from computer architecture as a key concern for optimization. We do avoid teaching the architecture as a prerequisite to teaching the programming.

This text offers any C or C++ programmer an effective way to learn parallel programming, because it uses the most important and successful parallel programming strategies as the teaching vehicle.