If ever there was a language that needed a book on parallel programming it is C++. However, the subtitle of this book is Design Patterns for Decomposition and Coordination on Multicore Architectures, and there is a point of view that would say that abstractions like "Design Patterns" sit least well with down-to-earth, practical C++. If you are worried that this book might be just a collection of vague recommendations about having the right attitude, I can tell you now that it is a fairly practical, if not exactly hands-on, introduction to parallel programming.

The most important thing to know is that it uses Microsoft Visual C++ and the Parallel Patterns Library (PPL) - you also need to know how to use the Standard Template Library (STL).

Chapter 1 is just about the most waffle-packed chapter in the entire book, but even here, if you don't know much about parallel programming, it has something to tell you about scaling. You do need to read it.

Chapter 2 launches in with a look at the parallel for. It explains the basic syntax of the function and the sorts of things that can go wrong, i.e. the sorts of coupling between iterations that can make the outcome different from what you might expect. It gives examples and details anti-patterns, i.e. general situations where things go wrong.
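To give a flavour of the idea, here is my own portable sketch in standard C++ rather than the book's PPL code (which would use concurrency::parallel_for); the function name parallel_for_each_index is invented for illustration:

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Apply f(i) for every i in [0, n), spread across a few worker threads.
// Each iteration must be independent of the others - the couplings
// between iterations that the chapter warns about would break this.
template <typename Func>
void parallel_for_each_index(std::size_t n, Func f) {
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([=] {
            // Strided split: worker w handles i = w, w+workers, w+2*workers, ...
            for (std::size_t i = w; i < n; i += workers) f(i);
        });
    }
    for (auto& t : pool) t.join();
}
```

Because each index is written by exactly one worker, there is no shared state to protect - exactly the property PPL's parallel_for asks of the loop body.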

Chapter 3 extends the ideas to parallel tasks which of course are the foundation on which the parallel for is built. This explains the syntax and operation of tasks and again gives some anti-patterns.
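The task idea in miniature - again this is my own standard C++ sketch (the book uses PPL's task facilities), with std::async standing in for spawning a task and get() for joining on it:

```cpp
#include <future>

// A small computation we want to split into two independent tasks.
int sum_range(int lo, int hi) {          // sums the integers in [lo, hi)
    int s = 0;
    for (int i = lo; i < hi; ++i) s += i;
    return s;
}

// Fork two tasks for the two halves, then join on both results.
int sum_in_two_tasks(int n) {
    auto left  = std::async(std::launch::async, sum_range, 0, n / 2);
    auto right = std::async(std::launch::async, sum_range, n / 2, n);
    return left.get() + right.get();     // get() waits for each task
}
```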

Chapter 4 moves on to perhaps what you might call the first real pattern - Parallel Aggregation. This is a cut-down version of map-reduce. The key to following it is the use of the combinable class, which automates the splitting of the task into parallel portions - the map operation - and the combining of the results to give a single answer - the reduce operation. This is the point where things get complicated and you have to work quite hard to follow what is going on. Perhaps this portion of the book could be written more simply for beginners in a future edition.
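Stripped of the combinable machinery, the pattern itself is simple enough; a hand-rolled standard C++ sketch (my own, not the book's) makes the map and reduce halves visible - each worker accumulates into a private slot, and the slots are merged at the end, which is roughly what combinable automates for you:

```cpp
#include <numeric>
#include <thread>
#include <vector>

// Parallel aggregation: each worker sums a private partial total
// (the map half), and the partials are merged afterwards (the
// reduce half) - no locking needed because nothing is shared.
long long parallel_sum(const std::vector<int>& data, unsigned workers = 4) {
    std::vector<long long> partial(workers, 0);
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&, w] {
            for (std::size_t i = w; i < data.size(); i += workers)
                partial[w] += data[i];   // one slot per worker, no race
        });
    }
    for (auto& t : pool) t.join();
    // The reduce step: combine the per-worker partial results.
    return std::accumulate(partial.begin(), partial.end(), 0LL);
}
```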

Chapter 5 introduces an even more abstract concept - the future. A future is a stand-in for a result you don't know yet. When the value is known, the expressions involving the future can be computed.
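Standard C++ has the same concept, so a small sketch of my own shows the idea without any PPL at all - a std::future stands in for the not-yet-known value, and the consumer simply blocks at get() until a producer thread supplies it:

```cpp
#include <future>
#include <thread>

// A future is a stand-in for a value that does not exist yet.
// The consumer blocks in get() until the producer fulfils the promise.
int answer_eventually() {
    std::promise<int> p;
    std::future<int> f = p.get_future();   // the stand-in for the result
    std::thread producer([&p] { p.set_value(6 * 7); });
    int result = f.get() + 1;              // an expression using the future
    producer.join();
    return result;
}
```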

Chapter 6 is about dynamic task parallelism, where a task is split into different-sized portions as needed. You can think of it as the parallel implementation of a divide-and-conquer algorithm. You can use it to do things like search a tree structure or implement a parallel QuickSort - both of which are explained in the chapter.
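The QuickSort case is worth sketching; this is my own standard C++ version of the general idea, not the book's code - one half of each partition is handed off as a new task while the current thread recurses on the other, with a depth cut-off so tiny subproblems don't pay the task overhead:

```cpp
#include <algorithm>
#include <future>
#include <vector>

// Divide-and-conquer quicksort with dynamic task parallelism:
// spawn a task for the left half, recurse on the right half here,
// and fall back to plain recursion once the depth budget runs out.
void parallel_quicksort(std::vector<int>& v, std::size_t lo, std::size_t hi,
                        int depth = 3) {
    if (hi - lo < 2) return;
    int pivot = v[lo + (hi - lo) / 2];
    auto first = v.begin() + lo, last = v.begin() + hi;
    // Three-way split: [lo, a) < pivot, [a, b) == pivot, [b, hi) > pivot.
    auto mid1 = std::partition(first, last, [pivot](int x) { return x < pivot; });
    auto mid2 = std::partition(mid1, last, [pivot](int x) { return x == pivot; });
    std::size_t a = mid1 - v.begin(), b = mid2 - v.begin();
    if (depth > 0) {
        // Dynamic splitting: one half becomes a fresh task.
        auto task = std::async(std::launch::async, [&v, lo, a, depth] {
            parallel_quicksort(v, lo, a, depth - 1);
        });
        parallel_quicksort(v, b, hi, depth - 1);
        task.get();                      // join before returning
    } else {
        parallel_quicksort(v, lo, a, 0);
        parallel_quicksort(v, b, hi, 0);
    }
}
```

The two recursive calls touch disjoint ranges of the same vector, so no locking is needed - the same independence argument the book makes for its version.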

The final chapter explains the pipeline pattern, i.e. producer-consumer type architectures, and the example is taken from image processing.
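A minimal two-stage version of the pattern, again my own portable sketch rather than the book's image-processing example (the Channel class is invented for illustration) - one thread produces items into a thread-safe queue while a second thread consumes and transforms them, so the stages overlap in time:

```cpp
#include <condition_variable>
#include <mutex>
#include <optional>
#include <queue>
#include <thread>
#include <vector>

// A tiny thread-safe channel connecting two pipeline stages.
class Channel {
    std::queue<int> q_;
    std::mutex m_;
    std::condition_variable cv_;
    bool closed_ = false;
public:
    void push(int v) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(v); }
        cv_.notify_one();
    }
    void close() {                        // producer signals end of stream
        { std::lock_guard<std::mutex> lk(m_); closed_ = true; }
        cv_.notify_all();
    }
    std::optional<int> pop() {            // empty optional => stream finished
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty() || closed_; });
        if (q_.empty()) return std::nullopt;
        int v = q_.front(); q_.pop();
        return v;
    }
};

// Stage 1 produces 0..n-1; stage 2 squares each item as it arrives.
std::vector<int> run_pipeline(int n) {
    Channel ch;
    std::vector<int> out;
    std::thread producer([&] {
        for (int i = 0; i < n; ++i) ch.push(i);
        ch.close();
    });
    std::thread consumer([&] {
        while (auto v = ch.pop()) out.push_back(*v * *v);
    });
    producer.join();
    consumer.join();
    return out;
}
```

Because the queue is FIFO and there is a single consumer, the output order matches the input order - the property a pipeline stage usually has to guarantee.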

The book closes with a set of appendixes that really could have been included as full chapters - the task scheduler and resource manager, debugging and profiling, and a technology overview. I really can't see why these are tucked away at the back in a form that many readers will tend to skip - they are just as valuable as the rest of the book.

So overall - yes, this is a really good book. I even managed to smile at the cartoons that start each chapter, and even if you don't find them funny they do summarize the core idea being introduced in the chapter, so they do add something.

Of course there is no point in reading this book if you are not programming in C++, and you probably have to have achieved a reasonable mastery of C++ to make good use of it. There are places where the explanations could be clearer, but this is a short book and so you have to forgive the need to condense some ideas. I have to say that this is the first book I have read for some time where I thought a few more pages would have been welcome - usually I bemoan the waste of paper in going over the obvious.