Stay connected, up-to-date, and informed on all things parallel development via Go Parallel, where you'll find viewpoints, how-to's, software tools, and educational information to help your software development work shine. http://goparallel.sourceforge.net/

Geeknet: The introduction to your recent book (Structured Parallel Programming, with co-authors Mike McCool and Arch Robison), discusses your perception that there’s something wrong with the way we continue to teach parallel programming like an add-on to serial programming. You say that parallel programming is actually much more natural and native, both to computer architectures and human problem-solving, and that it should be taught to programmers from the beginning of their formation, in a pattern-based way. Could you elaborate?

JR: There are really two things I think about on this topic. One is how to best teach parallel programming. And the other is the damage that’s done if you teach serial programming first. I can speak a little to both.

You said we were espousing a pattern-based programming methodology, and I actually don’t think about it that way. I think we are espousing a patterns-based teaching methodology, and I’ll draw a distinction. Patterns, to some people, have gotten a bad name. And as far as I can tell, the only reason patterns got a bad name is because some academics study patterns in great detail, and come up with patterns for many different levels, and very academically break down the problem. And it’s a very interesting field of study.

But when you try to teach programming based on that, many of the patterns they discuss are there without having proven their value. Or they may be there to round out a symmetry even though they aren’t good ideas to actually use in a program. That’s not a bad thing about patterns, but it’s one of the reasons patterns have, at least in some circles, gotten a bad name as a basis for teaching, at least for beginning programming.

But we look at patterns a different way. We look at patterns as things that actually work. And when we teach regular programming, we teach things that work. We teach pointers – and we teach things you can do with pointers. You can construct lists and queues and doubly-linked lists and trees. We teach them not because you couldn’t use your imagination to come up with them on your own; we teach them to get a shared vocabulary, so that we can talk to each other and share tips and techniques. And so we can show what has worked for other people so you can build upon it. So we took the approach with the book of picking patterns that have proven to be effective in implementing parallel programs. And rather than just teach parallel programming with “here are some basic tools, good luck!” we go through the book teaching patterns that, time and time again, show up in applications. Maybe not exactly the way we show them, but I like to say they ‘rhyme’ – they’ll feel familiar.

And we teach regular programming that way. We eventually teach not just data structures, but we teach concepts like how to do parsing, how to do databases – there are many other things we learn as programmers that are really patterns … things that have been recurring.

The older approach to teaching parallel programming would be to teach the hardware first; it would then supposedly become readily apparent to you as a programmer how to use that hardware. That’s not how we teach regular programming. I don’t sit down with beginning programmers in middle school or high school or wherever and say “Let me explain out-of-order instruction execution and register renaming and let’s look at the internals of a microprocessor until you really understand it, and then we’ll program it.”

Instead, we teach data structures; we teach programming language syntax. So I actually have a stack of parallel programming books at home. It’s a pretty tall stack, and I’ve gone through them. I’d say by and large most of them spend about half of the book – if you count how they allocate pages – on teaching computer architecture. And I can go grab all my programming books for regular programming, whether it be Perl or FORTRAN or C or C++ or JavaScript or whatever, and it’s almost zero the amount of pages they spend on any form of computer architecture. So we wrote the book thinking, “Hmm … if you’re really going to mainstream parallel programming, you have to get away from teaching computer architecture as a crutch to educate people on what they need to know to write a program.” And I can tell you, Michael, Arch and I all love computer architecture – we’re not running away from it because we don’t like it.

So that’s about how to teach it. Now – damage? Damage is a little easier to envision. I like to talk about a concept called “serial traps.” I like to point out to people that they innocently write code in certain ways that is – at least according to the programming language they’re using – pretty much ordering the language to do things serially. And they’ll immediately tell me: “Well, I didn’t mean that … and I didn’t need that.” Well, they weren’t sensitive to it when they wrote the program. And of course we weren’t sensitive to it. We wrote these programming languages on machines that weren’t parallel; nobody worried about parallelism. So why would we worry about making something that would hold up? You didn’t make oxcarts that could withstand being driven at 200 mph. It just didn’t occur to anyone.

And we look at things, and my favorite example is a for loop. Take a for loop in C – if you write for (i = 0; i < 10; i++), what it literally says is: “I want you to do everything with i = 0, and then I want you to do everything with i = 1, and then everything with i = 2, …” It doesn’t give any latitude for the iterations to be done in parallel; they’re simply not allowed to run that way. And then we build compilers and so forth that try to analyze the program enough to see whether it’s okay to run it in parallel anyway, without violating anything implied by the original program – which definitely said do them serially – and that’s quite often provably impossible. So people complain, “Hey, why doesn’t my compiler automatically run this in parallel?” And a better question is: “Why did you tell it to do it serially and hope that it could magically transform your code?” There are many, many things that fall into that category of “Hey – you told the C language or FORTRAN or C++ to do it serially. And now you’re hoping someone will undo your mess?” And that happens when you teach serial programming and think about parallelism later. It would be like teaching someone oxcart design, and then one day telling them to put the chassis on a Corvette.

Posted on September 21, 2012 by Jeff Cogswell and John Jainschigg, Geeknet Contributing Editors
