Monday, June 2, 2008

Microsoft has released the June 2008 CTP of the Parallel Extensions to the .NET Framework 3.5 library.
Learning this library, and the concepts behind well-designed, well-behaved multi-threaded applications, is becoming more and more critical to an application's success. The days of being able to count on faster and faster processors being available by the time our applications release are behind us; instead, we may find our applications running on PCs with a greater number of slower cores. This Manycore Shift is upon us already and, as developers, we need to be prepared for it.

Our user-communities expect this of us. Years ago, a Microsoft Word user expected printing a large document to tie up the application for however long it took the printer to spew out the pages. Then print spooling was introduced and the users' expectations changed -- they came to expect the application to be returned to their control faster, because the spooler could send data to the printer in the background.
Today, the user expects an instantaneous return of the application when they select Print -- no matter how large the document. They expect to be able to immediately edit, save and edit the document again while the application sends the document, in the state it was in when they selected Print, to the spooler in the background.
Furthermore, they expect all those other things that used to be synchronous operations (spell check, grammar check, etc.) to now happen behind the scenes without slowing down their use of the application's main functionality. They even expect the application to correct typing errors in the background, while they move on to make new ones.
We, as developers, expect this of our tools, as well -- with Visual Studio Intellisense and syntax checking while we code. One of the first comments made about the recently released Microsoft Source Analysis for C# was essentially: "Why doesn't it check in the background while I type and display blue-squigglies under what needs to be fixed?"
The expectations are reasonable and achievable, given the computing power available on the average desktop and the tools available, but how many of us writing line-of-business applications truly take the time to understand multi-threaded programming and build these features into our applications?
Threading used to be hard to do for the typical business-software developer. The steps to create and manage new threads were very different from anything they'd been exposed to before, and the libraries were arcane and poorly documented. But all of that is changing: the threading functionality in .NET 2.0, and now libraries like the Parallel Extensions library, PLINQ and CAB, insulate the developer from the complexities of threading and make it incredibly simple to start tasks on new threads ... and therein lies a new danger:
A co-worker and I rather regularly send each other emails, the gist of which is: threading is the work of the Devil.
Not because it's difficult to create a thread or start a process on it, but because the implications of concurrency in a large business application with multiple dependencies still have to be dealt with, no matter how easy it is to send work to the background. For the typical business software developer, who's spent an entire career in a single-threaded environment, it's hard. It requires a different conceptual mindset.
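To see how low the mechanical barrier has become, here is a minimal sketch using the CTP's `Parallel.For`. It assumes the Parallel Extensions CTP assembly is referenced (in the CTP, `Parallel` lives in the `System.Threading` namespace); the array and loop body are illustrative, not from any particular application.

```csharp
// A minimal sketch: starting parallel work is now one line.
// Assumes the Parallel Extensions CTP assembly (System.Threading.dll) is
// referenced, which places Parallel in the System.Threading namespace.
using System;
using System.Threading;

class ParallelSketch
{
    static void Main()
    {
        int[] squares = new int[8];

        // Parallel.For partitions the iterations across the available cores:
        // no Thread objects, no ThreadStart delegates, no Join bookkeeping.
        // Each iteration writes only its own slot, so there is no shared
        // mutable state to protect in this particular loop.
        Parallel.For(0, squares.Length, i => { squares[i] = i * i; });

        Console.WriteLine(string.Join(",",
            Array.ConvertAll(squares, x => x.ToString())));
    }
}
```

The ease is exactly the danger described above: had the loop body touched shared state, this one innocent-looking line would have bought us every classic concurrency problem, with nothing in the syntax to warn us.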
As Microsoft continues to work on libraries and extensions that make threading easier to implement -- and I'm sure it will -- I hope it also puts as much effort into learning resources that help developers understand the implications of using these new tools. And I hope that we, as developers, put as much effort into learning the best practices and fundamental concepts of parallel computing as we do into learning the mechanics of the tools.