.NET Tips and Tricks
by Peter Vogel

Two Questions with Mark Miller

As parallel processing becomes mainstream, tool vendors are moving to provide more support. See my conversation with Stephen Forte of tool vendor Telerik, who sees parallel processing and multicore support as having the single biggest impact on .NET development in the near future. The latest version of DevExpress' Refactor! and CodeRush (reviewed recently here), for instance, supports new refactorings for parallel processing. I spoke with Mark Miller, chief scientist at DevExpress and a C# MVP, about refactoring in .NET.

Peter Vogel: What are the challenges and opportunities in supporting refactoring for parallel processing in .NET 4 and Visual Studio 2010?

Mark Miller: The biggest challenge we faced was the moving target, as these refactorings were designed and built almost a year before .NET 4.0 shipped. With regard to opportunities, supporting parallelization continues our long history of supporting the latest technologies developed by the language and framework teams at Microsoft. Also, refactorings that convert old technology into new make it easier for customers to learn and adopt the new technology.

PV: What was hard to implement?

MM: Our biggest challenge was in the implementation of the "Execute Statement Asynchronously (FromAsync)" refactoring. That refactoring converts existing code like this:

using (Document document = new Document())
{
    document.Print();
}

Into code like this that takes advantage of the support for parallel processing:
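A sketch of what such FromAsync-based output might look like, assuming a hypothetical Document class that follows the Asynchronous Programming Model (BeginPrint taking an AsyncCallback and a state object and returning an IAsyncResult, and EndPrint accepting that IAsyncResult):

```csharp
// Hypothetical Document class assumed to expose APM-style
// BeginPrint/EndPrint methods with the standard signatures.
Document document = new Document();
Task.Factory.FromAsync(document.BeginPrint, document.EndPrint, null)
    .ContinueWith(task => document.Dispose());
```

Because the original using block is gone, the continuation disposes of the document once the asynchronous print operation completes.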

Notice that in the conversion above we go from a Print method call to referencing BeginPrint and EndPrint methods passed as parameters into the FromAsync method. The availability check for this refactoring is complex: it needs to start with the document local variable, then move to the Document class, then search that class (and its ancestors) for methods called BeginXxxxx and EndXxxxx (where "Xxxxx" matches "Print" in this case) and ensure those methods have the expected signatures.
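The real availability check runs against source code inside the IDE, but the shape of the test can be illustrated with reflection. This is a hedged sketch, not DevExpress' implementation; the method name "Print" and the HasApmPair helper are assumptions for illustration:

```csharp
using System;
using System.Reflection;

static bool HasApmPair(Type type, string methodName)
{
    // Look on the type (and, via GetMethod, its ancestors) for a
    // BeginXxxxx method with the standard APM signature:
    // IAsyncResult BeginXxxxx(AsyncCallback callback, object state)
    MethodInfo begin = type.GetMethod("Begin" + methodName,
        new[] { typeof(AsyncCallback), typeof(object) });

    // ...and a matching EndXxxxx(IAsyncResult) counterpart.
    MethodInfo end = type.GetMethod("End" + methodName,
        new[] { typeof(IAsyncResult) });

    return begin != null
        && begin.ReturnType == typeof(IAsyncResult)
        && end != null;
}
```

Only when both halves of the pair are present with the expected signatures would the refactoring offer to rewrite the call through FromAsync.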