Tuesday, July 22, 2008

A couple of weeks ago I got irritable because a blog posting on how to extend StyleCop for TFS TeamBuild had been removed “at Microsoft’s request”. That posting got a bit of attention on dotNetKicks, which surprised me quite a bit.

Apparently the released name of the tool (Microsoft Source Analyzer – now Microsoft StyleCop) confused me a bit – with a “Microsoft” prefix, I assumed it was an official offering with a project team and a budget. That seems not to be the case; according to that clarification blog, it’s actually the work of a single developer at Microsoft ("on evenings and weekends") – which makes it all the more impressive, in my opinion. (If Jason Allor ever leaves Microsoft and shows up at your company for an interview, hire him. StyleCop would be phenomenal and well-designed if it had a team and a budget; for one guy in his spare time ... )

In addition, the StyleCop blog now announces an upcoming version which will include an SDK and documentation, making my own custom rules tutorial obsolete. This is a good thing.

Monday, July 7, 2008

A couple years ago, my wife and I decided to sign up for eHarmony. You know, the Internet dating service with the patented, million-point matchmaking system?

Yes, we were already married. No, we weren’t having trouble and needing to know what was out there.

It was kitten’s suggestion. (I call my wife “kitten”, the reason for which I explain on my kayaking blog.)

Her theory was that we’d both sign up and see if eHarmony’s sophisticated algorithms would match us with each other. “It’ll be fun,” she said.

Now, as soon as I heard the suggestion, I knew that there was no way this could end well. This was an idea that, upon hearing, you just know is going to screw up your life somehow. But kitten wanted to do it, so I agreed. If I’d only known the true horror of what would come from this, I’d have hacked eHarmony and brought their servers to a grinding halt.

So kitten goes first, signs up, runs through all the questions and, at the end, out pops a list of hundreds, if not thousands, of compatible matches … which, for the low, low price of $49.95 a month, they’ll be happy to put her in contact with.

Then it’s my turn.

I start filling out the questionnaire. Question after question after question … it’s thorough as hell. And I’m answering everything as honestly as I can. I get to the end and ...

See, I'm not sure what would have been worse ... to get to the end and have more matches than her, or fewer. And what would have happened if we weren't in each other's matches? I don't know, because none of that's what happened. Instead, I got a message from eHarmony.

It was a nice message. Politely, even cautiously, worded – with the sort of tone people use with mad dogs or, possibly, wild-eyed men running through the streets with firearms. I don't have the exact words, but essentially it said:

We're sorry. Very, very sorry, actually, but there's nothing we can do for you. We have no one in our entire database who we'd even remotely consider subjecting to a date with you. In fact, we won't even try to look anymore.

Keep your money. Put your credit card away. We don't want it. Even if you insist, we still won't take your money -- there's simply nothing we can do for you. Please consider applying the $49.95 per month to a good therapist.

Okay, so I made that last bit about the therapist up, but the rest is the gist of what they said.

That's right ... an online dating service refused to take my money.

For years now, I've been hearing about it from kitten: "You're lucky, because there's no one else who'd have you ... but me, I have options. Hundreds of options. Thousands, maybe. You? You got no options."

I can't even really argue with her.

What does this have to do with software development?

Well, eHarmony did something hard ... they told the customer "no".

They could have taken my money and matched me up with whatever hopeless, unmatchable women they let into the service to accommodate guys like me. After all, eHarmony didn't know I was already married and just wanting to see what the list looked like ... from their perspective, I was a real customer, and what I wanted was to join a dating service, right? They could have even told me "outlook looks bleak, but we'll try" and taken my money.

So as a customer, I told them what I wanted: to join a dating service. And they told me "no".

They told me "no", because they looked beyond what I said I wanted to what would really meet my underlying requirement -- joining the dating service isn't the requirement, it's what the customer thinks is the requirement. The real requirement is to meet someone compatible. They looked at what the customer really needed and said "no, we can't do that".

If they'd gone ahead and just done what the customer asked -- let me join, despite my apparently unmatchable personality -- I would have been a dissatisfied customer. But their business model is built on success rates, not raw numbers, so they said, “no”.

Telling the customer "no" is hard, whether you're a consultant or an employee whose customer is your business-user. It's hard because that customer's paying the bills ... he's got the money ... and he's telling you exactly what he wants. But what he wants may not get him what he needs ... and what he needs may not be achievable.

One of the first projects I ever worked on was a program for the court clerk in traffic court. As the judge heard cases, the clerk was to use the program to capture the outcome and sentence, then it would generate all the right paperwork. The customer said they needed to see every option on screen at the same time. All the controls to capture verdict, adjudication, probation, fines, jail time, community service ... everything had to be on the screen at once. The customer said that's what they wanted.

This was Visual Basic 1.0 -- we quickly ran out of GDI handles, stooping to putting graphics containers on the form and drawing the static text ourselves in order to free up the handles consumed by the labels.

We ran out of screen space -- this was when the typical resolution was 640x480, but $500 ATI graphics cards got us up to 1024x768 and solved that problem.

The users couldn't see the controls at that resolution -- too small. $1500, 17" NEC monitors solved that one for us.

Clerk of Courts, remember -- it's a government contract, so money is no object.

And the customer got what they asked for. They hated it. That user-interface sucked.

Then we went to write a similar application for criminal court. How'd we get that contract when the traffic app sucked? It's local politics, you don't want to know the things the owner of that company did to keep their business.

So we meet with the criminal clerks to find out their requirements, and what do you think they want? One of the first things they say is, "everything has to be right there on the screen all the time."

This time we said, "no." This time we asked, "why?"

Turns out, it wasn't about the user being able to see everything at once, it was about speed. They had to get to things fast enough to keep up with the judge speaking. Being able to get to things quickly enough was the real requirement, not having everything visible at once. They weren’t giving us the requirement when they said “everything on screen at once”, they were giving us the technical solution.

The real requirement was one we could meet in other ways, and the criminal application was much better and more usable than the traffic app.

The first time around, we did what the customer told us they wanted -- we let the customer drive the technical solution, instead of having them articulate the problem and then presenting them with a solution.

I've always been glad that I encountered that early in my career, because I've seen the same sort of “requirement” over and over again and that lesson has always served as a reminder to look beyond the customer’s statements – and tell the customer that we aren’t going to do it that way.

As software developers, we should be prepared to analyze every requirement under the microscope of “why”, get to the real, underlying need that the customer has, and present them with a technical solution that meets their real needs.

We also need to manage their expectations – just like eHarmony does.

I recently joined a project and I find myself saying something quite often in response to the customer’s requirements: “Instantaneous is not a service-level agreement.”

Some of the customers have unrealistic expectations – as unrealistic, apparently, as my finding someone other than kitten who’ll put up with me.

If I don’t manage that expectation and tell them, “no, you will not get an ‘instantaneous’ response to your request, it will actually take some time”, then they’ll be dissatisfied with the result and the project will have failed.

It’s hard and they don’t like it, but we have to remember to do it.

Today, a customer asked us to change the key that moves the focus from field-to-field in a Windows application from Tab to the semi-colon (;). I’ll be telling them “no”.

Thursday, July 3, 2008

I started this blog last month with a series of posts on Microsoft Source Analyzer, or StyleCop, a new product released on Code Gallery that can best be described as fxCop for source code. Where fxCop works on the compiled binary, StyleCop works on the source code itself. Since we’re currently reviewing our coding standards here, I jumped on this product, because checking source code for naming, spacing, etc. standards is a royal pain.

The first thing I ran into was a conflict between our standards and those that ship with StyleCop. We require that the names of private fields start with an underscore. So, assuming that StyleCop was extensible, as fxCop is, I did a quick Google for “customizing stylecop rules” and found nothing. So I did a little digging (with Reflector) in the public interface of MSA and figured out how to do it. Since the information wasn’t already out there, I figured I now had something to contribute to the community other than long-winded discourse on the need to learn how to do Manycore programming well. And to let people know about it, I responded to posts in the StyleCop forum about custom rules.

In response, I caught some crap from the Microsoft team about violating the license agreement by using Reflector. A few of my posts were deleted. The community argued the whole license issue, because there are two license agreements involved – the Code Gallery one that pops up when you download and the StyleCop one you accept when you install. I don’t think anyone installing the product had a reasonable expectation that these would be different.

In any case, the whole discussion about licensing disappeared from the forum and no more was said about it.

Apparently, someone wanted to integrate StyleCop with the Team Foundation Server build process. So he did a little digging with Reflector, figured out how to do it, and blogged about it. Then he responded to a forum query on how do this. Sound familiar?

Where his story differs from mine, is that he apparently had more communication with Microsoft than I did. His blog entry now reads only:

“This MSBuild task has been removed per request by Microsoft.”

Hey, Redmond! What were you thinking?

You released this product to a community of people interested in how technology works … customization and integration with other tools are critical to our work process … how could you not know we’d try to figure it out and make it work for us?

Yeah, maybe these things will start breaking with a future version of StyleCop, but the community wants them now. We’re not idiots, we know that if the API changes our stuff will break … that’s a risk we’re willing to take because we want the functionality now. We want to use the product now because it’s useful to us, but not without these features. Fine, you didn’t have time to provide them in this release … let us do it! That’s what “community” is about.

Or are you seriously suggesting that our use of Reflector somehow endangers your intellectual property? Gentlemen, if it’s truly that important to you, then I submit that .Net was the wrong environment to write it in.

There are a lot of smart people out here who are willing to take your work, build on it the things you didn’t have time to, and make your products more useful and better. By making things like custom rules and MSBuild tasks available now, StyleCop becomes more beneficial to more people … people who, without those things, would shrug and say, “Nice idea, but it doesn’t do what I need”, and delete the whole damn thing.

Monday, June 23, 2008

How many of you have sat in a weekly status meeting and heard the phrase “I’m 80% done with that task”? Or 90%, 75%, 50%, etc.?

It’s at this point in a status meeting that my mind starts to wander to other things. It might wander to the real work I need to do once I get out of the meeting or to the kayaking I plan to do over the weekend, but one way or another I’m not paying attention to these status reports any more.

Why? Because, in my mind, the only valid status for a task is binary: complete or not complete.

If you have to report a percentage of an individual task in a weekly status, then your task wasn’t broken down enough to begin with. Now a larger task made up of several steps might be 80% complete after a week of work, but that should be extrapolated from the completeness of its components.

What triggered this post was a status meeting where it was reported that a task was 75% complete – the line-item in Project was scheduled for forty (40) days. It wasn't a rolled-up, summary task; it didn't represent another, more detailed technical schedule -- it was just forty days of some work. And it wasn't alone -- there were plenty of twenty and thirty day tasks to keep it company.

God might have been able to accurately estimate Noah's deluge at 40 days of effort (and even He had to work nights to meet the deadline), but I think this is beyond the abilities of most software developers without breaking it down a bit.

In my opinion, forty days is too large a task and should be broken down further -- just the act of thinking about the necessary steps will help drive a better understanding of the level of effort involved. And that better understanding of the effort results in better, more accurate estimates.

Something I'm hearing more often in projects is a request for ROM (Rough Order of Magnitude) estimates -- as though changing the terminology from SWAG makes it somehow more acceptable or reliable. Personally, I like the "magnitude" part -- like the Richter Scale, ROMs are effectively logarithmic: each step up in the initial estimate brings an order-of-magnitude step up in the likely error. The margin for error in a 40-day ROM is going to be orders of magnitude more destructive than that of a 1-day ROM ... approaching the catastrophic.

Also, like the ROM chip, once that estimate's written, it's read-only. That ROM's what you're stuck with and you'll be held to it.

I'm reminded of a place I used to work that did consulting for county government. Everything the owner estimated was a ROM and every ROM was "two weeks".

"Sure, we can do that for you. Take about two weeks."

Time and materials later, it's amazing how many billable hours there could be in a two-week estimate.

The opposite extreme from the ROM is a schedule and status reporting that's so granular as to impact the ability to do work. I worked with a project lead once who wanted tasks for his MS Project schedule measured in hours and status updates twice a day. More time was spent providing updates (and justifying or explaining a deviation from the estimate) than was spent actually coding. Luckily this didn't last long.

When I'm in charge of an effort, we break things down until the individual tasks are about a day's effort -- many less, some more, but the target's about a day. Then the developers sign up for the tasks they're confident they can complete in each development iteration (typically a week or two) -- within the technical team we estimate the effort of the individual tasks, but from the Project Lead's perspective, they all have a duration of the iteration length. Status at the project level is binary: done / not done. I adapted this a bit from a process I found in Managing Projects with Visual Studio Team System:

If a task is 99% complete, it's reported as not done. If all the tasks for an iteration aren't done, the iteration's not done. Just like with a build -- if 99% of the projects build, the build's still broken.
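The binary rollup can be expressed in a line of code. This is a sketch of my own, not anything from the book:

```csharp
// Sketch of binary status rollup: a task is done or not done;
// an iteration is done only when every one of its tasks is done.
using System;
using System.Linq;

class StatusRollup
{
    static void Main()
    {
        bool[] taskDone = { true, true, false }; // one task at "99%" -- i.e., not done

        // No percentages to argue about: all done, or not done.
        bool iterationDone = taskDone.All(done => done);

        Console.WriteLine(iterationDone ? "Iteration: done" : "Iteration: not done");
    }
}
```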

This is what project leads should be concerned with. Not "did every task take the estimated amount of time", but "is the project on track for completion". Manageable iterations with manageable workloads accomplish this.

It's a little GPS receiver that simply records your position every 15-seconds, and some software that matches the timestamp on your digital photos to the GPS trip record -- it then geotags your photos for services like Panoramio.

Sunday, June 8, 2008

I'm always a bit reluctant to enter credit card numbers when shopping at a new site, because I don't know how good their security is. Once my credit card's past the SSL layer at their site, what becomes of it?

Is it being stuffed into some insecure database?

Is it being transmitted in clear text as part of a SOAP message throughout their SOA architecture?

Is the order actually being processed by hand, so my credit card's being printed out and stored in a file somewhere? Or shipped around via email?

PayPal has released a browser plug-in that eliminates these concerns for me.

One of its features is generation of one-time or multi-use "Secure Cards" -- MasterCard numbers tied to your PayPal account. This allows your PayPal account to be used securely at any site, even those that don't explicitly support PayPal.

Once installed, the plug-in adds an icon-menu to the browser's toolbar:

"Generate Secure Card" prompts for PayPal login:

With the image-verification to ensure you're sending the information to PayPal, combined with the PayPal Security Key, this seems like a very secure login.

You're then prompted to choose either a single- or multi-use card number to generate:

And, presto, you have a secure card number to use for your purchase(s):

Monday, June 2, 2008

Microsoft has released the June 2008 CTP of the Parallel Extensions to .Net Framework 3.5 library.
Learning this library, and the concepts of well-designed, well-behaved multi-threaded applications, is becoming more and more critical to an application's success: the days of being able to count on faster and faster processors being available by the time our applications release are behind us. Instead, we may be faced with having our applications run on PCs with a greater number of slower cores. This Manycore Shift is upon us already and, as developers, we need to be prepared for it.

Our user-communities expect this of us. Years ago, a Microsoft Word user expected printing a large document to tie up the application for however long it took the printer to spew out the pages. Then print spooling was introduced and the users' expectations changed -- they came to expect the application to be returned to their control faster, because the spooler could send data to the printer in the background.
Today, the user expects an instantaneous return of the application when they select Print -- no matter how large the document. They expect to be able to immediately edit, save and edit the document again while the application sends the document, in the state it was in when they selected Print, to the spooler in the background.
Furthermore, they expect all those other things that used to be synchronous operations (spell check, grammar check, etc.) to now happen behind the scenes without slowing down their use of the application's main functionality. They even expect the application to correct typing errors in the background, while they move on to make new ones.
We, as developers, expect this of our tools, as well -- with Visual Studio Intellisense and syntax checking while we code. One of the first comments made about the recently released Microsoft Source Analyzer for C# was essentially: "Why doesn't it check in the background while I type and display blue-squigglies under what needs to be fixed?"
The expectations are reasonable and achievable, given the computing power available on the average desktop and the tools available, but how many of us writing line-of-business applications truly take the time to understand multi-threaded programming and build these features into our applications?
Threading used to be hard to do for the typical business-software developer. The steps to create and manage new threads were very different from anything they'd been exposed to before, and the libraries were arcane and poorly documented. But all of that's changing: the threading functionality in .Net 2.0, and now libraries like the Parallel Extensions Library, PLINQ and CAB, insulates the developer from the complexities of threading and makes it incredibly simple to start tasks on new threads ... and therein lies a new danger:
A co-worker and I rather regularly send each other emails, the gist of which is: threading is the work of the Devil.
Not because it's difficult to create a thread or start a process on it, but because the implications of concurrency in a large business application with multiple dependencies still have to be dealt with, no matter how easy it is to send work to the background. For the typical business software developer, who's spent an entire career in a single-threaded environment, it's hard. It requires a different conceptual mindset.
As Microsoft continues to work on libraries and extensions to make threading easier to implement, and I'm sure they will, I hope they also put as much effort into learning resources to help developers understand the implications of using these new tools; and I hope that we, as developers, put as much effort into learning the best-practices and fundamental concepts of parallel computing, as we do into learning the mechanics of the tools.
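Just how simple the library makes the mechanics can be seen in a minimal sketch. This assumes the CTP's Parallel.For, which lived in the System.Threading namespace before moving to System.Threading.Tasks when it shipped in .Net 4:

```csharp
// Sketch: a data-parallel loop with the Parallel Extensions library.
// Requires a reference to the CTP's System.Threading.dll
// (in .Net 4, Parallel lives in System.Threading.Tasks instead).
using System;
using System.Threading;

class ParallelDemo
{
    static void Main()
    {
        double[] results = new double[1000];

        // The library partitions the range across the available cores.
        // The body must be safe to run concurrently -- here each
        // iteration touches only its own array slot, so it is.
        Parallel.For(0, results.Length, i =>
        {
            results[i] = Math.Sqrt(i);
        });

        Console.WriteLine("Computed {0} results", results.Length);
    }
}
```

The mechanics are trivial; the danger described above is everything the mechanics don't cover -- shared state, ordering and dependencies.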

Thursday, May 29, 2008

Regardless of whether Microsoft expected it or not, it appears that the user-community is actively interested in writing custom rules for Microsoft Source Analysis for C# (StyleCop).
Sergey Shishkin has been blogging about StyleCop and has worked out the details of test-driven development and unit testing for StyleCop custom rules on his blog.

Tuesday, May 27, 2008

So now it's time to create a real, useful custom rule for StyleCop (Source Analyzer).

The rule I'm going to write is to accommodate the naming standards for private fields where I work: they must begin with an underscore, followed by a lower-case letter. This conflicts with two of the default StyleCop rules, so I'll be turning those off and using my custom rule instead.

The first step is to create a new project with references to the Microsoft.SourceAnalysis and Microsoft.SourceAnalysis.CSharp DLLs from the StyleCop install directory:

Next I set up the project to allow debugging of the rules by following the instructions in Part IIa of this series. Then I created a class and XML file as described in Part I:

I found one caveat about the correspondence between the class name and the XML file name. By default, StyleCop will try to load an XML file with the same FullName as the Type it found descending from SourceAnalyzer. You can also specify an XML file resource in the SourceAnalyzer attribute: [SourceAnalyzer(typeof(CsParser), "SomeXmlFile.xml")]

If your custom rule fails to execute and doesn't show up in the Source Analysis Settings, check this correspondence.

Each source file is represented as nested CsElements in the CsDocument. So in a typical .cs file, each using directive would be an element under the CsDocument, then the namespace would be an element; within the namespace, each defined class would be a child element, etc.:

Document
    using
    using
    using
    namespace
        class
            field
            field
            method
        class
            field
            method

... and so on.

So in order to check the naming of all the fields, we'll need a method to recursively process the elements and all their children -- and the base SourceAnalyzer class has a Cancel property, so we'll want to stop processing if this becomes True:
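The original code image didn't survive, but a sketch of the recursion just described might look like the following. CheckElement is a hypothetical per-element method; the types and the Cancel property are from Microsoft.SourceAnalysis as described in this series:

```csharp
// Sketch: recursively walk the element tree, honoring Cancel.
// This lives inside the class derived from SourceAnalyzer.
private bool ProcessElement(CsElement element)
{
    // Stop processing if analysis has been cancelled.
    if (this.Cancel)
    {
        return false;
    }

    // Hypothetical method that runs our checks on one element.
    this.CheckElement(element);

    // Recurse into the children (usings, namespaces, classes, fields ...).
    foreach (CsElement child in element.ChildElements)
    {
        if (!this.ProcessElement(child))
        {
            return false;
        }
    }

    return true;
}
```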

For this rule, we're interested in fields only, so we want to check the type of each element (CsElement.ElementType) and we want to ensure that we don't run checks on generated code (CsElement.Generated). Since we're going to be validating the naming of these fields, we're primarily interested in the Name property.

But the Name property of CsElement isn't what we're after. That value contains the type of element, so it would have a value of "field _myFieldName". What we really want is CsElement.Declaration.Name property, which will have just the name of the field ("_myFieldName"). Finally the check we're going to make is to ensure that the name starts with an underscore and that the second character of the name is lower-case:
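A sketch of the check just described, replacing the lost screenshot. The rule name is illustrative and assumes a matching Rule is defined in the XML file:

```csharp
// Sketch: validate private field names -- underscore followed by
// a lower-case letter. Types are from Microsoft.SourceAnalysis.CSharp.
private void CheckElement(CsElement element)
{
    // Only fields, and never generated code.
    if (element.ElementType != ElementType.Field || element.Generated)
    {
        return;
    }

    // CsElement.Name would be "field _myFieldName";
    // Declaration.Name is just "_myFieldName".
    string name = element.Declaration.Name;

    if (name.Length < 2 ||
        name[0] != '_' ||
        !char.IsLower(name[1]))
    {
        // The extra argument fills the {0} placeholder in the
        // rule's Context string.
        this.AddViolation(element, "FieldNamesMustBeginWithUnderscore", name);
    }
}
```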

Before moving on to Part III and doing some real work with the custom rule, I want to describe how to set up Visual Studio so you can debug your rules.

First, you'll have to start an instance of Visual Studio to use for working on the custom rule project. This instance must be started without the DLLs for your custom rule being present in the Source Analyzer install directory. This is because you'll want Visual Studio to copy your newly built DLLs there as part of the build process -- if a copy of the DLLs is present when you start Visual Studio, then they'll be loaded by Source Analyzer and will be in use when you do a build.

Next, set your project's Debug properties to start an external program (a new instance of Visual Studio):

The path to Visual Studio depends on where you installed it, but should be something like: C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe

And set the Command Line Arguments to the solution file you plan to use for testing your rules -- in my case: "C:\vs\SourceAnalyzerSample\ClassLibrary3\ClassLibrary3.sln"

Finally set the project's Post Build Event to copy the DLLs and PDBs for your custom rule to the Source Analyzer install directory:
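The Post Build Event is just a couple of copy commands along these lines, using the default install path; adjust to suit your setup:

```
copy "$(TargetDir)$(TargetName).dll" "C:\Program Files\Microsoft Source Analysis Tool for C#\"
copy "$(TargetDir)$(TargetName).pdb" "C:\Program Files\Microsoft Source Analysis Tool for C#\"
```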

In Part I we set up a simple custom rule for the Microsoft Source Analyzer (StyleCop) that displays a rule violation for every source file in the project. Now in Part II, I'll explain the elements of the XML file and source code that went into that.

In Settings, StyleCop creates a hierarchy of rules based on the SourceAnalyzer Name attribute, the RuleGroups and the Rules in the XML file. When the XML is loaded into settings by the Source Analyzer, the SourceAnalyzer element's Name attribute becomes a node under C#; each RuleGroup becomes a node under that; and each Rule is contained in its RuleGroup.

The CheckID attribute of a Rule must consist of two capital letters and four digits.

The Context element of a Rule is what displays in Visual Studio analysis results and can contain {0} string formatting placeholders (which we'll see in Part III).

The Description element of a Rule is what displays to the user in Source Analysis Settings when they're choosing which Rules to enforce.
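Pulling those elements together, a rules file along the lines described might look like this. The analyzer, group and rule names are illustrative; confirm the exact schema by examining the shipped rule files with Reflector:

```xml
<SourceAnalyzer Name="My Custom Rules">
  <Rules>
    <RuleGroup Name="Naming Rules">
      <Rule Name="FieldNamesMustBeginWithUnderscore" CheckID="MY1001">
        <Context>Field '{0}' must begin with an underscore followed by a lower-case letter.</Context>
        <Description>Validates that private field names follow our naming standard.</Description>
      </Rule>
    </RuleGroup>
  </Rules>
</SourceAnalyzer>
```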

You can use Reflector (one of the top five utilities a .Net developer must have, in my opinion) to examine the Rules included with StyleCop and the associated XML files:

Our simple example from Part I violates every file it analyzes without actually checking anything -- this was done to demonstrate the minimum code necessary to create a rule and generate a violation. I used Reflector on the included Rules to determine what the minimal code should look like.

First, we need the references and using directives for Microsoft.SourceAnalysis and Microsoft.SourceAnalysis.CSharp.

Then we create a class inherited from SourceAnalyzer and add a SourceAnalyzer attribute on the class, giving it a parameter of typeof(CsParser). StyleCop uses Reflection to find classes inherited from SourceAnalyzer to add to its rules. The CsParser type tells StyleCop that this class analyzes C# source files. Although I didn't find a VB parser or rules in my download, maybe someone at Microsoft is working on one?

We next need to override the AnalyzeDocument method from the SourceAnalyzer base-class. This is the entry point StyleCop will use to run our rule and pass it each source file in the project. Each source file is passed in as a parameter of type CodeDocument.

As part of the Microsoft.SourceAnalysis assembly, they've included a Param class that has a number of methods on it to validate parameters passed to methods. We use this to require that the CodeDocument parameter passed isn't null. As an aside, I've seen similar functionality in a class called Guard included in a lot of patterns & practices code -- it seems like there's a lot of duplicate code going into validating method parameters ... sounds like framework to me.

Anyway, after ensuring that we were passed a CodeDocument, we want to cast it to a CsDocument. CodeDocument is a base-class and, presumably, there'll be VbDocument and FsDocument coming at some point in the future.

The next step is to check some things on the document. In this case, we're checking to ensure that the document has a RootElement and that it isn't generated code. The source code is treated as a hierarchy of elements containing other elements, which we'll see more of in Part III. We want to avoid analyzing generated code, because it doesn't make sense to create a bunch of style warnings for code that, theoretically, a human will never have to read. Of course, this presumes that the code generator followed the rules for marking its generated code as such.

Finally, we're going to create the violation. The AddViolation method has a number of overloads:

In general, the method takes:

The CodeElement that violated the rule;

An Enum or String identifying the Rule that's been violated;

An array of Objects -- this array is used to fill {0} placeholders in a formatted string;

You also have the option of passing in a line number identifying the line of code that caused the violation (the Int32 parameters above).

And that's it for the code.
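Assembled from the pieces described above (the original code screenshots didn't survive), the minimal do-nothing rule might look like the following sketch. The namespace, class name and rule name are illustrative, and the rule name is assumed to match a Rule in the embedded XML file:

```csharp
// Sketch: the minimal custom rule from Part I -- flags every file.
using Microsoft.SourceAnalysis;
using Microsoft.SourceAnalysis.CSharp;

namespace MyCustomRules
{
    // CsParser tells StyleCop this analyzer handles C# source files.
    [SourceAnalyzer(typeof(CsParser))]
    public class MyRulesAnalyzer : SourceAnalyzer
    {
        // Entry point: StyleCop calls this once per source file.
        public override void AnalyzeDocument(CodeDocument document)
        {
            Param.RequireNotNull(document, "document");

            // CodeDocument is the base class; cast to the C# document.
            CsDocument csDocument = (CsDocument)document;

            // Skip empty documents and generated code.
            if (csDocument.RootElement != null && !csDocument.RootElement.Generated)
            {
                // Report the violation -- "MyFirstRule" must be
                // defined in the analyzer's XML rules file.
                this.AddViolation(csDocument.RootElement, "MyFirstRule");
            }
        }
    }
}
```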

You can use Reflector against the included Rules to learn more about the different types of CodeElements and how to check specific things, which is what we'll be doing in Part III when we create a rule to ensure that private fields begin with an underscore, followed by a lower-case character and have no other underscores in the name.

Since manual code reviews for style and formatting are a huge time waster, I jumped all over this for use in my shop. Unfortunately, my coworkers can't just accept doing things the Microsoft way; they have to be a bit different, so I started investigating what was involved in creating custom rules.

Part I of this tutorial will create a basic custom rule that loads in StyleCop and adds a rule-violation message. It will add the rule violation for every source file, every time. In Part II, I’ll explain what each piece does; then, in Part III, we’ll change it to actually do something useful.

The default install will go to "C:\Program Files\Microsoft Source Analysis Tool for C#" and add context-menu options to the Solution Explorer in Visual Studio 2008:

Source Analyzer uses Reflection to examine every DLL in its install directory to find custom rules, so all we'll need to do is create a class library with the right classes and attributes and Source Analyzer will automatically load our new rules.

Create a new ClassLibrary project, then add references to the Microsoft.SourceAnalysis and Microsoft.SourceAnalysis.CSharp assemblies:

Then add a new class and an XML file with the same name. Set the Properties of the XML file to be an Embedded Resource and not copy to the output directory:

Build the project and copy its DLL to the Source Analysis install directory. Now, when you run Source Analysis on a project, you should get a warning about the custom rule for every file in the project:

In Part II, I'll explain the elements of the XML file and the code; then, in Part III, I'll demonstrate a useful example.

One of the differences between the standards where I work and Microsoft's is that we require private fields to start with an underscore ("_"), while the Microsoft rules provided by StyleCop require that they start with a lower-case letter and contain no underscores. So I'll be turning off the two Microsoft rules (SA1306 and SA1309) and creating a custom rule to enforce our standards.
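A sketch of how that check might look, walking the document's elements and testing each field name against a regular expression. Again, this is the pre-SDK API from memory (the element-walking members in particular may be named differently), and the rule name is hypothetical and must match the embedded XML:

```csharp
using System.Text.RegularExpressions;
using Microsoft.SourceAnalysis;
using Microsoft.SourceAnalysis.CSharp;

namespace MyCompany.SourceAnalysisRules
{
    [SourceAnalyzer(typeof(CsParser))]
    public class FieldNameAnalyzer : SourceAnalyzer
    {
        // Underscore, then a lower-case letter, then letters/digits only --
        // no further underscores allowed.
        private static readonly Regex validFieldName =
            new Regex(@"^_[a-z][a-zA-Z0-9]*$");

        public override void AnalyzeDocument(CodeDocument document)
        {
            CsDocument csDocument = (CsDocument)document;
            if (csDocument.RootElement != null)
            {
                CheckElement(csDocument.RootElement);
            }
        }

        // Recurse through child elements looking for field declarations.
        private void CheckElement(CsElement element)
        {
            if (element.ElementType == ElementType.Field &&
                !validFieldName.IsMatch(element.Declaration.Name))
            {
                AddViolation(element, "FieldNamesMustBeginWithUnderscore",
                             element.Declaration.Name);
            }

            foreach (CsElement child in element.ChildElements)
            {
                CheckElement(child);
            }
        }
    }
}
```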

Tuesday, January 1, 2008

I make the following sessions available to User Groups and Code Camps in Florida. Each session is designed to run approximately one hour. To schedule me to present at your meeting, contact me at pjackson@lovethedot.net.

These sessions are also available for presentation to corporate groups. Although there is no charge for the session, corporate groups may be asked to cover travel expenses based on the site’s distance from Orlando, FL.

Parallel Programming in .Net 4 – Part I – An Overview

An introduction to the Parallel Extensions coming in .Net 4.0. This session skims the surface of what will be available to developers, covering the basics of Parallel.For, Parallel.ForEach and Parallel.Invoke, followed by a discussion of Tasks and some of the utility classes from the upcoming release.

Parallel Programming in .Net 4 – Part II – A Deeper Dive

Building on Part I, this session takes a much deeper dive into aspects of the .Net 4.0 Parallel Extensions, including the architecture and managing parallel tasks through the TaskManager. It includes in-depth coverage of Parallel.For and all of the options for controlling and working with the parallel loop.

Improving Developer Productivity with Guidance Automation

Note: While still interesting and useful, this session is a bit dated, given the extensibility of Visual Studio 2008 and the new extensibility options coming in Visual Studio 2010.

This session introduces the Guidance Automation Extensions and Guidance Automation Toolkit (GAX/GAT) from Microsoft patterns & practices. Explore the concept of software factories and see how generating code can improve developer productivity. Specifically, this session covers the building of a guidance package that will implement the UI Threading Pattern (InvokeRequired, BeginInvoke, etc.) for all public methods on a UserControl.

Introduction to Dependency Injection (ObjectBuilder or Unity)

Explore the concepts of Dependency Injection and Inversion of Control with either the ObjectBuilder or Unity dependency injection libraries. Learn what DI is, what the benefits are and why you should be using it.

The Importance of Usability and User Experience

This non-technical, user-focused discussion covers the concepts of user-interface design and usability testing in order to improve the overall user experience.

Cloud Computing: Amazon EC2

Classes can run from one to five days, depending on customer customizations and needs.

Each class can be customized to focus on the material your staff needs the most and eliminate concepts you’re unlikely to ever use. This allows you to optimize the learning experience for your staff and maximize your training dollars.

The cost is $500 per day for up to 20 students, location and equipment not included. This comes to $2500 for a full, five-day, instructor-led class, customized with the material of your choice. If your company does not have the facilities and equipment to hold classes, third-parties can typically be found.

One-on-one training is also available for $250 per day. These classes typically go much faster than group training because a single student receives all of the instructor’s attention.