Archive for June, 2008

I arrived Tuesday at NDC2008 full of anticipation and excitement; there were a lot of great talks scheduled as far as I could see, and I had trouble choosing which ones to attend. Almost immediately I found some old colleagues and classmates whom I hadn’t talked to in several years. That was a real added bonus, and I really appreciated the little "reunions".

Day 1

Scott Hanselman started the show with a keynote, showing us a little LINQ and the new Dynamic Data bits. Hanselman was witty and a great presenter. A couple of things might have gone too fast if you hadn’t seen much .NET 3.5 before, but I guess most got at least a glimpse of what it can do.

After the keynote I was considering several sessions, but decided to attend Mary Poppendieck’s first session, titled Thrashing. She went through the reasons for thrashing and what can be done to remedy it. As a reader of The Mythical Man-Month, Slack, Peopleware, and others, I found she conveyed a lot of the same information found there, and I really share those views. A new aspect I hadn’t thought of before was queuing theory, which we apply consciously to hardware and related problems, but seldom to team and people dynamics. I will make a follow-up post on the matter.

I’ve lately dabbled in some reflection, so next I attended Roy Osherove’s talk Deep Reflection, hoping it would be as deep as promised (a level 400 session). It certainly was, and I’m glad I’ve recently been looking at both Reflection.Emit and CodeDom programming. Regular, extensive use of the vanilla reflection utilities was a prerequisite, and it seemed like a lot of eyes glazed over when that material was presented. He ended the session with a song, and I think he presented this heavy topic in a great way.

Roy was supposed to be doing a talk about agility at Typemock (the firm), so I gave his next session a chance. But the agenda had changed, and we were instead introduced to Designing for Testability. I had this part mostly under control, so I was a bit disappointed that the original talk was swapped out. It was an introduction to IoC, DI, and IoC containers, as well as our options when designing for testability with mocks or subclassing. This session ended in a song as well, and the lyrics were funny as always.

There was unfortunately another change in the agenda: Roy originally had a threading talk I would have liked to see, but it was changed to a session on testing your data access layer. With this change, I attended Mary Poppendieck’s talk on The Role of Leadership in Lean Software Development. Contrary to popular belief in most Agile circles, she thinks there is a place for leaders, not only self-organizing teams. I must admit this is something I’ve personally experienced as well; when everyone is responsible, no one takes responsibility. I won’t go into more detail here, but I think it was a great talk, and she definitely hit home many points with me.

Day 2

I started out attending an Agile panel discussion hosted by Scott Hanselman, featuring Mary Poppendieck, Roy Osherove, Ken Schwaber, Chet Hendrickson, and Ron Jeffries. An example topic was what the first steps to becoming agile are. It wasn’t much of a discussion, really, as all the panelists believe in the Agile values.

For the next two sessions I followed the Agile crowd in general, and Jeffries & Hendrickson in particular, in their first two talks about Natural Laws of Agile Software Development. They presented the same material I saw at Smidig (Agile) 2007 on the economics of releasing early. I think it shows the potential payoff of releasing early, but it misses some aspects of putting the software into maintenance mode too early. I think this has to be explored some more. After these teasers, they went deeper into how early and frequent releases can be achieved by baking quality into the process through TDD and acceptance tests.

While I was humming along with Ron & Chet, it seemed like Roy got quite a following. It was almost impossible to get a seat at his Advanced Unit Testing session. It really seems my fellow Norwegians are good and ready for some ALT.NET techniques and practices, especially unit testing. I eventually got a seat, but I must admit I was personally a little bit disappointed, as I’ve already been down most of those roads before. Hopefully it was another teaser for all those who are thinking of getting into the whole unit testing business.

Next up, I attended Mads Torgersen’s Microsoft LINQ Under the Covers: An In-Depth Look at LINQ. And under the covers it was indeed. He gave us a great peek into how a LINQ expression is disassembled, and showed us the output through Reflector. I must admit it was hard to follow everything, but I was at least familiar with all the constructs. All in all a mind-blowing experience, and Mads gets credit for his enthusiasm during the session.

Finally, I attended Mary Poppendieck’s session on The Discipline of Going Fast. We got new insights into the Toyota Way, a little bit of history, and specifically the stop-the-line practice. I will definitely continue this flirtation with the Lean methodologies.

Conclusion

I’m very pleased, and I was exhausted after two days packed with great content. My only complaint is that a couple of Roy’s talks should have been moved to accommodate the massive interest his topics attracted.

How often do you find yourself writing code like this to do some things in batch:

BatchCalculator calc = new BatchCalculator();

calc.Suspend();
calc.CalculateSomething(something);
calc.CalculateSomethingElse(something);
calc.Resume();

Well, I do, and I’m not really happy about sprinkling those Suspends and Resumes around everywhere I need to start and stop something. I see at least two common pitfalls with this solution, and a minor hiccup:

Somewhere down the pipe I’m bound to forget the explicit call to Resume, and I’ll have a bug on my hands.

Somewhere in between the Suspend and Resume calls an exception is thrown, Resume is never reached, and the object is left in an unwanted state.

And the minor hiccup: it could be better looking!

The short using-introduction

With the introduction of .NET, its managed environment, and its non-deterministic garbage collector, several figureheads in the industry raised an eyebrow or two. Some raised more than their eyebrows, and according to legend and several .NET Rocks shows, Chris Sells (now a blue badge) was one of them. They allegedly made Microsoft include an IDisposable interface with a single method, Dispose(), to fill their cleanup needs. And if that wasn’t enough, they included the using-statement, which is a try-finally in disguise where the finally block automatically calls IDisposable.Dispose()!
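To illustrate that disguise, here is a minimal sketch (using StringWriter purely as an arbitrary IDisposable) of what the compiler roughly turns a using-statement into:

```csharp
using System;
using System.IO;

// A using-statement over any IDisposable...
using (StringWriter writer = new StringWriter())
{
    writer.WriteLine("Hello");
}

// ...is shorthand for roughly this try-finally expansion,
// which is what the compiler generates for us:
StringWriter writer2 = new StringWriter();
try
{
    writer2.WriteLine("Hello");
}
finally
{
    if (writer2 != null)
        ((IDisposable)writer2).Dispose();
}
```

Either way, Dispose() runs when the block is left, whether normally or via an exception.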

A couple of regulars in my world in that department are the IDbConnection interface and, later (from 2.0 onwards), the TransactionScope class, but using has also been recommended practice for any implementer of the IDisposable interface.
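In everyday data-access code that typically looks something like the following sketch (the method name and connection string are placeholders of my own; the using-pattern around TransactionScope and SqlConnection is the standard one):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;
using System.Transactions;

public static class DataAccess
{
    // Hypothetical helper; the connection string is a placeholder.
    public static void UpdateSomething(string connectionString)
    {
        // Both TransactionScope and SqlConnection (an IDbConnection
        // implementation) are designed to be wrapped in using-blocks:
        using (TransactionScope scope = new TransactionScope())
        using (IDbConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();
            // ... execute commands against the connection ...
            scope.Complete(); // commit; without this, Dispose rolls back
        }
    }
}
```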

Yeah yeah, but what can I do with it?

With the aforementioned background in place, we can exploit it to create a better and more fluent API for our batch-oriented processes. Let us simply dive into the code; I introduce, without further ado, the changed BatchCalculator:

public class BatchCalculator
{
    public IDisposable Suspend() { return null; /* real implementation follows */ }
    public void Resume() { }
    public void CalculateSomethingElse(object something) { }
    public void CalculateSomething(object something) { }
}

Suspend now returns an IDisposable, and we can change our calling code to this:

BatchCalculator calc = new BatchCalculator();

using (calc.Suspend())
{
    calc.CalculateSomething(something);
    calc.CalculateSomethingElse(something);
}

Yes! That’s more like it. I definitely like the looks of that.

But how… do I ensure a call to Resume?

This is where the "magic" happens. Let us make a class which implements IDisposable that gets returned from our Suspend method:

public class Suspender : IDisposable
{
    private readonly BatchCalculator m_calculator;

    public Suspender(BatchCalculator calculator)
    {
        m_calculator = calculator;
    }

    public void Dispose()
    {
        m_calculator.Resume();
    }
}

And our revised Suspend method:

public IDisposable Suspend()
{
    return new Suspender(this);
}

Now go look at the implemented Dispose method in our Suspender class. It just calls Resume on our BatchCalculator! So when the using-block is exited, the Dispose method is called, and, hooray, mission accomplished.

Finishing touch

To increase the applicability of the Suspender-class I introduce the role interface IResumable:

public interface IResumable
{
    void Resume();
}

And implement it in BatchCalculator:

public class BatchCalculator : IResumable
{
    public IDisposable Suspend()
    {
        return new Suspender(this);
    }

    public void Resume() { }
    public void CalculateSomethingElse(object something) { }
    public void CalculateSomething(object something) { }
}

Now the Suspender class can just wrap our new interface:

public class Suspender : IDisposable
{
    private readonly IResumable m_resumable;

    public Suspender(IResumable resumable)
    {
        m_resumable = resumable;
    }

    public void Dispose()
    {
        m_resumable.Resume();
    }
}

Final note

If we revisit our weak spots, have we solved them all? A definitive yes: using guarantees that the Dispose() method is called, which in turn calls our wrapped Resume method. I must also add that I really like the syntactic sugar using represents.
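To convince ourselves about pitfall number two, here is a self-contained sketch; the Resumed flag and the Demo class are hypothetical additions of mine for the demonstration, while the rest mirrors the classes above:

```csharp
using System;

public interface IResumable
{
    void Resume();
}

public class Suspender : IDisposable
{
    private readonly IResumable m_resumable;
    public Suspender(IResumable resumable) { m_resumable = resumable; }
    public void Dispose() { m_resumable.Resume(); }
}

public class BatchCalculator : IResumable
{
    public bool Resumed; // hypothetical flag, for the demonstration only
    public IDisposable Suspend() { return new Suspender(this); }
    public void Resume() { Resumed = true; }
}

public static class Demo
{
    public static void Main()
    {
        BatchCalculator calc = new BatchCalculator();
        try
        {
            using (calc.Suspend())
            {
                // Simulate pitfall number two: something blows up mid-batch.
                throw new InvalidOperationException("boom");
            }
        }
        catch (InvalidOperationException)
        {
            // Swallowed for the demonstration.
        }
        Console.WriteLine(calc.Resumed); // True: Resume ran despite the exception
    }
}
```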

This pattern is obviously at a tangent to what using and IDisposable were intended for. The MSDN library has this to say about IDisposable:

The primary use of this interface is to release unmanaged resources.

But why not leverage what we have available? After all, code is written once. Reading it is another matter completely.