Wednesday, August 20, 2008

I really like the MVC Framework. I'm currently working on my third commercial project using it and I just love the flexibility it gives me, especially the opportunity to refactor the framework itself.

Here's an example: one of the things I dislike is the way the default Controller class is overly coupled to the HttpContext. HttpContext in itself is a problematic hangover from ASP.NET and to have it bound into the Controller to such an extent is a design mistake IMHO.
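The kind of action I'm complaining about looks something like this (the controller and form-field names here are invented for illustration, not the actual project code):

```csharp
using System.Web.Mvc;

public class OrderController : Controller
{
    public ActionResult Create()
    {
        // Request is the HttpRequest exposed by the base Controller
        // class, so this action is welded to the ASP.NET runtime.
        string customerName = Request.Form["customerName"];

        // ... create and save the order ...

        return RedirectToAction("Index");
    }
}
```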

You see the reference to Request.Form there? The HttpRequest (and HttpResponse and HttpContext) are all exposed as properties of the base Controller class. In order to test that code I have to mock the HttpContext, which is a real PITA.

Here's an alternative that I've started using. An IHttpContextService interface:
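Something along these lines; the member list here is a sketch of the idea rather than the exact interface from my project:

```csharp
using System.Web;

// A narrow service that hands out the abstracted HTTP context, so
// controllers can depend on this interface rather than on the base
// class properties.
public interface IHttpContextService
{
    HttpContextBase Context { get; }
    HttpRequestBase Request { get; }
    HttpResponseBase Response { get; }
}

// The 'real' implementation just wraps the current ASP.NET context.
public class HttpContextService : IHttpContextService
{
    public HttpContextBase Context
    {
        get { return new HttpContextWrapper(HttpContext.Current); }
    }

    public HttpRequestBase Request { get { return Context.Request; } }
    public HttpResponseBase Response { get { return Context.Response; } }
}
```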

And I can set up my IoC container to give me any implementation of IHttpContextService I want. Now because the IHttpContextService is provided by the IoC container, any dependencies that its implementation may require can also be provided by the IoC container which opens up all kinds of interesting opportunities.
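With Castle Windsor, for example, the registration is a one-liner (HttpContextService is a hypothetical implementation class name; your container of choice will differ):

```csharp
using Castle.Windsor;

// Register the service with the container; any class that takes an
// IHttpContextService constructor parameter now gets it injected.
var container = new WindsorContainer();
container.AddComponent("httpContextService",
    typeof(IHttpContextService), typeof(HttpContextService));
```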

I recently had a long and considered comment from Woonboo for my post "MSTest is sapping my will to live". Woonboo is a happy user of MSTest. I started writing a reply in the comments, but it was getting so involved I thought it would be nice to promote it to a new post.

Here's Woonboo's comment:

I know I'm late to the game, but mileage is everything.

I used to use NUnit, but after using MSTest for the past 2.5 years, I wouldn't go back, even with the bugs in VS2008 (which are minor and only hit if you are doing something specific with AppDomain).

Your config file thing is simple - you don't have to use attribute...in the 'settings' (right-click properties) add files to deploy.

It creates a separate test project because too many Morts out there will just put it in the same project otherwise and all your tests will be deployed with the code. I've worked with a number of these folks.

Having the built in ability to test private and internal methods/properties/fields is the biggest reason I love it. No extra code required. No need to loosen the scope (make things public) on what you're testing.

When a test fails, looking at the test results that gives me a hyper-link to every part of the stack trace where the test failed (or threw an exception) has saved me hundreds if not thousands of hours by now, not having to navigate to the file, hit Ctrl+G and enter the line number; especially if I have to walk up the stack to see if there was a path taken that shouldn't have been, causing the problem.

Having the ability to have tests run automatically when I do a build or check-in (easier than I was ever able to with NUnit - but that's an SCC thing).

Code coverage gets tied to the tests you write.

Pulling up archives into the GUI of who ran what tests when on what machines. Great from the 'team lead' perspective.

You said in another post that technologies change but methodologies don't - don't let NUnit suck you into the 'one tool' mentality (although I could say the same about myself with MSTest). Love your posts generally - but I think you need to give MSTest another chance and ask someone who's been successful with it for help. It was hard for me to switch from NUnit too - but it was like when I switched from VB to C# - I never looked back once I got past the frustration point.

Woonboo's points get to the nub of the problem for me: I don't think MSTest was designed for TDD but for some other high-ceremony style of integration testing. I would submit that he is probably not working in that style. Let me take his points one by one to explain what I mean.

"you don't have to use attribute...in the 'settings' (right-click properties) add files to deploy". Yes, but I shouldn't have to do even this. Why can't it just test what's in the target directory? Building a separate run directory for every test and forcing the user to deliberately choose files to deploy is just another symptom of the high-ceremony MSTest approach.

"It creates a separate test project because too many Morts out there will just put it in the same project otherwise and all your tests will be deployed with the code." I agree that it's best to have a separate test project. But any testing framework should be unobtrusive. I shouldn't have to have a separate project type. The only reason it's needed by MSTest is because MSTest requires so much configuration to be useful. And you shouldn't rely on project types to force you to correctly organise your source, otherwise we'd have some crazy Microsoft scheme with a 'domain' project, a 'data-access' project and a 'service' project. Don't suggest that too near to someone from the TFS team!

Now I have to have a rant here about "The Morts won't understand it" argument. I hear this over and over and over again. It usually goes along the lines of "I understand it fine, but the people I work with, or the 'maintenance' people won't". It's the lamest and most often deployed excuse for bad decisions. If the Morts can't understand something, whoever they are, get rid of them. What's the point in employing people who don't know how to do their job? But usually it's not the real reason, the real reason is lack of leadership, and lack of trust. Most advances in development practices actually make things simpler but it requires that you, if you are the team lead, sit down and work with the people you are supposed to be leading.

"Having the built in ability to test private and internal methods/properties/fields is the biggest reason I love it." One of the core reasons for doing TDD is the way it drives your design. If you have to access private members in your tests I would submit that your design is not correctly decoupled. I *want* my testing framework to force me to behave like any other client when I'm writing tests.

Now there might be a situation where you have to write tests for some monolithic legacy code base, but MSTest won't help you there, you need to go and talk to Roy Osherove :)

"When a test fails, looking at the test results that gives me a hyper-link to every part of the stack trace where the test failed." You get exactly the same thing with TestDriven.NET. I have "run-tests" mapped to F8 (I've never found a use for bookmarks). "Run-tests" is context specific so if the cursor is currently inside a test method, only that test gets run. So I just hit F8 to run the test(s), the results appear on the console and I can click on any part of the stack trace to go directly to that line of code. I actually dislike fancy test runners. I don't want to have to click here and there to get to stack traces, I'd much rather everything was just thrown onto the console.

Of course running tests on your CI server is essential. I've always found it simple to do that both with Cruise Control and TFS. I don't think MSTest really adds much here and it doesn't make much sense unless you're using TFS.

"Pulling up archives into the GUI of who ran what tests when on what machines. Great from the 'team lead' perspective." OK, I don't get this. Surely what you should be looking at is code coverage. I run tests every few minutes when I'm coding, you probably wouldn't want to look at or store all those test runs. I think this goes back to my first point. A tool that's designed for high ceremony infrequent testing like MSTest would see storing a test run archive as a useful thing, anyone who's done TDD would see it as a huge waste of resources.

I gave MSTest a chance. I've also recently tried to use MbUnit; neither worked as well for me as TestDriven.NET + NUnit. I must say that MbUnit came very close, and I wouldn't have too much of a problem using it if I was in a team that had already settled on it. MSTest was just a nightmare from start to finish. As I said to our project manager, if it was an open source tool, it would never have got any traction and would now be sitting unloved on SourceForge. Maybe it suits some people with a very non-agile, non-TDD methodology, but anyone doing TDD should stay well clear.

Woonboo, thanks very much for provoking me to write this. I love a good debate and would really like to hear your reply. Thanks again!

Sunday, August 17, 2008

I use the excellent Google Analytics for tracking stats on my blog. One of the things it tells me is the browser that you, dear reader, are using:

I guess I shouldn't be too surprised that a technical readership should marginally prefer Firefox. But it's still very nice to see.

What else can I tell you about yourself? Well you're probably American.

I had an interesting discussion in the office last week. I'm British, but I suggested that since the majority of my readership is from the USA I should adopt US spelling. Even mentioning such a thing was like finding a raised toilet seat in a nunnery, so rather than being lynched I'll stick to 'through' rather than 'thru'. It's a marginally interesting factlet that English-speaking kids take almost twice as long to learn to read and write as kids in most other European countries because of the awfulness of our spelling.

One last thing, and this really does surprise me: the vast majority of traffic to my blog is driven by Google searches, the result of random people typing in random search terms and then finding their way here. The curious thing is that the numbers are so consistent. Every week looks the same, with between 200 and 250 visits on weekdays and 50 to 70 at the weekends. I would have expected more variation, but I guess it's a good demonstration of the predictability of randomness.

Wednesday, August 13, 2008

An auto mocking container (AMC) sounds pretty scary, but it's a really neat tool if you're writing a lot of unit tests and find yourself forever constructing mock objects. In the same way that an IoC container knits together dependencies at runtime, an AMC can create all your mock objects automatically for your unit tests.
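To see the problem, here's the kind of manual setup an AMC eliminates, sketched with Rhino Mocks (the service under test and its dependencies are invented for illustration):

```csharp
using Rhino.Mocks;

// Every test fixture ends up with a chunk of this: create the mock
// repository, mock each dependency, then hand them all to the
// constructor of the class under test.
MockRepository mocks = new MockRepository();
IOrderRepository orderRepository = mocks.DynamicMock<IOrderRepository>();
IEmailSender emailSender = mocks.DynamicMock<IEmailSender>();

OrderService orderService = new OrderService(orderRepository, emailSender);
```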

After the tenth time you've had to write this kind of code you get quite bored of typing the mock object creation. It's irritating when you're thinking about using a new dependency in an existing class, and you have to add it to both the class under test and the setup for the test.
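With the Rhino Tools AMC the setup collapses to something like this (the API names here are as I remember them, so check the source; OrderService is the same invented example class):

```csharp
using Rhino.Mocks;

MockRepository mocks = new MockRepository();
AutoMockingContainer container = new AutoMockingContainer(mocks);
container.Initialize();

// The container inspects OrderService's constructor and creates a
// mock for each dependency automatically.
OrderService orderService = container.Create<OrderService>();
```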

As you can see, after we set up the AMC we simply ask it for an instance of the class we wish to test. We don't have to worry about supplying the dependencies because the AMC works out what mocks need to be created and does it for us.

When we set up our expectations we can ask the AMC for the mock objects it's created.
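Like this; `container` and `mocks` are the AMC and MockRepository set up earlier, and `GetById` is an invented repository method:

```csharp
// Pull out the mock the container created so we can set expectations
// on it in the usual Rhino Mocks record/playback way.
IOrderRepository orderRepository = container.Get<IOrderRepository>();

using (mocks.Record())
{
    Expect.Call(orderRepository.GetById(1)).Return(new Order());
}
```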

You can get the latest version of the source code for the Rhino Tools AMC by pointing TortoiseSVN here:

Friday, August 08, 2008

LINQ has revolutionised the way we do data access. Being able to fluently describe queries in C# means that you never have to write a single line of SQL again. Of course LINQ isn't the only game in town. NHibernate has a rich API for describing queries as do most mature ORM tools. But to be a player in the .NET ORM game you simply have to provide a LINQ IQueryable API. It's been really nice to see the NHibernate-to-LINQ project take off and apparently LLBLGen Pro has an excellent LINQ implementation too.

Now that we can write our queries in C# it should mean that we can have completely DRY business logic. No more duplicate rules, one set in SQL, the other in the domain classes. But there's a problem: LINQ doesn't understand IL. If you write a query that includes a property or method, LINQ-to-SQL can't turn the logic encapsulated by it into a SQL statement.

To illustrate the problem take this simple schema for an order:

Let's use the LINQ-to-SQL designer to create some classes:

Now let's create a 'Total' property for the order that calculates the total by summing the order lines' quantities times their product's price.
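It might look like this; the member names follow the schema above, but the designer-generated details are assumed:

```csharp
using System.Linq;

public partial class Order
{
    public decimal Total
    {
        get
        {
            // Walk the object graph: sum quantity * price over the lines.
            return OrderLines.Sum(line => line.Quantity * line.Product.Price);
        }
    }
}
```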

LINQ-to-SQL doesn't know anything about the Total property, so it does as much as it can. It loads the Order. When the Total property executes, OrderLines is evaluated which causes the order lines to be loaded with a single select statement. Next each Product property of each OrderLine is evaluated in turn causing each Product to be selected individually. So we've had five SQL statements executed and the entire Order object graph loaded into memory just to find out the order total. Yes of course we could add data load options to eagerly load the entire object graph with one query, but we would still end up with the entire object graph in memory. If all we wanted was the order total this is very inefficient.

Now, if we construct a query where we explicitly ask for the sum of order line quantities times product prices, like this:
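For example (db is the LINQ-to-SQL DataContext; the member names are assumed):

```csharp
// Push the whole calculation into the query so LINQ-to-SQL can
// translate it into a single SELECT returning a scalar.
decimal total = db.Orders
    .Where(order => order.OrderID == orderId)
    .Select(order => order.OrderLines
        .Sum(line => line.Quantity * line.Product.Price))
    .Single();
```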

One SQL statement has been created that returns a scalar value for the total. Much better. But now we've got duplicate business logic: one definition of the order total calculation in the Total property of Order and another in our query.

So what's the solution?

What we need is a way of creating our business logic in a single place that we can use in both our domain properties and in our queries. This brings me to two guys who have done some excellent work in trying to solve this problem: Fredrik Kalseth and Luke Marshall. I'm going to show you Luke's solution, which is detailed in this series of blog posts.

It's based on the specification pattern. If you've not come across this before, Ian Cooper has a great description here. The idea with specifications is that you factor out your domain business logic into small composable classes. You can then test small bits of business logic in isolation and then compose them to create more complex rules; because we all know that rules rely on rules :)

The neat trick is to implement the specification as a lambda expression that can be executed against in-memory object graphs or inserted into an expression tree to be compiled into SQL.

Here's our Total property as a specification, or as Luke calls it, QueryProperty.
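It looks roughly like this; the QueryProperty base class, the attribute and the GetValueFor method belong to Luke's framework, so the details here are approximated from his posts:

```csharp
using System.Linq;

// The business rule lives in one place: a small specification class.
public class TotalProperty : QueryProperty<Order, decimal>
{
    public TotalProperty()
        : base(order => order.OrderLines
            .Sum(line => line.Quantity * line.Product.Price))
    {
    }
}

public partial class Order
{
    // A static instance acts as a specification cache.
    static readonly TotalProperty totalProperty = new TotalProperty();

    // The attribute tells the custom query provider which
    // specification supplies the expression for this property.
    [QueryProperty(typeof(TotalProperty))]
    public decimal Total
    {
        get { return totalProperty.GetValueFor(this); }
    }
}
```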

We factored out the Total calculation into a specification called TotalProperty, which passes the rule into the constructor of the QueryProperty base class. We also have a static instance of the TotalProperty specification. This is simply for performance reasons and acts as a specification cache. Then in the Total property getter we ask the specification to calculate its value for the current instance.

Note that the Total property is decorated with a QueryPropertyAttribute. This is so that the custom query provider can recognise that this property also supplies a lambda expression via its specification, which is the type specified in the attribute constructor. This is the main weakness of this approach because there's an obvious error waiting to happen. The type passed in the QueryPropertyAttribute has to match the type of the specification. It's also very invasive since we have various bits of the framework (QueryProperty, QueryPropertyAttribute) surfacing in our domain code.

These days simply everyone has a generic repository and Luke is no different. His repository chains a custom query provider before the LINQ-to-SQL query provider that knows how to insert the specification expressions into the expression tree. We can use the repository like this:
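Something like this; the repository class and its API are sketched from the description rather than copied from Luke's code:

```csharp
using System.Linq;

// The query is written against the domain's Total property, exactly
// as in the naive version, but the custom query provider swaps in the
// specification's expression before LINQ-to-SQL sees it.
var repository = new Repository<Order>(dataContext);

decimal total = repository
    .Where(order => order.OrderID == orderId)
    .Select(order => order.Total)
    .Single();
```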

Note how the LINQ expression is exactly the same as one we ran above which caused five select statements to be executed and the entire Order object graph to be loaded into memory. When we run this new test we get this SQL:

Monday, August 04, 2008

I was just reading Ayende's post 'Thinking About Cloud Computing'. He talks about two very different approaches; Amazon's EC2/GoGrid where you have a VM that sits on Amazon's or GoGrid's servers and Google App Engine where Google provide an application hosting environment. Ayende's take is that the Google approach is the one with legs and I'm inclined to agree with him.

I've been aware of EC2 for a while now because I'm a keen user of S3, but I'd not come across Google App Engine before. I really like the premise; you simply upload your application to the cloud and it just scales as required. At the moment it only supports Python, but this is something that will surely spread to other environments. It can only be a matter of time before someone supplies a Mono based .NET environment. Once that happens Mono will move from being an interesting .NET sideshow to being seriously mainstream.

Up to now, if you're thinking of building the next YouTube or Twitter you've had two choices. You can concentrate on getting a compelling new application out there and hope that you'll be able to deal with scaling it up if it gets popular, possibly facing a similar fate to Twitter, which has been having serious scaling issues. Or alternatively, you spend the money up front on infrastructure, probably wasting it on something that may never fly.

With Cloud computing you don't have this dilemma. You can concentrate on building a fantastic application knowing that you don't have to worry about the infrastructure.

What will Microsoft's response to this be? Their OS monopoly must surely be threatened by a world where anyone can get limitless scalability on a pay-as-you-go basis. Why would anyone ever buy a server operating system again? Will the beast soon start selling its own cloud services?

Sunday, August 03, 2008

This is a new one for me, I've been tagged by Ben Hall to follow up on this 'meme' that's doing the rounds. Thanks Ben!

How old were you when you first started in programming?

I was probably 13 years old when I was given a book on BASIC programming. It was a spiral bound, hand written introduction. I wish I could find a reference to it on Google because I remember it as being an excellent tutorial for novice programmers. I spent months hand writing BASIC programs and executing them manually before my parents relented and bought me a TRS-80. I really really wanted an Apple II, but they were far too expensive at the time, three or four times as much as the 'trash 80'. The cassette tape based storage thingy never worked which meant that I would write a program, execute it and then start all over again the next time I turned the machine on. My best effort was probably a space landing game which incorporated Newton's laws of motion. You had to use thrusters to land a lunar module on the only flat piece of ground on the 'moon'. I'd grown up with the Apollo moon shots and was totally obsessed with space at the time.

What was your first programming language?

BASIC, see above.

What was the first real program you wrote?

Ah ha, I see I've answered all the questions already!

What languages have you used since you started programming?

As a teenager it was pretty much TRS-80 BASIC, but I also studied COBOL for Computer Science 'A' level. I had a friend who could write Z-80 machine code straight from his head which I was very impressed with, but I never managed more than making the screen flicker myself.

Professionally I've been solidly Microsoft: first Visual Basic and TSQL, then ASP VBScript and JavaScript, and now C#. I've had a play with Java, Ruby, F# and even read The Little Schemer, but I wouldn't say I'm much past 'Hello World' with any of them.

What was your first professional programming gig?

After being obsessed with programming in my early teens, I abandoned it for the electric guitar. Yes, I was seduced by Rock. I spent the next few years playing in several no-hope bands and backpacking around the world. It was only after doing a degree in Development Studies and working as an English teacher in Japan for two years that I rediscovered computers and found that I still got a huge kick out of programming. I went back to college and did an IT masters degree and then got my first professional programming gig with a small company called SD Partners. It was a great 'in the deep end' experience and I got to write several VB/SQL Server client-server systems in the two years that I was with them.

If you knew then what you know now, would you have started programming?

Oh yes, without a doubt. In fact I wouldn't have stopped for ten years.

If there is one thing you learned along the way that you would tell new developers, what would it be?

There's an art to programming that goes beyond the tools. I didn't really discover this until five years into my professional programming career when I read Agile Software Development by Bob Martin. That book changed my life and it didn't mention a single Microsoft tool that I was currently using. Languages, Frameworks and APIs come and go, but patterns and principles of good software engineering stay around for a lot longer. Concentrate on those.

What's the most fun you've ever had programming?

It's when you discover a great abstraction, one that suddenly turns hundreds of lines of hackery into a beautiful extensible structure. That doesn't happen enough for me, but when it does I go home after work on cloud nine.

Recently I've really enjoyed creating SutekiShop. As a hired-gun developer I rarely get to do things exactly as I want so it was really nice being able to build a showcase of exactly how I think an application should be written. The problem is, 3 months later, I've totally changed my mind :P

Friday, August 01, 2008

This follows part 1 where I create a custom data context and part 2 where I create a client application to talk to it.

For your custom data context to allow for updates you have to implement IUpdatable.

This interface has a number of cryptic methods and it wasn't at all clear at first how to write an implementation for it. I'm sure there must be some documentation somewhere, but I couldn't find it. I resorted to writing trace statements in the empty methods and firing inserts and updates at my web service. You can then use Sysinternals' DebugView to watch what happens.
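By trace statements I mean something like this in each empty IUpdatable method, so DebugView shows the call sequence (the signatures are from System.Data.Services.IUpdatable; the bodies are just exploratory stubs):

```csharp
using System.Diagnostics;

public object CreateResource(string containerName, string fullTypeName)
{
    Trace.WriteLine(string.Format(
        "CreateResource({0}, {1})", containerName, fullTypeName));
    return null; // placeholder while exploring the call sequence
}

public void SetValue(object targetResource, string propertyName, object propertyValue)
{
    Trace.WriteLine(string.Format(
        "SetValue({0}, {1}, {2})", targetResource, propertyName, propertyValue));
}
```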

So you can see that first all the existing Teachers are returned, then a new Teacher instance is created, its properties are set, SaveChanges is called and then ResolveResource. For my simple in-memory implementation I just added the new Teacher to my static list of teachers:
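The members that matter for an insert look roughly like this; the signatures come from System.Data.Services.IUpdatable, the bodies are the trivial in-memory implementation ('teachers' is the static list; the other IUpdatable members are omitted):

```csharp
public object CreateResource(string containerName, string fullTypeName)
{
    var teacher = new Teacher();
    teachers.Add(teacher);
    return teacher;
}

public void SetValue(object targetResource, string propertyName, object propertyValue)
{
    // Set each property by reflection as the service pushes the values in.
    targetResource.GetType()
        .GetProperty(propertyName)
        .SetValue(targetResource, propertyValue, null);
}

public void SaveChanges()
{
    // Nothing to flush: the static list is the store.
}

public object ResolveResource(object resource)
{
    return resource;
}
```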

In part 1 I showed how to create a simple in-memory custom data context for ADO.NET Data Services. Creating a managed client is also very simple. First we need to provide a similar domain model to our server. In this case the classes are identical except that now Teacher has a List&lt;Course&gt; rather than a simple array (Course[]) as its Courses property:

Next I wrote a class to extend DataServiceContext with properties for Teachers and Courses that are both DataServiceQuery&lt;T&gt;. Both DataServiceContext and DataServiceQuery&lt;T&gt; live in the System.Data.Services.Client assembly. You don't have to create this class, but it makes the subsequent use of the DataServiceContext simpler. You can also use the 'Add Service Reference' menu item, but I don't like the very verbose code that this generates.
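A sketch of that extended context; the class name and entity set names are assumptions based on the model described here:

```csharp
using System;
using System.Data.Services.Client;

public class SchoolServiceContext : DataServiceContext
{
    public SchoolServiceContext(Uri serviceRoot) : base(serviceRoot)
    {
    }

    // Each property wraps CreateQuery with the entity set name, so
    // client code can write LINQ queries against Teachers and Courses.
    public DataServiceQuery<Teacher> Teachers
    {
        get { return CreateQuery<Teacher>("Teachers"); }
    }

    public DataServiceQuery<Course> Courses
    {
        get { return CreateQuery<Course>("Courses"); }
    }
}
```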

Here's a simple console program that outputs teacher John Smith and his courses and then the complete list of courses. The nice thing is that DataServiceQuery&lt;T&gt; implements IQueryable&lt;T&gt; so we can write LINQ queries against our RESTful service.
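Something like this; the service URL is a placeholder and SchoolServiceContext is an assumed name for the DataServiceContext subclass described above:

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        var context = new SchoolServiceContext(
            new Uri("http://localhost/SchoolService.svc"));

        // The Where clause is translated into a $filter on the HTTP
        // request; Expand pulls each teacher's courses back with them.
        var teachers = context.Teachers
            .Expand("Courses")
            .Where(teacher => teacher.Name == "John Smith");

        foreach (var teacher in teachers)
        {
            Console.WriteLine(teacher.Name);
            foreach (var course in teacher.Courses)
            {
                Console.WriteLine("\t{0}", course.Title);
            }
        }

        // And the complete list of courses.
        foreach (var course in context.Courses)
        {
            Console.WriteLine(course.Title);
        }
    }
}
```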

Yesterday I wrote a quick overview of ADO.NET Data Services. We saw how it exposes a RESTful API on top of any IQueryable&lt;T&gt; data source. The IQueryable&lt;T&gt; interface is of course at the core of any LINQ enabled data service. It's very easy to write your own custom Data Context if you already have a data source that supports IQueryable&lt;T&gt;. It's worth remembering that anything that provides IEnumerable&lt;T&gt; can be converted to IQueryable&lt;T&gt; by the AsQueryable() extension method, which means we can simply export an in-memory object graph in a RESTful fashion with ADO.NET Data Services. That's what I'm going to show how to do today.

The first thing we need to do is provide a Domain Model to export. Here is an extremely simple example, two classes: Teacher and Course. Note that each entity must have an ID property that the Data Service can recognize as its primary key.
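For instance (the ID convention is the important part; the other property names are illustrative):

```csharp
// Each class has an ID property that ADO.NET Data Services
// recognizes by convention as the entity's key.
public class Teacher
{
    public int ID { get; set; }
    public string Name { get; set; }
    public Course[] Courses { get; set; }
}

public class Course
{
    public int ID { get; set; }
    public string Title { get; set; }
}
```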

For a read-only service (I'll show insert, update and delete in part 2) you simply need a data context that exports the entities of the domain model as IQueryable<T> properties:
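A minimal sketch, with an assumed class name and illustrative seed data:

```csharp
using System.Collections.Generic;
using System.Linq;

// Each IQueryable<T> property becomes an entity set exposed by the
// data service; AsQueryable turns the in-memory lists into queryables.
public class SchoolDataContext
{
    static readonly List<Teacher> teachers = new List<Teacher>
    {
        new Teacher { ID = 1, Name = "John Smith" }
    };

    static readonly List<Course> courses = new List<Course>();

    public IQueryable<Teacher> Teachers
    {
        get { return teachers.AsQueryable(); }
    }

    public IQueryable<Course> Courses
    {
        get { return courses.AsQueryable(); }
    }
}
```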

I really enjoyed my 15 minutes of fame at last night's ALT.NET meeting. It was great to meet everyone, and I'm only sorry that I couldn't hang around for the after show drinks. Such is the cost of provincialism.

All the talks were good, but I especially enjoyed Seb Lambla's OpenRasta presentation, a very interesting RESTful approach to ASP.NET. David De Florinier's talk on NServiceBus was also interesting since it's a tool I haven't had a chance to look at.