I always thought I was up and running the moment Visual Studio was installed on a machine. Unfortunately, life isn’t that easy any more, so I thought it might be interesting to share what I consider essential in my toolbox.

IDEs

Although some people say they can work with other IDEs in .NET, I consider Visual Studio an absolute necessity. Not so much for the Studio itself but as a shell for ReSharper, the very best tool I have yet come across. It’s so damn convenient and increases productivity by such a margin that I simply can’t use Visual Studio without it any more. Although ReSharper isn’t free, I strongly suggest you try it out for 30 days. I feel it’s a good investment.

And Notepad++ is one of many great simple editors that make editing and reviewing files easy.

Source Control

If you plan on working with open source tools, be prepared to bring their tools to the party. Nothing is worse than needing some source and not being able to access the SCM. I usually install the following:

TortoiseSVN - Subversion right from the Explorer right-click menu. A very good and very mature SVN client.

SlikSVN - Unfortunately TortoiseSVN doesn’t install the SVN binaries, so if you want to be able to run SVN from the command line you’d better get the conveniently packaged SlikSVN Subversion binaries.

TortoiseHg - Same idea as TortoiseSVN but for Mercurial; unlike TortoiseSVN, it does install the hg binaries, so you can use hg from the command line.

msysGit - A GUI for Git together with a custom Git command line that emulates a *nix shell for Git operations. Not as convenient as hg, but Fluent NHibernate and most tools by Jon Skeet use Git.

Build tools

Getting the source is usually not enough; sometimes you need to be able to build it too. While most projects can be built by simply starting up Visual Studio and building, others require you to run a build script with a tool like NAnt.

NAnt - NAnt is a free .NET build tool. In theory it is kind of like make without make's wrinkles. In practice it's a lot like Ant.

Download the latest NAnt release and unpack the zip to some convenient folder. Then create a file called nant.bat in your C:\windows folder with the following content:

@echo off
"C:\Program Files\NAnt\bin\NAnt.exe" %*

(Obviously you should change the path to your NAnt executable).
Now whenever you encounter a project with a *.build file you can simply open a command line window and type nant to start building the source (that’s how you build the Castle Project and NHibernate).

Rake - I don’t use Rake myself, but I sure know Fluent NHibernate does. Rake is the build tool used for Ruby projects, but it’s gaining popularity elsewhere. On Windows installing it is rather simple: just get the Ruby One-Click Installer from their downloads page and then install Rake with gems (gems is Ruby’s mechanism for installing extensions and libraries).
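As a sketch of what that looks like once Ruby is installed (assuming the installer put ruby and gem on your PATH):

```shell
# Install Rake through RubyGems (gem ships with the One-Click Installer)
gem install rake

# In a folder containing a rakefile, list the available tasks and build
rake --tasks
rake
```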

If none of the above apply, almost every project ships a howtobuild.txt that tells you how to run the build.

Database

This hugely depends on what tools you use, but it never hurts to have the following:

SQL Server 2008 Management Studio Express - It’s free and allows you to run queries and create databases. Nothing fancy like reporting or real server administration, but what developer really wants to do a DBA’s job anyway?

NHProf - If you are using NHibernate for your data access needs (and I believe you should), you will find this tool well worth its money. It’s by no means cheap, but it will watch all your database queries, analyze them and point out possible performance bottlenecks for you.

Others

.NET Reflector - Sometimes you don’t have access to the source code, or you don’t want to get the source just to look at one file.

.NET Reflector allows you to look at all the types inside an assembly, and if it’s not obfuscated it allows you to decompile it into your language of choice and look at the code (you could decompile VB programs into C# for example).

Please note that there are myriad other tool lists out there, and if you are a web developer you’ll need some more tools for debugging HTML/JS. The above are the ones I consider essential for .NET desktop/backend development when using open source libraries such as Castle or NHibernate.

After posting my tools list today I got asked why I didn’t list any testing frameworks. Obviously I love testing, so why no testing tools like Gallio, NUnit or TestDriven.NET? The answer is rather simple: ReSharper runs my tests for me.

By default ReSharper can run NUnit tests; if you install MbUnit it can run those too, and if you just copy the ReSharper support library from the xUnit contrib project into your ReSharper/Plugins directory it can also run xUnit.

Also, almost all open source frameworks out there include their test runner in their code tree, so you don’t need to worry about what exotic test frameworks are out there, you’ll be provided with the appropriate runners.

On the testing framework side I recently (about 4 months ago) switched over to xUnit, as its syntax felt much better than that of NUnit or MbUnit. I am also currently looking into maybe using a BDD testing framework like MSpec.

I just spent almost 10 hours running in circles collecting different releases of NHibernate, Fluent NHibernate and Castle, in a rather futile attempt to make them work together.

NHibernate is currently at 2.1 Beta1, revision 2.1.0.2001, referencing the latest Castle.DynamicProxy2 at revision 2.1.0.0.

Fluent NHibernate’s current trunk compiles against NHibernate 2.1.0.1001, which in turn is compiled against Castle.DynamicProxy2 2.0.3, and therefore breaks.

So, while the NHibernate project and the Castle project managed to match their versions pretty well, the Fluent NHibernate trunk did not.

So, the obvious choice would be to just take all the assemblies that are packed with Fluent NHibernate and work from those. But by doing so I would then lack Castle MicroKernel and Windsor, two libraries that are not packed with Fluent NHibernate.

At that point I gave up and simply recompiled the Fluent NHibernate trunk with the latest NHibernate trunk.

I feel like I’m constantly falling behind on stuff I want to post about but don’t get around to. One such topic is the version control system Mercurial, which I have been using for almost 4 months now and loving ever since. Since Google just decided to enable Mercurial on Google Code, I figured it’s a great time to write about it.

Introduction

Mercurial is a distributed version control system, or DVCS for short. It is in the ranks of Git and Bazaar, leading a new paradigm of working with version control.

Philosophy

This new paradigm of distributed versioning allows for several things that centralized development does not. Specifically, it offers:

Commits and logs even when working offline

A drastic increase in speed for most operations

The ability for anyone to have their own copy of a project and to continue work without explicit "commit access"

No requirement to publish changes

No need to set up a server to version things (it’s self-contained)

So, it’s like your own private version control system. Nobody can mess with it, you own it. Which is great, I mean: How often has a fellow coworker submitted something to your tree that made your code break? Or how often did you update before a commit just to see that the update breaks something (and your changes weren’t committed, so you’re at the mercy of merges)?

What also rocks is its simplicity. Mercurial needs no server, so even on my little pet projects I can leverage the power of an SCM system without the headache of setting up something in a central place.
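That serverless setup really is just a couple of commands; a quick sketch (assuming hg is already on your PATH):

```shell
# Turn any folder into a full repository -- no server, no daemon
hg init

# Track the files and commit, entirely on the local disk
hg add
hg commit -m "initial import"

# The complete history lives right next to your code
hg log
```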

What most people fail to understand, though, is that the distributed model still means you need to share your private changes with the world at some point. And while doing so on a shared filesystem is very easy (when sharing with a coworker, for example), doing so over the wire is non-trivial, as it would require you to set up a server somewhere.

Bitbucket’s aim is to compensate for this while maintaining the flexibility and benefits of DVCS. It does this firstly by providing a centralized location for a repository, a sharing point where one or more developers can grow their code base. Secondly, it provides a set of tools that ease development and sharing of code with the rest of the world.

Bitbucket is free and gives you 150 MB of disk space, an issue tracker and a wiki for each of your projects. The only limitation is that just one project/repository per account can be private (not open source); there is no limit on how many public repositories you can create.

I suggest reading the guides on Bitbucket (or the book) on how hg differs from SVN and what the usual workflow looks like. I also suggest installing TortoiseHg, which installs the hg command line as well as a nice shell-integrated GUI like the one we are all used to from TortoiseSVN.
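The usual Bitbucket round trip then looks roughly like this (the repository URL is a made-up example):

```shell
# Get a local copy of a Bitbucket repository
hg clone https://bitbucket.org/yourname/yourproject
cd yourproject

# Commit locally as often as you like...
hg commit -m "local work, nobody else sees this yet"

# ...and only publish once you are ready to share
hg push
```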

What I love most about programmatic configuration is that it’s close to the test. Where we were carrying dozens of XML files around for testing before, now, with DSL-based configuration everywhere, the configuration usually sits pretty close to the test fixture instead of residing in some arbitrary XML file that only insiders can associate with the test.

The standard sample for using SQLite with Fluent NHibernate usually looks like this:
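For reference, a minimal sketch of that standard setup (the mapping class BlogPostMap and the file name are assumptions of mine, not from the original sample):

```csharp
// Sketch: the usual file-based SQLite configuration with Fluent NHibernate.
var sessionFactory = Fluently.Configure()
    .Database(SQLiteConfiguration.Standard
        .UsingFile("firstProject.db")
        .ShowSql())
    .Mappings(m => m.FluentMappings.AddFromAssemblyOf<BlogPostMap>())
    .BuildSessionFactory();
```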

Now, Fluent NHibernate has in-memory databases built into the API. Just remove the UsingFile directive and replace it with:

.Database(SQLiteConfiguration.Standard.InMemory().ShowSql())

Charming, isn’t it? Now the only problem is that you won’t be able to do anything with that DB, since there is no schema present.
The in-memory database exists per session, so once you close the ISession the DB is gone. Since the schema export in most samples operates in its own ISession, subsequent queries will still hit a blank database, and you’ll get an error stating there is no such table.

So my SessionFactory implementation had to change, since I needed to keep the configuration around for doing the schema export:
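A sketch of the shape it ended up with (class and mapping names are mine; the point is that the schema gets exported on the same connection the in-memory database lives on):

```csharp
// Sketch: keep the NHibernate Configuration around so the schema can be
// exported into the very connection the in-memory database lives on.
public class InMemorySessionFactory
{
    private readonly ISessionFactory factory;
    private Configuration configuration;

    public InMemorySessionFactory()
    {
        factory = Fluently.Configure()
            .Database(SQLiteConfiguration.Standard.InMemory().ShowSql())
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<BlogPostMap>())
            .ExposeConfiguration(cfg => configuration = cfg)
            .BuildSessionFactory();
    }

    public ISession OpenSession()
    {
        var session = factory.OpenSession();
        // Export the schema on this session's connection, not a fresh one,
        // otherwise the in-memory DB the queries see stays empty.
        new SchemaExport(configuration)
            .Execute(false, true, false, session.Connection, null);
        return session;
    }
}
```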

Hope this helps. It’s quite an annoying problem and, in my opinion, a far from perfect solution. Someone on the FNH mailing list suggested looking at the OneToManyIntegrationTester class, but I couldn’t really extract any terribly useful information from there.

This class would typically be some sort of decorator that covers the IService interface with a thin layer of concerns (like logging/error handling/caching). Now you can just register more than one IService and it will walk the graph for you:

Also now there is the possibility to split the graph at some point like this:

You don’t need three registrations for this, only one, since every sub-resolve of IService happens on its own. That isn’t really practical yet, since we still lack the ability to influence the resolve process: both IServices will be populated by the same registered type, it just won’t be the parent IService again.

I also spent almost the whole day refactoring the resolver code, since I felt it was quickly becoming unreadable.

Next:

I still feel like the resolver needs some more refactoring, and I also want to improve the error messages. After that, I’d like to return my focus back to much needed features like lookup strategies and after that auto-configuration.

I’m amazed at how bad the ASP.NET MVC code really is. Why? Because something as trivial as redirecting the output of a View to another TextWriter shouldn’t take more than 5 lines of code, and it certainly shouldn’t be impossible!

But let’s start at the beginning; here’s the scenario: a ViewResult should not be written to HttpContext.Response but to some arbitrary TextWriter. For me the most obvious choice was to alter the ViewResult to write somewhere else, so redirecting should be as easy as:

Now, nothing left to do but override the ExecuteResult method and call View.Render() with another writer:
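In spirit, the override looked something like this (the subclass name is mine, and this is a sketch of the idea against the public MVC API, not the original code):

```csharp
// The relevant piece of the IView contract:
//     void Render(ViewContext viewContext, TextWriter writer);

// Sketch: a ViewResult that renders the found view into a supplied writer
// instead of letting the framework write to HttpContext.Response.
public class RedirectedViewResult : ViewResult
{
    private readonly TextWriter writer;

    public RedirectedViewResult(TextWriter writer)
    {
        this.writer = writer;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        // Locate the view the same way the base class would...
        var found = ViewEngineCollection.FindView(context, ViewName, MasterName);

        // ...then build a ViewContext and hand Render our own writer.
        var viewContext = new ViewContext(context, found.View, ViewData, TempData);
        found.View.Render(viewContext, writer);
    }
}
```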

So, what do you see here? A method that takes a ViewContext (containing ViewData, TempData, HttpContext etc.) and a TextWriter. Any normal person would now jump to the conclusion that if I build a ViewContext and pass in my TextWriter, I’m set and all is well.

So, I spent almost 30 minutes trying to find a way to construct the ViewContext (without copy/pasting the code from within the framework; by the way, forget it: they married ViewContext creation to View rendering, so there is no way to separate the two) just to find out that my output was still written to HttpContext.Response.

That’s when I looked at the WebFormView class, which implements the IView interface, but does so badly.
Let me explain:
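I won’t paste the decompiled source verbatim; paraphrased from memory, the relevant method looked roughly like this:

```csharp
// Paraphrased sketch of WebFormView.Render: it receives a TextWriter
// as mandated by IView, then renders through the page pipeline, which
// writes straight to HttpContext.Response.
public virtual void Render(ViewContext viewContext, TextWriter writer)
{
    object instance = BuildManager.CreateInstanceFromVirtualPath(ViewPath, typeof(object));

    var viewPage = instance as ViewPage;
    if (viewPage != null)
    {
        // Renders to viewContext.HttpContext.Response;
        // the 'writer' parameter is never touched.
        RenderViewPage(viewContext, viewPage);
        return;
    }

    // ... ViewUserControl branch omitted, same problem there ...
}
```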

Do you see the problem? Writer is NEVER used. They violate their own interface in their own code. What a mess.

Something similar has been done with the EmailTemplateService in MvcContrib, but it’s very email-specific and works by using a MemoryStream as a filter on HttpContext.Response (I’m not happy with that either, but apparently it’s the only way).

I’ve used my fair share of ORM tools lately and I keep coming back to NHibernate more and more. But sometimes it feels like too much: going for a dedicated data access layer often feels like overkill and isn’t all that exciting to do, so for smaller projects I sometimes took the easy way and used LINQ to SQL. Usually LINQ to SQL would then come around to bite me at some point and I’d hate myself for using it. So when I recently tried Castle ActiveRecord (watch this wonderful presentation by Ayende and Hammett at InfoQ: Painless Persistence with Castle ActiveRecord) I was blown away: a simpler way to build stuff with NHibernate. Not a complete abstraction of NHibernate, but a layer on top that makes life easier. (You can still access the Session manually and do your crazy NHibernate stuff if you want to.)

Since ActiveRecord was created to avoid some of the crazy configuration stuff that was in NHibernate at that time, I thought it might be interesting to compare ActiveRecord to the current state of NHibernate configuration, namely: Fluent NHibernate.

This is by no means a matchup; what you use hugely depends on your needs, and since both are essentially the same it boils down to taste. I haven’t used either for any real application (only old NHibernate with XML), so this is my personal rundown based on the samples and spikes I did lately.

Let’s assume the typical Blog sample model:
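The model isn’t reproduced here; a minimal sketch of what I have in mind (class and property names are my assumptions):

```csharp
using System.Collections.Generic;

// Sketch of the sample model: a Blog has many BlogPosts,
// and each BlogPost has many Comments.
public class Blog
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual IList<BlogPost> Posts { get; set; }
}

public class BlogPost
{
    public virtual int Id { get; set; }
    public virtual string Title { get; set; }
    public virtual Blog Blog { get; set; }                  // child side of Blog
    public virtual IList<Comment> Comments { get; set; }    // parent side of Comment
}

public class Comment
{
    public virtual int Id { get; set; }
    public virtual string Text { get; set; }
    public virtual BlogPost Post { get; set; }
}
```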

I’ll only focus on the BlogPost class since it is on both ends of a parent/child relationship.

NHibernate has the ability to keep the model separated from the mappings, a feature I like when doing enterprise applications, though for something simpler it’s not really relevant. Where Fluent NHibernate really blows AR away is the auto-mapping feature Ayende is so happy with.

Initialization

ActiveRecord hasn’t departed from XML yet, but there are three ways to load a configuration. The simplest would probably be the in-place configuration source, though it’s more or less an in-memory mirror of the XML config:
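A sketch of the in-place variant (driver, dialect and connection string are placeholder values of mine):

```csharp
// Sketch: configure ActiveRecord entirely in code via
// InPlaceConfigurationSource -- the same keys you'd put in XML.
var source = new InPlaceConfigurationSource();
var properties = new Dictionary<string, string>
{
    { "connection.driver_class", "NHibernate.Driver.SqlClientDriver" },
    { "dialect", "NHibernate.Dialect.MsSql2005Dialect" },
    { "connection.provider", "NHibernate.Connection.DriverConnectionProvider" },
    { "connection.connection_string",
      "Data Source=.;Initial Catalog=Blog;Integrated Security=SSPI" }
};
source.Add(typeof(ActiveRecordBase), properties);

// Register the entity types and you're done -- no XML file needed.
ActiveRecordStarter.Initialize(source, typeof(Blog), typeof(BlogPost));
```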

One is more OO, the other is simpler. I have to say I prefer the added complexity, since it makes it easier to test that your code interacts with the session object. (Although I wouldn’t overuse that; going against an in-memory DB and verifying the results should be the better way to create durable tests.)

Since both AR and NH use the NH Criteria API I won’t compare complex queries.

So, how to conclude on the usability? Not having to think about how to pass the session around is quite nice, but when using NHibernate you could easily just write your session to some global and use it from there.

Validation

Now, what would an ORM be without a good validation framework?

ActiveRecord comes prepackaged with the Castle.Components.Validator assembly, which allows a rather cool attribute-based validation API:
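A sketch of what that looks like on an entity (the class and the particular rules are made-up examples):

```csharp
// Sketch: validation attributes from Castle.Components.Validator on an
// ActiveRecord entity. ActiveRecordValidationBase wires up IsValid().
[ActiveRecord]
public class User : ActiveRecordValidationBase<User>
{
    [PrimaryKey]
    public int Id { get; set; }

    [Property, ValidateNonEmpty]
    public string Name { get; set; }

    [Property, ValidateEmail]
    public string Email { get; set; }
}
```

Calling user.IsValid() then runs all attribute rules before you hit the database.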

While NHibernate doesn’t have a built-in validation framework, there is the NHibernate Validator project (documentation) which also allows a very clean attribute-based validation:

[NotNull]
public string blacklistedZipCode;

But as before, it’s a bit heavier than AR and allows you to have external validator classes if you need them. Also cool, though something I don’t really care about, is that NHV can also be XML-based, so you could let your users/admins tweak the validation rules after deployment.

I guess that’s enough for now, use what you find suits your project better. :)

Important: Most of the information in this post about outdated MonoRail docs is itself now outdated, since I submitted a patch to the Castle documentation project to fix the issues raised in this post.

Yesterday I decided to build a website with Castle MonoRail to learn another take on MVC (besides the Microsoft one). Since the last official release of MonoRail is quite outdated, I just compiled the trunk version to start from the current code.

Some time ago Roelof Blom made an effort to make the build more user-friendly, so you don’t need any tools installed besides .NET to compile all Castle assemblies. Just run ClickToBuild.cmd and it will invoke NAnt (also in SVN), run a complete compile of all projects and place the output in

build\net-3.5\release

(Did I mention this is awesome? Building the trunk before was a nightmare!)

I then followed the MonoRail getting-started samples from the website and was quite frustrated by how outdated that documentation really is.

Please read through the original documentation, as it usually still applies; I’ll just list the things I had to adapt for the trunk version to work:

If I want a field to contain the date of insertion, I’d usually have to hook into the create path somewhere, be it my repository or elsewhere. With Castle ActiveRecord it’s as simple as overriding the Create() method in your entity:
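A sketch of that override (the property name is mine):

```csharp
// Sketch: stamp the creation date right before the entity is saved.
[ActiveRecord]
public class BlogPost : ActiveRecordBase<BlogPost>
{
    [PrimaryKey]
    public int Id { get; set; }

    [Property]
    public DateTime CreatedAt { get; set; }

    public override void Create()
    {
        CreatedAt = DateTime.Now;
        base.Create();
    }
}
```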