I recently started on a new project which is very focused on having good test coverage, which is a great goal to have. However, looking through the tests I started to notice a lot of very similar setup code, which looked a lot like the following:
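A reconstruction of that repeated setup, based on the Address type used later in this post, might look like this (the minimal Address class is included only to keep the sketch self-contained):

```csharp
// Minimal Address type, matching the properties used by the builder below.
public class Address
{
    public int Id { get; set; }
    public string AddressLine1 { get; set; }
    public string AddressLine2 { get; set; }
    public string AddressLine3 { get; set; }
    public string AddressLine4 { get; set; }
    public string City { get; set; }
    public string CountryCode { get; set; }
    public string County { get; set; }
    public string Postcode { get; set; }
}

public class AddressTests
{
    public void Should_save_address()
    {
        // The same block of setup, repeated near-verbatim in test after test:
        var address = new Address
        {
            Id = 1,
            AddressLine1 = "1 Test Street",
            AddressLine2 = "Testville",
            City = "London",
            County = "London",
            CountryCode = "GB",
            Postcode = "SE1 9EU"
        };

        // ... exercise the code under test with the address ...
    }
}
```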

Now the above code isn’t all that bad in a single test, but what happens when you start to write more tests? You could copy and paste the address-creation code from test to test, which will certainly work, but you end up with a lot of very similar code which you have to maintain going forward and update whenever an address property is added or removed. That sounds like a lot of work, and it goes against the DRY principle. DRY, for those of you who don’t know, stands for “Don’t Repeat Yourself”. The principle encourages removing duplication within an application, as every line of code written has to be maintained and every line can cause bugs. More importantly, the principle is about having a single source of knowledge within a system and then using that source to generate instances of that knowledge. In this case the instances of knowledge are the address objects being created within each unit test. The question, then, is how do we create something we can use as the source of that knowledge?

Enter the builder

That single source of knowledge within the project I’m working on is provided by the object builder pattern. Using this pattern we get a single, consistent approach to creating the entities within tests, plus the ability to easily extend those entities within our domain and test data. We can also chain builders together to create complex test data very quickly.

So how do you create a builder? Below is my implementation of an address builder to get you going.

public class AddressBuilder : BuilderBase<AddressBuilder, Address>
{
    private int id;
    private string addressLine1 = GetUniqueValueFor();
    private string addressLine2 = GetUniqueValueFor();
    private string addressLine3 = GetUniqueValueFor();
    private string addressLine4 = GetUniqueValueFor();
    private string city = GetUniqueValueFor();
    private string countryCode = "GB";
    private string postCode = "SE1 9EU";
    private string county = "London";

    public AddressBuilder WithId(int id)
    {
        this.id = id;
        return this;
    }

    public AddressBuilder WithAddressLine1(string addressLine1)
    {
        this.addressLine1 = addressLine1;
        return this;
    }

    public AddressBuilder WithAddressLine2(string addressLine2)
    {
        this.addressLine2 = addressLine2;
        return this;
    }

    public AddressBuilder WithAddressLine3(string addressLine3)
    {
        this.addressLine3 = addressLine3;
        return this;
    }

    public AddressBuilder WithAddressLine4(string addressLine4)
    {
        this.addressLine4 = addressLine4;
        return this;
    }

    public AddressBuilder WithCity(string city)
    {
        this.city = city;
        return this;
    }

    public AddressBuilder WithCountry(string countryCode)
    {
        this.countryCode = countryCode;
        return this;
    }

    public AddressBuilder WithPostCode(string postCode)
    {
        this.postCode = postCode;
        return this;
    }

    public AddressBuilder WithCounty(string county)
    {
        this.county = county;
        return this;
    }

    public override Address Build()
    {
        var address = new Address
        {
            Id = this.id,
            AddressLine1 = this.addressLine1,
            AddressLine2 = this.addressLine2,
            AddressLine3 = this.addressLine3,
            AddressLine4 = this.addressLine4,
            City = this.city,
            CountryCode = this.countryCode,
            County = this.county,
            Postcode = this.postCode
        };

        return address;
    }
}

As you can see, the code behind the builder is really very simple: default values are assigned to the properties, and these can then be explicitly overridden using the “With” methods on the builder. When I’m using builders I also like to add a base class for them which provides default behaviour.
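The BuilderBase and GetUniqueValueFor referenced above could be sketched as follows; the real implementation isn’t shown in this post, so treat this as one possible shape rather than the author’s actual base class:

```csharp
using System;

// Hypothetical sketch of a builder base class. GetUniqueValueFor returns
// a fresh value on every call, so defaulted string properties never
// accidentally collide between tests.
public abstract class BuilderBase<TBuilder, TEntity>
    where TBuilder : BuilderBase<TBuilder, TEntity>
{
    protected static string GetUniqueValueFor()
    {
        // "N" format: 32 hex characters, no dashes.
        return Guid.NewGuid().ToString("N");
    }

    // Derived builders assemble and return the entity.
    public abstract TEntity Build();
}
```

With the builder in place, a test asks only for the data it cares about and leaves the rest defaulted, e.g. `new AddressBuilder().WithCity("Leeds").WithPostCode("LS1 4AP").Build()`.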

There is hopefully no need for me to discuss the benefits of unit testing, but with more and more shops adopting agile, “scrum-but” or other TDD-like processes, people are starting to realise that it’s not all plain sailing. One of the major headaches of unit testing is the cost of maintaining the tests themselves. Unless you get smart about your testing approach, a growing number of unit tests may become a drag over time, seriously increasing the cost of any change and becoming a hindrance rather than a help.

In order to be successful with any form of TDD it’s worthwhile to consider some basic rules. With the notable exceptions of posts by Szczepan Faber and Michael Feathers, Google renders rather poor results on the subject, so I decided to give it another go. What you are (hopefully) about to read is the result of the learning I have received, along with other #Fellows, in the “school of hard knocks”. We have learned from our, sometimes grave, mistakes, so be smart and do not repeat them.

Robert C. Martin in his Clean Code mentions the following FIRST rules when it comes to unit testing. According to these rules, unit tests should be:

Fast, as otherwise no-one will want to run them;

Independent, as otherwise they will affect each other’s results;

Repeatable in any environment;

Self-validating (either failing or passing);

written in a Timely manner (together with the code being tested, not in some yet-undetermined “future”).

The rules are by all means valuable, but when it comes to the practice of programming we need to get a bit more detailed. So let’s start with the basics.

Design for testing

The prerequisite for successful unit tests is the “testability” of the code, and for code to be testable you need to be able to inject it with mock implementations of its dependencies. In practice this means that database, network, file, UI etc. access has to be hidden behind some form of abstraction (an interface). This gives us the ability to use a mocking framework to construct those dependencies and cheaply test various combinations of test inputs. The other obvious benefit is that your tests become independent of difficult-to-maintain external “variables”, as there is no need to deploy files, databases etc. When it comes to UI dependencies (using UI controls etc.), avoid them in your testable code as they may introduce thread affinity which cannot always be guaranteed by the environment/test runner.
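As an illustrative sketch (the types here are hypothetical, not from the post), hiding file access behind an interface lets a test substitute a fake implementation without ever touching the disk:

```csharp
using System.IO;

// File access hidden behind an abstraction.
public interface IFileStore
{
    string ReadAllText(string path);
}

// Production implementation delegates to the real file system.
public class DiskFileStore : IFileStore
{
    public string ReadAllText(string path) { return File.ReadAllText(path); }
}

// The class under test depends only on the abstraction.
public class GreetingReader
{
    private readonly IFileStore store;

    public GreetingReader(IFileStore store) { this.store = store; }

    public string ReadGreeting(string path)
    {
        return store.ReadAllText(path).Trim();
    }
}

// In a test, a hand-rolled (or framework-generated) fake replaces the disk.
public class FakeFileStore : IFileStore
{
    public string ReadAllText(string path) { return "  hello  "; }
}
```

A test can now construct GreetingReader with FakeFileStore (or a mock generated by a mocking framework) and assert on the result without deploying any files.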

A testable architecture naturally gravitates towards the SOLID design principles, as individual classes tend to have a single responsibility and have every chance of remaining “solid” regardless of the project’s timescales and the turns in direction it may take.

Do not repeat yourself

The primary sin of unit testing is repetition, and it is easy to explain how it happens: you start with one test of your class Foo. Then you add another test scenario, so now both of them have to get an instance of Foo from somewhere, and they will obviously “new()” the Foo. Then you have more methods to test and more lines of new Foo(). And one day you need to add a parameter to Foo’s constructor, and suddenly all those pesky tests need to be reworked. You may argue that it should be easy enough to add an overload, but sometimes fabrication of objects gets more complicated: you need to set properties, call some methods on them etc. For this reason my advice is: Do not Repeat Yourself (stay DRY); use factory methods, builders etc. in order to avoid the cost of the refactoring which will inevitably get you one day. Funnily enough, John Rayner of this shire recently produced a very nice auto-mocking container which helps with instantiating types with a large number of dependencies.

I find it quite interesting that developers are often willing to accept substantially lower coding standards when it comes to unit tests, often copy-pasting entire test cases only to change one line. Before you realise it, your test code base becomes a tangled, repetitive mess, and introducing any change becomes a serious cost and challenge. If you ever come across a situation where you need to test similar cases over and over again, refactor out a common method, use the template method pattern, or use row-based testing as implemented in NUnit etc., but please, do not repeat yourself!

Keep the intent clear

How many times have you come across a test method called “Test1”? Not a very helpful name, is it? The purpose of a test should be crystal clear, as otherwise it may be very difficult over time to decipher the original intent. And if the intent ever needs to be deciphered, that adds to the cost of change. I once worked on a project where the cost of refactoring tests was substantially higher than the cost of changing the code itself; other than violation of the DRY rule, the primary reason was the difficulty of “rebuilding” the tests so that they tested the original behaviour exactly as intended. To make the intent of a test clear I try to follow these rules:

Have a consistent naming convention for test fixtures and test methods: say Ctor_WithInvalidArgs_ThrowsException(). This naming convention outlines what is being tested, under what conditions, and what is expected. Or better yet…

Consider using NBehave to specify your expectations and assumptions. A side effect of using NBehave is that your expectations and assumptions become crystal clear.

Follow the “triple A” structure for test methods (Arrange, Act, Assert) and make it obvious in the test code what you are doing.

Test one thing (method, property, behaviour) per test method

Add messages to asserts. Asserting that (a == b) won’t tell you much when the expectation is violated.
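Putting those rules together, a test might look like this (the Calculator class is a hypothetical example; the test uses NUnit, which this post mentions):

```csharp
using NUnit.Framework;

// Hypothetical class under test.
public class Calculator
{
    public int Divide(int a, int b) { return a / b; }
}

[TestFixture]
public class CalculatorTests
{
    // The name spells out what is tested, under what conditions,
    // and what is expected.
    [Test]
    public void Divide_WithWholeNumbers_ReturnsQuotient()
    {
        // Arrange
        var calculator = new Calculator();

        // Act
        var result = calculator.Divide(10, 2);

        // Assert -- the message explains the expectation when it is violated.
        Assert.AreEqual(5, result, "10 divided by 2 should be 5");
    }
}
```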

Do not share static state

In order to stick to the Independent rule, it’s good practice to give up on shared static state, as it is very easy to introduce frustrating dependencies between tests. Beyond this, static variables are either difficult or impossible to mock, or lead to seriously hacked code (a singleton with a setter etc.). For these reasons any statics, singletons etc. should be avoided for the sake of testability. If you think this restriction is harsh, think again: there is always a way to replace your singleton with another pattern (say, a factory returning a shared instance) which at the end of the day produces the same results, yet is far easier to test and more flexible in the long term.
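One way to sketch that refactoring (all names here are hypothetical): instead of a hard-wired singleton, the shared instance sits behind a factory whose product tests can swap out:

```csharp
using System;

// Dependency hidden behind an interface so it can be replaced in tests.
public interface IClock
{
    DateTime Now { get; }
}

public class SystemClock : IClock
{
    public DateTime Now { get { return DateTime.Now; } }
}

// Instead of a classic singleton, a factory returns a shared instance
// but allows tests to substitute their own implementation.
public static class ClockFactory
{
    private static IClock current = new SystemClock();

    public static IClock Current { get { return current; } }

    // Tests can swap the shared instance (and restore it afterwards).
    public static void Set(IClock clock) { current = clock; }
}

// A fake used only by tests.
public class FixedClock : IClock
{
    private readonly DateTime fixedNow;
    public FixedClock(DateTime now) { fixedNow = now; }
    public DateTime Now { get { return fixedNow; } }
}
```

Production code still reaches a single shared instance through ClockFactory.Current, but a test can pin time to a known value and assert deterministically.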

Avoid mega-tests

I sometimes come across a phenomenon which I call the mega-test: this usually involves running a process end-to-end and producing some output file (usually tens of MB) which is then compared to a “master” file with the expectation that the two must be identical. This is probably the worst possible way to test software, and the only circumstances where it could be acceptable practice is when refactoring a system which has no unit tests whatsoever. The problem with tests like this is that when they fail, investigating the cause of the failure is very expensive, as you have to trawl through tons of code to get to the bottom of the problem. A test failure should make you smarter: you should be able to say what went wrong pretty much immediately. Do not get me wrong, I am not saying that you should not have “integration tests”. But the fact remains that the majority of the testing work should be done through unit testing of individual components rather than slow and laborious “end to end” testing, which often requires difficult preparation, setup and deployment.

Avoid strict mocks

This is another constant source of grief, as strict mocks do not allow your code to deviate from predetermined expectations: a strict mock will accept the calls you set it up for, but anything else will fail. If the code under test changes a little and does a bit less or more with the mock, the test will fail and you will need to revisit it, adding extra expectations. Do not get me wrong here, strict mocks are a valuable tool, but be very careful how you use them and ask yourself what exactly you are testing.
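To see the behavioural difference without committing to any particular framework’s API, here is a hand-rolled sketch of a “loose” versus a “strict” test double (hypothetical types, not a real mocking library):

```csharp
using System;
using System.Collections.Generic;

public interface IAuditLog
{
    void Write(string message);
}

// Loose double: records calls, never complains about unexpected ones.
public class LooseAuditLog : IAuditLog
{
    public readonly List<string> Calls = new List<string>();
    public void Write(string message) { Calls.Add(message); }
}

// Strict double: any call that was not expected fails immediately.
// This is exactly why strict mocks make tests brittle when the code
// under test starts doing slightly more with its dependency.
public class StrictAuditLog : IAuditLog
{
    private readonly HashSet<string> expected = new HashSet<string>();

    public void Expect(string message) { expected.Add(message); }

    public void Write(string message)
    {
        if (!expected.Contains(message))
            throw new InvalidOperationException(
                "Unexpected call: Write(\"" + message + "\")");
    }
}
```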

Maintain symmetry

Symmetry between code and its unit tests is all about a 1:1 relationship between the two: you have one test fixture per class, one test project per assembly etc. The reason for this symmetry is that if you change a class, there is (hopefully) only one test fixture you need to execute to get a good idea of whether the code still works. This setup makes life easier: if you run your tests constantly, you do not have to run all of them all the time. It also becomes easier to package code together with its tests when you want to share it with someone.

Szczepan Faber mentions that testing class hierarchies is difficult, and I agree with him, but my approach to resolving this problem is slightly different. If you maintain symmetry, every type (including base and derived types) will have a corresponding test fixture, which means that you will not get into a situation where you test the base through tests designed for the derived class. That situation is rather uncomfortable, as you will potentially have the same functionality tested multiple times, or worse yet, when the derived type gets scrapped, your base class will be left “uncovered”. Maintaining the symmetry will help you alleviate this issue.

Maintaining the symmetry may be tricky in the case of abstract types, which cannot be instantiated, but here you can either derive a special “test only” type or use one of the already-derived types hidden behind a factory of sorts.

Get clever

Sometimes in large projects it becomes necessary to follow certain conventions in your unit tests; e.g. it sometimes makes sense to run the tests within a transaction scope, or to follow certain Setup/Teardown standards. In such cases you may want to derive all test fixtures from a common base and, to enforce the rule, write a test which inspects all test fixtures through reflection, making sure that the test classes do indeed use the common base class. I have used this approach in the past to isolate all integration tests from each other through transaction scope.
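Such a convention-enforcing test might look like the following sketch (the base class name is a hypothetical stand-in, and the test assumes NUnit):

```csharp
using System.Linq;
using System.Reflection;
using NUnit.Framework;

// Hypothetical common base which could, for example, wrap each test
// in a transaction scope via Setup/Teardown.
public abstract class TransactionalFixtureBase { }

[TestFixture]
public class ConventionTests
{
    // Inspect every [TestFixture] in this assembly and fail if any of
    // them does not derive from the common base class.
    [Test]
    public void AllFixtures_DeriveFromCommonBase()
    {
        var offenders = Assembly.GetExecutingAssembly()
            .GetTypes()
            .Where(t => t.IsDefined(typeof(TestFixtureAttribute), false))
            .Where(t => t != typeof(ConventionTests))
            .Where(t => !typeof(TransactionalFixtureBase).IsAssignableFrom(t))
            .Select(t => t.FullName)
            .ToList();

        Assert.IsEmpty(offenders,
            "These fixtures do not derive from TransactionalFixtureBase: " +
            string.Join(", ", offenders.ToArray()));
    }
}
```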

Do not get too clever!

The most frustrating test failures are related to someone being unusually clever and writing a test which, for example, sets a value held in a private field using reflection because the code cannot be tested otherwise. Then you rename this field in good faith; everything builds, so you see no reason to run the tests, you check in your code, and the integration build bombs out. Not a pretty sight. It’s an evil practice, so please do not try to get overly smart with your testing.

About auto-mocking

The Problem of Dependencies

When working with an Inversion of Control (IoC) container, best practice dictates that you let the container do as much work for you as possible and inject your dependencies. As a rule of thumb, this typically means that classes contain less code and conform better to the Single Responsibility Principle, but they usually end up with a greater number of dependencies.

In addition to this, IoC-style coding often uses constructor injection for dependencies which are critical to the functioning of the class, and property injection (aka setter injection) for other dependencies (see Martin Fowler for an explanation of these terms).

The net result of all this is that classes often end up with quite a few constructor parameters. And when you come to write unit tests for these classes, you end up having to create and manage a large number of mock objects, even though not all of them may be relevant to your test.
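The pain looks something like this hypothetical sketch: even a test that only cares about one dependency must construct all of them:

```csharp
// Hypothetical class with IoC-style constructor injection: four
// dependencies, all of which every test must supply, even when it
// only exercises behaviour involving one of them.
public interface IOrderRepository { }
public interface IPaymentGateway { }
public interface INotificationService { }
public interface IAuditTrail { }

public class OrderProcessor
{
    public OrderProcessor(
        IOrderRepository repository,
        IPaymentGateway payments,
        INotificationService notifications,
        IAuditTrail audit)
    {
        // ... store the dependencies for later use ...
    }
}

// Without an auto-mocking container, every test starts with ceremony:
// four mocks created by hand just to reach the constructor. An AMC
// generates those mocks for you and hands back the constructed object.
```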

And do we need another AMC?

With the notable exception of James’s library, you typically need to take a dependency on an IoC container and spin it up in your unit tests. This is not too much of a problem if your container has some support for an AMC (auto-mocking container). On my project we are using MEF as an IoC container, and there wasn’t (before now) an AMC that works with it.

The reason I didn’t use James Broome’s library is that Machine.Specifications (aka MSpec) is not really my cup of tea; I was imprinted with a different BDD framework, NBehave, and James’s library is quite tightly coupled to the MSpec way of working.

MEF Support

The #Fellows AMC provides support for MEF, but MEF is not a required dependency. The project files do have a reference to System.ComponentModel.Composition (the MEF assembly), but unless you actually invoke the MEF dependency locator through policy, it will never be required at run-time.

And here is a MEF-based view-model with a test using the #Fellows AMC:
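A sketch of the kind of MEF-based view-model being described might look like this (the types are hypothetical illustrations; only the policy line quoted below is from the post):

```csharp
using System.ComponentModel.Composition;

// Hypothetical MEF view-model: the critical dependency arrives via the
// [ImportingConstructor]; an optional one arrives via an [Import] property.
public interface ICustomerRepository { }
public interface ILogger { }

[Export]
public class CustomerViewModel
{
    [ImportingConstructor]
    public CustomerViewModel(ICustomerRepository repository)
    {
        Repository = repository;
    }

    public ICustomerRepository Repository { get; private set; }

    [Import]
    public ILogger Logger { get; set; }
}
```

A test would then spin up the auto-mocking container, apply the MEF policy with the container.Policy.Set(new MefDependencyLocator()) line quoted below, and ask it for a CustomerViewModel; the container supplies mocks for ICustomerRepository and ILogger automatically.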

The #Fellows AMC will inject against an [ImportingConstructor] (or will invoke a default parameterless constructor) and will then inject against properties attributed with [Import]. Constructor parameters or properties attributed with [ImportMany] are not currently supported for auto-mock injection.

Now it takes quite a sharp eye to spot the difference in the test: it is a single line of code that sets up MEF as part of the policy:

container.Policy.Set(new MefDependencyLocator());

What is Policy and how is it applied?

AutoMocking policy is really what gives the #Fellows AMC its flexibility and is what sets it apart from the other auto-mocking containers out there. Put simply, policy consists of three areas: dependency location, lifecycle control and mock generation.

Each of these areas can be configured on an individual container (as per the above), or it can be configured at a static level, where it will affect all containers subsequently created in the AppDomain.

ObjectFactory.DefaultPolicy.Set(new MefDependencyLocator());

Policy components that are currently available in #Fellows AMC are as follows:

Dependency Locators

Reflection-based (using the greediest constructor) [default]

MEF-based

Lifecycle Controllers

Shared dependencies [default]

Non-shared dependencies

Mock Generators

Rhino Mocks

The three areas of policy work together to give the #Fellows AMC its overall behaviour.

Policy is an area that I think can be developed a lot further, both in terms of adding new policy components and in allowing policy to be specified at a more granular level. For instance:

specifying a mock generator that will instantiate objects for classes in a given namespace

specifying that some classes should not have auto-mocking applied at all

specifying that certain dependencies should be shared whilst others should not

Furthermore, I suspect it would be extremely useful to allow policy to be read from the config file. This would let us centrally specify policy for all our tests without needing code to adjust static state on the ObjectFactory.

Roadmap for the #Fellows AutoMockingContainer

The first thing I want to do is add Silverlight support; or, more specifically, an assembly which is compiled for Silverlight against the relevant Silverlight assemblies (e.g. the Silverlight version of Rhino Mocks), since that is all that’s needed. After that, I’m hoping to extend the policy mechanism significantly.

Where can I get the #Fellows AutoMockingContainer?

Assemblies for .NET 4.0 are attached to this blog post. If you are interested in the source code, you can download it from the BitBucket repository.