The NUnit test discovery mechanism says that, for a test to be picked up, it must be a public method marked with the [Test] attribute, and that method must live in a public, non-static class marked with the [TestFixture] attribute.

I will start with the basic technique, and sample code:

Foo.cs — contains class under test.

FooTests.cs — contains test fixtures for Foo.

The code under test and the tests themselves live in separate C# projects, as per “tradition”.

using NUnit.Framework;

// I have one C# project, FooLibrary.csproj.
// I have one C# source file, Foo.cs, and it contains both the code and the tests.
// I made everything as internal and private as my design requires.
namespace FooLibrary
{
    // This is internal now
    internal sealed class Foo
    {
        // This is internal now too
        internal int StringToNumber(string s)
        {
            return int.Parse(s);
        }

        // This is private, as before, but I can test it!
        private int SomePrivateMethod(string s)
        {
            return 1000; // placeholder body so the sample compiles
        }

        // And here go the tests. Note the difference with the original:
        // FooTests becomes a private inner class of Foo, thus
        // hiding it from anyone and anything outside Foo.
        private static class FooTests
        {
            // And here are standard NUnit tests. Yes, these are public
            // classes containing public methods. But they are not
            // visible outside of Foo, because they are inside the
            // private inner class FooTests.
            [TestFixture]
            public class StringToNumber
            {
                // Same test as before
                [Test]
                public void WorksWithStringsWhichAreNumbers()
                {
                    var foo = new Foo();
                    var actualResult = foo.StringToNumber("10");
                    Assert.That(actualResult, Is.EqualTo(10));
                }
            }

            // And this fixture tests SomePrivateMethod.
            [TestFixture]
            public class SomePrivateMethod
            {
                [Test]
                public void ItWorks()
                {
                    var foo = new Foo();
                    var actualResult = foo.SomePrivateMethod("whatever");
                    Assert.That(actualResult, Is.EqualTo(1000));
                }
            }
        }
    }
}

When I run NUnit over the FooLibrary assembly, it happily picks up my tests and everything works.

The obvious observations here are:

I keep tests real close to the code they test. No more hunting over projects.

Moving/renaming Foo class moves the tests too. No more remembering to move the tests.

Much less overhead. No more creating separate projects, files etc.

And most importantly:

My assemblies become self-testing. Just point NUnit to any assembly and run the tests.

I can test private methods! Without using whacky reflection or other nastiness.

Tests are hidden from everyone but NUnit can still run them. And I didn’t have to do anything special like using preprocessor with #if/#else.

And we even get nice test names like FooLibrary.Foo+FooTests+StringToNumber.WorksWithStringsWhichAreNumbers.

So why and how does this work at all? Don’t NUnit test fixtures have to be public for NUnit to pick them up? How can we call private methods? Why are tests hidden from everyone but NUnit still picks them up?

The answer is actually quite simple:

Tests are hidden from everyone because they are wrapped in a private inner class FooTests.

We can test private methods because inner classes have access to private members of their outer classes.

And NUnit recognises our tests because nothing says the classes need to be publicly accessible. All NUnit needs is a method declared public in a class declared public, and that is exactly what we have: our test fixtures are declared public. It doesn’t matter that they are not publicly accessible!
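The private-access point is plain C# behaviour, independent of NUnit, and easy to verify in isolation. A minimal sketch (the Outer/Nested names are illustrative, not from the post):

```csharp
using System;

public class Outer
{
    // Private state, invisible to anything outside Outer.
    private int secret = 42;

    // A nested class can read the outer class's private members
    // directly, given a reference to an Outer instance.
    public class Nested
    {
        public int ReadSecret(Outer o) => o.secret;
    }

    public static void Main()
    {
        Console.WriteLine(new Nested().ReadSecret(new Outer())); // prints 42
    }
}
```

This is exactly the access path the test fixtures ride on: they sit inside Foo, so Foo’s private members are fair game.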

Here is how it looks in the NUnit runner (screenshot not reproduced here):

Scaling the technique using partial classes.

OK, on large projects with lots of source code it may not be practical to keep tests in exactly the same source file as the rest of the code. There is an easy solution for this: use partial classes.
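A sketch of what that split could look like, assuming a Foo similar to the one above (the Secret method, file layout, and SecretWorks helper are illustrative; in a real project the NUnit fixtures would sit inside FooTests exactly as before):

```csharp
using System;

namespace FooLibrary
{
    // In a real project this half would live in Foo.cs...
    internal sealed partial class Foo
    {
        private int Secret() => 1000;
    }

    // ...and this half in Foo.Tests.cs, in the same project. Both halves
    // compile into a single Foo type, so the nested FooTests class still
    // sees Foo's private members across the file boundary.
    internal sealed partial class Foo
    {
        private static class FooTests
        {
            public static bool SecretWorks() => new Foo().Secret() == 1000;
        }

        public static void Main()
        {
            Console.WriteLine(FooTests.SecretWorks()); // prints True
        }
    }
}
```

The tests move to their own file, but nothing about visibility changes: a partial class is still one class.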

PS. There is one small issue with using the “inner classes” technique with the ReSharper runner: the way it displays the names of the tests. It is hopefully minor in comparison to the benefits, and something the ReSharper team may fix in the future.

Update: This technique doesn’t work when the outer class is generic because NUnit would need the actual generic parameters to instantiate the test fixtures, even though the fixtures themselves are not generic. Pity.
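For illustration, this is the shape that fails; the Cache/CacheTests names are hypothetical, and the demo inspects the types via reflection rather than running NUnit:

```csharp
using System;

internal sealed class Cache<T>
{
    // The article's technique: a private nested test holder.
    private static class CacheTests
    {
        public class Storing { }
    }

    // Exposed only so the demo below can inspect the fixture's type.
    public static Type FixtureType => typeof(CacheTests.Storing);
}

public static class Demo
{
    public static void Main()
    {
        // A class nested inside Cache<T> silently carries T as its own
        // type parameter. With no concrete T to close the generic, the
        // fixture is a type NUnit cannot instantiate on its own.
        Console.WriteLine(typeof(Cache<>).GetGenericArguments().Length); // 1
        Console.WriteLine(Cache<int>.FixtureType.IsGenericType);         // True
    }
}
```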


6 thoughts on “Structuring Unit Tests: the Easy Way”

Hey, I stumbled upon this blog post by accident. I wondered, did you ever get any feedback on this idea? It looks appealing to me, but a bit unusual too, so I wonder what other people think of it.

I disagree: you are adding unnecessary code to your application, which bloats the size of the executable. For a web application this increases JIT compile time, the memory needed to run the application, etc. In addition, I am sure that most data security groups in large organizations would have many reasons not to allow test code to be deployed to production servers. There is a reason “best practice” keeps unit/integration tests in separate projects.

First, let me answer the original question: did I get any feedback on this?

Yes I did. I ran this idea past our team, and the general response was positive. We have certainly used it on a real project, and it worked well.

Are we using it all the time? No. The reason, perhaps surprisingly, is that we don’t do a lot of unit testing in its classical sense. Instead, we do a lot (and I mean a lot) of end-to-end integration testing (yes, with NUnit), because what we do is really all about integration. So this technique is not very suitable for that use case. But when we do have classic unit tests, I have certainly found it very useful.

To address the point that including tests “bloats your executable size, increases JIT times, etc.”:

My generic answer to this, without knowing the exact circumstances, is:
– do we care?
– does it matter?
– measure it

For us, it absolutely doesn’t matter. Our performance bottlenecks are elsewhere. The day I start caring about JIT times and executable sizes will be the day I can say “yep, our job is done here”.

On a conceptual level, I don’t see it as a bad idea to include test code together with executables. After all, everything I use in the real world (except software) has tests and diagnostics built in. Car? Yep. TV? Yep. Broadband router? Yep. Microwave? Yep. BIOS? Yep.

Why should our software be different? If I get something from NuGet, I want to run those tests on those assemblies as part of the build in MY environment, on MY version of Windows and the .NET framework. I deploy my software into production? I want to run those self-tests in THAT environment.
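With self-testing assemblies, that check is a single command in whichever environment matters, assuming the NUnit 3 console runner is installed (FooLibrary.dll here stands in for any deployed assembly):

```shell
# Discover and run every test embedded in the deployed assembly,
# right where it is deployed.
nunit3-console.exe FooLibrary.dll
```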

I can’t express how much grief I have seen, how much time wasted, and how many subtle bugs shipped because of environmental issues. And I can’t express how much gratitude you get from the ops guys, and from those on the front line, when there are built-in diagnostics and self-tests as part of the package.

So anything we can do to make the lives of our customers, support teams, and anyone who uses our software easier is worth the price. And if that price is bigger executables, so be it. Come on, you can’t seriously be worrying about executable size, can you? We certainly haven’t reached that threshold of worry yet.

As for security, I work for a very (and I mean VERY) security-aware organisation, and embedding test code is the least of our concerns. It is a concern, but in the grand scheme of things it’s really minor.

Well, anyway, individual mileage may vary, as they say. We don’t use it blindly and everywhere, only where it makes sense. It may work for you, or it may not.

This seems like a very useful technique to increase test coverage without incurring the “death by interfaces” scenario that can occur with extensive mocking. I agree that the “code bloat” would be small and thus not a problem.

Thanks for outlining this concept, I’m sure I will use it in the future.