
Thursday, September 24, 2009

I’m sure we’ve all read about, and can agree on, the value of having code-based testing as a standard part of our development process. By including unit tests and the like, we can automate the testing process and gain a higher degree of confidence in our code before we release it for formal QA testing. By diligently creating tests when defects are found, we get regression protection, in the hope that we can catch problems before they are released.

This is, of course, easier said than done in most cases. Even after nearly 30 years as a developer, I am just as guilty as the next person when it comes to developing real tests that provide not only code coverage but true substance.

I know, for instance, that I should be testing each possible path through my code to make sure that each behaves properly. But that’s just plain hard! Or, I guess, to be more accurate – that takes way too much time!

Unfortunately, as I’ve moved from developer to lead to manager and expanded my portfolio, experience and knowledge along the way, I have to admit that’s not a good enough reason not to do it. So, I find myself imposing a forced sense of discipline on my new projects: every type must have a matching set of unit tests, each “sub-system” in my solution must have integration/system tests, and so on. My goal is to achieve that confidence in my code with one click of a button.

I have a few… conditions, shall we say, on how my solution is structured that cannot be sacrificed for the sake of testability. These are:

Separate all tests into satellite assemblies rather than embedding them within the same code projects (assemblies). My reasoning is simple and common: I don’t want to deploy test assemblies with my production application. And, even though we can use compiler directives, etc. so that the actual test classes aren’t compiled into the release version of the assembly, those references still exist and all dependencies will be included in the final product. Not good.
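To make that separation concrete, a solution following this rule might be laid out like the sketch below (the project names are hypothetical, not from any real product):

```
MySolution.sln
  MyApp.Core/                 -- production assembly; types start out internal
  MyApp.Services/             -- production assembly
  MyApp.Core.Tests/           -- unit tests; never deployed
  MyApp.IntegrationTests/     -- integration/system tests; never deployed
```

Only the first two projects ship; the test projects reference them but are excluded from the release packaging.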

Run all tests as part of an automated build process, with or without true Continuous Integration. Doing this gives us a high level of assurance that our tests are actually being run. You’d be surprised how often I’ve seen companies spend the time and effort to develop unit tests, then leave it to a developer’s discretion, discipline and memory to run them. Nope, this needs to be a standard part of the build process.
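One possible way to wire this up is a custom target in the build script. This is only a sketch: the target name and test assembly path are assumptions, and it presumes the MSTest command-line runner is available on the build machine.

```xml
<!-- Hypothetical MSBuild target: run the unit tests as part of every build.
     MSTest.exe ships with Visual Studio; /testcontainer names the test assembly. -->
<Target Name="RunUnitTests" DependsOnTargets="Build">
  <Exec Command="MSTest.exe /testcontainer:&quot;$(OutputPath)MyApp.Core.Tests.dll&quot;" />
</Target>
```

Because `Exec` fails the build on a non-zero exit code, a failing test breaks the build, which is exactly the guarantee we’re after.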

Scope all types defensively to protect our Intellectual Property (IP). This means that everything starts out internal (Friend in Visual Basic). Only when we deliberately expose a type as part of a public API will the scope be changed to public. To support this, we sometimes have to use the InternalsVisibleToAttribute so that these internal types are accessible from other assemblies in our solution. This is how we expose our internal types to our testing assemblies.
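In C#, that policy looks something like the sketch below (the type and assembly names are made up for illustration):

```csharp
// In AssemblyInfo.cs of the production assembly:
// grant the (hypothetical) test assembly access to internal types.
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyApp.Core.Tests")]

// Everything starts out internal (Friend in Visual Basic);
// only deliberate public API surface is later promoted to public.
internal class OrderValidator
{
    internal bool IsValid(decimal amount)
    {
        return amount > 0m;
    }
}
```

The test assembly can now instantiate and exercise `OrderValidator` directly, even though no other consumer of the production assembly can see it.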

There are those who will jump in and point out that simply scoping a type internal doesn’t protect our IP – and they are right. Tools like Reflector have no problem opening up even your internal types for public viewing. However, if you add obfuscation into the mix, you do get to lock away your proprietary code. Obfuscation tools, like the Dotfuscator utility provided with Visual Studio, are much more effective when they know they can safely obfuscate a given type. Sure, there are built-in rules that allow the tool to determine whether a type can be obfuscated, but scoping something internal tells the tool it is safe. And even Reflector can’t get around the results.

One thing to note is that obfuscation will not work on a type, even one scoped internal, if that type is used from another assembly. This, by its very nature, means that the type is actually public in practice. So, we only include obfuscation as a step when building a release version of the product.

So what does this have to do with Moq, as indicated in the post’s title? Well, it turns out that there are a few extra steps and a couple of “gotchas” we have to know about and keep in mind when trying to use a mocking framework, such as Moq, in our unit tests.

Because we want our tests to have access to our internal types, we have to include an InternalsVisibleTo attribute in the source assembly (the assembly containing our internal types under test) granting access to the test assembly. That seems pretty obvious. But the gotcha here is that Moq, and other similar mocking frameworks like NMock and RhinoMocks, use the Castle Project's DynamicProxy library to generate a temporary assembly that contains proxies for your types, created on-the-fly at runtime. Unfortunately, this means that the dynamic, temporary assembly ALSO needs to be able to reference your internal types. So, we have to include another InternalsVisibleTo attribute in our source assembly as follows:

[assembly: InternalsVisibleTo("DynamicProxyGenAssembly2")]
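With both grants in place, Moq can proxy internal types directly. A minimal sketch of what a test might then do (the `IOrderProcessor` interface is a hypothetical example, not from the original post):

```csharp
using Moq;

// An internal interface in the assembly under test (hypothetical).
internal interface IOrderProcessor
{
    bool Process(int orderId);
}

// In the test assembly: Moq emits the proxy into DynamicProxyGenAssembly2,
// so that assembly must be able to see the internal interface.
var mock = new Mock<IOrderProcessor>();
mock.Setup(p => p.Process(42)).Returns(true);

// Without the InternalsVisibleTo("DynamicProxyGenAssembly2") grant,
// creating the mock fails at runtime with a visibility error.
bool ok = mock.Object.Process(42);
```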

But there is still another gotcha. The above statement is perfectly valid if, and only if, your source assembly is not strong-named. If it is, then you will need to include the PublicKey parameter in the statement. In fact, your project won’t even compile without it. So, if you are strong-naming your source assembly, use this statement instead:
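The strong-named form carries the friend assembly’s full public key (not the shorter public key token). The key below is a truncated placeholder, not a real value; the Castle Project publishes the actual key to use for DynamicProxyGenAssembly2, and for your own test assemblies you can extract the key with `sn.exe -Tp` against the signed assembly.

```csharp
// Placeholder key shown -- substitute the full public key of the friend assembly.
[assembly: InternalsVisibleTo("DynamicProxyGenAssembly2, PublicKey=0024000004800000...")]
```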

Final gotcha – you can’t use the PublicKey parameter if your source assembly is not strong-named. It would certainly make life easier if you could simply use the second version of the attribute all the time but alas, that’s not the case. So be aware and use the right syntax to fit your situation, or the compiler will greet you with an error.