April 2009

April 18, 2009

The word “test” in software is a very loaded term. The first time I came across a tester (as in, a person performing the quality assurance role) was in my first professional job. We were never taught about testing at college, as we were all there to learn to be “developers”.

It turns out there are lots of different kinds of testing. In a series of blog posts titled Agile Testing Directions, Brian Marick came up with a 2x2 matrix of test types, and described the activities which happen in each corner of the matrix. If you haven’t read the blog posts, they can help to provide a vocabulary for the kinds of testing that most software teams undertake.

A lot has changed in the nearly 6 years since Brian wrote those posts. Test Driven Development is arguably the best known and most popular part of the eXtreme Programming software development process. For several years, I’ve been giving talks and coaching teams on how to do Test Driven Development, and by far the most frustrating part of the process has been overcoming people’s expectations when you use the word “test”. For most developers, “testing” is something that’s done by “testers”, not something that’s done by developers. Even those developers who are savvy enough to be doing unit testing are still practicing an activity whose primary purpose is quality assurance.

What the name Test Driven Development has going against it is that it doesn’t properly express the purpose of TDD; namely, that it is a process designed to help you drive and iterate the design of your implementation at the unit level. The result of the design process is unit tests, but their primary purpose is not one of quality assurance; rather, it is an expression of the intended usage of the component under design. In this way, the “tests” that you are writing become the first client of your component, and come into being just before the component’s code is written. The rhythm in TDD is “write a test, watch it fail; write the production code, watch the test now pass; when prudent, refactor the code to increase clarity and remove duplication”.
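That rhythm can be sketched with a deliberately tiny example (the Stack class here is hypothetical, something we are designing rather than a BCL type, and the assertions are written in xUnit.net style):

```csharp
// Step 1: write the example first and watch it fail.
public class StackFacts
{
    [Fact]
    public void PoppingAnEmptyStackThrows()
    {
        var stack = new Stack();

        Assert.Throws<InvalidOperationException>(() => stack.Pop());
    }
}

// Step 2: write just enough production code to make the example pass.
public class Stack
{
    public object Pop()
    {
        throw new InvalidOperationException("The stack is empty.");
    }
}

// Step 3: when prudent, refactor to increase clarity and remove
// duplication, without changing the externally observable behavior.
```

Note that the example exists before the production code does: it is the first client of the component.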

The unit tests written by TDD have some quality assurance value as a secondary effect of the test, but that is not their primary goal. Their primary goal is to help you design the code, and to give you a safety net with which to refactor your code. Note that when I use the word refactor, I mean it in the classical sense: to change the internal implementation of a component without changing its externally observable behavior; that is, if you find yourself needing to change a unit test, you are not doing refactoring.

Frustrated by the misunderstanding of the purpose of TDD, my friends and I (Peter, Jim, Brian, Scott, and many others, all agile practitioners and coaches) decided to start calling it Test Driven Design. A small change, but it starts to focus on the fact that the process is about design. Unfortunately, that “test” word baggage is still in there, so our next iteration was then Example Driven Design. This worked well too, but “EDD” and “TDD” were still too close together and confusing.

The final iteration ended up being Design By Example (DbE). Now when I talk about TDD, I always call it Design By Example, and explain why we like this name better than TDD. Where TDD(esign) and EDD failed to get traction, people really seem to resonate with Design By Example.

~ ~ ~

As an aside, you’ll note that most of the unit testing frameworks on .NET have the word “test” in them a lot. NUnit started this with the [Test] attribute, which MbUnit adopted, and MSTest converted into [TestMethod]. When Jim and I set out to design xUnit.net, it originally had a [Test] attribute in it as well, which is how it was when we first talked about it publicly at the first ALT.NET event in Austin. Even before ALT.NET, we’d been lamenting the name [Test] for the attribute exactly because of this baggage with the word “test”. We wanted xUnit.net to be a framework that was first and foremost for TDD (erm, DbE) practitioners.

At the Austin event, we decided to rename [Test] to [Fact]. While some users have complained that this is an arbitrary change, we felt it was right because it removed the focus of the word “test” from the code that you were writing when doing DbE. It also lined up very well with the [Theory] feature we added to xUnit.net Extensions to support David Saff’s data theories. The essential difference is: a [Fact] is an expression of some condition which is invariant, whereas a [Theory] is an expression of a condition which is only necessarily true for the given set of data. As such, [Theory]s are driven with external data; if you provide invalid data, then the theory will potentially produce invalid results.

An example of a [Fact] is the condition which says, “if you have failed to properly initialize this object, when you call this method on it, it will throw InvalidOperationException”. There are no variations on this rule; it is always true. An example of a [Theory], using the classic FizzBuzz interview question, is the condition which says, “if you pass a number which is divisible by 3, but not by 5, the result should be ‘Fizz’.” For this theory, you pass values which meet the initial requirement; if you were to pass 4 or 15, the result would be a failure. That doesn’t mean the theory is bad; it simply means the preconditions for the theory weren’t properly met.
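In xUnit.net terms, those two examples might look like this (the Component and FizzBuzz classes are hypothetical illustrations; [InlineData] comes from xUnit.net Extensions):

```csharp
// A [Fact] expresses an invariant condition: it takes no data
// and is always expected to hold.
[Fact]
public void CallingDoWorkBeforeInitializeThrows()
{
    var component = new Component();  // hypothetical, not yet initialized

    Assert.Throws<InvalidOperationException>(() => component.DoWork());
}

// A [Theory] is only necessarily true for data which meets its
// preconditions: here, numbers divisible by 3 but not by 5.
[Theory]
[InlineData(3)]
[InlineData(6)]
[InlineData(9)]
public void NumbersDivisibleByThreeButNotFiveReturnFizz(int value)
{
    Assert.Equal("Fizz", FizzBuzz.Convert(value));
}
```

Feeding 15 into the theory would produce a failure, but only because 15 violates the theory’s precondition, not because the theory itself is wrong.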

April 09, 2009

In .NET 3.5 SP1, the ASP.NET team introduced a new DLL named System.ComponentModel.DataAnnotations, in conjunction with the ASP.NET Dynamic Data project. The purpose of this DLL is to provide UI-agnostic ways of annotating your data models with semantic attributes like [Required] and [Range]. Dynamic Data uses these attributes, when applied to your models, to automatically wire up to validators in WebForms. The UI-agnostic bit is important, and is why the functionality exists in the System.ComponentModel.DataAnnotations namespace, rather than somewhere under System.Web.
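As a sketch, an annotated model might look like the following (the Contact class and its properties are hypothetical; the attributes are real DataAnnotations types):

```csharp
using System.ComponentModel.DataAnnotations;

// Validation metadata lives on the model itself, with no
// reference to any particular UI technology.
public class Contact
{
    [Required(ErrorMessage = "First name is required")]
    public string FirstName { get; set; }

    [Required]
    [StringLength(50)]
    public string LastName { get; set; }

    [Range(0, 150)]
    public int Age { get; set; }
}
```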

For .NET 4.0, the .NET RIA Services team is also supporting DataAnnotations (which have been significantly enhanced since their initial introduction). This means that models you annotate can end up with automatic validation being performed in both client- and server-side code, supporting WebForms (via Dynamic Data) as well as Silverlight (via RIA Services).

In our exploration of data support in ASP.NET MVC, we wrote a model binder which does server-side validation in MVC by relying on the DataAnnotations attributes. Using a preview of the .NET 4.0 DataAnnotations DLL (the same one that we released with Dynamic Data 4.0 Preview 3), we extended the default model binder behavior to include DataAnnotations support, and then released the code as a sample project with unit tests.

How Does It Work?

The MVC DefaultModelBinder class has a lot of extensibility points, some of which are designed specifically with validation in mind. The DataAnnotations model binder leverages those extension points to allow DataAnnotations attributes to contribute to the validation of a model.
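With validation moved into the binder, a POST action can reduce to inspecting ModelState. A sketch (the Contact model, SaveContact helper, and view names are hypothetical):

```csharp
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Edit(Contact contact)
{
    // The DataAnnotations model binder has already populated
    // ModelState with any validation errors before the action runs.
    if (!ModelState.IsValid)
        return View(contact);    // redisplay the form with its errors

    SaveContact(contact);        // hypothetical persistence helper
    return RedirectToAction("Index");
}
```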

Notice how much cleaner the action method is, now that the validation of the model has been moved into the metadata on the model itself. Now the action method can just focus on submission and error handling, without being concerned about how to validate the model. Score one for separation of concerns! :)

To make this work, you need to compile the DataAnnotations model binder project, and then add references to the two DLLs you find in the src\bin\Debug folder (Microsoft.Web.Mvc.ModelBinders.dll and System.ComponentModel.DataAnnotations.dll).

Then, in your Global.asax.cs file, you make the following changes to register the model binder:
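The registration amounts to replacing the application-wide default binder in Application_Start (a sketch, assuming the DataAnnotationsModelBinder class name from the sample project):

```csharp
protected void Application_Start()
{
    RegisterRoutes(RouteTable.Routes);

    // Make the DataAnnotations-aware binder the default for all models
    ModelBinders.Binders.DefaultBinder = new DataAnnotationsModelBinder();
}
```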

Now when you submit forms, the model binder will automatically find instances of the DataAnnotations attributes on your models and run the validations you’ve specified.

How Do I Test It?

Using the DataAnnotations attributes for your models moves the validation out of the controller actions and into the model binder, which means your unit tests for your controller actions will be simplified.

When you’re writing tests for this, you need to verify three things:

1. Is the DataAnnotationsModelBinder registered as the default binder? You’ll only do this once for the whole application, much like the route tests you would write.

2. Is my model properly decorated with DataAnnotations attributes? You’ll end up writing tests for each validation attribute that you add to your model.

3. Does my action method properly react when the model state is invalid? You’ll only need to write this once per action method.

In your TDD rhythm, you’ll find yourself writing tests like #2 in advance of defining the models that will support those actions and views. Then you’ll find yourself writing tests like #3 in advance of adding new actions (that test obviously isn’t exhaustive, as it doesn’t test the valid model path nor the “throwing an exception” path, but you get the idea).
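Sketches of test styles #2 and #3, using a hypothetical Contact model and ContactController (names are illustrative):

```csharp
// #2: the model is properly decorated with DataAnnotations attributes
[Fact]
public void FirstNameIsRequired()
{
    var property = typeof(Contact).GetProperty("FirstName");

    var attributes =
        property.GetCustomAttributes(typeof(RequiredAttribute), false);

    Assert.NotEmpty(attributes);
}

// #3: the action redisplays the view when the model state is invalid
[Fact]
public void EditRedisplaysViewWhenModelStateIsInvalid()
{
    var controller = new ContactController();
    controller.ModelState.AddModelError("FirstName", "Required");

    var result = controller.Edit(new Contact());

    Assert.IsType<ViewResult>(result);
}
```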

Where Did My “Validation” Tests Go?

One thing you’ll notice is that there is no test which explicitly says “given an empty first name, a model state error should occur”. The reason for that is simple: the DataAnnotations attributes behave in an AOP-style fashion where their behavior becomes visible only when the whole system is functioning.

You can consider the DataAnnotations model binder like an accepted piece of infrastructure in your project, just like you already do for the default model binder (or the action invoker, or the controller factory, or any of the dozens of other moving parts that make an ASP.NET MVC application “go”).

Even so, you may still want to have tests which verify that an empty first name edit box, when submitted, returns back a validation error to the user.

The most common way to see the system running as a whole is to do exploratory testing. In this way, you start the application and try using the forms with the validation attributes, and observe the behavior to ensure that the validation is taking place. Many QA departments rely primarily on scripted exploratory testing to ensure that applications are functioning properly before deploying them into production.

An alternative that is popular with agile teams is automated acceptance testing, where developers, testers and customers collaborate to write tests which ensure the functioning of the system as a whole, using tools like the Lightweight Test Automation Framework. These tests allow customers to know that the application is functioning properly with a high degree of confidence, without the delays and manual labor involved with exploratory testing.