Test Driven Development

One of the powerful features of software engineering is its dynamic nature. It is much easier to change a piece of code than it is to change the design of a bridge once you’ve started building it. Indeed, the only certainty in the lifetime of a software project is that the requirements will change as you proceed.

As software engineers we need to embrace change, but change is scary. It's like a game of Buckaroo, KerPlunk or Jenga, where, with every move you make, you risk bringing down the whole structure.

Clients only get to see the User Interface; they don't see the scaffolding underneath. It's hard for them to understand that a change that seems small to them, like changing a line of text here or there, may actually require the internal structure to be completely rebuilt.

Test Driven Development isn't something that will make those changing requirements go away. It won't stop your structure crumbling around you, but it will help you write code that is easier to maintain, and it should help you identify problems fast, before they become an issue for your users.

So what do we mean by Test Driven Development? The idea is that before you write a piece of functional code, you first write a test that proves whether or not that piece of code meets its acceptance criteria.

This can seem like quite a strange concept at first. How can you test something that you haven’t yet built? To explain, I’ll describe the process of writing code known as Red, Green, Refactor.

Red, Green, Refactor

Say for example, we’re building a simple Calculator that has an add function that takes two numbers and adds them together. First, we create our Calculator class and write the method signature:
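The original code listing is missing here. Below is a minimal sketch of what it might have looked like, assuming C# with NUnit; the class and method names are illustrative. Note that, as described next, the method body is deliberately left as a stub so the test fails first.

```csharp
using NUnit.Framework;

public class Calculator
{
    // Red stage: only the method signature exists, so this returns
    // the default value of 0 rather than the actual sum.
    // Green stage: replacing the body with `return a + b;` makes
    // the test below pass.
    public int Add(int a, int b)
    {
        return 0;
    }
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_TwoPlusTwo_ReturnsFour()
    {
        var calculator = new Calculator();

        // This assertion fails at the red stage, because the stub
        // returns 0 instead of the expected 4.
        Assert.AreEqual(4, calculator.Add(2, 2));
    }
}
```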

At this stage, our test will fail because we are expecting the result to be four but zero will be returned because we have not yet written the code to perform the summation. When the suite of tests is run in a test runner, tests that have failed appear red and tests that have passed are displayed in green, hence the phrase red, green, refactor.

But why do we write the test first then write the code? Could we not have written the functionality, then written the test to verify that the functionality works?

What you have to remember about writing tests is that tests are code too. They can have defects in them, just like the code that you’re testing. By writing a test that fails first time then passes as we implement the desired functionality, we can be more confident that the test has passed because the logic of the code under test is correct, rather than because of a fault in the test itself.

Over the years, I have seen a surprising number of tests that pass for the wrong reason. False passes are worse than no tests at all because they give you a false sense of security.

Once you have written some tests, and then made those tests pass, you can focus on making changes to your code to improve the readability and re-usability without actually changing the functionality. This is called refactoring, and this is when your tests start to become useful (provided you have good test coverage). If you inadvertently make a change that breaks the functionality, your tests will tell you exactly what has gone wrong, rather than leaving you to search for a needle in a haystack.

Your tests are your safety net, but they also become a living specification of how your code is expected to function, which can be used as a discussion point with clients. It's quite normal to have to delete or modify tests as your clients' requirements change; it is important to keep your test coverage up to date. By writing tests first, you eliminate the risk of forgetting to write tests or running out of time!

Testing should be easy

One of the biggest advantages of a test-first approach is that it can actually help you improve the quality and maintainability of your code in general. If your code has a lot of branching logic and hard-coded dependencies, it will be difficult to test and you will need to write more tests. Writing code test-first encourages you to break your logic down into small chunks that each have a single responsibility. Writing tests should not take long when you get into the swing of it, and it quickly becomes second nature.

Types of tests

Broadly speaking, there are three main types of testing as follows:

User-Interaction Tests

User Interaction Tests test the way the user engages with the User Interface. For example, for a web application you may have Selenium tests that check that when a user presses a delete button, a confirmation pop-up window appears. Users judge a piece of software based on the behaviour of the UI. It doesn't matter how good the underlying functionality is: if a User Interface is slow, buggy or difficult to use, users will get frustrated. It's an area that often gets forgotten about but can make a real difference, particularly as a product grows and it becomes too time-consuming and impractical to test a UI by hand.
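As a rough illustration of the delete-button scenario, a Selenium test in C# might look something like the sketch below. The URL, element id and dialog text are all hypothetical, and it assumes the Selenium.WebDriver and NUnit packages are installed.

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class DeleteButtonTests
{
    [Test]
    public void ClickingDelete_ShowsConfirmationPopUp()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            // Hypothetical page and element id for illustration only.
            driver.Navigate().GoToUrl("https://example.com/items/1");
            driver.FindElement(By.Id("delete-button")).Click();

            // SwitchTo().Alert() throws if no confirmation dialog appeared,
            // so reaching the assertion proves the pop-up was shown.
            IAlert confirmation = driver.SwitchTo().Alert();
            Assert.That(confirmation.Text, Does.Contain("Are you sure"));
            confirmation.Dismiss();
        }
    }
}
```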

Integration Tests

Integration tests test across multiple boundaries within an application, from the UI's code entry point, such as an MVC controller, down to the persistence layer (e.g. a database). Their job is to test an entire process, catching any bugs that occur in the interface between one layer of an application and the next.

They tend to involve a lot of set-up work; for example, you might have to create a test database and save a number of objects. They can also be quite brittle: a test may fail because an external element such as a web service is not responding in a timely manner, even though there is nothing wrong with your code. Because of those external dependencies, they can also be quite slow to run.

Unit Tests

In contrast to integration tests, unit tests test only a single unit of code. They should be fast to run and they should be repeatable, because all external dependencies that the piece of code under test relies on are taken out of the equation. Each test should be able to fail for only one reason.

My preference is to write a small number of integration tests that cover the main areas of the system, together with a large suite of unit tests that cover as much of the application's branching logic as possible, including happy paths (tests that check what should happen under normal circumstances) and sad paths (tests that cover how the system handles erroneous inputs).

Even within the confines of Unit Testing, there are two different things you can test.

Testing State

In the example test I provided before, I invoked the Calculator class's add method and tested that the outcome was what I'd expect it to be. This is a state test. In this instance it was the result of a function, but it could instead have been the value of a property set on an object. These are the most common form of test.

Testing Behaviour

Suppose you've built an e-commerce website for a company that has a loyalty card scheme. You may have some code for checking out that calculates loyalty points for those customers. It could be something like this:
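The original listing is missing here. Based on the surrounding description, it might have looked something like this sketch; the supporting types (Basket, Customer, ICustomerRetriever, ILoyaltyPointCalculator) are hypothetical reconstructions, though CheckOutService and the calculator and retriever dependencies are named in the discussion that follows.

```csharp
public class Customer
{
    public bool HasLoyaltyCard { get; set; }
    public int LoyaltyPoints { get; set; }
}

public class Basket
{
    public int CustomerId { get; set; }
}

public interface ILoyaltyPointCalculator
{
    int CalculatePoints(Basket basket);
}

public interface ICustomerRetriever
{
    Customer GetCustomer(int customerId);
}

public class CheckOutService
{
    private readonly ILoyaltyPointCalculator loyaltyPointCalculator;
    private readonly ICustomerRetriever customerRetriever;

    public CheckOutService(ILoyaltyPointCalculator loyaltyPointCalculator,
                           ICustomerRetriever customerRetriever)
    {
        this.loyaltyPointCalculator = loyaltyPointCalculator;
        this.customerRetriever = customerRetriever;
    }

    public void CheckOut(Basket basket)
    {
        var customer = customerRetriever.GetCustomer(basket.CustomerId);

        // The point calculation itself lives in a separate part of the
        // system; this method only decides whether to ask for it.
        if (customer.HasLoyaltyCard)
        {
            customer.LoyaltyPoints += loyaltyPointCalculator.CalculatePoints(basket);
        }
    }
}
```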

As you can see, the logic for actually calculating the loyalty points lives in a separate part of the system. When we are writing unit tests, we only want to test our code. There would be separate tests for the logic that calculates the loyalty points, but we may want to test that, when a customer has a loyalty card, the loyaltyPointCalculator is asked to calculate the points. We could also write a state test to check that the customer now has the extra loyalty points.
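The test listing referred to next is also missing. Here is a sketch of what it might have looked like, with hand-written fakes and NUnit; it assumes the CheckOutService and the interfaces described in the paragraph above, and all the fake and test names are illustrative.

```csharp
using NUnit.Framework;

// The fake calculator records how many times it is called and
// returns a fixed value instead of doing any real calculation.
public class FakeLoyaltyPointCalculator : ILoyaltyPointCalculator
{
    public int TimesCalled { get; private set; }

    public int CalculatePoints(Basket basket)
    {
        TimesCalled++;
        return 10;
    }
}

// The fake retriever hands back whichever customer the test supplies,
// so we can control the customer object to meet our needs.
public class FakeCustomerRetriever : ICustomerRetriever
{
    private readonly Customer customer;

    public FakeCustomerRetriever(Customer customer)
    {
        this.customer = customer;
    }

    public Customer GetCustomer(int customerId)
    {
        return customer;
    }
}

[TestFixture]
public class CheckOutServiceTests
{
    [Test]
    public void CheckOut_CustomerHasLoyaltyCard_PointsAreCalculatedAndAdded()
    {
        var calculator = new FakeLoyaltyPointCalculator();
        var customer = new Customer { HasLoyaltyCard = true, LoyaltyPoints = 0 };
        var service = new CheckOutService(calculator, new FakeCustomerRetriever(customer));

        service.CheckOut(new Basket());

        Assert.AreEqual(1, calculator.TimesCalled);  // behaviour test
        Assert.AreEqual(10, customer.LoyaltyPoints); // state test
    }

    [Test]
    public void CheckOut_CustomerHasNoLoyaltyCard_PointsAreNotCalculated()
    {
        var calculator = new FakeLoyaltyPointCalculator();
        var customer = new Customer { HasLoyaltyCard = false };
        var service = new CheckOutService(calculator, new FakeCustomerRetriever(customer));

        service.CheckOut(new Basket());

        Assert.AreEqual(0, calculator.TimesCalled);
    }
}
```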

The first thing to notice about the tests above is that the CheckOutService we’re trying to test needs both a loyalty point calculator and a customer retriever but because we are only interested in testing the code in the CheckOut method, we provide fake versions of these dependencies so we can control the customer object to meet our needs.

Instead of calculating the loyalty points, our fake loyalty point calculator simply records the number of times its calculatePoints method is called, and returns a fixed value. This means we can verify that calculatePoints has been called, as well as test that the customer's loyalty points value is correct.

Notice that I have covered both the scenario where the customer has a loyalty card and the scenario where the customer doesn't have one, so I can be fairly certain that the system is working as intended.

Although I have created fake classes for the purpose of these tests, when you have a lot of dependencies it can quickly become quite annoying to have to write so much code just for the purposes of tests. The alternative is to use a Mocking framework like Moq. Here is an alternative version of the second test using Moq.
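The Moq listing is missing from the text. Based on the description that follows, the second test (the customer without a loyalty card) rewritten with Moq might look roughly like this; it assumes the Moq and NUnit packages and the same hypothetical CheckOutService types as before.

```csharp
using Moq;
using NUnit.Framework;

[TestFixture]
public class CheckOutServiceMoqTests
{
    [Test]
    public void CheckOut_CustomerHasNoLoyaltyCard_PointsAreNotCalculated()
    {
        // Moq creates the fake implementations for us from the interface types.
        var calculator = new Mock<ILoyaltyPointCalculator>();
        // Setup dictates what the fake returns when the method is called.
        calculator.Setup(c => c.CalculatePoints(It.IsAny<Basket>())).Returns(10);

        var customer = new Customer { HasLoyaltyCard = false };
        var retriever = new Mock<ICustomerRetriever>();
        retriever.Setup(r => r.GetCustomer(It.IsAny<int>())).Returns(customer);

        var service = new CheckOutService(calculator.Object, retriever.Object);

        service.CheckOut(new Basket());

        // Verify counts the calls; It.IsAny<Basket>() says we don't care
        // which basket was passed in when doing the counting.
        calculator.Verify(c => c.CalculatePoints(It.IsAny<Basket>()), Times.Never());
    }
}
```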

With Moq, you only need to tell the framework what type of fake object you want to create and it will create one for you. The Setup method allows you to dictate what Moq should return when a method is called.

The Mock object has a Verify method you can use to identify how many times its fake method has been called. In this instance, I've told it not to care which basket is passed into the calculatePoints method when doing the counting, but I could also have asked it to count how many times the method was called with a specific basket created earlier in the test.

A pragmatic approach

Test Driven Development purists would insist that test coverage is as close to complete as possible, and there are test coverage tools like dotCover from JetBrains that you can use to analyse your code. They would also insist that every single method you write is written with a test-first approach. In an ideal world this would be a good approach to take, but you have to balance conflicting needs in terms of delivery deadlines and the practicality of writing tests.

I am a great advocate of TDD and use it as much as possible, but sometimes you're writing exploratory code where you're experimenting with a new library or framework and it doesn't make sense to write tests first. For those scenarios, I build my functionality with testability in mind; then, when I have things working, I go back, comment out my code, write tests that fail, reintroduce my code and make the tests pass. This is a good compromise. You still get the assurance that the tests are passing for the right reasons, and you still end up with good test coverage.