I've been doing TDD on a project I'm working on, and I have a large number of tests. Quite a few of them are automated tests around restrictions enforced in code, making sure that things that shouldn't be allowed aren't allowed. Now I've been told that, due to changes in our licensing model, some of those restrictions need to be removed.

Since I've done the entire project using TDD, I'm not sure how to make a change like this also using the TDD mindset.

So, my question is: do I make the back-end changes and see what tests break, or do I try to find the tests that enforce the behaviour and change those first?

My concern is this: suppose there are operations A, B, C and D, each with a check that disallows the behaviour, and I have equivalent tests TestA, TestB, TestC and TestD (possibly in different places). No matter which half I change first, I may miss one or two spots and end up with incorrect behaviour while all the tests still pass (e.g. what if I miss TestD?).
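To make the concern concrete, here is a minimal sketch of the situation; the operation names, the `restricted` check, and the user dict are all invented for illustration, not taken from the actual project:

```python
# Invented illustration: four operations share the same restriction,
# each guarded separately and tested separately.

def restricted(user):
    return not user["licensed"]  # the check the licensing change removes

def operation_a(user):
    if restricted(user):
        raise PermissionError("A not allowed")

# ... operations B, C and D look the same, possibly in different modules.

def test_a_disallowed():
    try:
        operation_a({"licensed": False})
        assert False, "expected PermissionError"
    except PermissionError:
        pass

test_a_disallowed()

# The risk: if the check is removed from A, B and C but operation D and
# TestD are overlooked, the whole suite still passes while D silently
# keeps the old behaviour.
```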

4 Answers

Having done the entire project with TDD, I'd advise you to stick with it. Your requirements are changing, but take the changes one at a time. For each new requirement, either write a test for it, or find the existing test that should now have a different result. Change it to match the new requirement, and watch it turn red. Now make it pass. When you make it pass, you might find that another test (or tests) breaks, because it depended on the old behavior. That's fine. Review the breaking test and make sure you haven't really broken behavior (if you have, undo your change). Then update the breaking test to match the new desired behavior, and watch it turn green. Repeat until done.
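As a rough sketch of that red-green cycle (the `can_purchase` rule, the test name, and the licensing check are all invented for illustration; substitute your real restriction):

```python
# Sketch only: names and the licensing rule here are hypothetical.

def can_purchase(user):
    # New rule after the licensing change: everyone may purchase.
    # (The old rule was: return user["licensed"])
    return True

def test_unlicensed_user_can_now_purchase():
    # Step 1: flip this existing test to the NEW requirement and watch
    # it fail against the old code (red). Step 2: change the production
    # code so it passes (green), then chase any other tests that break.
    user = {"licensed": False}
    assert can_purchase(user)  # was: assert not can_purchase(user)

test_unlicensed_user_can_now_purchase()
```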

Ideally, find the tests that enforce the behaviour, and change those first. This way you can see the test fail before you make it pass again, which is perfect TDD.

If you miss one, then so be it. Pragmatism usually kicks in and you just fix the test.

If you were being really pedantic, you could change the code back, change that test, watch both the code change and the test fail together, and then fix the code again. I am quite strict with myself about TDD, and even I'm not that pedantic. I just don't think that step would offer much.

The usual way to deal with this is to introduce a flag and change the behaviour of both test and production code according to the flag. For instance, if you introduce self-served purchases of vanilla frappes, then the test code would check the HAVE_AUTO_FRAPPE flag and either assertTrue() or assertFalse() depending on it. This has the advantage that you have to make the change to your test suite and to the production code only once. When the change takes effect, you just flip the bit and nothing else. (You can of course remove the entire mechanism later if it's definitely not going to change back, e.g. when refactoring for clarity, but you don't have to.)
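A minimal sketch of that flag mechanism, assuming a Python codebase; `HAVE_AUTO_FRAPPE` comes from the answer above, while `can_self_serve` and its items are invented stand-ins for the real production code:

```python
# Sketch of the flag approach: one flag drives both production code
# and the test's expectation, so each only has to change once.
HAVE_AUTO_FRAPPE = False  # flip to True when the change takes effect

def can_self_serve(item):
    # Production code consults the flag (invented logic for illustration).
    if item == "vanilla frappe":
        return HAVE_AUTO_FRAPPE
    return False

def test_self_serve_vanilla_frappe():
    # The test consults the same flag; flipping the bit switches both
    # the behaviour and the expected result in one move.
    if HAVE_AUTO_FRAPPE:
        assert can_self_serve("vanilla frappe")
    else:
        assert not can_self_serve("vanilla frappe")

test_self_serve_vanilla_frappe()
```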

With TDD, by definition you'd need to change the tests to reflect the new behaviors, and then modify the code until all of the tests pass. However, depending on the size of your code base and the amount of time that you (and your team) have been allotted to complete this transition, there may not be enough time to do it that way. You may have to pick a few key sections, update the tests and then the code, and push the changes in gradually.