I am going to adopt TDD in our team, and one of the ideas I have is to review the tests first. A developer would write interfaces, mocks, and tests first and submit them for code review; once the interfaces and tests (think of them as a specification) are approved, the actual implementation can be written (in theory, by a different developer). How viable is this idea?

5 Answers

IMHO it sounds a lot like a waterfall development model, where the specifications of the 1980s have been replaced with tests, i.e. "must complete step X before you can start step Y" and "one dev writes the design and any other dev can implement it". I am almost certain that in the real world it will fail to deliver a good product or good productivity. Too much time will be wasted making "perfect" tests, leaving not enough time to write robust, quality code. As soon as you set the expectation that developers are interchangeable like construction-site laborers (no disrespect intended), your management team will believe it, and tasks will be scheduled accordingly.

Waterfall is out of favor and is considered last century's methodology, but there is nothing wrong with it if that is what you are trying to achieve, and in a few situations TDD combined with waterfall might even be the preferable approach (I am thinking of outsourcing to sweatshops where the job goes to the lowest bidder...).

Developing by starting with tests is a great approach; however, requiring the tests to be complete before coding starts is likely to reproduce the entire class of problems that waterfall and "design before code" created, problems that are otherwise easily avoided.

I would suggest you look towards a more flexible approach, such as "tests get a first review at 30% of the allocated task time and a final review at 80%". This allows the code and tests to evolve together, while clearly setting the expectation that the tests are at least as important as the code.

Final acceptance then requires both code and tests to be complete and reviewed.

If it is not red/green/refactor, it is hard to call it TDD. That being said, adding a review phase every few hours isn't a bad approach, especially if you are not pairing.
– JustinC, Dec 14 '12 at 3:38


@JustinC: That mindset comes from purists, who conveniently ignore the fact that most developers don't know enough at the start and improve their knowledge as they progress. If you insist that all tests are complete before coding starts, you are saying "write tests for things that you do not yet know". To me, complete means finished: no more changes, that's it, done... If "finished" means "just one more tweak here and a tidy-up there, and I'll fix up the TODO when I know what needs doing", then the model can be made to work, sort of.
– mattnz, Dec 14 '12 at 3:54

Maybe that came out wrong, but I agree with the compromise to the process, especially if, as it sounds, they are coming from a more traditional big-design-up-front approach.
– JustinC, Dec 14 '12 at 4:17

Writing unit tests on top of production code is a lot of overhead for many people. No matter how you slice it, with unit tests you end up writing double, if not triple, the amount of code you would have written otherwise.

Unit tests are great and they have a huge number of benefits, but a lot of those benefits are long term and don't start paying back until later (when people need a reference for how to use the API, when the next co-op student breaks existing code, etc.).

However, unit tests have a huge short-term benefit too: when people write code (with or without TDD), they need to exercise that code at least once before checking it in. In a non-TDD environment, they would try to run the code within the context of a larger application, or they might write some throw-away utility just to invoke the functions they just wrote. Unit tests are awesome because, while you are writing them, you get to see your code run and fix it on the spot.
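To make that concrete, here is a minimal sketch in Python's unittest (the parse_price function and its behavior are hypothetical, invented purely for illustration): the unit test exercises new code the same way a throw-away driver would, but the check is kept and re-run, so mistakes get fixed on the spot.

    import unittest

    # Hypothetical function under test, invented for this illustration.
    def parse_price(text):
        """Parse a string like "$12.50" into an integer number of cents."""
        return int(round(float(text.lstrip("$")) * 100))

    # The non-TDD way to "see it run" is a throw-away driver such as
    # print(parse_price("$12.50")) that you eyeball once and delete.
    # The unit-test way exercises the code the same way, but the check
    # is named, kept, and re-run automatically from then on.
    class ParsePriceTest(unittest.TestCase):
        def test_dollars_and_cents(self):
            self.assertEqual(parse_price("$12.50"), 1250)

        def test_whole_dollars(self):
            self.assertEqual(parse_price("$3"), 300)

    if __name__ == "__main__":
        unittest.main()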

When a person gets into the flow (what is referred to as the red/green/refactor cycle), the TDD overhead becomes, IMO (based on very limited experience but a lot of research), much smaller than 2x: even though you are producing more code, that code is helping you write your production software that much faster by providing feedback an order of magnitude sooner than manual testing would.

So ideally, to get the benefit of TDD with the least amount of overhead, you want to think about the next function you want to add to production, write a test for it, write the function, make sure the test passes, clean up, go on to the next function... ship it.
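As a sketch of a single iteration of that loop (Python's unittest again, with a hypothetical slug() helper invented for illustration): write the test, watch it fail, write just enough code to make it pass, clean up, and move on to the next function.

    import unittest

    # Step 2: just enough production code to make the test below pass.
    # (Step 1 was writing the test and running it to watch it fail.)
    def slug(title):
        return title.strip().lower().replace(" ", "-")

    class SlugTest(unittest.TestCase):
        # Step 1: the test for exactly the next piece of production code.
        def test_lowercases_and_hyphenates(self):
            self.assertEqual(slug("Hello World"), "hello-world")

        # Next iteration: a new minimal test drives the next behavior.
        def test_strips_surrounding_whitespace(self):
            self.assertEqual(slug("  Hello World  "), "hello-world")

    if __name__ == "__main__":
        unittest.main()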

What you are proposing will directly kill the short-term benefit I just described. Instead of writing one test and then its functionality, you'll end up writing many tests up front, and the feedback loop becomes much slower, since reviews and approvals could take hours if not days. And instead of writing the test for exactly the next piece of production code you are about to work on, you'll end up writing preemptive tests for things you "might" introduce in production.

You shouldn't be afraid to write the tests first, as long as the scope of what the code is doing is clear. You'd also want to hit the code from more than just the happy path.
– Makoto, Dec 14 '12 at 6:48


I understand your concern, and that is exactly why the test should never pass the first time. A TDD cycle goes like this: 1) write the minimal, sane test needed for the test suite to fail; 2) write only the production code you need to pass that test; 3) refactor; 4) go to 1. Crucially, you should run your test before even starting to write production code, to verify that the test will catch that bug or missing feature. Then you implement just enough of the feature to make the test pass, refactor, and write a new minimal test.
– martiert, Dec 14 '12 at 7:10

This idea doesn't sound viable at all. It puts more focus on verifying the tests than on the actual code, which doesn't make sense. The tests can all pass while the code they exercise takes 12 hours to run in production, which makes a code review of the tests alone worthless.

It's also the case that the programmer simply doesn't know everything about the requirements of the code up front. It's tough to write tests for something when you're not entirely sure how it should behave, so the "happy path" tests are a bit of smoke and mirrors, too.

Adhere to fail/pass/refactor instead: let your developers write smaller bites of code, review the quality of that production-caliber code, and then let them continue with the next small bite. Do feel encouraged to review the tests to see whether they're actually testing anything, but don't let that review interrupt the TDD flow.
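As a hedged sketch of what such a test review might catch (Python, with a hypothetical apply_discount function invented for illustration): the first test passes yet asserts nothing, which is exactly the kind of test a review should flag.

    import unittest

    # Hypothetical production code under review.
    def apply_discount(price, percent):
        return price * (1 - percent / 100)

    class DiscountTest(unittest.TestCase):
        # Passes, but asserts nothing; a test review should flag this.
        def test_discount_runs(self):
            apply_discount(100, 10)

        # What the reviewer should ask for: assertions on real behavior,
        # including a case off the happy path.
        def test_ten_percent_off(self):
            self.assertEqual(apply_discount(100, 10), 90)

        def test_zero_percent_changes_nothing(self):
            self.assertEqual(apply_discount(100, 0), 100)

    if __name__ == "__main__":
        unittest.main()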