I would like to know what the overall impact of resource planning on a software project is, where the requirements and design of the project are driven by automated acceptance tests and unit tests, in contrast to a more "traditional" approach to software development.

What, in your experience, is the overall effect on resource requirements for completing a software project under TDD, as opposed to more "traditional" development methodologies? It seems self-evident to me that quality would increase and the amount of uncertainty would decrease, because testing is done earlier; but requiring tests up front seems like it would require more developer hours to accomplish. How much does the development effort increase, or does it actually decrease due to the up-front elimination of bugs?

How much more effort is required from the customer? Do they have to change the way they relate to the project, especially if they are used to big design up front? Does the number of hours required of the customer overall increase, or does it actually decrease?

I would imagine that time estimates would be very vague in an iterative TDD process at the beginning of a TDD project (since there is no Software Development Plan). Is there a point, say, 20% into a project, where confidence increases enough that a more or less stable time and money estimate can eventually be provided to the customer?

Note: I'm not looking for subjective opinions or theories here, so please don't speculate. I'm looking more for real-world experience in TDD.

I'm sure there is no real-world data. You only get subjective opinions and theories based on people's real-world experience.
–
Euphoric Jul 29 '13 at 5:12

@Euphoric: I'm looking for objective observations and realities based on real-world experience. Sorry I didn't make that clear. However, I don't need hard numbers; I will accept general impressions such as: "while our development time did increase substantially, our maintenance costs decreased because the software was more reliable and the customer understood the software better because they took part in the design throughout the development effort."
–
Robert Harvey Jul 29 '13 at 5:16

So, is this an opinion-based question? It certainly sounds like one.
–
BЈовић Jul 29 '13 at 5:46

5 Answers

The first thing that needs to be stated is that TDD does not necessarily increase the quality of the software (from the user's point of view). It is not a silver bullet. It is not a panacea. Decreasing the number of bugs is not why we do TDD.

TDD is done primarily because it results in better code. More specifically, TDD results in code that is easier to change.

Whether or not you wish to use TDD depends more on your goals for the project. Is this going to be a short term consulting project? Are you required to support the project after go-live? Is it a trivial project? The added overhead may not be worth it in these cases.

However, it is my experience that the value proposition of TDD grows exponentially as the time and resources involved in a project grow linearly.

Good unit tests give the following advantages:

Unit tests warn developers of unintended side-effects.

Unit tests allow for rapid development of new functionality on old, mature systems.

Unit tests give new developers a faster and more accurate understanding of the code.
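As a minimal illustration of the first point, here is a hypothetical regression test that warns of an unintended side-effect. The `apply_discount` function and its no-mutation contract are invented purely for illustration; they are not from the answer.

```python
def apply_discount(prices, percent):
    """Return a new list of discounted prices; must not mutate its input."""
    return [round(p * (1 - percent / 100.0), 2) for p in prices]

def test_discount_applied():
    assert apply_discount([100.0], 10) == [90.0]

def test_input_not_mutated():
    # A later "optimization" that discounts the prices in place would fail
    # here, flagging the unintended side-effect before it ships.
    prices = [100.0, 50.0]
    apply_discount(prices, 10)
    assert prices == [100.0, 50.0]

test_discount_applied()
test_input_not_mutated()
```

The second test is the one that pays off years later: it encodes an assumption (callers' lists are never modified) that no bug report would ever state explicitly.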

A side effect of TDD might be fewer bugs, but unfortunately it is my experience that most bugs (particularly the nastiest ones) are usually caused by unclear or poor requirements, or would not necessarily be covered by the first round of unit testing.

To summarise:

Development on version 1 might be slower.
Development on version 2-10 will be faster.

I like the explicit juxtaposition of "better code" being different from increasing "the quality of the software", i.e. that the things programmers value in code are not necessarily that it does what the customer wants.
–
user4051 Jul 29 '13 at 6:28

Aren't up front acceptance tests and unit tests supposed to clarify the requirements?
–
Robert Harvey Jul 29 '13 at 6:58

@RobertHarvey They should be but aren't necessarily. The unit tests and acceptance tests will reflect the developer's understanding of the requirements when they are written. The developers may have anything from a complete understanding to no understanding of the requirements when they begin writing the software. That part of the equation depends far more on the client and product manager than anything else. Theoretically, the tests should help a lot. In practice, well, "it depends".
–
Stephen Jul 29 '13 at 7:27

I should clarify, we're talking about TDD in isolation here, not a Scrum implementation which incorporates TDD. In isolation, TDD is all about writing tests so that you write better code and can refactor faster and more safely later.
–
Stephen Jul 29 '13 at 7:29

@Stephen: Perhaps I should have made it clearer that I'm talking about the flavor of TDD that incorporates acceptance tests as part of the requirements gathering process. I've added a graphic to the question to make that clearer.
–
Robert Harvey Jul 29 '13 at 15:33

Case studies were conducted with three development teams at Microsoft and one at IBM that have adopted TDD. The results of the case studies indicate that the pre-release defect density of the four products decreased between 40% and 90% relative to similar projects that did not use the TDD practice. Subjectively, the teams experienced a 15–35% increase in initial development time after adopting TDD.

Whether these results are generalisable to your case is, of course, something that proponents of TDD will argue is obvious and detractors of TDD will argue is untrue.

The problem with that study is that they didn't unit test the code before adopting TDD. TDD is not a magic tool that decreases the number of defects by 40-90% simply by being adopted.
–
BЈовић Jul 29 '13 at 7:45

@BЈовић I don't think they claim "magic" anywhere in that paper. They claim that some teams adopted TDD, some teams didn't, they were given "similar" work and some defect densities and development times were recorded. If they had forced the non-TDD teams to write unit tests anyway just so that everyone had unit tests, it wouldn't be an ecologically valid study.
–
user4051 Jul 29 '13 at 16:23

An ecologically valid study? Sorta depends on what you're measuring. If you want to know whether writing your tests up front matters, then everyone needs to be writing unit tests, not just the TDD group.
–
Robert Harvey Jul 29 '13 at 16:42

@Robert Harvey That's a question of confounding variables, not ecological validity. Designing a good experiment involves trading those off. For example, if the control group were writing unit tests post hoc, people would argue the experiment was unsound because the control group was working in a way uncommonly found in the wild.
–
user4051 Jul 29 '13 at 17:01

I don't have any research papers or statistics to give you, but I'll relate my experience from working in a team/organization that historically had low-to-average unit test coverage and no end-to-end tests, and gradually moving the bar to where we are now, with more of an ATDD (but, ironically, not traditional TDD) approach.

Specifically, this is how project timelines used to play out (and still play out on other teams/products in the same organization):

This seems like ridiculous overhead, but it's actually very common; it's just often masked in many organizations by missing or ineffectual QA. We have good testers and a culture of intensive testing, so these issues are caught early and fixed up front (most of the time), rather than being allowed to play out slowly over the course of many months or years. A 55-65% maintenance overhead is lower than the commonly-accepted norm of 80% of time spent on debugging - which seems reasonable, because we did have some unit tests and cross-functional teams (including QA).

During our team's first release of our latest product, we had started retrofitting acceptance tests but they weren't quite up to snuff and we still had to rely on a lot of manual testing. The release was somewhat less painful than others, IMO partly because of our haphazard acceptance tests and also partly because of our very high unit test coverage relative to other projects. Still, we spent nearly 2 weeks on regression/stabilization and 2 weeks on post-production issues.

By contrast, every release since that initial release has had early acceptance criteria and acceptance tests, and our current iterations look like this:

8 days of analysis and implementation

2 days of stabilization

0-2 combined days of post-production support and cleanup

In other words, we progressed from 55-65% maintenance overhead to 20-30% maintenance overhead. Same team, same product, main difference being the progressive improvement and streamlining of our acceptance tests.

The cost of maintaining them is, per sprint, 3-5 days for a QA analyst and 1-2 days for a developer. Our team has 4 developers and 2 QA analysts, so (not counting UX, project management, etc.) that's a maximum of 7 man-days out of 60, which I'll round up to a 15% implementation overhead just to be on the safe side.

We spend 15% of each release period developing automated acceptance tests, and in the process are able to cut out the 70% of each release previously spent running regression tests and fixing pre-production and post-production bugs.
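As a sanity check, the arithmetic behind those figures can be sketched out; the team size, sprint length, and per-sprint test-maintenance days all come from the answer above, and nothing beyond that is implied.

```python
# Back-of-the-envelope overhead calculation from the answer's numbers.
devs, qa_analysts = 4, 2
sprint_days = 10                                  # one two-week sprint
team_days = (devs + qa_analysts) * sprint_days    # 60 man-days per sprint

worst_case_upkeep = 5 + 2                         # 5 QA days + 2 dev days
overhead = worst_case_upkeep / team_days
print(f"test upkeep: {overhead:.0%} of the sprint")  # ~12%, rounded up to 15%
```

That ~12%, padded to 15% "to be on the safe side", is what buys the drop from 55-65% maintenance overhead to 20-30%.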

You might have noticed that the second timeline is much more precise and also much shorter than the first. That's something that was made possible by the up-front acceptance criteria and acceptance tests, because it vastly simplifies the "definition of done" and allows us to be much more confident in the stability of a release. No other teams have (so far) succeeded with a bi-weekly release schedule, except perhaps when doing fairly trivial maintenance releases (bugfix-only, etc.).

Another interesting side-effect is that we've been able to adapt our release schedule to business needs. One time, we had to lengthen it to about 3 weeks to coincide with another release, and were able to do so while delivering more functionality but without spending any extra time on testing or stabilization. Another time, we had to shorten it to about 1½ weeks, due to holidays and resource conflicts; we had to take on less dev work, but, as expected, were able to spend correspondingly less time on testing and stabilization without introducing any new defects.

So in my experience, acceptance tests - especially when done very early in a project or sprint, and well-maintained, with acceptance criteria written by the Product Owner - are one of the best investments you can make. Unlike traditional TDD, which other people correctly point out is focused more on creating testable code than defect-free code, ATDD really does help catch defects a lot faster; it's the organizational equivalent of having an army of testers doing a complete regression test every day, but far cheaper.

Will ATDD help you in longer-term projects done in RUP or (ugh) Waterfall style, projects lasting 3 months or more? I think the jury's still out on that one. In my experience, the biggest and ugliest risks in long-running projects are unrealistic deadlines and changing requirements. Unrealistic deadlines will cause people to take shortcuts, including testing shortcuts, and significant changes to requirements will likely invalidate a large number of tests, requiring them to be rewritten and potentially inflating the implementation overhead.

I'm pretty sure that ATDD has a fantastic payoff for Agile models, or for teams that aren't officially Agile but have very frequent release schedules. I've never tried it on a long-term project, mainly because I've never been in or even heard of an organization willing to try it on that kind of a project, so insert the standard disclaimer here. YMMV and all that.

P.S. In our case, there is no extra effort required from the "customer", but we have a dedicated, full-time Product Owner who actually writes the acceptance criteria. If you're in the "consultingware" business, I suspect it could be a lot more difficult to get the end users to write useful acceptance criteria. A Product Owner/Product Manager seems like a pretty essential element in order to do ATDD and although I can once again only speak from my own experience, I've never heard of ATDD being successfully practiced without someone to fulfill that role.

This is very useful, thanks. It did not occur to me that ATDD might change the character of the TDD effort, but it does make sense, especially when you hear about folks who are capable of turning out well-written, relatively bug-free software on time and on budget without necessarily utilizing unit testing extensively.
–
Robert Harvey Sep 7 '13 at 18:59

@RobertHarvey: I should clarify - we still create unit tests, just not as part of a TDD process. Typically the acceptance tests come first or in parallel with initial development, then code complete, then unit tests and refactoring. I've sometimes thought that TDD would help certain developers write better code, but I can't back that up (yet). Although I can speak for myself - I often catch a lot of bugs and design flaws in my own code simply during the process of writing the unit tests.
–
Aaronaught Sep 7 '13 at 23:33

Resource Requirements

What, in your experience, is the overall effect on resource requirements for completing a software project under TDD, as opposed to more "traditional" development methodologies?

In my experience, the cost of requiring up-front tests is immediately mitigated by defining clear acceptance criteria up front and then writing to the test. Not only is the cost of the up-front testing mitigated; I've also found that it generally speeds up overall development, although those speed improvements may be wiped out by poor project definition or changing requirements. Even so, we are still able to respond quite well to those kinds of changes without severe impact. ATDD also significantly reduces developer effort in verifying correct system behavior, through its automated test suite, in the following cases:

large refactors

platform/package upgrades

platform migration

toolchain upgrades

This assumes a team that is familiar with the process and practices involved.

Customer Involvement

How much more effort is required from the customer?

They have to be much more involved on an ongoing basis. I've seen a huge reduction in the up-front time investment, but a much greater ongoing demand. I haven't measured it, but I'm fairly certain it is a larger overall time investment for the customer.

However, I've found that the customer relationship improves greatly after five or so demos, as they see their software slowly take shape. The time commitment from the customer decreases somewhat over time as a rapport develops and everyone gets used to the process and the expectations involved.

Project Estimation

I would imagine that time estimates would be very vague in an iterative TDD process at the beginning of a TDD project (since there is no Software Development Plan).

I have found that it's usually a question of how well defined the ask is, and whether the technical lead(s) are able to card out the project (including card estimation). Assuming the project is well carded and you have a reasonable velocity average and standard deviation, we've found it's easy to get a decent estimate. Obviously, the larger the project, the more uncertainty there is, which is why I generally break a large project into a small project with a promise to continue later. This is much easier to do once you've established a rapport with the customer.

For example:

My team's "sprints" are a week long, and we keep a running average and std. deviation over the last 14 weeks. If the project is 120 points, and we have a mean of 25 and a std. deviation of 6, then we estimate the project's completion as follows:

We use the 2 Std. Deviation rule of thumb for our 95% confidence estimate. In practice we usually complete the project under the first std. deviation, but over our mean. This is usually due to refinements, changes, etc.
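The answer doesn't show its exact formula, but the rule it describes can be sketched with its own numbers (120 points, mean weekly velocity 25, std. deviation 6). Subtracting standard deviations from the velocity is an assumption here about how the rule is applied:

```python
# Velocity-based completion estimate, using the numbers from the answer.
project_points = 120
mean_velocity = 25.0   # points completed per one-week sprint
std_dev = 6.0

mean_weeks = project_points / mean_velocity                       # plan at the mean
one_sigma_weeks = project_points / (mean_velocity - std_dev)      # typical actual finish
two_sigma_weeks = project_points / (mean_velocity - 2 * std_dev)  # ~95% bound

print(f"mean:    {mean_weeks:.1f} weeks")       # 4.8
print(f"1 sigma: {one_sigma_weeks:.1f} weeks")  # 6.3
print(f"2 sigma: {two_sigma_weeks:.1f} weeks")  # 9.2
```

On these numbers, the team would quote roughly 9 weeks to the customer while typically finishing between the mean (4.8 weeks) and the one-sigma mark (6.3 weeks), matching the "under the first std. deviation, but over our mean" experience described above.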

So basically what you're saying is that TDD improves the development effort by encouraging stakeholders to do those things that they should be doing anyway, like providing clear, actionable requirements and acceptance criteria.
–
Robert Harvey Sep 7 '13 at 0:36

Well, not just that. As the project progresses, the increased participation allows for a better conversation between dev and stakeholders. It allows for things like dev offering less costly alternatives as their understanding of what the stakeholder wants gets refined further. It allows stakeholders to change requirements earlier, as they realize things are missing or won't work, without such an antagonistic response from dev, and without many of the unreasonable expectations that usually come from stakeholders.
–
dietbuddha Sep 8 '13 at 7:00

requiring tests up front seems like it would require more developer hours to accomplish. How much does the development effort increase, or does it actually decrease due to the up-front elimination of bugs?

This is actually not true. If your developers are writing unit tests (and they should be), then the time should be approximately the same, or better. I say better because your code will be fully tested, and developers will have to write only the code needed to fulfil the requirements.

The problem with developers is that they tend to implement even things that are not required, in order to make the software as generic as possible.

How much more effort is required from the customer? Do they have to change the way they relate to the project, especially if they are used to big design up front? Does the number of hours required of the customer overall increase, or does it actually decrease?

That shouldn't matter. Whoever does the requirements should do them as well as possible.

Developing in an agile way does not mean big design up front. But the better the requirements, architecture, and design are done, the higher the code quality will be and the less time it will take to finish the software.

Therefore, if they like to do BDUF, let them do it. It will make your life easier as a developer.

As I understand it, TDD and BDUF are not generally compatible with each other.
–
Robert Harvey Jul 29 '13 at 6:58

BDUF is generally not compatible with any good development management practices. But it would be possible to do a BDUF project in a TDD fashion. TDD is a technique for authoring better quality software while BDUF is a technique for requirements elicitation. A bad technique, but a technique nonetheless.
–
Stephen Jul 29 '13 at 7:38

@RobertHarvey Right, but if they want to do BDUF - it is their choice. If you are really doing agile, then you are free to improve their design, and still do TDD.
–
BЈовић Jul 29 '13 at 7:40

So you say that if I write unit tests my code will be completely tested, and if all tests pass, that of course means the software is bug-free (or at least better)? So I just need to test every method of my software, e.g. "function testSqr() { int a = 3; assertTrue(mySqr(a) == 9); } function mySqr(int a) {return 9;}"
–
Dainius Jul 29 '13 at 8:12