TDD and unit testing seem to be all the rage at the moment. But is it really that useful compared to other forms of automated testing?

Intuitively I would guess that automated integration testing is far more useful than unit testing. In my experience most bugs seem to be in the interaction between modules, and not so much in the actual (usually limited) logic of each unit. Also, regressions often happen because of changing interfaces between modules (and changed pre- and post-conditions).

Am I misunderstanding something, or why is unit testing getting so much focus compared to integration testing? Is it simply because it is assumed that integration testing is something you already have, and unit testing is the next thing we need to learn to apply as developers?

Or maybe unit testing simply yields the highest gain compared to the complexity of automating it?

What is your experience with automated unit testing, automated integration testing, and automated acceptance testing, and which has yielded the highest ROI? And why?

If you had to pick just one form of testing to be automated on your next project, which would it be?

There are either too many possible answers, or good answers would be too long for this format. Please add details to narrow the answer set or to isolate an issue that can be answered in a few paragraphs.
If this question can be reworded to fit the rules in the help center, please edit the question.

I'd just like to add a belated link to JB Rainsberger's Integration Tests Are A Scam. It highlights many of the reasons integration tests are a false economy.
– Tim, Oct 5 '12 at 14:04

This is yet another example of closing a question that was perfectly fine on this site when it was asked three years ago! It is really annoying to contribute to a community where the rules change and you just throw out all old stuff!
– Bjarke Freund-Hansen, May 19 '14 at 11:51

7 Answers

One important factor which makes unit tests extremely useful is fast feedback.

Consider what happens when you have your app fully covered with integration/system/functional tests (which is already an ideal situation, far from the reality in most development shops). These are often run by a dedicated testing team.

You commit a change to the SCM repo,

sometime (possibly days) later the testing guys get a new internal release and start testing it,

they find a bug and file a bug report,

(in the ideal case) someone assigns the bug report back to you.

This all may take days or even weeks. By this time you have already been working on other tasks, so you don't have the minute details of the code written earlier in your mind. Moreover, you typically don't even have any direct proof of where the bug actually is, so it takes considerable time to find and fix the bug.

Whereas with unit testing (TDD):

you write a test,

you write some code to satisfy the test,

the test still fails,

you look at the code and typically you have an "oops" experience in a few seconds (like "oops, I forgot to check for that condition!"), then

fix the bug immediately.

This all happens in a matter of minutes.
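That cycle can be sketched in a few lines of Python, runnable with pytest (the `shipping_cost` function and its pricing rules are hypothetical, invented just for illustration):

```python
def shipping_cost(weight_kg):
    """Flat 5.00 up to 1 kg, plus 2.00 for each additional kg."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    extra_kg = max(0, weight_kg - 1)
    return 5.0 + 2.0 * extra_kg

def test_flat_rate_up_to_one_kg():
    assert shipping_cost(0.5) == 5.0

def test_extra_kilos_cost_two_each():
    assert shipping_cost(3) == 9.0

def test_rejects_non_positive_weight():
    # The "oops, I forgot to check for that condition!" case:
    # this test fails until the guard clause above exists.
    try:
        shipping_cost(0)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

When `test_rejects_non_positive_weight` fails, the broken code is the handful of lines you wrote seconds ago, which is exactly why the feedback loop is so short.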

This is not to say that integration/system tests are not useful; they just serve different purposes. With well written unit tests you can catch a large proportion of the bugs in code before they get to the integration phase, where it is already considerably more expensive to find and fix them. You are right that the integration tests are needed to catch those types of bugs which are difficult or impossible to catch with unit tests. However, in my experience those are the rarer kind; most of the bugs I have seen are caused by some simple or even trivial omission somewhere inside a method.

Not to mention that unit testing also tests your interfaces for usability/safety etc., thus giving you vitally important feedback to improve your design and APIs. Which IMHO can considerably reduce the chances of module/subsystem integration bugs: the easier and cleaner an API is, the less the chance of misunderstanding or omission.

What is your experience with automated unit testing, automated integration testing, and automated acceptance testing, and which has yielded the highest ROI? And why?

ROI depends on a lot of factors, probably the foremost of them is whether your project is greenfield or legacy. With greenfield development my advice (and experience so far) is to do unit testing TDD style from the beginning. I am confident that this is the most cost effective method in this case.

In a legacy project, however, building up sufficient unit test coverage is a huge undertaking that will be very slow to yield benefits. It is more efficient to try to cover the most important functionality with system/functional tests via the UI, if possible (desktop GUI apps can be difficult to test automatically via the GUI, although automated test support tools are gradually improving...). This gives you a coarse but effective safety net rapidly. Then you can start gradually building up unit tests around the most critical parts of the app.

If you had to pick just one form of testing to be automated on your next project, which would it be?

That is a theoretical question and I find it pointless. All kinds of tests have their use in the toolbox of a good SW engineer, and all of these have scenarios where they are irreplaceable.

+1 for "fast feedback" for unit tests. Also, very handy for running as a post-commit on a Continuous Integration server.
– Frank Shearar, Jan 24 '11 at 15:05


"All kinds of tests have their use in the toolbox of a good SW engineer, and all of these have scenarios where they are irreplaceable." - I wish I could vote several times just for that! People that think that they can just find one ideal test tool/method and apply it everywhere are wearing me down.
– ptyx, Nov 27 '12 at 19:13

All types of testing are very important, and ensure different aspects of the system are in spec. So to work backwards, "If I had to choose one type of testing..." I wouldn't. Unit testing provides me different feedback than integration testing or interactive testing by a person.

Here's the type/benefit of testing we do:

Unit testing--ensures that units are performing work as we expect them to. We test both provider and consumer contracts for every interface--and this is fairly easy to automate. We also check our boundary conditions, etc.

Integration testing--ensures that the units are working together in concert. This is mainly to test our design. If something breaks here, we have to adjust our unit tests to make sure it doesn't happen again.

System testing--ensures the system meets the requirements/specifications. Usually people do this type of testing.

Acceptance testing--the client does this to verify the end product does what is advertised.

Optional but recommended testing:

User experience testing: While we get decent feedback from system testing, having people from the client preview certain pre-releases can really help determine if we need to change things before it is too late.

To understand why unit testing has an advantage over integration testing, you have to understand the orders of magnitude more tests you would need to be comprehensive. For every possible outcome of unit A, there needs to be a test. The same for unit B. Now, if both of those work together in a more complete solution, the number of tests is combinatorial. In short, to test every permutation of interaction between unit A and unit B, you need A*B tests. Add in unit C, and the number of tests for the trio would be A*B*C.
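A quick sanity check on that arithmetic (the outcome counts below are made-up numbers, purely for illustration):

```python
# Hypothetical outcome counts for three units A, B, and C
outcomes_a, outcomes_b, outcomes_c = 5, 4, 3

# Tested in isolation, the effort grows additively:
unit_tests = outcomes_a + outcomes_b + outcomes_c      # 5 + 4 + 3 = 12

# Tested only through their combined behavior, it grows multiplicatively:
combined_tests = outcomes_a * outcomes_b * outcomes_c  # 5 * 4 * 3 = 60

print(f"{unit_tests} unit tests vs {combined_tests} combined tests")
```

Even with these tiny numbers the combined count is five times larger, and the gap widens with every unit added.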

This is where the concept of interfaces and object boundaries become very important. Interfaces represent a certain contract. An implementor of an interface agrees that it will behave in a certain way. Likewise, a consumer of an interface agrees that it will use the implementation in a certain way. By writing a test that collects every class that implements an interface, you can easily test that each implementation observes the interface contracts. That's half the equation. The other half is to test the consumer side--which is where mock objects come in to play. The mock is configured to ensure that the interactions are always in spec. At this point, all that's needed is a few integration tests to make sure that we've got the implementor/consumer contracts correct.
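A minimal sketch of that consumer-side half using Python's `unittest.mock` (the `UserRepository` interface and `greeting_for` consumer are hypothetical names, not from the answer):

```python
from unittest import mock

# Hypothetical interface: the contract the consumer depends on
class UserRepository:
    def find_by_id(self, user_id):
        raise NotImplementedError

# The consumer under test, written against the interface only
def greeting_for(repo, user_id):
    user = repo.find_by_id(user_id)
    return f"Hello, {user['name']}!" if user else "Hello, guest!"

# The mock stands in for *any* implementation of the interface;
# spec= ensures only methods that really exist on it can be called.
repo = mock.Mock(spec=UserRepository)
repo.find_by_id.return_value = {"name": "Ada"}

assert greeting_for(repo, 42) == "Hello, Ada!"
# Verify the interaction stayed in spec: called once, with the right argument
repo.find_by_id.assert_called_once_with(42)
```

The provider-side half is the mirror image: run the same contract assertions against every concrete class implementing `UserRepository`, leaving only a few integration tests to confirm both halves agree.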

"System testing--ensures the system meets the requirements/specifications. Usually people do this type of testing." System tests (aka functional, aka end-to-end, aka high-level integration tests) are good candidates for automation as well.
– Michael Freidgeim, Jun 15 '13 at 22:00

At a large web publishing company, when a new release was made, it typically took about 3 testers around an hour or two to visit all of the families of sites and make sure everything was OK, per the test script. With Selenium, we were able to automate the testing, and distribute it over multiple machines and browsers. Currently, when the test scripts are run, it takes 3 PCs about 10 minutes to automatically do the same thing AND spit out a pretty report.

The great thing about Selenium is that you can tie it in with unit testing frameworks like xUnit, TestNG, or MSTest.

In my experience it has been extremely valuable; however, the setup of something like that really depends on your project and your team, so ROI will definitely vary.

The question is not so much what's more important, as what can be automated and run quickly.

Unit tests show whether a small component is fit for purpose in a particular way, and typically are small and quick to run. Being small, they're usually reasonably easy to automate.

Integration tests show whether components work together. They're usually a lot bigger, and may not be easy to automate. They can take longer to run. There's value in being able to run them automatically, but it's a bigger task and they won't be run as often anyway.

Acceptance tests show whether a project is acceptable or not. This is normally impossible to fully automate (although automated tests may play a part), since it's normally impossible to nail down the exact and complete requirements in a formal manner that's any less complicated than the source code itself. Acceptance testing usually includes potential users doing things with the system and observing that the results are satisfactory in all respects, which is normally partly subjective. Since acceptance testing is typically run once (different customers will normally want to do their own separate acceptance tests), it really isn't worth automating.

The best ROI is usually the earliest testing that can find that type of bug.

Unit testing should find most of the bugs that live in a single method, or maybe a single component. It finds bugs that can usually be fixed in minutes, with a total turnaround time of less than an hour. VERY cheap tests, VERY cheap bugs to find and fix! If those bugs make it into integration testing, turnaround is more like a day (test runs often happen nightly), assuming the bug isn't occluded by another bug; plus there are likely to be more bugs introduced, since other code was written against the initial buggy code, and any redesigns will impact more pieces of code. Also, letting more bugs through means having to do many more test runs before integration testing can be completed. Poor unit testing is often at the heart of long, onerous, expensive integration test cycles. Skipping unit testing might halve development time, but testing will take at least 3 to 4 times as long, the entire project will double in length, and you will still have lower quality than if you'd unit tested, in my experience.

Integration testing is usually the earliest actual testing that can find integration bugs (although review processes can find some bugs before the software is written, or before the code goes to test). You want to find those bugs before you start showing the software to the user or release, since fixing bugs at the last moment (or hot-fixing!) is very expensive. You can't find those bugs earlier, when they would be cheaper to fix, because you need multiple working components to integrate (by definition). Integration testing gets slowed down when bugs that have nothing to do with interacting pieces (like minor typos) need to be resolved first before the more interesting testing can even get started.

User acceptance testing is making sure that the software does what the customer wants (although the customer has hopefully been involved the entire time, to lower the risk of finding a huge gap between expectations and actual software right at the end of the project - VERY expensive!). By the time you get to this stage of testing, you should really believe that most of the bugs in your product, compared to the specs at least, have already been found. User acceptance testing is more about making sure the product fits the customer's needs than about making sure there are no bugs compared to the specifications.

If I were only going to automate one type of testing, it would be unit testing. Integration tests can be done manually and were done that way by many major companies for years. It's more time consuming to manually do work that could be done by a computer over and over, and far more expensive for most projects, not to mention boring and error-prone, but it can be done. User acceptance tests are often manual, since users often don't know how to automate their tests while still testing what they care about. Unit tests MUST be automated. You need to be able to run all the unit tests within a few seconds on demand as frequently as every few minutes. Humans just can't even come close with manual tests. Not to mention that unit tests are in the code, and cannot be executed without calling them through code, i.e., automating them. Unit-testing manually is simply impossible.

One thing to keep in mind is that this is a forum primarily for developers, not testers. Integration testing is primarily implemented by testers. Unit testing is implemented by developers. Naturally, developers talk more about unit testing than other types of testing. This doesn't mean they don't think other testing is important. It just means they don't actually do it (or do it less often).

Thanks especially for that very last paragraph, I didn't think of that.
– Bjarke Freund-Hansen, Jan 24 '11 at 19:56

FYI, I updated the answer since you read it - realized I didn't read the original question closely enough
– Ethel Evans, Jan 24 '11 at 20:20

@EthelEvans, your reasons for prioritizing unit tests are correct for new projects. For legacy projects that were written without automated test coverage in mind, system tests (aka functional, aka high-level integration tests) are the most important. See the last part of the answer at programmers.stackexchange.com/a/39370/31574.
– Michael Freidgeim, Jun 15 '13 at 22:08

@MichaelFreidgeim, it ultimately depends on how stable the interfaces are. Automating high-level tests can quickly incur prohibitive maintenance costs if they go out-of-date quickly. In this case, automate lower to help build stability and use exploratory testing (possibly session- or checklist-based) for E2E and integration tests to avoid incurring obnoxious automation maintenance costs. Not all legacy projects have stable interfaces, though - especially if they are being refactored or re-architected aggressively.
– Ethel Evans, Jun 27 '13 at 19:53