There are a number of questions on this site that give plenty of information about the benefits that can be gained from automated testing. But I didn't see anything that represented the other side of the coin: what are the disadvantages? Everything in life is a tradeoff and there are no silver bullets, so surely there must be some valid reasons not to do automated testing. What are they?

Here are a few that I've come up with:

Requires more initial developer time for a given feature

Requires a higher skill level of team members

Increases tooling needs (test runners, frameworks, etc.)

Complex analysis is required when a failed test is encountered - is this test obsolete due to my change, or is it telling me I made a mistake?

Edit
I should say that I am a huge proponent of automated testing, and I'm not looking to be convinced to do it. I'm looking to understand what the disadvantages are so when I go to my company to make a case for it I don't look like I'm throwing around the next imaginary silver bullet.

Also, I'm explicitly not looking for someone to dispute my examples above. I am taking it as true that there must be some disadvantages (everything has trade-offs) and I want to understand what those are.

"Complex analysis required..." the test isn't the cause of the failure, it's an indicator. Saying having no tests means no complex failure analysis required is no better than sticking your head in the mud.
–
P.Brian.Mackey Oct 27 '11 at 13:04


Longer build times when tests are run on every build, and repeated code when the tests are at a very low level (testing getters and setters).
–
ratchet freak Oct 27 '11 at 13:07


1. If a developer spends time testing new features, the risk of them failing decreases, meaning your product is more stable. 2. Educating your team members in a test-focused approach is a good thing; they can use this knowledge for other things in work (and life). 3. Create an automated installation for the test environment. 4. This tells me that one test does too much.
–
CS01 Oct 27 '11 at 13:13

If the same developer is writing the tests as is writing the actual code, then they will only think of the same test cases to cover as the ones they thought about while coding.
–
Paul Tomblin Apr 6 '12 at 17:39

11 Answers

You pretty much nailed the most important ones. I have a few minor additions, plus the disadvantage of tests actually succeeding - when you don't really want them to (see below).

Development time: With test-driven development this is already factored in for unit tests, but you still need integration and system tests, which may need automation code as well. Code written once is usually tested at several later stages.

Skill level: Of course, the tools have to be supported. But it's not only your own team. In larger projects you may have a separate testing team that writes tests to check the interfaces between your team's product and others'. So many more people need to have more knowledge.

Tooling needs: you're spot on there. Not much to add to this.

Failed tests: This is the real bugger (for me, anyway). There are a bunch of different reasons, each of which can be seen as a disadvantage. And the biggest disadvantage is the time required to decide which of these reasons actually applies to your failed test.

failed, because of an actual bug. (just for completeness, as this is of course advantageous)

failed, because your test code has been written with a traditional bug.

failed, because your test code has been written for an older version of your product and is no longer compatible

failed, because the requirements have changed and the tested behavior is no longer deemed correct

Non-failed tests: These are a disadvantage too, and can be quite bad. It happens mostly when you change things, and it comes close to what Adam answered. If you change something in your product's code but the test doesn't account for it at all, then it gives you this "false sense of security".

An important aspect of non-failed tests is that a change of requirements can cause earlier behavior to become invalid. If you have decent traceability, the requirement change can be matched to your test code, and you know you can no longer trust that test. Of course, maintaining this traceability is yet another disadvantage. And if you don't, you end up with a test that does not fail but actually verifies that your product works wrongly. Somewhere down the road this will hit you... usually when/where you least expect it.
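The non-failed-test trap can be sketched in a few lines. This is a hypothetical example (the function, the discount figures, and the requirement change are all invented for illustration): the suite stays green while silently verifying an outdated requirement.

```python
# A minimal sketch of a non-failed test that quietly verifies an
# outdated requirement. All names and figures are hypothetical.

def apply_discount(price, is_member):
    # Suppose the requirement changed from "members get 10% off" to
    # "members get 5% off", but neither this code nor the test below
    # was updated to match.
    return price * 0.9 if is_member else price

# The suite stays green, so everything *looks* fine -- but these
# assertions now verify the old requirement, not the current one.
assert apply_discount(100, is_member=True) == 90.0
assert apply_discount(100, is_member=False) == 100
```

Without traceability from the requirement change to this test, nothing ever flags it as stale.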

Additional deployment costs: You do not just run unit tests as a developer on your own machine. With automated tests, you want to execute them on commits from others at some central place, to find out when someone broke your work. This is nice, but it also needs to be set up and maintained.

On failed tests: if the requirements change, causing the current tests to fail, that failure is expected, because the previous implementation is no longer valid; if a test didn't fail, it would mean the implementation doesn't fit the requirements...
–
CS01 Oct 27 '11 at 13:18

Case 4 (b) is what test-driven development is all about: you write a failing test, then you extend the product, then you verify that this change makes the test succeed. This protects you against faultily written tests that always succeed or always fail.
–
Kilian Foth Oct 27 '11 at 13:39

@Frank thanks for the answer. There's a lot of insight there. I especially appreciated the distinctions of different causes of failed tests. Additional deployment costs is another excellent point.
–
RationalGeek Oct 27 '11 at 14:17

The amusing thing I've found is that my bug-per-LOC ratio is far worse for tests than it is for real code. I spend more time finding and fixing testing bugs than real ones. :-)
–
Brian Knoblauch Dec 22 '11 at 15:09

failed, because your test code has been written for an older version of your product and is no longer compatible - if your tests are breaking because of this, then likely your tests are testing implementation details rather than behavior. CalculateHighestNumber v1 should still return the same result as CalculateHighestNumber v2.
–
Tom Squires Mar 3 '13 at 22:46
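The behaviour-vs-implementation distinction from the comment above can be sketched as follows. This is a hypothetical Python rendering of the CalculateHighestNumber example; the "v1 used sorted()" detail is invented for illustration.

```python
# Hypothetical sketch: a v2 rewrite of CalculateHighestNumber that
# changes the implementation, not the observable behaviour.

def calculate_highest_number(numbers):
    # v2: a manual scan. Imagine v1 used sorted(numbers)[-1] instead;
    # both versions return the same result for the same input.
    highest = numbers[0]
    for n in numbers[1:]:
        if n > highest:
            highest = n
    return highest

# A behavioural test passes for v1 and v2 alike, so the rewrite
# does not break it:
assert calculate_highest_number([3, 1, 4, 1, 5]) == 5

# An implementation-detail test (e.g. one asserting that sorted() was
# called, or inspecting internal state) would break on the v2 rewrite
# even though no observable behaviour changed -- a brittle test, not
# a real regression.
```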

Having just started trying automated tests in our team, the biggest disadvantage I've seen is that it's very difficult to apply to legacy code that wasn't designed with automated testing in mind. It would undoubtedly improve our code in the long term, but the level of refactoring needed for automated testing while retaining our sanity is a very high barrier to entry in the short term, meaning we're going to have to be very selective about introducing automated unit testing in order to meet our short-term commitments. In other words, it's a lot harder to pay off the credit cards when you're already deep in technical debt.

That's a really good book; I got a lot out of it. Three main points: have a go, a bit at a time - some good tests are better than no tests. Stay in scope; don't refactor everything that needs refactoring at once. Be very clear where the boundaries between testable and untestable code are, and make sure everyone else knows as well.
–
Tony Hopkinson Mar 3 '13 at 23:06

Perhaps the most important disadvantage is ... tests are production code. Every test you write adds code to your codebase that needs to be maintained and supported. Failing to do so leads to tests you don't believe the results of, so you have no other choice. Don't get me wrong - I'm a big advocate of automated testing. But everything has a cost, and this is a big one.

The OP already covered time and obsolete code in the question.
–
P.Brian.Mackey Oct 27 '11 at 13:07

@P.Brian.Mackey actually the time element is subjective. Time taken to code the test is different from time taken to understand what the test requires and code the test correctly.
–
Adam Houldsworth Oct 27 '11 at 13:07

@AdamHouldsworth Thank you, those are some good examples of disadvantages. I hadn't really considered the false-confidence angle.
–
RationalGeek Oct 27 '11 at 14:13

I'd say the main problem with them is that they can provide a false sense of security. Just because you have unit tests, it doesn't mean they're actually doing anything - and that includes properly testing the requirements.

Test Driven Development helps with the first by requiring a failing test before writing the feature code. Now you know that if the feature breaks, the test will break. For the second, complicated test code is a code smell. Again, by writing the test first you can strive to make it simple and put the difficult work into the feature code that fixes the test.
–
David Harkness Mar 3 '13 at 23:17
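The red-green cycle described in the comment above can be sketched in miniature. The `slugify` feature here is hypothetical; the point is only the ordering: the test exists, and demonstrably fails, before the feature does.

```python
import re

# A minimal red-green sketch with a hypothetical slugify() feature.

# Step 1 (red): the test is written first. Run before slugify exists,
# the call below raises NameError -- proof that the test *can* fail.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Step 2 (green): just enough feature code to make the test pass.
def slugify(text):
    text = text.lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)  # collapse non-alphanumerics
    return text.strip("-")

test_slugify()  # now passes
```

Because the test was seen failing first, you know it actually exercises the feature, rather than being a test that always succeeds.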

Code being hard to test is not a code smell. The easiest code to test is a giant chain of function calls masquerading as classes.
–
Erik Reppen Jun 14 '13 at 23:31

The 4th point reminds me of some experience of mine. I worked at a very lean, XP-oriented, Scrum-managed company where unit tests were highly recommended. However, on its path to a leaner, less bureaucratic style, the company just neglected the construction of a QA team - we had no testers. So customers frequently found trivial bugs in some systems, even with test coverage of >95%. So I would add another point:

Automated tests may make you feel that QA and testing are not important.

Also, I was thinking those days about documentation, and cogitated a hypothesis that may be valid (to a lesser extent) for tests too. I just felt that code evolves so quickly that it is pretty hard to write documentation that keeps up with such velocity, so it is more valuable to spend time making code readable than writing heavy, easily outdated documentation. (Of course, this does not apply to APIs, but only to internal implementation.) Tests suffer a bit from the same problem: they may be too slow to write when compared with the tested code. OTOH, this is a lesser problem, because tests warn you when they are outdated, while your documentation will stay silent as long as you do not reread it very, very carefully.

Finally, a problem I find sometimes: automated testing may depend upon tools, and those tools may be poorly written. I started a project using XUL some time ago and, man, it is just painful to write unit tests for such a platform. I started another application using Objective-C, Cocoa and Xcode 3, and the testing model there was basically a bunch of workarounds.

I have other experiences with the disadvantages of automated testing, but most of them are listed in other answers. Nonetheless, I am a vehement advocate of automated testing. It has saved an awful lot of work and headache, and I always recommend it by default. I judge those disadvantages to be mere details when compared to the benefits of automated testing. (It is important to always proclaim your faith after you voice heresies, to avoid the auto-da-fé.)

Beyond very small, well-defined problem sets, it is not possible to create comprehensive tests. There may, and often will, still be bugs in our software that the automated tests simply don't test for. When the automated tests pass, we all too often assume there are no bugs in the code.

I've been a part of automated QA efforts where it took half a day to run the tests, because the tests were slow. If you're not careful with writing your tests, your test suite could turn out this way too. This doesn't sound like a big deal until you're now managing that time: "Oh, I just committed a fix, but it's going to take 4 hours to prove correctness."

Frailty of some test-writing methods

Some testing methods (such as automating the UI) seem to break every time you turn around. It's especially painful if your script, say, hangs the testing process because it's waiting for a button to appear - but the button has been renamed.

These are both things you can work around, with good testing practice: find ways to keep your test suite fast (even if you have to do tricks like distributing test runs across machines/CPUs). Likewise, some care can be taken to try to avoid frail methods of writing tests.
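One common way to take that care with frail UI waits is to poll with a deadline rather than block indefinitely. This is a hypothetical sketch (the `wait_for` helper and the UI-driver call in the usage comment are invented, not from any particular framework): a renamed button then fails the run quickly with a clear error, instead of hanging it.

```python
import time

def wait_for(condition, timeout=10.0, interval=0.5):
    """Poll `condition` until it returns truthy, or fail fast.

    Hypothetical helper: instead of blocking forever on a button that
    was renamed away, the run raises TimeoutError after `timeout`
    seconds with an explicit message.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Hypothetical usage with some UI driver object:
# wait_for(lambda: ui.find_button("Save") is not None, timeout=5)
```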

One of the main disadvantages can be overcome by using self-learning tests. In this situation the expected results are all stored as data, and they can be updated with minimal review by the test-suite user in self-learning mode (the suite shows a diff between the old expected result and the new actual result, and updates the expected result if you press 'y'). This test-suite learning mode needs to be used with caution, so that buggy behaviour is not learned as being acceptable.
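Such a self-learning check is essentially a golden-file (snapshot) comparison. A minimal sketch follows; the function name, file layout, and the "first run learns" policy are all hypothetical, and the `update=True` path is exactly the learning mode that needs human review of the diff.

```python
import json
import os

def check_against_golden(name, actual, update=False):
    """Compare `actual` against a stored expected result.

    Hypothetical golden-file check: on the first run (or with
    update=True, the 'learning mode'), the new result is recorded as
    the expected one -- which is why a human must review the diff,
    or buggy output gets learned as correct.
    """
    path = "golden_%s.json" % name
    if update or not os.path.exists(path):
        with open(path, "w") as f:
            json.dump(actual, f, indent=2, sort_keys=True)
        return True  # learned; should be reviewed by a person
    with open(path) as f:
        expected = json.load(f)
    return actual == expected
```

A typical flow: the first run records the result, later runs compare against it, and a changed result fails until someone consciously re-learns it.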