I've been going through PHPUnit's docs and came across the following quote:

You can always write more tests. However, you will quickly find that only a fraction of the tests you can imagine are actually useful. What you want is to write tests that fail even though you think they should work, or tests that succeed even though you think they should fail. Another way to think of it is in cost/benefit terms. You want to write tests that will pay you back with information.
--Erich Gamma

It got me wondering: how do you determine what makes one unit test more useful than another, aside from the cost/benefit consideration stated in that quote? And how do you go about deciding which pieces of your code to create unit tests for? I'm asking because another of those quotes said:

So if it's not about testing, what's it about?
It's about figuring out what you are trying to do before you run off half-cocked to try to do it. You write a specification that nails down a small aspect of behaviour in a concise, unambiguous, and executable form. It's that simple. Does that mean you write tests? No. It means you write specifications of what your code will have to do. It means you specify the behaviour of your code ahead of time. But not far ahead of time. In fact, just before you write the code is best because that's when you have as much information at hand as you will up to that point. Like well done TDD, you work in tiny increments... specifying one small aspect of behaviour at a time, then implementing it.
When you realize that it's all about specifying behaviour and not writing tests, your point of view shifts. Suddenly the idea of having a Test class for each of your production classes is ridiculously limiting. And the thought of testing each of your methods with its own test method (in a 1-1 relationship) will be laughable.
--Dave Astels

The important section of that is

*And the thought of testing each of your methods with its own test method (in a 1-1 relationship) will be laughable.*

So if creating a test for each method is 'laughable', how/when do you choose what you write tests for?

4 Answers

How many tests per method?

Well, the theoretical and highly impractical maximum is the N-path complexity (assuming the tests all cover different ways through the code ;)). The minimum is ONE! Per public method, that is; we don't test implementation details, only the external behavior of a class (return values & calls to other objects).

You quote:

*And the thought of testing each of your methods with its own test method (in a 1-1 relationship) will be laughable.*

and then ask:

So if creating a test for each method is 'laughable', how/when do you chose what you write tests for?

But I think you misunderstood the author here:

The idea of having one test method per one method in the class to test is what the author calls "laughable".

(For me at least) it's not about 'less', it's about 'more'.

So let me rephrase it as I understood him:

And the thought of testing each of your methods with ONLY ONE METHOD (its own test method in a 1-1 relationship) will be laughable.

To quote your quote again:

When you realize that it's all about specifying behaviour and not writing tests, your point of view shifts.

When you practice TDD you don't think:

I have a method calculateX($a, $b); and it needs a test testCalculateX that tests EVERYTHING about the method.

What TDD tells you is to think about what your code SHOULD DO like:

I need to calculate the bigger of two values (first test case!) but if $a is smaller than zero then it should produce an error (second test case!) and if $b is smaller than zero it should .... (third test case!) and so on.
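Sketched in plain PHP (the Calculator class and the non-negative rule are made up here, just to mirror the cases above), each behavior becomes its own test case:

```php
<?php

// Hypothetical code under test, mirroring the cases above.
final class Calculator
{
    public function biggerOf(int $a, int $b): int
    {
        if ($a < 0) {
            throw new InvalidArgumentException('$a must not be negative');
        }
        if ($b < 0) {
            throw new InvalidArgumentException('$b must not be negative');
        }

        return $a >= $b ? $a : $b;
    }
}

// In PHPUnit each case would be its own test method, e.g.
// testReturnsTheBiggerValue(), testRejectsNegativeA(), testRejectsNegativeB()
// -- three test cases for one production method, not a 1-1 mapping.
```
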

You want to test behaviors, not just single methods without context.

That way you get a test suite that is documentation for your code and REALLY explains what it is expected to do, maybe even why :)

How do you go about deciding which piece of your code you create unit tests for?

Well, everything that ends up in the repository or anywhere near production needs a test. I don't think the author of your quotes would disagree with that, as I tried to state above.

If you don't have a test for it, it gets way harder (more expensive) to change the code, especially if it's not you making the change.

TDD is a way to ensure that you have tests for EVERYTHING, but as long as you WRITE the tests it's fine. Usually writing them on the same day helps, since you are not going to do it later, are you? :)

Response to comments:

a decent amount of methods can't be tested within a particular context because they either depend or are dependent upon other methods

Well, there are three things those methods can call:

Public methods of other classes

We can mock out other classes so that we have a defined state there. We are in control of the context, so that's not a problem.
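For illustration, here is a hand-rolled test double in plain PHP (Mailer and OrderProcessor are made-up names; PHPUnit's createMock() generates this kind of object for you):

```php
<?php

// Hypothetical collaborator the class under test depends on.
interface Mailer
{
    public function send(string $to, string $body): bool;
}

// Hand-rolled double: records calls and always "succeeds",
// giving the test a fully defined context.
final class MailerStub implements Mailer
{
    public array $sent = [];

    public function send(string $to, string $body): bool
    {
        $this->sent[] = [$to, $body];

        return true;
    }
}

// Hypothetical class under test.
final class OrderProcessor
{
    public function __construct(private Mailer $mailer)
    {
    }

    public function process(string $customerEmail): void
    {
        // ... business logic would live here ...
        $this->mailer->send($customerEmail, 'Your order is confirmed.');
    }
}
```

The test then asserts against the double's recorded state instead of sending real mail.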

Protected or private methods on the same class

Anything that isn't part of the public API of a class doesn't get tested directly, usually.

You want to test behavior, not implementation. Whether a class does all its work in one big public method or in many smaller protected methods that get called is implementation. You want to be able to CHANGE those protected methods WITHOUT touching your tests, because your tests should only break when your changes change behavior! That's what your tests are there for: to tell you when you break something :)

Public methods on the same class

That doesn't happen very often, does it? And if it does, as in the following example, there are a few ways of handling it:
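The original code sample didn't survive the migration; a sketch of the kind of class meant here (only Stuff, setBla(), execute() and the values 12 & 14 appear in the text, everything else is assumed):

```php
<?php

// Sketch of the kind of class the answer discusses.
final class Stuff
{
    private int $bla = 0;
    private int $blub = 0;

    // The int type declaration already makes setBla() throw a
    // TypeError when you pass a string -- testable separately.
    public function setBla(int $value): void
    {
        $this->bla = $value;
    }

    public function setBlub(int $value): void
    {
        $this->blub = $value;
    }

    public function execute(): int
    {
        // 12 and 14 are each allowed on their own but (for whatever
        // reason) must not be combined -- that combination is one test case.
        if ($this->bla === 12 && $this->blub === 14) {
            throw new DomainException('12 and 14 do not work together');
        }

        return $this->bla + $this->blub;
    }
}
```
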

That the setters exist and are not part of the execute method signature is another topic ;)

What we can test here is whether execute blows up when we set the wrong values. That setBla throws an exception when you pass a string can be tested separately, but if we want to test that those two individually allowed values (12 & 14) don't work TOGETHER (for whatever reason), then that's one test case.

If you want a "good" test suite you can, in PHP, maybe(!) add a @covers Stuff::execute annotation to make sure you only generate code coverage for this method; the other stuff that is just setup needs to be tested separately (again, if you want that).

So the point is: maybe you need to create some of the surrounding world first, but you should be able to write meaningful test cases that usually span only one or maybe two real functions (setters don't count here). The rest can be either mocked away or tested first and then relied upon (see @depends).
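The @depends idea can be sketched in plain PHP: one test builds something, and a dependent test receives its return value instead of rebuilding it (the stack example and test names here are illustrative, not from the answer):

```php
<?php

// In PHPUnit these would be two test methods, the second annotated
// with /** @depends testStackStartsEmpty */.
function testStackStartsEmpty(): array
{
    $stack = [];
    assert(count($stack) === 0);

    return $stack; // PHPUnit hands this to the dependent test
}

function testPushAddsElement(array $stack): array
{
    $stack[] = 'foo';
    assert(count($stack) === 1);

    return $stack;
}

// Simulate the dependency chain PHPUnit would run for us.
testPushAddsElement(testStackStartsEmpty());
```
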

Note: The question was migrated from SO and was initially about PHP/PHPUnit; that's why the sample code and references are from the PHP world. I think this is also applicable to other languages, as PHPUnit doesn't differ that much from other xUnit testing frameworks.

Very detailed and informative... You said "You want to test behaviors, not just single methods without context." Surely a decent amount of methods can't be tested within a particular context because they either depend on or are depended upon by other methods, so a useful test condition would only be contextual if the dependents were being tested as well? Or am I misinterpreting what you mean?
– zcourts Jul 6 '11 at 14:38

@robinsonc494 I'll edit in an example that maybe explains a little better where I'm going with this
– edorian Jul 6 '11 at 14:45

thanks for the edits and example, it certainly helps. I think my confusion (if you can call it that) was that even though I read about testing for "behaviour", somehow I just inherently (perhaps?) thought of test cases that focused on implementation.
– zcourts Jul 6 '11 at 15:14

@robinsonc494 Maybe think of it this way: if you punch someone, he either punches back, calls the cops or runs away. That's behavior. It's what the person does. The fact that he uses his muscles, triggered by little electrical charges from his brain, is implementation. If you want to test someone's reaction you punch him and see if he acts like you expect. You don't put him in a brain scanner and check whether the impulses are sent to the muscles. Same goes for classes, pretty much ;)
– edorian Jul 6 '11 at 15:18

Testing and Unit Testing are not the same things. Unit Testing is a very important and interesting subset of Testing overall. I'd claim that the focus of Unit Testing makes us think about this kind of testing in ways that somewhat contradict the quotes above.

First, if we follow TDD, or even DTH (i.e. develop and test in close harmony), we use the tests we write as a focus for getting our design right. By thinking about corner cases and writing tests accordingly we avoid the bugs getting in in the first place, so in fact we write tests that we expect to pass. (OK, right at the start of TDD they fail, but that's just an ordering artefact; when the code is done we expect them to pass, and most do, because you've thought about the code.)

Second, the Unit Tests really come into their own when we refactor. We change our implementation but expect the answers to stay the same; the Unit Test is our protection against breaking our interface contract. So once again we expect the tests to pass.
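A tiny sketch of that protection (the function and both implementations are hypothetical): the contract is pinned down once, and the implementation can be swapped underneath it without touching the test:

```php
<?php

// Version 1: hand-rolled loop.
function totalV1(array $values): int
{
    $total = 0;
    foreach ($values as $value) {
        $total += $value;
    }

    return $total;
}

// Version 2 after refactoring: different implementation, same contract.
// Any test written against totalV1's behavior must pass unchanged here.
function totalV2(array $values): int
{
    return array_sum($values);
}
```
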

This does imply that for our public interface, which presumably is stable, we need clear traceability so that we can see that every public method is tested.

To answer your explicit question: Unit Tests for the public interface have value.

Edited in response to a comment:

Testing private methods? Yes, we should, but if we have to leave something untested, that's where I'd compromise. After all, if the public methods are working, can those bugs down in the private stuff be so important? Pragmatically, the churn tends to happen down in the private stuff: you work hard to maintain your public interface, but if the things you depend on change, the private stuff may change. At some point we may find that maintaining the internal tests is a lot of effort. Is that effort well spent?

So just to make sure I understand what you're saying: when testing, focus on the public interface, right? Assuming that's correct... isn't there a bigger possibility of leaving bugs in private methods/interfaces that you didn't write unit tests for? Some tricky bug in the untested "private" interface could possibly lead to a test passing when it should really have failed. Am I wrong in thinking so?
– zcourts Jul 6 '11 at 14:30

Using code coverage you can tell when code in your private methods isn't being executed while testing your public methods. If your public methods are fully covered, any uncovered private methods are obviously unused and can be removed. If not, you need more tests for your public methods.
– David Harkness Jul 7 '11 at 2:02

Unit tests should be part of a larger testing strategy. I follow these principles in choosing what types of tests to write and when:

Focus on writing end-to-end tests. You cover more code per test than with unit tests and so get more testing bang for the buck. Make these the bread-and-butter automated validation of your system as a whole.

Drop down to writing unit tests around nuggets of complicated logic. Unit tests are worth their weight in situations where end-to-end tests would be difficult to debug or unwieldy to write for adequate code coverage.

Wait until the API you are testing against is stable to write either type of test. You want to avoid having to refactor both your implementation and your tests.

Rob Ashton has a fine article on this topic, which I drew heavily from to articulate the principles above.

+1 - I don't necessarily agree with all your points (or the article's points), but where I do agree is that most unit tests are useless if done blindly (TDD approach). However, they are extremely valuable if you are smart in deciding what is worthy of spending time on. I agree entirely that you get far, far more bang for the buck in writing higher-level tests, in particular subsystem-level automated testing. The problem with end-to-end automated tests is that they would be hard, if not entirely impractical, to write for a system of any real size/complexity.
– Dunk Mar 26 '13 at 14:07

I tend to follow a different approach to unit testing that seems to work well. Instead of thinking of a unit test as "testing some behavior", I think of it more as "a specification which my code must follow". In this way, you can basically declare that an object should behave in a certain way, and by assuming that elsewhere in your program, you can be fairly certain it's relatively bug-free.

If you're writing a public API, this is extremely valuable. However, you'll always need a good dose of end-to-end integration tests as well, because approaching 100% unit test coverage is usually not worth it and will miss things that most would consider "untestable" by unit testing methods (mocking, etc.).