5 Answers

Of course a passing unit test does not guarantee a functioning system. A buggy unit test could produce a false negative. The system could also use the component in a way the unit test does not, e.g. in a way the component author did not anticipate, or in a way the unit test author did not anticipate.
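As a toy illustration of that point (my own sketch, not from the answer): a unit test can pass while the system still fails, because the system calls the component with input the test never exercises.

```python
def parse_amount(text):
    # Component under test: parses a money amount like "12.50".
    # Hidden fault: it does not handle thousands separators.
    return float(text)

# Unit test written by the component author -- it passes.
assert parse_amount("12.50") == 12.50

# The system, however, feeds it user input with a separator:
try:
    parse_amount("1,250.00")   # raises ValueError at system level
except ValueError:
    print("system-level failure the unit test never exercised")
```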

You can also think about this in terms of the conditions that are necessary and sufficient to cause the system problem:

Neither necessary nor sufficient: the component bug may be real but it has nothing to do with the system problem.

Necessary but insufficient: the system problem is a result of a confluence of bugs, but fixing this bug eliminates the system problem.

Sufficient but unnecessary: several bugs cause the problem; fixing this bug reduces the system problem's frequency but does not eliminate it.

Necessary and sufficient: the component bug -- and no others -- causes the problem.
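The four cases can be sketched as toy predicates (illustrative code of my own; `bug_a` stands for the component bug just fixed, and each function says whether the system problem still occurs):

```python
def failure_nec_and_suf(bug_a):            # necessary and sufficient
    return bug_a                           # this bug alone causes the problem

def failure_nec_not_suf(bug_a, bug_b):     # necessary but insufficient
    return bug_a and bug_b                 # needs a confluence of bugs

def failure_suf_not_nec(bug_a, bug_b):     # sufficient but unnecessary
    return bug_a or bug_b                  # other bugs also trigger it

def failure_neither(bug_b):                # neither: bug_a is irrelevant
    return bug_b

# Fixing bug_a (setting it to False):
assert failure_nec_and_suf(False) is False        # problem eliminated
assert failure_nec_not_suf(False, True) is False  # also eliminated
assert failure_suf_not_nec(False, True) is True   # frequency reduced, not gone
assert failure_neither(True) is True              # unaffected by the fix
```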

Here are some conditions that motivate me toward additional testing:

The developer writes a lot of bugs.

The developer is not familiar with the component, e.g. the component author is unavailable to fix it themselves.

The developer had to fix the bug under schedule pressure.

The component has had a lot of bugs, e.g. because it is complicated and "hard to get right".

If the component breaks, the impact on the customer is significant (e.g. the wrong amount of money is deducted from their paycheck, the wrong data is deleted, or the site is no longer secure).

Conversely, here are some conditions that make me comfortable with stopping at the unit test:

The developer is familiar with the component and their code tends to work the first time.

The component touches (or is used by) only a small number of places in the system, and I am comfortable that the unit test models how the system behaves in those places.

Another thing I try to remember is that while we would like our testing process to be perfect, mistakes will happen. The important thing is to be honest about your mistakes, try to understand what went wrong, and adjust your process accordingly.

I agree with your points! However, most of your points relate to a faulty component, an incompetent developer, etc., while there might also be reasons on the tester's side, e.g. that the tester isolated the bug incorrectly, so there is no cause-effect relationship between the cause I found and the defect that occurred. A particular case of that would be multiple causes of the defect in different components, of which we fixed only one.
–
dzieciou Dec 15 '12 at 19:09

Joe, those are great questions, but I think I may lack the instruments to answer them. My feeling is that systems thinking is one such instrument for understanding how system parts relate to each other. What would be other useful instruments here?
–
dzieciou Dec 15 '12 at 18:55

You have to use your brain, your knowledge of the system under test, your experience with the developer(s) in question, opinions of others, and anything else that proves useful. Testing is mostly an intellectual activity.
–
Joe Strazzere Dec 15 '12 at 20:29

There are systems at (at least) two different scopes here: The end-to-end system and the unit, which is also a system.

When I'm analyzing a "defect," I like to think in terms of three parts: Failure, fault, and conditions. The failure is the system's production of incorrect results. The fault is the erroneous element of the system that, under certain conditions, leads to the failure. Though the fault is always there in the system, I may or may not observe a failure, depending on the conditions.

Given that, I would say you have not reproduced "the defect" in a unit test. That is, you have not reproduced the same failure. You have (perhaps) isolated the fault, and you have produced a different failure: a failure at the scope of the unit.

No unit test can verify end-to-end behavior. A unit test may give you confidence in the end-to-end behavior, but your confidence comes from inference, not observation. And your inference is based on your mental model of the system.
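To make the fault/failure/conditions distinction concrete, here is a small illustrative sketch (the scenario and names are my own, not from the answer): the fault is always in the code, but a failure is only observed under conditions that make the fault matter.

```python
def average(values):
    # Fault: divides by len(values) with no guard for an empty list.
    return sum(values) / len(values)

# Conditions under which the fault stays latent: non-empty input.
assert average([2, 4]) == 3.0          # no failure observed

# Conditions that expose the fault: empty input.
try:
    average([])
except ZeroDivisionError:
    print("failure observed: same fault, different conditions")
```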

So the wisdom of relying only on the unit test rests on a few questions: How good is your mental model of the system? How valid is your reasoning, given your model?

If people are talking about "the same defect" at different scopes, that tells me they have a distorted model of the system. In particular, their model does not distinguish between a unit failure and an end-to-end system failure. I would be skeptical of any inferences people make based on a model that is distorted in that way.

I agree failures at unit and end-to-end levels are two different failures.
–
dzieciou Dec 16 '12 at 8:08

But you can build your inference model based on observation, can't you? If you observe that removing a fault X in component Y removed the failure Z under conditions W, then you may suspect there is a cause-effect relationship between those elements. As one learns the relationships between unit failures and system failures, his or her mental model of the system becomes more realistic and thus more credible.
–
dzieciou Dec 16 '12 at 8:18

Have you ever had a situation where something passed a system-level test but not a unit-level test?
–
dzieciou Dec 17 '12 at 8:43

Yes, but only because the system tests were incomplete and not automated.
–
k3b Dec 17 '12 at 13:34

I should add here that the faulty behavior in one unit case caused the app to behave correctly in other cases. The incorrect behavior in this case was caused by the "wrong" but intended behavior elsewhere.
–
Mark0978 Dec 17 '12 at 19:59

My point is: "Do not trust the developers." They are not lying, but they believe they are right even when they are wrong. And most developers try to find the easiest way to fix the issue, without looking around for other issues that may be introduced during the fix.

You may run the unit tests to check that there are no problems, but if the problem was found during a system workflow, you must retest it with the same workflow, because new bugs may appear after the fix, or the fix may be wrong even though the unit test passes.
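A minimal sketch of that point, with hypothetical names of my own: the fix passes its unit test, yet the workflow that originally exposed the bug chains several steps and still fails, which is why it must be re-run end to end.

```python
def normalize(s):
    # Fixed component: now trims whitespace (the reported bug).
    return s.strip()

def lookup(db, key):
    # Second step of the workflow: find the normalized key.
    return db.get(key)

# Unit test for the fix -- passes.
assert normalize("  alice ") == "alice"

# The original workflow: normalize, then look up. It still fails because
# the database keys are lower-case and normalize() does not lower-case.
db = {"alice": 1}
assert lookup(db, normalize("  Alice ")) is None   # workflow still broken
print("unit test passed, workflow still fails")
```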


It is really sad to believe that people still promote a "don't trust developers" attitude. I trust my developers, and they trust me. Dev and test should have a symbiotic relationship of mutual trust and respect. Also, any dev who only looks for the easiest fix without considering the ramifications of that fix is either lazy or incompetent. It is unfortunate if that is your experience, but it is certainly not the behavior of "most" developers I've met.
–
Bj Rollison Dec 17 '12 at 0:50

Bj, please do not take my words in a negative context. Most of the developers I've met are great, professional people. But they are people. OK, I will rephrase: do not trust any people, even yourself. When I do something important, I always re-check my work. Sometimes small mistakes at the beginning lead to a great waste of time at the end.
–
Dmytro Zharii Dec 17 '12 at 8:38

And yes, developers are lazy bi**hes. The laziest ones write unit tests just to avoid doing manual checks, create build automation because they were bored of doing it manually, and write in C#/Java and use JavaScript frameworks instead of writing everything in C++ or Assembler. Yes, all developers should be lazy.
–
Dmytro Zharii Dec 17 '12 at 8:41

In our team, from time to time we testers do peer reviews of unit tests written by devs. That's one of the ways to "make sure".
–
dzieciou Dec 17 '12 at 12:41