You can see more (current views, but without explicit pointers back to TCS) at my site for the Black Box Software Testing Course (videos and slides available for free), www.testingeducation.org/BBST

The testing culture back then was largely confirmatory.

In modern testing, the approach to unit testing is largely confirmatory--we write large collections of automated tests that simply verify that the software continues to perform as intended. The tests serve as change detectors--if something changes in other parts of the code and this part now has problems, or if data values that used to be impossible in the real world are now reaching the application, then the change detectors fire, alerting the programmer to a maintenance problem.
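The "change detector" idea can be sketched in a few lines. This is a minimal illustration, not code from the book; the `shipping_cost` function and its business rule are invented for the example.

```python
# A hypothetical business rule: flat 5.00 base fee, 1.50 per kg,
# doubled for express shipping.
def shipping_cost(weight_kg: float, express: bool = False) -> float:
    base = 5.00 + 1.50 * weight_kg
    return base * 2 if express else base

def test_shipping_cost_is_unchanged():
    # These assertions are not hunting for new bugs; they pin down the
    # current intended behavior. If an edit elsewhere in the codebase
    # alters it, the test fails -- the "change detector" firing.
    assert shipping_cost(0) == 5.00
    assert shipping_cost(2) == 8.00
    assert shipping_cost(2, express=True) == 16.00

test_shipping_cost_is_unchanged()
```

The test confirms expected behavior rather than probing for unknown failures, which is exactly the confirmatory mindset described above.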

I think the confirmatory mindset is appropriate for unit testing, but imagine a world in which all of system testing was confirmatory (for folks who make a distinction, please interpret "system integration testing" and "acceptance testing" as included in my comments on system testing). In that world, the point of testing was to confirm that the program met its specifications, and the dominant approach was to build a zillion (or at least a few hundred) system-level regression tests that mapped parts of the spec to behaviors of the program. (I think spec-to-behavior confirmation is useful, but I think it is a small portion of a larger objective.)

There are still test groups that operate this way, but it is no longer the dominant view. Back then, it was. I wrote emphatically and drew sharp contrasts to make a point to people who were consistently being trained in this mindset. Today, some of the sharp contrasts (including the one quoted here) are outdated. They get misinterpreted as attacks on the wrong views.

As I see it, software testing is an empirical process for learning quality-related information about a software product or service.

A test should be designed to reveal useful information.

Back then, by the way, no one talked about testing as a method for revealing "information". Back then, testing was either for (some version of ...) finding bugs or for (some version of ... ) verifying (checking) the program against specifications. I don't think that the assertion that tests are for revealing useful information came into the testing vocabulary until this century.

Imagine rating tests in terms of their information value. A test that is very likely to teach us something we don't know about the software would have a very high information value. A test that is very likely to confirm something that we already expect and that has already been demonstrated many times before, would have a low information value. One way to prioritize tests is to run higher information value tests before lower information value tests.

If I was to oversimplify this prioritization so that it would attract the attention of a programmer, project manager, or process manager who is clueless about software testing, I would say "A TEST THAT IS NOT DESIGNED TO REVEAL A BUG IS A WASTE OF TIME." It's not a perfect translation, but for readers who cannot or will not understand any subtlety or qualification, that's as close as it's going to get.

Back then, and I see it again here, some of the people who don't understand testing would respond that a test designed to find corner cases is a waste of time compared to a test of a major use of a major function. They don't understand two things.

First, by the time testers find time to check boundary values, the major uses of the major functions have already been exercised several times. (Yes, there are exceptions, and most test groups will pay careful attention to those exceptions.)

Second, the reason to test with extreme values is that the program is more likely to fail with extreme values. If it doesn't fail at the extreme, you test something else. This is an efficient rule. On the other hand, if it DOES fail at an extreme value, the tester might stop and report a bug, or the tester might troubleshoot further, to see whether the program fails in the same way at more normal values. Who does that troubleshooting (the tester or the programmer) is a matter of corporate culture. Some companies budget the tester's time for this, some budget the programmer's, and some expect programmers to fix corner-case bugs whether they are generalizable or not, so that troubleshooting is not relevant.

The common misunderstanding -- that testers are wasting time (rather than maximizing efficiency) by testing extreme values -- is another reason that "A test that is not designed to reveal a bug is a waste of time" is an appropriate message for testers. It's a counterpoint to the encouragement from some programmers to (in effect) never run tests that might challenge the program. The message is oversimplified, but the entire discussion is oversimplified.
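Boundary-value testing can be made concrete with a small sketch. The `is_valid_percentage` function here is a hypothetical example, not anything from the discussion above; the point is only where the probes are aimed.

```python
# Hypothetical validator: accepts whole-number percentages 0..100.
def is_valid_percentage(value: int) -> bool:
    return 0 <= value <= 100

# Probe the extremes and the values just outside them first -- these
# are the likeliest places for an off-by-one failure. By the time a
# tester gets here, mid-range "major use" values like 50 have usually
# been exercised many times already.
boundary_cases = [
    (-1, False),   # just below the lower bound
    (0, True),     # lower bound itself
    (1, True),     # just above the lower bound
    (99, True),    # just below the upper bound
    (100, True),   # upper bound itself
    (101, False),  # just above the upper bound
]

for value, expected in boundary_cases:
    assert is_valid_percentage(value) == expected
```

If a bug hides anywhere in this function, it is overwhelmingly likely to be at one of these six values, which is why they are cheap, high-yield probes.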

By the way, "information value" can't be the only prioritization system. It's not my rule when I design unit test suites. It's not my rule when I design build verification tests (aka sanity checks). In both of those cases, I'm more interested in types of coverage than in the power of the individual tests. There are other cases (e.g. high-volume automated tests that are cheap to set up, run and monitor) where power of individual tests is simply irrelevant to my design. I'm sure you can think of additional examples.

But as a general rule, if I could state only one rule (e.g. speaking to an executive whose head explodes if he tries to process more than one sentence), it would be that a low information-value test is usually a waste of time.

+1 for taking the time to answer a question you're the authoritative source for, as well as validating my use of the term "Build Verification Tests" which so many people look at me funny for using... Always nice to see people of your stature taking time to help people around here
– Jimmy Hoffa, Mar 24 '13 at 20:20

When I read a question that starts with "In Such and Such Book, Mr. Famous Computer Scientist said that..." and the following answer starts with "When I wrote Such and Such Book...", I get really excited. Bravo, Mr. Cem Famous-Computer-Scientist Kaner, for responding to the little people of the internet.
– Eric G, Mar 28 '13 at 21:54

Eric G: I think if you re-read that, you'll see Cem states it as part of helping readers understand that his view on the subject has evolved over time. Or you can just carry on and ignore subtlety and qualifications, to paraphrase Cem. (And I take "qualifications" not as his credentials, but as exceptions.)
– Jim Holmes, May 27 '13 at 17:12

Your quote reminds me of something I've observed about science: one can't prove (or even meaningfully support) a scientific theory by conducting experiments one expects to yield results consistent with the theory; the way to support a theory is to make a bona fide effort to devise experiments that would refute it, and to fail to do so.
– supercat

The idea is, according to Kaner, "since you will run out of time before running out of test cases, it is essential to use the time available as efficiently as possible."

The concept behind the quote you ask about is presented and explained in good detail in the book Testing Computer Software by Cem Kaner, Jack Falk, and Hung Quoc Nguyen, in the chapter "THE OBJECTIVES AND LIMITS OF TESTING":

SO, WHY TEST?

You can't find all the bugs. You can't prove the program correct, and you don't want to. It's expensive, frustrating, and it doesn't win you any popularity contests. So, why bother testing?

THE PURPOSE OF TESTING A PROGRAM IS TO FIND PROBLEMS IN IT

Finding problems is the core of your work. You should want to find as many as possible; the more serious the problem, the better.

Since you will run out of time before running out of test cases, it is essential to use the time available as efficiently as possible. Chapters 7, 8, 12, and 13 consider priorities in detail. The guiding principle can be put simply:

A test that reveals a problem is a success. A test that did not reveal a problem was a waste of time.

Consider the following analogy, from Myers (1979). Suppose that something's wrong with you. You go to a doctor. He's supposed to run tests, find out what's wrong, and recommend corrective action. He runs test after test after test. At the end of it all, he can't find anything wrong. Is he a great tester or an incompetent diagnostician? If you really are sick, he's incompetent, and all those expensive tests were a waste of time, money, and effort. In software, you're the diagnostician. The program is the (assuredly) sick patient...

You see, the point of the above is that you should prioritize your testing wisely. Testing is expected to take a limited amount of time, and it's impossible to test everything in the time given.

Imagine that you spent a day (week, month) running tests, found no bugs, and let some bug slip through because you didn't have time to run a test that would have revealed it. If this happens, you can't just say "it's not my fault because I was busy running other tests" to justify the miss - if you say so, you will still be held responsible.

You wasted time running tests that did not reveal bugs and, because of this, you missed a test that would have found a bug.

(In case you wonder: misses like the above are generally unavoidable no matter how hard you try, and there are ways to deal with them, but that would be more a topic for a separate question... and probably a better fit for SQA.SE.)

This answer correctly represents his position, but it's worth pointing out that plenty of people think his position is wrong. Given the choice between a test that demonstrates the most important function in the app works correctly (acceptance testing, if you will) and a test that finds a trivial bug (alignment off by one pixel) in a rarely used corner of the app, I know which I would choose in my limited time. And for the doctor analogy: if I am going FOR A CHECKUP rather than in response to symptoms, confirming heart is good, lungs are good etc etc is a fine outcome. So there.
– Kate Gregory, Mar 22 '13 at 14:24

@KateGregory I agree, I think the same. I personally find his opinion wrong; we test mainly to gather information.
– user970696, Mar 22 '13 at 14:40


@KateGregory agree - I don't think it's accurate to label any passed test as a total waste. Though I think one point he makes is timeless: if a bug slips through release testing, QA will need something more than "oh, but we were busy running other tests" to cover their back. I've been through this as a tester myself in the past, and I see it around now that I'm a developer, and I don't think it will ever fade away.
– gnat, Mar 22 '13 at 14:41

Tests that cannot reveal anything new are a waste of time. That includes the situation where you already have some tests (it does not matter whether they are automated or just on a checklist) and you add new tests which validate essentially the same cases you already have. Your new tests won't find any more errors than the existing ones.

Such a situation can happen, for example, if you just throw a list of randomly - I could even say "brainlessly" (forgive me that word) - chosen test cases at your program, without thinking about whether they check a new edge case or a new equivalence class of your input data, or whether they increase code coverage relative to the already written tests.

If you write a test for a function that validates emails, and in the test you only provide valid emails, that test is almost useless. You would have to test this function with "any" possible string: invalid emails, overly long emails, unicode characters (áêñç...), and so on.

If you code a test that only checks that name@dom.com returns true and name@com returns false, that test is much the same as no test at all.
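The email example above can be sketched in code. The validator below is deliberately naive and purely illustrative (not a real email parser); the point is the shape of the test, which exercises invalid equivalence classes rather than just happy-path inputs.

```python
import re

# Deliberately naive, illustrative validator: one "@", no whitespace,
# and at least one dot in the domain part. NOT a real email check.
def looks_like_email(s: str) -> bool:
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", s) is not None

# Valid inputs alone would make this test nearly worthless; the
# invalid classes do most of the bug-finding work.
assert looks_like_email("name@dom.com")       # plain valid address
assert looks_like_email("áêñç@dóm.com")       # unicode characters
assert not looks_like_email("name@com")       # no dot in domain part
assert not looks_like_email("")               # empty string
assert not looks_like_email("na me@dom.com")  # embedded whitespace
assert not looks_like_email("name@@dom.com")  # doubled "@"
```

Each invalid case here probes a different equivalence class of bad input, which is what makes the test informative rather than redundant.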