Re: How to make sure test case's validity

Daniel, what do you mean by "valid"? Do you mean the test is reusable, in a standard format? Or that the test itself actually tests a valid feature of the application? Or that the designed tests cover all of the necessary features of the application?

Re: How to make sure test case's validity

How simple or complex is the case? Is it generic, leaving it up to the tester to exercise their curiosity? Do you have to prove e.g. font consistency, tab order, accelerator keys, or is it enough that there is at least one way to complete the feature and that the feature behaves correctly? Does that test case in isolation prove database integrity, performance, stability and security?

Unless you know where the goal is, you're a quarterback throwing blind...

Re: How to make sure test case's validity

[ QUOTE ]
Thanks, Linda.
Sorry for my confusing and unclear question.
By "valid" I mean: "the test itself actually tests a valid feature of the application — is the test case correct and suitable for it?"

thanks

[/ QUOTE ]

If you have requirements, then you know what the application is supposed to do. Sometimes it is judgement (as in the case where I know that a field for charging additional premium shouldn't allow a negative value). Sometimes you have to work with the developer to find out whether a response or value is valid, and sometimes you have to work with the business customer.

As for suitability, it all depends on what you are trying to test. Is it suitable for me to try to put a negative value in the additional premium field mentioned earlier? Maybe, maybe not. That is often a matter of discretion and judgement.
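The negative-premium check above can be sketched as a one-line validation rule; the function and field names here are illustrative assumptions, not anything from a real system:

```python
# Hypothetical validator for the "additional premium" field discussed above.
def is_valid_premium(amount: float) -> bool:
    # An additional premium charge should never be negative.
    return amount >= 0

print(is_valid_premium(125.50))  # True
print(is_valid_premium(-10))     # False
```

A negative-value test against this field is "suitable" precisely because the rule is cheap to state and cheap to violate.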

Re: How to make sure test case's validity

You've asked a very hard question, Daniel, and I'll answer it to the best of my ability.

Ensuring a test is actually doing what it needs to do is complex. What some do is map each test back to requirements. That way, they can ensure each requirement is associated with one or more tests. This is called a traceability matrix.
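In its simplest form, a traceability matrix is just a mapping from tests to requirement IDs, which lets you flag requirements with no test at all. This is a minimal sketch with invented requirement and test names:

```python
# Hypothetical requirement IDs and test-to-requirement links.
requirements = {"REQ-1", "REQ-2", "REQ-3"}

# Each test case records which requirement(s) it exercises.
tests = {
    "test_login_valid": {"REQ-1"},
    "test_login_locked_account": {"REQ-1", "REQ-2"},
}

# Requirements that no test claims to cover.
covered = set().union(*tests.values())
uncovered = requirements - covered

print(sorted(uncovered))  # ['REQ-3']
```

The gap it finds is exactly the kind of hole a traceability matrix is meant to expose, though, as noted next, coverage on paper is not the same as a useful test.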

This is not really enough and does not do justice to your question, however. Just because I have one document associated with another document does not mean my document does anything useful. For example, maybe the spec says "a valid state must be entered". If that is all I test, I will never test any of the implied conditions, like what happens when I enter no state? Or an invalid state? Etc.
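The "valid state" example above can be turned into a table of cases that covers the stated requirement plus the implied ones (empty input, an invalid code). The validator and the state list are assumptions made up for illustration:

```python
# Hypothetical implementation of "a valid state must be entered".
VALID_STATES = {"CA", "NY", "TX"}

def validate_state(value: str) -> bool:
    return value.strip().upper() in VALID_STATES

# The spec only names the valid case; the other rows are the
# implied conditions a traceability link alone would never force us to test.
cases = [
    ("CA", True),   # stated requirement: a valid state
    ("",   False),  # implied: no state entered
    ("ZZ", False),  # implied: an invalid state
    ("ny", True),   # implied: case-insensitive entry
]
for value, expected in cases:
    assert validate_state(value) == expected
print("all cases pass")
```

Writing the implied rows down is the extrapolation step described below: the document gives you one row, and the tester supplies the rest.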

To combat that kind of problem, we make sure everyone here is trained on how to extrapolate tests from documentation or from the system itself. Once test cases are written, they are handed off to another person to confirm they can be understood and run by someone other than the author. We use agile techniques on some projects, and we make sure our staff are trained in those as well.

In addition, every new person on the staff has a mentor, and the first time they write tests for us, the mentor analyzes them in detail and makes suggestions. As this is time-consuming, we only do it once. After that, they're on their own, and we won't revisit their work unless an inexplicably high number of problems turns up in production for the functions they tested.

Another key to "how well did we do with these tests?" is how the system performs in production. If we have less than a 10% error rate (and none of the errors are catastrophic), we know we've done a good job and the tests cover what they need to cover. If many errors are found in production, we go back and figure out how or why we missed them. Right now, our error rate runs at about 4%.
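One common way to read that production figure is as a defect escape rate: defects found in production divided by all defects found. The counts below are invented purely to show the arithmetic:

```python
# Hypothetical defect counts for one release cycle.
found_in_test = 96
found_in_production = 4

escape_rate = found_in_production / (found_in_test + found_in_production)
print(f"{escape_rate:.0%}")  # 4%
```

An escape rate under the team's threshold (10% here) suggests the test set covers what it needs to; a spike is the trigger to go back and ask why those defects slipped through.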

Does that answer your question? Or were you looking for something else?