I have a question about best practices for writing test cases. At my job, I am using a closed-source, third-party tool that parses files into a database. Since this is a third-party tool, is it necessary to write a test suite for it? Should we limit our tests to only our own code, or should we also test third-party code in an attempt to increase reliability? We test the data for accuracy before it hits the parser, and the data is tested again once it is pulled out of the database and used in other parts of the system. We have been using the tool in development for about a month with no unexpected results, so we have the smoke test covered; now I'm wondering about the need for more formal testing.

8 Answers

At some point you have to trust the third-party software you work with. That's not to say the software won't have bugs; of course it will. But you can never test everything. For example, I suspect you do not test the operating system, compilers, text editors, router firmware, printer drivers, and web browsers that you depend upon. (At a previous job, our source code was occasionally corrupted by a faulty network stack. Essentially, the act of checking code in could cause bugs (or compile errors) to creep in. Fortunately that was a long time ago.)

If you have a history of problems with the tool, or if you believe you use the tool in an unusual way that may not have been covered by the vendor, then it's worth considering some kind of acceptance test of those specific features for each new version of the tool. An acceptance test may also be justified if you suspect the tool could cause otherwise undetectable errors. Otherwise, I would trust the tool.

I'd agree with the other folks here: it looks like you're testing your application's use of the third party tool, and in most cases that should be enough (if you're working somewhere where a mistake would get someone killed, perhaps not so much).

With a tool that sits between some form of data and a database, the essentials are verifying that known input data reaches the database correctly, and that known database data is handled correctly by your application when the tool returns it. Effectively, you're testing the interface between your application and the third-party tool by validating data at either end of the workflow.

Things to consider would be ensuring that the data format your application delivers matches what the tool expects, and that your application handles the data format delivered by the tool when it's retrieving from the database.
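A round-trip test like that can be sketched briefly. The harness below is hypothetical: since the vendor tool is closed source and its invocation isn't described in the question, `run_parser` is a stand-in stub (here it loads a small CSV into an in-memory SQLite table); in practice you would replace it with a call to the real tool and point the query at your real database.

```python
import csv
import io
import sqlite3

def run_parser(file_text, conn):
    """Placeholder for the closed-source tool: parses a CSV file into
    an 'orders' table. In a real test, invoke the vendor tool here."""
    conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL)")
    for row in csv.DictReader(io.StringIO(file_text)):
        conn.execute("INSERT INTO orders VALUES (?, ?)",
                     (int(row["id"]), float(row["amount"])))

def test_round_trip():
    # Known input: the data you already validate before it hits the parser.
    known_file = "id,amount\n1,9.99\n2,0.00\n"
    conn = sqlite3.connect(":memory:")
    run_parser(known_file, conn)
    # Pull the data back out, exactly as the rest of the system would.
    rows = conn.execute("SELECT id, amount FROM orders ORDER BY id").fetchall()
    assert rows == [(1, 9.99), (2, 0.0)]

test_round_trip()
```

The point is that the assertion compares what went in against what comes out, so the tool's internals stay a black box.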

Of course, if your application is doing all the right things, and the tool fails, your organization has the "joy" of working with the tool vendor to get a bug fix through, and in the meantime building (and testing) a workaround for it.

It is always necessary to validate how your tools work in conjunction with the rest of the application workflow. In your case it sounds like you are already testing the inputs you send to the tool and the outputs you receive from it. Beyond that, it is a question of where you feel you need or want to spend your time. You would hope that the developer spent some time testing the tool before selling it (not always the case). I would say it depends on your level of trust, time, and budget. If you have covered critical-path items and risk areas, my thought is to leave it at that.

You have chosen to include the third-party tool in your software. Therefore you are responsible for how it performs in your software. Given that, I'd want to test it.

You may choose to test the third-party tool only as part of testing your software. The danger here is that your tests may not exercise all of the interfaces that your software uses, nor all of the possible sequences of events, nor all of the possible responses from the third party tool. The more critical the tool, the more you'll want to test it independently.

I like to focus on what Michael Feathers calls "characterization tests." These are automated tests designed not so much to test the third party tool as to characterize it.

I want characterization tests that demonstrate these elements of the third party tool:

Basic, common usage.

Boundary conditions (places where different inputs lead to different execution paths).

Non-obvious behaviors.

Sequences that demonstrate the internal state of the third party tool.

Error conditions and exceptions.
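The categories above can be illustrated with a short sketch. Because the tool in the question is closed source, `json.loads` from Python's standard library stands in here for the dependency being characterized; the structure, not the subject, is the point. Each assertion records an observed behavior so a future upgrade that changes it will fail loudly.

```python
import json
import math

def characterize():
    # Basic, common usage.
    assert json.loads('{"a": 1}') == {"a": 1}

    # Boundary condition: duplicate keys -- the last value wins.
    assert json.loads('{"a": 1, "a": 2}') == {"a": 2}

    # Non-obvious behavior: NaN is accepted even though strict JSON forbids it.
    assert math.isnan(json.loads("NaN"))

    # Error condition: empty input raises a specific, catchable exception.
    try:
        json.loads("")
    except json.JSONDecodeError:
        pass
    else:
        raise AssertionError("expected JSONDecodeError")

characterize()
```

Note that none of these assertions claim the behavior is *correct*; they only pin down what the dependency actually does today.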

I tend to focus on how my software uses the third party tool. This kind of test offers a number of benefits:

Demonstrate to other developers how to use the third party tool.

Represent the assumptions that your code makes about the third party tool.

Detect relevant changes when you upgrade to a new version of the third party tool.

Question: will you be relying on those third party tools for business income?

If so, you probably want to test them.

However, at some point you have to trust the tools, libraries, and more that you use. They have been tested by others as well, so you don't need to retest them from scratch. You should, however, test the code paths you use and make sure you get the responses you expect.

Also, remember that how critical the system is affects how much you need to test. Is it a third-party tool you're including in nuclear launch software? Probably worth writing pretty comprehensive tests. Is it a WordPress blog plugin? Probably not so critical to test every aspect of it.

There is no "yes" or "no" answer to this question. As a tester, it will be your job to identify and communicate the level of risk involved in not testing a specific part of your system.

The test lead will then need to decide whether the level of risk is acceptable and structure the test scope accordingly. Depending on the structure of your organisation, this may need to be included in Test documentation to be signed off by someone more senior.

You need to consider what the Likelihood is of something going wrong and what the Impact of that failure would be. Then consider how much more time and effort it would add to the testing.

Just remember: no matter what the organisation charts might say, as a Tester, YOU are responsible for the quality of the code that goes out. Management is interested in costs and timescales, but your job should be primarily about the quality of the product.

If you decide to use a new third-party tool, you must decide if it is fit for the task. That "fit" is determined contextually, by the way your system uses the tool, by the availability of support for the tool, by the expected future of the tool as it interacts with your system, etc.

Your customers will consider this third-party tool as just a component of your overall system, and will expect it to work as well as the rest of your offering.

As Dilbert suggests, if the third-party tool fails now or in the future, you cannot go back to your customer and deny responsibility.

Overall, you should test the third-party tool to the same extent you would test it if you had written it yourself and incorporated it into your system.