8 August 2017

5 traits of badly written test cases (& how to fix such software testing mistakes)

The World Quality Report was at pains to point out that the costs of software testing are taking off, with no landing pad in sight to slow them down. While there is no single reason for this phenomenon, the causes can usually be attributed to three main buckets.

Among the biggest problems facing software testing projects right now are the speed and quality of the tests being executed. In fact, senior IT executives and decision makers have pointed to software testing effectiveness as a major challenge for them and their testing teams.

While there is no silver bullet for software testing challenges, effective test design and maintenance will go a long way towards resolving the issues that lead to ineffective and inefficient software testing.

Having done many rounds of the software testing world, I can tell a poorly constructed test case from a mile away. Such morale-sapping and utterly frustrating tests usually share 5 common characteristics. If you recognise any of these 5 characteristics in your software testing project, I urge you to seek the free advice of software testing specialists who will be able to guide you back to the right path.

1. Too specific – they run only a single test condition

Test cases need to consider the variety of conditions that the software will be expected to handle. A test case must be able to exercise the software module against all the meaningful combinations of its main conditions, not just one.

To be able to comprehensively test all combinations of conditions, the software tester must find a way to present these conditions such that it is easy for others to follow, review and amend if the real-world process demands such actions.
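One simple way to present many condition combinations so that others can follow, review and amend them is a table-driven test. The sketch below assumes a hypothetical `shipping_cost` function; the function name and pricing rules are invented purely for illustration.

```python
# Hypothetical function under test; the pricing rules are illustrative only.
def shipping_cost(weight_kg, express, destination):
    base = 5.0 if destination == "domestic" else 15.0
    if weight_kg > 10:
        base += 2.0 * (weight_kg - 10)  # surcharge per kg over 10
    return base * (2 if express else 1)

# One row per combination of conditions: easy to read, review and extend
# when the real-world process changes.
CASES = [
    # weight, express, destination,     expected
    (1,       False,   "domestic",      5.0),
    (1,       True,    "domestic",      10.0),
    (12,      False,   "domestic",      9.0),
    (12,      True,    "international", 38.0),
]

def test_shipping_cost_combinations():
    for weight, express, destination, expected in CASES:
        assert shipping_cost(weight, express, destination) == expected

test_shipping_cost_combinations()
```

Frameworks such as pytest express the same pattern with `@pytest.mark.parametrize`, but even a plain table like this keeps every tested combination visible in one place.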

2. They cover a small part of the functionality – they need to test a larger part of the system

Test cases often focus on a specific function, and often that function is determined by the internal technical design of the software. This practice is common in large monolithic applications such as SAP or Oracle ERP systems, where a software tester may not have knowledge of the entire business process. The test case then ends up reflecting only what the test designer knew, rather than what they should have made the effort to understand.

Instead, the test cases need to reflect the usage patterns and flows. Every test case should try to cover as much of the flow as reasonably possible – going across technical and modular boundaries of the application.

Remember, 'you don't know what you don't know' is never an excuse for creating and executing flimsy, irrelevant and ineffective software tests.

3. Software tests created only for a specific user role

We have often seen test cases written for a very specific user role, with complete disregard for all other users of the application. This limits the test cases' scope and therefore significantly compromises their effectiveness. Such test cases effectively test only small elements of the application while deceptively purporting to be complete and robust.

Test cases that are most effective reflect the usage patterns, or what the Agile world refers to as user journeys. A business application, for example, should be tested with test cases that are designed to test the whole business process – covering all the user roles and all the systems that might be involved in the business process.
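As a sketch of what a journey-style test might look like, the fragment below walks one expense claim across two user roles. The `ExpenseSystem` class, its method names and its approval rule are all invented for illustration; the point is simply that the test crosses role boundaries instead of stopping at a single user's view.

```python
# Illustrative system under test: an employee submits a claim, a manager
# approves it. All names and rules here are hypothetical.
class ExpenseSystem:
    def __init__(self):
        self.claims = {}
        self._next_id = 1

    def submit(self, employee, amount):
        claim_id = self._next_id
        self._next_id += 1
        self.claims[claim_id] = {"by": employee, "amount": amount,
                                 "status": "pending"}
        return claim_id

    def approve(self, manager, claim_id):
        claim = self.claims[claim_id]
        if manager == claim["by"]:
            raise PermissionError("cannot approve your own claim")
        claim["status"] = "approved"

def test_expense_journey():
    system = ExpenseSystem()
    claim_id = system.submit("alice", 42.50)  # employee role
    system.approve("bob", claim_id)           # manager role
    assert system.claims[claim_id]["status"] == "approved"

test_expense_journey()
```

A role-specific test would have stopped after `submit`; the journey test only passes when both roles, and the hand-off between them, behave correctly.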

4. Written to prove that the most common use cases are covered well in the application

This, in my opinion, is one of the most common problems, and it is the result of what I call a 'lazy' approach to test design: the test designer simply converts the requirements document into test cases.

The test designer should instead look for the 'corner cases' or 'boundary conditions'. Most developers can easily write code for the most common use cases; the problems surface the moment a condition is even slightly different from the common, or intended, use case. A well-designed test case will catch these easily and consistently.
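A minimal sketch of boundary-condition testing, assuming a hypothetical validator that accepts quantities from 1 to 100 inclusive (the function and its range are illustrative only):

```python
# Hypothetical validator: accepts quantities from 1 to 100 inclusive.
def is_valid_quantity(qty):
    return 1 <= qty <= 100

# Boundary-value tests probe each edge and one step either side of it,
# rather than only the 'happy path' in the middle of the range.
def test_quantity_boundaries():
    assert not is_valid_quantity(0)    # just below the lower bound
    assert is_valid_quantity(1)        # the lower bound itself
    assert is_valid_quantity(100)      # the upper bound itself
    assert not is_valid_quantity(101)  # just above the upper bound

test_quantity_boundaries()
```

A requirements-to-test-case conversion would typically check only a mid-range value such as 50; it is the off-by-one values at the edges that expose the defects.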

5. Test case cataloguing and version control

Any test case can become completely useless if it is not catalogued systematically and kept available for use. Imagine a library whose books are neither catalogued nor placed systematically on shelves, especially after multiple borrowers have had their fill. It would be impossible to use the books if you can't find them with ease when you need them.

Often hundreds of test cases are written with much effort and then dumped into a shared folder structure. While this can work if you have very few test cases, a poorly organised system collapses the moment the number of test cases grows beyond a mere handful.

Therefore, you need a software testing tool that can systematically tag and catalogue test cases, and then 'pull out' the right test cases when they need to be run. To make this entire process seamless across the software testing team, you also need a tool that can effortlessly create and maintain multiple versions of each test case.
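Underneath, tagging and 'pulling out' test cases is just a filter over a catalogue. The sketch below illustrates the idea with an in-memory list; the IDs, tags and version numbers are invented, and a real tool would of course persist and version this data properly.

```python
# A minimal sketch of test-case cataloguing: each case carries tags and a
# version number, so a suite can be filtered ('pulled out') by tag.
test_catalogue = [
    {"id": "TC-001", "tags": {"checkout", "smoke"},      "version": 3},
    {"id": "TC-002", "tags": {"checkout", "regression"}, "version": 1},
    {"id": "TC-003", "tags": {"reporting"},              "version": 2},
]

def select_cases(catalogue, tag):
    """Return the IDs of all test cases carrying the given tag."""
    return [case["id"] for case in catalogue if tag in case["tags"]]

print(select_cases(test_catalogue, "checkout"))  # ['TC-001', 'TC-002']
```

Test frameworks expose the same idea directly; pytest, for instance, lets you tag tests with custom markers and select them at run time with its `-m` option.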

If you or your team need help in making your test cases more reliable and great at actually finding bugs before they get into production, take advantage of our free software quality strategy session to help set you on the right path to achieving your goals.