Georgia Ingham is a Java Developer. She has spoken at lots of conferences. Her hobbies include reading, completing puzzle books and cycling.

Test Case Guidelines

November 7, 2017 3 min read

During JavaOne this year I attended many interesting talks. One that I found particularly useful was about how to make sure our test cases are up to scratch. You could write code for a cool new feature and be really proud of it, but if you don’t have adequate tests in place then this beautiful code that you slaved over might be accidentally broken. That’s where test case guidelines come in.

Stuart Marks and Steve Poole introduced the topic to us in their presentation, Ten Simple Rules for Writing Great Test Cases, and explained 10 different aspects of the guidelines which, if followed, could help improve your whole testing process.

1) Plan first

Activities tend to come out better if you plan them first. This is most likely because planning gives you time to mentally work through what needs to happen, so when you actually start writing the tests you can follow and improve on your first iteration instead of hashing it out quickly. Think about why and what you need to test.

2) Make understandable

Even if you plan on working for your company for the rest of your life, you need to make your test cases understandable. Not only does it show your co-workers you care about them by making the tests easy to read and follow, but it also means someone else will be able to work with a test in the future if it ever needs updating. Having multiple people who can run and understand the tests removes the single point of failure that exists when only one person knows how they work and then leaves.
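One way to make a test understandable is to give it a name that reads as a sentence and to lay out the body as arrange/act/assert. A minimal sketch of the idea (the ShoppingCart class is hypothetical, and plain Java is used instead of JUnit so the example is self-contained; in practice this would be a JUnit @Test method):

```java
// A hypothetical test whose name states the scenario and the expected outcome.
public class CartTest {
    // Minimal stand-in for the class under test.
    static class ShoppingCart {
        private int items = 0;
        void add(int n) { items += n; }
        int itemCount() { return items; }
    }

    // The name reads as a sentence: anyone can tell what behaviour is covered.
    public static boolean addingTwoItemsToEmptyCartGivesCountOfTwo() {
        ShoppingCart cart = new ShoppingCart();   // arrange
        cart.add(2);                              // act
        return cart.itemCount() == 2;             // assert
    }

    public static void main(String[] args) {
        System.out.println(addingTwoItemsToEmptyCartGivesCountOfTwo());
    }
}
```

A colleague who has never seen this code can still read the method name in a failure report and know roughly what broke.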

3) Keep small and simple

Smaller tests are better than one huge monolithic test because they can be run in parallel to complete faster. Test cases should also be kept simple so that they are quick and easy to read. Separate the test logic from the setup/teardown tasks.
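Separating the setup and teardown from the check itself might look like the following sketch. All names are illustrative, and plain Java stands in for JUnit’s @BeforeEach/@AfterEach lifecycle methods:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: the test body stays focused on the behaviour under test because
// fixture creation and cleanup live in their own methods.
public class SmallTestSketch {
    static List<String> repository;

    static void setUp()    { repository = new ArrayList<>(); }  // fixture creation
    static void tearDown() { repository = null; }               // fixture cleanup

    // The test itself is only a few short lines: easy to read, quick to run.
    public static boolean savingOneRecordMakesSizeOne() {
        setUp();
        repository.add("record");
        boolean ok = repository.size() == 1;
        tearDown();
        return ok;
    }

    public static void main(String[] args) {
        System.out.println(savingOneRecordMakesSizeOne());
    }
}
```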

4) Test one thing only

Heard of the Single Responsibility Principle? It’s useful for test cases too. Testing only one feature, or one aspect of a feature (depending on how complex it is), makes it much easier to debug when a test fails.
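As a sketch of what this looks like in practice, here are two focused tests instead of one test that checks everything at once (the Counter class is hypothetical):

```java
// Sketch: one behaviour per test. If resetReturnsToZero fails you know the
// problem is in reset(), without wading through unrelated assertions.
public class SingleResponsibilityTest {
    static class Counter {
        private int value = 0;
        void increment() { value++; }
        void reset()     { value = 0; }
        int value()      { return value; }
    }

    public static boolean incrementAddsOne() {
        Counter c = new Counter();
        c.increment();
        return c.value() == 1;
    }

    public static boolean resetReturnsToZero() {
        Counter c = new Counter();
        c.increment();
        c.reset();
        return c.value() == 0;
    }

    public static void main(String[] args) {
        System.out.println(incrementAddsOne() && resetReturnsToZero());
    }
}
```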

5) Tests should be fast

Tests need to be fast, otherwise people tend to avoid running them. Developers only have so many hours in the day and they don’t want to wait around for tests to finish before they can push a change, especially if it is just a small one. Slow tests make people want to skip them in favour of just pushing the change (that couldn’t possibly break anything) and moving on to their next task. In short, quicker tests are more likely to be run frequently and will therefore catch more issues.

6) Absolute repeatability

If a test only works under certain conditions or passes intermittently then it isn’t a very reliable test. You can’t even trust a test that only very rarely fails, as you end up expecting it to fail every now and then and will disregard a failed result under the assumption that “it will be back to normal next time it is run”. If the test is only run once a day, dismissing a failure as a fluke means waiting until tomorrow’s run to find out whether it was real, by which point your customers may already be complaining that a critical feature isn’t working.
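One common source of intermittent failures is unseeded randomness. A minimal sketch of making such a test repeatable by fixing the seed (the dice-rolling method is hypothetical):

```java
import java.util.Random;

// Sketch: with a fixed seed the random sequence is identical on every run,
// so the test's input, and therefore its result, is repeatable.
public class RepeatableTest {
    static int rollDice(Random rng) { return rng.nextInt(6) + 1; }

    public static boolean diceRollIsDeterministicWithFixedSeed() {
        int first  = rollDice(new Random(42L));   // same seed...
        int second = rollDice(new Random(42L));   // ...same roll
        return first == second;
    }

    public static void main(String[] args) {
        System.out.println(diceRollIsDeterministicWithFixedSeed());
    }
}
```

The same thinking applies to other hidden variables such as system time or iteration order: pin them down in the test so every run sees the same conditions.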

7) Independent tests only

Your tests need to have no dependencies on each other. A test should not rely on another test completing first, because if that test fails then your second one will either not run or will fail itself. Keeping your tests independent also allows them to be run in any order, which is useful if you ever want to reorder them or remove/add a test at any point.
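A simple way to achieve this is for every test to build its own fixture rather than picking up state left behind by an earlier test. A sketch, with hypothetical names:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: each test calls freshQueue() itself, so no test depends on what
// another test did and they can run in any order (or in parallel).
public class IndependentTests {
    static Deque<String> freshQueue() {
        Deque<String> q = new ArrayDeque<>();
        q.push("seed");
        return q;
    }

    public static boolean popReturnsSeed()  { return freshQueue().pop().equals("seed"); }
    public static boolean sizeStartsAtOne() { return freshQueue().size() == 1; }

    public static void main(String[] args) {
        // Order doesn't matter: either test passes entirely on its own.
        System.out.println(sizeStartsAtOne() && popReturnsSeed());
    }
}
```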

8) Provide useful diagnostic data on failure

When a test fails it’s not the end of the world, but if a test fails with confusing or too little output then it can feel like it. You need to provide useful messages on assertions and make sure you include all the data someone will need in order to pinpoint the issue. This could include printing out the input data, the relevant environment information and any other details you believe necessary.
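A sketch of the kind of failure message this suggests: the input, the expected and actual values, and a piece of environment information, all in one line (the square method and the helper are hypothetical):

```java
// Sketch: an assertion helper whose failure message carries enough context
// that the reader can start debugging immediately, without re-running anything.
public class DiagnosticSketch {
    public static int square(int n) { return n * n; }

    public static void assertSquare(int input, int expected) {
        int actual = square(input);
        if (actual != expected) {
            throw new AssertionError(
                "square(" + input + "): expected " + expected
                + " but got " + actual
                + " (java.version=" + System.getProperty("java.version") + ")");
        }
    }

    public static void main(String[] args) {
        assertSquare(3, 9);   // passes silently
        System.out.println("ok");
    }
}
```

In JUnit the equivalent is simply passing a message (or message supplier) to the assert methods rather than calling them bare.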

9) Don’t hard code environment details

Hard coding values can reduce the portability of your tests: if a test is copied onto another machine there is a high chance it will fail if that machine is set up differently from the original. You need to make your tests as portable as possible, and if you do require environment-specific values, put them in a config file so they are all in one place.
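Using java.util.Properties for that config file might look like the sketch below. The keys and values are hypothetical, and a StringReader stands in for a real test.properties file on disk so the example is self-contained:

```java
import java.io.StringReader;
import java.util.Properties;

// Sketch: environment details come from one Properties source with sensible
// defaults, instead of being hard coded throughout the tests.
public class ConfigSketch {
    public static String loadHostPort() throws Exception {
        Properties config = new Properties();
        // In a real suite this would be: config.load(new FileReader("test.properties"));
        config.load(new StringReader("db.host=testdb.example.com\ndb.port=5432"));

        String host = config.getProperty("db.host", "localhost");  // default if absent
        int port = Integer.parseInt(config.getProperty("db.port", "5432"));
        return host + ":" + port;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loadHostPort());
    }
}
```

Moving a test to a new machine then only means editing one file, not hunting through the test code for buried host names and paths.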

10) No extraneous output

Adding further to point 8: too much information can also be confusing. A lot of output is hard to process and therefore slower to debug. To reiterate the earlier point, you only need to print the details necessary to help people understand the current problem. Add options for more verbose output if you want, but try to keep the normal output as concise as possible. A silent test should be a passing test.
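One way to get both behaviours, a silent pass by default and detail on demand, is to gate the chatter behind a verbose flag. A sketch with hypothetical names:

```java
// Sketch: a passing run produces no output at all; failures always speak,
// and step-by-step detail appears only when verbose mode is requested.
public class QuietTest {
    public static String runChecks(boolean verbose) {
        StringBuilder out = new StringBuilder();
        if (verbose) out.append("starting checks\n");   // detail only on request
        int result = 2 + 2;
        if (result != 4) {                              // failures always reported
            out.append("FAIL: 2 + 2 gave ").append(result).append("\n");
        }
        if (verbose) out.append("all checks done\n");
        return out.toString();                          // empty string == silent pass
    }

    public static void main(String[] args) {
        // Verbose mode could be switched on with -Dtest.verbose=true.
        System.out.print(runChecks(Boolean.getBoolean("test.verbose")));
    }
}
```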