Tips (& Traps) for Continuous Testing

Continuous Testing is a phrase used a lot these days, but what does it mean? On the surface, one definition could be “test all the time” – but that doesn’t quite cover it.

If you were to ask a developer, a QA engineer, or a CIO, you might get somewhat different definitions based on their particular perspective.

The gurus at Gartner describe it as:

“Systems [providing] automation of the software build and validation process driven in a continuous way by running a configured sequence of operations every time a software change is checked into the source code management repository.” [link]

It Starts with the Developer

Continuous Testing (CT) begins on the developer’s desktop, where unit tests can run as part of every local build. Once the code is checked in, integration and other system-level tests run automatically. If those tests pass, automated end-to-end tests can run to ensure that the system still works as expected. Other testing – stress tests, performance tests, or other large tests – may also run before releasing software to customers. After release, monitoring and alerts are (for some devs, anyway) another flavor of testing that happens in production.
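The staged flow above can be sketched as a small pipeline: each stage runs only if the previous one passed, so the fast, cheap tests gate the slower, more expensive ones. The stage names and pass/fail results here are illustrative stand-ins, not a real CI tool.

```python
def run_pipeline(stages):
    """Run (name, test_fn) stages in order; stop at the first failure."""
    results = []
    for name, test_fn in stages:
        passed = test_fn()
        results.append((name, passed))
        if not passed:
            break  # don't spend time on slower stages downstream
    return results

# Illustrative stages, ordered fastest and cheapest first.
stages = [
    ("unit", lambda: True),
    ("integration", lambda: True),
    ("end-to-end", lambda: False),  # simulate an e2e failure
    ("performance", lambda: True),  # never reached
]

results = run_pipeline(stages)
print(results)
```

Real CI systems express the same idea declaratively, but the ordering principle is identical: fail as early and as cheaply as possible.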

Continuous Testing in a Nutshell

In short, CT is essentially lots of tests, each running at the most appropriate point in the development cycle.

Finding Bugs

We want to create tests that will find issues with the customer experience. We also want feedback from tests as quickly as possible, so we target our tests to find issues at the earliest possible point. Bugs that can be found by unit tests should be found by unit tests. The same is true for acceptance and integration tests, and it is especially important for end-to-end tests: while many automation efforts focus on end-to-end tests, those tests should exist only for bugs that cannot be found any other way.
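To make the “find bugs at the earliest possible point” idea concrete, here is a hedged illustration (the discount function and its boundary bug are hypothetical, not from the article): a defect a unit test catches in milliseconds on the developer’s desktop, with no need for a slow end-to-end run.

```python
def discount(price, percent):
    """Apply a percentage discount; percent must be in [0, 100]."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return price * (1 - percent / 100)

# Unit tests: fast, targeted, and cheap enough to run on every local build.
assert discount(200.0, 25) == 150.0  # typical case
assert discount(50.0, 100) == 0.0    # boundary: a full discount is free
try:
    discount(50.0, 150)
    raise AssertionError("expected ValueError for out-of-range percent")
except ValueError:
    pass
```

An end-to-end test could also surface this bug eventually, but only after a full deployment and at far greater cost per run.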

Deliver Changes/Updates

Software products with good CT systems enable teams to deliver changes and updates to customers frequently and safely, using the test feedback from every integration point to determine whether the product is heading in the right direction.

Tips and Traps

Just the Right Amount of Automation

CT is not just about having a lot of automation. It’s about having the right automation running at the best possible time to find issues. For some web services, automated tests may be all you need in order to ship. The amount and types of testing you do, your desired shipping frequency, your customers, and other risks all combine to help you make this business decision.

Not too much…

It’s also easy to fall into the trap of writing too much automation. Don’t automate everything you can automate; instead, automate everything that should be automated. Yes, that’s a tautology, but without enough thought put into designing your tests, it’s easy to fall into the traps of both under-automating and over-automating.

If you’re the tester on a team doing CT, that doesn’t mean you write all of the automation! In fact, the team may be better served if you assist and coach in their automation efforts rather than trying to write most of the automation yourself. Besides, if everyone is writing automation, your team will learn automation tips from each other, and you will likely end up with a robust and reliable set of tests. Along these lines, I’ve personally seen a lot of success pairing testers and developers on testing and test automation tasks.

And monitoring…

Finally, even with a great suite of automated tests, don’t neglect monitoring as a means of discovering errors in your software. Even if your robust and complex army of tests shows no issues, chances are that customers will see errors you never anticipated. Good monitoring (and alerts) will give you huge insight into what your customers are seeing, and will often give you new testing ideas as well.

Thoughts?

Eran Kinsbruner and I are going to talk through and expand on many of these thoughts about Continuous Testing – and a lot more – in the upcoming webinar (see below). Hope to interact with you there.


Alan Page has been a software tester for over 25 years and is currently the Director of Quality for Services (and self-proclaimed Community Leader) at Unity Technologies. Before Unity, Alan spent 22 years at Microsoft working on projects spanning the company – including a two-year stint as Microsoft’s Director of Test Excellence.
Alan was the lead author of the book “How We Test Software at Microsoft” and contributed chapters to “Beautiful Testing” and “Experiences of Test Automation: Case Studies of Software Test Automation”. His latest ebook (which may or may not be updated soon) is a collection of essays on test automation called “The A Word: Under the Covers of Test Automation”, available on Leanpub.
