The first several pages were great. He discussed intangibles that should be left out of an ROI calculation, like the immediate reduction in perceived productivity of the test organization while the automation is first developed. He went on to list falsely expected benefits, like the automation of existing manual tests. Then he compared fixed automation costs (e.g., scripting tools) to variable automation costs (e.g., test maintenance).

Finally, Doug got to the formulas. After careful analysis of some 30+ factors, one can start calculating automation ROI and efficiency benefits. I rubbed my hands together and excitedly turned the page. Then…I think I puked into my mouth a little as I saw the following:

In the end, I latched onto one powerful statement Doug made, almost in passing. He said:

If the benefits of automation are required, then the ROI computation is unnecessary, the investment is required…it’s an expense.

If the benefits include reducing risk by performing automated checking that would not be possible by humans (e.g., complex math, millions of comparisons, diffs, load, performance, precision), then say no more…

Look at your calendar (or that of another tester). How many meetings exist?

My new company is crazy about meetings. Perhaps it’s the vast number of project managers, product owners, and separate teams along the deployment path. It’s a wonder programmers/testers have time to finish anything.

Skipping meetings works, but is an awkward way to increase test time. What if you could reduce meetings or at least meeting invites? Try this. Express the cost of attending the meeting in units of lost bugs. If you find, on average, about 1 bug per hour of testing, you might say:

“Sure, I can attend your meeting, but it will cost us 1 lost bug.”

“This week’s meetings cost us 9 lost bugs.”

Obviously some meetings (e.g., design, user story review, bug triage) improve your bug finding, so be selective about which meetings you declare a lost-bug cost for.
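The lost-bug arithmetic above fits in a few lines; the one-bug-per-hour rate and the nine meeting hours are made-up illustrations, not measurements:

```python
def lost_bugs(meeting_hours, bugs_per_test_hour=1.0):
    """Express meeting time as testing opportunity cost, in lost bugs."""
    return meeting_hours * bugs_per_test_hour

# A week with nine hours of meetings, at ~1 bug found per hour of testing:
print(lost_bugs(9))  # -> 9.0
```

Track your own bug-per-hour rate over a few weeks before quoting a number in a meeting invite reply.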

Last week I started testing an update to a complex legacy process. At first, my head was spinning (it still kind of is). There are so many inputs and test scenarios...so much I don’t understand. Where to begin?

I think doing something half-baked now is better than doing something fully-baked later. If we start planning a rigorous test based on too many assumptions, we may not understand what we’re observing.

In my case, I started with the easiest tests I could think of:

Can I trigger the process-under-test?

Can I tell when the process-under-test completes?

Can I access any internal error/success logging for said process?

If I repeat the process-under-test multiple times, are the results consistent?
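Those first four checks can be sketched as a tiny harness. The command here is a hypothetical stand-in for the real process-under-test, and the return code stands in for whatever internal error/success logging your process exposes:

```python
import subprocess
import sys

def run_process_under_test(args):
    """Trigger the process-under-test and capture its outcome.
    Returning from subprocess.run tells us the process completed."""
    result = subprocess.run(args, capture_output=True, text=True)
    return result.returncode, result.stdout

def results_are_consistent(args, repeats=3):
    """Repeat the process several times and check the results match."""
    outcomes = [run_process_under_test(args) for _ in range(repeats)]
    return all(o == outcomes[0] for o in outcomes)

# Hypothetical stand-in: a trivially repeatable "process".
CMD = [sys.executable, "-c", "print('done')"]
```

A real legacy process might complete asynchronously, so you would poll a log file or status table instead of relying on the subprocess returning.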

If there were a spectrum ranging from learning by observing (not manipulating) to learning by manipulating the something-under-test, it might look like this:

My tests started on the left side of the spectrum and worked right. Now that I can get consistent results, let me see if I can manipulate it and predict its results:

If I pass ValueA to InputA, do the results match my expectations?

If I remove ValueA from InputA, do the results return as before?

If I pass ValueB to InputA, do the results match my expectations?

As long as my model of the process-under-test matches my above observations, I can start expanding complexity:

If I pass ValueA and ValueB to InputA and ValueC and ValueD to InputB, do the results match my expectations?

etc.
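The escalating input checks above can be expressed as a table of cases. Here `process_under_test`, the inputs, and the values are all hypothetical placeholders for whatever you are actually testing:

```python
# Stand-in for the real process: it just echoes its inputs in sorted order,
# which is enough to show the pattern of pairing inputs with expectations.
def process_under_test(**inputs):
    return sorted(inputs.items())

# Each case grows in complexity only after the simpler ones pass.
CASES = [
    ({"InputA": "ValueA"}, [("InputA", "ValueA")]),
    ({"InputA": "ValueB"}, [("InputA", "ValueB")]),
    ({"InputA": "ValueA", "InputB": "ValueC"},
     [("InputA", "ValueA"), ("InputB", "ValueC")]),
]

def check_expectations():
    for inputs, expected in CASES:
        actual = process_under_test(**inputs)
        assert actual == expected, f"{inputs}: got {actual}"
    return True
```

The table doubles as the artifact you bring to the programmer or product owner: “here’s what I’ve checked; what cases am I missing?”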

Now I have something valuable to discuss with the programmer or product owner: “I’ve done the above tests. What else can you think of?” It’s much easier to have this conversation when you’re not completely green, when you can show some effort. It’s easier for the programmer or product owner to help when you lead them into the zone.

The worst is over. The rest is easy. Now you can really start testing!

Sometimes you just have to do something to get going. Even if it’s half-baked.

Now that my engineering team is automating beyond the unit test level, this question comes up daily. I wish there were an easy answer.

If we make a distinction between checking and testing, no “tests” should be automated. The question instead becomes, which “checks” should be automated? Let’s go with that.

I’ll tell you what I think below, ranked with the most important at the top:

Consider automating checks when they…

can only feasibly be checked by a machine (e.g., complex math, millions of comparisons, diffs, load, performance, precision). These are checks machines do better than humans.

are important. Do we need to execute them prior to each build, deployment, or release? This list of checks will grow over time. The cost of not automating is less time for the “testing” that helps us learn new information.

can be automated below the presentation layer. Automating checks at the API layer is considerably less expensive than at the UI layer. The automated checks will provide faster feedback and be less brittle.

will be repeated frequently. A simplified decision: is the time it takes a human to program, maintain, execute, and interpret the automated check’s results over time (e.g., two years) less than the time it takes a human to perform said check over the same span? This overlaps with #2.

check something that is at risk of failing. Do we frequently break things when we change this module?

are feasible to automate. We can’t sufficiently automate tests for things like usability and charisma.

are requested by my project team. Have a discussion with your project team about which checks to automate. Weigh your targets with what your team thinks should be automated.

can be automated using existing frameworks and patterns. It is much cheaper to automate the types of checks you’ve already successfully automated.

are checks. “Checks” are mundane for humans to perform. “Tests” are not, because they are different each time.
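The repeated-frequently decision above is a break-even comparison, and it can be sketched directly; every hour figure below is a made-up illustration, not a benchmark:

```python
def worth_automating(program_hours, maintain_hours_per_run,
                     interpret_hours_per_run, manual_hours_per_run, runs):
    """Compare total human time with vs. without automation over `runs`."""
    automated = program_hours + runs * (maintain_hours_per_run
                                        + interpret_hours_per_run)
    manual = runs * manual_hours_per_run
    return automated < manual

# e.g., 40h to script, 0.1h upkeep + 0.1h reading results per run,
# vs. 2h to perform the check by hand, run weekly for two years (~104 runs):
print(worth_automating(40, 0.1, 0.1, 2, 104))  # -> True
```

Run the same numbers with only ten executions and the answer flips, which is the point: frequency, not cleverness, is what pays back the scripting cost.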

Who am I?

My typical day: get up, maybe hit the gym, drop my kids off at daycare, listen to a podcast or public radio, do not drink coffee (I kicked it), test software or help others test it, break for lunch and a Euro-board game, try to improve the way we test, walk the dog and kids, enjoy a meal with Melissa, an IPA, and a movie/TV show, look forward to a weekend of hanging out with my daughter Josie, son Haakon, and perhaps a woodworking or woodturning project.