When I look at “tools” outside of the software world, it seems they exist mostly to help us do work faster. For example, it would probably take much longer to break up a sidewalk with a sledgehammer than with a jackhammer, unless you are some sort of John Henry-esque legend. We use tools in software for the same reasons: to work faster and to do tasks that people aren’t very good at.

The problem is that some of these software tools are more cumbersome than others: they require tedious purchase cycles, have a steep learning curve, and are difficult to use. Some tools just create more problems than they solve. So, rather than settling for big, bloated software stacks, let’s take a look at why you might want to move to lighter-weight, modern testing tools.

Slowing Testers Down

One of the best examples I have seen of tools slowing testing down, and one that most testers have come across at some point in their careers, is the test management system. One school of thought says that testing can’t be done without very detailed, step-by-step test cases, all stored in a management system. That idea usually comes from managers who are fond of “Taylorism,” the philosophy that workers are not useful unless they are told exactly what to do.

At best, these systems slow testers down. In the past, before I was able to start testing, we were required to do the equivalent of big design up front. Each test idea had to be turned into a procedure and documented in the system, partly to justify the cost of keeping that tool running. When it came time to test, we had to consult the test cases and perform testing strictly by the book instead of truly exploring the application. We couldn’t use our judgement based on what we experienced in the software. At worst, these tools cripple the team by restricting the work and giving management a completely misinformed view of what testing looks like.

Using Lightweight Tools

Using lightweight tools shifts the intersection of cost and value. For me, that shift happens when the tool has enough flexibility for me to decide exactly how I want to work. Big design up front (or maybe it should be called “some design up front”) can work well when there are specifications, but most of the software industry is shifting away from that. What I like to do now is describe test knowledge instead of test cases.

Let’s say I’m testing a new feature to create an advertising template that will have an embedded video. I want a place to quickly jot down test ideas:

What platforms are supported?

Should the video autoplay?

What sources do we accept for the video?

Do we need to record data on clicks and plays?

The ‘quickly’ part is important. I get a flood of ideas when I first get the user story and then another wave comes when the software is available. I don’t want to lose that creative energy to a tool that forces me into creating procedures.

Recording the Story

When I’m doing the testing, I want a way to record that story. Part of that happens when I report the problems I find, but there is much more depth to share. I want to be able to write a short paragraph describing the testing I performed and the concerns I have, and then pair that with supplements like screenshots, videos of specific scenarios, and maybe a mindmap showing potential paths to test. You need a tool that will let you do all of these things.

Keeping track of testing work in this format completely changes the conversation. Some tools shove people into a corner where the only things that matter are passed test cases, failed test cases, and bug reports. Testers know there is much more to our performance than steps. Most of our work happens in the brain and doesn’t conform to that format. Lighter-weight, more flexible tools help bring out the testing story and the knowledge behind it.