Those of us who are passionate about delivering valuable, high-quality software to our customers frequently and at a sustainable pace are living in exciting times.

Many are embracing “modern testing” principles. We’re acquiring new skills such as how to help non-testing teammates learn to test, how to analyze production use data, and how to use that data for testing. Testing is at the heart of DevOps culture, providing new opportunities for testers. We have amazing tools to help us with activities such as regression test automation and learning from production monitoring, logging, and observing.

At the same time, I still encounter many companies that are doing testing the way most people approached it 20 or more years ago. They have siloed testing/QA teams who don’t collaborate with development teams, operations teams, or even product and design teams. They have no automated regression tests, or are struggling mightily to get traction on that automation. They do only manual regression testing, working from written scripts, with no exploratory testing.

Why??!!!

Teams that do use a whole team approach to testing and quality are successful at improving their processes and their product. The State of DevOps report shows correlations between the use of modern testing approaches and high team performance. So why isn’t everyone trying to use what we’ve seen work well for 20 years now?

I have no actual evidence as to why this is, but I have some unscientific theories which I’d like to share. I’d love to hear your theories too.

Lots of Newbies

The number of new software professionals is growing fast. The Bureau of Labor Statistics predicted a 24% increase in software developers alone from 2016 to 2026. Despite having heard “testing is dead” for the past 15 or so years, I see more and more testing conferences with more and more people attending them, so I surmise our profession is also growing fast.

Universities are still generally poor at teaching modern development and testing approaches, so people come out of school without expertise in agile or DevOps values, principles, and practices. They certainly don’t learn much about testing in university. So, they have to discover all this somehow once they’re on the job. If they join a company whose software process is still stuck in the ’90s, doing poor waterfall at best, they’re unlikely to be exposed to modern testing.

Culture is Hard to Change

In my experience, it’s extremely difficult to change an established company culture, especially in a large organization. Even big enterprise companies that “go agile” often transition from role-based silos to having dozens of siloed Scrum teams that don’t talk to each other. All too often, an IT organization transitions to “agile” and either leaves the testers on their own test/QA team or sticks them in a cross-functional delivery team with no training or support to figure out how they should now work with people in other roles.

Large companies often have a complicated power structure. Upper managers may be more interested in protecting their domain than in delivering better software to their customers more frequently – and they don’t always prioritize a sustainable pace. If nobody educates them on how an investment in software quality – doing things like giving teams time to experiment and learn – pays off in the long run, they just keep imposing unrealistic deadlines while their software teams burn out.

Change is hard. Even when management is receptive, maybe not all team members are willing to try something new. It often takes only one naysayer to kill an effort to move away from “the way we’ve always done it”. Companies that are strapped for money may not be able to see how an investment in learning pays off.

I once worked in a company with a “hero culture” where the person who fixed the problem that brought down the website was lauded and rewarded – so why try to prevent production problems from happening? Even after leading a successful agile project to meet an “impossible” deadline, I couldn’t effect change in that culture.

Life is Challenging

If your management doesn’t support you learning how to improve the way you deliver your software, you have to do it on your own time. As you learn, you can be a change agent and try to help your team improve.

But that takes time, and we all have many demands on our time. Some people have to work two or more jobs to support their family. Some people must spend much of each day caring for a family member. Others have health issues that limit their activities. Perhaps they can’t afford to go to conferences. Perhaps they can’t take an evening off to go to a meetup or watch online video courses. There are so many reasons people aren’t able to learn on their own time.

So, how do we promote adoption of modern software delivery principles and practices?

I don’t have any easy answers, but I’d like to start more conversations about this. I think we can raise awareness that there are better ways to work, that it’s possible to make our customers happier while enjoying our work a lot more. Here are some ideas:

If you can, make time to educate yourself with the many resources available to us these days: online courses, webinars, blogs, books, articles, podcasts, local meetups, conferences.

Share what you learn with your teammates. Help them learn about different types of testing. Try small experiments together to improve.

When you meet other software professionals, for example in a social situation, encourage them to join you at your local tech meetup.

Write about your own experiences and share them at meetups and conferences, to show others that improvements are doable and effective (if public speaking scares the pants off you, check out the SpeakEasy mentoring program).

Contribute to scholarship programs that help people attend conferences and access online content such as webinars and videos of conference talks.

We have new people joining our profession all the time. What ideas do you have to help them embrace 2018 rather than 1988?

Introduction

Until relatively recently, the chances are that if you were a tester on a project, you’d be one of a number of such people. You’d have other members of the team to try ideas out with, to share the workload, and to cover for you when you’re away.

With the recent drive towards agile, we’re seeing the makeup of the team change dramatically. Projects can typically be supported by much smaller teams working permanently to evolve the product. This can often result in there only being a single tester on a team.

What are the challenges of being the sole tester on such a project? How can you work within these constraints? This has been the subject of a series of workshops with fellow testers within my company, and I’m excited to share the outcome with you …

The Iron Triangle

Before we get underway, let’s revisit a principle from project management that underpinned many of our conversations. It’s useful for thinking about the constraints we’re working within on a project, especially in agile.

The iron triangle gives us the idea that the “quality” of a project is defined by three attributes of your project: cost, scope, and schedule (or time).

You might have heard the adage “cost, scope, schedule … pick two”. However, ideally on a project there should be only one cast-iron attribute – what management consultant Johanna Rothman calls “your project driver” in her book Manage It.

Within any project you can really have only one attribute that is fixed – it could be “we need this done by X” (schedule) or “there is only Y budget for this” (cost). The skill of a manager is to work with this constraint and plan what can be done with the other two attributes to achieve this goal.

Within traditional test management, there are many parallels in applying this same theory to test planning. Within this dynamic, the attributes are:

Scope – how much testing you’d like to achieve

Cost (or rather, typically, resources) – having more testers allows you to execute more activity

Schedule or timeframe – how long you have to do things

It should be obvious that if you have a large scope and a short timeframe, one solution is to put more testers on it. Of course, in the real world there are constraints on how far this can be pushed, and good test management revolves around knowing and pragmatically working within those constraints.

Another solution, of course, is fewer testers, but that means it takes longer to get through everything you’d like. Great for the test budget, but bugs are then found later in the cycle, and people such as developers need to be paid to stay on call longer to fix them.

Finally, if you find yourself in a situation where your available people and schedule are fixed, the only thing to do is to prioritise your scope, as it’s the only thing you have control over.
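To make the trade-off concrete, here’s a toy calculation – a minimal sketch with entirely hypothetical numbers, not a planning formula. Fix any two attributes and the third is forced:

```python
# Iron triangle arithmetic with hypothetical numbers.
scope_hours = 120               # testing we'd like to achieve (scope)
weeks_available = 2             # schedule, e.g. fixed by the sprint length
hours_per_tester_per_week = 30  # realistic execution time per tester

# Schedule fixed: how many testers (cost) do we need?
testers_needed = scope_hours / (weeks_available * hours_per_tester_per_week)
print(testers_needed)  # 2.0

# Cost fixed at one tester instead: how much scope fits in the schedule?
scope_that_fits = 1 * weeks_available * hours_per_tester_per_week
print(scope_that_fits)  # 60 -> prioritise which 60 hours of testing to keep
```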

Understanding this dynamic and its trade-offs is important, because it was a core part of the discussions we held, together with ways the constraints could be handled and occasionally hacked.

Under pressure

A common initial experience for someone stepping into the role of sole tester was a feeling of being under pressure.

Especially in an agile project, the timeframe is set by the sprint duration and your testing team size (although this can be “hacked” as we’ll discuss later).

Back in 2013, one of our projects had an annual release, which involved a two-month testing window and kept our test team of six busy.

Fast forward to 2018, and we’re now working in agile teams where we are creating deliverable code in a two-week sprint window using only two testers.

A key enabler in this was adopting a robust automated testing framework that was easy to maintain as the system under test changed. Such a suite did not grow overnight – it required a lot of work between testers and developers to build the right thing from a framework perspective, as well as to work through a prioritised list of useful automated tests to have in place. In working out ideal scenarios and prioritisation, testers found themselves well placed to lead these discussions. Over time, this suite was able to carry the functional regression load.
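As an illustration only, here’s the shape such a prioritised check might take – a minimal sketch using pytest and requests, where the base URL, endpoints, and the priority marker are all hypothetical:

```python
# Hypothetical smoke-level regression checks. A custom "priority_high"
# marker like this should be registered in pytest.ini to avoid warnings.
import pytest
import requests

BASE_URL = "https://staging.example.com"  # hypothetical system under test

@pytest.mark.priority_high
def test_login_page_is_reachable():
    # The core entry point must respond before anything else matters.
    response = requests.get(f"{BASE_URL}/login", timeout=10)
    assert response.status_code == 200

@pytest.mark.priority_high
def test_health_endpoint_reports_ok():
    # A cheap end-to-end signal that the deployment is functional.
    response = requests.get(f"{BASE_URL}/health", timeout=10)
    assert response.status_code == 200
    assert response.json().get("status") == "ok"
```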

Automated testing helped; however, it didn’t eliminate the testing role. Testers did find that their role changed dramatically. Most manual testing effort now focused on testing new or changed functionality in depth during a sprint, as well as taking increasing ownership of test scenario selection for the automated suite (and, shock horror, learning to code their own tests).

In teams still undergoing a level of “forming” – a term used to describe those with relatively new team members, some of whom were also new to working in an agile team – it was quite common for the sole tester to feel initially like they were the “point of blame”. If something gets out into production, the inevitable uncomfortable question can be asked: “why didn’t you test that?”

We shared a few of our experiences, looking for general themes. Part of the problem, as we were acutely aware, was time: it’s not always possible to test everything you want to.

In many cases where a defect went undetected in a release, manual testing had in fact occurred. Typically, though, something was missed, or no one had imagined that a particular scenario could cause an issue.

It’s worth taking a moment to think about how this was addressed in “classic” waterfall projects. A test lead would create a plan of what was to be covered, in consultation with many people on the project, but especially using the requirements. From this, they would build up a series of scenarios to be covered and make estimates around the resources and timescale.

However, on these classic projects, this was not the end of the story. It was the tester’s job to produce the best plan they could, but it was understood that the first draft would not be perfect. This was why such emphasis was put on the importance of reviewing – firstly by peer testers, to see if enough variation of testing heuristics had been employed, but also by the wider team: project managers, customers, developers.

The aim of reviews was to find gaps in the plan and address them, making the final testing scheme as robust as possible. Feedback could come from developers saying, “we’re also making changes in this area”, or from customers stating an expectation that “most people will…”.

Within agile, it can be easy to forget that this level of contribution is still required. It still needs to occur; it just happens in a more informal, often verbal manner.

Among my colleagues, there is a general consensus that the tester becomes more responsible for facilitating a discussion around testing – much closer to what some organisations call “a quality coach”.

A core tool for having these conversations is the mind map, which the group has been using with success since 2013. A mind map allows the author to show, in a one-page diagram, all the different variations and factors they’re planning to cover for a particular feature.

When done well, they’re intuitive to read and can even be posted in common areas for people to look at. Their brevity helps get people to read them – “I haven’t had time to read that thirty-page document you’ve sent yet” is a frequent complaint in IT.
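As an illustration, here’s what one such map might look like when flattened into an outline – the feature and its branches are entirely hypothetical:

```
Login feature
├── Credentials
│   ├── valid username and password
│   ├── wrong password, then locked account
│   └── expired password forces a reset
├── Sessions
│   ├── “remember me” across a browser restart
│   └── timeout after inactivity
└── Accessibility
    └── keyboard-only navigation of the form
```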

Even with a mind map in place, there is a natural tendency for the rest of the team to rubber stamp things. A sample conversation might go like this:

Tester: Did you have anything to add to the test mind map I sent out?

Team member: Uh … I guess it’s okay?

We all have a tendency to say something along the lines of “I guess so” for something we’ve not properly read. It’s important to still follow up with a brief conversation about what’s in your coverage – this can be individually with each team member, but often it’s better with the whole team. Just after stand-up can be a great time for this to occur.

If a member of the team notices a mistake in the approach, or items that are missing, they’re expected to provide that feedback. Likewise, if the developer makes more changes than initially anticipated, there’s an expectation that they tell the tester what else they might want to consider.

Often what you’ll read in agile literature about a “whole team approach” is essentially this: the whole team takes responsibility to give feedback whether it’s about how a story is defined, how a feature is being developed, or how testing is being planned.

A good indicator that a team has made this mind shift is the use in retrospectives of “we” instead of “you” – “WE missed this, WE need to fix this”. Teams where this happens have a much more positive dynamic. It’s important that this applies not just to testing.

Other examples include when a developer builds exactly what was on the story card, but not what was actually wanted (“we failed to elaborate”), when a story turns out much bigger than first thought (“we failed to estimate”) etc.

That said, agile does not mean the breakdown of individual responsibility. A core part of the tester’s role is to set clear expectations for the team of what they can do, how much effort it will take, and how they’re approaching it. But there needs to be team input to fine-tune this to deliver the best value.

Mostly, testing will revolve around changes to a product, for which the rest of your team are your first “go-tos” as fellow subject matter experts on the item. Occasionally, though, you will find value in consulting another peer tester – and there is an expectation that testers who are part of the same organisation but in other teams can be approached for their advice and thoughts on a test approach. Within our company, there is an expectation that all testers make some time in their schedule to support each other in this way. This, in many ways, echoes the “chapter” part of the Spotify model, with testing being its own chapter of specialists spread across multiple teams/squads who provide test discipline expertise.

Reaching out to other testers like this is important; it creates a sense of community and the opportunity to knowledge share across your organisation.

Waterfall into agile won’t go…

There have been some “agile-hybrid” projects where there was an expectation that a set number of people could perform a set volume of testing in a set time (a sprint). This can be problematic, as the tester involved in execution hasn’t been involved in setting the expectation of what volume of tests is realistic. Hence, it can feel like working against an arbitrary measure not based in reality.

In such a situation, it’s like being handed an iron triangle with “here’s your schedule, here’s your resources … so you need to fit in this much scope”. When faced with so many tests to run, it obviously helps to have them prioritised so that you’re always running the most important test next. When all three areas are fixed, what suffers is the quality – it gets squished.

On projects where test scripting was not mandated by contract, there was always a preference for exploratory testing, because it allowed the manual tester to focus their time on test execution with very little wastage – meaning more tests could be run, which helped reduce the risk.

Summing up for now …

There was so much material that we had to split it up. So far we’ve taken a first dip in, looking at how teams found themselves evolving towards a whole-team responsibility for quality.

Next time we’ll look at how testers found their voice, and at some of the key skills and approaches my colleagues found increasingly pivotal in their day-to-day role.

Thank you to Janet Gregory for reviewing, editing, and donating her expertise for this article.

To understand how to reach a zero defect status quo, think about defects the way you might think about bread. Bread is at its absolute best right out of the oven. Slather it with butter and pop it into your mouth. It’s heaven. The worst bread can get is when it’s moldy and completely inedible – but even then, it can be helpful.

Defects are often seen as bad things. That’s not necessarily true. Defects tell you something about what’s going on with the state of the application. If defects were to equal hot, tasty bread you pop right into your mouth, would you think about avoiding them? Maybe, maybe not.

There are probably plenty of reasons to avoid bread. Allergies. Carbs. Gluten intolerance. Those reasons are absolutely respected. Replace the idea of bread with anything that works just as well. Suggestions are welcome!

Measuring The Freshness Of A Defect

Let’s consider the bread analogy a little further. Bread goes through states of change; that’s how we can talk about piping-hot bread and moldy bread. Now, if I shift to talking about defects using a bread timeline, it looks something like this:

Bread vs Defect Age

Hot out of the oven – less than an hour old

Fresh – less than a day old

Stale – less than two weeks old

Moldy – more than two weeks old

Is it still bread? – older than a month

If you have a low number of defects waiting for a while due to complexity or dependencies, that’s probably OK. If you have a large number of defects that could have been resolved within hours of them being found but they are so stale no one really knows if they are still defects, then that would indicate there are some issues in how defects are being handled. Looking at defects with a very visible time measurement makes them valuable in gauging the health of an application.
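If your defect tracker records when each defect was raised, this timeline is easy to turn into a quick health check. A minimal sketch, assuming only that each defect carries a created-at timestamp (“a month” is approximated as 30 days):

```python
from datetime import datetime, timedelta
from typing import Optional

def freshness(created_at: datetime, now: Optional[datetime] = None) -> str:
    # Thresholds mirror the bread timeline above.
    age = (now or datetime.now()) - created_at
    if age < timedelta(hours=1):
        return "hot out of the oven"
    if age < timedelta(days=1):
        return "fresh"
    if age < timedelta(weeks=2):
        return "stale"
    if age < timedelta(days=30):
        return "moldy"
    return "is it still bread?"

# A defect logged three weeks ago is already moldy.
print(freshness(datetime.now() - timedelta(weeks=3)))  # moldy
```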

It’s a balancing act. You want to handle each defect correctly and in its own time, but discovering what that time frame is, and how quickly each one needs to be handled, is always a matter of trial and error. The best way to start surfacing wait times and delays is to create some rules around defect types, or flavors.

Here is an example of some defect flavors and handling rules to go with them (a small code sketch of the same rules follows the list):

Severity (usually functional issues)

High – don’t log it, fix it!

Medium – log it (but make the logging relevant: make a new story).

Low – why bother logging it? Note it on the current story card and move on.

Low – if there isn’t a stakeholder, even a hidden one, no defect is needed. Note it in your testing logs and move on.
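Expressed as code – purely a sketch, with hypothetical names and return values – the decision might look like this:

```python
def handle_defect(severity: str, has_stakeholder: bool = True) -> str:
    # Mirrors the handling rules listed above.
    if severity == "high":
        return "fix it now - don't log it"
    if severity == "medium":
        return "log it as a new story"
    if severity == "low" and has_stakeholder:
        return "note it on the current story card and move on"
    return "note it in your testing logs and move on"

print(handle_defect("high"))                        # fix it now - don't log it
print(handle_defect("low", has_stakeholder=False))  # note it in your testing logs...
```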

There should be some initial discussion about which defects rank where. That’s important. Also note that the information should be documented somewhere, either in testing logs or on the story card.

Don’t let the information get lost. If the defect suddenly becomes relevant or someone wants to know why no one found it before, you can point back to your logs or a story and then move it into a card or a defect management system. Before that point, defects that don’t rank high enough are only creating noise.

Why Defect Backlogs Cost Money And Time

On any maturity model for a company, one sign of development lifecycle maturity is how quickly and efficiently defects are handled.

If you are not using a zero defect approach of immediately resolving, or working towards resolving, your defects, then you, your team, or management are probably engaged in two or three of these things:

Holding triage meetings to groom the defects which haven’t been resolved or are not in flight

Maintaining a backlog which needs updating and managing

Paying for and maintaining a defect management tool (if you are only using story cards, you are moving in the right direction).

If no one is reviewing defects or maintaining them, the management tool and the backlog are basically where defects go to die. A company engaged in this unfruitful process is wasting money and time: it is literally writing something that will never be used into a tool the company is paying for – whether that’s an open source tool on a server or a cloud product – where it has no meaning to anyone because no one is looking at it.

Defect backlogs or management systems are only helpful if they are kept current and folks review them on a regular basis. This costs money: holding meetings, upkeeping tools, and making sure defects are being handled properly. Because people are actually engaged in the process, this method can cost more than leaving the defects to die. That’s OK. When there is complexity or necessity, this isn’t a bad process to have, but it’s a lot to maintain, and it threatens to slip back into the habit of the defect graveyard.

Moving to a zero defect process can do a few things for a software development lifecycle.

It lets people quickly make a decision about how to handle the defect.

It uses the original story as a reference for the bug OR turns the defect into a story which goes into the backlog for a sprint and is handled like a STORY instead of a defect.

Defects passed in from other teams, like Customer Service, get an answer immediately: instead of waiting for someone to tell them it’s fixed, the originating team knows whether it will be resolved by the next release – or never.

Setting Standards For A Zero Defect Process

The goal of a zero defect process is to reduce the time a defect spends in the backlog or under management to zero, by either fixing, converting, or closing it. Using the example rules above, you’ll want to set standards for at least the following three defect types:

Defects which originate from within the team

Defects which originate from external teams but involve your team

Defects presented (either internal or external) that are changed to a story or closed

Enhancements Are Not Defects

Some organizations like to lump enhancement requests in with defects, mostly because they are generated from the same place, the customer.

When a customer asks for an enhancement, that information shouldn’t be hanging out in a defect management tool or story backlog; it should be handled with a weekly report or some kind of handoff to the Product Management folks. Leaving it in a backlog means that someone is missing trends, customer requests, or possibly the next big feature idea.

Customers can ask for some pretty crazy things, but if your business model is centered around reporting and they are asking for a reporting feature, it might be important to tell someone rather than write a bug and close it as “enhancement request.”

Find a different handling method for those requests. Get Product Management to reply to customers directly. You’d be surprised how much customers like being acknowledged even if they know they aren’t going to get exactly what they wanted.

Always Have A Plan

Whether you are moving towards a zero defect practice or dealing with defect graveyards, having a plan is better than no plan at all. Communication is key in dealing with any defect of any severity. Create standards, tweak them, change them when they stop working, and make every effort to make backlogs and graveyards disappear.

Another alternative is to go for the radical option and declare defect bankruptcy and start over! At least then it would be a clean slate for your team or all the departments that want to take that route.

Are there other methods or approaches which have led you to a zero defect practice? Comment here or blog about them. It’s not impossible. It’s much like losing weight or going gluten-free: you need to commit to the process and move forward. That’s the only way it gets better.

Melissa Eaden has worked for more than a decade with tech companies and currently enjoys working for ThoughtWorks in Dallas, Texas. Melissa’s previous career in mass media continues to lend itself to her current career endeavors. She enjoys being EditorBoss for Ministry of Testing, supporting their community mission for software testers globally. She can be found on Twitter and Slack @melthetester.

Who can’t relate to this? We’ve all been there. We’ve all made mistakes. However, despite the universality of the experience, we all still seem to be afraid of it. We hate making mistakes and doing the wrong things.

In thinking about this natural propensity, many have pointed out the importance of embracing mistakes as a way of learning, and so in the software industry, we have tried to do this. We realize that if we let a fear of mistakes paralyze us, we won’t try new things and we’ll short-circuit the learning process. If everything we do revolves around protection and perfection, we won’t stand out and make the jump to the next level.

But what if you’re a software tester?

Preventing mistakes and mitigating risk – isn’t that why companies hire testers and create QA teams? Isn’t this part of our mission? It’s built right into the very heart of what being a tester is. It’s part of our DNA and core makeup. We of all people don’t make mistakes, do we? In fact, this is exactly what we are good at – finding mistakes, pointing them out, and making sure our customers aren’t exposed to them. We’re in the mistake prevention business, so should we be OK with messing up?

Making Mistakes

Let’s remind ourselves again: to err is human. We need to be OK with messing up and making mistakes because it will happen. Believe it or not, testers are human too. We are going to make mistakes and mess things up. And the reality is if we spend too much time and energy on trying to prevent mistakes, we aren’t going to have the time to do other valuable work. Let me give an example of what can happen when we get too focused on mistake prevention.

We recently integrated a couple of teams together, and as part of that process, we consolidated automated tests from different teams. One of the most interesting parts of that was how many checks some of the teams were doing for things that were not in their own area. There were a lot of tests that seemed to be about making sure teams weren’t the ones that ‘broke the build.’ Now breaking the build isn’t a good thing. It’s a mistake we don’t want to keep making. But there is a difference between adding integration coverage in a smart way that responds to the realities of what causes the builds to break, and having every team cover their butts in a way that adds excessive coverage. When we reviewed the tests from the different systems, we found many tests that were redundant or duplicated simply because teams wanted to be sure that if anyone broke the build it wasn’t going to be them.

An undue emphasis on preventing mistakes led to an inefficient approach to testing and also prevented some learning opportunities. Each team was so worried about preventing mistakes of its own that it became siloed. Instead of looking at the problem holistically, teams were looking at it in terms of ensuring their specific team didn’t get embarrassed. This reduced very important cross-team collaboration and prevented many learning opportunities. Don’t be too afraid of making mistakes!

Not Making Mistakes

Being OK with making mistakes isn’t the whole picture, though. It is true that testers are usually hired to help prevent mistakes. So what kinds of mistakes do we need to prevent? What kinds of things do we need to make sure don’t happen? Let me give a couple of examples of mistakes you don’t want to make.

Shipping with test data

This is one mistake you really don’t want to make. Don’t ship with a test database or dummy data. Don’t ship with testing and debug flags on. These are the kinds of mistakes that make a big, expensive mess, and let’s be honest, it should be pretty easy to automate some quick checks that act as a safety net on the last deploy step. Be careful with this one. The point of having test data is to help you better prevent errors and mistakes. You really don’t want it to become a cause of errors.
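As one illustration of such a safety net – a minimal sketch in which the config file name and keys are entirely hypothetical – a final deploy step could fail fast if test data or debug flags are still configured:

```python
import json
import sys

def check_release_config(path: str) -> list:
    # Inspect the release configuration for leftover test settings.
    with open(path) as f:
        config = json.load(f)
    problems = []
    if config.get("debug", False):
        problems.append("debug flag is still on")
    if "test" in config.get("database_url", ""):
        problems.append("database URL points at a test database")
    return problems

if __name__ == "__main__":
    issues = check_release_config("release-config.json")
    if issues:
        print("Blocking deploy:", "; ".join(issues))
        sys.exit(1)  # a non-zero exit fails the CI/CD step
```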

Losing private customer data

Our customers give us a lot of private data that they rely on us to protect. Treasure that trust. Trust is easy to lose and hard to get back, so make sure you are being very, very careful about the way you treat sensitive or confidential data. I’m usually not a big process guy, but when it comes to confidential customer data, you might just want to have some extra process safeguards in place. Don’t be lax on this one!

Destroying customer property or data

I remember a time in our company when someone accidentally committed a change that could call sudo rm -rf in the root directory. If you don’t speak Linux, that is basically saying ‘force a delete of everything on my computer (including the OS itself).’ As you can imagine, this was not a good place to be. Thankfully we caught the problem after a couple of our test machines had everything deleted, but imagine if we had released with that bug! That isn’t the way to keep customers or make your company money. Pay close attention to the kinds of things that can delete data or cause hardware failures. Your customers will forgive a lot of silliness in your app, but they will have a much harder time forgiving you if you mess up data from other apps or systems.
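One common defensive pattern against this class of mistake – sketched here with a hypothetical, far-from-exhaustive list of protected paths – is to make destructive operations refuse obviously dangerous targets:

```python
import os
import shutil

PROTECTED_PATHS = {"/", "/home", "/etc", "/usr", "/var"}

def safe_remove_tree(path: str) -> None:
    # Resolve symlinks and relative paths before deciding anything.
    resolved = os.path.realpath(path)
    if resolved in PROTECTED_PATHS:
        raise ValueError(f"refusing to delete protected path: {resolved}")
    shutil.rmtree(resolved)
```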

Conclusion

In conclusion, don’t let the fear of making a mistake hinder your progress and productivity, but as a tester remember that you are still in the mistake prevention business. Take the time to think about what kind of mistakes you don’t want to make. It is OK to make mistakes – except for when it isn’t.

About the Author: Dave Westerveld has been testing software since 2008. He’s worked on projects ranging from well-established products to the early stages of major new initiatives, and he’s passionate about helping teams efficiently produce high-value software. Dave will also be speaking at the upcoming KWSQA Targeting Quality and Better Software East conferences. To learn more about him, you can check out his blog or follow him on Twitter.