Twice upon a time, in another space, no distance in any direction from here …

Tester as Learner

I’ve always been interested in the different ways that testers think and how those modes of thinking directly apply to the work testers do. What it comes down to for me is how people learn. This ultimately impacts how they evolve their career. And, in a somewhat loaded statement, how a tester has evolved their career tells me how useful they are going to be.

Take, for example, the idea of testing being the “Great Hunt.” James Whittaker had an interesting take on this:

“I see this period of focusing exclusively on the hunt as a rite of passage, an important trial by fire that helps immerse a new tester in testing culture, technique, and mindset. However, if I had to do it for too long as the main focus of my day, I’d go bonkers. This monotony is the reason that many testers leave the discipline for what they see as the more creative pastures of design and development.”

I would maintain that testers who still find a “thrill” in bug finding after many years of doing it are probably showing you all they are ever going to be. The thrill is not in the hunt for an effective and efficient tester. The thrill is in planning the trap. Therefore there is almost no hunt. You just spring the trap!

But that means you need many traps. In strategic locations. Designed to catch certain types of issues. The strategic elements are what testers move to as they go from test analyst to test engineer and, eventually, beyond. So in looking at numerous individuals over the course of time – including myself, I might add! – I’ve tried to distill some conclusions based on the learning modes that we, as human beings, tend to exhibit.

The Learning Modes

These modes can be broken down along a few axes.

What type of information does the tester preferentially perceive: sensory or intuitive?

Through which modality is sensory information most effectively perceived: visual or verbal?

With which organization of information is the tester most comfortable: inductive or deductive?

How does the tester prefer to process information: actively or reflectively?

How does the tester progress toward understanding: sequentially or globally?

The rest of this post is my attempt to make sense of what I just said regarding these modalities, for lack of a better term.

The Learning Modes Applied to a Tester

Sensory/Intuitive

The difference in modality here is in how information is perceived.

This is probably one of the most important things to look for and understand. In a nutshell:

The sensory-based person (who likes details and well-defined solutions to problems, while preferring information gained from their senses) will tend to focus on their actual observations of the software they are testing.

The intuitor (with preference for internally generated information) might then focus instead on their internal model of the software they are testing.

The two testers will probably also vary in their approach to performing their testing. Let’s consider the sensor first.

The sensor will generally apply “rules and tools” — solutions that have worked in the past for specific bugs that they can apply in their current testing to determine whether a particular bug exists.

Their testing will take the form of a series of experiments on the application.

These experiments will tend to be of the form “Does this specific bug exist?”

A strongly sensing tester may also be more likely to begin testing the product before they create any models of the software (mentally or otherwise).

They are more likely to consult the specification and other reference material, and to experiment with the conformance of the documentation and the product.

The learning done by a strong sensor is apt to be more based on experiencing the product.

Given the sensor’s preference for well-defined standard solutions, they are probably going to be more inclined to develop a standard pattern for using an approach like exploratory testing. This pattern can develop into a mental script, shifting the tester’s focus from exploratory testing to scripted testing.
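The sensor's "rules and tools" style can be sketched in code. This is purely my illustration (the function, bug list, and names are hypothetical, not from any real project): each "trap" is a specific, repeatable check asking whether one previously seen bug exists.

```python
# Hypothetical sketch: a sensor-style tester encodes past bugs as
# specific, repeatable checks. All names here are illustrative only.

def trim_input(value: str) -> str:
    """Stand-in for the system under test: should trim surrounding whitespace."""
    return value.strip()

# Each "trap" asks one question: does this specific, previously seen bug exist?
known_bug_checks = [
    ("leading spaces survive trimming", lambda: trim_input("  x") == "x"),
    ("tabs survive trimming", lambda: trim_input("\tx") == "x"),
    ("empty input breaks trimming", lambda: trim_input("") == ""),
]

def run_checks(checks):
    """Run each known-bug check and report its pass/fail result by name."""
    return {name: check() for name, check in checks}

results = run_checks(known_bug_checks)
```

Notice how easily such a collection of checks hardens into a script: once the list exists, the testing becomes "run the list," which is exactly the drift from exploratory to scripted testing described above.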

The intuitor is more likely to approach the problem by applying different theories of error to the software.

They will take a risk-based approach to their testing, thinking of ways in which the software can fail and then thinking of tests which will show whether the software actually does fail in that manner.

This is different from the sensor’s testing for specific bugs in that the intuitor is not focusing on bugs they have seen in the past.

Instead, they are using their experience and understanding of the application they are testing to think about all the different failure modes of the application.

They usually like it when their mental model of the software is proven to be incorrect, often viewing the act of bringing their model back in line with reality as a fun challenge to be tackled.

It’s a subtle difference between the two — a difference primarily of level of thought. The sensor is taking specific examples of bugs and checking for them. The intuitor is looking for more general possibilities of failure (each of which may have multiple bugs associated with it) and then deriving tests that could trigger a particular failure mode. The intuitor is also more likely to begin testing by building a model of the software. While the sensor is perhaps doing research designed to predict the behavior of the system, the intuitor might instead be doing research to define and then refine their models, with the intention of then evaluating the model against the product.

My belief is that the most effective and efficient testers have found a way to harness both modalities.

I have observed a few generalities regarding those modes. Given that these are general statements, I present them as tendencies rather than as hard categories you can slot someone into.

Sensory testers are observant up to a point; they certainly favor facts and observable phenomena. But that also means they are apt to prefer problems with well-defined standard solutions and dislike surprises and complications that make them deviate from these solutions. They tend to be good, but limited, experimentalists.

The intuitive testers can be really bored with details, but can handle abstraction. They only like the details when they feel they can fit them into an understood abstract concept. Intuitors can be careless when performing repetitive tasks, preferring instead to focus on more innovative (and thus new) things.

Sensory testers tend to want their thinking done for them (as via a script or a tool) whereas intuitors prefer to do the thinking with only guides provided for them.

Visual/Verbal

The difference in modality here is in how information is received.

The major difference between testers with a strong preference for visual learning and those with a preference for verbal learning might reflect the internal mental model that the testers use.

Let’s first consider visual learners.

Visual learners will tend to work off an internal model that is picture-based.

This model could be a set of UML diagrams, flowcharts, or even mental screenshots.

They will also tend to work off visual portrayals of the steps in the tests they are executing.

These portrayals may run like a movie in the visual learner’s head and that’s how they may even convey them: sequences of action.

Alternatively, visual learners may make diagrams and pictures for their notes as they explore.

As for the verbal learners…

Verbal learners will instead use a textual model for their testing.

These models might take the form of a textual description of the system (perhaps a textual use case format) or they may take the form of a remembered conversation.

The model will be based around words — the tester will use words to describe the system to themselves and words to describe each step in the process.

In addition to their internal models, these testers may also differ in the types of specification documents they try to get from their analysts and developers.

Not surprisingly, visual testers probably will be more comfortable working with visual models of the system, whether those be state charts, UML diagrams, flowcharts, or whatever other representations the people designing and building the software use to help clarify things for themselves. Verbal testers, on the other hand, are likely to be happier taking the textual specification for the system and wading through it, learning as they go.

In my own experience, it doesn’t really matter which aspect your team focuses on the most, as long as they can craft effective tests from the materials available to them.

Inductive/Deductive

The difference in modality here is in how information is organized.

In general, an inductive learner prefers to work from specifics and derive the generalities, while a deductive learner starts with the generalities and applies them to the specific situations they encounter. A few immediate generalities spring from this:

An inductive tester will tend to adopt an approach to testing where they gather as many specifics (such as techniques, potential defects, changes made to the application, and application history) as possible and generalize them to the application.

A deductive tester will tend to approach testing by keeping a collection of general principles and heuristics and find ways to specifically apply these generalities.

I’ve tended to run across the inductive type of tester more, so here’s what I’ve observed in that regard:

The inductive tester will likely take advantage of historical data — looking at the available defect reports, the technical support database, published information about the software being tested and about similar programs, and any other historical documents that they can get a hold of.

From these documents, the inductor will derive a set of specific guidelines that they then can use to guide their testing. For example, the application being tested may have a history of defects in one particular area.

An inductor would take the specific fact of the large number of defect reports and generalize it to show that there could still be a large number of defects remaining in that application area, and thus focus more attention on that area than on another area which has had no defects reported historically.

While an inductive learner is using heuristics in this approach, their usage is less apt to be deliberate than a deductive learner’s will be.
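The inductor's move from defect history to test focus can be sketched as a small calculation. The application areas and defect counts here are invented for illustration:

```python
# Hypothetical sketch: an inductive tester generalizes from defect history.
# The areas and counts below are made up for illustration only.
defect_history = {
    "checkout": 42,
    "search": 3,
    "profile": 0,
}

# Generalize the specifics: areas with more past defects get a proportionally
# larger share of testing attention, ordered from most to least defect-prone.
total = sum(defect_history.values())
focus = {
    area: round(count / total, 2) if total else 0.0
    for area, count in sorted(defect_history.items(), key=lambda kv: -kv[1])
}
```

Here "checkout" would claim the lion's share of attention and "profile" almost none, which mirrors the inductor's generalization that historically buggy areas likely still hide defects.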

As for deductive testers, when I have met them, they tend to follow some of the basic principles of inductive learning, but there’s also this:

The deductive learner starts with a collection of general heuristics and guidelines and then consciously applies them to the application.

Many of the traditional techniques of software testing are deductive and so the tester learns the basic skill (such as equivalence partitioning) and then determines how to apply it in the specific situation of their testing.
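Equivalence partitioning, mentioned above, is a good example of a deductive skill applied to a specific situation. Here is a minimal sketch, assuming a made-up age field with boundaries I chose for illustration:

```python
# A minimal sketch of equivalence partitioning. The field (an age input)
# and its boundaries are my own illustration, not from any real system.

def partition_age(age: int) -> str:
    """Classify an age value into an equivalence class for test selection."""
    if age < 0:
        return "invalid: negative"
    if age < 18:
        return "minor"
    if age <= 120:
        return "adult"
    return "invalid: implausibly large"

# One representative value per class stands in for the whole class:
# testing each representative exercises the behavior of its entire partition.
representatives = [-1, 10, 35, 200]
classes = [partition_age(a) for a in representatives]
```

The deductive tester starts from the general rule ("divide the input space into classes that the software should treat identically") and derives the specific partitions; the inductive tester is more likely to arrive at similar partitions by accumulating specific observations first.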

There will be differences in how deductive and inductive testers use bug taxonomies, lists of potential risk factors, and the distinctions between scenarios, conditions, cases, and so on. What I’ve found is that, in general, the deductive tester will tend to gain familiarity with the categories and then be able to come up with new examples within those categories. The inductive tester, on the other hand, can be expected to gain an understanding of the various elements and then be able to see new ways of categorizing them.

You definitely need both styles of thinking. What I’ve found is that it’s pretty hard for any tester to not fall into one of these learning modes, simply by the fact that they are human beings. What I’ve found more often than not, however, is that testers are not able to articulate the mode of thought they most fall into or rely on.

Active/Reflective

The difference in modality here is in how information is processed.

Some people need to use any information they get right away for it to stick in their memories, while others need to think about the information and figure out how it fits into their mental framework before they can use it. From what I’ve seen, active and reflective testers differ most in how they execute tests.

An active tester will usually do very hands-on testing.

They often will perform many test cases rapidly and will view each test case as an experiment, asking, “What happens if I do this?” each time.

An active tester will also tend to be more visibly a part of a testing group, often bouncing ideas and results off other members of the group to solicit their feedback.

A reflective tester is apt to do far fewer tests.

A thought process will precede each test case where the tester is thinking through the test.

Reflective testers make up for their lack of speed in test execution by executing the “good” tests that are most likely to find bugs.

Most reflective testers will probably tend to prefer to work alone or with at most one other person, and so may seem anti-social or outside the group.

The isolation and thinking should give the reflective tester the time to develop more complex tests and scenarios to apply to the application, and thus they should be encouraged to take the time they need. The active tester is more often going to be bouncing the ideas for those more complex tests and scenarios off the rest of the team.

As with all of these modes of learning, you need a healthy mix of both here. It’s not so much that you need each tester to be a good mix of both, but rather that you need both types of testers on your team, along with recognition of how such personalities tend to work.

Sequential/Global

The difference in modality here is in how information is understood.

This dimension deals with how learners “get” the information they are learning. Sequential learners learn material in a logically ordered progression, learning little bits as they go, and incrementally building on the knowledge they have already learned. Global learners, however, tend to learn in chunks. They will spend some time being lost, then suddenly everything will come together and they will understand the concept.

The sequential tester will often seem to get off to a faster start.

They will build test plans (or approaches) as they go along, step by step.

Not having a piece of information will not normally prove to be a problem for a sequential tester, as they will just work with the information that they do have.

They also will be able to explain their tests clearly to people after they have performed them.

In general, a sequential tester’s test cases will grow in complexity over time as they build a deeper understanding of the system. One issue, however, is that a sequential tester is less likely to ask for information that they should have, but don’t. They will allow assumptions to creep into the tests. Their tests may also be a bit simpler, thus lacking a certain amount of breadth and depth to the testing.

A global tester will get off to a slow start.

They may have problems understanding the point of the application (or their area within it) and need to be shown how to use the application in order to have any idea how to test it.

Once they get the piece of information that brings it all together for them, however, they quickly become able to create detailed, complex tests that often draw on connections that other people on the testing team have not seen.

A global tester’s test cases may be more high-level in nature. Since they see the big picture first and foremost, that’s what they tend to write. They will assume the details can be covered elsewhere (if they are even needed at all). Global testers may sometimes not be able to explain all their steps very well, at least without prodding. The tests of a global tester will tend to be more complex, focusing on end-to-end scenarios.

In terms of test planning and creation, sequential testers will tend to be effective at convergent thinking and analysis. Global testers will tend to be effective at divergent analysis and synthesis. Global testers have much less enjoyment (or tolerance) for working with systems that are only partially or superficially understood. This means they will tend to avoid having assumptions or false information creep into their tests, but it also means it will take them longer to create those tests in environments where knowledge is sparse or unevenly distributed.

So what? What does this tell us?

Well, what I hope is clear in all this is that it’s really a mix of the modes that you want when you are looking to become the most effective and efficient tester you can be. But a mix is just that: a mix. It doesn’t mean that everyone should strive for some perfect balance, if such a thing even exists.

For me, this kind of breakdown really helped me understand what kind of learner I am. I had some blind spots, but I had some good habits as well. Often, so did my teammates. Knowing both what needed to be improved and what could probably be mentored to others was a very liberating experience for me, one that made me feel more in control of my career. Being more in control allowed me to focus the direction my career would take. I certainly hope that’s made me a useful tester.

About Jeff Nyman

Anything I put here is an approximation of the truth. You're getting a particular view of myself ... and it's the view I'm choosing to present to you. If you've never met me before in person, please realize I'm not the same in person as I am in writing. That's because I can only put part of myself down into words.
If you have met me before in person then I'd ask you to consider that the view you've formed that way and the view you come to by reading what I say here may, in fact, both be true. I'd advise that you not automatically discard either viewpoint when they conflict or accept either as truth when they agree.