Editor’s Note: I have just posted version 1.03 of this article. This is the third revision we have made due to typos. Isn’t it interesting how hard it is to find typos in your own work before you ship an article? We used automation to help us with spelling, of course, but most of the typos come down to properly spelled words that are in the wrong context. Spelling tools can’t help us with that. Also, Word’s spell-checker still thinks there are dozens of misspelled words in our article, because of all the proper nouns, terms of art, and neologisms. Of course there are the grammar-checking tools, too, right? Yeah… not really. The false positive rate is very high with those tools. I just did a sweep through every grammar problem the tool reported. Of the five it thinks it found, only one, a missing hyphen, is plausibly a problem. The rest are essentially matters of writing style.

One of the lines it complained about is this: “The more people who use a tool, the more free support will be available…” The grammar checker thinks we should not say “more free” but rather “freer.” This may be correct, in general, but we are using parallelism, a rhetorical style that we feel outweighs the general rule about comparatives. Only humans can make these judgments, because the rules of grammar are sometimes fluid.

Reader Interactions

Comments

Thanks, it’s been an interesting read and a great summary of the objections to using “test automation”. The paper left me missing just one thing – something to fill the void created by taking test automation out. [James’ Reply: No void is created. The cases at the end of the paper show examples of how it works. The list in the middle shows all the ways tools can be used. So, I really don’t know what you are talking about.

We haven’t created a void, we have simply shown what’s going on and named it for what it is. You feel weird about it, maybe, because you are waking up from an illusion.]

It’s something I thought about before, and having failed to come up with a better name to use myself (let alone convince others to use it), I really hoped to find something that could help me form a suitable term for what I do (or, even easier, embrace something already formed). Because I’m not a toolsmith, and neither am I merely “using tools”. A toolsmith, in my eyes, is someone who creates tools for the use of others. [James’ Reply: So you think someone who behaves as a blacksmith can’t be called a blacksmith if he only makes horseshoes for himself? A person with all the skills of a locksmith can’t be called a locksmith if he only works on his own locks?

If you don’t build your own tools then you are using tools other people built. What other category could there be?]

The smith may use those tools, but the main purpose of a toolsmith is to provide other people with them. [James’ Reply: I would call you a technical tester if you build your own tools. But for the record, if you build your own tools, then you are doing toolsmithing for yourself.

It’s also possible you feel you aren’t really building a tool because you think the coding you do to automate a check doesn’t “count” as tool-building. That’s just nonsense created because you have reified the idea of tool-making to “full stack” tool-making only. It’s like those people I encounter who distinguish between scripting languages and programming languages as if that makes any difference. It makes no difference. Coding is coding. In my career I have written code professionally in machine language, assembly language, C, C++, Perl, Ruby, R, Basic, Excel Macros, etc. Coding in Excel Macros is every bit the same sort of intellectual process as coding in machine language or C.

Every programmer uses building blocks provided by some other technologist. That’s what you are doing when you write checks with your favorite tool. You are using a tool to construct a tool. Own that!]

In my area, the recruiters are calling that “automation infrastructure”. I’m not merely using tools, because that’s too general to be of any use in distinguishing anything. Everyone uses tools, and just about anything is a tool. Sure, the automated checks I write are a tool, but they are a very different type of tool than occasional tools such as Notepad++ or Excel (both are extremely helpful for me).

[James’ Reply: What exactly is different in any way that matters?]

The automated checks have very different needs (both from me and from my management), and therefore require a differentiating term. A hunter may have tools – his knife, some traps and bait, even a blanket to fend off the cold. But his gun? His bow? That’s a weapon. When I deploy my automated checks I’m not using a tool; most of the time I’m not even testing anything, since I did my testing when developing the check.

[James’ Reply: Yes you are using a tool. Don’t be absurd.]

I’m deploying a safety net that will help me spot specific problems, or I deploy some sensors to help me gather information quickly. [James’ Reply: “Safety net” is a label describing a function of the tool you have contrived. Come on, man. Don’t just play with words for the sake of it. You are using a tool to do that.]

The only question I have, and perhaps you came across better answers than I did, is “what do I call those?”

[James’ Reply: You know what to call it. It is automated checking. That’s exactly what it is. This is a form of tool-supported software testing.]

I tried “check automation”, but as it gets shortened to “automation” anyway, it doesn’t help me much (let alone the fact that I have trouble finding a suitable translation for the checking/testing distinction from English).

It might be a language barrier, or a mental barrier that I am facing, or plainly just habit – but something just doesn’t feel sufficient in the use of those terms. True, you don’t create a void by pointing out that the use of “test automation” is causing some false expectations, but you do hold out a finger and say “look at this gap over here”, so I see the need for a usable term (usable = a term that will help in two tasks: defining what it is for myself & communicating this idea to others).

As for being a toolsmith – no, there’s a difference between toolsmithing and being a toolsmith. Yes, I possess some toolsmithing skills, but calling myself a toolsmith conveys the wrong message, as it shifts the focus from the end goal (me, testing better) to the activity (tool creation), and it draws attention away from the fact that creating automated checks is also a testing technique (when I automate stuff, I use it as a way to force myself to notice the tiny details I would otherwise miss).

[James’ Reply: Of course, using tools helps you test! The fact that a tool can be part of a test design technique is not surprising at all.]

When toolsmiths create a tool, they have a goal – to create the best tool they can. When artisans create their own tools, they have a different mindset – they want to create the tool that will enable them to do precisely whatever they want to do. In that manner I’m more an artisan than a toolsmith – I have a task, and the tools I create, find, or buy are meant to help me do the task I specifically have in mind (it might be as narrow as “I want to change this HTTP header before sending it to the server” or as wide as “I want to be able to fully control the responses of the server my software contacts and see where it will lead me”).

(By the way – what’s non-technical about testers who don’t create their own tools? Testing software is a technical skill just as much as programming is.)

[James’ Reply: When I speak of technical testers I am specifically referring to developer skills brought into the testing arena. Tester skills are not normally the same as developer skills, so that deserves a special label. The most common label for that around the industry (in my travels) is “technical tester.” I don’t think you can fairly say that the typical tester has strong technical skills, but the typical developer must have them.]

Next, the important difference between an occasional tool and automated checks is the fact that the latter requires skilled people maintaining it, not just skilled users of the tool. If you had an Excel template with pre-calculated complex functions and macros, it would be the same; but if you are like me and use Excel on the fly as the need arises, it is different. This sets up some important considerations: if I’m able to use a “dirty hack” to make an occasional tool do what I want in a manner that will save me an hour of testing, I will do so without a second thought (almost), while if it were in the tool I maintain, I might end up investing two days improving the tool properly.

Now, for the part about me “using a tool”. I got a bit carried away. What I meant was that I was not using a testing tool. [James’ Reply: Any tool used for a testing purpose is your testing tool. There is nothing deeply important or special about a tool that someone crafts for the purpose of testing.]

Sure, I use tools all the time. The cutlery I use during lunch is a tool, but it is not a testing tool. [James’ Reply: We covered this in our article. If you use the tools for a testing purpose, then they are testing tools.]

In some cases, neither are the automated checks I deploy. When I set an automated regression suite to run on a daily basis I am not really testing anything unless it fails (sure, one can argue that with each run I learn “those parts are not broken today”, but while that is important, I don’t really consider it testing until someone looks at more than the bottom line “all checks passed”). What’s important for me is that I do an activity that isn’t testing, but is important enough for me to do – and I don’t have a useful term (same definition as above) for it. [James’ Reply: Checking is part of testing. Unless the checking you are doing is worthless, it is embedded in your test process. So, yeah, you are testing.]

My problem with “automated checking” is that it is similar to the response “you are in a hot air balloon, approx. 30 feet above ground” (http://www.design.caltech.edu/erik/Misc/balloon.html ). It is correct and exact in every manner, but it is helpful in only one of the two things terms are used for – after I’ve learned to accept the testing/checking terminology (which did require some thinking effort), I can use “automated checking” to better define what I do. However, when I try to communicate with others who have not made the same journey, they can’t differentiate check automation from test automation (and what’s worse – they won’t call it by either name; we’ll have a lovely conversation about “automation” without them noticing their assumptions).

[James’ Reply: There is no such thing as test automation. So if you are using that phrase then you really don’t understand what a check is. The distinguishing feature of a check is that it CAN be automated, unlike a test which can only be supported but not embodied by an automated process.

I am constantly dealing with people who haven’t gone through this journey. For them, the words check and test mean the same thing. This is okay if we are having a shallow conversation. If we have a deeper conversation, then I have to help them understand that testing is the THOUGHT PROCESS, not the actions they take.

People who refuse to comprehend the difference between checking and testing are, to me, like people who refuse to understand the difference between saying something that is not true and telling a lie. To tell a lie is to deliberately mislead someone, and within that there is the matter of misleading in order to serve their interests and misleading in order to subvert their interests. These are important distinctions, but many people indeed refuse to make them. And, to me, that means they are foolish and careless people.

If someone wants to be treated with respect as a professional by me, then he must strive to speak and act like a professional.]

I might be able to communicate in other ways and get the concept across, but the term isn’t distinctive enough to differentiate the two. If a manager asks me how my testing is going and I respond “I ran several checks around that area”, an immediate dissonance arises and they will notice that I did not say “I ran tests”. Over time the difference will become clear. [James’ Reply: That’s okay. But you may need to remind that manager that “checks passing” does not mean everything is okay. You have to complete the testing to come to that conclusion.]

The talk around automation is “do we want to create automation for that?”, “what percentage is covered by automation?” and “did you finish automating that?”. Even if I take the more general question “how’s the automation going?” and say that my response is similar – “I’ve automated some checks for this feature” – the other person has heard the word they used (automation) spoken back to them, so they will feel confirmed in their approach.

(Sorry for the delay; I took some time to think and process.) I think I understand your use of the term “toolsmith” a bit better (and I am sure I understand much better the way I interpret that term, and my drive to distinguish between a tester who writes code and someone I would call a toolsmith), but I feel that either I’m explaining my thought wrong, or we have just reached a fundamental difference of viewpoint that is not worth bridging at this moment. So for the discussion about that, and for helping me sort things out at least in my head, I thank you.

However, when touching on the point of “technical tester”, you are using an argument that is plainly invalid in this context (I wanted to say “you cheat”, but something in the back of my mind tells me it might be far more offensive than I mean, so I decided to drop it): you are defending the choice of term by the fact that it is “the most common label for that around the industry”. Well, guess what? So is “test automation”. [James’ Reply: My argument is valid. Perhaps you don’t understand it. First, it is reasonable to consider common usage when deciding on terminology. I always do that. But that is not the only factor in my argument; another factor is the toxicity issue. If a usage is common but toxic, then I will abandon it and come up with something else. We’ve done that in the case of “testing” (where checking is the non-toxic alternative) and “test automation” (where there are several alternatives, including “test-tooling”, “test tools”, “tool support”, “automation in testing”, and “automated checking”).

In the case of “technical tester” I see no toxicity there. It is not misleading. In fact it would be misleading to say, as a blanket statement, that testers have technical skills, when so few seem to have any comfort solving technical problems – a fact I can vouch for, since most of what I do is teach and coach testing around the world, so my face is rubbed in this almost every day. Also, for what it’s worth, most testers I encounter are afraid of mathematics, too.]

You are fighting against “test automation” since it does not exist, since naming an activity this way misleads people to think that testing can, or should, be automated.

[James’ reply: Yes, it is a fundamentally misleading term. I once felt that it was misleading in a benign way, but I no longer feel so.]

You fight it because putting a word to something frames the way people think about it. The same applies to “technical tester” – trying to distinguish a coding (or coding-capable) tester from a non-coding tester by using “technical” is very much like trying to separate the “thinking tester” from the “non-thinking tester” – all testers are thinking (as testing is a cognitive activity), and all testers are technical (as testing is a technical activity).

[James’ Reply: I disagree. I think if you say that you either don’t meet many testers or you are unreasonably downgrading the word “technical.” I see no reason to claim that testing is necessarily a technical activity. It’s certainly a thinking activity, yes. It’s a creative activity, a learning activity, a social activity, definitely. But, perhaps as a former developer, I have a higher standard for the word technical than you do? Maybe I’m more snobbish about it?]

In addition, testing being thought of as a “non-technical” profession has a negative impact on any tester who has to deal with “the technical people” dumbing down their conversation for them, not to mention the need to overcome the natural patronizing attitude non-technical people get from coders (I still do it even though I try not to, and am slowly improving in that area). The typical tester has to be able to see in their mind how the different parts of the system connect, and be able to analyze the risk each change might create (and, if I may use a claim that I think I got from you – they have to be able to explain both their work process and results in professional language). If that’s not “technical”, I don’t know what is. [James’ Reply: Analyzing systems is part of being technical, but most working testers are not able to do that well, in my experience. So I am denying your claim that it is normal for testers. If it were normal, we would not have an industry dominated by script-following zombies and ceremonial certifications.

I do advocate developing such skills – partly because I am a technical tester, myself. I do think that having people in testing who also have development skills is a powerful thing, though I do not believe all testers need to be or even should be technical. There is also a big need for social testers, empathic testers, administrative testers, etc.

Technical testing skill is not all or nothing. I say someone is a technical tester to the degree that they embody the developer’s skill-set in a testing role. Thus you can have some technical skills and not others. Coding is a dominant skill in that list, however.

I HAVE met people who write automated checks who, despite being able to use their tool, seem to have learned that task by rote and do not seem to be able to reason like an ordinary technical person. I don’t really want to believe that exists, but I suppose it does.]

Sure, I know some (thankfully, not many) non-technical testers. They are not good testers, and for most tasks I would not consider them adequate testers. If those non-technical testers are “the typical tester”, then the typical tester is not good enough and should become better. Then we need to start looking for ways to improve those who are in the profession of testing.

[James’ Reply: The typical tester is very much not good enough. I started looking for ways to fix this back in 1987. I have had some success with people I have managed to touch.]

In other contexts, where redefining the way words are used wasn’t the whole idea, I might have accepted the “this is the common way to use this term” approach. But when trying to uproot commonly used terms such as “test automation”, dismissing other terms that are at least as important should require stronger reasoning. [James’ Reply: When crusading with language, you have to pick your battles. But in this case, I don’t even see a battle that needs to be fought. Common usage of “technical tester” seems mostly reasonable to me.]

I like the notion of toxicity, and I believe that if you consider it a bit more, you will see the toxicity of “technical tester” as well.

Let’s start by gathering some points we are in agreement about (at least, as far as I can gather from your comments up until now): 1) Most working testers are not good enough. 2) Analyzing systems does require technical skills. 3) Coders tend to be snobbish about the term “technical” and often equate it with programming skills. 4) I don’t meet that many testers. [James’ Reply: I agree with 1 and 4. I agree with 2 and 3 conditionally. I would say that technical skills are not all or nothing, but rather there are levels of them. And that some kinds of analysis of systems require technical skills and some don’t; some require stronger skills and others lesser. I don’t agree that coders, or I, equate technical skill with programming skill – only that programming skill is the centerpiece of technical skill in the software world. Centerpiece does not mean only.]

From here, let’s have a look about what testers are typically doing:

* To quote Brendan Connolly – they translate between developers and just about anyone else (http://www.brendanconnolly.net/testers-translators/). Those others might be managers, product owners, customers, or OPS guys.
* “Tier 4 support” – I know that in my role, the test team is the first go-to for the “support engineers” (as they are called in my workplace) once they get a problem they can’t solve themselves. I have also heard about testers doing first-line support in some cases, but I don’t know how common that is.
* Help with deployment.
* Manage the test environments.
* Create, maintain, and use testing tools.
* Advocate for bugs in a language that will be understood by developers.

[James’ Reply: Some testers do these things. In order to do them well, you better understand development and you better understand technology. You are claiming that testers can have adequate knowledge of these things without having any coding knowledge? I doubt that. But testers who are strong in that knowledge, and thus able to do these things effectively, I am happy to call technical testers.]

All these activities require some technical skills, which leads to the conclusion that a tester who does not possess technical skills cannot do them properly, and is therefore an inadequate tester.

Here we found our source of toxicity – regarding testing as a non-technical profession leads to the following: [James’ Reply: I’m not saying that testing is a non-technical profession. I’m saying that technical skills may or may not apply depending on the type of tester you are. In other words, testing is both non-technical and technical, as a field.

It seems like you want to disqualify people who aren’t technical as unworthy of being testers at all. I strongly oppose that. We need non-technical people in testing. They bring a different sensibility.]

1) Recruiting bad testers. If testing is perceived as non-technical, we will have candidates who are looking for an easy way into “the high-tech world” without possessing any skills.

[James’ Reply: I don’t see that as a problem.]

2) Encouraging mediocrity – if testing is a non-technical skill, the drive to invest in growing such skills is lower, which leaves mediocre testers. [James’ Reply: I don’t see how it encourages mediocrity to acknowledge the value of liberally educated people in the very social vocation of testing.

I’m a technical guy. I like being a technical guy. That does NOT mean I dislike or don’t value non-technical guys.]

3) Maintaining a poor reputation for the testing profession. When I was at university, the universal truth in the CS department was that only failed CS students should “do QA”, or that it can be a first position on the way to being hired as a developer. I was lucky enough to take a testing course with a good instructor who knocked this nonsense out of my head, but the majority of people around me still see testing that way. Being considered a “non-technical profession” is, in my eyes, one of the main reasons for this.

[James’ Reply: Testing doesn’t belong to CS. It has little to do with CS. Testing belongs to Epistemology. CS people are part of why we have a problem with reputation!]

4) Lower investment in testing by management – be it equipment (testware, IDEs, MSDN accounts, etc.) or training. [James’ Reply: I see no connection between testing as a liberal arts pursuit and lower investment.]

5) Lower expectations from testers – both from the environment and from the testers themselves. High expectations create improvement. Low expectations create obstacles for the more capable testers. [James’ Reply: I want very high expectations. Yes! But “high” is not the same as “technical.”]

There are probably some other points I’m not aware of, and most of my objections can be summed up as “bad reputation leads to bad performance”, but there you have my objections to the term.

[James’ Reply: The reason you object to the term is that you think ALL testers should embody the skills of the developer in the test team. Your sentiments about Computer Science confirm that. Since you think ALL testers should be technical testers, you don’t see the need for the term technical.

Since I believe that non-technical testers are vital to the health of a good testing profession, naturally I think a special label for technical testers is useful.]

As for choosing your battles – this is true. However, there’s a difference between choosing to go on a full-scale crusade against a term and simply using a replacement (“coding tester” or “programming tester” would be my first choices, followed by the less convenient “code-literate tester” or “tester with programming capabilities”).

[James’ Reply: I don’t understand. What you said about CS clearly indicates that you believe ALL testers must be CS grads with full programming capability. Did I read that wrong?]

It’s quite interesting what you say here. I indeed think that all testers should be technical, to some extent, since their closest colleagues, the developers, are very focused on the technical aspect (to the point that some of them don’t speak non-technical). [James’ Reply: I find that my opinion about what testers should be is delineated by the specific examples of high performing testers I have personally worked with, some of whom I would describe as completely non-technical, others not very technical, some super-technical.

Whenever I think of a policy, I assess it by thinking about whether it would disqualify or discriminate against any of those people.

That means, to some degree, my opinion is an “accident” related to people I happen to have connected with.]

In addition to that, in most places I know of, the testers are part of the “engineering” department (as part of role-less scrum teams, or with a testing department that is a sub-department of engineering, or a test manager who reports to the head of R&D, etc.), which is focused on the technical aspects of work (“after all, we are *engineers*”) – for those two reasons alone, I would consider technical skills survival skills for testers.

[James’ Reply: I think that is mixing cause and effect. One of the reasons why engineering departments take over testing is because testers often have a weak culture. This weakness allows them to be “techified” by a strong nearby tech culture. A strong testing culture can assert its independence from the technocracy.]

I used to think that testing should be a specialization in the CS department and that most testers, save for the occasional paragon, should study CS, but I’m not sure I can still say this is a prerequisite to being a good tester (there’s this thing called reality that comes in and smashes my favorite ideas from time to time). I can still point to several important skills I acquired during my CS studies, but I can also point to some skills, by no means less important, that I acquired during my comparative literature studies. It took me some time to acknowledge the value of testers who come from different backgrounds, but coming from a non-technical background does not mean being non-technical. I still believe that every tester should be able to read a simple piece of code (and that universities should teach testing as part of CS), but full programming skills are a requirement only for testers who write full-scale test tools. [James’ Reply: I am a bit extreme in the other direction. For instance, I don’t necessarily think a tester should even be literate. I mean, I can imagine a valuable tester who can’t write a coherent sentence, or who cannot read a specification. In fact, I am thinking of two specific testers I know, one of whom was fired (not by me) for poor writing skills, despite a wonderful ability to find good bugs; the other insisted that reading specs or talking to developers was not her style – yet again, I loved the bugs she found.

I think of testers as being members of a team, and it’s the team that is the main unit of assessment, rather than the individual tester. If we are talking about one-person teams, then of course more skills are needed.]

I have yet to see for myself value brought to a team by a non-technical tester, and I have seen some non-technical testers who were also bad testers; but I haven’t been around the testing industry nearly long enough to consider my own experience a good indication of what is and what isn’t, so I’m willing to believe that it is possible for a non-technical tester to be a great tester. I still have a hard time picturing that tester analyzing failures (when I investigate, I read logs, check the database, and use my knowledge of the system architecture and underlying protocols – all of which are technical skills), and I think that their ability to go (or think) deeper than the user interface is significantly impaired. Could you perhaps describe for me some non-technical activities you think fall under the testing domain? I wonder especially about activities that would justify a full-time job. [James’ Reply: Consider social and administrative skills. The former is all about talking to people, helping people, and connecting teams together. The latter is about managing time, documents, and the rhythm of processes. The former works a lot with emotions and motivations. The latter works with to-do lists, calendars, and issue lists.

Personally, I am an analytical and technically focused tester. I love to delve deep into a difficult puzzle and solve it. But my time management and motivation management suffer when I do that. I love to work with people who may not be puzzle-solvers, but who help me stay grounded and focused.

Also, I started my testing life as a test manager. I was hired for my technical skills – which I barely used at first. Instead, if you look at my notebooks from the late eighties, you will see mostly lists of dates, lists of bugs, and lists of issues. I spent a lot of time in meetings.

I much prefer, now, to work for a test manager instead of being the test manager. Such a person will be twenty years younger than me and perhaps non-technical (a manager pulled over from customer service, maybe). I handle the tech stuff, working within the environment created by less technical or non-technical others.]

From page 7: “Activities that aren’t themselves testing, such as studying a specification, become testing when done for the purposes of testing.”

By this logic, does checking, when done for the purposes of testing, become testing? I don’t think that is the message you are trying to get across.

[James’ Reply: Checking is part of testing, when it is done for the purposes of testing, yes. If checking is not done in the service of testing then it is not testing. Good checking is always embedded in a testing process.

If someone says “you are not testing, you are just checking” then they are saying that there are important things missing from your test process. It would be like a critic of your driving saying “that’s not driving, that’s steering!” If you replied “steering is part of driving” then you would be missing the point.

So, yes, I think that is the message I am trying to get across.]

On page 6 you present your definition of testing (“testing is evaluating a product by learning about it through…”) then on page 7 you say “Testing is a performance that involves several kinds of ongoing, parallel activities:” listing a few activities.

Is testing a specific activity, a collection of activities (perhaps in furtherance of the testing mission), or an approach which can be applied to many different activities?

[James’ Reply: Testing is an activity. ANY activity is a collection of activities, by the nature of the concept. When we distinguish among activities we do so for provisional and practical reasons that depend on the context of the discussion.

We use linguistic cues and nudges to speak of different aspects of activities. For instance, I (and everyone else in common usage) distinguish between performing a test and doing testing. Staring at the ceiling thinking about what kind of data I need can be part of testing, but if I am not actively engaged in an experiment on the product I would not say I am performing a test at that moment.

Testing is not a WAY that I am doing other activities, generally speaking, so I wouldn’t call it an approach, as such. But it is possible to make it into an approach by applying testing attitudes and skills to other situations. I often approach life as a tester.]

Oh cool. Have you published any of these publicly? If not, would you be willing to share some of these privately? I’m looking for inspiration. [James’ Reply: Skype me or email me. (Skype name: satisfice; email: james@satisfice.com.) I just do this sort of thing for fun and practice. For instance, I was recently writing code to discover every possible set of intransitive dice (haven’t completed that one), and to find the best one.

I also have a friend who’s really into mystical numerology, so I have written programs to find numerical patterns in various natural constants.]
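The intransitive-dice search mentioned above can be sketched as a small brute-force program. To be clear, this is my own illustrative sketch, not James’ actual code; the specific dice values are a classic textbook trio, and the toy search space (three-faced dice with faces 1–6) is an assumption chosen to keep the run fast:

```python
from itertools import product, combinations_with_replacement

def beats(x, y):
    """True if die x strictly wins more than half of all face pairings against die y."""
    wins = sum(a > b for a, b in product(x, y))
    return 2 * wins > len(x) * len(y)

# A classic intransitive trio: A beats B, B beats C, and yet C beats A
# (each winner takes 20 of the 36 equally likely face pairings).
A, B, C = (2, 2, 4, 4, 9, 9), (1, 1, 6, 6, 8, 8), (3, 3, 5, 5, 7, 7)
assert beats(A, B) and beats(B, C) and beats(C, A)

# Toy exhaustive search: every intransitive cycle among three-faced dice
# with faces drawn from 1..6. A real search would rank the cycles to find
# the "best" one, e.g. by the strength of the weakest link.
dice = list(combinations_with_replacement(range(1, 7), 3))
cycles = [(a, b, c) for a in dice for b in dice for c in dice
          if beats(a, b) and beats(b, c) and beats(c, a)]
print(len(cycles), "cycles found")
```

Even this toy space turns up cycles such as (1, 4, 4) → (3, 3, 3) → (2, 2, 5) → back to (1, 4, 4), which is a nice example of why the puzzle is fun to automate.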

I would just like to add a clarification to the chapter regarding tool selection. A common mistake I have come across when it comes to tooling is creating “one tool to rule them all,” which typically ends up as a tool that does everything really badly. When I read the part saying it is preferable to have tools that can serve multiple purposes rather than a single purpose, I can foresee it reinforcing the “one-tool” sentiment, even if that is not how I interpret the chapter.

When I talk about this I like to talk about a tool chain, where you string multiple single-purpose tools together to create a tool that serves a more complex purpose, rather than talking about multipurpose tools. Apart from the observation that the individual tools tend to be better, it also allows for the resistance to change that is described in the paper. An additional benefit is that once you have a toolbox of these, it is very easy to create a new complex tool with little effort. [James’ Reply: Multi-purpose is NOT the same as multi-function, but I acknowledge that confusion.

I’m talking about tools with– let’s call it– an OPEN purpose. They can be applied productively to many purposes. This fits the toolchain concept you speak of.]
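As a hypothetical sketch of the tool-chain idea: each helper below does exactly one thing, and they compose into a more complex tool, Unix-pipeline style. The names grep, cut, and tally echo the Unix tools they imitate; none of this comes from the paper itself:

```python
import re
from collections import Counter

def grep(pattern, lines):
    """Keep only the lines matching a regex (like the Unix grep)."""
    return [line for line in lines if re.search(pattern, line)]

def cut(field, lines):
    """Extract one whitespace-separated field from each line (like cut)."""
    return [line.split()[field] for line in lines]

def tally(items):
    """Count occurrences of each item (like sort | uniq -c)."""
    return Counter(items)

# A made-up log to run the chain against.
log = [
    "2024-05-01 ERROR timeout in search",
    "2024-05-01 INFO saved document",
    "2024-05-02 ERROR timeout in replace",
]

# Chained like a pipeline: grep ERROR | cut -f1 | sort | uniq -c
errors_per_day = tally(cut(0, grep("ERROR", log)))
print(errors_per_day)  # Counter({'2024-05-01': 1, '2024-05-02': 1})
```

The point is the commenter’s: each piece stays small and good at its one job, and recombining them into a new complex tool costs almost nothing.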

Growing up in a family of carpenters, I would like to liken a good QA to a carpenter in the early 1800s (trust me, there is a point to this). They did everything by hand: hammering, drilling, cutting, trimming finish work, and so on. They were highly skilled and trained at their jobs; they could do things that we still marvel at, wondering how they did it. The spiral staircases in old churches come to mind as an example. As time went on things started to change, so that by the mid 1900s you started to see things like power saws, drills, and pneumatic nail guns come on line. Now let’s compare the new mechanical tools to things like automation.

So during this same time that the new tools came on line, an interesting thing started to happen: houses, or any building, could be built in a fraction of the time. Anyone could now use any of the mechanical tools; unlike before, when you had to be trained over years to become a carpenter, now anyone could run to the hardware store, pick up the tools they would need, and start a new job building a building. At the same time there came a very worrisome turn, meaning that the centuries-old common skills of carpentry started to be lost. For example, the skill to build the beautiful spiral staircases with intricate carvings, at one time a common thing to know, can no longer be duplicated by the average builder. Even the name “carpenter” has been replaced by “builder.”

I am sure that by this time you are pulling your hair out saying “WHAT,” so I will get to the point. All automation is, and all it will ever be, is one more tool in the QA toolbox that we carry. If we allow it to become the focus, or look at it like “You are only good if…,” then, just like carpenters, QA will no longer exist. As it is, there is already a new, less archaic term being applied, like Dev in Test, meaning someone who only creates test scripts. If we use history and other industries as a metric of what going down that path will bring, then, just as in the building industry, the products produced will not be of the same quality as those produced before the focus shifted.

I consider myself part of the Context-Driven and RST community and always make an attempt to speak precisely, using the latest terms you and Bolton work through. I have miles to go but I have become comfortable substituting “automated checks” for “automated tests”. It took me years, however. Along the way, I noticed when speaking at conferences or blogging, the larger test community can usually understand “automated checks” even if they don’t understand our use of “check”. I like being able to say “automated check” without having to stop and explain.

[James’ Reply: Yes. Wherever we possibly can, we make our terminology backwards compatible. That means that when I say something like “check” it matches much of what other people probably think it means, and might match all of what other people think it means, but also perhaps matches additional things that they aren’t thinking of.]

What should I be saying to describe a check performed by a human? I’m inclined to say “manual check” but I think that’s out. At one point I would have said “sapient check” but I believe that may be a contradiction. Plus, you stopped using the term “sapient”.

[James’ Reply: A check performed by a human is a check. If I want to talk about a check specifically not performed automatically then I tend to say “human check”– but “manual check” is perfectly correct. It is correct to distinguish between manual and automated for processes that can in fact be done without a machine or entirely by a machine. We don’t say “manual testing” because there is no such thing as automated testing. Testing cannot be automated, as such. If you “automate testing” you are either checking or automating some process within testing or LYING. But checking CAN be automated, so the distinction is meaningful.

Remember why we stopped using “sapient.” It was because the word means “that which requires a human to perform.” Which is all well and good until someone calls something “non-sapient.” Non-sapient means “that which does not REQUIRE a human to perform” but in practice people used it to mean “that which is stupid” such that when I called their testing non-sapient (because I wanted to talk about checking) they got offended.]

Where automated checks get tricky for me is speaking directly to testers who write test code (check code) in the context of the tool itself. For example, the syntax of an automated check written in C# using the MSTest framework requires the attribute “[TestMethod]”. Hell, “test” is embedded in the name of the framework. Most of the automated checking innovation taking place is done in the programming community. I can’t imagine changing a language that is tightly coupled in most automation tools from “test” to “check” any time soon. How should we address this?

[James’ Reply: First, use the word check in casual conversation about this stuff, just for your own discipline. It doesn’t matter if no one else does. They might be a little annoyed but at least not misled.

Other than that, I would suggest going in the other direction. Instead of insisting that everybody name things “checks” or whatever, just remind them of what testing is. You say you have automated a test? Okay, then let’s remember that the “test” that you “automated” is not just the code, but also your design thinking, your analysis thinking, your maintenance process, and your means of evaluating the results– none of which was or is automated.

The danger I want to protect myself and others from is that we get obsessed with artifacts and lose sight of our responsibility to solve the testing problem itself.]

I may be wrong, but at one point I thought “test automation” (as opposed to “automated test”) was accepted by our community to describe what is sometimes called computer-assisted-exploratory-testing. [James’ Reply: For some years I was content to use the term test automation to mean that. I have subsequently come to feel that the term is toxic and is directly corrupting the minds of testers.]

I’m talking about when a tester programmatically collects a bunch of data (using a tool), then examines that data to learn something (e.g., a blink zoom test). Can we call the part the tool does, “test automation”? We can’t call it “check automation” because there is no check.

[James’ Reply: I call it tool-supported testing, or test tooling, or automation in testing.]

I was looking through the results from the 2015 State of Testing survey run by Practitest and was surprised by the results in the Skills survey. “Functional automation and scripting” was rated one of the most important skills, with 65% saying it was Very Important, while “Programming skills” got a very different result, only 24% saying it was Very Important. I interpreted Programming skills as the skills necessary to write automated checks. It’s a bit like saying “it’s important you write a book, but less important that you know how to write.” I spent some time searching through your and Michael’s blogs looking for how “programming” is used in the RST namespace, but I couldn’t find anything. In the survey, I’m not sure why these are separate categories and why they received such mixed responses. I would conclude that survey respondents valued the tools they used above the skills required to use them.

[James’ Reply: Programming is not part of Rapid Software Testing as such. It is a skill that we use as part of it, just like any other subject matter expertise. I do teach a version of RST just for programmers, but I have to say that many programmers don’t call themselves programmers, and many people who do call themselves programmers (at least in the testing space) don’t seem to be very comfortable with the idea of programming!

Programming in testing is way more than writing automated checks.

I would say that anyone who distinguishes between “writing automated scripts” and “programming” is proving that they don’t really know what programming is. Programming is the process of arranging instructions and other data for the purpose of controlling a machine.]

Hi James, why did you use a check automation anti-pattern in “CASE #3: Automated checking”? Would you agree that nobody in his right mind would ever do that to check the functionality of search and replace? That check could be written in 2 lines and run in milliseconds at the unit level, why choose the most awkward and inefficient way? [James’ Reply: I would have thought that no one in his right mind would make a comment like this, but let me assure you, it happens.

First, surely you have heard of integration testing. Fortunately, I have written all about that. See those blog posts. Unit-level checking does not deal with all the risk associated with the integrated system. Do I really need to explain that to you? And sometimes we don’t have access to the source code. You know that happens, right?

And I guess you didn’t see this: “Automating low-level checks is a powerful practice that can improve testability and make quality easier to achieve. Like all checking it requires skill and forethought to pull off, and it is blind to many bugs that occur only in a fully integrated and deployed system. Still, it is generally much less trouble and expense than GUI-level checking.”

And I guess you must have missed the fact that the purpose of that example is specifically to describe how GUI-level checking is hard to make work.

And I suppose you must be very attached to that particular case, rather than getting the point of how this is LIKE other situations where you might want to test through the GUI level and you struggle to do that.]

It is hard to make GUI automation work, I agree, but in my humble opinion, by using an example like that one, you fail to demonstrate it.

Using a metaphor, it’s like trying to demonstrate that driving is dangerous by taking a turn at 200 mph driving a car with a missing tyre.

To me, that specific part of the paper feels like a straw man argument.

I hope I have clarified my point.

[James’ Reply: The problem is not so much that your concern lacks clarity. The problem I am having is I see no reasonable basis for your concern. I feel that maybe I should read my article to you one sentence at a time to make sure that you have indeed HEARD what we had to say.

As it stands now, I’m thinking that this discussion itself follows a familiar anti-pattern: you complain, you can’t explain, and I feel like I’m talking to a barking dog about some kid who may or may not have fallen down a well.]

[James’ Reply: Grow up, Gus… Learn to let go of conversations when you aren’t willing to engage constructively and intellectually in them.

But, personally, I’d rather that you re-allocate energy away from medicating your wounded pride and more toward exploring what I’m talking about on this blog. I appreciate feedback about what I’ve written. The first thing you should do is please read what I wrote. And my offer still stands: I will read it to you slowly, word by word, if you are having trouble doing that.]

“The more people who use a tool, the more free support will be available and the more libraries, plug-ins, or other extensions of that tool.”

I read that as:

“The more widely a tool is used, the more widely available its community support and the more numerous the libraries, plug-ins, or other extensions.”

Employing “freer,” as recommended, the sentence becomes:

“The more people who use a tool, the freer support will be available and the more libraries, plug-ins, or other extensions of that tool.”

I would take that to mean:

“The more widely a tool is used, the more freely available its support (paid or otherwise) and the more numerous the libraries, plug-ins, or other extensions.”

So, not only would employing “freer” as directed destroy the parallelism, it would change the meaning of “free” from “no charge” to one that lends itself better to a comparative form, like “independent.”

Well, would you look at that! Whether “more” modifies “free” depends on context!

I think “test automation” and the automation of checks suffer from the same preconceived (wrong) ideas about development.

Everyone sees a developer as hacking some lines into an IDE and compiling, and voila, we have a product. Ignorance of the thought, planning, analysis, testing, learning, interpreting,… that goes into this task is vast, even among IT professionals. It is no different with automating checks. All everyone seems to see is the tip of the iceberg, and everyone is then surprised at the behemoth size of the undertaking.

But even developers (and testers alike) misunderstand this. You can see this clearly when it is time to estimate. Developers tend to get it very wrong a lot of the time. They do learn with experience. Not until processes like scrum came to be did they get an estimation method that would reflect true effort somewhat better, and even then it takes several iterations to get it right.

So why are we surprised that when it comes to automating checks everyone gets it wrong too? It is a difficult subject that is not easily understood or outlined, and as with everything we don’t understand, we tend to either shy away from it or underestimate/misinterpret it based on the needs we have at the moment (i.e., whatever fits our agenda – http://issuepedia.org/Simplicity_bias).

In my opinion, a lot of the misunderstanding of automated checks and the work involved is a last-ditch attempt at making a complex topic simple enough to fit a goal, budget, and/or timeline. Admitting the opposite would have dire consequences.

Hi James, I can appreciate your stand on automation and agree that manual testing is what finds the good bugs, but I have a question. In your article with Michael Bolton, “A Context-Driven Approach to Automation in Testing,” the argument is made that automation that ignores things like a user name printed on a report will miss a bug where the name is printed incorrectly. This is true, but in manual regression testing, done build after build, humans miss a lot of those same small details. Do you have any tips for staying sharp while doing manual regression testing? Thanks!

[James’ Reply: Every time you write “manual testing” I don’t know what you mean. When you are speaking in very broad terms, I guess saying “manual testing” means “being like a user” instead of operating tools that simulate a user. But as soon as we want to “talk shop” and get into the details of how testing works, the term “manual testing” just obscures everything important that we want to talk about.

I never said that “manual testing” is what finds good bugs. I say that testing done well finds good bugs, and to do testing well you need a human (with social competence and social standing that can come only from growing up human) to guide the process. A competent human must design the tests. A competent human must interpret the results. This human may use lots of tools to help. So calling it a “manual” process is both strange and unnecessary. You don’t call programming “manual” so stop saying that about testing.

My tips about staying sharp include using tools to help you stay sharp! Wherever you can inexpensively pass a checking operation to a machine, do that. But you can’t expect to be able to do that, inexpensively, across the board.

To stay sharp in a repetitive environment, consider MINIMIZING REPETITION. Do things in a different way, rather than the same way, each time through. Variation is the spice of testing. But you can use a log file or a checklist to help assure that you touched all the things you need to have touched.

Try pairing with different people to do similar work. Social energy can create sharpness.

Consider separating your standardized checking from your deep testing. In other words, do a “checking pass” quickly before going through and doing new and different things.

Consider not doing repetitive checking each day, but perhaps only periodically.

Improve your coding skills so you can see opportunities to use simple automation.

Work with the devs to create logging that automatically tracks what you do, so that you don’t have to remember what you covered. If you are testing web pages, consider using the coverage analysis feature in Chrome.

Seriously consider asking the devs to PUT bugs into the code for you to find, once in a while. If you KNOW there’s a bug to find, you will be sharper. This is just like how the TSA puts fake bombs through the scanners to train their staff. ]
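The log-file-or-checklist tip in that reply could look something like this minimal sketch. TouchLog is a hypothetical name of my own, not a real library; the idea is simply that varying your testing each time through costs nothing in coverage tracking if a small record keeps score of what you have touched:

```python
from datetime import datetime

class TouchLog:
    """A hypothetical minimal checklist: records when each product area
    was last exercised, so you can vary HOW you test without losing
    track of WHETHER you covered everything."""

    def __init__(self, areas):
        # Every area starts out untouched.
        self.log = {area: None for area in areas}

    def touch(self, area, note=""):
        """Record that an area was exercised just now, with a free-form note."""
        self.log[area] = (datetime.now().isoformat(timespec="seconds"), note)

    def untouched(self):
        """Areas not yet exercised this session."""
        return [a for a, entry in self.log.items() if entry is None]

# Example session: two areas touched in different ways, two still to go.
session = TouchLog(["search", "replace", "undo", "print"])
session.touch("search", "regex with unicode input")
session.touch("undo", "undo immediately after replace-all")
print(session.untouched())  # ['replace', 'print']
```

The notes field matters: because each pass is deliberately different, the record of what you actually did is what lets you (or a teammate) vary again next time instead of repeating.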