Blog: Severity vs. Priority

Another day has dawned on Planet Earth, so another tester has used LinkedIn to ask about the difference between severity and priority.

The reason the tester is asking is, probably, that there’s a development project, and there’s probably a bug tracking system, and it probably contains fields for both severity and priority (and probably as numbers). The tester has probably been told to fill in each field as part of his bug report; and the tester probably hasn’t been told specifically what the fields mean—or the tester is probably uncertain about how the numbers map to reality.

“Severity” is the noun associated with the adjective, “severe”. In my Concise Oxford Dictionary, “severe” has six listed meanings. The most relevant one for this context is “serious, critical”. Severity, with respect to a problem, is basically how big a problem is; how much trouble it’s going to cause. If it’s a big problem, it gets marked as high severity (oddly, that’s typically a low number), and if it’s not a big deal, it gets marked as low severity, typically with a higher number. So, severity is a simple concept. Except…

When we’re testing, and we think we see a problem, we don’t see everything about that problem. We see what some people call a failure, a symptom. The symptom we observe may be a manifestation of a coding error, or of a design issue, or of a misunderstood or mis-specified requirement. We see a symptom; we don’t see the cause or the underlying fault, as the IEEE and others might call it.

Whatever we’re observing may be a terrible problem for some user or some customer somewhere—or the customer might not notice or care. Here’s an example: in Microsoft Word 2010’s Insert Page Number feature, choose small Roman numerals as your format, and use the value 32768 (rendered in Roman numerals). Word hangs on my machine, and on every machine I’ve tried this trick on (you can try it too). Now: is this a Severity 1 bug? It certainly appears to be severe, considering the symptom. A hang is a severe problem, in terms of reliability.

But wait… considering that vanishingly few people use lower-case Roman numeral page numbers larger than, say, a few hundred, is the problem really that severe? In terms of capability, it’s probably not a big deal; there’s a very low probability that any normal user would need to use that feature and would encounter the problem.

Except… considering the fact that a problem like this could—at least in theory—present an opportunity for a hacker to bring down an application or, worse, take control of a system, maybe this is a devastatingly severe problem.

There’s yet another factor to consider here. We all suffer to some degree from a bias that can play out in testing. This might be a form of representativeness bias, or of assimilation bias, or of correspondence bias, but none of these seems to be a perfect fit. I think of it as the Heartburn Heuristic, in honour of my dad: for a year or more, he perceived minor heartburn—a seemingly trivial symptom of a seemingly minor gastric reflux problem. What my (late) dad didn’t count on was that, from the symptoms, it’s hard to tell the difference between gastric reflux and esophageal cancer.

The Heartburn Heuristic is a reminder that it’s easy to believe—falsely—that a minor symptom is naturally associated with a minor problem. It’s similarly easy to believe that a serious problem will always be immediately and dramatically obvious. It’s also easy to believe that a problem that looks like big trouble is big trouble, even when a fast one-byte fix will make the problem go away forever.

We also become easily confused about the relationships among the prominence of the symptom, the impact on the customer, the difficulty associated with fixing the problem, and the urgency of the fix relative to the urgency of releasing the product. (Look at the Challenger and Columbia incidents as canonical examples of how this plays out in engineering, emotions, and politics.) In reality, there’s no reason to believe in a strong correlation between the prominence of a problem and its severity, or between the potential impact of a problem and the difficulty of a fix. A missing character in some visible field may be a design limitation or a display formatting bug, or it may be a sign of corruption in the database.

Of course, since we’re fallible human beings, looking for unknown problems in an infinite space with finite time to do it, the most severe problems in a product can escape our notice entirely. So based on the symptom alone, at best we can only guess at the severity of the problem. That’s bad enough, but the problem of classifying severity gets even worse.

Just as we have biases and cognitive shortcomings, other people on the project team will tend to have them too. The tester’s credibility may be called into question if she places a high severity number on what others consider to be a low severity problem. Severity, after all, is subject to the Relative Rule: severity is not an attribute of the problem, but a relationship between the problem and some person at some time.

To the end user who never uses the feature, the Roman numeral hang is not a big deal. To the end user who actually experiences a hang and possible loss of time or data, this could be a deeply annoying problem. To a programmer who takes great pride in his craft, a hang is a severe problem. To a programmer who is being evaluated on the number of Severity 1 problems in the product (a highly dubious way to measure the quality of a programmer’s work, but it happens), there is a strong motivation to make sure that the Roman numeral hang is classified as something other than a Severity 1 problem. To a program manager who has a few months of development time available before release, our Roman numeral problem might be a problem worth fixing. To a program manager who is facing a one-week deadline before the product has to ship (thanks to retail and stock market pressure), this is a trivial bug. (Trust me on that; I’ve been a program manager.)

In light of all this, what is a tester to do? My personal preference (based on experience as a tester, as a programmer, and as a program manager) is to encourage testers to stay out of the severity business if possible. By all means, I provide the project team with a clear description of the symptom, the quality criteria that could be threatened by it, and ideas on how the problem could have an effect on people who matter. I might provide a guess, based on inference, as to the underlying cause. I’ll be careful to frame it as a guess, unless I’ve seen the source code and understand the problem clearly.

My default assumption is that I can’t go by appearances, and that every symptom has an unknown cause with potentially harsh consequences. I assume that every problem is guilty until proven innocent—that it’s a potentially severe problem until the code has been examined, the risk models revisited, and the team consulted.

I’m especially wary of assigning a low severity on a bug report based on an apparently trivial symptom. If I haven’t seen the code, I try to avoid saying that something is a trivial problem; if pressed, I’ll say it looks like a trivial problem.

If I’m forced to enter a number into a bug reporting form, I’ll set the severity of a problem at its highest level unless I have substantial understanding and reason to see the problem as being insignificant. In order to avoid the political cost of seeming like a Cassandra, I’ll make sure my clients are aware of my fundamental uncertainty about severity: the best I can provide is a guess, and if I want to err, I’d rather err on the side of overestimating severity than underestimating it and thereby downplaying an important problem. As a solution that feels better to me, I might also request an “unclassified” option in the Severity field, so that I can move on quickly and leave the classification to the team, to the programmers and to the program managers.

As for priority: priority is the order in which someone wants things to be done. Perhaps some people use the priority field to rank the order in which particular problems should be discussed, but my experience is that, usually, “priority” is a tester’s assessment of how important it is to fix the problem—a kind of ranking of what should be fixed first.

Again based on my experience as tester, programmer, and program manager, I don’t see this as being a tester’s business at all. Deciding what should be done on a programming or business level is the job of the person with authority and responsibility over the work, in collaboration with the people who are actually doing the work. When I’m a tester, there is one exception: if I see a problem that is preventing me from doing further testing, I will request that the fix for that problem be fast-tracked (and I’ll outline the risks of not being able to test that area of the product). As tester, one of the most important aspects of my report is the set of things that make testing harder or slower, the things that give bugs more time and more opportunity to hide. Nonetheless, deciding what gets fixed first is for those who do the managing and the fixing.

In the end, I believe that decisions about severity and priority are business and management decisions. As testers, our role is to provide useful information to the decision-makers, but I believe we should let development managers manage development.

There is a strong tradition in the industry for assigning a severity category to a bug based on its symptoms. One common scale is:

1. Crash or data loss
2. Failure w/o workaround
3. Failure w/ workaround
4. Minor problem
5. Enhancement request
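
As a minimal sketch of how such a scale gets used in practice (the enum and the example reports here are invented for illustration, not taken from any real tracker), a symptom-based category is just an ordered label that supports a first-pass filter:

```python
from enum import IntEnum

class Severity(IntEnum):
    """Symptom-based first-order filter; lower number = apparently more severe."""
    CRASH_OR_DATA_LOSS = 1
    FAILURE_NO_WORKAROUND = 2
    FAILURE_WITH_WORKAROUND = 3
    MINOR_PROBLEM = 4
    ENHANCEMENT_REQUEST = 5

reports = [
    ("Roman numeral page-number hang", Severity.CRASH_OR_DATA_LOSS),
    ("Typo in About dialog", Severity.MINOR_PROBLEM),
    ("Search slow on large files", Severity.FAILURE_WITH_WORKAROUND),
]

# First-pass triage: look at the apparently-worst symptoms first.
urgent = [title for title, sev in reports
          if sev <= Severity.FAILURE_NO_WORKAROUND]
print(urgent)  # ['Roman numeral page-number hang']
```

The numeric ordering is what makes the filter possible; the categories themselves label symptoms, not underlying problems.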

These are not, of course, real severity measures. They are attributes interestingly and often correlated with severity. As long as the designation is not taken too seriously, it can be a helpful first-order filter.

Michael replies: I’ve been involved in that kind of system, and I agree. I think you’re emphasizing the helpfulness, and I’m emphasizing the risk and the epistemic humility. Both are worth thinking about.

As for priority, why shouldn’t a tester set this? Yes, you’ve said that a tester should not manage the project. Of course, but we may make suggestions based on our understanding of things. So I treat the priority field as a “guess” at priority, subject to review at the triage meeting.

Yes; I used the word “guess” above, too.

You have to ask how the fields are being used. We used severity and priority quite a lot, when I was at Borland and dealing with 85 new bugs every day, which took an average of four hours per day to triage. We found that filtering on severity and priority was a good way to tackle the elephant, though. And of course testers were instructed to elevate the priority of a bug that wasn’t *technically* severe, if they felt that the triage team needed to look at it quickly.

I’m not eager, as a tester, to oversimplify bug reports. But when we have a lot of them, it helps to shuffle them into rough categories on a temporary and lightly-taken basis.

I agree that it can help, especially when testers and the project managers have the skills, experience, and good will to recognize what’s what and who’s who. When any of those are lacking, I tend to become more wary. And when there’s controversy, it might be interesting to probe what’s behind it.

The problems with a tester assigning severity go deeper than what you describe. I have seen severity applied to specific functions without *any* regard to the overall operation of the software. This narrow definition of severity makes it an extremely dangerous and misleading field in some cases.

I have found it to be most useful if there is a tacit understanding within a team that defects are “medium” or “moderate” unless the tester has a definite reason for assigning them otherwise. In those cases, a notation in the defect description is appropriate to clarify the designation and there is an understanding that the initial setting is an informed opinion of the tester based on observation. Otherwise, a “medium” setting could simply mean the tester has no observed facts to back up a change in designation.

I have similar experiences to James’s, both as tester and project manager.
Testers assign both severity and priority, but the decision is made by others.
As a positive side effect, striking differences can spur fruitful discussions (the situation becomes a “product development, we’re all in this together” one).

Michael replies: I think in the long run it can spur fruitful discussions. I’ve seen it go both ways, though: I’ve seen testers try to usurp the program management role, and program managers who try to abdicate it. A healthy culture is the key here.

I also have experiences where the tester is the person with the best ability to assign severity/priority.

A tester’s educated guess, subjective and biased, can be useful information.

It can be, when the tester has sufficient context information, experience, and skill. For me, the more of those things I’ve got, the more I mistrust my tendency towards subjectivity and bias.

I agree with James regarding the need for categorisation, no matter how “simplified” it may appear. I have spent countless hours in triage meetings, and we need common criteria that are easy for every stakeholder on the project to understand.

Michael replies: Criteria for a first-order classification are easy (see James’ example). The “countless hours in triage meetings”, to me, signify the problem that without investigation, it’s somewhere between difficult and impossible to know the deep truth underlying a problem. I’ve been there too. Want to get over that? Here’s a heuristic: if discussion on next steps takes longer than one minute for any given report, you don’t have enough information to make a good decision, so stop the discussion and assign someone to look into it more deeply.

I am a firm believer in recording everything as a symptom initially (which incidentally most tools don’t have a category for) and then let Triage allocate a classification (requirement misunderstanding, software bug etc.). It is at this point that an initial severity and/or priority can be set (by the Triage attendees) in order that there be some sort of indication as to whether a fix is required or a workaround is developed.

I think I said something a lot like that in the post.

In my experience the best way to deal with this situation is to define the rules for all aspects of testing (as well as the rest of the SDLC) in a Quality Plan. This is then signed off by the PM, the business leads and the technology leads in order that later conflict is minimised. For smaller initiatives an agreed set of “Quality Principles” can suffice.

I’m a little more resistant towards defining rules for all aspects of testing. I prefer guidelines or heuristics (or as you suggest, principles), along with the understanding that we’ll run into lots of stuff that isn’t so clear cut. Our knowledge, our context, and our choices are in a continual state of flux as we’re developing the product. Development is about building and revealing knowledge, in addition to building products. Our experience of the past—which presumably informs our choices of rules—is a highly imperfect guide to what happens in the future. I’d prefer to acknowledge that imperfection and come up with productive and pragmatic ways to deal with it efficiently, rather than trying to bottle it. The last paragraph of this post has a lot to say about that.

A tester’s input is a recommendation and is not set in stone. It is a useful starting point which programmers and project management and others can learn from, and they can still override it if need be. Since testers are the ones who found the issue, they’re the best ones to assign the initial priority and/or severity from their first-person perspective, rather than waiting for some project manager to try to understand and visualize the scenario from the written explanation.

Michael replies: Testers are the best ones? Really? Always? Even in the case of (let’s say) the tester who posted the question “what’s the difference between severity and priority”?

Testers are generally the power users of the product and usually have excellent judgment for this task. From my experience, I’d even say that at least 90% of the original priority/severity designations given by testers go unchanged.

That’s not my experience—and where is this 90% figure coming from? If you mean “90% of my original priority/severity designations go unchanged”, I can’t argue with that—except to say, somewhat feebly, that maybe it was 95% or maybe it was 75%. Nor do I have my own rigorous numbers to contradict them. On the other hand, my recollection from being product manager was that the testers tended to over-rate severity and definitely priority—and my recollection from being a tester was that the program managers tended to under-rate severity and priority.

I think it is a cop out to not give your best answer. Sure, we may sometimes be off, but that isn’t a good enough reason not to try. And you shouldn’t be afraid of committing since the values can be changed at any time.

I didn’t indicate a best practice above—only a preference.

Reading a bug description is much easier in the context of the priority/severity assigned to it. If we don’t provide this important assessment, then we’re wasting other people’s time (often a manager’s, whose time is very valuable).

I know that you’re not arguing against using priority/severity, just that testers shouldn’t be the ones to do it. I disagree completely. I think we are the best people for the job and it is neglectful not to provide the information from the start.

Which testers, in particular? Best on what basis? The point that I’m trying to raise here is this: is it a good idea to assume that we’re automatically the best people for the job, or is it worth questioning that assumption?

A severity scale can be used as an attempt at an *objective* measurement for the significance of a problem; it’s important that testers have a simple way to categorize the extent of the problems they find. For example, using Michael’s scale, differentiating between a minor, textual problem and a functional problem with no user-available workaround.

I’ve always seen priority used as a *subjective* measure of importance: business has their own reasons why something should be fixed before something else. My current company focuses on money saved or lost over the user experience.

I agree that testers should not set priority, but we will speak up in triage and to developers for problems we think need attention beyond the business point-of-view.

Michael replies: It’s a good idea to be careful about “objective” and “subjective”. When somebody says “objective” about an observation or a conclusion, what he or she usually means is “I believe that no one in a particular community or context would disagree.” Correspondingly, when someone says “subjective”, what they mean is “there is room for disagreement here”. That is, I will assert that your notion of “objective” above is really subjective. 🙂 But it’s a sufficiently useful heuristic we might be able to obtain enough agreement to move on and get some work done. You could come up with a similarly “objective” scheme for evaluating priority, if you wanted to. Like you, I prefer to let the managers manage the project, but I will provide information to support management’s decision-making process.

In my previously submitted reply, I tried to quote you and James by enclosing text within “greater than” and “less than” characters. However, upon submit, I see that they have been stripped, as well as the text within. Here is the Reply again, with those symbols replaced with brackets. I hope it works!

—

I have a lot of thoughts on Severity and Priority. However, to fully understand them, you must first understand my thoughts on defects, since they are related.

(Note: Although I’ve thought a lot about this topic, I’ve never transcribed my thoughts. Plus, I typed this pretty quickly. Thus, what follows may be a bit rough. Please bear with me…)

I use Object-Oriented Modeling (OOM) when I think of defects. I think of a defect as a thing (an “object”) that can be described in many ways (“properties”).

Imagine that you own several apple orchards and want to track your business. One thing that you might want to track is “an apple”. An apple can be described in many ways. You can describe an apple by its color, its size, its shape, its variety. You can also describe an apple by who picked it, where it was picked, when (day/time) it was picked, and how it was picked. Of course, there are many, many more ways to describe an apple, but we’ll stick to these, for now.

Now imagine that each time an apple was picked, all of the “ways to describe an apple” were noted. Each apple had its color, size, shape, and variety noted, as well as who, where, when, and how it was picked.

Using this model, imagine how many different ways you could track (organize and categorize, sort and filter) your apples!

Now, imagine that you also own several orange groves and vineyards and you’d like to track all your fruit (not just the apples). Unfortunately, your current model won’t allow for it. In your current model, the “thing being tracked” is “an apple”. So, what if you expanded the definition of the “thing being tracked”? What if the “thing being tracked” was “a piece of fruit”, and you added a new way to describe “a piece of fruit” – by “type” (apple, orange, grape).

Now your model could track any piece of fruit and describe it by color, size, shape, variety, who, where, when, how it was picked, AND the type of fruit! In fact, if you ever expanded your business and purchased a banana ranch, you could simply add “banana” to the “type” values and your model would still work!

This metaphor explains the importance of defect definition. In the metaphor, a “defect” is the “thing being tracked”. You just need to figure out what that “thing” is. In your organization, is the “thing being tracked” (a defect) an “apple” or “a type of fruit”? That is, in your organization, is a defect a single thing, or does it refer to multiple “types of things”? If multiple “types of things”, is there a plan (or a need!) to further describe the “things”?
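
The generalization in the metaphor can be sketched directly. Assuming a simple Python-style model (the class and field names here are illustrative, not from any particular tracking tool), widening “the thing being tracked” means promoting the kind of thing to a property:

```python
from dataclasses import dataclass

# The widened model: "a piece of fruit", with the kind tracked as a field.
# Adding a banana ranch later means adding a value, not redesigning the model.
@dataclass
class Fruit:
    kind: str        # "apple", "orange", "grape", later "banana"
    color: str
    picked_by: str

# The same move applies to defect tracking: if "defect" covers several
# kinds of thing, track the kind as a property of one tracked item.
@dataclass
class TrackedItem:
    kind: str        # "bug", "enhancement", "change request", "action item"
    summary: str

items = [
    TrackedItem("bug", "Clicking Save crashes the app"),
    TrackedItem("action item", "Joe needs a trashcan"),
]
bugs = [item for item in items if item.kind == "bug"]
```

The design choice is the same one the metaphor makes: a narrow model is simpler today, while the kind-as-field model absorbs new types of tracked things without a schema redesign.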

Some organizations simply define “defect” as “a software bug”. If so, then very narrow “ways to describe” defects are acceptable.

More commonly, I see organizations defining “defect” as “a software bug, or an enhancement”. If so, then the “ways to describe” defects should be more general and able to describe both.

I’ve seen an organization that defined “defect” as “a bug, an enhancement, a change request, an issue, or an action item”. They used a “type” field to distinguish between them. And the defect of type “problem” was further described as “a problem with the requirement”, “a problem with the test”, “a problem with the app” (a.k.a. a “bug”), “a problem with the process”. And, although different workflows and fields were created for each type and sub-type, a single system was used to track them all. You could use a single system to track “clicking Save crashes the app” and “Joe needs a trashcan”.

Was this right or wrong? It worked well in their particular situation, so I think that in that context it was right.

So, I think that before you can begin talking about Severity and Priority (“ways to describe defects”), you need to understand exactly what a “defect” is (in your organization), to ensure that you’re “describing them properly”.

Now that I’ve explained my thoughts on Defects, I’ll get into my feedback and thoughts on Severity and Priority. And, for clarity, unless otherwise noted, assume that “defect” refers to the common definition (a “bug”).

[The most relevant one for this context is “serious, critical”. Severity, with respect to a problem, is basically how big a problem is; how much trouble it’s going to cause. ]
I agree. In the context of defect tracking, I think a suitable synonym of Severity is “seriousness”. As in, “how serious is this thing”?

I also agree with all your comments regarding defects being “observed symptoms from one viewpoint”. NOTE: Of course, if your organization defines defects as many things (ex: a “software bug” and “an Action Item”), then that line of thought doesn’t apply (ex: “Joe needs a trashcan” is not a symptom).

[Severity, after all, is subject to the Relative Rule: severity is not an attribute of the problem, but a relationship between the problem and some person at some time. To the end user who never uses the feature, the Roman numeral hang is not a big deal. To the end user who actually experiences a hang and possible loss of time or data, this could be a deeply annoying problem. To a programmer who takes great pride in his craft, a hang is a severe problem. To a programmer who is being evaluated on the number of Severity 1 problems in the product (a highly dubious way to measure the quality of a programmer’s work, but it happens), there is a strong motivation to make sure that the Roman numeral hang is classified as something other than a Severity 1 problem. To a program manager who has a few months of development time available before release, our Roman numeral problem might be a problem worth fixing. To a program manager who is facing a one-week deadline before the product has to ship (thanks to retail and stock market pressure), this is a trivial bug. (Trust me on that; I’ve been a program manager.)]
I also agree with this. Severity depends on who you’re asking. For this reason, I think it makes sense (and helps avoid assumptions and confusion) if you have multiple “Severity” fields. Have a Severity field for each key individual and/or group. For example, have a field that tracks “Severity according to the end users/customers”. Have another field that tracks “Severity according to the person that found it”. And another field that tracks “Severity according to the currently assigned developer”. You could even have a field that rolls-up/averages all the other “sub” Severity field values for an “overall Severity value”.

[ There is a strong tradition in the industry for assigning a severity category to a bug based on its symptoms. One common scale is: 1. Crash or data loss 2. Failure w/o workaround 3. Failure w/workaround 4. Minor Problem 5. Enhancement request. These are not, of course, real severity measures. They are attributes interestingly and often correlated with severity.]
I agree. As previously noted, “Severity” should be synonymous with “Seriousness”.

I see several obvious problems with a Severity scale with values such as “Crash/data loss”, “Failure w/ or w/o workaround”, “Minor problem” and “Enhancement request”. The values contain too much information, and they mix different kinds of it. “Minor” indicates a scale, while “Enhancement request” speaks to the *type* of defect, and “Crash/data loss” speaks to…something else entirely.

Also, the values are not consistent with one another. The opposite of “Minor” is “Major”. Even if the values are well defined and understood, they are not obvious. So why not make the values well defined and understood AND obvious?

Finally, as I mentioned above, if your organization defines “defect” as more than one type of “thing being tracked”, then the Severity values should be general enough to describe all of them. Severity values like “pretty darn”, “kinda-sorta”, and “not very” are silly, but could be used to describe the “seriousness” of ANY type of defect.

[priority is the order in which someone wants things to be done]
In the context of defect tracking, I think a suitable synonym of Priority is “importance”. As in, “how important is this thing”? To me, that does NOT necessarily determine the order in which things need to be done. I think that the order in which things need to be done is determined by looking at multiple factors. (Ex: A 5-minute fix could resolve 48 unimportant bugs. A 5-day fix could resolve 1 very important bug. It might make sense to do the “quick fix” first).

I also think that, similarly to Severity, Priority depends on who you’re asking. For this reason, I think it makes sense (and helps avoid assumptions and confusion) if you have multiple “Priority” fields. Have a Priority field for each key individual and/or group. For example, have a field that tracks “Priority according to the end users/customers”. Have another field that tracks “Priority according to the person that found it”. And another field that tracks “Priority according to the currently assigned developer”. You could even have a field that rolls-up/averages all the other “sub” Priority field values for an “overall Priority value”. You could even apply a “weight” to the Priority fields, so that one “means more” than another (example: The “Customer priority ‘trumps’ the tester priority”).
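
A hedged sketch of that weighted roll-up idea (the stakeholder names, the weights, and the 1-to-5 rating scale are all assumptions for illustration):

```python
# Per-stakeholder priority ratings for one defect (1 = most important, 5 = least).
priorities = {"customer": 1, "tester": 3, "developer": 4}

# Weights expressing "customer priority trumps tester priority".
weights = {"customer": 3.0, "tester": 1.0, "developer": 1.0}

def overall_priority(priorities, weights):
    """Weighted average of the per-stakeholder ratings."""
    total_weight = sum(weights[who] for who in priorities)
    weighted_sum = sum(weights[who] * rating for who, rating in priorities.items())
    return weighted_sum / total_weight

print(overall_priority(priorities, weights))  # 2.0
```

Whether such an average means anything is, of course, exactly the question the post raises; the sketch only shows the mechanics of combining the “sub” fields into an overall value.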

Extending the fruit metaphor above: describing a piece of fruit by “taste” can be subjective. Therefore, you need to know who is describing the taste (as you do for Severity and Priority).

I think that Severity and Priority suffer from the same problem as bad metrics. They are cherry picked out of context and used in inappropriate ways. How many times have we heard, “There are 14 Urgent Priority bugs!!” My answer might be, “Does the customer care? Do they block testing? How long will it take to fix them? I need more info!”

Finally, I don’t think that any of these values are “sacred”. I don’t see a problem with allowing a tester to set the initial value of Severity based on “what they see”. If it turns out to be more or less serious than originally thought…then change it! The audit trail (and hopefully comments) should keep track of past values, in case anyone is interested.

I think that too much emphasis is placed on Severity and Priority. In my opinion, they are just “2 more ways to describe a thing”! They are no more important than any of the other “ways to describe a thing” (that your organization has identified). They are just different. For example, if you wanted to look at “all the things you care about for next week” you probably have to also consider the “Status” (or “State” or whatever) of the “things” so that you’re not including “things that are already done”, and you also probably have to consider the “Date” of the “things” so that you’re not including “things in the past”. Severity and Priority are simply “2 more things to consider” when looking at “all the things you care about for next week”.

“[S]everity is not an attribute of the problem, but a relationship between the problem and some person at some time.” Testers are part of the team too, and not second-class. I think it’s perfectly valid for me as a tester to state a problem’s severity relative to how I perceive it affecting my own activities and goals, just like the end user and programmer and project manager do. I’m the person who understands those particular activities and goals best, so I’m most qualified to make that assessment for myself. It’s then the task of the decision-maker to weigh everyone’s assessments relative to the immediate project goals and larger picture when determining what actions to take, and if necessary, harmonize them into a single rating that the team agrees to work from. If the next project goal is to complete all planned tests, but a bug prevents half of them from being executed, then in that context, it’s my assessment that should count the most. I believe the tester’s assessment should not always predominate, but neither should such information be withheld, unless I suppose it’s certain to be mis-used.

Cool article. Lots of info. After my 9 years in QA, this is what I have learnt.

Severity – depends on how critical the issue is. For example, if you are trying to log in to the system and are getting an HTTP exception, that would be considered a severe bug. It would be a high-priority bug also, as it affects your ability to get into the system.

Priority – based more on business need. For example: the logo of a company is showing the wrong message, “TechQA – QA on the go” (the real message in this case was intended to be “TechQA – QA in our heart”).

So in this case it is NOT a SEVERE bug, because it is not blocking further testing or causing the system to crash. But it is a PRIORITY issue, because a company would not want its image to be promoted incorrectly on its website.

