Try as you may to objectively define Severity, it is still arguably subjective. “Loss of Functionality with Workaround”…well, if we are creative enough, we can always come up with a workaround; let’s use the legacy process. “Data Corruption”…well, if we run a DB script to fix the corruption, is this bug still severe?

In my experience, it has been better for humans to read the bug report description, understand the bug, and then make any decisions that would otherwise have been made based on a tester’s severity assessment.

As an example, if the bug report description does not indicate that the system crashes, and it does, it is likely a poorly written bug description. One shouldn’t need a Severity field to pigeonhole it into.

My advice? Save the tester some time. Don’t ask them to populate Severity. Benefit from the discussion it may force later.

This advice does not scale well to large projects, where reading through and understanding each bug is not feasible; severity, priority, and type/class (text, graphical, crash, etc.) are the easiest way to triage defects efficiently. The test team should have enough product knowledge to fill in these fields fairly accurately.
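To make the triage-by-fields idea concrete, here is a minimal sketch of filtering and ordering a defect backlog by those fields. The `Defect` record and its field names (`severity`, `priority`, `kind`) are hypothetical, illustrative of the approach rather than taken from TFS or any specific tracker.

```python
from dataclasses import dataclass

# Hypothetical defect record; field names are illustrative only,
# not the schema of TFS or any particular bug tracker.
@dataclass
class Defect:
    id: int
    title: str
    severity: int   # 1 = most severe
    priority: int   # 1 = highest
    kind: str       # e.g. "crash", "graphical", "text"

def triage(defects, max_severity=2):
    """Return defects worth immediate attention, most urgent first.

    Crashes always make the cut; everything else must meet the
    severity threshold. Results are sorted by (severity, priority)
    so the worst, most pressing bugs float to the top.
    """
    urgent = [d for d in defects
              if d.kind == "crash" or d.severity <= max_severity]
    return sorted(urgent, key=lambda d: (d.severity, d.priority))

bugs = [
    Defect(1, "typo in help text",     severity=4, priority=3, kind="text"),
    Defect(2, "app crashes on save",   severity=1, priority=1, kind="crash"),
    Defect(3, "misaligned icon",       severity=3, priority=2, kind="graphical"),
    Defect(4, "order total corrupted", severity=2, priority=1, kind="data"),
]

print([d.id for d in triage(bugs)])  # → [2, 4]
```

The point is not the particular threshold but that, with these fields populated, a few lines of filtering replace reading thousands of descriptions, which is exactly the scaling argument above.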

I've seen priority owned by different groups on different projects, but on every project the test team has needed a way to indicate test-blocking bugs somehow.

BTW, the TFS severity field is customizable, and I'm pretty sure those aren't the defaults.

My thoughts: information on defects should be "just enough" for what the project team needs. If this includes metadata needed for reporting or analysis systems, then the testers should assign appropriate values for things like severity or priority and not just leave the defaults.

I tend to agree with Byron: on large projects where I've had to stay on top of thousands of defects, having these values filled in helped me filter and triage what needed to be looked at now and what should be passed on to future dev efforts.

Over the last few years I've seen projects moving more and more toward follow-the-sun testing, and a big consideration when setting up any process is "Does it scale?"

If someone in Thailand is testing and part of the system goes down, do they have the knowledge and permissions to restart it? A new team member is starting in India; is their manager empowered to set them up with everything they need to start testing?

A large part of my role in testing has been implementing and refining processes that scale from a core team of a few testers to that same team a year down the line, when 100 or more people are involved.

Who am I?

My typical day: get up, maybe hit the gym, drop my kids off at daycare, listen to a podcast or public radio, do not drink coffee (I kicked it), test software or help others test it, break for lunch and a Euro-board game, try to improve the way we test, walk the dog and kids, enjoy a meal with Melissa, an IPA, and a movie/TV show, look forward to a weekend of hanging out with my daughter Josie, son Haakon, and perhaps a woodworking or woodturning project.