Clearly both the developer(s) and tester(s), and almost certainly others, should be concerned whenever a defect escapes into production.

The tester should try to determine why it wasn't caught in test (assuming that it actually wasn't detected, rather than having been detected but having the bug deferred), and how to prevent such an escape from happening again. The tester also should be involved in reproducing the problem in the test environment, and verifying an eventual fix.

But clearly, the tester didn't create the bug, so the developer needs to understand the root cause of the bug, how to cure the bug, and how to prevent it from happening again.

Management (both Test and Project Management) might need to understand what led up to this defect escape. Aggressive schedules, inadequate or unclear requirements, lack of training, and so on may or may not indicate that some processes need to be revised.

+1 Code doesn't start with the coder & end at test. There are many cooks in the kitchen, so to speak. Incorrectly understood customer needs, poor requirements, lack of equipment/tools, unidentified hardware, etc. can all contribute to defects. Use the post-mortem to determine what slipped through & why so you can catch it next time around.
– CKlein Jun 2 '11 at 13:20

The whole team regardless of responsibilities should shoulder the blame. It is not a "test escape" but a "team escape". Testing is not only a department, it is also a discipline that everyone in the team should be involved in.

Upvoting. It is at least fruitless and possibly destructive to assign blame to an individual. Moreover, no non-trivial software is perfect. Fix the bug, and then if you want to assign blame, take a look at your processes and ask whether they need to change.
– user246 May 31 '11 at 19:32

As said, everyone. This is why postmortems are important and valuable, as well as high-communication environments where people feel comfortable taking responsibility for problems without being afraid of blame.

Get everyone who might have had a chance to eliminate this issue - management, PMs, developers, testers, business owners, and so forth into a room. Brainstorm, come up with an action plan. Put someone (probably the PM or management) in charge of implementing that plan, and schedule a meeting to review progress after some period of time. The result is that people get credit for taking responsibility, rather than blame for being responsible for problems.

If you get blamed for being responsible, your goal will be to not be responsible for anything. A vacuum of responsibility doesn't produce quality.

Where I work, which has a very enlightened approach to QA/Dev, we actually have a report that details every bug found at customer sites, why it was missed in QA and what steps are being taken to ensure that similar bugs will not be missed.

We understand that everyone is human. If we expect developers to occasionally write a bug into the code, we should expect QA to occasionally miss a bug too.

The trick is to have a process that catches these things as best we can. This is why we report coverage for tests, and test plans are checked periodically.

Blame is not a useful idea to work with. What we try to do is find all the failures in the process that allowed the bug to escape. It usually takes 3-4 failures along the same path for this to happen. Then each failure is dealt with using the Kaizen method of 5 whys, if somewhat haphazardly applied.
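As a rough illustration of that approach, here is a minimal sketch (not from the answer itself) of recording the chain of failures behind one escaped defect and then drilling into one failure with a five-whys chain. All defect descriptions, failure names, and answers here are hypothetical:

```python
# Hypothetical post-mortem record for one escaped defect: several
# independent failures had to line up along the same path.
escape = {
    "defect": "Checkout total wrong for mixed currencies",
    "failures": [
        "Requirement omitted multi-currency carts",
        "Unit tests only covered single-currency totals",
        "Regression suite skipped on the hotfix branch",
    ],
}

def five_whys(failure, answers):
    """Pair each 'why?' with the answer uncovered in the post-mortem."""
    chain = []
    cause = failure
    for answer in answers[:5]:  # classic Kaizen depth of five whys
        chain.append((f"Why: {cause}", answer))
        cause = answer  # the answer becomes the next question
    return chain

chain = five_whys(
    escape["failures"][2],
    [
        "Hotfix process has no QA gate",
        "Process was written before the QA team existed",
        "Nobody owns keeping the process current",
    ],
)
for why, answer in chain:
    print(f"{why}\n  -> {answer}")
```

The point of structuring it this way is that each failure in the path gets its own chain, so the fixes target process causes rather than individuals.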

Obviously, if one of the failures is a test that was reported as done but wasn't really done, we might need to give the person involved a stern talking to. The same goes for changes made in code without notifying QA that they should re-test.

As ever, this varies with the type of software, company size and culture and a thousand other things, so your mileage may vary.

Depending on who asks the question, and on the purpose and meaning of "responsible", you can get a better answer using root cause analysis of the test escape.

The tester missed the bug either by not testing for it (coverage wasn't good enough) or by accidentally missing it (human or automation error). This can be broken down further into the existence and following of test development procedures (were there clear requirements? who reviewed the documents?), test execution procedures, or even work overload that leads to errors.

If you wish, you can continue digging even further (following the 5-whys rule). Let's stay with a coverage issue, for example: we want to understand who is responsible for the fact that this case wasn't covered. Did anybody review your tests and test implementation? Are there clear procedures requiring that every test document be reviewed?

Etc., etc.

I think that a few more questions will usually lead to one, and only one, responsible party: God (who didn't give the company's CEO enough brains to hire a good test manager).

I think when you note "coverage not good enough" as a rationale, you also need to note schedules; sometimes the date is fixed and no amount of testing is going to fit all your tests into a short window. You can try, but sometimes you have to pick and choose while dates slip. Sadly, this is still a case to consider.
– MichaelF Jun 2 '11 at 11:41

Though the architecture, developer, test, and tech-support teams are all responsible for a leaked defect, most companies have an ingrained culture of blaming the testers.

I have been in a situation where the APIs changed continuously and there was no coordinated communication between the teams involved. The testers were constantly changing the API test framework, which resulted in some scenarios being missed and defects leaking into System Integration testing.

The RCA was done badly: we just found the gaps in our test design and how to improve it, but never focused on why we missed those cases during test design in the first place.

After taking the entire blame, we started setting up rules for our test team.

1) Build a feature-interaction matrix for all the APIs, covering their interactions with each other.

2) Never change the test design unless there is a published API/feature change (approved by the Dev manager/architect in consultation with Test), and the change should come through a single point of contact.

3) Do test reviews diligently, and the test team should discuss more, both within the team and with the Dev team, the reasons behind the changes and what types of problems the changes can cause.
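To make rule 1 concrete, here is a minimal sketch of a feature-interaction matrix as a data structure. The API names are hypothetical; the idea is simply that every pair of APIs maps to the test scenarios exercising them together, so an empty entry flags a coverage gap:

```python
from itertools import combinations

# Hypothetical API names; a real matrix would list the product's APIs.
apis = ["create_session", "update_profile", "sync_cache"]

# Every pair of APIs maps to the test cases that exercise the pair
# together. An empty list marks an interaction with no coverage.
interaction_matrix = {pair: [] for pair in combinations(apis, 2)}
interaction_matrix[("create_session", "sync_cache")] = ["TC-201"]

# Any pair with no tests is a gap the review should call out.
gaps = [pair for pair, tests in interaction_matrix.items() if not tests]
for pair in gaps:
    print("No interaction test for:", pair)
```

Keeping the matrix in a reviewable artifact like this makes "did we cover the interaction?" a mechanical check rather than a judgment call.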

I find placing blame the wrong way to think about it, but to answer your question, it could be a single person or everyone. I.e., it could have been a business spec that was not communicated to the dev team, or a tester who did not test a specific COS appropriately, or everyone for that matter.

What's most important to me as a QA lead is what we can do to prevent it from happening again. After every rollout we review leaked bugs and analyze what we could have done to prevent them. Our company currently separates the bugs into the following categories:

Requirements (COS)

Unit testing (low-level dev testing)

Acceptance testing (verification of COS)

Regression testing (post-QA related functionality)

Automated testing

Other

Not sure

Currently I find that a lot of our recent bugs fall into the regression category. Using this model has really helped our team analyze our strengths and weaknesses in terms of preventing bugs from being released.
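As a small sketch of how such a rollout review might be tallied, the snippet below counts hypothetical leaked-bug records per category and lists the weakest areas first (the bug IDs and category assignments are invented for illustration):

```python
from collections import Counter

# Hypothetical leaked-bug records from one rollout review; category
# names mirror the list above.
leaked_bugs = [
    {"id": 101, "category": "Regression testing"},
    {"id": 102, "category": "Requirements (COS)"},
    {"id": 103, "category": "Regression testing"},
    {"id": 104, "category": "Unit testing"},
    {"id": 105, "category": "Regression testing"},
]

tally = Counter(bug["category"] for bug in leaked_bugs)

# Print the weakest categories first so the review starts with them.
for category, count in tally.most_common():
    print(f"{category}: {count}")
```

Even a tally this simple makes the trend visible from release to release, which is what drives where the team invests in prevention.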