Meta

Chandra and Suraj are friendly rivals who lead software teams in Ed Simpson’s division. Chandra’s team works on the Container Controller® product (CC), while Suraj has Scheme Ranker® (SR).

Chandra dug up the previous quarter’s bug-fix data for CC and SR.

He showed his table to Ed. “Clearly, the CC team is doing better,” he said.

| Team | Resolved | Not Resolved | Total | Efficiency |
|------|----------|--------------|-------|------------|
| SR   | 750      | 250          | 1000  | 75%        |
| CC   | 840      | 160          | 1000  | 84%        |

Suraj was not going to take this lying down. He analyzed the data in more detail and came up with the following refined table.

| Team | Bug Type   | Resolved | Not Resolved | Total | Efficiency |
|------|------------|----------|--------------|-------|------------|
| SR   | Escalation | 490      | 10           | 500   | 98%        |
| CC   | Escalation | 810      | 90           | 900   | 90%        |
| SR   | Testing    | 260      | 240          | 500   | 52%        |
| CC   | Testing    | 30       | 70           | 100   | 30%        |
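As a quick sanity check, the split counts above can be totalled in a few lines of Python to confirm they reproduce the aggregate figures in Chandra’s table (the data literal below is transcribed from the two tables; the `efficiency` helper is just for illustration):

```python
# Bug-fix counts per (team, bug type): (resolved, not_resolved),
# transcribed from Suraj's refined table.
data = {
    ("SR", "Escalation"): (490, 10),
    ("CC", "Escalation"): (810, 90),
    ("SR", "Testing"): (260, 240),
    ("CC", "Testing"): (30, 70),
}

def efficiency(resolved, total):
    """Percentage of bugs resolved."""
    return 100 * resolved / total

# Per-category efficiency: SR beats CC in both rows of each bug type.
for (team, bug_type), (r, n) in data.items():
    print(f"{team} {bug_type}: {efficiency(r, r + n):.0f}%")

# Aggregate efficiency: summing the same counts recovers Chandra's table,
# where CC comes out ahead overall.
for team in ("SR", "CC"):
    resolved = sum(r for (t, _), (r, n) in data.items() if t == team)
    total = sum(r + n for (t, _), (r, n) in data.items() if t == team)
    print(f"{team} overall: {resolved}/{total} = {efficiency(resolved, total):.0f}%")
```

Both views are computed from the identical counts, so neither table is wrong; the disagreement is purely about how the counts are grouped.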

“Ed, if you take a closer look at the same data Chandra presented, SR is doing better than CC on both Escalation-related and Testing-related bugs,” Suraj pointed out. “It is misleading to merge the data together, as Chandra did.”

Ed wasn’t sure what to make of these two opposing claims, so he suggested that each team focus on improving its own performance rather than making comparisons with the other team.

But can you tell which team is in truth doing better at bug fixes, and why?