Where the rubber meets the road in an enterprise adopting Agile practices

About the Blog

Software is our business, and perfecting the art and science of delivering it is our mission. The contributors to this blog are passionate about the impact that great teams and good software can have on an organization’s bottom line. They bring decades of experience designing, developing and delivering great software, and each is playing a critical role in Borland’s own transformation.

March 11, 2009

I was close, gettin' closer, just this far away; come to find out close only counts in horseshoes and hand grenades

Who is responsible for quality? The facile answer is QA, but that doesn't work very well in reality. Quality is the responsibility of the team as a whole, developers included. You can't test quality into a product – if everyone isn't concerned with it, it isn't going to happen. Without quality, your product is merely okay at best, close to the goal but not quite there. In an agile world, quality is largely determined by acceptance, and stories can't get accepted until they are both coded and tested. A common question is,
"If QA needs to test what dev creates within the same iteration, what do the developers do at the end of the sprint?" Well, how about... testing?

Unit tests can always be improved, even when practicing test-driven development. And just because QA creates a test case doesn't mean a developer can't execute or automate it. Something our team does periodically is a multi-user test, a "pokeathon" if you will. We get the extended team (support, sales, etc. are invited as well as the engineers) together for an hour, create a conference bridge so we can all talk to one another, and start banging away on a single install of our webapp. This unscientific and undisciplined method of testing has uncovered a wealth of issues for us, improved our overall quality, and turned out to be a bit of fun at times, too. I highly recommend trying it out.
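The pokeathon idea – a bunch of people banging away at one install at the same time – can also be approximated in code when you can't get the whole extended team on a conference bridge. Here's a minimal sketch of that pattern: spawn a handful of "user" threads, have each one hammer the app with randomly chosen actions, and collect whatever blows up. The `pokeathon` function and its parameters are hypothetical names for illustration, not anything from our actual toolchain.

```python
import random
import threading

def pokeathon(actions, num_users=8, iterations=50):
    """Concurrently run randomly chosen zero-arg callables from
    `actions` across `num_users` threads, mimicking several people
    poking at the app at once. Returns a list of
    (thread_name, exception) pairs for every action that raised."""
    failures = []
    lock = threading.Lock()  # protect the shared failures list

    def poke(name):
        rng = random.Random()  # per-thread RNG, avoids shared state
        for _ in range(iterations):
            action = rng.choice(actions)
            try:
                action()
            except Exception as exc:
                with lock:
                    failures.append((name, exc))

    threads = [threading.Thread(target=poke, args=(f"user-{i}",))
               for i in range(num_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return failures
```

In practice each action would be a request against the webapp (log in, save a record, run a report); an empty failure list after a run is the automated cousin of an hour on the bridge where nobody managed to break anything.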

While you have to have enough stuff implemented to make it worth people's time to participate, we've actually been pretty liberal about what we use for the pokeathon testing. We've had the app crash during the test (once about five minutes after starting!), and then started a discussion about what information we'd like to persist about the crash and techniques for figuring out what happened as a group. Pretty interesting.
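That "what should we persist about a crash" discussion tends to converge on the same short list: when it happened, what the exception was, the full traceback, and whatever app-specific context you can grab (active users, last request, and so on). A rough sketch of a crash recorder along those lines – the function name, fields, and default path are all illustrative assumptions, not our actual implementation:

```python
import json
import time
import traceback

def record_crash(exc, context=None, path="crash.json"):
    """Persist a post-mortem report for a caught exception:
    timestamp, exception type and message, full traceback, and any
    app-specific context the caller can supply."""
    report = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "exception": type(exc).__name__,
        "message": str(exc),
        "traceback": traceback.format_exception(
            type(exc), exc, exc.__traceback__),
        "context": context or {},  # e.g. active users, last request
    }
    with open(path, "w") as fh:
        json.dump(report, fh, indent=2)
    return report
```

Having a file like this waiting after a mid-pokeathon crash turns "what just happened?" into a group code-reading exercise instead of guesswork.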

Another benefit of the rough sessions was that they (naturally) came earlier in the release cycle, so participants in later sessions could both see the progress and verify for themselves that they could no longer cause the same trouble after a fix. I believe the openness helped build credibility for us as a group that listens and acts on feedback.