One query I have is how this fits into a workflow where there is a QA stage performed by testers who aren't developers.

It seems that many tasks could not easily be QA'd in isolation.

A practical alternative could be for the "To Verify" step to be carried out by another developer on the team, with an additional stage after DONE to QA the whole story (probably after deployment to a stage/test environment).

If you are interested in the concept of a "Scrum Task Board", take a look at Kanban. I highly respect Mike Cohn's work and, IMO, he is one of the best writers on Scrum. However, Kanban offers a richer set of solutions for those interested in these types of boards.
– GuyR Oct 22 '11 at 14:39

3 Answers

A practical alternative could be for the "To Verify" step to be carried out by another developer on the team, with an additional stage after DONE to QA the whole story (probably after deployment to a stage/test environment).

Does this seem right?

I assume you are suggesting redefining DONE as "each story is done" and adding a new column called "Product Ready To Release". If the team is not already delivering a potentially releasable build each sprint, then your suggestion makes a lot of sense and is in fact commonly used on Kanban boards.

So, what happens when the product is NOT ready to release? In that situation, consider:

Create work items to address each bug

Identify which story or stories caused each bug

Undo the "done" state of any story that was actually "not done"

Add the bugs to the backlog for prioritization

Discuss the root cause in the retrospective

Also, developers are notorious for developing up to the last minute of each iteration. This makes it difficult for the non-development team members who judge whether the product is ready to release to do their job. Kanban also helps here by suggesting that agile teams feature-box iterations instead of time-boxing them. However, if it makes more sense for your product to time-box iterations, you could agree as a team not to accept new stories in the next iteration until you can show the customer that the product is at a releasable level. Of course, you will need to articulate to the PO why the team cannot accept new work.

I am a dedicated tester on a Scrum team working on a web app. With regard to QA tasks on the task board, we first choose the user stories to be tackled in the sprint. We then split those stories into sub-tasks and add two sub-tasks for QA: one to develop acceptance tests and one to run them. The task to develop the tests can generally start immediately, in tandem with the engineering tasks, and the task to run the acceptance tests is performed once the engineering work on the story has been completed. It should be noted that the engineering team also has an acceptance test task, which is performed against a local machine deployment; the QA team then performs the more in-depth testing on a deployment closer to production.

It is not useful for the QA team to attempt to test every task the engineers do, as much of that work is refactoring or pieces of functionality so small that they are much better tested as a whole. All commits go through code review anyway. It would waste a lot of time going back and forth between the QA and engineering teams, asking how to test things and being told that the functionality is backend-only and would not be visible to the user, and so on. Each feature is already split neatly into stories, so those stories are the units of work the QA team tests as part of the sprint.

The only downside to this approach is that the engineers can either be left with very little to do in the last two or three days of a sprint, or be overwhelmed by sprint bugs to fix that may overflow into the next sprint. However, we have found that the very short feedback loop on finding bugs (bugs can be found within a day or two of coding) is much preferable to having a cleaner sprint plan but a feedback loop of up to two weeks.

This sounds really good, but where do you find such QA engineers? I have met 60+ "QA engineers" during my career, and I don't think more than 5 were able to do anything more complicated than manual tests. What are you using to "develop acceptance tests"? Do you think this model can also work for enterprise or standalone products where features without a UI exist?
– Ladislav Mrnka Oct 13 '11 at 11:30

Note that I did not use the phrase "QA engineers". While we are actually developing an automation framework, all of our current testing is manual and exploratory (we are all new to the product, and this is necessary for familiarisation as well). Our "develop acceptance tests" task involves detailing test cases and scenarios rather than scripting anything. Focusing on test cases at this point makes perfect sense for a web app due to the user variation involved. For more reading on that specific matter, I recommend this article: riceconsulting.com/articles/web-test-scripting.htm
– Snorbuckle Oct 13 '11 at 13:06

During the sprint, developers must verify that each completed user story meets its acceptance criteria (definition of done). This should be done by writing automated tests. The product owner must also verify that the user story really meets their expectations; only then is it considered done and can be presented at the review meeting. After the sprint you have a shippable increment of your product, and you can "ship" it to QA, where more intensive testing can be performed. This is really feedback collection: QA can find bugs or missing or inconsistent features and forward that feedback to the team or the product owner. QA can also provide feedback on the usability of the UI.
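As a minimal sketch of what such an automated acceptance test might look like, here is a pytest example for a hypothetical "shopping cart" story. The story, the ShoppingCart class, and the acceptance criteria are all invented for illustration; real tests would exercise your own production code, not a toy class.

    # Sketch: automated acceptance tests for a hypothetical user story,
    # "As a shopper, I can add items to my cart and see the correct total."
    # ShoppingCart and its API are invented here purely for illustration.
    import pytest

    class ShoppingCart:
        """Toy stand-in for the production code under test."""

        def __init__(self):
            self._items = []

        def add(self, name, price, quantity=1):
            if price < 0 or quantity < 1:
                raise ValueError("price must be >= 0 and quantity >= 1")
            self._items.append((name, price, quantity))

        def total(self):
            return sum(price * qty for _, price, qty in self._items)

    def test_empty_cart_totals_zero():
        # Criterion: a new cart shows a total of 0.
        assert ShoppingCart().total() == 0

    def test_total_reflects_added_items():
        # Criterion: the total is the sum of price * quantity over all items.
        cart = ShoppingCart()
        cart.add("book", 12.50, quantity=2)
        cart.add("pen", 1.00)
        assert cart.total() == 26.00

    def test_invalid_quantity_is_rejected():
        # Criterion: invalid input raises an error instead of being accepted.
        with pytest.raises(ValueError):
            ShoppingCart().add("book", 12.50, quantity=0)

Run with pytest; when all of a story's criteria pass (and the product owner agrees), the story can move to done.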

QA in this scenario is not part of the team, but that doesn't mean the team doesn't test its code! QA acts as an "end user" or "customer", providing feedback on the shipped increment.