
With Kanban you don't really have the problem of the sprint ending before you are happy that a feature is fully tested and working, so there is no need for an equivalent hardening sprint. Instead, you hold the work item in the test state until you have a robust set of automated tests that verify the acceptance criteria. Limiting the work in progress means that if a lot of testing tasks pile up, the team will need to focus on reducing them before new work items can begin.
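To make the WIP-limit mechanic concrete, here is a minimal sketch (class and column names are purely illustrative, not from any real board tool): once the test column is at its limit, nothing new can be pulled in until an item moves on.

```python
# Sketch of a WIP limit on a Kanban "test" column. When the column is full,
# pulling new work raises an error, forcing the team to finish testing first.

class Column:
    def __init__(self, name: str, wip_limit: int):
        self.name = name
        self.wip_limit = wip_limit
        self.items: list[str] = []

    def can_accept(self) -> bool:
        """True while the column is below its WIP limit."""
        return len(self.items) < self.wip_limit

    def add(self, item: str) -> None:
        if not self.can_accept():
            raise RuntimeError(
                f"WIP limit ({self.wip_limit}) reached on '{self.name}'"
            )
        self.items.append(item)


test = Column("test", wip_limit=2)
test.add("feature A")
test.add("feature B")
print(test.can_accept())  # False: clear the test column before pulling more
```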

Performance and load testing should be ongoing: they are part of the test state, and the test suites are added to for each work item that enters that state, if required.

As for exploratory testing, you can arrange a fixed time, for example every Monday from 9am to 11am, when the whole team does exploratory testing and then discusses the results.

You will also need a mechanism for keeping incomplete features out of a deployed build, such as a feature switch that lets you turn a particular feature on and off.
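A feature switch can be as simple as a flag checked at the point where the feature would appear. Here is a minimal sketch (the flag names and functions are hypothetical, not a real library API):

```python
# Minimal feature-switch sketch: an incomplete feature ships in the build
# but stays dark until its flag is flipped on.

FEATURE_FLAGS = {
    "new_checkout": False,  # still in the test state -- off in production
    "search_v2": True,      # fully tested -- enabled
}

def is_enabled(feature: str) -> bool:
    """A feature is live only if it is explicitly switched on."""
    return FEATURE_FLAGS.get(feature, False)

def render_checkout() -> str:
    # The incomplete code path is deployed, but unreachable while the flag is off.
    if is_enabled("new_checkout"):
        return "new checkout page"
    return "legacy checkout page"

print(render_checkout())  # legacy checkout page
```

In practice you would load the flags from configuration rather than a hard-coded dict, so a feature can be switched without a redeploy.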

@glowcoder - your lighthearted comment sparked off a train of thought that led to more questions :) - put them in chat this time though as it's more suited for discussion.
–
testerab♦May 18 '11 at 22:31

@stuartf - Ok - let me check my understanding of what you're doing as I'm not sure: I'm hearing that you're using a combination of automated regression checks plus exploratory testing of the whole app on a regular basis, which doesn't preclude further testing of the feature itself (i.e. that the acceptance criteria checks are part of the feature testing, not all of it). I initially read your answer to be saying that the automated tests were the whole of the feature testing, but I'm now unsure whether that's the right interpretation.
–
testerab♦May 18 '11 at 22:38

@testerab I didn't say that I allow one hour for exploratory testing, I said set a fixed time period to perform exploratory testing each week (you could do an hour each day if you want more); the duration depends on what you think is appropriate. Only do this in the allocated time, as your priority is to move the work items through the workflow, so you should focus on moving the items in the test state to the next state in your workflow. You can deploy at any time; it doesn't happen after exploratory testing, it usually happens when a work item enters the deployment state.
–
stuartfMay 18 '11 at 22:40

It does seem to me that the deployment cadence is almost always of a lower frequency than the delivery (by devs) cadence. Also, you don't get value to the customer until you pass a critical mass of components. Also, there is often a piece of legacy software that is resistant to automated testing. And (but not finally), stuff comes out of the woodwork that no one would think of automating, perhaps because the solution or technical design simply got it wrong.

These make me think that it might always be appropriate to have 'super stories', that each depend on a number of stories being delivered into the pipeline (not deployed), that allow for exploratory and user testing, and which end with a deployment.

In a lot of cases you work hard to have as much of your regression/performance/load testing as possible automated and run on something like a nightly basis (since those tests can often take many hours to run, you don't want to run them with every CI build, but you do want them run regularly, as often as is practical). Having the acceptance/regression tests for a new feature running is one of the final phases of getting that item through the pipe, as it were, and it should not release if those things are not online and running.
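That release gate can be expressed as a simple check: a work item only moves to deployment once its automated acceptance/regression checks exist and all pass. A hedged sketch (the dict shape and function name are assumptions for illustration, not a real CI API):

```python
# Release gate sketch: an item may not release if its automated
# acceptance/regression checks are missing or failing.

def can_release(item: dict) -> bool:
    checks = item.get("automated_checks", [])
    if not checks:  # no suite online yet -> hold the item in the pipe
        return False
    return all(check["passed"] for check in checks)


feature = {
    "name": "export-to-csv",
    "automated_checks": [
        {"name": "acceptance", "passed": True},
        {"name": "regression", "passed": True},
    ],
}
print(can_release(feature))                  # True: all checks online and green
print(can_release({"name": "half-done"}))    # False: no tests wired up yet
```

In a real pipeline this decision would come from your CI server's build status rather than an in-memory dict, but the gating logic is the same.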