In my company, we work successfully with agile practices, but without using iterations. The main reason is that we can't find a clean way to fit QA into an iteration cycle.

We understand QA as an extra bit of verification of a specific build (a release candidate) before that build is deployed to the customer. The point is to prevent a single faulty commit from damaging the whole release. Since you never know which commit that will be, QA has to wait until all features/commits for the release are in the build. (No famous last words of "it was just a tiny change" allowed.)

If QA finds bugs in a release candidate, the developers fix them on the respective release branch (and merge the fixes into trunk). When all bugs are fixed, a new build is deployed for QA to re-test. Only when no bugs are found in a release candidate is it offered to the customer for verification.
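For illustration, that branch flow looks roughly like this (a minimal sketch scripted in Python around git; the branch names release-N and trunk are placeholders, not our actual ones):

    import subprocess

    def run(*cmd):
        """Run a git command, failing loudly if it does not succeed."""
        subprocess.run(cmd, check=True)

    # Bugs found by QA are fixed on the release branch itself ...
    run("git", "checkout", "release-N")   # placeholder branch name
    # ... the developer commits the fix here ...
    # ... and then the fix is merged back into trunk so it is not lost:
    run("git", "checkout", "trunk")
    run("git", "merge", "release-N")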

This usually takes two to three candidates, about one week, per release. The time needed to write the fixes is typically much less than the testing effort, so to keep the developers busy, they work on release N+1 while QA tests release N.

Without iterations this is no problem, because we can overlap the work for releases N and N+1. However, from what I understand, this is not compatible with iteration-based approaches like Scrum or XP: they demand that an iteration be releasable at its end, with all testing effort incorporated into the iteration.

I find that this necessarily leads to one of the following unwanted results:

(A) Developers are idle at the end of an iteration, because QA needs time to verify the release candidate and the bug-fixing work alone does not keep them busy.

(B) QA starts working before the first release candidate is ready. This is what is mostly recommended on Stack Exchange, but it is not what my company understands as QA, because no specific release candidate is being tested, and the "tiny change" that breaks everything can still be introduced unnoticed.

(C) Bugs are carried over to the next iteration. This is also recommended on Stack Exchange, but I don't think it is a solution at all: it means we never get a verified build, because whenever bug fixes are made, new, unverified commits are added to the same branch as well.

Why does QA take so long? Are your automated tests not catching regressions? – psr, Jan 16 '13 at 0:03

@psr: Above the unit level, it is rare that everything can be automated. As I understand it, their QA team is testing at the integration and acceptance level, and automated tests can't find everything, especially when timing starts to play a role. – Bart van Ingen Schenau, Jan 16 '13 at 11:54

4 Answers

We understand QA as an extra bit of verification of a specific build (a release candidate) before that build is deployed to the customer.

There is nothing inherently incompatible between this form of QA and iteration-based methodologies like Scrum. Within Scrum, the team delivers a product increment to its customer on an X-weekly cycle. The important part here is that, if the development team is doing Scrum, then their customer is the QA team, not the end user of your product.

As a developer, I would consider a product shippable to QA if it has a fighting chance of passing all their tests. This probably means that some of the QA tests have already been executed on the daily builds, but how that affects the official release tests by the QA team is up to your organisation.

This throw-it-over-the-wall-to-QA approach tends to carry its own problems. It dramatically increases the feedback time when you introduce a bug. If you write something at the beginning of the cycle and QA don't test it until the end, but you've missed some edge case, your mind has left that particular piece of development behind by the time the bug is reported. Better to have features tested as they're completed. – pdr, Jan 16 '13 at 14:18

@pdr: For that reason it would be good practice to run a portion of the QA tests unofficially on the nightly build. Some industries just need a higher confidence level than "it worked when we tested it on feature completion". They need an "it works correctly in the exact version we delivered to you" confidence level. – Bart van Ingen Schenau, Jan 16 '13 at 16:30

How do you suggest QA find time to test a future version when they're under pressure to test the release candidate and get it out of the door? – pdr, Jan 16 '13 at 17:04

@pdr: By not deferring the unofficial tests to QA, but running them yourself as the development team. They are primarily meant to increase your confidence that you are delivering quality software anyway. – Bart van Ingen Schenau, Jan 16 '13 at 17:14

I'd love to agree. In my experience, the more you separate dev and QA, the more culpability rests with QA and the less responsible even otherwise-conscientious developers become. Again, while under pressure to do development work, the unofficial QA becomes a secondary task, one that doesn't get done, because developers are not the ones who will get into trouble if the software fails in production. If QA and dev work as a single unit to deliver software together, that doesn't happen so much. – pdr, Jan 16 '13 at 17:42
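As a sketch of the unofficial nightly run suggested in this thread (assuming, purely for illustration, a pytest-based QA suite in which a quick subset is tagged with a hypothetical "smoke" marker), the development team could run something like:

    import subprocess
    import sys

    def run_nightly_smoke_tests(test_dir="qa_tests"):
        """Run only the quick, tagged subset of the QA suite against the
        nightly build. This supplements the official QA cycle; it does
        not replace it."""
        result = subprocess.run(
            [sys.executable, "-m", "pytest", test_dir, "-m", "smoke"]
        )
        return result.returncode == 0

    if __name__ == "__main__":
        if not run_nightly_smoke_tests():
            sys.exit("Nightly smoke tests failed; check this morning's build.")

The point, as discussed above, is earlier feedback for the developers, not a substitute for testing the exact release candidate.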

In most real-life situations, agile stops at delivery to QA/UAT or whatever it's called.

The effort to move from QA to production in a real-life environment is often underestimated. In many cases this involves real business users in testing, management sign-off from the actual line-of-business managers, scheduling the release with operations, and so on. This is not trivial!

In extreme cases the software may need certification by outside agencies, or be subject to rigorous security testing.

In these circumstances it is simply impossible to envisage more than one release per quarter except for bug fixes.

It gets worse for a serious software product. Documentation needs to be proofed and published, marketing brochures need to be amended, sales people need to be told what they are selling (no easy task!), and so on. You really don't want to put the business through this more than once a year.

The very short-term solution is to give QA an extra period of time after your iteration to finalise testing; i.e. if you have a two-week iteration, don't release until week 3. QA are going to have nothing to test towards the next iteration during the first week of it anyway.

But I'll warn you up front what will happen (having seen this in several teams): you'll end up in a situation where one iteration you get two weeks' work done, QA are overloaded and coming to you throughout that whole QA week, and the following iteration you'll only get one week's work done. That iteration, QA will have nothing to do and you'll think you've solved the problem. But then the next iteration you'll start the cycle again.

So, as soon as you've added that week on, just to make sure your release is stable (because one thing I've learned is that if you lose the faith of the business, Agile gets exponentially harder to implement), get straight onto the long-term plan.

Buy a copy of Jez Humble's Continuous Delivery, read it cover to cover, and pass it around the team. Get everyone inspired. Then implement everything you can from it.

Make the build process as slick as you can. Implement a unit-test policy and get those tests running on every build. Make the deployment process the slickest thing you've ever seen. Three clicks? Not slick enough.

Once you've done all this, it won't matter so much if the occasional regression bug gets through. You know why? Because you'll be able to roll back if need be, fix it, and deploy again before the business falls to pieces. In fact, the night janitor will be able to roll back for you, the process will be so simple.
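To make that concrete, here is a hedged sketch (not a description of any particular team's setup) of a one-command deploy with an equally simple rollback: each release is copied into a timestamped directory and a symlink is flipped atomically, so rolling back is just flipping it again. All paths are hypothetical placeholders, and every deploy is gated on the test suite:

    import os
    import shutil
    import subprocess
    import sys
    import time

    RELEASES = "/srv/app/releases"   # hypothetical layout
    CURRENT = "/srv/app/current"     # the symlink actually being served

    def tests_pass():
        """Gate every deploy on the automated test suite."""
        return subprocess.run([sys.executable, "-m", "pytest"]).returncode == 0

    def switch(release):
        """Atomically repoint the 'current' symlink at the given release."""
        tmp = CURRENT + ".tmp"
        os.symlink(release, tmp)
        os.replace(tmp, CURRENT)

    def deploy(build_dir):
        if not tests_pass():
            sys.exit("Tests failed; refusing to deploy.")
        release = os.path.join(RELEASES, time.strftime("%Y%m%d%H%M%S"))
        shutil.copytree(build_dir, release)
        switch(release)

    def rollback():
        """Point the symlink at the previous release; simple enough
        for the night janitor."""
        releases = sorted(os.listdir(RELEASES))
        if len(releases) < 2:
            sys.exit("No previous release to roll back to.")
        switch(os.path.join(RELEASES, releases[-2]))

    if __name__ == "__main__":
        if len(sys.argv) > 1 and sys.argv[1] == "rollback":
            rollback()
        elif len(sys.argv) > 1:
            deploy(sys.argv[1])
        else:
            sys.exit("usage: deploy.py <build_dir> | deploy.py rollback")

One command to deploy, one command to roll back; that is the level of slickness the book is driving at.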

I know what you're thinking: we don't have time to do all that. Let me tell you, you do. If you're overloading QA, you're deploying too much per iteration. So don't. If you're not overloading them already, then ask them why they don't have automated test suites yet. You soon will be.

Do all this with full visibility to the business. Estimate lower and inject some of this work into the iteration. Or, better still, break it into stories and get the business to prioritise it alongside everything else.

Explain to them that (a) it'll improve the stability of the release, (b) it'll improve your ability to respond to problems for them, and (c) it'll buy them more velocity later. It's a rare company that doesn't want these things, and it's certainly not an Agile company that doesn't want them, so if you get resistance, you'll know you have a different problem.

Once you've got Continuous Delivery down pat, you can start shortening the time QA get at the end of the iteration. It's in everyone's interest to bring the iterations back into parallel as soon as possible. Maybe you'll have one day at the end of the iteration where you need to fill in time; I've already answered what to do about that elsewhere.

Without iterations this is no problem, because we can overlap the work for releases N and N+1.

There seems to be a problem with the way you decided what exactly constitutes work for release N.

For some strange reason (I can only guess there's some misunderstanding of particular Agile recipes), you somehow decided that an agile approach mandates that all QA team effort be incorporated into the iteration.

If that were really the case, I suppose Agile's popularity wouldn't be anywhere close to what we see now. I cannot imagine many projects that could "survive" a mandatory syncing of dev team iterations with QA test cycles.

There's a bit more on agility below, but first, let's sort out the work for release N...

Look, there is just no compelling reason for the development team to define work that way. Quite the opposite: from your description it is clear that instead of a monolithic "unit of work", there are several such units, with milestones that are easy to identify...

E.g., the first "unit" ends at a distinct milestone, the moment when the candidate build is passed to the testers, and further milestones correspond to the changes involved in the test cycles performed by QA, etc.

Note also that the way you define work for release N is not forced by the QA workflow either. From what you describe, it looks like they have their own (and pretty reasonable) schedule.

Given the above, a more realistic way to define work units in your case could be as follows:

Development activities up to the moment the build is passed to QA → Release Candidate N

These are natural and convenient to define, follow, and track. This also blends well with the QA schedule, allowing for a convenient coordination of dev and QA efforts.

However, from what I understand, this is not compatible with iteration-based approaches like Scrum or XP: they demand that an iteration be releasable at its end, with all testing effort incorporated into the iteration.

The above understanding of compatibility with Agile looks fundamentally wrong, and here is why...

The assumption you made has nothing to do with Agile if we take its philosophy at face value, as indicated by its very name: an approach that favors and practices agility.

From that perspective, sticking with a particular "fixed" workflow while ignoring whether it is convenient or not simply contradicts the spirit of Agile. Slavishly following the "procedure" leads to the practices denigrated so eloquently in the Half-Arsed Agile Manifesto: "...we have mandatory processes and tools to control how those individuals (we prefer the term 'resources') interact".

You can find more about this in an answer to another question, quoted below. Take a look at the note on "shippable release"; it looks like that OP was confused in a similar way back then:

One should be agile about the very application of agile principles. I mean, if the project requirements aren't agile (they are stable or change slowly), then why bother? I once observed top management forcing Scrum onto projects that were doing perfectly well without it. What a waste it was. Not only were there no improvements in their delivery but, worse, developers and testers all became unhappy.

For me, one of the most important parts of Agile is having a shippable release at the end of each sprint. That implies several things. First, a level of testing must be done to ensure there are no showstopping bugs if you think you could release the build to a customer...

Shippable release, I see. Hm. Hmmm. Consider adding a shot or two of Lean into your Agile cocktail. I mean, if this is not a customer/market need, then it would only be a waste of (testing) resources.

I, for one, see nothing criminal in treating the sprint-end release as just a checkpoint that satisfies the team.

Dev: "yeah, that one looks good enough to pass to the testers"; QA: "yeah, that one looks good enough in case further shippable-testing is needed" - stuff like that. The team (dev + QA) is satisfied; that's it.