What do Scrum teams do during the Release Sprint?

During the release sprint or hardening sprint, what do Scrum delivery teams do?

What does the system test or integration team do during development sprints?

How much of a Scrum team’s time should be split between supporting an integration team’s release-level activities and planning for the next release?

Shouldn’t we stagger our product releases so that each group stays busy?

Many organizations have a system test team or integration team that is separate from the Scrum delivery team. I sometimes get questions like the ones above from organizations once they begin considering an Agile approach. In particular, the questions usually come from large organizations that have to integrate the work of multiple teams. I’ll answer these questions, but let me give you a couple of definitions first.

Definitions

A System Test Team:

You may know this team by a different name, such as the:

Independent Verification and Validation Team

Integration and Regression Test Team

Release-Level Integration Team

No matter the name, this team is responsible for executing continuous integration and regression testing across teams, and for managing problem resolution by feeding defects back into the delivery teams. They work closely with the System Team to continually improve the continuous integration and testing capabilities.
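To make “continuous integration across teams” a little more concrete, here is a minimal shell sketch of the kind of nightly cross-team check such a team might automate. The directory layout, team names, and `build.ok` marker file are all assumptions for illustration, not a prescription for any particular toolchain.

```shell
#!/bin/sh
# Illustrative sketch of a nightly cross-team integration check.
# Assumption: each delivery team publishes its verified build under
# components/<team>/, marked with a build.ok file.
set -eu

# Simulate two delivery teams publishing verified builds.
mkdir -p components/payments components/accounts
touch components/payments/build.ok components/accounts/build.ok

status=0
for dir in components/*/; do
  team=$(basename "$dir")
  if [ -f "$dir/build.ok" ]; then
    echo "OK: $team build verified"
  else
    # In a real pipeline, a failure here would become a defect
    # fed back to the owning delivery team.
    echo "FAIL: $team missing verified build"
    status=1
  fi
done

echo "integration status: $status"
```

The point of even a toy script like this is the feedback loop: the check runs every night, and any failure is routed straight back to the team that owns the component rather than piling up for a release sprint.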

A System Team:

This team goes by a couple of different names too, such as the:

Enablement Team

Build as a Service Team

This team is responsible for building everything needed to support continuous integration, test environments, and test data management. They develop the tools and support the automation. This is often a very small team, sometimes just one or two people. You only need this team when testing requires integrating the outputs of multiple teams.

Now for the answers to the questions.

First off, read Dennis Stevens’ blog on six things teams can do that are better than writing code that can’t be tested. During a release sprint, delivery teams can work on learning, reducing technical debt, improving unit testing, feature-level validation, refactoring, or preparing for the next release. They should be instantly available to support any defect found in the hardening or release sprint, or at least be directly involved in the testing. Whatever time remains should be spent improving their capability and preparing for the next release.

During development sprints, the release-level integration team, or system test team, may be integrating and verifying everything delivered in the prior sprint. They should also be getting ready to test what is coming next. This means collaborating to refine acceptance criteria or to define tests for work about to go to delivery teams. They should be improving their automation and making an effort to understand and critique the delivery teams’ unit test coverage. In addition, they should be improving their capability.

Release Sprint Anti-Patterns

It’s important to avoid the late-testing anti-patterns.

Anti-Pattern One:

Delivery teams not verifying their product effectively and throwing garbage over the wall to the system test team.

Anti-Pattern Two:

Teams not finishing features (a set of related stories that ultimately need to be tested together as a unit) throughout the release such that nothing is system testable until the end of the release.

Anti-Pattern Three:

Deferring any amount of integration and regression testing until everything is complete.

Conclusion

The goal is to test as much as possible throughout the release, deferring very little to the end. This means delivery teams deliver verified and technically excellent features each sprint; until they are perfect at this, they have work they can do to improve. It also means system test teams integrate, verify, and validate frequently (continuously); until they are capable of doing this, they too have work they can do to improve.

If you are looking for a simple formula, you will be disappointed: there is no simple percent-of-time formula for this. Look at where you are, coordinate with all the teams involved, improve your ability to minimize untested code, make sure work is ready ahead of development, and maximize the flow of validated product through the system.

* Credit: I took an email from Dennis and massaged it into this blog post.

Andrew (@AndrewMFuqua) was a founding member of XP-Atlanta in 2001. Currently an Enterprise Transformation Consultant, Andrew has previously held positions in management, product management, and software development at companies like Internet Security Systems, Allure Global, and IBM. Andrew earned a BS and an MS in computer science and has an MBA from Duke University.