Scheduling requirements changes – part 2

This process goes against agile principles on paper, but makes teams more agile in practice.

Scheduling delivery of a project is an exercise in managing complexity. Scheduling changes to the requirements on the fly is really only marginally more difficult. The key to managing changes is to set expectations with our stakeholders. By defining rational deadlines for change requests, we assure ourselves that we can manage the changes. We also demonstrate responsiveness to our stakeholders. Rational deadlines are not arbitrary deadlines nor are they unreasonable deadlines. Deadlines that vary with the complexity of the changes are rational, easy to communicate, and easy to manage.

In part two of this article, we show how to define deadlines for change requests based on the complexity of the proposed change.

Complexity of change (review)

In part 1, we defined four buckets, and every change request drops into one of them:

Simple implementation (less than 2 hours) or minimal risk

Easy implementation (less than 1 day) or low risk

Hard implementation (less than 1 week) or appreciable risk

Major implementation (less than 1 release cycle) or high risk

For this article, we will work with the assumption that each release cycle is 4 weeks long, and the development team is between 2 and 10 people. When there is a single developer, it is much easier to handle change, and with more than 10 developers a development team should be grouped into sub-teams that operate in a coordinated but independent way on different elements of the project.

Release schedule timing

When we talk about a schedule, we will talk in terms of a countdown to a release date. A release date is the date that developers stop. If our team uses a code freeze, or delivers to another internal team prior to customer delivery, that first delivery is the release date. All of the dates we talk about in this post are relative to that development-terminating release date. Everyone is familiar with NASA countdowns – “T minus 20, 19,…” – which count down to the point of ignition of the engines. We will use the same language, but instead of counting in seconds, we will be counting in days – specifically weekdays. “T minus 5” is 5 weekdays prior to the release date.

Incremental delivery sometimes refers to delivering to the customer with each release, and sometimes refers to internal releases that happen between external releases. When we are scheduling releases, we are referring to each incremental delivery (either internal or external). The following diagram shows a timeline for a single release:

S1: The deadline for simple change requests: T-2. Changes that take less than 2 hrs to implement must be vetted at least two days prior to the release.

E1: The deadline for easy change requests: T-5. Changes that take less than 1 day to implement must be vetted at least five days prior to the release.

H1: The deadline for hard change requests: T-10. Changes that take less than 1 week to implement must be vetted at least ten days prior to the release.

M1: The deadline for major change requests: T-20. Changes that take less than a full release cycle to implement must be vetted prior to the start of the release.
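The deadline rules above are mechanical enough to sketch in code. The following is a minimal sketch, assuming a 4-week release cycle as stated earlier; the bucket names and function name are hypothetical, and the offsets are the T-minus weekday counts from the list above.

```python
from datetime import date, timedelta

# Weekday offsets for each complexity bucket (hypothetical labels),
# matching the T-minus deadlines listed above.
VETTING_OFFSETS = {
    "simple": 2,   # T-2: less than 2 hours of work
    "easy": 5,     # T-5: less than 1 day of work
    "hard": 10,    # T-10: less than 1 week of work
    "major": 20,   # T-20: less than one 4-week release cycle
}

def vetting_deadline(release_date: date, complexity: str) -> date:
    """Last weekday on which a change request of the given
    complexity can be vetted for the given release date."""
    remaining = VETTING_OFFSETS[complexity]
    day = release_date
    while remaining > 0:
        day -= timedelta(days=1)
        if day.weekday() < 5:  # Monday=0 .. Friday=4; skip weekends
            remaining -= 1
    return day

# Example: a release shipping on Friday, 28 Apr 2006
print(vetting_deadline(date(2006, 4, 28), "easy"))    # 2006-04-21
print(vetting_deadline(date(2006, 4, 28), "simple"))  # 2006-04-26
```

Because the countdown skips weekends, T-5 from a Friday release lands on the previous Friday, not on Sunday – which is exactly the point of counting in weekdays.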

[Update 12 Apr 2006]

Thanks Roger for the great comment (below) suggesting that we incorporate timeboxing into this post. We just posted an article on how to use timeboxes when scheduling software delivery. In our diagram above, we show the timing for the vetting of a change request, relative to a single timebox’s release date. These timeboxes would be strung together as part of an incremental delivery plan, as the following diagram shows.

[end update 12 Apr 2006]

Vetting a change request

A fully vetted change request is more than a properly documented request. Vetting is the process of validation and verification. A change request is a requirement. It is either a previously scheduled requirement that must be changed, a requirement newly scheduled for this release, or a new requirement.

A requirement is validated through communication with the stakeholders. Usually a stakeholder submits a change request. The product manager or program manager will then validate the requirement with the stakeholder, usually in an interview. The PM will also determine the proper priority for the requirement, as well as identify the desired release for the requirement.

A requirement is verified by the development team – usually the development lead or a senior developer. Verification includes the following steps:

Confirm the correct interpretation of the requirement. Does the developer understand the change request? Is his understanding correct?

Estimate the implementation time. The developer must commit to a PERT estimate for delivery of the change request. Creating a good estimate may require design effort or prototyping for hard or major changes.

Assess the risk associated with the change request. The developer and project manager may need to collaborate to determine the risk.
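The estimation step above names a PERT estimate. A minimal sketch of the classic three-point PERT calculation follows; the function name and the sample figures are hypothetical, chosen to illustrate a "hard" change.

```python
def pert_estimate(optimistic: float, likely: float, pessimistic: float) -> float:
    """Classic three-point PERT expected value: (O + 4M + P) / 6,
    weighting the most likely estimate four times as heavily as
    the best and worst cases."""
    return (optimistic + 4 * likely + pessimistic) / 6

# A hypothetical 'hard' change: best case 2 days, likely 4, worst case 9
print(pert_estimate(2, 4, 9))  # 4.5 days
```

The pessimistic figure is where the risk assessment in the next step feeds back into the estimate: a risky change widens the gap between the optimistic and pessimistic values.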

Without validation of the change request, we risk building the wrong functionality – the hardest source of bugs to eliminate. Change requests usually come with a sense of urgency, making it even more likely that we will misinterpret them. Without verification, we don’t know how big the impact of the change is (or might be). Until the requirement is vetted, it must not be accepted by the PM for inclusion in a particular release.

Zero-sum game

All changes are scope creep, because they are asking us to do something more, or something different. Sometimes, we have to do something again. We made the assumption in part 1 of this post that we already have a fully committed development team at the start of the release. To incorporate a change, something must be removed from the release.

Which something should we remove? There is no general answer for that question. We have to look at the skills and availability of our team members. We have to understand the interdependence of tasks in our current schedule. Interdependence is especially tricky because it not only affects sequencing, it affects scoping. Developers will make assumptions when estimating the work to implement a particular requirement. One of those assumptions will usually be that something else is already implemented. For example, implementing a new report is dependent upon the reporting engine being implemented.

Our PM needs to work with the development team (usually the dev lead) to understand which committed features can be pushed out. Actually, every deliverable can be delayed, but sometimes, pushing out feature X also means delaying features Y and Z. A Gantt chart will reveal these dependencies if properly documented and managed. Requirements traceability can also be a source of dependency information – a requirement to show per-item shipping charges will depend upon the ability to show an itemized quote. While the development team may be able to implement the per-item shipping charge functionality in the current release, if the itemized quote functionality is pushed to a future release, the stakeholders will not get any benefit from the per-item shipping charge display capability. This is why we communicate release content in the form of use cases, or enabled capabilities.
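The dependency problem described above – pushing out feature X also delays features Y and Z – can be sketched as a transitive walk over a dependency map. This is a minimal sketch; the feature names echo the examples in the text, and the map itself is hypothetical.

```python
# Hypothetical dependency map: feature -> the features it depends on.
DEPENDS_ON = {
    "new_report": {"reporting_engine"},
    "per_item_shipping": {"itemized_quote"},
    "itemized_quote": set(),
    "reporting_engine": set(),
}

def must_also_push(pushed: str) -> set:
    """Features that lose their value (or cannot ship) if `pushed`
    is removed from the release: everything that depends on it,
    directly or transitively."""
    affected = set()
    frontier = [pushed]
    while frontier:
        current = frontier.pop()
        for feature, deps in DEPENDS_ON.items():
            if current in deps and feature not in affected:
                affected.add(feature)
                frontier.append(feature)
    return affected

print(must_also_push("itemized_quote"))  # {'per_item_shipping'}
```

A Gantt chart or traceability matrix captures the same information; the point is that the PM and dev lead need it in some queryable form before they can answer "which something should we remove?"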

This may seem like a burdensome process. It isn’t. We are not pro-process, we are only pro-valuable-process. We’ve had success using this process both to introduce predictability into the development process, and as a simple and clear communication vehicle to stakeholders who may not appreciate the challenges of software development. Our experience is that this process allows more changes to be implemented earlier. The process goes against agile principles on paper but makes teams more agile in practice.

We’ve worked with teams that made their stakeholders wait for months to get functionality – even though they used a monthly release cycle for their applications. Their stakeholders complained about the lack of responsiveness of the IT organization. The IT organization complained about the inability to deliver what the business users asked for. The root cause of their pain was inadequate vetting of the requirements combined with a lack of vetting of change requests – if a request was important, it was approved. The IT team struggled and juggled every month to get stuff done. They relieved the pressure by pushing out commitments until the business was waiting for months to get anything other than bug fixes. Change management was a special event, and required management attention.

For the teams that implemented a process like this one, the changes within a few release cycles were almost astounding – better quality, better quality of life for the developers, more predictability, and higher satisfaction for the stakeholders. The teams that didn’t still struggle and juggle.


4 thoughts on “Scheduling requirements changes – part 2”

You might want to incorporate time-boxing into this discussion. With time-boxing, you hold the deadlines constant and vary the features you implement within them. So if you’re having any trouble meeting a deadline for an internal or external release, you cut features rather than extend the deadline.

And the analysis of whether to include a CR in a release should also look at the value delivered in that release. If you push one piece of functionality out of the current release, the value of that release can decrease if the value of the CR is not greater than the value of the functionality pushed out. IMHO, the functionality that delivers the most value (including CRs) should be released earliest.
What do you think?
