Defining Done in a DevOps World

InfoQ recently published an article on QA in Scrum, which included a really simple Definition of Done list.

In a counterpoint post, Matt Davey added Acceptance Testing to the list, making acceptance tests part of the definition.

That’s great, but I feel they have both missed a critical point.

Your feature is worth nothing if it has not been put in front of users, or if its quality is so low that those who do use it stop.

As software delivery specialists (developers, QAs, project managers, product managers, sysadmins etc.) we strive to make useful software products, and they are of little value to us or the business if no one is using them. We use “Done” to define when a unit of work (feature, use case, story, task etc.) is complete. Rarely does that definition take into account the true realisation of the value locked inside it.

Done
“Done” is probably one of the most variable terms in software engineering methodologies. To be fair, it’s succinct and to the point and really should mean what it says. However, it is just far too ambiguous.

We have a myriad of different definitions of what it means to be “Done.” All development practices focus on what those are, and Scrum teams have their “Definition of Done” checklists to tell them when a unit of work is complete. Every team I have worked with has had a widely different definition. To get around this we have all heard awful phrases such as “Done, Done” or “Ready, Ready” when we mean more than just completing part of that unit of work. I am as guilty as anyone of using these to try to cut through the ambiguity, which in turn only creates more of it.

Almost none of the definitions I have seen across various teams completely match the DevOps culture we should be trying to instil. The idea that the software has to be used for it to have value is not included. The delivery team focuses on build but not operations. In a world of DevOps and Continuous Delivery, the lack of the last mile in such a key delivery metric has become a stumbling block. When velocity is measured against a definition of Done that allows a huge batch of work to build up towards release, we are generally asking for failure. With that failure comes the invalidation of the declaration of success that the project tracking has given the team.

A DevOps definition of “Done”

“Released with a high enough level of confidence in its quality.”

This, for my team, embodies the necessary premises of DevOps: collaborating to fully deliver the product to our end users, from code to production. There is still a checklist of what that means, but now the team knows, when we say Done, how far we expect them to have taken it. They understand that it isn’t enough to have finished the development tasks plus the relevant QA; it has to be released, and they understand that they need to prepare for this early. Thinking about it up front, at the start of the project, means that the first thing a team does is ensure that there is a solid and repeatable pipeline to production. That generally means development collaborating with our operational counterparts right from the start. Doing it early in the project gives us the opportunity to leverage that specialism from the outset, and immediately starts to break down the synthetic silos between development and operations.
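To make the idea concrete, here is a minimal sketch of what encoding this definition of Done as a release gate might look like. All the names, fields and the confidence threshold are illustrative assumptions, not a prescribed implementation; the point is simply that “released” and “confidence in quality” are explicit conditions, not afterthoughts.

```python
from dataclasses import dataclass

@dataclass
class UnitOfWork:
    """A feature/story/task being tracked by the team (fields are illustrative)."""
    dev_complete: bool = False
    acceptance_tests_passing: bool = False
    released_to_production: bool = False
    confidence: float = 0.0  # team's confidence in quality, 0.0 to 1.0

# "High enough" is a team decision; 0.9 here is just a placeholder.
CONFIDENCE_THRESHOLD = 0.9

def is_done(work: UnitOfWork) -> bool:
    """Done = released with a high enough level of confidence in its quality."""
    return (work.dev_complete
            and work.acceptance_tests_passing
            and work.released_to_production
            and work.confidence >= CONFIDENCE_THRESHOLD)
```

Under this definition, a story that has finished development and QA but has not been released does not count as Done, which is exactly the shift in mindset the team needs.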

There’s also little excuse for anyone to deliver without confidence in the quality of their work. Through shared responsibility we should understand that there is enough test coverage, including those acceptance tests. Coverage only needs to be high enough to know the feature won’t break under the majority of user circumstances; it does not have to be exhaustive. You can never truly have 100% confidence, since escaping bugs are a fact of life for software engineers. But we can minimise their impact through the right amount of testing and assurance along the path to production.

The definition also does not shy away from the fact that a unit of work cannot count towards our velocity until it has been released. Project progress is measured on whether we are delivering actual value to the user and the business. This is a bit of a leap of faith, but think about it: is it right to report success without running that last mile?

Nothing New
Nothing I am saying here is new or groundbreaking. It seems like common sense to me, but DevOps is a cultural shift for many engineers and sysadmins. Defining “Done” to be something that ensures the start of that collaboration is one way of beginning to instil the culture and values. Seeing the value in the engagement early leads to more of a willingness to collaborate further. I’ve had some success with it in my own teams. While it takes time to fully embed the practice, if your team is willing then this is definitely worth a go.