Why not velocity as an agile metric?

In response to my recent post on Agile Metrics, a reader asked, “Why did you leave out Velocity?”

Even though it’s not perfect, velocity is the best way we have to understand the capacity of teams. It’s the best way we have to bring some reality to planning for releases. Watching velocity over time and looking at patterns in burn downs can alert coaches and managers that something is going on, and they need to investigate.

Velocity is important. But as a metric for gauging how your agile adoption is going, it opens a door to danger.

Here’s why.

Velocity is easy to manipulate. Want velocity to go up? Fudge the definition of done and you finish more stories. Change the scale and complete more points (what once was a 2 point story is now a 5 point story).

Velocity is easy to misuse. Managers who don’t see organizations as systems can use it to compare teams or punish teams, neither of which is helpful.

Velocity—as an agile adoption metric—puts the focus in the wrong place. Focusing on velocity implies that if velocity isn’t improving, there is something wrong with the team. In some cases, that might be true. But I don’t want people to look there by default. When velocity isn’t improving or is erratic, it’s often due to factors that aren’t in the team’s direct control. There might be a problem with the way the work is flowing into the team. Or the team may be interrupted every hour with production support calls (or whatever). Or the team may not have the tools they need to do their work. That’s something for the team and team coach to work on or raise as an impediment (where managers can work on it at the system level).

For assessing the progress of an agile adoption, I choose metrics that emphasize system performance, to help managers make the shift from “work harder” thinking to “optimize the whole system” thinking. Managers, after all, are responsible for creating the environment (structures, policies) and enabling conditions for teams to be successful. To do that, they need a way to assess how the system is functioning. Because I presume that the point isn’t being “agile” but delivering valuable software.


23 thoughts on “Why not velocity as an agile metric?”

I was coaching a team once, and they wanted a velocity that matched another team on a different project (35 vs. 150), so a few people suggested we multiply our current sizes by 2; voilà, the team looked better. In the end the team didn’t change, but it shows how easy it is to mess with velocity.

Primarily, it should be the rate at which a product is moving forward (not a team). Stuff that doesn’t move the product forward, but is important for hygienic or human reasons, probably shouldn’t appear. Sadly, this meaning leads people to demand a stop to hygienic practices that don’t add features, since they decrease velocity.

Secondarily, velocity is the rate at which a team completes work, which is primarily a measure of how encumbered they are by the local circumstances (lack of knowledge/skill, bureaucratic overhead, legacy code, external vetoes, old ideas). This measure, however, leads people to do “busyness accounting” rather than measuring forward progress. Insisting that there is a high level of activity does not equate to having a high level of accomplishment or quality.

Velocity is a very useful measurement, but only if we never lean on it. If you push a gauge, it no longer reads accurately.

I think paying attention to the _consistency_ of velocity can be a good adoption indicator. As noted, erratic velocity may be an indicator of an issue with how work is flowing to the team. That could mean there is a system problem that needs to be addressed.
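The “consistency” idea can be made concrete. As a rough sketch (the function name and sample numbers are my own, not from the comment), the coefficient of variation of recent sprint velocities gives a single number for how erratic delivery has been:

```python
from statistics import mean, stdev

def velocity_consistency(velocities):
    """Coefficient of variation (stdev / mean) of recent sprint velocities.

    A lower value means steadier delivery; a high value suggests work
    may be flowing to the team erratically and is worth investigating.
    """
    if len(velocities) < 2:
        raise ValueError("need at least two sprints to judge consistency")
    return stdev(velocities) / mean(velocities)

steady = velocity_consistency([21, 23, 22, 24, 22])   # small, stable swings
erratic = velocity_consistency([8, 35, 12, 40, 10])   # wild sprint-to-sprint swings
```

The number itself matters less than the comparison over time: a team whose coefficient of variation is shrinking is settling into a predictable flow, whatever its raw velocity is.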

Anytime something is expressed numerically, the tendency is to compare it to other metrics. What if we only represented velocity trends in graphic form with no Y axis values? This would de-emphasize the numeric aspect and encourage focus on what’s really of value: the trend/pattern.
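One way to try this: render the trend as a bar pattern with no numbers at all, so the shape is visible but there is nothing to compare across teams. A minimal sketch (the characters and scaling are my choice, not anything from the comment):

```python
BARS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    """Render a velocity trend as unicode bars with no numeric axis,
    so readers see the pattern rather than the numbers."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid dividing by zero on a flat trend
    return "".join(
        BARS[round((v - lo) / span * (len(BARS) - 1))] for v in values
    )

sparkline([21, 23, 22, 30, 28, 35])  # one bar per sprint, scaled to the range
```

Because each chart is scaled to its own min and max, two teams’ sparklines show only their own trends and can’t be ranked against each other.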

– velocities can’t be compared between teams
– working hard says little about how agile you are
– managers outside the team (?), the agile manifesto knows no such role
– managers are responsible for creating a working system (?), that’s the responsibility of everyone

In the Sept/Oct 2011 issue of Better Software magazine, Lee Copeland has an article on Goodhart’s Law: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.”

It doesn’t even take explicit subversion of the intent. The simple fact of turning a measurement into a goal is enough to foil our best intentions.

Great points Esther! I also recently had a post on my blog with some good reasons why velocity is a bad metric to use as a goal (http://www.developmentblock.com/2011/08/is-working-software-enough/). Basically, just like you pointed out, it is an easy number to manipulate. Often, the simplest way to manipulate it is to try to push a lot of the “design” work off the team so that they can focus on getting working software complete. However, this often results in suboptimal working software. I followed that post up with another (http://www.developmentblock.com/2011/09/agile-metrics/) which lists a few metrics that make better sense, like customer satisfaction and a newly defined version of focus factor.

Metrics drive the behaviors of teams, and these can be positive or negative. I have seen one of my teams’ velocity trend down after it was made a mandatory item on display dashboards; the entire team was always discussing it instead of making progress or solving problems.

Even without manipulation, it’s fairly common for there to be velocity inflation over time. That makes for awkward conversations when your management team advocate starts talking excitedly about how all the new process improvements have increased productivity by X%. Sure, whatever you say boss.

Every time the team drastically underestimates the complexity of a task, they start estimating higher. They either add safety buffer, or at least change their rounding function to be more conservative.

Every time the team becomes self-aware enough to see how a bad practice led to painful rework later, they estimate higher. “Done” changes for the right reasons. The time per work unit goes up, but momentum improves, and I find that this often more than compensates.

Velocity is a good internal metric as long as all the things that affect it are kept constant. If you change the way you size stories or even change the development tools you use, velocity will change.

Perhaps velocity is just the wrong word. We hear it and immediately think of the speedometer on a car. While a speedometer gives us an instant readout, the velocity metric does not. Velocity is measured over time; it’s an average over at least several sprints.
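That “average over several sprints” reading can be sketched in a few lines (the function name and window size are assumptions for illustration, not anything from the comment):

```python
def rolling_velocity(points_per_sprint, window=3):
    """Average completed points over the last `window` sprints.

    Unlike a speedometer, this reading only exists once several
    sprints have finished, and it smooths sprint-to-sprint noise.
    """
    if len(points_per_sprint) < window:
        return None  # not enough history to say anything yet
    recent = points_per_sprint[-window:]
    return sum(recent) / window

rolling_velocity([18, 22, 20, 24])  # averages the last 3 sprints: (22 + 20 + 24) / 3
```

Returning nothing for a short history makes the point in code: early in an adoption there simply is no meaningful velocity to report yet.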

Agile teams should track velocity but recognize that it is only one of many ways to measure team performance.

I agree with you about the misinterpretation of metrics. In fact, in my current organization we are attempting to adopt agile, and there is a lot of disagreement on the definition of done. Last but not least, metrics should not be used to punish or compare people or teams, but to find the reasons we are not able to meet expectations.

Speed and Direction define Velocity. We think we measure velocity when we’re measuring the illusion of Speed and, all the while, Direction is completely AWOL.

We have wrung ALL of the speed and efficiency we’re going to get from the dev team. No more speed. No more getting faster. If we want better software, it’s time to hammer on the Teflon players in the software equation: The Business. We’d be way ahead if we figured out how stakeholders can provide meaningful direction.

“If you don’t know where you are going, it’s easy to iteratively not get there.” ~David Hussman, 5:40 PM Nov 3rd, 2009 from TweetDeck