Many of the agile projects I have witnessed over the years were in really good shape and churned through story points at quite a satisfactory rate. Yet some of them were looked down upon by top management as unsatisfactory from a business point of view, and a couple of them even got shut down.

To me this seems to stem from the same 'blind spot' that could be one of the reasons why – according to the Standish Group – 41% of agile projects do not achieve the expected result.

A successful project involves much more than just writing software and creating 'potentially shippable products' – so our process considerations should not begin and end with the creation of software. Instead, they need to start and end at the customer and incorporate software creation as an integral part of that process.

From this perspective, measuring and optimizing the development team's velocity can be misleading – sometimes highly misleading. Apart from the simplest way to increase your velocity (just padding the estimates), even a real improvement in this part of the process does not necessarily shorten the time until a customer is able to use any new features.

Which brings us back to David Anderson's remark: you really have to measure the whole value chain – not only inside the sprint, but including all the adjacent areas:

- the time it takes from idea generation to the decision whether the idea is going to make it
- the time it takes to really ship a potentially shippable product
- the extra iterations it takes to 'harden' the product, reduce 'technical debt', or one of the many other ways to account for things that should have been in the sprint in the first place
- etc.

When you start measuring lead times like this – and focus on the flow of single requirements in these measurements – you'll gain a lot more insight into your real process.
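As a minimal sketch of what such a measurement could look like: if you record a timestamp for each milestone a requirement passes (the milestone names and sample dates below are assumptions for illustration, not taken from any real tracker), computing per-stage and total lead times is straightforward.

```python
from datetime import date

# Hypothetical milestone dates for two requirements; in practice these
# would come from your idea backlog, ticketing system and release log.
requirements = [
    {"id": "REQ-1", "idea": date(2024, 1, 3), "decision": date(2024, 1, 20),
     "dev_done": date(2024, 2, 14), "shipped": date(2024, 3, 1)},
    {"id": "REQ-2", "idea": date(2024, 1, 10), "decision": date(2024, 1, 12),
     "dev_done": date(2024, 2, 2), "shipped": date(2024, 3, 1)},
]

def lead_times(req):
    """Days spent in each stage of the whole value chain, not just the sprint."""
    return {
        "idea_to_decision": (req["decision"] - req["idea"]).days,
        "decision_to_dev_done": (req["dev_done"] - req["decision"]).days,
        "dev_done_to_shipped": (req["shipped"] - req["dev_done"]).days,
        "total": (req["shipped"] - req["idea"]).days,
    }

for req in requirements:
    print(req["id"], lead_times(req))
```

Note how the stages before and after development ("idea_to_decision", "dev_done_to_shipped") show up as first-class numbers next to the development time itself – which is exactly the point of measuring the whole chain.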

Example

Tester (customer): “Oh yes, that was the one where we couldn't do #some_important_thing”

Developer (supplier): “That's been handled for ages – you can do #that_important_thing”

Product manager (customer): “No, I still can't do #that_important_thing”

Manager (supplier): “It is possible to do #that_important_thing”

Tester (customer): “I wanted to try it this morning and it is still not possible to do #that_important_thing”

Developer (supplier): “I am sure! I have implemented that. You definitely CAN DO #that_important_thing” (in an aggravated tone)

[one or two more rounds like that, voices getting louder]

Consultant: “Ahem...”

Tester: “What?”

Developer: “What?”

Consultant: “Dear Tester: _In what way_ couldn't you do #that_important_thing?”

Tester: “I don't see any menu entry related to #that_important_thing in my main screen!”

Developer: “Oh – you're trying to do it with your own account! That won't work of course …”

Tester: “There is another account?!? What's the name? Where is it mentioned?”

Developer: “Oops … we might have to work this out a little more …”

And thus both the developer and the tester learned something new about the system and its interaction with the world.

The Problem

The parties are clearly communicating on different levels of abstraction – while the developer was referring to the theoretical capabilities of the system, the tester was talking about the things he actually was able to do with the system at that point in time.
Abstraction differences like this can often take days or weeks to become visible, especially if the parties involved communicate only intermittently and use media like e-mail or a ticketing system for their conversations.

A Solution

Go to the real end user (or as close to the real end user as possible) and watch her using the newly added system capability.