Software is becoming critical to every function within a modern organisation, which is driving up the demand for software and, with it, the number of projects ongoing at any time. As the number of projects scales, it becomes impossible for people to keep track of everything that is going on and to effectively prioritise the work that best aligns with the company strategy. The responsibility for creating situational awareness (tracking and communicating the state of all in-flight initiatives) goes by many names, but a generic one is governance.

What is the purpose of Governance?

Good governance enables a company to prioritise scarce resources to ensure alignment with company strategy. It is clearly a necessary and positive function but, like most parts of software delivery, the traditional ways of working have accumulated process debt that is now holding back performance. The best way to tackle process debt is to return to the underlying problems the initial process was designed to solve and to determine if there is a better way to solve them today.

In terms of underlying principles, governance is really two simple questions:

Where are we spending our money?

Is it working?

Where are we spending our money?

Each department, or sub-function, within a company will have its own targets and priorities, so they will compete for scarce delivery capacity. Often the loudest voice, or the most connected person, will win out. But this is a terrible way to allocate resources, as it will rarely align with what is best for the company as a whole.

This is where a Project Management Office (PMO) is typically introduced to put more structure on the management of projects. Because the PMO needs to report on "where we are spending our money" one of the first steps is to instigate a project initiation process. If nothing can start without going through the PMO then they can have a clear view of what is happening across the organisation.

But the list of projects alone is not sufficient - we need to know their scale. Are we spending 90% of our money on one project and the remaining 10% across a range of other initiatives?

When an idea is proposed at the project initiation stage it is hard to determine its scale. This leads to the introduction of a stage gate process where funding is released incrementally as the scale of the project becomes clearer. The first gate - High Level Design - provides enough funding to do a "ball-park" estimate of the scale. With this insight, senior management can determine whether there is an ROI in the initiative. If it passes the first gate, additional funding is allocated to investigate the problem space further and create a more accurate estimate of the project's scale. If this Detailed Design phase estimate is approved, the project is funded for delivery.

Traditional Stage Gates

This process achieves its aim of providing visibility to senior management as well as enabling the effective prioritisation of scarce resources. But there are a few problems:

It incentivises larger projects - you need to be able to offset the overhead involved in all of these process steps.

It encourages "kitchen sink" requirements. Even the projects that would be big enough to make it through the process become even bigger because people have learned that you must get everything included in the project or else you're not getting it.

It assumes you can know up-front what you need to build - unfortunately history shows that this isn't possible.

We know that software is more successful when it is developed in small, iterative batches because the teams can react as they learn more about the real requirements as they go.

Is it Working?

The best way to determine this is through the business value delivered. Unfortunately, value delivered is a lagging indicator, which limits people's ability to react and proactively resolve issues occurring on projects.

The traditional way of tracking projects follows on from the stage-gate process: delivery is tracked against the time and cost estimates produced in the Detailed Design phase. Again there are a few problems with this approach, which can be summarised as "What gets measured, gets managed":

It only focuses on two of the four key risks in software development - feasibility (the cost to deliver) and, by enabling management to cull projects, business viability (does it align with strategy). There is no continuous monitoring of desirability (will people want it) or usability (can people use it). This matters because the biggest risk in software development is building something that customers don't want.

Estimates are both forecasts and targets which means that people will game the system. If you know you are going to be tracked on your estimates - you're going to go high. Since estimates are gamed, teams tend to perform well against budgets. What is really needed, though, is a way of determining the efficiency of the team.

Projects are measured on time and not quality. Therefore, rushing to hit a date is preferred over maintaining high quality and a low-technical-debt code base. This impacts future development, but that isn't the metric being measured. Time and cost tracking incentivises a short-term focus instead of the sustainability of long-term development.

The time and cost approach to tracking is based on the assumption that software development is a simple, independent process. In reality it is a complex, continuous process that is directly correlated with the overall performance of the organisation. For such an integral function, companies can't afford a tracking process that works against the efficiency of delivery.

How do we improve governance?

We'll start with tracking. What metrics can we track that ensure we are encouraging efficiency? Fortunately, a lot of research has been done in this space by DevOps Research and Assessment (DORA). There are four key metrics for software delivery that have a statistically significant correlation with strong business performance:

Lead Time - the average elapsed days from idea until the feature is in production

Release Frequency - the average number of days between each release

Change Failure Rate - the % of releases that require re-work

Mean Time to Recovery (MTTR) - the average time taken to recover after a failed change
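To make these definitions concrete, here is a minimal sketch of how the four metrics could be computed from release records. The data shape (a list of releases with an idea date, a release date, a failure flag and a recovery time) is an assumption for illustration, not a standard DORA data model:

```python
from datetime import datetime
from statistics import mean

# Hypothetical release records - this shape is assumed for illustration.
releases = [
    {"idea": "2024-01-02", "released": "2024-01-20", "failed": False, "recovery_hours": 0},
    {"idea": "2024-01-10", "released": "2024-01-27", "failed": True,  "recovery_hours": 4},
    {"idea": "2024-01-18", "released": "2024-02-03", "failed": False, "recovery_hours": 0},
]

def parse(d):
    return datetime.strptime(d, "%Y-%m-%d")

# Lead Time: average elapsed days from idea until the feature is in production.
lead_time = mean((parse(r["released"]) - parse(r["idea"])).days for r in releases)

# Release Frequency: average number of days between consecutive releases.
dates = sorted(parse(r["released"]) for r in releases)
release_frequency = mean((b - a).days for a, b in zip(dates, dates[1:]))

# Change Failure Rate: percentage of releases that required re-work.
change_failure_rate = 100 * sum(r["failed"] for r in releases) / len(releases)

# MTTR: average recovery time, measured across failed changes only.
failed = [r for r in releases if r["failed"]]
mttr_hours = mean(r["recovery_hours"] for r in failed) if failed else 0.0
```

In practice the idea and release dates would come from your ticketing and deployment systems; the point is simply that all four metrics fall out of data most delivery pipelines already record.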

Desirability

By focusing on release frequency you need to break down large deliverables into smaller chunks. As you release more frequently you can quickly assess the desirability of your product. If you find out that customers don't want it you have time to make adjustments because you will have invested much less than on a traditional project. This brings us closer to the real objective of governance: tracking the value of what we are delivering.

Gaming

With time and cost estimates it is easy to see how to game the system - increase your estimates and give yourself more slack. How do you game lead time? You can make stories smaller so that each story is delivered quicker. But this is a positive thing, because in order to make stories smaller, and still releasable, you need to reduce the scope to focus on the core customer need. If you game release frequency by putting less in each release, this again is positive. Releasing can be very difficult in many organisations; automating testing and deployment is really the only way to release regularly while maintaining a low change failure rate. Once those elements are in place it is easy to increase the scope of each release. So we have metrics that can drive business performance, and gaming them only improves it.

Sustainability

By focusing on averages over time sustainability is built into these metrics.

Together these metrics enforce continuous improvement of the efficiency of software delivery over time.

Tracking Spend

An intentional side effect of these metrics is smaller releases, but this conflicts with the traditional stage gate process, which promotes larger projects and releases. There is a natural floor to the size of a traditional project, given the overheads of business cases, funding, team formation and alignment. The answer is to shift from temporary project teams towards long-lived product teams that continuously deliver against their allocated business objectives. Once you have these teams in place it is easy to answer the question "Where are we spending our money?" because the teams are fixed in size (a static cost) and aligned to key business metrics. But these teams shouldn't be thought of as permanent. If the metrics start trending poorly, or the business metric the team is aligned to no longer matches the business strategy, the management team can either disband the team or shift its focus to higher-priority metrics.

How you design the product teams is critical, and requires a lot of up-front work, but once they are in place the governance overhead reduces dramatically. If you have the right alignment between a product team and a business goal then, as the team matures and begins delivering very frequently, you can start using value as the metric to hold the team accountable to, because it is no longer really a lagging indicator. Setting up and aligning product teams is a complex challenge in itself, so I'll come back to it in a future post.

How do you transition?

Making the transition from tracking time and cost to tracking the four continuous metrics is not easy, because multiple simultaneous changes need to be made across different parts of the company: changes in management and organisational structure, changes in funding, and changes in how teams work on the ground.

Helping teams to make the transition from projects to products is the mission of UXDX so if you want to learn more, you can subscribe to our newsletter and be the first to get access to the coming articles or you can check out the agenda for the conference at https://uxdx.com.