Prioritization and Value Maximization

We all know the story about the emperor’s new clothes. I’ve been thinking about prioritization and scheduling, and as far as I know, no one is promoting that we maximize value – they (and we) have been promoting that we do the most valuable stuff first. Doing the most valuable things first does not result in getting value the fastest. In this article, we show why not.

Genesis

About a month ago, I read an article by Kelly Waters on how to prioritize intuitively. He presents a magic square diagram, showing both the "how valuable is it" and the "how hard is it to do" axes. I'm oversimplifying – read his article for more details – he incorporates elements of risk, complexity, etc. I really liked that he was addressing the "missing element" of how much work is involved. However, his diagrammatic approach, while presenting this information very well, does not really yield insights into what to do first. Kelly and I had a great discussion over the next couple of weeks, exploring the interplay of work and value in prioritization, trying to find a way to encourage value-maximizing decisions.

Prioritize By Value

We have repeatedly talked about prioritizing the most valuable requirements first. And in the last of those links, we hinted at, but didn't grasp, the real goal:

We will only consider those steps where the profitability of change exceeds our hurdle rate for investment.

We’ve also talked in the past about using use cases as the basis for scheduling, as each use case represents realizable value. For the rest of this article, we’ll talk in the context of scheduling use cases across releases.

Consider a very simple example – you have five use cases, with values of 10, 9, 9, 8, and 7 respectively (the units don’t matter). If you sort those use cases in order by value, from left to right, they would look like the following:

The size of each box shows the relative value of having the use case implemented.

Based on our previous guidance (and everyone else’s), you would implement them in order, from left to right. Makes sense. Do the most valuable thing first. Do the next most valuable thing next. Repeat until the value is not high enough to continue.

What About The Amount of Work?

OK, the amount of work required should play a role too. In our time-boxing article, we describe each release as having a given capacity, which you can visualize in terms of cost (resource) and time (duration of applying resources):

We fill up that capacity with use cases, based upon how much work is involved.

We can estimate the work involved with each use case by any of a number of methods – but the earliest estimates can be developed using use case points.

Consider the following “work” measurements, identified for each of our use cases from above:

We have the same sequence of use case implementation (based on value), but now we can visually see that there are different amounts of work associated with each use case. The area of each box represents the relative level of effort required to implement the use case.

Prioritize By Value Results

The best way to explain the flaw with the classical “prioritize by value” approach is to show what happens after the first release. Consider that you can accomplish 30 units of work in the first release.

We can schedule the first two use cases for this release. The size of the time-box above represents the amount of work that can be accomplished. With the first two use cases scheduled, the time-box will look like the following:

We have completely used up the available capacity of the team (Work = W = 30 = 10 + 20) by delivering the two most valuable use cases. We have delivered 19 units of value (Value = V = 10 + 9 = 19).
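The release math above can be sketched in a few lines of Python. The values (10, 9, 9, 8, 7) come from the article; only the first two work estimates (10 and 20) appear in the text, so the remaining work figures are hypothetical placeholders.

```python
# Use cases as (name, value, work). Values are from the article; only the
# first two work estimates (10 and 20) are given, the rest are hypothetical.
use_cases = [("A", 10, 10), ("B", 9, 20), ("C", 9, 25), ("D", 8, 8), ("E", 7, 7)]

# "Most valuable first": sort by value and fill the release
# until the next use case no longer fits.
by_value = sorted(use_cases, key=lambda c: c[1], reverse=True)

capacity, release, delivered = 30, [], 0
for name, value, work in by_value:
    if work > capacity:
        break
    release.append(name)
    capacity -= work
    delivered += value

print(release, delivered)  # ['A', 'B'] 19
```

With 30 units of capacity, the two most valuable use cases consume it all and deliver 19 units of value, matching the totals above.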

Value Maximization

When we consider both the value (V) and the cost (in terms of work, W) of each use case, we see that some use cases generate more value per unit of work than other use cases. If we consider the ratios of value to work (V/W), and sort the use cases based on this approach, we would see the following:

And with the previous specific value and work values:

If we were to organize our delivery of use cases based on this ratio, we would be saying "prioritize the most effective use cases in terms of value per unit of work." This may seem counterintuitive, but it makes sense – get the most bang for the buck earlier, and you will get more value faster.

Consider what our first release would look like:

We would complete three use cases (using 25 units of available work), and we would deliver 25 units of value. We would also be able to start working on one of the use cases that would be delivered in the next release.
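The same calculation, re-sorted by the value-to-work ratio, reproduces these totals. As before, only the first two work estimates (10 and 20) appear in the article; the remaining work figures are hypothetical, chosen to be consistent with the totals the article reports.

```python
# Use cases as (name, value, work). Values are from the article; only the
# first two work estimates (10 and 20) are given, the rest are hypothetical.
use_cases = [("A", 10, 10), ("B", 9, 20), ("C", 9, 25), ("D", 8, 8), ("E", 7, 7)]

# Sort by value-to-work ratio (V/W), highest first.
by_ratio = sorted(use_cases, key=lambda c: c[1] / c[2], reverse=True)

capacity, scheduled, value = 30, [], 0
for name, v, w in by_ratio:
    if w > capacity:
        break  # stop when the next use case no longer fits
    scheduled.append(name)
    capacity -= w
    value += v

print(scheduled, value, capacity)  # ['A', 'D', 'E'] 25 5
```

Three use cases fit (25 units of work), delivering 25 units of value, with 5 units of capacity left over to start on the next use case – versus 19 units of value from the value-first ordering.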

True Maximization

To find the mathematically proven maximal value, we have to do a bunch more work. This prioritization exercise is actually an instance of the knapsack problem, an NP-hard computer science puzzle. To make a long story short, we can't use a simple heuristic and guarantee that it will be optimal in all cases. But we can do better than "most valuable first."

If we use the scheduling rule as follows:

Schedule the use cases in order based on the highest value-to-work ratio, skipping use cases that are “too big” for the current release.

Then we will get value out of the system as fast as possible. There are a couple problems with this approach:

It does not take into account that you can apply work from one release to a use case that is scheduled in a future release. Intuitively, any “remaining time” after scheduling complete use cases should be spent on the next highest-ratio use case. I haven’t proven that mathematically, but it makes sense.

Use cases, their underlying requirements, and the implementation tasks to support those requirements are not actually independent. You may need to introduce one use case before another – the second use case may not be possible without the first. Implementing a requirement that is shared across use cases will reduce the "remaining work" for those other use cases – forcing a recalculation of the ratios. Implementation tasks are often dependent upon one another. The discrete tasks to support a valuable use case may require implementation work that is also leveraged in lower-value use cases. Further, some implementation tasks must be performed sequentially. You can't optimize a query before defining a database schema, for example.

The second problem actually applies to any prioritization approach that incorporates value. The nature of software development introduces constraints (X must be completed before Y can be started). Those constraints narrow the possible scheduling choices, and they make it impractical to determine the "optimal" solution – at least with commercially available tools (excluding expert systems, which can be used to solve this type of problem).
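Ignoring dependencies, the "highest ratio first, skip what doesn't fit" rule can be sketched as a simple greedy scheduler. The work figures beyond the 10 and 20 given in the article are hypothetical.

```python
def schedule_releases(use_cases, capacity):
    """Greedy heuristic: each release takes the highest value-to-work-ratio
    use cases that still fit, skipping any that are too big for the space
    remaining. Dependencies between use cases are NOT modeled here."""
    remaining = sorted(use_cases, key=lambda c: c[1] / c[2], reverse=True)
    releases = []
    while remaining:
        space, this_release = capacity, []
        for case in list(remaining):
            if case[2] <= space:       # skip use cases too big for this release
                this_release.append(case[0])
                space -= case[2]
                remaining.remove(case)
        if not this_release:
            raise ValueError("a use case is larger than a whole release")
        releases.append(this_release)
    return releases

# (name, value, work) – values from the article, most work figures hypothetical
cases = [("A", 10, 10), ("B", 9, 20), ("C", 9, 25), ("D", 8, 8), ("E", 7, 7)]
print(schedule_releases(cases, 30))  # [['A', 'D', 'E'], ['B'], ['C']]
```

Note that, per the first problem above, this sketch also ignores the leftover capacity at the end of each release, which could be spent starting the next highest-ratio use case.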

Conclusion

Sequencing use cases based solely on value does not maximize the delivery of value over time.

Sequencing those use cases based upon the ratio of value to effort will increase the rate at which value is delivered to customers.

It is impractical (and possibly marginally valuable) to determine the optimal sequence for scheduling use cases to maximize value.

You should use the "highest ratio first" approach, and when a use case can't be delivered yet because of interdependencies, skip it. Also – apply judgment to sanity-check whether you are doing something that seems odd, like delaying a high-ratio, high-value use case. Explore with the development team whether there are ways to adjust the dependencies to allow for a "more valuable" delivery sequence.

13 thoughts on “Prioritization and Value Maximization”

This is a really great article Scott! You've really taken my simplistic way of visualising these two dimensions and turned it into something that can be judged on a more quantitative basis. I really like the idea of a value-to-effort ratio driving the order of priorities.

Thanks Kelly, and thanks for starting the conversation and contributing to the exploration. And I would not describe your visualization as simplistic at all. Your introduction of work into the equation is what got this whole thing started. Without that, and our discussions, I would not have been able to write this.

Thanks again, and thanks for helping everyone here – both with your blog and with your comments at Tyner Blain.

This is an approach that I have used for years, although your analysis and way of thinking about it is a little deeper. Basically once you have ranked the value priority of features you work with your team to determine the amount of work required. High value features that are easy to implement move to the top of the list.

There are a few potential pitfalls though:

– Over many releases you may never get to the bigger items that can truly differentiate your product

– The synergy between use cases (and individual features) isn't necessarily taken into account. Oftentimes if you implement specific combinations, the whole is greater than the sum of the parts.

– The competitive environment may force you to implement something that is lower-value but higher-effort but that customers consider a must have in order to purchase your product.

– The end result may provide good value to the customer, but may not be above the bar for you to be able to get any press and pr around the new version of the product. (i.e. the press may just view it as a simple .x rev and not cover it, so the market and your customers may never find out about the new features.)

Your insights are great additions to the discussion! They further emphasize that you can't use a simple heuristic for prioritization. The notions of synergy, attracting buyer personas, and getting PR are great ones that you can't reasonably roll into a working system.

Net net, prioritization will always be something that is guided by (articulated) value, but ultimately finalized with expertise.

Scott,
It would be nice to understand how you determine, in a product context (not a one-off consulting project where a single customer determines value), how to establish value for a use case. Your analysis presumes some level of value for each use case. It is easier to determine level of effort than level of value, in my experience. I work in an agile environment, where story cards are not fully fleshed-out use cases either, so the value is even harder to determine at the story card level. I would like some thoughts on these topics.

Thanks for reading and commenting! I’ll tackle your first question in this comment, and your second question in the next comment.

When I approach valuation in a product context, I look at three general areas of perceived value. In no particular order:

Perceived Value to Buyer Personas

Realizable Value to User Personas

Strategic Value to “us” (the software creators)

Buyer personas are important because they influence short-term sales. For consumer products, the buyer is often the user. However, I also pour analysts, reviewers, etc into the buyer persona bucket. This group, in my experience, tends to care more about speeds and feeds in red ocean markets – and tends to be sold on “vision” in blue ocean markets. Admittedly, my background is overweighted in enterprise software – would love to know if folks with more consumer software experience think about it differently.

User personas are important for two main reasons. First, they are the people who effect change, or realize value, by using the software. Second, they are the word-of-mouth marketing engine that can drive product sales, increased adoption and usage (thereby magnifying ROI), and provide feedback and ideas that help make your product even better.

Finally, as a company with a distinct competence, and a market strategy, there will be goals for defining brand, presence, and penetration. Some product features might be strategically relevant while not representing immediate value for the current user base, and while not encouraging purchases today by the current buyers. But they could be important to future positioning.

In that context, I apply strategic “value” as a benevolent dictator – questioning, rethinking, and enforcing a company vision and therefore a product vision into the prioritization mix. I also treat this separately from the relative valuation of use cases, but wanted to get the idea into the answer, without clouding the discussion.

This leaves individual use cases to be valued. A use case has an inherent value to the actors who perform it. This is the same “per user” evaluation that is done for “one off consulting projects.” The only trick is to establish a cross-market valuation. Pragmatic Marketing puts it really well in their training – you aren’t trying to understand your customer, you’re trying to understand your market. I’m trying to capture “use case X has value Y to customers Z.” In this case, customers (Z) are all of the customers that meet a particular profile. Sort of a blend of market segmentation and “persona development for companies.”

Each customer in group Z (or prospect, if you prefer that term) will have their own frequency of use information for a given use case. We can combine this data to say that use case (X) has an average value of (Y) for customers in bucket (Z). And by knowing how big bucket (Z) is, we have a market-based value for use case (X).

Within each class of customers, we estimate the values of the particular use cases. If we were targeting only a single group of customers, this gives us the valuation. The answer is more interesting when there are multiple market segments (groups Z). You can proportionally adjust the values (Y) for the sizes of the groups (Z). And then add them all up and do the most valuable features.
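As a sketch of the arithmetic described above – weighting the per-customer value (Y) by the size of each group (Z) and summing across segments – with all numbers hypothetical:

```python
# All numbers here are hypothetical, for illustration only.
# Each segment (a group Z) has a size and an average per-customer
# value (Y) for a given use case (X).
segments = [
    ("Z1", 400, 5.0),  # 400 customers who value use case X at 5.0 each
    ("Z2", 150, 9.0),  # 150 customers who value it at 9.0 each
]

# Market-based value of use case X: per-customer value weighted by
# group size, summed across the segments being targeted.
market_value = sum(size * avg_value for _, size, avg_value in segments)
print(market_value)  # 400 * 5.0 + 150 * 9.0 = 3350.0
```

Repeating this for each use case gives the values (V) that feed the ratio-based prioritization.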

I don’t, however, think that is the best way. If there isn’t a single group of customers (Z) that dominates your market, then you end up trying to be all things to all people. I believe you are much better off targeting a single market segment, and optimizing on it. At some point, you have to “make the call” and switch from the next most valuable use case for that market segment to the most valuable use case for the next market segment – which is now your new focus. Again with the dictator thing.

As part of this strategic (dictator) decision, you have to look at how you can address buyer personas. While not directly valuable (as in, not causing the realization of value for a single company), they are indirectly valuable – as they make it more likely that you will sell your product. I guess you could apply some sort of “probability of impacting a sale” numbers to those buyer focused features. I haven’t seen anyone try that, and I suspect you have to do something less quantitative and make the call.

Anyway, that's how I would approach it, as described from the bottom up. From the top down, I would say:
1. Segment the market into similar customers.
2. Prioritize those market segments, and attack them in order.
3. For each market segment, prioritize the use cases – incorporating a notion of buyer personas into the prioritization.
4. Plan the sequence in which you will expand to other market segments.
5. Execute.

What I like about agile approaches is that they aren't "don't document", they are "only document enough." And I would say that if story cards aren't fleshed out enough to be able to understand (or at least approximate) their value, then they aren't "enough," and you might want to do more upfront work to do some valuation. The more you understand about the value of each story, the better your ability to prioritize them.

At a minimum, get the broad brush understanding of your space, and get the next level of detail for those few stories you suspect are the most valuable.

For exactly this reason (the challenge of valuation), I believe the better approach (at least on projects I’ve worked on) is to use use cases – even if informal ones, combined with persona development to set the stage for agile delivery.

I've been a PM in both consumer and enterprise environments, and they differ greatly. If you aren't careful with managing the expectations of product mgmt, both product types can easily become very jagged no matter how you prioritize the features.

That being said, I’ve always felt there are two key ways to plan a release:

1 – Time to features
2 – Features to time

You construct a set of user stories, have dev estimate (and in some cases, add padding afterward), and say, "OK, based on these numbers, and the knowledge we want to release in 1 week, we can do features 1-5."

Now, the key thing missing here is the market data. In both enterprise and consumer products it's critical. Again though, don't let the few that shout the loudest get the most. Market data really is about the wisdom of crowds, and leveraging the data you glean from your research efforts to solve a market's pain.

Not enough companies do this properly, or often enough, in my opinion. Listening to your market does not mean taking a phone call from a customer that's dumped a ton of money into your company and implementing whatever they want. But, I digress.

I'd take the ordering of features to time based on the dev estimate and ensure it matched up to the data I had received. For example, dev might tell me they could do features 1 & 2 for the release, but would have to drop feature 3 to hit the release cycle.

If I wasn’t getting a sense from the market that feature 3 was super important, who cares. Release fewer features that are 100% done as opposed to a bunch of half-assed ones that only “kinda” work. Even in an iterative approach this is critical – and becomes silly to even think of ignoring, seeing as you can just fix / tweak / enhance existing foundations week after week, day after day.

So, after all my rambling, and looking at this from a self-serve / consumer market mindset, my thoughts would be:

– Fit the features to your iterative release cycles based on each user story / requirement estimate from dev

– Ensure what you are fitting to the time is actually something the market values and will help solve a problem(s)

– Release features that are done, even if it means dropping them during the iterative cycle when you realize it’s just not going to happen for the release

It’s still easy to stretch yourself too thin. You can always make it up in a release next week though =)

To give Tom Gilb his due, his dynamic prioritization technique using an impact estimation table has been guiding Evolutionary deliveries based on maximal stakeholder value per step (timebox) for years going on decades. Your pictures are better, though!

See Competitive Engineering (Gilb 2005) ISBN 0-7506-6507-6, Chapter 9 for details. For example, on page 265 you can see that while oranges are a “better” solution than apples (470 vs. 260), the performance to cost ratio is far better for apples (5.2 vs. 1.57). Remember this next time someone tells you you can’t compare apples and oranges!
