Product Management 2.0: A Growth Story

Are product roadmaps still relevant? How should product managers prioritize features or improvements? Can product management learn anything from the growth hacking movement?

After 4 years of doing growth hacking, I recently made the switch to full-time product management by joining both the Firefox Sync and Accounts teams.

As a reminder, growth is a combination of product and marketing, so I thought “hey, this is going to be a breeze.”

The Early Struggle

It turns out that transitioning from growth to product hasn’t been as easy as I expected. The first thing I struggled with as a product manager was the idea of product roadmaps. I felt obligated to create one since… it’s what product managers do. Right? But committing to 3 months of features felt like a huge constraint. How can you be agile with that? What if new opportunities come up? What if we need or want to improve something we just shipped? Does that push our whole roadmap back? Will we seem unreliable? How do we communicate this? These are merely a few of the questions that ran through my head.

To cement our roadmap even more, both my teams committed to OKRs (Objectives and Key Results) in Q3. Our KRs were specific features or patches to ship. We had committed and needed to deliver.

KPIs

As a product manager, I’ve realized that it’s really easy to get caught up in trying to ship more features. However, it’s important to remember that each feature or patch you ship is meant to move a KPI (key performance indicator). Here are some examples:

Improving your sign-up flow should lead to more new active users.

An onboarding tour should close the gap between the solutions your product provides and what a user currently knows how to do. Improved onboarding is measured through higher engagement and retention rates, which will result in more active users.

Fixing bugs and crashes can move KPIs in similar ways but with variable levels of impact.

Shipping a feature that users have been requesting for months should also be easy to tie back to a KPI.

So, how do we efficiently impact KPIs if we commit to a roadmap? In my growth experience, the first version of anything rarely has the desired impact. If everything always worked out as expected, I would be an oracle. Unfortunately, it’s only through rapid iterations, testing and measuring that we can learn and start to move the needle. The growth hacking movement has known this process for years since the only assumption they ever make is that most of their hypotheses will fail.

Let’s set aside OKRs, KPIs and other business acronyms and talk about the approach to making measurable impact.

The Growth Process

Scientific Method

When I worked at ViralNinjas and SociableLabs, my team and I developed a growth process to prioritize by impact and document our learnings (which I thought was revolutionary). It turns out Brian Balfour arrived at a similar process of his own and describes it here (worth watching!) better than any of the talks I’ve done on the topic. I will only attempt to summarize it here.

This process isn’t too foreign to Mozilla. I implemented it within two weeks of joining Firefox’s growth team, and its success has resulted in a gradual adoption by some marketing teams and, in part, by the Android and Firefox Accounts teams.

So what is this process anyways?

The process approaches improvements or new features using the scientific method. Everything is an experiment with a control (often called an A/B test).

Scientific Method in Growth and Product Management

I create a document with the outline of all of this before starting a test so that my objective is clear. The Firefox growth team calls this document a “recipe” because it should be detailed enough that all the ingredients are there for someone to reproduce the test. We want recipes to remove any doubts around results when someone looks back at them 6 months later. They’re great for accountability too. Best of all, past recipes are one of the best sources for new strong hypotheses.
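A recipe records a test’s variations and results, but how you decide whether a variant actually beat the control is a statistics question the article leaves open. As one minimal, hypothetical illustration (the function name and the numbers are mine, not the Firefox team’s), a two-proportion z-test comparing a control and a variant:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic comparing conversion rates of control (A) and variant (B)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical recipe result: control converted 100/1000, variant 130/1000.
z = two_proportion_z(100, 1000, 130, 1000)
significant = abs(z) > 1.96  # roughly the 95% confidence threshold
```

If the z-statistic clears the threshold, the result line of the recipe can say so with some authority six months later; if not, that’s a learning worth logging too.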

Prioritization

Before creating your recipes, how do you know which idea or problem you want to solve will have the biggest impact? Each growth idea someone comes up with needs to be added to a backlog that is prioritized with a score. This will create a product pipeline that will live and breathe and continuously change. Scores are determined by:

Traffic (1 = bad, 5 = good): How many users will see it? Is the change hidden in the settings or on your homepage?

Impact (1 = bad, 5 = good): By how much do you think you can improve that feature or that flow? If your sign-up rate is 10%, getting it to 20% is a 100% improvement. If 80% of users already complete a step in a funnel, getting them to 90% is only a 12.5% improvement.

Ease (1 = bad, 5 = good): Can this be tested within hours, a day, a week, or a month?

Confidence (multiplier): This factor has been less popular in the past, but I believe it to be one of the more important ones. Where did you get your idea? Did you formulate your hypothesis from reliable observations, or did you just think it up?
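The arithmetic behind the Impact score is worth making explicit: what matters is the relative improvement, and the closer a step already is to 100%, the less headroom you have. A one-line helper makes the point:

```python
def relative_improvement(old_rate, new_rate):
    """Relative (not absolute) improvement of a conversion rate."""
    return (new_rate - old_rate) / old_rate

relative_improvement(0.10, 0.20)  # → 1.0, i.e. a 100% improvement
relative_improvement(0.80, 0.90)  # ≈ 0.125, i.e. only a 12.5% improvement
```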

Merritt Aho reminded me at the 2015 Opticon of the importance of a solid hypothesis (confidence). He managed to quantify something I knew to be true from experience. The likelihood of your test winning depends on the origin of your hypothesis. He categorized product and test ideas into the following sources:

It’s best practice: You have a 30% chance of improving a KPI with your test/feature.

Our competitor does it: You have a 20% chance of winning.

Our boss says we should do this: It has a 15% chance of succeeding.

We saw XYZ in a past test, in our metrics, or via user testing: In this case, your hypothesis has a great chance of being true, with 50/50 odds of being confirmed.

Merritt Aho at Opticon 2015

Let’s be honest: A/B testing isn’t an excuse to fail. We’re paid to win! Back your feature ideas with solid data and observations to set yourself up for success as often as possible. This is why the “Confidence” factor is so important to me.
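Putting the four factors together, a scored backlog sorts itself into a pipeline. The article doesn’t specify exactly how the factors combine, so the formula below (averaging the three 1–5 scores and applying confidence as a multiplier, with confidence values loosely echoing the win rates above) is one plausible sketch, not the actual spreadsheet any of these teams used:

```python
from dataclasses import dataclass

# Hypothetical confidence multipliers, loosely based on the win rates above.
CONFIDENCE = {
    "best_practice": 0.30,
    "competitor_does_it": 0.20,
    "boss_said_so": 0.15,
    "observed_in_data": 0.50,
}

@dataclass
class GrowthIdea:
    name: str
    traffic: int   # 1 (bad) to 5 (good): how many users will see it?
    impact: int    # 1 to 5: how much headroom does this flow have?
    ease: int      # 1 to 5: hours vs. months to build and test
    source: str    # where the hypothesis came from (a key of CONFIDENCE)

    @property
    def score(self) -> float:
        # One possible combination: average the 1-5 factors, then
        # apply confidence as the multiplier described above.
        return (self.traffic + self.impact + self.ease) / 3 * CONFIDENCE[self.source]

backlog = [
    GrowthIdea("Simplify sign-up flow", 5, 4, 3, "observed_in_data"),
    GrowthIdea("Onboarding tour", 4, 3, 2, "best_practice"),
    GrowthIdea("Hidden settings tweak", 1, 2, 5, "boss_said_so"),
]

# The pipeline: work on the highest-scoring idea first.
pipeline = sorted(backlog, key=lambda idea: idea.score, reverse=True)
```

Note how the data-backed idea wins even against an easier, equally visible one; that is the confidence multiplier doing its job.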

The above process has now become so widely adopted in the industry that Sean Ellis’ GrowthHacker.com recently released Projects to help marketers adopt it.

If you’re just starting (e.g. startup or new product), you often don’t have the luxury of lots of metrics. This highlights the importance of prototyping early on. Prototypes allow you to collect data as early as possible through metrics and user testing.

OK, time to get back to those acronyms. Let’s tie those OKRs back into the mix now that we’ve outlined the process that leads to the biggest impact on our KPIs.

Combining Product Management & Growth

If you’ve made it to this point, the juicy part begins. You’ve caught up to where I am with my own product teams today.

Here are the rough steps I’m taking to maximize product impact (move my KPIs) but to also commit to quarterly OKRs:

Define and measure your company and product KPIs. Without proper data and defined success metrics, you’ll continue to ship features tied to a roadmap without any consideration for impact.

Make the key results of your OKRs… key results. You can only do this once you’ve defined your KPIs and can measure them. This means:
- Don’t focus on deliverables or features; focus on the impact you can have on your KPIs (e.g. increase sign-up success rate by 10%).
- Features are merely tactics to achieve your KRs.
This point could be an entire article of its own (oh, there it is!). My Engineering Manager, Ryan Kelly, shared it with me after discussions on trying to make our priorities more data focused. It probably contributed to setting everything in motion.

Prioritize your features with a score, the same way you would prioritize ideas in a growth team.

Drop the roadmap, create a pipeline of features (i.e. tactics) using the scores you assigned to every feature. Work on the features that will have the biggest impact to achieve your key results and ultimately your objectives.

Create your recipes (example template). Think of these as a new format for product requirement documents (PRDs). With a pipeline, you will almost always know what the next 1–2 features/tactics will be. Start building your test plan (recipe) using the scientific method described earlier. (Reminder: observations, hypothesis, success metrics, variations, results, conclusions, next steps.) This will force you to become data driven.

Approach every feature as you would an experiment, with an A/B test. Prototype, measure, and iterate. If you prefer, call it a staged roll-out.

After each test, log your results and define next steps in your recipe.

Using your results, review your feature pipeline. Determine whether you can have more impact by iterating on the same feature or whether you have a better chance with the next tactic in line. In my experience, one iteration usually isn’t enough to have any real impact, but too many end up being a waste of time.
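The steps above lean on the recipe as the unit of record. Here is a sketch of what that experiment-shaped PRD might hold, with field names taken from the reminder list above; the class itself is illustrative, not an actual Mozilla template:

```python
from dataclasses import dataclass

@dataclass
class Recipe:
    """An experiment-shaped PRD: detailed enough for someone to reproduce the test."""
    feature: str
    observations: list[str]     # data or user-testing findings behind the idea
    hypothesis: str
    success_metrics: list[str]  # the KPIs this test is meant to move
    variations: list[str]       # control plus treatments
    results: str = ""           # filled in after the test runs
    conclusions: str = ""
    next_steps: str = ""        # iterate again, or move to the next tactic?

# Hypothetical example, written before the test starts:
recipe = Recipe(
    feature="Simplified sign-up flow",
    observations=["Funnel data shows a 40% drop-off on the password step"],
    hypothesis="Fewer form fields will raise sign-up completion by 10%",
    success_metrics=["sign-up success rate"],
    variations=["control (current form)", "variant (short form)"],
)
recipe.results = "Variant converted at 13% vs. 10% control"  # logged afterwards
```

Because the empty fields are filled in as the test runs, the same object doubles as the log you review when deciding whether to iterate or move on.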

Let’s Wrap This Up

Boom! The traditional product roadmap is dead. Long live the feature pipeline. This will allow us to focus on improving our product KPIs. And to do so, we’ve moved from OKRs focused on shipping features to OKRs that care about real results.

I’m in the middle of rolling this out with the Firefox Accounts team, so I will try to write a blog post about the problems we encounter as we move forward. I don’t doubt that we’ll iterate on our process as we do on our features.

What Are The Limits of The Feature Pipeline?

There are still things I struggle to resolve. For example, users and companies like to be able to see a product roadmap. It gives everyone something to look forward to. A pipeline makes that a lot harder. I am left communicating my objectives rather than my features, which have been reduced to tactics. But perhaps users are getting used to that. Most top mobile app makers have reached a point where they no longer even write release notes with each update. Has this become the new norm? Are objectives enough product vision to please managers? What are your thoughts?

Another problem that some teams might face with a process like this is related to cycles. If you work on long release cycles (4–6 weeks in a nightly build + 4–6 weeks in an Alpha build + 4–6 weeks in Beta), your quarter will be over before you even know if you managed to achieve your key results and there’s no way you will manage to iterate quickly. If you want to make any changes after your feature is released and you’ve captured relevant data, you will need to wait another 3 months. Beyond the delays around iterating, I can see how one could get frustrated with KPIs and OKRs focused on results since it becomes so hard to move them. You’d nearly have to nail things perfectly on the first shot, every time.

The life, work, and tactics of entrepreneurs around the world. Welcoming submissions on technology trends, product design, growth strategies, and venture investing. Learn more about how you can get involved at startupgrind.com.
