Dream Big, Go Small, and the Path to a Minimum Lovable Product

For the past year, my team and I have been building a brand new social analytics solution within Buffer called Buffer Analyze. We’ve done our best to distill research, data, and intuition into a lean, lovable solution, and we’ve been fortunate to find early signs of product/market fit.

There is so much we want to do with Analyze because there are so many things we’re convinced will deliver value. At the same time, we’re tightly constrained by our resources as a very small team within Buffer.

To be honest, I wouldn’t have it any other way. Tight constraints force a creative, disciplined and critical approach to product design and development.

The product is the result of managing a delicate balance of tradeoffs.

I’m excited to share more about these tradeoffs: how we work within constraints, how we scope new product features, and how we build and release value for customers. Keep reading to learn more, and feel free to ask any questions in the comments or drop me a note on Twitter.

Before we scope: Dream big, then go small

When it comes to implementing product features, I usually dream big then go small.

Once I think I’ve identified our biggest opportunities (a process worthy of its own post!), I’ll get together with our Analyze designer, James, to review several potential solutions. Eventually we’ll land on a design we think hits the mark, we’ll flesh out many of the details, and we’ll prototype relevant workflows. We also always include the engineers in this process for early feedback on the design and ideas.

For example, this is one solution we explored for our new hashtag analysis tool. We ended up building something very different!

A lo-fi version of a product feature exploration
A hi-fi version of the same product feature exploration as above

The result of this exploration is usually a clickable prototype, along with a detailed specification — we call them “specs” for short — of the choices we made along the way. (Our specs are non-traditional: instead of a long, highly-technical, structured document, we write ours in natural language with well-reasoned breakdowns of our proposed solution.)

The design and spec often represent the ideal, if-we-could-do-it-all version of the feature.

But, of course, we can’t do it all… nor should we!

In a small team with limited resources and high opportunity costs, it is in our users’ best interest for us to optimize for value instead of completeness.

Many times, feature completeness doesn’t directly service the core job to be done but instead provides convenience. Convenience isn’t without value itself, but when a product is at a very early stage, finding product/market fit by delivering core value is paramount. Validating that you’re building the right thing takes precedence above all else.

So, with our fully-featured and convenience-laden prototype, it’s time to step back and take stock of the design.

Do we need to build all of this?

Almost all of the time the answer is no; we do not need all of this to deliver the core value.

Before we actually build a feature, we go through three stages of scoping:

Product scoping

Technical scoping

Cycle scoping

Time to get our scope on!

Step One: Product scoping and user stories

During the product scoping phase, product managers and designers work together to design the feature or features we want to tackle in an upcoming development cycle. What we end up with is a specific, fully-featured, if-we-could-do-it-all version of the feature. This is almost always larger in scope than what we’ll initially build.

Once we’ve got our feature designed, we’ll explicitly take time to scope down and identify the minimum lovable version, which is the smallest version of the product that still delivers material value to the user.

Anything smaller than the minimum lovable version just doesn’t get the job done and therefore isn’t worth building.

Quick tip!

You can almost always ship a smaller version than what you first think! Push yourself to go small in the spirit of shipping quickly and learning sooner.

Case in point: When I was scoping the minimum version of Analyze, my research and intuition both told me we needed a simple comparison tool as a baseline, minimum feature. I was convinced nobody would pay for our product without it, but I was wrong. The comparison feature took a little longer to build than we planned for, and we decided to ship Analyze without it and see what happened. Lo and behold, we earned our first happy, paying customers even without the comparison tool!

This was an excellent opportunity to test my hypothesis that people needed the comparison tool to get enough value, and that hypothesis was happily invalidated. This experience instilled in me a desire to deeply question every “must-have” aspect of a new product or feature.

For example, we discovered that users could get a lot of value from better understanding how their hashtags affect reach and engagement. After several design explorations, we arrived at this full-featured, if-we-could-do-it-all version of our Hashtag Analysis module:

This is the full-featured, if-we-could-do-it-all version of our Hashtag Analysis module

And then below is what we actually built.

And this is the version that we actually built

Note how the scope is drastically smaller but the value it delivers is almost equal. I’m a huge believer in the 80/20 rule, and it applies here: The minimal version of this feature is 20 percent the size of the initial scope, yet it delivers 80 percent of the intended value. That’s more than enough to solve our users’ challenge (understanding hashtag performance), and a good tradeoff in the world of product development.

A user story is a discrete chunk of user-facing value. We use the following format:

“As a <type of user>, I want <some kind of goal>, so I can <some reason>”

and then attach designs, details, and acceptance criteria.

Sometimes one story is enough to represent a whole feature, but more often a feature will end up as several user stories: each a discrete, shippable chunk of value.

Example of a user story: One of our three user stories for Analyze’s upcoming Audience section

We then scale down our complete list of user stories to a smaller subset, which will represent our minimum lovable version. For example, a prototype might result in five user stories, but our minimum version will be only two of those five.
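To make the shape of this concrete, here’s a minimal sketch of a user story and the paring-down step. The fields, example stories, and the `in_mlp` flag are my own illustrations, not Buffer’s actual tooling:

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    persona: str   # <type of user>
    goal: str      # <some kind of goal>
    reason: str    # <some reason>
    in_mlp: bool   # is this part of the minimum lovable version?

    def render(self) -> str:
        # Renders the story in the standard format.
        return f"As a {self.persona}, I want {self.goal}, so I can {self.reason}"

# A prototype might yield several stories, but only some make the MLP cut.
stories = [
    UserStory("social media manager", "to see my top hashtags", "post what works", in_mlp=True),
    UserStory("social media manager", "to compare time periods", "spot trends", in_mlp=False),
    UserStory("agency owner", "to export a report", "share results with clients", in_mlp=True),
]

mlp = [s for s in stories if s.in_mlp]
print(len(mlp))  # only 2 of the 3 stories form the minimum lovable version
```

Anything outside `mlp` isn’t thrown away; it simply waits for a later cycle.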

Step Two: Technical scoping and estimates

With designs at the ready, product specification in hand, and our user stories pared down and ready to go, we meet as a whole team and walk through the user stories together. The engineers often ask clarifying questions, and we’ll tweak the details of our stories together as required.

Once the engineers are satisfied that they have a full understanding of the problem and our proposed solution, they’ll break the user stories into engineering tasks.

The engineering tasks are discrete and estimated on an intentionally coarse scale: one day, three days, or one week. Anything longer than one week of estimated work should get broken down further.

Estimates are derived cooperatively by the engineers and serve two primary purposes:

Ensure all engineers are roughly on the same page in terms of effort required for each task

Provide the product manager with a general gut check for cycle planning.

Estimates are not used as a measure of accountability for the engineers. We only want to know that we’re half-way decent at estimating engineering tasks (a notoriously difficult thing to do well!), and that we all agree on what the tasks entail.
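The coarse scale can be expressed as a simple validity check: an estimate is either one of the three buckets, or a signal that the task needs further breakdown. The task names and the choice of five working days for "one week" are hypothetical:

```python
# Allowed coarse estimates, in working days: one day, three days, one week (~5 days).
ALLOWED_ESTIMATES = {1, 3, 5}

def needs_breakdown(estimate_days: int) -> bool:
    """Anything longer than one week of estimated work gets broken down further."""
    return estimate_days > 5

def flag_invalid(tasks: dict) -> list:
    """Return task names whose estimates don't fit the coarse scale."""
    return [name for name, days in tasks.items()
            if days not in ALLOWED_ESTIMATES or needs_breakdown(days)]

tasks = {
    "build hashtag parser": 3,
    "design DB schema": 1,
    "full analytics backfill": 10,  # too big: must be split into smaller tasks
}
print(flag_invalid(tasks))  # ['full analytics backfill']
```

Forcing estimates into a few buckets keeps the conversation about shared understanding rather than false precision.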

Step Three: Cycle scoping and planning

Now that we have a list of user stories and the engineering tasks that represent their reality, I use this information to plan our development cycle. We’re currently working in six-week development cycles, but I’ll only aim to plan the first four weeks at the outset. In week three of the cycle, I’ll take stock of where we are, re-prioritize things, and plan the second half of the cycle.

With the engineering tasks in place, I’m able to prioritize our user stories in the context of available time. We’ll often work on two or even three features at a time, and the estimates will help me decide which should come first.

For example, if we have three user stories, I might still opt to build the “least valuable” one first if it takes significantly less time than the others. This gets value to our users sooner.
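That ordering decision can be sketched as a simple sort on estimated effort. The story names, day counts, and value scores below are invented for illustration:

```python
# Hypothetical stories: (name, estimated engineering days, rough value score).
candidates = [
    ("hashtag analysis", 10, 9),
    ("audience overview", 8, 7),
    ("CSV export", 2, 4),  # least valuable, but very quick to ship
]

# Ship the quickest story first so users start getting value sooner.
build_order = sorted(candidates, key=lambda story: story[1])
print([name for name, _, _ in build_order])
# ['CSV export', 'audience overview', 'hashtag analysis']
```

In practice the call is rarely this mechanical, but time-to-first-value is the tiebreaker.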

I know, I know! I’ve been preaching the virtues of scoping down and then scoping down some more.

But imagine a jar that can fit five rocks. Even though you can’t fit a sixth rock, there’s still space between the five rocks, and maybe we can fit 10 pebbles in that space.

In this analogy, the jar is our six-week cycle, the rocks are the user stories required to fulfill our minimum lovable feature requirements, and the pebbles are smaller user stories or engineering tasks that go beyond our minimum requirements. It’s a great way to fit a few “wow factor” touches into a feature or product that aren’t strictly required to deliver value.

Rocks & Pebbles: How we think about keeping scope small and adding the “wow factor” to new product features
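The rocks-and-pebbles idea amounts to a greedy fill: required stories go in first, then optional extras fill whatever capacity remains. The capacity figure and item names here are hypothetical:

```python
# Hypothetical cycle capacity in engineering days.
CAPACITY = 30

rocks = [("story A", 10), ("story B", 8), ("story C", 7)]          # required for the MLP
pebbles = [("polish X", 1), ("wow factor Y", 3), ("nicety Z", 2)]  # optional extras

# Rocks go into the jar first.
used = sum(days for _, days in rocks)  # 25 days
plan = [name for name, _ in rocks]

# Greedily fit pebbles, smallest first, into the remaining space.
for name, days in sorted(pebbles, key=lambda p: p[1]):
    if used + days <= CAPACITY:
        plan.append(name)
        used += days

print(plan)  # the three rocks, plus whichever pebbles fit in the leftover days
```

Here “wow factor Y” (3 days) doesn’t fit after the smaller pebbles are placed, so it simply waits; nothing required was displaced to make room for it.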

At the end of it all, cycle scoping is largely a matter of balancing getting value to the user sooner and delivering the most value (sometimes you get lucky and these things are the same!). This balance informs both what we build and when we build it.

Let’s recap

Product scoping (Product manager is accountable)

Design your solution to the problem at hand

Create a specification breakdown for the solution

Turn this specification into user stories

Go small: Define your minimum version of this feature that delivers value

Technical scoping (Engineering manager is accountable)

Get full team alignment and understanding on the specification and user stories

Break user stories into technical tasks

Estimate technical tasks

Cycle scoping (Product manager is accountable)

Based on estimates, select which stories and tasks will get completed in the current development cycle

Leave out all the rest for another day

In case you’re curious about who’s accountable for what in this scoping process, it looks like this:

How we break down the accountabilities for new feature development on Buffer Analyze

Over to you

This process is the result of many iterations and it continues to evolve over time. It’s by no means the best approach to product development, but we’ve found it effective in our specific product development cycle. That said, we’re always actively looking for ways to improve our process and make it more comfortable and efficient for the team.

If this triggers any questions or thoughts, please do not hesitate to leave a comment below or tweet at me. I’d be thrilled to hear from you!

I wrote a bit on how we scope and develop products at Buffer. Happy to answer any questions it brings up!

Thank you for a great article! I will for sure take some of the learnings and see what can be applied in the context of my organization!

I have two questions though, hope you can indulge me :)

The first, why have a six week iteration with replanning in the middle of it? Is it not better to have a three week iteration, demonstrate what we have to get feedback and take the learning to plan for the next three week iteration?

The second question is about when it is time to explain for the engineers what to do, you write:
“The engineers often ask clarifying questions, and we’ll tweak the details of our stories together as required.”
Have you challenged yourself to have a real potential user brought onboard to answer questions instead? To have the product manager act as a bridge between the team and the potential users/customers instead as a proxy?

🙈 This comment slipped past my radar, but thank you so much for taking the time to read and share your questions. I’m really grateful for that!

> why have a six week iteration with replanning in the middle of it? Is it not better to have a three week iteration, demonstrate what we have to get feedback and take the learning to plan for the next three week iteration?

Great question, and I’ll be the first to admit that we’re always simply evolving and experimenting with our process, and it’s by no means the right way or the best way (if you do come across a best way, please let me know!).

That said, there is a cost to stopping and starting a cycle: engineers have to context-switch, shipping slows down, and general momentum is lost and needs to be rebuilt.

We can generally set priorities for six weeks, but detailed planning for that full length of time is quite difficult. Therefore in practice, we have an idea of what we’ll do for the next six weeks, but only actually plan for the next three.

Most often, we’re still approaching the last three weeks with the same priorities in mind, we’re just planning for them now with the information we learned in the first three. Perhaps we can carry on as planned, perhaps we need to cut scope, and perhaps we need to reprioritize altogether. I’d say about 50% of the time, we end up cutting cycle scope.

> Have you challenged yourself to have a real potential user brought onboard to answer questions instead? To have the product manager act as a bridge between the team and the potential users/customers instead as a proxy?

I am in constant contact with our customers, and generally act as the voice of the customer, but I must admit I’ve never considered coordinating such a direct feedback session! I think this is a fantastic idea. I do often share designs with customers before we build to gather feedback, but haven’t yet made that a full loop with the engineers as well. Definitely going to give this a go!

Thanks again for your thoughts Jan, I really appreciate it!

Jan Nilsson

Hi Tom!

No worries, these comments at the bottom are perhaps not that visible :)

Big thanks for getting back to me with such a detailed answer; it’s really interesting how you work and think! And you are welcome, even though I am the one feeling thankful for you taking the time to write! If you give a direct feedback session a shot, please come back with your findings, would love to read that post!

I love the detailed post on how the MVP or the MLP is determined through gradual scoping. I am a great fan and practitioner of Jobs To Be Done both in my professional and personal life!
Was curious whether product/market fit is also tested during product scoping? In my experience, testing for product desirability (https://designthinking.ideo.com/?page_id=1542) often results in pivoting features or even the whole product once in a while. Would love to know your experience around this.

Indeed, we always have some kind of success metric tied to our functionality. Even if it’s simple, or largely gut-based (though we try to avoid that!), we always want to include this perspective to force us to think about what success looks like for any given feature.

I have a rule that I like to give a feature 6 months of runway. If it hits signs of P/M fit, based on our success criteria, we should double down on it. That is, take it to its logical next step which is usually a “version 1.0” since our MLP is not quite that.

If the feature is not successful after ~6 months, we either pivot or literally just take it out of the product. Removing things is one of the hardest things we do, but it’s so important. You’ll almost always upset _somebody_, but in service to the majority of your users, simplifying things and enabling focus on the things that are providing real value is the best use of energy.

All this to say: indeed, we’ll pivot, rework, re-pivot, and remove things with the ultimate goal of delivering real value across all features, not putting energy into supporting vanity features or confusing our customers by shipping them. Our pursuit of value drives us, and that can take us down a wonderful, winding road!