The Counter-Intuitive Benefits of Acting as If You Have No Engineers

Act as if you have no engineers. If all you have is an idea, how might you know if that idea is worth pursuing?

If your goal is to test the idea itself, it might be hard to design an experiment without writing code.

But your idea is dependent upon a series of assumptions. You can test those assumptions without building the feature.

The more assumptions you test before building the feature, the more likely the feature will work when you do build it.

To get started, ask yourself, “What assumptions have to be true in order for this idea to work?”

Suppose you are responsible for video integration in the Facebook newsfeed and you have an idea about auto-playing the video once the visitor scrolls to it. You could build this functionality and then user test it, but you would be better off examining your assumptions.

What has to be true for this idea to work?

People want to watch the videos in their newsfeed.

People want to watch the videos in their newsfeed right away.

For people who don’t want to watch the videos in their newsfeed, it won’t be too much of a bother to stop an auto-playing video.

You can test the first two assumptions by looking at your usage data. How many people play videos in their newsfeed? What percentage of their videos do they play? How long after scrolling to a video do they push play?
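
These questions reduce to a few aggregate metrics. As a rough sketch (assuming a hypothetical event log where each newsfeed video impression records when the user scrolled to the video and when, if ever, they pressed play), the analysis might look like:

```python
from datetime import datetime, timedelta

# Hypothetical event log: one row per video impression in the newsfeed.
# played_at is None when the user never pressed play.
impressions = [
    {"user": "a", "scrolled_at": datetime(2014, 1, 1, 9, 0, 0),
     "played_at": datetime(2014, 1, 1, 9, 0, 2)},
    {"user": "a", "scrolled_at": datetime(2014, 1, 1, 9, 5, 0),
     "played_at": None},
    {"user": "b", "scrolled_at": datetime(2014, 1, 1, 10, 0, 0),
     "played_at": datetime(2014, 1, 1, 10, 0, 30)},
]

played = [i for i in impressions if i["played_at"] is not None]

# Assumption 1: people want to watch videos in their newsfeed.
play_rate = len(played) / len(impressions)

# Assumption 2: people want to watch them right away
# (here, "right away" is arbitrarily defined as within 5 seconds).
RIGHT_AWAY = timedelta(seconds=5)
right_away_rate = sum(
    1 for i in played if i["played_at"] - i["scrolled_at"] <= RIGHT_AWAY
) / len(played)

print(f"play rate: {play_rate:.0%}")
print(f"played right away: {right_away_rate:.0%}")
```

The field names, the sample rows, and the 5-second threshold are all made up for illustration; the point is that answering these questions is a simple aggregation over data you likely already collect.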

If these assumptions hold true, you can move on to the third assumption.

If you find that most people (say 80%) watch most videos right away, you might conclude that auto-play is a good idea. But before concluding, first try to understand the use cases where people aren’t watching videos right away.

For example, if people aren’t watching videos right away because they are sneaking in a quick Facebook break during a boring meeting, auto-play might be disastrous.

If this is the case, the pain of auto-play for the 20% who don’t watch videos might outweigh the benefit for the 80% who do.
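
A back-of-envelope calculation makes this concrete. The weights below are entirely made up, but they show how a painful enough experience for the 20% can swamp a small benefit for the 80%:

```python
# Hypothetical weights: how much each group gains or loses per impression.
watch_share, skip_share = 0.80, 0.20

benefit_per_watcher = 1.0  # assumed small win: one tap saved
pain_per_skipper = 5.0     # assumed large loss: audible video in a meeting

net_value = watch_share * benefit_per_watcher - skip_share * pain_per_skipper
print(net_value)  # negative: under these assumptions, auto-play hurts overall
```

Flip the weights and the conclusion flips too, which is exactly why you want to understand the skippers’ use cases before building.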

How might you find out why people aren’t watching videos right away? You could:

Interview users and ask them when and where they use Facebook.

Observe people using Facebook around you. How many are in a shared space without headphones?

Survey people to understand their preferences on auto-playback.

Conduct usability studies on a similar product that already includes auto-playback.

Notice how none of these options involve writing code. And yet they all help you collect data about whether or not your idea is worth pursuing.

This is what happens when you test assumptions instead of ideas. Ideas can be hard to test without writing code. But often you already have the data or can quickly design an experiment to test the underlying assumptions.

Surface your assumptions and do the work to test them before you write code.

Slow Down to Go Faster

This process is going to feel slow. Too slow.

You are going to get antsy. You are going to want to start writing code.

For any single idea, it’s going to feel faster to just build it. If it only takes a week of development, why would you spend a week or two experimenting before you build?

It doesn’t just cost a week or two of development.

It costs a lifetime of maintaining it.

It costs the learning curve for your customers to adopt the feature (or to ignore it).

It takes up pixels in your user interface.

And even when it doesn’t work, it’s going to be impossibly hard to remove.

But there’s a more important reason.

Expand your scope beyond one idea. Consider ten ideas. Should you build all of them?

Odds are only two or three are going to work.

Now you are building seven or eight features that won’t have an impact, that you’ll have to maintain, that will fill up your user interface, and that will burden your customers.

It’s much better to run ten experiments before you write any code and only build the two or three ideas that actually worked.

Comments

This topic really goes back to the debate about what constitutes an MVP. Eric Ries defined “MVP” as follows:

A minimum viable product is a version of your “product” that maximizes validated learning for the least amount of effort.

Note that “product” is in quotes, implying that it may not be a full-fledged product built with code. A “product” built with code and put in the hands of customers forms the basis for one form of experiment. You can “build” experiments without code, however.

Indeed, some lean startup practitioners have suggested that a landing page for an experiment could be an MVP. Other practitioners have insisted that an MVP should be more ambitious and “fully baked”.

We can debate who’s right, but if you look at Eric Ries’ quote, I don’t think it’s entirely fair to portray lean startup methods as jumping prematurely into coding an MVP.

I see a different but related flaw in customer development and lean startup methods. A hint of this flaw is at the end of my recent blog entry on design thinking, and I plan to elaborate on it in a future blog entry.

Yes, really I should have written that there is a flaw in the way many people interpret The Lean Startup, as I think the intent of the loop is spot on. If you focus on many rapid cycles through the full loop, it doesn’t really matter where you start.

This post was in response to the many questions I get from companies who think they can’t experiment because their engineering teams won’t support it. So I wanted to emphasize that you don’t need engineers to experiment.

I look forward to your future posts. For my Master’s I’ve done research on design thinking and what it is that designers do that is different. I’ll blog about it eventually.

Roger makes a very good point regarding exactly what constitutes an MVP (a term that I think may be more confusing than helpful), or in terms of the Lean Startup cycle (and perhaps more usefully), exactly what the “Build” phase is building. If we’re talking about constructing an experiment instead of coding a product, then starting with Build is probably less problematic.

In the end, I think the Build-Measure-Learn/Learn-Build-Measure cycle is less useful than PDSA, which in itself leaves out a number of implied steps but at least starts at a more logical place. In PDSA, you Plan an experiment to measurably test a hypothesis, you Do the experiment (building whatever is required), you Study the results, and you Act on the learning. If we were to really articulate the relevant elements of the cycle fully, I think it would be something much clunkier like:

Ideate (What’s my initial impetus?)
Surface Assumptions (What has to be true for my idea to be right?)
Create Hypotheses (How can I state those assumptions such that they could be disproven?)
Design Experiment (How do I disprove the hypothesis?)
Build Experiment (Write a script, make a landing page, build an MVP)
Execute Experiment (Talk to people, buy AdWords, put the MVP in front of people)
Study Results (Has my hypothesis been disproven? What have I learned?)

I think Eric Ries is responding to the over-planning that often happens before you really know anything and to the vast amount of poorly conducted market research. I don’t disagree with his criticisms or his focus on action. But we are starting to swing too far in the other direction. We should build based on an insight, and sometimes we need to learn in order to uncover an insight.

What he got 100% right is that iterations through the loop should be fast. And given that, quibbling about where you start is almost a moot point. I just don’t like that some people interpret it to mean you don’t have to understand your customer. So I’m just trying to provide the counterpoint.

I also don’t like that people are using a lack of engineering resources or the inability to make product changes as a reason for not learning. That’s just downright silly.

That’s funny, I like Beck’s formulation, but I don’t like saying hypothesis anymore. We tend to use this term interchangeably with assumption. They are different.

A hypothesis is well defined and falsifiable; an assumption is vague and tends to be subject to heavy confirmation bias. We can test a hypothesis with an experiment. We need to clarify assumptions with generative research, not experiments.

I missed the bit about generative research on the first read. Are you suggesting that you research an assumption and run an experiment to test a hypothesis? In that case, I see your point. However, I’d argue that in most cases a product manager benefits from defining their assumptions in the form of hypotheses when possible, as it helps to get clarity around what you think.

Yes, research assumptions to clarify into hypotheses and then experiment on hypotheses. Forcing experiments around vague assumptions like “If we put up a landing page, some people will click on the signup button” leads to lots of false positives on incredibly badly defined experiments with no fail condition.

Basically, we all suck at writing hypotheses and forcing ourselves to a hypothesis without doing the basic research to generate enough clarity seems to result in a lot of silly experiments. Might work for some people, but as a general rule of thumb, I have not seen a whole lot of success around forcing hypotheses.

Now that I agree with entirely. Writing a good hypothesis is a skill (see The 5 Components of a Good Hypothesis), and most people need to invest in learning it. However, as product managers get better at research and experimentation, it’s a skill they’ll have to develop.

Hi!
Great post! I agree! I think it’s even possible to start *anywhere* in that loop?

Perhaps Eric Ries saw it from the startup point of view? If you have nothing to start with and you’re thinking about a new (software) product, perhaps you have to start at the Build step? Well, you could perhaps do market research etc, but you won’t learn much until you really have something (product/MVP) to learn from?

It’s this statement: “but you won’t learn much until you really have something (product/MVP) to learn from?”
that I think is the problem.

You can and should learn a lot before you write code. Coding is an expensive way to learn. You can learn a lot by talking to a couple of customers and you will likely learn that what you intended to build isn’t exactly right.

I’m less worried about what Eric Ries intended and more concerned with how The Lean Startup is interpreted. Eric is a bright guy who has spent a lot of time thinking about this. Most practitioners are busy professionals who don’t spend a lot of time thinking about this. If, taking Eric’s advice at first glance, they start with build, that’s a costly first step. In most cases, they would be better off learning a little bit first.

However, as mentioned in an earlier comment, the key really is to move through the loop as fast as possible so that you maximize iterations and learning. You can spend too much time talking to customers without ever building anything, which is just as bad as building too much before you learn anything.

I think i get it now 🙂 You’re right, it’s not about how Eric defines it. I think I just misinterpreted you when you said “it has a flaw” 🙂 I read it as a bit “mean”, sorry 🙂

I also think you’re right that you shouldn’t start with writing code; there are cheaper ways to see if you’re on to something (as you say). But just as someone said in a previous comment, “Build” doesn’t have to be code, it could be anything. In my opinion, the “Build” part is the notion that it’s in the *doing* (whatever “doing” is) that we learn. We can’t really learn until we’ve actually done something. And to do something means “building” something. “Building” can be anything, e.g. setting up the manual (or whatever) of what you want to “talk to a couple of customers” about.
Thinking gets you far, but not as far as when you really get out there and do something. But you’re right, “building” doesn’t equal “coding”. And that may be misinterpreted..? (Even though it says “Build” and not “Code”.) That’s my two cents 🙂

I’m not sure that I would argue that the underlying structure of PDSA and hypothesis -> experiment -> learn are fundamentally different. But the different language certainly leads to different interpretations and thus different applications.

I think the common structure is an iterative process of induction and deduction where you start with a series of facts or data points, through induction you generate a theory that you then verify through deduction.

In the Plan -> Do -> Study -> Act model, I suspect your plan is based on some facts / data points. You take action based on an inductive theory of the situation. You study the outcome, deducing whether or not you got an expected outcome and then you act again.

With hypothesis -> experiment -> learn, your hypothesis is induced from a set of data points, you design an experiment to deductively test your hypothesis, and you learn from the results.

I think what gets most misinterpreted in both models is that people skip the deductive test, and thus they act based on their first (or sometimes second) instinct instead of acting on a deduced conclusion.

Teresa is a product coach helping teams adopt user-centered, hypothesis-driven product development practices. She works with companies of all sizes on integrating user research, experimentation, and the right analytics into the product development process resulting in better product decisions.
