Monday, September 27, 2010

One of the sayings I hear from talented managers in product development is, “good enough never is.” It’s inspirational, always calling the team to try harder and do better. It works to undermine excuses for poor or shoddy work. And, most importantly, it helps team members develop the courage to stand up for these values in stressful situations. Especially in teams that are managing by objectives (or OKRs), the pressure to deliver is intense. Under such pressure, the temptation to cut corners, to quit prematurely, or to hand off shoddy work to another department is overwhelming. It requires courage to stand up and say: "this work is simply not good enough. Sure, we could get away with it, but that's not how we work." Good managers work hard to create an environment where this courage thrives.

On the other hand, there are many stories of companies achieving a breakthrough by shipping something that was only "good enough." One such rumor, which I’ve heard from several sources, tells of the launch of Google Maps. The team was demoing their AJAX-powered map solution, the first of its kind, to senior management at Google. The executives were impressed, even though the team considered it still an early prototype. Larry and Sergey, so the legend goes, simply said: “it is already good enough. Ship it.” The team complied, despite their reservations and fear. And the rest is history: Google Maps was a huge success. This success was aided by the fact that it did just one thing extremely well – its lack of extra features emphasized its differentiation. Shipping sooner accentuated this difference, and it took competitors a long time to catch up.

So which is it? Is "good enough" good enough? Rules of thumb can be infuriatingly unhelpful. When should you settle for good enough and when should you push yourself to do your best?

This is precisely the dilemma that the doctrine of minimum viable product is designed to solve. And it’s really hard.

Most of us intuitively have a “split the difference” attitude when faced with recurring difficult choices. That is not a long-term solution. The reason: it actively encourages factional strife. Everyone naturally falls along a spectrum, from “ship anything soonest” to “always build it right, no matter what it takes.” When members of a team realize that the final answer will be some kind of average, they face an overwhelming incentive to express desires in the strongest possible terms. After all, someone else’s view will be averaged in, too. Any excesses are likely to be moderated by others. Of course, this logic applies to members of all factions. Over time, such teams either explode due to irreconcilable differences or dramatically slow down. The latter is actually more dangerous. Divided teams usually can’t agree on facts or interpretations. Yet startups rely on collective learning in order to find their way. Factional strife is learning kryptonite. I believe this is one reason why the myth of the dictatorial startup founder has such enduring appeal. Faced with these kinds of disagreements, strong arbitrary action is much superior to paralysis.

But action/paralysis are not the only options. As in many false dichotomies, we can find a third way that gives both factions a positive message to rally around.

Without an affirmative message, managers can cause lasting harm. I certainly have. When people started using quality, reliability, or design as an excuse to delay, it made me nervous, even when these suggestions were well intentioned. After all, how would Craig Newmark’s life (and the rest of ours, too) be different today if he had waited to build something with a high-quality design before starting his famous list? Rather than having this repeated argument, I sometimes found it easier to play dictator on the other side, forcing teams to ship sooner than they were comfortable with. As I found out to my dismay, this is a dangerous game: in many cases, you’re asking trained professionals to violate their own code of best practices, for the good of the company. Once you go down that road, you risk opening a Pandora’s box of possible bad behaviors. And yet, it does not have to be that way.

Almost everything we know today about how to build quality products in traditional management has its origins with W. Edwards Deming, the original quality guru. He had two concepts that are especially important to this discussion. The first is that “best efforts are not enough.” Despite what it seems in the moment, most quality problems are not caused by people slacking off or acting maliciously. (It seems that way only because of a psychological phenomenon called the fundamental attribution error.) In reality, most quality problems are systemic in nature. They have to be solved in the boardroom by making a company-wide commitment to building quality into the very systems the company uses to build products. Lean manufacturing, agile software development, and Theory of Constraints are all examples of this idea in action.

However, a commitment to quality alone is not enough. In old school manufacturing, quality was defined as reliability: parts and products that did not wear out, break down, or fail unexpectedly. And so Deming’s contribution was especially prescient, as he saw that “the customer is the most important part of the production line.” This means that quality is defined in the eye of the customer, not necessarily by arbitrary standards loved by insiders to the production process. In today’s world, this is increasingly important, as quality is often defined by factors beyond reliability: design, ease of use, aesthetic appeal, and convenience.

Now we come to the heart of the minimum viable product issue: how can we build quality in if we do not yet know who the customer is? All of our professional standards that lead us to want to get it right the first time – all of them were developed originally in a non-startup context, one where the customer was known in advance. Startups are different, leading to this axiom: if you do not know who the customer is, you do not know what quality is.

The minimum viable product is that version of a new product which allows a team to collect the maximum amount of validated learning with the least effort.

In other words, the minimum viable product is a test of a specific set of hypotheses, with a goal of proving or disproving them as quickly as possible. One of the most important of these hypotheses is always: what will the customer care about? How will they define quality?

One common worry is that this might lead companies to “release crap,” shipping too soon with a product of such low quality that it alienates potential customers and, thus, causes entrepreneurs to abandon their vision. This critique combines two misunderstandings in one.

First, I want to explore the idea of releasing crap: that our product is of such low quality that we will release it, customers will hate it, and we’ll have accomplished nothing but alienating them. But notice how many hypotheses are baked into this supposedly simple scenario: we believe we have already solved the distribution problem for our product (or else how could customers try it?). We already know who to distribute the product to (or else why would we care what they think?). Naturally, we already know the standard of quality that they will use to judge our product. And, of course, we already know that they will care enough to be offended. In fact, we know so much that we already know what they will care enough about (namely, the product’s quality – as opposed to, say, missing features).

Even better, this is a falsifiable hypothesis. It is entirely possible that we can ship “crap” and have one of the aforementioned assumptions fail to hold. In fact, that is one of the best possible outcomes, because it will force us to learn something. What if customers actually like the “crap” product? Or what if we can’t get any of them to even try it? Or what if the features they demand we build are different from the ones we were planning to build? In those cases, we can’t help but learn a great deal. Remember, the minimum in minimum viable product does not mean that you should ship just anything at the nearest possible date. It means to ship as soon as it is possible to learn what you need to learn.

The second misunderstanding is a concern for what will happen if things turn out exactly as we originally predicted (namely, badly). Entrepreneurs, faced with an early defeat, might lose their commitment to seeing their vision through. I understand this fear. It is a direct consequence of the reality distortion field, that ability most visionaries have to get people to believe in a vision as if it were already true. Data can undermine this field. It's easier for everyone, founders, investors, and employees alike, to believe in a glorious future when the scoreboard still shows only zeroes.

But this fear is way overblown, in my experience. The great visionaries I’ve worked with can incorporate a commitment to iteration into their process. However, there are some important ground rules. As I wrote in Don’t Launch, it’s essential to remember that these early minimum viable product launches are not marketing launches. No press should be allowed. No vanity metrics should be looked at. If there are investors involved, they should be fully briefed on the expectation that these early efforts are designed to fail.

Again, even if they do "fail," it is improbable that they will fail in the way we originally expected. In fact, in all of the startups I have worked with, I have never seen this happen. There is always something unexpected when customers react to a product in the real world: we thought they’d be offended by low quality, but actually they refused to download it; we thought they’d share it with their friends, but actually they wanted us to provide the friends; we thought they’d care a lot about our beautiful design, but actually they wanted more features. As in any experiment, the important thing is not the bare fact that the hypothesis was invalidated. More important is to understand the reasons why. This is not an academic exercise; the goal of these experiments is to immediately get up off the mat and design the next one. And the next, and the next, until we have not just learned but proved our learning with hard facts: through the attainment of validated learning.

Minimum viable product is an attempt to get startups to simplify, but it is not itself simple. How do you know which features are essential and which should go? There is no formula; it requires judgment. Any scientific method requires the choice of a hypothesis to test. This leads to two questions:

By what standard is this hypothesis to be chosen? Minimum viable product proposes a clear standard: the hypothesis that seems likely to lead to the maximum amount of validated learning.

How do you train your judgment to get better over time? Again, the answer is derived from the hard-won wisdom of the scientific method: making specific, concrete predictions and then testing them via experiments that are supposed to match those predictions helps scientists train their intuition towards the truth.

(Fans of the history of science will recognize this as Thomas Kuhn’s theory of scientific paradigms. Minimum viable products are not a single hypothesis. They should therefore be properly understood as product paradigms. As in science, the paradigms that survive will be those that allow practitioners to discover the most productive experiments to try, during the period Kuhn calls “normal science.” A paradigm crisis is analogous to a pivot.)

I told you it wasn’t simple. And this leads to a last criticism of minimum viable product that I hear from time to time: it’s just too complicated. Most people prefer simple, short, pithy startup advice. I remember this acutely from my debate with David Heinemeier Hansson, of 37signals fame. As I was explaining the MVP concept, I could see the look of horror on his face. His answer, to paraphrase, was something like this: “that’s way too complicated. Just build something awesome, something that you yourself would love, and ship it.”

Other similar forms of this advice abound: “release early, release often,” “build something people want,” “just build it,” etc. This Nike school of entrepreneurship is not entirely misguided. Compared to "not doing it," I think “just do it” is a superior alternative.

But the teams I meet in my travels are often one step beyond this. What do you do the day after you just did it? It really doesn’t matter if you took a long time to build it right or just threw the first iteration over the wall. Unless you achieve instantaneous overnight success, you will be faced with difficult decisions. Pivot or persevere? Add features or remove them? Charge money or give it away for free? Freemium or subscription or advertising?

I won’t apologize for this aspect of the Lean Startup methodology. These are complicated questions. We are drawn to easy answers because we look at the landscape of successful companies with a biased lens. We see examples of startups who did things “our way” and were successful. Unfortunately, that’s true no matter which way we prefer. Even in the narrow field of giant tech companies, their early products were wildly different. Compare eBay and Google, Apple and Sun, Oracle and Siebel. And, of course, there’s incredible selection bias. For every successful company we think we know that “built it right” or “shipped crap” from the start, there are plenty we’ve never heard of, because they followed that same strategy and promptly died. That’s the deep flaw in most startup advice: it argues from selective examples.

So what about the question of whether good enough really is? What’s needed, I believe, is an alternative discipline that teams can get excited about. When we’re talking about being disciplined, following our methodology with rigor, continuous improvement, there is no such thing as good enough. Our pursuit of learning is ongoing and our commitment is absolute. But when it comes to the specifics of a product release, business plan, or marketing launch, all that matters is: do we have a strong hypothesis that will enable us to learn? If so, execute, iterate, and learn. We don’t need the best possible hypothesis. We don’t need the best possible plan. We need to get through the build-measure-learn feedback loop with maximum speed.

Over time, I believe we will build a new professional discipline that will seek excellence at this kind of product-centric learning. And then that new breed of managers will, I'm sure, confidently go around saying: good enough never is.

Monday, September 20, 2010

It’s an anguished cry that I have heard often from startup founders. In a way, I don’t blame them. I’ve been there myself. If we’re not attempting something truly new and innovative – what’s the point? If we’re just going to conduct the world’s biggest focus group to decide what to do, why couldn’t any old idiot do it instead? Isn’t the whole point of devoting our life to this enterprise to show the world that we have a unique and visionary idea?

I remember one conversation with a visionary quite well. He had just come back to the office after a few days away, and he was filled with big news. “I have incredible data to share!” This was pretty unusual: a visionary with data? He carefully explained that he had conducted a number of one-on-one customer interviews, showing them an existing product and then documenting their reactions. His conclusions were well thought out, coherently based in the data he was presenting, and painted an alluring picture of a new way forward. His team almost exploded on the spot.

“That’s the same idea you’ve been pushing for months!” “What were the odds? Customers explained to you that we need to do exactly what you wanted to do anyway? Wow!” It was an ugly scene.

We all know that great companies are headed by great visionaries, right? And don’t some people just have a natural talent for seeing the world the way it might be, and convincing the people around them to believe in it as if it was real?

This talent is called the reality distortion field. It’s an essential attribute of great startup founders. The only problem is that it’s also an attribute of crazy people, sociopaths, and serial killers. The challenge, for people who want to work with and for startups, is learning to tell the difference. Are you following a visionary to a brilliant new future? Or a crazy person off a cliff?

True visionaries spend considerable energy every day trying to maintain the reality distortion field. Try to see it from their point of view – none of the disruptive innovations in history were amenable to simple ROI calculations and standard linear thinking. In order to do something on that scale, you need to get people thinking, believing, and acting outside the box. Their greatest fear is categorically not that their vision is wrong. Their real fear is that the company will give up without ever really trying.

This is where data, focus groups, customer feedback, and collaborative decision-making get their bad rap. In many cases, these activities lead to bad outcomes: watered down vision, premature abandonment, and local maxima.

When visionaries say “but customers don’t know what they want!” they are right. That’s the problem with false dichotomies: each side has a kernel of truth within it. You cannot build a great product simply by obeying what customers say they want. First of all, how do you know which customers to listen to? And what do you do when they say contradictory things?

And yet, the people who resist visionaries also have a point. Isn’t it a bit scary, maybe even suicidal, to risk everything on a guess – even if it is emotionally compelling?

Like all false dichotomies, if either side “wins” this argument, the whole enterprise loses. If we just follow the blind mantra of “release early, release often” and then become purely reactive, we’re as likely to be chasing our tail as to be making progress. Similarly, if we pursue our vision without regard to reality, we’re almost guaranteed to get some aspects of it wrong.

The solution is synthesis: never compromise two essential principles. First, that we always have a vision that is clearly articulated, big enough to matter, and shared by the whole team. Second, that our goal is always to discover which aspects of this vision are grounded in reality, and to adapt those aspects that are not.

A vision is like a sculpture buried in a block of stone. When the excess is chipped away, it will become a work of art. But the challenge in the meantime is to discover which parts are essential, and which are extraneous. The only way to do this is to continuously test the vision against reality and see what happens.

So what should you do if you find yourself working with a visionary? Almost every successful visionary has found partners to work with who help them stay grounded in reality. To do this you have to find ways to be supportive of the vision at the same time as reporting the bad news about where the vision falls short. I recommend a mantra that I learned from Steve Blank: always consider your job to find out if there is a market for the product as currently specified. Don’t try to change the vision every time you get new data. Instead, get out of the building and look for customers for whom your product vision is a slam-dunk fit. Only if, after exhaustive searching, you cannot find any customers who fit the profile is it time to have a serious conversation about whether and how the vision should be modified (a pivot).

And what should a good visionary do to help find synthesis? Based on the successful visionaries I have had the opportunity to work with up close, I'd like to offer two suggestions for the role a visionary should take on:

Identify an acute pain point that others don’t see. It’s important to specify the vision as much as possible in terms of the problem we’re trying to solve, rather than a specific solution. (Or, to use Clay Christensen's formulation, of the "job" customers are hiring us to do.) Even though the visionary surely has some concrete ideas which are to be tried, he or she should always be asking, “would I rather solve the problem, or have this specific feature?”

Hold the team to high standards. Despite Steve Jobs' incredible talents, he doesn’t personally design and ship every Apple product. It’s much more likely that his main function is to hold everyone who works for him to the same high standard. Once they’ve agreed to try and solve a dramatic problem, it’s the visionary’s job to hold each provisional result up to the light of that vision, and help the team remember that although trade-offs and compromises are always necessary – the real payoff is in solving that acute pain. This can help avoid the trap of the false negative: even if the first few iterations don’t get it right, the vision inspires us to learn from our failures and keep trying.

Let me close with a specific story of a visionary at work. I’ve heard from several sources a story about Jeff Bezos and the invention of one-click shopping. It may be apocryphal, but it’s illustrative anyway. Amazon had tasked a team with building their new one-click shopping feature, which was designed to reduce the friction required to make an impulse purchase. The purpose of naming the feature “one-click” was to clearly communicate to everyone the vision of maximum simplicity. When Bezos was meeting with the team to review their first version of the feature, so the story goes, after he clicked to make his purchase, he was prompted with a confirmation dialog box. He had to click “yes” to continue. In other words, one-click shopping required two clicks!

Now, it’s really important to see this story from both sides. Bezos was surely infuriated that the team had missed so obvious a point about his vision. But see his team’s point of view: they were immersed in a culture of protecting the customer. It was probably considered too dangerous to let someone “shoot themselves in the foot” and make an unintended purchase that could have serious economic consequences.

But by actually building a version of this feature, and doing some simple testing with customers and with Bezos, this team surfaced an issue that probably wasn’t really clear in Bezos’ vision from the get-go. Namely, how are we going to handle the case of customers one-clicking by accident? The synthesis solution is so simple, I’m sure it seems obvious in retrospect (and I’m sure dozens of people, for all I know including Bezos himself, are now sure they came up with it on their own): since mistakes are the uncommon case, give the customer several opportunities to realize and correct them after the fact, rather than trying to prevent them with a confirmation dialog box.

Those are the attributes I admire in successful visionaries: a determination to see the vision through, holding their teams to high standards, and a commitment to iterate in order to get there.

Monday, September 13, 2010

I am a firm believer in the danger of vanity metrics, numbers that give the illusion of progress but often mask the true relationship between cause and effect. Since I first started writing about vanity metrics, I’ve met more and more entrepreneurs who are struggling with a simple question: how can I tell a vanity metric when I see it?

From the outside, vanity metrics are a lot easier to see than from the inside, precisely because of the psychology behind them. Everyone wants to believe that the work they are doing is making a difference. So it’s easy to read positive causes into noisy data, whether it’s really happening or not. (This is called “the illusion of cause” and is discussed at length in the extremely readable book The Invisible Gorilla). Even worse, entrepreneurs are faced with a constant barrage of vanity metrics from competitors and other companies engaged in PR. Vanity metrics are generally bigger. And everyone knows bigger is better, right?

News publications print vanity metrics because they want to give their readers information about the companies they cover. Companies want the coverage, but they don’t actually want to reveal anything useful about their operations. The solution? Vanity metrics. By only releasing vanity metrics, companies co-opt the press into helping them mislead others. Is that really news? I’ll leave that to professional journalists to sort out. For the general public, it’s probably OK to treat company updates as entertainment. But for entrepreneurs, investors, analysts and competitors, it’s quite dangerous.

Here’s my quick heuristic for telling if a given number, graph, or chart is a vanity metric: could it have been caused by the company secretly running a Super Bowl ad and not disclosing it?

If yes, it’s very likely to be a vanity metric. Let’s take a look at an example, one of my favorites, the "billions of messages" claim.

Here's Mashable's coverage of Facebook chat reaching "a billion messages a day." Or take a recent TechCrunch article about a startup I won't name: “X billion messages sent since June 2009.” These articles treat this as a huge number, and it is. Probably, it represents tremendous success for the company in question. It’s side-by-side with a number of other vanity metrics. But notice what’s not listed: messages sent per person, churn rates of active users, or activation rate of new users. Even worse, we have no indication of how these numbers are moving over time. Is the company growing because of an amazing viral loop paired with a strong engagement loop? It’s possible, but the article doesn’t say. Most of the article is about the features – new and old – of the product. The unstated implication is that these features are what are leading to this tremendous growth. But is that true? Isn’t it equally possible that this company is spending more money on advertising or marketing than its competitors? Or that there is some other external factor at work?

I have no insight into these questions, and I don’t mean to pick on these companies in particular. My point is that this article does not contain the kind of information we’d need to draw reasonable inferences, which is by design. That’s what you pay PR firms for: to get an article written that is entirely factual and yet still provides positive spin for your company. (For context, "some 740 billion text messages were sent in the first half of 2009." The PR firm helpfully left out that context.)

So, could these numbers have been generated by a Super Bowl ad? Of course. We have no idea when the billions of messages were sent. They could all have been sent very recently. That’s the magic of vanity metrics – you never know what’s really going on. The trouble comes when companies and investors come to rely on these numbers to make consequential business decisions. How should a company like the one above prioritize their next set of features? Hopefully, they have internal reports that show the true correlation between their features and customer results. Are employees paying more attention to those reports than to the positive press coverage? I sure hope so.

Notice that cohort and conversion based metrics do not suffer from this problem. When we look at the same conversion percentage for cohort after cohort, we are effectively getting a new, independent, report card for our efforts each period. Each cohort is mostly unaffected by the behavior of earlier cohorts. And it is much more insulated from external effects, like an advertising or PR blitz, than your typical vanity metric.

It is not difficult to translate a gross metric like total messages sent into cohort terms. Since I’m picking on the TechCrunch example from above, we’re talking about more than a year’s worth of data. Let’s divide it into monthly cohorts. For each month, messages are sent by two kinds of people: new customers and returning customers. In order to make each cohort as meaningful as possible, let’s define them as follows:

New customer: someone who registered for the service in a given month
Returning customer: someone who used the service in the immediately preceding month.

I choose the “preceding month” definition in order to give us a sense for individual people’s behavior. A huge advertising blitz might cause a temporary winback effect by bringing in lots of old customers, but this is generally the kind of effect we want to ignore (unless we’re measuring the short-term effectiveness of the advertising).

Now, let’s plot a single number for each cohort: the percentage of customers in that cohort who sent at least one message in that time period. That makes our numbers denominated in people, not messages, which is much easier to understand. (Remember, metrics are people, too.) If we wanted to get fancy, we could also plot the average number of messages sent per person in each cohort.
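As a minimal sketch of how these cohort percentages might be computed, here is one way to do it in Python. The data shapes and all function and variable names here are hypothetical; it assumes only that we have a registration date per user and a log of message events:

```python
from collections import defaultdict
from datetime import date

def month_key(d):
    """Bucket a date into its (year, month) cohort."""
    return (d.year, d.month)

def cohort_activity(registrations, messages):
    """Compute per-month cohort activity percentages.

    registrations: dict mapping user_id -> registration date
    messages: iterable of (user_id, date) message events

    For each month with activity, reports the percentage of that
    month's newly registered users who sent at least one message
    (the "new" cohort), and the percentage of the previous month's
    active users who sent at least one message again (the
    "returning" cohort). Percentages are None when a cohort is empty.
    """
    active = defaultdict(set)      # month -> users who sent >= 1 message
    for user, d in messages:
        active[month_key(d)].add(user)

    new_users = defaultdict(set)   # month -> users who registered that month
    for user, d in registrations.items():
        new_users[month_key(d)].add(user)

    report = {}
    for month in sorted(active):
        year, m = month
        prev = (year, m - 1) if m > 1 else (year - 1, 12)
        new = new_users[month]
        returning = active.get(prev, set())
        report[month] = {
            "new_pct": 100.0 * len(new & active[month]) / len(new)
                       if new else None,
            "returning_pct": 100.0 * len(returning & active[month]) / len(returning)
                             if returning else None,
        }
    return report
```

Plotting `new_pct` and `returning_pct` month over month gives exactly the independent, per-cohort report card described above: if the lines are flat, the features shipped in between had no measurable effect on customer behavior.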

If these numbers are flat month-to-month, then we can draw some strong conclusions about the product features we’re working on: they are basically having no effect on customer behavior. Hopefully, that’s not the case. Hopefully, the numbers are steadily improving month after month.

The data needed to generate this simple graph already exists: it’s the same basic data you’d need to get an accurate count of the total number of messages sent, just presented in a different form. For understanding what’s really going on with a product, this alternate form is far superior. Is it any wonder companies don’t want the press to have it?

It’s my hope that, in time, our industry will start to reject vanity metrics as a serious part of the discourse about customers. But this will take a long time. Investors and journalists have the most leverage to start making this change. Entrepreneurs have a part to play, too. Playing with vanity metrics is a dangerous game. Even if you intend to “only” give that sugar rush to publicists or investors, it’s all-too-easy to be taken in yourself. Your employees probably read the same press you are trying to influence. Your investors may be taken in today, but they will use those same vanity metrics to hold you accountable tomorrow. It’s much easier to rely on actionable metrics in the first place.

Monday, September 6, 2010

Since this blog's earliest days, I have made a habit of surveying you, my subscribers. I did it originally as a demonstration of the advantages of having a pathetically small number of customers, but I found the actual info so incredibly helpful, I have done it several times since. Since the last time, your ranks have grown tremendously, and I thank you all for this incredible support.

So, to celebrate Labor Day here in the US, I've created another survey. If you're willing to take five minutes to fill it out, I would be most grateful:

As usual, I've added a small minimum viable product (I'm starting to think of this technique as the "survey MVP") at the end, as yet another customer validation exercise. I'll post about the results later; to say anything here would bias the survey.