The problem I've seen time and again is that people fall in love with the process: they get so caught up in the notion that they can just iterate through their hypotheses that they create terrible hypotheses.

This is dumb.

People - especially smart people - love the idealized notion of "iteration", but in practice it turns into "throw shit against the wall and see what sticks". Then they realize the hypothesis testing iteration process takes time, and soon they start to believe they're making progress merely because time has passed. Then they start product development too early. Then they... well, you can guess what happens from there. Don't do that. Spend just a bit more time up front. Understand the problem you're solving first, so you can make better product hypotheses from the get-go.

Simplify the Customer Discovery loop into two parts: requirements gathering to define the problem statement, and hypothesis creation that you test with actual potential customers in the Customer Validation step. Here's the difference between me and Steve: you don't iterate on the problem hypothesis - the problem statement is a fact that you have to suss out from your requirements gathering. Your job is to determine 1) whether or not there is a problem, 2) how big a problem it is, and 3) how big the market affected by the problem is. (You don't want to develop a product to solve a problem that won't return your investment of time and money. That's what happens when you jump into testing product hypotheses too early.) These are all verifiable facts; the problem statement is not a hypothesis. Anyone else going out to the same customer base and asking the same questions will come up with the same problem statement that you do.

Defining a problem statement really isn't all that hard. Sure, you can read research reports and get market sizes and all that jazz, but the one prerequisite is really quite simple: talk to potential customers until you reach the point of diminishing returns. (You'll know it when you get there.) Often, this doesn't take more than a dozen. If you really want to be rigorous and pretend you've reached statistical significance, talk to 30 people. Here's a simple list of questions:

* How do you do [x]?

* What's the worst thing about doing [x]?

* How much extra time/money/energy does it take to deal with the worst thing about [x]?

* If I could solve the worst thing about [x], how much money would you pay me? (Note that if it costs money to deal with the worst thing about [x], theoretically you could charge up to 99.9999% of that amount. Theoretically.)

Everything else is optional. My trick is to keep them talking with one simple phrase: "That's really interesting. Tell me more." You may even (read: you will) get product insights from your problem statement questions.

Eventually, you should be able to home in on exactly what the jabbing pain in your customers' eyes is. For salespeople, it was having to input all their information into ACT and then do extra work back at the office to share it with their manager (Salesforce.com). For runners, it was that the shoes they had weren't really comfortable for running long distances (Nike).

The iteration comes only with the product hypotheses - and I use the plural because you want to consider all the different product approaches that can solve the problem.

Let me finish with this: you don't define the Minimum Viable Product. The market does. At the beginning, your job is to form good product hypotheses that you test. These product hypotheses spring forth from spending the time in requirements gathering to sharply define the problem statement that you are going to solve with whatever your product solution ends up being.

If you've done the work to define a robust and accurate problem statement through a rigorous requirements gathering process, you'll receive much better market feedback when testing your product hypotheses. Eventually, when you start market testing your alpha product, it will take far less time to get the orders/eyeballs/whatever that tell you you've hit your MVP.

Next time, I'll talk about when to follow and when to ignore your customers when defining and testing your product hypotheses.