Spark59 Blog – Helping Entrepreneurs Succeed

The Ideation Switch
21 Jan 2013

Imagine the following situation: You are in the office working on the next release of your software product. Suddenly your co-founder storms in and starts talking about the excellent new idea he just had.

You end up in a long discussion about the future of the company, only to resume work on the old release. You feel exhausted and unmotivated, and are relieved when you finally leave the office.

That story is a symptom of a larger problem – we question what we are doing whilst we are doing it. In my own experience this has led to a lot of wasted time, motivation issues, and the company running out of money because we kept the same strategy for years (even though we knew better).

Our inability to decide on executing one specific strategy resulted in procrastination, doing irrelevant work, or just having endless brainstorming meetings without ever taking action. I’ve come to believe that in order to achieve flow in a startup and get the business going, you need to keep the time your team spends on alternative directions to a minimum.

That means either focus on one direction and execute, or discuss alternatives and make a decision as fast as possible. This difference is best described as The Ideation Switch – making a conscious decision between Ideation and Execution:

Ideation is fundamentally different from Execution – we should choose our goals and work processes accordingly.

Ideation is about finding (any) interesting signals in chaos, close to customers and the market. During this time you’d run customer interviews, usability tests, investigate different business models and even look at what the competition is doing.

During Execution you have a clear goal of what you want to achieve – and work obsessively towards it, ideally without much distraction. In the Lean Startup sense, this is where Build-Measure-Learn happens, and where you test your hypotheses and run experiments.

You have assumptions during both Ideation and Execution, but in Ideation they are tested loosely and rapidly (a couple of interviews can invalidate a strategy), whereas in Execution they are tested thoroughly, based on customer data or previous experience.

Now, the biggest risk we have as a startup is spending too much time on either of these without giving room to the other.

When to Execute & When to Ideate

If we spend too much time Ideating, we end up in analysis paralysis, never committing to one idea and one strategy and actually executing it.

If we spend too much time Executing, we end up in a local maximum, with modest growth and not much potential for genuinely new innovation. We also run the risk of building something no customer really wants – people saying “nice product”, but not enough of them actually engaging and using it.

We therefore need to reduce scope by putting both Ideation and Execution into a timebox. This could mean we spend 2 weeks on Ideation through customer interviews, then 3 weeks on Execution and actually building an MVP.

How to Structure Ideation

Ideation consists of two parts: Collecting new data from actual customers by “getting out of the building”, and then analysing that data to gain insights and zoom into one direction to pursue. For collecting new data you intentionally want to open your worldview to diverging, new perspectives – something that is easiest in the beginning, but gets much harder for established businesses.

You can structure Ideation into three main stages:

Opening: Broaden your worldview and allow diverging perspectives from different team members

Explore: Surface both your team’s understanding of the customer and the actual customer’s needs

Close: Converge on a specific insight and agree what to focus on for your next Execution phase

You might recognise these from the book Gamestorming, or the similarities with the Design Thinking process.

Tools & Techniques for Ideation

Customer Interviews: Set up a 30-minute interview session with an actual customer and determine what their job-to-be-done is (look for frustration with the status quo & other strong reactions), and whether they are actively looking for a better solution

Business Modelling: Model the essential aspects of your business on the Business Model Canvas, and explore alternatives by changing just one of the post-its on the canvas. Use a timer (e.g. 15 min) and quick iterations.

If you’d like to learn in more detail how design techniques can be applied to startups, take a look at this Case Study by Mike Krieger from Instagram:

- Mike Krieger @ Warmgun Conference 2012, Talk starts at 22:00

Stop Wasting Your Time

I’ve seen too many companies that still had 6+ months of runway go up in flames because they didn’t actually try any of the options at their disposal, but rather ended up in endless brainstorming meetings, working on things they didn’t believe in anymore.

This explicit separation and the tools introduced above have helped me avoid that state and move towards a working business faster. To reiterate: do not Ideate and Execute at the same time; it only leads to indecision and procrastination.

Get started today:

Understand what you are trying to do: If you are doing customer development right now, focus on just that, and do it in the next 2 weeks instead of 2 months

Don’t share new exciting ideas 24/7 with the team: We love to do it, but this can confuse team members, especially when there are misunderstandings between business/marketing and tech

Set a specific time for Ideation: Start by timeboxing your team’s divergent thoughts into a 2-3 hour meeting, expand this over time into 1-2 days, and include actual customers in the process

Be conscious of why you are (un)productive: If we put the wrong constraints & goals in place we’ll just keep procrastinating. Try to reflect on your own productivity and understand the root cause; don’t try to beat it with sheer willpower.

How We Use Lean Stack for Innovation Accounting
25 Sep 2012

I introduced Lean Stack in my last 2 posts – Part 1 and Part 2. This is a follow-up on how we are using Lean Stack today as our Innovation Accounting framework.

What is Innovation Accounting?

Innovation Accounting is a term Eric Ries described in his book, The Lean Startup:

To improve entrepreneurial outcomes, and to hold entrepreneurs accountable, we need to focus on the boring stuff: how to measure progress, how to set up milestones, how to prioritize work. This requires a new kind of accounting, specific to startups.

Innovation Accounting effectively helps startups to define, measure, and communicate progress. That last part is key.

The true job of entrepreneurs is systematically de-risking their startups over time through a series of conversations. Success lies at the intersection of these conversations and each has a specific function and protocol.

For example,

with customers, we first use interviews and observation techniques to inform our problem understanding, then follow up with an offer and MVP to test our solution.

with investors, we first use pitches to inform our Business Model understanding, and then use periodic board meetings to update that understanding.

Today, I’d like to specifically focus on the conversations we have with our teams.

Experiments are Where the Action’s At

Your initial vision and implementation strategy go through lots of initial thrashing (as they should) but after a while they (should) start to stabilize.

The goal of a Lean Startup is to inform our riskiest business model assumptions through empirical testing with customers – not rhetorical reasoning on a white board.

The focus then shifts more towards empirical validation of your vision and strategy through experiments.

Even though running experiments is a key activity in Lean Startups, correctly defining, running, and tracking them is quite hard.

Here are a few key points to keep in mind:

Experiments are additive versus standalone

There is a natural tension between keeping experiments small and fast, and the expectation of uncovering big insights. The key is realizing that most experiments aren’t standalone.

You will probably never run a single experiment that will remove all risk from your business model in one fell swoop. Rather, it’s more likely that you will incrementally mitigate risks through a series of small experiments.

Every experiment needs to be falsifiable and time-boxed

From the Scientific Method, we know that experiments need to be falsifiable (written as statements that can be clearly proven wrong) in order to clearly declare them validated or invalidated.

I additionally recommend time-boxing experiments so that even when the falsifiable hypotheses have not been met, they are still brought up periodically for review. This is to short-circuit our default tendency to wait “just a little longer” when we don’t get the results we expected.

Time is the scarcest resource in a startup.
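Both properties – a falsifiable hypothesis and a time-box – are easy to make concrete by recording each experiment as a structured entry. The following sketch is purely illustrative (the field names, dates, and the `Experiment` class are my own, not part of Lean Stack):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class Experiment:
    # A falsifiable hypothesis: a statement that can be clearly proven wrong
    hypothesis: str
    # The time-box: review the experiment on this date even if results are inconclusive
    deadline: date
    # None while the experiment is still running; True/False once (in)validated
    validated: Optional[bool] = None

    def needs_review(self, today: date) -> bool:
        """Bring the experiment up for review when it concludes or its time-box expires."""
        return self.validated is not None or today >= self.deadline


exp = Experiment(
    hypothesis="At least 4 of 10 interviewees are actively seeking a better solution",
    deadline=date(2013, 2, 1),
)
print(exp.needs_review(date(2013, 1, 15)))  # still inside the time-box -> False
print(exp.needs_review(date(2013, 2, 2)))   # time-box expired -> True
```

The deadline check is what short-circuits the tendency to wait “just a little longer”: the experiment comes up for review whether or not the hypothesis criteria were met.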

Breakthrough insights are usually hidden within failed experiments

I find that many entrepreneurs get depressed when their experiments fall flat. They end up at a loss for what to do next or they make too drastic a course correction – justifying it as a pivot (a change in strategy).

A pivot that isn’t grounded in learning is simply a disguised “see what sticks” strategy.

Failed experiments are not only par for the course but should even be expected and embraced as gifts. At Toyota, the lack of problems is considered a problem, because it’s from a deep understanding of problems that true learning and continuous improvement emerge.

There is no such thing as a failed experiment, only experiments with unexpected outcomes.
- Buckminster Fuller

When an experiment fails, rather than simply declaring failure and/or using a pivot as an excuse, dig deeper instead. Search for the root cause behind the failure using techniques like 5 Whys, follow-up interviews, lifecycle messaging, etc.

There is a reason the hockey-stick curve is largely flat at the beginning. It’s not because founders are dumb or not working hard, but because uncovering a business model that works starts with lots of things that don’t.

It’s hard to be disciplined about time-boxing experiments, which is why we have established a regular reporting cadence that we use with both internal and external stakeholders.

Establishing a Regular Reporting Cadence

We utilize daily, weekly, and monthly standup meetings described below:

The Daily Standup
Our daily standups are structured around communicating progress on individual tasks and blocking issues. We use a separate online task board outside the Lean Stack that is broken into various sections (swim-lanes). Most tasks are directly tied back to experiments currently underway. Others are grouped more generally into sections such as bug fixes, code refactoring, writing blog posts, etc.

The Weekly Standup
Our weekly standups are structured around communicating progress on current experiments and defining new experiments. We start on the Validated Learning Board and work our way backwards from right to left. We first discuss experiments that completed (either successfully or unsuccessfully), ran past their time-box (expired), or got blocked.

Each of these discussions needs to end with a clear next action:

If an experiment failed, expired, or is at risk, the next action is scheduling a task to determine why. Once we determine why, the corresponding Strategy/Risks board and Lean Canvas are updated (if applicable), and a new follow-on experiment is defined.

If an experiment passed, the next action is determining whether the underlying risk we set out to mitigate was completely eliminated. If not, a follow-on experiment is defined.

The conversation so far is grounded entirely on empirical learning following the additive rule of experiments.
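Those next-action rules can be sketched as a tiny dispatch function. The status names and wording below are my own illustration of the process described above, not an API from Lean Stack:

```python
def next_action(status: str, risk_eliminated: bool = False) -> str:
    """Map an experiment's status at the weekly standup to its next action.

    'failed', 'expired', and 'blocked' all trigger a root-cause
    investigation, per the additive rule of experiments.
    """
    if status in ("failed", "expired", "blocked"):
        return ("schedule a task to determine why; update the Strategy/Risks "
                "board and Lean Canvas; define a follow-on experiment")
    if status == "passed":
        if risk_eliminated:
            return "risk fully mitigated; move to the next riskiest assumption"
        return "define a follow-on experiment for the remaining risk"
    return "keep running"


print(next_action("expired"))
print(next_action("passed", risk_eliminated=True))
```

The point of writing it this flatly is that every outcome, including a pass, produces a concrete next action rather than an open-ended discussion.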

In addition, we also spend some time discussing any recurring peripheral customer issues and/or feature requests. The level of customer pull is quickly gauged against our current key metric focus and a decision is made to either initiate a “Problem Understanding” initiative or table the issue for now.

A common trap in a startup is overcommitting one’s resources and always being in a state of motion (building too much and/or constantly fire-fighting).

“The only place that work and motion are the same thing is the zoo where people pay to see the animals move around” (not exact phrase).
– Taiichi Ohno

We instead strive to build slack into our schedule – affording us room for continuous improvement. We accomplish this using Kanban work-in-process limits on the Validated Learning Board to constrain the number of experiments we are allowed to run simultaneously. This further forces us to ruthlessly prioritize our next actions so that everything we do is additive and aligned with our current singular key metric focus.
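A work-in-process limit like this is simple to express. The sketch below is illustrative only – the limit value and experiment names are made up:

```python
WIP_LIMIT = 3  # hypothetical cap on experiments running simultaneously


def pull_next(running: list, backlog: list) -> list:
    """Pull prioritized experiments from the backlog only while the
    number in flight stays under the Kanban WIP limit."""
    while backlog and len(running) < WIP_LIMIT:
        running.append(backlog.pop(0))
    return running


print(pull_next(["exp-a", "exp-b"], ["exp-c", "exp-d"]))
```

With two experiments in flight and a limit of three, only one more gets pulled; the rest of the backlog waits, which is exactly the forcing function for ruthless prioritization.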

The Monthly Standup
Our monthly standups are structured around communicating progress on the overall business. We compile macro financial and innovation accounting metrics along with a one-page progress report (lessons learned) on the previous month. This is our version of a “pivot or persevere” meeting like the kind Eric describes in his book.

The output of this meeting is also shared with our advisors whose feedback is used to inform our strategic direction.

Applying A3 Thinking

While the Lean Stack does a great job of visually communicating progress, post-it note sized summaries don’t do justice to the complexity of experiment design and progress communication.

To overcome this shortcoming, I borrowed another page from the Toyota playbook – the A3 report.

As you can probably tell by now, I am a huge fan of one-page formats.

The A3 report is a one-page format Toyota developed for solving problems, describing plans, and communicating progress. The name A3 comes from the international paper size, which also happened to be the largest paper size fax machines could transmit. Nowadays, Toyota uses the more universal A4 size but the original name stuck.

Here is what our one-page experiment report looks like:

When we commit to run an experiment, the experiment is assigned an owner (usually the initiator) who starts by filling the left hand side of the report. As the experiment progresses, data from the experiment is filled in on the right hand side. And when the experiment ends, the validated learning section is filled in with a clearly stated next action.

We use additional variations of the one-page A3 report for capturing new feature initiatives (MMFs), risks, and monthly progress (lessons learned).

I know what you’re thinking. That’s way too much process for a startup. Surely, it will get in the way of getting real work done.

Like you, I am averse to needless process. Yes, it’s way faster to iterate in your head alone, but I can tell you from first-hand experience that it’s hard to scale the “mental leaps” approach over time, and especially across a team larger than two.

The A3 report is less of a template and more a way of crystallizing and visualizing one’s thinking.

Like the Lean Canvas, the A3 report is deceptively inviting to create but the one-page constraint forces a level of conciseness that cuts out all the noise.

The format of the report itself is rooted in the Deming PDCA cycle (Plan-Do-Check-Act) which has lots of parallels to the Build-Measure-Learn loop and the Iteration Meta-Pattern.

A3 reports become archives for your company learning
Our goal with these reports is not just using them to crystallize current thinking but also to turn them into an accessible archive of learning for future use.

The ability to playback experiments through these reports not only communicates learning to new team members but also helps demonstrate the modus operandi of how we work and think.

Putting it to Practice

Last time, I described why and how we implemented the Lean Stack MVP using physical posters. I am still a huge proponent of a physical card wall. The card wall serves as an effective progress radiator (even from 20 feet away) and fosters great in-person discussion.

But the biggest challenge we have had is keeping the card wall synchronized across our geographically distributed team. We needed an online solution and tried cobbling one together using existing online kanban tools. But they all fell short – mainly for their lack of swimlane support.

In the end, we came up with a simple and elegant solution built using Keynote and Dropbox that far exceeded our original expectations.

The shared Keynote document holds master templates for all the Lean Stack boards and A3 reports. Adding/moving cards on the board is dead-simple through drag-and-drop. Using hyperlinks, we were able to easily build in click navigation which makes the document behave like an app when in presentation mode. But the biggest benefit was being able to capture all this within a single portable document. We named this document the “Spark59 Playbook” because it captures our Vision, Strategy, and ongoing Product Experiments all in one place.

Like the posters, we are making our playbook template available for early access along with additional tutorial video content and an invitation to participate in the evolution of Lean Stack.

Troubleshooting your Activation Funnel
21 Aug 2012

Every product encounters this scenario: A person creates an account, and then does nothing (or close to it) within the product. Your unique value proposition piqued enough interest to acquire them as a customer, so why did they disappear?

I’m going to focus on the activation funnel, the series of steps a person follows, from acquisition to experiencing the first value point with your product, and share a method for troubleshooting issues.

Two Categories of Issues

When diagnosing issues within your funnel, consider two different types: usability issues and motivational issues. Usability issues occur when the product’s construction impedes the actions of the user – preventing, confusing, or frustrating them to the extent that they decide to drop out.

Motivational issues occur when the product’s “perceived value” isn’t high enough to induce action by the user. There can be a multitude of reasons that cause the drop-out, and one often overlooked is simply that people like to have a look. This is common in products with a low barrier to entry (no payment needed, little information required for an account), as the user makes less of an investment.

Whichever the case, let’s explore a process for measuring and improving activation.

Map the Activation Flow using Screenshots

Starting at the sign-up screen, take a screenshot at every user interaction through the end of activation. These screenshots build a step-by-step outline of the activation process. You don’t need fancy tools for this. I use my computer’s built-in snapshotting capability, and then assemble a slidedeck in Keynote.

As you’re going through the activation steps, don’t use lorem ipsum text or dummy data. If a step is creating a task, enter a real task you would do today. Make the content personal, and feel the required effort. It’s through the interacting, thinking, and clicking that you comprehend what the activation experience is.

Very quickly you’ll notice points of improvement: unclear or missing instructions, a button that doesn’t stand out enough, unnecessary fields, etc. Jot the ideas down for now, and continue mapping the entire process.

Establish the Baseline Numbers

It’s time to add data into the mix. Doing so will capture the current state and serve as the baseline against which to evaluate the results of future changes (experiments). This is an important component of troubleshooting, so don’t skip it. Not every change you make will have a positive impact on the activation rate.

Also, because activation happens at the top of the funnel, there is the risk of unintended consequences further down. It’s possible to optimize the activation rate, but at the expense of retention or revenue. Baselining the full lifecycle funnel helps empirically prove that overall progress has been made.

We use our own tool, USERcycle, but depending on your environment, do whatever works best, even if it’s manual calculation. The key is to ensure collection of data for the entire lifecycle funnel (acquisition –> referral).
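Whatever the tool, the baseline itself is just step-over-step conversion computed from event counts. A minimal sketch – the event names and counts below are invented for illustration:

```python
# Hypothetical event counts across a lifecycle funnel (acquisition -> retention)
funnel = [
    ("visited landing page", 1000),
    ("created account",       400),
    ("completed first task",  220),
    ("returned within a week", 90),
]


def step_conversions(steps):
    """Baseline: conversion rate of each step relative to the previous one."""
    return [
        (name, count / prev_count)
        for (_, prev_count), (name, count) in zip(steps, steps[1:])
    ]


for name, rate in step_conversions(funnel):
    print(f"{name}: {rate:.0%}")
```

Computing each rate against the previous step (rather than the top of the funnel) is what makes the “cliffs” described below stand out.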

Visualize the Current State

After collecting the flow and data, I build a bird’s-eye view that layers the two together. This visualization of the flow with the actual numbers helps clarify where real user issues are versus your own perceived issues (what you jotted down initially during mapping). I made mine in Photoshop, but other options like Keynote or Google Docs work. You can even go no-tech via printed screenshots taped to a whiteboard (handy for group sessions).

In the first rounds of troubleshooting you will usually confront one or two “cliffs” in the activation flow where 15-30% of users drop off. You’ll also see several 2-9% drop-offs across series of events. Deciding which issue to fix requires more than prioritizing based on severity: take into account the implementation cost of the change and its projected impact. Tackling a 6% drop that can be fixed in 2 hours is a better return on investment than a 20% cliff that takes 3 weeks.
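That trade-off can be made explicit with a simple impact-per-cost score. The numbers mirror the example above; the scoring itself is my own illustration, not a formula from this post:

```python
# (candidate fix, drop-off percentage it might recover, implementation hours)
candidates = [
    ("small copy tweak on signup",  6,   2),   # the 6% drop fixable in 2 hours
    ("rework onboarding flow",     20, 120),   # the 20% cliff taking ~3 weeks
]


def score(drop_pct, hours):
    """Rough return on investment: percentage points recovered per hour of work."""
    return drop_pct / hours


ranked = sorted(candidates, key=lambda c: score(c[1], c[2]), reverse=True)
print(ranked[0][0])  # the quick 2-hour fix wins (3.0 vs ~0.17 points per hour)
```

Any ranking like this is only as good as the projected-impact estimates, but it keeps the team honest about chasing the biggest cliff by default.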

Our philosophy is to focus first on the low hanging fruit. Ask yourself:

“What’s the smallest, simplest change we can make that’s likely to keep people from having the problem?”

Steve Krug

Think in terms of tweaks rather than complete redesigns, as the reality is that multiple iterations will be necessary.

Wrapping up

Cycling through the screenshot slidedeck, gathering baseline data, and finding where to start leads to uncovering and identifying “what” issues are in your activation funnel. With this information you are at the cusp of an experiment – one with a goal in sight, a baseline to beat, and a measurement to validate the outcome.

In my next post I’ll discuss getting to the “why” behind the issues and share techniques for identifying usability issues versus motivational issues.

How Customers Sculpt your Product’s Design
27 Jul 2012

In my post the Design Dilemma for Lean Startups I discussed how every startup needs to find the Minimum Design Level of their product, staying customer-centered when deciding how much to invest in design rather than basing the decision on the resources available to the startup. In this post, I’ll illustrate how extending our effort into understanding who the target customer is provides valuable guidance for our design.

Start Honing Customer Segments

Even if your product has broad appeal to many customers or solves a universal problem, the goal is to focus in on a well-defined customer segment that represents your prototypical first customer (early adopter). This is because:

“You can’t effectively build, design, and position a product for everyone.”

Ash Maurya

For example, if we have a startup idea with a problem/solution around file sharing, think of how altering just one characteristic of the customer segment (parents vs. teenagers, lawyers vs. designers, or men vs. women) transforms the entire business model. The same impact holds true for the design. A landing page for software targeting parents won’t look the same as one targeting teenagers. Parents and teenagers have different characteristics, different values, and different goals, and so the elements of our design need to be unique to them.

The advantage is that when we begin to string together multiple characteristics (profession, age, gender, income, etc.) of our early adopters, we get an outline of knowledge that gives our design clear direction. With a strong sense of who we’re targeting we can tailor: images to showcase the right kind of people, color sets that convey the proper tone, or UI controls that match the technical sophistication of our customer.

Add Empathy with Personas

In the earlier example, we refined our customer segments via demographic characteristics but we can uncover more reference points by connecting with our customers on an emotional level through the invention of personas.

Personas are fictional characters you create to represent your customers; they help guide decisions by capturing who they are, what goals or aspirations they have, what their fears or frustrations are, and other day-in-the-life information. There are many variables to choose from, many mapping to psychographics while blending in demographics.

For me, the persona exercise helps frame my thinking through the customer’s point-of-view. It enhances customer understanding and creates a one-page resource that helps avoid focusing too much on just the bits and bytes of a design.

“Personas make it easier to be human-centered.”

Don Norman

This is an asset as our design, the element that many times serves as the tipping point for sparking interest, desire, and action, needs emotional triggers from which to spring. Commanding attention or making a lasting, visceral impact is easier to achieve when the design touches on the likes/tastes/goals of the customer.

Creating and Using a Persona

While personas can be data-driven documents created by professionals who conduct extensive analysis, here at Spark59 we go with the more informal “Ad-Hoc Persona”, which taps into the background knowledge we already have from personal experience. For more depth, visit Tamara Adlin’s website and Jeff Gothelf’s post on his experience creating and using them in an executive setting.

Here’s an example of one of our personas for USERcycle. We took our early adopter (males who are technical founders building SaaS companies with 10-15 sign-ups a day) and created an ad-hoc persona named “Josh” using our own blend of psychographics and demographics in Apple’s Keynote software.

When it came time to make the teaser page for USERcycle we asked “What would Josh love to see?” The end result was a Matrix inspired teaser page, not because we like the movie (which we do), but rather because Josh liked it.

Getting Started

When starting out, best guesses are fine, as the goal is to create a living document – one that is updated continuously as customer learning increases through getting out of the building.

Not every customer will conform to your early adopter definition, and not every piece of information will be accurate in your persona sheet. That’s ok. The value is not derived from getting the details perfect; it’s gained by pushing forward your critical thinking on customers and keeping them in the forefront of your design decisions.

Deconstructing Lean Startup

When new entrepreneurs are introduced to Lean Startup I can often sense their confusion: What is Lean Startup? A structured approach? A set of principles? A cargo cult?

It sure is a good marketing buzzword. And from reading the book alone it’s quite difficult to actually take action inside your startup.

The actionable tools start appearing once you go into the different communities.

In recent months I’ve spent considerable time deconstructing these communities’ approaches, taking a long list of learnings and tools with me. Here’s a short glimpse, which hopefully gives you an incentive to explore on your own.

Lean Startup Machine: Get Out of The Building

The most important principle of all: Get out of the building and learn from a customer. Don’t sit inside and try to come up with the perfect approach.

Lean Startup Machine has perfected this – over a weekend they push over 50 people out of the building to interact with customers on the street, in clubs, and over the phone. The process is messy and imperfect, but people always come back having gained their first validated learning from customers.

What I’ve learned from them: Getting out of your comfort zone (= your learning bias) is more important than strictly following methodology.

Discussing Customer Learnings at LSM London

UX & Designers: Visual Thinking & User Research Methods

When you talk with customers, don’t forget the insights of the UX and design community. They spent decades perfecting user research before it was glorified. Learn from them how to run customer interviews, how to see patterns in your customers, and how to integrate learnings into your team.

Especially when integrating customer learnings, I can highly recommend grabbing a copy of Dave Gray’s Gamestorming. It contains a wealth of information on visual tools that enable you to make better decisions as a team – instead of endless discussions and talking.

What I’ve learned from them: How (and when) to run customer interviews and other user research. Team work. Always asking “What are we trying to learn?”.

Y Combinator: Focus on Your Customers (and ignore investors)

Paul Graham, the mastermind behind Y Combinator (YC), would not publicly endorse Lean Startup – but I still took away one key message for all entrepreneurs who try to be rigorous and structured.

Teams going through YC get one really important push: one team member focuses on product (= build), one team member focuses on users (= get out of the building).

Until demo day, no time is wasted on investors – all of it is spent trying to come up with something people want.

What I’ve learned from them: It’s important to keep fundraising out of the entrepreneur’s mind when starting out. Fundraising pushes us towards our comfort zones and vanity metrics – we start believing in our own reality distortion field.

Fred & Russell participating in YC S12

The Movement

Some people feel that Lean Startup is a cargo cult that pushes Eric Ries’ personality. If you’ve ever seen Eric talk, or talked with him personally, you’ll see that that’s really not the point – and that he does all he can to push other leaders in the community around the shared goal of improving the odds of entrepreneurship.

“Our success should be judged by the leaders we develop in the communities – not the leaders we already have.”
- Eric Ries

Lean Startup is a space that enables us to collaborate and think up new approaches – no one ever said it’s the road to success, but it should make us think and reflect on what we’re actually trying to do and what methods we use to get there.

Salim Virani summarized this best in my opinion, describing the Leancamp open space as a “15-dimensional Venn diagram”. Leancamp brings together entrepreneurs, researchers and coaches alike, with a focus on a high-bandwidth exchange of knowledge, ideas and experiences.

Leancamp Dublin

Let’s inspire many more entrepreneurs to take a closer look and participate in the community – and not lead them to think Lean Startup is a cookie-cutter approach to success.

Join a Leancamp, attend a Meetup and above anything else: Share your own thoughts with others, we’re all here to learn and continuously improve.

Lean Stack – Part 2
Wed, 27 Jun 2012

Last time, I outlined the thought process behind the Lean Stack and provided a 3,000-foot overview of the toolset. In this post, I’m going to dive a little deeper into the process flow and end with a concrete case study.

The Lean Stack MVP – A Different Approach

A number of you inquired if the new tools would be integrated into LeanCanvas.com. The answer is yes, eventually, but we are not starting there.

My earlier iterations (Lean Canvas layers, and Feature Kanban board) were all done in software. On the surface, a web app seemed to be the best choice because we already had a large pool of users and software should be fast and easy to change, right? Not quite. Looking back at those experiments – they took too long, cost too much, and created lots of needless waste – not counting hours dealing with UX issues, browser issues, and other defects.

You can almost always find unconventional ways to accelerate learning and reduce waste that don’t involve building the final solution you had in mind.

The very first Lean Canvas MVP was a blog post. The canvas was then refined over numerous workshops (through slides and paper exercises) before it was turned into software. While I considered applying the same approach here, running experiments is a more advanced and later stage step that didn’t naturally fit into my 1-day workshops.

So I invented a new learning product – The Running Lean Bootcamp. The idea behind the bootcamp was to go beyond the book and 1-day workshops, which are typically characterized by high activation but low follow-through retention. In other words, people leave the workshop very excited but fail to put these principles into practice because real behavior change is hard.

The bootcamp aims to tackle this problem by getting people to run lean on their products for the period of the bootcamp – with accountability and personalized coaching built into the program. We get to share (and experiment) with our latest practices and the teams get to move their businesses forward – making it a win-win for both of us. The flow described below was refined through working with ~20 startup teams from the last bootcamp.

This time around I also decided to experiment with a physical MVP (using posters). This went against my grain because I am more of an abstract thinker. Even when I studied Electrical Engineering back in college, I was always the first one out of the simulation lab, but the last one out of the hardware lab. Despite my initial skepticism, using a physical MVP here was one of the best decisions we made.

Within a couple days, we had the posters designed, printed, and hung on a wall. Here’s a picture of what early versions looked like:

You’ll notice there is no “Strategy and Risks” board in the top picture. That’s because there wasn’t one when we started. That board was the missing glue, discovered accidentally as I was working with one of the startups. The second picture has a hand-drawn version of that board, which was turned into a poster a few days later and rolled out to all the teams.

With software, we’d have had to go back and code for a few more weeks to get this working. In the physical world, the surrounding wall and post-its provided us with a blank canvas that could be repurposed for anything we wanted. This is just one example of the many liberties the physical boards afforded us throughout the process. Who says you can’t iterate quickly with a physical prototype?

Let’s walk through the actual flow next.

The Lean Stack Flow

The Vision – Lean Canvas

The first step of the process is still capturing the essence of your vision as a single-page business model diagram using Lean Canvas.

I have already written a lot about Lean Canvas which I won’t repeat here again. But I will share a common pitfall I’ve seen one too many teams fall into – the analysis paralysis trap.

The goal of a Lean Startup is to inform our riskiest business model assumptions through empirical testing with customers – not rhetorical reasoning on a white board.

Yes, over time your canvas should be correctly segmented, focused, and concise – and would probably even benefit from deep exploratory exercises like persona and customer-flow creation. But achieving these goals on the first canvas is premature optimization.

Instead, initially focus your efforts on quickly moving through the Lean Stack layers and use the built-in feedback loop to prioritize the areas that need further development. For example, the most rewarding time for a deep dive into personas might be when you get to the build stage of your first experiment – which will probably be a Problem Interview.

The Strategy – Strategy and Risks Board

With your vision documented, you then move on to the Strategy and Risks board. The goal here is to formulate an appropriate plan of attack – one that prioritizes learning about what’s riskiest above everything else.

Risk prioritization in a startup can be non-obvious. The best starting point is identifying gaps in your thinking and talking through them with formal and/or informal advisors.

Another great tool is studying pre-existing analogs and antilogs. This is a conceptual framework introduced by Randy Komisar and John Mullins in their book: “Getting to Plan B”.

Analogs and antilogs essentially let you stand on the shoulders of others before you and see further by way of their lessons learned.

After studying a few analogs, patterns might begin to emerge that help in formulating your implementation plan. For example, after 37signals’ success with Basecamp, a number of companies applied their “build an audience through a blog, then follow with a web app” approach. Some succeeded, others didn’t.

While a strategy pattern cannot guarantee success, it can jumpstart your journey.

The Product – Validated Learning Board

With your strategy and risks documented, you are now ready to move on to experiments.

Not surprisingly, the workings of this board raised the most questions. I’ll just jump to the questions:

Question: How does one create a card for a product prior to conducting problem and solution interviews?

The product card is just a placeholder for the idea you plan to implement. All you need is a label to identify the idea or concept (which you can change later). It doesn’t presuppose a solution definition or a commitment to build it. The only time you could struggle to find a suitable name is if you were randomly fishing for ideas to implement. But even then you could call the product: “Random idea fishing expedition”.

In practice, though, by the time you get to this stage you’ve already got a pretty decent inkling of the problem, customer, and even a possible solution. Rather than describing how to name your product, I should be expending more words trying to talk you out of prematurely naming your product – not spending precious cycles running domain name searches and designing logos for your “product”.

Question: What is a Minimum Viable Feature (MVF)? What is the relation to MVP? How does one know which one to use?

The product card is intended to capture “a unit of product” that is delivered to customers. The first “unit of product” you release to customers is your Minimum Viable Product (MVP).

In my last post, I was assuming a continuous deployment process, like the one we use, where after the MVP we would deliver subsequent “units of product” as individual feature pushes. Given that not everyone deploys in that fashion, a more general label might have been Minimum Viable Release (MVR), where the MVP is Release 1.0 and a release can in turn be a single feature (MVF) or a collection of features.

In addition to the MVP and its follow-on MVRs, the product card can also be used to represent multiple related products on the same board. At Spark59, we use a single Lean Stack to capture the many “tools, content, coaching” products we build.

Question: Can you explain the lifecycle of a product through the 4 stages on the board?

What I found after building a few products the lean way is that the process for going from idea to MVP is/should be the same as going from MVP to Release 2.0, 3.0, etc. Otherwise, it’s very easy to stop listening to customers and be led astray. That process is what I codified into the iteration meta-pattern shown on the board.

Question: Where do you capture product and experiment details?
The Kanban card is intended to visualize and communicate the flow of work. The face of the card is too small to hold all the details that go along with a product or experiment. So we only place the most critical pieces of information on each card.

For a product, that would be an identifier (name) and exit criteria for the specific stage.

For an experiment, that would be an identifier (usually a short action based name like “Run problem interviews”) and a list of one or more falsifiable hypotheses.

For a risk or issue, that would be an identifier typically posed as a question such as “Can we charge $100/mo for this product?”.

If this were an online tool, opening a card would reveal more details. We implement this today using a separate one-page A3 report. A3 reports (named after the international paper size on which they fit) are used extensively at Toyota for problem-solving initiatives that parallel many of the challenges in a startup. In my attempts to grok A3 reports, I uncovered another parallel between the 4 stages of the iteration meta-pattern above and the 4 stages of the Deming cycle: Plan, Do, Check, Act (PDCA). But that’s a whole other can of worms best left for a future post.
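As a concrete sketch, the card contents described above could be modeled as simple records. This is purely illustrative – the class and field names are mine, not part of the Lean Stack, and the example hypothesis is invented:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProductCard:
    name: str            # identifier, e.g. "Lean Canvas"
    exit_criteria: str   # for the current stage on the board

@dataclass
class ExperimentCard:
    name: str            # short action-based name
    hypotheses: List[str] = field(default_factory=list)  # falsifiable hypotheses

@dataclass
class RiskCard:
    question: str        # a risk or issue, posed as a question

# Example cards (hypothesis text is invented for illustration):
product = ProductCard("Lean Stack", "Complete 10 problem interviews")
experiment = ExperimentCard("Run problem interviews",
                            ["8 of 10 interviewees rate this a must-have problem"])
risk = RiskCard("Can we charge $100/mo for this product?")
```

Anything beyond these few fields would live in the card’s A3 report, not on its face.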

The Lean Stack in Action

Time to jump into the concrete case study.

Now it’s Your Turn

In lean thinking, a process is not something passed down from the top and set in stone, but rather a living product that is owned by the people doing the work.

The Lean Stack – Part 1
Fri, 15 Jun 2012

Most products fail, not because we fail to build what we set out to build, but because we waste time, money, and effort building the wrong product.

I attribute this failure largely to the entrepreneur’s often unbridled, singular passion for the solution. Sometimes that solution is truly awesome for a lot of people, but more often than not, it isn’t.

So how do you overcome this?

For starters, I strongly advocate redefining the “true product” and “real job” of an entrepreneur.

The true product of an entrepreneur is NOT the solution, but a working business model.

The real job of an entrepreneur is to systematically de-risk that business model over time.

This redefinition was my motivation for creating the derivative Lean Canvas format and codifying what I believe to be the universal meta-principles for Running Lean depicted below:

You start out by drawing a line in the sand with your initial Lean Canvas, prioritizing risks, and systematically testing those risks through experiments. There is a built-in learning feedback loop from experiments back to risks back to the business model.

Putting it to practice

While this process works conceptually, I often field questions from other entrepreneurs specifically on how to

1. correctly prioritize risks and define experiments
2. track those experiments so it scales over time (and with more people)
3. reflect the learning from experiments back on to the canvas.

More recently, I have also lived these challenges first-hand as I’ve grown my own team and am attempting to build a learning organization where everyone runs experiments.

The root problem with this process is that each transition between stages requires a mental leap which is often hard to teach and/or share with another person.

Even though running experiments is a key activity in Lean Startups, correctly defining, running, and tracking them is hard.

For inspiration, I turned to Jeffrey Liker who has written a number of books on the Toyota Production System. I’ll come out and admit upfront that I’m not big on process and even have an inner aversion to it. Like many of you, I’ve worked at large companies and lived through many useless “TPS reports” and needless process. But at Toyota they view their TPS reports differently.

“The right process will produce the right results.”
- Jeffrey Liker, The Toyota Way

In lean thinking, a process is not something passed down from the top and set in stone, but rather a living product that is owned by the people doing the work. They are charged with a singular directive – reducing waste (i.e. eliminating non-value add work). You start by documenting your current process (with a value stream map) and then challenge everyone to continuously improve it. That I can get my head around.

After a few iterations and a lot of testing, we have developed a process that works for us and half a dozen other startups – something we’re calling the Lean Stack, which I’ll explain in a bit. But first it would help to explain how we got here.

Version 1: Risks and Experiments Layers to Lean Canvas

Our first approach was to apply a layers model to the Lean Canvas. The idea was to keep the canvas structure intact but overlay different views for risks and experiments on it.

This is currently what’s built into the online Lean Canvas tool but despite a number of attempts, we couldn’t get ourselves to adopt it into our daily routine.

We found ourselves struggling to constrain an experiment to a single section on the canvas. Many experiments, such as running a solution interview, potentially test a number of business model assumptions/hypotheses at once.

The main issue though is that the canvas is a static view and doesn’t by itself help visualize the flow of work.

“Use visual controls so no problems are hidden.”
- Jeffrey Liker, The Toyota Way

Version 2: Feature level Kanban board

That brought us to our second version which was to create a minimum viable feature (MVF) level Kanban board which I documented in this post: “How we build features” and my book.

As a refresher:

MVF is derived from Minimal Marketable Feature (MMF).

MMF was first defined in the book “Software by Numbers” as the smallest portion of work that provides value to customers. An MVP is made up of one or more MMFs.

A good test for an MMF is to ask yourself if you’d announce it to your customers in a blog post or newsletter. If it’s too tiny to mention, then it’s not an MMF.

The idea here was to use Lean Canvas to document the business model and risks, and the Kanban board to document the actual work.

The choice of using an MVF for a coarse grained unit of work was intentional. I wanted a macro view of the work that also captured customer learning versus having just a task board (which we keep separate).

While the Kanban board was great for tracking features, it was still a mental leap to go between the canvas and the board. Even though using the canvas to represent risks made sense, since everything on the Lean Canvas needs to be tested, the framing on the canvas is around assumptions. Simply marking a section on the canvas as a risk wasn’t sufficient; one still needed to translate those sets of assumptions into risks and then prioritize them.

Mental leaps are risky because different people make different leaps. That isn’t a problem on its own; you run into trouble when those undocumented mental leaps start driving work for others.

“Standardized tasks are the foundation of continuous improvement.”
- Jeffrey Liker, The Toyota Way

It also quickly became apparent that an MVF wasn’t a single experiment but was made up of a number of experiments, which in turn were made up of a number of tasks. The board was too macro and, as a result, slow-moving.

A little more inspiration: The Vision-Strategy-Product Pyramid

We needed a process that flowed better end-to-end. The idea for that flow, and subsequent lean stack, came from listening to Eric Ries describe his Vision-Strategy-Product pyramid on a webinar we did together.

I had seen the pyramid many times before in Eric’s book but this time it had a different effect.

Here’s what I took away:

When entrepreneurs get hit by an idea, it all comes in as a single clear signal. Another way to restate the top reason products fail is that, as entrepreneurs, we often don’t take the time to deconstruct the idea into its vision, strategy, and product components. We instead race to the top of the pyramid – only to prematurely fall in love with the product.

A good way to explain the pyramid is using a driving metaphor (like Eric does in his book):

Imagine you accepted a new job across town. The vision represents your destination or new place of work. The strategy represents all the possible means of getting there – you might bike, take a bus, drive, etc. You don’t know up front what routes may be optimal so you run experiments. A change in strategy is a pivot. The product is a repeatable roadmap for getting to work.

Another useful model for explaining the pyramid is Simon Sinek’s Golden Circle. The vision is your why. The strategy is your how. And the product is your what.

Version 3: The Lean Stack

The pyramid is not only a good mental model – it can be woven into a process that flows by building a visual control system at each stage of the pyramid. That’s exactly what I did with the Lean Stack.

The Vision Layer – Lean Canvas
The lowest layer of the stack, your vision or why, is still best captured by a Lean Canvas. It’s meant to document your current best guess at realizing a working business model.

The Strategy Layer – Strategy & Risks Board
The middle layer of the stack is the glue that connects the other two layers – the glue that was missing from my previous attempts. This layer helps break down the big vision into an implementation plan (strategy), informed both by studying existing analogs/antilogs and by conversations about risks with your team, stakeholders, advisors, customers, even competitors.

The analogs/antilogs framework is a concept described by Randy Komisar and John Mullins in their book: “Getting to Plan B” which I’ll illustrate with examples in my next post.

Risks are simply captured on a Kanban board first in the backlog area where they are prioritized and then used to drive other work (experiments) down the line.

The Product Layer – Validated Learning Board
The top layer of the stack is intended to capture the actual work that goes into building the product(s) that realize the vision.

I capture this work on a Validated Learning Board which is similar to the Feature Kanban board but with a few additional tweaks. For one, it’s been generalized to support any kind of product (not just software) by explicitly breaking out the flow into the 4 stages of the iteration meta-pattern:

Understand Problem: Before you can define a solution, you need to understand the customer and problem.

Define Solution: Instead of rushing to build out the solution, use a demo to define the solution first.

Validate Qualitatively: Then validate the solution at small scale.

Verify Quantitatively: Finally verify the solution scales.

But the big change here is that it models both product and experiments on the same board using a two-tiered board implemented using horizontal swimlanes. If you want to learn more about Kanban boards, I highly recommend David Anderson’s classic book: “Kanban” as a great primer.

The top tier captures the state of the product. A “product” on the board can represent a minimum viable product (MVP), a minimum marketable feature (MMF), or a related sub-product. At any given time, a product can only be in one of the four stages. Each stage has clearly defined exit criteria (such as a number of interviews, a learning goal, or a time box) which are captured on the product card.

For each product, you can run any number of experiments which are placed in a horizontal swimlane for the product. Each experiment has a clearly defined falsifiable hypothesis and optional time box constraint. An experiment goes through a Build/Measure/Learn cycle represented by similarly named columns on the board.

The build column represents the definition and setup stage of the experiment which often involves things like coding, landing page design, mockups, interview script creation, etc.

The moment the experiment is shown to at least one customer, the card moves into the measure stage.

Once sufficient data has been collected or the time box exceeded, the card is moved into the learning stage where it is either marked validated or invalidated based on the criteria described in the falsifiable hypothesis.

That learning is then internalized to determine whether the stage’s exit criteria have been met. If so, the product card moves to the next stage. Otherwise, more experiments are run.
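To make the two-tiered flow concrete, here is a minimal sketch of the board as a state machine. The stage and column names come from the post; the classes, methods, and the simplified exit-criteria check are my own assumptions (on the real board, exit criteria live on the product card and require human judgment):

```python
PRODUCT_STAGES = ["Understand Problem", "Define Solution",
                  "Validate Qualitatively", "Verify Quantitatively"]

class Experiment:
    def __init__(self, hypothesis):
        self.hypothesis = hypothesis   # a falsifiable hypothesis
        self.column = "Build"          # setup: coding, mockups, interview scripts
        self.validated = None

    def show_to_customer(self):
        # the moment at least one customer sees it, the card moves to Measure
        self.column = "Measure"

    def conclude(self, validated):
        # enough data collected (or time box exceeded): record the learning
        self.column = "Learn"
        self.validated = validated

class Product:
    def __init__(self, name):
        self.name = name
        self.stage_index = 0           # every product starts in "Understand Problem"
        self.experiments = []          # this product's horizontal swimlane

    @property
    def stage(self):
        return PRODUCT_STAGES[self.stage_index]

    def exit_criteria_met(self):
        # simplified stand-in: no experiments still running, at least one validated
        done = all(e.column == "Learn" for e in self.experiments)
        return done and any(e.validated for e in self.experiments)

    def advance(self):
        # move to the next stage only when the exit criteria are met;
        # otherwise the answer is: run more experiments
        if self.exit_criteria_met() and self.stage_index < len(PRODUCT_STAGES) - 1:
            self.stage_index += 1

p = Product("MVP")
e = Experiment("Problem ranks as must-have for 8 of 10 interviewees")
p.experiments.append(e)
e.show_to_customer()   # Build -> Measure
e.conclude(True)       # Measure -> Learn, validated
p.advance()            # "Understand Problem" -> "Define Solution"
```

The physical posters do exactly this with columns and swimlanes; the sketch just shows why a card can never skip a column or a stage.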

What about tracking all the tasks within experiments?

While it’s conceptually possible to create a third tier for tasks, it’s generally best to avoid nesting beyond 2 levels as the board becomes way too busy. Also, the measure of progress in a Lean Startup is learning, not building, which is another reason to keep the build details off this board.

We track our tasks on a separate task board (also Kanban) that is linked to experiments but for now I’m considering that board off-topic and out-of-scope.

The Lean Stack in Action

This was a lot of information to cover in a single post. Next week, I’ll cover how to wire these boards together and illustrate it in action using an example from one of my products.

The Design Dilemma for Lean Startups
Fri, 04 May 2012

This is a post by Emiliano Villarreal, our “designer” at Spark59.

One of the top questions I get when helping entrepreneurs, especially those who have not achieved product/market fit, is “How much should I invest into the design of my… landing page, MVP, smoke-test, pitch deck,… etc.” The commonly cited answer is that “it depends.” But it depends on WHAT?

The dilemma

Calculating the right level of form in relation to function is one of the longest-running debates across all the disciplines of design. In simple terms, it’s deciding the balance between the utility a product provides (function) and its beauty (form). As the classic example below, taken from the book Universal Principles of Design, shows, focusing heavily on one area greatly impacts the end result.

For a Startup

When discussing this dilemma for Lean startups, I like to make the terms more contextual – function becomes features, and form becomes design.

Features are what solve the problem(s) your product is targeting. For example, if file sharing is the problem, the feature could be software that automatically syncs files (Dropbox) or software that uses the power of multiple seeds for fast distribution (p2p).

Design, then, is a product’s visceral form and how the interaction between the features and the user occurs. Sticking with the file-sharing example, the design components could be a simple email-like interface (Yousendit) or traditional OS integration (Dropbox).

Why this gets risky

Most startups in the beginning face a limited runway for becoming sustainable. Given a finite amount of time and money, most prioritize features at the expense of design.

This seems logical, as a product that fails to deliver strong features provides little value to the customer. If the features aren’t compelling, customer retention will likely be low, referrals will be non-existent, and there isn’t much prospect for growth.

On the flip-side, a product that fails to invest enough in its design might suffer from perceptions about quality, frustrations from usability, or simply fail to get a customer’s attention. This renders any value derived from features moot.

The catch is that underinvesting in either one can kill the startup. So the question is not which one to omit at the cost of the other; it’s finding the minimum needed for both.

A different approach

Just like there is a minimum feature set that must be defined to build your MVP, there is a minimum design level that must be reached for every product to be effective, regardless of whether it is a landing page, smoke test, or paid app.

Where most people are afraid that an MVP means constructing a half-baked product, I find many people fear that a minimum design level means a slick design with fancy controls. Far from it.

Minimum design level is the basic design components needed to make your features or content effective in delivering their purpose.

The important point is this minimum level of design should not be decided by how much the startup can afford to build but rather by how much is needed by the customer.

It’s a critical distinction that requires a shift in thinking but provides the path for answering the “How much” question. The good news is it’s simple; it starts with understanding your customer segment.

–

In the next post, we’ll dive into how a startup baselines their minimum design level while using Lean principles.

This is an analysis based on publicly available data by Lukas Fittl, who just joined Spark59 as our European outreach.

All the media are abuzz with the news of Facebook acquiring Instagram for $1 billion. Let’s dive into what they actually did:

Zoom-in pivot:
Instagram was initially “Burbn”, a check-in app where you could also add photos. They launched after 8 months of private beta, and saw little engagement from customers – apart from photo sharing, which was used actively. At some point they actually sat down and built a prototype of “just photos” – but discarded that again without launching it.

A few weeks later, on a vacation, Kevin Systrom saw someone use a photo app with filters (Hipstamatic was already popular) and wondered why none of these apps had social functionality. And all the existing social photo apps made ugly photos. From that, he and Mike Krieger built a simple social photo app with just one excellent filter: Instagram.

“If I could give any advice: Stay away from this private beta stuff. Put it out there, find the people that are vocal about it, put it in their hands and listen to what they’re excited about.”
- Foundation 16: Kevin Systrom

They focused on one “must have” use case: sharing beautiful photos with your friends.

They split it up into three problems to be solved:

Making Photos Beautiful
(based on their observation of filter apps)

Allowing You To Share Them on Multiple Networks
(engineering for viral growth)

“We focused on three – we weren’t trying to reinvent the world of photography. We focused on these three humble problems. And thats what turned Instagram from yet another network tackling photos, into a network people used.”
- Foundation 16: Kevin Systrom

Build-Measure-Learn & Cohort Analysis:
Instagram kept a nimble engineering team and delayed building a proper company – and they were successful because of it. They kept experimenting and improving their metrics, using cohort analysis to stay focused and keep questioning the status quo – not to be distracted by vanity metrics.

“The people who signed up in the first month: Are they still using it today? Often in social startups you’ll see people sign up, use it for a couple of months, and then never use it again. This weird effect where, because your sign-up rate is so high, your active users seems to stay pretty much the same. Like a revolving door.”
- Kevin Systrom at TC Disrupt 2011
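The “revolving door” Systrom describes is exactly what a cohort analysis surfaces. Here is a toy sketch with invented numbers (not Instagram’s data): total actives look flat month over month, while the month-1 cohort’s retention collapses:

```python
from collections import defaultdict

# Hypothetical logs: user -> signup month, and (user, month active) events.
# All numbers are invented for illustration.
signups = {"a": 1, "b": 1, "c": 2, "d": 2, "e": 3, "f": 3}
activity = [("a", 1), ("b", 1), ("a", 2), ("c", 2), ("d", 2),
            ("c", 3), ("e", 3), ("f", 3)]

# Vanity view: active users per month look flat (2, 3, 3)...
actives = defaultdict(set)
for user, month in activity:
    actives[month].add(user)

# Cohort view: of the users who signed up in a given month, what fraction
# is still active in a later month? This exposes the revolving door.
def cohort_retention(cohort_month, month):
    cohort = {u for u, m in signups.items() if m == cohort_month}
    return len(cohort & actives[month]) / len(cohort)

print(cohort_retention(1, 1))  # 1.0 -> both month-1 users active at signup
print(cohort_retention(1, 3))  # 0.0 -> neither is still active in month 3
```

Despite a flat active-user count, the month-1 cohort has entirely churned by month 3 – which is the question a vanity metric never answers.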

When you dive into the details, Instagram tells a fascinating story of focused engineers innovating in small batches and delivering one superb user experience. And they were rewarded for it.

Let’s talk more about what actually makes the difference, and not blindly build the “Instagram for X”.

Get Your Customers to Want to Pay Even Before Building Your Product
Mon, 12 Mar 2012

The following is an excerpt from the book Running Lean.

In my post Don’t Ask Customers What They’ll Pay. Tell Them I covered how to uncover initial pricing for your product by first spending time to understand your customer, their problems, and their existing alternatives. In this post, I’ll talk about how you use that information to test pricing.

When to Test Pricing

The general feeling around your first release (or minimum viable product) is that it’s embarrassingly minimal so it’s more common to want to discount or give it away in the interest of learning from customers. The mindset most of us have is one of “lowering sign-up friction”. We want to make it as easy as possible for the customer to say yes and agree to take a chance on our product – hoping that the value we deliver over time will earn us the privilege of their business.

Not only does this approach delay validation (because it’s too easy to say yes), but a lack of strong customer commitment can also be detrimental to learning. Your job is to find early adopters who are at least as passionate about the problems you’re addressing as you are. Lowering sign-up friction makes sense once you’ve got a working customer lifecycle. Until then, the goal is maximizing learning, not efficiency.

I believe that if you intend to charge for your product, you should start testing pricing even before you build your MVP. Remember from the last post that pricing is part of the product.

Don’t Lower Sign-up Friction. Raise It.

I know this may run counter to your intuition. It did with mine. Here’s a social experiment I ran recently during one of my customer interviews (and have repeated several times since then) that changed my perspective (I’ve left out the names of the product and customer):

I had just finished demoing the solution and validated that we had a real “must-have” problem and solution on our hands.

Me: So lets talk about pricing…

Customer: Do we need to negotiate pricing right away?

Me: This is not really a negotiation. While we have been using this product internally ourselves, we need to justify whether it’s worth productizing externally.

Customer: Oh ok.

Me: So what would you pay for this product?

Customer: I don’t know – probably something in the $15-20/mo range.

Me: Well, that’s not the pricing we had in mind. We want to start with a $100/mo plan. I can understand why you don’t want to pay a lot (because you are pre-revenue) and it’s possible that we’ll offer a freemium or starter plan in the future.

Right now, we are specifically looking for 10 [define early adopters] who clearly have a need for [state top problem]. We will work closely with these 10 companies to validate [state unique value proposition] within 30-60 days or give them their money back.

You mentioned that you’ve spent several developer hours a month building a homegrown system and still haven’t been happy with the results. This product is our third attempt. Given that your homegrown system is a non-core function, can you realistically keep building it yourself for less than 2 developer hours a month? ($100/mo is less than the cost of 2 developer hours.)

Customer: Yes, that makes a lot of sense. We want to be on the shortlist. When you put it that way, I can easily justify paying $1200/yr. It’s just a fraction of what we pay our developers. How do we get on the list?

Me: We’re still finalizing some product details and I’ll get back with you once we’re ready.

Customer: We seriously want to be part of the initial customer list. I’ll run upstairs and get my checkbook if you want me to…

So what happened there? Why did the customer agree to pay 5x their original amount?

There were a number of principles in play that I’ll summarize:

Prizing: Oren Klaff discusses this framing technique in his book: Pitch Anything. He describes how in most pitches, the presenter plays the role of a jester entertaining in a royal courtyard (of customers). Rather than trying to impress, position yourself to be the prize.

Scarcity: The “10 customer” statement was not a fake ploy. The first objective with your MVP is to learn. I’d much rather have 10 “all-in” early adopters I can give my full attention than 100 “on-the-fence” users any day.

Anchoring: Last time, I illustrated the relativity principle in action using Steve Jobs’ iPad keynote. Even though pricing against “existing alternatives” might seem logical, customers might not automatically make the reference themselves. If Steve Jobs saw the need to explicitly anchor pricing, none of us has an excuse.
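The build-vs-buy anchor in the dialog above is simple back-of-the-envelope arithmetic. Here is a minimal sketch of that calculation – the hourly rate and maintenance hours are illustrative assumptions, not figures from the actual conversation:

```python
# Build-vs-buy anchor: cost of maintaining a homegrown system
# vs. a $100/mo subscription. Rate and hours are assumed for illustration.
DEV_HOURLY_RATE = 75           # assumed fully-loaded developer cost, $/hour
HOMEGROWN_HOURS_PER_MONTH = 4  # "several developer hours a month"
SUBSCRIPTION_PER_MONTH = 100

homegrown_cost = DEV_HOURLY_RATE * HOMEGROWN_HOURS_PER_MONTH  # monthly cost of building it yourself
annual_subscription = SUBSCRIPTION_PER_MONTH * 12             # the $1200/yr the customer justified

print(homegrown_cost)       # 300
print(annual_subscription)  # 1200
print(homegrown_cost > SUBSCRIPTION_PER_MONTH)  # True: buying beats building
```

Framed this way, the $100/mo ask is no longer compared to $0 (or to the customer’s gut-feel $15-20/mo) but to what they already spend on the existing alternative – which is the whole point of anchoring.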

Confidence: Most people are reluctant to charge for their MVP because they feel it’s too “minimal” and might even be embarrassed by it. I don’t subscribe to this way of thinking. The reason I painstakingly test problems and reduce scope is to build the “simplest” product that solves a real customer problem. I have enough confidence in our ability to build and am willing to put my money where my mouth is.

The Solution Interview as AIDA

AIDA is a marketing acronym for Attention, Interest, Desire, and Action. I find it a useful framework for structuring this type of solution interview.

Attention: Get the customer’s attention with your unique value proposition – derived from the number one problem you uncovered during earlier Problem Interviews.

Interest: Use the demo to show how you will deliver your UVP and generate interest.

Desire: Then take it up a notch. When you lower sign-up friction, you make it too easy for the customer to say yes, but you are not necessarily setting yourself up to learn effectively. You need to instead secure strong customer commitments by triggering desire. The pricing conversation above generated desire (quite intentionally) through scarcity and prizing.

Action: Get a verbal, written, or pre-payment commitment, as appropriate. For the product above, we started taking pre-payments for the MVP and utilized a concierge MVP model to find ways to deliver continual value to our early adopters while we incrementally rolled out the MVP.

How is this Different from a Pitch?

While this might look a lot like a pitch, the framing is still around learning. A pitch tends to be an all-or-nothing proposition. Here, you lead with a clear hypothesis at each stage and measure the customer’s reaction. If you fail to elicit the expected behavior at any stage, it’s your cue to stop and probe deeper for reasons. For instance, you might have your positioning wrong or be talking to the wrong customer segment.

P.S. Watch the O’Reilly webcast I gave on the same topic here (with slides).