Sell Before You Build

Over the last four years of building my own startups and involvement with various other startups, the most important lesson I’ve learned is "Sell your product/idea before you build it." Seriously!

During this journey, I’ve met many successful founders, and their most important advice has always been "Do you have paying customers? If not, first get them and then think of building a product that addresses their real needs." To me, a test-driven-development veteran, it sounds like "Do you have failing tests before you write or modify any code?"

At Kickstarter, an impressive 44% of projects have met their funding goals before they’ve started building a product. Applying this validation-driven approach to new product/service development certainly sounds fascinating and desirable. I would love to momentarily pause time while people line up on my doorstep waiting to pay me hard cash for a product that doesn't even exist. It’s worth a shot, right?

At Edventure Labs, when we started validating our idea of teaching five-year-old kids scientifically proven mental arithmetic skills, which let them calculate faster than a calculator, we realized our potential customers (parents of five-year-olds) only vaguely understood the problem we were trying to solve. To them, it was about number crunching, an obsolete skill replaced by electronic gadgets. To make matters worse, they believed kids had to be born geniuses to calculate faster than a calculator. They did not know if their kids were capable of or even interested in acquiring this skill. We simply could not get any real feedback from talking to parents. If we tried to sell them our product (an educational game to teach mental arithmetic), they would immediately change the topic as if we’d uttered some taboo.

We had to figure out how to have a meaningful conversation with our potential customers. We could hire a world-class team, build the product, “hire” some kids to go through our educational program, and then show real data to the parents to prove the importance and feasibility of our product. However, existing teaching methods take a minimum of 10 minutes of practice every day for 18 months to teach a kid this skill. A good 70% of the kids drop out of these programs. Moreover, we had no clue what the product would be. We also had to zero in on the technology, the teaching and evaluation techniques, etc. This would easily take another six months. So we were looking at a minimum 24-month horizon, assuming we knew exactly what needed to be done, before we could actually have this conversation with parents and acquire paying customers.

If I were a typical agile product owner, I would say, “It’s a very high-risk project. It’s best to skip it.” However as startup founders, we were convinced that this could change the world and hence the risk was totally worth it. In fact, we told ourselves that because it’s so hard, no one else had yet solved this problem or built a successful product. So let’s go full swing, hire a team, do a collaborative product-discovery workshop with them, come up with a release roadmap, and start sprinting. That was absolutely the best and fastest way to burn all our funds and destroy our dreams.

Instead, we ran a series of experiments to answer open questions and through this process completely refined our strategy and our product. Over the last two years, our vision has remained the same, but we’ve done nine significant pivots and finally we feel we’ve nailed it.

First, we had to figure out an effective teaching technique for five-year-olds. We wanted real data as a baseline to validate the retention power of different teaching techniques. We picked the lesson that introduces the abacus (one of the tools we use to teach mental arithmetic). Quickly, we put together a storyline, wrote a script, hired a professional animator, got a person in the US for the voiceover, and produced a 33-second animation that introduced the abacus.

We got a bunch of kids to watch this animation and afterwards asked each to represent different numbers on the abacus. Fewer than 50% of the kids were able to do this. We quickly realized that kids have so short an attention span that they quickly zone out unless they are able to interact with what they see on the screen. Animations are expensive to create and require a lengthy turnaround even for small changes. It was clearly a bad strategy.

Inspired by mobile games, we came up with a hypothesis that if we created inline instructions and used micro-simulations, the kids would retain more and learn better. To quickly test this hypothesis, we found a bunch of images on the Internet within 10 minutes, created a presentation, and added transitions to create an animation effect. Then we exported this presentation as a movie.

The kids could watch this 10-second movie and follow a simulation/inline instruction in our game. Once the simulation showed how to represent a number, we would ask the kid to copy it on the abacus. Of course, the kids could not move the on-screen beads, but we could test whether they tried to move the right ones and assess whether they remembered how to represent the number. Ninety percent of the kids could. If they could, we would ask them to represent other numbers not shown in the simulation to see if they could extrapolate what they had just learned and apply the logic to other numbers. Most kids could do simple numbers, but were not able to do numbers that involved the upper bead. This was another good lesson from this experiment.

Another major question we had to figure out was our distribution strategy. Would we be able to sell our educational game as an app in the app stores? To test this, we quickly created two small abacus games, Abacus Rush and Abacus Ignite. Rush was free and Ignite was paid. We wanted to see how paid apps performed compared to free apps in our segment. How many free-app users would pay for the paid app? We figured 10% conversion would be great. We quickly learned that paid apps would not help us sustain our product development. Free apps did fairly well. Could we use free apps as a marketing tool to find distributors and partners? We launched Abacus Rush in Google Play as a free app called World of Numbers.
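
A conversion-rate target like our 10% figure can be sanity-checked with a quick back-of-the-envelope script before drawing conclusions from small samples. The numbers below are hypothetical, not our actual data; this is a minimal sketch using an exact one-sided binomial test from the standard library:

```python
from math import comb

def conversion_rate(paid_users: int, free_users: int) -> float:
    """Fraction of free-app users who also bought the paid app."""
    return paid_users / free_users

def binom_p_value(successes: int, trials: int, target_rate: float) -> float:
    """One-sided exact binomial p-value: the probability of seeing at most
    `successes` conversions if the true rate were `target_rate`."""
    return sum(
        comb(trials, k) * target_rate**k * (1 - target_rate)**(trials - k)
        for k in range(successes + 1)
    )

# Hypothetical experiment: 1,000 free downloads, 12 paid conversions.
rate = conversion_rate(12, 1000)      # 1.2% observed
p = binom_p_value(12, 1000, 0.10)     # chance of seeing <=12 if the true rate were 10%
print(f"observed rate: {rate:.1%}, p-value vs 10% target: {p:.2e}")
```

A tiny p-value here would mean the 10% hypothesis is almost certainly wrong, which is exactly the kind of cheap, early invalidation a safe-fail experiment is for.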

To our surprise, we crossed 120K downloads in a week’s time. All we’d done was hack something together to test our hypothesis. We had allowed any Android device to download the app and we quickly realized the app had performance and usability issues on lower versions of Android. We had to pull it off the app store to avoid damage to our reputation. We then invested a week to fix those issues and launched another game called Number World. Despite not allowing outdated versions of Android to download the app, we got 93K downloads in four days. These quick experiments helped us get the kind of partnership offers we were looking for. It’s a classic example of how cheap, safe-fail experiments helped us to validate our hypothesis and make progress in our product strategy.

Previously, potential partners would commonly look at our concepts and tell us, “This is too futuristic. This will not work!” The 120K downloads certainly helped us change their mindset.

I could go on with many other experiments we ran to validate our hypotheses and figure out our product strategy, but let me step back, quickly recap what we learned, and explore how you can use some of these techniques.

As startup founders, we might have a knack for identifying real opportunities or pain points, but building a sustainable business around them is a whole different ball game. It requires a ton of experimentation to figure out how to package and pitch your product to really appeal to your target customers. For many years, startups primarily focused on building their dream products. They probably spent time creating a business model, but mostly ignored customer development (acquisition and retention). Figuring out a sustainable business model is exponentially harder than building the product itself, yet founders and product owners don’t pay as much attention to customer development as to the product.

What startups really need is a scientific framework for conducting many safe-fail experiments. In lean-startup lingo, let's say you have an idea or a vision for a product or a service. You devise a series of possible strategies you could use to fulfill your vision. It is important to acknowledge that each strategy is based on a list of hypotheses that need to be validated using a series of cheap, safe-fail experiments (via MVPs) to obtain validated learning. Then, based on real data, we pivot or persist the direction of the vision. Either way, you need to constantly keep running a series of experiments with fast feedback cycles to calibrate/validate your progress/direction.
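
That loop of hypotheses, experiments, and pivot-or-persevere decisions can be sketched in a few lines of code. The names here (`Hypothesis`, `ExperimentResult`, `decide`) are invented for illustration, not part of any standard lean-startup tooling:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    statement: str            # e.g. "90% of kids retain the micro-simulation lesson"
    success_threshold: float  # minimum observed rate to count as validated

@dataclass
class ExperimentResult:
    successes: int
    trials: int

def decide(h: Hypothesis, r: ExperimentResult) -> str:
    """Turn an experiment's raw data into a pivot-or-persevere decision."""
    observed = r.successes / r.trials
    return "persevere" if observed >= h.success_threshold else "pivot"

h = Hypothesis("Kids retain the micro-simulation lesson", 0.90)
print(decide(h, ExperimentResult(successes=18, trials=20)))  # persevere
```

The value is not in the code itself but in the discipline it encodes: every strategy is written down as a falsifiable statement with an explicit threshold, and the data, not the founder's enthusiasm, makes the call.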

An MVP is a safe-fail experiment. The best MVPs are those that give you maximum validated learning for minimum investment (time, effort, and opportunity cost). In other words, life is too short to waste building products that no one wants.

This is the lean-startup movement in a nutshell.

When building products, we constantly need to ask these two critical questions:

1. Are we building the right product?
2. Are we building the product right?

Agile methods are really good at addressing the second question. To some extent, they also help you answer the first question; however, there is certainly a delay in getting that feedback. Typically you get that feedback at the end of your first iteration/sprint and in some cases only during your first release.

Getting this feedback in a few weeks or at most three months is certainly better than getting it a couple of years later, as in traditional methods. But startups, which operate under conditions of high uncertainty, might not be able to afford even that delay. More importantly, cheaply and safely validating whether we are building the right product, and pivoting if we are not, is essential, and agile methods don't really focus on solving this problem. In general, building something quick and dirty for experimentation with real customers is something many agile folks look down upon.

If you think about it, agile methods flourish when your users are locked in. Agile methods give you opportunities to build a healthy relationship with your (known) customers. Through ongoing collaboration, communication, and feedback, the team gains a better understanding of their needs and pain points. But when your users are not captive, they don't really know or recognize their pain points, especially with consumer-facing products, and relying completely on your product owner and agile methods feels a bit like shooting in the dark. It’s a gamble!

I remember sitting at a bar in downtown Chicago in 2007 and discussing this issue with Jeff Patton. Jeff drew a quadrant diagram on a napkin.

Quadrant 1 seems like a natural habitat for agile methods. As you travel out in the wild, however, you might need additional techniques to succeed. This might explain why product companies are not really seeing the benefits they expected from agile methods.

Jeff was one of the pioneers who spotted this gap in agile methods and tried to fill it with user-story mapping and, later, product discovery. The collaborative nature of these product-discovery sessions and their focus on minimum marketable product (MMP) is an important next step for many agile teams.

The lean-startup community really pushed the envelope on a scientific approach to running a series of safe-fail experiments. Lean startup also focused heavily on customer development and business-model validation, which agile methods completely missed. It was mostly left to the product owner to solve these complex puzzles.

This is just the beginning of a new era of scientific experimentation in the product-development space. I’m pretty excited to see how lean-startup principles and thinking are already penetrating large enterprises.

For example, with lean-startup, many enterprises are treating each new initiative at a portfolio level just like a startup. They are running parallel cheap, safe-fail experiments on multiple initiatives and finally choosing the most promising initiative instead of picking one initiative right at the beginning based on personal preference or gut feeling.

Last time I visited a large energy company, I heard a developer ask the product owner in the planning meeting what the hypothesis behind a new proposed feature was. This was a powerful question. Immediately, the focus shifted from “Let’s build as many features as possible and see which one sticks” to “What is the bare minimum we need to build to obtain validated learning before we decide which feature to invest in?” This is brilliant, because it will really help teams build crisp enterprise software instead of the bloated monsters that are a real pain in the neck for everyone to deal with.

Continuous deployment is another practice that enterprises are trying to embrace. It makes absolute sense for enterprises to be able to quickly validate a new feature direction without building the whole thing only to discover that a mere 3% of their users use it. It feels like a natural progression from CI and the other agile practices organizations already follow. Continuous deployment also lets enterprises use techniques like A/B testing and stub features, which let the product owner make prioritization decisions based on real data.
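
A common way to run such A/B tests behind a stub feature is deterministic bucketing on a stable user id. This is a minimal sketch, not any particular vendor's API; the experiment name and helper functions are invented for illustration:

```python
import hashlib

def ab_bucket(user_id: str, experiment: str,
              variants=("control", "treatment")) -> str:
    """Deterministically assign a user to a variant.

    Hashing (experiment, user_id) gives a stable, roughly uniform
    assignment without storing any per-user state: the same user
    always sees the same variant of the same experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    index = int(digest, 16) % len(variants)
    return variants[index]

# A stubbed feature: only the treatment group sees the entry point, and
# clicks on the stub are logged as evidence of demand before the real
# feature is ever built.
def show_new_feature(user_id: str) -> bool:
    return ab_bucket(user_id, "new-report-stub") == "treatment"
```

Because assignment is a pure function of the id, the experiment survives restarts and scales across servers with no shared state, which is what makes this pattern cheap enough to run constantly.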

The good news is that lean startup has something to offer to everyone, whether you are a startup trying to bootstrap yourself or a large enterprise building mission-critical applications.

About the Author

Naresh Jain is an internationally recognized technology and process expert. As an independent consultant, Naresh has worked with many Fortune 500 software organizations and startups to deliver mission-critical enterprise applications. Currently, Naresh is leading two tech startups, which build tablet-based adaptive educational games for kids, conference-management software, and a social-media search tool. His startups are trying to figure out the secret sauce for blending gamification and social learning using the latest gadgets. In 2004, Naresh founded the Agile Software Community of India, a registered non-profit society that evangelizes Agile, Lean, and other lightweight methods in India.