
As the old saying goes: “You only have one chance to make a first impression”. If you want to increase the likelihood of success for the deployment and adoption of software, you need to make that first impression while your customer is still excited about their purchase decision. This holds true for enterprise software, B2C software, and everything in between, even though complexity and timeframes differ for each of those categories. In the world of SaaS, the post-sale journey to adoption and value can make or break a company.

So how do you effectively manage your deployments so that you’re: A) maintaining that momentum; and B) providing a quality customer experience that helps them get a return on their investment in your solution? A great metric to use is Time To Value (TTV). Every solution is going to have its own onboarding process, which can be as simple as an in-app wizard or as complex as a data integration and product configuration/customization project done by a Professional Services team. In either case, a reasonable target TTV should be measured in days or weeks (rather than months), depending on the complexity of the product and the onboarding process. Even for complex enterprise solutions, you should create a phased deployment where you can measure initial value in 4-6 weeks. Loss of momentum in an implementation can create an incredible amount of pain for enterprise software deployments. Too often, the perceived/expected value in an enterprise software deployment looks like the Gartner Hype Cycle, where customers fall into the equivalent of the trough of disillusionment (and frustration) prior to seeing any value from their solution. There was a surprising amount of tolerance for this in the world of perpetual licenses. In the SaaS world, however, …not so much.

It should really be a much smoother, more consistent curve of increasing demonstrated value (with aligned customer expectations) over time.

So how do you minimize time to value (TTV) for your customers, manage their expectations through the deployment cycle, and avoid the equivalent of the “trough of disillusionment”?

Step 1: Identify the unit of measure for value and set a quantifiable objective

Unless you define a “currency” by which you will measure value, you won’t know whether you’re delivering it. In some cases, adoption/usage may be used as a proxy, but to the extent that you can use a metric that your customer will use to measure the financial impact of your solution, start with that unit of measure. Examples for different types of solutions include:

increased overall revenue

larger basket size (for e-commerce transactions)

higher customer lifetime value

higher conversion rates

quicker time to transaction

lower cost per transaction

Even for a single solution, different customers may measure value in different ways, so it’s important to connect with them to understand the specific objectives that will justify the time, effort, and funding they’re allocating to your solution. You’re getting their resources because you can solve a problem for them. Figure out how they’re justifying their investment in your solution and use the same currency to measure value.
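To make Step 1 concrete, here’s a minimal sketch of measuring delivered value in the customer’s own “currency” and checking it against the quantifiable objective you agreed on. The metric, baseline, and target here are illustrative assumptions, not figures from any real deployment:

```python
# A hypothetical value check: did the deployment hit the customer's objective?
def value_delivered(baseline, current, target_pct):
    """Return the percentage improvement and whether it meets the objective."""
    pct = (current - baseline) / baseline * 100
    return pct, pct >= target_pct

# Example: the customer measures value as e-commerce conversion rate and
# justified the purchase with a 25% lift objective (illustrative numbers).
pct_gain, met = value_delivered(baseline=0.020, current=0.026, target_pct=25)
print(f"Conversion lift: {pct_gain:.0f}% (objective met: {met})")
```

The point isn’t the arithmetic; it’s that the function takes the customer’s unit of measure as its input, so value is reported in terms they already use to justify the spend.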

Step 2: Identify the phases for delivering that value

Even if your solution can provide an order of magnitude return on investment, don’t try to get there all at once. Provide quick wins for your customers by phasing your deployment. Don’t try to deploy to the entire enterprise (or market) at once. Pick a department or other small, logical group where you can prove adoption and value, then continue to move forward with the additional momentum you’ve created from that internal user base. Your objective should be to get a customer advocate/thought leader to stand up in under 60 days and say “look at the value we’ve gotten from this solution. We need to roll this out to a broader group.” If your strategy is to land and expand, understand that you haven’t really landed until you’ve proven value.

Providing quick wins not only keeps the momentum going, but it also provides a healthy deployment model where you’re interacting with your customer on a frequent basis and continuously validating that you’re proceeding according to plan. Without frequent checkpoints, customer expectations and deployment realities can diverge quickly. The most common reason I’ve seen behind runaway or at-risk projects has been this disconnect between expectations and reality due to inadequate interaction and communication. The longer the time period between conceptual agreement and validation, the higher the likelihood that the end result will not be what the customer expected. Improving Time To Value will increase your chances of success.

Step 3: Execute and Manage(!) according to that plan.

This one might seem obvious, but execution is key. If deliverables aren’t met, understand why… and quickly. Break your implementation process into stages and understand which of the stages are creating friction/delays. In many cases, if you’re suffering from slow deployments, it’s primarily because of one or two key aspects of your implementation process, not all of them. Even in cases where the entire process needs work, by breaking it down into stages and seeing which of the stages are causing the biggest problems, you can quickly understand how to prioritize your activities around improving the deployment process. Customer Success requires constant iteration, listening, and learning from your customers.

In some cases, implementation delays are seemingly due to a lack of responsiveness or loss of momentum on the part of your customers. Re-think this problem and see if there’s a way that you can restructure the deployment process so that you’re being more prescriptive for your customers and not requiring them to do things that take so much time and effort. I recently came across one example with a company that provided configuration options for custom reports in their new social analytics product. As part of their implementation process, they would ask customers to provide them with the top 4 items they wanted to see in their custom reports. Enterprise customers would iterate for weeks trying to get internal consensus on those 4 items, delaying the deployment, and delaying the time to value for customers. The solution: create 4 prescribed reports as defined by the implementation consultant, based on their knowledge of the customer, as part of the Phase 1 deployment; then allow the customer to modify those reports in Phase 2.

Good social networking and B2C companies really get the importance of Time To Value. They’ve sped up adoption and reduced time to value by implementing sign-up wizards as part of the enrollment process. By the time a new customer has finished creating a new account on Facebook, LinkedIn, or Instagram, they’ve been prompted to import contacts from their address book or other social networks in order to be positioned to get value immediately. In a previous life I ran deployments for an enterprise social networking technology, and as part of our implementation process, we “bootstrapped” user accounts so that every user had a profile built on Day 1 that incorporated expertise from their previous 90 days of activity. Look at your implementation process to determine whether there are any similar opportunities to prescribe/recommend configuration options or speed system readiness for your customers.

Momentum is incredibly powerful. By optimizing your deployments for Time To Value, you can maintain, and even increase that momentum to create successful, loyal customer relationships. That, in turn, will lead to a shorter TTR (Time To Reference) …but that’s a whole other metric!

Brian Ascher, a partner at Venrock, wrote a great blog post a while back about how the waterfall model may be the “single best financial reporting tool ever”. That might actually have been an understatement. If you aren’t familiar with waterfall models and want a good primer (as well as an example spreadsheet), I highly recommend reading his post.

In a nutshell, a waterfall model allows you to lay out your projections over a period of time (monthly numbers over a one-year period, weekly numbers over a quarter, or daily numbers over a month, for example), and at the end of every period, compare your actuals to your projections, then revise your estimates for the remaining periods based on what you’ve learned. The waterfall model doesn’t provide you with all the answers; however, it gives you a good idea of how you’re doing with respect to your original and revised plans and, as a result, helps you figure out what additional questions you need to ask yourself to understand why. It’s an incredibly powerful tool given its relative simplicity.
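The mechanics described above (one row per re-plan, one column per period, actuals compared against the original plan) can be sketched in a few lines of code. The structure and numbers here are a hedged illustration of the general technique, not Ascher’s specific spreadsheet:

```python
# A minimal waterfall model: record actuals each period, re-plan the rest,
# and keep every forecast snapshot so you can always measure against plan.
class Waterfall:
    def __init__(self, periods, original_plan):
        self.periods = periods
        self.rows = [("Original plan", list(original_plan))]  # snapshot rows
        self.actuals = []

    def close_period(self, actual, revised_future):
        """Record one period's actual, then re-plan the remaining periods."""
        self.actuals.append(actual)
        label = f"Re-plan after {self.periods[len(self.actuals) - 1]}"
        # Each new row = actuals so far + the revised forecast going forward.
        self.rows.append((label, self.actuals + list(revised_future)))

    def variance(self):
        """Actuals minus the ORIGINAL plan, for the periods closed so far."""
        original = self.rows[0][1]
        return [a - p for a, p in zip(self.actuals, original)]

# Illustrative numbers: quarterly revenue plan for one year.
wf = Waterfall(["Q1", "Q2", "Q3", "Q4"], [100, 110, 120, 130])
wf.close_period(actual=95, revised_future=[105, 112, 118])  # missed Q1; re-plan
wf.close_period(actual=107, revised_future=[115, 122])      # beat revised Q2
print(wf.variance())  # variance vs the original plan for the closed periods
```

Because every snapshot row is kept, you can always answer both questions the post raises: how did we do against the original plan, and how did we do against the latest re-plan?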

VCs and Startup CEOs/CFOs have been using waterfall models for decades to measure progress against plan and to help validate assumptions about growth, cash balance, user adoption, and a number of other important business metrics. Outside of the VC/startup/board community, however, waterfall models seem to be underutilized. Maybe it’s because startups need to move quickly. They’re constantly making assumptions, learning, understanding which assumptions were good and which ones weren’t, then revising their plan of attack quickly as they continue to move forward …and a waterfall model helps them understand that and react quickly. There are a few reasons that waterfalls can be particularly helpful in the area of Customer Success as well, given a similar need to move quickly in order to proactively manage recurring revenue:

Reason 1: You need a plan, and you need to know how you’re tracking according to the plan

A waterfall model forces you to manage to a plan. The interim checkpoints, by nature, hold you accountable to that plan, and if there is a variance, force you to do three things: 1) Acknowledge the variance. If you set up your waterfall model correctly, the interim periods you define should be frequent enough to allow you to take action while there is still time to impact the outcome; 2) Ask why there is a variance; and 3) Re-plan the future periods given what you now know.

Reason 2: Your assumptions aren’t always right

Planning, or more precisely, getting a plan right, is an ongoing process. People make plans based on assumptions. Managing an existing customer base can be tricky, and frequent visibility into key metrics allows you to challenge your assumptions early enough to take meaningful action. One important point to clarify here: this isn’t an opportunity to make excuses for why you didn’t hit your numbers. It’s an opportunity to understand what you need to do differently to improve your performance (while there’s still time) and to create more accurate plans and forecasts in the future. If you do need to re-plan, the waterfall still allows you to measure against both your original plan and your revised plan.

Reason 3: Trends are interesting, but without a comparison to your original plan, trends don’t give you the entire picture

Growth is great. Improvements in key metrics are great. In order to run a business and plan/manage it successfully, though, you also need some predictability. Waterfalls provide you with a historical snapshot of how well you did delivering to plan. You always have historical information on your original plan, your re-plan, and your actual performance for each measurable period – in one table. It’s a simple, yet very effective visual tool. If you ended up growing up-sell revenue 25% quarter over quarter, is that good? What if your original plan was to grow at 30% QoQ?
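The closing question is worth running through quickly: 25% QoQ growth sounds strong, but compounded over a year against a 30% plan, the gap is substantial. The starting figure below is arbitrary; only the ratio matters:

```python
# Compounding 25% actual vs. 30% planned QoQ growth over four quarters.
start = 100.0
actual = start * 1.25 ** 4    # what you delivered by year end
planned = start * 1.30 ** 4   # what the original plan called for
shortfall_pct = (planned - actual) / planned * 100
print(f"Actual: {actual:.0f}, planned: {planned:.0f}, "
      f"shortfall: {shortfall_pct:.0f}% by year end")
```

Without the plan row in the waterfall to compare against, that year-end gap of roughly 15% would just look like a healthy growth trend.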

So, with all that justification behind us, here’s an example of where and how I’ve used a waterfall model in Customer Success:

Planning and Forecasting Retention and Churn:

I recently blogged about the many Customer Success Automation solutions coming to market to help companies manage a SaaS customer base more effectively. Whether you’re using one of these products or just starting to get your head around managing your customer base, it’s very valuable to understand which of the data elements and assumptions you’re using to identify “healthy/reference customers” or “at-risk customers” are accurate, and which ones require you to go back and think again.

A team of mine once needed to forecast churn risk from the existing customer base and had very little valid historical information from which we could create projections. We started by looking at customers using broad-stroke definitions of various health levels. We assigned customers a “health status” of Red, Orange, Yellow, or Green, then, based on their contract renewal month, assigned a probability of renewal based on that health status. We eventually began adding criteria to more clearly define health status, including usage metrics (not just frequency of logins, but how effectively they were using the system), customer responsiveness, and other indicators of risk associated with their business and usage model. We looked at our first month’s data and saw where we were off, then went back to our assumptions and looked at where we might have miscategorized customers. We also looked at whether our percentage ratios by health status were accurate (for example, did x% of our “orange” customers actually cancel). We gradually increased our sophistication level as we gathered more data and continued to refine our assumptions in our waterfall model. By the end of our first full year of deploying the model, we were within 5% accuracy forecasting revenue retention and churn.
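The first iteration of that forecast (health status mapped to a renewal probability, weighted by contract value) can be sketched in a few lines. The statuses match the ones above, but the probabilities and contract values here are illustrative assumptions, not the actual figures my team used:

```python
# A probability-weighted renewal forecast by health status.
# These renewal probabilities are hypothetical starting assumptions; the
# whole point of the waterfall is to revise them as actuals come in.
RENEWAL_PROB = {"green": 0.95, "yellow": 0.80, "orange": 0.50, "red": 0.15}

def forecast_renewals(customers):
    """Expected renewed ARR for a cohort renewing in a given month.

    customers: list of (health_status, annual_contract_value) tuples.
    """
    return sum(RENEWAL_PROB[status] * acv for status, acv in customers)

# Illustrative cohort of contracts up for renewal in one month.
june_renewals = [("green", 50_000), ("orange", 30_000), ("red", 25_000)]
expected = forecast_renewals(june_renewals)
total = sum(acv for _, acv in june_renewals)
print(f"Expected retention: ${expected:,.0f} of ${total:,.0f} "
      f"({expected / total:.0%})")
```

Each month, the expected number goes into the waterfall as the forecast; when the actual renewals land, the variance tells you which probabilities (or which customer categorizations) to revisit.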

In addition to forecasting retention and churn, a waterfall model can be useful in other areas of customer success, including:

Planning and forecasting up-sells

Modeling the rate at which you plan on improving service levels and/or resultant customer feedback scores

Planning and forecasting adoption of certain strategic product features across your user base

Pretty much any key metric you want to track and measure against can be managed using a waterfall model. You may want to start with a couple of the ones above, then determine if tracking others will be useful. Just be ready to dig into the underlying data to ask “why” the variances are occurring… and keep asking “why?” until you see patterns emerge. Then act.