Want to take your lead-gen A/B testing to the next level? We’ve just released a new episode of our Test & Tell podcast featuring our CEO, Claire Vo, sharing insights from her LeadsCon NY talk showing how you can gain better returns by improving your optimization process.

Listen Here:

Get More from the Podcast: Download the Slides Too

For additional expertise, download our LeadsCon slides to follow along with the podcast episode and get test cases & examples for each takeaway.

Here’s a recap of some things you’ll learn:

A Starter A/B Testing Program Is Like the Puppy Bowl

You know the popular, adorable Puppy Bowl event where puppies run around playing football? That’s your starter A/B testing program. You may have achieved a few conversion lifts here and there, but not consistently. This type of testing is a great start, but it needs to mature into an awesome conversion program that gets repeatable results.

This is what your starter A/B testing program looks like:

Ran first tests – You’ve run tests and have some experience under your belt. That’s a great start, but you want to plan to scale your testing execution as well as increase return.

Integrated testing tool – There are many testing tools out there, such as Optimizely and VWO, and it’s not always easy to integrate them into your company. By doing this, you’ve already gathered and utilized some of the necessary resources.

Had conversion wins – You got a few conversion lifts! But just like the puppy bowl, the wins (or touchdowns) are random and inconsistent.

You also run into a couple of limitations that need to be addressed if you want to reliably drive returns:

Tires easily – Your starter program is likely not a long-term solution. It’s great at running tests here and there, but it may not run at the maximized output rate that gets you more wins. You need a process that plans, develops, and prioritizes tests in the background while executing other experiments simultaneously.

Gets distracted – Your current testing does not receive the kind of dedication and buy-in that other departments within your company enjoy. A starter program is susceptible to infrequent execution or being abandoned entirely.

What You Need is a “High-Performance, Trained Show Dog” for an Awesome Conversion Optimization Program

5 steps to get to an advanced testing program:

Increase quality, quantity, and coverage

Shoot for twofers

Record your history

Sell your CRO story

Know the true cause of diminishing returns

1. Increase quality, quantity, and coverage (Slide 5)

Increasing quantity – Always be looking at your testing velocity and ensuring that you’re driving the maximum number of tests every month or week. The more tests you output, the faster you’ll learn and get wins.

Increasing coverage – Test more than just your homepage or landing page. Make sure you’re testing your entire funnel to optimize the entire site and how users experience it.

2. Shoot for twofers (Slides 6-9)

Aim for both a conversion lift AND gain additional insight about your business, customer, or site that drives your program forward. Many starter programs forget to get both of these returns.

Your testing needs to use an iterative approach driven by hypotheses so your program can aim for both lifts and the learnings that inform your next set of experiments. To achieve this, you’ll want to test changes that are specific enough to identify as the probable cause of your results. So even if your test doesn’t result in an immediate conversion lift, you’ll always gain behavioral insight into your users to improve future optimization. This leads to more wins and increased results downstream.

3. Record your history (Slide 10)

Your program becomes much more powerful when you leverage those test results and learnings by recording them historically to share across your business. Many programs use Google spreadsheets, JIRA, Confluence, or an in-house wiki. Starter testing programs may use basic communication platforms like email, but as you complete more tests, you’ll want that data to be more organized and shareable. Often, this is where starter teams differ from advanced testing teams, because it takes administrative resources just to manage this. Experiment Engine customers use a platform that automatically documents completed tests. For others, a process needs to be established to ensure results are accessible to everyone in the business.
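If you’re starting from a spreadsheet, the record you keep per test can be very simple. Here’s a minimal sketch of what a shared test-history entry might capture — the field names and example values are illustrative, not from the talk:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class TestRecord:
    """One row in a shared experiment log."""
    name: str         # short label for the test
    hypothesis: str   # what you believed and why
    page: str         # where in the funnel it ran
    start: date
    end: date
    lift_pct: float   # observed conversion lift, e.g. 4.2 for +4.2%
    learning: str     # the behavioral insight, win or not

    def won(self) -> bool:
        return self.lift_pct > 0


# A shared log anyone in the business can query
history: list[TestRecord] = []

history.append(TestRecord(
    name="CTA copy test",
    hypothesis="Action-oriented CTA copy lifts sign-ups",
    page="/pricing",
    start=date(2017, 3, 1),
    end=date(2017, 3, 14),
    lift_pct=4.2,
    learning="Visitors respond to outcome-focused language",
))

wins = [t for t in history if t.won()]
```

The point isn’t the tooling — a spreadsheet with these same columns works — it’s that every test, winner or not, leaves a hypothesis and a learning behind for the next round of experiments.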

4. Sell your CRO story (Slides 11-12)

Build a way to sell the CRO story so that everyone in your organization knows what testing means for them. Show off the process, results, and overall program to make optimization shine and get buy-in internally, especially from executives.

The most important story, though, is how your A/B testing affects the bottom line. So it’s important to have a framework for defining how ROI is calculated and to actually have those calculations available on a regular cadence. Then, those numbers should be communicated to stakeholders in order to validate the investment in testing (and scale it).

5. Know the true cause of diminishing returns (Slides 12-14)

This is perhaps our favorite step – finding the cause of diminishing returns and resolving it. For starter testing programs, you’ll often hear about a local maximum, the highest a conversion rate can get at a certain part of the UX or funnel. That means fewer wins and lower lifts as you go on.

But advanced testing teams know the cause of diminished returns – it’s diminished effort. Starter testing programs run out of ideas and do not consistently execute experiments. But with continuous, incremental testing, programs are able to find additional ways to gain lifts and actually compound them. By dedicating your program to this approach and having a healthy testing velocity, you’ll reach results repeatedly without hitting an optimization brick wall.

Claire Vo is the VP of Product Management at Optimizely. Previously, she was the CEO and co-founder of Experiment Engine, which was acquired by Optimizely in 2017. Claire is an expert in high-velocity experimentation programs and a passionate advocate for women in technology.