Testing 1-2-3

What should you test?

In a straight A/B test, you pick one variable to test and keep everything else the same. People make two very common mistakes here: either they test two variations so different that it’s impossible to know what caused the change in performance, or they test something so minor that it’s unlikely to influence performance at all.

Want to test more than one thing? That’s where multi-variate testing comes in. But be careful: multi-variate testing has real drawbacks. It’s more expensive, for one thing, and splitting your traffic into many small segments makes it harder to reach significant results in any of them. Most businesses are best served by keeping the number of test segments as small as possible.

To determine what to test, it helps to start with figuring out what’s the “most broken” — or, to put it in other words, where the biggest area of opportunity lies. For example, many e-commerce companies find that the biggest pain point for them is the shopping cart checkout process. Testing a variation designed to improve cart checkout rates might be a great use of a testing budget for such a company. Testing costs dollars; the trick is to put the dollars where they’re likely to make the most difference.

Can you measure it?

It may sound obvious, but the first thing to check is whether you’re able to measure the results of your test. Ensure that all your tracking tools are in place and properly set up to measure conversions. If you’re checking to see whether changing the colours on your page will result in more phone calls, it first makes sense to check whether you can tell which calls come from which page. If you can’t measure it, you can’t test it.

Is your test statistically significant?

Statistics aren’t just for math nerds. Before you run that test, you need to figure out if you will get a statistically significant result out of it. In other words, will you see a big enough difference between your test and control page to be able to say that you’ve learned something?
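To make that concrete, here’s a minimal sketch of the significance check itself, using a standard two-proportion z-test. The function name and the conversion numbers are hypothetical, and a real tool would handle more edge cases:

```python
from math import erf, sqrt

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value comparing control (a) vs. variation (b)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # convert |z| to a two-sided p-value via the normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical run: 120/4000 conversions on control, 150/4000 on the variation
p = two_proportion_p_value(120, 4000, 150, 4000)
```

With those made-up numbers the p-value lands just above the conventional 0.05 cutoff, so despite a 25% relative lift, this test couldn’t yet claim a significant winner.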

There are two main variables to consider here:

Volume: How much traffic does your site receive? What’s your conversion rate? High-traffic, high-volume sites will be able to see significant results with even a small improvement in conversion rate, while low-traffic, low-volume sites might have a tougher time getting to that critical mass.

Time: The lower your traffic, the longer it will take to see a statistically significant difference between your test and control. The lower the volume, the longer the testing period. The trouble is, you can’t always just lengthen the test period, because things don’t stay consistent over time. Your business might be seasonal, for instance, or outside market conditions might change. As a general rule, you want your testing period to be as short as possible while still yielding reliable results.

A number of free online calculators can help you do the math on this one.

What will you do with what you’ve learned?

Many well-intentioned marketers test without considering how useful their results will be. The obvious answer is “we’ll go with the winning variation”, right? Well, maybe. If you test moving the phone number to the top of the landing page, and it nets more results, then you can design all your future landing pages that way. However, there are some tests whose results will have limited future applicability. For example, if you’re running a short-term promotion that won’t be repeated next quarter, testing the content of that promotion is probably not going to yield results that are particularly useful. Think about what you’ll learn, and how you can extrapolate it to other programs, and you’ll get the most bang for your buck.