How to Optimize Your Marketing Choices Through A/B Testing

One of the best things about marketing is that it is a very fluid process, heavily informed by how an audience reacts and responds to an initiative. This means that it is very important that you identify what marketing choices will optimize your impact. The process of A/B testing can help with this.

Here, we’ll discuss A/B testing in the context of email marketing to set a familiar scene for our examples.

What is A/B Testing?

A/B testing, or split testing, is a highly adaptable process used to identify the most effective marketing choice by comparing how users respond to each option. It provides a side-by-side performance comparison by changing only a single variable between versions A, B, and so on where applicable (but more on that later). Since A/B testing analyzes just one variable at a time, your marketing effort - in this case, your email - can be gradually adjusted until it collects the most favorable response it can. Some email marketing solutions, like MailChimp and, to a lesser extent, Constant Contact, can assist a user in their A/B testing efforts.

Of course, the fact that we’re focusing on email here doesn’t mean that A/B testing can’t also be used to improve your other marketing efforts. In reality, the technology at our disposal today allows us the means to track almost any kind of marketing, from email to social media to physical marketing pieces and more.

So many viable tests mean that there are a lot of insights that A/B testing can offer you regarding your emails. However, depending on the motivation you have to make a change, you will want to prioritize certain tests over others. Essentially, you want to identify exactly what it is that you want to change, and prioritize testing the factors that more directly contribute to that goal.

For example, let’s say you are trying to increase your email open rates. Assuming this is the goal you have in mind, it doesn’t make much sense to start off by A/B testing the call-to-action that appears at the end of the email. Instead, it would make more sense in this case to A/B test different subject lines, as they are what will more directly influence your open rates.

Better Testing Practices

Here are a few ways to improve the A/B tests you run, answering some questions that are commonly asked about this kind of optimization strategy.

How much of my email list should I test?

Unless certain circumstances are in play, there really isn't any reason not to test your entire list: the larger the sample size of your test, the more reliable its results. That said, there are circumstances to watch out for that limit the percentage of your email list you should involve in your tests.

For instance, if your email marketing solution charges you for each email address on a list, it makes more financial sense to select a random sample of as many contacts as you can afford and use them as the test group, with the rest serving as a control. A comprehensive test also may not work if you are under a time crunch (in which case you should run a small, quick test and act on its results) or if you're testing some extreme changes to your approach (in which case restricting the test to a smaller group minimizes potential audience alienation).

How much should I test, and for how long?

Circumstances will dictate how your tests play out, but it is generally best to test one variable at a time. Otherwise, it would no longer be an A/B test, but a more complicated multivariate test, which isn't something we're going to get into.

However, this doesn’t mean you’re limited to just two versions of the same variable. Instead of just an A/B test, you could make it an A/B/C/D/etc. test, as long as you have a large enough audience to divide into statistically significant samples.
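To make the idea concrete, here is a minimal Python sketch of how a contact list could be shuffled and dealt into equal random groups for an A/B/C/D test. The list of example addresses and the group count are assumptions for illustration, not part of any particular email platform's API.

```python
import random

def split_into_groups(contacts, n_groups, seed=None):
    """Shuffle a contact list and split it into n roughly equal random groups."""
    rng = random.Random(seed)
    shuffled = contacts[:]  # copy so the original list order is untouched
    rng.shuffle(shuffled)
    # Deal contacts out round-robin so group sizes differ by at most one
    return [shuffled[i::n_groups] for i in range(n_groups)]

# Hypothetical list of 1,000 contacts divided four ways for an A/B/C/D test
contacts = [f"user{i}@example.com" for i in range(1000)]
groups = split_into_groups(contacts, 4, seed=42)
for label, group in zip("ABCD", groups):
    print(label, len(group))
```

Shuffling before splitting matters: if you simply cut the list in order, the groups could inherit hidden patterns (such as signup date), which would skew the comparison.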

As for the length of your A/B test, you want to find the Goldilocks length: not too long, but not too short, either. If it is too short, you won’t be able to collect as much data, making your A/B test less statistically accurate. If it’s too long, other variables outside of your control can begin to influence your results. A decent guideline to follow is to run an A/B test for at least a few days, and up to a couple of weeks, depending on the level of response you get.

As you run your A/B tests, you will want to send all versions out simultaneously. This further reduces the likelihood of external factors and circumstances skewing your results.
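Once the results are in, you need a way to tell whether the difference between versions is real or just noise. A common approach (not tied to any specific email platform) is a two-proportion z-test on the open rates; the sketch below uses only Python's standard library, and the open and send counts are hypothetical numbers for illustration.

```python
import math

def open_rate_z_test(opens_a, sent_a, opens_b, sent_b):
    """Two-proportion z-test: is version B's open rate significantly
    different from version A's? Returns the z statistic and two-sided p-value."""
    p_a = opens_a / sent_a
    p_b = opens_b / sent_b
    # Pooled open rate under the null hypothesis that both versions perform equally
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: version A opened 180 of 1,000 sends, version B 225 of 1,000
z, p = open_rate_z_test(opens_a=180, sent_a=1000, opens_b=225, sent_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below your chosen threshold (0.05 is conventional) suggests the winning version's edge is unlikely to be chance, which is also why the earlier point about sample size matters: small groups produce wide margins of error and inconclusive tests.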

Are You Ready to Optimize Your Emails, and More?

With A/B testing in your wheelhouse, you can make better marketing decisions, especially where your emails are concerned. Are you interested in learning more about how your marketing strategy can be improved? Reach out to our team today!

About the author

Chris is a hopeless Technology Fanatic, an Inbound & Outbound Marketing Expert as well as a Senior IT Advisor, Web, Graphics & Software Designer. When he's not running Directive and JoomConnect, he's probably sharpening his skills as an Amateur Photographer and Filmmaker. Chris lives with his wife Charlotte and their two sons in Upstate NY. Visit his photography site at www.directivestudios.com.