How to Successfully Run an A/B Test on Your Email Campaign’s Landing Page

We love A/B testing here at Litmus. A/B testing subject lines, calls to action, or preheader text can be a great way to optimize your emails for opens, clicks, and conversions.

But what about after the email?

It’s common practice to send your subscribers to a specific landing page once they click. So, how do you know what’s going to push a micro-conversion to a macro one? There’s more testing to be done!

We talked to Alex Birkett, Growth Marketing Manager at ConversionXL, to understand the ins and outs of optimizing your landing page as part of your email campaigns.

Why should you A/B test the landing page of an email campaign? What are the benefits?

A/B testing is the only way to tell if the changes you make to a page are actually working or not. You can lead with your gut, and many people do this, and you can be quite successful. Or, you could be leaving a ton of money on the table.

Me? I’d like to know if I’m leaving money on the table. It’s like that old quote: “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” A/B testing, or controlled online experimentation, helps you know which elements of a landing page are hurting or helping your conversion rate, and it helps you squeeze more out of the traffic you’re sending to the page, thereby reducing your cost of acquisition.

How does A/B testing on a landing page differ, if at all, from A/B testing emails, like your subject line?

Users interact differently with emails than they do with landing pages. For one, if a user has landed on your page, they have presumably clicked through from your email campaign out of interest. From there, it’s important to align your pre-click (email) and post-click (landing page) messaging (this is known as “message-match” or “scent”).

Another difference is that landing pages have more elements, and therefore more complicated interaction effects. Finally, landing pages are optimized for macro-conversions, whereas you’re probably optimizing an email (in isolation) for micro-conversions, like click-through rate.

Of course, you should be analyzing the entire campaign, from email open to credit card confirmation, to ensure the holistic success of a campaign. That alone is a solid reason to invest in a centralized optimization team.

What landing page elements do you recommend testing? Are there any that reliably lift conversions?

Everyone’s audience is different, so to say that adding client testimonials or a sticky CTA will always work would be misleading. That said, if you follow a heuristic framework, there are some things that will usually lift conversions:

Improve your message’s clarity: Clarity trumps persuasion.

Increase user motivation: Why should they complete this action?

Remove distraction: Does this element assist the conversion? If not, it’s irrelevant.

Reduce friction: Do people trust your site? Is there sufficient proof?

Maintain relevance: Do you maintain the same message throughout the process?

Of course, making your site more usable and more accessible will almost never hurt your conversions (and it’s the moral thing to do). These principles are embodied by different tactics at different points in time, and most of them still hold up if you want actionable, “implement this right now” ideas.

Do you recommend A/B testing more than one element at a time? If so, what are some ways to make sure an outcome can be attributed to a specific test?

Yes! There’s a common piece of advice that says “only change one element per test,” because otherwise, how can you tell what affected the result? Well, this advice is misleading, especially for teams that don’t have Amazon-level traffic.

For one, if you change your headline, CTA, and hero image and get a 40% lift, how much do you care that you didn’t “learn” which specific element caused the win? Second, it depends on how you define your Smallest Meaningful Unit. That is to say, you may consider “one unit” to be your headline, or your CTA, or your hero image. But you could also argue that your headline is made up of several different words, each of which could be considered “one unit.” So it depends on the scope of the change you’re looking to make and the effect size you hope to detect. That’s for your analyst to determine. But I really wouldn’t limit myself to “just one change per test,” simply because it limits how you think about optimization.

Do you ever test the entire landing page as an A or B variant?

If you have a good reason to believe that a completely different variant has a good chance of winning, then sure. Otherwise, completely transforming the page may be a waste of resources and a roadblock to moving faster and making more agile changes. Instead of redesigning the whole thing, I would spend my time and money a) doing more customer research to learn what really matters, or b) increasing the breadth of ideas you’re testing (i.e., an a/b/c/d/etc. test) if you have enough traffic for it to be valid.

How long should an A/B test typically run before you choose a winner?

It depends on your traffic, minimum detectable effect, validity factors, business cycles, and risk profile, but generally, you should:

Test for full weeks.

Test for two business cycles.

Make sure your sample size is large enough (use a calculator before you start the test).
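The “use a calculator” step can be sketched in code. Below is a minimal two-proportion sample-size estimate using the standard normal approximation; the 5% baseline rate and 20% relative lift are made-up illustration numbers, and the 5% significance / 80% power defaults are common conventions, not anything specific to this interview:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline, mde_relative, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect a relative lift of
    `mde_relative` over a `baseline` conversion rate (two-sided z-test)."""
    p1 = baseline
    p2 = baseline * (1 + mde_relative)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Detecting a 20% relative lift on a 5% baseline needs roughly 8,000
# visitors per variant -- small lifts on low-traffic pages take a while.
print(sample_size_per_variant(0.05, 0.20))
```

Note how quickly the requirement shrinks as the detectable effect grows: this is why low-traffic pages should test bold changes rather than button-color tweaks.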

How do you determine what part of your audience should see the test? Does this relate to the way you segment your email campaign?

This is a really good question with a complicated and a simple answer.

The complicated one? If you’re running email tests, especially with multiple variables, as well as landing page tests, then you’re essentially running multivariate tests (email experience A, B, or C gets bucketed to go to landing page D, E, or F and they all have interaction effects).

This is complicated, and accurate analysis requires lots of traffic. Here’s the thing though: when you’re optimizing a landing page, you’re always dealing with a multitude of interaction effects via different traffic channels and customer journeys.

The simple answer: don’t worry about the interaction effects from different traffic sources, and just worry about optimizing the bottom-of-the-funnel metrics (the macro-conversions). If you’re running at a large scale and really want to know which combination of email subject line, copy, CTA and landing page headline, copy, and CTA is optimal, you can UTM-tag your different email tests and segment them post-test to dig into differences in conversion rate. That’s a very mature level of operations, and it assumes a solid amount of traffic and tests run per month. Give that work to your analysts and move on to running your next test.
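As a sketch of that post-test segmentation, here’s how you might break conversion rate out by UTM tag after the fact. The field names (`utm_content`) and the tiny event log are hypothetical, stand-ins for whatever your analytics export actually contains:

```python
from collections import defaultdict

# Hypothetical event log: each visit carries the utm_content tag of the
# email variant that sent it, plus whether it macro-converted.
visits = [
    {"utm_content": "subject_a", "converted": True},
    {"utm_content": "subject_a", "converted": False},
    {"utm_content": "subject_b", "converted": True},
    {"utm_content": "subject_b", "converted": True},
    {"utm_content": "subject_b", "converted": False},
]

def conversion_by_segment(visits):
    totals = defaultdict(lambda: [0, 0])  # segment -> [conversions, visits]
    for v in visits:
        bucket = totals[v["utm_content"]]
        bucket[0] += v["converted"]
        bucket[1] += 1
    return {seg: conv / n for seg, (conv, n) in totals.items()}

print(conversion_by_segment(visits))
# e.g. {'subject_a': 0.5, 'subject_b': 0.666...}
```

In practice the per-segment samples get small fast, which is exactly why this kind of slicing assumes the “solid amount of traffic” mentioned above.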

What kinds of metrics can help determine a test’s winner? How can these relate to the metrics from your email campaign?

This differs depending on what type of business you are and what your goals are, but I would highly recommend optimizing for macro-goals. Yes, track micro-conversions (opens, click-throughs, engagement, bounce rate, etc.), but the number you make decisions on should be the one that most affects your business. For lead gen, that’s the quantity and quality of leads that fill out a form. For e-commerce, it could be average order value or revenue per visitor. It depends. But don’t make decisions based on clicks alone; tie the decision to the macro-conversion.
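To illustrate why clicks alone can mislead, here’s a toy comparison (the visitor, click, and revenue numbers are invented) where the variant with the higher click-through rate loses on revenue per visitor, the macro metric:

```python
def revenue_per_visitor(visitors, revenue):
    return revenue / visitors

# Hypothetical results: B wins on clicks, A wins on the macro metric.
variant_a = {"visitors": 1000, "clicks": 100, "revenue": 2500.0}
variant_b = {"visitors": 1000, "clicks": 140, "revenue": 2100.0}

for name, v in [("A", variant_a), ("B", variant_b)]:
    rpv = revenue_per_visitor(v["visitors"], v["revenue"])
    print(f"Variant {name}: CTR {v['clicks'] / v['visitors']:.1%}, "
          f"revenue/visitor ${rpv:.2f}")

# Variant B gets more clicks, but Variant A makes more money per
# visitor -- so A is the winner when you decide on the macro-conversion.
```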

Once your test has a “winner,” what do you recommend as the next steps? How many times do you recommend running a test for the same element before it’s “done”?

If you ran the test correctly, implement the winner. Take your money and move on to testing new things.

What are some examples of successful landing page tests that you’ve seen in the past?