The first treatment was designed based on the hypothesis that visitors did not convert because the copy didn’t engage them enough, so it took a direct response tone. The second treatment was based on the hypothesis that visitors experience high levels of anxiety over potential high-pressure salespeople or spam phone calls. This treatment took a more “customer service”-oriented tone.

Email is a great medium for testing. It's low cost and typically requires fewer resources than website testing. It's also near the beginning of your funnel, where you can impact a large portion of your customer base.

Sometimes it can be hard to think of new testing strategies, so we’ve pulled from 20 years of research and testing to provide you with a launching pad of ideas to help create your next test.

In this post and next Monday's, we're going to review 16 testing opportunities across seven email campaign elements.

To start you out, let’s look at nine opportunities that don’t even require you to change the copy in your next email.

Subject Line Testing

Testing Opportunity #1. The sequence of your message

Recipients of your email might give your subject line just a few words to draw them in, so the order of your message plays an important role.

Personalization is not new to email marketing, but has it lost some of its appeal with marketers?

Only 36% of marketers said they dynamically personalize email content using first names in subject lines and geo-location, according to the MarketingSherpa 2013 Email Marketing Benchmark Report. The report also revealed that only 37% of marketers segment email campaigns based on behavior.

However, marketers from various industries have seen incredible success with personalization. I dove into the library of MarketingSherpa, MarketingExperiments’ sister company, to find out how marketers have used both tried-and-true personalization tactics and innovative, tech-savvy strategies to better engage their customers and email audience.

No tactic or strategy is foolproof, so we suggest using these campaign tactics as testing ideas to see what works with your audience when it comes to email personalization.

Idea #1. Turn your email into a personal note, not a promotional email

Data is officially everywhere. It’s even infiltrating the design of emails — and for good reason.

“The more you know about your audience, obviously the better you can tailor an email design to someone,” Justine Jordan, Marketing Director, Litmus, said.

Justine sat down with Courtney Eckerle, Manager of Editorial Content, MarketingSherpa (sister company of MarketingExperiments), at MarketingSherpa Email Summit 2015, to discuss what tools marketers can access to better their email creatives.

When asked about the biggest asset email marketers have when designing their next email, Justine answered: data.

“Data can be a really powerful tool for helping a designer decide how to lay out their campaigns,” she said.

Watch the whole interview here:

How can data make design better?

In the interview, Justine shared a few types of data that can benefit email designers:

What people have looked at in the past

What kind of email services people are opening up

What type of content has resonated with clients in the past

When asked how one of these could be applied to campaigns, Justine talked about technical compatibilities. For instance, GIFs don’t work properly in Outlook 2007. By using past data, you can know beforehand whether a portion of your readers use that email client. If they do, and you use a GIF, your campaign won’t be as effective as it would have been had you segmented that audience and sent it a more Outlook 2007-friendly design.
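That kind of client-based segmentation can be sketched in a few lines. This is a hypothetical illustration, not a real ESP integration; the record fields ("email", "client") and the client names in the static-only set are assumptions for the example.

```python
# Hypothetical sketch: split a recipient list into a GIF-capable segment
# and a fallback segment based on tracked email-client data.
recipients = [
    {"email": "a@example.com", "client": "Apple Mail"},
    {"email": "b@example.com", "client": "Outlook 2007"},
    {"email": "c@example.com", "client": "Gmail"},
]

# Clients known to show only the first frame of an animated GIF
STATIC_ONLY_CLIENTS = {"Outlook 2007", "Outlook 2010", "Outlook 2013"}

gif_segment = [r for r in recipients if r["client"] not in STATIC_ONLY_CLIENTS]
fallback_segment = [r for r in recipients if r["client"] in STATIC_ONLY_CLIENTS]

print(len(gif_segment), len(fallback_segment))  # 2 get the GIF, 1 gets a static design
```

In practice, the client data would come from your email platform's open-tracking reports rather than a hardcoded list.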

This blog post ends with an opportunity for you to win a stay at the ARIA Resort & Casino in Las Vegas and a ticket to Email Summit, but it begins with an essential question for marketers:

How can you improve already successful marketing, advertising, websites and copywriting?

Today’s MarketingExperiments blog post is going to be unique. Not only are we going to teach you how to address this challenge, we’re going to also offer an example to help drive home the lesson. We’re going to cover a lot of ground today, so let’s dive in.

Give the people what they want …

Some copy and design is so bad, the fixes are obvious. Maybe you shouldn’t insult the customer in the headline. Maybe you should update the website that still uses a dot matrix font.

But when you’re already doing well, how can you continue to improve?

I don’t have the answer for you, but I’ll tell you who does — your customers.

There are many tricks, gimmicks and types of technology you can use in marketing, but when you strip away all the hype and rhetoric, successful marketing is pretty straightforward — clearly communicate the value your offer provides to people who will pay you for that value.

Easier said than done, of course.

How do you determine what customers want and the best way to deliver it to them?

Well, there are many ways to learn from customers, such as focus groups, surveys and social listening.

While there is value in asking people what they want, there is also a major challenge in it.

According to research from Dr. Noah J. Goldstein, Associate Professor of Management and Organizations, UCLA Anderson School of Management, “People’s ability to understand the factors that affect their behavior is surprisingly poor.”

Or, as Malcolm Gladwell more glibly puts it when referring to coffee choices, “The mind knows not what the tongue wants.”

This is not to say that opinion-based customer preference research is bad. It can be helpful. However, it should be the beginning of your quest, not the end.

… by seeing what they actually do

You can use what you learn from opinion-based research to create a hypothesis about what customers want, and then run an experiment to see how they actually behave in real-world customer interactions with your product, marketing messages and website.

The technique that powers this kind of research is often known as A/B testing, split testing, landing page optimization or website optimization. If you are testing more than one thing at a time, it may also be referred to as multivariate testing.

To offer a simple example, you might assume that customers buy your product because it tastes great and because it’s less filling. Keeping these two assumptions in mind, you could create two landing pages — one with a headline that promotes that taste (treatment A) and another that mentions the low carbs (treatment B). You then send half the traffic that visits that URL to each version and see which performs better.

Here is a simple visual that Joey Taravella, Content Writer, MECLABS, created to illustrate this concept:

That’s just one test. To really learn about your customers, you must continue the process and create a testing-optimization cycle in your organization — continue to run A/B tests, record the findings, learn from them, create more hypotheses and test again based on these hypotheses.

This is true marketing experimentation, and it helps you build your theory of the customer.

Try your hand at A/B testing for a chance to win

Now that you have a basic understanding of marketing experimentation (there is also more information in the “You might also like” section of this blog post that you may find helpful), let’s engage in a real example to help drive home these lessons in a way you can apply to your own marketing challenges.

To help you take your marketing to the next level, The Moz Blog and MarketingExperiments Blog have joined forces to run a unique marketing experimentation contest.

In this blog post, we’re presenting you with a real challenge from a real organization and asking you to write a subject line that we’ll test with real customers. It’s simple; just leave your subject line as a comment in this blog post.

We’re going to pick three subject lines from The Moz Blog and three from the MarketingExperiments Blog and run a test with this organization’s customers.

Whoever writes the best performing subject line will win a stay at the ARIA Resort in Las Vegas as well as a two-day ticket to MarketingSherpa Email Summit 2015 to help them gain lessons to further improve their marketing.

To test emails, you just send out two versions of the same email. The one with the most opens is the best one, right?

Wrong.

“There are way too many validity threats that can affect outcomes,” explained Matthew Hertzman, Senior Research Manager, MECLABS.

A validity threat is anything that can cause researchers to draw a wrong conclusion. Conducting marketing tests without taking these threats into account can easily result in costly marketing mistakes.

In fact, it’s far more dangerous than not testing at all.

“Those who neglect to test know the risk they’re taking and market their changes cautiously and with healthy trepidation,” explains Flint McGlaughlin, Managing Director and CEO, MECLABS, in his Online Testing Course. “Those who conduct invalid tests are blind to the risk they take and make their changes boldly and with an unhealthy sense of confidence.”

These are the validity threats that are most likely to impact marketing tests:

Instrumentation effects — The effect on a test variable caused by an extraneous variable associated with a change in the measurement instrument. In essence, how your software platform can skew results.

An example: 10,000 emails don’t get delivered because of a server malfunction.

History effects — The effect on a test variable made by an extraneous variable associated with the passing of time. In essence, how an event can affect test outcomes.

An example: There’s unexpected publicity around the product at the exact time you’re running the test.

Selection effects — An effect on a test variable caused by extraneous variables associated with different types of subjects not being evenly distributed between treatments. In essence, a fresh source of traffic skews results.

An example: Another division runs a pay-per-click ad that directs traffic to your email’s landing page at the same time you’re running your test.

Sampling distortion effects — Failure to collect a sufficient sample size. Not enough people have participated in the test to provide a valid result. In essence, the more data you collect, the better.

An example: Determining that a test is valid based on 100 responses when you have a list with 100,000 contacts.
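A quick significance check makes the sampling-distortion point concrete. The sketch below uses a standard pooled two-proportion z-test (a common choice for comparing conversion rates; the counts are invented for illustration) to show that the same lift can be noise at a small sample and significant at a larger one.

```python
import math

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in conversion rates
    (pooled two-proportion z-test, normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 100 total responses: 6% vs. 12% looks like a big win but is not significant
print(two_proportion_p_value(3, 50, 6, 50))        # p ≈ 0.29, above 0.05
# The same rates with 20x the responses are statistically significant
print(two_proportion_p_value(60, 1000, 120, 1000))  # p far below 0.05
```

This is why declaring a winner from 100 responses on a 100,000-contact list is risky: the apparent lift can easily be random variation.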