Split testing and special characters

October 3, 2013

It’s that time of year again when we release our new corporate Christmas e-card designs. They are all now nicely displayed on our christmas ecards microsite, so we turned to the business of designing and sending the first of our email campaigns to promote them.

This can sometimes be a difficult internal process for us in a company full of email marketing experts! We were able to settle on a design (a humorous turkey image – it’s the Christmas campaign, so we can lighten up a little), but colleagues disagreed about the subject line. We had chosen ‘Don’t be a turkey…’, which some of the team thought was too flippant. Other colleagues favoured using festive special characters.

We’ve written blogs recently about the use of special characters in email subject lines. For our normal emails we probably would not use them, but for a Christmas campaign we thought we could get away with it. We found some great Christmas special characters: a snowman and some snowflakes. They looked great when we chose them but like squashed flies when they landed in the inbox. We then settled on two stars, which were stronger and bolder and looked a lot better. You may not like them, but you have to admit that they stand out in a B2B inbox.
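As a side note on the mechanics, non-ASCII characters in a subject line have to be encoded for the message header; how they actually render is then up to the receiving client, which is why the same glyph can look fine in one inbox and like a squashed fly in another. A minimal sketch using Python's standard library (the specific code points and wording are illustrative, not the exact subject lines we used):

```python
# Sketch: encoding subject lines containing Unicode symbols for an email header.
# The glyphs and wording below are illustrative examples only.
from email.header import Header

subjects = {
    "snowman":    "\u2603 Christmas e-cards are here",
    "snowflakes": "\u2744\u2744 Christmas e-cards are here",
    "stars":      "\u2605\u2605 Christmas e-cards are here",
}

for name, subject in subjects.items():
    # RFC 2047 encoding lets the non-ASCII header survive transport;
    # the displayed glyph still depends entirely on the email client.
    encoded = Header(subject, "utf-8").encode()
    print(name, "->", encoded)
```

The encoding step is handled for you by any sane sending library or ESP; the point is that the inbox rendering is the part you cannot control, which is exactly why we previewed the characters in real inboxes before committing.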

Email marketing is never about absolutes, and split testing allows you to decide on a winning message. We had a database of about 8,000 records and isolated our main CRM list for the test, sending each subject line to 1,000 records. Batch 1 had the subject line ‘Don’t be a turkey…’ and batch 2 contained the special characters. Split testing normally calls for batches around ten times this size, but we wanted to demonstrate that you can get results from a smaller data set.
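The batching step above is simple but worth doing properly: the two test batches should be drawn at random so neither is biased by how the list happens to be ordered. A minimal sketch, with placeholder record IDs standing in for real addresses:

```python
# Sketch: drawing two random 1,000-record test batches from a CRM list.
# Record IDs are placeholders; in practice these would be real contacts.
import random

crm_list = [f"record-{i}" for i in range(8000)]  # illustrative 8,000-record list

random.seed(42)                          # fixed seed so the example is reproducible
sample = random.sample(crm_list, 2000)   # random draw avoids ordering bias
batch_1 = sample[:1000]                  # subject line: 'Don't be a turkey…'
batch_2 = sample[1000:]                  # subject line: special characters
remainder = [r for r in crm_list if r not in set(sample)]

print(len(batch_1), len(batch_2), len(remainder))  # 1000 1000 6000
```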

Most ESPs will now allow automated split testing. This lets you set up the two versions, choose your winning criteria and the length of the test window, and the system then sends the winning version to the rest of the data. For our campaign we did it manually and cast a human eye over the results after the initial test. The key to split testing is to change only one variable in any test; otherwise your results become meaningless.
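The decision step an ESP automates can be sketched in a few lines: after the test window closes, compare the two batches on a single chosen metric and send the winner to the remainder. The function and figures below are illustrative, not any particular ESP's API:

```python
# Sketch of the automated decision step: compare one metric (here unique
# click-through rate) across the test batches and pick the winner.
# The version names and numbers are illustrative placeholders.

def pick_winner(results):
    """results: {version_name: {"sent": int, "unique_clicks": int}}"""
    def ctr(stats):
        return stats["unique_clicks"] / stats["sent"]
    return max(results, key=lambda v: ctr(results[v]))

test_results = {
    "v1_turkey": {"sent": 1000, "unique_clicks": 141},
    "v2_stars":  {"sent": 1000, "unique_clicks": 193},
}

winner = pick_winner(test_results)
print(winner)  # the version with the higher click-through rate
```

Note that the one-variable rule lives outside the code: if the two versions differ in more than the subject line, no amount of automation makes the comparison meaningful.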

Our test results showed that version 2 had the higher unique click-through rate, at 19.3% compared with 14.1% for version 1. The unique open rate for version 1, however, was higher at 58%. This was totally against popular opinion in the office.

This shows the value of split testing. It’s tempting to create your campaign, believe in its quality and message, and fail to test it. Your opinions won’t always match those of your readership, and an element that isn’t your favourite may be the winning message.