Adding To The Brand Conversation

“Data Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today's column is written by Mukund Ramachandran, General Manager, Brand Advertising Solutions, DataXu.

Over the summer, Bob Arnold of Kellogg started a column called Brand Aware on AdExchanger. Bob wrote something everyone in digital marketing can relate to: the industry doesn't do brand advertising well, and until we do, those dollars, orders of magnitude larger in spend, will simply not flow into digital.

In an effort to add to the conversation, I’d like to share some of the principles I have learned about brand advertising.

Online brand surveys are becoming increasingly common. Unfortunately, in my experience, they are used more as a CYA than as a legitimate approach to the brand advertising question. I believe that if digital marketers and planners consider the following four key principles when they ask their platform partners for a campaign with 'branding' as an objective, they will be well served.

1. Active brand optimization vs. passive brand impact studies

The first and perhaps the most important question you have to ask your media partner is this: Is a brand study a post-campaign study, or does it actively influence campaign delivery to drive lift?

This has implications for survey design and, most importantly, sample size. As few as 50 responses can be statistically significant if you are running a passive brand impact study.

However, if you want to achieve active brand optimization, a sample size of 50 is simply insufficient. Brand optimization requires an understanding of all the parameters surrounding a positive survey response. The more robust the optimization model, the more parameters it analyzes and the more survey responses it requires.
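To make the sample-size point concrete, here is a sketch using the standard two-proportion power calculation (a textbook formula, not any particular vendor's methodology). The lift scenario, significance level and power below are illustrative assumptions.

```python
import math
from statistics import NormalDist


def min_sample_per_cell(p_control: float, p_exposed: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Minimum survey responses per cell (control and exposed) needed to
    detect a lift from p_control to p_exposed, via the normal-approximation
    two-proportion sample size formula."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = NormalDist().inv_cdf(power)           # desired statistical power
    p_bar = (p_control + p_exposed) / 2
    numerator = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * math.sqrt(p_control * (1 - p_control)
                                   + p_exposed * (1 - p_exposed))) ** 2
    return math.ceil(numerator / (p_exposed - p_control) ** 2)


# Detecting a 5-point lift in favorability, from 30% to 35%, takes well
# over a thousand responses per cell -- far more than 50.
print(min_sample_per_cell(0.30, 0.35))
```

And that is just to *measure* a lift; an optimization model slicing those responses across many parameters needs correspondingly more.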

If you are not sure, another way to identify the answer is to ask for the expected number of survey completions. Because survey completions cost money, you can be sure that media partners won't overstate the number of completes they expect to get, and this will give you a good indication of methodology.

2. A well-thought-out single question works best

Think hard about a well-crafted single question that addresses the brand objective, be it awareness, favorability or intent. The logic behind this is very simple: each additional question reduces response rates by roughly half. Think of your own behavior when confronted with a survey long enough to need a progress bar.

For optimization campaigns, resist the urge to ask additional questions about the respondent's demographics, consideration of competitive products and so on. No one disputes that the extra information is valuable. But the simple trade-off in good survey design means your best chance of improving response rate (and hence success) is a single well-designed question about the specific brand metric you want to drive.

Marketer Tip: If your media partner doesn't push back when you come to them with a three-, four- or even six-question survey, ask them how many responses you can expect to see for *each* question. You will quickly find that the number of responses per question drops so dramatically that you won't be able to draw much statistical inference from the data, which defeats the purpose of including the extra questions in the first place.
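The "roughly half per question" rule of thumb compounds quickly. A minimal sketch, with an assumed starting count of 800 completes on question one:

```python
def expected_completes(first_question_completes: int, n_questions: int,
                       dropoff: float = 0.5) -> list[int]:
    """Expected completes per question if each additional question
    cuts the response rate by `dropoff` (roughly half, per the column)."""
    return [int(first_question_completes * dropoff ** k)
            for k in range(n_questions)]


# A six-question survey that starts with 800 completes is down to 25
# by the final question.
print(expected_completes(800, 6))  # [800, 400, 200, 100, 50, 25]
```

By question four you are already below the threshold where a passive study is significant, let alone active optimization.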

3. Concurrent Test & Control

Many academic studies have shown that the best methodology for brand studies is concurrent test and control. Yet even for test and control studies, I have seen many designs where the control impressions are staggered to run only at the beginning of the campaign. Over the course of a six- or eight-week flight, the chances of external events affecting the outcome, such as a TV buy, a competitor campaign or even a news event, are high. Running control impressions throughout the campaign produces a truly randomized set of responses, unbiased toward any particular time interval.

Marketer Tip: Ask your media partner for the test methodology. And even if you are convinced it is test and control, ensure that both the control and exposed groups are surveyed throughout the campaign by asking for sample sizes per week over the entire length of the campaign, or at least by setting up frequent checkpoints.
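One common way to keep the holdout concurrent is to assign each user to control or exposed deterministically at impression time, so the control group accrues all the way through the flight rather than only in week one. This is a generic sketch of that pattern, not any vendor's implementation; the 10% holdout share is an assumption.

```python
import hashlib


def assign_group(user_id: str, holdout_pct: float = 0.10) -> str:
    """Deterministically bucket a user into 'control' or 'exposed'.

    Hashing the user ID gives a stable, uniform assignment: the same user
    always lands in the same group, and because the decision is made at
    each impression, control responses accumulate across the whole flight."""
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2 ** 64  # uniform in [0, 1)
    return "control" if bucket < holdout_pct else "exposed"
```

Because assignment is a pure function of the user ID, the weekly split between control and exposed stays stable no matter when in the flight a user shows up, which is exactly what the per-week sample-size checkpoints above should confirm.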

4. The footprint has to remain the same

Control and exposed populations must be characteristically identical and randomly selected. If your control and exposed populations aren’t the same, you should assume any results are invalid. Sometimes, it’s a rookie mistake. A media partner will create a control group with a different GEO concentration than the exposed, or split the cohorts into two content categories. Sometimes, it’s done deliberately. Wouldn’t it be easy to show lift if your control was predisposed to dislike your offering?

Marketer Tip: Think about which characteristics you are holding constant when you create your control and exposed populations.
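A simple balance check catches the rookie mistakes described above. This illustrative sketch (the function name and tolerance are my own, not a standard tool) compares the share of each characteristic value, such as GEO or content category, between the two cohorts and flags any gap above a tolerance:

```python
from collections import Counter


def balance_report(control: list[str], exposed: list[str],
                   tolerance: float = 0.02) -> dict[str, float]:
    """Return {value: exposed_share - control_share} for every
    characteristic value whose share differs by more than `tolerance`.
    An empty dict means the cohorts look balanced on this dimension."""
    def shares(cohort: list[str]) -> dict[str, float]:
        counts = Counter(cohort)
        total = sum(counts.values())
        return {k: v / total for k, v in counts.items()}

    c, e = shares(control), shares(exposed)
    return {k: round(e.get(k, 0.0) - c.get(k, 0.0), 4)
            for k in set(c) | set(e)
            if abs(e.get(k, 0.0) - c.get(k, 0.0)) > tolerance}


# A control group that is 90% US against an exposed group that is 70% US
# fails the check; the study's lift numbers should not be trusted.
print(balance_report(["US"] * 90 + ["UK"] * 10,
                     ["US"] * 70 + ["UK"] * 30))
```

Run the same check per dimension (GEO, content category, device, frequency) before believing any lift number.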

In conclusion, online brand studies, if done correctly, have a great advantage over traditional brand health measurement. In the traditional world, the environment in which consumers were surveyed differed from the medium that carried the advertising; think of TV advertising measured via random-digit-dial phone surveys. That gap means the survey results can never feed back into digital marketing planning in real time.

Online branding solutions offer a strategic way to address that spend. We can apply what we have learned in digital advertising, using big data and analytics, to glean better insights into customer motivations and measure brand health more accurately. Applying these four principles will improve results, encouraging more and more marketers to allocate dollars to digital.