AdWords Campaign Experiments Brings Split Testing to PPC

This week, Google AdWords announced a new tool for its paid search advertisers: AdWords Campaign Experiments (ACE), a beta feature that lets you split test your PPC campaigns.

Until now, decisions to raise bids, add or delete keywords, change keyword match types or restructure campaigns have been made on gut feel, without hard data on their efficacy. Results had to be measured by comparing metrics like traffic, spend, average CPC and conversions before and after a change, which is never as reliable as running a split test. When you “test” in succession rather than concurrently, your data may be muddied by external factors like seasonality, competitor behavior and consumer confidence.

50/50 vs. shifting your risk

A neat feature in ACE is the ability to shift the risk of your experiment to a smaller slice of your traffic. For example, you may choose to send 80% of traffic to your existing campaign (Control) and 20% to your Experiment. If the Experiment is a bomb, there’s less impact on your total spend and conversions. Keep in mind, however, that such experiments take longer to complete, because the Experiment still needs to accumulate a certain amount of traffic before the results mean anything, and an uneven split won’t give you as clean a comparison as sending traffic 50/50 between Control and Experiment.
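To see why a smaller Experiment slice stretches out the test, consider how many clicks each arm needs before a difference is detectable. Here's a rough sketch using the standard two-proportion sample-size formula; the conversion rates, lift and daily click volume are made-up numbers for illustration, not anything specific to ACE:

```python
import math

def sample_size_per_arm(p_control, p_variant, z_alpha=1.96, z_beta=0.84):
    """Approximate clicks needed in EACH arm of a split test to detect
    the difference between two conversion rates (95% confidence, 80% power)."""
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    return math.ceil((z_alpha + z_beta) ** 2 * variance
                     / (p_control - p_variant) ** 2)

# Hypothetical campaign: 2% baseline conversion rate, hoping to detect
# a lift to 2.5%, with 10,000 clicks per day across the whole campaign.
n = sample_size_per_arm(0.02, 0.025)
daily_clicks = 10_000

for experiment_share in (0.5, 0.2):
    # The Experiment arm only sees its share of daily clicks, so a
    # smaller share means more days to reach the required sample size.
    days = n / (daily_clicks * experiment_share)
    print(f"{experiment_share:.0%} to Experiment: ~{days:.0f} days")
```

With these numbers, the 20% split takes two and a half times as long as the 50/50 split to fill the Experiment arm, which is exactly the trade-off you're making when you shift risk away from the test.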

Statistical significance shortcuts

Another great feature for mathophobes like me is the clear markers that indicate statistical significance. A statistically significant result is unlikely to have happened by chance, so you can be confident that applying such results will improve your campaign performance. These markers make it easy to spot at a glance exactly which changes had an impact.
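Under the hood, a marker like this boils down to a significance test on two conversion rates. Here's a minimal sketch of a two-proportion z-test, the textbook way to ask "is this difference likely chance?" (the click and conversion counts are hypothetical, and this isn't necessarily the exact test Google runs):

```python
import math

def is_significant(conv_a, clicks_a, conv_b, clicks_b, z_crit=1.96):
    """Two-proportion z-test: returns (significant at 95%?, z-score)
    for the difference in conversion rate between two arms."""
    p_a = conv_a / clicks_a
    p_b = conv_b / clicks_b
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    std_err = math.sqrt(p_pool * (1 - p_pool)
                        * (1 / clicks_a + 1 / clicks_b))
    z = (p_b - p_a) / std_err
    return abs(z) > z_crit, z

# Hypothetical data: Control converted 200 of 10,000 clicks (2.0%),
# the Experiment converted 260 of 10,000 (2.6%).
significant, z = is_significant(200, 10_000, 260, 10_000)
print(f"z = {z:.2f}, significant at 95%: {significant}")
```

In this made-up case the z-score clears the 1.96 threshold, so the lift would earn a significance marker; a smaller gap or fewer clicks would not, which is why ACE waits for enough data before flagging anything.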

One question I have for Google is whether testing impacts a campaign’s Quality Score. If it doesn’t, perhaps there’s a loophole for advertisers with lower Quality Scores – they can set up an identical “Experiment” campaign that’s scrubbed of the click-through history accrued by the existing campaign.