RSAs use machine learning to match the best ad variation to each search. To evaluate them properly, you need ad performance data segmented by query, and to get that data you need to delve into scripts and Ads API reports.

Advertisers know that to be effective with Google search ads they must stand out in a competitive environment, and that often means having ads that qualify for the coveted positions above the search results. And since PPC is hardly a secret anymore, there are always more advertisers willing to compete for those few highly desirable ad slots. So when we hear that a new Google Ads feature, like Responsive Search Ads (RSAs), can show our ads on new inventory, that gets our attention. Where exactly is Google getting this new inventory that RSAs may qualify for? And what is the right way to evaluate the performance of this new inventory? Let’s take a look.

Ad tests need to limit the variables

When we do A/B ad tests through our tools in Optmyzr (my company), we firmly believe that we should compare apples to apples. In a perfect world, we would be able to replicate the exact same conditions for a search multiple times and show a different ad variant every time to get data about which ad drives the best results, whether that result is the best CTR, conversion rate, or conversions per impression (a combination of the previous two metrics).

But we don’t live in a perfect world, so we have to sacrifice a bit of precision and limit the variables of our tests as much as possible. In Google Ads, that is really difficult because there are many factors that change and that we can’t always control for. In some cases, the best we can do is compare similar ad formats (e.g., expanded text ads) within one ad group targeting the same audience. While that may sound like an apples-to-apples comparison, it often isn’t, because the ad group contains different keywords, those keywords match an even wider range of queries, and the ads are shown to entirely different people.

In some cases, we will compare different ad formats against one another, but that only makes sense if those ads were competing for the same ad slot. For example, it’s a fair test when an ad slot on the display network could have accepted either a display ad or a text ad. But it’s not a fair test when some of that inventory could only have shown text ads and other inventory only display ads. That’s an extra variable that muddles the results.

RSAs should be evaluated on incrementality

With RSAs, a simple comparison of the RSA’s performance to that of the expanded text ads (ETAs) in the same ad group is a flawed test, because Google says RSAs have access to inventory where the ETA could not have appeared. Google’s Matt Lawson says, “There are all sorts of instances where you might end up serving impressions in a low CTR placement that you would never have qualified for before.” He’s not talking purely about Google showing ads in places that didn’t show ads before the arrival of RSAs. He’s also talking about showing ads on searches where your static ETAs may previously have had too low an Ad Rank to garner an impression. I’ve spoken with other Googlers to confirm that at least some of this new inventory is not new at all. It’s inventory that’s always been there, but where we may not have had good enough ads to make it available to us.

So given that the inventory is sometimes different, what we need to measure is whether the incremental volume driven by the RSA justifies keeping those ads active. Andy Taylor, in his post describing why click-through and conversion rates matter less than before, makes the point that only Google can measure the incrementality of new ad formats like RSAs. He says this is the case “since there’s no way for advertisers to quantify ad placements that are off limits to ETAs for whatever reason.” But if we can identify some of these new placements, I would argue that we can measure their performance.

How to find the new inventory RSAs gave access to

The new placements are search terms for which you previously didn’t qualify. Which ones exactly? Run a search terms report for the period right before, and another for the period right after, you enabled RSAs in an ad group. Finding the date when you first started RSAs is easy enough with the Change History.

Use the Change History in Google Ads to find the date when you first added responsive search ads to an ad group. Screenshot from Google.

Then see if there are entirely new queries that had no impressions before and some impressions after. While there certainly can be other reasons for these queries to have started showing your ads, like bid changes, algorithm changes by Google, and changes in the competitive landscape, the addition of an RSA ad unit is also a plausible reason for the new inventory opening up to you.
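To make that before-and-after comparison concrete, here is a minimal sketch in Python. The queries and impression counts are made up for illustration; in practice they would come from your two downloaded search terms reports:

```python
# Hypothetical search terms and impression counts from a search terms
# report, pulled for the windows before and after the RSA was added.
before = {"red running shoes": 120, "trail shoes": 45}
after = {"red running shoes": 130, "trail shoes": 50,
         "best cushioned shoes for marathons": 28}

# Queries with impressions after the RSA launch but none before are
# candidates for the "new inventory" the RSA opened up.
new_queries = sorted(q for q in after if q not in before)
```

Any query in `new_queries` deserves a second look: did a bid or competitive change explain it, or did the RSA unlock it?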

Use the date range comparison on the Search Terms tab to find queries that only started showing impressions after responsive search ads were added to an ad group. Screenshot from Google.

Finding new inventory long after RSAs started

RSAs use machine learning to match the best ad variation with each search. As the system learns, it may turn off inventory that performs poorly and turn on inventory that appears promising. So it’s worthwhile to keep an eye on what this new inventory is on an ongoing basis. You can get this data by looking at ad performance segmented by query. Surprised at this recommendation because you’ve never seen a search term segment on the ads performance page in the Google Ads front-end (AWFE)? You’re right, that data is not there, but if you look beyond the Google Ads interface and delve into scripts and Ads API reports, it’s available.

As I’ve covered in previous posts, there are over 40 Google Ads reports available through the API, and they’re chock full of details you simply won’t see in AWFE (the AdWords Front End).

First, download an Ad Performance Report and include the Id and AdType fields.

The Ad Performance Report can be downloaded using scripts or the API. Screenshot from Google.

Then download the Search Query Performance Report and include the Query, CreativeId, and all the metrics you want to check, like Impressions, Clicks, Conversions, etc.
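If you pull these reports via Google Ads scripts or the API, the requests can be expressed in AWQL. Here is a sketch of the two query strings (shown as plain Python strings; the LAST_30_DAYS date range is a placeholder you would adjust to your own reporting window):

```python
# AWQL for the Ad Performance Report: one row per ad, with its format.
AD_PERF_AWQL = (
    "SELECT Id, AdType "
    "FROM AD_PERFORMANCE_REPORT "
    "DURING LAST_30_DAYS"
)

# AWQL for the Search Query Performance Report: one row per
# (query, creative) combination, with the metrics you want to check.
SQPR_AWQL = (
    "SELECT Query, CreativeId, Impressions, Clicks, Conversions "
    "FROM SEARCH_QUERY_PERFORMANCE_REPORT "
    "DURING LAST_30_DAYS"
)
```

In a Google Ads script, each string would be passed to the reporting call and the result exported to a spreadsheet for the join in the next step.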

The Search Query Performance report contains a reference to the CreativeId which can be used to connect search term performance with the ad that was shown for a particular search term. Screenshot from Google.

Finally, use a spreadsheet to do a vlookup between the two data sets so that next to each unique combination of query and ad, you can see whether the ad was an ETA or an RSA.
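If you’d rather do the lookup in code than in a spreadsheet, here is a minimal Python sketch of the same join. The rows are hypothetical stand-ins for the two downloaded reports:

```python
# Hypothetical rows as they might come out of the two reports.
ads = [
    {"Id": "111", "AdType": "EXPANDED_TEXT_AD"},
    {"Id": "222", "AdType": "RESPONSIVE_SEARCH_AD"},
]
search_terms = [
    {"Query": "red running shoes", "CreativeId": "111", "Impressions": 90},
    {"Query": "red running shoes", "CreativeId": "222", "Impressions": 40},
    {"Query": "best cushioned shoes", "CreativeId": "222", "Impressions": 25},
]

# Build a lookup table keyed by ad ID -- the code equivalent of a vlookup.
ad_type_by_id = {ad["Id"]: ad["AdType"] for ad in ads}

# Annotate each query row with the format of the ad that served it.
for row in search_terms:
    row["AdType"] = ad_type_by_id.get(row["CreativeId"], "UNKNOWN")
```

The CreativeId in the search query rows matches the Id in the ad rows, which is what makes the join possible.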

Use the vlookup function in spreadsheets to connect the ad data with the search term data. Screenshot from Google.

Then sort the resulting sheet by query and you’ll start to see where a particular query has shown with both RSA and ETA ads. You’ll also see queries that have shown ONLY for RSA ads.
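That classification step can also be sketched in code: group the ad formats seen for each query, then split queries into RSA-only (candidate incremental inventory) versus both formats (comparable in a traditional A/B test). The sample rows are hypothetical:

```python
# Hypothetical joined rows: each (query, ad format) pair after the lookup.
rows = [
    {"Query": "red running shoes", "AdType": "EXPANDED_TEXT_AD"},
    {"Query": "red running shoes", "AdType": "RESPONSIVE_SEARCH_AD"},
    {"Query": "best cushioned shoes", "AdType": "RESPONSIVE_SEARCH_AD"},
]

# Collect the set of ad formats that served each query.
formats_by_query = {}
for row in rows:
    formats_by_query.setdefault(row["Query"], set()).add(row["AdType"])

# RSA-only queries are the new inventory to judge on incrementality;
# queries served by both formats can be compared head to head.
rsa_only = [q for q, f in formats_by_query.items()
            if f == {"RESPONSIVE_SEARCH_AD"}]
both = [q for q, f in formats_by_query.items() if len(f) > 1]
```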

Rows highlighted in blue show search terms that only showed with RSAs, new inventory that should be judged on its incremental nature. Rows in green showed ads with multiple ad formats and can be compared in a more traditional A/B test. Screenshot from Google.

With these two techniques, you’ll start to get a sense of the incrementality Google is referring to.

Why didn’t my old ads qualify for these search terms?

So why do RSAs open up new search term inventory in the first place? Remember that Google Ads is an auction where the order of the ads is determined by Ad Rank, which is a function of the CPC bid, Quality Score, and other factors that impact expected CTR, such as the use of ad extensions.

Every time a search happens, a new auction is run. While an advertiser’s CPC bid may not change from one auction to the next, the QS can change dramatically based on Google’s machine learning Quality Score system, whose job is to predict the likelihood of an ad getting a click (its expected CTR). That likelihood is significantly impacted by how the query relates to the ad. And when the QS system is limited to a handful of static ETA ads, it may not be able to pick one that is good enough to have the QS necessary to get the ad to show on the page. But when the ad group contains an RSA, the system can try to find a combination of components that will have a high QS for that particular search. And when it succeeds, the ad is all of a sudden eligible to participate in more auctions, hence getting access to new inventory. So it’s not so much that Google has unlocked some new inventory that previously didn’t exist. Instead, machine learning has helped figure out a way to create an ad that is relevant enough to access inventory that’s always been there.
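A toy numerical illustration of that logic follows. The numbers and the simplified formula (Ad Rank as bid × predicted CTR, with a fixed eligibility threshold) are made up for illustration; the real Ad Rank calculation includes more factors:

```python
# Toy model: Ad Rank ~ CPC bid x predicted CTR; an ad must clear a
# per-auction threshold to appear on the page. All numbers are invented.
BID = 2.00          # same bid for both ads in this auction
THRESHOLD = 0.05    # hypothetical minimum Ad Rank to be eligible

eta_predicted_ctr = 0.02   # static ETA: a poor match for this query
rsa_predicted_ctr = 0.04   # RSA: assembled a more relevant combination

eta_rank = BID * eta_predicted_ctr   # below the threshold: no impression
rsa_rank = BID * rsa_predicted_ctr   # above the threshold: "new" inventory
```

With identical bids, the RSA clears the threshold that the ETA misses, which is exactly how an RSA can serve on a query the ETA never qualified for.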

Machine learning needs a teacher; it needs you!

Now some advertisers say that RSAs don’t perform as well as ETAs. As Andy, Matt, and I have already pointed out, that may be a finding based on incomplete information because it may ignore the fact that the different formats trigger ads for entirely different queries. But what if you’ve accounted for that and they do perform worse? That sounds like an optimization opportunity rather than a reason to pause RSAs.

Help the machine learn to do better; don’t just turn off the feature. That may not happen, though, because humans are funny about how they treat technology. Automotive researchers found that people tend to be quite forgiving of mistakes made by other humans who are learning something. If your 16-year-old drives poorly, you give advice and trust that they will learn from it and get better with experience. When a self-driving car makes the same mistake as your 16-year-old, on the other hand, humans tend to chalk it up to bad software, turn off the self-driving feature, and continue to drive manually.

Many of the exciting automation features we’re seeing in PPC these days are driven by machine learning, and as the name implies, there is some learning that needs to happen before results get good enough. How quickly a learning system can become good depends on having a good teacher. And even with the best teacher, an algorithm needs time to first train itself and later to update what it’s learned as it starts to see the real-life results of its decisions.

So RSAs can only be as good as the ad components we’ve provided. Google has guidelines for what constitutes good components, and I’ve provided a few scripts to mine your historical ads data for good ad components and n-grams. Once the ad unit has great text to experiment with, give it some time to run the experiments.

Just like broad match will eventually stop showing ads for related queries that fail to turn impressions into clicks, so will RSAs. Google makes no money from impressions; it makes money from clicks. And Google is pretty strict about not showing things that seem irrelevant, i.e., RSA variations that never get clicked, because these are a waste of space and a detriment to Google’s position as the provider of useful answers.

Conclusion

It’s easy to get carried away with anecdotes heard from other advertisers and decide that maybe RSAs don’t work all that well. I hope that the more you understand how to properly measure them, and the technology that will improve their performance, the better positioned you will be to make an informed decision about where RSAs fit into your strategy for managing successful PPC accounts in a world that is ever more driven by automation. And if you’d like to learn more about the role of humans in digital marketing in an AI world, check out my first book.

Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land.

About The Author

Frederick (“Fred”) Vallaeys was one of the first 500 employees at Google where he spent 10 years building AdWords and teaching advertisers how to get the most out of it as the Google AdWords Evangelist.
Today he is the Cofounder of Optmyzr, an AdWords tool company focused on unique data insights, One-Click Optimizations™, advanced reporting to make account management more efficient, and Enhanced Scripts™ for AdWords. He stays up-to-speed with best practices through his work with SalesX, a search marketing agency focused on turning clicks into revenue. He is a frequent guest speaker at events where he inspires organizations to be more innovative and become better online marketers.