
Testing for only SEO or only CRO isn't always ideal. Some changes result in higher conversions but reduced site traffic, for instance, while others may rank more highly but convert less well. In today's Whiteboard Friday, we welcome Will Critchlow as he demonstrates a method of testing both your top-of-funnel SEO changes and your conversion-focused CRO changes at once.


Video Transcription

Hi, everyone. Welcome to another Whiteboard Friday. My name is Will Critchlow, one of the founders at Distilled. If you've been following what I've been writing and talking about around the web recently, today's topic may not surprise you that much. I'm going to be talking about another kind of SEO testing.

Over at Distilled, we've been investing pretty heavily in building out our capability to do SEO tests and in particular built our optimization delivery network, which has let us do a new kind of SEO testing that hasn't been previously available to most of our clients. Recently we've been working on a new enhancement to this, which is full funnel testing, and that's what I want to talk about today.

So funnel testing is testing all the way through the funnel, from acquisition at the SEO end through to conversion. So it's SEO testing plus CRO testing together. I'm going to write a little bit more about some of the motivation for this. But, in a nutshell, it boils down to the fact that it's perfectly possible (in fact, we've seen cases in the wild) for a test to win in SEO terms and lose in CRO terms, or vice versa.

In other words, tests that maybe you make a change and it converts better, but you lose organic search traffic. Or the other way around, it ranks better, but it converts less well. If you're only testing one, which is common — I mean most organizations are only testing the conversion rate side of things — it's perfectly possible to have a winning test, roll it out, and do worse.

CRO testing

So let's step back a little bit. A little bit of a primer. Conversion rate optimization testing works in an A/B split kind of way. You can test on a single page, if you want to, or a site section. The way it works is you split your audience. So your audience is split. Some of your audience gets one version of the page, and the rest of the audience gets a different version.

Then you can compare the conversion rate among the group who got the control and the group who got the variant. That's very straightforward. Like I say, it can happen on a single page or across an entire site. SEO testing, a little bit newer. The way this works is you can't split the audience, because we care very much about the search engine spiders in this case. For the purposes of this consideration, there's essentially only one Googlebot. So you couldn't put Google in Class A or Class B here and expect to get anything meaningful.
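To make the audience split concrete, here's a minimal sketch of how CRO bucketing is often implemented: a deterministic hash of the user ID, so each visitor lands in the same group on every visit. (The function names and the 50/50 split are illustrative, not any particular testing tool's implementation.)

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically bucket a user into control or variant.

    Hashing the user ID together with the experiment name gives every
    user a stable assignment across visits without storing any state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "control" if bucket < split else "variant"

print(assign_variant("user-123", "checkout-copy-test"))
```

Because the assignment is a pure function of the ID, the same person sees the same page version whether they come back in an hour or a week.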

SEO testing

So the way that we do an SEO test is we actually split the pages. To do this, you need a substantial site section. So imagine, for example, an e-commerce website with thousands of products. You might have a hypothesis of something that will help those product pages perform better. You take your hypothesis and you only apply it to some of the pages, and you leave some of the pages unchanged as a control.

Then, crucially, search engines and users see the same experience. There's no cloaking going on. There's no duplication of content. You simply change some pages and not change others. Then you apply kind of advanced mathematical, statistical analysis trying to figure out do these pages get statistically more organic search traffic than we think they would have done if we hadn't made this change. So that's how an SEO test works.
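A toy version of that analysis, with invented entrance counts: compare the variant pages' observed traffic against a counterfactual forecast built from the control pages' behavior. (Real analyses use time-series models and confidence intervals; this only shows the core idea.)

```python
# Daily organic entrances to each page group (hypothetical, 5 days each).
pre_control  = [2000, 2000, 2000, 2000, 2000]
pre_variant  = [1800, 1800, 1800, 1800, 1800]
post_control = [2000, 2000, 2000, 2000, 2000]
post_variant = [1998, 1998, 1998, 1998, 1998]

# The pre-period ratio tells us what the variant pages "should" have done
# post-change if the hypothesis had no effect.
ratio = sum(pre_variant) / sum(pre_control)   # 0.90
expected = sum(post_control) * ratio          # counterfactual entrances
observed = sum(post_variant)
uplift = observed / expected - 1
print(f"Estimated uplift: {uplift:+.1%}")
```

The key point is that nothing is cloaked: both groups serve identical pages to users and to Googlebot; only the analysis distinguishes them.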

Now, as I said, the problem that we are trying to tackle here is it's really plausible, despite Google's best intentions to do what's right for users, it's perfectly plausible that you can have a test that ranks better but converts less well or vice versa. We've seen this with, for example, removing content from a page. Sometimes having a cleaner, simpler page can convert better. But maybe that was where the keywords were and maybe that was helping the page rank. So we're trying to avoid those kinds of situations.

Full funnel testing

That's where full funnel testing comes in. So I want to just run through how you run a full funnel test. What you do is you first of all set it up in the same way as an SEO test, because we're essentially starting with SEO at the top of the funnel. So it's set up exactly the same way.

Some pages are unchanged. Some pages get the hypothesis applied to them. As far as Google is concerned, that's the end of the story, because on any individual request to these pages that's what we serve back. But the critically important thing here is I've got my little character. This is a human user whose browser performs a search, "What do badgers eat?"

This was one of our silly examples that we came up with on one of our demo sites. The user lands on this page here. What we do is we then set a cookie. This user then, as they navigate around the site, no matter where they go within this site section, gets the same treatment, either the control or the variant. They get the same treatment across the entire site section. This is more like the conversion rate test here.

Googlebot = stateless requests

So what I didn't show in this diagram is if you were running this test across a site section, you would cookie this user and make sure that they always saw the same treatment no matter where they navigated around the site. So because Googlebot is making stateless requests, in other words just independent, one-off requests for each of these pages with no cookie set, Google sees the split.
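Here's a sketch of the serving logic being described, with hypothetical names: cookied users keep whatever treatment their landing page assigned, while cookieless requests like Googlebot's fall through to the page-level split.

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    cookies: dict = field(default_factory=dict)

def choose_treatment(request: Request, page_id: str, variant_pages: set) -> str:
    """Pick the treatment to render for one request.

    A cookied user keeps the treatment their first page assigned, no
    matter where they navigate. A stateless, cookieless request (like
    Googlebot's) falls through to the page-level split, so search
    engines see control and variant pages side by side.
    """
    if "ab_treatment" in request.cookies:
        return request.cookies["ab_treatment"]
    return "variant" if page_id in variant_pages else "control"
```

In a real deployment the response would also set the `ab_treatment` cookie on the user's first page view; the cookie name and `Request` class here are purely illustrative.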

Evaluate SEO test on entrances

Users get whatever their first page impression looks like. They then get that treatment applied across the entire site section. So what we can do then is we can evaluate independently the performance in search, evaluate that on entrances. So do we get significantly more entrances to the variant pages than we would have expected if we hadn't applied a hypothesis to them?

That tells us the uplift from an SEO perspective. So maybe we say, "Okay, this is plus 11% in organic traffic." Well, great. So in a vacuum, all else being equal, we'd love to roll out this test.

Evaluate conversion rate on users

But before we do that, what we can do now is we can evaluate the conversion rate, and we do that based on user metrics. So these users are cookied.

We can also set an analytics tag on them and say, "Okay, wherever they navigate around, how many of them end up converting?" Then we can evaluate the conversion rate based on whether they saw treatment A or treatment B. Because we're looking at conversion rate, the audience size doesn't exactly have to be the same. So the statistical analysis can take care of that fact, and we can evaluate the conversion rate on a user-centric basis.
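For the statistics, a standard two-proportion z-test handles exactly this situation, since the two audiences may differ in size. A minimal sketch with invented conversion counts:

```python
from math import sqrt
from statistics import NormalDist

def conversion_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test; audience sizes need not match."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b / p_a - 1, p_value

# Control: 500 of 10,000 cookied users convert; variant: 475 of 10,000.
lift, p_value = conversion_test(500, 10_000, 475, 10_000)
print(f"Relative lift: {lift:+.1%}, p = {p_value:.3f}")
```

With numbers like these the observed drop is not yet statistically significant, which is exactly why you keep the test running until the p-value is convincing in either direction.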

So then we maybe see that it's -5% in conversion rate. We then need to evaluate, "Is this something we should roll out?" So step 1 is: Do we just roll it out? If it's a win in both, then the answer is probably yes. If they're in different directions, then there are a couple of things we can do. Firstly, we can evaluate the relative performance in the two directions, taking care to remember that conversion rate applies across all channels. A relatively small drop in conversion rate can therefore outweigh even a decent uplift in organic traffic, because the conversion rate change applies to every channel, not just your organic search channel.
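A quick worked example of why the small conversion drop can outweigh the organic uplift. The 40/60 traffic mix and the 5% baseline conversion rate are assumptions made up for illustration:

```python
# Assumed traffic mix: organic search is 40% of all sessions.
organic_sessions, other_sessions = 40_000, 60_000
baseline_cr = 0.05

baseline_conversions = (organic_sessions + other_sessions) * baseline_cr

# Variant: +11% organic traffic, but -5% conversion rate on ALL channels.
variant_conversions = (organic_sessions * 1.11 + other_sessions) * baseline_cr * 0.95

net = variant_conversions / baseline_conversions - 1
print(f"Net change in conversions: {net:+.1%}")  # a net loss
```

Even though organic traffic is up 11%, total conversions fall slightly, because the conversion penalty hits all 100,000 sessions while the traffic gain only helps the organic 40,000.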

But suppose that it's a small net positive or a small net negative. We might get to the point that it's a net positive and roll it out. Either way, we might then say, "What can we take from this? What can we actually learn?" So back to our example of the content. We might say, "You know what? Users like this cleaner version of the page with apparently less content on it. The search engines are clearly relying on that content to understand what this page is about. How do we get the best of both worlds?"

Well, that might be a question of a redesign, moving the layout of the page around a little bit, keeping the content on there, but maybe not putting it front and center to the user as they land right at the beginning. We can test those different things, run sequential tests, try and take the best of the SEO tests and the best of the CRO tests and get it working together and crucially avoid those situations where you think you've got a win, because your conversion rate is up, but you actually are about to crater your organic search performance.

We think that the more data-driven we get and the more accountable SEO testing makes us, the more important it's going to be to join these dots and make sure that we're getting true uplifts on a net basis when we combine them. So I hope that's been useful to some of you. Thank you for joining me on this week's Whiteboard Friday. I'm Will Critchlow from Distilled.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

Google is all about serving up results based on your precise location, which means there’s no such thing as a “national” SERP anymore. So, if you wanted to get an accurate representation of how you’re performing nationally, you’d have to track every single street corner across the country.

Not only is this not feasible, it’s also a headache — and the kind of nightmare that keeps your accounting team up at night. Because we’re in the business of making things easy, we devised a happier (and cost-efficient) alternative.

Follow along and learn how to set up a statistically robust national tracking strategy in STAT, no matter your business or budget. And while we’re at it, we’ll also show you how to calculate your national ranking average.

Let’s pretend we’re a large athletic retailer. We have 30 stores across the US, a healthy online presence, and the powers-that-be have approved extra SEO spend — money for 20,000 additional keywords is burning a hole in our pocket. Ready to get started?

Step 1: Pick the cities that matter most to your business

Google cares a lot about location and so should you. Tracking a country-level SERP isn’t going to cut it anymore — you need to be hyper-local if you want to nab results.

The first step to getting more granular is deciding which cities you want to track in — and there are lots of ways to do this: The top performers? Ones that could use a boost? Best and worst of the cyber world as well as the physical world?

When it comes time for you to choose, nobody knows your business, your data, or your strategy better than you do — ain’t nothing to it but to do it.

A quick note for all our e-commerce peeps: we know it feels strange to pick a physical place when your business lives entirely online. For this, simply go with the locations that your goods and wares are distributed to most often.

Even though we’re a retail powerhouse, our SEO resources won’t allow us to manage all 30 physical locations — plus our online hotspots — across the US, so we'll cut that number in half. And because we’re not a real business and we aren’t privy to sales data, we'll pick at random.

From east to west, we now have a solid list of 15 US cities, primed, polished, and poised for our next step: surfacing the top performing keywords.

Step 2: Uncover your money-maker keywords

Because not all keywords are created equal, we need to determine which of the 4,465 keywords that we’re already tracking are going to be spread across the country and which are going to stay behind. In other words, we want the keywords that bring home the proverbial bacon.

Typically, we would use some combination of search volume, impressions, clicks, conversion rates, etc., from sources like STAT, Google Search Console, and Google Analytics to distinguish between the money-makers and the non-money-makers. But again, we’re a make-believe business and we don’t have access to this insight, so we’re going to stick with search volume.

A right-click anywhere in the site-level keywords table will let us export our current keyword set from STAT. We’ll then order everything from highest search volume to lowest search volume. If you have eyeballs on more of that sweet, sweet insight for your business, order your keywords from most to least money-maker.

Because we don’t want to get too crazy with our list, we’ll cap it at a nice and manageable 1,500 keywords.

Step 3: Determine the number of times each keyword should be tracked

We may have narrowed our cities down to 15, but our keywords need to be tracked plenty more times than that — and at a far more local level.

True facts: A “national” (or market-level) SERP isn’t a true SERP and neither is a city-wide SERP. The closer you can get to a searcher standing on a street corner, the better, and the more of those locations you can track, the more searchers’ SERPs you’ll sample.

We’re going to get real nitty-gritty and go as granular as ZIP code. Addresses and geo coordinates work just as well though, so if it’s a matter of one over the other, do what the Disney princesses do and follow your heart.

The ultimate goal here is to track our top performing keywords in more locations than our poor performing ones, so we need to know the number of ZIP codes each keyword will require. To figure this out, we gotta dust off the old desktop calculator and get our math on.

First, we’ll calculate the total amount of search volume that all of our keywords generate. Then, we’ll find the percentage of said total that each keyword is responsible for.

For example, our keyword [yeezy shoes] drew 165,000 searches out of a total 28.6 million, making up roughly 0.58 percent of our traffic.

A quick reminder: Every time a query is tracked in a distinct location, it’s considered a unique keyword. This means that the above percentages also double as the share of budgeted keywords (and therefore locations) that we’ll award to each of our queries. In (hopefully) less confusing terms, a keyword that drives 0.58 percent of our traffic gets to use 0.58 percent of our 20,000 budgeted keywords, which in turn equals the number of ZIP codes we can track in. Phew.

To temper this, we highly recommend that you take the log of the search volume — it’ll keep things relative and relational. If you’re working through all of this in Excel, simply type =log(A2) where A2 is the cell containing the search volume. Because we're extra fancy, we'll multiply that by four to linearly scale things, so =log(A2)*4.
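Here's a rough Python equivalent of that spreadsheet math, with made-up search volumes. Excel's LOG defaults to base 10, so `math.log10` mirrors the `=LOG(A2)*4` step:

```python
import math

budget = 20_000  # additional budgeted keywords approved for tracking

# Hypothetical search volumes for three of our tracked keywords.
volumes = {"adidas": 1_200_000, "yeezy shoes": 165_000, "running shorts": 8_000}

# Temper raw volume with log10, scaled by 4, as in the spreadsheet.
weights = {kw: math.log10(v) * 4 for kw, v in volumes.items()}
total = sum(weights.values())

# A keyword's share of the total weight becomes its share of the 20,000
# budget, i.e. the number of ZIP codes it gets tracked in.
zip_counts = {kw: round(budget * w / total) for kw, w in weights.items()}
print(zip_counts)
```

The log tempering is what keeps a monster keyword like [adidas] from swallowing the whole budget: it still gets the most ZIP codes, just not 150 times more than the long-tail terms.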

We then found a list of every ZIP code in each of our cities to dole them out to.

The end. Sort of. At this point, like us, you may be looking at keywords that need to be spread across 176 different ZIP codes and wondering how you're going to choose which ZIP codes — so let our magic spreadsheet take the wheel. Add all your locations to it and it'll pick at random.

Of course, because we want our keywords to get equal distribution, we attached a weighted metric to our ZIP codes. We took our most searched keyword, [adidas], found its Google Trends score in every city, and then divided it by the number of ZIP codes in those cities. For example, if [adidas] received a score of 71 in Yonkers and there are 10 ZIP codes in the city, Yonkers would get a weight of 7.1.

We'll then add everything we have so far — ZIP codes, ZIP code weights, keywords, keyword weights, plus a few extras — to our spreadsheet and watch it randomly assign the appropriate amount of keywords to the appropriate amount of locations.
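Here's a rough sketch of what that weighted random assignment could look like, with invented Trends scores and ZIP lists (a stand-in for the magic spreadsheet, not its actual formulas):

```python
import random

random.seed(42)  # seeded only so the illustration is reproducible

# Hypothetical Google Trends scores and ZIP codes per city.
cities = {
    "Yonkers": {"trends": 71, "zips": ["10701", "10702", "10703", "10704"]},
    "Austin":  {"trends": 95, "zips": ["78701", "78702"]},
}

# Weight each ZIP by (city trends score / number of ZIPs in that city),
# so cities with many ZIP codes aren't overrepresented.
weighted_zips = []
for city in cities.values():
    w = city["trends"] / len(city["zips"])
    weighted_zips += [(z, w) for z in city["zips"]]

def pick_zips(pool, k):
    """Weighted sampling without replacement (Efraimidis-Spirakis keys)."""
    keyed = [(random.random() ** (1 / w), z) for z, w in pool]
    return [z for _, z in sorted(keyed, reverse=True)[:k]]

picked = pick_zips(weighted_zips, 3)
print(picked)
```

Each keyword's allotted ZIP count from the previous step would be fed in as `k`, and heavier-weighted ZIPs get picked more often across the whole keyword set.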

And that’s it! If you’ve been following along, you’ve successfully divvied up 20,000 keywords in order to create a statistically robust national tracking strategy!

Step 4: Segment, segment, segment!

20,000 extra keywords makes for a whole lotta new data to keep track of, so being super smart with our segmentation is going to help us make sense of all our findings. We’ll do this by organizing our keywords into meaningful categories before we plug everything back into STAT.

Obviously, you are free to sort how you please, but we recommend at least tagging your keywords by their city and product category (so [yeezy shoes] might get tagged “Austin” and “shoes”). You can do all of this in our keyword upload template or while you're in our magic spreadsheet.

Once you’ve added a tag or two to each keyword, stuff those puppies into STAT. When everything’s snug as a bug, group all your city tags into one data view and all your product category tags into another.

Step 5: Calculate your national ranking average

Now that all of our keywords are loaded and tracking in STAT, it’s time to tackle those ranking averages. To do that, we’ll simply pop on over to the Dashboard tab from either of our two data views.

A quick glimpse of the Average Ranking module in the Daily Snapshot gives us, well, our average rank, and because these data views contain every keyword that we’re tracking across the country, we’re also looking at the national average for our keyword set. Easy-peasy.

To see how each tag is performing within those data views, a quick jump to the Tags tab breaks everything down and lets us compare the performance of a segment against the group as a whole.

So, if our national average rank is 29.7 but our Austin keywords have managed an average rank of 27.2, then we might look to them for inspiration as our other cities aren't doing quite as well — our keywords in Yonkers have an average rank of 35.2, much worse than the national average.

Similarly, if our clothes keywords are faring infinitely worse than our other product categories, we may want to revamp our content strategy to even things out.
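If you ever want to reproduce those averages outside of STAT, the math is just a grouped mean over your rank data. A small sketch with invented ranks (only the Austin and Yonkers figures echo the examples above):

```python
from statistics import mean

# Hypothetical per-keyword Google ranks, tagged by city.
ranks = [
    {"keyword": "yeezy shoes",    "city": "Austin",  "rank": 27.0},
    {"keyword": "adidas",         "city": "Austin",  "rank": 27.4},
    {"keyword": "running shorts", "city": "Yonkers", "rank": 35.2},
]

# National average = mean over every tracked keyword-location pair.
national_avg = mean(r["rank"] for r in ranks)

# Per-city averages, for comparing each segment against the whole.
by_city: dict = {}
for r in ranks:
    by_city.setdefault(r["city"], []).append(r["rank"])
city_avgs = {city: mean(vals) for city, vals in by_city.items()}
print(national_avg, city_avgs)
```

With real data you'd have thousands of rows, but the comparison is the same: a city whose average beats the national mean is a segment worth studying.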

Go get your national tracking on

Any business — yes, even an e-commerce business — can leverage a national tracking strategy. You just need to pick the right keywords and locations.

Once you have access to your sampled population, you’ll be able to home in on opportunities, up your ROI, and bring more traffic across your welcome mat (physical or digital).


Content promotion isn’t tweeting or upvoting. Those tiny, one-off tactics are fine for beginners. They might make a dent, but they definitely won’t move the needle. Companies that want to grow big and grow fast need to grow differently.

Here’s how Kissmetrics, Sourcify, Sales Hacker, Kinsta, and BuildFire have used advanced content promotion tips like newsjacking and paid social to elevate their brands above the competition.

1. Use content to fuel social media distribution (and not the other way around)

Even at the height of their success, and prior to selling the brand and blog to Neil Patel, Kissmetrics had no dedicated social media manager. The Kissmetrics blog received nearly 85% of its traffic from organic search. The second biggest traffic-driver was the newsletter.

Social media did drive traffic to their posts. However, former blog editor Zach Bulygo’s research showed that these traffic segments often had the lowest engagement (like time on site) and the fewest conversions (like trial or demo opt-ins) — so they didn’t prioritize it. The bulk of Zach’s day was instead focused on editing posts, making changes himself, adding comments and suggestions for the author to fix, and checking for regurgitated content. Stellar, long-form content was priority number one. And two. And three.

So Zach wasn’t just looking for technically-correct content. He was optimizing for uniqueness: the exact same area where most cheap content falls short. That’s an issue because many times, a simple SERP analysis would reveal that one submission:

Today’s plagiarism tools can catch the obvious stuff, but these derivatives often slip through the cracks. Recurring paid writers contributed the bulk of the TOFU content, which would free Zach up to focus more on MOFU use cases and case studies to help visitors understand how to get the most out of their product set (from the in-house person who knows it best).

They produced marketing guides and weekly webinars to transform initial attention into new leads:

They also created free marketing tools to give prospects an interactive way to continue engaging with their brand:

In other words, they focused on doing the things that mattered most — the 20% that would generate the biggest bang for their buck. They didn’t ignore social networks completely, though. They still had hundreds of thousands of followers across each network. Instead, their intern would take the front lines. That person would watch out for anything critical, like a customer question, which would then be passed off to the Customer Success Manager, who would get back to them within a few hours.

New blog posts would get the obligatory push to Twitter and LinkedIn. (Facebook is used primarily for their weekly webinar updates.) Zach used Pablo from Buffer to design and create featured images for the blog posts.

Then he’d use an Open Graph Protocol WordPress plugin to automatically add all appropriate tags for each network. That way, all he had to do was add the file and basic post meta data. The plugin would then customize how it shows up on each network afterward. Instead of using Buffer to promote new posts, though, Zach likes MeetEdgar.

Why? Doesn’t that seem like an extra step at first glance? Like Buffer, MeetEdgar allows you to select when you’d like to schedule content. You can just load up the queue with content, and the tool will manage the rest. The difference is that Buffer constantly requires new content — you need to keep topping it off, whereas MeetEdgar will automatically recycle the old stuff you’ve previously added. This saved a blog like Kissmetrics, with thousands of content pieces, TONS of time.

He would then use Sleeknote to build forms tailored to each blog category to transform blog readers into top-of-the-funnel leads:

But that’s about it. Zach didn’t do a ton of custom tweets. There weren’t a lot of personal replies. It’s not that they didn’t care. They just preferred to focus on what drives the most results for their particular business. They focused on building a brand that people recognize and trust. That means others would do the social sharing for them.

Respected industry vets like Avinash Kaushik, for example, would often share their blog posts. And Avinash was the perfect fit, because he already has a loyal, data-driven audience following him.

So that single tweet brings in a ton of highly-qualified traffic — traffic that turns into leads and customers, not just fans.

2. Combine original research and newsjacking to go viral

Sourcify has grown almost exclusively through content marketing. Founder Nathan Resnick speaks, attends, and hosts everything from webinars to live events and meetups. Most of their events are brand-building efforts to connect face-to-face with other entrepreneurs. But what’s put them on the map has been leveraging their own experience and platform to fuel viral stories.

Last summer, the record-breaking Mayweather vs. McGregor fight was gaining steam. McGregor was already infamous for his legendary trash-talking and shade-throwing abilities. He also liked to indulge in attention-grabbing sartorial splendor. But the suit he wore to the very first press conference somehow managed to combine the best of both personality quirks:

This was no off-the-shelf suit. He had it custom made. Nathan recalls seeing this press conference suit fondly: “Literally, the team came in after the press conference, thinking, ‘Man, this is an epic suit.’” So they did what any other rational human being did after seeing it on TV: they tried to buy it online.

“Except, the dude was charging like $10,000 to cover it and taking six weeks to produce.” That gave Nathan an idea. “I think we can produce this way faster.”

They “used their own platform, had samples done in less than a week, and had a site up the same day.”

“We took photos, sent them to different factories, and took guesstimates on letter sizing, colors, fonts, etc. You can often manufacture products based on images if it’s within certain product categories.” The goal all along was to use the suit as a case study. They partnered with a local marketing firm to help split the promotion, work, and costs.

“The next day we signed a contract with a few marketers based in San Francisco to split the profits 50–50 after we both covered our costs. They cover the ad spend and setup; we cover the inventory and logistics cost,” Nathan wrote in an article for The Hustle. When they were ready to go, the marketing company began running ad campaigns and pushing out stories. They went viral on BroBible quickly after launch and pulled in over $23,000 in sales within the first week.

The only problem is that they used some images of Conor in the process. And apparently, his attorneys didn’t love the IP infringement. A cease and desist letter wasn’t far behind:

This result wasn’t completely unexpected. Both Nathan and the marketing partner knew they were skirting a thin line. But either way, Nathan got what he wanted out of it.

3. Drive targeted, bottom-of-the-funnel leads with Quora

Quora packs another punch that often elevates it over the other social channels: higher-quality traffic. Site visitors are asking detailed questions, expecting to comb through in-depth answers to each query. In other words, they’re invested. They’re smart. And if they’re expressing interest in managed WordPress hosting, it means they’ve got dough, too.

Both Sales Hacker and Kinsta take full advantage. Today, Gaetano DiNardi is the Director of Demand Generation at Nextiva. But before that, he led marketing at Sales Hacker before they were acquired. There, content was central to their stratospheric growth. With Quora, Gaetano would take his latest content pieces and use them to solve customer problems and address pain points in the general sales and marketing space:

By using Quora as a research tool, he would find new topics that he can create content around to drive new traffic and connect with their current audience:

He found questions that they already had content for and used it as a chance to engage users and provide value. He can drive tons of relevant traffic for free by linking back to the Sales Hacker blog:

Kinsta, a managed WordPress hosting company out of Europe, also uses relevant threads and Quora ads. CMO Brian Jackson jumps into conversations directly, lending his experience and expertise where appropriate. His technical background makes it easy to talk shop with others looking for a sophisticated conversation about performance (beyond the standard PR-speak most marketers offer up):

Brian targets different WordPress-related categories, questions, or interests. Technically, the units are “display ads, but they look like text.” The ad copy is short and to the point. Usually something like, “Premium hosting plans starting at $XX/month” to fit within their length requirements.

4. Rank faster with paid (not organic) social promotion

Kinsta co-founder Tom Zsomborgi wrote about their journey in a bootstrapping blog post that went live last November. It instantly hit the top of Hacker News, resulting in their website getting a consistent 400+ concurrent visitors all day:

Within hours their post was also ranking on the first page for the term “bootstrapping,” which receives around 256,000 monthly searches.

How did that happen?

“There’s a direct correlation between social proof and increased search traffic. It’s more than people think,” said Brian. Essentially, you’re paying Facebook to increase organic rankings. You take good content, add paid syndication, and watch keyword rankings go up.

Kinsta’s big goal with content promotion is to build traffic and get as many eyeballs as possible. Then they’ll use AdRoll for display retargeting messages, targeting the people who just visited with lead gen offers to start a free trial. (“But I don’t use AdRoll for Facebook because it tacks on their middleman fee.”)

Brian uses the “Click Campaigns” objective on Facebook Ads for both lead gen and content promotion. “It’s the best for getting traffic.”

“It’s almost not even worth posting if you’re not paying,” confirms Brian. Kinsta will promote new posts to make sure they come across their fans’ News Feeds. Anecdotally, that reach number with a paid assist might jump up around 30%.

If they don’t see it, Brian will “turn it into an ad and run it separately.” It’s “re-written a second time to target a broader audience.”

In addition to new post promotion, Brian has an evergreen campaign that’s constantly delivering the “best posts ever written” on their site. It’s “never-ending” because it gives Brian a steady stream of new site visitors — or new potential prospects to target with lead gen ads further down the funnel. That’s why Brian asserts that today’s social managers need to understand PPC and lead gen. “A lot of people hire social media managers and just do organic promotion. But Facebook organic just sucks anyway. It’s becoming ‘pay to play.’”

“Organic reach is just going to get worse and worse and worse. It’s never going to get better.” Also, advertising gets you “more data for targeting,” which then enables you to create more in-depth A/B tests.

We confirmed this through a series of promoted content tests, where different ad types (custom images vs. videos) would perform better based on the campaign objectives and placements.

Almost every single stat shows that remarketing is one of the most efficient ways to close more customers. The more ad remarketing impressions someone sees, the higher the conversion rate. Remarketing ads are also incredibly cheap compared to your standard AdWords search ad when trying to reach new cold traffic.

There’s only one problem to watch out for: ad fatigue. The image creative plays a massive role in Facebook ad success. But over time (a few days to a few weeks), the performance of that ad will decline. The image becomes stale. The audience has seen it too many times. The trick is to continually cycle through similar, but different, ad examples.

His team will either (a) create the ad creative image directly inside Canva, or (b) have their designers create a background ‘template’ that they can use to manipulate quickly. That way, they can make fast adjustments on the fly, A/B testing small elements like background color to keep ads fresh and conversions as high as possible.

All retargeting or remarketing campaigns will be sent to a tightly controlled audience. For example, let’s say you have leads who’ve downloaded an eBook and ones who’ve participated in a consultation call. You can just lump those two types into the same campaign, right? I mean, they’re both technically ‘leads.’

But that’s a mistake. Sure, they’re both leads. However, they’re at different levels of interest. Your goal with the first group is to get them on a free consultation call, while your goal with the second is to get them to sign up for a free trial. That means two campaigns, which means two audiences.

Facebook’s Custom Audiences feature makes this easy, as does LinkedIn’s newish Matched Audiences feature. As with Facebook, you can pick people who’ve visited certain pages on your site, belong to specific lists in your CRM, or whose email address is on a custom CSV file:

If either of these lead types falls off after a few weeks and fails to follow up, you can go back to the beginning to re-engage them, using content-based ads all over again to speak to the primary pain points behind the product or service you sell.

This seems like a lot of detailed work, largely because it is. But it’s worth it because of scale. You can set these campaigns up once and then simply monitor or tweak performance as you go. That means technology is largely running each individual campaign, and you don’t need as many people internally managing each one hands-on.

And best of all, it forces you to create a logical system. You’re taking people through a step-by-step process, one tiny commitment at a time, until they seamlessly move from stranger to customer.

Conclusion

Sending out a few tweets won’t make an impact at the end of the day. There’s more competition (read: noise) than ever before, while organic reach has never been lower. The trick isn’t to follow the faux influencers who talk the loudest, but rather the practitioners who are doing it day in, day out, with the KPIs to prove it.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

Today we're tackling a question that many of us have asked over the years: how do you increase your chances of getting your content into Google News? We're delighted to welcome renowned SEO specialist Barry Adams to share the framework you need to have in place in order to have a chance of appearing in that much-coveted Google News carousel.

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Hi, everyone. I'm Barry Adams. I'm a technical SEO consultant at Polemic Digital and a specialist in news SEO. Today we're going to be talking about how to get into Google News. I get a lot of questions from a lot of people about Google News and specifically how you get a website into Google News, because it's a really great source of traffic for websites. Once you're in the Google News Index, you can appear in the top stories carousel in Google search results, and that can send a lot of traffic your way.

How do you get into Google News' manually curated index?

So how do you get into Google News? How do you go about getting your website into Google News' manually curated index so that you can get that top stories traffic for yourself? Well, it's not always as easy as it might appear. You have to jump through quite a few hoops before you get into Google News.

1. Have a dedicated news website

First of all, you have to have a dedicated news website. You have to keep in mind when you apply to be included in Google News, there's a team of Googlers who will manually review your website to decide whether or not you're worthy of being in the News index. That is a manual process, and your website has to be a dedicated news website.

I get a lot of questions from people asking if they have a news section or a blog on their site and if that could be included in Google News. The answer tends to be no. Google doesn't want websites in the index that aren't entirely about news, i.e. commercial websites that happen to have a news section. They want dedicated news websites, websites whose sole purpose is to provide news and content on specific topics and specific niches.

So that's the first hurdle and probably the most important one. If you can't clear that hurdle, you shouldn't even try getting into Google News.

2. Meet technical requirements

There are also a lot of other aspects that go into Google News. You have to jump through, like I said, quite a few hoops. Some technical requirements are very important to know as well.

Have static, unique URLs.

Google wants your articles and your section pages to have static, unique URLs, so that an article or a section is always on the same URL and Google can crawl and recrawl it there without having to deal with redirects. Content with dynamically generated URLs tends not to work well with Google News. So keep that in mind, and make sure that your content, both your articles and your section pages, lives on fixed URLs that don't change over time.

Have your content in plain HTML.

It also helps to have all your content in plain HTML. When Google News indexes your content, it's all about speed: it tries to index articles as fast as possible. So any content that requires client-side JavaScript or other scripting to render tends not to work for Google News. Google has a two-stage indexing process, where the first stage is based on the HTML source code and the second stage is based on a complete render of the page, including executing JavaScript.

For Google News, that doesn't work. If your content relies on JavaScript execution, it will never be seen by Google News. Google News only uses the first stage of indexing, based purely on the HTML source code. So keep your JavaScript to a minimum and make sure that the content of your articles is present in the HTML source code and does not require any JavaScript to be seen to be present.
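As a rough sanity check (this is my own sketch, not an official Google tool), you can strip the tags from the raw HTML source, before any JavaScript runs, and confirm the article text is already present:

```python
import re

def visible_without_js(raw_html: str, snippet: str) -> bool:
    """Return True if `snippet` appears in the raw HTML source,
    i.e. without any client-side JavaScript being executed."""
    # Drop tags, then collapse whitespace so markup breaks don't matter.
    text = re.sub(r"<[^>]+>", " ", raw_html)
    text = re.sub(r"\s+", " ", text)
    snippet = re.sub(r"\s+", " ", snippet).strip()
    return snippet in text

# Article text served directly in the HTML: fine for Google News.
print(visible_without_js("<p>Breaking</p> <p>news today</p>", "Breaking news today"))  # True
# An empty app shell that renders via JavaScript: invisible to Google News.
print(visible_without_js('<div id="app"></div>', "Breaking news today"))  # False
```

Run it against the raw source of a few of your own article pages (e.g. via "View source", not the rendered DOM) to see what Google News' first-stage indexing actually sees.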

Have clean code.

It also helps to have clean code. By clean code, I mean that the article content in the HTML source code should be one continuous block of code from the headline all the way to the end. That tends to result in the best and most efficient indexing in Google News, because I've seen many examples where websites put things in the middle of the article code, like related articles, video carousels, or photo galleries, and that can really mess up how Google News indexes the content. So have clean code, and make sure the article is one continuous block of easily understood HTML; that tends to work best for Google News.

3. Optional (but more or less mandatory) technical considerations

There's also quite a few other things that are technically optional, but I see them as pretty much mandatory because it really helps with getting your content picked up in Google News very fast and also makes sure you get that top stories carousel position as fast as possible, which is where you will get most of your news traffic from.

Have a news-specific XML sitemap.

The primary one is the news XML sitemap. Google says this is optional but recommended, and I agree with them on that. Having a news-specific XML sitemap that lists the articles you've published in the last 48 hours, up to a maximum of 1,000 articles, is absolutely necessary. For me, this is Google News' primary discovery mechanism when it crawls your website and tries to find new articles.

So that news-specific XML sitemap is absolutely crucial, and you want to make sure you have that in place before you submit your site to Google News.
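Google documents the shape of a news sitemap; a minimal single-article entry looks roughly like this (the domain, publication name, dates, and titles below are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:news="http://www.google.com/schemas/sitemap-news/0.9">
  <url>
    <loc>https://example.com/news/example-article</loc>
    <news:news>
      <news:publication>
        <news:name>Example News</news:name>
        <news:language>en</news:language>
      </news:publication>
      <news:publication_date>2019-01-15T08:00:00+00:00</news:publication_date>
      <news:title>Example article headline</news:title>
    </news:news>
  </url>
</urlset>
```

Remember the constraints mentioned above: only articles from the last 48 hours, capped at 1,000 entries, with the sitemap regenerated as new articles are published.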

Mark up articles with NewsArticle structured data.

I also think it's very important to mark up your articles with NewsArticle structured data. It can be plain Article structured data or the even more specific types Google is introducing, like AnalysisNewsArticle and OpinionNewsArticle for specific kinds of articles.

But article or news article markup on your article pages is pretty much mandatory. I see your likelihood of getting into the top stories carousel much improved if you have that markup implemented on your article pages.
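For reference, a minimal NewsArticle JSON-LD block, placed in a `<script type="application/ld+json">` tag on the article page, looks roughly like this (all names, dates, and URLs are placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example article headline",
  "datePublished": "2019-01-15T08:00:00+00:00",
  "dateModified": "2019-01-15T09:30:00+00:00",
  "author": {
    "@type": "Person",
    "name": "Jane Reporter"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Example News",
    "logo": {
      "@type": "ImageObject",
      "url": "https://example.com/logo.png"
    }
  },
  "image": ["https://example.com/lead-photo.jpg"]
}
```

You can validate your own markup with Google's structured data testing tools before submitting the site.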

Helpful-to-have extras:

Also, like I said, this is a manually curated index. So there are a few extra hoops that you want to jump through to make sure that when a Googler looks at your website and reviews it, it ticks all the boxes and it appears like a trustworthy, genuine news website.

A. Multiple authors

Having multiple authors contribute to your website is hugely valuable, hugely important, and it does tend to elevate you above all the other blogs and small sites that are out there and makes it a bit more likely that the Googler reviewing your site will press that Approve button.

B. Daily updates

Daily updates are definitely necessary. You don't want just one news post every couple of days; ideally, you'll publish multiple new articles every single day, and they should be unique. You can have some syndicated content on there, from feeds like AP or Reuters, but the majority of your content needs to be your own unique reporting. You don't want to rely too much on syndicated articles to fill your website with news content.

C. Mostly unique content

Try to write as much unique content as you possibly can. There isn't a hard ratio for this, but generally speaking I recommend my clients have at least 70% unique content that they write and publish themselves, and at most 30% syndicated content from external sources.

D. Specialized niche/topic

It really helps to have a specialized niche or a specialized topic that you focus on as a news website. There are plenty of news sites out there that are general news and try to do everything, and Google News doesn't really need many more of those. What Google is interested in is niche websites on specific topics, specific areas that can provide in-depth reporting on those specific industries or topics. So if you have a very niche topic or a niche industry that you cover with your news, it does tend to improve your chances of getting into that News Index and getting that top stories carousel traffic.

So that, in a nutshell, is how you get into Google News. It might appear to be quite simple, but, like I said, quite a few hoops for you to jump through, a few technical things you have to implement on your website as well. But if you tick all those boxes, you can get so much traffic from the top stories carousel, and the rest is profit. Thank you very much.


Much has been written and spoken about the interplay of SEO and CRO, and there are a lot of reasons why, in theory, both ought to be working towards a shared goal. Whether it's the simple pragmatism of increasing the total number of conversions, or higher-minded pursuits such as the ideal of Google rewarding the best user experiences, we have many things that should bring us together.

In practice, though, it’s rarely that simple or that unified. How much effort do the practitioners of each put in to ensure that they are working towards the shared goal of the greatest number of conversions?

In asking around, I've found that many SEOs do worry about their changes hurting conversion rates, but few actively mitigate that risk. Interestingly, my conversations with CRO experts show that they also often worry about SEOs’ work impacting negatively on conversion rates.

Neither side weighs as heavily the risk that conversion-oriented changes could hurt organic search performance, but our experience shows that both risks are real.

So how should we mitigate these risks? How should we work together?

But first, some evidence

There are certainly some SEO-centric changes that have a very low risk of having a negative impact on conversion rates for visitors from other channels. If you think about changing meta information, for example, much of that is invisible to users on the page; maybe that is pure SEO:

And then on the flip side, there are clearly CRO changes that don’t have any impact on your organic search performance. Anything you do on non-indexed pages, for example, can’t change your rankings. Think about work done within a checkout process or within a login area. Google simply isn’t seeing those changes:

But everything else has a potential impact on both, and our experience has shown us that the theoretical risk is absolutely real. We have definitely seen SEO changes that have changed conversion rates, and have experience of major CRO-centered changes that have had dramatic impacts on search performance (but more on that later). The point is, there’s a ton of stuff in the intersection of both SEO and CRO:

Throughout this post, I’ll talk about our experiences and work we have done that has shown impacts in various directions, from conversion rate-centric changes that change search performance to the reverse. How are we seeing all this?

Well, testing has been a central part of conversion rate work essentially since the field began, and we've been doing a lot of work in recent years on SEO A/B testing as well. At our recent London conference, we announced that we have been building out new features in our testing platform to enable what we are calling full funnel testing, which looks simultaneously at the impact of a single change on conversion rates and on search performance:

But what I really want to talk about today is the mixed objectives of CRO and SEO, and what happens if you fail to look closely at the impact of both together. First: some pure CRO.

An example CRO scenario: The business impact of conversion rate testing

In the example that follows, we look at the impact on an example business of a series of conversion rate tests conducted throughout a year, and see the revenue uplift we might expect as a result of rolling out winning tests and turning off null and negative ones. We compare the revenue we might achieve with the revenue we would have expected without testing. The example is a little simplified, but it serves to make the point.

We start on a high with a winning test in our first month:

After starting on a high, our example continues through a bad streak: a null test (no confident result in either direction) followed by three losers. We turn off each of these four, so none of them has an actual impact on future months’ revenue:

Let’s continue in a similar vein through the end of the year. Over the course of this example year, we see three months with winning tests, and of course we only roll out the ones that come with uplifts:

By the end of this year, even though more tests have failed than have succeeded, you have proved some serious value to this small business, and have moved monthly revenue up significantly, taking annual revenue for the year up to over £1.1m (from a £900k starting point):
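The compounding arithmetic behind a year like this can be sketched in a few lines. The £75k monthly baseline matches the £900k annual starting point above, but the three winning uplifts (12%, 10%, 8%) and their timing are invented purely for illustration:

```python
# Sketch of the compounding effect of rolling out winning CRO tests.
baseline = 75_000  # monthly revenue without testing, in GBP (£900k/year)

# One test per month: its relative conversion uplift if it won, else 0
# (null and losing tests are switched off, so they add nothing).
monthly_uplifts = [0.12, 0, 0, 0, 0.10, 0, 0, 0.08, 0, 0, 0, 0]

multiplier = 1.0
with_testing = []
for uplift in monthly_uplifts:
    multiplier *= 1 + uplift  # winners are rolled out and persist
    with_testing.append(baseline * multiplier)

print(f"Expected without testing: £{baseline * 12:,.0f}")      # £900,000
print(f"Achieved with testing:    £{sum(with_testing):,.0f}")  # just over £1.1m
```

Note how a winner's uplift carries forward into every subsequent month, which is why three wins out of twelve tests can still move annual revenue so much.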

Is this the full picture, though?

What happens when we add in the impact on organic search performance of these changes we are rolling out, though? Well, let’s look at the same example financials with a couple more lines showing the SEO impact. That first positive CRO test? Negative for search performance:

If you weren’t testing the SEO impact, and only focused on the conversion uplift, you’d have rolled this one out. Carrying on, we see that the next (null) conversion rate test should have been rolled out because it was a win for search performance:

Continuing on through the rest of the year, we see that the actual picture (if we make decisions of whether or not to roll out changes based on the CRO testing) looks like this when we add in all the impacts:

So you remember how we thought we had turned an expected £900k of revenue into over £1.1m? Well, it turns out we've added less than £18k in reality and the revenue chart looks like the red line:

Let’s make some more sensible decisions, considering the SEO impact

Back to the beginning of the year once more, but this time, imagine that we actually tested both the conversion rate and search performance impact and rolled out our tests when they were net winners. This time we see that while a conversion-focused team would have rolled out the first test:

We would not:

Conversely, we would have rolled out the second test because it was a net positive even though the pure CRO view had it neutral / inconclusive:

When we zoom out on that approach to the full year, we see a very different picture to either of the previous views. By rolling out only the changes that are net positive considering their impact on search and conversion rate, we avoid some significant drops in performance, and get the chance to roll out a couple of additional uplifts that would have been missed by conversion rate changes alone:

The upshot is a 45% uplift for the year, ending with monthly revenue up 73%: real business impact, without the false hope of the pure conversion-centric view:
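The decision rule itself is simple to state: revenue is roughly organic traffic times conversion rate, so the two relative effects combine multiplicatively, and a change should only be rolled out when the combined effect is positive. A minimal sketch, with invented percentages:

```python
# Full funnel decision rule: roll a change out only when its combined
# effect on conversion rate and organic traffic is a net win.

def net_effect(cro_change: float, seo_change: float) -> float:
    # Revenue ~ organic traffic x conversion rate, so relative
    # changes combine multiplicatively.
    return (1 + cro_change) * (1 + seo_change) - 1

tests = [
    ("CRO winner, SEO loser", +0.12, -0.15),
    ("CRO null, SEO winner",   0.00, +0.08),
    ("wins on both",          +0.05, +0.03),
]

for label, cro, seo in tests:
    net = net_effect(cro, seo)
    decision = "roll out" if net > 0 else "turn off"
    print(f"{label}: net {net:+.1%} -> {decision}")
```

The first case is exactly the trap described above: a 12% conversion win is wiped out by a 15% traffic loss, so a CRO-only view rolls out a net loser, while the null CRO test with an SEO win gets discarded.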

Now of course these are simplified examples, and in the real world we would need to look at impacts per channel, and might consider rolling out tests that appeared non-negative rather than waiting for them to reach statistical significance as positive. I asked CRO expert Stephen Pavlovich from conversion.com for his view on this, and he said:

Most of the time, we want to see if making a change will improve performance. If we change our product page layout, will the order conversion rate increase? If we show more relevant product recommendations, will the Average Order Value go up?

But it's also possible that we will run an AB test not to improve performance, but instead to minimize risk. Before we launch our website redesign, will it lower the order conversion rate? Before we put our prices up, what will the impact be on sales?

In either case, there may be a desire to deploy the new variation — even if the AB test wasn't significant.

If the business supports the website redesign, it can still be launched even without a significant impact on orders — it may have had significant financial and emotional investment from the business, be a better fit for the brand, or get better traction with partners (even if it doesn't move the needle in on-site conversion rate). Likewise, if the price increase didn't have a positive/negative effect on sales, it can still be launched.

Most importantly, we wouldn’t just throw away a winning SEO test that reduced conversion rate or a winning conversion rate test that negatively impacted search performance. Both of these tests would have come from underlying hypotheses, and by reaching significance, would have taught us something. We would take that knowledge and take it back as input into the next test in order to try to capture the good part without the associated downside.

All of those details, though, don’t change the underlying calculus that this is an important process, and one that I believe we are going to need to do more and more.

The future for effective, accountable SEO

There are two big reasons that I believe that the kind of approach I have outlined above is going to be increasingly important for the future of effective, accountable SEO:

1. We’re going to need to do more testing generally

I don’t see this trend reversing any time soon. The more ML there is in the algorithm, and the more non-linear it all becomes, the less effective best practices will be, and the more common it will be to see surprising effects. My colleague Dom Woodman talked about this at our recent SearchLove London conference in his talk A Year of SEO Split Testing Changed How I Thought SEO Worked:

2. User signals are going to grow in importance

The trend towards Google using more and more real and implied user satisfaction and task completion metrics means that conversion-centric tests and hypotheses are going to have an increasing impact on search performance (if you haven’t yet read this fascinating CNBC article that goes behind the scenes on the search quality process at Google, I highly recommend it). Hopefully there will be an additional opportunity in the fact that theoretically the winning tests will sync up more and more — what’s good for users will actually be what’s good for search — but the methodology I’ve outlined above is the only way I can come up with to tell for sure.

I love talking about all of this, so if you have any questions, feel free to drop into the comments.
