Originally posted on the Inside AdMob blog. Posted by Chris Jones, Social Team, AdMob.
Native is the next big thing in mobile advertising with spending on native ads expected to grow to $21 billion in 2018. This is a huge potential opportunity for app developers, but how can you use native ads to help boost your UX and monetize your app? Here’s a quick overview of what native advertising is, and how you can get started.

What’s Native?
Native advertisements match both the form and function of the user experience in which they’re placed. They also match the visual design of the app they live within. Native ads augment the user experience by providing value through relevant ads that flow within the context of surrounding app content. Put simply, native ads fit in.

Native advertising isn’t a new thing.
Since the golden days of wireless radio and daily newspapers, advertisers have looked for innovative ways to match up their brands and messages with the environment in which they’re served to consumers. In the digital advertising world, Google was one of the first “native” advertisers, developing search ads that directly matched the information on the search results page.

But today, consumers are everywhere.
And we’ve had to adapt, delivering higher-quality content that can flex to different screens and sizes. For example, mobile-optimized websites now have big buttons and fonts, and mobile apps let users scroll up and down or left and right rather than having to click through to the pages they want to view. Native advertising also recognizes that preserving the user experience is vital to successful advertising.

Our content has evolved, and our ads need to follow.
Nobody likes their app experience to be side-swiped by an obtrusive, ugly ad. Native advertising offers a simple solution: ads that fit the form and function of a developer’s content. We help you create ads that are beautiful and engaging, so consumers can maintain their good buzz.

Native ads are cohesive.
They never stick out like a sore thumb. They’re made to match the look and feel of the app, and are consistent with platform behavior, so that the viewer feels like the ad fits seamlessly with their content experience. In other words, they’re ads with UX in mind.

Here at Google, we’re serving more than a hundred APIs to ensure that developers have the resources to build amazing experiences with them. We provide a reliable infrastructure and make it as simple as possible so developers can focus on building the future. With this in mind, we’re introducing a few improvements for the API experience: more flexible keys, a streamlined 'getting-started' experience, and easy monitoring.

Faster, more flexible key generation

Keys are a standard way for APIs to identify callers, and one of the very first steps in interacting with a Google API. Tens of thousands of keys are created every day for Google APIs, so we’re making this step simpler, replacing the old multi-step process with a single click:

You no longer need to choose your platform and set various other restrictions at the time of creation, but we still encourage scope management as a best practice:

Streamlined getting started flow

We realize that many developers want to get straight to creation and don’t necessarily want to step into the console. We’ve just introduced an in-flow credential setup procedure embedded directly within the developer documentation:

Click the 'Get a Key' button, choose or create a project, and then let us take care of enabling the API and creating a key.

We are currently rolling this out for the Google Maps APIs and over the next few months we'll bring it to the rest of our documentation.

API Dashboard

We’re not just making it easier to get started, we’re simplifying the ongoing usage experience, too. For developers who use one or more APIs frequently, we've built the new API Dashboard to easily view usage and quotas.

If you’ve enabled any APIs, the dashboard is front and center in the API Console. There you can view all the APIs you’re using along with usage, error and latency data:

Clicking on an API will jump to a detailed report, where you’ll see the traffic sliced by methods, credentials, versions and response code (available on select APIs):

We hope these new features make your API usage easier, and we can't wait to see what you’re going to build next!

Below is what happened in search today, as reported on Search Engine Land and from other places across the web. The post SearchCap: Offline call data, AdWords budget creep & AMP fire hose appeared first on Search Engine Land.

Are you feeling spooked by a mysterious rise in ad spend? Columnist Pauline Jakober takes a look at what you can do in AdWords to solve the mystery. The post 3 mysterious and scary ways AdWords budget creep can happen to you appeared first on Search Engine Land.

How has the rise of mobile changed the way people view Google SERPs? Contributor Kristi Kellogg summarizes a session from SMX East in which Mediative's Chris Pinkerton discusses the results of eye-tracking studies. The post How mobile has changed the way we search, based on 10+ years of...

Columnist Barb Palser believes that the broad surfacing of AMP content in mobile search will expose a universe of AMP content that’s been hidden from view. The post Google opens the AMP fire hose appeared first on Search Engine Land.

We want to be able to answer questions about why one page outranks another.

“What would we have to do to outrank that site?”

“Why is our competitor outranking us on this search?”

These kinds of questions — from bosses, from clients, and from prospective clients — are a standard part of day-to-day life for many SEOs. I know I’ve been asked both in the last week.

It’s relatively easy to figure out ways that a page can be made more relevant and compelling for a given search, and it’s straightforward to think of ways the page or site could be more authoritative (even if it’s less straightforward to get it done). But will those changes or that extra link cause an actual reordering of a specific ranking? That’s a very hard question to answer with a high degree of certainty.

When we asked a few hundred people to pick which of two pages would rank better for a range of keywords, the average accuracy on UK SERPs was 46%. That’s worse than you’d get if you just flipped a coin! This chart shows the performance by keyword. It’s pretty abysmal:

While I remain confident when building strategies to increase overall organic visibility, traffic, and revenue, I’m less sure than ever which individual ranking factors will outweigh which others in a specific case.

The strategic approach looks at whole sites and groups of keywords

My approach is generally to zoom out and build business cases on assumptions about portfolios of rankings. But this question has been on my mind recently as I think about how machine learning should make Google rankings ever more of a black box, causing the ranking factors to vary more and more between niches.

In general, "why does this page rank?" is the same as "which of these two pages will rank better?"

I've been teaching myself about deep neural networks using TensorFlow and Keras — an area I’m pretty sure I’d have ended up studying and working in if I’d gone to college 5 years later. As I did so, I started thinking about how you would model a SERP (which is a set of high-dimensional non-linear relationships). I realized that the litmus test of understanding ranking factors — and thus being able to answer “why does that page outrank us?” — boils down to being able to answer a simpler question:

Given two pages, can you figure out which one will outrank the other for a given query?

If you can answer that in the general case, then you know why one page outranks another, and vice-versa.

It turns out that people are terrible at answering this question.

I thought that answering this with greater accuracy than a coin flip was going to be a pretty low bar. As you saw from the sneak peek of my results above, that turned out not to be the case. Reckon you can do better? Skip ahead to take the test and find out.

(In fact, if you could find a way to test this effectively, I wonder if it would make a good qualifying question for the next Moz ranking factors survey. Should you listen only to the opinions of those experts who are capable of answering with reasonable accuracy? Note that my test that follows isn’t at all rigorous because you can cheat by Googling the keywords; it’s just for entertainment purposes.)

Take the test and see how well you can answer

With my curiosity piqued, I put together a simple test, thinking it would be interesting to see how good expert SEOs actually are at this, as well as to see how well laypeople do.

I’ve included a bit more about the methodology and some early results below, but if you'd like to skip ahead and test yourself you can go ahead here.

Note that to simplify the adversarial side, I’m going to let you rely on all of Google’s spam filtering — you can trust that every URL ranks in the top 10 for its example keyword — so you're choosing an ordering of two pages that do rank for the query rather than two pages from potentially any domain on the Internet.

I haven’t designed this to be uncheatable — you can obviously cheat by Googling the keywords — but as my old teachers used to say: "If you do, you’ll only be cheating yourself."

Unfortunately, Google Forms seems to have removed the option to be emailed your own answers outside of an apps domain, so if you want to know how you did, note down your answers as you go along and compare them to the correct answers (which are linked from the final page of the test).

You can try your hand with just one keyword or keep going, trying anywhere up to 10 keywords (each with a pair of pages to put in order). Note that you don’t need to do all of them; you can submit after any number.

You can take the survey either for the US (google.com) or UK (google.co.uk). All results are considering only the "blue links" results — i.e. links to web pages — rather than universal search results / one-boxes etc.

What do the early responses show?

Before publishing this post, we sent it out to the @distilled and @moz networks. At the time of writing, almost 300 people have taken the test, and there are already some interesting results:

It seems as though the US questions are slightly easier

The UK test appears to be a little harder (judging both by the accuracy of laypeople, and with a subjective eye). And while accuracy generally increases with experience in both the UK and the US, the vast majority of UK respondents performed worse than a coin flip:

Some easy questions might skew the data in the US

Digging into the data, there are a few of the US questions that are absolute no-brainers (e.g. there's a question about the keyword [mortgage calculator] in the US that 84% of respondents get right regardless of their experience). In comparison, the easiest one in the UK was also a mortgage-related query ([mortgage comparisons]) but only 2/3 of people got that right (67%).

Compare the UK results by keyword...

...To the same chart for the US keywords:

So, even though the overall accuracy was a little above 50% in the US (around 56% or roughly 5/9), I’m not actually convinced that US SERPs are generally easier to understand. I think there are a lot of US SERPs where human accuracy is in the 40% range.

The Dunning-Kruger effect is on display

The Dunning-Kruger effect is a well-studied psychological phenomenon whereby people “fail to adequately assess their level of competence,” typically feeling unsure in areas where they are actually strong (impostor syndrome) and overconfident in areas where they are weak. Alongside the raw predictions, I asked respondents to give their confidence in their rankings for each URL pair on a scale from 1 (“Essentially a guess, but I’ve picked the one I think”) to 5 (“I’m sure my chosen page should rank better”).

The effect was most pronounced on the UK SERPs — where respondents answering that they were sure or fairly sure (4–5) were almost as likely to be wrong as those guessing (1) — and almost four percentage points worse than those who said they were unsure (2–3):

Is Google getting some of these wrong?

The question I asked SEOs was “which page do you think ranks better?”, not “which page is a better result?”, so in general, most of the results say very little about whether Google is picking the right result in terms of user satisfaction. I did, however, ask people to share the survey with their non-SEO friends and ask them to answer the latter question.

If I had a large enough sample size, you might expect to see some correlation here — but remember that these were a diverse array of queries and the average respondent might well not be in the target market, so it’s perfectly possible that Google knows what a good result looks like better than they do.

Having said that, in my own opinion, there are one or two of these results that are clearly wrong in UX terms, and it might be interesting to analyze why the “wrong” page is ranking better. Maybe that’ll be a topic for a follow-up post. If you want to dig into it, there’s enough data in both the post above and the answers given at the end of the survey to find the ones I mean (I don’t want to spoil it for those who haven’t tried it out yet). Let me know if you dive into the ranking factors and come up with any theories.

There is hope for our ability to fight machine learning with machine learning

One of the disappointments of putting together this test was that by the time I’d made the Google Form I knew too many of the answers to be able to test myself fairly. But I was comforted by the fact that I could do the next best thing — I could test my neural network (well, my model, refactored by our R&D team and trained on data they gathered, which we flippantly called Deeprank).

I think this is fair; the instructions did say “use whatever tools you like to assess the sites, but please don't skew the results by performing the queries on Google yourself.” The neural network wasn’t trained on these results, so I think that’s within the rules. I ran it on the UK questions because it was trained on google.co.uk SERPs, and it did better than a coin flip:

So maybe there is hope that smarter tools could help us continue to answer questions like “why is our competitor outranking us on this search?”, even as Google’s black box gets ever more complex and impenetrable.

If you want to hear more about these results as I gather more data and get updates on Deeprank when it’s ready for prime-time, be sure to add your email address when you:

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

At Google I/O this May, Firebase announced a new suite of products to help developers build mobile apps. Firebase Analytics, a part of the new Firebase platform, is a tool that automatically captures data on how people are using your iOS and Android app, and lets you define your own custom app events. When the data's captured, it’s available through a dashboard in the Firebase console. One of my favorite cloud integrations with the new Firebase platform is the ability to export raw data from Firebase Analytics to Google BigQuery for custom analysis. This custom analysis is particularly useful for aggregating data from the iOS and Android versions of your app, and accessing custom parameters passed in your Firebase Analytics events. Let’s take a look at what you can do with this powerful combination.

How does the BigQuery export work?

After linking your Firebase project to BigQuery, Firebase automatically exports a new table to an associated BigQuery dataset every day. If you have both iOS and Android versions of your app, Firebase exports the data for each platform into a separate dataset. Each table contains the user activity and demographic data automatically captured by Firebase Analytics, along with any custom events you’re capturing in your app. Thus, after exporting one week’s worth of data for a cross-platform app, your BigQuery project would contain two datasets, each with seven tables:

Diving into the data

The schema for every Firebase Analytics export table is the same, and we’ve created two datasets (one for iOS and one for Android) with sample user data for you to run the example queries below. The datasets are for a sample cross-platform iOS and Android gaming app. Each dataset contains seven tables -- one week’s worth of analytics data.

The following query will return some basic user demographic and device data for one day of usage on the iOS version of our app:
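The original query isn't shown here, but a sketch of what it might look like in BigQuery legacy SQL, using the sample dataset names from this post (the exact fields pulled, like device_category and mobile_model_name, follow the Firebase Analytics export schema and should be treated as illustrative):

```sql
-- Basic user demographic and device data for one day of iOS usage.
-- Field names follow the Firebase Analytics BigQuery export schema;
-- treat the specific columns selected as illustrative assumptions.
SELECT
  user_dim.app_info.app_instance_id,
  user_dim.device_info.device_category,
  user_dim.device_info.mobile_model_name,
  user_dim.geo_info.country
FROM [firebase-analytics-sample-data:ios_dataset.app_events_20160601]
LIMIT 10
```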

Since the schema for every BigQuery table exported from Firebase Analytics is the same, you can run any of the queries in this post on your own Firebase Analytics data by replacing the dataset and table names with the ones for your project.

The schema has user data and event data. All user data is automatically captured by Firebase Analytics, and the event data is populated by any custom events you add to your app. Let’s take a look at the specific records for both user and event data.

User data

The user records contain a unique app instance ID for each user (user_dim.app_info.app_instance_id in the schema), along with data on their location, device and app version. In the Firebase console, there are separate dashboards for the app’s Android and iOS analytics. With BigQuery, we can run a query to find out where our users are accessing our app around the world across both platforms. The query below makes use of BigQuery’s union feature, which lets you use a comma as a UNION ALL operator. Since a row is created in our table for each bundle of events a user triggers, we use EXACT_COUNT_DISTINCT to make sure each user is only counted once:
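A sketch of such a query, assuming the sample dataset names from this post — note how the comma between the two tables acts as a UNION ALL in legacy SQL, and EXACT_COUNT_DISTINCT de-dupes by app instance ID:

```sql
-- Distinct users per country, combined across iOS and Android.
-- The comma between the two tables is legacy SQL's UNION ALL.
SELECT
  user_dim.geo_info.country AS country,
  EXACT_COUNT_DISTINCT(user_dim.app_info.app_instance_id) AS users
FROM
  [firebase-analytics-sample-data:android_dataset.app_events_20160601],
  [firebase-analytics-sample-data:ios_dataset.app_events_20160601]
GROUP BY country
ORDER BY users DESC
```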

User data also includes a user_properties record, which includes attributes you define to describe different segments of your user base, like language preference or geographic location. Firebase Analytics captures some user properties by default, and you can create up to 25 of your own.

A user’s language preference is one of the default user properties. To see which languages our users speak across platforms, we can run the following query:
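A sketch of that query, again assuming the sample dataset names; the user property key 'language' and the user_properties field layout are assumptions based on the export schema:

```sql
-- Distinct users per language preference, across both platforms.
-- The property key 'language' is an assumption here.
SELECT
  user_dim.user_properties.value.value.string_value AS language,
  EXACT_COUNT_DISTINCT(user_dim.app_info.app_instance_id) AS users
FROM
  [firebase-analytics-sample-data:android_dataset.app_events_20160601],
  [firebase-analytics-sample-data:ios_dataset.app_events_20160601]
WHERE user_dim.user_properties.key = 'language'
GROUP BY language
ORDER BY users DESC
```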

Event data

Firebase Analytics makes it easy to log custom events such as tracking item purchases or button clicks in your app. When you log an event, you pass an event name and up to 25 parameters to Firebase Analytics and it automatically tracks the number of times the event has occurred. The following query shows the number of times each event in our app has occurred on Android for a particular day:

```sql
SELECT
  event_dim.name,
  COUNT(event_dim.name) AS event_count
FROM [firebase-analytics-sample-data:android_dataset.app_events_20160601]
GROUP BY event_dim.name
ORDER BY event_count DESC
```

If you have another type of value associated with an event (like item prices), you can pass it through as an optional value parameter and filter by this value in BigQuery. In our sample tables, there is a spend_virtual_currency event. We can write the following query to see how much virtual currency players spend at one time:
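A sketch of such a query against the Android sample table; the parameter key 'value' and its integer type are assumptions about how the event was logged:

```sql
-- How much virtual currency players spend at one time.
-- The param key 'value' and int_value type are assumptions.
SELECT
  event_dim.params.value.int_value AS amount_spent,
  COUNT(*) AS times_spent
FROM [firebase-analytics-sample-data:android_dataset.app_events_20160601]
WHERE event_dim.name = 'spend_virtual_currency'
  AND event_dim.params.key = 'value'
GROUP BY amount_spent
ORDER BY times_spent DESC
```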

Building complex queries

What if we want to run a query across both platforms of our app over a specific date range? Since Firebase Analytics data is split into tables for each day, we can do this using BigQuery’s TABLE_DATE_RANGE function. This query returns a count of the cities users are coming from over a one week period:
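A sketch of that query using TABLE_DATE_RANGE over both platforms' daily tables; the specific one-week window shown is an assumption matching the sample data:

```sql
-- Cities users came from over one week, across both platforms.
-- The date range is an assumption matching the sample tables.
SELECT
  user_dim.geo_info.city AS city,
  COUNT(user_dim.geo_info.city) AS city_count
FROM
  TABLE_DATE_RANGE([firebase-analytics-sample-data:android_dataset.app_events_],
                   TIMESTAMP('2016-06-01'), TIMESTAMP('2016-06-07')),
  TABLE_DATE_RANGE([firebase-analytics-sample-data:ios_dataset.app_events_],
                   TIMESTAMP('2016-06-01'), TIMESTAMP('2016-06-07'))
GROUP BY city
ORDER BY city_count DESC
```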

Getting a bit more complex, we can write a query to generate a report of unique user events across platforms over the past two weeks. Here we use PARTITION BY and EXACT_COUNT_DISTINCT to de-dupe our event report by users, making use of user properties and the user_dim.user_id field:
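A simplified sketch of that report: for brevity it de-dupes with EXACT_COUNT_DISTINCT on user_dim.user_id rather than the full PARTITION BY approach described above, and the date range and app_platform field are assumptions:

```sql
-- Unique users per event name and platform over the sample week.
-- Simplified: de-dupes with EXACT_COUNT_DISTINCT on user_id rather
-- than the PARTITION BY approach described above.
SELECT
  event_dim.name AS event_name,
  user_dim.app_info.app_platform AS platform,
  EXACT_COUNT_DISTINCT(user_dim.user_id) AS unique_users
FROM
  TABLE_DATE_RANGE([firebase-analytics-sample-data:android_dataset.app_events_],
                   TIMESTAMP('2016-06-01'), TIMESTAMP('2016-06-07')),
  TABLE_DATE_RANGE([firebase-analytics-sample-data:ios_dataset.app_events_],
                   TIMESTAMP('2016-06-01'), TIMESTAMP('2016-06-07'))
GROUP BY event_name, platform
ORDER BY unique_users DESC
```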

If you have data in Google Analytics for the same app, it’s also possible to export your Google Analytics data to BigQuery and do a JOIN with your Firebase Analytics BigQuery tables.

Visualizing analytics data

Now that we’ve gathered new insights from our mobile app data using the raw BigQuery export, let’s visualize it using Google Data Studio. Data Studio can read directly from BigQuery tables, and we can even pass it a custom query like the ones above. Data Studio can generate many different types of charts depending on the structure of your data, including time series, bar charts, pie charts and geo maps.

For our first visualization, let’s create a bar chart to compare the device types from which users are accessing our app on each platform. We can paste the mobile vs. tablet query above directly into Data Studio to generate the following chart:

From this chart, it’s easy to see that iOS users are much more likely to access our game from a tablet. Getting a bit more complex, we can use the above event report query to create a bar chart comparing the number of events across platforms:

Check out this post for detailed instructions on connecting your BigQuery project to Data Studio.

Below is what happened in search today, as reported on Search Engine Land and from other places across the web. The post SearchCap: Penguin & link building, PPC leads & social appeared first on Search Engine Land.

At this year's SMX East, Googlers Jerry Dischler and Babak Pahlavan shared recent updates and what's coming to AdWords and Google Analytics. Columnist Mark Traphagen was on hand to cover the highlights. The post What’s new and cool at Google from SMX East 2016 appeared first on Search Engine...

Columnist Kristi Kellogg recaps a session at SMX East that dives into how marketers can integrate their paid search and social efforts for better marketing results. The post Up close at SMX: Using paid search and social together appeared first on Search Engine Land.

Columnist Jeff Baum explains that when properly set up, call tracking can help you both measure the value of your PPC campaigns and optimize them for better ROI. The post Why call tracking helps improve PPC lead generation account performance appeared first on Search Engine Land.

Google recently released Penguin 4.0, and the Penguin filter now updates in real time. Columnist Marcus Miller explores what this means for SEO and link building. The post Authority & link building with real-time Penguin appeared first on Search Engine Land.

In this week’s Search In Pictures, here are the latest images culled from the web, showing what people eat at the search engine companies, how they play, who they meet, where they speak, what toys they have and more. GoogleBot at the AngularConnect conference: Source: Twitter DJs in suits at...

If you've been stressing over how to optimize your SEO for RankBrain, there's good news: you can't. Not in the traditional sense of the word, at least. Unlike the classic algorithms we're used to, RankBrain is a query interpretation model. It's a horse of a different color, and as such, it requires a different way of thinking than we've had to use in the past. In today's Whiteboard Friday, Rand tackles the question of what RankBrain actually is and whether SEOs should (or can) optimize for it.


Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we're going to chat about RankBrain SEO and RankBrain in general. So Google released this algorithm or component of their algorithm a while ago, but there have been questions for a long time about: Can people actually do RankBrain SEO? Is that even a thing? Is it possible to optimize specifically for this RankBrain algorithm?

I'll talk today a little bit about how RankBrain works just so we have a broad overview and we're all on the same page about it. Google has continued to release more and more information through interviews and comments about what the system does. There are some things that potentially shift in our SEO strategies and tactics around it, but I'll show why optimizing for RankBrain is probably the wrong way to frame it.

What does RankBrain actually do?

So what is it that RankBrain actually does? A query comes in to Google. Historically, classically Google would use an algorithm, probably the same algorithm, at least they've said sort of the same algorithm across the board historically to figure out which pages and sites to show. There are a bunch of different ranking inputs, which we've talked about many times here on Whiteboard Friday.

But if you search for this query today, what Google is saying is with RankBrain, they're going to take any query that comes in and RankBrain is essentially going to be a query interpretation model. It's going to look at the words in that query. It's potentially going to look at things possibly like location or personalization or other things. We're not entirely sure whether RankBrain uses those, but it certainly could. It interprets these queries, and then it's going to try and determine the intent behind the query and make the ranking signals that are applied to the results appropriate to that actual query.

So here's what that means. If you search today — I did this search on my mobile device, I did it on my desktop device — for "best Netflix shows" or "best shows on Netflix" or "What are good Netflix shows," "good Netflix shows," "what to watch on Netflix," notice a pattern here? All five of these searches are essentially asking for the very same thing. We might quibble and say "what to watch on Netflix" could be more movie-centric than shows, which could be more TV or episodic series-centric. That's okay. But these five are essentially, " What should I watch on Netflix?"

Now, RankBrain is going to help Google understand that each of these queries, despite the fact that they use slightly different words and phrasing or completely different words, with the exception of Netflix, that they should all be answered by the same content or same kinds of content. That's the part where Google, where RankBrain is determining the searcher intent. Then, Google is going to use RankBrain to basically say, "Now, what signals are right for me, Google, to enhance or to push down for these particular queries?"

Signals

So we're going to be super simplistic, hyper-simplistic and imagine that Google has this realm of just a few signals, and for this particular query or set of queries, any of these, that...

Keyword matching is not that important. So minus that, not super important here.

Link diversity, neither here nor there.

Anchor text, it doesn't matter too much, neither here nor there.

Freshness, very, very important.

Why is freshness so important? Well, because Google has seen patterns before, and if you show shows from Netflix that were on the service a year ago, two years ago, three years ago, you are no longer relevant. It doesn't matter if you have lots of good links, lots of diversity, lots of anchor text, lots of great keyword matching. If you are not fresh, you are not showing searchers what they want, and therefore Google doesn't want to display you. In fact, the number one result for all of these was published, I think, six or seven days ago, as of the filming of this Whiteboard Friday. Not particularly surprising, right? Freshness is super important for this query.

Domain authority, that is somewhat important. Google doesn't want to get too spammed by low-quality domains even if they are publishing fresh content.

Engagement, very, very important signal here. That indicates to Google whether searchers are being satisfied by these particular results.

This is a high-engagement query too. So on low-engagement queries, where people are looking for a very simple, quick answer, you expect engagement not to be that big. But for something in-depth, like "What should I watch on Netflix," you expect people are going to go, they're going to engage with that content significantly. Maybe they're going to watch a trailer or some videos. Maybe they're going to browse through a list of 50 things. High engagement, hopefully.

Related topics, Google is definitely looking for the right words and phrases.

If you, for example, are talking about the best shows on Netflix and everyone is talking about how hot — I haven't actually seen it — "Stranger Things" is, which is a TV program on Netflix that is very much in the public eye right now, well, if you don't have that on your best show list, Google probably does not want to display you. So that's an important related topic or a concept or a word vector, whatever it is.

Content depth, that's also important here. Google expects a long list, a fairly substantive page of content, not just a short, "Here are 10 items," and no details about them.

As a result of interpreting the query, using these signals in these proportions, these five were basically the top five or six for every single one of those queries. So Google is essentially saying, "Hey, it doesn't matter if you have perfect keyword targeting and tons of link diversity and anchor text. The signals that are more important here are these ones, and we can interpret that all of these queries essentially have the same intent behind them. Therefore, this is who we're going to rank."

So, in essence, RankBrain is helping Google determine what signals to use in the algorithm or how to weight those signals, because there's a ton of signals that they can choose from. RankBrain is helping them weight them, and they're helping them interpret the query and the searcher intent.

How should SEOs respond?

Does that actually change how we do SEO? A little bit. A little bit. What it doesn't do, though, is it does not say there is a specific way to do SEO for RankBrain itself. Because RankBrain is, yes, helping Google select signals and prioritize them, you can't actually optimize for RankBrain itself. You can optimize for these signals, and you might say, "Hey, I know that, in my world, these signals are much more important than these signals," or the reverse. For a lot of commercial, old-school queries, keyword matching and link diversity and anchor text are still very, very important. I'm not discounting those. What I'm saying is you can't do SEO for RankBrain specifically or not in the classic way that we've been trained to do SEO for a particular algorithm. This is kind of different.

That said, there are some ways SEOs should respond.

If you have not already killed the concept, the idea of one keyword, one page, you should kill it now. In fact, you should have killed it a long time ago, because Hummingbird really put this to bed way back in the day. But if you're still doing that, RankBrain does that even more. It's even more saying, "Hey, you know what? Condense all of these. For all of these queries you should not have one URL and another URL and another URL and another URL. You should have one page targeting all of them, targeting all the intents that are like this." When you do your keyword research and your big matrix of keyword-to-content mapping, that's how you should be optimizing there.

It's no longer the case, as it was probably five or six years ago, that one set of fixed inputs governs every single query. Because of this weighting system, some queries are going to demand signals in different proportion to other ones. Sometimes you're going to need fresh content. Sometimes you need very in-depth content. Sometimes you need high engagement. Sometimes you don't. Sometimes you will need tons of links with anchor text. Sometimes you will not. Sometimes you need high authority to rank for something. Sometimes you don't. So that's a different model.

The reputation that you get as a website, a domain earns a reputation around particular types of signals. That could be because you're publishing lots of fresh content or because you get lots of diverse links or because you have very high engagement or you have very low engagement in terms of you answer things very quickly, but you have a lot of diverse information and topics on that, like a Dictionary.com or an Answers.com, somebody like that where it's quick, drive-by visits, you answer the searcher's query and then they're gone. That's a fine model. But you need to match your SEO focus, your brand of the type of SEO and the type of signals that you hit to the queries that you care about most. You should be establishing that over time and building that out.

So RankBrain, yes, it might shift a little bit of our strategic focus, but no, it's not a classic algorithm that we do SEO against, like a Panda or a Penguin. How do I optimize to avoid Panda hitting me? How do I optimize to avoid Penguin hitting me? How do I optimize for Hummingbird so that my keywords match the query intent? Those are very different from RankBrain, which has this interpretation model.

So, with that, I look forward to hearing about your experiences with RankBrain. I look forward to hearing about what you might be changing since RankBrain came out a couple of years ago, and we'll see you again next week for another edition of Whiteboard Friday. Take care.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

Below is what happened in search today, as reported on Search Engine Land and from other places across the web. The post SearchCap: Google Penguin recoveries, voice enabled maps & Landy Awards appeared first on Search Engine Land.

Tracking return on investment from SEO can be tricky, especially since it often assists other marketing channels. But columnist Janet Driscoll Miller lays out a plan for proving organic search's ROI and securing budget for the next fiscal year. The post Prepping SEO for 2017: it’s all about...

There is a lot of confusion surrounding how Google handles duplicate content, but columnist Patrick Stox aims to clear it up once and for all. The post The myth of the duplicate content penalty appeared first on Search Engine Land.

Ask any PPC professional the secret of their success, and they'll often point to tools that help them do their jobs better. Columnist Pauline Jakober recaps a session at SMX East where the best of these tools were discussed. The post Updating your SEM toolbox with new, shiny tools – SMX East...

This post was originally in YouMoz, and was promoted to the main blog because it provides great value and interest to our community. The author's views are entirely his or her own and may not reflect the views of Moz, Inc.

We all know building backlinks is one of the most important aspects of any successful SEO and digital marketing campaign. However, I believe there is an untapped resource out there for link building: finding your competitors' broken pages that have been linked to by external sources.

Allow me to elaborate.

Finding the perfect backlink often takes hours, and it can take days, weeks, or even longer to acquire. That’s where the link building method I've outlined below comes in. I use it on a regular basis to build relevant backlinks from competitors' 404 pages.

Please note: In this post, I will be using Search Engine Land as an example to make my points.

Ready to dive in? Great, because I'm going to walk you through the entire link building process now.

First, you need to find your competitor(s). This is as easy as searching for the keyword you’re targeting on Google and selecting websites that rank above you in the SERPs. Once you have a list of competitors, create a spreadsheet listing all of them, including their positions in the rankings and the date you listed them.

Next, download Screaming Frog SEO Spider [a freemium tool]. This software will allow you to crawl each of your competitors' websites, revealing all of their 404 pages. To do this, simply enter your competitors' URLs in the search bar one at a time, like this:

Once the crawl is complete, click "Response Codes."

Then, click on the dropdown arrow next to "filter" and select "Client Error 4xx."

Now you'll be able to see the brand's 404 pages.
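At its core, what Screaming Frog is doing here is requesting each URL on the site and flagging the ones that return a client-error status. As a rough illustration only (not a replacement for a real crawler), here's a minimal Python sketch of that check. The `fetch` parameter is injectable so the logic works with any HTTP layer; the example uses a stub dictionary in place of live requests.

```python
# Minimal sketch of what a 404 crawl boils down to: request each URL and
# keep the ones that return a 4xx client-error status. `fetch` is any
# callable mapping a URL to an HTTP status code; a stub stands in for
# real network requests below.

def find_broken(urls, fetch):
    """Return (url, status) pairs for URLs whose status is a 4xx client error."""
    broken = []
    for url in urls:
        status = fetch(url)
        if 400 <= status < 500:
            broken.append((url, status))
    return broken

# Stub fetcher standing in for live HTTP requests (URLs are illustrative):
statuses = {
    "https://example.com/": 200,
    "https://example.com/old-guide": 404,
    "https://example.com/retired-tool": 410,
}
print(find_broken(statuses, statuses.get))
# → [('https://example.com/old-guide', 404), ('https://example.com/retired-tool', 410)]
```

A real crawler also discovers the URLs by following links, handles redirects, and respects robots.txt, which is exactly why a dedicated tool like Screaming Frog is the right choice at scale.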

Once you've completed the step above, simply press the "Export" button to export all of their 404 pages into a file. Next, import this file into a spreadsheet in Excel or Google Docs. On this part of the spreadsheet, create tabs called "Trust Flow," "Citation Flow," "Referring Domains," and "External Backlinks."

Now that you’ve imported all of their 404 pages, you need to separate out the images and external links, if there are any. A quick way to do this is to select the column by clicking the cell at the top, then press "Filter" under the "Data" tab. Look for the drop-down arrow on the first cell of that column. Click the drop-down arrow, and underneath "Filter by values," you will see two links: "Select all" and "Clear."

Press "Clear," like this:

This will clear all preset options. Now, type in the URL of the competitor's website in the search box and click "Select all."

This will filter out all external links and just leave you with their 404 pages. Go through the whole list, highlighting the pages you think you can rewrite.
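If you'd rather script this filtering step than click through spreadsheet menus, the same idea is a few lines of Python over the exported CSV. Note the "Address" and "Status Code" column names are assumptions about the export format; adjust them to match the headers in your actual file.

```python
import csv
import io

def internal_404s(csv_text, domain):
    """From a crawler export, keep the URLs that are on the competitor's
    own domain and returned a 404. Column names ("Address", "Status Code")
    are illustrative; match them to your export's headers."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["Address"] for row in reader
            if domain in row["Address"] and row["Status Code"] == "404"]

# Illustrative export with one internal 404, one live page, and one
# external broken asset that should be filtered out:
export = """Address,Status Code
https://competitor.com/dead-page,404
https://competitor.com/live-page,200
https://cdn.elsewhere.net/asset.js,404
"""
print(internal_404s(export, "competitor.com"))
# → ['https://competitor.com/dead-page']
```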

Now that you have all of your relevant 404 pages in place, run them through Majestic [a paid tool] or Moz’s Open Site Explorer (OSE) [a freemium tool] to see if their 404 pages actually have any external links pointing to them (which is what we're ultimately looking for). Add the details from Majestic or Moz to the spreadsheet. No matter which tool you use (I use OSE), hit "Request a CSV" for the backlink data. (Import the data into a new tab on your spreadsheet, or create a new spreadsheet altogether if you wish.)
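Conceptually, this step is a join: for each dead URL, gather the referring pages that still link to it. Here's a small sketch of that grouping over a backlink export. The "Source URL" / "Target URL" headers are placeholders; Majestic and OSE exports use their own column names.

```python
import csv
import io

def link_opportunities(dead_urls, backlink_csv):
    """Group referring pages by the dead target URL they link to.
    Column names are illustrative, not any real tool's export format."""
    opportunities = {url: [] for url in dead_urls}
    for row in csv.DictReader(io.StringIO(backlink_csv)):
        target = row["Target URL"]
        if target in opportunities:
            opportunities[target].append(row["Source URL"])
    # Keep only dead pages that actually have external links pointing at them.
    return {url: sources for url, sources in opportunities.items() if sources}

backlinks = """Source URL,Target URL
https://blog.example.com/roundup,https://competitor.com/dead-page
https://news.example.org/story,https://competitor.com/dead-page
https://blog.example.com/other,https://competitor.com/live-page
"""
print(link_opportunities(["https://competitor.com/dead-page"], backlinks))
```

Each remaining source URL is a prospect: a live page linking to a dead resource you can replace with your own content.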

Find the relevant backlinks pointing to the competitor's website. Once you've found all of the relevant linking sites, you can either highlight them or remove the irrelevant ones from your spreadsheet.

Please note: It's worth running each of the websites you're potentially going to be reaching out to through Majestic and Moz to find out their citation flow, trust flow, and domain authority (DA). You may only want to go for the highest DA; however, in my opinion, if it's relevant to your niche and will provide useful information, it's worth targeting.

With the 404s and link opportunities in hand, focus on creating content that’s relevant for the brands you hope to earn a link from. Find the contact information for someone at the brand you want the link from. This will usually be clear on their website; but if not, you can use tools such as VoilaNorbert and Email Hunter to get the information you need. Once you have this information, you need to send them an email similar to this one:

Hi [THEIR NAME],

My name is [YOUR NAME], and I carry out the [INSERT JOB ROLE – i.e., MARKETING] at [YOUR COMPANY'S NAME or WEBSITE].

I have just come across your blog post regarding [INSERT THEIR POST TITLE] and when I clicked on one of the links on that post, it happened to go to a 404 page. As you’re probably aware, this is bad for user experience, which is the reason I’m emailing you today.

We recently published an in-depth article regarding the same subject of the broken link you have on your website: [INSERT YOUR POST TITLE].

Here's the link to our article: [URL].

I was wondering if you wouldn’t mind linking to our article instead of the 404 page you’re currently linking to, as our article will provide your readers with a better user experience.

We will be updating this article so we can keep people provided with the very latest information as the industry evolves.

Thank you for reading this email and I look forward to hearing from you.

[YOUR NAME]

Disclaimer: The email above is just an example and should be tailored to your own style of writing.
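If you're reaching out to more than a handful of prospects, it's worth filling the bracketed placeholders programmatically so each email is personalized without hand-editing. A quick sketch using Python's `string.Template` (the field names simply mirror the placeholders above; all values shown are made up):

```python
from string import Template

# Outreach template mirroring the bracketed placeholders in the example email.
outreach = Template(
    "Hi $their_name,\n\n"
    "My name is $your_name, and I carry out the $job_role at $company.\n\n"
    "I have just come across your blog post regarding $their_post, and when I "
    "clicked on one of the links in that post, it went to a 404 page.\n\n"
    "We recently published an in-depth article on the same subject: $our_post.\n"
    "Here's the link to our article: $our_url\n\n"
    "Would you mind linking to our article instead of the 404 page you're "
    "currently linking to?\n\n"
    "$your_name"
)

# Fill the template for one hypothetical prospect:
email = outreach.substitute(
    their_name="Alex", your_name="Sam", job_role="marketing",
    company="example.com", their_post="Broken Link Building 101",
    our_post="The Complete Guide to 404 Link Reclamation",
    our_url="https://example.com/404-link-reclamation",
)
print(email)
```

`substitute` raises a `KeyError` if any placeholder is left unfilled, which is a useful guard against sending an email with a stray `$their_name` in it.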

In closing, remember to keep detailed notes of the conversations you have with people during outreach, and always follow up with people you connect with.

I hope this tactic helps your SEO efforts in the future. It's certainly helped me find new places to earn links. Not only that, but it gives me new content ideas on a regular basis.

Do you use a similar process to build links? I'd love to hear about it in the comments.


Have you been waiting two or more years to recover from your Penguin penalty? Google said the recoveries are starting now. The post Google says Penguin recoveries have started to roll out now appeared first on Search Engine Land.