Thursday, June 30, 2016

On-page content is certainly not one of the sexier topics in digital marketing.

Lost in the flashing lights of "cool digital marketing trends" and things to be seen talking about, it's become the poor relative of many a hyped "game-changer."

I’m here to argue that, in being distracted by the topics that may be more "cutting-edge," we're leaving our most valuable assets unloved and at the mercy of underperformance.

This post is designed not only to make it clear what good on-page content looks like, but also how you should go about prioritizing which pages to tackle first based on commercial opportunity, creating truly customer-focused on-page experiences.

What is "static" or "functional" content?

So how am I defining static/functional content, and why is it so important to nurture in 2016? The answer lies in the recent refocus on audience-centric marketing and Google’s development of the Knowledge Graph.

Whether you call your on-page content "functional," "static," or simply "on-page" content, they're all flavors of the same thing: content that sits on key landing pages. These may be category pages or other key conversion pages. The text is designed to help Google understand the relevance of the page and/or help customers with their buying decisions.

Functional content has other uses as well, but today we're focusing on its use as a customer-focused conversion enhancement and discovery tactic.

And while several years ago it would have been produced simply to aid a relatively immature Google to "find" and "understand," the focus is now squarely back on creating valuable user experiences for your targeted audience.

Google’s ability to better understand and measure what "quality content" really looks like — alongside an overall increase in web usage and ease-of-use expectation among audiences — has made investment in key pages critical to success on many levels.

We should now be looking to craft on-page content to improve conversion, search visibility, user experience, and relevance — and yes, even as a technique to steal Knowledge Graph real estate.

The question, however, is "how do I even begin to tackle that mountain?"

Auditing what you have

For those with large sites, the task of even beginning to understand where to start with your static content improvement program can be daunting. Even if you have a small site of a couple of hundred pages, the thought of writing content for all of them can be enough to put you off even starting.

As with any project, the key is gathering the data to inform your decision-making before simply "starting." That’s where my latest process can help.

Introducing COAT: The Content Optimization and Auditing Tool

To help the process along, we’ve been using a tool internally for months. Today, for the first time, there's a version that anyone can use.

This link will take you to the new Content Optimization and Auditing Tool (COAT), and below I’ll walk through exactly how we use it to understand the current site and prioritize areas for content improvement. I'll also walk you through the manual step-by-step process, should you wish to take the scenic route.

The manual process

If you enjoy taking the long road — maybe you feel an extra sense of achievement in doing so — then let's take a look at how to pull the data together to make data-informed decisions around your functional content.

As with any solid piece of analysis, we begin with an empty Excel doc and, in this case, a list of keywords you feel are relevant to and important for your business and site.

Running this process manually is labor-intensive (hence the need to automate it!), and adding dozens more keywords creates a lot of work for little extra knowledge gain. By focusing on a couple, though, you can see how to build the fuller picture.

Stage one

We start by adding our keywords to our spreadsheet alongside a capture of the search volume for those terms and the actual URL ranking, as shown below (NOTE: all data is for google.co.uk).

Next we add in ranking position...

We then look to the page itself and give each of the key on-page elements a score based on our understanding of best practice. If you want to be really smart, you can score the most important factors out of 20 and those lesser points out of 10.

In building our COAT tool to enable this to be carried out at scale across sites with thousands of pages, we made a list of many of the key on-page factors we know to affect rank and indeed conversion. They include title tags, H1s, meta descriptions, canonical use, readability, and static content quality.

This is far from an exhaustive list, but it's a great place to start your analysis. The example below shows an element of this scored:

Once you have calculated a score for every key factor, your job is then to turn this into an average, weighted score out of 100. In this case, you can see I've done this across the listed factors and have a final score for each keyword and URL:
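
A minimal sketch of that weighted-average calculation in Python (the factor names and weights here are illustrative, not COAT's actual scheme):

```python
# Illustrative on-page factor scores; the factor names and point values
# are examples, not COAT's actual weighting scheme.
major_scores = {"title_tag": 16, "h1": 14, "body_copy": 12}   # scored out of 20
minor_scores = {"meta_description": 7, "canonical": 9}        # scored out of 10

def weighted_score(majors, minors):
    """Normalize each factor to a 0-1 fraction, then average to a 0-100 score."""
    fractions = [v / 20 for v in majors.values()] + [v / 10 for v in minors.values()]
    return round(100 * sum(fractions) / len(fractions), 1)

print(weighted_score(major_scores, minor_scores))  # → 74.0
```

Scoring the most important factors out of 20 and the lesser ones out of 10 means a weak title tag drags the final score down twice as hard as a weak canonical tag.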

Stage two

Once you have scores for a larger number of pages and keywords, it's then possible to begin organizing your data in a way that helps prioritize action.

You can do this simply enough by using filters and organizing the table by any number of combinations.

You may want to sort by highest search volume and then by those pages ranking between, say, 5th and 10th position.

Doing this enables you to focus on the pages that may yield the most potential traffic increase from Google, if that is indeed your aim.

Working this way delivers the largest positive net impact fastest.

Doing it at scale

Of course, if you have a large site with tens (or even hundreds) of thousands of pages, the manual option is almost impossible — which is why we scratched our heads and looked for a more effective option. The result was the creation of our Content Optimization and Auditing Tool (COAT). Here's how you can make use of it to paint a fuller picture of your entire site.

Here's how it works

Enter your domain, or a sub-directory of the site if you'd like to focus on a particular section

Add the keywords you want to analyze in a comma-separated list

Click "Get Report," making sure you've chosen the right country

Next comes the smart bit: adding target keywords to the system before it crawls enables the algorithm to cross-reference all pages against those phrases and then score each combination against a list of critical attributes you'd expect the "perfect page" to have.

Let’s take an example:

You run a site that sells laptops. You enter a URL for a specific model, such as /apple-15in-macbook/, and a bunch of related keywords, such as "Apple 15-inch MacBook" and "Apple MacBook Pro."

The system works out the best page for those terms and measures the existing content against a large number of known ranking signals and measures, covering everything from title tags and H1s to readability tests such as the Flesch-Kincaid system.
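
As a reference point, the Flesch Reading Ease formula mentioned above is simple to compute once you have word, sentence, and syllable counts for a page's copy (the counts below are hypothetical):

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease: higher = easier; 60-70 reads as plain English."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# e.g. a hypothetical 100-word passage in 8 sentences with 140 syllables:
score = flesch_reading_ease(words=100, sentences=8, syllables=140)
print(round(score, 1))  # → 75.7
```

In practice the hard part is counting syllables reliably; real tooling uses a dictionary or heuristic for that, which is omitted here.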

This outputs a spreadsheet that scores each URL or even categories of URLs (to allow you to see how well-optimized the site is generally for a specific area of business, such as Apple laptops), enabling you to sort the data, discover the pages most in need of improvement, and identify where content gaps may exist.

In a nutshell, it'll provide:

What the most relevant target page for each keyword is

How well-optimized individual pages are for their target keywords

Where content gaps exist within the site’s functional content

It also presents the top-level data in an actionable way. An example of the report landing page can be seen below (raw CSV downloads are also available — more on that in a moment).

You can see the overall page score and simple ways to improve it. This is for our "Digital PR" keyword:

The output

As we've already covered in the manual process example, in addition to pulling the "content quality scores" for each URL, you can also take the data to the next level by adding in other data sources to the mix.

The standard CSV download includes data such as keyword, URL, and scores for the key elements (such as H1, meta, canonical use and static content quality).

This level of detail makes it possible to create a priority order for fixes based on lowest-scoring pages easily enough, but there are ways you can supercharge this process even more.

The first thing to do is run a simple rankings check using your favorite rank tracker for those keywords and add them into a new column in your CSV. It'll look a little like this (I've added some basic styling for clarity):
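
Appending that rank column programmatically is a simple dictionary lookup; the keywords, URLs, scores, and rank values below are all hypothetical:

```python
# Rank data exported from your rank tracker (hypothetical values).
ranks = {"hybrid bikes": 7, "road bikes": 2}

# Base rows from the COAT CSV: (keyword, url, content_score).
csv_rows = [
    ("hybrid bikes", "/hybrid-bikes/", 61),
    ("road bikes", "/road-bikes/", 74),
]

# Append rank as a new column; None flags keywords the tracker missed.
with_rank = [(kw, url, score, ranks.get(kw)) for kw, url, score in csv_rows]
print(with_rank[0])  # → ('hybrid bikes', '/hybrid-bikes/', 61, 7)
```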

I also try to group keywords by adding a third column using a handful of grouped terms. In this example, you can see I'm grouping car model keywords with brand terms manually.

Below, you'll see how we can then group these terms together in an averaged cluster table to give us a better understanding of where the keyword volume might be from a car brand perspective. I've blurred the keyword grouping column here to protect existing client strategy data.

As you can see from the snapshot above, we now have a spreadsheet with keyword, keyword group, search volume, URL, rank, and the overall content score pulled in from the base Excel sheet we have worked through. From this, we can do some clever chart visualization to help us understand the data.
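
The averaged cluster table described above can be built with a simple group-and-average pass; the brands, volumes, and scores below are dummy data:

```python
from collections import defaultdict

# Dummy rows: (keyword group, search volume, content score).
rows = [
    ("subaru", 1300, 52), ("subaru", 880, 46),
    ("audi", 2400, 38), ("audi", 1900, 30),
]

groups = defaultdict(lambda: {"volume": 0, "scores": []})
for brand, volume, score in rows:
    groups[brand]["volume"] += volume          # total search volume per brand
    groups[brand]["scores"].append(score)      # collect scores to average

cluster = {
    brand: {"volume": g["volume"],
            "avg_score": sum(g["scores"]) / len(g["scores"])}
    for brand, g in groups.items()
}
print(cluster["subaru"])  # → {'volume': 2180, 'avg_score': 49.0}
```

Summing volume while averaging score gives you both halves of the prioritization picture per cluster: how much demand exists and how well the section currently serves it.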

Visualizing the numbers

To really understand where the opportunity lies and to take this process past a simple I’ll-work-on-the-worst-pages-first approach, we need to bring it to life.

This means turning our table into a chart. We'll utilize the chart functionality within Excel itself.

Here's an example of the corresponding chart for the table shown above, showing performance by category and ranking correlation. We're using dummy data here, but you can look at the overall optimization score for each car brand section alongside how well they rank (the purple line is average rank for that category):

If we focus on the chart above, we can begin to see a pattern between those categories that are better optimized and generally have better rankings. Correlation does not always equal causation, as we know, but it's useful information.

Take the very first column, the Subaru category. We can see that it's one of the better-optimized categories (at 49%) and average rank is at 34.1. Now, these are hardly record-breaking positions, but they do point towards the value of well-worked static pages.

Making the categories as granular as possible can be very valuable here, as you can quickly build up a focused picture of where to put your effort to move the needle quickly. The process for doing so is an entirely subjective one, often based on your knowledge of your industry or your site information architecture.

Add keyword volume data into the mix and you know exactly where to build your static content creation to-do list.

Adding in context

Like any data set, however, it requires a level of benchmarking and context to give you the fullest picture possible before you commit time and effort to the content improvement process.

It’s for this reason that I always look to run the same process on key competitors, too. An example of the resulting comparison charts can be seen below.

The process is relatively straightforward: take an average of all the individual URL content scores, which will give you a "whole domain" score. Add competitors by repeating the process for their domain.
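
Rolling per-URL scores up into whole-domain benchmarks is a one-line average; the per-URL scores below are hypothetical:

```python
def domain_score(url_scores):
    """Average the per-URL content scores into a single whole-domain score."""
    return round(sum(url_scores) / len(url_scores), 1)

# Hypothetical per-URL content scores for our site and a competitor.
ours = domain_score([74, 61, 49, 58])
competitor_a = domain_score([55, 52, 60])
print(ours, competitor_a)  # → 60.5 55.7
```

Repeat the same call per keyword group rather than per domain and you get the granular competitor comparison described above.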

You can take a more granular view manually by following the same process for the grouped keywords and tabulating the result. Below, we can see how our domain sizes up against those same two competitors for all nine of our example keyword groups, such as the car brands example we looked at earlier.

With that benchmark data in place, you can move on to the proactive improvement part of the process.

The perfect page structure

Having identified your priority pages, the next step is to ensure you edit (or create) them in the right way to maximize impact.

Whereas a few years ago it was all about creating a few paragraphs almost solely for the sake of helping Google understand the page, now we MUST be focused on usability and improving the experience for the right visitor.

This means adding value to the page. To do that, you need to stand back and really focus in on the visitor: how they get to the page and what they expect from it.

This will almost always involve what I call "making the visitor smarter": creating content that ensures they make better and more informed buying decisions.

To do that requires a structured approach to delivering key information succinctly and in a way that enhances — rather than hinders — the user journey.

The best way of working through what that should look like is to share a few examples of those doing it well:

Tredz is a UK cycling ecommerce business. They do a great job of understanding what their audience is looking for and ensuring they're set up to make them smarter. The "Top 5" pages are certainly not classic landing pages, but they're brilliant examples of how you can sell and add value at the same time.

Below is the page for the "Top 5 hybrids for under £500." You can clearly see how the URL (http://ift.tt/29eH2DW), meta, H tags, and body copy all support this focus and are consistently aligned:

This is a really cool business concept and they also do great landing pages. You get three clear reasons to try them out — presented clearly and utilizing several different content types — all in one package.

Finance may not be where you'd expect to see amazing landing pages, but this is a great example. Not only is it an easy-to-use experience, it answers all the user's key questions succinctly, starting with "What is an installment loan?" It's also structured in a way to capture Knowledge Graph opportunity — something we'll come to shortly.

Outside of examples like these and supporting content, you should be aiming to make the visitor smarter: delivering key information succinctly, in a way that enhances rather than hinders the user journey.

Claiming Knowledge Graph

There is, of course, one final reason to work hard on your static pages. That reason? To claim a massively important piece of digital real estate: Google Featured Snippets.

Snippets form part of the wider Knowledge Graph, the tangible visualization of Google’s semantic search knowledge base that's designed to better understand the associations and entities behind words, phrases, and descriptions of things.

The Knowledge Graph comes in a multitude of formats, but one of the most valuable (and attainable from a commercial perspective) is the Featured Snippet, which sits at the top of the organic SERP. An example can be seen below from a search for "How do I register to vote" in google.co.uk:

In recent months, Zazzle Media has done a lot of work on landing page design to capture featured snippets with some interesting findings, most notably the level of extra traffic such a position can achieve.

Having now measured dozens of these snippets, we see an average of 15–20% extra traffic from them versus a traditional position 1. That’s a definite bonus, and makes the task of claiming them extremely worthwhile.

You don’t have to be first

The best news? You don’t even have to be in first position to be considered for a snippet. Our own research shows us that almost 75% of the examples we track have been claimed by pages ranked between 2nd and 10th position. It's far from being robust enough yet for us to formalize a full report, but early indications across more than 900 claimed snippets (heavily weighted toward the finance sector at present) support these findings.

Similar research by search data specialists STAT has also supported this theory, revealing that objective words are more likely to trigger a featured snippet. General question and definition words (like "does," "cause," and "definition") as well as financial words (like "salary," "average," and "cost") are likely triggers. Conversely, the word "best" triggered zero featured snippets in over 20,000 instances.

This suggests that writing in a factual way is more likely to help you claim featured results.

Measuring what you already have

Before you run into this two-footed, you must first audit what you may (or may not) already have. If you run a larger site, you may already have claimed a few snippets by chance, and with any major project it's important to benchmark before you begin.

Luckily, there are a handful of tools out there to help you discover what you already rank for. My favorite is SEMrush.

The paid-for tool makes it easy to find out if you rank for any featured snippets already. I'd suggest using it to benchmark and then measure the effect of any optimization and content reworking you do as a result of the auditing process.

Claiming Featured Snippets

Claiming your own Featured Snippet then requires a focus on content structure and on answering key questions in a logical order. This also means paying close attention to on-page HTML structure to ensure that Google can easily and cleanly pick out specific answers.

Let’s look at a few examples showing that Google can pick up different types of content for different types of questions.

1. The list

One of the most prevalent examples of Featured Snippets is the list.

As you can see, Media Temple has claimed this incredibly visual piece of real estate simply by creating an article with a well-structured, step-by-step guide to answer the question:

"How do I set up an email account on my iPhone?"

If we look at how the page is formatted, we can see that the URL matches the search almost exactly, while the H1 tag serves to reinforce the relevance still further.

As we scroll down we find a user-friendly approach to the content, with short sentences and paragraphs broken up succinctly into sections.

This allows Google to quickly understand relevance and extract the most useful information to present in search; in this case, the step-by-step how-to process to complete the task.

Here are the first few paragraphs of the article, highlighting key structural elements. Below this is the list itself that's captured in the above Featured Snippet:
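
To see why clean markup matters, here's a small Python sketch that pulls the steps out of a well-structured ordered list (the HTML is a hypothetical fragment, not Media Temple's actual markup). The same property that makes this extraction trivial for a parser is what lets a search engine lift the list into a snippet:

```python
from html.parser import HTMLParser

# Hypothetical, well-structured step-by-step markup.
html = """
<h1>How to set up email on your iPhone</h1>
<ol>
  <li>Open Settings</li>
  <li>Tap Mail, then Accounts</li>
  <li>Tap Add Account</li>
</ol>
"""

class ListExtractor(HTMLParser):
    """Collect the text of every <li> element in document order."""
    def __init__(self):
        super().__init__()
        self.in_li = False
        self.steps = []
    def handle_starttag(self, tag, attrs):
        if tag == "li":
            self.in_li = True
    def handle_endtag(self, tag):
        if tag == "li":
            self.in_li = False
    def handle_data(self, data):
        if self.in_li and data.strip():
            self.steps.append(data.strip())

parser = ListExtractor()
parser.feed(html)
print(parser.steps)  # → ['Open Settings', 'Tap Mail, then Accounts', 'Tap Add Account']
```

Had the steps been buried in undifferentiated paragraphs, no simple extraction rule would recover them, and neither would a snippet algorithm.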

2. The table

Google LOVES to present tables; clearly there's something about the logical nature of how the data is presented that resonates with its team of left-brained engineers!

In the example below, we see a site listing countries by size. Historically, this page may well not have ranked so highly (it isn’t usually the page in position one that claims the snippet result). Because it has structured the information so well, however, Geohive will be enjoying a sizable spike in traffic to the page.

The page itself looks like this — clear, concise and well-structured:

3. The definition

The final example is the description, or definition snippet; it's possibly the hardest to claim consistently.

It's difficult for two key reasons:

There will be lots of competition for the space, with many pages answering the search query in prose format.

It requires a focus on HTML structure and brilliantly crafted content to win.

In the example below, we can see a very good example of how you should be structuring content pages.

We start with a perfect URL (/what-is-a-mortgage-broker/), and this follows through to the H1 ("What is a Mortgage Broker"). The author then cleverly uses subheadings to extend the rest of the post into a thorough piece on the subject area. Subheadings include the key How, What, Where, and When areas of focus that any good journalism tutor will lecture you on using in an article or story. Examples might include:

So how does this whole mortgage broker thing work?

Mortgage brokers can shop the rate for you

Mortgage brokers are your loan guide

Mortgage broker FAQ

The result is a piece that leaves no stone unturned. Because of this, it's been shared plenty of times — a surefire signal that the article is positively viewed by readers.

Featured Snippet Cheatsheet

Not being one to leave you alone to figure this out, though, I have created this simple Featured Snippet Cheatsheet, designed to take the guesswork out of creating pages worthy of being selected for the Knowledge Graph.

Do it today!

Thanks for making it this far. My one hope is for you to go off and put this plan into action for your own site. Doing so will quickly transform your approach to both landing pages and to your ongoing content creation plan (but that’s a post for another day!).

And if you do have a go, remember to use the free COAT tool and guides associated with this article to make the process as simple as possible.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

Wednesday, June 29, 2016

As anyone who's contributed to business development at an agency knows, it can be challenging to establish exactly what a given prospect needs. What projects, services, or campaigns would actually move the needle for this organization? While some clients come to an agency with specific requests, others are looking for guidance — help establishing where to focus resources. This can be especially difficult, as answering these questions often requires large amounts of information to be analyzed in a small period of time.

To address the challenge of evaluating prospective clients and prioritizing proposed work, we’ve developed the Balanced Digital Scorecard framework. This post is the first in a two-part series. Today, we'll look at:

Why we developed this framework,

Where the concept came from, and

Specific areas to review when evaluating prospects

Part two will cover how to use the inputs from the evaluation process to prioritize proposed work — stay tuned!

Evaluating potential clients

Working with new clients, establishing what strategies will be most impactful to their goals... this is what makes working at an agency awesome. But it can also be some of the most challenging work. Contributing to business development and pitching prospects tends to amplify this with time constraints and limited access to internal data. While some clients have a clear idea of the work they want help with, this doesn’t always equal the most impactful work from a consultant's standpoint. Balancing these needs and wants takes experience and skill, but can be made easier with the right framework.

The use of a framework in this setting helps narrow down the questions you need to answer and the areas to investigate. This is crucial to working smarter, not harder — words which we at Distilled take very seriously. Often when putting together proposals and pitches, consultants must quickly establish the past and present status of a site from many different perspectives.

What type of business is this and what are their overall goals?

What purpose does the site serve and how does it align with these goals?

What campaigns have they run and were they successful?

What does the internal team look like and how efficiently can they get things done?

What is the experience of the user when they arrive on the site?

The list goes on and on, often becoming a vast amount of information that, if not digested and organized, can make putting the right pitch together burdensome.

To help our consultants understand both what questions to ask and how they fit together, we've adapted the Balanced Scorecard framework to meet our needs. But before I talk more about our version, I want to briefly touch on the original framework to make sure we’re all on the same page.

The Balanced Scorecard

For anyone not familiar with this concept, the Balanced Scorecard was created by Robert Kaplan and David Norton in 1992. In an article first published in the Harvard Business Review, Kaplan and Norton set out to create a management system, as opposed to a measurement system (which was more common at that time).

Kaplan and Norton argued that "the traditional financial performance measures worked well for the industrial era, but they are out of step with the skills and competencies companies are trying to master today." They felt the information age would require a different approach, one that guided and evaluated the journey companies undertook. This would allow them to better create "future value through investment in customers, suppliers, employees, processes, technology, and innovation."

The concept suggests that businesses be viewed through four distinct perspectives:

Innovation and learning – Can we continue to improve and create value?

Internal business – What must we excel at?

Customer – How do customers see us?

Financial – How do we look to shareholders?

Narrowing the focus to these four perspectives reduces information overload. “Companies rarely suffer from having too few measures,” wrote Kaplan and Norton. “More commonly, they keep adding new measures whenever an employee or a consultant makes a worthwhile suggestion.” By limiting the perspectives and associated measurements, management is forced to focus on only the most critical areas of the business.

This image below shows the relations of each perspective:

And now, with it filled out as an example:

As you can see, this gives the company clear goals and corresponding measurements.

Kaplan and Norton found that companies solely driven by financial goals and departments were unable to implement the scorecard, because it required all teams and departments to work toward central visions — which often weren’t financial goals.

“The balanced scorecard, on the other hand, is well suited to the kind of organization many companies are trying to become... put[ting] strategy and vision, not control, at the center,” wrote Kaplan and Norton. This would inevitably bring teams together, helping management understand the connectivity within the organization. Ultimately, they felt that “this understanding can help managers transcend traditional notions about functional barriers and ultimately lead to improved decision-making and problem-solving.”

At this point, you’re probably wondering why this framework matters to a digital marketing consultant. While it's more directly suited for evaluating companies from the inside, so much of this approach is really about breaking down the evaluation process into meaningful metrics with forward-looking goals. And this happens to be very similar to evaluating prospects.

Our digital version

As I mentioned before, evaluating prospective clients can be a very challenging task. It’s crucial to limit the areas of investigation during this process to avoid getting lost in the weeds, instead focusing only on the most critical data points.

Since our framework is built for evaluating clients in the digital world, we have appropriately named it the Balanced Digital Scorecard. Our scorecard also has main perspectives through which to view the client:

Platform – Does their platform support publishing, discovery, and discoverability from a technical standpoint?

Content – Are they publishing content that strikes an appropriate blend of the effective, the informative, the entertaining, and the compelling?

Audience – Are they building visibility through owned, earned, and paid media?

Conversions – Do they have a deep understanding of the needs of the market, and are they creating assets, resources, and journeys that drive profitable customer action?

Measurement – Are they measuring all relevant aspects of their approach and their prospects’ activities to enable testing, improvement, and appropriate investment?

These perspectives make up the five areas of analysis to work through when evaluating most prospective clients.

1. Platform

Most consultants or SEO experts have a good understanding of the technical elements to review in a standard site audit. A great list of these can be found on our Technical Audit Checklist, created by my fellow Distiller, Ben Estes. The goal of reviewing these factors is of course to “ensure site implementation won’t hurt rankings,” says Ben. While you should definitely evaluate these elements (at a high level), there is more to look into when using this framework.

Evaluating a prospect’s platform does include standard technical SEO factors but also more internal questions, like:

How effective and/or differentiated is their CMS?

How easy is it for them to publish content?

How differentiated are their template levels?

What elements are under the control of each team?

Additionally, you should look into areas like social sharing, overall mobile-friendliness, and site speed.

If you’re thinking this seems like quite an undertaking, because technical audits take time and some prospects won’t be open about platform constraints, you’re right (to an extent). Take a high-level approach and look for massive weaknesses instead of every single limitation. This will give you enough information to understand where to prioritize this perspective in the pitch.

2. Content

Similar to the technical section, evaluating content looks like a lightweight version of a full content audit. What content do they have, which pieces are awesome, and what is missing? Also look to competitors to understand who is creating content in the space and where the bar is set.

Beyond looking at these elements through a search lens, aim to understand what content is being shared and why. Is this taking place largely on social channels, or are publications picking these pieces up? Evaluating content on multiple levels helps to understand what they've created in the past and their audience’s response to it.

3. Audience

Looking into a prospect’s audience can be challenging, depending on how much access they grant you during the pitch process. If you’re able to get access to analytics, this task is much easier; without it, there are many tools you can leverage to get some of the same insights.

In this section, you’re looking at the traffic the site is receiving and from where. Are they building visibility through owned, earned, and paid media outlets? How effective are those efforts? Look at metrics like Search Visibility from SearchMetrics, social reach, and email stats.

A large amount of this research will depend on what information is available or accessible to you. As with previous perspectives, you're just aiming to spot large weaknesses.

4. Conversion

Increased conversions are often a main goal stated by prospects, but without transparency from them, this can be very difficult to evaluate during a pitch. This means that often you’re left to speculate or use basic approaches. How difficult or simple is it to buy something, contact them, or complete a conversion in general? Are there clear calls to action for micro-conversions, such as joining an email list? How different is the mobile experience of this process?

Look at the path to these conversions. Is there a clear funnel, and does it make sense from a user’s perspective? Understanding the journey a user takes (which you can generally experience first-hand) can tell you a lot about expected conversion metrics.

Lastly, many companies’ financials are available to the public and offer a general idea of how the company is doing. If you can establish how much of their business takes place online, you can start to speculate about the success of their web presence.

5. Measurement

Evaluating a prospect’s measurement capabilities is (not surprisingly) vastly more accurate with analytics access. If you’re granted access, evaluate each platform not just for validity but also accessibility. Are there useful dashboards, management data, or other data sources that teams can use to monitor and make decisions?

Without access, you’re left to simply check and see the presence of analytics and if there is a data layer. While this doesn’t tell you much, you can often deduce from conversations how much data is a part of the internal team’s thought process. If people are monitoring, engaging, and interested in analytics data, changes and prioritization might be an easier undertaking.

Final thoughts

Working with prospective clients is something all agency consultants will have to do at some point in their career. This process is incredibly interesting — it challenges you to leverage a variety of skills and a range of knowledge to evaluate new clients and industries. It's also a daunting task. Often your position outside the organization or unfamiliarity with a given industry can make it difficult to know where to start.

Frameworks like the original Balanced Scorecard created by Kaplan and Norton were designed to help a business evaluate itself from a more modern and holistic perspective. This approach turns the focus to future goals and action, not just evaluation of the past.

This notion is crucial at an agency needing to establish the best path forward for prospective clients. We developed our own framework, the Balanced Digital Scorecard, to help our consultants do just that. By limiting the questions you’re looking to answer, you can work smarter and focus your attention on five perspectives to evaluate a given client. Once you've reviewed these, you’re able to identify which ones are lagging behind and prioritize proposed work accordingly.

Next time, we’ll cover the second part: how to use the Balanced Digital Scorecard to prioritize your work.

If you use a framework to evaluate prospects or have thoughts on the Balanced Digital Scorecard, I’d love to hear from you. I welcome any feedback and/or questions!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

Monday, June 27, 2016

This post was originally in YouMoz, and was promoted to the main blog because it provides great value and interest to our community. The author's views are entirely his or her own and may not reflect the views of Moz, Inc.


As SEOs, we often find ourselves facing new changes implemented by search engines that impact how our clients' websites perform in the SERPs. With each change, it's important that we look beyond its immediate impact and think about its future implications so that we can try to answer this question: "If I were Google, why would I do that?"

Recently, Google implemented a series of manual penalties that affected sites deemed to have unnatural outbound links. Webmasters of affected sites received messages like this in Google Search Console:

Webmasters were notified in an email that Google had detected a pattern of "unnatural artificial, deceptive, or manipulative outbound links." The manual action itself described the link as being either "unnatural or irrelevant."

The responses from webmasters varied in their usual extreme fashion, with recommendations ranging from "do nothing" to "nofollow every outbound link on your site."

Google's John Mueller posted in product forums that you don't need to nofollow every link on your site, but you should focus on nofollowing links that point to a product, sales, or social media page as the result of an exchange.
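Mueller's advice can be mechanized. Below is a minimal sketch, assuming you maintain a list of domains whose links resulted from an exchange; the domain names and the regex-based approach are illustrative only, and a production tool should use a real HTML parser rather than regular expressions:

```python
import re

# Illustrative list of domains whose links resulted from an exchange
# (sponsored posts, paid reviews) -- the ones Mueller says to nofollow.
FLAGGED_DOMAINS = {"sponsored-widgets.example.com", "paid-review.example.net"}

def nofollow_flagged_links(html: str) -> str:
    """Add rel="nofollow" to <a> tags pointing at flagged domains.

    Simplified: assumes anchors of the form <a href="..."> without an
    existing rel attribute.
    """
    def fix(match: re.Match) -> str:
        tag, href = match.group(0), match.group(1)
        domain = re.sub(r"^https?://", "", href).split("/")[0]
        if domain in FLAGGED_DOMAINS and "rel=" not in tag:
            return tag[:-1] + ' rel="nofollow">'
        return tag

    return re.sub(r'<a\s+href="([^"]+)"[^>]*>', fix, html)

page = ('<a href="https://sponsored-widgets.example.com/item">widget</a> '
        '<a href="https://example.org/about">about us</a>')
print(nofollow_flagged_links(page))
```

Only the exchanged link gets the attribute; editorial links to unrelated domains are left untouched, which matches the "you don't need to nofollow everything" guidance.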

Now, on to the fun part of being an SEO: looking at a problem and trying to reverse-engineer Google's intentions to decipher the implications this could have on our industry, clients, and strategy.

The intent of this post is not to dismiss the opinions that this action specifically targeted bloggers who placed dofollow links in product/business reviews, but to present a few ideas to spark discussion about the potential big-picture strategy that could be at play here.

A few concepts that influenced my thought process are as follows:

Penguin has repeatedly missed its "launch date," which indicates that Google engineers don't feel it's accurate enough to release into the wild.

The growth of negative SEO makes it even more difficult for Google to identify/penalize sites for tactics that are not implemented on their own websites.

Penguin temporarily impacted link-building markets in a way Google would want. The decline reached its plateau in July 2015, as shown in this graph from Google Trends:

If I were Google, I would expect webmasters impacted by the unnatural outbound links penalty to respond in one of these ways:

Do nothing. The penalty is specifically stated to "discount the trust in links on your site." As a webmaster, do you really care if Google trusts the outbound links on your site or not? What about if the penalty does not impact your traffic, rankings, visibility, etc.? What incentive do you have to take any action? Even if you sell links, if the information is not publicly displayed, this does nothing to harm your link-selling business.

Innocent site cleanup effort. A legitimate site that has not exchanged goods for links (or wants to pretend they haven't) would simply go through their site and remove any links that they feel may have triggered the issue and then maybe file a bland reconsideration request stating as much.

Guilty site cleanup effort. A site that has participated in link schemes would know exactly which links are the offenders and remove them. Now, depending on the business owner, some might then file a reconsideration request saying, "I'm sorry, so-and-so paid me to do it, and I'll never do it again." Others may simply state, "Yes, we have identified the problem and corrected it."

In scenario No. 1, Google wins because this helps further the fear, uncertainty, and doubt (FUD) campaigns around link development. It becomes impossible to know whether a site's outbound links have value, because a penalty may be preventing them from passing value. So not only does link building continue to carry the risk of creating a penalty on your site, but it also becomes more obvious that you could exchange goods/money/services for a link that has no value despite its MozRank or any other external "ranking" metric.

In scenarios No. 2 and No. 3, Google wins because they can monitor the links that have been nofollowed/removed and add potential link scheme violators to training data.

In scenario No. 3, they may be able to procure evidence of sites participating in link schemes through admissions by webmasters who sold the links.

If I were Google, I would really love to have a control group of known sites participating in link schemes to further develop my machine-learned algorithm for detecting link profile manipulation. I would take the unnatural outbound link data from scenario No. 3 above and run those sites as a dataset against Penguin, aiming for 100% confidence, since all of those sites definitely participated in link schemes. Then I would tweak Penguin with this training dataset and issue manual actions against the linked sites.

This wouldn't be the first time SEOs have predicted a Google subtext of leveraging webmasters and their data to help further develop its algorithms for link penalties. In 2012, the SEO industry was skeptical about the disavow tool and whether or not Google was using it to crowdsource spam identification from webmasters.

"Clearly there are link schemes that cannot be caught through the standard algorithm. That's one of the reasons why there are manual actions. It's within the realm of possibilities that disavow data can be used to confirm how well they're catching spam, as well as identifying spam they couldn't catch automatically. For example, when web publishers disavow sites that were not caught by the algorithm, this can suggest a new area for quality control to look into." — Roger Montti, Martinibuster.com

What objectives could the unnatural outbound links penalties accomplish?

Legit webmasters could become more afraid to sell/place links because they get "penalized."

Spammy webmasters could continue selling links from their penalized sites, which would add to the confusion and devaluation of link markets.

Webmasters could become afraid to buy/exchange links because they could get scammed by penalized sites and be more likely to be outed by the legitimate sites.

The Penguin algorithm could have increased confidence scoring and become ready for real-time implementation.

"There was a time when Google would devalue the PR of a site that was caught selling links. With that signal gone, and Google going after outbound links, it is now more difficult than ever to know whether a link acquired is really of value." — Russ Jones, Principal Search Scientist at Moz

Again, if I were Google, the next generation of Penguin would likely heavily weight irrelevantly placed links, and not just commercial keyword-specific anchor text. Testing this first on the sites I think are guilty of providing the links and simply devaluing those links seems much smarter. Of course, at this point, there is no specific evidence to indicate Google's intention behind the unnatural outbound links penalties were intended as a final testing phase for Penguin and to further devalue the manipulated link market. But if I were Google, that's exactly what I would be doing.

"Gone are the days of easily repeatable link building strategies. Acquiring links shouldn’t be easy, and Penguin will continue to change the search marketing landscape whether we like it or not. I, for one, welcome our artificially intelligent overlords. Future iterations of the Penguin algorithm will further solidify the “difficulty level” of link acquisition, making spam less popular and forcing businesses toward legitimate marketing strategies." — Tripp Hamilton, Product Manager at Removeem.com

Google's webmaster guidelines show that link schemes are interpreted by intent. I wonder what happens if I start nofollowing links from my site with the intent of devaluing another site's rankings? The intent is manipulation. Am I at risk of being considered a participant in link schemes? If I do link building as part of an SEO campaign, am I inherently conducting a link scheme?

So, since I'm an SEO, not Google, I have to ask myself and my colleagues, "What does this do to change or reinforce my SEO efforts?" I immediately think back to a Whiteboard Friday from a few years ago that discusses the Rules of Link Building.

"At its best, good link building is indistinguishable from good marketing." — Cyrus Shepard, former Content Astronaut at Moz

When asked what type of impact SEOs should expect from this, Garret French from Citation Labs shared:

"Clearly this new effort by Google will start to dry up the dofollow sponsored post, sponsored review marketplace. Watch for prices to drop over the next few months and then go back and test reviews with nofollowed links to see which ones actually drive converting traffic! If you can't stomach paying for nofollowed links then it's time to get creative and return to old-fashioned, story-driven blog PR. It doesn't scale well, but it works well for natural links."

In conclusion, as SEOs, we are responsible for predicting the future of our industry; we do not simply act in the present. Google does not wish for its results to be gamed and has departments full of data scientists dedicated to building algorithms that identify and devalue manipulative practices. If you are incapable of legitimately building links, then you must mimic legitimate links in all aspects (or consider a new career).

Takeaways

Most importantly, any links that we try to build must provide value. If a URL links to a landing page that is not contextually relevant to its source page, then this irrelevant link is likely to be flagged and devalued. Remember, Google can do topical analysis, too.
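To make the relevance point concrete, one rough way to reason about what topical analysis might flag is to compare bag-of-words similarity between the linking page's text and the destination page's text. This is a hedged sketch — the sample texts and the simple cosine measure are illustrative assumptions, not Google's actual method:

```python
import math
import re
from collections import Counter

def tokens(text: str) -> Counter:
    """Bag-of-words vector for a chunk of page text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

source = "guide to buying car insurance quotes and comparing coverage"
on_topic = "cheap car insurance quotes compared across top providers"
off_topic = "chocolate chip cookie recipe with brown butter"

# A link from the source page to the on-topic page scores far higher
# than a link to the off-topic page.
print(round(cosine(tokens(source), tokens(on_topic)), 2))
print(round(cosine(tokens(source), tokens(off_topic)), 2))
```

Even this crude measure separates contextually relevant links from irrelevant ones; a link whose source and destination share essentially no vocabulary is the kind of placement that risks being devalued.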

In link cleanup mode or Penguin recovery, we've typically approached unnatural links as being obvious when they have a commercial keyword (e.g. "insurance quotes") because links more naturally occur with the URL, brand, or navigational labels as anchor text. It would also be safe to assume that natural links tend to occur in content about the destination the link offers and that link relevance should be considered.

Finally, we should continue to identify and present clients with methods for naturally building authority by providing value in what they offer and working to build real relationships and brand advocates.

What are your thoughts? Do you agree? Disagree?


Friday, June 24, 2016

The long tail of search can be a mysterious place to explore, often lacking the volume data that we usually rely on to guide us. But the keyword phrases you can uncover there are worth their weight in gold, often driving highly valuable traffic to your site. In this edition of Whiteboard Friday, Rand delves into core strategies you can use to make long tail keywords work in your favor, from niche-specific SEO to a bigger content strategy that catches many long tail searches in its net.

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we're going to chat about long tail SEO.

Now, for those of you who might not be familiar, there's basically a demand curve in the search engine world. Lots and lots of searchers are searching for very popular keywords in the NBA world like "NBA finals." Then we have a smaller number of folks who are searching for "basketball hoops," but it's still pretty substantial, right? Probably hundreds to thousands per month. Then maybe there are only a few dozen searches a month for something like "Miami Heat box ticket prices."

Then we get into the very long tail, where there are one, two, maybe three searches a month, or maybe not even. Maybe it's only a few searches per year for something like "retro Super Sonics customizable jersey Seattle."

Now, it's pretty tough to do keyword research anywhere in this long tail region. The long tail is almost a mystery to us, because the search engines themselves don't get enough volume to show these terms in a tool like AdWords or in Bing's research tools. Even Search Suggest or related searches will often not surface these kinds of terms and phrases; they just don't get enough volume. But for many businesses, and yours may be one of them, these keywords are actually quite valuable.

2 ways to think about long tail keyword targeting

#1: I think that there's this small set of hyper-targeted, specific keyword terms and phrases that are very high value to my business. I know they're not searched for very much, maybe only a couple of times a month, maybe not even that. But when they are, if I can drive the search traffic to my website, it's hugely valuable to me, and therefore it's worth pursuing a handful of these. A handful could be half a dozen, or it could be in the small hundreds of terms that you decide are worth going after even though they have a very small number of keyword searches. Remember, if we were to build 50 landing pages targeting terms that only get one or two searches a month, we still might get a hundred or a couple hundred searches every year coming to our site that are super valuable to the business. So, in general, when we're being this hyper-specific, these terms need to be...

Conversion-likely, meaning that we know we're going to convert those searchers into buyers if we can get them or searchers into whatever we need them to do.

They should be very low competition, because not a lot of people know about these keywords. There's not a bunch of sites targeting them already. There are no keyword research tools out there that are showing this data.

It should be a relatively small number of terms that we're targeting. Like I said, maybe a few dozen, maybe a couple hundred, generally not more than that.

We're going to try and build specifically optimized pages to turn those searchers into customers or to serve them in whatever way we need.

#2: The second way is to have a large-scale sort of blast approach, where we're less targeted with our content, but we're covering a very wide range of keyword targets. This is what a lot of user-generated content sites, large blogs, and large content sites are doing with their work. Maybe they're doing some specific keyword targeting, but they're also kind of trying to reach this broad group of long tail keywords that might be in their niche. It tends to be the case that there's...

A ton of content being produced.

It's less conversion-focused in general, because we don't know the intent of all these searchers, particularly on the long tail terms.

We are going to be targeting a large number of terms here.

There are no specific keyword targets available. So, in general, we're focused more on the content itself and less on the specificity of that keyword targeting.

Niche + specific long tail SEO

Now, let's start with the niche and specific. The way I'm going to think about this is I might want to build these pages — my retro Super Sonics jerseys that are customizable — with my:

Standard on-page SEO best practices.

I'm going to do my smart internal linking.

I really don't need very many external links. One or two will probably do it. In fact, a lot of times, when it comes to long tail, you can rank with no external links at all, internal links only.

Quality content investment is still essential. I need to make sure that this page gets indexed by Google, and it has to do a great job of converting visitors. So it's got to serve the searcher intent. It can't look like automated content, it can't look low quality, and it certainly can't dissuade visitors from coming, because then I've wasted all the investment that I've made getting that searcher to my page. Especially since there are so few of them, I better make sure this page does a great job.

A) PPC is a great way to go. You can do a broad-term PPC buy in AdWords or in Bing, and then discover these hyper-specific opportunities. So if I'm buying keywords like "customizable jerseys," I might see that, sure, most of them are for teams and sports that I've heard of, but there might be some that come to me that are very, very long tail. This is actually a reason why you might want to do those broad PPC buys for discovery purposes, even if the ROI isn't paying off inside your AdWords campaign. You look and you go, "Hey, it doesn't pay to do this broad buy, but every week we're discovering new keywords for our long tail targeting that does make it worthwhile." That can be something to pay attention to.

B) You can use some keyword research tools, just not AdWords itself, because AdWords' bias is to show you more commercial terms, and it biases toward showing terms and phrases that actually have search volume. What you want to do is find keyword research tools that can show you keywords with zero searches, no search volume at all. So you could use something like Moz's Keyword Explorer. You could use KeywordTool.io. You could use Übersuggest. You could use some of the keyword research tools from the other providers out there, like Searchmetrics or what have you. In all of these tools, what you want to find are those 0–10 searches keywords, because those are going to be the ones that have very, very little volume but are potentially super high-value for your specific website or business.
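That filtering step is simple to do yourself once you have an export. A minimal sketch, assuming a hypothetical keyword-tool export of (keyword, monthly volume) pairs:

```python
# Hypothetical keyword-tool export: (keyword, monthly search volume)
suggestions = [
    ("freshwater pearls", 4100),
    ("freshwater pearls price", 320),
    ("retro super sonics customizable jersey", 0),
    ("clean freshwater pearls with vinegar", 10),
    ("cultured freshwater pearl value", 8),
]

# Keep only the 0-10 volume tail -- the terms most tools hide by default.
long_tail = [kw for kw, volume in suggestions if volume <= 10]
print(long_tail)
```

Sorting the survivors by relevance to your business (rather than by volume, which is near-zero for all of them) is then a manual judgment call.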

C) Be aware that keyword difficulty scores may not actually be that useful in these cases. Keyword difficulty scores — this is true for Moz's keyword difficulty score and for all the other tools that do keyword difficulty — tend to look at a search result and then ask, "How many links, or how high is the domain authority and page authority, or all the link metrics that point to these 10 pages?" The problem is, in a set where very few people are doing very specific keyword targeting, you could have powerful pages that are not actually optimized at all for these keywords and aren't really relevant; therefore, it might be much easier than the keyword difficulty score suggests to rank for those terms. So my advice is to look at the keyword targeting to spot the opportunity. If you see that none of the 10 pages actually includes all the keywords, or that only one of them seems to actually serve the searcher intent for these long tail keywords, you've probably found yourself a great long tail SEO opportunity.
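That manual SERP check can be sketched in a few lines. Given a list of result titles (the query and titles below are hypothetical), count how many actually target every word of the query:

```python
def targets_query(title: str, query: str) -> bool:
    """True if every word of the query appears in the result title."""
    lowered = title.lower()
    return all(word in lowered for word in query.lower().split())

query = "customizable retro jersey seattle"

# Hypothetical top results: strong domains, but weak keyword targeting.
serp_titles = [
    "NBA Jerseys | Official Online Store",
    "Seattle Sports News and Scores",
    "Customizable Retro Jersey - Seattle Throwbacks Shop",
    "Basketball Equipment and Apparel",
]

matches = sum(targets_query(title, query) for title in serp_titles)
print(f"{matches} of {len(serp_titles)} titles target the full query")
```

When only zero or one results target the full phrase, a high difficulty score is probably overstating the competition, and a well-optimized page has a real shot.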

Large-scale, untargeted long tail SEO

This is very, very different in approach. It's going to be for a different kind of website, different application. We are not targeting specific terms and phrases that we've identified. We're instead saying, "You know what? We want to have a big content strategy to own all types of long tail searches in a particular niche." That could be educational content. It could be discussion content. It could be product content, where you're supporting user-generated content, those kinds of things.

I want a bias to the uniqueness of the content itself and real searcher value, which means I do need content that is useful to searchers, useful to real people. It can't be completely auto-generated.

I'm worrying less about the particular keyword targeting. I know that I don't know which terms and phrases I'm going to be going after. So instead, I'm biasing to other things, like usefulness, amount of uniqueness of content, the quality of it, the value that it provides, the engagement metrics that I can look at in my analytics, all that kind of stuff.

You want to be careful here. Anytime you're doing broad-scale content creation or enabling content creation on a platform, you've got to keep low-value, low-unique content pages out of Google's index. That could be done two ways. One, you limit the system to only allow in certain amounts of content before a page can even be published. Or you look at the quantity of content that's being created or the engagement metrics from your analytics, and you essentially block — via robots.txt or via meta robots tag — any of the pages that look like they're low-value, low-unique content.
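The second option — gating index eligibility per page — can be sketched as a simple decision function that emits a meta robots tag. The word-count and uniqueness thresholds below are illustrative assumptions, not Google guidance:

```python
def robots_meta(word_count: int, unique_ratio: float,
                min_words: int = 150, min_unique: float = 0.5) -> str:
    """Pick a meta robots tag for a UGC page.

    Thin or duplicate-heavy pages get noindexed so they stay out of
    Google's index; thresholds here are illustrative only.
    """
    if word_count < min_words or unique_ratio < min_unique:
        return '<meta name="robots" content="noindex, follow">'
    return '<meta name="robots" content="index, follow">'

print(robots_meta(word_count=42, unique_ratio=0.9))    # thin page
print(robots_meta(word_count=800, unique_ratio=0.85))  # substantial page
```

Keeping "follow" on the noindexed variant preserves crawl paths through the page while keeping the page itself out of the index; you could also feed engagement metrics into the same decision.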

A) This approach requires a lot of scalability, and so you need something like a:

Discussion forum

Q&A-style content

User-posted product or service or business listings. Think something like an Etsy or a GitHub or a Moz Q&A, discussion forums like Reddit. These all support user-generated content.

You can also go with non-UGC content if it's editorially created. Something like a frequently updated blog or news content can also work, particularly if you have enough of a staff to create that content regularly so that you're pumping out good stuff on a consistent basis. It's generally not as scalable, but you have to worry less about the uniqueness and quality of the content.

B) You don't want to fully automate this system. The worst thing you can possibly do is take a site that has been doing well, pump out hundreds, thousands, or tens of thousands of low-quality, low-uniqueness pages, and throw them up on the site; Google can hit you with something like the Panda penalty, which has happened to a lot of sites we've seen over the years. Google continues to iterate and refine that algorithm, so be very cautious. You need some human curation in order to make sure the uniqueness of content and value remain above the level you need.

C) If you're going to be doing this large-scale content creation, I highly advise you to make the content management system or the UGC submission system work in your favor. Make it do some of that hard SEO legwork for you, things like...

Nudging users to give more descriptive, more useful content when they're creating it for you.

Requiring some minimum level of content before a post can even be published.

Using spam software to catch and evaluate submissions before they go into your system. If a post has lots of links, or if it contains poison keywords or spam keywords, kick it out.

Encouraging and rewarding high-quality contributions. If you see users or content consistently doing well through your engagement metrics, go find out who those users are and reward them. Promote that content; push it to higher visibility. You want to make this a system that rewards the best stuff and keeps the bad stuff out. A great UGC content management system can do this for you if you build it right.
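The submission-side checks above can be sketched as one validation function; the minimum length, link cap, and poison-keyword list are illustrative assumptions to tune for your own niche:

```python
# Illustrative poison-keyword list and thresholds -- tune to your niche.
SPAM_TERMS = {"viagra", "casino", "payday loans"}

def validate_post(text: str, min_words: int = 50, max_links: int = 2) -> list:
    """Return the reasons a UGC submission should be rejected (empty = OK)."""
    problems = []
    lowered = text.lower()
    if len(text.split()) < min_words:
        problems.append("too short")
    if lowered.count("http") > max_links:
        problems.append("too many links")
    if any(term in lowered for term in SPAM_TERMS):
        problems.append("contains spam keyword")
    return problems

print(validate_post("Nice pearls!"))  # fails the minimum-length check
```

Running this at submission time, and surfacing the reasons back to the user, doubles as the "nudge toward more descriptive content" from the first bullet.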

All right, everyone, look forward to your thoughts on long tail SEO, and we'll see you again next week for another edition of Whiteboard Friday. Take care.


Thursday, June 23, 2016

Welcome to the fifth installment of our educational Next Level series! Last time, we led you on a journey to perform a heroic site audit. This time around we're diving into the subject of long tail keywords, equipping you with all the tools you'll need to uncover buried treasure.

One of the biggest obstacles to driving forward your business online is being able to rank well for keywords that people are searching for. Getting your lovely URLs to show up in those precious top positions — and gaining a good portion of the visitors behind the searches — can feel like an impossible dream.

Particularly if you’re working on a newish site on a modest budget within a competitive niche.

Well, strap yourself in, because today we’re going to live that dream. I’ll take you through the bronze, silver, and gold levels of finding, assessing, and targeting long tail keywords so you can start getting visitors to your site who are primed and ready to convert.

So what the bloomin’ heck are long tail keywords?

The 'long tail of search' refers to the many weird and wonderful ways the diverse people of the world search for what they're after in any given niche.

They clarify their searches with emotional triggers and technical terms they’ve learned from reading forums, and they compare features and prices before mustering up the courage to commit and convert on your site.

The long tail is packed with searches like “best web designer in Nottingham” or “mirrorless camera 4k video 2016” or “sailor moon cat costume.”

This lovely chart visualizes the long tail of search by using the tried and tested "Internet loves cats + animated gifs are the coolest = SUCCESS" formula.

All along that tail are searches being constantly generated by people seeking answers from the Internet hive mind. There's no end to what you’ll find if you have a good old rummage about, including: questions, styles, colors, brands, concerns, peeves, desires, hopes, dreams… and everything in between.

Fresh, new, outrageous, often bizarre keywords. If you’ve done any keyword research, you’ll know what I mean by bizarre. Things a person wouldn’t admit to their therapist, priest, or doctor, they’ll happily pump into Google and hit search. And we’re going to go diving for pearls: keywords with searcher intent, high demand, low competition, and a spot on the SERPs just for you.

Bronze medal: Build your own keyword

It’s really easy to come up with a long tail keyword. You can use your brain, gather some thoughts, take a stab in the dark, and throw a few keyword modifiers around your head keyword.

Have you ever played with that magnetic fridge poetry game? It’s a bit like that. You can play online if (like me) you have an aversion to physical things.

I’m no poet, but I think I deserve a medal for this attempt, and now I really want some "hot seasonal berry water."

Magnetic poetry not doing it for you? Don’t worry — that’s only the beginning.

Use your industry knowledge

Time to draw on that valuable industry knowledge you’ve been storing up, jot down some ideas, and think about intent and common misconceptions. I’m going to use the example “pearls” or “freshwater pearls” as the head term in this post, because that’s something I’m interested in.

Let’s go!

How do I clean freshwater pearls

Ok, cool, adding to my list.

Search your keyword

Now you can get some more ideas by manually entering your keyword into Google and prompting it to give you popular suggestions, like I’ve done below:

Awesome — I’m adding “freshwater pearls price” to my list.

Explore the language of social media

Get amongst the over-sharers and have a look at what people are chatting about on social media by searching your keyword on Twitter, Instagram, and YouTube. These are the topics in your niche that people are talking about right now.

Twitter and Instagram are proving tricky to explore for my head term because it’s jam-packed with people selling pearl jewelry.

Shout out to a cheeky Moz tool, Followerwonk, for helping with this stage. I’m searching Twitter bios to find Twitter accounts with "freshwater pearls."

Click these handy little graph icons for a more in-depth profile analysis

I can now explore what they’re tweeting, I can follow them and find out who is engaging with them, and I can find their most important tweets. Pretty groovy!

YouTube is also pulling up some interesting ideas around my keyword. This is simultaneously helping me gather keyword ideas and giving me a good sense about what content is already out there. Don’t worry, we’ll touch on content later on in this post. :)

I’m adding “understanding types of pearls” and “difference between saltwater and freshwater pearls” to my list.

Ask keyword questions?

You’ll probably notice that I’ve added a question mark to a phrase that is not a question, just to mess with you all. Apologies for the confusing internal-reading-voice-upwards-inflection.

Questions are my favorite types of keywords. What!? You don’t have a fav keyword type? Well, you do now — trust me.

Answer the Public is packed with questions, and it has the added bonus of having this tooth-picking (not bogie-picking, thank goodness!) dude waiting for you to impress him.

So let’s give him something to munch on and pop freshwater pearls in there, too, then grab some questions for our growing list.

Now this is starting to look interesting: we’ve got some keyword modifiers, some clear buying signals, and a better idea of what people might be looking for around "freshwater pearls."

Should you stop there? I’m flabbergasted — how can you even suggest that?! This is only the beginning. :)

Silver medal: Assess demand and explore topics

So far, so rosy. But we've been focusing on finding keywords, picking them up, and stashing them in our collection like colored glass at the seaside.

To really dig into the endless tail of your niche, you’ll need a keyword tool like our very own Keyword Explorer (KWE for short). This is invaluable to finding topics within your niche that present a real opportunity for your site.

If you’re trying out KWE for the first time, you get 2 searches free per day without having to log in, but you get a few more with your Community account and even more with a Moz Pro subscription.

Find search volume for your head keyword

Let’s put "pearls" into KWE. Now you can see how many times it’s being searched per month in Google:

Now try "freshwater pearls." As expected, the search volume goes down, but we’re getting more specific.

We could keep going like this, but we’re going to burn up all our free searches. Just take it as read that, as you get more specific and enter all the phrases we found earlier, the search volume will decrease even more. There may not be any data at all. That’s why you need to explore the searches around this main keyword.

Find even more long tail searches

Below the search volume, click on "Keyword Suggestions."

Well, hi there, ever-expanding long tail! We’ve gone from a handful of keywords pulled together manually from different sources to 1,000 suggestions right there on your screen. Positioned right next to that we have search volume to give us an idea of demand.

The diversity of searches within your niche is just as important as that big number we saw at the beginning, because it shows you how much demand there is for this niche as a whole. We’re also learning more about searcher intent.

I’m scanning through those 1,000 suggestions and looking for other terms that pop up again and again. I’m also looking for signals and different ways the words are being used to pick out words to expand my list.

I like to toggle between sorting by relevancy and search volume, and then scroll through all the results to cherry-pick those that catch my eye.

Now reverse the volume filter so that it’s showing lower-volume search terms and scroll down through the end of the tail to explore the lower-volume chatter.
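The sort-then-flip workflow above can be sketched in a few lines of Python. The keywords and volumes below are made-up placeholders, not real Keyword Explorer data:

```python
# Hypothetical (keyword, monthly search volume) pairs standing in for a KWE export.
keywords = [
    ("freshwater pearls", 6800),
    ("cultured freshwater pearls", 900),
    ("how much are freshwater pearls worth", 150),
    ("clean freshwater pearls", 70),
]

# Sort by volume, highest first: the big, obvious head of the niche.
by_volume_desc = sorted(keywords, key=lambda kw: kw[1], reverse=True)

# Flip the filter: lowest volume first, to explore the quieter chatter in the tail.
by_volume_asc = sorted(keywords, key=lambda kw: kw[1])
```

Either view is the same list; only the ordering changes, which is exactly what the toggle in the tool is doing for you.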

This is where your industry knowledge comes into play again. Bots, formulas, spreadsheets, and algorithms are all well and good, but don’t discount your own instincts and knowledge.

Use the suggestions filters to your advantage and play around with broader or more specific suggestion types. Keyword Explorer pulls together suggestions from AdWords, autosuggest, related searches, Wikipedia titles, topic modeling extractions, and SERPscape.

Looking through the suggestions, I’ve noticed that the word “cultured” has popped up a few times.

To see these all bundled together, I want to look at the grouping options in KWE. I like the high lexicon groups so I can see how much discussion is going on within my topics.

Scroll down and expand that group to get an idea of demand and assess intent.
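Bundling together every suggestion that shares a modifier, the way the grouping view does, amounts to a simple filter. The suggestion list here is invented for illustration:

```python
# Invented suggestions standing in for the 1,000 KWE returns.
suggestions = [
    "cultured pearls meaning",
    "freshwater cultured pearl necklace",
    "are cultured pearls real",
    "freshwater pearl earrings",
    "how much are freshwater pearls worth",
]

# Pull out every suggestion containing the modifier we keep seeing.
cultured_group = [s for s in suggestions if "cultured" in s]
```

Swap in "price" or "value" for "cultured" and you've got the next group for your sheet.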

I’m also interested in the words around "price" and "value," so I’m doing the same and saving those to my sheet, along with the search volume. A few attempts at researching the "cleaning" of pearls wasn’t very fruitful, so I’ve adjusted my search to "clean freshwater pearls."

Because I’m a keyword questions fanatic, I’m also going to filter by questions (the bottom option from the drop-down menu):

OK! How is our list looking? Pretty darn hot, I reckon! We’ve gathered together a list of keywords and dug into the long tail of these sub-niches, and right alongside we’ve got search volume.

You’ll notice that some of the keywords I discovered in the bronze stage don’t have any data showing up in KWE (indicated by the hyphen in the screenshot above). That’s ok — they’re still topics I can research further. This is exactly why we have assessed demand; no wild goose chase for us!

Gold medal: Find out who you’re competing with

We’re not operating in a vacuum. There’s always someone out there trying to elbow their way onto the first page. Don’t fall into the trap of thinking that, just because a long tail term has a nice chunk of search volume, all those clicks will rain down on you. If the terms you’re looking to target already have big names headlining, this could very well alter your roadmap.

To reap the rewards of targeting the long tail, you’ll have to make sure you can outperform your competition.

Manually check the SERPs

Check out who's showing up in the search engine results pages (SERPs) by running a search for your head term. Make sure you’re signed out of Google and browsing in an incognito tab.

We’re focusing on the organic results to find out if there are any weaker URLs you can pick off.

I’ll start with “freshwater pearls” for illustrative purposes.

Whoooaaa, this is a noisy page. I’ve had to scroll a whole 2.5cm on my magic mouse (that’s very nearly a whole inch for the imperialists among us) just to see any organic results.

Let’s install the MozBar to discover some metrics on the fly, like Domain Authority and backlink data.

Now, if seeing those big players in the SERPs doesn’t make it clear, looking at the Mozbar metrics certainly does. This is exclusive real estate. It’s dominated by retailers, although Wikipedia gets a place in the middle of the page.

Let’s get into the mind of Google for a second. It — or should I say "they" (I can’t decide if it’s more creepy for Google to be referred to as a singular or plural pronoun. Let’s go with "they") — anyway, I digress. "They" are guessing that we’re looking to buy pearls, but they're also offering results on what they are.

This sort of information is offered up by big retailers who have created content that targets the intention of searchers. Mikimoto drives us to their blog post all about where freshwater pearls come from.

As you get deeper into the long tail of your niche, you’ll begin to come across sites you might not be so familiar with. So go and have a peek at their content.

Document all of your findings in our spreadsheet from earlier to keep track of the data. This information will give you a realistic sense of your chances of ranking for that term.

Manually checking out your competition is something that I would strongly recommend. But we don’t have all the time in the world to check each results page for each keyword we’re interested in.

Keyword Explorer leaps to our rescue again

Run your search and click on "SERP Analysis" to see what the first page looks like, along with authority metrics and social activity.

All the metrics for the organic results, like Page Authority, go into calculating the Difficulty score above (lower is better).

And all those other factors — the ads and suggestions taking up space on the SERPs — that's what's used to calculate Opportunity (higher is better).

Potential rolls the other metrics up into a single score. You definitely want this one to be higher.

So now we have 3 important numerical values we can use to gauge our competition. We can use these values to compare keywords.
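As a rough illustration of weighing those three values against each other, here's a toy comparison in Python. The scores are invented, and KWE's real Potential calculation is its own blend of metrics, not this simple formula:

```python
# Invented scores for two candidate keywords (not real KWE output).
candidates = {
    "freshwater pearls": {"difficulty": 74, "opportunity": 41, "potential": 52},
    "how much are freshwater pearls worth": {"difficulty": 48, "opportunity": 57, "potential": 55},
}

def rough_score(m):
    # Lower Difficulty is better; higher Opportunity and Potential are better.
    return m["opportunity"] + m["potential"] - m["difficulty"]

best = max(candidates, key=lambda kw: rough_score(candidates[kw]))
```

In this made-up example the long tail question wins, despite the head term's bigger volume, because it's so much easier to compete for.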

After a few searches in KWE, you’re going to start hankering for a keyword list or two. For this you’ll need a paid subscription, or a Moz Pro 30-day free trial.

It’s well worth the sign-up; not only do you get 5,000 keyword reports per month and 30 lists (on the Medium plan), but you also get to check out the super-magical-KWE-mega-list-funky-cool metric page. That’s what I call it, just rolls off the tongue, you know?

Then head up to your lists on the top right and open up the one you just created.

Now we can see the spread of demand, competition, and SERP features for our whole list.

You can compare Volume, SERP features, Difficulty, Opportunity, and Potential across multiple lists, topics, and niches.

How to compare apples with apples

Comparing keywords is something we get asked about quite a bit on the Moz Help Team.

Should I target this word or that word?

For the long tail keyword, the Volume is a lot lower, Difficulty is also down, the Opportunity is a bit up, and overall the Potential is down because of the drop in search volume.

But don’t discount it! By targeting these sorts of terms, you’re focusing more on the intent of the searcher. You’re also making your content relevant for all the other neighboring search terms.

Let’s compare "difference between freshwater and cultured pearls" with "how much are freshwater pearls worth."

Search volume is the same, but for the keyword "how much are freshwater pearls worth," Difficulty is up; so is the overall Potential, because the Opportunity is higher.

But just because you’re picking between two long tail keywords doesn’t mean you’ve fully understood the long tail of search.

You know all those keywords I grabbed for my list earlier in this post? Well, here they are sorted into topics.

Look at all the different ways people search for kind of the same thing. This is what drives the long tail of search — searcher diversity. If you tally up all the volume for the cultured topic, you’ve got a bigger group of keywords and, overall, more search volume. This is where you can use Keyword Explorer and the long tail to make informed decisions.
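Tallying up topic-level volume from your research sheet is a one-pass sum. The rows below are made-up examples standing in for the spreadsheet data:

```python
from collections import defaultdict

# Made-up (topic, keyword, monthly volume) rows from the research spreadsheet.
rows = [
    ("cultured", "cultured pearls", 2900),
    ("cultured", "difference between freshwater and cultured pearls", 150),
    ("cultured", "are cultured pearls real pearls", 100),
    ("value", "how much are freshwater pearls worth", 150),
]

# Sum the volume for each topic to see which group carries the most demand.
topic_volume = defaultdict(int)
for topic, _keyword, volume in rows:
    topic_volume[topic] += volume
```

Comparing topics by their combined volume, rather than keyword by keyword, is what surfaces a group like "cultured" as the bigger opportunity.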

You’re laying out your virtual welcome mat for all the potential traffic these terms send.

Platinum level: I lied — there's one more level!

For all you lovely overachievers out there who have reached the end of this post, I’m going to reward you with one final tip.

You’ve done all the snooping around on your competitors, so you know who you’re up against. You’ve done the research, so you know what keywords to target to begin driving intent-rich traffic.

Here's where you really have to tip your hat to long tail keywords, because by targeting the long tail you can start to build enough authority in the industry to beat stronger competition and rank higher for more competitive keywords in your niche.

Wrapping up…

The long tail of keyword phrases in your industry is vast; those phrases are often easier to rank for, and they indicate stronger intent from the searcher. By targeting them you’ll find you can start to rank for relevant phrases sooner than if you just targeted the head. And over time, if you get the right signals, you’ll be able to rank for keywords with tougher competition. Pretty sweet, huh? Give our Keyword Explorer tool a whirl and let me know how you get on :)

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!