
OTT is increasingly being tested by advertisers as more inventory becomes available, says Nicole Whitesel, SVP of Enterprise Strategy at Publicis Media. “In the past, OTT was seen as a nascent channel with limited reach,” said Whitesel. “I think now you’re seeing a lot more inventory there available to them to buy. I think their willingness to test things where they’re unsure of outcomes has been increased more than ever before.”

Nicole Whitesel, SVP of Enterprise Strategy at Publicis Media, recently discussed the increased experimentation with OTT by agencies and their clients in an interview with BeetTV:

OTT is the Next Step in the Digital Revolution for Ad Buyers

One of the things we’re seeing is clients’ appetites being larger than ever before to explore. In the past, OTT was seen as a nascent channel with limited reach. I think now you’re seeing a lot more inventory there available to them to buy. I think their willingness to test things where they’re unsure of outcomes has been increased more than ever before.

We’re really talking about the next step. Think of the digital revolution maybe seven years ago: people who were early movers in that space had an advantage.

We’re thinking about the space in a similar way. There’s an opportunity to get in early and test things, build operational muscle between teams that maybe haven’t worked together as closely before. We really see that as an opportunity this year to do a lot of that work.

Agency Teams Working Together to Buy OTT Inventory

You have teams where historically broadcast teams and national teams have bought broadcast. Then you have teams that are more precision- or audience-driven that buy programmatic. You’re seeing a lot of work between those teams now to think about the way we’re buying connected TV inventory, if you will, or OTT.

You have a broadcast team that might be negotiating as part of an upfront and then you have an activation team who’s actually activating within a quarter against a specific audience, buying that inventory in-quarter.

Those are teams that historically don’t work closely together on an ongoing basis outside of the upfront. We see that as an opportunity to bring those teams closer together and to work more closely with clients as they learn these new channels and come to understand them. That extends to analytics and measurement as well. How are we measuring them? What’s the contribution when compared to historically traditional channels like linear TV?

Opportunity for Direct to Consumer Companies

I think there’s an opportunity for direct-to-consumer (DTC) companies to enter the space through these new channels, which didn’t exist before from a linear broadcast perspective. A lot of inventory was sold in the upfront and there was limited inventory available on an ongoing basis. That’s changing with these new channels and the inventory that’s available through connected TV or FEP inventory.

They have an opportunity to buy that in a way that benefits their business model and works with the way their business is set up to run, with retail quarters, seasonality, the things that make sense for them. They don’t have to make a commitment a year in advance. They can do it when it makes sense for their business.

Getting Smarter With Broadcast Partners

I think there’s an opportunity for us to get smarter about the way we partner with our broadcast partners. Historically, we’ve gone in and said we want this CPM and this flexibility, and this is the programming or the dayparts we want to buy.

I think there’s an opportunity for us to say, hey, we want to buy this from an upfront perspective, but here’s all the other inventory that you manage that we also want to think about buying. We can collectively leverage dollars and get things that are valuable for our brands and our clients that allows them the flexibility to test these new channels.

TV Attribution – A Big Next Step for Ad Buyers

I think TV attribution is one of the big next steps for our industry. Being able to understand the contribution of a specific channel, and its cost, associated with an outcome brands care about is, I think, the next big opportunity for us. Then we’ll understand investment in media mix across those different video channels.

There is nothing short of a revolution happening in the food marketplace today and it is not a quiet one, says Walter Robb, the former co-CEO of Whole Foods. “It is disrupting things left and right, all the way up the value chain back into the farmer’s field,” says Robb.

Walter Robb, former co-CEO of Whole Foods, discusses the revolution happening in the food marketplace in an interview on CNBC:

Nothing Short of a Revolution Happening in the Food Marketplace

There is nothing short of a revolution happening in the food marketplace today and it is not a quiet one. It is disrupting things left and right, all the way up the value chain back into the farmer’s field. For me, to see these (organic) brands and to see it show up at the Super Bowl, the biggest media stage of the world, is kind of an exciting thing.

Some 75 percent of the food we eat comes from 12 plants. Somebody’s woken up to that, realizing, wow, there’s a whole lot of stuff we can create from stuff we don’t even know yet. The Natural Food Expo, which is next month in LA, is expecting 85,000 people. This is where the energy and the edge of the food industry is right now.

We’ve broken into this area now where there’s an amazing amount of innovation with young companies and entrepreneurs. This is where the growing edge of the food industry is now. It’s not just natural and organic but it’s this innovation around new foods and new food types.

Amazing Amount of Innovation With Entrepreneurs

You have to build the tools to really understand your customer personally. I think it’s pretty exciting to see what’s happening. On the physical side, Walmart is doing a lot of things, Kroger is doing a lot of things, and Whole Foods is doing a lot of things to try to integrate digital and physical retail in a way that gives the customer a very rich experience.

I do think, in terms of food service delivery, Grubhub has had phenomenal growth. What’s happened is the world has woken up to how exciting food is again. We kind of went along after World War II for a number of years with this kind of doldrum of production, just regular stuff from the major CPG brands.

If you get a $5 latte and there’s probably a $5 delivery charge, at what point does the customer say that’s a value problem? I don’t know, but I think we’re going to find out. I do think this idea that the customer wants convenience is here to stay, and that they’re used to having that option. In some cases, they will choose it. Where that line is, it’s too early to say; at some point they’ll say that’s too expensive or that’s not a good deal.

Existential threats to SEO

Rand called “Not Provided” the First Existential Threat to SEO in 2013. While 100% Not Provided was certainly one of the largest and most egregious data grabs by Google, it was part of a long and continued history of Google pulling data sources that benefited search engine optimizers.

A brief history

Nov 2010 – Deprecate search API

Oct 2011 – Google begins Not Provided

Feb 2012 – Sampled data in Google Analytics

Aug 2013 – Google Keyword Tool closed

Sep 2013 – Not Provided ramped up

Feb 2015 – Link Operator degraded

Jan 2016 – Search API killed

Mar 2016 – Google ends Toolbar PageRank

Aug 2016 – Keyword Planner restricted to paid

I don’t intend to say that Google made any of these decisions specifically to harm SEOs, but that the decisions did harm SEO is inarguable. In our industry, like many others, data is power. Without access to SERP, keyword, and analytics data, our and our industry’s collective judgement is clouded. A recent survey of SEOs showed that data is more important to them than ever, despite these data retractions.

So how do we proceed in a world in which we need data more and more but our access is steadily restricted by the powers that be? Perhaps we have an answer — clickstream data.

What is clickstream data?

First, let’s give a quick definition of clickstream data for those who are not yet familiar: put simply, clickstream data is the recorded sequence of pages a user visits and the clicks they make along the way.

If you’ve spent any time analyzing your funnel or looking at how users move through your site, you have utilized clickstream data in performing clickstream analysis. However, traditionally, clickstream data is restricted to sites you own. But what if we could see how users behave across the web — not just our own sites? What keywords they search, what pages they visit, and how they navigate the web? With that data, we could begin to fill in the data gaps previously lost to Google.

I think it’s worthwhile to point out the concerns presented by clickstream data. As a webmaster, you must be thoughtful about what you do with user data. You have access to the referrers which brought visitors to your site, you know what they click on, you might even have usernames, emails, and passwords. In the same manner, being vigilant about anonymizing data and excluding personally identifiable information (PII) has to be the first priority in using clickstream data. Moz and our partners remain vigilant, including our latest partner Jumpshot, whose algorithms for removing PII are industry-leading.

What can we do?

So let’s have some fun, shall we? Let’s start to talk about all the great things we can do with clickstream data. Below, I’ll outline a half dozen or so insights we’ve gleaned from clickstream data that are relevant to search marketers and Internet users in general. First, let me give credit where credit is due — the data for these insights comes from two excellent partners: Clickstre.am and Jumpshot.

Popping the filter bubble

It isn’t very often that the interests of search engine marketers and social scientists intersect, so this is a rare opportunity for me to blend my career with my formal education. Search engines like Google personalize results in a number of ways. We regularly see personalization of search results in the form of geolocation, previous sites visited, or even SERP features tailored to things Google knows about us as users. One question posed by social scientists is whether this personalization creates a filter bubble, where users only see information relative to their interests. Of particular concern is whether this filter bubble could influence important informational queries like those related to political candidates. Does Google show uniform results for political candidate queries, or do they show you the results you want to see based on their personalization models?

Well, with clickstream data we can answer this question quite clearly by looking at the number of unique URLs which users click on from a SERP. Personalized keywords should result in a higher number of unique URLs clicked, as users see different URLs from one another. We randomly selected 50 search-click pairs (a searched keyword and the URL the user clicked on) for the following keywords to get an idea of how personalized the SERPs were.

Dropbox – 10

Google – 12

Donald Trump – 14

Hillary Clinton – 14

Facebook – 15

Note 7 – 16

Heart Disease – 16

Banks Near Me – 107

Landscaping Company – 260

As you can see, highly personalized keywords like “banks near me” or “landscaping company” — which are dependent upon location — receive a large number of unique URLs clicked. This is to be expected and validates the model to a degree. However, candidate names like “Hillary Clinton” and “Donald Trump” are personalized no more than major brands like Dropbox, Google, or Facebook and products like the Samsung Note 7. It appears that the hypothetical filter bubble has burst — most users see the exact same results as one another.
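The unique-URL count behind this analysis is easy to sketch. The search-click pairs below are invented stand-ins for the real clickstream sample, but the mechanics are the same: sample pairs for a keyword and count distinct clicked URLs.

```python
import random

# Hypothetical search-click pairs: (keyword searched, URL clicked).
# The real analysis sampled 50 pairs per keyword from clickstream data.
pairs = [
    ("dropbox", "https://www.dropbox.com/"),
    ("dropbox", "https://www.dropbox.com/"),
    ("banks near me", "https://www.chase.com/branch/a"),
    ("banks near me", "https://www.wellsfargo.com/locator/b"),
    ("banks near me", "https://www.bankofamerica.com/locations/c"),
]

def personalization_score(pairs, keyword, sample_size=50, seed=0):
    """Number of unique URLs clicked among a random sample of
    search-click pairs for one keyword; higher suggests more
    personalized (or localized) results."""
    clicks = [url for kw, url in pairs if kw == keyword]
    random.seed(seed)
    sample = random.sample(clicks, min(sample_size, len(clicks)))
    return len(set(sample))

print(personalization_score(pairs, "dropbox"))        # 1 unique URL
print(personalization_score(pairs, "banks near me"))  # 3 unique URLs
```

With real data, keywords whose scores approach the sample size (like “landscaping company” above) are the ones where location or personalization dominates the SERP.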

Biased search behavior

But is that all we need to ask? Can we learn more about the political behavior of users online? It turns out we can. One of the truly interesting features of clickstream data is the ability to do “also-searched” analysis. We can look at clickstream data and determine whether or not a person or group of people are more likely to search for one phrase or another after first searching for a particular phrase. We dove into the clickstream data to see if there were any material differences between subsequent searches of individuals who looked for “donald trump” and “hillary clinton,” respectively. While the majority of the searches were quite the same, as you would expect, searching for things like “youtube” or “facebook,” there were some very interesting differences.

For example, individuals who searched for “donald trump” were 2x as likely to then go on to search for “Omar Mateen” than individuals who previously searched for “hillary clinton.” Omar Mateen was the Orlando shooter. Individuals who searched for “Hillary Clinton” were about 60% more likely to search for “Philando Castile,” the victim of a police shooting and, in particular, one of the more egregious examples. So it seems — at least from this early evidence — that people carry their biases to the search engines, rather than search engines pushing bias back upon them.
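The “also-searched” comparison reduces to a likelihood ratio between follow-up search rates. A minimal sketch, with fabricated session data standing in for the anonymized clickstream sessions:

```python
from collections import Counter

# Hypothetical follow-up searches, keyed by the user's first query.
# Real data would come from anonymized clickstream sessions.
followups = {
    "donald trump": ["youtube", "omar mateen", "facebook", "omar mateen"],
    "hillary clinton": ["youtube", "philando castile", "facebook", "omar mateen"],
}

def followup_rate(first_query, later_query):
    """Fraction of follow-up searches matching later_query among
    users who first searched first_query."""
    searches = followups[first_query]
    return Counter(searches)[later_query] / len(searches)

def likelihood_ratio(query_a, query_b, later_query):
    """How much more likely query_a searchers are than query_b
    searchers to go on to search later_query."""
    return followup_rate(query_a, later_query) / followup_rate(query_b, later_query)

# In this toy data, "donald trump" searchers are 2x as likely
# to go on to search "omar mateen".
print(likelihood_ratio("donald trump", "hillary clinton", "omar mateen"))
```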

Getting a real click-through rate model

Search marketers have been looking at click-through rate (CTR) models since the beginning of our craft, trying to predict traffic and earnings under a set of assumptions that have all but disappeared since the days of 10 blue links. With the advent of SERP features like answer boxes, the knowledge graph, and Twitter feeds in the search results, it has been hard to garner exactly what level of traffic we would derive from any given position.

With clickstream data, we have a path to uncovering those mysteries. For starters, the click-through rate curve is dead. Sorry folks, but it has been for quite some time and any allegiance to it should be categorized as willful neglect.

We have to begin building somewhere, so at Moz we start with opportunity metrics (like the one introduced by Dr. Pete, which can be found in Keyword Explorer) which depreciate the potential search traffic available from a keyword based on the presence of SERP features. We can use clickstream data to learn the non-linear relationship between SERP features and CTR, which is often counter-intuitive.
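To make the idea of an opportunity metric concrete, here is a deliberately simplified sketch: depreciate a keyword’s potential organic traffic by an estimated CTR cost per SERP feature. The penalty values are invented for illustration, and the model here is linear, whereas the post notes the real relationships learned from clickstream data are non-linear.

```python
# Invented per-feature CTR penalties for illustration only.
FEATURE_PENALTY = {
    "answer_box": 0.15,
    "knowledge_panel": 0.10,
    "top_ads": 0.20,
    "news": 0.05,
}

def opportunity(volume, serp_features, baseline_ctr=0.70):
    """Estimated organic clicks after discounting the baseline
    organic CTR for each SERP feature present."""
    ctr = baseline_ctr
    for feature in serp_features:
        ctr -= FEATURE_PENALTY.get(feature, 0.0)
    return volume * max(ctr, 0.0)

print(opportunity(10000, []))                         # ~7000 estimated clicks
print(opportunity(10000, ["answer_box", "top_ads"]))  # ~3500 estimated clicks
```

The quiz below shows why a linear discount like this eventually fails: some feature combinations correlate with query types (like brand searches) that change CTR in the opposite direction.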

Let’s take a quick quiz.

Which SERP has the highest organic click-through rate?

A SERP with just news

A SERP with just top ads

A SERP with sitelinks, knowledge panel, tweets, and ads at the top

Strangely enough, it’s the last that has the highest click-through rate to organic. Why? It turns out that the only queries that get that bizarre combination of SERP features are for important brands, like Louis Vuitton or BMW. Subsequently, nearly 100% of the click traffic goes to the #1 sitelink, which is the brand website.

Perhaps even more strangely, pages with top ads deliver more organic clicks than those with just news. News tends to entice users more than advertisements.

It would be nearly impossible to come to these revelations without clickstream data, but now we can use the data to find the unique relationships between SERP features and click-through rates.

In production: Better volume data

Perhaps Moz’s most well-known usage of clickstream data is our volume metric in Keyword Explorer. There has been a long history of search marketers using Google’s keyword volume as a metric to predict traffic and prioritize keywords. While (not provided) hit SEOs the hardest, it seems like the recent Google Keyword Planner ranges are taking a toll as well.

So how do we address this with clickstream data? Unfortunately, it isn’t as cut-and-dried as simply replacing Google’s data with Jumpshot or another third-party provider. There are several steps involved — here are just a few.

Data ingestion and clean-up

Bias removal

Modeling against Google Volume

Disambiguation corrections

I can’t stress enough how much attention to detail needs to go into these steps in order to make sure you’re adding value with clickstream data rather than simply muddling things further. But I can say with confidence that our complex solutions have had a profoundly positive impact on the data we provide. Let me give you some disambiguation examples that were recently uncovered by our model.

Keyword                          Google Value   Disambiguated
cars part                        135,000        2,900
chopsuey                         74,000         4,400
treatment for mononucleosis      4,400          720
lorton va                        9,900          8,100
definition of customer service   2,400          1,300
marion county detention center   5,400          4,400
smoke again lyrics               1,900          880
should i get a phd               480            320
oakley crosshair 2.0             1,000          480
barter 6 download                4,400          590
how to build a shoe rack         880            720

Look at the huge discrepancies here for the keyword “cars part.” Most people search for “car parts” or “car part,” but Google groups together the keyword “cars part,” giving it a ridiculously high search value. We were able to use clickstream data to dramatically lower that number.

The same is true for “chopsuey.” Most people search for it, correctly, as two separate words: “chop suey.”

These corrections to Google search volume data are essential to make accurate, informed decisions about what content to create and how to properly optimize it. Without clickstream data on our side, we would be grossly misled, especially in aggregate data.

How much does this actually impact Google search volume? Roughly 25% of all keywords we process from Google data are corrected by clickstream data. This means tens of millions of keywords monthly.
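One hypothetical way to picture a disambiguation correction: if Google reports a single grouped volume for close variants, clickstream counts of how often each variant is actually searched can apportion that volume. The counts below are invented; Moz’s production model involves far more (bias removal, modeling against Google volume, and so on).

```python
def disambiguate(google_volume, clickstream_counts, variant):
    """Split a grouped Google volume across variants in proportion
    to their observed clickstream search counts."""
    total = sum(clickstream_counts.values())
    return round(google_volume * clickstream_counts[variant] / total)

# Invented clickstream counts for the "car parts" variant group.
counts = {"car parts": 40000, "car part": 4000, "cars part": 950}

# The rare variant gets only its fair share of the grouped 135,000.
print(disambiguate(135000, counts, "cars part"))
```

Even this toy version shows why the “cars part” correction is so dramatic: almost nobody actually types that variant, so it deserves only a sliver of the grouped volume.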

Moving forward

The big question for marketers is now not only how do we respond to losses in data, but how do we prepare for future losses? A quick survey of SEOs revealed some of their future concerns…

Luckily, a blended model of crawled and clickstream data allows Moz to uniquely manage these types of losses. SERP and suggest data are all available through clickstream sources, piggybacking on real results rather than performing automated ones. Link data is already available through third-party indexes like MozScape, but can be improved even further with clickstream data that reveals the true popularity of individual links. All that being said, the future looks bright for this new blended data model, and we look forward to delivering upon its promises in the months and years to come.

And finally, a question for you…

As Moz continues to improve upon Keyword Explorer, we want to make that data more easily accessible to you. We hope to soon offer you an API, which will bring this data directly to you and your apps so that you can do more research than ever before. But we need your help in tailoring this API to your needs. If you have a moment, please answer this survey so we can piece together something that provides just what you need.


Machine learning is already a very big deal. It’s here, and it’s in use in far more businesses than you might suspect. A few months back, I decided to take a deep dive into this topic to learn more about it. In today’s post, I’ll dive into a certain amount of technical detail about how it works, but I also plan to discuss its practical impact on SEO and digital marketing.

For reference, check out Rand Fishkin’s presentation about how we’ve entered a two-algorithm world. In it, Rand addresses in detail how machine learning influences search and SEO. I’ll talk more about that again later.

For fun, I’ll also include a tool that allows you to predict your chances of getting a retweet based on a number of things: your Followerwonk Social Authority, whether you include images, hashtags, and several other similar factors. I call this tool the Twitter Engagement Predictor (TEP). To build the TEP, I created and trained a neural network. The tool will accept input from you, and then use the neural network to predict your chances of getting an RT.

The TEP leverages the data from a study I published in December 2014 on Twitter engagement, where we reviewed information from 1.9M original tweets (as opposed to RTs and favorites) to see what factors most improved the chances of getting a retweet.

My machine learning journey

I got my first meaningful glimpse of machine learning back in 2011 when I interviewed Google’s Peter Norvig, and he told me how Google had used it to teach Google Translate.

Basically, they looked at all the language translations they could find across the web and learned from them. This is a very intense and complicated example of machine learning, and Google had deployed it by 2011. Suffice it to say that all the major market players — such as Google, Apple, Microsoft, and Facebook — already leverage machine learning in many interesting ways.

Back in November, when I decided I wanted to learn more about the topic, I started doing a variety of searches of articles to read online. It wasn’t long before I stumbled upon this great course on machine learning on Coursera. It’s taught by Andrew Ng of Stanford University, and it provides an awesome, in-depth look at the basics of machine learning.

Warning: This course is long (19 total sections with an average of more than one hour of video each). It also requires an understanding of calculus to get through the math. In the course, you’ll be immersed in math from start to finish. But the point is this: If you have the math background, and the determination, you can take a free online course to get started with this stuff.

In addition, Ng walks you through many programming examples using a language called Octave. You can then take what you’ve learned and create your own machine learning programs. This is exactly what I have done in the example program included below.

Basic concepts of machine learning

First of all, let me be clear: this process didn’t make me a leading expert on this topic. However, I’ve learned enough to provide you with a serviceable intro to some key concepts. You can break machine learning into two classes: supervised and unsupervised. First, I’ll take a look at supervised machine learning.

Supervised machine learning

At its most basic level, you can think of supervised machine learning as creating a series of equations to fit a known set of data. Let’s say you want an algorithm to predict housing prices (an example that Ng uses frequently in the Coursera classes). You might get some data that looks like this (note that the data is totally made up):

In this example, we have (fictitious) historical data that indicates the price of a house based on its size. As you can see, the price tends to go up as house size goes up, but the data does not fit into a straight line. However, you can calculate a straight line that fits the data pretty well, and that line might look like this:

This line can then be used to predict the pricing for new houses. We treat the size of the house as the “input” to the algorithm and the predicted price as the “output.” For example, if you have a house that is 2600 square feet, the price looks like it would be about $xxxK.
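The line-fitting step above is ordinary least squares. A minimal sketch with made-up data, like the post’s fictitious chart (the sizes and prices here are invented, and the fit is the textbook closed-form solution rather than the gradient-descent approach taught in the course):

```python
# Fictitious training data: house size (sq ft) -> price ($K).
sizes = [1100, 1400, 1800, 2100, 2500, 3000]
prices = [199, 245, 319, 360, 400, 465]

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope*x + intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

slope, intercept = fit_line(sizes, prices)

def predict_price(size_sqft):
    """Predicted price ($K) for a given house size."""
    return slope * size_sqft + intercept

print(round(predict_price(2600)))  # roughly $417K on this toy data
```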

However, this model turns out to be a bit simplistic. There are other factors that can play into housing prices, such as the total rooms, number of bedrooms, number of bathrooms, and lot size. Based on this, you could build a slightly more complicated model, with a table of data similar to this one:

Already you can see that a simple straight line will not do, as you’ll have to assign weights to each factor to come up with a housing price prediction. Perhaps the biggest factors are house size and lot size, but rooms, bedrooms, and bathrooms all deserve some weight as well (all of these would be considered new “inputs”).

Even now, we’re still being quite simplistic. Another huge factor in housing prices is location. Pricing in Seattle, WA is different than it is in Galveston, TX. Once you attempt to build this algorithm on a national scale, using location as an additional input, you can see that it starts to become a very complex problem.

You can use machine learning techniques to solve any of these three types of problems. In each of these examples, you’d assemble a large data set of examples, which can be called training examples, and run a set of programs to design an algorithm to fit the data. This allows you to submit new inputs and use the algorithm to predict the output (the price, in this case). Using training examples like this is what’s referred to as “supervised machine learning.”

Classification problems

This is a special class of problems where the goal is to predict specific outcomes. For example, imagine we want to predict the chances that a newborn baby will grow to be at least 6 feet tall. You could imagine that inputs might be as follows:

The output of this algorithm might be a 0 if the person was going to be shorter than 6 feet tall, or 1 if they were going to be 6 feet or taller. What makes it a classification problem is that you are putting the input items into one specific class or another. For the height prediction problem as I described it, we are not trying to guess the precise height, but a simple over/under 6 feet prediction.
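A toy classifier for the over/under-6-feet example might look like the sketch below. The inputs and weights are entirely invented; in real supervised learning, the weights would be learned from training examples rather than hand-picked.

```python
import math

def sigmoid(z):
    """Squash a score into a probability between 0 and 1."""
    return 1 / (1 + math.exp(-z))

def predict_six_feet(birth_length_in, mother_height_in, father_height_in):
    """Return 1 if the toy model predicts adult height >= 6 ft, else 0.
    Weights and offsets are hypothetical, not learned."""
    z = (0.4 * (birth_length_in - 20)
         + 0.15 * (mother_height_in - 64)
         + 0.2 * (father_height_in - 70)
         - 1.0)
    return 1 if sigmoid(z) >= 0.5 else 0

print(predict_six_feet(21, 66, 74))  # 1: longer baby, taller parents
print(predict_six_feet(19, 62, 68))  # 0
```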

Some examples of more complex classifying problems are handwriting recognition (recognizing characters) and identifying spam email.

Unsupervised machine learning

Unsupervised machine learning is used in situations where you don’t have training examples. Basically, you want to try and determine how to recognize groups of objects with similar properties. For example, you may have data that looks like this:

The algorithm will then attempt to analyze this data and find out how to group them together based on common characteristics. Perhaps in this example, all of the red “x” points in the following chart share similar attributes:

However, the algorithm may have trouble recognizing outlier points, and may group the data more like this:

What the algorithm has done is find natural groupings within the data, but unlike supervised learning, it had to determine the features that define each group. One industry example of unsupervised learning is Google News. For example, look at the following screen shot:

You can see that the main news story is about Iran holding 10 US sailors, but there are also related news stories shown from Reuters and Bloomberg (circled in red). The grouping of these related stories is an unsupervised machine learning problem, where the algorithm learns to group these items together.
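The grouping idea above can be sketched with k-means, one of the standard unsupervised algorithms: the data carries no labels, and the algorithm discovers the groups itself. The 2D points below are made up; a real system like the news-grouping example would cluster high-dimensional text features instead.

```python
import random

def kmeans(points, k, iterations=20, seed=1):
    """Bare-bones k-means: alternate between assigning points to
    their nearest center and moving each center to its cluster mean."""
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for x, y in points:
            i = min(range(k),
                    key=lambda c: (x - centers[c][0]) ** 2 + (y - centers[c][1]) ** 2)
            clusters[i].append((x, y))
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers, clusters

# Two obvious unlabeled groups; the algorithm finds them on its own.
points = [(1, 1), (1.5, 2), (2, 1.5), (8, 8), (8.5, 9), (9, 8.5)]
centers, clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```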

Other industry examples of applied machine learning

A great example of a machine learning algo is the Author Extraction algorithm that Moz has built into their Moz Content tool. You can read more about that algorithm here. The referenced article outlines in detail the unique challenges that Moz faced in solving that problem, as well as how they went about solving it.

As for Stone Temple Consulting’s Twitter Engagement Predictor, this is built on a neural network. A sample screen for this program can be seen here:

The program makes a binary prediction as to whether you’ll get a retweet or not, and then provides you with a percentage probability for that prediction being true.

For those who are interested in the gory details, the neural network configuration I used was six input units, fifteen hidden units, and two output units. The algorithm used one million training examples and two hundred training iterations. The training process required just under 45 billion calculations.
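For a sense of what a 6-15-2 network means structurally, here is a forward-pass sketch of a network with that shape. The weights here are random and untrained, and the six feature values are invented; the real TEP learned its weights from the million training examples described above.

```python
import math
import random

random.seed(42)

def layer(inputs, weights, biases):
    """One fully connected layer with sigmoid activations."""
    return [1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
            for ws, b in zip(weights, biases)]

# 6 inputs -> 15 hidden units -> 2 outputs (137 parameters in total).
W1 = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(15)]
b1 = [random.uniform(-1, 1) for _ in range(15)]
W2 = [[random.uniform(-1, 1) for _ in range(15)] for _ in range(2)]
b2 = [random.uniform(-1, 1) for _ in range(2)]

def forward(features):
    """Run six tweet features through the network; the two outputs
    play the role of the no-retweet / retweet scores."""
    hidden = layer(features, W1, b1)
    return layer(hidden, W2, b2)

out = forward([0.3, 0, 0, 1, 2, 0.5])  # six hypothetical tweet features
print(len(out))  # two output units
```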

One thing that made this exercise interesting is that there are many conflicting data points in the raw data. Here’s an example of what I mean:

What this shows is the data for people with Followerwonk Social Authority between 0 and 9, and a tweet with no images, no URLs, no @mentions of other users, two hashtags, and between zero and 40 characters. We had 1156 examples of such tweets that did not get a retweet, and 17 that did.

The most desirable outcome for the resulting algorithm is to predict that these tweets will not get a retweet, which would make it wrong 1.4% of the time (17 times out of 1173). Note that the resulting neural network assesses the probability of getting a retweet at 2.1%.

I did a calculation to tabulate how many of these cases existed. I found that we had 102,045 individual training examples where it was desirable to make the wrong prediction, or just slightly over 10% of all our training data. What this means is that the best the neural network will be able to do is make the right prediction just under 90% of the time.

I also ran two other sets of data (470K and 473K samples in size) through the trained network to see the accuracy level of the TEP. I found that it was 81% accurate in its absolute (yes/no) prediction of the chance of getting a retweet. Bearing in mind that those also had approximately 10% of the samples where making the wrong prediction is the right thing to do, that’s not bad! And, of course, that’s why I show the percentage probability of a retweet, rather than a simple yes/no response.

Try the predictor yourself and let me know what you think! (You can discover your Social Authority by heading to Followerwonk and following these quick steps.) Mind you, this was simply an exercise for me to learn how to build out a neural network, so I recognize the limited utility of what the tool does — no need to give me that feedback ;->.

Examples of algorithms Google might have or create

So now that we know a bit more about what machine learning is about, let’s dive into things that Google may be using machine learning for already:

Penguin

One approach to implementing Penguin would be to identify a set of link characteristics that could potentially be an indicator of a bad link, such as these:

External link sitting in a footer

External link in a right side bar

Proximity to text such as “Sponsored” (and/or related phrases)

Proximity to an image with the word “Sponsored” (and/or related phrases) in it

Grouped with other links with low relevance to each other

Rich anchor text not relevant to page content

External link in navigation

Implemented with no user-visible indication that it’s a link (i.e., no underline)

From a bad class of sites (from an article directory, from a country where you don’t do business, etc.)

…and many other factors

Note that any one of these things isn’t necessarily inherently bad for an individual link, but the algorithm might start to flag sites if a significant portion of all of the links pointing to a given site have some combination of these attributes.

What I outlined above would be a supervised machine learning approach where you train the algorithm with known bad and good links (or sites) that have been identified over the years. Once the algo is trained, you would then run other link examples through it to calculate the probability that each one is a bad link. Based on the percentage of links (and/or total PageRank) coming from bad links, you could then make a decision to lower the site’s rankings, or not.
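A toy version of this supervised approach can be sketched with logistic regression: train on links labeled good or bad, then score new links with a probability of being bad. The three features and the tiny training set below are invented purely for illustration; a real system would use many more features and examples.

```python
import math

# Features per link: (in_footer, near_sponsored_text, offtopic_rich_anchor).
# Labels: 1 = known bad link, 0 = known good link. All invented.
training = [
    ((1, 1, 1), 1), ((1, 0, 1), 1), ((0, 1, 1), 1), ((1, 1, 0), 1),
    ((0, 0, 0), 0), ((0, 0, 1), 0), ((1, 0, 0), 0), ((0, 0, 0), 0),
]

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train(data, lr=0.5, epochs=2000):
    """Logistic regression via plain stochastic gradient descent."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

w, b = train(training)

def p_bad(link_features):
    """Probability that a link is bad under this toy model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, link_features)) + b)

print(p_bad((1, 1, 1)) > 0.5)  # True: flagged as a likely bad link
print(p_bad((0, 0, 0)) < 0.5)  # True: looks fine
```

As the post notes, no single feature is damning on its own; the learned weights combine them, and a site-level decision would aggregate these per-link probabilities.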

Another approach to this same problem would be to start with a database of known good links and bad links, and then have the algorithm automatically determine the characteristics (or features) of those links. These features would probably include factors that humans may not have considered on their own.

Panda

Now that you’ve seen the Penguin example, this one should be a bit easier to think about. Here are some things that might be features of sites with poor-quality content:

Small number of words on the page compared to competing pages

Low use of synonyms

Overuse of main keyword of the page (from the title tag)

Large blocks of text isolated at the bottom of the page

Lots of links to unrelated pages

Pages with content scraped from other sites

…and many other factors

Once again, you could start with a known set of good sites and bad sites (from a content perspective) and design an algorithm to determine the common characteristics of those sites.
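To make the Panda-style feature list concrete, here is an illustrative extraction of a few of the content-quality signals above (word count, vocabulary variety, keyword overuse). The feature set and any thresholds you would apply to it are invented for illustration, not Panda’s.

```python
import re

def content_features(text, title_keyword):
    """Compute a few toy content-quality features for a page's text."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words)
    return {
        "word_count": total,
        # Low unique_ratio suggests repetitive text with few synonyms.
        "unique_ratio": len(set(words)) / total if total else 0.0,
        # High keyword_density suggests overuse of the title-tag keyword.
        "keyword_density": (words.count(title_keyword.lower()) / total
                            if total else 0.0),
    }

thin = "widgets widgets buy widgets cheap widgets widgets now"
print(content_features(thin, "widgets"))
```

Features like these, computed for known good and bad pages, are the kind of input a trained classifier would weigh.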

As with the Penguin discussion above, I’m in no way representing that these are all parts of Panda — they’re just meant to illustrate the overall concept of how it might work.

How machine learning impacts SEO

The key to understanding the impact of machine learning on SEO is understanding what Google (and other search engines) want to use it for. A key insight is that there’s a strong correlation between Google providing high-quality search results and the revenue they get from their ads.

Back in 2009, Bing and Google performed tests showing that even small delays in delivering search results significantly hurt user satisfaction, and that with lower satisfaction came fewer clicks and lower revenue.

The reason behind this is simple. Google has other sources of competition, and this goes well beyond Bing. Texting friends for their input is one form of competition. So are Facebook, Apple/Siri, and Amazon. Alternative sources of information and answers exist for users, and they are working to improve the quality of what they offer every day. So must Google.

I’ve already suggested that machine learning may be a part of Panda and Penguin, and it may well be a part of the “Search Quality” algorithm. And there are likely many more of these types of algorithms to come.

So what does this mean?

Given that higher user satisfaction is of critical importance to Google, it means that content quality and user satisfaction with the content of your pages must now be treated by you as an SEO ranking factor. You’re going to need to measure it, and steadily improve it over time. Some questions to ask yourself include:

Does your page meet the intent of a large percentage of visitors to it? If a user is interested in that product, do they need help in selecting it? Learning how to use it?

What about related intents? If someone comes to your site looking for a specific product, what other related products could they be looking for?

What gaps exist in the content on the page?

Is your page a higher-quality experience than that of your competitors?

What’s your strategy for measuring page performance and improving it over time?

There are many ways that Google can measure how good your page is, and use that to impact rankings. Here are some of them:

When users arrive on your page after clicking a SERP listing, how long do they stay? How does that compare to competing pages?

What is the relative rate of CTR on your SERP listing vs. competition?

What volume of brand searches does your business get?

If you have a page for a given product, do you offer thinner or richer content than competing pages?

When users click back to the search results after visiting your page, do they behave like their task was fulfilled? Or do they click on other results or enter followup searches?
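One of the signals above is easy to sketch: the CTR of your SERP listing relative to competing listings, computed from (clicks, impressions) pairs. All numbers here are invented for illustration; real click data would come from something like a search console export.

```python
def ctr(clicks, impressions):
    """Click-through rate, guarding against zero impressions."""
    return clicks / impressions if impressions else 0.0

def relative_ctr(mine, competitors):
    """Ratio of your CTR to the average competitor CTR (> 1.0 is good)."""
    avg = sum(ctr(c, i) for c, i in competitors) / len(competitors)
    return ctr(*mine) / avg if avg else float("inf")

# Your listing: 120 clicks on 1,000 impressions, vs. two competitors.
print(relative_ctr((120, 1000), [(80, 1000), (60, 1000)]))
```

A ratio persistently below 1.0 would suggest your title and snippet are underperforming for that query, independent of rank.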

For more on how content quality and user satisfaction have become a core SEO factor, please check out the following:

Summary

Machine learning is becoming highly prevalent. The barrier to learning basic algorithms is largely gone. All the major players in the tech industry are leveraging it in some manner. Here’s a little bit on what Facebook is doing, and machine learning hiring at Apple. Others are offering platforms to make implementing machine learning easier, such as Microsoft and Amazon.

For people involved in SEO and digital marketing, you can expect that these major players are going to get better and better at leveraging these algorithms to help them meet their goals. That’s why it will be of critical importance to tune your strategies to align with the goals of those organizations.

In the case of SEO, machine learning will steadily increase the importance of content quality and user experience over time. For you, that makes it time to get on board and make these factors a key part of your overall SEO strategy.


Without fear of being accused of exaggeration, I can safely say that Joe Pulizzi is a bona fide content marketing visionary. Joe started using the term “content marketing” in 2001, long before the rest of the industry caught on to its potential. In the past decade and a half, Joe has firmly established his thought leadership in the field, earning the nickname “The Godfather of Content Marketing.”

In 2007, Joe founded the Content Marketing Institute (CMI), which has grown into a vital resource for thousands of marketers worldwide. In addition to building a massive hub of marketing resources, classes, and training programs, CMI produces Content Marketing World, the world’s largest content marketing event.

I recently sat down with Joe for a sneak preview of his keynote address at the 2015 Content Marketing World. Read on for Joe’s thoughts on how content marketing is evolving, the strategies that led to CMI’s success, and how to become an “octopus of content love.”

If you focus on a subscriber approach to audience development, you can go deeper with your content and emphasize value.

Without giving too much away from your keynote, what are 3 exciting evolutions for content marketing that you see on the horizon?

I’m really interested in the merger and acquisition scene. It’s going to hit people like a big surprise, especially in B2B. Particularly in tech, companies will see content factories that are already built; those will be attractive acquisitions given the time it takes to build one from scratch.

Only 30% of marketers have subscription growth as a key metric, which speaks to where we are with content marketing. The notion that we need to build content for every stage of the buyer’s journey has been overblown. It’s easier to simplify that idea and just become an ongoing guide and resource, touching the customer with value – every day, every week. They’re going to create their own buyer’s journey anyway. If you focus on a subscriber approach to audience development, you can go deeper with your content and emphasize value. Instead of focusing on 57 segments and 5 stages, create an incredible experience for your customers and you’ll have an amazing outcome. Simplify and create more value.

I’m excited about the field of journalism again. Marketers are bringing in professional journalists who have a nose for stories. The media business model is broken, but media itself is flourishing. There have never been lower barriers to entry or easier ways for customers to access it. The more journalists in marketing, the better. If they want to tell great stories and have funding to do so, the opportunity is there.

Digital publishing has become more popular because we can, and not for the right reasons.

Based on your recent report at CMI, it appears that B2B and B2C marketers alike are continuing to struggle with measurement of content marketing activities. What do you believe are the biggest barriers to either collecting the data or focusing on the right metrics?

The clear majority of marketers have no documented content marketing strategy. If we can start with documenting the why, the business goal and audience, then you can begin to develop an action and execution plan that includes measurement. Digital publishing has become more popular because we can, and not for the right reasons.

People implementing content marketing do so because they’re told to, without understanding why. Content marketers need to ask the right questions relevant to achieving business goals.

We want to be an octopus of content love to provide them with options.

What are the biggest challenges that your own company faces when it comes to content creation, promotion and measurement?

Choosing the right activities – there are so many things we could do. Our key metric to everything is based on subscribers. I’m focused on creating a unique story that subscribers can’t get anywhere else. I’m focused on looking at subscribers and how we can improve.

Those people that engage with at least 3 different types of content, they are way more likely to attend CMI or buy something from us. We want to be an octopus of content love to provide them with options. The more we can do that the more positive results we’ll see.

Brands with huge budgets are struggling because they are so campaign focused.

What is the single most important thing you’ve learned in your journey from publishing to becoming the “Godfather of Content Marketing”?

If you build a loyal audience over time, you can sell them whatever you want. Focus on a content niche relevant to an area of business that you’re focused on, and develop an audience. As you build that audience, you can figure out what best to sell to your community.

Brands with huge budgets are struggling because they are so campaign focused.

There’s convergence – media companies are becoming product and product companies are becoming media companies. Soon you won’t be able to tell the difference.

What Content Marketing mix is CMI currently experiencing the most success with?

The podcast has been a pleasant surprise, with a consistent and growing flow of sponsorship. In-person events that Robert Rose and I speak at have also worked well. The masterclass series of small workshops in different cities across the U.S. has been successful for driving registrations to the CMWorld event.

We have one person in charge of internal content curation and repurposing that drives subscribers.

Do you believe that email marketing is dead or still very much alive? Why?

Not at all. It’s the most important thing we do. It’s harder to cut through the clutter but if you do, you get the lion’s share of attention.

Ready to Up Your Content Marketing Game?

Be sure to reserve your space at Content Marketing World for thought-provoking presentations from Joe and over 200 other luminaries in the content marketing industry.

If you missed the premiere of any one of the eBooks in our triple feature, you are in luck! You can access all three of them anytime, anywhere. Select the links below, grab some Junior Mints and dig in.

The last time I was in Texas for the South by Southwest festival, one of the highlights of my trip was watching Pace Smith play a video game called Dance Dance Revolution.

She used to play competitively, so her gameplay is serious business. Her feet moved like two blurs. Jump. Jump. Twist. It was an impressive thing to watch.

It looked so cool, in fact, that I decided I wanted to learn to play DDR too. So I bought a copy for my Wii, put the plastic dance pad on my floor, and quickly realized how incompetent I was.

I watched the arrows scroll by and jabbed clumsily at the pad, but I was too slow. More arrows came. I missed those, then I missed the arrows that followed while I was trying to hit the arrows that had already passed.

If Dance Dance Revolution were a business, Pace would be the one with all the clients and income she could stand. I, on the other hand, would be the crappy little online startup that couldn’t get any traffic, clients, or revenue. I was pulling out my hair in frustration, screaming, “Why isn’t this working?”

It seemed that Pace had some kind of inborn skill that I simply didn’t have. She knew something I didn’t know. Does any of this sound familiar?

If so, I’ll tell you why.

Doing business online is not all that different from playing Dance Dance Revolution

You know those overnight success stories you’ve heard about?

It’s not the whole story. Dig deeper and you’ll usually find people who have busted their asses for years to get into a position where things could take off.

~ Jason Fried and David Heinemeier Hansson, 37signals

There’s nothing remarkable about it. All one has to do is hit the right keys at the right time and the instrument plays itself.

~ Johann Sebastian Bach

I know the notion that DDR and business aren’t dissimilar sounds absurd, but think about it for a second.

Pace wasn’t born being good at DDR. She was once where I was, as frustrated as I was, as disbelieving as I was that she could ever hit all of those fast-moving arrows.

And in the same way, if your business isn’t where you want it to be, you probably look up to big, successful sites and businesses online and think of how you’ll never be where they are. The blog you’re currently reading is a good example. Who among us feels we’ll ever be as successful as Copyblogger? Hell, who among us feels we’ll ever be a tenth as successful?

But what you may not really, truly understand — at least on a conscious level — is that not that long ago, Copyblogger had two subscribers, and both of them were Brian Clark.

What about Apple? That’s a big, successful company. Do you think Jobs and Wozniak always knew what they were doing, back in 1976 when they sold Apple I kits that didn’t even have keyboards? Ronald Wayne apparently didn’t think much of the little startup’s prospects, considering he sold out of it for $800.

So how did Steve Jobs turn it into a half-trillion dollar company and a cultural icon? He did it the same way that Pace got better at DDR.

The big (unsexy) secret of great success

To improve what he was doing, Steve Jobs practiced.

Both Jobs and Pace tried things and failed, tried things and failed. They learned from their mistakes and made small improvements every day, little by little.

One was on a dance pad and one was in a board room or talking design with Jony Ive, but the details don’t matter. The mastery of anything — be it gaming or business — comes only from practice.

Think about it. It wasn’t always smooth sailing for Apple. Sure, we look at the iPod, iPhone, iPad, and Mac lines today and think (rightly) that Apple is riding high. But do you remember the Apple Lisa? The Macintosh Portable? The ROKR? The Taligent OS? Yeah, neither does anyone else. Hell, Jobs even got kicked out of the company at one point — and stayed out for ten years.

Talk about an epic fail.

And really, it’s not that different from failing out of a DDR song with the arrows screaming past at impossible speeds. Both seem like insurmountable challenges … and even if you decide to face those challenges, they seem impossible pretty much the whole time you’re working on overcoming them.

And then one day, after more days of practice than they could count, both the DDR maestro and the Apple CEO achieved what used to seem flat-out impossible — be it a multi-billion-dollar industry titan or a ten-foot song played perfectly on Expert.

If I ever expected to be as good as Pace at Dance Dance Revolution, I had to practice, just as she had.

And really, that’s incredibly obvious. Nobody could disagree with the sentence above. Nobody would propose that practice wasn’t necessary, and that I should just take a course that would make me fabulous overnight by revealing some top-secret insider trick that only DDR gurus knew. That would be ludicrous.

So why do we think that’s a smart way to become successful in business?

The problem with shortcuts and tricks

Losers have tons of variety. Champions just take pride in learning to hit the same old boring winning shots.

~ Vic Braden, tennis coach

If your site isn’t getting the traffic you need or if your business is failing, you don’t need the latest, greatest, most amazing, new, trendy, magical trick out there.

What you need to do is to practice the basics. You need to bore yourself silly by making small, daily improvements in the fundamentals of business that have existed for hundreds of years.

Every day, get better at learning about your audience, getting more deeply in touch with their needs and desires.

Every day, make your communication a little bit clearer.

Every day, try to meet another person or two who might one day turn into a friend, a customer, or a fan.

Every day, polish your products and services a little, and make them of more value to those who buy them.

Every day, figure out how you can tell more of the people who might like what you sell about what you sell.

If you’re online, make sure there is a clear path between your reader’s problem, the information your site greets that reader with, and what you’re offering for sale.

Make sure you have given them an obvious way to opt in to your email list and a reason for doing so. Make sure you treat that list well and communicate with it often. Then put yourself in the shoes of your reader and ask yourself if you’d buy what you have to offer, from you, at the price you’re asking. Then, because you’re biased, ask other people to do the same.

And on the flip side, stop worrying about the latest and greatest tricks.

All the hot new LinkedIn strategies in the world won’t do you any good if your offer is unclear or uninteresting.

Do the work

The unpleasant truth about mastering anything is that you must keep doing the work and practicing the fundamentals until you become excellent.

You don’t need the secret, amazing, known-only-to-gurus Easy Button. What you need is to put in the hours.

And so, in the absence of a Twitter Mastery course on Dance Dance Revolution, I simply went downstairs each day and failed my way through a few songs. It was slow going, and progress was hard to see. I just kept putting in the time and hoping that what felt like alchemy might produce results — just like I did when my business was new, when I was banging my head against the wall and operating on faith that time would yield results.

And — in the same way that I eventually made my first business dollar — I finally passed a song on the Easy level. That first online profit was literally one dollar, and that first DDR success was literally one song. Neither was even in the same city as my goal … but it was a start.

Then I tried a song or two that were harder. I screwed them up terribly. It was like the many times I made dumb, unsuccessful experiments in my business and failed. And yeah, it set me back, but I kept at it.

Practice makes perfect. Your mother had it right all along.

Oh yeah, heeeeeeeeeere’s Johnny!

Now, I think I’ve made my point, but there’s clearly no way I’m getting out of this post without giving you a video of me dancing nowadays, so here we go:

“Revolution” is a strong word, but it fits what we’re seeing as online business evolves almost faster than anyone can keep track.

Tools, culture, and technology are converging in amazing ways. And that creates opportunity … if you know how to put it all together for yourself.

With that in mind we’ve created a free, three-part intensive course on how to profit from the sale of digital products. These three seminars alone — if acted on — can jump start your online business immediately.

This seminar series is completely free of charge. For all the details and to get started right away, click below: