<h1>Moz the Monster: Anatomy of an (Averted) Brand Crisis</h1><p>Posted by <a href="https://moz.com/community/users/22897">Dr-Pete</a></p><p>On the morning of Friday, November 10, we woke up to the news that John Lewis had launched an ad campaign <a href="https://www.theguardian.com/business/2017/nov/10/john-lewis-christmas-ad-2017-meet-moz-the-monster-under-the-bed" target="_blank">called "Moz the Monster"</a>. If you're from the UK, John Lewis needs no introduction, but for our American audience, they're a high-end retail chain that's built a reputation on a decade of amazing Christmas ads.
</p><p style="text-align: center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/sa5dzQhvbiI?rel=0" frameborder="0" gesture="media" allow="encrypted-media" allowfullscreen="">
</iframe>
</p><p>It's estimated that John Lewis spent upwards of £7m on this campaign (roughly $9.4M). It quickly became clear that they had organized a multi-channel effort, including a #mozthemonster Twitter campaign.</p><p>From a consumer perspective, Moz was just a lovable blue monster. From the perspective of a company that has spent years building a brand, John Lewis was potentially going to rewrite what "Moz" meant to the broader world. From a search perspective, we were facing a rare possibility of competing for our own brand on Google results if this campaign went viral (and John Lewis has a solid history of viral campaigns).</p><h2>Step #1: Don't panic</h2><p>At the speed of social media, it can be hard to stop and take a breath, but you have to remember that speed cuts both ways. If you're too quick to respond and make a mistake, that mistake travels at the same speed and can turn into a self-fulfilling prophecy, creating exactly the disaster you feared.
</p><p>The first step is to get multiple perspectives quickly. I took to Slack in the morning (I'm two hours ahead of the Seattle team) to find out who was awake. Two of our UK team (Jo and Eli) were quick to respond, which had the added benefit of getting us the local perspective.</p><p>Collectively, we decided that, in the spirit of our TAGFEE philosophy, a friendly monster deserved a friendly response. Even if we chose to look at it purely from a pragmatic, tactical standpoint, John Lewis wasn't a competitor, and going in metaphorical guns-blazing against a furry blue monster and the little boy he befriended could've been step one toward a reputation nightmare.</p><h2>Step #2: Respond (carefully)</h2><p>In some cases, you may choose not to respond, but in this case we felt that friendly engagement was our best approach. Since the Seattle team was finishing their first cup of coffee, I decided to test the waters with a tweet from my personal account:
</p><p class="full-width"><img src="http://d1avok0lzls2w.cloudfront.net/uploads/blog/moz-the-monster-1-52112.png" style="border: 0">
</p><p>I've got a smaller audience than the main Moz account, and a personal tweet sent while the west coast was still getting in gear meant less exposure. The initial response was positive, and we even got a little bit of feedback, such as suggestions to monitor UK Google SERPs (see "Step #3").
</p><p>Our community team (thanks, Tyler!) quickly followed up with an official tweet:
</p><p class="full-width"><img src="http://d1avok0lzls2w.cloudfront.net/uploads/blog/moz-the-monster-2-67547.png" style="border: 0">
</p><p>While we didn't get direct engagement from John Lewis, the general community response was positive. Roger Mozbot and Moz the Monster could live in peace, at least for now.
</p><h2>Step #3: Measure</h2><p>There was a longer-term fear – would engagement with the Moz the Monster campaign alter Google SERPs for Moz-related keywords? Google has become an incredibly dynamic engine, and the meaning of any given phrase can rewrite itself based on how searchers engage with that phrase. I decided to track "moz" itself across both the US and UK.
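</p><p>Manually checking SERPs every day gets tedious fast, so it helps to make the check repeatable. As a rough illustration (the URL list and the <code>brand_share</code> helper below are hypothetical, not our actual tracking setup), you can log the page-one URLs you see each day and compute what share your own domain still owns:</p>

```python
from urllib.parse import urlparse

def brand_share(page_one_urls, brand_domain="moz.com"):
    """Fraction of page-one results owned by the brand's domain.

    Note: endswith() is a loose match and would also catch lookalike
    domains; good enough for a hand-rolled daily tracker.
    """
    owned = sum(1 for url in page_one_urls
                if urlparse(url).netloc.endswith(brand_domain))
    return owned / len(page_one_urls)

# Hypothetical hand-recorded UK page one for "moz" on one tracking day
serp_day_3 = [
    "https://moz.com/",
    "https://moz.com/blog",
    "https://www.johnlewis.com/christmas",
    "https://www.theguardian.com/business/2017/nov/10/john-lewis-christmas-ad-2017",
    "https://moz.com/beginners-guide-to-seo",
    "https://en.wikipedia.org/wiki/Moz_(marketing_software)",
    "https://twitter.com/moz",
    "https://www.youtube.com/watch?v=sa5dzQhvbiI",
]
print(f"Brand share of page one: {brand_share(serp_day_3):.0%}")  # 38%
```

<p>Logged daily, a falling share is an early warning well before any traffic change shows up.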
</p><p>In that first day of the official campaign launch, searches for "moz" were already showing news ("Top Stories") results in the US and UK, with the text-only version in the US:
</p><p class="full-width"><img src="http://d1avok0lzls2w.cloudfront.net/uploads/blog/moz-the-monster-3-19699.png" style="border: 0">
</p><p>...and the richer Top Stories carousel in the UK:
</p><p class="full-width"><img src="http://d1avok0lzls2w.cloudfront.net/uploads/blog/moz-the-monster-4-117106.jpg" style="border: 0">
</p><p>The Guardian article that announced the campaign launch was also ranking organically, near the bottom of page one. So, even on day one, we were seeing some brand encroachment and knew we had to keep track of the situation on a daily basis.
</p><p>Just two days later (November 12), Moz the Monster had captured four page-one organic results for "moz" in the UK (at the bottom of the page):
</p><p class="full-width"><img src="http://d1avok0lzls2w.cloudfront.net/uploads/blog/moz-the-monster-5-44450.png" style="border: 0">
</p><p>While it still wasn't time to panic, John Lewis' campaign was clearly having an impact on Google SERPs.
</p><h2>Step #4: Surprises</h2><p>On November 13, it looked like the SERPs might be returning to normal. The Moz Blog had regained the Top Stories block in both US and UK results:
</p><p class="full-width"><img src="http://d1avok0lzls2w.cloudfront.net/uploads/blog/moz-the-monster-9-89958.png" style="border: 0">
</p><p>We weren't in the clear yet, though. A couple of days later, a plagiarism scandal broke, and it was dominating the UK news for "moz" by November 18:
</p><p class="full-width"><img src="http://d1avok0lzls2w.cloudfront.net/uploads/blog/moz-the-monster10-9698.png" style="border: 0">
</p><p>This story also migrated into organic SERPs after The Guardian published an op-ed piece. Fortunately for John Lewis, the follow-up story didn't last very long. It's an important reminder, though, that you can't take your eyes off the ball just because it seems to be rolling in the right direction.
</p><h2>Step #5: Results</h2><p>It's one thing to see changes in the SERPs, but how was all of this impacting search trends and our actual traffic? Here's the data from Google Trends for a 4-week period around the Moz the Monster launch (2 weeks on either side):
</p><p class="full-width"><img src="https://d1avok0lzls2w.cloudfront.net/uploads/blog/moz-the-monster-6-12570.png" style="border: 0">
</p><p>The top graph is US trends data, and the bottom graph is UK. The large spike in the middle of the UK graph is November 10, where you can see that interest in the search "moz" increased dramatically. However, this spike fell off fairly quickly and US interest was relatively unaffected.
</p><p>Let's look at the same time period for Google Search Console impression and click data. First, the US data (isolated to just the keyword "moz"):
</p><p class="full-width"><img src="https://d1avok0lzls2w.cloudfront.net/uploads/blog/moz-the-monster-7-10552.png" style="border: 0">
</p><p>There was almost no change in impressions or clicks in the US market. Now, the UK data:
</p><p class="full-width"><img src="https://d1avok0lzls2w.cloudfront.net/uploads/blog/moz-the-monster-8-9627.png" style="border: 0">
</p><p>Here, the launch spike in impressions is very clear, and closely mirrors the Google Trends data. However, clicks to Moz.com were, like the US market, unaffected. Hindsight is 20/20, and we were trying to make decisions on the fly, but the short-term shift in Google SERPs had very little impact on clicks to our site. People looking for Moz the Monster and people looking for Moz the search marketing tool are, not shockingly, two very different groups.
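</p><p>That "impressions spike, clicks flat" pattern is easy to check for in a Search Console export. A minimal sketch with made-up daily numbers (illustrative only, not our real data):</p>

```python
# Hypothetical daily Search Console rows for the query "moz" (UK):
# (date, impressions, clicks) — illustrative numbers only.
daily = [
    ("2017-11-08", 9000, 700), ("2017-11-09", 9100, 710),
    ("2017-11-10", 31000, 715),  # campaign launch spike
    ("2017-11-11", 24000, 705), ("2017-11-12", 15000, 700),
    ("2017-11-13", 10500, 695),
]

launch = "2017-11-10"
pre  = [(i, c) for d, i, c in daily if d < launch]
post = [(i, c) for d, i, c in daily if d >= launch]

def mean(xs):
    return sum(xs) / len(xs)

# Relative change in the post-launch average vs. the pre-launch average
impr_change = mean([i for i, _ in post]) / mean([i for i, _ in pre]) - 1
click_change = mean([c for _, c in post]) / mean([c for _, c in pre]) - 1
print(f"Impressions: {impr_change:+.0%}, clicks: {click_change:+.0%}")
# Impressions: +122%, clicks: -0%
```

<p>The same comparison works on Trends data; the point is separating visibility noise from actual traffic impact.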
</p><p>Ultimately, the impact of this campaign was short-lived, but it is interesting to see how quickly a SERP can rewrite itself based on the changing world, especially with an injection of ad dollars. At one point (in UK results), Moz the Monster had replaced Moz.com in over half (5 of 8) page-one organic spots and Top Stories – an impressive and somewhat alarming feat.
</p><p>By December 2, Moz the Monster had completely disappeared from US and UK SERPs for the phrase "moz". New, short-term signals can rewrite search results, but when those signals fade, results often return to normal. So, remember not to panic and track real, bottom-line results.
</p><h2>Your crisis plan</h2><p>So, how can we generalize this to other brand crises? What happens when someone else's campaign treads on your brand's hard-fought territory? Let's restate our 5-step process:</p><h3>(1) Remember not to panic</h3><p>The very word "crisis" almost demands panic, but remember that you can make any problem worse. I realize that's not very comforting, but unless your office is actually on fire, there's time to stop and assess the situation. Get multiple perspectives and make sure you're not overreacting.</p><h3>(2) Be cautiously proactive</h3><p>Unless there's a very good reason not to (such as a legal reason), it's almost always best to be proactive and respond to the situation on your own terms. At least acknowledge the situation, preferably with a touch of humor. These brand intrusions are, by their nature, high profile, and if you pretend it's not happening, you'll just look clueless.</p><h3>(3) Track the impact</h3><p>As soon as possible, start collecting data. These situations move quickly, and search rankings can change overnight in 2017. Find out what impact the event is really having as quickly as possible, even if you have to track some of it by hand. Don't wait for the perfect metrics or tracking tools.</p><h3>(4) Don't get complacent</h3><p>Search results are volatile and social media is fickle – don't assume that a lull or short-term change means you can stop and rest. Keep tracking, at least for a few days and preferably for a couple of weeks (depending on the severity of the crisis).</p><h3>(5) Measure bottom-line results</h3><p>As the days go by, you'll be able to more clearly see the impact. Track as deeply as you can – long-term rankings, traffic, even sales/conversions where necessary. 
This is the data that tells you if the short-term impact in (3) is really doing damage or is just superficial.</p><h2>The real John Lewis</h2><p>Finally, I'd like to give a shout-out to someone who has felt a much longer-term impact of John Lewis' successful holiday campaigns. Twitter user and computer science teacher <a href="https://twitter.com/johnlewis" target="_blank">@johnlewis</a> has weathered his own brand crisis year after year with grace and humor:
</p><p class="full-width"><img src="http://d1avok0lzls2w.cloudfront.net/uploads/blog/moz-the-monster11-49440.png" style="border: 0">
</p><p>So, a hat-tip to John Lewis, and, on behalf of Moz, a very happy holidays to Moz the Monster!
</p><br /><p><a href="https://moz.com/moztop10">Sign up for The Moz Top 10</a>, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!</p><p>Published: Wed, 13 Dec 2017</p>
<h1>Keyword Research Beats Nate Silver's 2016 Presidential Election Prediction</h1>
<p>Posted by <a href="https://moz.com/community/users/514135">BritneyMuller</a></p><p>100% of statisticians would say this is a terrible method for predicting elections. However, in the case of 2016’s presidential election, analyzing the geographic search volume of a few telling keywords “predicted” the outcome more accurately than Nate Silver himself.
</p><p>The 2016 US Presidential Election was a nail-biter, and many of us followed along with the famed statistician’s predictions in real time on <a href="http://fivethirtyeight.com/" target="_blank">FiveThirtyEight.com</a>. Silver’s predictions, though more accurate than many, were still upended by the actual results.
</p><p>In an effort to better understand our country (and current political chaos), I dove into keyword research state-by-state searching for insights. Keywords can be powerful indicators of intent, thought, and behavior. What keyword searches might indicate a personal political opinion? Might there be a common denominator search among people with the same political beliefs?</p><p>It’s generally agreed that <a href="http://www.journalism.org/2014/10/21/political-polarization-media-habits/pj_14-10-21_mediapolarization-08/" target="_blank">Fox News leans to the right and CNN leans to the left</a>. And if we’ve learned anything this past year, it’s that the news you consume can have a strong impact on what you believe, in addition to the <a href="https://en.wikipedia.org/wiki/Confirmation_bias" target="_blank">confirmation bias</a> already present in seeking out particular sources of information.
</p><p>My crazy idea: What if Republican states showed more “fox news” searches than “cnn”? What if those searches revealed a bias and an intent that exit polling seemed to obscure?
</p><p>The limitations of this research were pretty obvious. Watching Fox News or CNN doesn’t necessarily correlate with voter behavior, but could it be a better indicator than the polls? My research says yes. I researched other media outlets as well, but the top two ideologically opposed news sources — in any of the 50 states — were consistently Fox News and CNN.</p><p>Using Google Keyword Planner (connected to a high-spend AdWords account to view the most accurate, non-bucketed data), I evaluated each state's search volume for “fox news” and “cnn.”</p><p>Eight states showed the exact same search volumes for both. Excluding those from my initial test, my results accurately predicted 42/42 of the 2016 presidential state outcomes, including North Carolina and Wisconsin (which Silver mispredicted). Interestingly, "cnn" even mirrored Hillary Clinton by winning the popular vote (25,633,333 vs. 23,675,000 average monthly searches across the United States).
</p><p>In contrast, Nate Silver accurately predicted 45/50 states using a <a href="https://fivethirtyeight.com/features/a-users-guide-to-fivethirtyeights-2016-general-election-forecast/" target="_blank">statistical methodology based on polling results</a>.
</p><p class="full-width"><a href="https://d2v4zi8pl64nxt.cloudfront.net/keyword-research-electoral-prediction/5a3034dc9ffaa3.17373204.png" target="_blank"><img src="http://d2v4zi8pl64nxt.cloudfront.net/keyword-research-electoral-prediction/5a3034dc9ffaa3.17373204.png" alt=""></a>
</p><p class="full-width caption">Click for a larger image</p><p>This gets even more interesting:
</p><p>The eight states showing the same average monthly search volume for both “cnn” and “fox news” are Arizona, Florida, Michigan, Nevada, New Mexico, Ohio, Pennsylvania, and Texas.
</p><p>However, I was able to dive deeper via GrepWords API (a keyword research tool that actually powers <a href="https://moz.com/explorer" target="_blank" onclick="_gaq.push(['_trackEvent', 'blog', 'Keyword Research Beats Nate Silver’s 2016 Presidential Election Prediction', 'KWE']);">Keyword Explorer's</a> data), to discover that Arizona, Nevada, New Mexico, Pennsylvania, and Ohio each have slightly different “cnn” vs “fox news” search averages over the previous 12-month period. Those new search volume averages are:
</p><table class="table-basic table-row-hover">
<thead>
<tr>
<th><br>
</th>
<th>
<p>“fox news” avg monthly search volume
</p>
</th>
<th>
<p>“cnn” avg monthly search volume
</p>
</th>
<th>
<p>KWR Prediction
</p>
</th>
<th>
<p>2016 Vote
</p>
</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<p>Arizona
</p>
</td>
<td>
<p>566333
</p>
</td>
<td>
<p>518583
</p>
</td>
<td>
<p>Trump
</p>
</td>
<td>
<p>Trump
</p>
</td>
</tr>
<tr>
<td>
<p>Nevada
</p>
</td>
<td>
<p>213833
</p>
</td>
<td>
<p>214583
</p>
</td>
<td>
<p>Hillary
</p>
</td>
<td>
<p>Hillary
</p>
</td>
</tr>
<tr>
<td>
<p>New Mexico
</p>
</td>
<td>
<p>138833
</p>
</td>
<td>
<p>142916
</p>
</td>
<td>
<p>Hillary
</p>
</td>
<td>
<p>Hillary
</p>
</td>
</tr>
<tr>
<td>
<p>Ohio
</p>
</td>
<td>
<p>845833
</p>
</td>
<td>
<p>781083
</p>
</td>
<td>
<p>Trump
</p>
</td>
<td>
<p>Trump
</p>
</td>
</tr>
<tr>
<td>
<p>Pennsylvania
</p>
</td>
<td>
<p>1030500
</p>
</td>
<td>
<p>1063583
</p>
</td>
<td>
<p>Hillary
</p>
</td>
<td>
<p>Trump
</p>
</td>
</tr>
</tbody>
</table><p>Four out of five isn’t bad! This brought my new prediction up to 46/47.
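</p><p>The decision rule behind the table boils down to a one-line comparison. A quick sketch using the five volumes above (the scoring helper is my illustration, not part of the original analysis):</p>

```python
# 12-month average search volumes from the table above (via GrepWords)
states = {
    # state: ("fox news" volume, "cnn" volume, actual 2016 winner)
    "Arizona":      (566_333, 518_583, "Trump"),
    "Nevada":       (213_833, 214_583, "Clinton"),
    "New Mexico":   (138_833, 142_916, "Clinton"),
    "Ohio":         (845_833, 781_083, "Trump"),
    "Pennsylvania": (1_030_500, 1_063_583, "Trump"),
}

def predict(fox_volume, cnn_volume):
    """Higher "fox news" volume -> Trump, otherwise Clinton."""
    return "Trump" if fox_volume > cnn_volume else "Clinton"

correct = sum(predict(fox, cnn) == actual
              for fox, cnn, actual in states.values())
print(f"Correct: {correct}/{len(states)}")  # Correct: 4/5
```

<p>Pennsylvania is the lone miss — exactly where the margin was thinnest.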
</p><p>Silver and I each got Pennsylvania wrong. The GrepWords API shows the average monthly search volume for “cnn” was ~33,083 searches higher than “fox news” (to put that in perspective, that’s ~0.26% of the state’s population). That razor-thin search margin is mirrored in the equally thin result: Trump’s 48.2% win over Clinton’s 47.5%.
</p><p>Nate Silver and I have very different day jobs, and he wouldn’t make many of these hasty generalizations. Any prediction method can be right a couple times. However, it got me thinking about the power of keyword research: how it can reveal searcher intent, predict behavior, and sometimes even defy the logic of things like statistics.
</p><p>It’s also easy to predict the past. What happens when we apply this model to today's Senate race?
</p><h2>Can we apply this theory to Alabama’s special election in the US Senate? </h2><p>After completing the above research on a whim, I realized that we’re on the cusp of yet another hotly contested, extremely close election: the upcoming Alabama senate race, between controversy-laden Republican Roy Moore and Democratic challenger Doug Jones, fighting for a Senate seat that hasn’t been held by a Democrat since 1992.
</p><p>I researched each Alabama county — 67 in total — for good measure. There are obviously <a href="https://www.washingtonpost.com/news/politics/wp/2017/12/11/why-polls-showing-a-20-point-spread-in-alabama-arent-actually-wrong/" target="_blank">a ton of variables</a> at play. However, my theory correctly “predicted” the 2016 presidential vote in 52 of the 67 counties (77.6%).
</p><p>Even when giving the Democratic nominee more weight to the very low search volume counties (19 counties showed a search volume difference of less than 500), my numbers lean pretty far to the right (48/67 Republican counties):
</p><p class="full-width"><img src="http://d2v4zi8pl64nxt.cloudfront.net/keyword-research-electoral-prediction/5a3034dd9d10a2.66116469.png">
</p><p>It should be noted that my theory incorrectly guessed two of the five largest Alabama counties, Montgomery and Jefferson, which both voted Democrat in 2016.
</p><p>Greene and Macon Counties should both vote Democrat; their very slight “cnn” over “fox news” search volume is confirmed by their previous presidential election results.
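</p><p>One way to encode that low-volume weighting is a simple threshold rule. This is just my reading of the adjustment described above, with hypothetical county volumes and an assumed 500-search margin:</p>

```python
def predict_county(fox, cnn, close_margin=500):
    """Counties where the two volumes are within close_margin searches
    of each other get the Democratic benefit of the doubt; otherwise
    the larger volume picks the party."""
    if abs(fox - cnn) < close_margin:
        return "D"
    return "R" if fox > cnn else "D"

# Hypothetical county volumes, for illustration only
print(predict_county(12_000, 11_800))  # within the margin -> "D"
print(predict_county(40_000, 22_000))  # clear "fox news" lead -> "R"
```

<p>Tightening or loosening that 500-search margin is where the judgment call lives.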
</p><p>I realize state elections are not won by county but by popular vote, and the state of Alabama searches for “fox news” 204,000 more times a month than “cnn” (to put that in perspective, that’s around 4.27% of Alabama’s population).</p><p>All things aside and regardless of outcome, this was an interesting exploration into how keyword research can offer us a glimpse into popular opinion, future behavior, and search intent. What do you think? Any other predictions we could make to test this theory? What other keywords or factors would you look at? <a href="https://moz.com/blog/keyword-research-electoral-prediction#comments" target="_blank">Let us know in the comments</a>.<br>
</p><p>Also, if you've enjoyed this post, check out Sam Wang's <a href="http://election.princeton.edu/2016/04/26/google-wide-association-studies/" target="_blank">Google-Wide Association Studies</a>! --Fascinating read.</p><br /><p><a href="https://moz.com/moztop10">Sign up for The Moz Top 10</a>, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!</p><p>Posted by <a href=\"https://moz.com/community/users/514135\">BritneyMuller</a></p><p>100% of statisticians would say this is a terrible method for predicting elections. However, in the case of 2016’s presidential election, analyzing the geographic search volume of a few telling keywords “predicted” the outcome more accurately than Nate Silver himself.
</p><p>The 2016 US Presidential Election was a nail-biter, and many of us followed along with the famed statistician’s predictions in real time on <a href="http://fivethirtyeight.com/" target="_blank">FiveThirtyEight.com</a>. Silver’s predictions, though more accurate than many, were still disrupted by the election results.
</p><p>In an effort to better understand our country (and current political chaos), I dove into keyword research state-by-state searching for insights. Keywords can be powerful indicators of intent, thought, and behavior. What keyword searches might indicate a personal political opinion? Might there be a common denominator search among people with the same political beliefs?</p><p>It’s generally agreed that <a href="http://www.journalism.org/2014/10/21/political-polarization-media-habits/pj_14-10-21_mediapolarization-08/" target="_blank">Fox News leans to the right and CNN leans to the left</a>. And if we’ve learned anything this past year, it’s that the news you consume can have a strong impact on what you believe, in addition to the <a href="https://en.wikipedia.org/wiki/Confirmation_bias" target="_blank">confirmation bias</a> already present in seeking out particular sources of information.
</p><p>My crazy idea: What if Republican states showed more “fox news” searches than “cnn”? What if those searches revealed a bias and an intent that exit polling seemed to obscure?
</p><p>The limitations to this research were pretty obvious. Watching Fox News or CNN doesn’t necessarily correlate with voter behavior, but could it be a better indicator than the polls? My research says yes. I researched other media outlets as well, but the top two ideologically opposed news sources — in any of the 50 states — were consistently Fox News and CNN.</p><p>Using Google Keyword Planner (connected to a high-spend AdWords account to view the most accurate, non-bucketed data), I evaluated each state's search volume for “fox news” and “cnn.”</p><p>Eight states showed the exact same search volumes for both. Excluding those from my initial test, my results accurately predicted 42 of the remaining 42 states’ 2016 presidential outcomes, including North Carolina and Wisconsin (which Silver mis-predicted). Interestingly, “cnn” even mirrored Hillary Clinton in winning the popular vote (25,633,333 vs. 23,675,000 average monthly search volume for the United States).
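</p><p>As a rough sketch of the rule just described (not necessarily the exact workflow used), the comparison boils down to this: whichever keyword shows the higher average monthly search volume “predicts” the state, and exact ties are thrown out. The five state volumes below are the figures reported in the table later in this post; the code itself is purely illustrative.</p>

```python
# Sketch of the "fox news" vs. "cnn" prediction rule described above.
# Volumes are average monthly searches, taken from this post's own table;
# the rule, not the data pipeline, is what's being illustrated.
STATE_VOLUMES = {
    # state: ("fox news" volume, "cnn" volume)
    "Arizona": (566_333, 518_583),
    "Nevada": (213_833, 214_583),
    "New Mexico": (138_833, 142_916),
    "Ohio": (845_833, 781_083),
    "Pennsylvania": (1_030_500, 1_063_583),
}

def predict_state(fox: int, cnn: int) -> str:
    """Higher 'fox news' volume -> Republican, higher 'cnn' -> Democrat.

    Exact ties carry no signal and are excluded from the tally."""
    if fox == cnn:
        return "tie"
    return "Republican" if fox > cnn else "Democrat"

predictions = {state: predict_state(fox, cnn)
               for state, (fox, cnn) in STATE_VOLUMES.items()}
```

<p>Run against these five states, the rule calls Arizona and Ohio for the Republican and Nevada, New Mexico, and Pennsylvania for the Democrat, matching the “KWR Prediction” column in the table below (and missing only Pennsylvania’s actual result).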
</p><p>In contrast, Nate Silver accurately predicted 45/50 states using a <a href="https://fivethirtyeight.com/features/a-users-guide-to-fivethirtyeights-2016-general-election-forecast/" target="_blank">statistical methodology based on polling results</a>.
</p><p class="full-width"><a href="https://d2v4zi8pl64nxt.cloudfront.net/keyword-research-electoral-prediction/5a3034dc9ffaa3.17373204.png" target="_blank"><img src="http://d2v4zi8pl64nxt.cloudfront.net/keyword-research-electoral-prediction/5a3034dc9ffaa3.17373204.png" alt=""></a>
</p><p class="full-width caption">Click for a larger image</p><p>This gets even more interesting:
</p><p>The eight states showing the same average monthly search volume for both “cnn” and “fox news” are Arizona, Florida, Michigan, Nevada, New Mexico, Ohio, Pennsylvania, and Texas.
</p><p>However, I was able to dive deeper via GrepWords API (a keyword research tool that actually powers <a href="https://moz.com/explorer" target="_blank" onclick="_gaq.push(['_trackEvent', 'blog', 'Keyword Research Beats Nate Silver’s 2016 Presidential Election Prediction', 'KWE']);">Keyword Explorer's</a> data), to discover that Arizona, Nevada, New Mexico, Pennsylvania, and Ohio each have slightly different “cnn” vs “fox news” search averages over the previous 12-month period. Those new search volume averages are:
</p><table class="table-basic table-row-hover">
<thead>
<tr>
<th><br>
</th>
<th>
<p>“fox news” avg monthly search volume
</p>
</th>
<th>
<p>“cnn” avg monthly search volume
</p>
</th>
<th>
<p>KWR Prediction
</p>
</th>
<th>
<p>2016 Vote
</p>
</th>
</tr>
</thead>
<tbody>
<tr>
<td><p>Arizona</p></td>
<td><p>566,333</p></td>
<td><p>518,583</p></td>
<td><p>Trump</p></td>
<td><p>Trump</p></td>
</tr>
<tr>
<td><p>Nevada</p></td>
<td><p>213,833</p></td>
<td><p>214,583</p></td>
<td><p>Hillary</p></td>
<td><p>Hillary</p></td>
</tr>
<tr>
<td><p>New Mexico</p></td>
<td><p>138,833</p></td>
<td><p>142,916</p></td>
<td><p>Hillary</p></td>
<td><p>Hillary</p></td>
</tr>
<tr>
<td><p>Ohio</p></td>
<td><p>845,833</p></td>
<td><p>781,083</p></td>
<td><p>Trump</p></td>
<td><p>Trump</p></td>
</tr>
<tr>
<td><p>Pennsylvania</p></td>
<td><p>1,030,500</p></td>
<td><p>1,063,583</p></td>
<td><p>Hillary</p></td>
<td><p>Trump</p></td>
</tr>
</tbody>
</table><p>Four out of five isn’t bad! This brought my new prediction up to 46/47.
</p><p>Silver and I each got Pennsylvania wrong. The GrepWords API shows the average monthly search volume for “cnn” was ~33,083 searches higher than “fox news” (to put that in perspective, that’s ~0.26% of the state’s population). This tight-knit keyword research theory is perfectly reflected in Trump’s 48.2% win against Clinton’s 47.5%.
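</p><p>The population math is simple division. Assuming a 2016 Pennsylvania population of roughly 12.78 million (an assumed round figure; the post doesn’t state the estimate used):</p>

```python
# Express the "cnn" vs. "fox news" search-volume gap as a share of state
# population. The 12.78M population figure is an assumption; the search
# volumes come from the table above.
pa_gap = 1_063_583 - 1_030_500           # "cnn" minus "fox news" monthly volume
pa_population = 12_780_000               # assumed 2016 population estimate
pa_share = pa_gap / pa_population * 100  # gap as a percentage of population
print(f"{pa_gap:,} extra 'cnn' searches = {pa_share:.2f}% of PA's population")
```

<p>That reproduces both the ~33,083-search gap and the ~0.26%-of-population figure cited above.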
</p><p>Nate Silver and I have very different day jobs, and he wouldn’t make many of these hasty generalizations. Any prediction method can be right a couple times. However, it got me thinking about the power of keyword research: how it can reveal searcher intent, predict behavior, and sometimes even defy the logic of things like statistics.
</p><p>It’s also easy to predict the past. What happens when we apply this model to today's Senate race?
</p><h2>Can we apply this theory to Alabama’s special election in the US Senate? </h2><p>After completing the above research on a whim, I realized that we’re on the cusp of yet another hotly contested, extremely close election: the upcoming Alabama senate race, between controversy-laden Republican Roy Moore and Democratic challenger Doug Jones, fighting for a Senate seat that hasn’t been held by a Democrat since 1992.
</p><p>I researched each Alabama county — 67 in total — for good measure. There are obviously <a href="https://www.washingtonpost.com/news/politics/wp/2017/12/11/why-polls-showing-a-20-point-spread-in-alabama-arent-actually-wrong/" target="_blank">a ton of variables</a> at play. However, my theory correctly “predicted” the 2016 presidential vote in 52 of the 67 counties (77.6%).
</p><p>Even when giving the Democratic nominee more weight to the very low search volume counties (19 counties showed a search volume difference of less than 500), my numbers lean pretty far to the right (48/67 Republican counties):
</p><p class="full-width"><img src="http://d2v4zi8pl64nxt.cloudfront.net/keyword-research-electoral-prediction/5a3034dd9d10a2.66116469.png">
</p><p>It should be noted that my theory incorrectly guessed two of the five largest Alabama counties, Montgomery and Jefferson, which both voted Democrat in 2016.
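</p><p>The extra weight for low-volume counties can be sketched as a threshold rule: below a cutoff in absolute volume difference (500 is the figure used above), the county is credited to the Democrat rather than called outright. The function is illustrative; the exact weighting applied isn’t specified in this post.</p>

```python
# Illustrative county-level rule: when "fox news" and "cnn" volumes differ
# by less than the cutoff, the county is weighted toward the Democrat;
# otherwise the larger volume decides. The 500 cutoff comes from the post.
def predict_county(fox: int, cnn: int, cutoff: int = 500) -> str:
    if abs(fox - cnn) < cutoff:
        return "Democrat"  # low-signal county, given Democratic weight
    return "Republican" if fox > cnn else "Democrat"
```

<p>Even with that Democrat-friendly tie-break, 48 of Alabama’s 67 counties still come out Republican in this data.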
</p><p>Greene and Macon Counties should both vote Democrat; their slight edge for “cnn” over “fox news” search volume is consistent with their previous presidential election results.
</p><p>I realize state elections are not won by county, they’re won by popular vote, and the state of Alabama searches for “fox news” 204,000 more times a month than “cnn” (to put that in perspective, that’s roughly 4.27% of Alabama’s population).</p><p>All things aside and regardless of outcome, this was an interesting exploration into how keyword research can offer us a glimpse into popular opinion, future behavior, and search intent. What do you think? Any other predictions we could make to test this theory? What other keywords or factors would you look at? <a href="https://moz.com/blog/keyword-research-electoral-prediction#comments" target="_blank">Let us know in the comments</a>.<br>
</p><p>Also, if you've enjoyed this post, check out Sam Wang's <a href="http://election.princeton.edu/2016/04/26/google-wide-association-studies/" target="_blank">Google-Wide Association Studies</a>! --Fascinating read.</p><br /><p><a href="https://moz.com/moztop10">Sign up for The Moz Top 10</a>, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!</p>Tue, 12 Dec 2017 11:29:00 GMT | https://moz.com/blog/keyword-research-2016-presidential-prediction | BritneyMuller
Not-Actually-the-Best Local SEO Practices
http://feedproxy.google.com/~r/MozBlog/~3/qIcbTvT7wOA/not-actually-the-best-local-seo-practices
https://moz.com/blog/not-actually-the-best-local-seo-practices<p>Posted by <a href="https://moz.com/community/users/13017">MiriamEllis</a></p><p>It’s never fun being the bearer of bad news.
</p><p>You’re on the phone with an amazing prospect. Let’s say it’s a growing appliance sales and repair provider with 75 locations in the western US. Your agency would absolutely love to onboard this client, and the contact is telling you, with some pride, that they’re already ranking pretty well for about half of their locations.
</p><p>With the right strategy, getting them the rest of the way there should be no problem at all.
</p><p>But then you notice something, and your end of the phone conversation falls a little quiet as you click through from one of their Google My Business listings in Visalia to Streetview and see… not a commercial building, but a house. <em>Uh-oh</em>. In answer to your delicately worded question, you find out that 45 of this brand’s listings have been built around the private homes of their repairmen — an egregious violation of <a href="https://support.google.com/business/answer/3038177?hl=en">Google’s guidelines</a>.
</p><p>“I hate to tell you this…,” you clear your throat, and then you deliver <em>the bad news</em>.
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/not-actually-the-best-local-seo-practices/5a2e1e148288f3.37415689.jpg" alt="marketingfoundations1.jpg">
</p><p>If you do in-house Local SEO, do it for clients, or even just answer questions in a forum, you’ve surely had the unenviable (yet vital) task of telling someone they’re “doing it wrong,” frequently after they’ve invested considerable resources in creating a marketing structure that threatens to topple due to a crack in its foundation. Sometimes you can patch the crack, but sometimes, whole edifices of bad marketing have to be demolished before safe and secure new buildings can be erected.
</p><p>Here are 5 of the most common foundational marketing mistakes I’ve encountered over the years as a Local SEO consultant and forum participant. If you run into these in your own work, you’ll be doing someone a big favor by delivering “the bad news” as quickly as possible:
</p><h2>1. Creating GMB listings at ineligible addresses</h2><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/not-actually-the-best-local-seo-practices/5a2e1e151c61d6.08246492.jpg">
</p><h3>What you’ll hear:</h3><p><em>“We need to rank for these other towns, because we want customers there. Well, no, we don’t really have offices there. We have P.O. Boxes/virtual offices/our employees’ houses.”</em>
</p><h3>Why it’s a problem:</h3><p><a href="https://support.google.com/business/answer/3038177?hl=en">Google’s guidelines</a> state:
</p><ul>
<li>Make sure that your page is created at your actual, real-world location</li>
<li>PO Boxes or mailboxes located at remote locations are not acceptable.</li>
<li>Service-area businesses—businesses that serve customers at their locations—should have one page for the central office or location and designate a service area from that point. </li>
</ul><p>All of this adds up to Google saying you shouldn’t create a listing for anything other than a real-world location, but it’s extremely common to see a) spammers simply creating tons of listings for non-existent locations, b) people of good will not knowing the guidelines and doing the same thing, and c) service area businesses (SABs) feeling they have to create fake-location listings because Google won’t rank them for their service cities otherwise.
</p><p>In all three scenarios, the brand puts itself at risk for detection and listing removal. Google can catch them, competitors and consumers can catch them, and marketers can catch them. Once caught, any effort that was put into ranking and building reputation around a fake-location listing is wasted. Better to have devoted resources to risk-free marketing efforts that will add up to something real.
</p><h3>What to do about it:</h3><p>Advise the SAB owner to self-report the problem to Google. I know this sounds risky, but Google My Business forum Top Contributor <a href="https://www.sterlingsky.ca/">Joy Hawkins</a> let me know that <a href="https://www.localsearchforum.com/local-search/44278-forum-etiquette-when-someone-spamming.html">she’s never seen a case in which Google has punished a business that self-reported accidental spam</a>. The owner will likely need to un-verify the spam listings (<a href="https://moz.com/blog/delete-gmb-listing">see how to do that here</a>) and then Google will likely remove the ineligible listings, leaving only the eligible ones intact.
</p><p>What about dyed-in-the-wool spammers who know the guidelines and are violating them regardless, turning local pack results into useless junk? Get to the spam listing in Google Maps, click the “Suggest an edit” link, toggle the toggle to “Yes,” and choose the radio button for spam. Google may or may not act on your suggestion. If not, and the spam is misleading to consumers, I think it’s always a good idea to report it to the <a href="https://www.en.advertisercommunity.com/t5/Google-My-Business/ct-p/GMB">Google My Business forum</a> in hopes that a volunteer Top Contributor may escalate an egregious case to a Google staffer.
</p><h2>2. Sharing phone numbers between multiple entities</h2><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/not-actually-the-best-local-seo-practices/5a2e1e15a666b7.84304260.jpg">
</p><h3>What you’ll hear:</h3><p><em>“I run both my dog walking service and my karate classes out of my house, but I don’t want to have to pay for two different phone lines.”</em>
</p><p>-or-
</p><p><em>“Our restaurant has 3 locations in the city now, but we want all the calls to go through one number for reservation purposes. It’s just easier.”</em>
</p><p>-or-
</p><p><em>“There are seven doctors at our practice. Front desk handles all calls. We can’t expect the doctors to answer their calls personally.”</em>
</p><h3>Why it’s a problem:</h3><p>There are actually multiple issues at hand on this one. First of all, Google’s guidelines state:
</p><ul>
<li>Provide a phone number that connects to your individual business location as directly as possible, and provide one website that represents your individual business location.</li>
<li>Use a local phone number instead of a central, call center helpline number whenever possible.</li>
<li>The phone number must be under the direct control of the business.</li>
</ul><p>This rules out having the phone number of a single location representing multiple locations.
</p><h4>Confusing to Google
</h4><p>Google has also been known in the past to phone businesses for verification purposes. Should a business answer “Jim’s Dog Walking” when a Google rep is calling to verify that the phone number is associated with “Jim’s Karate Lessons,” we’re in trouble. Shared phone numbers have also been suspected in the past of causing accidental merging of Google listings, though I’ve not seen a case of this in a couple of years.
</p><h4>Confusing for businesses
</h4><p>As for the multi-practitioner scenario, the reality is that some business models simply don’t allow for practitioners to answer their own phones. Calls for doctors, dentists, attorneys, etc. are traditionally routed through a front desk. This reality calls into question whether forward-facing listings should be built for these individuals at all. We’ll dive deeper into this topic below, in the section on multi-practitioner listings.
</p><h4>Confusing for the ecosystem
</h4><p>Beyond Google-related concerns, Moz Local’s awesome engineers have taught me some rather amazing things about the problems shared phone numbers can create for citation-building campaigns in the greater ecosystem. Many local business data platforms are highly dependent on unique phone numbers as a signal of entity uniqueness (the “P” in NAP is powerful!). So, for example, if you submit both Jim’s Dog Walking and Jim’s Bookkeeping to Infogroup with the same number, Infogroup may publish both listings, <em>but leave the phone number fields blank</em>! And without a phone number, a local business listing is pretty worthless.
</p><p>It’s because of realities like these that a unique phone number for each entity is a requirement of the Moz Local product, and should be a prerequisite for any citation building campaign.
</p><h3>What to do about it:</h3><p>Let the business owner know that a unique phone number for each business entity, each business location, and each forward-facing practitioner who wants to be listed is a necessary business expense (and, hey, likely tax deductible, too!). Once the investment has been made in the unique numbers, the work ahead involves editing all existing citations to reflect them. The free tool <a href="https://moz.com/local/search" target="_blank">Moz Check Listing</a> can help you instantly locate existing citations for the purpose of creating a spreadsheet that details the bad data, allowing you to start correcting it manually. Or, to save time, the business owner may wish to invest in a paid, automated citation correction product like <a href="https://moz.com/products/local" target="_blank">Moz Local</a>.
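</p><p>A quick way to surface the shared-number problem in an existing citation export is to group listings by normalized phone number and flag any number tied to more than one entity. This is a generic sketch with hypothetical field names, not a feature of any particular tool:</p>

```python
# Flag phone numbers shared across distinct business entities in a
# citation export. The listing records and field names are hypothetical.
import re
from collections import defaultdict

def normalize_phone(raw: str) -> str:
    """Strip everything but digits so formatting differences don't hide duplicates."""
    return re.sub(r"\D", "", raw)

def shared_numbers(listings):
    """Map each phone number to the set of entities using it; keep only shares."""
    by_phone = defaultdict(set)
    for listing in listings:
        by_phone[normalize_phone(listing["phone"])].add(listing["name"])
    return {phone: names for phone, names in by_phone.items() if len(names) > 1}

citations = [
    {"name": "Jim's Dog Walking", "phone": "(555) 123-4567"},
    {"name": "Jim's Karate Lessons", "phone": "555.123.4567"},
    {"name": "Jim's Bookkeeping", "phone": "555-987-6543"},
]
conflicts = shared_numbers(citations)  # the two "Jim's" entities share a number
```

<p>Any number that comes back in <em>conflicts</em> needs its own line before citation building begins.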
</p><p>Pro tip: Apart from removing local business listing stumbling blocks, unique phone numbers have an added bonus in that they enable the benefits of associating KPIs like clicks-to-call to a given entity, and existing numbers can be ported into call tracking numbers for even further analysis of traffic and conversions. You just can’t enjoy these benefits if you lump multiple entities together under a single, shared number.
</p><h2>3. Keyword stuffing GMB listing names</h2><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/not-actually-the-best-local-seo-practices/5a2e1e16362811.10055424.jpg">
</p><h3>What you’ll hear:</h3><p><em>“I have 5 locations in Dallas. How are my customers supposed to find the right one unless I add the neighborhood name to the business name on the listings?”</em>
</p><p>-or-
</p><p><em>“We want customers to know we do both acupuncture and massage, so we put both in the listing name.”</em>
</p><p>-or-
</p><p><em>“Well, no, the business name doesn’t actually have a city name in it, but my competitors are adding city names to their GMB listings and they’re outranking me!”</em>
</p><h3>Why it’s a problem:</h3><p>Long story short, it’s a blatant violation of Google’s guidelines to put extraneous keywords in the business name field of a GMB listing. Google states:
</p><ul>
<li>Your name should reflect your business’ real-world name, as used consistently on your storefront, website, stationery, and as known to customers. </li>
<li>Including unnecessary information in your business name is not permitted, and could result in your listing being suspended. </li>
</ul><h3>What to do about it:</h3><p>I consider this a genuine Local SEO toughie. On the one hand, Google’s lack of enforcement of these guidelines, and apparent lack of concern about the whole thing, makes it difficult to adequately alarm business owners about the risk of suspension. I’ve successfully reported keyword stuffing violations to Google and have had them act on my reports within 24 hours… only to have the spammy names reappear hours or days afterwards. If there’s a suspension of some kind going on here, I don’t see it.
</p><p>Simultaneously, Google’s local algo apparently continues to be influenced by exact keyword matches. When a business owner sees competitors outranking him via outlawed practices which Google appears to ignore, the Local SEO may feel slightly idiotic urging guideline-compliance from his patch of shaky ground.
</p><p>But, do it anyway. For two reasons:
</p><ol>
<li>If you’re not teaching business owners about the importance of brand building at this point, you’re not really teaching marketing. Ask the owner, “Are you into building a lasting brand, or are you hoping to get by on tricks?” Smart owners (and their marketers) will see that it’s a more legitimate strategy to build a future based on earning permanent local brand recognition for <em>Lincoln &amp; Herndon</em> than for <em>Springfield Car Accident Slip and Fall Personal Injury Lawyers Attorneys.</em></li>
<li>I find it interesting that, in all of Google’s guidelines, the word “suspended” is used only a few times, and one of these rare instances relates to spamming the business title field. In other words, Google is using the strongest possible language to warn against this practice, and that makes me quite nervous about tying large chunks of reputation and rankings to a tactic against which Google has forewarned. I remember that companies were doing all kinds of risky things on the eve of the <a href="https://moz.com/google-algorithm-change" target="_blank">Panda and Penguin updates</a> and they woke up to a changed webscape in which they were no longer winners. Because of this, I advocate alerting any business owner who is risking his livelihood to chancy shortcuts. Better to build things for real, for the long haul.</li>
</ol><p>Fortunately, it only takes a few seconds to sign into a GMB account and remove extraneous keywords from a business name. If it needs to be done at scale for large multi-location enterprises across the major aggregators, Moz Local can get the job done. Will removing spammy keywords from the GMB listing title cause the business to move down in Google’s local rankings? It’s possible that they will, but at least they’ll be able to go forward building real stuff, with the moral authority to report rule-breaking competitors and keep at it until Google acts.
</p><p>And tell owners not to worry about Google not being able to sort out a downtown location from an uptown one for consumers. Google’s ability to parse user proximity is getting better every day. Mobile-local packs prove this out. If one location is wrongly outranking another, chances are good the business needs to do <a href="https://moz.com/blog/basic-local-competitive-audit">an audit</a> to discover weaknesses that are holding the more appropriate listing back. That’s real strategy - no tricks!
</p><h2>4. Creating a multi-site morass</h2><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/not-actually-the-best-local-seo-practices/5a2e1e16cd5934.12429006.jpg">
</p><h3>What you’ll hear:</h3><p><em>“So, to cover all 3 of our locations, we have greengrocerysandiego.com, greengrocerymonterey.com and greengrocerymendocino.com… but the problem is, the content on the three sites is kind of all the same. What should we do to make the sites different?”</em>
</p><p>-or-
</p><p><em>“So, to cover all of our services, we have jimsappliancerepair.com, jimswashingmachinerepair.com, jimsdryerrepair.com, jimshotwaterheaterrepair.com, jimsrefrigeratorrepair.com. We’re about to buy jimsvacuumrepair.com … but the problem is, there’s not much content on any of these sites. It feels like management is getting out of hand.”</em>
</p><h3>Why it’s a problem:</h3><p>Definitely a frequent topic in SEO forums, the practice of relying on exact match domains (EMDs) proliferates because of Google’s historic bias in their favor. The ranking influence of EMDs has been the subject of <a href="https://moz.com/google-algorithm-change#2012" target="_blank">a Google update</a> and has lessened over time. I wouldn’t want to try to rank for competitive terms with creditcards.com or insurance.com these days.
</p><p>But if you believe EMDs no longer work in the local-organic world, read this post in which a fellow’s surname/domain name gets mixed up with a distant city name and he ends up <a href="https://moz.com/ugc/case-study-the-interconnectedness-of-local-seo-and-exact-match-domains" target="_blank">ranking in the local packs for it</a>! Chances are, you see weak EMDs ranking all the time for your local searches — more’s the pity. And, no doubt, this ranking boost is the driving force behind local business models continuing to purchase multiple keyword-oriented domains to represent branches of their company or the variety of services they offer. This approach is problematic for 3 chief reasons:
</p><ol>
<li>It’s impractical. The majority of the forum threads I’ve encountered in which small-to-medium local businesses have ended up with two, or five, or ten domains invariably lead to the discovery that the websites are made up of either thin or <a href="https://moz.com/learn/seo/duplicate-content" target="_blank">duplicate content</a>. Larger enterprises are often guilty of the same. What seemed like a great idea at first, buying up all those EMDs, turns into an unmanageable morass of web properties that no one has the time to keep updated, to write for, or to market. </li>
<li>Specific to the multi-service business, it’s not a smart move to put single-location NAP on multiple websites. In other words, if your construction firm is located at 123 Main Street in Funky Town, but consumers and Google are finding that same physical address associated with fences.com, bathroomremodeling.com, decks.com, and kitchenremodeling.com, you are sowing confusion in the ecosystem. Which is the authoritative business associated with that address? Some business owners further compound problems by assuming they can then build separate sets of local business listings for each of these different service-oriented domains, violating Google’s guidelines, which state:<br><br><em>Do not create more than one page for each location of your business.</em><em><br></em><em><br></em>The whole thing can become a giant mess, instead of the clean, manageable simplicity of a single brand, tied to a single domain, with a single NAP signal.</li>
<li>With rare-to-nonexistent exceptions, I consider EMDs to be missed opportunities for brand building. Imagine, if instead of being Whole Foods at WholeFoods.com, the natural foods giant had decided they needed to try to squeeze a ranking boost out of buying 400+ domains to represent the eventual number of locations they now operate. WholeFoodsDallas.com, WholeFoodsMississauga.com, etc? Such an approach would get out of hand very fast. </li>
</ol><p>Even the smallest businesses should take cues from big commerce. Your brand is the magic password you want on every consumer’s lips, associated with every service you offer, in every location you open. As I <a href="https://moz.com/community/q/to-re-domain-or-not-re-domain-that-is-the-question" target="_blank">recently suggested to a Moz community member</a>, be proud to domain your flower shop as <em>rossirovetti.com</em> instead of hoping <em>FloralDelivery24hoursSanFrancisco.com </em>will boost your rankings. It’s authentic, easy to remember, looks trustworthy in the SERPs, and is ripe for memorable brand building.
</p><h3>What to do about it:</h3><p>While I can’t speak to the minutiae of every single scenario, I’ve yet to be part of a discussion about multi-sites in the Local SEO community in which I didn’t advise consolidation. Basically, the business should choose a single, proud domain and, in most cases, 301 redirect the old sites to the main one, then work to get as many external links that pointed to the multi-sites to point to the chosen main site. <a href="https://moz.com/blog/2-become-1-merging-two-domains-made-us-an-seo-killing" target="_blank">This oldie but goodie from the Moz blog</a> provides a further technical checklist from a company that saw a 40% increase in traffic after consolidating domains. I’d recommend that any business that is nervous about handling the tech aspects of consolidation in-house should hire a qualified SEO to help them through the process.
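</p><p>When consolidating, each old URL should 301 to its closest equivalent on the surviving domain, with the homepage only as a fallback. Here is a minimal sketch of that mapping, reusing the hypothetical Jim’s domains from above (the paths are invented; a real map would come from crawling each old site):</p>

```python
# Build a 301 redirect map from retired exact-match domains to the
# consolidated site. Domains echo the hypothetical examples above;
# the path pairings are invented for illustration.
MAIN = "https://jimsappliancerepair.com"

PATH_MAP = {
    # (old domain, old path) -> equivalent path on the main site
    ("jimswashingmachinerepair.com", "/"): "/services/washing-machines",
    ("jimsdryerrepair.com", "/"): "/services/dryers",
    ("jimsrefrigeratorrepair.com", "/contact"): "/contact",
}

def redirect_target(old_domain: str, old_path: str) -> str:
    """Return the URL an old page should 301 to (homepage as fallback)."""
    return MAIN + PATH_MAP.get((old_domain, old_path), "/")
```

<p>Mapping old pages to their true equivalents, rather than pointing everything at the homepage, preserves as much link equity and user intent as possible.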
</p><h2>5. Creating ill-considered practitioner listings</h2><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/not-actually-the-best-local-seo-practices/5a2e1e1769b7b1.50050410.jpg">
</p><h3>What you’ll hear:</h3><p><em>“We have 5 dentists at the practice, but one moved/retired last month and we don’t know what to do with the GMB listing for him.”</em>
</p><p>-or-
</p><p><em>“Dr. Green is outranking the practice in the local results for some reason, and it’s really annoying.”</em>
</p><h3>Why it’s a problem: </h3><p>I’ve saved the most complex for last! Multi-practitioner listings can be a blessing, but they’re so often a bane that my position on creating them has evolved to a point where I only recommend building them in specific cases.
</p><p>When Google first enabled practitioner listings (listings that represent each doctor, lawyer, dentist, or agent within a business) I saw them as a golden opportunity for a given practice to dominate local search results with its presence. However, Google’s subsequent unwillingness to simply remove practitioner duplicates, coupled with the rollout of the <a href="https://moz.com/learn/seo/google-possum" target="_blank">Possum update</a> which filters out shared category/similar location listings, coupled with the number of instances I’ve seen in which practitioner listings end up outranking brand listings, has caused me to change my opinion of their benefits. I should also add that the business title field on practitioner listings is a hotbed of Google guideline violations — few business owners have ever read Google’s nitty gritty rules about how to name these types of listings.
</p><p>In a nutshell, practitioner listings gone awry can result in a bunch of wrongly-named listings often clouded by duplicates that Google won’t remove, all competing for the same keywords. Not good!
</p><h3>What to do about it:</h3><p>You’ll have multiple scenarios to address when offering advice about this topic.
</p><p>1.) If the business is brand new, and there is no record of it on the Internet as of yet, then I would only recommend creating practitioner listings if it is necessary to point out an area of specialization. So, for example, if a medical practice has 5 MDs, the listing for the practice covers that, with no added listings needed. But, if a medical practice has 5 MDs and an otolaryngologist, it may be good marketing to give the specialist his own listing, <em>because it has its own GMB category</em> and won’t be competing with the practice for rankings. *However, read on to understand the challenges being undertaken any time a multi-practitioner listing is created.
</p><p>2.) If the multi-practitioner business is not new, chances are very good that there are listings out there for present, past, and even deceased practitioners.
</p><ul>
<li>If a partner is current, be sure you point his listing at a landing page on the practice’s website, instead of at the homepage, see if you can differentiate categories, and do your utmost to optimize the practice’s own listing — the point here is to prevent practitioners from outranking the practice. What do I mean by optimization? Be sure the practice’s GMB listing is fully filled out, you’ve got amazing photos, you’re actively earning and responding to reviews, you’re publishing a Google Post at least once a week, and your citations across the web are consistent. These things should all strengthen the listing for the practice.</li>
<li>If a partner is no longer with the practice, it’s ideal to unverify the listing and ask Google to mark it as moved to the practice — not to the practitioner’s new location. Sound goofy? <a href="https://searchengineland.com/cannot-ignore-practitioner-listings-gmb-case-study-253314" target="_blank">Read Joy Hawkins’ smart explanation of this convoluted issue</a>. </li>
<li>If, sadly, a practitioner has passed away, contact Google to show them an obituary so that the listing can be removed.</li>
<li>If a listing represents what is actually a solo practitioner (instead of a partner in a multi-practitioner business model) and his GMB listing is now competing with the listing for his business, you can ask Google to merge the two listings.</li>
</ul><p>3.) If a business wants to create practitioner listings, and they feel up to the task of handling any ranking or situational management concerns, there is one final proviso I’d add. Google’s guidelines state that practitioners should be “directly contactable at the verified location during stated hours” in order to qualify for a GMB listing. I’ve always found this requirement rather vague. Contactable by phone? Contactable in person? Google doesn’t specify. Presumably, a real estate agent in a multi-practitioner agency might be directly contactable, but as my graphic above illustrates, we wouldn’t really expect the same public availability of a surgeon, right? Point being, it may only make marketing sense to create a practitioner listing for someone who needs to be directly available to the consumer public for the business to function. I consider this a genuine grey area in the guidelines, so think it through carefully before acting.
</p><h2>Giving good help</h2><p>It’s genuinely an honor to advise owners and marketers who are strategizing for the success of local businesses. In our own small way, local SEO consultants live in the neighborhood Mister Rogers envisioned in which you could <a href="https://www.youtube.com/watch?v=-LGHtc_D328" target="_blank">look for the helpers</a> when confronted with trouble. Given the livelihoods dependent on local commerce, rescuing a company from a foundational marketing mistake is satisfying work for people who like to be “helpers,” and it carries a weight of responsibility.
</p><p>I’ve worked in 3 different SEO forums over the past 10+ years, and I’d like to close with some things I’ve learned about helping:
</p><ol>
<li>Learn to ask the right questions. Small nuances in business models and scenarios can necessitate completely different advice. Don’t be scared to come back with second and third rounds of follow-up queries if someone hasn’t provided sufficient detail for you to advise them well. Read all details thoroughly before replying.</li>
<li>Always, always consult <a href="https://support.google.com/business/answer/3038177?hl=en" target="_blank">Google’s guidelines</a>, and link to them in your answers. It’s absolutely amazing how few owners and marketers have ever encountered them. Local SEOs are volunteer liaisons between Google and businesses. That’s just the way things have worked out.</li>
<li>Don’t say you’re sure unless you’re really sure. If a forum or client question necessitates a full audit to surface a useful answer, say so. Giving pat answers to complicated queries helps no one, and can actually hurt businesses by leaving them in limbo, losing money, for an even longer time.</li>
<li>Network with colleagues when weird things come up. Ranking drops can be attributed to new Google updates, or bugs, or other factors you haven’t yet noticed but that a trusted peer may have encountered. </li>
<li>Practice humility. 90% of what I know about Local SEO, I’ve learned from people coming to me with problems for which, at some point, I had to discover answers. Over time, the work put in builds up our store of ready knowledge, but we will never know it all, and that’s humbling in a very good way. Community members and clients are our <em>teachers</em>. Let’s be grateful for them, and treat them with respect.</li>
<li>Finally, don’t stress about delivering “the bad news” when you see someone who is asking for help making a marketing mistake. In the long run, your honesty will be the best gift you could possibly have given. </li>
</ol><p>Happy helping!
</p><br /><p><a href="https://moz.com/moztop10">Sign up for The Moz Top 10</a>, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!</p><p>Posted by <a href=\"https://moz.com/community/users/13017\">MiriamEllis</a></p><p>It’s never fun being the bearer of bad news.
</p><p>You’re on the phone with an amazing prospect. Let’s say it’s a growing appliance sales and repair provider with 75 locations in the western US. Your agency would absolutely love to onboard this client, and the contact is telling you, with some pride, that they’re already ranking pretty well for about half of their locations.
</p><p>With the right strategy, getting them the rest of the way there should be no problem at all.
</p><p>But then you notice something, and your end of the phone conversation falls a little quiet as you click through from one of their Google My Business listings in Visalia to Streetview and see… not a commercial building, but a house. <em>Uh-oh</em>. In answer to your delicately worded question, you find out that 45 of this brand’s listings have been built around the private homes of their repairmen — an egregious violation of <a href="https://support.google.com/business/answer/3038177?hl=en">Google’s guidelines</a>.
</p><p>“I hate to tell you this…,” you clear your throat, and then you deliver <em>the bad news</em>.
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/not-actually-the-best-local-seo-practices/5a2e1e148288f3.37415689.jpg" alt="marketingfoundations1.jpg">
</p><p>If you do in-house Local SEO, do it for clients, or even just answer questions in a forum, you’ve surely had the unenviable (yet vital) task of telling someone they’re “doing it wrong,” frequently after they’ve invested considerable resources in creating a marketing structure that threatens to topple due to a crack in its foundation. Sometimes you can patch the crack, but sometimes, whole edifices of bad marketing have to be demolished before safe and secure new buildings can be erected.
</p><p>Here are 5 of the commonest foundational marketing mistakes I’ve encountered over the years as a Local SEO consultant and forum participant. If you run into these in your own work, you’ll be doing someone a big favor by delivering “the bad news” as quickly as possible:
</p><h2>1. Creating GMB listings at ineligible addresses</h2><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/not-actually-the-best-local-seo-practices/5a2e1e151c61d6.08246492.jpg">
</p><h3>What you’ll hear:</h3><p><em>“We need to rank for these other towns, because we want customers there. Well, no, we don’t really have offices there. We have P.O. Boxes/virtual offices/our employees’ houses.”</em>
</p><h3>Why it’s a problem:</h3><p><a href="https://support.google.com/business/answer/3038177?hl=en">Google’s guidelines</a> state:
</p><ul>
<li>Make sure that your page is created at your actual, real-world location</li>
<li>PO Boxes or mailboxes located at remote locations are not acceptable.</li>
<li>Service-area businesses—businesses that serve customers at their locations—should have one page for the central office or location and designate a service area from that point. </li>
</ul><p>All of this adds up to Google saying you shouldn’t create a listing for anything other than a real-world location, but it’s extremely common to see a) spammers simply creating tons of listings for non-existent locations, b) people of good will not knowing the guidelines and doing the same thing, and c) service area businesses (SABs) feeling they have to create fake-location listings because Google won’t rank them for their service cities otherwise.
</p><p>In all three scenarios, the brand puts itself at risk for detection and listing removal. Google can catch them, competitors and consumers can catch them, and marketers can catch them. Once caught, any effort that was put into ranking and building reputation around a fake-location listing is wasted. Better to have devoted resources to risk-free marketing efforts that will add up to something real.
</p><h3>What to do about it:</h3><p>Advise the SAB owner to self-report the problem to Google. I know this sounds risky, but Google My Business forum Top Contributor <a href="https://www.sterlingsky.ca/">Joy Hawkins</a> let me know that <a href="https://www.localsearchforum.com/local-search/44278-forum-etiquette-when-someone-spamming.html">she’s never seen a case in which Google has punished a business that self-reported accidental spam</a>. The owner will likely need to un-verify the spam listings (<a href="https://moz.com/blog/delete-gmb-listing">see how to do that here</a>) and then Google will likely remove the ineligible listings, leaving only the eligible ones intact.
</p><p>What about dyed-in-the-wool spammers who know the guidelines and are violating them regardless, turning local pack results into useless junk? Get to the spam listing in Google Maps, click the “Suggest an edit” link, toggle the toggle to “Yes,” and choose the radio button for spam. Google may or may not act on your suggestion. If not, and the spam is misleading to consumers, I think it’s always a good idea to report it to the <a href="https://www.en.advertisercommunity.com/t5/Google-My-Business/ct-p/GMB">Google My Business forum</a> in hopes that a volunteer Top Contributor may escalate an egregious case to a Google staffer.
</p><h2>2. Sharing phone numbers between multiple entities</h2><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/not-actually-the-best-local-seo-practices/5a2e1e15a666b7.84304260.jpg">
</p><h3>What you’ll hear:</h3><p><em>“I run both my dog walking service and my karate classes out of my house, but I don’t want to have to pay for two different phone lines.”</em>
</p><p>-or-
</p><p><em>“Our restaurant has 3 locations in the city now, but we want all the calls to go through one number for reservation purposes. It’s just easier.”</em>
</p><p>-or-
</p><p><em>“There are seven doctors at our practice. Front desk handles all calls. We can’t expect the doctors to answer their calls personally.”</em>
</p><h3>Why it’s a problem:</h3><p>There are actually multiple issues at hand on this one. First of all, Google’s guidelines state:
</p><ul>
<li>Provide a phone number that connects to your individual business location as directly as possible, and provide one website that represents your individual business location.</li>
<li>Use a local phone number instead of a central, call center helpline number whenever possible.</li>
<li>The phone number must be under the direct control of the business.</li>
</ul><p>This rules out having the phone number of a single location representing multiple locations.
</p><h4>Confusing to Google
</h4><p>Google has also been known in the past to phone businesses for verification purposes. Should a business answer “Jim’s Dog Walking” when a Google rep is calling to verify that the phone number is associated with “Jim’s Karate Lessons,” we’re in trouble. Shared phone numbers have also been suspected in the past of causing accidental merging of Google listings, though I’ve not seen a case of this in a couple of years.
</p><h4>Confusing for businesses
</h4><p>As for the multi-practitioner scenario, the reality is that some business models simply don’t allow for practitioners to answer their own phones. Calls for doctors, dentists, attorneys, etc. are traditionally routed through a front desk. This reality calls into question whether forward-facing listings should be built for these individuals at all. We’ll dive deeper into this topic below, in the section on multi-practitioner listings.
</p><h4>Confusing for the ecosystem
</h4><p>Beyond Google-related concerns, Moz Local’s awesome engineers have taught me some rather amazing things about the problems shared phone numbers can create for citation-building campaigns in the greater ecosystem. Many local business data platforms are highly dependent on unique phone numbers as a signal of entity uniqueness (the “P” in NAP is powerful!). So, for example, if you submit both Jim’s Dog Walking and Jim’s Bookkeeping to Infogroup with the same number, Infogroup may publish both listings, <em>but leave the phone number fields blank</em>! And without a phone number, a local business listing is pretty worthless.
</p><p>It’s because of realities like these that a unique phone number for each entity is a requirement of the Moz Local product, and should be a prerequisite for any citation building campaign.
</p><h3>What to do about it:</h3><p>Let the business owner know that a unique phone number for each business entity, each business location, and each forward-facing practitioner who wants to be listed is a necessary business expense (and, hey, likely tax-deductible, too!). Once the investment has been made in the unique numbers, the work ahead involves editing all existing citations to reflect them. The free tool <a href="https://moz.com/local/search" target="_blank">Moz Check Listing</a> can help you instantly locate existing citations for the purpose of creating a spreadsheet that details the bad data, allowing you to start correcting it manually. Or, to save time, the business owner may wish to invest in a paid, automated citation correction product like <a href="https://moz.com/products/local" target="_blank">Moz Local</a>.
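To make that audit step concrete, here is a minimal sketch of how the spreadsheet data could be screened for shared numbers before corrections begin. The record fields and function names are hypothetical illustrations, not part of Moz Check Listing or Moz Local:

```python
from collections import defaultdict

def normalize_phone(raw):
    """Reduce a phone string to bare digits so formatting differences
    ('(555) 123-4567' vs '555.123.4567') don't hide duplicates."""
    return "".join(ch for ch in raw if ch.isdigit())

def find_shared_numbers(records):
    """records: list of dicts with 'name', 'address', 'phone' (NAP).
    Returns numbers claimed by more than one distinct entity -- the
    situation that can make data platforms drop the phone field."""
    by_phone = defaultdict(set)
    for r in records:
        by_phone[normalize_phone(r["phone"])].add((r["name"], r["address"]))
    return {phone: sorted(entities)
            for phone, entities in by_phone.items() if len(entities) > 1}

citations = [
    {"name": "Jim's Dog Walking", "address": "12 Oak St", "phone": "(555) 123-4567"},
    {"name": "Jim's Bookkeeping", "address": "12 Oak St", "phone": "555.123.4567"},
    {"name": "Green Grocery",     "address": "99 Elm Ave", "phone": "555-999-0000"},
]
print(find_shared_numbers(citations))  # the two "Jim's" entities share a number
```

Each flagged number becomes a row in the audit spreadsheet: one entity keeps it, and the other gets a new unique line before citations are corrected.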
</p><p>Pro tip: Apart from removing local business listing stumbling blocks, unique phone numbers have an added bonus in that they enable the benefits of associating KPIs like clicks-to-call to a given entity, and existing numbers can be ported into call tracking numbers for even further analysis of traffic and conversions. You just can’t enjoy these benefits if you lump multiple entities together under a single, shared number.
</p><h2>3. Keyword stuffing GMB listing names</h2><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/not-actually-the-best-local-seo-practices/5a2e1e16362811.10055424.jpg">
</p><h3>What you’ll hear:</h3><p><em>“I have 5 locations in Dallas. How are my customers supposed to find the right one unless I add the neighborhood name to the business name on the listings?”</em>
</p><p>-or-
</p><p><em>“We want customers to know we do both acupuncture and massage, so we put both in the listing name.”</em>
</p><p>-or-
</p><p><em>“Well, no, the business name doesn’t actually have a city name in it, but my competitors are adding city names to their GMB listings and they’re outranking me!”</em>
</p><h3>Why it’s a problem:</h3><p>Long story short, it’s a blatant violation of Google’s guidelines to put extraneous keywords in the business name field of a GMB listing. Google states:
</p><ul>
<li>Your name should reflect your business’ real-world name, as used consistently on your storefront, website, stationery, and as known to customers. </li>
<li>Including unnecessary information in your business name is not permitted, and could result in your listing being suspended. </li>
</ul><h3>What to do about it:</h3><p>I consider this a genuine Local SEO toughie. On the one hand, Google’s lack of enforcement of these guidelines, and apparent lack of concern about the whole thing, makes it difficult to adequately alarm business owners about the risk of suspension. I’ve successfully reported keyword stuffing violations to Google and have had them act on my reports within 24 hours… only to have the spammy names reappear hours or days afterwards. If there’s a suspension of some kind going on here, I don’t see it.
</p><p>Simultaneously, Google’s local algo apparently continues to be influenced by exact keyword matches. When a business owner sees competitors outranking him via outlawed practices which Google appears to ignore, the Local SEO may feel slightly idiotic urging guideline-compliance from his patch of shaky ground.
</p><p>But, do it anyway. For two reasons:
</p><ol>
<li>If you’re not teaching business owners about the importance of brand building at this point, you’re not really teaching marketing. Ask the owner, “Are you into building a lasting brand, or are you hoping to get by on tricks?” Smart owners (and their marketers) will see that it’s a more legitimate strategy to build a future based on earning permanent local brand recognition for <em>Lincoln & Herndon</em>, than for <em>Springfield Car Accident Slip and Fall Personal Injury Lawyers Attorneys.</em></li>
<li>I find it interesting that, in all of Google’s guidelines, the word “suspended” is used only a few times, and one of these rare instances relates to spamming the business title field. In other words, Google is using the strongest possible language to warn against this practice, and that makes me quite nervous about tying large chunks of reputation and rankings to a tactic against which Google has forewarned. I remember that companies were doing all kinds of risky things on the eve of the <a href="https://moz.com/google-algorithm-change" target="_blank">Panda and Penguin updates</a> and they woke up to a changed webscape in which they were no longer winners. Because of this, I advocate alerting any business owner who is risking his livelihood to chancy shortcuts. Better to build things for real, for the long haul.</li>
</ol><p>Fortunately, it only takes a few seconds to sign into a GMB account and remove extraneous keywords from a business name. If it needs to be done at scale for large multi-location enterprises across the major aggregators, Moz Local can get the job done. Will removing spammy keywords from the GMB listing title cause the business to move down in Google’s local rankings? It’s possible that they will, but at least they’ll be able to go forward building real stuff, with the moral authority to report rule-breaking competitors and keep at it until Google acts.
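A first-pass screen for this kind of stuffing can even be automated: the sketch below simply diffs a listing title against the registered real-world name. This is an illustrative heuristic of my own, not Google's actual enforcement logic:

```python
def extraneous_terms(gmb_title, real_world_name):
    """Words in the GMB title that aren't part of the real-world business
    name -- candidates for the 'unnecessary information' that Google's
    guidelines warn can get a listing suspended."""
    allowed = {w.lower().strip(".,&") for w in real_world_name.split()}
    return [w for w in gmb_title.split()
            if w.lower().strip(".,&") not in allowed]

# The stuffed vs. brand-first titles from the example above:
print(extraneous_terms(
    "Lincoln & Herndon Springfield Car Accident Lawyers",
    "Lincoln & Herndon"))
# -> ['Springfield', 'Car', 'Accident', 'Lawyers']
```

Anything the function returns is worth a human look before it's deleted — legitimate descriptors like "DBA" variants do exist, which is why this stays a screen, not an auto-fix.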
</p><p>And tell owners not to worry about Google not being able to sort out a downtown location from an uptown one for consumers. Google’s ability to parse user proximity is getting better every day. Mobile-local packs prove this out. If one location is wrongly outranking another, chances are good the business needs to do <a href="https://moz.com/blog/basic-local-competitive-audit">an audit</a> to discover weaknesses that are holding the more appropriate listing back. That’s real strategy — no tricks!
</p><h2>4. Creating a multi-site morass</h2><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/not-actually-the-best-local-seo-practices/5a2e1e16cd5934.12429006.jpg">
</p><h3>What you’ll hear:</h3><p><em>“So, to cover all 3 of our locations, we have greengrocerysandiego.com, greengrocerymonterey.com and greengrocerymendocino.com… but the problem is, the content on the three sites is kind of all the same. What should we do to make the sites different?”</em>
</p><p>-or-
</p><p><em>“So, to cover all of our services, we have jimsappliancerepair.com, jimswashingmachinerepair.com, jimsdryerrepair.com, jimshotwaterheaterrepair.com, jimsrefrigeratorrepair.com. We’re about to buy jimsvacuumrepair.com … but the problem is, there’s not much content on any of these sites. It feels like management is getting out of hand.”</em>
</p><h3>Why it’s a problem:</h3><p>Definitely a frequent topic in SEO forums, the practice of relying on exact match domains (EMDs) proliferates because of Google’s historic bias in their favor. The ranking influence of EMDs has been the subject of <a href="https://moz.com/google-algorithm-change#2012" target="_blank">a Google update</a> and has lessened over time. I wouldn’t want to try to rank for competitive terms with creditcards.com or insurance.com these days.
</p><p>But if you believe EMDs no longer work in the local-organic world, read this post in which a fellow’s surname/domain name gets mixed up with a distant city name and he ends up <a href="https://moz.com/ugc/case-study-the-interconnectedness-of-local-seo-and-exact-match-domains" target="_blank">ranking in the local packs for it</a>! Chances are, you see weak EMDs ranking all the time for your local searches — more’s the pity. And, no doubt, this ranking boost is the driving force behind local business models continuing to purchase multiple keyword-oriented domains to represent branches of their company or the variety of services they offer. This approach is problematic for 3 chief reasons:
</p><ol>
<li>It’s impractical. The majority of the forum threads I’ve encountered in which small-to-medium local businesses have ended up with two, or five, or ten domains invariably lead to the discovery that the websites are made up of either thin or <a href="https://moz.com/learn/seo/duplicate-content" target="_blank">duplicate content</a>. Larger enterprises are often guilty of the same. What seemed like a great idea at first, buying up all those EMDs, turns into an unmanageable morass of web properties that no one has the time to keep updated, to write for, or to market. </li>
<li>Specific to the multi-service business, it’s not a smart move to put single-location NAP on multiple websites. In other words, if your construction firm is located at 123 Main Street in Funky Town, but consumers and Google are finding that same physical address associated with fences.com, bathroomremodeling.com, decks.com, and kitchenremodeling.com, you are sowing confusion in the ecosystem. Which is the authoritative business associated with that address? Some business owners further compound problems by assuming they can then build separate sets of local business listings for each of these different service-oriented domains, violating Google’s guidelines, which state:<br><br><em>Do not create more than one page for each location of your business.</em><em><br></em><em><br></em>The whole thing can become a giant mess, instead of the clean, manageable simplicity of a single brand, tied to a single domain, with a single NAP signal.</li>
<li>With rare-to-nonexistent exceptions, I consider EMDs to be missed opportunities for brand building. Imagine, if instead of being Whole Foods at WholeFoods.com, the natural foods giant had decided they needed to try to squeeze a ranking boost out of buying 400+ domains to represent the eventual number of locations they now operate. WholeFoodsDallas.com, WholeFoodsMississauga.com, etc? Such an approach would get out of hand very fast. </li>
</ol><p>Even the smallest businesses should take cues from big commerce. Your brand is the magic password you want on every consumer’s lips, associated with every service you offer, in every location you open. As I <a href="https://moz.com/community/q/to-re-domain-or-not-re-domain-that-is-the-question" target="_blank">recently suggested to a Moz community member</a>, be proud to domain your flower shop as <em>rossirovetti.com</em> instead of hoping <em>FloralDelivery24hoursSanFrancisco.com </em>will boost your rankings. It’s authentic, easy to remember, looks trustworthy in the SERPs, and is ripe for memorable brand building.
</p><h3>What to do about it:</h3><p>While I can’t speak to the minutiae of every single scenario, I’ve yet to be part of a discussion about multi-sites in the Local SEO community in which I didn’t advise consolidation. Basically, the business should choose a single, proud domain and, in most cases, 301 redirect the old sites to the main one, then work to get as many external links that pointed to the multi-sites to point to the chosen main site. <a href="https://moz.com/blog/2-become-1-merging-two-domains-made-us-an-seo-killing" target="_blank">This oldie but goodie from the Moz blog</a> provides a further technical checklist from a company that saw a 40% increase in traffic after consolidating domains. I’d recommend that any business that is nervous about handling the tech aspects of consolidation in-house should hire a qualified SEO to help them through the process.
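Under the hood, that consolidation is driven by a redirect map from every old URL to its new home on the chosen domain. A minimal sketch, reusing the hypothetical Jim's domains from the example above (the page mapping and helper function are assumptions about a typical migration, not a prescribed tool):

```python
MAIN = "jimsappliancerepair.com"

# Curated equivalents for pages worth preserving; everything else
# falls back to the homepage of the consolidated domain.
PAGE_MAP = {
    ("jimsdryerrepair.com", "/services"): "/services/dryer-repair",
    ("jimsrefrigeratorrepair.com", "/contact"): "/contact",
}

def redirect_target(old_domain, path):
    """Return the URL an old EMD page should 301 to on the main site."""
    return f"https://{MAIN}{PAGE_MAP.get((old_domain, path), '/')}"

print(redirect_target("jimsdryerrepair.com", "/services"))
# -> https://jimsappliancerepair.com/services/dryer-repair
print(redirect_target("jimsvacuumrepair.com", "/about"))
# -> https://jimsappliancerepair.com/
```

Mapping each old page to its closest equivalent (rather than dumping everything on the homepage) is what preserves the link equity the consolidation is meant to capture.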
</p><h2>5. Creating ill-considered practitioner listings</h2><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/not-actually-the-best-local-seo-practices/5a2e1e1769b7b1.50050410.jpg">
</p><h3>What you’ll hear:</h3><p><em>“We have 5 dentists at the practice, but one moved/retired last month and we don’t know what to do with the GMB listing for him.”</em>
</p><p>-or-
</p><p><em>“Dr. Green is outranking the practice in the local results for some reason, and it’s really annoying.”</em>
</p><h3>Why it’s a problem: </h3><p>I’ve saved the most complex for last! Multi-practitioner listings can be a blessing, but they’re so often a bane that my position on creating them has evolved to a point where I only recommend building them in specific cases.
</p><p>When Google first enabled practitioner listings (listings that represent each doctor, lawyer, dentist, or agent within a business) I saw them as a golden opportunity for a given practice to dominate local search results with its presence. However, Google’s subsequent unwillingness to simply remove practitioner duplicates, coupled with the rollout of the <a href="https://moz.com/learn/seo/google-possum" target="_blank">Possum update</a> which filters out shared category/similar location listings, coupled with the number of instances I’ve seen in which practitioner listings end up outranking brand listings, has caused me to change my opinion of their benefits. I should also add that the business title field on practitioner listings is a hotbed of Google guideline violations — few business owners have ever read Google’s nitty gritty rules about how to name these types of listings.
</p><p>In a nutshell, practitioner listings gone awry can result in a bunch of wrongly-named listings often clouded by duplicates that Google won’t remove, all competing for the same keywords. Not good!
</p><h3>What to do about it:</h3><p>You’ll have multiple scenarios to address when offering advice about this topic.
</p><p>1.) If the business is brand new, and there is no record of it on the Internet as of yet, then I would only recommend creating practitioner listings if it is necessary to point out an area of specialization. So, for example if a medical practice has 5 MDs, the listing for the practice covers that, with no added listings needed. But, if a medical practice has 5 MDs and an Otolaryngologist, it may be good marketing to give the specialist his own listing, <em>because it has its own GMB category</em> and won’t be competing with the practice for rankings. *However, read on to understand the challenges being undertaken any time a multi-practitioner listing is created.
</p><p>2.) If the multi-practitioner business is not new, chances are very good that there are listings out there for present, past, and even deceased practitioners.
</p><ul>
<li>If a partner is current, be sure you point his listing at a landing page on the practice’s website, instead of at the homepage, see if you can differentiate categories, and do your utmost to optimize the practice’s own listing — the point here is to prevent practitioners from outranking the practice. What do I mean by optimization? Be sure the practice’s GMB listing is fully filled out, you’ve got amazing photos, you’re actively earning and responding to reviews, you’re publishing a Google Post at least once a week, and your citations across the web are consistent. These things should all strengthen the listing for the practice.</li>
<li>If a partner is no longer with the practice, it’s ideal to unverify the listing and ask Google to market it as moved to the practice — not to the practitioner’s new location. Sound goofy? <a href="https://searchengineland.com/cannot-ignore-practitioner-listings-gmb-case-study-253314" target="_blank">Read Joy Hawkins’ smart explanation of this convoluted issue</a>. </li>
<li>If, sadly, a practitioner has passed away, contact Google to show them an obituary so that the listing can be removed.</li>
<li>If a listing represents what is actually a solo practitioner (instead of a partner in a multi-practitioner business model) and his GMB listing is now competing with the listing for his business, you can ask Google to merge the two listings.</li>
</ul><p>3.) If a business wants to create practitioner listings, and they feel up to the task of handling any ranking or situational management concerns, there is one final proviso I’d add. Google’s guidelines state that practitioners should be “directly contactable at the verified location during stated hours” in order to qualify for a GMB listing. I’ve always found this requirement rather vague. Contactable by phone? Contactable in person? Google doesn’t specify. Presumably, a real estate agent in a multi-practitioner agency might be directly contactable, but as my graphic above illustrates, we wouldn’t really expect the same public availability of a surgeon, right? Point being, it may only make marketing sense to create a practitioner listing for someone who needs to be directly available to the consumer public for the business to function. I consider this a genuine grey area in the guidelines, so think it through carefully before acting.
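The scenarios above can be condensed into a single decision helper. This is just a sketch of the advice in this section — the function name, parameters, and status labels are my own, not an official Google flowchart:

```python
def practitioner_listing_advice(is_new_business, has_distinct_specialty=False,
                                listing_status=None):
    """Condense the practitioner-listing scenarios into one lookup.
    listing_status applies to existing listings: 'current', 'departed',
    'deceased', or 'solo'."""
    if is_new_business:
        if has_distinct_specialty:
            return ("create a listing only for the specialist "
                    "with a distinct GMB category")
        return "skip practitioner listings; the practice listing covers everyone"
    advice = {
        "current": ("point the listing at a practitioner landing page, "
                    "differentiate categories, and optimize the practice listing"),
        "departed": "unverify it and ask Google to mark it as moved to the practice",
        "deceased": "show Google an obituary and request removal",
        "solo": "ask Google to merge the practitioner and business listings",
    }
    return advice[listing_status]

print(practitioner_listing_advice(True, has_distinct_specialty=True))
print(practitioner_listing_advice(False, listing_status="departed"))
```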
</p><h2>Giving good help</h2><p>It’s genuinely an honor to advise owners and marketers who are strategizing for the success of local businesses. In our own small way, local SEO consultants live in the neighborhood Mister Rogers envisioned in which you could <a href="https://www.youtube.com/watch?v=-LGHtc_D328" target="_blank">look for the helpers</a> when confronted with trouble. Given the livelihoods dependent on local commerce, rescuing a company from a foundational marketing mistake is satisfying work for people who like to be “helpers,” and it carries a weight of responsibility.
</p><p>I’ve worked in 3 different SEO forums over the past 10+ years, and I’d like to close with some things I’ve learned about helping:
</p><ol>
<li>Learn to ask the right questions. Small nuances in business models and scenarios can necessitate completely different advice. Don’t be scared to come back with second and third rounds of follow-up queries if someone hasn’t provided sufficient detail for you to advise them well. Read all details thoroughly before replying.</li>
<li>Always, always consult <a href="https://support.google.com/business/answer/3038177?hl=en" target="_blank">Google’s guidelines</a>, and link to them in your answers. It’s absolutely amazing how few owners and marketers have ever encountered them. Local SEOs are volunteer liaisons between Google and businesses. That’s just the way things have worked out.</li>
<li>Don’t say you’re sure unless you’re really sure. If a forum or client question necessitates a full audit to surface a useful answer, say so. Giving pat answers to complicated queries helps no one, and can actually hurt businesses by leaving them in limbo, losing money, for an even longer time.</li>
<li>Network with colleagues when weird things come up. Ranking drops can be attributed to new Google updates, or bugs, or other factors you haven’t yet noticed but that a trusted peer may have encountered. </li>
<li>Practice humility. 90% of what I know about Local SEO, I’ve learned from people coming to me with problems for which, at some point, I had to discover answers. Over time, the work put in builds up our store of ready knowledge, but we will never know it all, and that’s humbling in a very good way. Community members and clients are our <em>teachers</em>. Let’s be grateful for them, and treat them with respect.</li>
<li>Finally, don’t stress about delivering “the bad news” when you see someone who is asking for help making a marketing mistake. In the long run, your honesty will be the best gift you could possibly have given. </li>
</ol><p>Happy helping!
</p><br /><p><a href="https://moz.com/moztop10">Sign up for The Moz Top 10</a>, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!</p>Mon, 11 Dec 2017 00:05:00 GMThttps://moz.com/blog/not-actually-the-best-local-seo-practicesMiriamEllis2017-12-11T00:05:00ZWhat Do Google's New, Longer Snippets Mean for SEO? - Whiteboard Fridayhttp://feedproxy.google.com/~r/MozBlog/~3/CnzBtuconkk/googles-longer-snippets
https://moz.com/blog/googles-longer-snippets<p>Posted by <a href="https://moz.com/community/users/63">randfish</a></p><p>Snippets and meta descriptions have brand-new character limits, and it's a big change for Google and SEOs alike. Learn about what's new, when it changed, and what it all means for SEO in this edition of Whiteboard Friday.
</p><p class="wistia_responsive_padding" style="padding:5.25% 0 28px 0;position:relative;">
<iframe src="https://fast.wistia.net/embed/iframe/rzyt0jmt93?videoFoam=true" title="Wistia video player" allowtransparency="true" frameborder="0" scrolling="no" class="wistia_embed" name="wistia_embed" allowfullscreen="" mozallowfullscreen="" webkitallowfullscreen="" oallowfullscreen="" msallowfullscreen="" width="100%" height="100%">
</iframe>
</p><script rel="display: none;" src="https://fast.wistia.net/assets/external/E-v1.js" async=""></script><p style="text-align: center;"><a href="http://d2v4zi8pl64nxt.cloudfront.net/what-do-google-s-new-longer-snippets-mean-for-seo-whiteboard-friday/5a29b6a7c4f1f4.33866068.jpg" target="_blank"><img src="http://d2v4zi8pl64nxt.cloudfront.net/what-do-google-s-new-longer-snippets-mean-for-seo-whiteboard-friday/5a29b6a7c4f1f4.33866068.jpg" alt="What do Google's new, longer snippets mean for SEO?" style="box-shadow: rgb(153, 153, 153) 0px 0px 10px 0px; border-radius: 20px;"></a>
</p><p style="text-align: center;" class="caption">Click on the whiteboard image above to open a high-resolution version in a new tab!
</p><iframe width="100%" height="100" scrolling="no" frameborder="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/366792656&color=%23ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&show_teaser=true"></iframe><h2>Video Transcription</h2><p>Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we're chatting about Google's big change to the snippet length. <br><br>This is the display length of the snippet for any given result in the search results that Google provides. This is on both mobile and desktop. It sort of impacts the meta description, which is how many snippets are written. They're taken from the meta description tag of the web page. Google essentially said just last week, "Hey, we have officially increased the length, the recommended length, and the display length of what we will show in the text snippet of standard organic results."
</p><p><img src="http://d1avok0lzls2w.cloudfront.net/uploads/blog/1-216207.jpg" style="box-shadow: 0 0 10px 0 #999; border-radius: 20px;">
</p>So I'm illustrating that for you here. I did a search for "net neutrality bill," something that's on the minds of a lot of Americans right now. You can see here that this article from The Hill, which is a recent article — it was two days ago — has a much longer text snippet than what we would normally expect to find. In fact, I went ahead and counted this one and then showed it here.<p><br>So basically, at the old 165-character limit, which is what you would have seen prior to the middle of November on most every search result, occasionally Google would have a longer one for very specific kinds of search results. But according to data from SISTRIX, which put out a great report that I'll link to here, more than <a href="https://www.sistrix.com/blog/google-permits-longer-snippet-texts/" target="_blank">90% of search snippets were 165 characters or less</a> prior to the middle of November. Then Google added basically a few more lines.
</p><p>So now, on mobile and desktop, instead of an average of two or three lines, we're talking three, four, five, sometimes even six lines of text. So this snippet here is 266 characters that Google is displaying. The next result, from Save the Internet, is 273 characters. Again, this might be because Google sort of realized, "Hey, we almost got all of this in here. Let's just carry it through to the end rather than showing the ellipsis." But you can see that 165 characters would cut off right here. This one actually does a good job of displaying things.
</p><p>So imagine a searcher is querying for something in your field and they're just looking for a basic understanding of what it is. So they've never heard of net neutrality. They're not sure what it is. So they can read here, "Net neutrality is the basic principle that prohibits internet service providers like AT&T, Comcast, and Verizon from speeding up, slowing down, or blocking any . . ." And that's where it would cut off. Or that's where it would have cut off in November.
</p><p>Now, if I got a snippet like that, I need to visit the site. I've got to click through in order to learn more. That doesn't tell me enough to give me the data to go through. Now, Google has tackled this before with things, like a featured snippet, that sit at the top of the search results, that are a more expansive short answer. But in this case, I can get the rest of it because now, as of mid-November, Google has lengthened this. So now I can get, "Any content, applications, or websites you want to use. Net neutrality is the way that the Internet has always worked."
</p><p>Now, you might quibble and say this is not a full, thorough understanding of what net neutrality is, and I agree. But for a lot of searchers, this is good enough. They don't need to click any more. This extension from 165 to 275 or 273, in this case, has really done the trick.
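</p><p>To make that before-and-after concrete, here's a tiny Python sketch of how a character display limit changes what a searcher sees. It's purely illustrative: Google truncates by pixel width rather than character count, and the <code>preview_snippet</code> helper is my own invention, not anything Google uses.</p>

```python
def preview_snippet(text, limit):
    """Roughly mimic cutting a snippet at a character display limit.
    Illustrative only: real results are truncated by pixel width."""
    if len(text) <= limit:
        return text
    cut = text[:limit].rsplit(" ", 1)[0]  # avoid chopping mid-word
    return cut + " ..."

description = (
    "Net neutrality is the basic principle that prohibits internet service "
    "providers like AT&T, Comcast, and Verizon from speeding up, slowing "
    "down, or blocking any content, applications, or websites you want to use."
)

print(preview_snippet(description, 165))  # roughly the pre-change view
print(preview_snippet(description, 320))  # fits within the new, longer limit
```

<p>Run against the net neutrality description, the 165-character version cuts off mid-thought, while the longer limit lets the whole answer through.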
</p><h2>What changed?</h2><p>So this can have a bunch of changes to SEO too. So the change that happened here is that Google updated basically two things. One, they updated the snippet length, and two, they updated their guidelines around it.
</p><p>So Google's had historic guidelines that said, well, you want to keep your meta description tag between about 160 and 180 characters. I think that was the number. They've updated that to say there's no official recommended meta description length. But on Twitter, Danny Sullivan said that he would probably not make it greater than 320 characters. In fact, we and other data providers that collect a lot of search results didn't find many that extended beyond 300. So I think that's a reasonable thing.
</p><h2>When?</h2><p>When did this happen? It was starting at about mid-November. November 22nd is when SISTRIX's dataset starts to notice the increase, and it was over 50%. Now it's sitting at about 51% of search results that have these longer snippets in at least 1 of the top 10 as of December 2nd.
</p><p>Here's the amazing thing, though — 51% of search results have at least one. Many of those, because they're still pulling old meta descriptions or meta descriptions that SEOs optimized for the 165-character limit, are still very short. So, especially since it's holiday time right now with lots of ecommerce action, if you're the person to go update your important pages right now, you might be able to get more real estate in the search results than any of your competitors in the SERPs, because they're not updating theirs.
</p><h2>How will this affect SEO?</h2><p>So how is this going to really change SEO? Well, three things:
</p><h3>A. It changes how marketers should write and optimize the meta description.
</h3><p>We're going to be writing a little bit differently because we have more space. We're going to be trying to entice people to click, but we're going to be very conscientious that we want to try and answer a lot of this in the search result itself, because if we can, there's a good chance that Google will rank us higher, even if we're actually sort of sacrificing clicks by helping the searcher get the answer they need in the search result.
</p><h3>B. It may impact click-through rate. </h3><p>We'll be looking at Jumpshot data over the next few months and year ahead. We think there are two likely ways it could go. Probably negatively, meaning fewer clicks on less complex queries. But conversely, it could possibly earn more clicks on some more complex queries, because people are more enticed by the longer description. Fingers crossed, that's kind of what you want to do as a marketer.
</p><h3>C. It may lead to lower click-through rate further down in the search results. </h3><p>If you think about the fact that two results are now taking up the real estate that three results took up a month ago, well, maybe people won't scroll as far down. Maybe the ones that are higher up will in fact draw more of the clicks, and thus being further down on page one will have less value than it used to.
</p><h2>What should SEOs do?</h2><p>What are things that you should do right now? Number one, make a priority list — you should probably already have this — of your most important landing pages by organic search traffic, the ones that receive the most search traffic on your website. Then I would go and reoptimize those meta descriptions for the longer limits.
</p><p>Now, you can judge as you will. My advice would be go to the SERPs that are sending you the most traffic, that you're ranking for the most. Go check out the limits. They're probably between about 250 and 300, and you can optimize somewhere in there.
</p><p>The second thing I would do is if you have internal processes or your CMS has rules around how long you can make a meta description tag, you're going to have to update those probably from the old limit of somewhere in the 160 to 180 range to the new 230 to 320 range. It doesn't look like many are smaller than 230 now, at least limit-wise, and it doesn't look like anything is particularly longer than 320. So somewhere in there is where you're going to want to stay.
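</p><p>As a rough sketch of that audit step, something like the following could flag meta descriptions that fall outside the newer range. The <code>check_meta_description</code> helper, the 230-to-320 bounds, and the regex approach are all my own simplifications; a real audit should use a proper HTML parser and your own measured SERP limits.</p>

```python
import re

NEW_MIN, NEW_MAX = 230, 320  # ranges suggested in the video, not official limits

def check_meta_description(html):
    """Pull the meta description out of a page's HTML and flag its length.
    A rough sketch; a real audit should use a proper HTML parser."""
    match = re.search(
        r'<meta\s+name=["\']description["\']\s+content=["\'](.*?)["\']',
        html, re.IGNORECASE | re.DOTALL)
    if not match:
        return "missing"
    length = len(match.group(1))
    if length < NEW_MIN:
        return f"too short ({length} chars): room to say more"
    if length > NEW_MAX:
        return f"too long ({length} chars): likely to be cut off"
    return f"ok ({length} chars)"

page = '<head><meta name="description" content="Only 40 characters of text here, sadly."></head>'
print(check_meta_description(page))
```

<p>Pointed at your priority pages' HTML, this gives you a quick worklist of descriptions to rewrite.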
</p><p>Good luck with your new meta descriptions and with your new snippet optimization. We'll see you again next week for another edition of Whiteboard Friday. Take care.
</p><p><a href="http://www.speechpad.com/page/video-transcription/" target="_blank">Video transcription</a> by <a href="http://www.speechpad.com/" target="_blank">Speechpad.com</a>
</p><br /><p><a href="https://moz.com/moztop10">Sign up for The Moz Top 10</a>, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!</p><img src="http://feeds.feedburner.com/~r/MozBlog/~4/CnzBtuconkk" height="1" width="1" alt=""/>Fri, 08 Dec 2017 00:04:00 GMThttps://moz.com/blog/googles-longer-snippetsrandfish2017-12-08T00:04:00ZDon't Be Fooled by Data: 4 Data Analysis Pitfalls &amp; How to Avoid Themhttp://feedproxy.google.com/~r/MozBlog/~3/COElfU7-2lM/data-analysis-pitfalls
https://moz.com/blog/data-analysis-pitfalls<p>Posted by <a href="https://moz.com/community/users/726559">Tom.Capper</a></p><p>Digital marketing is a proudly data-driven field. Yet, as SEOs especially, we often have such incomplete or questionable data to work with that we end up jumping to the wrong conclusions in our attempts to substantiate our arguments or quantify our issues and opportunities.
</p><p>In this post, I’m going to outline 4 data analysis pitfalls that are endemic in our industry, and how to avoid them.
</p><h2>1. Jumping to conclusions</h2><p>Earlier this year, I conducted a ranking factor study around brand awareness, and I posted this caveat:
</p><p><em></em>
</p><blockquote><em>"...the fact that Domain Authority (or branded search volume, or anything else) is positively correlated with rankings could indicate that any or all of the following is likely:
</em>
<ul>
<li><em>Links cause sites to rank well</em></li>
<li><em>Ranking well causes sites to get links</em></li>
<li><em>Some third factor (e.g. reputation or age of site) causes sites to get both links and rankings"<br>~ <a href="https://moz.com/blog/rankings-correlation-study-domain-authority-vs-branded-search-volume" target="_blank">Me</a></em></li>
</ul>
</blockquote><p>However, I want to go into this in a bit more depth and give you a framework for analyzing these yourself, because it still comes up a lot. Take, for example, this <a href="https://www.stonetemple.com/link-as-a-ranking-factor/" target="_blank">recent study by Stone Temple</a>, which you may have seen in the Moz Top 10 or Rand’s <a href="https://twitter.com/randfish/status/907995200986869761" target="_blank">tweets</a>, or this <a href="https://webmarketingschool.com/semrush-direct-traffic-ranking-factor-claim/" target="_blank">excellent article</a> discussing SEMRush’s recent direct traffic findings. To be absolutely clear, I’m not criticizing either of the studies, but I do want to draw attention to how we might interpret them.
</p><p>Firstly, we do tend to suffer a little confirmation bias — we’re all too eager to call out the cliché “correlation vs. causation” distinction when we see successful sites that are keyword-stuffed, but all too approving when we see studies doing the same with something we think is or was effective, like links.
</p><p>Secondly, we fail to critically analyze the potential mechanisms. The options aren’t just causation or coincidence.
</p><p>Before you jump to a conclusion based on a correlation, you’re obliged to consider various possibilities:
</p><ul>
<li>Complete coincidence</li>
<li>Reverse causation</li>
<li>Joint causation</li>
<li>Linearity</li>
<li>Broad applicability</li>
</ul><p>If those don’t make any sense, then that’s fair enough — they’re jargon. Let’s go through an example:
</p><p class="full-width"><img src="http://d2v4zi8pl64nxt.cloudfront.net/data-analysis-pitfalls/5a28e7089deb53.58500794.png">
</p><p>Before I warn you not to eat cheese because you may die in your bedsheets, I’m obliged to check that it isn’t any of the following:
</p><ul>
<li><strong>Complete coincidence -</strong> Is it possible that so many datasets were compared, that some were bound to be similar? Why, that’s exactly what <a href="http://www.tylervigen.com/spurious-correlations" target="_blank">Tyler Vigen</a> did! <em><strong>Yes, this is possible.</strong></em></li>
<li><strong>Reverse causation -</strong> Is it possible that we have this the wrong way around? For example, perhaps your relatives, in mourning for your bedsheet-related death, eat cheese in large quantities to comfort themselves? This seems pretty unlikely, so let’s give it a pass. <em><strong>No, this is very unlikely.</strong></em></li>
<li><strong>Joint causation -</strong> Is it possible that some third factor is behind both of these? Maybe increasing affluence makes you healthier (so you don’t die of things like malnutrition), and also causes you to eat more cheese? This seems very plausible. <em><strong>Yes, this is possible.</strong></em></li>
<li><strong>Linearity -</strong> Are we comparing two linear trends? A linear trend is a steady rate of growth or decline. Any two statistics which are both roughly linear over time will be very well correlated. In the graph above, both our statistics are trending linearly upwards. If the graph was drawn with different scales, they might look completely unrelated, like <a href="https://imgur.com/muS5w9b" target="_blank">this</a>, but because they both have a steady rate, they’d still be very well correlated. <em><strong>Yes, this looks likely.</strong></em></li>
<li><strong>Broad applicability -</strong> Is it possible that this relationship only exists in certain niche scenarios, or, at least, not in my niche scenario? Perhaps, for example, cheese does this to some people, and that’s been enough to create this correlation, because there are so few bedsheet-tangling fatalities otherwise? <em><strong>Yes, this seems possible.</strong></em></li>
</ul><p>So we have 4 “<em>Yes</em>” answers and one “<em>No</em>” answer from those 5 checks.
</p><p>If your example doesn’t get 5 “<em>No</em>” answers from those 5 checks, it’s a fail, and you don’t get to say that the study has established either a ranking factor or a fatal side effect of cheese consumption.
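</p><p>The linearity check is worth a quick demonstration: any two series that each trend steadily will correlate almost perfectly, causal link or not. The cheese and bedsheet figures below are invented for illustration (they are not Tyler Vigen's actual data):</p>

```python
import random
from statistics import mean, stdev

def pearson(xs, ys):
    """Pearson correlation coefficient, computed by hand for portability."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

random.seed(42)
# Two invented, causally unrelated series that both trend steadily upward
cheese_per_capita = [13.1 + 0.2 * year + random.uniform(-0.05, 0.05) for year in range(10)]
bedsheet_deaths = [320 + 15 * year + random.uniform(-5, 5) for year in range(10)]

r = pearson(cheese_per_capita, bedsheet_deaths)
print(f"Pearson r = {r:.3f}")
```

<p>Despite the noise and the complete absence of any causal mechanism, the correlation comes out near-perfect, purely because both series trend upward at a steady rate.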
</p><p>A similar process should apply to case studies, which are another form of correlation — the correlation between you making a change, and something good (or bad!) happening. For example, ask:
</p><ul>
<li>Have I ruled out other factors (e.g. external demand, seasonality, competitors making mistakes)?</li>
<li>Did I increase traffic by doing the thing I tried to do, or did I accidentally improve some other factor at the same time?</li>
<li>Did this work because of the unique circumstance of the particular client/project?</li>
</ul><p>This is particularly challenging for SEOs, because we rarely have data of this quality, but I’d suggest an additional pair of questions to help you navigate this minefield:
</p><ul>
<li>If I were Google, would I do this?</li>
<li>If I were Google, could I do this?</li>
</ul><p>Direct traffic as a ranking factor passes the “could” test, but only barely — Google could use data from Chrome, Android, or ISPs, but it’d be sketchy. It doesn’t really pass the “would” test, though — it’d be far easier for Google to use branded search traffic, which would answer the same questions you might try to answer by comparing direct traffic levels (e.g. how popular is this website?).
</p><h2>2. Missing the context</h2><p>If I told you that my traffic was up 20% week on week today, what would you say? Congratulations?
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/data-analysis-pitfalls/5a28e70914fd70.02061093.png">
</p><p>What if it was up 20% this time last year?
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/data-analysis-pitfalls/5a28e709992347.54502962.png">
</p><p>What if I told you it had been up 20% year on year, up until recently?
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/data-analysis-pitfalls/5a28e70a17bd95.25982670.png">
</p><p>It’s funny how a little context can completely change this. This is another problem with case studies and their evil inverted twin, traffic drop analyses.
</p><p>If we really want to understand whether to be surprised at something, positively or negatively, we need to compare it to our expectations, and then figure out what deviation from our expectations is “normal.” If this is starting to sound like statistics, that’s because it is statistics — indeed, I wrote about a statistical approach to measuring change way back in <a href="https://www.distilled.net/resources/statistical-forecasting-for-seo-analytics-and-a-free-tool/" target="_blank">2015</a>.
</p><p>If you want to be lazy, though, a good rule of thumb is to zoom out, and add in those previous years. And if someone shows you data that is suspiciously zoomed in, you might want to take it with a pinch of salt.
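</p><p>If you'd rather not just eyeball it, the "compare to expectations" idea can be sketched as a crude z-score check: only call a change surprising if it deviates from the historical pattern by more than a couple of standard deviations. The weekly figures and the threshold below are hypothetical:</p>

```python
from statistics import mean, stdev

def is_surprising(history, latest, threshold=2.0):
    """Flag the latest weekly figure only if its week-on-week change deviates
    from the historical pattern by more than `threshold` standard deviations."""
    changes = [b / a - 1 for a, b in zip(history, history[1:])]
    latest_change = latest / history[-1] - 1
    z = (latest_change - mean(changes)) / stdev(changes)
    return abs(z) > threshold, latest_change, z

# Hypothetical weekly sessions for a site that routinely swings up and down ~20%
weeks = [1000, 1150, 980, 1200, 1020, 1180, 990]
surprising, change, z = is_surprising(weeks, 1190)
print(f"up {change:.0%}, z = {z:.1f}, surprising: {surprising}")
```

<p>For a site that routinely swings 15 to 20% week on week, a 20% jump scores a z of about 1, well within normal variation, and nothing to celebrate or panic about.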
</p><h2>3. Trusting our tools</h2><p>Would you make a multi-million dollar business decision based on a number that your competitor could manipulate at will? Well, chances are you do, and the number can be found in Google Analytics. I’ve covered this extensively in <a href="https://www.slideshare.net/THCapper/everything-you-didnt-know-about-google-analytics-measurefest-november-2016" target="_blank">other places</a>, but there are some major problems with most analytics platforms around:
</p><ul>
<li>How easy they are to manipulate externally</li>
<li>How arbitrarily they group hits into sessions</li>
<li>How vulnerable they are to ad blockers</li>
<li>How they perform under sampling, and how obvious they make this</li>
</ul><p>For example, did you know that, above a certain amount of traffic (~500,000 within the date range), the Google Analytics API v3 can heavily sample data whilst telling you that the data is unsampled? Neither did I, until we ran into it whilst building Distilled ODN.
</p><p>Similar problems exist with many “Search Analytics” tools. My colleague <a href="https://twitter.com/samnemzer" target="_blank">Sam Nemzer</a> has written a bunch about this — did you know that most rank tracking platforms report <a href="https://www.slideshare.net/SamNemzer/are-the-first-things-you-learnt-about-seo-still-true/36?src=clipshare">completely different rankings</a>? Or how about the fact that the keywords grouped by Google (and thus tools like SEMRush and STAT, too) are <a href="https://moz.com/blog/google-grouping-keyword-volumes-what-does-this-mean-for-seo" target="_blank">not equivalent</a>, and don’t necessarily have the volumes quoted?
</p><p>It’s important to understand the strengths and weaknesses of tools that we use, so that we can at least know when they’re directionally accurate (as in, their insights guide you in the right direction), even if not perfectly accurate. All I can really recommend here is that skilling up in SEO (or any other digital channel) necessarily means understanding the mechanics behind your measurement platforms — which is why all new starts at Distilled end up learning how to do analytics audits.
</p><p>One of the most common solutions to the root problem is combining multiple data sources, but…
</p><h2>4. Combining data sources</h2><p>There are numerous platforms out there that will “defeat (not provided)” by bringing together data from two or more of:
</p><ul>
<li>Analytics</li>
<li>Search Console</li>
<li>AdWords</li>
<li>Rank tracking</li>
</ul><p>The problems here are that, firstly, these platforms do not have equivalent definitions, and secondly, ironically, (not provided) tends to break them.
</p><p>Let’s deal with definitions first, with an example — let’s look at a landing page with a channel:
</p><ul>
<li>In Search Console, these are reported as <em>clicks</em>, and can be vulnerable to heavy, invisible sampling when multiple dimensions (e.g. keyword and page) or filters are combined.</li>
<li>In Google Analytics, these are reported using <em>last non-direct click</em>, meaning that your organic traffic includes a bunch of direct sessions, time-outs that resumed mid-session, etc. That’s without getting into dark traffic, ad blockers, etc.</li>
<li>In AdWords, most reporting uses <em>last AdWords click</em>, and conversions may be defined differently. In addition, keyword volumes are bundled, as referenced above.</li>
<li>Rank tracking is location specific, and inconsistent, as referenced above.</li>
</ul><p>Fine, though — it may not be precise, but you can at least get to some <em>directionally</em> useful data given these limitations. However, about that “(not provided)”...</p><p>Most of your landing pages get traffic from more than one keyword. It’s very likely that some of these keywords convert better than others, particularly if they are branded, meaning that even the most thorough click-through rate model isn’t going to help you. So how do you know which keywords are valuable?
</p><p>The best answer is to generalize from AdWords data for those keywords, but it’s very unlikely that you have analytics data for all those combinations of keyword and landing page. Essentially, the tools that report on this make the very bold assumption that a given page converts identically for all keywords. Some are more transparent about this than others.
</p><p>Again, this isn’t to say that those tools aren’t valuable — they just need to be understood carefully. The only way you could reliably fill in these blanks created by “not provided” would be to spend a ton on paid search to get decent volume, conversion rate, and bounce rate estimates for all your keywords, and even then, you’ve not fixed the inconsistent definitions issues.
</p><h2>Bonus peeve: Average rank</h2><p>I still see this way too often. Three questions:
</p><ol>
<li>Do you care more about losing rankings for ten very low volume queries (10 searches a month or less) than for one high volume query (millions plus)? If the answer isn’t “yes, I absolutely care more about the ten low-volume queries”, then this metric isn’t for you, and you should consider a visibility metric based on click through rate estimates.</li>
<li>When you start ranking at 100 for a keyword you didn’t rank for before, does this make you unhappy? If the answer isn’t “yes, I hate ranking for new keywords,” then this metric isn’t for you — because that will lower your average rank. You could of course treat all non-ranking keywords as position 100, as some tools allow, but is a drop of 2 average rank positions really the best way to express that 1/50 of your landing pages have been de-indexed? Again, use a visibility metric, please.</li>
<li>Do you like comparing your performance with your competitors? If the answer isn’t “no, of course not,” then this metric isn’t for you — your competitors may have more or fewer branded keywords or long-tail rankings, and these will skew the comparison. Again, use a visibility metric.</li>
</ol><h2>Conclusion</h2><p>Hopefully, you’ve found this useful. To summarize the main takeaways:
</p><ul>
<li>Critically analyse correlations & case studies by seeing if you can explain them as coincidences, as reverse causation, as joint causation, through reference to a third mutually relevant factor, or through niche applicability.</li>
<li>Don’t look at changes in traffic without looking at the context — what would you have forecasted for this period, and with what margin of error?</li>
<li>Remember that the tools we use have limitations, and do your research on how that impacts the numbers they show. “<em>How has this number been produced?”</em> is an important component in <em>“What does this number mean?”</em></li>
<li>If you end up combining data from multiple tools, remember to work out the relationship between them — treat this information as directional rather than precise.</li>
</ul><p>Let me know what data analysis fallacies bug you, <a href="https://moz.com/blog/data-analysis-pitfalls#comments">in the comments below</a>.</p><br /><p><a href="https://moz.com/moztop10">Sign up for The Moz Top 10</a>, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!</p><p>Posted by <a href=\"https://moz.com/community/users/726559\">Tom.Capper</a></p><p>Digital marketing is a proudly data-driven field. Yet, as SEOs especially, we often have such incomplete or questionable data to work with, that we end up jumping to the wrong conclusions in our attempts to substantiate our arguments or quantify our issues and opportunities.
</p><p>In this post, I’m going to outline 4 data analysis pitfalls that are endemic in our industry, and how to avoid them.
</p><h2>1. Jumping to conclusions</h2><p>Earlier this year, I conducted a ranking factor study around brand awareness, and I posted this caveat:
</p><p><em></em>
</p><blockquote><em>"...the fact that Domain Authority (or branded search volume, or anything else) is positively correlated with rankings could indicate that any or all of the following is likely:
</em>
<ul><em>
<li>Links cause sites to rank well</li>
<li>Ranking well causes sites to get links</li>
<li>Some third factor (e.g. reputation or age of site) causes sites to get both links and rankings"<br>~ <a href="https://moz.com/blog/rankings-correlation-study-domain-authority-vs-branded-search-volume" target="_blank">Me</a></li></em>
</ul>
</blockquote><p><a href="https://moz.com/blog/rankings-correlation-study-domain-authority-vs-branded-search-volume"></a>
</p><p>However, I want to go into this in a bit more depth and give you a framework for analyzing these yourself, because it still comes up a lot. Take, for example, this <a href="https://www.stonetemple.com/link-as-a-ranking-factor/" target="_blank">recent study by Stone Temple</a>, which you may have seen in the Moz Top 10 or Rand’s <a href="https://twitter.com/randfish/status/907995200986869761" target="_blank">tweets</a>, or this <a href="https://webmarketingschool.com/semrush-direct-traffic-ranking-factor-claim/" target="_blank">excellent article</a> discussing SEMRush’s recent direct traffic findings. To be absolutely clear, I’m not criticizing either of the studies, but I do want to draw attention to how we might interpret them.
</p><p>Firstly, we do tend to suffer a little confirmation bias — we’re all too eager to call out the cliché “correlation vs. causation” distinction when we see successful sites that are keyword-stuffed, but all too approving when we see studies doing the same with something we think is or was effective, like links.
</p><p>Secondly, we fail to critically analyze the potential mechanisms. The options aren’t just causation or coincidence.
</p><p>Before you jump to a conclusion based on a correlation, you’re obliged to consider various possibilities:
</p><ul>
<li>Complete coincidence</li>
<li>Reverse causation</li>
<li>Joint causation</li>
<li>Linearity</li>
<li>Broad applicability</li>
</ul><p>If those don’t make any sense, then that’s fair enough — they’re jargon. Let’s go through an example:
</p><p class="full-width"><img src="http://d2v4zi8pl64nxt.cloudfront.net/data-analysis-pitfalls/5a28e7089deb53.58500794.png">
</p><p>Before I warn you not to eat cheese because you may die in your bedsheets, I’m obliged to check that it isn’t any of the following:
</p><ul>
<li><strong>Complete coincidence -</strong> Is it possible that so many datasets were compared, that some were bound to be similar? Why, that’s exactly what <a href="http://www.tylervigen.com/spurious-correlations" target="_blank">Tyler Vigen</a> did! <em><strong>Yes, this is possible.</strong></em></li>
<li><strong>Reverse causation -</strong> Is it possible that we have this the wrong way around? For example, perhaps your relatives, in mourning for your bedsheet-related death, eat cheese in large quantities to comfort themselves? This seems pretty unlikely, so let’s give it a pass. <em><strong>No, this is very unlikely.</strong></em></li>
<li><strong>Joint causation -</strong> Is it possible that some third factor is behind both of these? Maybe increasing affluence makes you healthier (so you don’t die of things like malnutrition), and also causes you to eat more cheese? This seems very plausible. <em><strong>Yes, this is possible.</strong></em></li>
<li><strong>Linearity -</strong> Are we comparing two linear trends? A linear trend is a steady rate of growth or decline. Any two statistics which are both roughly linear over time will be very well correlated. In the graph above, both our statistics are trending linearly upwards. If the graph was drawn with different scales, they might look completely unrelated, like <a href="https://imgur.com/muS5w9b" target="_blank">this</a>, but because they both have a steady rate, they’d still be very well correlated. <em><strong>Yes, this looks likely.</strong></em></li>
<li><strong>Broad applicability -</strong> Is it possible that this relationship only exists in certain niche scenarios, or, at least, not in my niche scenario? Perhaps, for example, cheese does this to some people, and that’s been enough to create this correlation, because there are so few bedsheet-tangling fatalities otherwise? <em><strong>Yes, this seems possible.</strong></em></li>
</ul><p>So we have 4 “<em>Yes</em>” answers and one “<em>No</em>” answer from those 5 checks.
</p><p>If your example doesn’t get 5 “<em>No</em>” answers from those 5 checks, it’s a fail, and you don’t get to say that the study has established either a ranking factor or a fatal side effect of cheese consumption.
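</p><p>The linearity check is easy to demonstrate for yourself. Here's a minimal sketch in Python; the two series below are invented, but both trend steadily upward over time, which is all it takes:</p>

```python
# Two unrelated series that both happen to trend linearly upward
# over ten years (the numbers are made up for illustration).
cheese_kg_per_capita = [29.8, 30.1, 30.5, 30.6, 31.3, 31.7, 32.6, 32.7, 32.8, 33.1]
bedsheet_deaths = [327, 456, 509, 497, 596, 573, 661, 741, 809, 717]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(cheese_kg_per_capita, bedsheet_deaths)
print(round(r, 2))  # roughly 0.95: "strongly correlated", with no causal link
```

<p>Despite neither series having anything to do with the other, the shared steady trend alone produces a correlation most studies would call conclusive, which is exactly why the linearity check matters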
</p><p>A similar process should apply to case studies, which are another form of correlation — the correlation between you making a change, and something good (or bad!) happening. For example, ask:
</p><ul>
<li>Have I ruled out other factors (e.g. external demand, seasonality, competitors making mistakes)?</li>
<li>Did I increase traffic by doing the thing I tried to do, or did I accidentally improve some other factor at the same time?</li>
<li>Did this work because of the unique circumstance of the particular client/project?</li>
</ul><p>This is particularly challenging for SEOs, because we rarely have data of this quality, but I’d suggest an additional pair of questions to help you navigate this minefield:
</p><ul>
<li>If I were Google, would I do this?</li>
<li>If I were Google, could I do this?</li>
</ul><p>Direct traffic as a ranking factor passes the “could” test, but only barely — Google could use data from Chrome, Android, or ISPs, but it’d be sketchy. It doesn’t really pass the “would” test, though — it’d be far easier for Google to use branded search traffic, which would answer the same questions you might try to answer by comparing direct traffic levels (e.g. how popular is this website?).
</p><h2>2. Missing the context</h2><p>If I told you that my traffic was up 20% week on week today, what would you say? Congratulations?
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/data-analysis-pitfalls/5a28e70914fd70.02061093.png">
</p><p>What if it was up 20% this time last year?
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/data-analysis-pitfalls/5a28e709992347.54502962.png">
</p><p>What if I told you it had been up 20% year on year, up until recently?
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/data-analysis-pitfalls/5a28e70a17bd95.25982670.png">
</p><p>It’s funny how a little context can completely change this. This is another problem with case studies and their evil inverted twin, traffic drop analyses.
</p><p>If we really want to understand whether to be surprised at something, positively or negatively, we need to compare it to our expectations, and then figure out what deviation from our expectations is “normal.” If this is starting to sound like statistics, that’s because it is statistics — indeed, I wrote about a statistical approach to measuring change way back in <a href="https://www.distilled.net/resources/statistical-forecasting-for-seo-analytics-and-a-free-tool/" target="_blank">2015</a>.
</p><p>If you want to be lazy, though, a good rule of thumb is to zoom out, and add in those previous years. And if someone shows you data that is suspiciously zoomed in, you might want to take it with a pinch of salt.
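</p><p>If you'd rather not be entirely lazy, the "what would I have expected?" question can be roughed out in a few lines. A toy sketch (the weekly session counts are invented, and the ±2-standard-deviation band is a crude stand-in for a real forecast interval):</p>

```python
# Twelve weeks of (invented) organic sessions, oldest first.
history = [10200, 10400, 10150, 10600, 10800, 10700, 11000,
           11250, 11100, 11400, 11600, 11500]
this_week = 13800  # up ~20% week on week

# Week-on-week growth rates we've seen historically.
growth = [b / a - 1 for a, b in zip(history, history[1:])]
mean = sum(growth) / len(growth)
std = (sum((g - mean) ** 2 for g in growth) / (len(growth) - 1)) ** 0.5

expected = history[-1] * (1 + mean)  # what we'd have forecast for this week
band = history[-1] * 2 * std         # "normal" deviation from that, roughly

surprising = abs(this_week - expected) > band
print(f"expected ~{expected:.0f} +/- {band:.0f}, got {this_week}, surprising: {surprising}")
```

<p>Against a history of roughly 1% weekly growth, a 20% jump is genuinely surprising; against a history of 20% swings, it wouldn't be. That's the context a zoomed-in chart hides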
</p><h2>3. Trusting our tools</h2><p>Would you make a multi-million dollar business decision based on a number that your competitor could manipulate at will? Well, chances are you do, and the number can be found in Google Analytics. I’ve covered this extensively in <a href="https://www.slideshare.net/THCapper/everything-you-didnt-know-about-google-analytics-measurefest-november-2016" target="_blank">other places</a>, but there are some major problems with most analytics platforms around:
</p><ul>
<li>How easy they are to manipulate externally</li>
<li>How arbitrarily they group hits into sessions</li>
<li>How vulnerable they are to ad blockers</li>
<li>How they perform under sampling, and how obvious they make this</li>
</ul><p>For example, did you know that, above a certain amount of traffic (~500,000 sessions within the date range), the Google Analytics API v3 can heavily sample data whilst telling you that the data is unsampled? Neither did I, until we ran into it whilst building Distilled ODN.
</p><p>Similar problems exist with many “Search Analytics” tools. My colleague <a href="https://twitter.com/samnemzer" target="_blank">Sam Nemzer</a> has written a bunch about this — did you know that most rank tracking platforms report <a href="https://www.slideshare.net/SamNemzer/are-the-first-things-you-learnt-about-seo-still-true/36?src=clipshare">completely different rankings</a>? Or how about the fact that the keywords grouped by Google (and thus tools like SEMRush and STAT, too) are <a href="https://moz.com/blog/google-grouping-keyword-volumes-what-does-this-mean-for-seo" target="_blank">not equivalent</a>, and don’t necessarily have the volumes quoted?
</p><p>It’s important to understand the strengths and weaknesses of tools that we use, so that we can at least know when they’re directionally accurate (as in, their insights guide you in the right direction), even if not perfectly accurate. All I can really recommend here is that skilling up in SEO (or any other digital channel) necessarily means understanding the mechanics behind your measurement platforms — which is why all new starts at Distilled end up learning how to do analytics audits.
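</p><p>To make the sampling point concrete, here's a small simulation (this is not the GA mechanism itself, just the estimation error that any session-sampling scheme introduces):</p>

```python
import random

random.seed(42)

# 500,000 simulated sessions, ~2% of which convert.
sessions = [random.random() < 0.02 for _ in range(500_000)]
true_rate = sum(sessions) / len(sessions)

# A tool reporting from a 1% sample and scaling back up.
sample = random.sample(sessions, 5_000)
sampled_rate = sum(sample) / len(sample)

print(f"full data: {true_rate:.4f}, 1% sample: {sampled_rate:.4f}")
```

<p>On aggregate numbers the sample usually lands close, but slice that same 1% sample by landing page or campaign and the per-segment estimates get noisy fast, which is precisely when an "unsampled" label is most misleading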
</p><p>One of the most common solutions to the root problem is combining multiple data sources, but…
</p><h2>4. Combining data sources</h2><p>There are numerous platforms out there that will “defeat (not provided)” by bringing together data from two or more of:
</p><ul>
<li>Analytics</li>
<li>Search Console</li>
<li>AdWords</li>
<li>Rank tracking</li>
</ul><p>The problems here are that, firstly, these platforms do not have equivalent definitions, and secondly, ironically, (not provided) tends to break them.
</p><p>Let’s deal with definitions first, with an example — say, organic search visits landing on a given page, and how each data source would report them:
</p><ul>
<li>In Search Console, these are reported as <em>clicks</em>, and can be vulnerable to heavy, invisible sampling when multiple dimensions (e.g. keyword and page) or filters are combined.</li>
<li>In Google Analytics, these are reported using <em>last non-direct click</em>, meaning that your organic traffic includes a bunch of direct sessions, time-outs that resumed mid-session, etc. That’s without getting into dark traffic, ad blockers, etc.</li>
<li>In AdWords, most reporting uses <em>last AdWords click</em>, and conversions may be defined differently. In addition, keyword volumes are bundled, as referenced above.</li>
<li>Rank tracking is location specific, and inconsistent, as referenced above.</li>
</ul><p>Fine, though — it may not be precise, but you can at least get to some <em>directionally</em> useful data given these limitations. However, about that “(not provided)”...</p><p>Most of your landing pages get traffic from more than one keyword. It’s very likely that some of these keywords convert better than others, particularly if they are branded, meaning that even the most thorough click-through rate model isn’t going to help you. So how do you know which keywords are valuable?
</p><p>The best answer is to generalize from AdWords data for those keywords, but it’s very unlikely that you have analytics data for all those combinations of keyword and landing page. Essentially, the tools that report on this make the very bold assumption that a given page converts identically for all keywords. Some are more transparent about this than others.
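</p><p>To see how bold that assumption is, here's a hypothetical sketch of what such a tool effectively does under the hood (the keywords and numbers are mine, not any vendor's):</p>

```python
# Search Console gives clicks per keyword for a landing page; analytics
# gives total conversions for the page, with no keyword breakdown.
clicks_by_keyword = {"brand widgets": 800, "cheap widgets": 150, "widget reviews": 50}
page_conversions = 40  # all keywords combined

# The tool's only option: spread conversions proportionally to clicks,
# i.e. assume every keyword converts at the same rate.
total_clicks = sum(clicks_by_keyword.values())
estimated = {kw: page_conversions * clicks / total_clicks
             for kw, clicks in clicks_by_keyword.items()}

print(estimated)  # every keyword credited an identical 4% conversion rate
```

<p>If branded clicks actually convert at 6% and the rest at 1%, this table is badly wrong, and nothing in the combined data can tell you so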
</p><p>Again, this isn’t to say that those tools aren’t valuable — they just need to be understood carefully. The only way you could reliably fill in these blanks created by “not provided” would be to spend a ton on paid search to get decent volume, conversion rate, and bounce rate estimates for all your keywords, and even then, you’ve not fixed the inconsistent definitions issues.
</p><h2>Bonus peeve: Average rank</h2><p>I still see this way too often. Three questions:
</p><ol>
<li>Do you care more about losing rankings for ten very low volume queries (10 searches a month or less) than for one high volume query (millions plus)? If the answer isn’t “yes, I absolutely care more about the ten low-volume queries”, then this metric isn’t for you, and you should consider a visibility metric based on click through rate estimates.</li>
<li>When you start ranking at 100 for a keyword you didn’t rank for before, does this make you unhappy? If the answer isn’t “yes, I hate ranking for new keywords,” then this metric isn’t for you — because that will lower your average rank. You could of course treat all non-ranking keywords as position 100, as some tools allow, but is a drop of 2 average rank positions really the best way to express that 1/50 of your landing pages have been de-indexed? Again, use a visibility metric, please.</li>
<li>Do you like comparing your performance with your competitors? If the answer isn’t “no, of course not,” then this metric isn’t for you — your competitors may have more or fewer branded keywords or long-tail rankings, and these will skew the comparison. Again, use a visibility metric.</li>
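</ol>
<p>The second point is easy to see in miniature. A sketch with invented rankings, volumes, and a very rough click-through curve:</p>

```python
# Rankings before and after gaining one brand-new keyword at position 100.
before = {"widgets": 3, "blue widgets": 5}
after = {"widgets": 3, "blue widgets": 5, "widget kits": 100}

volumes = {"widgets": 10_000, "blue widgets": 2_000, "widget kits": 500}

def avg_rank(ranks):
    return sum(ranks.values()) / len(ranks)

def ctr(pos):
    """Very rough, made-up click-through estimates by position."""
    curve = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}
    return curve.get(pos, 0.01 if pos <= 10 else 0.0)

def visibility(ranks):
    """Estimated clicks: search volume weighted by CTR at each position."""
    return sum(volumes[kw] * ctr(pos) for kw, pos in ranks.items())

print(avg_rank(before), avg_rank(after))      # 4.0 vs 36.0: looks like a disaster
print(visibility(before), visibility(after))  # identical expected clicks
```

<p>Average rank collapses from 4.0 to 36.0 on what is actually neutral-to-good news, while the visibility estimate correctly stays put.</p>
<ol>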
</ol><h2>Conclusion</h2><p>Hopefully, you’ve found this useful. To summarize the main takeaways:
</p><ul>
<li>Critically analyse correlations & case studies by seeing if you can explain them as coincidences, as reverse causation, as joint causation, through reference to a third mutually relevant factor, or through niche applicability.</li>
<li>Don’t look at changes in traffic without looking at the context — what would you have forecasted for this period, and with what margin of error?</li>
<li>Remember that the tools we use have limitations, and do your research on how that impacts the numbers they show. “<em>How has this number been produced?”</em> is an important component in <em>“What does this number mean?”</em></li>
<li>If you end up combining data from multiple tools, remember to work out the relationship between them — treat this information as directional rather than precise.</li>
</ul><p>Let me know what data analysis fallacies bug you, <a href="https://moz.com/blog/data-analysis-pitfalls#comments">in the comments below</a>.</p><br /><p><a href="https://moz.com/moztop10">Sign up for The Moz Top 10</a>, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!</p>
Thu, 07 Dec 2017 00:02:00 GMT
https://moz.com/blog/data-analysis-pitfalls
Tom.Capper
2017-12-07T00:02:00Z
Our Readership: Results of the 2017 Moz Blog Reader Survey
http://feedproxy.google.com/~r/MozBlog/~3/nYj7I0pIvrc/2017-moz-blog-reader-survey-results
https://moz.com/blog/2017-moz-blog-reader-survey-results<p>Posted by <a href="https://moz.com/community/users/544762">Trevor-Klein</a></p><p>This blog is for all of you. In a notoriously opaque and confusing industry that's prone to frequent changes, we see immense benefit in helping all of you stay on top of the game. To that end, every couple of years we ask for a report card of sorts, hoping to get a sense not only for how your jobs have changed, but also for how we can improve.
</p><p>About a month ago, we asked you all to take a reader survey, and nearly 600 of you generously gave your time. The results, summarized in this post, were immensely helpful, and were a reminder of how lucky we are to have such a thoughtful community of readers.
</p><p>I've offered as much data as I can, and when possible, I've also trended responses against the same questions from our 2015 and 2013 surveys, so you can get a sense for how things have changed. There's a lot here, so buckle up. =)
</p><hr><h2>Who our readers are</h2><p>To put all of this great feedback into context, it helps to know a bit about who the people in our audience actually are. Sure, we can glean a bit of information from our site analytics, and can make some educated guesses, but neither of those can answer the questions we're most curious about. What's your day-to-day work like, and how much SEO does it really involve? Would you consider yourself more of an SEO beginner, or more of an SEO wizard? And, most importantly, what challenges are you facing in your work these days? The answers give us a fuller understanding of where the rest of your feedback comes from.
</p><h3>What is your job title?</h3><p>Readers of the Moz Blog have a multitude of backgrounds, from CEOs of agencies to in-the-weeds SEOs of all skill levels. One of the most common themes we see, though, is a skew toward the more general marketing industry. I know that word clouds have their faults, but it's still a relatively interesting way to gauge how often things appear in a list like this, so here's what we've got this year:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499af0b5b9.47945419.jpg">
</p><p>Of note, similar to our results in 2015, the word "marketing" is the most common result, followed by the word "SEO" and the word "manager."
</p><p>Here's a look at the top 20 terms used in this year's results, along with the percentage of responses containing each term. You'll also see those same percentages from the 2015 and 2013 surveys to give you an idea of what's changed -- the darker the bar, the more recent the survey:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499be931f5.90994273.png">
</p><p>The thing that surprises me the most about this list is how little it's changed in the four-plus years since we first asked the question (a theme you'll see recur in the rest of these results). In fact, the top 20 terms this year are nearly identical to the top 20 terms four years ago, with only a few things sliding up or down a few spots.
</p><h3>What percentage of your day-to-day work involves SEO?</h3><p>We hear a lot about people wearing multiple hats for their companies. One person who took this survey noted that even at a 9,000-person company, they were the only one who worked on SEO, and it was only about 80% of their job. That idea is backed up by this data, which shows an incredibly broad range of responses. More than 10% of respondents barely touch SEO, and not even 14% say they're full-time:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499c57faf0.02677389.png">
</p><p>One interesting thing to note is the sharp decline in the number of people who say that SEO isn't a part of their day-to-day at all. That shift is likely a result of our shift back toward SEO, away from related areas like social media and content marketing. I think we had attracted a significant number of community managers and content specialists who didn't work in SEO, and we're now seeing the pendulum swing the other direction.
</p><h3>On a scale of 1-5, how advanced would you say your SEO knowledge is?</h3><p>The similarity between this year's graph for this question and those from 2015 and 2013 is simply astonishing:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499cbff7e7.86135897.png">
</p><p>There's been a slight drop in folks who say they're at an expert level, and a slight increase in folks who have some background, but are relative beginners. But only slight. The interesting thing is, our blog traffic has increased significantly over these four years, so the newer members of our audience bear a striking resemblance to those of you who've been around for quite some time. In a sense, that's reassuring -- it paints a clear picture for us as we continue refining our content.
</p><h3>Do you work in-house, or at an agency/consultancy?</h3><p>Here's another window into just how little our audience has changed in the last couple of years:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499d2a5c08.69347322.png">
</p><p>A slight majority of our readers still work in-house for their own companies, and about a third still work on SEO for their company's clients.
</p><p>Interestingly, though, respondents who work for clients deal with many of the same issues as those who work in-house -- especially in trying to convey the value of their work in SEO. They're just trying to send that message to external clients instead of internal stakeholders. More details on that come from our next question:
</p><h3>What are some of the biggest challenges you face in your work today?</h3><p>I'm consistently amazed by the time and thought that so many of you put into answering this question, and rest assured, your feedback will be presented to several teams around Moz, both on the marketing and the product sides. For this question, I organized each and every response into recurring themes, tallying each time those themes were mentioned. Here are all the themes that were mentioned 10 or more times:
</p><table class="table-basic table-row-hover">
<thead>
<tr>
<th>Challenge
</th>
<th># of mentions
</th>
</tr>
</thead>
<tbody>
<tr>
<td>My clients / colleagues / bosses don't understand the value of SEO
</td>
<td>59
</td>
</tr>
<tr>
<td>The industry and tactics are constantly changing; algo updates
</td>
<td>45
</td>
</tr>
<tr>
<td>Time constraints
</td>
<td>44
</td>
</tr>
<tr>
<td>Link building
</td>
<td>35
</td>
</tr>
<tr>
<td>My clients / colleagues / bosses don't understand how SEO works
</td>
<td>29
</td>
</tr>
<tr>
<td>Content (strategy / creation / marketing)
</td>
<td>25
</td>
</tr>
<tr>
<td>Resource constraints
</td>
<td>23
</td>
</tr>
<tr>
<td>It's difficult to prove ROI
</td>
<td>18
</td>
</tr>
<tr>
<td>Budget constraints
</td>
<td>17
</td>
</tr>
<tr>
<td>It's a difficult industry in which to learn tools and techniques
</td>
<td>16
</td>
</tr>
<tr>
<td>I regularly need to educate my colleagues / employees
</td>
<td>16
</td>
</tr>
<tr>
<td>It's difficult to prioritize my work
</td>
<td>16
</td>
</tr>
<tr>
<td>My clients either don't have or won't offer sufficient budget / effort
</td>
<td>15
</td>
</tr>
<tr>
<td>Effective reporting
</td>
<td>15
</td>
</tr>
<tr>
<td>Bureaucracy, red tape, other company problems
</td>
<td>11
</td>
</tr>
<tr>
<td>It's difficult to compete with other companies
</td>
<td>11
</td>
</tr>
<tr>
<td>I'm required to wear multiple hats
</td>
<td>11
</td>
</tr>
</tbody>
</table><p>More than anything else, it's patently obvious that one of the greatest difficulties faced by any SEO is explaining it to other people in a way that demonstrates its value while setting appropriate expectations for results. Whether it's your clients, your boss, or your peers that you're trying to convince, it isn't an easy case to make, especially when it's so difficult to show what kind of return a company can see from an investment in SEO.
</p><p>We also saw tons of frustrated responses about how the industry is constantly changing, and it takes too much of your already-constrained time just to stay on top of those changes.
</p><p>In terms of tactics, link building easily tops the list of challenges. That makes sense, as it's the piece of SEO that relies most heavily on the cooperation of other human beings (and humans are often tricky beings to figure out). =)
</p><p>Content marketing -- both the creation/copywriting side as well as the strategy side -- is still a challenge for many folks in the industry, though fewer people mentioned it this year than in 2015, so I think we're all starting to get used to how those skills overlap with the more traditional aspects of SEO.
</p><hr><h2>How our readers read</h2><p>With all that context in mind, we started to dig into your preferences in terms of formats, frequency, and subject matter on the blog.</p><h3>How often do you read posts on the Moz Blog?</h3><p>This is the one set of responses that caused a bit of concern. We've seen a steady decrease in the number of people who say they read every day, a slight decrease in the number of people who say they read multiple times each week, and a dramatic increase in the number of people who say they read once a week.
</p><p>The 2015 decrease came after an expansion in the scope of subjects we covered on the blog -- as we branched away from just SEO, we published more posts about social media, email, and other aspects of digital marketing. We knew that not all of those subjects were relevant for everyone, so we expected a dip in frequency of readership.
</p><p>This year, though, we've attempted to refocus on SEO, and might have expected a bit of a rebound. That didn't happen:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499d8b7175.90795708.png">
</p><p>There are two other factors at play, here. For one thing, we no longer publish a post every single weekday. After our <a href="https://moz.com/blog/publishing-volume-experiment" target="_blank">publishing volume experiment</a> in 2015, we realized it was safe (even beneficial) to emphasize quality over quantity, so if we don't feel like a post turned out the way we hoped, we don't publish it until we've had a chance to improve it. That means we're down to about four posts per week. We've also made a concerted effort to publish more posts about local SEO, as that's relevant to our software and an increasingly important part of the work of folks in our industry.<br>
</p><p>It could also be a question of time -- we've already covered how little time everyone in our industry has, and with that problem continuing, there may just be less time to read blog posts.
</p><p>If anyone has any additional insight into why they read less often than they once did, please let us know in the comments below!
</p><h3>On which types of devices do you prefer to read blog posts?</h3><p>We were surprised by the responses to this answer in 2013, and they've only gotten more extreme:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499dde8812.84201607.png">
</p><p>Nearly everyone prefers to read blog posts on a full computer. Only about 15% of folks add their phones into the equation, and the number of people in all the other buckets is extremely small. In 2013, our blog didn't have a responsive design, and was quite difficult to read on mobile devices. We thought that might have had something to do with people's responses -- maybe they were just <em>used to</em> reading our blog on larger screens. The trend in 2015 and this year, though, proves that's not the case. People just prefer reading posts on their computers, plain and simple.
</p><h3>Which other site(s), if any, do you regularly visit for information or education on SEO?</h3><p>This was a new question for this year. We have our own favorite sites, of course, but we had no idea how the majority of folks would respond to this question. As it turns out, there was quite a broad range of responses listing sites that take very different approaches:
</p><table class="table-basic table-row-hover">
<thead>
<tr>
<th>Site
</th>
<th># responses
</th>
</tr>
</thead>
<tbody>
<tr>
<td>Search Engine Land
</td>
<td>184
</td>
</tr>
<tr>
<td>Search Engine Journal
</td>
<td>89
</td>
</tr>
<tr>
<td>Search Engine Roundtable
</td>
<td>74
</td>
</tr>
<tr>
<td>SEMrush
</td>
<td>51
</td>
</tr>
<tr>
<td>Ahrefs
</td>
<td>50
</td>
</tr>
<tr>
<td>Search Engine Watch
</td>
<td>41
</td>
</tr>
<tr>
<td>Quick Sprout / Neil Patel
</td>
<td>35
</td>
</tr>
<tr>
<td>HubSpot
</td>
<td>33
</td>
</tr>
<tr>
<td>Backlinko
</td>
<td>31
</td>
</tr>
<tr>
<td>Google Blogs
</td>
<td>29
</td>
</tr>
<tr>
<td>The SEM Post
</td>
<td>21
</td>
</tr>
<tr>
<td>Kissmetrics
</td>
<td>17
</td>
</tr>
<tr>
<td>Yoast
</td>
<td>16
</td>
</tr>
<tr>
<td>Distilled
</td>
<td>13
</td>
</tr>
<tr>
<td>SEO by the Sea
</td>
<td>13
</td>
</tr>
</tbody>
</table><p>I suppose it's no surprise that the most prolific sites sit at the top. They've always got something new, even if the stories don't often go into much depth. We've tended to steer our own posts toward longer-form, in-depth pieces, and I think it's safe to say (based on these responses and some answers to questions below) that it'd be beneficial for us to include some shorter stories, too. In other words, depth shouldn't necessarily be a requirement for a post to be published on the Moz Blog. We may start experimenting with a more "short and sweet" approach to some posts.
</p><hr><h2>What our readers think of the blog</h2><p>Here's where we get into more specific feedback about the Moz Blog, including whether it's relevant, how easy it is for you to consume, and more.</p><h3>What percentage of the posts on the Moz Blog would you say are relevant to you and your work?</h3><p>Overall, I'm pretty happy with the results here, as SEO is a broad enough industry (and we've got a broad enough audience) that there's simply no way we're going to hit the sweet spot for everyone with every post. But those numbers toward the bottom of the chart are low enough that I feel confident we're doing pretty well in terms of topic relevance.</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499e498bd5.48952303.png">
</p><h3>Do you feel the Moz Blog posts are generally too basic, too advanced, or about right?</h3><p>Responses to this question have made me smile every time I see them. This is clearly one thing we're getting about as right as we could expect to. We're even seeing a slight balancing of the "too basic" and "too advanced" columns over time, which is great:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499eb82551.03424441.png">
</p><p>We also asked the people who told us that posts were "too basic" or "too advanced" <em>to what extent</em> they felt that way, using a scale from 1-5 (1 being "just a little bit too basic/advanced" and 5 being "way too basic/advanced"). The responses tell us that the people who feel posts are too advanced feel <em>more strongly</em> about that opinion than the people who feel posts are too basic:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499f1f77b7.17557852.png">
</p><p>This makes some sense, I think. If you're just starting out in SEO, which many of our readers are, some of the posts on this blog are likely to go straight over your head. That could be frustrating. If you're an SEO expert, though, you probably aren't <em>frustrated</em> by posts you see as too basic for you -- you just skip past them and move on with your day.
</p><p>This does make me think, though, that we might benefit from offering a dedicated section of the site for folks who are just starting out -- more than just the Beginner's Guide. That's actually something that was specifically requested by one respondent this year.
</p><h3>In general, what do you think about the length of Moz Blog posts?</h3><p>While it definitely seems like we're doing pretty well in this regard, I'd also say we've got some room to tighten things up a bit, especially in light of the lack of time so many of you mentioned:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499f9195e1.22264838.png">
</p><p>There were quite a few comments specifically asking for "short and sweet" posts from time to time -- offering up useful tips or news in a format that didn't expound on details because it didn't have to. I think sprinkling some of those types of posts in with the longer-form posts we have so often would be beneficial.
</p><h3>Do you ever comment on Moz Blog posts?</h3><p>This was another new question this year. Although many sites are removing comment sections from their blogs, we've always believed in their value. Sometimes the discussions we see in comments end up being the most helpful part of the posts, and we value our community too much to keep that from happening. So, we were happy to see that a full quarter of respondents have participated in comments:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499feae794.57325603.png">
</p><p>We also asked for a bit of info about <em>why</em> you either do or don't comment on posts. The top reasons why you do were pretty predictable -- to ask a clarifying question related to the post, or to offer up your own perspective on the topic at hand. The #3 reason was interesting -- 18 people mentioned that they like to comment in order to thank the author for their hard work. This is a great sentiment, and as someone who's published several posts on this blog, I can say for a fact that it <em>does</em> feel pretty great. At the same time, those comments are really only written for one person -- the author -- and are a bit problematic from our perspective, because they add noise around the more substantial conversations, which are what we like to see most.
</p><p>I think the solution is going to lie in a new UI element that allows readers to note their appreciation to the authors without leaving one of the oft-maligned "Great post!" comments. There's got to be a happy medium there, and I think it's worth finding.
</p><p>The reasons people gave for <em>not</em> commenting were even more interesting. A bunch of people mentioned the need to log in (sorry, folks -- if we didn't require that, we'd spend half our day removing spam!). The most common response, though, involved a lack of confidence. Whether it was worded along the lines of "I'm an introvert" or along the lines of "I just don't have a lot of expertise," there were quite a few people who worried about how their comments would be received.
</p><p>I want to take this chance to encourage those of you who feel that way to take the step, and ask questions about points you find confusing. At the very least, I can guarantee you aren't the only ones, and others like you will appreciate your initiative. One of the best ways to develop your expertise is to get comfortable asking questions. We all work in a really confusing industry, and the Moz Blog is all about providing a place to help each other out.
</p><h3>What, if anything, would you like to see different about the Moz Blog?</h3><p>As usual, the responses to this question were chock full of great suggestions, and again, we <em>so</em> appreciate the amount of time you all spent providing really thoughtful feedback.
</p><p>One pattern I saw was requests for more empirical data -- hard evidence that things should be done a certain way, whether through case studies or other formats. Another pattern was requests for step-by-step walkthroughs. That makes a lot of sense for an industry of folks who are strapped for time: Make things as clear-cut as possible, and where we can, offer a linear path you can walk down instead of asking you to holistically understand the subject matter, then figure that out on your own. (That's actually something we're hoping to do with our entire Learning Center: Make it easier to figure out where to start, and where to continue after that, instead of putting everything into buckets and asking you all to figure it out.)
</p><p>Whiteboard Friday remains a perennial favorite, and we were surprised to see more requests for <em>more</em> posts about our own tools than we had requests for <em>fewer</em> posts about our own tools. (We've been wary of that in the past, as we wanted to make sure we never crossed from "helpful" into "salesy," something we'll still focus on even if we do add another tool-based post here and there.)
</p><p>We expected a bit of feedback about the format of the emails -- we're absolutely working on that! -- but didn't expect to see so many folks requesting that we bring back YouMoz. That's something that's been in the backs of our minds, and while it may not take the same form it did before, we do plan on finding new ways to encourage the community to contribute content, and hope to have something up and running early in 2018.
</p><table class="table-basic table-row-hover">
<thead>
<tr>
<th>Request
</th>
<th># responses
</th>
</tr>
</thead>
<tbody>
<tr>
<td>More case studies
</td>
<td>26
</td>
</tr>
<tr>
<td>More Whiteboard Friday (or other videos)
</td>
<td>25
</td>
</tr>
<tr>
<td>More long-form step-by-step training/guides
</td>
<td>18
</td>
</tr>
<tr>
<td>Clearer steps to follow in posts; how-tos
</td>
<td>11
</td>
</tr>
<tr>
<td>Bring back UGC / YouMoz
</td>
<td>9
</td>
</tr>
<tr>
<td>More from Rand
</td>
<td>9
</td>
</tr>
<tr>
<td>Improve formatting of the emails
</td>
<td>9
</td>
</tr>
<tr>
<td>Higher-level, less-technical posts
</td>
<td>8
</td>
</tr>
<tr>
<td>More authors
</td>
<td>7
</td>
</tr>
<tr>
<td>More news (algorithm updates, e.g.)
</td>
<td>7
</td>
</tr>
<tr>
<td>Shorter posts, "quick wins"
</td>
<td>7
</td>
</tr>
<tr>
<td>Quizzes, polls, or other engagement opportunities
</td>
<td>6
</td>
</tr>
<tr>
<td>Broader range of topics (engagement, CRO, etc.)
</td>
<td>6
</td>
</tr>
<tr>
<td>More about Moz tools
</td>
<td>5
</td>
</tr>
<tr>
<td>More data-driven, less opinion-based
</td>
<td>5
</td>
</tr>
</tbody>
</table><hr><h2>What our readers want to see</h2><p>This section is a bit more future-facing: where earlier questions asked about how things have been, these ask about what you'd like to see going forward.</p><h3>Which of the following topics would you like to learn more about?</h3><p>There were very, very few surprises in this list. Lots of interest in on-page SEO and link building, as well as other core tactical areas of SEO. Content, branding, and social media all took dips -- that makes sense, given that we don't usually post about those things anymore, and we've no doubt lost some audience members who were more interested in them as a result. Interestingly, mobile took a sizable dip, too. I'd be really curious to know why that is. My best guess is that with Google's mobile-first indexing and with responsive designs having become so commonplace, there isn't as much of a need as there once was to think of mobile as its own separate discipline. Also of note: When we did this survey in 2015, Google had recently rolled out its "<a href="https://moz.com/blog/9-things-about-googles-mobile-friendly-update" target="_blank">Mobile-Friendly Update</a>," not-so-affectionately referred to by many in the industry as Mobilegeddon. So... it was on our minds. =)
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a2649a05b78f9.08617247.png">
</p><h3>Which of the following types of posts would you most like to see on the Moz Blog?</h3><p>This is a great echo and validation of what we took away from the more general question about what you'd like to see different about the Blog: More tactical posts and step-by-step walkthroughs. Posts that cut to the chase and offer a clear direction forward, as opposed to some of the types at the bottom of this list, which offer more opinions and cerebral explorations:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a2649a0bf1ed5.97491501.png">
</p><hr><h2>What happens next?</h2><p>Now we go to work. =)
</p><p>We'll spend some time fully digesting this info, and coming up with new goals for 2018 aimed at making improvements inspired by your feedback. We'll keep you all apprised as we start moving forward.
</p><p>If you have any additional insight that strikes you in taking a look at these results, please do share it in the comments below -- we'd love to have those discussions.
</p><p>For now, we've got some initial takeaways that we're already planning to take action on.
</p><h3>Primary takeaways</h3><p>There are some relatively obvious things we can take away from these results that we're already working on:</p><ul><li>People in all businesses are finding it quite difficult to communicate the value of SEO to their clients, bosses, and colleagues. That's something we can help with, and we'll be developing materials in the near future to try and alleviate some of that particular frustration.</li><li>There's a real desire for more succinct, actionable, step-by-step walkthroughs on the Blog. We can pretty easily explore formats for posts that are off our "beaten path," and will attempt to make things easier to consume through improvements to both the content itself and its delivery. I think there's some room for more "short and sweet" mixed in with our longer norm.</li><li>The bulk of our audience does more than just SEO, despite a full 25% of them having it in their job titles, and the challenges you mentioned include a bunch of areas that are <em>related to</em>, but outside the traditional world of SEO. Since you all are clearly working on those sorts of things, we should work to highlight and facilitate the relationship between the SEO work and the non-SEO marketing work you do.</li><li>In looking through some of the other sites you all visit for information on SEO, and knowing the kinds of posts they typically publish, it's clear we've got an opportunity to publish more news. We've always dreamed of being more of a one-stop shop for SEO content, and that's good validation that we may want to head down that path.</li></ul><p>Again, thank you all <em>so much</em> for the time and effort you spent filling out this survey. Hopefully you'll notice some changes in the near (and not-so-near) future that make it clear we're really listening.</p><p>If you've got anything to add to these results -- insights, further explanations, questions for clarification, rebuttals of points, etc. -- please leave them in the comments below. 
We're looking forward to continuing the conversation. =)</p><br /><p><a href="https://moz.com/moztop10">Sign up for The Moz Top 10</a>, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!</p><p>Posted by <a href=\"https://moz.com/community/users/544762\">Trevor-Klein</a></p><p>This blog is for all of you. In a notoriously opaque and confusing industry that's prone to frequent changes, we see immense benefit in helping all of you stay on top of the game. To that end, every couple of years we ask for a report card of sorts, hoping not only to get a sense for how your jobs have changed, but also to get a sense for how we can improve.
</p><p>About a month ago, we asked you all to take a reader survey, and nearly 600 of you generously gave your time. The results, summarized in this post, were immensely helpful, and were a reminder of how lucky we are to have such a thoughtful community of readers.
</p><p>I've offered as much data as I can, and when possible, I've also trended responses against the same questions from our 2015 and 2013 surveys, so you can get a sense for how things have changed. There's a lot here, so buckle up. =)
</p><hr><h2>Who our readers are</h2><p>To put all of this great feedback into context, it helps to know a bit about who the people in our audience actually are. Sure, we can glean a bit of information from our site analytics, and can make some educated guesses, but neither of those can answer the questions we're most curious about. What's your day-to-day work like, and how much SEO does it really involve? Would you consider yourself more of an SEO beginner, or more of an SEO wizard? And, most importantly, what challenges are you facing in your work these days? The answers give us a fuller understanding of where the rest of your feedback comes from.
</p><h3>What is your job title?</h3><p>Readers of the Moz Blog have a multitude of backgrounds, from CEOs of agencies to in-the-weeds SEOs of all skill levels. One of the most common themes we see, though, is a skew toward the more general marketing industry. I know that word clouds have their faults, but it's still a relatively interesting way to gauge how often things appear in a list like this, so here's what we've got this year:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499af0b5b9.47945419.jpg">
</p><p>Of note, similar to our results in 2015, the word "marketing" is the most common result, followed by the word "SEO" and the word "manager."
</p><p>Here's a look at the top 20 terms used in this year's results, along with the percentage of responses containing each term. You'll also see those same percentages from the 2015 and 2013 surveys to give you an idea of what's changed -- the darker the bar, the more recent the survey:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499be931f5.90994273.png">
</p><p>The thing that surprises me the most about this list is how little it's changed in the four-plus years since we first asked the question (a theme you'll see recur in the rest of these results). In fact, the top 20 terms this year are nearly identical to the top 20 terms four years ago, with only a few things sliding up or down a few spots.
</p><h3>What percentage of your day-to-day work involves SEO?</h3><p>We hear a lot about people wearing multiple hats for their companies. One person who took this survey noted that even at a 9,000-person company, they were the only one who worked on SEO, and it was only about 80% of their job. That idea is backed up by this data, which shows an incredibly broad range of responses. More than 10% of respondents barely touch SEO, and not even 14% say they're full-time:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499c57faf0.02677389.png">
</p><p>One interesting thing to note is the sharp decline in the number of people who say that SEO isn't a part of their day-to-day at all. That shift is likely a result of our shift back toward SEO, away from related areas like social media and content marketing. I think we had attracted a significant number of community managers and content specialists who didn't work in SEO, and we're now seeing the pendulum swing the other direction.
</p><h3>On a scale of 1-5, how advanced would you say your SEO knowledge is?</h3><p>The similarity between this year's graph for this question and those from 2015 and 2013 is simply astonishing:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499cbff7e7.86135897.png">
</p><p>There's been a slight drop in folks who say they're at an expert level, and a slight increase in folks who have some background, but are relative beginners. But only slight. The interesting thing is, our blog traffic has increased significantly over these four years, so the newer members of our audience bear a striking resemblance to those of you who've been around for quite some time. In a sense, that's reassuring -- it paints a clear picture for us as we continue refining our content.
</p><h3>Do you work in-house, or at an agency/consultancy?</h3><p>Here's another window into just how little our audience has changed in the last couple of years:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499d2a5c08.69347322.png">
</p><p>A slight majority of our readers still work in-house for their own companies, and about a third still work on SEO for their company's clients.
</p><p>Interestingly, though, respondents who work for clients deal with many of the same issues as those who work in-house -- especially in trying to convey the value of their work in SEO. They're just trying to send that message to external clients instead of internal stakeholders. More details on that come from our next question:
</p><h3>What are some of the biggest challenges you face in your work today?</h3><p>I'm consistently amazed by the time and thought that so many of you put into answering this question, and rest assured, your feedback will be presented to several teams around Moz, both on the marketing and the product sides. For this question, I organized each and every response into recurring themes, tallying each time those themes were mentioned. Here are all the themes that were mentioned 10 or more times:
</p><table class="table-basic table-row-hover">
<thead>
<tr>
<th>Challenge
</th>
<th># of mentions
</th>
</tr>
</thead>
<tbody>
<tr>
<td>My clients / colleagues / bosses don't understand the value of SEO
</td>
<td>59
</td>
</tr>
<tr>
<td>The industry and tactics are constantly changing; algo updates
</td>
<td>45
</td>
</tr>
<tr>
<td>Time constraints
</td>
<td>44
</td>
</tr>
<tr>
<td>Link building
</td>
<td>35
</td>
</tr>
<tr>
<td>My clients / colleagues / bosses don't understand how SEO works
</td>
<td>29
</td>
</tr>
<tr>
<td>Content (strategy / creation / marketing)
</td>
<td>25
</td>
</tr>
<tr>
<td>Resource constraints
</td>
<td>23
</td>
</tr>
<tr>
<td>It's difficult to prove ROI
</td>
<td>18
</td>
</tr>
<tr>
<td>Budget constraints
</td>
<td>17
</td>
</tr>
<tr>
<td>It's a difficult industry in which to learn tools and techniques
</td>
<td>16
</td>
</tr>
<tr>
<td>I regularly need to educate my colleagues / employees
</td>
<td>16
</td>
</tr>
<tr>
<td>It's difficult to prioritize my work
</td>
<td>16
</td>
</tr>
<tr>
<td>My clients either don't have or won't offer sufficient budget / effort
</td>
<td>15
</td>
</tr>
<tr>
<td>Effective reporting
</td>
<td>15
</td>
</tr>
<tr>
<td>Bureaucracy, red tape, other company problems
</td>
<td>11
</td>
</tr>
<tr>
<td>It's difficult to compete with other companies
</td>
<td>11
</td>
</tr>
<tr>
<td>I'm required to wear multiple hats
</td>
<td>11
</td>
</tr>
</tbody>
</table><p>More than anything else, it's patently obvious that one of the greatest difficulties faced by any SEO is explaining it to other people in a way that demonstrates its value while setting appropriate expectations for results. Whether it's your clients, your boss, or your peers that you're trying to convince, it isn't an easy case to make, especially when it's so difficult to show what kind of return a company can see from an investment in SEO.
</p><p>We also saw tons of frustrated responses about how the industry is constantly changing, and it takes too much of your already-constrained time just to stay on top of those changes.
</p><p>In terms of tactics, link building easily tops the list of challenges. That makes sense, as it's the piece of SEO that relies most heavily on the cooperation of other human beings (and humans are often tricky beings to figure out). =)
</p><p>Content marketing -- both the creation/copywriting side as well as the strategy side -- is still a challenge for many folks in the industry, though fewer people mentioned it this year as mentioned it in 2015, so I think we're all starting to get used to how those skills overlap with the more traditional aspects of SEO.
</p><hr><h2>How our readers read</h2><p>With all that context in mind, we started to dig into your preferences in terms of formats, frequency, and subject matter on the blog.</p><h3>How often do you read posts on the Moz Blog?</h3><p>This is the one set of responses that caused a bit of concern. We've seen a steady decrease in the number of people who say they read every day, a slight decrease in the number of people who say they read multiple times each week, and a dramatic increase in the number of people who say they read once a week.
</p><p>The 2015 decrease came after an expansion in the scope of subjects we covered on the blog -- as we branched away from just SEO, we published more posts about social media, email, and other aspects of digital marketing. We knew that not all of those subjects were relevant for everyone, so we expected a dip in frequency of readership.
</p><p>This year, though, we've attempted to refocus on SEO, and might have expected a bit of a rebound. That didn't happen:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499d8b7175.90795708.png">
</p><p>There are two other factors at play, here. For one thing, we no longer publish a post every single weekday. After our <a href="https://moz.com/blog/publishing-volume-experiment" target="_blank">publishing volume experiment</a> in 2015, we realized it was safe (even beneficial) to emphasize quality over quantity, so if we don't feel like a post turned out the way we hoped, we don't publish it until we've had a chance to improve it. That means we're down to about four posts per week. We've also made a concerted effort to publish more posts about local SEO, as that's relevant to our software and an increasingly important part of the work of folks in our industry.<br>
</p><p>It could also be a question of time -- we've already covered how little time everyone in our industry has, and with that problem continuing, there may just be less time to read blog posts.
</p><p>If anyone has any additional insight into why they read less often than they once did, please let us know in the comments below!
</p><h3>On which types of devices do you prefer to read blog posts?</h3><p>We were surprised by the responses to this answer in 2013, and they've only gotten more extreme:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499dde8812.84201607.png">
</p><p>Nearly everyone prefers to read blog posts on a full computer. Only about 15% of folks add their phones into the equation, and the number of people in all the other buckets is extremely small. In 2013, our blog didn't have a responsive design, and was quite difficult to read on mobile devices. We thought that might have had something to do with people's responses -- maybe they were just <em>used to</em> reading our blog on larger screens. The trend in 2015 and this year, though, proves that's not the case. People just prefer reading posts on their computers, plain and simple.
</p><h3>Which other site(s), if any, do you regularly visit for information or education on SEO?</h3><p>This was a new question for this year. We have our own favorite sites, of course, but we had no idea how the majority of folks would respond to this question. As it turns out, there was quite a broad range of responses listing sites that take very different approaches:
</p><table class="table-basic table-row-hover">
<thead>
<tr>
<th>Site
</th>
<th># responses
</th>
</tr>
</thead>
<tbody>
<tr>
<td>Search Engine Land
</td>
<td>184
</td>
</tr>
<tr>
<td>Search Engine Journal
</td>
<td>89
</td>
</tr>
<tr>
<td>Search Engine Roundtable
</td>
<td>74
</td>
</tr>
<tr>
<td>SEMrush
</td>
<td>51
</td>
</tr>
<tr>
<td>Ahrefs
</td>
<td>50
</td>
</tr>
<tr>
<td>Search Engine Watch
</td>
<td>41
</td>
</tr>
<tr>
<td>Quick Sprout / Neil Patel
</td>
<td>35
</td>
</tr>
<tr>
<td>HubSpot
</td>
<td>33
</td>
</tr>
<tr>
<td>Backlinko
</td>
<td>31
</td>
</tr>
<tr>
<td>Google Blogs
</td>
<td>29
</td>
</tr>
<tr>
<td>The SEM Post
</td>
<td>21
</td>
</tr>
<tr>
<td>Kissmetrics
</td>
<td>17
</td>
</tr>
<tr>
<td>Yoast
</td>
<td>16
</td>
</tr>
<tr>
<td>Distilled
</td>
<td>13
</td>
</tr>
<tr>
<td>SEO by the Sea
</td>
<td>13
</td>
</tr>
</tbody>
</table><p>I suppose it's no surprise that the most prolific sites sit at the top. They've always got something new, even if the stories don't often go into much depth. We've tended to steer our own posts toward longer-form, in-depth pieces, and I think it's safe to say (based on these responses and some to questions below) that it'd be beneficial for us to include some shorter stories, too. In other words, depth shouldn't necessarily be a requisite for a post to be published on the Moz Blog. We may start experimenting with a more "short and sweet" approach to some posts.
</p><hr><h2>What our readers think of the blog</h2><p>Here's where we get into more specific feedback about the Moz Blog, including whether it's relevant, how easy it is for you to consume, and more.</p><h3>What percentage of the posts on the Moz Blog would you say are relevant to you and your work?</h3><p>Overall, I'm pretty happy with the results here, as SEO is a broad enough industry (and we've got a broad enough audience) that there's simply no way we're going to hit the sweet spot for everyone with every post. But those numbers toward the bottom of the chart are low enough that I feel confident we're doing pretty well in terms of topic relevance.</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499e498bd5.48952303.png">
</p><h3>Do you feel the Moz Blog posts are generally too basic, too advanced, or about right?</h3><p>Responses to this question have made me smile every time I see them. This is clearly one thing we're getting about as right as we could expect to. We're even seeing a slight balancing of the "too basic" and "too advanced" columns over time, which is great:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499eb82551.03424441.png">
</p><p>We also asked the people who told us that posts were "too basic" or "too advanced" <em>to what extent</em> they felt that way, using a scale from 1-5 (1 being "just a little bit too basic/advanced" and 5 being "way too basic/advanced." The responses tell us that the people who feel posts are too advanced feel <em>more strongly </em><span class="redactor-invisible-space">about that opinion than the people who feel posts are too basic:</span>
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499f1f77b7.17557852.png">
</p><p>This makes some sense, I think. If you're just starting out in SEO, which many of our readers are, some of the posts on this blog are likely to go straight over your head. That could be frustrating. If you're an SEO expert, though, you probably aren't <em>frustrated</em> by posts you see as too basic for you -- you just skip past them and move on with your day.
</p><p>This does make me think, though, that we might benefit from offering a dedicated section of the site for folks who are just starting out -- more than just the Beginner's Guide. That's actually something that was specifically requested by one respondent this year.
</p><h3>In general, what do you think about the length of Moz Blog posts?</h3><p>While it definitely seems like we're doing pretty well in this regard, I'd also say we've got some room to tighten things up a bit, especially in light of the lack of time so many of you mentioned:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499f9195e1.22264838.png">
</p><p>There were quite a few comments specifically asking for "short and sweet" posts from time to time -- offering up useful tips or news in a format that didn't expound on details because it didn't have to. I think sprinkling some of those types of posts in with the longer-form posts we have so often would be beneficial.
</p><h3>Do you ever comment on Moz Blog posts?</h3><p>This was another new question this year. Even though so many sites are removing comment sections from their blogs, we've always believed in their value. Sometimes the discussions we see in comments end up being the most helpful part of the posts, and we value our community too much to keep that from happening. So, we were happy to see that a full quarter of respondents have participated in comments:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a26499feae794.57325603.png">
</p><p>We also asked for a bit of info about <em>why</em> you either do or don't comment on posts. The top reasons why you do were pretty predictable -- to ask a clarifying question related to the post, or to offer up your own perspective on the topic at hand. The #3 reason was interesting -- 18 people mentioned that they like to comment in order to thank the author for their hard work. This is a great sentiment, and as someone who's published several posts on this blog, I can say for a fact that it <em>does</em> feel pretty great. At the same time, those comments are really only written for one person -- the author -- and are a bit problematic from our perspective, because they add noise around the more substantial conversations, which are what we like to see most.
</p><p>I think the solution is going to lie in a new UI element that allows readers to show their appreciation to authors without leaving one of the oft-maligned "Great post!" comments. There's got to be a happy medium there, and I think it's worth finding.
</p><p>The reasons people gave for <em>not</em> commenting were even more interesting. A bunch of people mentioned the need to log in (sorry, folks -- if we didn't require that, we'd spend half our day removing spam!). The most common response, though, involved a lack of confidence. Whether it was worded along the lines of "I'm an introvert" or "I just don't have a lot of expertise," there were quite a few people who worried about how their comments would be received.
</p><p>I want to take this chance to encourage those of you who feel that way to take that step and ask questions about points you find confusing. At the very least, I can guarantee you aren't the only ones, and others like you will appreciate your initiative. One of the best ways to develop your expertise is to get comfortable asking questions. We all work in a really confusing industry, and the Moz Blog is all about providing a place to help each other out.
</p><h3>What, if anything, would you like to see different about the Moz Blog?</h3><p>As usual, the responses to this question were chock full of great suggestions, and again, we <em>so</em> appreciate the amount of time you all spent providing really thoughtful feedback.
</p><p>One pattern I saw was requests for more empirical data -- hard evidence that things should be done a certain way, whether through case studies or other formats. Another pattern was requests for step-by-step walkthroughs. That makes a lot of sense for an industry of folks who are strapped for time: Make things as clear-cut as possible, and where we can, offer a linear path you can walk down instead of asking you to holistically understand the subject matter, then figure that out on your own. (That's actually something we're hoping to do with our entire Learning Center: Make it easier to figure out where to start, and where to continue after that, instead of putting everything into buckets and asking you all to figure it out.)
</p><p>Whiteboard Friday remains a perennial favorite, and we were surprised to see more requests for <em>more</em> posts about our own tools than we had requests for <em>fewer</em>. (We've been wary of that in the past, as we wanted to make sure we never crossed from "helpful" into "salesy," something we'll still focus on even if we do add another tool-based post here and there.)
</p><p>We expected a bit of feedback about the format of the emails -- we're absolutely working on that! -- but didn't expect to see so many folks requesting that we bring back YouMoz. That's something that's been in the backs of our minds, and while it may not take the same form it did before, we do plan on finding new ways to encourage the community to contribute content, and hope to have something up and running early in 2018.
</p><table class="table-basic table-row-hover">
<thead>
<tr>
<th>Request
</th>
<th>#responses
</th>
</tr>
</thead>
<tbody>
<tr>
<td>More case studies
</td>
<td>26
</td>
</tr>
<tr>
<td>More Whiteboard Friday (or other videos)
</td>
<td>25
</td>
</tr>
<tr>
<td>More long-form step-by-step training/guides
</td>
<td>18
</td>
</tr>
<tr>
<td>Clearer steps to follow in posts; how-tos
</td>
<td>11
</td>
</tr>
<tr>
<td>Bring back UGC / YouMoz
</td>
<td>9
</td>
</tr>
<tr>
<td>More from Rand
</td>
<td>9
</td>
</tr>
<tr>
<td>Improve formatting of the emails
</td>
<td>9
</td>
</tr>
<tr>
<td>Higher-level, less-technical posts
</td>
<td>8
</td>
</tr>
<tr>
<td>More authors
</td>
<td>7
</td>
</tr>
<tr>
<td>More news (algorithm updates, e.g.)
</td>
<td>7
</td>
</tr>
<tr>
<td>Shorter posts, "quick wins"
</td>
<td>7
</td>
</tr>
<tr>
<td>Quizzes, polls, or other engagement opportunities
</td>
<td>6
</td>
</tr>
<tr>
<td>Broader range of topics (engagement, CRO, etc.)
</td>
<td>6
</td>
</tr>
<tr>
<td>More about Moz tools
</td>
<td>5
</td>
</tr>
<tr>
<td>More data-driven, less opinion-based
</td>
<td>5
</td>
</tr>
</tbody>
</table><hr><h2>What our readers want to see</h2><p>This section is a bit more future-facing; much of what we asked above had to do with how things have been in the past.</p><h3>Which of the following topics would you like to learn more about?</h3><p>There were very, very few surprises in this list. Lots of interest in on-page SEO and link building, as well as other core tactical areas of SEO. Content, branding, and social media all took dips -- that makes sense, given the fact that we don't usually post about those things anymore, and we've no doubt lost some audience members who were more interested in them as a result. Interestingly, mobile took a sizable dip, too. I'd be really curious to know what people think about why that is. My best guess is that with mobile-first indexing from Google and responsive designs having become so commonplace, there isn't as much of a need as there was a couple of years ago to think of mobile much <em>differently</em>. Also of note: When we did this survey in 2015, Google had recently rolled out its "<a href="https://moz.com/blog/9-things-about-googles-mobile-friendly-update" target="_blank">Mobile-Friendly Update</a>," not-so-affectionately referred to by many in the industry as Mobilegeddon. So... it was on our minds. =)
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a2649a05b78f9.08617247.png">
</p><h3>Which of the following types of posts would you most like to see on the Moz Blog?</h3><p>This is a great echo and validation of what we took away from the more general question about what you'd like to see different about the Blog: More tactical posts and step-by-step walkthroughs. Posts that cut to the chase and offer a clear direction forward, as opposed to some of the types at the bottom of this list, which offer more opinions and cerebral explorations:
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/2017-moz-blog-reader-survey-results/5a2649a0bf1ed5.97491501.png">
</p><hr><h2>What happens next?</h2><p>Now we go to work. =)
</p><p>We'll spend some time fully digesting this info, and coming up with new goals for 2018 aimed at making improvements inspired by your feedback. We'll keep you all apprised as we start moving forward.
</p><p>If you have any additional insight that strikes you in taking a look at these results, please do share it in the comments below -- we'd love to have those discussions.
</p><p>For now, we've got some initial takeaways that we're already planning to take action on.
</p><h3>Primary takeaways</h3><p>There are some relatively obvious things we can take away from these results that we're already working on:</p><ul><li>People in all businesses are finding it quite difficult to communicate the value of SEO to their clients, bosses, and colleagues. That's something we can help with, and we'll be developing materials in the near future to try and alleviate some of that particular frustration.</li><li>There's a real desire for more succinct, actionable, step-by-step walkthroughs on the Blog. We can pretty easily explore formats for posts that are off our "beaten path," and will attempt to make things easier to consume through improvements to both the content itself and its delivery. I think there's some room for more "short and sweet" mixed in with our longer norm.</li><li>The bulk of our audience does more than just SEO, despite a full 25% of them having it in their job titles, and the challenges you mentioned include a bunch of areas that are <em>related to</em>, but outside the traditional world of SEO. Since you all are clearly working on those sorts of things, we should work to highlight and facilitate the relationship between the SEO work and the non-SEO marketing work you do.</li><li>In looking through some of the other sites you all visit for information on SEO, and knowing the kinds of posts they typically publish, it's clear we've got an opportunity to publish more news. We've always dreamed of being more of a one-stop shop for SEO content, and that's good validation that we may want to head down that path.</li></ul><p>Again, thank you all <em>so much</em> for the time and effort you spent filling out this survey. Hopefully you'll notice some changes in the near (and not-so-near) future that make it clear we're really listening.</p><p>If you've got anything to add to these results -- insights, further explanations, questions for clarification, rebuttals of points, etc. -- please leave them in the comments below. 
We're looking forward to continuing the conversation. =)</p><br /><p><a href="https://moz.com/moztop10">Sign up for The Moz Top 10</a>, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!</p><img src="http://feeds.feedburner.com/~r/MozBlog/~4/nYj7I0pIvrc" height="1" width="1" alt=""/>Wed, 06 Dec 2017 00:04:00 GMThttps://moz.com/blog/2017-moz-blog-reader-survey-resultsTrevor-Klein2017-12-06T00:04:00ZHow Local SEO Fits In With What You're Already Doinghttp://feedproxy.google.com/~r/MozBlog/~3/l8DkB_v8ONM/the-local-seo-illusion
https://moz.com/blog/the-local-seo-illusion<p>Posted by <a href="https://moz.com/community/users/13017">MiriamEllis</a></p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/the-local-seo-illusion/5a24a0b2ac0909.36022991.jpg" alt="islandfinal.jpg"></p><p>You own, work for, or market a business, but you don’t think of yourself as a Local SEO. </p><p>That’s okay. The forces of history have, in fact, conspired in some weird ways to make local search seem like an island unto itself. Out there, beyond the horizon, there may be technicians puzzling out NAP, citations, owner responses, duplicate listings, store locator widgets and the like, but it doesn’t seem like they’re talking about <em>your</em> job at all. </p><p>And that’s the problem. </p><p>If I could offer you a seat in my kayak, I’d paddle us over to that misty isle, and we’d go ashore. After we’d walked around a bit, talking to the locals, it would hit you that the language barrier you’d once perceived is a mere illusion, as is the distance between you. </p><p>By sunset — whoa! Look around again. This is no island. You and the Local SEOs are all mainlanders, reaching towards identical goals of <em>customer acquisition, service, and retention</em> via an exceedingly enriched and enriching skill set. You can use it all.</p><p>Before I paddle off into the darkness, under the rising stars, I’d like to leave you a chart that plots out how Local SEO fits in with everything you’ve been doing all along.</p><h2>The roots of the divide</h2><p>Why is Local SEO often treated as separate from the rest of marketing? We can narrow this down to three contributing factors:</p><h3>1) Early separation of the local and organic algos</h3><p>Google’s early-days local product was governed by an algorithm that was much more distinct from their organic algorithm than it is today. It was once extremely common, for example, for businesses without websites to rank well locally. 
This didn’t do much to form clear bridges between the offline, organic, and local marketing worlds. But then came <a href="https://moz.com/learn/seo/google-pigeon" target="_blank">Google’s Pigeon Update</a> in 2014, which signaled Google’s stated intention of deeply tying the two algorithms together.</p><p>This should ultimately impact the way industry publications, SaaS companies, and agencies present local as an extension of organic SEO, but we’re not quite there yet. I continue to encounter examples of large companies which are doing an amazing job with their website strategies, their e-commerce solutions and their paid outreach, but which are only now taking their first steps into local listings management for their hundreds of physical locations. It’s not that they’re late to the party — it’s just that they’ve only recently begun to realize what a large party their customers are having with their brands’ location data layers on the web.</p><h3>2) Inheriting the paid vs. organic dichotomy</h3><p>Local SEO has experienced the same lack-of-adoption/awareness as organic SEO. Agencies have long fought the uphill battle against a lopsided dependence on paid advertising. This phenomenon is <a href="https://moz.com/blog/the-disconnect-in-ppc-vs-seo-spending" target="_blank">highlighted by historic stats like these</a> showing brands investing some $10 million in PPC vs. $1 million in SEO, despite <a href="https://www.slideshare.net/randfish/inside-googles-numbers-in-2017" target="_blank">studies like this one</a> which show PPC earning less than 10% of clicks in search. </p><p>My take on this is that the transition from traditional offline paid advertising to its online analog was initially easier for many brands to get their heads around. And there have been ongoing challenges in proving direct ROI from SEO in the simple terms a PPC campaign can provide. 
To this day, we’re still all seeing statistics like only <a href="https://www.bluecorona.com/blog/29-small-business-digital-marketing-statistics" target="_blank">17% of small businesses investing in SEO</a>. In many ways, the SEO conundrum has simply been inherited by every Local SEO.</p><h3>3) A lot to take in and on</h3><p>Look at the service menu of any full-service digital marketing agency and you’ll see just how far it’s had to stretch over the past couple of decades to encompass an ever-expanding range of publicity opportunities:</p><ul><li>Technical website audits</li><li>On-site optimization</li><li>Linkbuilding</li><li>Keyword research</li><li>Content dev and promotion</li><li>Brand building</li><li>Social media marketing</li><li>PPC management</li><li>UX audits</li><li>Conversion optimization</li><li>Etc.</li></ul><p>Is it any wonder that agencies feel spread a bit too thin when considering how to support yet further needs and disciplines? How do you find the bandwidth, and the experts, to be able to offer:</p><ul><li>Ongoing citation management</li><li>Local on-site SEO</li><li>Local landing page dev</li><li>Store locator SEO</li><li>Review management</li><li>Local brand building</li><li>Local link building</li><li>And abstruse forms of local Schema implementation...</li></ul><p> And while many agencies have met the challenge by forming smart, strategic partnerships with providers specializing in Local SEO solutions, the agency is still then tasked with understanding how Local fits in with everything else they’re doing, and then explaining this to clients. At the multi-location and enterprise level, even amongst the best-known brands, high-level staffers may have no idea what it is the folks in the in-house Local SEO department are actually doing, or <em>why their work matters</em>.</p><p>To tie it all together … that’s what we need to do here. 
With a shared vision of how all practitioners are working on consumer-centric outreach, we can really get somewhere. Let’s plot this out, together:</p><h2>Sharing is caring</h2><blockquote>“We see our customers as invited guests to a party, and we are the hosts. It's our job every day to make every important aspect of the customer experience a little bit better.” <br>- Jeff Bezos, Amazon</blockquote><p>Let’s imagine a sporting goods brand, established in 1979, that’s grown to 400 locations across the US while also becoming well-known for its e-commerce presence. Whether aspects of marketing are being outsourced or it’s all in-house, here is how 3 shared consumer-centric goals unify all parties.</p><p class="full-width"><img src="http://d2v4zi8pl64nxt.cloudfront.net/the-local-seo-illusion/5a24a0b32606d7.23259649.jpg" alt="sharedgoalsfinal.jpg"></p><p>As we can see from the above chart, there is definitely an overlap of techniques, particularly between SEOs and Local SEOs. Yet overall, it’s not the language or tactics, but the end game and end goals that unify all parties. <em>Viewed properly, consumers are what make all marketing a true team effort.</em></p><h2>Before I buy that kayak…</h2><p>On my commute, I hear a radio ad promoting a holiday sale at some sporting goods store, but which brand was it?</p><p>Then I turn to the Internet to research kayak brands, and I find your website’s nicely researched, written, and optimized article comparing the best models in 2017. It’s ranking #2 organically. Those Sun Dolphins look pretty good, according to your massive comparison chart.</p><p>I think about it for a couple of days and go looking again, and I see your Adwords spot advertising your 30% off sale. <em>This is the third time I’ve encountered your brand</em>. </p><p>On my day off, I’m doing a local search for your brand, which has impressed me so far. I’m ready to look at these kayaks in person. 
Thanks to the fact that you properly managed your recent move across town by updating all of your major citations, I’m finding an accurate address on your Google My Business listing. Your reviews are mighty favorable, too. They keep mentioning how knowledgeable the staff is at your location nearest me.</p><p>And that turns out to be true. At first, I’m disappointed that I don’t see any Sun Dolphins on your shelves — your website comparison chart spoke well of them. As a sales associate approaches me, I notice in-store signage above his head, featuring a text/phone hotline for complaints. I don’t really have a complaint… not yet… but it’s good to know you care.</p><p>“I’m so sorry. We just sold out of Sun Dolphins this morning. But we can have one delivered to you within 3 days. We have in-store pickup, too,” the salesperson says. “Or, maybe you’d be interested in another model with comparable features. Let me show you.”</p><p>Turns out, your staffer isn’t just helpful — his training has made him so well-versed in your product line that he’s able to match my needs to a perfect kayak for me. I end up buying an Intex on the spot. </p><p>The cashier double-checks with me that I’ve found everything satisfactory and lets me know your brand takes feedback very seriously. She says my review would be valued, and my receipt invites me to read your reviews on Google, Yelp, and Facebook… and offers a special deal for signing up for your email newsletter.</p><p>My subsequent 5-star review signals to all departments of your company that a company-wide goal was met. Over the next year, my glowing review also influences 20 of my local neighbors to choose you over a competitor. </p><p>After my first wet, cold, and exciting kayaking trip, I realize I need to invest in a better waterproof jacket for next time. Your email newsletter hits my inbox at just the right time, announcing your Fourth of July sale. 
I’m about to become a repeat customer… <a href="https://www.helpscout.net/75-customer-service-facts-quotes-statistics/" target="_blank">worth up to 10x the value of my first purchase</a>. </p><blockquote><em>“No matter how brilliant your mind or strategy, if you’re playing a solo game, you’ll always lose out to a team.” <br>- Reid Hoffman, Co-Founder of LinkedIn</em></blockquote><p>There’s a kind of magic in this adventurous mix of marketing wins. Subtract anything from the picture, <em>and you may miss out on the customer.</em> It’s been said that great teams beat with a single heart. The secret lies in seeing every marketing discipline and practitioner as part of <em>your </em>team, doing what your brand has been doing all along: working with dedication to acquire, serve and retain consumers. Whether achievement comes via citation management, conversion optimization, or a write-up in the New York Times, the end goal is identical.</p><p>It’s also long been said that the race is to the swift. Media mogul Rupert Murdoch appears to agree, stating that, in today’s world, it’s not big that beats small — it’s fast that beats slow. How quickly your brand is able to integrate all forms of on-and-offline marketing into its core strategy, leaving no team as an island, may well be what writes your future.</p><br /><p><a href="https://moz.com/moztop10">Sign up for The Moz Top 10</a>, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!</p><img src="http://feeds.feedburner.com/~r/MozBlog/~4/l8DkB_v8ONM" height="1" width="1" alt=""/>Mon, 04 Dec 2017 00:01:00 GMThttps://moz.com/blog/the-local-seo-illusionMiriamEllis2017-12-04T00:01:00ZDesigning a Page's Content Flow to Maximize SEO Opportunity - Whiteboard Fridayhttp://feedproxy.google.com/~r/MozBlog/~3/DDV0w9zu_JE/page-content-flow-to-maximize-seo
https://moz.com/blog/page-content-flow-to-maximize-seo<p>Posted by <a href="https://moz.com/community/users/63">randfish</a></p><p>Controlling and improving the flow of your on-site content can actually help your SEO. What's the best way to capitalize on the opportunity present in your page design? Rand covers the questions you need to ask (and answer) and the goals you should strive for in today's Whiteboard Friday.
</p><p class="wistia_responsive_padding" style="padding:5.25% 0 28px 0;position:relative;">
<iframe src="https://fast.wistia.net/embed/iframe/s9ad0z1eee?videoFoam=true" title="Wistia video player" allowtransparency="true" frameborder="0" scrolling="no" class="wistia_embed" name="wistia_embed" allowfullscreen="" mozallowfullscreen="" webkitallowfullscreen="" oallowfullscreen="" msallowfullscreen="" width="100%" height="100%">
</iframe>
</p><script rel="display: none;" src="https://fast.wistia.net/assets/external/E-v1.js" async=""></script><p style="text-align: center;"><a href="http://d2v4zi8pl64nxt.cloudfront.net/designing-a-page-s-content-flow-to-maximize-seo-opportunity-whiteboard-friday/5a209b654369e7.88708807.jpg" target="_blank"><img src="http://d2v4zi8pl64nxt.cloudfront.net/designing-a-page-s-content-flow-to-maximize-seo-opportunity-whiteboard-friday/5a209b654369e7.88708807.jpg" alt="Designing a page's content flow to maximize SEO opportunity" style="box-shadow: rgb(153, 153, 153) 0px 0px 10px 0px; border-radius: 20px;"></a>
</p><p style="text-align: center;" class="caption">Click on the whiteboard image above to open a high-resolution version in a new tab!
</p><iframe width="100%" height="100" scrolling="no" frameborder="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/363558659&color=%23ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&show_teaser=true"></iframe><h2>Video Transcription</h2><p>Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we're going to chat about designing a page's content flow to help with your SEO. <br><br>Now, unfortunately, somehow in the world of SEO tactics, this one has gotten left by the wayside. I think a lot of people in the SEO world are investing in things like content and solving searchers' problems and getting to the bottom of searcher intent. But unfortunately, the page design and the flow of the elements, the UI elements, the content elements that sit on a page, are discarded or left aside. That's unfortunate because it can actually make a huge difference to your SEO.
</p><h2>Q: What needs to go on this page, in what order, with what placement?</h2><p><img src="http://d1avok0lzls2w.cloudfront.net/uploads/blog/untitled-3-210844.jpg" style="box-shadow: rgb(153, 153, 153) 0px 0px 10px 0px; border-radius: 20px;">
</p><p>So if we're asking ourselves like, "Well, what's the question here?" Well, it's what needs to go on this page. I'm trying to rank for "faster home Wi-Fi." Right now, Lifehacker and a bunch of other people are ranking in these results. It gets a ton of searches. I can drive a lot of revenue for my business if I can rank there. But what needs to go on this page in what order with what placement in order for me to perform the best that I possibly can? It turns out that sometimes great content gets buried in a poor page design and poor page flow. But if we want to answer this question, we actually have to ask some other ones. We need answers to at least these three:
</p><p><strong>A. What is the searcher in this case trying to accomplish?</strong>
</p><p>When they enter "faster home Wi-Fi," what's the task that they want to get done?
</p><p><strong>B. Are there multiple intents behind this query, and which ones are most popular?</strong>
</p><p>What's the popularity of those intents in what order? We need to know that so that we can design our flow around the most common ones first and the secondary and tertiary ones next.
</p><p><strong>C. What's the business goal of ranking? What are we trying to accomplish?</strong>
</p><p>That's always going to have to be balanced out with what is the searcher trying to accomplish. Otherwise, in a lot of cases, there's no point in ranking at all. If we can't get our goals met, we should just rank for something else where we can.
</p><h2>Let's assume we've got some answers:</h2><p>Let's assume that, in this case, we have some good answers to these questions so we can proceed. So pretty simple. If I search for "faster home Wi-Fi," what I want is usually going to be...
</p><p><strong>A. Faster download speed at home.</strong>
</p><p>That's what the searcher is trying to accomplish. But there are multiple intents behind this. Sometimes the searcher is looking to do that...
</p><p><strong>B1. With their current ISP and their current equipment.</strong>
</p><p>They want to know things they can optimize that don't cause them to spend money. Can they place their router in different places? Can they change out a cable? Do they need to put it in a different room? Do they need to move their computer? Is the problem something else that's interfering with their Wi-Fi in their home that they need to turn off? Those kinds of issues.
</p><p><strong>B2. With a new ISP.</strong>
</p><p>Or can they get a new ISP? They might be looking for an ISP that can provide them with faster home internet in their area, and they want to know what's available, which is a very different intent than the first one.
</p><p><strong>B3. With current ISP but new equipment.</strong>
</p><p>Maybe they want to keep their ISP, but they're willing to upgrade to new equipment. So they're looking for the equipment they could buy to get more out of the ISP they already have. In many cases in the United States, sadly, there's only one ISP that can provide service in a given area, so they can't change ISP, but they can change out their equipment.
</p><p><strong>C. Affiliate revenue with product referrals.</strong>
</p><p>Let's assume that (C) is we know that what we're trying to accomplish is affiliate revenue from product referrals. So our business is basically we're going to send people to new routers or the Google Mesh Network home device, and we get affiliate revenue by passing folks off to those products and recommending them.
</p><h2>Now we can design a content flow.</h2><p><img src="http://d1avok0lzls2w.cloudfront.net/uploads/blog/untitled-2-107471.jpg" style="box-shadow: rgb(153, 153, 153) 0px 0px 10px 0px; border-radius: 20px;">
</p><p>Okay, fair enough. We now have enough to be able to take care of this design flow. The design flow can involve lots of things. There are a lot of things that could live on a page, everything from navigation to headline to the lead-in copy or the header image or body content, graphics, reference links, the footer, a sidebar potentially.
</p><p>The elements that go in here are not actually what we're talking about today. We can have that conversation too. I want a headline that's going to tell people that I serve all of these different intents. I want to have a lead-in that has a potential to be the featured snippet in there. I want a header image that can rank in image results and be in the featured snippet panel. I'm going to want body content that serves all of these in the order that's most popular. I want graphics and visuals that suggest to people that I've done my research and I can provably show that the results that you get with this different equipment or this different ISP will be relevant to them.
</p><p>But really, what we're talking about here is the flow that matters. The content itself, the problem is that it gets buried. What I see many times is folks will take a powerful visual or a powerful piece of content that's solving the searcher's query and they'll put it in a place on the page where it's hard to access or hard to find. So even though they've actually got great content, it is buried by the page's design.
</p><h2>5 big goals that matter.</h2><p>The goals that matter here and the ones that you should be optimizing for when you're thinking about the design of this flow are:
</p><p><strong>1. How do I solve the searcher's task quickly and enjoyably?</strong>
</p><p>So that's about user experience as well as the UI. I know that, for many people, they are going to want to see and, in fact, the result that's ranking up here on the top is Lifehacker's top 10 list for how to get your home Wi-Fi faster. They include things like upgrading your ISP, and here's a tool to see what's available in your area. They include maybe you need a better router, and here are the best ones. Maybe you need a different network or something that expands your network in your home, and here's a link out to those. So they're serving that purpose up front, up top.
</p><p><strong>2. Serve these multiple intents in the order of demand.</strong>
</p><p>So if we can intuit that most people want to stick with their ISP, but are willing to change equipment, we can serve this one first (B3). We can serve this one second (B1), and we can serve the change out my ISP third (B2), which is actually the ideal fit in this scenario for us. That helps us...
</p><p><strong>3. Optimize for the business goal without sacrificing one and two.</strong>
</p><p>I would urge you to design generally with the searcher in mind and if you can fit in the business goal, that is ideal. Otherwise, what tends to happen is the business goal comes first, the searcher comes second, and you come tenth in the results.
</p><p><strong>4. If possible, try to claim the featured snippet and the visual image that go up there.</strong>
</p><p>That means using the lead-in up at the top. It's usually the first paragraph or the first few lines of text in an ordered or unordered list, along with a header image or visual in order to capture that featured snippet. That's very powerful for search results that are still showing it.
</p><p><strong>5. Limit our bounce back to the SERP as much as possible.</strong>
</p><p>In many cases, this means limiting some of the UI or design flow elements that hamper people from solving their problems or that annoy or dissuade them. So, for example, advertising that pops up or overlays that come up before I've gotten two-thirds of the way down the page really tend to hamper efforts, really tend to increase this bounce back to the SERP (what's often called pogo-sticking), and can harm your rankings dramatically. Design flows where the content that actually solves the problem sits below an advertising or promotional block are also very limiting.
</p><p>So to the degree that we can control the design of our pages and optimize for that, we can actually take existing content that you might already have and improve its rankings without having to remake it, without needing new links, simply by improving the flow.
</p><p>I hope we'll see lots of examples of those in the comments, and we'll see you again next week for another edition of Whiteboard Friday. Take care.<br>
</p><p><a href="http://www.speechpad.com/page/video-transcription/" target="_blank">Video transcription</a> by <a href="http://www.speechpad.com/" target="_blank">Speechpad.com</a>
</p><br /><p><a href="https://moz.com/moztop10">Sign up for The Moz Top 10</a>, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!</p><img src="http://feeds.feedburner.com/~r/MozBlog/~4/DDV0w9zu_JE" height="1" width="1" alt=""/>Fri, 01 Dec 2017 00:03:00 GMThttps://moz.com/blog/page-content-flow-to-maximize-seorandfish2017-12-01T00:03:00ZThe Complete Guide to Direct Traffic in Google Analyticshttp://feedproxy.google.com/~r/MozBlog/~3/IMLT-n7SO4Y/guide-to-direct-traffic-google-analytics
https://moz.com/blog/guide-to-direct-traffic-google-analytics<p>Posted by <a href="https://moz.com/community/users/10671287">tombennet</a></p><p>When it comes to direct traffic in Analytics, there are two deeply entrenched misconceptions.
</p><p>The first is that it’s caused almost exclusively by users typing an address into their browser (or clicking on a bookmark). The second is that it’s a <em>Bad Thing</em>, not because it has any overt negative impact on your site’s performance, but rather because it’s somehow immune to further analysis. The prevailing attitude amongst digital marketers is that direct traffic is an unavoidable inconvenience; as a result, discussion of direct is typically limited to ways of attributing it to other channels, or side-stepping the issues associated with it.
</p><p>In this article, we’ll be taking a fresh look at direct traffic in modern Google Analytics. As well as exploring the myriad ways in which referrer data can be lost, we’ll look at some tools and tactics you can start using immediately to reduce levels of direct traffic in your reports. Finally, we’ll discover how advanced analysis and segmentation can unlock the mysteries of direct traffic and shed light on what might actually be your most valuable users.
</p><h2>What is direct traffic?</h2><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/guide-to-direct-traffic-google-analytics/5a1df786c3fd42.13567253.png">
</p><p>In short, Google Analytics will report a traffic source of "direct" when it has no data on how the session arrived at your website, or when the referring source has been configured to be ignored. You can think of direct as GA’s fall-back option for when its processing logic has failed to attribute a session to a particular source.
</p><p>To properly understand the causes and fixes for direct traffic, it’s important to understand exactly how GA processes traffic sources. The following flow-chart illustrates how sessions are bucketed — note that direct sits right at the end as a final "catch-all" group.
</p><p class="full-width"><a href="https://i.imgur.com/oFXDXsn.jpg" target="_blank"><img src="http://d1avok0lzls2w.cloudfront.net/uploads/blog/campaign-processing-final-422987.jpg" alt=""></a>
</p><p>Broadly speaking, and disregarding user-configured overrides, GA’s processing follows this sequence of checks:
</p><p><em>AdWords parameters &#62; Campaign overrides &#62; UTM campaign parameters &#62; Referred by a search engine &#62; Referred by another website &#62; Previous campaign within timeout period &#62; Direct</em>
</p><p>Note the penultimate processing step (<em>previous campaign within timeout</em>), which has a significant impact on the direct channel. Consider a user who discovers your site via organic search, then returns via direct a week later. Both sessions would be attributed to organic search. In fact, campaign data persists for up to six months by default. The key point here is that Google Analytics is already trying to minimize the impact of direct traffic for you.
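</p><p>To make the sequence above concrete, here is a minimal Python sketch of the attribution waterfall. The session fields (<em>gclid</em>, <em>utm_campaign</em>, <em>referrer</em>, <em>prior_campaign</em>) and channel labels are simplified stand-ins for illustration, not GA's real internals:</p>

```python
# Illustrative sketch of the source-attribution waterfall described above.
# Field and channel names are simplified stand-ins, not real GA internals.

SEARCH_ENGINES = {"www.google.com", "www.bing.com", "duckduckgo.com"}

def classify_session(session):
    """Return the channel this session would be bucketed into."""
    if session.get("gclid"):                # AdWords auto-tagging parameter
        return "paid_search"
    if session.get("utm_campaign"):         # UTM campaign parameters
        return "campaign"
    referrer = session.get("referrer")
    if referrer in SEARCH_ENGINES:          # referred by a search engine
        return "organic_search"
    if referrer:                            # referred by another website
        return "referral"
    if session.get("prior_campaign"):       # campaign within timeout period
        return session["prior_campaign"]
    return "direct"                         # the final catch-all
```

<p>Note how "direct" is only ever returned once every other check has failed, which is why it acts as GA's catch-all bucket.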
</p><h2>What causes direct traffic?</h2><p>Contrary to popular belief, there are actually many reasons why a session might be missing campaign and traffic source data. Here we will run through some of the most common.
</p><h3>1. Manual address entry and bookmarks</h3><p>The classic direct-traffic scenario, this one is largely unavoidable. If a user types a URL into their browser’s address bar or clicks on a browser bookmark, that session will appear as direct traffic.
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/guide-to-direct-traffic-google-analytics/5a1df78734a5a1.86359518.png">
</p><p>Simple as that.
</p><h3>2. HTTPS &#62; HTTP</h3><p>When a user follows a link on a secure (HTTPS) page to a non-secure (HTTP) page, no referrer data is passed, meaning the session appears as direct traffic instead of as a referral. Note that this is intended behavior. It’s part of how the secure protocol was designed, and it does not affect other scenarios: HTTP to HTTP, HTTPS to HTTPS, and even HTTP to HTTPS all pass referrer data.
</p><p>So, if your referral traffic has tanked but direct has spiked, it could be that one of your major referrers has migrated to HTTPS. The inverse is also true: If you’ve migrated to HTTPS and are linking to HTTP websites, the traffic you’re driving to them will appear in <em>their</em> Analytics as direct.
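</p><p>The protocol rules above can be captured in a few lines. The following Python helper is a hypothetical illustration of the default browser behavior (with no referrer policy set); it is not how GA itself works, just the rule that determines whether GA ever sees a referrer:</p>

```python
from urllib.parse import urlparse

def referrer_passed(from_url, to_url):
    """Default browser behavior with no referrer policy set: the referrer
    is dropped only when navigating from a secure page to a non-secure one."""
    return not (urlparse(from_url).scheme == "https"
                and urlparse(to_url).scheme == "http")
```

<p>The only combination that strips the referrer is secure-to-non-secure, which is why a major referrer's HTTPS migration shows up in your reports as a spike in direct traffic.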
</p><p class="full-width"><img src="http://d2v4zi8pl64nxt.cloudfront.net/guide-to-direct-traffic-google-analytics/5a1df7878599b4.33289691.png">If your referrers have moved to HTTPS and you’re stuck on HTTP, you really ought to consider migrating to HTTPS. Doing so (and updating your backlinks to point to HTTPS URLs) will bring back any referrer data that is currently being stripped from cross-protocol traffic. SSL certificates can now be obtained for free thanks to automated authorities like <a href="https://letsencrypt.org/" target="_blank">Let’s Encrypt</a>, but that’s no reason to neglect the potentially significant SEO implications of <a href="https://moz.com/blog/no-such-thing-as-a-site-migration" target="_blank">site migrations</a>. Remember, HTTPS and HTTP/2 are the future of the web.
</p><p>If, on the other hand, you’ve already migrated to HTTPS and are concerned about your users appearing to partner websites as direct traffic, you can implement the meta referrer tag. Cyrus Shepard has <a href="https://moz.com/blog/meta-referrer-tag" target="_blank">written about this on Moz</a> before, so I won’t delve into it now. Suffice to say, it’s a way of telling browsers to pass <em>some</em> referrer data to non-secure sites, and can be <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy">implemented</a> as a &#60;meta&#62; element or HTTP header.
</p><h3>3. Missing or broken tracking code</h3><p>Let’s say you’ve launched a new landing page template and forgotten to include the GA tracking code. Or, to use a scenario I’m encountering more and more frequently, imagine your GTM container is a horrible mess of poorly configured triggers, and your tracking code is simply failing to fire.
</p><p>Users land on this page without tracking code. They click on a link to a deeper page which <em>does</em> have tracking code. From GA’s perspective, the first hit of the session is the second page visited, meaning that the referrer appears as your own website (i.e. a <a href="https://support.google.com/analytics/answer/6350128?hl=en" target="_blank">self-referral</a>). If your domain is on the referral exclusion list (as per default configuration), the session is bucketed as direct. This will happen even if the first URL is tagged with UTM campaign parameters.
</p><p class="full-width"><img src="http://d2v4zi8pl64nxt.cloudfront.net/guide-to-direct-traffic-google-analytics/5a1df787ea2ef6.09255866.png">
</p><p>As a short-term fix, you can try to repair the damage by simply adding the missing tracking code. To prevent it happening again, carry out a thorough <a href="https://builtvisible.com/analytics-auditing/" target="_blank">Analytics audit</a>, move to a GTM-based tracking implementation, and promote a culture of data-driven marketing.
</p><h3>4. Improper redirection</h3><p>This is an easy one. Don’t use meta refreshes or JavaScript-based redirects — these can wipe or replace referrer data, leading to direct traffic in Analytics. You should also be meticulous with your server-side redirects, and — as is often recommended by SEOs — audit your redirect file frequently. Complex chains are more likely to result in a loss of referrer data, and you run the risk of UTM parameters getting stripped out.
</p><p>Once again, control what you can: use carefully mapped (i.e. non-chained) code 301 server-side redirects to preserve referrer data wherever possible.
</p><h3>5. Non-web documents</h3><p>Links in Microsoft Word documents, slide decks, or PDFs do not pass referrer information. By default, users who click these links will appear in your reports as direct traffic. Clicks from native mobile apps (particularly those with embedded "in-app" browsers) are similarly prone to stripping out referrer data.
</p><p>To a degree, this is unavoidable. Much like so-called “dark social” visits (discussed in detail below), non-web links will inevitably result in some quantity of direct traffic. However, you also have an opportunity here to <em>control the controllables.</em>
</p><p>If you publish whitepapers or offer downloadable PDF guides, for example, you should be tagging the embedded hyperlinks with <a href="https://ga-dev-tools.appspot.com/campaign-url-builder/" target="_blank">UTM campaign parameters</a>. You’d never even contemplate launching an email marketing campaign without campaign tracking (I hope), so why would you distribute any other kind of freebie without similarly tracking its success? In some ways this is even <em>more</em> important, since these kinds of downloadables often have a longevity not seen in a single email campaign. Here’s an example of a properly tagged URL which we would embed as a link:
</p><p style="text-align: center;">https://builtvisible.com/embedded-whitepaper-url/?utm_source=whitepaper&utm<br>
</p><p>The same goes for URLs in your <em>offline</em> marketing materials. For major campaigns it’s common practice to select a short, memorable URL (e.g. moz.com/tv/) and design an entirely new landing page. It’s possible to bypass page creation altogether: simply redirect the vanity URL to an existing page URL which is properly tagged with UTM parameters.
</p><p>So, whether you tag your URLs directly, use redirected vanity URLs, or — if you think UTM parameters are ugly — opt for some crazy-ass hash-fragment solution with GTM (<a href="https://builtvisible.com/one-weird-trick-to-avoid-utm-parameters/" target="_blank">read more here</a>), the takeaway is the same: use campaign parameters wherever it’s appropriate to do so.
</p><h3>6. “Dark social”</h3><p>This is a big one, and probably the least well understood by marketers.
</p><p>The term “dark social” was first coined back in 2012 by Alexis Madrigal in <a href="https://www.theatlantic.com/technology/archive/2012/10/dark-social-we-have-the-whole-history-of-the-web-wrong/263523/" target="_blank">an article for The Atlantic</a>. Essentially it refers to methods of social sharing which cannot easily be attributed to a particular source, like email, instant messaging, Skype, WhatsApp, and Facebook Messenger.
</p><p><a href="https://radiumone.com/darksocial/" target="_blank">Recent studies</a> have found that upwards of 80% of consumers’ outbound sharing from publishers’ and marketers’ websites now occurs via these private channels. In terms of numbers of active users, messaging apps are <em>outpacing</em> social networking apps. All the activity driven by these thriving platforms is typically bucketed as direct traffic by web analytics software.
</p><p class="full-width"><img src="http://d2v4zi8pl64nxt.cloudfront.net/guide-to-direct-traffic-google-analytics/5a1df788741f31.91792226.png">
</p><p>People who use the ambiguous phrase “social media marketing” are typically referring to <em>advertising</em>: you broadcast your message and hope people will listen. Even if you overcome consumer indifference with a well-targeted campaign, any subsequent interactions are affected by their very public nature. The privacy of dark social, by contrast, represents a potential goldmine of intimate, targeted, and relevant interactions with high conversion potential. Nebulous and difficult-to-track though it may be, dark social has the potential to let marketers tap into elusive power of <em>word of mouth</em>.
</p><p>So, how can we minimize the amount of dark social traffic which is bucketed under direct? The unfortunate truth is that there is no magic bullet: proper attribution of dark social requires rigorous campaign tracking. The optimal approach will vary greatly based on your industry, audience, proposition, and so on. For many websites, however, a good first step is to provide convenient and properly configured sharing buttons for private platforms like email, WhatsApp, and Slack, thereby ensuring that users share URLs appended with UTM parameters (or vanity/shortened URLs which redirect to the same). This will go some way towards shining a light on <em>part</em> of your dark social traffic.
</p><h2>Checklist: Minimizing direct traffic</h2><p>To summarize what we’ve already discussed, here are the steps you can take to minimize the level of unnecessary direct traffic in your reports:
</p><ol>
<li><strong>Migrate to HTTPS: </strong>Not only is the secure protocol your gateway to HTTP/2 and the future of the web, it will also have an enormously positive effect on your ability to track referral traffic.</li>
<li><strong>Manage your use of redirects:</strong> Avoid chains and eliminate client-side redirection in favour of carefully-mapped, single-hop, server-side 301s. If you use vanity URLs to redirect to pages with UTM parameters, be meticulous.</li>
<li><strong>Get really good at campaign tagging:</strong> Even amongst data-driven marketers I encounter the belief that UTM begins and ends with switching on automatic tagging in your email marketing software. Others go to the other extreme, doing silly things like tagging internal links. Control what you can, and your ability to carry out meaningful attribution will markedly improve.</li>
<li><strong>Conduct an Analytics audit: </strong>Data integrity is vital, so consider this essential when assessing the success of your marketing. It’s not simply a case of checking for missing track code: good audits involve a review of your measurement plan and rigorous testing at page and property-level.</li>
</ol><p>Adhere to these principles, and it’s often possible to achieve a dramatic reduction in the level of direct traffic reported in Analytics. The following example involved an HTTPS migration, GTM migration (as part of an Analytics review), and an overhaul of internal campaign tracking processes over the course of about 6 months:
</p><p class="full-width"><img src="http://d2v4zi8pl64nxt.cloudfront.net/guide-to-direct-traffic-google-analytics/5a1df7892938f9.72562485.png">
</p><p>But the saga of direct traffic doesn’t end there! Once this channel is “clean” — that is, once you’ve minimized the number of avoidable pollutants — what remains might actually be one of your most valuable traffic segments.
</p><h2>Analyze! Or: why direct traffic can actually be pretty cool</h2><p>For reasons we’ve already discussed, traffic from bookmarks and dark social is an enormously valuable segment to analyze. These are likely to be some of your most loyal and engaged users, and it’s not uncommon to see a notably higher conversion rate for a clean direct channel compared to the site average. You should make the effort to get to know them.
</p><p class="full-width"><img src="http://d2v4zi8pl64nxt.cloudfront.net/guide-to-direct-traffic-google-analytics/5a1df7898e1021.58268901.png">
</p><p>The number of potential avenues to explore is infinite, but here are some good starting points:
</p><ul>
<li>Build meaningful custom segments, defining a subset of your direct traffic based on their landing page, location, device, repeat visit or purchase behavior, or even enhanced e-commerce interactions.</li>
<li>Track meaningful engagement metrics using modern GTM triggers such as <a href="https://builtvisible.com/tracking-element-visibility-with-gtm/" target="_blank">element visibility and native scroll tracking</a>. Measure <em>how</em> your direct users are using and viewing your content.</li>
<li>Watch for correlations with your other marketing activities, and use it as an opportunity to refine your tagging practices and segment definitions. Create a <a href="https://support.google.com/analytics/answer/1033021?hl=en" target="_blank">custom alert</a> which watches for spikes in direct traffic.</li>
<li>Familiarize yourself with flow reports to get an understanding of how your direct traffic is converting. By using Goal Flow and Behavior Flow reports with segmentation, it’s often possible to glean actionable insights which can be applied to the site as a whole.</li>
<li>Ask your users for help! If you’ve isolated a valuable segment of traffic which eludes deeper analysis, add a button to the page offering visitors a free downloadable ebook if they tell you how they discovered your page.</li>
<li>Start thinking about lifetime value, if you haven’t already — overhauling your attribution model or implementing <a href="https://support.google.com/analytics/answer/3123662?hl=en" target="_blank">User ID</a> are good steps towards overcoming the indifference or frustration felt by marketers towards direct traffic.</li>
</ul><p>I hope this guide has been useful. With any luck, you arrived looking for ways to reduce the level of direct traffic in your reports, and left with some new ideas for how to better analyze this valuable segment of users.
</p><p>Thanks for reading!
</p><br /><p><a href="https://moz.com/moztop10">Sign up for The Moz Top 10</a>, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!</p><p>Posted by <a href=\"https://moz.com/community/users/10671287\">tombennet</a></p><p>When it comes to direct traffic in Analytics, there are two deeply entrenched misconceptions.
</p><p>The first is that it’s caused almost exclusively by users typing an address into their browser (or clicking on a bookmark). The second is that it’s a <em>Bad Thing</em>, not because it has any overt negative impact on your site’s performance, but rather because it’s somehow immune to further analysis. The prevailing attitude amongst digital marketers is that direct traffic is an unavoidable inconvenience; as a result, discussion of direct is typically limited to ways of attributing it to other channels, or side-stepping the issues associated with it.
</p><p>In this article, we’ll be taking a fresh look at direct traffic in modern Google Analytics. As well as exploring the myriad ways in which referrer data can be lost, we’ll look at some tools and tactics you can start using immediately to reduce levels of direct traffic in your reports. Finally, we’ll discover how advanced analysis and segmentation can unlock the mysteries of direct traffic and shed light on what might actually be your most valuable users.
</p><h2>What is direct traffic?</h2><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/guide-to-direct-traffic-google-analytics/5a1df786c3fd42.13567253.png">
</p><p>In short, Google Analytics will report a traffic source of "direct" when it has no data on how the session arrived at your website, or when the referring source has been configured to be ignored. You can think of direct as GA’s fall-back option for when its processing logic has failed to attribute a session to a particular source.
</p><p>To properly understand the causes and fixes for direct traffic, it’s important to understand exactly how GA processes traffic sources. The following flow-chart illustrates how sessions are bucketed — note that direct sits right at the end as a final "catch-all" group.
</p><p class="full-width"><a href="https://i.imgur.com/oFXDXsn.jpg" target="_blank"><img src="http://d1avok0lzls2w.cloudfront.net/uploads/blog/campaign-processing-final-422987.jpg" alt="Flow-chart of how Google Analytics buckets sessions into traffic sources"></a>
</p><p>Broadly speaking, and disregarding user-configured overrides, GA’s processing follows this sequence of checks:
</p><p><em>AdWords parameters &gt; Campaign overrides &gt; UTM campaign parameters &gt; Referred by a search engine &gt; Referred by another website &gt; Previous campaign within timeout period &gt; Direct</em>
</p><p>Note the penultimate processing step (<em>previous campaign within timeout</em>), which has a significant impact on the direct channel. Consider a user who discovers your site via organic search, then returns via direct a week later. Both sessions would be attributed to organic search. In fact, campaign data persists for up to six months by default. The key point here is that Google Analytics is already trying to minimize the impact of direct traffic for you.
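</p><p>To make that sequence concrete, here&#8217;s a simplified sketch of the fallback logic in Python. This illustrates the order of checks described above, not GA&#8217;s actual implementation, and the session fields are hypothetical:</p>

```python
def attribute_session(hit, previous_campaign=None):
    """Simplified sketch of GA's source-attribution fallback chain.
    `hit` is a dict describing the first pageview of a session."""
    search_engines = {"www.google.com", "www.bing.com", "search.yahoo.com"}
    if hit.get("gclid"):                   # AdWords auto-tagging parameter
        return "paid search"
    if hit.get("utm_source"):              # UTM campaign parameters
        return "campaign: " + hit["utm_source"]
    referrer = hit.get("referrer")
    if referrer in search_engines:         # referred by a search engine
        return "organic search"
    if referrer:                           # referred by another website
        return "referral"
    if previous_campaign:                  # previous campaign within the
        return previous_campaign           # timeout period (6 months default)
    return "direct"                       # the final catch-all
```

<p>Run a session with no earlier signals through it and you get &#8220;direct&#8221;; give it any prior campaign and direct never applies, which is exactly how GA is already minimizing direct traffic for you.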
</p><h2>What causes direct traffic?</h2><p>Contrary to popular belief, there are actually many reasons why a session might be missing campaign and traffic source data. Here we will run through some of the most common.
</p><h3>1. Manual address entry and bookmarks</h3><p>The classic direct-traffic scenario, this one is largely unavoidable. If a user types a URL into their browser’s address bar or clicks on a browser bookmark, that session will appear as direct traffic.
</p><p><img src="http://d2v4zi8pl64nxt.cloudfront.net/guide-to-direct-traffic-google-analytics/5a1df78734a5a1.86359518.png">
</p><p>Simple as that.
</p><h3>2. HTTPS &gt; HTTP</h3><p>When a user follows a link on a secure (HTTPS) page to a non-secure (HTTP) page, no referrer data is passed, meaning the session appears as direct traffic instead of as a referral. Note that this is intended behavior. It’s part of how the secure protocol was designed, and it does not affect other scenarios: HTTP to HTTP, HTTPS to HTTPS, and even HTTP to HTTPS all pass referrer data.
</p><p>So, if your referral traffic has tanked but direct has spiked, it could be that one of your major referrers has migrated to HTTPS. The inverse is also true: If you’ve migrated to HTTPS and are linking to HTTP websites, the traffic you’re driving to them will appear in <em>their</em> Analytics as direct.
</p><p class="full-width"><img src="http://d2v4zi8pl64nxt.cloudfront.net/guide-to-direct-traffic-google-analytics/5a1df7878599b4.33289691.png">If your referrers have moved to HTTPS and you’re stuck on HTTP, you really ought to consider migrating to HTTPS. Doing so (and updating your backlinks to point to HTTPS URLs) will bring back any referrer data which is being stripped from cross-protocol traffic. SSL certificates can now be obtained for free thanks to automated authorities like <a href="https://letsencrypt.org/" target="_blank">LetsEncrypt</a>, but don’t neglect the potentially significant SEO implications of <a href="https://moz.com/blog/no-such-thing-as-a-site-migration" target="_blank">site migrations</a>. Remember, HTTPS and HTTP/2 are the future of the web.
</p><p>If, on the other hand, you’ve already migrated to HTTPS and are concerned about your users appearing to partner websites as direct traffic, you can implement the meta referrer tag. Cyrus Shepard has <a href="https://moz.com/blog/meta-referrer-tag" target="_blank">written about this on Moz</a> before, so I won’t delve into it now. Suffice to say, it’s a way of telling browsers to pass <em>some</em> referrer data to non-secure sites, and can be <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy">implemented</a> as a &lt;meta&gt; element or HTTP header.
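</p><p>To see the HTTP-header variant in practice, here&#8217;s a minimal sketch using Python&#8217;s standard library. The handler sends a Referrer-Policy header (the header equivalent of the meta element); the policy value shown is one sensible option for this scenario, not the only one:</p>

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class ReferrerPolicyHandler(BaseHTTPRequestHandler):
    """Serve pages with a Referrer-Policy header so browsers pass
    origin-level referrer data even to non-secure (HTTP) sites."""

    def do_GET(self):
        body = b"<p>Hello</p>"
        self.send_response(200)
        # Sends the full URL to same-origin destinations and only the
        # origin to everything else, including HTTP sites. Equivalent
        # markup: <meta name="referrer" content="origin-when-cross-origin">
        self.send_header("Referrer-Policy", "origin-when-cross-origin")
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence console logging for the demo
        pass

# To try it locally:
#   HTTPServer(("localhost", 8000), ReferrerPolicyHandler).serve_forever()
```

<p>With this in place, a click from your secure site to an HTTP site arrives with your origin as the referrer rather than nothing at all.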
</p><h3>3. Missing or broken tracking code</h3><p>Let’s say you’ve launched a new landing page template and forgotten to include the GA tracking code. Or, to use a scenario I’m encountering more and more frequently, imagine your GTM container is a horrible mess of poorly configured triggers, and your tracking code is simply failing to fire.
</p><p>Users land on this page without tracking code. They click on a link to a deeper page which <em>does</em> have tracking code. From GA’s perspective, the first hit of the session is the second page visited, meaning that the referrer appears as your own website (i.e. a <a href="https://support.google.com/analytics/answer/6350128?hl=en" target="_blank">self-referral</a>). If your domain is on the referral exclusion list (as per default configuration), the session is bucketed as direct. This will happen even if the first URL is tagged with UTM campaign parameters.
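</p><p>The bucketing decision behind that behaviour fits in a few lines. This is a simplified model for illustration only; the hostnames are made up:</p>

```python
def classify_first_tracked_hit(referrer_host, exclusion_list):
    """Why an untagged entry page produces direct traffic: the first
    *tracked* pageview carries your own site as its referrer, and any
    host on the referral exclusion list is bucketed as direct."""
    if not referrer_host:
        return "direct"
    if referrer_host in exclusion_list:
        return "direct"    # self-referral suppressed -> direct
    return "referral"
```

<p>With &#8220;yoursite.com&#8221; on the exclusion list (the default for your own domain), a session whose first tracked hit was referred by &#8220;yoursite.com&#8221; lands in direct, regardless of how the user originally arrived.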
</p><p class="full-width"><img src="http://d2v4zi8pl64nxt.cloudfront.net/guide-to-direct-traffic-google-analytics/5a1df787ea2ef6.09255866.png">
</p><p>As a short-term fix, you can try to repair the damage by simply adding the missing tracking code. To prevent it happening again, carry out a thorough <a href="https://builtvisible.com/analytics-auditing/" target="_blank">Analytics audit</a>, move to a GTM-based tracking implementation, and promote a culture of data-driven marketing.
</p><h3>4. Improper redirection</h3><p>This is an easy one. Don’t use meta refreshes or JavaScript-based redirects — these can wipe or replace referrer data, leading to direct traffic in Analytics. You should also be meticulous with your server-side redirects, and — as is often recommended by SEOs — audit your redirect file frequently. Complex chains are more likely to result in a loss of referrer data, and you run the risk of UTM parameters getting stripped out.
</p><p>Once again, control what you can: use carefully mapped (i.e. non-chained) code 301 server-side redirects to preserve referrer data wherever possible.
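</p><p>If you want to sanity-check a redirect mapping programmatically, a small audit helper can catch the worst offenders. A rough sketch (the chain format and warning wording are my own invention):</p>

```python
from urllib.parse import parse_qs, urlsplit

def audit_redirects(hops):
    """Audit a redirect chain for referrer/campaign-data hazards.
    `hops` is a list of (status_code, url) tuples in the order the
    browser follows them, ending with the final landing page.
    Returns a list of warnings; an empty list means the chain is clean."""
    warnings = []
    redirects = [h for h in hops if h[0] in (301, 302, 303, 307, 308)]
    if len(redirects) > 1:
        warnings.append("chained redirect (%d hops): higher risk of "
                        "losing referrer data" % len(redirects))
    for status, url in redirects:
        if status != 301:
            warnings.append("%s answers with a %d; prefer a single "
                            "server-side 301" % (url, status))
    utm_in = {p for p in parse_qs(urlsplit(hops[0][1]).query)
              if p.startswith("utm_")}
    utm_out = set(parse_qs(urlsplit(hops[-1][1]).query))
    lost = utm_in - utm_out
    if lost:
        warnings.append("UTM parameters stripped in transit: "
                        + ", ".join(sorted(lost)))
    return warnings
```

<p>Feed it the hops from your server logs or a crawler, and an empty result means your redirects are behaving.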
</p><h3>5. Non-web documents</h3><p>Links in Microsoft Word documents, slide decks, or PDFs do not pass referrer information. By default, users who click these links will appear in your reports as direct traffic. Clicks from native mobile apps (particularly those with embedded "in-app" browsers) are similarly prone to stripping out referrer data.
</p><p>To a degree, this is unavoidable. Much like so-called “dark social” visits (discussed in detail below), non-web links will inevitably result in some quantity of direct traffic. However, you also have an opportunity here to <em>control the controllables.</em>
</p><p>If you publish whitepapers or offer downloadable PDF guides, for example, you should be tagging the embedded hyperlinks with <a href="https://ga-dev-tools.appspot.com/campaign-url-builder/" target="_blank">UTM campaign parameters</a>. You’d never even contemplate launching an email marketing campaign without campaign tracking (I hope), so why would you distribute any other kind of freebie without similarly tracking its success? In some ways this is even <em>more</em> important, since these kinds of downloadables often have a longevity not seen in a single email campaign. Here’s an example of a properly tagged URL which we would embed as a link:
</p><p style="text-align: center;">https://builtvisible.com/embedded-whitepaper-url/?utm_source=whitepaper&utm<br>
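</p><p>Building these tagged URLs is easy to automate. Here&#8217;s a small helper using Python&#8217;s standard library; the parameter values in the comment are illustrative, not the actual ones from the URL above:</p>

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def tag_url(url, source, medium, campaign):
    """Append UTM campaign parameters to a URL, preserving any query
    string that is already present."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    params = parse_qsl(query) + [("utm_source", source),
                                 ("utm_medium", medium),
                                 ("utm_campaign", campaign)]
    return urlunsplit((scheme, netloc, path, urlencode(params), fragment))

# e.g. tag_url("https://builtvisible.com/embedded-whitepaper-url/",
#              "whitepaper", "pdf", "q4-guide")
```

<p>Generate the links once, embed them in the document, and every click becomes attributable.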
</p><p>The same goes for URLs in your <em>offline</em> marketing materials. For major campaigns it’s common practice to select a short, memorable URL (e.g. moz.com/tv/) and design an entirely new landing page. It’s possible to bypass page creation altogether: simply redirect the vanity URL to an existing page URL which is properly tagged with UTM parameters.
</p><p>So, whether you tag your URLs directly, use redirected vanity URLs, or — if you think UTM parameters are ugly — opt for some crazy-ass hash-fragment solution with GTM (<a href="https://builtvisible.com/one-weird-trick-to-avoid-utm-parameters/" target="_blank">read more here</a>), the takeaway is the same: use campaign parameters wherever it’s appropriate to do so.
</p><h3>6. “Dark social”</h3><p>This is a big one, and probably the least well understood by marketers.
</p><p>The term “dark social” was first coined back in 2012 by Alexis Madrigal in <a href="https://www.theatlantic.com/technology/archive/2012/10/dark-social-we-have-the-whole-history-of-the-web-wrong/263523/" target="_blank">an article for The Atlantic</a>. Essentially it refers to methods of social sharing which cannot easily be attributed to a particular source, like email, instant messaging, Skype, WhatsApp, and Facebook Messenger.
</p><p><a href="https://radiumone.com/darksocial/" target="_blank">Recent studies</a> have found that upwards of 80% of consumers’ outbound sharing from publishers’ and marketers’ websites now occurs via these private channels. In terms of numbers of active users, messaging apps are <em>outpacing</em> social networking apps. All the activity driven by these thriving platforms is typically bucketed as direct traffic by web analytics software.
</p><p class="full-width"><img src="http://d2v4zi8pl64nxt.cloudfront.net/guide-to-direct-traffic-google-analytics/5a1df788741f31.91792226.png">
</p><p>People who use the ambiguous phrase “social media marketing” are typically referring to <em>advertising</em>: you broadcast your message and hope people will listen. Even if you overcome consumer indifference with a well-targeted campaign, any subsequent interactions are affected by their very public nature. The privacy of dark social, by contrast, represents a potential goldmine of intimate, targeted, and relevant interactions with high conversion potential. Nebulous and difficult to track though it may be, dark social has the potential to let marketers tap into the elusive power of <em>word of mouth</em>.
</p><p>So, how can we minimize the amount of dark social traffic which is bucketed under direct? The unfortunate truth is that there is no magic bullet: proper attribution of dark social requires rigorous campaign tracking. The optimal approach will vary greatly based on your industry, audience, proposition, and so on. For many websites, however, a good first step is to provide convenient and properly configured sharing buttons for private platforms like email, WhatsApp, and Slack, thereby ensuring that users share URLs appended with UTM parameters (or vanity/shortened URLs which redirect to the same). This will go some way towards shining a light on <em>part</em> of your dark social traffic.
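</p><p>Generating those share links is straightforward. The sketch below builds WhatsApp and email share URLs around a campaign-tagged copy of the page URL; the UTM values are illustrative placeholders:</p>

```python
from urllib.parse import quote, urlencode

def dark_social_links(page_url, title):
    """Build share links for private channels, pointing at a tagged
    copy of the page so the resulting visits don't land in direct."""
    links = {}
    for channel in ("whatsapp", "email"):
        tagged = "%s?%s" % (page_url, urlencode({
            "utm_source": channel,
            "utm_medium": "dark_social",
            "utm_campaign": "share_buttons",
        }))
        if channel == "whatsapp":
            # WhatsApp's click-to-chat endpoint with a pre-filled message
            links[channel] = "https://wa.me/?text=" + quote(tagged, safe="")
        else:
            links[channel] = "mailto:?subject=%s&body=%s" % (
                quote(title), quote(tagged, safe=""))
    return links
```

<p>Shortened vanity URLs redirecting to the tagged version work just as well if you&#8217;d rather keep the shared link tidy.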
</p><h2>Checklist: Minimizing direct traffic</h2><p>To summarize what we’ve already discussed, here are the steps you can take to minimize the level of unnecessary direct traffic in your reports:
</p><ol>
<li><strong>Migrate to HTTPS: </strong>Not only is the secure protocol your gateway to HTTP/2 and the future of the web, it will also have an enormously positive effect on your ability to track referral traffic.</li>
<li><strong>Manage your use of redirects:</strong> Avoid chains and eliminate client-side redirection in favour of carefully-mapped, single-hop, server-side 301s. If you use vanity URLs to redirect to pages with UTM parameters, be meticulous.</li>
<li><strong>Get really good at campaign tagging:</strong> Even amongst data-driven marketers I encounter the belief that UTM begins and ends with switching on automatic tagging in your email marketing software. Others go to the other extreme, doing silly things like tagging internal links. Control what you can, and your ability to carry out meaningful attribution will markedly improve.</li>
<li><strong>Conduct an Analytics audit: </strong>Data integrity is vital, so consider this essential when assessing the success of your marketing. It’s not simply a case of checking for missing tracking code: good audits involve a review of your measurement plan and rigorous testing at page and property level.</li>
</ol><p>Adhere to these principles, and it’s often possible to achieve a dramatic reduction in the level of direct traffic reported in Analytics. The following example involved an HTTPS migration, GTM migration (as part of an Analytics review), and an overhaul of internal campaign tracking processes over the course of about 6 months:
</p><p class="full-width"><img src="http://d2v4zi8pl64nxt.cloudfront.net/guide-to-direct-traffic-google-analytics/5a1df7892938f9.72562485.png">
</p><p>But the saga of direct traffic doesn’t end there! Once this channel is “clean” — that is, once you’ve minimized the number of avoidable pollutants — what remains might actually be one of your most valuable traffic segments.
</p><h2>Analyze! Or: why direct traffic can actually be pretty cool</h2><p>For reasons we’ve already discussed, traffic from bookmarks and dark social is an enormously valuable segment to analyze. These are likely to be some of your most loyal and engaged users, and it’s not uncommon to see a notably higher conversion rate for a clean direct channel compared to the site average. You should make the effort to get to know them.
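</p><p>Quantifying that claim for your own site is a good first exercise. A minimal sketch, assuming you&#8217;ve exported sessions as records with a channel and a conversion flag:</p>

```python
from collections import defaultdict

def conversion_rate_by_channel(sessions):
    """Compute per-channel conversion rates from session records,
    e.g. {"channel": "direct", "converted": True}."""
    totals = defaultdict(int)
    conversions = defaultdict(int)
    for s in sessions:
        totals[s["channel"]] += 1
        conversions[s["channel"]] += bool(s["converted"])
    return {c: conversions[c] / totals[c] for c in totals}
```

<p>Compare the direct figure against the site-wide average; once the channel is clean, a gap in direct&#8217;s favour is common.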
</p><p class="full-width"><img src="http://d2v4zi8pl64nxt.cloudfront.net/guide-to-direct-traffic-google-analytics/5a1df7898e1021.58268901.png">
</p><p>The number of potential avenues to explore is infinite, but here are some good starting points:
</p><ul>
<li>Build meaningful custom segments, defining a subset of your direct traffic based on their landing page, location, device, repeat visit or purchase behavior, or even enhanced e-commerce interactions.</li>
<li>Track meaningful engagement metrics using modern GTM triggers such as <a href="https://builtvisible.com/tracking-element-visibility-with-gtm/" target="_blank">element visibility and native scroll tracking</a>. Measure <em>how</em> your direct users are using and viewing your content.</li>
<li>Watch for correlations with your other marketing activities, and use it as an opportunity to refine your tagging practices and segment definitions. Create a <a href="https://support.google.com/analytics/answer/1033021?hl=en" target="_blank">custom alert</a> which watches for spikes in direct traffic.</li>
<li>Familiarize yourself with flow reports to get an understanding of how your direct traffic is converting. By using Goal Flow and Behavior Flow reports with segmentation, it’s often possible to glean actionable insights which can be applied to the site as a whole.</li>
<li>Ask your users for help! If you’ve isolated a valuable segment of traffic which eludes deeper analysis, add a button to the page offering visitors a free downloadable ebook if they tell you how they discovered your page.</li>
<li>Start thinking about lifetime value, if you haven’t already — overhauling your attribution model or implementing <a href="https://support.google.com/analytics/answer/3123662?hl=en" target="_blank">User ID</a> are good steps towards overcoming the indifference or frustration felt by marketers towards direct traffic.</li>
</ul><p>I hope this guide has been useful. With any luck, you arrived looking for ways to reduce the level of direct traffic in your reports, and left with some new ideas for how to better analyze this valuable segment of users.
</p><p>Thanks for reading!
</p><br /><p><a href="https://moz.com/moztop10">Sign up for The Moz Top 10</a>, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!</p><img src="http://feeds.feedburner.com/~r/MozBlog/~4/IMLT-n7SO4Y" height="1" width="1" alt=""/>Wed, 29 Nov 2017 00:05:00 GMThttps://moz.com/blog/guide-to-direct-traffic-google-analyticstombennet2017-11-29T00:05:00ZSemantic Keyword Research and Topic Modelshttp://feedproxy.google.com/~r/seobythesea/Tesr/~3/JNkaGHxY8lQ/
http://www.seobythesea.com/2017/11/semantic-keyword-research-topic-models/feed/2819443http://www.seobythesea.com/2017/11/semantic-keyword-research-topic-models/<p>I went to the Pubcon 2017 Conference this week in Las Vegas Nevada and gave a presentation about Semantic Search topics based upon white papers and patents from Google. My focus was on things such as Context Vectors and Phrase-Based Indexing. I promised in social media that I would post the presentation on my blog [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://www.seobythesea.com/2017/11/semantic-keyword-research-topic-models/">Semantic Keyword Research and Topic Models</a> appeared first on <a rel="nofollow" href="http://www.seobythesea.com">SEO by the Sea &#9875;</a>.</p><p><img src="http://www.seobythesea.com/wp-content/think.jpg" alt="Seeing Meaning" width="600" height="400" class="alignleft size-full wp-image-19444" srcset="http://www.seobythesea.com/wp-content/think.jpg 600w, http://www.seobythesea.com/wp-content/think-300x200.jpg 300w" sizes="(max-width: 600px) 100vw, 600px" /></p>
<p>I went to the Pubcon 2017 Conference this week in Las Vegas Nevada and gave a presentation about Semantic Search topics based upon white papers and patents from Google. My focus was on things such as Context Vectors and Phrase-Based Indexing. </p>
<p>I promised in social media that I would post the presentation on my blog so that I could answer questions if anyone had any. </p>
<p>I&#8217;ve been doing keyword research like this for years: I look at other pages that rank well for the keyword terms I want to target, identify phrases and terms that tend to appear on those pages, and include them on the pages I&#8217;m trying to optimize. It made a lot of sense to start doing that after reading about phrase-based indexing in 2005 and later.</p>
<p>Some of the terms I see when I search for Semantic Keyword Research include things such as &#8220;improve your rankings,&#8221; &#8220;conducting keyword research,&#8221; and &#8220;smarter content.&#8221; I&#8217;m also seeing phrases I&#8217;m not a fan of, such as &#8220;LSI Keywords,&#8221; which has about as much scientific credibility as keyword density, which is to say next to none. The researchers from Bell Labs who wrote the 1990 white paper on Latent Semantic Indexing designed it for small (fewer than 10,000 documents), static collections of documents; the web is constantly changing and hasn&#8217;t been that small for a long time. </p>
<p><span id="more-19443"></span></p>
<p>Many people who call themselves SEOs tout &#8220;LSI keywords&#8221; as keywords whose meanings are related to other words; unfortunately, that has nothing to do with the LSI that was developed in 1990.</p>
<p>If you are going to present research or theories about things such as LSI, it really pays to do a little research first. Here&#8217;s my presentation. It includes links to the patents and white papers that the ideas within it are based upon. I do look forward to questions.</p>
<p><iframe src="//www.slideshare.net/slideshow/embed_code/key/eArHyGpWTKoh2k" width="595" height="485" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" style="border:1px solid #CCC; border-width:1px; margin-bottom:5px; max-width: 100%;" allowfullscreen> </iframe> </p>
<div style="margin-bottom:5px"> <strong> <a href="//www.slideshare.net/billslawski/keyword-research-and-topic-modeling-in-a-semantic-web" title="Keyword Research and Topic Modeling in a Semantic Web" target="_blank">Keyword Research and Topic Modeling in a Semantic Web</a> </strong> from <strong><a href="https://www.slideshare.net/billslawski" target="_blank">Bill Slawski</a></strong> </div>
<hr/>Copyright &#169; 2017 <strong><a href="http://www.seobythesea.com">SEO by the Sea &#9875;</a></strong>. This Feed is for personal non-commercial use only. If you are not reading this material in your news aggregator, the site you are looking at may be guilty of copyright infringement. Please contact <a href="http://www.seobythesea.com">SEO by the Sea</a>, so we can take appropriate action immediately.<br/><span style="float: right;font-size: 7pt"><a href="http://blog.taragana.com/index.php/archive/wordpress-plugins-provided-by-taraganacom/">Plugin</a> by <a href="http://www.taragana.com/">Taragana</a></span><p>The post <a rel="nofollow" href="http://www.seobythesea.com/2017/11/semantic-keyword-research-topic-models/">Semantic Keyword Research and Topic Models</a> appeared first on <a rel="nofollow" href="http://www.seobythesea.com">SEO by the Sea &#9875;</a>.</p>
<p>Semantic Search &#8226; Fri, 10 Nov 2017 01:09:53 GMT &#8226; Bill Slawski</p>
<h2><a href="http://www.seobythesea.com/2017/10/topical-search-results/">Does Tomorrow Deliver Topical Search Results at Google?</a></h2>
<figure id="attachment_19431" style="width: 500px" class="wp-caption alignleft"><img src="http://www.seobythesea.com/wp-content/pepper-tree.jpg" alt="" width="500" height="334" class="size-full wp-image-19431" srcset="http://www.seobythesea.com/wp-content/pepper-tree.jpg 500w, http://www.seobythesea.com/wp-content/pepper-tree-300x200.jpg 300w" sizes="(max-width: 500px) 100vw, 500px" /><figcaption class="wp-caption-text">The Oldest Pepper Tree in California</figcaption></figure>
<p>At one point in time, search engines such as Google learned about topics on the Web from sources such as Yahoo! and the Open Directory Project, which provided categories of sites, within directories that people could skim through to find something that they might be interested in. </p>
<p>Those listings of categories included hierarchical topics and subtopics; but they were managed by human beings and both directories have closed down. </p>
<p>In addition to learning about categories and topics from such places, search engines used to use such sources to do focused crawls of the web, to make sure that they were indexing as wide a range of topics as they could. </p>
<p><span id="more-19430"></span></p>
<p>It&#8217;s possible that we are seeing those sites replaced by sources such as Wikipedia and <a href="https://www.wikidata.org/wiki/Wikidata:Introduction">Wikidata</a> and <a href="https://www.google.com/intl/bn/insidesearch/features/search/knowledge.html">Google&#8217;s Knowledge Graph</a> and the <a href="https://concept.research.microsoft.com/Home/Introduction">Microsoft Concept Graph</a>.</p>
<p>Last year, I wrote a post called, <a href="http://www.seobythesea.com/2016/10/google-patents-context-vectors-improve-search/">Google Patents Context Vectors to Improve Search</a>. It focused upon a Google patent titled <a href="https://patentscope.wipo.int/search/en/detail.jsf?docId=US177618724">User-context-based search engine</a>. </p>
<p>In that patent we learned that Google was using information from knowledge bases (sources such as Yahoo Finance, IMDB, Wikipedia, and other data-rich and well organized places) to learn about words that may have more than one meaning. </p>
<p>An example from that patent was that the word &#8220;horse&#8221; has different meanings in different contexts. </p>
<p>To an equestrian, a horse is an animal. To a carpenter, a horse is a work tool. To a gymnast, a horse is a piece of equipment upon which they perform maneuvers during competitions with other gymnasts. </p>
<p>A context vector takes these different meanings from knowledge bases, and the number of times they are mentioned in those places to catalogue how often they are used in which context.</p>
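<p>As a rough illustration of that idea, a context vector for an ambiguous word can be built by tallying how often each sense is mentioned across knowledge-base entries and normalizing the tallies. The sense labels and counts below are entirely hypothetical:</p>

```python
from collections import Counter

def context_vector(sense_mentions):
    """Tally how often an ambiguous word is used in each sense across
    knowledge-base entries, normalized to a probability distribution."""
    counts = Counter(sense_mentions)
    total = sum(counts.values())
    return {sense: n / total for sense, n in counts.items()}

# Hypothetical sense labels for mentions of "horse" in knowledge bases:
mentions = (["animal"] * 6 + ["carpentry tool"] * 2
            + ["gymnastics equipment"] * 2)
vec = context_vector(mentions)
```

With this made-up data, &#8220;animal&#8221; dominates the vector, which matches the intuition that most references to a horse are to the animal.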
<p>I thought knowing about context vectors was useful for doing keyword research, and I was excited to see another patent from Google appear in which the word &#8220;context&#8221; plays a featured role. When you search for something such as a &#8220;horse&#8221;, the search results you receive are going to be mixed with horses of different types, depending upon the meaning. As this new patent tells us about such search results:</p>
<blockquote><p>The ranked list of search results may include search results associated with a topic that the user does not find useful and/or did not intend to be included within the ranked list of search results.</p></blockquote>
<p>If I were searching for a horse of the animal type, I might include another word in my query that identified the context of my search better. The inventors of this new patent seem to have had a similar idea. The patent mentions:</p>
<blockquote><p>In yet another possible implementation, a system may include one or more server devices to receive a search query and context information associated with a document identified by the client; obtain search results based on the search query, the search results identifying documents relevant to the search query; analyze the context information to identify content; and generate a group of first scores for a hierarchy of topics, each first score, of the group of first scores, corresponding to a respective measure of relevance of each topic, of the hierarchy of topics, to the content.</p></blockquote>
<p>From the pictures that accompany the patent, it looks like this context information takes the form of headings above each search result, identifying the context those results fit within. Here&#8217;s a drawing from the patent showing off topical search results (showing rock/music and geology/rocks):</p>
<figure id="attachment_19432" style="width: 625px" class="wp-caption alignleft"><img src="http://www.seobythesea.com/wp-content/Search-Results-in-Context.jpg" alt="Search Results in Context" width="625" height="765" class="size-full wp-image-19432" srcset="http://www.seobythesea.com/wp-content/Search-Results-in-Context.jpg 625w, http://www.seobythesea.com/wp-content/Search-Results-in-Context-245x300.jpg 245w" sizes="(max-width: 625px) 100vw, 625px" /><figcaption class="wp-caption-text">Different types of &#8216;rock&#8217; on a search for &#8216;rock&#8217; at Google</figcaption></figure>
<p>This patent does remind me of the context vector patent, and the two processes in these two patents look like they could work together. This patent is:</p>
<p><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&#038;Sect2=HITOFF&#038;d=PALL&#038;p=1&#038;u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&#038;r=1&#038;f=G&#038;l=50&#038;s1=9,779,139.PN.&#038;OS=PN/9,779,139&#038;RS=PN/9,779,139">Context-based filtering of search results</a><br />
Inventors: Sarveshwar Duddu, Kuntal Loya, Minh Tue Vo Thanh and Thorsten Brants<br />
Assignee: Google Inc.<br />
US Patent: 9,779,139<br />
Granted: October 3, 2017<br />
Filed: March 15, 2016</p>
<p>Abstract</p>
<blockquote><p>A server is configured to receive, from a client, a query and context information associated with a document; obtain search results, based on the query, that identify documents relevant to the query; analyze the context information to identify content; generate first scores for a hierarchy of topics, that correspond to measures of relevance of the topics to the content; select a topic that is most relevant to the context information when the topic is associated with a greatest first score; generate second scores for the search results that correspond to measures of relevance, of the search results, to the topic; select one or more of the search results as being most relevant to the topic when the search results are associated with one or more greatest second scores; generate a search result document that includes the selected search results; and send, to a client, the search result document.</p></blockquote>
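<p>The two-stage scoring in that abstract can be sketched as follows. The word-overlap measure and all of the data here are hypothetical stand-ins for whatever relevance scoring Google actually uses; the point is only the shape of the process, where first scores pick a topic from the context and second scores re-rank results against that topic:</p>

```python
def overlap(a, b):
    """Toy relevance measure: number of shared lowercase words."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def filter_results(results, context, topics):
    # First scores: how relevant each topic is to the context content.
    first_scores = {topic: overlap(topic, context) for topic in topics}
    # Select the topic with the greatest first score.
    best_topic = max(first_scores, key=first_scores.get)
    # Second scores: how relevant each result is to the selected topic.
    ranked = sorted(results, key=lambda r: overlap(r, best_topic),
                    reverse=True)
    return best_topic, ranked

results = [
    "rock music bands of the seventies",
    "igneous rock formations and geology",
]
topic, ranked = filter_results(
    results,
    context="field guide to geology and minerals",
    topics=["arts music rock", "science geology rock"],
)
```

Given a geology-flavored context document, the geology topic wins the first scoring pass, and the geology result moves ahead of the music result in the second.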
<p>It will be exciting to see topical search results start appearing at Google.</p>
<p>Search Engine Optimization (SEO) &#8226; Fri, 13 Oct 2017 19:50:41 GMT &#8226; Bill Slawski</p>
<h2><a href="http://www.seobythesea.com/2017/09/site-quality-scores/">Using Ngram Phrase Models to Generate Site Quality Scores</a></h2>
<figure id="attachment_19416" style="width: 640px" class="wp-caption alignleft"><img src="http://www.seobythesea.com/wp-content/640px-Scrabble_game_in_progress.jpg" alt="Scrabble-phrases" width="640" height="424" class="size-full wp-image-19416" srcset="http://www.seobythesea.com/wp-content/640px-Scrabble_game_in_progress.jpg 640w, http://www.seobythesea.com/wp-content/640px-Scrabble_game_in_progress-300x199.jpg 300w" sizes="(max-width: 640px) 100vw, 640px" /><figcaption class="wp-caption-text">Source: <a href="https://commons.wikimedia.org/wiki/File:Scrabble_game_in_progress.jpg">https://commons.wikimedia.org/wiki/File:Scrabble_game_in_progress.jpg</a><br />Photographer: <a href="https://commons.wikimedia.org/wiki/User_talk:McGeddon">McGeddon</a><br />Creative Commons License: <a href="https://creativecommons.org/licenses/by/2.0/deed.en">Attribution 2.0 Generic</a></figcaption></figure>
<p>Navneet Panda, whom the Google Panda update is named after, has co-invented a new patent that focuses on site quality scores. It&#8217;s worth studying to understand how it determines the quality of sites.</p>
<p>Back in 2013, I wrote the post <a href="http://www.seobythesea.com/2013/10/google-gibberish-content-to-demote-pages/">Google Scoring Gibberish Content to Demote Pages in Rankings</a>, about Google using ngrams from sites and building language models from them to determine if those sites were filled with gibberish, or spammy content. I was reminded of that post when I read this patent. </p>
<p><span id="more-19415"></span></p>
<p>Rather than explaining what ngrams are in this post (which I did in the gibberish post), I&#8217;m going to point to an example of ngrams at the <a href="https://books.google.com/ngrams">Google n-gram viewer</a>, which shows Google indexing phrases in scanned books. This article published by the Wired site also focused upon ngrams: <a href="https://www.wired.com/2015/10/pitfalls-of-studying-language-with-google-ngram/#slide-1">The Pitfalls of Using Google Ngram to Study Language</a>. </p>
<p>An ngram phrase could be a 2-gram, a 3-gram, a 4-gram, or a 5-gram phrase, where pages are broken down into two-word, three-word, four-word, or five-word phrases. If a body of pages is broken down into ngrams, those ngrams could be used to create language models or phrase models to compare to other pages. </p>
<p>Language models, like the ones that Google used to create gibberish scores for sites, could also be used to determine the quality of sites, if example sites were used to generate those language models. That seems to be the idea behind the new patent granted this week. The summary section of the patent tells us about this use of the process it describes and protects:</p>
<blockquote><p>In general, one innovative aspect of the subject matter described in this specification can be embodied in methods that include the actions of obtaining baseline site quality scores for a plurality of previously-stored sites; generating a phrase model for a plurality of sites including the plurality of previously-scored sites, wherein the phrase model defines a mapping from phrase-specific relative frequency measures to phrase-specific baseline site quality scores; for a new site, the new site not being one of the plurality of previously-scored sites, obtaining a relative frequency measure for each of a plurality of phrases in the new site; determining an aggregate site quality score for the new site from the phrase model using the relative frequency measures of the plurality of phrases in the new site; and determining a predicted site quality score for the new site from the aggregate site quality score.</p></blockquote>
<p>The newly granted patent from Google is:</p>
<p><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&#038;Sect2=HITOFF&#038;d=PALL&#038;p=1&#038;u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&#038;r=1&#038;f=G&#038;l=50&#038;s1=9,767,157.PN.&#038;OS=PN/9,767,157&#038;RS=PN/9,767,157">Predicting site quality</a><br />
Inventors: Navneet Panda and Yun Zhou<br />
Assignee: Google<br />
US Patent: 9,767,157<br />
Granted: September 19, 2017<br />
Filed: March 15, 2013</p>
<p>Abstract</p>
<blockquote><p>Methods, systems, and apparatus, including computer programs encoded on computer storage media, for predicating a measure of quality for a site, e.g., a web site. In some implementations, the methods include obtaining baseline site quality scores for multiple previously scored sites; generating a phrase model for multiple sites including the previously scored sites, wherein the phrase model defines a mapping from phrase specific relative frequency measures to phrase specific baseline site quality scores; for a new site that is not one of the previously scored sites, obtaining a relative frequency measure for each of a plurality of phrases in the new site; determining an aggregate site quality score for the new site from the phrase model using the relative frequency measures of phrases in the new site; and determining a predicted site quality score for the new site from the aggregate site quality score.</p></blockquote>
<p>In addition to generating ngrams from the text of sites, some implementations of this patent also generate ngrams from the anchor text of links pointing to pages of the sites. Building a phrase model involves calculating the relative frequency of each n-gram on a site, &#8220;based on the count of pages divided by the number of pages on the site.&#8221; </p>
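<p>That relative frequency measure is simple enough to sketch. The three-page &#8220;site&#8221; below is invented, and mapping these frequencies to a predicted quality score would require a trained phrase model built from previously-scored sites, which isn&#8217;t shown here:</p>

```python
def ngrams(text, n):
    """All n-word phrases in a text, as a set (counted once per page)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def relative_frequencies(pages, n=2):
    """For each ngram: the count of pages containing it, divided by
    the total number of pages on the site (the patent's measure)."""
    total = len(pages)
    counts = {}
    for page in pages:
        for gram in ngrams(page, n):
            counts[gram] = counts.get(gram, 0) + 1
    return {gram: c / total for gram, c in counts.items()}

# Invented page texts for a spammy-looking three-page site:
site_pages = [
    "buy cheap pills buy cheap pills",
    "buy cheap pills online today",
    "contact us about cheap pills",
]
freqs = relative_frequencies(site_pages, n=2)
```

A phrase like &#8220;cheap pills&#8221; appearing on every page gets a relative frequency of 1.0; in a phrase model trained on previously-scored sites, ngrams like that might map to low baseline quality scores.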
<p>The patent tells us that site quality scores can impact the rankings of pages from those sites:</p>
<blockquote><p>Obtain baseline site quality scores for a number of previously-scored sites. The baseline site quality scores are scores used by the system, e.g., by a ranking engine of the system, as signals, among other signals, to rank search results. In some implementations, the baseline scores are determined by a backend process that may be expensive in terms of time or computing resources, or by a process that may not be applicable to all sites. For these or other reasons, baseline site quality scores are not available for all sites.</p></blockquote>
<p>Search Engine Optimization (SEO) &#8226; Fri, 22 Sep 2017 05:33:00 GMT &#8226; Bill Slawski</p>
<h2><a href="http://www.seobythesea.com/2017/09/textile-based-device-controls/">Google&#8217;s Project Jacquard: Textile-Based Device Controls</a></h2>
<p><img src="http://www.seobythesea.com/wp-content/Textile-Devices.jpg" alt="Textile Devices with Controls Built into them" width="647" height="472" class="alignleft size-full wp-image-19398" srcset="http://www.seobythesea.com/wp-content/Textile-Devices.jpg 647w, http://www.seobythesea.com/wp-content/Textile-Devices-300x219.jpg 300w" sizes="(max-width: 647px) 100vw, 647px" /></p>
<p>I remember my father building some innovative plastics blow molding machines where he added a central processing control device to the machines so that all adjustable controls could be changed from one place. He would have loved seeing what is going on at Google these days, and the hardware that they are working on developing, which focuses on building controls into textiles and plastics.</p>
<p>This is outside of Google&#8217;s search efforts, but it is interesting to see what else they may get involved in, since their work is beginning to cover a wider and wider range of things, from self-driving cars to glucose-analyzing contact lenses.</p>
<p>This morning I tweeted an article I saw in The Sun, from the UK, that was kind of interesting: <a href="https://www.thesun.co.uk/motors/4464108/googles-creating-touch-sensitive-car-seats-that-will-switch-on-air-con-sat-nav-and-change-music-with-a-bum-wiggle/">Seating Plan Google’s creating touch sensitive car seats that will switch on air con, sat-nav and change music with a BUM WIGGLE</a></p>
<p><span id="more-19397"></span></p>
<p>It had me curious if I could find patents related to Google&#8217;s Project Jacquard, so I went to the USPTO website, and searched, and a couple came up.</p>
<p><a href="http://appft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&#038;Sect2=HITOFF&#038;d=PG01&#038;p=1&#038;u=%2Fnetahtml%2FPTO%2Fsrchnum.html&#038;r=1&#038;f=G&#038;l=50&#038;s1=%2220170232538%22.PGNR.&#038;OS=DN/20170232538&#038;RS=DN/20170232538">Attaching Electronic Components to Interactive Textiles</a><br />
Inventors: Karen Elizabeth Robinson, Nan-Wei Gong, Mustafa Emre Karagozler, Ivan Poupyrev<br />
Assignee: Google<br />
US Patent Application: 20170232538<br />
Granted: August 17, 2017<br />
Filed: May 3, 2017</p>
<p>Abstract</p>
<blockquote><p>This document describes techniques and apparatuses for attaching electronic components to interactive textiles. In various implementations, an interactive textile that includes conductive thread woven into the interactive textile is received. The conductive thread includes a conductive wire (e.g., a copper wire) that that is twisted, braided, or wrapped with one or more flexible threads (e.g., polyester or cotton threads). A fabric stripping process is applied to the interactive textile to strip away fabric of the interactive textile and the flexible threads to expose the conductive wire in a window of the interactive textile. After exposing the conductive wires in the window of the interactive textile, an electronic component (e.g., a flexible circuit board) is attached to the exposed conductive wire of the conductive thread in the window of the interactive textile.</p></blockquote>
<p><a href="http://appft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&#038;Sect2=HITOFF&#038;d=PG01&#038;p=1&#038;u=%2Fnetahtml%2FPTO%2Fsrchnum.html&#038;r=1&#038;f=G&#038;l=50&#038;s1=%2220170115777%22.PGNR.&#038;OS=DN/20170115777&#038;RS=DN/20170115777">Interactive Textiles</a><br />
Inventors: Ivan Poupyrev<br />
Assignee: Google Inc.<br />
US Patent Application: 20170115777<br />
Granted: April 27, 2017<br />
Filed: January 4, 2017</p>
<p>Abstract</p>
<blockquote><p>This document describes interactive textiles. An interactive textile includes a grid of conductive thread woven into the interactive textile to form a capacitive touch sensor that is configured to detect touch input. The interactive textile can process the touch-input to generate touch data that is useable to control various remote devices. For example, the interactive textiles may aid users in controlling volume on a stereo, pausing a movie playing on a television, or selecting a web page on a desktop computer. Due to the flexibility of textiles, the interactive textile may be easily integrated within flexible objects, such as clothing, handbags, fabric casings, hats, and so forth. In one or more implementations, the interactive textiles may be integrated within various hard objects, such as by injection molding the interactive textile into a plastic cup, a hard casing of a smart phone, and so forth.</p></blockquote>
<p>The drawings that accompanied this patent were interesting because they showed off how gestures used on controls might be used:</p>
<p><img src="http://www.seobythesea.com/wp-content/in-action.jpg" alt="Controls in action" width="698" height="517" class="alignleft size-full wp-image-19399" srcset="http://www.seobythesea.com/wp-content/in-action.jpg 698w, http://www.seobythesea.com/wp-content/in-action-300x222.jpg 300w" sizes="(max-width: 698px) 100vw, 698px" /></p>
<figure id="attachment_19400" style="width: 510px" class="wp-caption alignleft"><img src="http://www.seobythesea.com/wp-content/Textile-Controller.jpg" alt="textile controller" width="510" height="360" class="size-full wp-image-19400" srcset="http://www.seobythesea.com/wp-content/Textile-Controller.jpg 510w, http://www.seobythesea.com/wp-content/Textile-Controller-300x212.jpg 300w" sizes="(max-width: 510px) 100vw, 510px" /><figcaption class="wp-caption-text">Here is a look at the textile controller.</figcaption></figure>
<figure id="attachment_19401" style="width: 713px" class="wp-caption alignleft"><img src="http://www.seobythesea.com/wp-content/double-Tap-Gestures.jpg" alt="double tap" width="713" height="712" class="size-full wp-image-19401" srcset="http://www.seobythesea.com/wp-content/double-Tap-Gestures.jpg 713w, http://www.seobythesea.com/wp-content/double-Tap-Gestures-150x150.jpg 150w, http://www.seobythesea.com/wp-content/double-Tap-Gestures-300x300.jpg 300w" sizes="(max-width: 713px) 100vw, 713px" /><figcaption class="wp-caption-text">A double tap on the controller is possible.</figcaption></figure>
<figure id="attachment_19402" style="width: 717px" class="wp-caption alignleft"><img src="http://www.seobythesea.com/wp-content/two-finger-touch.jpg" alt="two finger touch" width="717" height="680" class="size-full wp-image-19402" srcset="http://www.seobythesea.com/wp-content/two-finger-touch.jpg 717w, http://www.seobythesea.com/wp-content/two-finger-touch-300x285.jpg 300w" sizes="(max-width: 717px) 100vw, 717px" /><figcaption class="wp-caption-text">A two finger touch on the controller is also possible.</figcaption></figure>
<figure id="attachment_19403" style="width: 747px" class="wp-caption alignleft"><img src="http://www.seobythesea.com/wp-content/Swipe-up.jpg" alt="swipe up" width="747" height="744" class="size-full wp-image-19403" srcset="http://www.seobythesea.com/wp-content/Swipe-up.jpg 747w, http://www.seobythesea.com/wp-content/Swipe-up-150x150.jpg 150w, http://www.seobythesea.com/wp-content/Swipe-up-300x300.jpg 300w" sizes="(max-width: 747px) 100vw, 747px" /><figcaption class="wp-caption-text">You can swipe up on textile controllers</figcaption></figure>
<figure id="attachment_19404" style="width: 750px" class="wp-caption alignleft"><img src="http://www.seobythesea.com/wp-content/injection-molding.jpg" alt="extruder" width="750" height="480" class="size-full wp-image-19404" srcset="http://www.seobythesea.com/wp-content/injection-molding.jpg 750w, http://www.seobythesea.com/wp-content/injection-molding-300x192.jpg 300w" sizes="(max-width: 750px) 100vw, 750px" /><figcaption class="wp-caption-text">An Extruder showing plastics materials being heated up to send to a mold</figcaption></figure>
<figure id="attachment_19405" style="width: 556px" class="wp-caption alignleft"><img src="http://www.seobythesea.com/wp-content/molded-devices.jpg" alt="molded devices" width="556" height="725" class="size-full wp-image-19405" srcset="http://www.seobythesea.com/wp-content/molded-devices.jpg 556w, http://www.seobythesea.com/wp-content/molded-devices-230x300.jpg 230w" sizes="(max-width: 556px) 100vw, 556px" /><figcaption class="wp-caption-text">The patent shows off plastic molder devices with controls built into them.</figcaption></figure>
<p>My father would have gotten a kick out of seeing a plastics extruder in a Google Patent (I know I did.)</p>
<p>It will be interesting seeing textile and plastics controls come out as described in these patents.</p>
<p><em>Added 9/25/2017:</em> Saw this news this morning: <a href="https://www.theverge.com/2017/9/25/16354712/google-project-jacquard-levis-commuter-trucker-jacket-price-release-date">This Levi’s jacket with a smart sleeve is finally going on sale for $350</a></p>
<p>Gadgets &#8226; Thu, 14 Sep 2017 16:00:03 GMT &#8226; Bill Slawski</p>
<h2><a href="http://www.seobythesea.com/2017/09/word-vector-approach/">Citations behind the Google Brain Word Vector Approach</a></h2>
<p><img src="http://www.seobythesea.com/wp-content/cardiff-tidal-pools.jpg" alt="Cardiff-Tidal-pools" width="500" height="334" class="alignleft size-full wp-image-19388" srcset="http://www.seobythesea.com/wp-content/cardiff-tidal-pools.jpg 500w, http://www.seobythesea.com/wp-content/cardiff-tidal-pools-300x200.jpg 300w" sizes="(max-width: 500px) 100vw, 500px" /></p>
<p>In October of 2015, a new algorithm was announced by members of the Google Brain team, described in this post from Search Engine Land: <a href="http://searchengineland.com/meet-rankbrain-google-search-results-234386">Meet RankBrain: The Artificial Intelligence That’s Now Processing Google Search Results</a>. One of the Google Brain team members who gave Bloomberg News a long interview on RankBrain, Gregory S. Corrado, was a co-inventor on a patent granted this August along with other members of the Google Brain team. </p>
<p>In the SEM Post article, <a href="http://www.thesempost.com/rankbrain-everything-we-know-about-googles-ai-algorithm/">RankBrain: Everything We Know About Google’s AI Algorithm</a> we are told that Rankbrain uses concepts from Geoffrey Hinton, involving Thought Vectors. The summary in the description from the patent tells us about how a word vector approach might be used in such a system:</p>
<blockquote><p>Particular embodiments of the subject matter described in this specification can be implemented so as to realize one or more of the following advantages. Unknown words in sequences of words can be effectively predicted if the surrounding words are known. Words surrounding a known word in a sequence of words can be effectively predicted. Numerical representations of words in a vocabulary of words can be easily and effectively generated. The numerical representations can reveal semantic and syntactic similarities and relationships between the words that they represent.</p>
<p><span id="more-19386"></span></p>
<p>By using a word prediction system having a two-layer architecture and by parallelizing the training process, the word prediction system can be effectively trained on very large word corpuses, e.g., corpuses that contain on the order of 200 billion words, resulting in higher quality numeric representations than those that are obtained by training systems on relatively smaller word corpuses. Further, words can be represented in very high-dimensional spaces, e.g., spaces that have on the order of 1000 dimensions, resulting in higher quality representations than when words are represented in relatively lower-dimensional spaces. Additionally, the time required to train the word prediction system can be greatly reduced.</p></blockquote>
<p>So, an incomplete or ambiguous query could use the words it does contain to predict related missing words. Those predicted words could then be used to return search results that the original words alone might have difficulty returning. The patent that describes this prediction process is:</p>
<p><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&#038;Sect2=HITOFF&#038;d=PALL&#038;p=1&#038;u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&#038;r=1&#038;f=G&#038;l=50&#038;s1=9,740,680.PN.&#038;OS=PN/9,740,680&#038;RS=PN/9,740,680">Computing numeric representations of words in a high-dimensional space</a></p>
<p>Inventors: Tomas Mikolov, Kai Chen, Gregory S. Corrado and Jeffrey A. Dean<br />
Assignee: Google Inc.<br />
US Patent: 9,740,680<br />
Granted: August 22, 2017<br />
Filed: May 18, 2015</p>
<p>Abstract</p>
<blockquote><p>Methods, systems, and apparatus, including computer programs encoded on computer storage media, for computing numeric representations of words. One of the methods includes obtaining a set of training data, wherein the set of training data comprises sequences of words; training a classifier and an embedding function on the set of training data, wherein training the embedding function comprises obtained trained values of the embedding function parameters; processing each word in the vocabulary using the embedding function in accordance with the trained values of the embedding function parameters to generate a respective numerical representation of each word in the vocabulary in the high-dimensional space; and associating each word in the vocabulary with the respective numeric representation of the word in the high-dimensional space.</p></blockquote>
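<p>As a rough illustration of the prediction idea in the abstract above, here is a minimal sketch in Python. The tiny three-dimensional vectors and the simple average-and-nearest-neighbor prediction are invented stand-ins for illustration; the patent describes vectors learned over corpuses on the order of 200 billion words, in spaces on the order of 1000 dimensions.</p>

```python
import numpy as np

# Toy 3-dimensional word vectors, invented for illustration; a real system
# would learn high-dimensional vectors from a very large corpus.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.7, 0.9, 0.0]),
    "woman": np.array([0.7, 0.1, 0.9]),
    "apple": np.array([0.1, 0.5, 0.5]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict_missing(context_words, vocab=vectors):
    """Average the context vectors, then return the vocabulary word
    (outside the context) whose vector is closest to that average."""
    avg = np.mean([vocab[w] for w in context_words], axis=0)
    candidates = (w for w in vocab if w not in context_words)
    return max(candidates, key=lambda w: cosine(avg, vocab[w]))

print(predict_missing(["king", "woman"]))  # -> queen
```

<p>The point is only that numeric representations let related words be found by geometry: the known words point at a region of the space where a missing word should live.</p>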
<p>One of the things I found really interesting about this patent is that it includes a number of citations from the applicants. They looked worth reading, and many of them are co-authored by inventors of this patent, by people who are well-known in the field of artificial intelligence, or by people from Google. I hunted for copies of them on the Web and was able to find them, and sharing those links was the idea behind this post. It may be helpful to read as many of these as possible before tackling the patent. If anything stands out in any way to you, let us know what you&#8217;ve found interesting.</p>
<p>Bengio and LeCun, &#8220;<a href="http://yann.lecun.com/exdb/publis/pdf/bengio-lecun-07.pdf">Scaling learning algorithms towards AI</a>,&#8221; Large-Scale Kernel Machines, MIT Press, 41 pages, 2007. cited by applicant.</p>
<p>Bengio et al., &#8220;<a href="http://www.jmlr.org/papers/volume3/bengio03a/bengio03a.pdf">A neural probabilistic language model</a>,&#8221; Journal of Machine Learning Research, 3:1137-1155, 2003. cited by applicant .</p>
<p>Brants et al., &#8220;<a href="http://www.aclweb.org/anthology/D07-1090.pdf">Large language models in machine translation</a>,&#8221; Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Language Learning, 10 pages, 2007. cited by applicant .</p>
<p>Collobert and Weston, &#8220;<a href="https://ronan.collobert.com/pub/matos/2008_nlp_icml.pdf">A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning</a>,&#8221; International Conference on Machine Learning, ICML, 8 pages, 2008. cited by applicant .</p>
<p>Collobert et al., &#8220;<a href="http://www.jmlr.org/papers/volume12/collobert11a/collobert11a.pdf">Natural Language Processing (Almost) from Scratch</a>,&#8221; Journal of Machine Learning Research, 12:2493-2537, 2011. cited by applicant .</p>
<p>Dean et al., &#8220;<a href="http://papers.nips.cc/paper/4687-large-scale-distributed-deep-networks.pdf">Large Scale Distributed Deep Networks</a>,&#8221; Neural Information Processing Systems Conference, 9 pages, 2012. cited by applicant .</p>
<p>Elman, &#8220;<a href="http://psych.colorado.edu/~kimlab/Elman1990.pdf">Finding Structure in Time</a>,&#8221; Cognitive Science, 14, 179-211, 1990. cited by applicant .</p>
<p>Huang et al., &#8220;<a href="http://www.aclweb.org/anthology/P12-1092">Improving Word Representations via Global Context and Multiple Word Prototypes</a>,&#8221; Proc. Association for Computational Linguistics, 10 pages, 2012. cited by applicant .</p>
<p>Mikolov and Zweig, &#8220;<a href="http://www.aclweb.org/anthology/N13-1090">Linguistic Regularities in Continuous Space Word Representations</a>,&#8221; submitted to NAACL HLT, 6 pages, 2012. cited by applicant .</p>
<p>Mikolov et al., &#8220;<a href="http://www.fit.vutbr.cz/~imikolov/rnnlm/is2011_emp.pdf">Empirical Evaluation and Combination of Advanced Language Modeling Techniques</a>,&#8221; Proceedings of Interspeech, 4 pages, 2011. cited by applicant .</p>
<p>Mikolov et al., &#8220;<a href="https://github.com/yihui-he/Natural-Language-Process/blob/master/Extensions%20of%20recurrent%20neural%20network%20language%20model.pdf">Extensions of recurrent neural network language model</a>,&#8221; IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5528-5531, May 22-27, 2011. cited by applicant .</p>
<p>Mikolov et al., &#8220;<a href="http://www.fit.vutbr.cz/research/groups/speech/publi/2009/mikolov_ic2009_nnlm_4.pdf">Neural network based language models for highly inflective languages</a>,&#8221; Proc. ICASSP, 4 pages, 2009. cited by applicant .</p>
<p>Mikolov et al., &#8220;<a href="http://www.fit.vutbr.cz/research/groups/speech/publi/2010/mikolov_interspeech2010_IS100722.pdf">Recurrent neural network based language model</a>,&#8221; Proceedings of Interspeech, 4 pages, 2010. cited by applicant .</p>
<p>Mikolov et al., &#8220;<a href="https://www.microsoft.com/en-us/research/publication/strategies-for-training-large-scale-neural-network-language-models/">Strategies for Training Large Scale Neural Network Language Models</a>,&#8221; Proc. Automatic Speech Recognition and Understanding, 6 pages, 2011. cited by applicant .</p>
<p>Mikolov, &#8220;<a href="http://www.fit.vutbr.cz/~imikolov/rnnlm/">RNNLM Toolkit</a>,&#8221; Faculty of Information Technology (FIT) of Brno University of Technology [online], 2010-2012 [retrieved on Jun. 16, 2014]. Retrieved from the Internet: < URL: http://www.fit.vutbr.cz/.about.imikolov/rnnlm/>, 3 pages. cited by applicant .</p>
<p>Mikolov, &#8220;<a href="http://www.fit.vutbr.cz/~imikolov/rnnlm/thesis.pdf">Statistical Language Models based on Neural Networks</a>,&#8221; PhD thesis, Brno University of Technology, 133 pages, 2012. cited by applicant .</p>
<p>Mnih and Hinton, &#8220;<a href="http://papers.nips.cc/paper/3583-a-scalable-hierarchical-distributed-language-model.pdf">A Scalable Hierarchical Distributed Language Model</a>,&#8221; Advances in Neural Information Processing Systems 21, MIT Press, 8 pages, 2009. cited by applicant .</p>
<p>Morin and Bengio, &#8220;<a href="https://www.iro.umontreal.ca/~lisa/pointeurs/hierarchical-nnlm-aistats05.pdf">Hierarchical Probabilistic Neural Network Language Model</a>,&#8221; AISTATS, 7 pages, 2005. cited by applicant .</p>
<p>Rumelhart et al., &#8220;<a href="https://www.iro.umontreal.ca/~vincentp/ift3395/lectures/backprop_old.pdf">Learning representations by back-propagating errors</a>,&#8221; Nature, 323:533-536, 1986. cited by applicant .</p>
<p>Turian et al., &#8220;<a href="http://cogcomp.org/page/publication_view/653">MetaOptimize / projects / wordreprs /</a>&#8221; Metaoptimize.com [online], captured on Mar. 7, 2012. Retrieved from the Internet using the Wayback Machine: < URL: http://web.archive.org/web/20120307230641/http://metaoptimize.com/project- s/wordreprs>, 2 pages. cited by applicant .<br />
Turian et al., &#8220;<a href="http://www.aclweb.org/anthology/P10-1040">Word Representations: A Simple and General Method for Semi-Supervised Learning</a>,&#8221; Proc. Association for Computational Linguistics, 384-394, 2010. cited by applicant .</p>
<p>Turney, &#8220;<a href="https://arxiv.org/ftp/cs/papers/0508/0508053.pdf">Measuring Semantic Similarity by Latent Relational Analysis</a>,&#8221; Proc. International Joint Conference on Artificial Intelligence, 6 pages, 2005. cited by applicant .</p>
<p>Zweig and Burges, &#8220;<a href="https://www.microsoft.com/en-us/research/publication/the-microsoft-research-sentence-completion-challenge/">The Microsoft Research Sentence Completion Challenge</a>,&#8221; Microsoft Research Technical Report MSR-TR-2011-129, 7 pages, Feb. 20, 2011. cited by applicant.</p>
<hr/>Copyright &#169; 2017 <strong><a href="http://www.seobythesea.com">SEO by the Sea &#9875;</a></strong>. This Feed is for personal non-commercial use only. If you are not reading this material in your news aggregator, the site you are looking at may be guilty of copyright infringement. Please contact <a href="http://www.seobythesea.com">SEO by the Sea</a>, so we can take appropriate action immediately.<br/><span style="float: right;font-size: 7pt"><a href="http://blog.taragana.com/index.php/archive/wordpress-plugins-provided-by-taraganacom/">Plugin</a> by <a href="http://www.taragana.com/">Taragana</a></span><p>The post <a rel="nofollow" href="http://www.seobythesea.com/2017/09/word-vector-approach/">Citations behind the Google Brain Word Vector Approach</a> appeared first on <a rel="nofollow" href="http://www.seobythesea.com">SEO by the Sea &#9875;</a>.</p>
<p><em>Posted in Search Queries by Bill Slawski on Fri, 01 Sep 2017.</em></p>
<h2>Personalizing Search Results at Google</h2>
<p>The post <a rel="nofollow" href="http://www.seobythesea.com/2017/08/personalizing-search-results/">Personalizing Search Results at Google</a> appeared first on <a rel="nofollow" href="http://www.seobythesea.com">SEO by the Sea &#9875;</a>.</p><p><img src="http://www.seobythesea.com/wp-content/Document-Sets.jpg" alt="document sets at Google" width="649" height="764" class="alignleft size-full wp-image-19376" srcset="http://www.seobythesea.com/wp-content/Document-Sets.jpg 649w, http://www.seobythesea.com/wp-content/Document-Sets-255x300.jpg 255w" sizes="(max-width: 649px) 100vw, 649px" /></p>
<p>One thing most SEOs are aware of is that search results at Google are sometimes personalized for searchers; but it&#8217;s not something that I&#8217;ve seen too much written about. So when I came across a patent that is about personalizing search results, I wanted to dig in, and see if it could give us more insights. </p>
<p>The patent was an updated continuation patent, and I love to look at those, because it is possible to compare the claims of the newer version against those of the older one, to see how the processes described in the patent have changed. Sometimes the changes are spelled out in great detail, and sometimes they shift focus to concepts that were present in the original version but weren&#8217;t emphasized as much. </p>
<p>One of the last continuation patents I looked at was one from Navneet Panda, in the post <a href="https://gofishdigital.com/high-quality-search-results/">Click a Panda: High Quality Search Results based on Repeat Clicks and Visit Duration</a>. In that one, we saw a shift in focus toward more user behavior data, such as repeat clicks by the same user on a site and the duration of a visit to a site. </p>
<p><span id="more-19375"></span></p>
<p><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&#038;Sect2=HITOFF&#038;d=PALL&#038;p=1&#038;u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&#038;r=1&#038;f=G&#038;l=50&#038;s1=9,734,211.PN.&#038;OS=PN/9,734,211&#038;RS=PN/9,734,211">Personalizing search results</a><br />
Inventors: Paul Tucker<br />
Assignee: GOOGLE INC.<br />
US Patent: 9,734,211<br />
Granted: August 15, 2017<br />
Filed: February 27, 2015</p>
<p>Abstract</p>
<blockquote><p>A system receives a search query from a user and performs a search of a corpus of documents, based on the search query, to form a ranked set of search results. The system re-ranks the set of search results based on preferences of the user, or a group of users, and provides the re-ranked search results to the user.</p></blockquote>
<p>The older version of the patent is <a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&#038;Sect2=HITOFF&#038;d=PALL&#038;p=1&#038;u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&#038;r=1&#038;f=G&#038;l=50&#038;s1=8,977,630.PN.&#038;OS=PN/8,977,630&#038;RS=PN/8,977,630">Personalizing search results</a>, which was filed on September 16, 2013, and was granted on March 10, 2015.</p>
<p>A continuation patent keeps the filing date of the original patent, but its claims are rewritten to reflect how the patented process might have changed. </p>
<p>I like comparing the claims, since that is what usually changes in continuation patents. I noticed some significant changes from the older version to this newer version. </p>
<p>There is a lot more emphasis on &#8220;high quality&#8221; sites and &#8220;distrusted sites&#8221; in the new version of the patent, which can be seen in the first claim of the patent. It&#8217;s worth putting the old and the new first claim one after the other, and comparing the two.</p>
<h2>The Old First Claim</h2>
<blockquote><p>1. A method comprising: identifying, by at least one of one or more server devices, a first set of documents associated with a user, documents, in the first set of documents, being assigned weights that reflect a relative quantification of an interest of the user in the documents in the first set of documents; receiving, by at least one of the one or more server devices, a search query from a client device associated with the user; identifying, by at least one of the one or more server devices and based on the search query, a second set of documents, each document from the second set of documents having a respective score; determining, by at least one of the one or more server devices, that a particular document, from the second set of documents, matches or links to one of the documents in the first set of documents; adjusting, by at least one of the one or more server devices, the respective score of the particular document, to form an adjusted score, based on the weight assigned to the one of the documents in the first set of documents; forming, by at least one of the one or more server devices, a list of documents in which documents from the second set of documents are ranked based on the respective scores, the particular document being ranked in the list based on the adjusted score; and providing, by at least one of the one or more server devices, the list of documents to the client device.</p></blockquote>
<h2>The New First Claim</h2>
<p>This is newly granted this week:</p>
<blockquote><p>1. A method, comprising: determining, by at least one of one or more server devices, preferences of a user or a group of users, wherein the preferences indicate a document bias set and weights assigned to the documents, wherein the weights include <span style="background-color: #FFFF00">distrusted document weights</span>; determining, by the at least one of the one or more server devices, <span style="background-color: #FFFF00">a high quality document set</span> obtained from a document ranking algorithm; creating, by at least one of the one or more server devices, an intersection set of documents which includes documents in both the document bias set and the high quality document set; receiving, by at least one of the one or more server devices, a search query from the user; performing, by at least one of the one or more server devices, a search of a corpus of documents, based on the search query, to form a ranked set of search result documents; determining, by at least one of the one or more server devices, at least one link from the intersection set of documents to at least one document in the ranked set of search result documents, the at least one document not in the intersection set of documents; re-ranking, by at least one of the one or more server devices, the set of search result documents based on the preferences of the user or the group of users, wherein re-ranking the set of search results comprises: identifying a link of the set of links from the intersection set of documents to the document of the set of search result documents, and based on identifying the link, adjusting a rank of the search result document based on the weight assigned to the document in the document bias set from where the identified link originated from; and providing, by at least one of the one or more server devices, the re-ranked search results to the user.</p></blockquote>
<p>The changes I am seeing in these two different first claims involve what are being called &#8220;distrusted document weights&#8221; from a &#8220;document bias set&#8221;, and showing pages from &#8220;a high quality document set.&#8221; The newer claim makes it more clear that personalized results come from these two different sets of results. It&#8217;s possible that it doesn&#8217;t change how personalization actually works, but the increased clarity is good to see.</p>
<h2>The Purpose of these Personalizing Search Results Patents</h2>
<p>We are told that some sites are favored more than others and some are disliked more than others, and that these preferences are gathered from a query or browser history to generate a document bias set:</p>
<blockquote><p>FIG. 1 illustrates an overview of the re-ranking of search results based on a user&#8217;s or group&#8217;s document or site preferences. In accordance with this aspect of the invention, a document bias set F 105 may be generated that indicates the user&#8217;s or group&#8217;s preferred and/or disfavored documents. Bias set F 105 may be automatically collected from a query or browser history of a user. Bias set F 105 may also be generated by human compilation, or editing of an automatically generated set. Bias set F 105 may include a set of documents shared, or developed, by a group that may further include a community of users of common interest. Document bias set F 105 may include one or more designated documents (e.g., documents a, b, x, y and z) with associated weights (e.g. w.sup.a.sub.F, w.sup.b.sub.F, w.sup.x.sub.F, w.sup.y.sub.F and w.sup.z.sub.F). The weights may be assigned to each document (e.g., documents a, b, x, y and z) based on a user&#8217;s, or group&#8217;s, relative preferences among documents of bias set F 105. For example, bias set F 105 may include a user&#8217;s personal most-respected, or most-distrusted, document list, with the weights being assigned to each document in bias set F 105 based on a relative quantification of the user&#8217;s preference among each of the documents of the set.</p></blockquote>
<p>This document bias set mention appears in both the older, and the newer version of the patent.</p>
<p>The patents also both refer to a high quality document set, and that is described in a way that seems to place a lot of attention on PageRank or a Hubs and Authority approach to ranking:</p>
<blockquote><p>A high quality document set L 110 may be obtained from any existing document ranking algorithm. Such document ranking algorithms may include a link-based ranking algorithm, such as, for example, Google&#8217;s PageRank algorithm, or Kleinberg&#8217;s Hubs and Authorities ranking algorithm. The document ranking algorithm may provide a global ranking of document quality that may be used for ranking the results of searches performed by search engines. High quality document set L 110 may be derived from the highest-ranking documents in the web as ranked by an existing document ranking algorithm. In one implementation, for example, set L 110 may include the top percentage of the documents globally ranked by an existing document ranking algorithm (e.g., the highest ranked 20% of documents). In an implementation using PageRank, set L 110 may include documents having PageRank scores higher than a threshold value (e.g., documents with PageRank scores higher than 10,000,000). Set L 110 may include multiple documents (e.g., documents m, n, o, p, x, y and z) with associated weights (e.g., weights W.sup.m.sub.L, W.sup.n.sub.L, W.sup.o.sub.L, W.sup.p.sub.L, W.sup.x.sub.L, W.sup.y.sub.L and W.sup.Z.sub.L). The weights may be assigned to each document (e.g., documents m, n, o, p, x, y and z) based on a relative ranking of &#8220;quality&#8221; between the different documents of set L 110 produced by the document ranking algorithm.</p></blockquote>
<p>Personalized results served to a searcher are results that come from both the document bias set, and the high quality document set (as the patent says, from an &#8220;intersection&#8221; between the two sets).</p>
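<p>Put together, the re-ranking in the first claim might be sketched like this in Python. The set contents, the weights, and the simple additive score adjustment are illustrative assumptions on my part, not details from the patent:</p>

```python
# User or group preferences: a document bias set with weights, including
# negative weights for distrusted documents (all values invented).
bias_set = {"trusted.example": 2.0, "spam.example": -3.0}

# High quality set, e.g. documents above some PageRank threshold.
high_quality_set = {"trusted.example", "news.example"}

# Only documents in BOTH sets get to influence re-ranking.
intersection = {doc: w for doc, w in bias_set.items() if doc in high_quality_set}

# Each search result: (url, base score, documents that link to it).
results = [
    ("page-a.example", 1.0, {"trusted.example"}),
    ("page-b.example", 1.2, set()),
]

def rerank(results, intersection):
    reranked = []
    for url, score, linked_from in results:
        # Boost (or demote) a result by the weight of each intersection
        # document that links to it.
        for doc in linked_from & intersection.keys():
            score += intersection[doc]
        reranked.append((url, score))
    return sorted(reranked, key=lambda r: r[1], reverse=True)

print(rerank(results, intersection))  # page-a now outranks page-b
```

<p>Note that the distrusted weight on spam.example has no effect here, because that document is not in the high quality set; under this reading, only preferences about high quality documents ever reach the results.</p>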
<p>If you are interested in how personalized search may work at Google, spending some time with this new patent may provide some insights. Knowing about how two different sets of documents are involved in returning results is a good starting point.</p>
<p><em>Posted in Search Engine Optimization (SEO) by Bill Slawski on Wed, 16 Aug 2017.</em></p>
<h2>Learn SEO Through Forums</h2>
<p>The post <a rel="nofollow" href="http://www.seobythesea.com/2017/06/learn-seo-forums/">Learn SEO Through Forums</a> appeared first on <a rel="nofollow" href="http://www.seobythesea.com">SEO by the Sea &#9875;</a>.</p><p><img src="http://www.seobythesea.com/wp-content/DSC_0008-e1498860064439.jpg" alt="Solana Beach Farmer&#039;s Market" width="500" height="334" class="alignleft size-full wp-image-19241" /></p>
<p>I had someone who was reading my previous entries in my Learning SEO series ask about using forums to learn SEO. I promised that I would write a post about the value of forums in learning SEO. </p>
<p>Back in 1998 I became a moderator of a couple of forums on small business and website promotion on Yahoo Groups. Those led to me becoming a moderator at <a href="https://www.cre8asiteforums.com/forums/">Cre8asiteforums</a>, joining forum owner <a href="https://creativevisionwebconsulting.com/">Kim Krause Berg</a> along with a number of other moderators such as <a href="http://www.ammonjohns.com/">Ammon Johns</a> and <a href="http://www.highrankings.com/">Jill Whalen</a>. </p>
<p>Cre8asiteforums was (and still is) a tremendous place to talk about SEO, web design, usability, and accessibility. One of my favorite individual forums on the site was one called <a href="https://www.cre8asiteforums.com/forums/forum/4-free-web-site-hospital/">The Website Hospital</a>, where people would bring their site&#8217;s URL and concerns about it, and ask questions. That was where I learned a lot about auditing sites, seeing what worked well on them and what might need some help. This thread is a good introduction to it: <a href="https://www.cre8asiteforums.com/forums/topic/30472-getting-started-in-the-website-hospital/">Getting Started in the Website Hospital</a>. </p>
<p>Here&#8217;s a thread I started in November of 2005 that was an interesting read, on <a href="https://www.cre8asiteforums.com/forums/topic/45004-seo-myths/">SEO Myths</a>.</p>
<p><span id="more-19240"></span></p>
<p>Another forum that I have gotten a lot of value from over the years is one called <a href="https://www.webmasterworld.com/">WebmasterWorld</a>. Most of the members of this forum are practicing SEOs or site owners who enjoy sharing their experiences. It reminds me of a weather vane, in that people are often open with information about changes they experience in rankings and traffic to their sites. You can see changes taking place on the Web from what they write.</p>
<p>Another place that can be informative about how search works is the <a href="https://productforums.google.com/forum/#!forum/webmasters">Google Webmaster Help Forum</a>. If you experience problems with a site, it is often a good place to search to see if anyone else has experienced something similar &#8211; it is possible that someone has, and the answers they received may help you as well.</p>
<p>There are other forums on the Web that focus upon SEO and Search. I&#8217;ve included the ones that I am most familiar with. There were some others that I participated on, that aren&#8217;t very active anymore. It doesn&#8217;t hurt to start off as a lurker, and learn about the customs and culture of a forum before you start participating in it. You may find some that you enjoy participating in very much. </p>
<p>When I started going to conferences and events after being involved in forums for a few years, I finally had a chance to meet in real life many people whom I had only met in discussions at forums. It was nice getting a chance to do so.</p>
<p>You can learn a lot through forums.</p>
<p><em>Posted in Learning SEO by Bill Slawski on Fri, 30 Jun 2017.</em></p>
<h2>Google Patents Extracting Facts from the Web</h2>
<p>The post <a rel="nofollow" href="http://www.seobythesea.com/2017/06/google-patents-extracting-facts-web/">Google Patents Extracting Facts from the Web</a> appeared first on <a rel="nofollow" href="http://www.seobythesea.com">SEO by the Sea &#9875;</a>.</p><p><img src="http://www.seobythesea.com/wp-content/Porch-flower.jpg" alt="" width="500" height="500" class="alignleft size-full wp-image-19234" srcset="http://www.seobythesea.com/wp-content/Porch-flower.jpg 500w, http://www.seobythesea.com/wp-content/Porch-flower-150x150.jpg 150w, http://www.seobythesea.com/wp-content/Porch-flower-300x300.jpg 300w" sizes="(max-width: 500px) 100vw, 500px" /></p>
<p>When Google crawls the Web, it extracts facts from content on the pages it finds, as well as links on pages. How much information does it extract about facts on the Web? Microsoft showed off an object-based search about 10 years ago, in the paper <a href="http://www.ra.ethz.ch/CDstore/www2005/docs/p567.pdf">Object-Level Ranking: Bringing Order to Web Objects</a>.</p>
<p>The team from Microsoft Research Asia tells us in that paper:</p>
<blockquote><p>Existing Web search engines generally treat a whole Web page as the unit for retrieval and consuming. However, there are various kinds of objects embedded in the static Web pages or Web databases. Typical objects are products, people, papers, organizations, etc. We can imagine that if these objects can be extracted and integrated from the Web, powerful object-level search engines can be built to meet users’ information needs more precisely, especially for some specific domains.</p></blockquote>
<p><span id="more-19230"></span></p>
<p>This patent from Google focuses upon extracting factual information about entities on the Web. It&#8217;s an approach that goes beyond making the Web index that we know Google for because it collects more information that is related to each other. The patent tells us:</p>
<blockquote><p>Information extraction systems automatically extract structured information from unstructured or semi-structured documents. For example, some information extraction systems that exist extract facts from collections of electronic documents, with each fact identifying a subject entity, an attribute possessed by the entity, and the value of the attribute for the entity.</p></blockquote>
<p>I&#8217;m reminded of an early Google provisional patent that Sergey Brin came up with in the 1990s. I wrote about it in the post <a href="http://www.seobythesea.com/2014/09/google-first-semantic-search-invention-patented-1999/">Google’s First Semantic Search Invention was Patented in 1999</a>. That patent was titled <a href="http://www.seobythesea.com/2014/09/google-first-semantic-search-invention-patented-1999/">Extracting Patterns and Relations from Scattered Databases Such as the World Wide Web (pdf)</a> (skip ahead to the third page, where it becomes much more readable), and was published as a paper on the Stanford website. It describes Sergey Brin taking some facts about some books and searching for those books on the Web; once they are found, patterns about the locations of those books are gathered, and information about other books is collected as well. That approach sounds much like the one from this patent, granted the first week of this month: </p>
<blockquote><p>In general, one innovative aspect of the subject matter described in this specification can be embodied in methods that include the actions of obtaining a plurality of seed facts, wherein each seed fact identifies a subject entity, an attribute possessed by the subject entity, and an object, and wherein the object is an attribute value of the attribute possessed by the subject entity; generating a plurality of patterns from the seed facts, wherein each of the plurality of patterns is a dependency pattern generated from a dependency parse, wherein a dependency parse of a text portion corresponds to a directed graph of vertices and edges, wherein each vertex represents a token in the text portion and each edge represents a syntactic relationship between tokens represented by vertices connected by the edge, wherein each vertex is associated with the token represented by the vertex and a part of speech tag, and wherein a dependency pattern corresponds to a sub-graph of a dependency parse with one or more of the vertices in the sub-graph having a token associated with the vertex replaced by a variable; applying the patterns to documents in a collection of documents to extract a plurality of candidate additional facts from the collection of documents; and selecting one or more additional facts from the plurality of candidate additional facts.</p></blockquote>
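<p>The claim above is dense, but the pipeline it describes is simple to caricature: take a seed fact, turn the dependency parse of a sentence expressing it into a pattern by replacing the subject and object tokens with variables, then match that pattern against parses of new sentences. The toy matcher below is a minimal sketch of that idea, not Google&#8217;s implementation; the parse, the relation labels, and the matching rules are all simplified assumptions.</p>

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    head: int   # index of the head vertex
    child: int  # index of the dependent vertex
    rel: str    # syntactic relation, e.g. "nsubj"

# Toy dependency parse of "Paris is the capital of France":
# each vertex is a (token, part-of-speech) pair, per the claim.
tokens = [("Paris", "PROPN"), ("is", "VERB"), ("the", "DET"),
          ("capital", "NOUN"), ("of", "ADP"), ("France", "PROPN")]
edges = [Edge(3, 0, "nsubj"), Edge(3, 1, "cop"), Edge(3, 2, "det"),
         Edge(3, 5, "nmod"), Edge(5, 4, "case")]

def pattern_from_seed(subj, obj):
    """Replace the seed subject/object tokens with variables,
    keeping the attribute token ("capital") and the edges fixed."""
    def slot(tok, pos):
        if tok == obj:
            return ("?object", pos)
        if tok == subj:
            return ("?subject", pos)
        return (tok, pos)
    return [slot(t, p) for t, p in tokens], edges

def apply_pattern(pattern, parse_tokens, parse_edges):
    """Match a pattern against a new parse; return variable bindings
    if every fixed token, POS tag, and edge lines up, else None."""
    pat_tokens, pat_edges = pattern
    if len(pat_tokens) != len(parse_tokens) or pat_edges != parse_edges:
        return None
    bindings = {}
    for (pt, ppos), (t, pos) in zip(pat_tokens, parse_tokens):
        if ppos != pos:
            return None
        if pt.startswith("?"):
            bindings[pt] = t
        elif pt != t:
            return None
    return bindings

# Learn a pattern from the seed fact (France, capital, Paris) ...
pattern = pattern_from_seed("France", "Paris")

# ... and apply it to a parse of "Rome is the capital of Italy",
# yielding the candidate additional fact (Italy, capital, Rome).
new_tokens = [("Rome", "PROPN"), ("is", "VERB"), ("the", "DET"),
              ("capital", "NOUN"), ("of", "ADP"), ("Italy", "PROPN")]
print(apply_pattern(pattern, new_tokens, edges))
# → {'?object': 'Rome', '?subject': 'Italy'}
```

<p>A real system would match the pattern as a sub-graph anywhere inside a larger parse rather than requiring the whole sentence to line up, but the variable-binding step is the same.</p>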
<p>The patent breaks the process it describes into a number of &#8220;Advantages&#8221; that are worth keeping in mind, because they sound a lot like the way people describing the Semantic Web talk about a web of data. These are the advantages the patent lists: </p>
<blockquote><p>(1) A fact extraction system can accurately extract facts, i.e., (subject, attribute, object) triples, from a collection of electronic documents to identify values of attributes, i.e., &#8220;objects&#8221; in the extracted triples, that are not known to the fact extraction system.</p></blockquote>
<blockquote><p>(2) In particular, values of long-tail attributes that appear infrequently in the collection of electronic documents relative to other, more frequently occurring attributes can be accurately extracted from the collection. For example, given a set of attributes for which values are to be extracted from the collection, the attributes in the set can be ordered by the number of occurrences of each of the attributes in the collection and the fact extraction system can accurately extract attribute values for the long-tail attributes in the set, with the long-tail attributes being the attributes that are ranked below N in the order, where N is chosen such that the total number of appearances of attributes ranked N and above in the ranking equals the total number of appearances of attributes ranked below N in the ranking.</p></blockquote>
<blockquote><p>(3) Additionally, the fact extraction system can accurately extract facts to identify values of nominal attributes, i.e., attributes that are expressed as nouns.</p></blockquote>
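<p>The rule in advantage (2) for choosing N can be illustrated with a few invented attribute counts. The patent asks for the head total to equal the tail total; since exact equality is rarely possible with real counts, this sketch takes N as the first rank where the head covers at least half of all appearances, which is an assumption on my part.</p>

```python
# Invented occurrence counts for a handful of attributes.
attribute_counts = {
    "population": 90, "capital": 60, "ceo": 30,
    "chief economist": 10, "patron saint": 6, "state fossil": 4,
}

def head_tail_split(counts):
    """Return (N, long_tail_attributes): order attributes by frequency,
    and cut where the head first covers at least half of all appearances."""
    ordered = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(c for _, c in ordered)
    running = 0
    for n, (_, c) in enumerate(ordered, start=1):
        running += c
        if running >= total / 2:
            # attributes ranked below n are the "long tail"
            return n, [a for a, _ in ordered[n:]]
    return 0, []  # empty input

print(head_tail_split(attribute_counts))
# → (2, ['ceo', 'chief economist', 'patron saint', 'state fossil'])
```

<p>Here the two head attributes account for 150 of the 200 total appearances, so everything ranked below N=2 counts as long tail; those are the attributes the patent claims the system can still extract accurately.</p>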
<p>The patent is:</p>
<p><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&#038;Sect2=HITOFF&#038;d=PALL&#038;p=1&#038;u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&#038;r=1&#038;f=G&#038;l=50&#038;s1=9,672,251.PN.&#038;OS=PN/9,672,251&#038;RS=PN/9,672,251">Extracting facts from documents</a><br />
Inventors: Steven Euijong Whang, Rahul Gupta, Alon Yitzchak Halevy, and Mohamed Yahya<br />
Assignee: Google Inc.<br />
US Patent: 9,672,251<br />
Granted: June 6, 2017<br />
Filed: September 29, 2014</p>
<p>Abstract</p>
<blockquote><p>Methods, systems, and apparatus, including computer programs encoded on computer storage media, for extracting facts from a collection of documents. One of the methods includes obtaining a plurality of seed facts; generating a plurality of patterns from the seed facts, wherein each of the plurality of patterns is a dependency pattern generated from a dependency parse; applying the patterns to documents in a collection of documents to extract a plurality of candidate additional facts from the collection of documents; and selecting one or more additional facts from the plurality of candidate additional facts.</p></blockquote>
<p>The patent contains a list of &#8220;other references&#8221; that were cited by the applicants. These are worth spending some time with because they contain a lot of hints about the direction that Google appears to be moving towards.</p>
<ul>
<li>Finkel et al., <a href="https://nlp.stanford.edu/manning/papers/gibbscrf3.pdf">Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling</a>, In Proceedings of the 43rd Annual Meeting of the ACL, Ann Arbor, Michigan, USA, Jun. 2005, pp. 363-370.</li>
<li>Gupta et al., <a href="https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/41894.pdf">Biperpedia: An Ontology for Search Applications</a>, In Proceedings of the VLDB Endowment, 2014, pp. 505-516.</li>
<li>Haghighi and Klein, <a href="http://www.aclweb.org/anthology/D09-1120">Simple Coreference Resolution with Rich Syntactic and Semantic Features</a>, In Proceedings of Empirical Methods in Natural Language Processing, Singapore, Aug. 6-7, 2009, pp. 1152-1161.</li>
<li>Madnani and Dorr, <a href="http://www.mitpressjournals.org/doi/pdf/10.1162/coli_a_00002">Generating Phrasal and Sentential Paraphrases: A Survey of Data-Driven Methods</a>, In Computational Linguistics, 2010, 36(3):341-387.</li>
<li>de Marneffe et al., <a href="https://nlp.stanford.edu/pubs/LREC06_dependencies.pdf">Generating Typed Dependency Parses from Phrase Structure Parses</a>, In Proceedings of Language Resources and Evaluation, 2006, pp. 449-454.</li>
<li>Mausam et al., <a href="https://homes.cs.washington.edu/~mausam/papers/emnlp12a.pdf">Open Language Learning for Information Extraction</a>, In Proceedings of Empirical Methods in Natural Language Processing, 2012, 12 pages.</li>
<li>Mikolov et al., <a href="https://arxiv.org/pdf/1301.3781.pdf">Efficient Estimation of Word Representations in Vector Space</a>, International Conference on Learning Representations (ICLR), Scottsdale, Arizona, USA, 2013, 12 pages.</li>
<li>Mintz et al., <a href="https://www.aclweb.org/anthology/P09-1113">Distant Supervision for Relation Extraction Without Labeled Data</a>, In Proceedings of the Association for Computational Linguistics, 2009, 9 pages.</li>
</ul>
<p>The patent tells us that entities identified by this extraction process may be stored in an entity database, and it points at the old Freebase site (which Google used to run).</p>
<p>They give us some insights into how the information extracted from the Web might be used by Google in a <a href="http://www.seobythesea.com/2014/09/googles-browseable-fact-repository-early-knowledge-graph/">fact repository</a> (which is the term they used to refer to an early version of their knowledge graph):</p>
<blockquote><p>Once extracted, the fact extraction system may store the extracted facts in a facts repository or provide the facts for use for some other purpose. In some cases, the extracted facts may be used by an Internet search engine in providing formatted answers in response to search queries that have been classified as seeking to determine the value of an attribute possessed by a particular entity. For example, a received search query &#8220;who is the chief economist of example organization?&#8221; may be classified by the search engine as seeking to determine the value of the &#8220;Chief Economist&#8221; attribute for the entity &#8220;Example Organization.&#8221; By accessing the fact repository, the search engine may identify that the fact repository includes a (Example Organization, Chief Economist, Example Economist) triple and, in response to the search query, can provide a formatted presentation that identifies &#8220;Example Economist&#8221; as the &#8220;Chief Economist&#8221; of the entity &#8220;Example Organization.&#8221;</p></blockquote>
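<p>Once a query has been classified as asking for the value of an attribute of an entity, the lookup described above amounts to retrieving a triple by its (subject, attribute) key. A minimal sketch, reusing the patent&#8217;s own example names as placeholders:</p>

```python
# A tiny stand-in for the fact repository: extracted
# (subject, attribute, object) triples keyed by (subject, attribute).
facts = {
    ("Example Organization", "Chief Economist"): "Example Economist",
    ("France", "Capital"): "Paris",
}

def answer(subject, attribute):
    """Answer an attribute-seeking query from the repository,
    or return None when no matching triple has been extracted."""
    value = facts.get((subject, attribute))
    if value is None:
        return None
    return f"{value} is the {attribute} of {subject}."

print(answer("Example Organization", "Chief Economist"))
# → Example Economist is the Chief Economist of Example Organization.
```

<p>Classifying the query and mapping it onto the right entity and attribute names is of course the hard part; the repository lookup itself is this simple.</p>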
<p>The patent tells us about how they use patterns to identify additional facts:</p>
<blockquote><p>The system selects additional facts from among the candidate additional facts based on the scores (step 212). For example, the system can select each candidate additional fact having a score above a threshold value as an additional fact. As another example, the system can select a predetermined number of highest-scoring candidate additional facts as additional facts. The system can store the selected additional facts in a fact repository, e.g., the fact repository of FIG. 1, or provide the selected additional facts to an external system for use for some immediate purpose.</p></blockquote>
<p>The patent also describes the process that might be followed to score candidate additional facts. </p>
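<p>The two selection rules quoted above (a score threshold, or a fixed number of top-scoring candidates) can be sketched as follows. The candidate facts and scores here are invented placeholders; the patent&#8217;s actual scoring process is not reproduced.</p>

```python
# Candidate additional facts paired with hypothetical confidence scores.
candidates = [
    (("Italy", "capital", "Rome"), 0.93),
    (("Italy", "capital", "Milan"), 0.41),
    (("Norway", "capital", "Oslo"), 0.88),
]

def select_by_threshold(cands, threshold):
    """Keep every candidate whose score clears the threshold."""
    return [fact for fact, score in cands if score > threshold]

def select_top_k(cands, k):
    """Keep the k highest-scoring candidates."""
    ranked = sorted(cands, key=lambda fs: fs[1], reverse=True)
    return [fact for fact, _ in ranked[:k]]

print(select_by_threshold(candidates, 0.5))
# → [('Italy', 'capital', 'Rome'), ('Norway', 'capital', 'Oslo')]
print(select_top_k(candidates, 1))
# → [('Italy', 'capital', 'Rome')]
```

<p>Either way, the surviving facts would then be written to the fact repository or handed to another system, as the quoted passage describes.</p>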
<p>This fact extraction process does appear to be aimed at building a repository capable of answering a lot of questions, using a machine learning approach and the kind of semantic vectors that the Google Brain team may have used to develop Google&#8217;s RankBrain approach.</p>
<hr/><p>The post <a rel="nofollow" href="http://www.seobythesea.com/2017/06/google-patents-extracting-facts-web/">Google Patents Extracting Facts from the Web</a> appeared first on <a rel="nofollow" href="http://www.seobythesea.com">SEO by the Sea &#9875;</a>.</p>
<p>Filed under Semantic Search by Bill Slawski, Mon, 19 Jun 2017.</p>