Strange Encounters of the SEO Kind

We all have to deal with common SEO mistakes, such as duplicate meta tags, or a robots.txt file moved from a staging server to the production site that blocks the entire site. But from time to time in our daily work, we also see some “Strange Encounters of the SEO Kind.”

A panel at SES Chicago 2011 yesterday looked specifically at some extraordinary SEO situations that I and the other panelists – Angie Schottmuller and Bob Tripathi (with Chris Boggs moderating) – had encountered, how we figured out what the heck was going on, and how we fixed it.

One case study I talked about involved a client who owned a shopping site. The client had been monitoring their indexed pages and saw a sudden, huge drop in indexed pages – in Google only.

To figure out what happened, I found myself using the following six-step investigation process.

Step 1: Identify the Problem

Identify the problem as granularly as possible, so you know the right questions to ask. The problem could be anything from an unexpected drop in traffic and/or rankings, to an unexpected increase in traffic and/or rankings, to a seemingly random selection of pages not appearing in the index.

Upon examination, it looked as though most of the product pages had vanished from Google, but were still indexed in Bing and Yahoo. Now that we knew the problem, we could start to investigate.

Step 2: Rule Out Basic Causes First

Make sure that common SEO mistakes aren’t present. In our example case, the robots.txt was clean: it allowed what it should allow and blocked what it should block, and there was nothing specifically targeting Google’s crawler, Googlebot.
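As a quick sanity check, you can feed a robots.txt body into Python’s standard-library robot parser and confirm what it actually allows and blocks. The file contents below are hypothetical examples, not the client’s actual configuration:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical example: the kind of staging-server robots.txt that
# blocks the entire site if it accidentally reaches production.
STAGING_ROBOTS = """\
User-agent: *
Disallow: /
"""

# Hypothetical example of a sane production file that only blocks checkout.
PRODUCTION_ROBOTS = """\
User-agent: *
Disallow: /checkout/
"""

def is_page_crawlable(robots_txt: str, user_agent: str, path: str) -> bool:
    """Parse a robots.txt body and report whether user_agent may fetch path."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, path)

# The staging file blocks everything; the production file allows product pages.
print(is_page_crawlable(STAGING_ROBOTS, "Googlebot", "/products/widget"))     # False
print(is_page_crawlable(PRODUCTION_ROBOTS, "Googlebot", "/products/widget"))  # True
```

Running a check like this against the live file is a fast way to rule out the classic staging-to-production robots.txt slip before digging any deeper.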

The source code was examined to see if a noindex tag had been added. It hadn’t.
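A small script can scan the source for that tag, so the check doesn’t rely on eyeballing the markup. This is just a sketch using a regular expression; a production check would use a proper HTML parser and would also look at the X-Robots-Tag response header:

```python
import re

def has_noindex(html: str) -> bool:
    """Return True if the page source contains a robots noindex meta tag.

    Simplified: assumes the name attribute comes before content, which is
    the common case but not guaranteed in real-world markup.
    """
    pattern = re.compile(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\'][^"\']*noindex',
        re.IGNORECASE,
    )
    return bool(pattern.search(html))

clean = '<html><head><title>Product</title></head><body>...</body></html>'
blocked = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
print(has_noindex(clean))    # False
print(has_noindex(blocked))  # True
```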

The team looked to ensure that the missing pages were linked on the site, and were listed in a valid sitemap. Again, they all looked good.

Step 3: Look at Your Data

See if your data explains the issue. Has there been a huge shift in traffic from month A to month B?

Compare the search engines to see if it’s across the board or just one specific engine. Then compare the keywords, to see what’s changed there.

Step 4: Use All Available Tools

We started by going into Google Webmaster Tools to see if Google was reporting any particular types of errors for the product pages. Again, no issues detected in our case study.

The team then looked more deeply at the product pages themselves. The content was being ingested from a third party, so perhaps there was some kind of issue with the feed. What if the vendor was pushing the content in an iframe, which would keep the crawlers from associating the content with the page? They weren’t.

Had they pushed a cross-domain canonical tag in the code to point to their own pages, thus preventing the client’s pages from being indexed? No, they hadn’t.

The feeds were being used by other shopping sites, so maybe the site was getting hit by a duplicate content filter? That was a possibility, but the pages would then most likely still be in the index, just not ranking.

After several days of looking at this issue without coming any closer to solving it, one of the team used the Fetch as Googlebot tool in Google Webmaster Tools – and suddenly a noindex could be seen in the code. The third-party vendor’s feed was inserting a noindex when the requestor was Googlebot. It wasn’t intentional; it was just some testing code that had been forgotten about and pushed to the live site. But it cost the client a good few weeks of potential revenue, as it took time to get the site re-indexed and ranking after the code fix was published.
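In hindsight, the bug boiled down to user-agent-conditional markup. Below is a minimal Python simulation of it – all names here are hypothetical, since the vendor’s actual feed code was never shared – along with the kind of comparison that Fetch as Googlebot effectively performed for us:

```python
PRODUCT_TEMPLATE = (
    "<html><head>{meta}<title>Widget</title></head>"
    "<body>Widget details</body></html>"
)

def render_product_page(user_agent: str) -> str:
    """Buggy handler: a forgotten test branch adds noindex for Googlebot only."""
    meta = ""
    if "Googlebot" in user_agent:  # the leftover testing condition
        meta = '<meta name="robots" content="noindex">'
    return PRODUCT_TEMPLATE.format(meta=meta)

def detect_cloaked_noindex(render) -> bool:
    """Render the page as a browser and as Googlebot, and flag any noindex
    that appears only in the Googlebot version of the response."""
    as_browser = render("Mozilla/5.0")
    as_googlebot = render("Googlebot/2.1")
    return "noindex" in as_googlebot and "noindex" not in as_browser

print(detect_cloaked_noindex(render_product_page))  # True
```

The general lesson: when a problem is confined to one search engine, fetch the page as that engine’s crawler early on, because the response you see in a browser may not be the response the crawler sees.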

D’oh. And yes, that probably should have been one of the first tools used, given that the problem was centered on that single search engine, but at least it was eventually run.

Step 5: Ask Others for Opinions

An extra set of eyes may be all you need. The other panelists and I have run up against many other strange instances that we’ve had to root around in to find answers to, and in some cases instances that we’ve never been able to figure out.

In those cases where you’ve come up with a variety of potential answers, it’s sometimes a good idea to reach out to the SEO community and get that extra pair of eyes on the problem. Sometimes all it takes is explaining it to someone else for you to figure it out yourself.

Step 6: Ask the Search Engines

If all else fails, in rare cases we’ve had to reach out to the search engines themselves to ask for assistance. Sometimes they answer, sometimes their answer is helpful – and on at least one occasion I’ve had a search engine admit that there was an error in their algorithm that was preventing a client site from ranking in their organic search, even though those same pages were ranking in Google News.

The truth (and the solution) is out there. You just have to know where to look.

About the author

Simon Heseltine is the Director in charge of Search Engine Optimization (SEO) at AOL Inc. In this role, Simon and his team are responsible for organic search and training across all AOL and Huffington Post Media Group properties. He has also visited the UK office on several occasions to consult and train the AOL-UK teams on SEO best practices.

In his previous position as Director of Search at a Washington, D.C., based agency, Simon was responsible for developing and implementing organic search and social media strategies for companies across several industries. He also developed and delivered training programs for clients, including a large U.S. media company, to enable them to best take advantage of the opportunities available to their companies through both organic search and social media.

Simon is a frequent speaker at conferences in the U.S. and UK on topics ranging from SEO to social search to reputation management. He also teaches SEO at Georgetown University in Washington, D.C., as part of their Digital Media Management program.

Simon has a BA (Hons) from the University of Humberside and a Master’s in IT from Virginia Tech.