6-week old site disappeared from SERPs around March 1

Junior Member

joined:Sept 14, 2012
posts:66
votes: 0

I started a site in mid-January (domain registered around January 12, site indexed around January 15) and since then, I've been creating a lot of content on it and doing some very aggressive link building (white hat only). The main keyword I was targeting gets more than 300,000 exact searches/month. In late February, the site's homepage was ranking on page 6 for that keyword, but around March 1, it completely disappeared from the SERPs (not in the top 100 or top 1000 or anywhere at all). One of the site's subpages, though, is ranking on page 43.

Is my site in the infamous sandbox, implying some sort of a domain age "penalty" and should I just wait it out while continuing to build links? Or is there something else I should be worried about?

Moderator This Forum from US

joined:Nov 11, 2000
posts:12016
votes: 323

Sounds offhand like you built links to the homepage in ways that didn't appear natural for Google.

It's possible also that the anchor text didn't exhibit natural variation, that the rate of growth was too fast, and that the sources may have been in some way related, either to each other or to you. Perhaps Google didn't see enough traffic to the site to warrant the links. Whatever the case, I'm assuming that Google didn't see these as freely given, relevant, editorial links.

That's a quick guess, without knowing anything else about the situation.

Senior Member

Right, so from now on make sure you carry that philosophy offline too. Do not advertise your business in directories like the YellowPages or in trade publications. Just sit around in your empty warehouse and hope that some newspaper takes an interest in your business and writes an article about it? What's wrong with that picture?

Preferred Member

joined:Feb 18, 2013
posts:552
votes: 0

"The anchor text of most of the links was the name of the site/domain - don't see how that's unnatural."

When the proportion of links using the site/domain name as the anchor is out of line with the mix of total links and varied anchor text expected across the whole of the links to a site in the same niche, it could easily be considered unnatural, in my opinion.

In other words, if you have 95% of inbound links with the site/domain name as the anchor and most in your niche have 58.3% with 41.7% being varied, yours could be unnatural (not like the others).
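As a rough illustration of that kind of check (the domain name, figures, and 15% tolerance below are all invented, not anything Google has confirmed), here is how one might compare a branded-anchor share against a niche baseline:

```python
from collections import Counter

def anchor_share(anchors, brand_terms):
    """Fraction of backlinks whose anchor text is the brand/domain name."""
    if not anchors:
        return 0.0
    branded = sum(1 for a in anchors if a.lower().strip() in brand_terms)
    return branded / len(anchors)

def looks_unnatural(your_share, niche_share, tolerance=0.15):
    """Flag a profile whose branded-anchor share deviates from the
    niche average by more than `tolerance` (an arbitrary threshold)."""
    return abs(your_share - niche_share) > tolerance

# Hypothetical backlink export: one anchor text per inbound link
anchors = ["example.com", "example.com", "Example", "best widgets", "example.com"]
share = anchor_share(anchors, {"example.com", "example"})
print(round(share, 2))                # 0.8
print(looks_unnatural(share, 0.583))  # True
```

Whether Google computes anything like this is pure speculation; the point is only that the comparison is trivial to automate once you have the link data.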

"Matt Cutts has said that the rate of link acquisition/link velocity is largely irrelevant to rankings."

Interesting interpretation. I thought he says to get as many links as you can via natural linking, which, in my opinion, would include naturally varied anchor text rather than linking with the site/domain name at a higher-than-expected rate relative to the whole. I'd also think it includes an expected rate of "new link finding" relative to the number of pages spidered over time, etc. But maybe I'm missing something?

"Google doesn't consider traffic/analytics data for rankings."

How about click-through from the results? Or click-through, click back (or search again), then a click on another result, meaning the search did not end on your site/page (iow, your site was not the right answer)?

Is that addressed in the video cited? If it was, I missed it, so please point out where, because I'm curious how they would not use click-through, click back (or search again), click on a different result as a factor indicating whether your site/page was "the right answer" to a query or not.

Junior Member

"You also need to consider that when Google finds new content and deems it news worthy that it will rank it higher for a week or so and then you will start to see it slide."

That's only for QDF keywords. Some keywords are newsworthy, some are not (http://www.youtube.com/watch?v=o4hH4ZQ_19k). I don't think this particular keyword would be considered QDF.

"When the site/domain name linked is not in a proportional relative to the number of links and varied anchor text expected out of the whole of links to a site in the same niche it could easily be considered unnatural in my opinion.

In other words, if you have 95% of inbound links with the site/domain name as the anchor and most in your niche have 58.3% with 41.7% being varied, yours could be unnatural (not like the others)."

I don't think most websites in any niche have any common or similar anchor text variation. Anchor text variation differs from site to site, not niche to niche, IMHO. The site that ranks #1 for the keyword is an exact match domain that has about 90% of inbound links with the site/domain name/keyword as the anchor (according to ahrefs), which is much more than mine, which has about 75% of inbound links with the site/domain name as the anchor text - and mine isn't an exact match domain.

"which would I think include an expected rate of "new link finding" relative to number of pages spidered over time, etc. but maybe I'm missing something?"

I don't think it's logical to say that there is a correlation between the number of pages spidered over time and the number of links found. I could find lots of counterexamples to this.

"How about click-through from the results or click-through, click back (or search again), click on another result, meaning the search did not end on your site/page (iow your site was not the right answer)?"

Since my site is new and hasn't gotten any traffic from Google yet (it was ranked on page 6 before, which nobody goes to), this data wouldn't be available for my site yet, or for any other new site.

"I'm curious how they would not use click-through, click back (or search again), click on a different result as a factor to indicate whether your site/page was "the right answer" to a query or not."

There could be lots of reasons why people click back, search again, or click on another result. Many queries don't have just one simple answer that users are looking for. What if you're searching for reviews or opinions about something? You would probably click on multiple search results to get as many opinions as possible. And that wouldn't mean that those sites you click back on are low quality.

Preferred Member

joined:Feb 18, 2013
posts:552
votes: 0

should I just wait it out while continuing to build links? Or is there something else I should be worried about?

Personally, I would be most worried about asking questions and then arguing with every single answer given, especially considering the years of experience of those you're arguing with, even more so when those years of experience are combined.

Some of us have a wee bit more than our username may indicate ;)

What if you're searching for reviews or opinions about something? You would probably click on multiple search results to get as many opinions as possible.

So you would click through, click back (or search again), and click again at a similar rate across a number of sites, which would mean there's no distinct variance between the rate of one and another, correct?

Looking at things as narrowly as a "singular answer" fitting a whole range of situations, with respect to an algo that's been developed for 15 years, can in my opinion lead to some very narrow and faulty conclusions.

Preferred Member

If I had 15 years to develop an algo by myself, I think I could account for that, even without teams of programmers. How?

Through personalization (of results, of course, which means I must track users and their behavior). If I know user N1524389 usually clicks on an average of 5.3 results within N seconds, and for some reason on query Y they only clicked on 4 results within N seconds, I can determine there's a variance.

Then I can look for a "confirmation" among the "normal users" who do not open multiple tabs (i.e., who click on N results within a relatively low N of seconds, consistently, over a large number of result sets). From there, it's likely I could determine which of the unclicked results user N1524389 deemed "least important" within the result set. Say they click 1,2,3,5,6 or 1,2,3,5,7 and 1,2,3,4 on average, but this time they clicked 1,3,4,5 and skipped 2, which is a usual click for them. Based on the average result number clicked, click time, and number of clicks made within a result set by user N1524389, I could likely determine what the "variant non-click" was.

I could likely especially do so if I compared those clicks to clicks of "normal users" who would have a different click or time between click pattern.

I could even throw user N1524389 out of the "lack of click scoring" and just use their clicks on the 4 results they visited as a positive if I needed to.

And of course, I would browser sniff and associate by IP/location, so clearing those cookies without changing IP, browser, and possibly even location via IP would do no good, because I could associate a user with a query based on other variables even without the cookies that make it easier and more reliable.
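To make the idea concrete, here is a toy sketch of that "variant non-click" detection. The majority threshold and the session history are my own assumptions for illustration, not a description of anything Google actually does:

```python
from collections import Counter

def variant_non_clicks(usual_sessions, current_clicks):
    """Return result positions this user clicks in most past sessions
    but skipped in the current one (the "variant non-click")."""
    counts = Counter(pos for session in usual_sessions for pos in session)
    n = len(usual_sessions)
    # Treat a position as "usual" if it was clicked in over half of
    # the user's past sessions (an arbitrary majority threshold).
    usual = {pos for pos, c in counts.items() if c / n > 0.5}
    return sorted(usual - set(current_clicks))

# The example from the post: the user usually clicks 1,2,3,5,6 or
# 1,2,3,5,7 or 1,2,3,4 - this time they clicked 1,3,4,5 and skipped 2.
history = [[1, 2, 3, 5, 6], [1, 2, 3, 5, 7], [1, 2, 3, 4]]
print(variant_non_clicks(history, [1, 3, 4, 5]))  # [2]
```

A real system would presumably weight by recency, query similarity, and the "normal user" comparison described above, but even this naive version isolates result 2 as the anomaly in the post's example.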

Preferred Member

The previous two posts I made on how I could/would do things if I had time took me about an hour combined to revise and edit. They were off the top of my head.

Imagine what I could come up with and be able to do if I had a year to "tackle an issue" as one person. Then consider that they've had teams of people who are far better educated and more experienced than me working on these issues and answering the questions presented within this thread.

They're way farther "into things" than I think most of us give them credit for.

Senior Member from US

joined:Apr 9, 2011
posts:14499
votes: 587

Through personalization (of results, of course, which means I must track users and their behavior). If I know user N1524389 usually clicks on an average of 5.3 results within N seconds, and for some reason on query Y they only clicked on 4 results within N seconds, I can determine there's a variance.

I don't see why you need to do any of this. The only real variable is whether a user opens more than one page without reloading the search-results page. And you know when the results page has been reloaded-- even if the browser doesn't put in a fresh request-- because your own analytics will tell you ;)

Now, about the OP:

around March 1, it completely disappeared from the SERPs (not in the top 100 or top 1000 or anywhere at all)

You forgot to answer the inevitable first question: Does the page in question exist in the index at all? Either do a site: search or search for a unique text string. Answers that are obvious to you are not obvious to everyone else.

You don't see a lot of single pages being de-indexed-- usually it's a whole site being stomped-- but it's best to collect all possible information.

The other thing you forgot to say is what information you're getting from wmt. Most of it you can just ignore at this point. In particular: Do not look at the keyword list yet. This is generated strictly for wmt, so it may take as much as several months for it to become accurate. That's counting from when the site was added to wmt, not from when it was created.

But do look for anything red-flaggish like "couldn't load robots.txt" or huge numbers of non-200 responses. This early in the site's life, there shouldn't be any redirects or 404s, except the purely mechanical /index.html and domain-name canonicalization redirects. Search engines will always ask for the wrong name now and then, just to test you. (Bing more than Google, for some reason. And MJ12 loves to leave off directory slashes. And so on.)
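If you'd rather check for those red flags in your raw access log than wait on wmt, a quick pass like this works. The log format assumed here is the common combined format, and the sample lines and paths are invented:

```python
import re
from collections import Counter

LINE = re.compile(r'"(?:GET|HEAD|POST) (?P<path>\S+) [^"]*" (?P<status>\d{3})')

def status_summary(log_lines):
    """Count non-200 responses per (status, path) from combined-format log lines."""
    hits = Counter()
    for line in log_lines:
        m = LINE.search(line)
        if m and m.group("status") != "200":
            hits[(m.group("status"), m.group("path"))] += 1
    return hits

sample = [
    '1.2.3.4 - - [01/Mar/2013:10:00:00] "GET /robots.txt HTTP/1.1" 200 310',
    '1.2.3.4 - - [01/Mar/2013:10:00:01] "GET /old-post HTTP/1.1" 404 512',
    '5.6.7.8 - - [01/Mar/2013:10:00:02] "GET /old-post HTTP/1.1" 404 512',
]
print(status_summary(sample))  # Counter({('404', '/old-post'): 2})
```

A cluster of 404s on one path, or any failed robots.txt fetch, is exactly the kind of thing worth chasing before blaming a penalty.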

Junior Member

joined:Sept 14, 2012
posts:66
votes: 0

"Personally, I would be most worried about asking questions then arguing with every single answer given, especially with the number of years experience those you're arguing with have, even more so when those years of experience are combined."

I just think that one can never be too sure about Google's ranking factors just because they might have seen correlation in their data. After all, correlation is not causation - this is especially true in SEO and unfortunately, it's something most SEOs forget.

"So you would click through, click back (or search again), and click again at a similar rate across a number of sites, which would mean there's no distinct variance between the rate of one and another, correct?"

Not necessarily. What if some pages have more/longer content than others? It would take a longer time to read them but that wouldn't necessarily indicate higher quality. I just don't think one can extract conclusive data about the quality of a website(s) from SERP click data.

Preferred Member

joined:Feb 18, 2013
posts:552
votes: 0

The only real variable is whether a user opens more than one page without reloading the search-results page.

Not really. If the user opens Page A and clicks on Page B 1 second later when their average click time between two results is 5 seconds it tells you something different than if they open Page A and don't click on Page B for 17 seconds when their average click time between results is 8 seconds.

Sure, you'll miss some when the phone rings or something on a specific result set, but if there's a difference between the click time for a specific result and the average click time between results exhibited by a given user, it definitely tells you something.

And without getting into too much detail, the click time between results could indicate a better or worse answer, depending on the click time behavior of other visitors to the result set and the specific visitor within other result sets.

For example, say the click time is higher between results for a specific result, but 90% of the time the search does not end there, while it ends on 3 other results an average of 20% of the time. That can tell you the "longer click time" is due to some "non-informative distraction" on the site with the longer visit time, making it a worse result. But if the result with the longer visit time has a higher-than-average "search ends" percentage than the rest - say it's 35% and the next closest average is 20% for 3 other results - it can tell you the result is generally better than the rest.

Click time between results, averaged across users, is in my opinion a very important metric, and it's really tough to fake.
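For illustration only, a toy aggregation of those two signals (dwell time before the next action, and whether the search ended on the result) might look like the sketch below. The field names and sample numbers are invented, and a real system would obviously use far more data points:

```python
def score_result(clicks):
    """Aggregate toy signals for one result across many sessions.

    clicks: list of (seconds_before_next_action, search_ended) tuples,
    one tuple per session in which the result was clicked.
    """
    if not clicks:
        return None
    avg_dwell = sum(t for t, _ in clicks) / len(clicks)
    end_rate = sum(1 for _, ended in clicks if ended) / len(clicks)
    return {"avg_dwell": avg_dwell, "end_rate": end_rate}

# Long dwell but the search rarely ends here: the "non-informative
# distraction" case described above.
distraction = score_result([(30, False), (35, False), (28, True)])
# Long dwell and the search usually ends here: plausibly a good answer.
good = score_result([(30, True), (25, True), (40, False)])
print(good["end_rate"] > distraction["end_rate"])  # True
```

The point of the example: dwell time alone is ambiguous, but combined with search-end rate the two cases separate cleanly.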

Junior Member

joined:Sept 14, 2012
posts:66
votes: 0

@gyppo: Thanks, that's what I was hoping to hear. This has actually happened to one of my other sites too but it came back after 1 week. The site mentioned in my original post though, has been out for almost 3 weeks now.

Preferred Member

joined:Feb 18, 2013
posts:552
votes: 0

Not necessarily. What if some pages have more/longer content than others? It would take a longer time to read them but that wouldn't necessarily indicate higher quality. I just don't think one can extract conclusive data about the quality of a website(s) from SERP click data.

Sure you can, when you base it on averages and multiple data points, including per-user averages (more accurately, what I said initially, which is on a per-user basis) and search-ending percentage. But I'm done trying to explain it, because you heard what you wanted to hear from one person, so your question is answered.

Junior Member

"Sure you can when you base it on averages and multiple data points, including a per user basis and search ending percentage"

But there will always be outliers; there are tons of high quality sites on the internet and not all of them can possibly conform to similar SERP click data - same goes for low quality websites.

"Does the page in question exist in the index at all?"

Yes, and it ranks #1 for the site/domain name (which is normal).

"But do look for anything red-flaggish like "couldn't load robots.txt" or huge numbers of non-200 responses. This early in the site's life, there shouldn't be any redirects or 404s, except the purely mechanical /index.html and domain-name canonicalization redirects. Search engines will always ask for the wrong name now and then, just to test you. (Bing more than Google, for some reason. And MJ12 loves to leave off directory slashes. And so on.)"

There is a 404 but that's because there were two pages with the same content (post page and the homepage) and I only wanted the content to be on the homepage so I 404'ed the post page. Surely that wouldn't cause any serious issues?

Preferred Member

Right, so from now on make sure you carry that philosophy offline too. Do not advertise your business in directories like the YellowPages or in trade publications. Just sit around in your empty warehouse and hope that some newspaper takes an interest in your business and writes an article about it? What's wrong with that picture?

Exactly. Google's monopoly and reliance on links - and the subsequent "rules" to prevent gaming of links - subverts how marketing has worked in business for years. Now we're supposed to not market, but simply build it and gain market share without any marketing (well, aside from Google AdWords, right?). I'll just wait for sites to naturally link to my crane hire service website.

Junior Member

joined:Apr 27, 2012
posts:114
votes: 1

@ColourOfSpring, you're wrong. As I wrote before, link quantity is not relevant anymore. Believe me, I have seen websites one year old ranking in top results for very competitive keywords with only one - but quality - link ;)

Preferred Member

joined:Mar 12, 2013
posts: 500
votes: 0

@ColourOfSpring, you're wrong. As I wrote before, link quantity is not relevant anymore. Believe me, I have seen websites one year old ranking in top results for very competitive keywords with only one - but quality - link ;)

Great! I'll just wait for that single quality link in that case, as we all should, right?

Junior Member

joined:Sept 14, 2012
posts:66
votes: 0

"Personally, I consider these two concepts to be pretty much mutually exclusive. Slight chance Google may as well."

I didn't use any blog networks, article directories, or any link building software. A lot of sites that go viral exhibit an extremely aggressive growth in links, would that be considered black hat as well?

Senior Member

joined:Aug 4, 2008
posts:3342
votes: 255

plc90210 -- The probable explanation for your problem is really very simple, and if you will read the earlier posts in this thread, you will see that it's already been explained more than once. In summary: If you create a good website, then it will attract backlinks on its own, without any effort on your part. That's an indication of a quality site. But if most of the backlinks were created as a result of your own efforts (known as link-building), then they didn't happen "naturally".

Google's algorithm probably found indications that most of your backlinks are the result of link-building, and demoted your site for that reason.

Preferred Member

joined:Mar 12, 2013
posts: 500
votes: 0

If you create a good website, then it will attract backlinks on its own, without any effort on your part. That's an indication of a quality site. But if most of the backlinks were created as a result of your own efforts (known as link-building), then they didn't happen "naturally". Google's algorithm probably found indications that most of your backlinks are the result of link-building, and demoted your site for that reason.

Correct on artificial link-building, but I disagree with the "build it and they will come" myth. It's a bit like saying that if you build a bricks and mortar shop in an obscure area where shoppers don't normally go, they will magically find you so long as you stock great products or provide a great service. In reality, you have to advertise/market your business - essentially you have to make people aware you even exist. Online it's no different. Google dominates traffic, and that means you have to try to move from obscurity to the "high street" (first page results). You can pay to be on the high street (AdWords), or you can manipulate your way there. Some people will automatically assume the word "manipulate" to be an allusion to black-hat techniques, but in my mind, so-called white-hat techniques still involve manual processes that essentially manipulate rankings (means-to-an-end actions you perform solely to manipulate your ranking position, like viral videos and other link bait). In other words, it's not enough to build it. They will not come.

Senior Member

joined:Aug 4, 2008
posts:3342
votes: 255

ColourOfSpring -- Well, I just re-read my post, and I don't see anything about "build it and they will come." I was merely trying to explain the difference between natural and un-natural backlinks. So please don't put words in my mouth.

Preferred Member

joined:Mar 12, 2013
posts: 500
votes: 0

ColourOfSpring -- Well, I just re-read my post, and I don't see anything about "build it and they will come." I was merely trying to explain the difference between natural and un-natural backlinks. So please don't put words in my mouth.

You didn't literally say "build it and they will come", but you did say a phrase that carries (to me) the same meaning:

If you create a good website, then it will attract backlinks on its own

That is your quote I made comments about. I think your quote is true in some very obscure niches online where you can truly stand out and Google will reward you. In the vast majority of other markets, you have to be involved in black-hat or white-hat manipulation (some call it marketing) if you concern yourself with improving rankings in Google.

Senior Member

joined:Aug 4, 2008
posts:3342
votes: 255

If you create a good website, then it will attract backlinks on its own

ColourOfSpring -- Read my post. I said that's an indication of a quality website. And the statement is true for all niches, not just obscure niches. If you wanted to talk about manipulation, you should have found a more plausible excuse to bring it up.