Google SEO News and Discussion Forum

Hi All. My site is considered midsize to large, with pre-Panda traffic of 60k uniques/day. I'm sure most of us feel the frustration. My site is now hanging on life support at 18k/day and declining weekly. But here is one reason why Google made a brilliant move that I give an A++.

1. After months of searching and fixing my site, boom! All of a sudden I figured out that this is what Google wants: every webmaster to take their site seriously and make it as perfect as possible, so that the internet will have more and more great content.

Smart move, Google. If Bing and Yahoo keep sitting around and don't force webmasters to put some work into their sites, they will lose the search war.

"All of a sudden I figured out that this is what Google wants: every webmaster to take their site seriously and make it as perfect as possible, so that the internet will have more and more great content."

I have been doing this for years! I had great rankings and many emails from happy visitors, and now suddenly Google decides my site is crap? I used to update the site on an almost daily basis; since Panda there have been next to no updates. Less and less great content, instead of "more and more great content". Why would I invest in content if Google gives better rankings to thieves who copy it?

I have a 9-page article that I wrote and then paid to have copy-edited and grammar-checked. It is an absolute piece of art and took me the best part of six weeks to research and write up. That was about four years ago.

The content has been on my site, ranking for its subject matter. I have had many site visitors compliment me on it, and it has also been printed off and used by some schools and colleges.

Since then, numerous content jackers have stolen all or parts of this article, not even bothering to try to rewrite it; they simply copy and paste the entire document.

Following Panda, the article is nowhere to be seen in Google. In fact, if I search for a snippet of my own text, I see 9 sites that have stolen the content. If I repeat the search with the omitted results included, I then show up in position 2.

Just because Google is now attempting (not yet successfully, in my opinion) to detect and suppress "low quality" websites, it suddenly isn't a search engine?

Someone searching for detailed, authoritative information about widgets sends a query to Google and gets back a list of documents that are "relevant" to the query.

99% of the web pages about widgets were created for Google, and the vast majority are either a copy of an original article about widgets, some sort of autogenerated mashup of widget-related text, or a cheaply produced mini-article that talks superficially about widgets, published by a content farmer.

So, which of these documents does Google list at the top? The one that some algorithm decides is the one that is most "relevant" to the way the user worded their query?

If they try to push the worst of the junk documents down the list, that means they are no longer a search engine?

There have to be manual reviews of sites

No question, humans can judge quality better than a machine, but they are a costly input. Google uses human judgement to "seed" processes, and evaluate processes that are largely automated. Even if they could afford to do so, Google isn't going to study each of our sites and give us a grade; more importantly, they can't afford to hire humans to look at the millions of junk sites that crank out billions of pages of garbage every day. It's those "low quality" sites Google wants to suppress.

The problem isn't with their intentions -- which are good -- but with the execution (which is inadequate) and the fact that everything they do has such an outsized impact on the entire internet.

The core problem is that Google is too powerful -- so even their well intentioned efforts create enormous collateral damage.

We would all be better off in an environment in which Google were no longer the pipe sending 70-90% of the traffic to so many websites (depending on the niche).

But, currently Google is effectively a single point of failure for the entire internet ecosystem, so even a well-intentioned attempt to suppress "low quality" can cause widespread havoc and destruction.

Very well written econman. I am literally disgusted with Panda and with the way Google handled the roll-out, but I agree with your thinking. They are a search engine and they DO have a challenge and they ARE falling far short in the Panda version of their algorithm.

Here's my problem with human reviews: they are done by humans.

That means they'll inevitably bring to their reviews all the biases, prejudices and subjective judgements that are part of everyone's character. I'd just as soon have a machine do it.

Unfortunately for millions of site owners, Google and its machines screwed up big time with this one. The animosity directed at them has never been greater, and it's entirely their own doing. I can only hope that behind the scenes they are working overtime to get it right, though candidly, I see no evidence of that awareness on their part.

One of the posters said, in this thread or another one, that we are lucky to have only Google as our one boss. I totally disagree. I've been on the web since '95 and remember well the days of multiple search services, all about equal. I'll take that in a heartbeat over this current scenario, where the "one boss" can make your life miserable. Give me 5 bosses: I'll make 4 of them happy and won't care about the fifth.

Not only do we need human reviewers, and quite a few of them to smooth out errors and bias (if 20 reviewers say a website is crap and 2 say it's good, then most likely it's crap), we also need expert reviewers. It's no good asking someone in India to rate which New Zealand pet-products shopping site is the best, or getting someone with no interest in sports to judge hockey websites.
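
The "quite a few of them" idea above is just majority voting over independent ratings, which can be sketched in a few lines. This is an illustrative sketch, not anything Google actually does; the function name, the rating labels, and the minimum-reviewer threshold are all invented for the example.

```python
# Hypothetical sketch: a site's quality verdict is the majority of
# independent reviewer ratings, so a couple of biased or careless
# outliers can't flip the result on their own.
from collections import Counter

def consensus_verdict(ratings, min_reviewers=10):
    """ratings: list of 'good'/'crap' labels from independent reviewers."""
    if len(ratings) < min_reviewers:
        return "undecided"  # too few opinions to smooth out individual bias
    winner, _count = Counter(ratings).most_common(1)[0]
    return winner

# The example from the post: 20 reviewers say crap, 2 say good.
assert consensus_verdict(["crap"] * 20 + ["good"] * 2) == "crap"
```

The threshold matters: with only two or three reviewers, one person's prejudice decides the outcome, which is exactly the problem the earlier poster raised about human reviews.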

But I would argue that human reviewers should not judge actual rankings, just the intentions of the website and its operators: whether it's one big scraper, a content farm, an AdSense site, or a website run by actual people with the best of intentions to provide a good service and/or good unique content. Once the expert reviewers weed out the "bad" websites, let the Google algorithm do its work and rank the "good" ones. This way, at the very least, you can push the expert-judged "bad" websites all the way down to the bottom of the SERPs. Ideally, this would be done on a regular basis for "good" websites to ensure they remain good, with an "appeals" process for websites judged "bad" (limited to one appeal every six months or so; remember that for a website to be judged "bad", dozens of expert judges would have had to agree first, so the chance of a false positive isn't high to begin with).
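
The proposal above is a two-stage pipeline: humans assign only an intent verdict, and the algorithm ranks whatever survives. A minimal sketch, assuming invented names throughout (`rank_serp`, `may_appeal`, the verdict labels, and the six-month window are all illustrative, not any real Google mechanism):

```python
# Hypothetical two-stage pipeline: reviewers supply a per-site verdict,
# the ranking algorithm orders only the "good" sites, and sites judged
# "bad" sink to the very bottom of the results.
from datetime import datetime, timedelta

APPEAL_INTERVAL = timedelta(days=182)  # "one appeal every six months or so"

def rank_serp(sites, verdicts, relevance_score):
    """sites: list of URLs; verdicts: url -> 'good'/'bad' from expert reviewers.
    Unreviewed sites are treated as 'good' and ranked normally."""
    good = [s for s in sites if verdicts.get(s, "good") == "good"]
    bad = [s for s in sites if verdicts.get(s) == "bad"]
    # The algorithm only competes the "good" sites against each other.
    return sorted(good, key=relevance_score, reverse=True) + bad

def may_appeal(last_appeal, now):
    """A 'bad' site may appeal at most once per interval."""
    return last_appeal is None or now - last_appeal >= APPEAL_INTERVAL
```

The design point is the separation of concerns: reviewers never touch relative rankings, so their bias can only move a site between the "competes normally" pool and the bottom of the page, not between position 1 and position 5.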

This may sound like a lot of work, and it is, but Google makes billions of dollars in profit every year, so I think they can afford it. Judging a website, especially by experts, shouldn't take long: no more than a couple of minutes per website, if that. Most people can spot a good site from a bad one within seconds of visiting, whereas it takes GBs of data, hours of analysis, and millions of inbound and outbound links for the Google algo to get it right (and frequently, wrong). There will be some bias, but if the bias is unanimous and has the weight of experts behind it, then that's probably the good sort of bias, and it should push the judged website all the way to the top of the SERPs.

While we're discussing wish lists: for duplicate content, Google should have some kind of API call that allows webmasters to push updates, and should encourage legitimate webmasters to implement it via Webmaster Tools. The website that makes the first API call for a unique piece of content becomes the original; everything else is a duplicate. (There's an inherent risk for scrapers in making this call: if they're stealing content, it can be used to identify them as content thieves, and if they don't make the call, they can't be the original.) One problem is webmasters who aren't aware of this function getting their original content penalized as a duplicate just because a scraper made the API call first, but if you're that kind of webmaster, you probably don't care about SERPs anyway. The other problem is how Google can handle so many calls, but I'm sure there's a technical solution there somewhere.
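
The wished-for API boils down to a first-claim registry keyed by a fingerprint of the content. A minimal sketch of that idea, with the class, method names, and fingerprinting choice all invented for illustration (no such Google API existed):

```python
# Hypothetical "push" API: the first site to register a fingerprint of
# a piece of content is treated as the original; any later claim for
# the same text is flagged as a duplicate.
import hashlib

class OriginalityRegistry:
    def __init__(self):
        self._claims = {}  # content fingerprint -> first claiming URL

    def claim(self, url, text):
        """Returns 'original' for the first claim on this text, 'duplicate' after."""
        # Crude normalization so trivial whitespace/case edits don't dodge the check.
        fp = hashlib.sha256(text.strip().lower().encode("utf-8")).hexdigest()
        owner = self._claims.setdefault(fp, url)
        return "original" if owner == url else "duplicate"

registry = OriginalityRegistry()
assert registry.claim("mysite.com/widgets", "My 9-page widget article...") == "original"
assert registry.claim("scraper.net/stolen", "My 9-page widget article...") == "duplicate"
```

This also shows the scaling concern from the post in concrete terms: the registry is one hash lookup per claim, so the hard part isn't the per-call cost but storing a fingerprint for every piece of content on the web, and handling near-duplicates that an exact hash can't catch.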