Can anybody explain the impact of duplicate content to me? I understand the impact of copying something that has been used on the internet tens or even hundreds of times, but what about content that has only been posted on the internet once, for example a story on Yahoo Answers, a very long blog comment, a Craigslist story, or a Reddit comment? Since these have only been posted once and usually never get copied, wouldn't it be possible to copy this content, backdate it to before the original post, and avoid any negative consequences?

Google considers the "story" it finds first to be the unique content; it doesn't go by the creation date. So if Google discovered an article before you copied it onto your own website, you'll be penalised or ignored by Google.


Is that actually true? I can't find much data on this after Panda. Does anyone know of a case study?

What I always read about duplicate content is that it doesn't matter how often the same content is posted around the web, as long as your own site isn't duplicating itself.

However, if that's the case, then I could theoretically have a $4 article written and blast it once to every site in my lists, and they should all index at the same rate as if I had put up unique content. If I do some crappy spinning on it that gets me 50 "unique" articles, I could blast it to every site in my lists 50 times before I see issues. That's a lot of links from one $4 article. If the duplicate content penalty holds no water, then why do people obsess over heavily spun articles? I can't imagine nobody has looked into this before, and if they did and this turned out to be true...well, the web would look a lot different than it does now.

edit: Found one http://www.jonathanleger.com/case-study-links-from-unique-vs-duplicate-content/

Google considers the "story" it finds first to be the unique content; it doesn't go by the creation date. So if Google discovered an article before you copied it onto your own website, you'll be penalised or ignored by Google.


This is mostly correct. With Google Caffeine, they spider so quickly that they can tell pretty accurately which site posted something first, based on which one their spiders found first. I'm sure they use signals like the creation date to confirm that.

But if an article was posted at 12:30 and you post it on your site at 12:45 with a faked creation date of 12:20, Google will almost certainly know from its crawl data that you were not the original author of the article.
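The reasoning above can be sketched in a few lines. This is purely a hypothetical illustration (the field names and the first-crawl rule are my assumptions, not Google's actual logic): attribution goes to whichever page was crawled first, so a backdated claimed publish date on the copy changes nothing.

```python
from datetime import datetime

# Hypothetical sketch: the "original" is whichever page the crawler
# saw first, regardless of the publish date each page claims.
pages = [
    {"url": "original.com/post", "crawled": datetime(2024, 1, 1, 12, 30),
     "claimed_published": datetime(2024, 1, 1, 12, 30)},
    {"url": "copycat.com/post", "crawled": datetime(2024, 1, 1, 12, 45),
     "claimed_published": datetime(2024, 1, 1, 12, 20)},  # backdated copy
]

# First-crawl timestamp wins; the faked earlier creation date is ignored.
original = min(pages, key=lambda p: p["crawled"])
print(original["url"])  # original.com/post
```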

Duplicate content in SEO is essentially any web content that is measured to be substantially similar to content on another site. Search engines have implemented filters specifically to catch these kinds of cunning attempts to improve a site's search engine rankings. They detect duplicate content using the same mechanism they use for analyzing and indexing pages: crawlers, or robots. These crawlers visit different websites and catalogue them by reading and saving information to the search engine's database. Once this is done, the engine compares the information gathered from one website against all the others it has visited, using algorithms to determine whether the content is relevant and whether it should be treated as duplicate content or spam. Even if you have no intention of deceiving search engines to improve your site's ranking, your site can still get flagged for duplicate content. One way to avoid this is to check for duplicates yourself: make sure your pages don't share too many similarities with another page's content, because that can still appear as duplicate content to some filters even when it isn't spam.

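The comparison step described above can be sketched with a classic toy technique: break each page into word n-grams ("shingles") and measure their Jaccard overlap. This is my own minimal illustration of the general idea; real search engines use far more elaborate signals and this is not their actual algorithm.

```python
# Toy duplicate-content check: shingle each page into word trigrams,
# then compare the shingle sets with Jaccard similarity (0.0 to 1.0).

def shingles(text: str, n: int = 3) -> set:
    """Return the set of n-word shingles in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Overlap of two shingle sets: |intersection| / |union|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

page_a = "the quick brown fox jumps over the lazy dog"
page_b = "the quick brown fox jumps over a sleeping dog"  # near-copy
page_c = "completely unrelated text about search engine crawlers here"

# The near-copy scores much higher than the unrelated page.
print(jaccard(shingles(page_a), shingles(page_b)))
print(jaccard(shingles(page_a), shingles(page_c)))
```

A filter along these lines would flag a pair of pages as duplicates whenever the score crosses some threshold, which is why even a partially rewritten page can still trip it.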

What if you're making a deals/product site? The text on your site will be nothing but duplicate content, since you'll be copying the specs and descriptions of products.

If you are displaying Google AdSense, Google can ban you for copying content. I copied content from another blog four years ago, and Google slapped me with a ban. And if the blog owner holds the copyright, it can take an even heavier toll on you. You also won't get much traffic from copied content.
