Results of Google Experimentation - Only the First Anchor Text Counts


Before I was inundated with the responsibilities of running a company and managing a few hundred emails a day, I used to spend a lot of time testing theories about how the search engines handled certain elements on a site or page. I'd test the engines to find answers to questions like:

Does a keyword perform better or worse if it's higher up in the code of a page? (better)

What's better, bold or strong tags? (used to be strong, now they appear equal)

Does a link with exact anchor text for a query perform better than one that has other words in the anchor text? (exact appears to be better)

NOTE: My tests on these are more than a year old, so things may have changed.

Obviously, to test the answers to questions like these, you need a very tightly controlled environment, and even then, your tests might reveal answers, but not the relative levels of impact. Sure, having a keyword on a page in strong tags is better than not, but by how much? If one link from the crappiest PR1 page gives more of a boost, is it really worthwhile?

I've talked about this testing phenomenon in the past in a Sphinn thread, about whether nofollow sculpting has any impact (I've copied the relevant bit below):

Step 1: Register a new domain (preferably one with a domain name that has no results in Google - like yorkfabuzapeloh.com or such)

Step 2: Link to that domain's homepage from some social media profiles or pages you control (but make sure they're very obscure and hard to find so no one else discovers and links to it - this is pretty easy to do)

Step 3: Create 6 pages on the site, the homepage (A) with two links to pages (B) and (C), pages (D) and (E) - both linked to by page (B) - and page (F) linked to from page (C). It's important to make sure that (B) is the first link on the homepage (A) and (C) is the second link.

Step 4: Target a nonsense keyword on pages (D) and (F), which are linked to by pages (B) and (C) respectively.

Step 5: Wait until Google has indexed all the pages (usually only a couple of days if you link to them from a few sources), then run a search for the nonsense keyword you targeted on (D) and (F). Page (F) will rank first, because there's more link juice pointing to it than to (D): (D) is only getting half the link weight provided by page (B), while (F) is getting all of (C)'s link weight.

Step 6: Add a nofollow to the link from page (B) to page (E), which we haven't done anything with until now. Wait until Google respiders, then check the results again. (D) should now be ranking in front of (F), because it's receiving the same link weight as (F) but the original link from the homepage (A) to (B) is higher up on the page, which gives it a tiny bit more weight.

We've replicated this experiment as have several others, and certainly any global link weighting system similar to the original PageRank formula would lead you to this conclusion as well.
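For those who want to trace the arithmetic, here's a toy simulation of the six-page test site under a simple PageRank-style model. This is illustrative only - the damping factor and seed weight are assumptions, not measured values, and real ranking involves far more than raw link weight:

```python
# Toy model of the 6-page test site: each page splits its weight
# evenly among its *followed* outlinks (original PageRank idea).
# Assumption: the homepage (A) starts with 1.0 unit of external weight.

def link_weight(outlinks, seed_weight=1.0, damping=0.85):
    """Propagate weight one hop at a time through the small test site."""
    weight = {page: 0.0 for page in outlinks}
    weight["A"] = seed_weight
    # Pages are topologically ordered A -> B/C -> D/E/F in this toy site.
    for page in ["A", "B", "C"]:
        targets = outlinks[page]
        if targets:
            share = damping * weight[page] / len(targets)
            for target in targets:
                weight[target] += share
    return weight

# Before the nofollow: B links to both D and E.
before = link_weight({"A": ["B", "C"], "B": ["D", "E"], "C": ["F"],
                      "D": [], "E": [], "F": []})
assert before["F"] > before["D"]   # F ranks first for the nonsense term

# After nofollowing B -> E, B's full weight flows to D alone.
after = link_weight({"A": ["B", "C"], "B": ["D"], "C": ["F"],
                     "D": [], "E": [], "F": []})
assert after["D"] == after["F"]    # tie; A's earlier link to B breaks it
```

Under this model, (D) and (F) end up with equal weight after the nofollow is added, matching the observation that the tie is then broken by the homepage link to (B) appearing earlier in the code.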

I also referenced another test we've performed internally during last week's SEMpdx conference, which created a bit of confusion and is, ultimately, the reason for this blog post.

Directly following my keynote, a question was asked in which link placement on a page became relevant. I commented that it was important to note that only the first anchor text to a given target page would be counted by Google (we haven't yet tested Yahoo!/MSN), but there were a great number of audience members who came up to me during the day asking for clarification -- even Rebecca! And thus, even though we usually keep this kind of information internal (Jane's planning to release a PRO guide with lots of these tests later this year), I figured the beans had already been spilled, so it's my responsibility to clean up the mess.

Here's what I mean -- let's say that on your website's homepage, you have two links to your blog. The first link is in the top level navigation, and the anchor text is "blog." The second link is in the body of the homepage and reads "celebrity news blog." That second link's anchor text is NOT going to help the blog page rank for "celebrity news" because Google doesn't appear to count the anchor text from multiple links to a target from a single URL. Here's a visual example:

Hopefully this clears up the misconception I created at the conference and helps get everyone thinking about the value of testing. It can be complex and time-consuming (we run our tests on three nonsense domains, verifying that we get the same results every time), but rewarding. Obviously, even armed with just the knowledge from the test described above, there's a lot of extra thought to put into how your website's internal link structure should function (and yes, you can use nofollow on the first link if you want the second link's anchor text to count - let's test this - orgzhetwarhyu... tyynhaurslfhgn).
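To make the first-anchor-wins behavior concrete, here's a minimal Python sketch of a parser that records only the first anchor text it sees for each target URL. This is a guess at the observed behavior, not Google's actual code:

```python
from html.parser import HTMLParser

class FirstAnchorParser(HTMLParser):
    """Collect anchor text per target href, keeping only the FIRST
    anchor seen for each one -- a model of the behavior the test
    suggests, not a claim about how Google actually parses pages."""

    def __init__(self):
        super().__init__()
        self.first_anchor = {}       # href -> first anchor text seen
        self._current_href = None
        self._buffer = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current_href = dict(attrs).get("href")
            self._buffer = []

    def handle_data(self, data):
        if self._current_href is not None:
            self._buffer.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._current_href is not None:
            text = "".join(self._buffer).strip()
            # setdefault: later anchors to the same href are ignored.
            self.first_anchor.setdefault(self._current_href, text)
            self._current_href = None

parser = FirstAnchorParser()
parser.feed('<a href="/blog">blog</a> ... '
            '<a href="/blog">celebrity news blog</a>')
print(parser.first_anchor)   # {'/blog': 'blog'}
```

In the homepage example above, "celebrity news blog" never makes it into the anchor-text record for the blog page, because "blog" got there first.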

p.s. Two good questions were asked in the comments that deserve addressing in the post. First, this would appear to apply to the position in the code, not on the actual visual representation of the page, as Google isn't currently running 30+ billion documents through visual page analysis. Second, as far as PR "leaks" go, ideally you'd only want one link from any page to any other, but the original PR formula appears to do this for you, as they don't consider multiple votes for a URL by a single page to provide benefit (each page can only vote for another once).

p.p.s. On stuff like this, it's never a good idea to just take my word for it (or anyone else's) - run the tests yourself and see the results you get. Since the engines are evolving all the time, the results might be different in six months or six days.

I have a question - what defines the "first" anchor text on a page? Is it about how high up visually the link is on a browser, or how early in the code it appears? If the latter is true, then using the example in your post, could you control this by putting the menu lower down in the code, so that the 'celebrity news blog' link is higher up, but the menu displays higher due to CSS positioning? This way you could still have two links to the blog page without using nofollow, thus flagging it as more important than other pages on your site...

Sorry if this question has been discussed a lot before, but I'm not clear on it and would appreciate some help.

Yeah - sorry about that, I should have been clear. From my experience, it appears to be the location in the code. This makes sense, as it would be incredibly hard and time consuming to try to run every page on the web through visual analysis (something like Microsoft's VIPS).

Tools are nice, whiteboard friday interviews are interesting, conference and session recaps are ok - but posts like these (backed by experimentation, open to conversation and rebuttal, full of helpful info) - are epic.

See what I mean - they're complimenting each other (i.e. he's complimenting himself). You damn blackhat/s!

Just FYI - if you ever revealed your true "Identity", Brian/Vin or whatever the heck your name is, as of right now, you'd have a total of 1930 thumbs up and would be challenging Ciaran for the top "non-Moz employee" spot.

If and when I ever decide to leave SEOmoz - I'll 301 all of my thumbs to Identity and we'll give Ciaran a run for his money. I'm pretty sure that's the only way either one of us is going to catch up to him. ;-)

On a serious note - Sean, you've been moving up like wildfire recently, you've made the Critchlow and Dr. Pete rivalry some kind of mish-mash three-way race. If you keep it up at this pace (and everyone else remains at their pace) you may have Ciaran in your sights in another couple of months.

What you've done already is amazing (especially when you look at the number of thumbs per comments, etc.), if you caught up to Ciaran that would be even more amazing, but if you were to pass Jane up... well, I'm quite sure that would never happen.

Of course they said the same thing about the sound barrier and the 4 minute mile.

I think the thumb race is shaping up to be a nice little spectacle for this summer.

Like radio personalities only look good on radio, I only look good on avatar. So I guess we'll have to agree to be mutually complimented.

But don't fret about the thumbs race. Amazing how the tide can turn, especially when you get tied up in other areas. It seems like not all that long ago I was coasting along at a number 6 rank... then overnight, I was but a distant memory! But yes, Sean is burning up the track. Then again, it's hard to mind being passed up by such a great bunch.

MozRank and thumbs aside, what truly matters is the great community and information sharing like this, along with some great camaraderie, of course.

A lot of my web pages use image tabs at the tops of pages to link to other internal pages. Will ALT text in those image links trump anchor text down the page if both links point to the same page? Or can we get away with creating multiple keyword relevancy for more than one link to a given page by using an image link in one place and a text link in another?

As I work on loads of small sites with 5 - 10 pages, I tried all sorts of bits and pieces - my mistake was to assume that the anchor text in a top horizontal navigation counted the most - this clears it up.

Say, for example, I am targeting "prostate cancer surgery" for one client - if I don't have a 100% relevant URL, then I make sure that the top horizontal navigation has a text link with those words pointing to a page that has strong content centred around that phrase and a URL that matches perfectly.

So, what happens when an image link comes first, followed by a text link to the same destination - typical of most sites, where the logo links to the homepage and there is some type of text link to the homepage as well?

Edit to add - I guess my question is which is counting; the image alt text or the anchor link text?

I did a little anchor text test on our blog and it seems (from that test anyway) that if the image is first, you get no anchor weight at all: not from the ALT and not from the second text link... Exactly as if the first link is 'nofollow'.

I believe the first two will be seen as identical, however the last one will be seen as being different. Assuming that the pages are identical, then Google will select one of the two to canonicalize to.

From a duplicate content issue, this will just be a filtering of duplicates, however, there are other reasons to canonicalize through 301 redirects and making sure that one URL is used consistently in linking:

PageRank will be diluted between the different URLs

Link popularity is being diluted

Crawling will be inefficient

Now, an interesting test might be to link to a canonical URL using one anchor text and then link to another URL that 301 redirects to the canonical URL using a different anchor text...my guess is, over time, we'll see the same results as Google fully associates to the canonical URL.

Since Google has a huge number of pages to crawl, they presumably try to find ways to limit the time they spend on a site, if they can find a reason to.

If a domain has a small number of links and not much content, they might reasonably assume that it's not worth spending much time on: perhaps they'll just look at the first 150 characters.

Additionally, what about the usability research, which the chaps at Google undoubtedly know about, showing that users tend to ignore navigation elements and look first at the main content and the links therein (http://www.useit.com/alertbox/20000109.html)?

Aside from these specific points, my more general point is this: how can you be sure you have really successfully controlled for all other factors in this test? I don't think you can, particularly as their algorithm(s) are so obviously non-linear.

I haven't been in here for a while (you don't wanna know), and I'm thrilled to see there's lots of goodies for me to catch up on. Rand, you are amazingly consistent at coming up with truly useful info. TEN thumbs up.

Yes Rand, this makes little sense, especially when Google considers only one keyword anchor text per page (among a bunch of keyword anchor texts scattered across the page) as a vote for the other internal page of the website (I am talking about cannibalization, as Rand mentioned here). There has to be some way for the search engines to recognize which keyword phrase, among all those on the page, is the actual voting anchor text. A search engine picking whichever first anchor text comes its way sounds a little dumb on its part; there has to be something more to it...! But it certainly is one of the factors.

You're right it sounds a bit dumb - but you could also say it sounds light on processing power.

You did however trigger a thought. Both the links "Blog" and "celebrity news blog" include the text "blog" as common. There is nothing I can see in the test description to rule out commonality of link-text words as being the ruling factor. You'd need a couple of pages to test: identical two word header link-text and then a second link-text with just one of those words. Also further pages to test with the third link.

Or is it absolutely clear that the spider doesn't record duplicate targets?

Hmm... but I don't understand one thing, pbhj - does a spider only crawl a portion of a page and not the entire page, to save its processor energy (like you just said)? Certainly not! The crawler has to crawl the whole page anyway to index all the text and links on the page. If Google only considers the first anchor text on the page as a voting link, why do we (including SEOmoz) put internal links in the footer, tag clouds and what not at the bottom of the page when those links are already there in the header and body copy? Sounds contradictory, doesn't it?

Could you not make it so that the link at the top read "celebrity news blog" rather than "blog"? It would still be in context.

I also was confused as to whether the first anchor relates to within the code or in the actual page. From my limited knowledge of Google technologies, I'd guess it was where it was on the page from what I've seen of how the Google Spiderbot "reads" pages.

Yes - you could certainly alter the links as much as you want, which is why the experiment is valuable. It just shows us that we SHOULD be including the descriptive anchor text as the first link to a given target from the page.

In the original PR equation, they seem to suggest that a page can only cast a single vote for another page, which would indicate that multiple links to the same page won't impact the link graph or the PR. However, this is something where more testing would be needed to show it for sure.
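For reference, the formula from the original PageRank paper that this "single vote" reading is based on:

```latex
PR(u) = (1 - d) + d \sum_{v \in B_u} \frac{PR(v)}{L(v)}
```

where \(B_u\) is the set of pages linking to \(u\), \(L(v)\) is the number of outbound links on \(v\), and \(d\) is the damping factor (0.85 in the original paper). Reading \(B_u\) as a *set* - each linking page appearing at most once, regardless of how many links it has to \(u\) - is exactly the interpretation under discussion here; the paper itself doesn't spell out how duplicate links are handled, which is why further testing is needed.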

There are a couple of interesting notes from the comments that I would love to see followed up on.

One is the idea of navigational links vs. content links because of the fact that Google is supposed to be ignoring redundant/templated areas of sites.

There are a ton of offshoot experiments to do based off of the post and comments here. It would be great if we all had more time and $$$ to do a ton of experiments ourselves. Actually, this has prompted a few meetings this morning and we will be doing some testing.

Thing is, Rand, you happen to be completely wrong on that, and apparently completely unaware of what the word "validate" means in Sean's question. If you put multiple links on a single page, all pointing to some second page using different anchor texts, then nofollow the first one, Google does not simply skip over the link. The order of operations is dedupe first (remove all duplicate links), and then apply the nofollows.

You don't need all of the complexity you tried to demonstrate here in order to prove this, either. I showed the exact same concept of single link per page credit here last year:

You mean the first link is not nofollowed but the second one is? Well... I would assume that then it would be the same as if none of them were nofollowed... the first link encountered is the one that is counted.

Here is what I am guessing is going on... when a spider hits a page, very limited analysis is done. The spider's main goal is to collect and index the links. Spiders are meant to be lightweight apps capable of processing pages extremely fast, so as a rule none of the actual ranking stuff would be processed by the spider... they would just parse the page and add the next links to be followed to a queue. If the link is a dupe, don't add it. If the link is nofollowed, don't add it. Therefore duped links with the first one being nofollowed meet this OR criterion, and get skipped altogether.
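That hypothesized order of operations - skip a link if it duplicates an earlier target OR is nofollowed - can be sketched like so. This is purely a model of the guess above, not Google's actual pipeline:

```python
def collect_links(links):
    """Model of the commenter's hypothesis: a link is skipped if it
    duplicates an earlier target OR is nofollowed. Because a nofollowed
    link still marks its target as 'seen', it shadows any followed
    duplicate that comes after it. Illustrative only."""
    seen, queue = set(), []
    for href, anchor, nofollow in links:
        if href in seen:
            continue            # duplicate target: skipped outright
        seen.add(href)          # recorded even when nofollowed...
        if nofollow:
            continue            # ...so later duplicates never count
        queue.append((href, anchor))
    return queue

# First link nofollowed, second is a followed duplicate:
print(collect_links([("/page", "home", True),
                     ("/page", "rich anchor text", False)]))
# -> [] : under this model, neither link passes anchor text
```

If the dedupe really does happen before the nofollow check, nofollowing the first of two duplicate links wipes out the second one as well - which is the PR-sculpting hazard discussed in the comments below this one.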

I honestly doubt this is by design, btw, and suspect it is more of an oversight.

What you suggest would actually be a good test though... it could be that instead of them taking just the first link, they might be collating the links, and therefore any of them having nofollow would have a chance of mucking it up. If true then that would of course definitely support the Nofollow Is Evil philosophy. :)

Actually, this highlights a huge potential concern in relation to PR sculpting, if a nofollow on one link negates all duplicate links, regardless of a nofollow present on the other links.

For example, it isn't uncommon with ecommerce or blog sites to have a couple links in the upper (category) levels to the post or product page.

One might be the post or product name while the other might be the less useful "more" type link. In which case, the suggestion might be to nofollow the "more" link since it adds no useful textual signal.

So we have to consider how these scenarios might be perceived by Google (and to some extent, all engines):

if links count once, then perhaps there is no need to nofollow the less valuable version (assuming it follows the textually-rich version). And, though of lesser importance, link popularity...if this counting is different in regards to link popularity and multiple links on a page carry a cumulative value, then better not to nofollow.

if nofollowing the first one negates the second, then this kind of sculpting could be detrimental. In the example above, this may be infrequent as the "more" type link will usually if not always be the second link.

if nofollowing any link negates all links on the page, then sculpting needs to be approached with caution.

I think Search Junkie nailed the problem with your methodology. You would need to build a site with a global navigation system and then add a link within the body text of some internal page to see if the anchor text was being picked up.

Even though I'm really late to read and comment on this, I'd still like to express that I really value posts like this which are based on a specified experimental method.

Too often, SEO posts seem to be based upon little more than the author's opinion. And while the opinions of an experienced SEO pro like Rand are some of the most valid opinions, they are still opinions.

This is very interesting information. This goes against a few experiences we've had with regards to using nofollow, but it makes sense and I have not run thorough enough tests to contradict anything said here. If what I'm reading from Michael is correct, then I have some things to patch up with some of my consulting relationships. To this point, we only moved to crafting PR in industries where the competitiveness was extreme.

The first link thing is interesting. I typically use anchor text in the footer. If this information is absolute, then that is useless aside from Google possibly identifying the text and giving it the same value as body text.

I have a solution for the index page formulating in my head, but I don't think it can be applied to other pages without duplicate content issues. The solution would be to link the 'Home' link in the header to /index.html and link the footer anchor text to the absolute url such as http://www.mysite.com/. That way, Google would not see both links as linking to the same page. Just a thought.

I see a few potential issues with my idea. I don't know which is better. Having site-wide internal links pointing to the index with anchor text and a duplicate homepage getting counted by Google, or just have site-wide internal links pointing to index with useless anchor text such as 'Home'.

I guess you could hide the useless links lacking valuable anchor text in javascript or flash and then make the first code visible link have proper anchor text.

This sounds like a great excuse to start using flash for our top navigation again. Anchor rich text links used for navigation could be kept in the footer or left/right hand navigation.

Hey Rand, if you're still reading comments on this post... Does this work the same way with image links with alt text? Are they thrown into the mix with text links pointing to the same url? I would assume so, but jumping to conclusions is never wise in this industry.

I was also examining SEOmoz.org's nofollow usage with regards to this post and Michael's thoughts. A link with anchor text 'SEO moz' seems to be highest in the code, just above a nofollowed link 'Home'. In this instance, the nofollow attribute used on the 'Home' link is useless.

I was thinking the same exact thing as you slingshot. I know this is an old post but here it goes.

Scenario:

Most people choose www.yoursite.com as the site main url.

Menu nav links are usually short - 1 or 2 words - so as not to look tacky. I would link to yoursite.com from the nav menu link.

Sidebar navs can usually get away with 2 - 3 words, so you can link to yoursite.com/index.html, or, if your preferred anchor text will fit and not look lame, link to www.yoursite.com - which, according to Google, is a different page.

And if you need more space to fit your anchor text without making your sidebar nav or menu nav look funky...

You can link to the page in your footer or body with your preferred link: www.yoursite.com

I know, I know, Duplicate Content. Duplicate Shmooplicate.

That's what the .htaccess file is for.

Do a mod_rewrite in the .htaccess file (Apache servers only) so that all the URLs 301 to the same URL. No duplicate content =P
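A sketch of what those .htaccess rules might look like, assuming Apache with mod_rewrite enabled ("yoursite.com" is of course a placeholder for your own domain):

```apache
RewriteEngine On

# Canonicalize the bare-domain variant: yoursite.com -> www.yoursite.com
RewriteCond %{HTTP_HOST} ^yoursite\.com$ [NC]
RewriteRule ^(.*)$ http://www.yoursite.com/$1 [R=301,L]

# Canonicalize the /index.html variant to the root URL
RewriteRule ^index\.html$ http://www.yoursite.com/ [R=301,L]
```

With these in place, all three URL variants used as link targets resolve to a single canonical URL via 301s.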

Theoretically, Google shouldn't catch it. I'm not sure how Google treats these different URLs when crawling a page, but if they can't tell the variants are the same page when they exist as actual pages, I don't see why they would be able to when they appear as links.

Either way, with the redirect in place, your site at worst will be exactly where it was before. Correct me if I'm wrong.

While you touch on it, it would be interesting to further test the impact of text in an "identified" navigational structure, versus the content area, and compare that against two links within the content area -- since Google can obviously identify a navigational element (on a site large enough with navigation), to see whether the content link gains preference over time...although this then brings into question the value of multiple links due to navigation vs. how much those links may be devalued due to being within a site-wide navigational structure.

Would also be interesting to see what happens over time, whether the second link could gain equal footing.

Also to see the impact of an external link (or how many external links) to the page with the anchor text used within the second link.

I think this is worth further consideration. I imagine that boilerplate and block-level analysis comes into play here when Google is determining which links are most relevant.

Just a hunch, mind... Unfortunately, testing on a small site might not give you results anywhere near representative of what happens when Google identifies boilerplate.

I would imagine also that Mr. Cutts might need to revisit his comments if this theory turns out to be true across the board - in that case PR sculpting could be far more dangerous than he implied with some of his recent commentary on the subject.

I really enjoyed reading this one. As an SEO non-pro, I don't spend my time doing this kind of research, but I love reading the results when they are as well written, insightful, and actionable as this.

I suspect this is a topic that many of us amateurs would never have considered.

I'm actually very happy to see this test proven out as I think much of this same logic should be applied to all page elements. Great work as always Rand.

I've been waiting for the link placement technology that will value links more when they are in highly visible areas of web page real estate. I actually think that Google would benefit from this and have their AdWords ads become increasing more relevant when matching up highly weighted text due to page placement. I think we all agree that there is page real estate that is more valuable than other places on a page so why shouldn't that content be weighted higher? I think this is definitely a move in the right direction since it will solve a lot of problems I see with current ranking techniques. I actually wrote about this about a month or so ago.

In fewer words: if we have two internal links with the same address but different (or equal) keywords, Google considers only one link - the first - and ignores the second? Is this what you mean? In that case, all internal links in the middle of my online casino homepage are ignored because they are already present in the header?

Even 18 months later, this still interests me as something I have just been looking into. I am not sure if anyone is still reading these comments, but if so, can anyone let me know if there has been any research done on the other engines yet? If not, I would definitely consider doing something for Bing, as I have a strong feeling that in Bing's case more than one anchor text on a page is counted.

I've been wondering: if it's true that Google doesn't count the second link (external) to the same page, does that also mean it doesn't count a second external link to a different page on the same domain?

Sometimes site designs limit the number of characters in the main navigation if the site uses tabs. In this case would it actually be better to use images with keywords in the alt tag rather than simple/generic words such as "Home", "Insurance" etc?

Does this research only apply for anchor texts or does it also do the trick with other links on the site?

Let's suppose I have a banner at the top of the page linking to "site.com" and a link with anchor text (containing the keyword I want to target) at the bottom of the page in the footer. Is the link from the banner crawled first, and therefore the second link, with the anchor text, ignored?

I have a question about this stuff. For example, I have two targeted keywords on my pages, like "sample keyword" and "keyword sample", but they point to the same URL. Does the "sample keyword" anchor text count while the "keyword sample" anchor text does not?

One question not already answered: what if a blog article were to link three different keywords to the same page? We have established that the first link counts, but what about the other two? Is it unwise to link to the same page three times - not just because the extra links won't count, but because it is overdoing it?

To explain further: linking from the same article with three different keywords to the same page - is this something you would recommend, or have tested?

If you search in Google all 3 anchor texts, you will find the first one does NOT bring up the physicsprimer.com website (which was nofollowed), the second DOES bring up the physicsprimer.com website and the third does NOT bring up the physicsprimer.com website.

So based on the results of this, it looks like Google does skip the first, nofollowed link, gives value to the second link, and disregards the anchor text of the third link. This should confirm Rand's initial post: only the first PageRank-passing anchor text counts.

- "Does a link with exact anchor text for a query perform better than one that has other words in the anchor text? (exact appears to be better)"

Is this an understatement? I would love to hear from you, because in my short experience (less than a year), the above is very true. I never ran an experiment, but I did the following: I have a method to acquire many backlinks from homepages, but they all have to use the same anchor. So I made it a 3-word keyword, and now I'm #3 for that keyword in Google, which the AdWords tool labels "high competition." But I was ranked only 200-300 for a 2-word keyword derived from those 3 words before I started putting the exact 2-word anchor for it on some other sites, and now I'm slowly entering the first page for it. I really believe it's not just somewhat better to use the exact anchor you want to rank for - it's way, way better. Right now, I'm in the process of changing my titles in every directory I filled with keywords, shortening them to 2-3 keywords.

Could too many occurrences of a link to a single page also have a negative effect if all the links are on the same page? Could too many links pull the page down? I guess it makes sense if we are talking about juice leaks. But again, this needs further testing.

I remember watching one of the StomperNet videos by Andy Jenkins. He shares this concept that links and content on navigation that are present in all other pages are considered part of a template. And content in this area is less important than the main content of the actual site. I guess he may be correct when talking about the power of the content. But talking about the power of the link, I guess the answer is still test, test and test.

Thanks for another cool and enlightening post! Your experiment cleared a few things up for me, especially with the nofollow. I will replicate the experiment when I have time and will try and re-comment. No promises though.

This is something which has been troubling me lately - we've just re-engineered a website and went with droplist navigation, meaning every page is linked to from the main nav on all pages.

We felt it was best for usability, but for SEO I'm worried it won't work - Google won't be able to determine what our most important pages are. Although it ignores the anchor text of a second link, does it still see it as another "vote" for that page?

Is it better to have only a few links in the main nav and just link to sub pages from the section home page?

Droplist nav can be a problem as every page is linked to site-wide and can be counter-productive for SEO. I ran into the same problem for a client where I was trying to incorporate some tight siloing for different themes. Best solution I've found is by 'Plugin Labs.' All CSS drop down menus with a proprietary optimized linking script that only places relevant links on individual pages. Check it out...

Hello, this is a very informative article you have here. I did find a typo however. "while (D) is getting all of (C)'s link weight." The D is supposed to be F in the scenario that you gave. Thanks for sharing this information with us though.

Although this obviously works in some cases, it certainly isn't the case across the board, so people running out and putting nofollow on their home page links should do so with caution, as it is going to mean at least a major navigation change on your website.

With one or two similar tests, I came to the conclusion many moons ago that Google does in fact see and use the second link's text, and indeed that multiple links to a page do in fact add strength. Yes, it is only counted once in a backlink-type search, but somewhere in there it is making note that there is more than one link to a page.

If this wasn't the case, then half the search results would be null and void for certain phrases.

How many people target one keyword or one phrase per page? More often than not we will have pages about two different things. For example, our widgets page may be about blue widgets and red widgets. It would be linked to from the home page using both of those terms: 'red widgets' and 'blue widgets' would both be accurate, both make sense to the viewer and the engines, and from testing (with nonsense pages similar to those above) I can say that both terms get ranked - and the only way that can happen is via these text links.

I didn't really test whether one anchor was particularly stronger than the other, and it may be accurate that the first carries more strength, as initially depicted in Rand's tests (although, with those being 12 months old, there's a good possibility they hold no relevance anymore - old data is good for generating test ideas, but shouldn't be held as gospel). But second links to the same page are certainly not pointless.

@Paz: excellent point. This kind of stuff is so hard to test accurately that sometimes it's worth just asking, "Would it make sense for Google to put this in their algorithm, given that they're trying to build the best search index?"

Asking that question for this issue, I think, points to what you are saying.

Having said that, I would love to see the results of any tests you ran to arrive at that conclusion, if available.

Rand, correct me if I'm wrong, but I don't think this post is saying don't use targeted anchor text because it's pointless - it still counts towards keywords on the page, and remember, the big G can change its algorithm overnight on a whim. If targeting anchor text looks natural, then continue to do it; however, if you have a choice of relevant destination URLs, consider using them.

I am writing to ask for support, or to hear whether others have run into issues with Google's organic search and Google AdWords. The company I work for, Laserfiche (www.laserfiche.com), spends more than $500,000 annually on AdWords. I've been having ongoing problems with sites that rank above us on the organic side of Google. I have done further research on these sites and have seen them use paid linking, paid directories, and forum pulling to gain higher link counts for their sites. Google states that paid linking is against its policies and that it will sandbox sites that use such practices. We have reported these facts over and over to Google AdWords account reps, and they have told us there is nothing they can do for us regarding the sites in question - though they would help us with other marketing means for our AdWords. We have asked for contact information for the Google Search team, and we still get nowhere with Google.

Lynette, this is probably a better question for the Q&A section of the site, since it has nothing to do with the post you are commenting on.

Google, and especially Mr. Schmidt, are not going to go out of their way to help you with organic rankings. There are billions of sites, and if they started helping one, then all of them would ask for help. Also, Google is quite the opposite of an SEO firm - they want your PPC $$$.

Google says this and that about paid links, but the fact of the matter is that they cannot police it all. And if you want to rank for any competitive terms these days, you need to buy links. Everyone does it, whether they admit it or not: SEOmoz does it, iProspect does it, iCrossing does it, we all do it. Snitching to Google about other people will not help you either - as you have obviously learned first-hand.

I suggest that you work with a professional SEO firm to get your site the rankings you desire. It is not cheap - be prepared to spend about $2k-$5k a month - but the results, once achieved, are worth it! If you try to do it in-house, you will end up wasting more time and resources than if you go with a professional firm. Plus, working with a firm gives you the benefit of an entire force of SEO people helping to tackle your issues.

This is a really interesting post. I am working hard to understand how Google works and to learn more about SEO. My website about San Diego Real Estate is just starting to come around, but it seems like it will take a lot of time to learn everything - and then once that happens, I will need to learn it all over again, because everything is always changing! :D