The blog post - PageRank Sculpting - from the head of Google's Web Spam team is a critical read for SEOs worldwide:

So what happens when you have a page with “ten PageRank points” and ten outgoing links, and five of those links are nofollowed? Let’s leave aside the decay factor to focus on the core part of the question. Originally, the five links without nofollow would have flowed two points of PageRank each (in essence, the nofollowed links didn’t count toward the denominator when dividing PageRank by the outdegree of the page). More than a year ago, Google changed how the PageRank flows so that the five links without nofollow would flow one point of PageRank each.
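To make the arithmetic in Matt's example concrete, here's a toy sketch in Python (the numbers come straight from the quote above; the decay factor is ignored, exactly as in the example):

```python
# Matt's example: a page with 10 PageRank points, 10 outgoing links,
# 5 of them nofollowed. (Decay factor ignored, as in the quote.)
page_pr = 10
total_links = 10
nofollowed = 5
followed = total_links - nofollowed

# Old behavior: nofollowed links were excluded from the denominator,
# so the followed links split the full 10 points among themselves.
old_flow_per_link = page_pr / followed          # 10 / 5 = 2.0 points each

# New behavior: every link counts in the denominator, but the share
# belonging to nofollowed links is simply no longer passed along.
new_flow_per_link = page_pr / total_links       # 10 / 10 = 1.0 point each
evaporated = nofollowed * new_flow_per_link     # 5 points not passed

print(old_flow_per_link, new_flow_per_link, evaporated)
```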

It's valuable to recall the illustration I put up on Google's initial announcement of this change:

This change in Google's treatment of nofollow links comes with some very interesting additional advice/clarification:

Q: Okay, but doesn’t this encourage me to link out less? Should I turn off comments on my blog?
A: I wouldn’t recommend closing comments in an attempt to “hoard” your PageRank. In the same way that Google trusts sites less when they link to spammy sites or bad neighborhoods, parts of our system encourage links to good sites.

Many in the SEO field have long suspected that linking out to good places can provide a positive benefit, but I'm afraid that's going to be very hard to quantify and therefore difficult to justify. In all honesty, I believe we're going to see SEOs and websites revert to what I'll call "old-school" PageRank sculpting - the kind prevalent prior to the existence of nofollow.

From now on, if you wish to sculpt PageRank, you'll want to use one of the following classic PR sculpting methodologies:

Option A: An embedded iFrame on the page containing the links you don't want the engines to follow (remember not to link to the iFrame URL, and potentially block it using robots.txt)

Option B: Links that call a JavaScript redirect script with access blocked for search engine bots (since Google is now also crawling basic JavaScript and counting links through it)

Option C: An embed in Flash, Java or some other non-parseable plug-in that contains the desired links

Option D: Settings that turn off links for non-cookied or non-logged-in visitors
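For Option B, the usual pattern is to route the unwanted links through a redirect script and then keep bots away from that script in robots.txt. A minimal sketch (the /go/ path is purely illustrative, not a specific recommendation):

```
User-agent: *
Disallow: /go/
```

The links you want sculpted then point at something like /go/?url=... so that compliant crawlers never follow them, while users are redirected normally.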

Tragically, while this action won't hurt spammers or those seeking to manipulate Google, it will seriously harm many thousands of sites that have employed nofollow internally as it was long considered a best practice (and messaged as such to the SEO community by the same source as this reversal). I suspect it will be several years and many re-designs before a lot of sites are able to clean up this solution-turned-problem.

I'm saddened to say that given this change, we, as SEOs, are going to have to also recommend the best practice that comments (in all forms of UGC) no longer accept links. While Google has said that linking out to "good places" provides some value, that merely suggests that webmasters and site owners should select good resources editorially and link to them with live, followed links. Comments that contain links, unfortunately, will actively detract from a site's ability to get pages indexed (as they'll pull away link juice from the places that need it). It's likely that a plug-in for WordPress that sends comment links out through uncrawlable JavaScript or uses iFrames will emerge in the very near future.

This is a disappointing move from Google on many fronts:

It allows malicious operators to actively hurt a site by adding nofollowed links in comments, forums and other open submission arenas.

It removes the protection webmasters thought was afforded by nofollowing links (you may not get hurt for linking to spam or paid links directly, but you're now indirectly hurting your site's PageRank flow)

While I'm personally frustrated, I'm also thankful to Google for publicly messaging this in an honest, open way. I hope that in the future, we'll get this notification in a more timely fashion. SEO consultants and in-house analysts are going to have their work cut out for them over the next few months.

BTW - Although Google has almost certainly messaged this honestly, we've got some tests running to make sure that's the case (with both the nofollow and the iframe/JavaScript solutions). Results will be posted here once our tests are complete. We're also going to be making changes to how Linkscape's mozRank scoring system (modeled on intuition similar to PageRank's) treats nofollowed links in future indices.

p.s. Danny Sullivan's comment on Matt's blog post is also an essential read (and re-iterates many of the points above). A few valuable excerpts:

With this change, I can still get the $4 if I simply don’t allow comments. Or I show comments, but I use an iframe, so that the comments actually reside on a different page. In either case, I’m encouraged to reduce the number of links rather than let them be on the page period, nofollow regardless. If I’m worried my page won’t seem “natural” enough to Google without them, maybe I allow 5 comments through and lock them down after that.

Rather than clarify things, I feel like this is what your post is going to do -- cause people to consciously reduce the number of links they allow on their pages. We’re going to see an increase in iframe usage or other techniques to reduce links and flow more PageRank to the remaining links, for those who really worry/believe in such things.

It's been a long time since we had such a fundamental shift in SEO best practices (maybe the canonical URL tag, though its effectiveness has been questioned, and this PR sculpting reversal isn't likely to inspire confidence).

I second this comment. The big thing that stood out for me in Matt's post was that it all started over a year ago. With the pace that Google crawls the web, most sites would have seen the effect by now, no?

I'm not usually one to freak out over Google changes, but I think what really stings about this one is that nofollow is a tool Google explicitly gave us and endorsed, and they created it for some perfectly viable reasons. Now, after endorsing it and convincing us all to use it, they changed the rules a year ago without any warning at all.

When Google makes a set of secret rules in the algorithm, we try to divine those rules, and then they change them, that's one thing - it's the game, and SEOs have to play it. When Google says "This is what you should do" from the mountaintop and many of us do it to try to play by the rules (we're not all siloing for maximum gain), and then they secretly change those rules, then the message I personally start to hear is "Stop doing what we tell you, because we may just screw with you anyway".

The next time Google says we should do things a certain way, why should we listen? This is a bad trend and the Google team has potentially lost a lot of credibility with the white-hat community, IMO.

They endorsed it mostly for blocking comment spam, and I think we as an industry helped morph it into this sculpting thing. Cutts has always said it would not be the first thing to worry about. But then all the "SEO tests" came out and said it did help pages rank, without any real proof. Michael Martinez has been posting that you can't control PageRank in this sculpting fashion for well over a year, with specific comments on many of these blogs, but no one would listen.

So many bought into it when they should not have. Good post by Aaron Wall on this "testing" today:

Practically speaking, I haven't used nofollow as a crutch and I'll survive, but I do think that it puts a serious dent in any future tactical endorsements Google makes. Even if it was just for comment spam, that still raises the question: what do we do now? Many people invested time and energy in custom code, plug-ins, etc. to use nofollow to combat comment spam and help Google, and now Google says "Sorry, try again".

Sure, plenty of people gamed the system, and I know Google has to contend with that, but what should the well-meaning webmaster who was trying to help Google fight spam think? That person may not listen to Google the next time, and I can't blame them.

Agreed, Pete - it doesn't matter whether this was about nofollow or something else, the principle remains the same. How, as white-hat people, are we supposed to keep faith in being white-hat and following the rules when they are changed arbitrarily like this?

I think you need to read the article again, and any previous articles related to the subject. For one, you have been able to sculpt PageRank with the nofollow attribute and internal link architecture; until recently, Matt Cutts did say it was OK to do so, though it was not something to put much weight on.

incrediblehelp, I'm not arguing about "when" the algo changed to address nofollow PageRank sculpting, but that Matt did in fact say "you could". http://searchengineland.com/pagerank-sculpting-is-dead-long-live-pagerank-sculpting-21102 So going on and on about never being able to sculpt PageRank from day one, acting as if "it never existed", is wrong.

Google is never going to endorse a practice that focuses solely on improving rankings. Everything you ever ask Google will be answered in a way that promotes an improved user experience. The SEO community wasn't interested in whether or not Google "endorsed" PageRank sculpting with nofollow--it was interested in (1) does it work? and (2) are we allowed to do it without risking penalty?

The answers we originally received were (1) yes, it works, and (2) yes, you're allowed to do it.

The only thing that has changed is whether or not the nofollow attribute is an effective tool for PageRank sculpting. We're now hearing that no, it is NOT an effective tool.

However, Matt Cutts continues to inform us that yes, PageRank sculpting works, and yes, we're still allowed to do it. These answers are as recent as 3 weeks ago, when he posted this video: What are your views on PageRank sculpting?

Summary

PR sculpting:

Does it work? YES
Is it unethical? NO

PR sculpting with nofollow:

Does it work? NO
Is it unethical? NO

And one last thing I'll add...

For God's sake, don't cite Michael Martinez like he's some kind of expert. He only comments on this blog in hopes that his contrarian perspective will confuse people into subscribing to his own blog. The only "evidence" he's provided that supports his anti-PageRank-sculpting philosophy is the following logical fallacy:

"Until you can measure it you cannot sculpt it."

I mean come on... is it really that difficult to understand why that statement is completely meaningless and invalid?

Even a broken record lines up with the music at some point. That doesn't mean you should dance to it.

Fair enough - it's true that Google has hinted at this. I don't want to come across like I'm crying about it. I just think there's a fundamental difference in this situation, which is that, unlike an algo change (where the algo is Google's property and we're all guessing at it), this is something Google publicly endorsed, and that creates a different level of expectation and accountability. It would've been nice to have seen the current statements appear a year ago and for Google to have been more transparent about this process.

Man oh man are paginated links going to be a pain. It will be interesting to see how people deal with the various places where nofollow was previously used (links to T&C, sign-in, etc.). iFrames, JavaScript, Flash, etc. are all more complicated to get right and can often degrade the usability, look and feel, and accessibility of a site.

Before making any changes, I'd like to be sure about the other parts of PageRank after these changes, in particular:

1) If you have 2 links on a page to the same page and 1 link to another page, do the target pages get 1/2 PR each, or 2/3 and 1/3 PR respectively?

2) If a page links to itself using a #, does that count towards the PageRank allocation of the page? If I had 1 link to the same page and links to 2 other pages, do the 2 other pages get 1/2 PR each (the self-link doesn't count) or 1/3 PR each (in which case, where did the other 1/3 go - back to the original page, or evaporated)?

These may seem like dumb questions, but it might be worth checking our assumptions before changing our techniques. Looking forward to the results of the tests, Rand.
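The two interpretations behind question 1 can be sketched in a few lines. This is illustrative only - `pr_shares` is a hypothetical helper, and nobody outside Google knows which assumption (if either) is correct:

```python
from collections import Counter

def pr_shares(links, count_duplicates=True):
    """Split one page's PageRank across its outlinks under two
    assumptions: either every link instance counts toward the
    denominator, or duplicate links to the same URL are collapsed
    first. Hypothetical; Google's actual handling is unknown."""
    if count_duplicates:
        total = len(links)
        return {url: c / total for url, c in Counter(links).items()}
    unique = set(links)
    return {url: 1 / len(unique) for url in unique}

# Question 1: two links to page A, one link to page B.
print(pr_shares(["A", "A", "B"], count_duplicates=True))   # A gets 2/3, B gets 1/3
print(pr_shares(["A", "A", "B"], count_duplicates=False))  # A and B get 1/2 each
```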

So whilst we may be tempted to use said techniques, it's going to heavily affect usability and accessibility - at a time when the EU and others are starting to implement legislation making full accessibility a legal necessity, which, if you resort to the above tactics, could result in heavy fines and lawsuits.

Well, one thing we all have to keep in mind is that if users can't utilize your site, it doesn't matter how well it ranks. SEOs have to consider usability right alongside ranking well. If you're doing things that cause usability issues, then your focus is all wrong. (And by "you" I mean SEOs in general.)

Only the first instance of a link to any individual page is counted, but whether further instances of links to that page are discounted totally, utterly ignored, or dilute PR the way nofollows now do, I could not say.

Those aren't dumb questions at all. Those are 2 of the questions that every SEO wonders about, as soon as he/she takes the time to really understand PageRank. I don't have the answers, but I'll add my thoughts.

If the "random surfer" analogy holds true, it would seem logical to factor all links into the PR calculation. In other words (referring to your #1 example here), Google should distribute twice as much PR to the first page, since a random surfer would have twice the probability of clicking on a link to that page.

However, keeping track of multiple links per page and using that data in the PR calculation seems like it would require a significant increase in programming, storage, and computational resources... which leads me to believe that Google does NOT track multiple links per page (at least not when Google was first released).

Also, if you read section 4 of the original Google paper, it says this:

The URLresolver reads the anchors file and converts relative URLs into absolute URLs and in turn into docIDs. It puts the anchor text into the forward index, associated with the docID that the anchor points to. It also generates a database of links which are pairs of docIDs. The links database is used to compute PageRanks for all the documents.

To me, this sounds like it's possible to get credit for multiple links with regards to anchor text... but not PageRank. Again, that's just my opinion, and certainly things may have changed since that paper was written.

Basically, I assume that Google calculates an on-page probability that factors in things like {relative font size of anchor text} and {location of anchor text within a document}. Using a complicated model like that would allow multiple prominent links to distribute more PR to a given URL--even if it might not be as simple as "twice the links = twice the PR."

I'm just glad you didn't think my question was dumb. Quite a compliment coming from you! Thanks for sharing your thoughts on it. I think you could be right, the question is, what experiment would you run to prove it?

It says one thing to me - Google feels much more confident in distinguishing between the quality of links. Maybe it's time to abandon nofollow altogether (even for external links) and let search engines decide which links are good and which are not.

But don't you webmasters and SEOs feel stupid? We helped Google by nofollowing links so the "wrong" links wouldn't count, because they couldn't do it on their own. Now that they feel confident, it turns out it can even hurt your site.

Personally, I am not going to change anything on my websites for months, maybe ever. But, of course, I am very interested in the tests you're running.

IMHO - adding iFrames will not be a long-term solution. It is impossible to hide the target URL of the iFrame and, as webmasters add them to their pages, Google has no reason not to consider the content of those iFrames as part of the "regular" content on a particular page. Eventually, links in iFrames will be counted as links on the page.

From the point of view of a search engineer, I see no reason not to consider it "regular" content on the page even if it is framed - or else why would you want to show it to your visitors? It might not be indexed as a standalone URL because of settings in robots.txt, but I truly believe frames might get flattened in a manner similar to how graphical editors flatten image layers when exporting to JPG, for example.

Why shouldn't it be done? Search engines want to see the same content that end users see on the site, and this is just the next step.

In order to "flatten" the original content and the iframe content into a single document, Google would first have to get the iframe content from your server. If you have the iframe URL DISALLOWED in your robots.txt file, Google cannot access it... unless googlebot makes an intentional decision to disobey the robots exclusion protocol.

In other words, to accomplish what you're suggesting... Google would have to spit in the faces of webmasters everywhere (not to mention, they would have to change this or admit it was all a lie).
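For reference, the robots-exclusion check a compliant crawler performs before fetching an iframe URL can be reproduced with Python's standard library (the domain, paths, and rules here are made up for illustration):

```python
import urllib.robotparser

# A compliant crawler consults robots.txt before fetching any URL,
# including one referenced by an iframe. Illustrative rules only.
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /iframe-links/",
])

# The disallowed iframe URL cannot be fetched; the rest of the site can.
print(rp.can_fetch("Googlebot", "http://example.com/iframe-links/nav.html"))  # False
print(rp.can_fetch("Googlebot", "http://example.com/index.html"))             # True
```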

But then, isn't having an iFrame pointing to a page blocked in robots.txt a pretty good indicator for Google that you're being naughty (not a perfect indicator, but still)?

Plus, on the subject, how does Google discover cloaking? Unless they disobey robots.txt, they can't do it programmatically. Does that mean that for most of the 200 million websites, cloaking wouldn't be found?

Can't show it, as it is a client's site, but I doubt it is on purpose. It usually happens after pages are indexed and you are trying to block them through the robots.txt file after the fact. The pages still get crawled and indexed. Of course, this is probably because links point to the pages, but still, they should be blocked because of the robots.txt rule, right?

Once in the index, you'd have to use Webmaster Tools to remove the page - that's what it's there for. You shouldn't be using robots.txt to do that, as it's bolting the door after the horse has bolted.
Or however that saying goes... :D

Everfluxx, a page that is blocked by robots.txt can still accrue PageRank. In the old days, ebay.com blocked Google in robots.txt, but we still wanted to be able to return ebay.com for the query [ebay], so uncrawled urls can accumulate PageRank and be shown in our search results.

"Everfluxx, a page that is blocked by robots.txt can still accrue PageRank. In the old days, ebay.com blocked Google in robots.txt, but we still wanted to be able to return ebay.com for the query [ebay]"

If you use the "noindex" robots.txt directive instead of "disallow", those pages will not show up in Google's index. That is for sure. I know it is only unofficially supported by Google, but I still use it, and have for over a year without any problems. Basically, I use it as one of my best tools for bot herding.

THAT is your basis for claiming that Google disobeys the robots.txt protocol?! I'm sorry, but that is just ignorant.

Before posting such bold claims, you might want to take the time to actually read the protocol in the first place.

The original robots.txt specifications were written before Google existed, so it was obviously not written with PageRank or Google's index in mind.

The point of robots.txt is to define which parts of a website are accessible to bots. Googlebot would be disobeying robots.txt if it requested disallowed resources from a server.

There's nothing that says Google can't return disallowed URLs in its search results. Google will display as much information about the page as possible without actually fetching the page and crawling its content. Usually, that means it shows only a URL, but it can also use the ODP description, or possibly even archived content that was fetched before the robots.txt rule was in place.

I'm now going to reread every comment you've made on this post and thumb down any that I disagree with or don't like.

The iframe itself might be counted as a link. I'm sure it doesn't pass PageRank, but it might be considered a link. If so, creating an iframe for a single link would be the same as just nofollowing the link.

4. I define a JavaScript function (in another external file) that writes the iframe element code into my original document.

5. I disallow the external JavaScript file in robots.txt.

6. I leave the un-important links on one or two pages (e.g. sitemap.html). This lets the search engines crawl those un-important pages, but without giving them more PageRank than they deserve. This also gives users access to those pages, even if they have JavaScript disabled.

7. Each page that used to have an entire block of code (of un-important links), now has just a single JavaScript function call, enclosed in <script> tags.

8. You could even take it one step further and define your JavaScript function to write the iframe element's code into a specific element ID (or using the DOM). This would eliminate the need for <script> tags, except for the ones in the <head> section that reference the external .js file.

The irony is... I'm obsessively white-hat. I'm not "hiding" anything or doing anything sneaky. I go through all that trouble just to make sure Google doesn't mistake my Privacy Policy for a landing page.

"Tragically, while this action won't hurt spammers or those seeking to manipulate Google, it will seriously harm many thousands of sites that have employed nofollow internally as it was long considered a best practice (and messaged as such to the SEO community by the same source as this reversal). I suspect it will be several years and many re-designs before a lot of sites are able to clean up this solution-turned-problem."

Matt's post said that this has been in place since last year, so surely this won't be the case? If anything, it would have been an issue whenever this was implemented, but the change supposedly wasn't picked up by anyone. I don't think we're suddenly going to see drastic drops in traffic as a result of this announcement.

I also wonder if this is a precursor to starting to penalise excessive use of nofollow. I read into Matt's comments a note of "you have used and abused this too much". By starting to tell people not to use it as much, they can then, as a next step, consider penalties based on large-scale use... The first step in wiki's downfall?

Not sure about the "wiki's downfall" theory. Wikipedia doesn't excessively use nofollows - only on external links, which are usually only a small proportion of total links on most pages.

I could see the penalty scenario happening for sites where every link except the sales letter page is nofollowed - but what would be the point in light of this announcement? By negating the technique, they will eliminate it without the need for a new penalty - a case of "it's pointless" rather than "it's manipulative"...

I would love to see this announcement be a precursor to Wikipedia dropping the nofollow from its references. Having a nofollow there in the first place is pretty ridiculous, since all of the content on their domain is attributable to said sources. It's like they're saying: don't trust these external links, but do trust our website, even though all of its content comes from those external links.

I really don't think Google will be punishing "manipulators" as apparently it's been over a year since their "manipulation" had any effect at all...

For me the main negative of all of this, and one Rand referred to in an earlier post, is that all the X-Files fans who think that Google is Skynet and Matt Cutts is Arnie are going to be able to say, "See, we told you!"

Option E - Increase the amount of internal linking and flatten site architecture.

My old Sandcastles linking structure works great with the new algos, though there is now a need to remove external links totally from duplicate content pages rather than nofollow them.

WordPress does this by default with its really ugly automatic snippets.

Option F - there is an even better way that maximises the benefit of user-generated content, still providing dofollow links but retaining 95%+ of the juice from all external links on a page, without using nofollow at all.

Andy - Option E - are you suggesting linking to a page on the site that detects the user's click location and redirects them to the right destination somehow but then shows links back to the rest of the site on that redirection URL for engines? I'm a little confused, but if my interpretation's right, it sounds potentially manipulative.

Rand, what do you think about using a link to www.domain.tld/#page-xyz instead of www.domain.tld/page-xyz with nofollow? Google says everything after the # sign doesn't count. Maybe it is not exactly the same, but at least you get an extra link to the home page instead of linking to a page you don't want to link to.

In my Sandcastles approach, I used a dynamically increasing number of tag links depending on the number of dofollow comment and trackback links.

The old code is broken due to WordPress and my clumsy editing, but needs a refresh due to changes in WordPress tagging anyway (I used to use UTW)

At the same time the tag pages now need to be much higher quality, and have external links removed rather than nofollowed.

The same effect can be created in many different ways, especially on social sites where you can have links to user profile pages that have lots of internal linking.

@TheLostAgency

There are possibilities of a negative effect, but the key is to have more linking on pages that have lots of external links; on pages that don't have lots of external links, you can then laser-focus your juice and anchor text.

As Rand correctly points out, this change suggests a return to "old school sculpting" -- what I originally labeled Dynamic Linking -- and the ebook still contains the original Javascript-based code.

But IF there really was a change in nofollow, and IF the math of the example is being correctly described, then there is a "silver lining" to this change for all but the web's largest sites. Here's why.

PageRank is a probability distribution and the totality of PageRank across the entire Google index has to sum to 1, so the PageRank missing from this change does not simply "evaporate" -- it is added to the "random teleport" probability on the page where the nofollowed links appear. This is a random PageRank bleed!

In the comments section of Matt's post, he mentions that you wouldn't be too far off if you imagined that this PageRank was going to the reset vector, so the results of your original hack may not be far off, if this information is to be trusted.

There is one interesting thing to note with the "simulation" you did. Even with that PageRank bleed, your 0 page still ended up with ~.2 PageRank points more than it had without the sculpting.

So in essence, perhaps you now have a very definite point of diminishing returns that will penalize for aggressive nofollow use but still provide benefit with light use.

Of course, when we get out to the "real world" it's not quite so cut and dried, since we don't have Google's PageRank numbers and we start adding all sorts of inbound and outbound links and so forth, but it would stand to reason that very light use of nofollow would still provide benefit over not using it.

However, if the above is true, I would prefer to look into other methods (external JavaScript, overhead page consolidation with named anchors, and taking more time to really think out site structure).

According to Cutts, the computations are way more convoluted than what we know based on the original PageRank patent. He also hinted that the lost PageRank is "reset", not redistributed to the rest of the web - whatever that means. So I'm not convinced that nofollows result in random PageRank bleeds.

He says that what happens to it is essentially what happens to the reset vector - the 15% (in the original PageRank paper) probability that the user requests a new page.

Here's what he says about the reset vector in his PageRank sculpting post - "you could think of it as 10-15% of the PageRank on any given page disappearing before the PageRank flows along the outlinks."

But as Leslie mentioned, probability values don't really "disappear." The reset vector would distribute PageRank across the index, as the user would be requesting a new random page.

So if what happens to the PageRank of a nofollowed link is essentially the same thing as what happens to the reset vector, then he's saying that it would be distributed back across the index.

Of course this is the first time I've really considered PageRank at this level of detail, but I think I got everything right. :)

"So what happens to the PageRank that belongs to those nofollowed links? For example you have a page with 50 “points” of PageRank, 50 links, and 25 of them are nofollow. So that page passes 25 points of PageRank. What happens to the other 25? Does it get discarded? Redistributed to the rest of the web?"

His answer:

"Halfdeck, it’s a bit complicated, esp. since Google doesn’t view pages exactly in the framework as “classic PageRank” any more. You can think of that PageRank going into the reset vector without being too far off."
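The "reset vector" reading in the exchange above can be checked with a toy power iteration. This is a sketch under the assumption that a nofollowed link's share is redistributed uniformly across the graph, like the teleport probability - it is emphatically not Google's actual code:

```python
def pagerank(adj, n, damping=0.85, iters=100):
    """Toy power-iteration PageRank. adj maps node -> list of
    (target, followed) outlinks. Under the post-change model sketched
    here, nofollowed links still count toward the outdegree, but their
    share is routed to the reset (teleport) vector instead of the
    target page. Illustrative assumption only."""
    pr = [1.0 / n] * n
    for _ in range(iters):
        nxt = [0.0] * n
        leaked = 0.0  # mass sent to the reset vector this round
        for node in range(n):
            links = adj.get(node, [])
            if not links:  # dangling page: everything goes to reset
                leaked += pr[node]
                continue
            share = pr[node] / len(links)
            for target, followed in links:
                if followed:
                    nxt[target] += share
                else:
                    leaked += share  # nofollowed share -> reset vector
        # Teleport term plus the leaked mass, spread uniformly:
        pr = [(1 - damping) / n + damping * (nxt[i] + leaked / n)
              for i in range(n)]
    return pr

# Page 0 links to 1 (followed) and 2 (nofollowed); 1 and 2 link back to 0.
pr = pagerank({0: [(1, True), (2, False)], 1: [(0, True)], 2: [(0, True)]}, 3)
print(sum(pr))  # ~1.0: the "lost" PageRank re-enters via the reset vector
```

Note that the distribution still sums to 1: under this reading, the nofollowed share doesn't "evaporate" so much as get sprayed back across the whole index.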

There is not now, nor has there ever been, any visitor benefit in employing nofollow. It has always been a manipulation of the Google PR game. Period.

Google claims that sites built to be user-friendly and that incorporate great content don't have to play "SEO games". Yet they simultaneously built a system that encouraged and rewarded the use of nofollow to prevent the flow of PageRank from one page to another. Hypocrisy? Hmm...you be the judge.

Regardless of the intent of using nofollow - whether related to paid links or simply in an effort to preserve PR - this practice has absolutely no benefit whatsoever for a site's visitors. It is a major reason that SEOs with any amount of experience know that when Google says, "Don't worry about the nuances of our algorithm, just focus on building a user-friendly and content heavy site...", they're full of crap.

There is not now, nor has there ever been, any visitor benefit in employing nofollow. It has always been a manipulation of the Google PR game. Period.

I disagree... if only slightly. I'm certainly not an average visitor, so I do acknowledge that the following exception is an irrelevant extreme case, but nevertheless... it's an exception:

I have a customized stylesheet for Firefox that formats nofollowed links (including image links) differently than the rest. This allows me to easily spot all the links that a website has chosen to "not vouch for." This may seem like trivial information to most people, but it actually plays a significant role in my browsing decisions. For example, it suggests:

You're correct, Darren. I was referring to "direct/instantaneous" benefits to the average visitor while actually on an individual site - not the effect of nofollow on PR flow to other sites and potential impact of that on SERPs. There are always exceptions, and - as you pointed out - you're certainly not "the average visitor".

The vast and overwhelming majority of visitors don't even know of the existence of nofollow links, nor would they want to. Google encouraging webmasters to employ nofollow knowing that the primary intent in doing so is to manipulate algorithms flies in the face of everything they say about building sites for visitors, not for search engines.

It may have helped them do a better job of cleaning up SERPs, but it did nothing to improve the experience of someone visiting the site that employed the nofollow.

Option C: An embed in Flash, Java or some other non-parseable plug-in that contains the desired links

Just wanted to point out that Google is also parsing Flash... great idea to use the iframe. However, I think the best solution is just to have great content and enough inbound links that it doesn't matter much if any link juice is lost.

I believe if Google wants SEOs to help them index relevant pages and bypass irrelevant pages through nofollow, there needs to be more of a tangible incentive for us. Hence the control of link juice was perfect.

This is definitely an interesting one. I think some re-reading of Matt's less-than-crystal-clear post and Danny Sullivan's response are in order.

On the surface of it, though, my spidey sense seems to be saying this is a confusing and potentially harmful change. From the comments on Matt's blog, it's clear that he has already confused a lot of webmasters/website operators as well as riled/confused a lot of SEOs.

For some years we've been fans of "PageRank massaging" rather than "PageRank sculpting"... where we add a subtle "targeted link text paragraph" at the bottom of many pages, focusing text links back to, say, five pivotal pages of the site... with exceptional results, one of which is a remarkably high number of double/indented listings.

Doesn't always suit every site/brand/positioning but for a big site of 10k indexed pages, it really does provide uplift.

Thanks Rand.. would be great to see the results of the tests you mention.. wow what a change for SEO.. the industry remains anything but boring.

I guess very careful attention to site structure and hierarchy is required here. It's going to push SEOs to demand some significant site design changes, and we'll have to be very careful we don't harm site usability in the process.

Unfortunately Rand, I agree that this will probably lead to even more complex operations with how sites handle all the various kinds of links that a page might acquire.

Even more unfortunate, it may be a necessity to help protect the integrity of the page or site, such as in the case of excessive links being added via UGC/comments to force dilution. Then again, for this to be effective it would probably have to be en masse, and then not much different from regular comment spam.

The troubling part is that, aside from outgoing links in comments, no one had to adopt the idea of PageRank sculpting unless they wanted to... it simply could have been a tool to be used. This change seems to force a more active consideration on all sites.

I can't help but think there is a much bigger issue at hand. I always had concerns about users applying nofollows in any kind of sculpting manner without thought or planning, since a website is like an organism, all interconnecting... who's to say what the greater impact might be to downstream PageRank flow from blocking off a section? So many of us made careful, thought-out decisions on usage, but it's still challenging for anyone to calculate the impact to a site, let alone the entire link graph.

And perhaps that is the greater issue at hand: the larger impact to the link graph and any rippling effect that all this additional nofollow usage has had across the web. It's troubling that this change was made a year ago and is only now being made known, while all the while Google representatives were making (albeit sometimes reserved or cautious) recommendations on its usage. I respect Cutts & White and the team at Google and feel they may be feeling a little caught in the middle here, more messengers than architects.

There are different alternatives mentioned here for how to hide links, like iframes and JavaScript. Why not hide links from the bots with a server-side (PHP) script in combination with some .htaccess rules?

There is a big difference between those mentioned and server-side solutions. With JavaScript, iframes, and the nofollow attribute, you're not showing different content to different users. You're simply exploiting the search engines' inability to crawl certain kinds of links. The risks of server-side techniques are much more serious, as you soon find yourself in cloaking territory.

It depends on the implementation. Did you have something specific in mind?

You mentioned .htaccess, so I'm guessing you're talking about something like this:

1. Server receives a request for an un-important page*.
2. Server determines if client is Google.
3. If not Google, return the resource (200 OK).
4. If Google, return a 301 Permanent Redirect to an important page (e.g. the home page).

*Un-important page = a page we don't want to accrue PageRank.

This example can be interpreted in different ways. You might interpret it as "serving users content that is not served to search engines at all." However, a Google employee might interpret it as "serving a body of text to users, but serving a redirect location to search engines."

Other implementations are possible, but I can't think of any that would retain 100% PageRank AND avoid violating the Quality Guidelines.
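For what it's worth, the redirect flow described above could be sketched in .htaccess with mod_rewrite roughly like this (the URLs and user-agent match are illustrative; as noted, this is cloaking territory and risks violating the Quality Guidelines):

```apacheconf
# Hypothetical sketch of the flow above. CAUTION: serving Googlebot a
# redirect while users get a 200 is cloaking territory.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} Googlebot [NC]
RewriteRule ^unimportant-page\.html$ /important-page.html [R=301,L]
```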

I have to disagree with a few previous comments. If your site (or client's) was in trouble with duplicate content, I think the proper use of rel=canonical tags can help with actual PR leakage/duplicate content and therefore sub-index issues, as one sees in the Google Webmaster account, site:example.com, and analytics - and this can be proven if you have the experience.

Also, the way it is described by Cutts, the nofollow tag does seem to add to leakage, or rather 'evaporation,' of link juice. While test results have yet to creep up and yell BLAMO in your face, we can infer that having tons of comment links on your site/forum/blog means you will need to look out for your own site and handle users' comments with some care.

I'd say remove the standard URL field... still accept comments, and keep your forum open for links, as this is how you drive traffic to your site and keep the lights on.

I have seen a site get well over 10k long-tail terms so specific to the niche that they dominate many topics. Yet it still goes without saying that the forum creates traffic and linkbacks that all make it to the main page in the end.

If you do not have these components, PR sculpting is more an art of keeping architecture on point over internal/external/inbound links, as it always has been. Certainly linking to authorities/sources is sometimes necessary and, according to Cutts, somewhat welcomed.

Matt C. himself explains that he rel=nofollows for conflict-of-interest reasons and to keep his RSS out of the SERPs (both still endorsements/payola). So with this I think we get the point. Use of rel=nofollow stops the endorsement according to the Goog, but in reality we all see who we link to and who is linking to what brand.

Matt is just asking that we all be stewards of our links, so he himself does not have to control all spam linking. Meanwhile, as mentioned, there are ways around using the nofollow tag - it is a non-endorsement link with costs associated to the linker and no benefit to the linkee.

I'm going to basically repeat what Case brought up because I don't see an answer.

Let's say you have an image that is linked and a text link beneath it both going to the same URL. Previously, I would have said to nofollow the image link to give the text link the credit. Sounds to me like now, the better option is not to link the image at all or to turn that link into a format that Google doesn't see as a link (Flash, external javascript, etc.).
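One way to sketch that "non-link" image (markup and file names are hypothetical; note the post's caveat that Google is now crawling basic JavaScript, so this is not guaranteed to stay invisible):

```html
<!-- Hypothetical: the image is clickable but is not an <a> element,
     so crawlers may not treat it as a link; the text link below it
     keeps the anchor-text credit. -->
<img src="product.jpg" alt="" style="cursor: pointer"
     onclick="window.location.href = '/product.html';">
<a href="/product.html">Product name</a>
```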

So tell me this: I remember running a site a few years back and, granted, things have changed, but nevertheless I was able to gain a PR of 1 within a few short weeks. Now, on a different site with a whole lot more quality and unique content, I can't seem to break the basic barrier of a PR 0. I understand that there are different niches, but it seems to me that something is terribly wrong.

When I do a backlink check, I only have one link showing back from my YouTube channel; a couple of months ago I had five or six showing up in Google's backlink checker, then slowly they disappeared, and now I am down to one link. I have backlinks from PR4 and PR5 sites, some from indexes and others from recommendation posts with only my link there. None of them show up. And while I understand that Google may have changed their PR/link-calculating algo to discount the low-PR backlinks, it seems odd to me that a PR0 YouTube link would count for something while a PR4 or PR5 established site linking out would not.

Here's my address. Keep in mind I have recently (as of this comment) moved to Blogger and quite possibly lost SERP positions, since my host dissipated into thin air and I was unable to access my cPanel to set 301 redirects - so I have unfortunately lost standings and traffic which I had been building up for months. I also hope Google doesn't penalize me for seemingly duplicate content, because I reposted some of my older posts onto Blogger.com (still under my domain, though), but now they are all listed under one date. The quick fix would be to use the same permalink structure, but unfortunately Blogger.com does not allow for changes in the link/permalink structure, and so I am now stuck.

I know, why didn't you just find another host, well it's a long explanation and I've said enough. If anyone cares to add their two cents, I would certainly be very grateful to you and your help.

Hey Rand, I have just read through the whole comments thread; as usual, there are a lot of interesting views on the subject.

Did you guys ever get the results back from the tests you mentioned? I think that would help clear up a lot of the speculation going around and also set out some guidelines that we can all work along. A few of the people I have been speaking to claim that PR sculpting using nofollow is still working really well for them.

I'm currently working on a tube/video site that has 230k inbound links, but around 70% of links on the homepage are latest/featured videos, and then around another 15-20% of links are to external sites. From everything I've read, it looks like the only solution is to iframe these links and increase the number of links on the homepage to the main categories we're trying to rank for. Anyone got any suggestions/alternatives on this?

Google says that PageRank is not the most important factor for ranking well in the SERPs, but in my experience the pages with the most PR on my site (free Android and iPhone games) rank higher than the pages with less PR. Thanks for the explanation of PR.

Hasn't Matt Cutts already stated that Google uses a method along these lines? True, he says Google's algorithm for calculating it is more advanced than the model Rand is discussing, but that still suggests the old PageRank model is part of their system. Similarly, the fact that Google has stopped people from using nofollow in this way would seem to suggest that it was influencing Google's results (why else would they discontinue it?), which means that PageRank sculpting using nofollow probably did work at one point.

While Michael is right that we don't have a perfectly accurate model of how PageRank flows, that doesn't mean we can't use past information to deduce that it does flow. It just means that, like any other method used in SEO, it should be augmented by other methods that give a good user experience. Sure, reducing the number of pages on a site will most likely influence PageRank - it always would have - but since we can't measure how much influence it will have, we should be wary of putting too much focus on it, especially if it could be badly applied and degrade the user experience.

I've set up my robots.txt to disallow some JavaScript files that were causing errors when Google would index my site, and I'm still getting those errors... Is Google disobeying my robots.txt file?

I wonder: once they index a file and have it, will they continue to crawl it, whereas if you disallow it from the get-go they'll never get to it?

Or do I have a unique situation? My site's reporting functionality tells me about any error that happens on my site, along with the IP of the user that triggered it, so I'm able to tell that Google is continually trying to crawl a file I disallowed.

Google definitely caches files in many different ways, but I don't know if they continue to access those cached files after the live version has been disallowed. However, according to the robots.txt protocol, there is nothing that says Google would be obligated to throw out their cached version, so technically they'd be "within their rights" to keep using it... as long as they aren't getting it from your server.

Something you could try:

Change the name of your .js file, disallow it from the very beginning, and delete the old one. Then see if the errors dry up.

If you are on an Apache server and you have the rights to edit your .htaccess file, you can use an X-Robots-Tag "noindex" directive. It is a bad practice to use robots.txt to disallow Googlebot from accessing CSS or JS files, as it seems like you are trying to hide something.
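As a sketch of that approach (the file pattern is illustrative; this requires Apache's mod_headers):

```apacheconf
# Send a noindex directive via HTTP header for all .js files, instead of
# blocking them in robots.txt. Googlebot can still fetch the files, but
# they are kept out of the index.
<FilesMatch "\.js$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```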

Some people said in Google Groups that the nofollow change affects only internal links and not external ones (that you do not lose PageRank by nofollowing external links). What is your opinion on this?

Thanks. So, you are suggesting just having two links, one an image and one text, both pointing to the same URL? I would think that you would want to be sure the text link got the credit. Some folks seem to think that the first link the spiders find is counted and the second link to the same URL is discarded. In this example, the image link would be crawled first.

Also, would that mean that if you had 50 images on the page with 50 corresponding text links that the link juice is divided by 100 or are the duplicate links ignored?

The 2-links-per-product setup is fairly common with ecommerce websites, and I would expect Google to recognize that by now. So my recommendation of 2 links is based partly on that assumption. Additionally, I've done a few informal experiments regarding the weight of text links vs. image links, and I haven't seen any difference as a result. But obviously, the image has to have the alt attribute in place. For example, I've tried changing this:

<a href="URL">KEYWORD</a>

to this:

<a href="URL"><img src=".JPG" alt="KEYWORD" /></a>

for links across an entire site, and nothing happened. (Plus, when you look at the cached text version of a page, you'll see that Google has changed all the alt attributes to regular text links. That's not proof... but it's a good sign.)

If you only want to use one link per product, I'd go with Option 2 from my previous comment.

Regarding your questions about which links get counted (1st, 2nd, or both), I don't think the SEO industry ever came to a solid conclusion on that. Your best bet at this point is to:

1. Avoid doing anything fancy that Google might not recognize. In other words, stick with what most sites are doing.

Since I first heard of PageRank sculpting via no-follow I've had a bad feeling about it, and I'm glad they've foiled it.

It has always seemed a little grey hat to take something that Google made so that you can essentially say, "Even though I'm publishing a link to this site, I'm not voting for the quality of the content," and use it to say, "this is how I want your algorithm, which is designed to try to improve the user experience and pull relevance from actual content, to interpret my website's relevance."

And I also think that the idea of nofollow, and how it was thought to (or did once) interact with PageRank, was making SEO-minded web professionals greedy with their link juice... which will kill the system of sending votes to other websites.

Now that they have ensured that nofollowing links will not increase juice saturation in the other links, webmasters can use nofollow the way it was intended... to avoid voting with your PageRank for sites you don't approve of.

2. If your site really does use RDFa, you can use iframes the same way everyone else does. W3C's RDFa primer says:

To date, because XHTML is extensible while HTML is not, RDFa has only been specified for XHTML 1.1. Web publishers are welcome to use RDFa markup inside HTML4: the design of RDFa anticipates this use case, and most RDFa parsers will recognize RDFa attributes in any version of HTML. The authors know of no deployed Web browser that will fail to present an HTML document as intended after adding RDFa markup to the document. However, publishers should be aware that RDFa will not validate in HTML4 at this time. RDFa attributes validate in XHTML, using the XHTML1.1+RDFa DTD.

Hey guys - just a reminder that we love spirited debates and good back and forths, but it needs to remain professional and not get personal. Darren - no need to "call bullshit" - there are far friendlier ways to disagree and we'd love if you could switch to some of those. :-)

We want to make SEOmoz as welcoming a place as possible - if you're seeking more of a conflict-approving zone, there are lots of other places on the web that permit this.

1. Okay, I retract my "bullshit" statement. What I meant was... I find it very hard to believe that anyone has a website that:

a. Uses the XHTML+RDFa 1.0 DTD, and
b. Validates, and
c. Needs PageRank optimization, and
d. Requires the use of iframes.

I assumed you were just trying to be a wise guy. Honestly, I still don't believe such a site exists, but the point is... I should have expressed my skepticism more subtly.

2. I don't think I posted any "misleading, inaccurate and incorrect information" in my previous comment, especially since your question made no mention of W3C validation. Also, I made no attempt to interpret the quote from W3C--I simply copied and pasted it, allowing for readers to interpret it for themselves. In fact, I referenced the W3C specifically because I know I'm NOT an expert in document type definitions... nor do I claim to be.

With that said, I'll give your question another shot.

XHTML 1.1 may not include (i.e. validate with) iframe elements by default, but you can extend its definition or even define your own DTD. In other words, the "extensible" characteristics of XML/XHTML give document authors the power to control what is or isn't considered "valid."

But again, I'm not an expert, so I encourage you to consult the following resources for more information:

I don't understand that response. Maybe you misread my tone? My previous comment was borderline apologetic, yet it seems to have angered you even further. So let me make myself clear: I'm not trying to offend you.

I am also an advocate of web standards, semantic web, and accessibility... and I definitely understand the extra effort it requires, so kudos to you for setting the example with www.seoworkers.com.

The only thing that's still unclear is why you're asking about iframes if you don't need them?

My apologies if I have misunderstood you. I am glad we could get this out of the way. And thanks for the kind words about my web site.

Now, about the iframes. I am also looking for solutions to block bots from following certain links, like affiliate or paid links, that would have the same effect without violating any web standards and/or search engine guidelines, while remaining accessible and usable for users.

A long time ago I developed a sort of alternative to the nofollow attribute for the purpose above, but I am not happy with it. To be specific, I am not happy with the URL accessibility and usability.

I posted that recently at WPW forums, where you can have a look: http://www.webproworld.com/search-engine-optimization-forum/78553-new-canonical-tag-big-3-a-4.html#post440204

I know it is not an alternative that discounts a link entirely, and there is a leak of PR juice as far as I can tell. Or is there? But I am sure you know what I am getting at.

With all of the sculpting going on nowadays, I'm actually really surprised that we didn't see a major flux in rankings "more than a year" ago when they made the change in their algorithm. There are so many implications here:

Why didn't webmasters notice increased traffic to those internal pages that were "hidden" previously with nofollow??

My Take: this will discourage webmasters even more from linking out to valuable resources and will actually create more spam in the long run.

"All of the millions of published spam comments now get PR passed... (that is madness!)"

Really? Doesn't Cutts say this:

“[*] Nofollow links definitely don’t pass PageRank. Over the years, I’ve seen a few corner cases where a nofollow link did pass anchortext, normally due to bugs in indexing that we then fixed. The essential thing you need to know is that nofollow links don’t help sites rank higher in Google’s search results.”

Rand, this is a really helpful post. Looking forward to your test results. Basically it means that I have to start testing on my own again rather than relying on what official best practices are. What a pity, really. With Nofollow, we were actually going somewhere...

hi rand,
I read various blog posts and all the comments posted here as well as on Matt's blog. This is very disappointing, especially for SEOs working on advanced techniques, as Google is changing algorithms without backward compatibility.

The solutions you provided for keeping links from being indexed are good, and they are the only alternative for link sculpting.

As far as I know, I will keep doing natural and thematic link building, and I am sure it will rock.

I think this move will force webmasters to do a much more controlled version of sculpting on page, reducing the number of links per page.

So now we have per-page issues with links; we also know footer and templated navigation links are devalued to different degrees.

We'll see a temporary spike in AJAX, Flash, and iframe links, especially from UGC; there's also some strong potential for black-hat-style cloaking here... anyone working on a WordPress comment iframe plugin yet? (You heard it on SEOmoz first!)

I also think we'll see an increase in the number of in-content links that are not really contextually relevant, just to get links to other pages without losing the value of footer or navigation links.

Lastly, I believe the big silver bird will return, if I could just figure out what to do.

One other point: in Matt's post, he says he uses nofollow only for his RSS feed, yet he uses rel="external nofollow" on every link on the page.

I can't find an RFC on nofollow, but is Google now treating rel="external nofollow" differently than rel="nofollow"?

If you have a page with 60 PR points and 4 out-links, those links would get 15 points each, right? Now nofollowing 2 of them would leave 58 points to divide... at least that's what Matt said. My point is: this also needs to be investigated. If the esteemed colleagues didn't see a change in recent months, then this could be a reason. My thought on this: the nofollowed links don't evaporate all of the juice they would otherwise get.

Q2. So an "outgoing link" is also an external link? In one of his answers to questions Matt suggests, that external links are treated differently. A lot needs to be tested :)

“Google itself solely decides how much PageRank will flow to each and every link on a particular page. The number of links doesn’t matter. Google might decide some links don’t deserve credit and give them no PageRank. The use of nofollow doesn’t “conserve” PageRank for other links; it simply prevents those links from getting any PageRank that Google otherwise might have given them.”

I'm in no way on the level of those guys but it seems a very solid view to hold onto.

I'll preface this by saying that I'm still sorting out the details like everyone else, but my current understanding is that the PR would still be split 4 ways, but that the nofollow'ed link juice would evaporate. So, if you've got 60 "points" going to 4 links (15 each) and nofollow 2 of them, 30 point evaporate and 30 are left for the 2 followed links.

But how could it be that, when not all links weigh equally (confirmed), these nofollowed links would evaporate the same as all the others? My guess is that the algo is much more complicated than that, and that the rank-point system Matt mentioned isn't a simple proportional thing - he simplified the example for better understanding of what is going on.

Oh, absolutely - this is a gross oversimplification. The gist is that the nofollow'ed links are now being "counted" - instead of having their link juice flow into followed links, they simply lose that link juice into the void now.
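That oversimplified model (and the pre-change behavior described at the top of the post) can be sketched in a few lines of JavaScript. This is a toy illustration ignoring the damping factor, not Google's actual algorithm:

```javascript
// Toy model of PR flowing to each followed link, ignoring the damping
// factor. oldModel = true reflects behavior before the change, when
// nofollowed links were excluded from the denominator.
function prPerFollowedLink(points, totalLinks, nofollowed, oldModel) {
  const followed = totalLinks - nofollowed;
  if (oldModel) {
    return points / followed;   // nofollowed links didn't count
  }
  return points / totalLinks;   // nofollowed share now evaporates
}

console.log(prPerFollowedLink(60, 4, 2, true));  // 30 per followed link
console.log(prPerFollowedLink(60, 4, 2, false)); // 15 per followed link
```

With the example above: before the change, the 2 followed links got 30 points each; now they get 15 each, and the other 30 points are simply lost.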

And what is left for the live links? That is my point. You lose some, no question, but how much do you win if done right? Nofollows in the right places, dofollows in others (with links to the same pages). Endless variations are possible, some more successful than others. Again, you lose some, but the win might just be enough to do it anyway.

None of their options sound very 508/accessibility friendly (or at least easy). "An embed in Flash, Java or some other non-parseable plug-in"... I mean, come on. Non-parseable? Maybe the iframe solution. But even if you block that in robots.txt, a gazillion other bots/scrapers that don't respect robots.txt will still pick it up. Ten bucks says Google revisits their stance on this sooner rather than later.

As a sometimes conspiracy theorist I have to associate 1) the delayed release of this information and 2) the direct relationship that Google places between nofollow and SEO's with this post from Outspoken Media. http://outspokenmedia.com/seo/google-profiles-seo-as-criminals/

It really is a shame that all the potential positives that nofollow brought into the SEO world are being tossed out the window with this pseudo-new announcement.

Also, I would like to openly call for a new tag since nofollow isn't going to work like we want it to. There is NO reason why we shouldn't be able to implement a rel="nocount" rule that eliminates a link from the denominator.

@alun I see benefits from Google's perspective in having people use nofollow. When sites use something else to accomplish the same end, Google has less information about the nature of the web to use in its algorithm. There's a tradeoff to be had between Google collecting nofollow-related information and allowing the intended flow of PageRank with a rel tag. If Google sees a loss of valuable data as a result of sites linking to each other with Flash, JavaScript, and iframes, would that be enough incentive for them to implement a similar tag?

Oh, and don't get me wrong, I'm all for the count-double, but I think that might be a little more difficult to sell them on.

If you tell Google not to index the page and not to follow its links, i.e. with robots meta tag directives like noindex and nofollow, you will have a problem. Google will not index the page, but don't forget that a noindexed page still accrues PR. And if Google cannot follow the links on that page, you create what is known as a dangling page (or node), with the result that you leak PR.
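For reference, the meta robots combination being discussed is the following; per the comment above, such a page still accrues PR but can pass none of it on, making it a dangling node:

```html
<!-- The page is kept out of the index AND its outgoing links are not
     followed; PR that flows in has nowhere to go. -->
<meta name="robots" content="noindex, nofollow">
```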

I developed a solution which I mentioned above and posted at WPW forums here http://www.webproworld.com/search-engine-optimization-forum/78553-new-canonical-tag-big-3-a-4.html#post440204

From my understanding, I cannot see how PR leakage would be possible with it.

After thinking this over for most of the afternoon I have to wonder whether this whole issue is being blown slightly out of proportion.

Given that the change was announced over a year ago and no one really seemed to notice, that in our agency it hasn't had an effect on best practice, and that for the most part the only people who seem to be confused are those not already familiar with the practice of PR sculpting, it would appear that a combination of over-analysis and scaremongering may have led to a slightly sensationalist take on the announcement.

I'm not directing this at SEOmoz or anyone in particular as there are many posts about it across the SEO community, but much like the so called 'revolutionary' canonical tag this appears to be (until significant results prove otherwise) a flash in the pan.

OK so in short adding the nofollow to a link will stop Google from passing juice through that link but that nofollow link can still greatly change the weight and ranking abilities of the page it resides on.

No. This definitely does NOT mean siloing is "coming back." Siloing (in theory) used the nofollow attribute to channel topical relevance and create "themes" within a given website. The so-called practice of "themeing" a site died when Google gave the following answer in October, 2008:

Q: Let's say my website is about my favorite hobbies: biking and camping. Should I keep my internal linking architecture "themed" and not cross-link between the two?

A: We haven't found a case where a webmaster would benefit by intentionally "theming" their link architecture for search engines. And keep in mind, if a visitor to one part of your site can't easily reach other parts of your site, that may be a problem for search engines as well.

The only way siloing might have improved rankings is if it was inadvertently channeling PageRank to landing pages. But in light of Matt's announcement, it should now be clear that siloing is dead. Or in other words:

siloing = creating a hierarchical information architecture that gathers small chunks of internal PageRank and sets them on fire.

I believe you are misunderstanding the original concept of what it means to use siloing on a site.

Siloing is simply doing what Matt recently advised when asked if we should pagerank sculpt - spend your time picking the best links for your main pages, to send your pagerank there.

Siloing is nothing more than providing graduated levels of internal linking that are contextual to the section of the site the surfer is in. You create a silo with links, and if you were trying to create them by nofollowing other links, it was never a good approach, and could never be comprehensive.

Link laterally to similarly themed pages, ones that share a large stem of the breadcrumb navigation - do this more than linking to pages further away in theme (or silo), and you eventually create the silo shape with your links.

You say that siloing might have improved rankings if it inadvertently channeled PageRank somewhere - but there is nothing inadvertent about traditional silo linking. You're linking up to parent category pages from lower-level pages; you're channeling your PageRank there without subtlety. You're linking sideways to create relevance, and upwards to push PR - then you simply optimize the higher-level pages for more competitive keywords, and if you pushed your PageRank (and other quality indicators) at them properly, you'll have a better chance of ranking for competitive terms with your hub pages.

That's why we silo: to push juice up, to enhance the relevancy and popularity of pages that need more to compete. It has very little to do directly with the nofollow attribute.

Damn, this is well said. You should see the looks I get from clients and other SEO colleagues when I say to do this! It is natural and common sense for your syndication strategy for your custom, dynamic content. This is why I also promote big websites!

This is more in line with the original intent of the NoFollow tag - it is intended to prevent PageRank flow to pages for which a webmaster cannot vouch. It was never intended to be used to discredit your own pages.

"Ben Finklea, this is a change that’s been live for well over a year; if you’ve got a site that works for you and you’re happy with, I wouldn’t worry about going back to change a lot of work."

I do SEO for a website with several million indexable pages; indexation issues are my bread and butter. We have seen very positive results these past months using a combination of changes, including nofollowing internal links. This means that either nofollowing internal links never helped indexation, or that if it helped, site architecture and general crawlability were largely the critical factors. Anyway, we are redesigning, and I still believe PR matters in some way for indexation, so we will use old-school JavaScript redirects or iframes, as Rand points out.

I'm finding it very hard to imagine several million pages of quality content on a single site. Forum site? Photo site?

I'm guessing Google has noticed that sites with a high ratio of links to words per page tend to be lower quality (higher bounce rate). I'm wondering now about stitching a page together in the style of 1997 frames, but using iframes... hmmm.

In summary... screw PageRank sculpting and screw the nofollow attribute. As I said when they first came out, they are both a total load of B*$£$"X!!!!! Nobody should be using nofollow or PageRank sculpting; if you can't do it the proper way, then you're wasting your time.

I strongly oppose the idea people have that PageRank sculpting is about "hiding content" from search engines. If you do that, then I agree it's downright stupid, except for special cases like logins. There are other - and in my view smarter - ways to use nofollow on links: linking from the homepage but not from other pages, or vice versa. Or nofollowing links to all categories except the category the product resides in, plus the home page. That way you "sculpt" a real tree structure, which is the most efficient layout for medium-sized sites and the fastest way for a robot to crawl the site.
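That category-tree rule is simple enough to express in code. A minimal sketch of the commenter's scheme, with hypothetical page and category names (this is one reader's approach, not an official recommendation, and per the news above it no longer conserves PageRank):

```python
def internal_link_rel(current_category, target_page, target_category):
    """Decide the rel value for an internal link under the
    tree-sculpting rule above: always follow the home page and
    links within the page's own category; nofollow links to
    every other category. Names are illustrative."""
    if target_page == "home":
        return ""                  # home page: always followed
    if target_category == current_category:
        return ""                  # same silo: followed
    return "nofollow"              # cross-category: sculpted away

# From a product page in a hypothetical "shoes" category:
print(repr(internal_link_rel("shoes", "running-shoes", "shoes")))  # ''
print(repr(internal_link_rel("shoes", "winter-hats", "hats")))     # 'nofollow'
print(repr(internal_link_rel("shoes", "home", None)))              # ''
```

Applied consistently at template level, a rule like this carves the flat "everything links to everything" navigation into the tree structure the comment describes.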

So I take it you'd need to overhaul the professional guide to PageRank optimization in light of this announcement?

I wouldn't say it needs an overhaul. More like an update. For example, I would obviously remove the parts that suggest using the nofollow attribute.

However, the nofollow attribute is just one of several PageRank controls mentioned in that guide, and it's definitely not something I focused on or relied on.

In fact, as soon as I read Danny's post (which mentions JavaScript issues that are also relevant to PR optimization), I immediately reread my guide to see what would need to be changed. And I was amazed by how well the guide holds up, despite the news from Matt Cutts. In particular, I think this paragraph did a wonderful job of preparing readers for the unknowns:

We will soon discuss several ways to code links that search engines can't crawl, but keep in mind that the only foolproof way to be sure a link isn't getting PageRank is to not have it there at all. The more natural your link code is, the less you have to worry about search engines crawling links you didn't want them to. So yes, we will be making assumptions about what types of links count towards PageRank distribution, but we don't know what the future holds. Google is constantly making improvements to their ability to crawl JavaScript URLs, forms, and even links in Flash files, but we may never fully understand how or if those links affect PageRank. The bottom line is: the only way to know for sure that a link isn't passing PageRank is to not put it on your page at all.

By the way... the more you focus on ROI, the more likely you are to engage in grey or black hat SEO. I'm sure spammers enjoy a very high return on their investments, but that doesn't make them exemplary SEOs.