Firefox has decided to implement the PING attribute. The idea is to let sites track what people click on with minimal impact on the user. Currently tracking is done by linking to a URL which then redirects to the real destination. This takes only a fraction of a second, but it does take time, and if the intermediate site is down the user can’t get through to the final site, even if it’s up.

The PING attribute attempts to solve this by moving the tracking out of the critical path and into a separate attribute, which the browser POSTs to when the user “follows the hyperlink”. The feature also allows one new piece of functionality, the ability to track links within pages, so <a href="#top" ping="http://jibbering.com/tops"> would let me track how many times people used my go-to-top links.
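A sketch of the markup as the draft describes it (the URLs here are illustrative, not real endpoints): when the user follows the link, the browser navigates to the href as normal and separately sends a POST to each URL listed in ping.

```html
<!-- Illustrative only: tracker.example.org is a hypothetical endpoint. -->
<!-- Clicking navigates to the article as usual; the browser also
     POSTs to the ping URL, off the critical path. -->
<a href="http://example.org/article"
   ping="http://tracker.example.org/log?link=article">read the article</a>
```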

The use case described in the documentation is “allowing advertisers to track click-through rates without obscuring the final target URI”. It also stresses that following the pings is optional, which has an important consequence for anyone actually serving adverts: if you use ping rather than the current reliable tracking methods, some clicks will no longer be counted as click-throughs. So if your ad agency switches to ping, you will lose clicks that you’re entitled to. I don’t know of any online ad agencies which are planning to use ping, but the WHAT-WG specification is controlled by a Google employee.

Of course reputable ad agencies have their click-throughs audited by independent auditors to ensure they are accurate, so any that switch to ping will soon be forced to switch back to ensure the results - and therefore the monies - are accurately reported. It’s clear the use case described by the WHAT people is not met by the attribute. The only other use case mentioned is “track which off-site links are most popular” - hardly a particularly important one. But if there are no downsides to the method, does it matter?

Unfortunately there are downsides. Existing tracking methods must end up at the site the user expects to go to, otherwise the user will be annoyed; with this method you can ping any site. For example, <a href="http://jibbering.com" ping="https://bugzilla.mozilla.org/duplicates.cgi?maxrows=10000"> would ping a Bugzilla page that causes a lot of processing on the remote server and returns a lot of data. So this sort of simple denial-of-service method is made easy, and users will never know it’s happening - all they’ll see is a big slowdown in their connection as it spends its time requesting pointless resources. Then there’s the inflation of click-throughs itself: copy the ping attribute from your Google advert onto your other links, and any link a user leaves the page from is counted as an advert click. This is hard to detect, because the ad provider is completely outside the link; the only way to check is to see whether the received pings match the sent links.
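The click-through inflation is easy to picture in markup (the ad URLs below are hypothetical): the same ping value is simply repeated on unrelated links.

```html
<!-- Hypothetical: ads.example bills a click whenever this ping URL is hit. -->
<a href="http://advertiser.example/landing"
   ping="http://ads.example/click?campaign=12345">the real advert</a>

<!-- The same ping copied onto an ordinary link: leaving the page this way
     is now also counted, and paid for, as an advert click. -->
<a href="http://unrelated.example/"
   ping="http://ads.example/click?campaign=12345">somewhere else entirely</a>
```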

Like many of the WHAT-WG proposals (but not all), this is a poorly thought out proposal, and it’s disappointing that the browser vendors are not meeting it with the critical inspection they would give any other proposal. PING fails to meet its own use cases and it introduces lots of potential for abuse. If you’re creating a user agent and thinking of implementing this, think hard about what it could be used for.

This entry was posted on Fri Jan 20 18:08:00 UTC 2006 and is filed under Script, Standards.

Comments

So far, I’m still waiting for someone to give a persuasive explanation of the DDoS risk: how is <a ping> more dangerous than <img src>? Sure, there are plenty of things like forums that refuse images for XSS reasons while allowing links, but are they really that smart while not being smart enough to strip unknown or undesired attributes on links? On the other hand, if Bugzilla’s duplicates.cgi doesn’t accept POST requests from some other form I’m forgetting about, one or the other of us ought to be filing a bug on it to reject them, at which point the DDoS risk to it from <a ping> goes away.

The speed advantage has been horribly poorly marketed, because it really isn’t that much of an issue when you are opening a single link. Where I get hit, badly, is opening ten or twelve external links that go through an internal redirector first. With dialup. It’s not that uncommon to get into situations where several will time out waiting for connections, since even though I’m connecting to ten or twelve different servers, first I need ten or twelve different requests to one server.

Between the requirement for opt-out in the spec, and the utter unreliability, I don’t actually see anyone using it for ad click-throughs: even Google doesn’t have enough muscle to force ad carriers to accept easy opt-out click-through. But if I could never again accidentally post a Yahoo search result redirect URL in my weblog, or never again have to see someone’s “this is my RSS feed click tracking URL” posted in someone else’s weblog, I’d still call it a win. That some people won’t use it is no loss from the current situation, and nobody has explained how the DDoS risk is any greater than with existing elements, so I just don’t get what all the uproar is about.

The DDoS risk is definitely minimal; the point is an illustration of how this is a very different sort of feature from redirects. Because redirects are on the user’s path, they cannot do anything bad, simply because the user has to end up on the site they expect. PING removes the notification from the user’s path, so now there’s nothing to stop the author from pinging anywhere - from adverts on the page to random sites purely for mischief.

That the Bugzilla page returns content is not a bug; there’s nothing in the WHAT “spec” which lets server authors distinguish pings from other requests. If you get hit by internal redirectors, use another service: stop using Yahoo, stop using Google with scripting enabled, and your problem will go away. The complete unreliability of this feature is the main problem - it simply does not meet the use cases provided. As you note, no one now claims one of the two use cases is realistic, and the only remaining use case is something no users particularly care about.

There are many ways to improve the feature, but first there needs to be clarity on the use cases, something that exists for just about all the WHAT features.

Actually, I guess there is one case where the DDoS risk is different from all the myriad other ways: if you need to attack a POST-only form responder, which will still create load from a POST without any params, then <a ping> would save you the trouble of having to script an XMLHttpRequest POST, and would let you reach users with scripting disabled. Fairly minor, though, compared to the Chicken Little claims of the /. crowd.

That the Bugzilla page returns content is not the bug I was thinking of: my basic CGI security plan, right or wrong, is to always refuse GET if I expect POST, and always refuse POST if I expect GET, because anything I don’t expect is a bad thing.

But, “stop using Yahoo, and remember to disable scripting before using Google”? Surely you’re not serious about that as a widespread alternative to implementing something that won’t cure the world’s ills, but will meet a particular need?

PING doesn’t meet a particular need - that’s my point. Google or Yahoo could do PING today trivially with script; there’s no need for a ping attribute, and they choose not to because they want accuracy. PING is inaccurate, and even more than that it offers users nothing, so Norton Security and ZoneAlarm and all those other “protect your computer” products will block it. The problem is not the concept, although I don’t really see that much value in it; it’s the implementation. PING is simply not a good enough solution to be worth the implementation cost.

Designing the SHOULD UI from the WHAT spec is going to be extremely difficult: how do you explain “this link is going to report that you’ve clicked it to X, Y and Z” in a way that makes a user go “sure, that’s okay”?

So yes, I am saying that if sites which use redirects for tracking are too slow for you, use different sites. It’s a good solution: the site is doing something that harms you, and you’re getting no direct benefit from it, so why are you happy to pay the cost? There are other services you can use.

Incidentally, a scripted ping has no cross-domain security issues, unlike PING, which specifically requires cross-site ability to meet its stated use cases, so it’s not quite comparable to XMLHttpRequest.

Click pings meet a need for anyone who’d like to track outgoing clicks, in a web-friendly way, without turning all of their outgoing traffic into redirect URLs.

I’ve been hesitant to adopt a click-tracker on any site that I run because I like bestowing Google PageRank on destination sites. I’d like the information, though, and if I can get it in a way that browser users can turn off, I’d jump on the feature.

I implemented a PHP class library to support these pings today so I could see them in action. The sky-is-falling crowd is missing a chance to provide an opt-out, clean implementation of click tracking that can get rid of a bunch of clumsy, no-opt-out hacks that meet the same need.
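Here is a minimal, hypothetical sketch of two such scripts (the function names and the logging endpoint are my own illustration, not a real API):

```javascript
// Two hypothetical click trackers. Both leave the real destination in
// href, so links stay bookmarkable and search-engine friendly; the
// tracking request is fired separately when the link is clicked.
// "logEndpoint" (e.g. http://example.org/log) is illustrative only.

// Build the logging URL, encoding the destination so the log
// script can record where the visitor went.
function trackingUrl(logEndpoint, targetHref) {
  return logEndpoint + '?url=' + encodeURIComponent(targetHref);
}

// Approach 1: fire an image request on click. The browser starts
// the fetch even as the page unloads to follow the link.
function trackWithImage(link, logEndpoint) {
  link.onclick = function () {
    new Image().src = trackingUrl(logEndpoint, link.href);
    return true; // let the normal navigation proceed
  };
}

// Approach 2: a synchronous same-origin POST before navigating.
// Blocks very briefly, but guarantees the request was sent.
function trackWithPost(link, logEndpoint) {
  link.onclick = function () {
    var req = new XMLHttpRequest();
    req.open('POST', logEndpoint, false); // false = synchronous
    req.send('url=' + encodeURIComponent(link.href));
    return true;
  };
}
```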

Both track external links without breaking the linkability of resources. Obviously these are trivial scripts I’m going to put in a blog comment, not something I’d actually recommend using: you’d need to do some object detection to prevent errors, make sure they only fire on left clicks, and so on.

Certainly they’re not accurate, but neither’s ping, so that’s no different.

Click-trackers are of no direct use to users, and probably of no indirect use either. The sky is not falling in: unreliable tracking is a good idea, but the PING proposal from the WHAT is not, because it simply doesn’t do a good job of meeting the tracking use case.