Standardized analytics subject for debate

Given the faith that so many people place in the robustness and growth potential of Internet advertising, it's somewhat understandable that the news of Google's decline in paid clicks, according to last week's ComScore report, had many observers adopting a sky-is-falling mentality. Certainly, any sign that Google is starting to feel the weight of recession is cause for concern. But several factors may be contributing to these numbers, including Google's commitment to reducing both accidental clicks and click fraud.

Further, some are suggesting that ComScore's figures for Google, coupled with the reports that other search engines have not seen a similar decline, could be an indicator that Google's previous click data were less than accurate in the first place. ComScore's measurement is not foolproof either, and besides, Hitwise indicates that the amount of traffic Google is driving to online retail is increasing.

Google's fate aside, what this really shows us is how hard it is to make critical analyses of performance in a world where numbers are frequently not transparent, and even when they are, they're likely not standardized. It's not just in the pay-per-click realm; this can be seen in many parts of the marketing industry. As just one example, contradictory opinions on how to define e-mail deliverability recently led the Email Measurement Accuracy Coalition to publish a suggested universal definition, the first of many for e-mail marketers.

Marketers and the companies that support them struggle daily to analyze incompatible results. Attempts are often made to standardize, but success is rare. While I'm not marching out with a shaking fist demanding change, I'd be interested to hear some of your suggestions and theories on standard measurements in various sectors. Please consider the debate open, and tell me what you think.