Google Click Fraud Report

In preparation for the court’s final approval of the settlement in the Lane’s Gifts v. Google click fraud case, the parties hired an expert to assess Google’s click fraud control mechanisms. The report can be found here. The report is a good but dryly written primer on CPC advertising, click fraud, and the operational challenges of detecting and correcting for it. The money line: “I conclude that Google’s efforts to combat click fraud are reasonable.” A few observations:

The Report Reached the Only Conclusion It Could Reach

This report’s finding is hardly surprising. Both the plaintiff’s counsel and Google wanted/needed this report to bless Google’s practices. The report’s principal audience is the judge who is approving the settlement. If this report said that Google was doing a lousy job, then the report might derail the settlement. But plaintiff’s counsel wants its $30M payoff, and Google wants to wipe the click fraud liability off its books for $90M (or less). Plus, if the report said that Google’s practices were unreasonable, the advertiser response would be deafening. So this report had to come out in support of Google’s practices–there really wasn’t any other option. I want to be 100% clear–I’m not saying the expert was biased or that plaintiff’s counsel or Google distorted the process. But I’m sure the expert was selected with this outcome in mind, and I would not ignore the report’s role as a sales document.

The Expert’s Engineer-Centric Bias

The expert makes a big point about how he thinks that only engineers should decide how to set the parameters for the click fraud detection algorithms, free from any influence of the finance or business departments (presumably, such as marketing).

Obviously, this is a very engineer-centric view of the world, and I think it is misguided for three reasons. First, all of the employees are in the same boat. Engineers are not dumb; they know that their decisions can increase or decrease the value of their stock options. There’s no reason why engineers are less susceptible to this financial pressure than anyone else in the company.

Second, insulating the decision from marketing or finance does reduce the influence of their biases, but engineers have their own biases. In a sense, insulating the parameter-setting decision substitutes engineer biases for company-wide biases. Only another engineer would think this is a good idea (or not even recognize these latent biases in the first place).

Finally, other company constituents may have expertise that isn’t available to the engineering department. There may be revenue recognition implications from the parameter setting. Or, there may be legal implications of how the parameters are set. For example, a contract may promise an advertiser a certain level of protection, and if the engineers are unilaterally tinkering with the parameters, that contract may not be fulfilled. Or, the executives may have obligations under Sarbanes-Oxley to ensure the integrity of the algorithms, but if these decisions are being made without their input, the executives can’t fulfill their legal duties.

Indeed, one of my biggest frustrations as in-house counsel occurred when I realized that engineers were constantly taking legally significant actions with every coding decision they made. I realized the only way for me to properly perform my job was to clone myself, look over the shoulder of each and every engineer, and ask them with every keystroke–“what are you doing? what assumptions are you making? are those good assumptions? and how do those assumptions comport with commitments we have elsewhere?”

This expert seems to assume that engineers will make all of those decisions correctly so long as they remain free from influence from the rest of the company. I can assure him and you, based on first hand experience–they don’t.

The Doubleclick Problem

The report criticizes Google for counting doubleclicks (two clicks on the same ad by the same person in a short period of time) but praises Google for filtering those out starting March 2005, even though that decision hit Google’s bottom line hard. I’m glad Google fixed the problem then, but what about the pre-March 2005 doubleclicks? There’s no way for advertisers to identify those clicks based on the reports from Google, so the chances of these clicks being fixed through the settlement process are low. As a result, in my opinion, Google should recheck its logs (to the extent it still has them) to identify impermissible doubleclicks and immediately and unilaterally offer credits to affected advertisers outside of the $90M settlement. I think doing anything less is unethically profiting on clicks that its own expert has declared to be invalid.

Where Did They All Go?

The report says that only 1 person from the initial click detection effort in 2002 is still at Google, and that person was on “extended leave.” Where did everyone go? I don’t think this is just the typical Silicon Valley turnover. All of these people are pre-IPOers, so my guess is that they are now in Barbados, the French Riviera, Tahiti, sailing around the world in their luxury yacht, etc.


Perhaps I missed something, but I didn’t think the expert thought only the engineering group should handle the click fraud algorithms. He just seemed to point out that this was true. Ideally, it would be a good idea for the algorithms to reflect input from all departments (including the business side). IMO there should probably be an investigation to determine why this particular software didn’t/doesn’t undergo review from non-engineering departments.

Regarding doubleclicks, even if the initial log data is still available, the conditions under which the logs were processed may not be reproducible. For example, the parameter settings may not have been archived. Another consideration: if they use geotargeting, the mappings that were in place during a particular time period may not be available.

Determining a policy for doubleclicks is problematic because there are quite a few conditions under which they could be considered legit. An example is two people using the same computer (with the same IP address) but different browsers. It’s possible (though perhaps rare) that they’d both click on the same ad at nearly the same time, such as two friends who happen to be fans of the same musical group. Another is users behind a proxy who click on the same ads at nearly the same time. This could happen for very popular ads (e.g. an ad for the Super Bowl displayed around the time the game is being aired). (It could be argued that while the IP may be the same, the HTTP user agents and/or cookies may be different, but some people disable the presentation of this information.) This is actually a good example of something where the business and engineering sides should discuss options and formulate a policy that the engineers implement.
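To make the policy question concrete, here is a minimal sketch of the kind of filter being debated. Everything in it is hypothetical: the field names, the 10-second window, and the choice to key on (IP, user agent, ad) are illustrative assumptions, not Google’s actual schema or thresholds. Note how each design choice encodes a policy judgment, e.g. keying on the user agent deliberately keeps the two-friends-on-one-computer clicks:

```python
from dataclasses import dataclass

# Hypothetical click record; field names are illustrative, not Google's schema.
@dataclass
class Click:
    ip: str            # source IP (may be a shared proxy)
    user_agent: str    # may be blank if the client suppresses it
    ad_id: str
    timestamp: float   # seconds since epoch

def filter_doubleclicks(clicks, window=10.0):
    """Drop repeat clicks on the same ad from the same (ip, user_agent) pair
    within `window` seconds. Same IP but a different user agent is kept,
    per the two-people-one-computer scenario; a proxy whose users all send
    identical user agents would still be (wrongly) collapsed."""
    last_seen = {}
    kept = []
    for c in sorted(clicks, key=lambda c: c.timestamp):
        key = (c.ip, c.user_agent, c.ad_id)
        prev = last_seen.get(key)
        if prev is not None and c.timestamp - prev <= window:
            continue  # treated as a doubleclick: not billed
        last_seen[key] = c.timestamp
        kept.append(c)
    return kept
```

Whether the window is 10 seconds or 10 minutes, and whether proxies get the benefit of the doubt, are exactly the parameter-setting decisions the post argues shouldn’t be made by engineers alone.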

You might be interested to know that one of the AdWords engineering staff prior to the switchover from CPM to CPC posts on the Xooglers blog.

Seth Finkelstein (http://sethf.com/)

Regarding “There’s no reason why engineers are less susceptible to this financial pressure than anyone else in the company.”

Well, I’m an engineer, so I’m very biased – but I think there is some reason to believe this. Engineers have an ethos reflected in a joke that they’ll get something right technically even if it literally kills them. Financial people have a joke that runs: “Q: How much is 2+2? A: How much do you want it to be?”

In my life, I’ve seen a lot of pressure from management to rush a product before enough QA, to “make a number”. I’ve never seen an engineer do something like that for the value of the stock option (granted, I don’t have experience with megamillion dollar stock options either).

Still, while I’m not saying engineers are all saints, given a choice, whether to trust engineering, finance, or business, on parameters of an algorithm – I’d trust the engineers every time.