I’ve been following the Dish Hopper lawsuit closely; in fact, it’s next week’s topic in my copyright seminar. If the name sounds familiar, that’s because the Hopper is the DVR that was judged “best in show” at the Consumer Electronics Show by CNET until its corporate parent, CBS, forced CNET to redo the vote with the Hopper excluded. It’s capable of time-shifting an entire week’s worth of prime-time network programming, and has a one-button commercial-skipping feature.

The district court’s opinion dealt with two extremely important fair use issues: is it still fair use to tape TV with a DVR rather than with a VCR, and is it fair use to analyze copyrighted works to extract uncopyrightable facts? That opinion is now on appeal, and I’ve joined an amicus brief explaining why the answer to both of these questions should be “yes.” The lead author was Berkeley’s Jason Schultz, with whom I worked at the EFF almost a decade ago. His writing is as punchy as ever; the brief is a good statement of what’s at stake in today’s fair use cases.

I remember Aaron confronting Peter Singer — intellectual founder of the modern animal rights movement — at the Boston Vegetarian Food Festival to ask if humans had a moral obligation to stop animals from killing each other. I lurked behind, embarrassed about the question but curious to hear the answer. (Singer sighed and said “yes — sort of” and complimented Aaron on the enormous Marxist commentary he was carrying.)

Here is the big idea from the big thing that I have been working on for a long time, using only the ten hundred most used words, just like in Up-Goer Five:

This paper explains the two things we should do about search. Some serious people think that search should help people who talk get their words to other people. Other serious people think that the people who offer search talk for themselves and we should leave these people alone. All of these serious people are wrong, because the most important thing about search is that search helps you find things. Not someone else. You, and also me and everyone. It’s good when we can find things because it means we can learn, which is even more important than helping the people who talk or the people who offer search.

There are so many things we could look at that we need help to sort through them. So the first thing we should do about search is that we should usually leave the people who offer search alone so they will keep on helping us find things. Not always, because maybe sometimes the people who offer search will lie to us about what things there are or where those things are. That’s bad because it makes it hard for us to find things. So the second thing we should do about search is not let the people who offer search lie to us like that.

It’s important to think carefully about what it means to say the people who offer search “lied.” Sometimes the thing you want to find and the thing I want to find aren’t the same. I’m not wrong and you’re not wrong. We just want to read different things. There has to be room for us not to agree on what things are best, which means there also has to be room for the people who offer search to guess at what you and I want when we search. So a search answer isn’t a lie just because the thing it suggests isn’t the thing someone else wanted it to suggest. It’s only when the people who offer search really believe you’re looking for something and decide to show you something else instead that it’s a lie. When they do that, it’s right to be angry at them and we should make them stop lying.

These two things we should do about search also give answers to other questions about search. One of them is: should the people who offer search be able to tell you about things even when the people who own those things don’t want them to? Yes, because telling you about a thing isn’t the same as giving you the thing. No one owns the facts about where things are, even when someone owns those things. Search’s job is to help you find things, not to help the people who own the things. This is still being nice to the people who own things because once you find a thing, you can talk to the person who owns the thing and you can only take it from them if they let you buy it.

There will be much more soon, I promise.

(With help from the thing that helps make sure you really do say only the ten hundred most used words.)

The core of the case against Aaron Swartz was that he downloaded millions of academic articles from JSTOR without permission. He did so by sneaking into an MIT wiring closet and evading MIT’s and JSTOR’s attempts to detect and block him. But the heart of the case, the conduct without which there would have been no point and no problem, was the downloading.

To put this in perspective, I, too, am a bulk downloader. James has downloaded his thousands, and Aaron his ten thousands. And there but for the grace of the Assistant United States Attorneys (who wield god-like prosecutorial power), go I.

In law school, during my time at the Yale ISP, I wrote for and ran LawMeme, a blog about law and technology. (Here’s one of its greatest hits, Ernie Miller’s classic “Top Ten New Copyright Crimes”.) It was a Slashclone based on PHP-Nuke, and it ran from roughly 2001 to 2006 before succumbing to script kiddie penetration attacks, a lack of new content, and administrative neglect. The domain names expired, the content-management engine was hacked beyond repair, and the powers that be ultimately made the sensible decision to pull the plug and not to try reviving it.

But this meant losing an archive of about fifteen hundred posts. I had a strong personal attachment to some, like the post that would ultimately become Accidental Privacy Spills. Others, like my posts on the Search King lawsuit, were the first draft of history. Ernie’s posts on the copyright disputes of the early oughts were memorable, vivid pieces of writing that deserved to be saved.

So I took on the task of making a static archive of what could be salvaged from LawMeme. LawMeme itself had been dynamically generated: each page was assembled from various chunks of content thrown together by the server on the fly. The archive would consist simply of fixed, unchanging webpages. There’s no good index to them, but if you search for “LawMeme” and any of the topics we wrote about, you’ll see articles that look more or less as they did back in the site’s heyday.

But to create the archive, I couldn’t just go back to the long-defunct LawMeme site itself. Instead, I had to turn to the Internet Archive’s Wayback Machine, which keeps snapshots of webpages from over the years. But with well over a thousand posts to retrieve, I didn’t want to sit there copying by hand.

And so I became a bulk downloader. I wrote a Perl script: a simple, 70-line program that exhaustively went through the Wayback Machine, looking for a copy of each LawMeme article. Just like Aaron’s script, mine “discovered the URLs” of articles and then downloaded them. And just to show how mainstream this is, I’ll add that I built my script around an elementary one that Paul Ohm published in “Computer Programming and the Law: A New Research Agenda,” his manifesto for why more law professors should write code. Paul’s script downloaded and analyzed the comment counts on posts from the popular legal blog The Volokh Conspiracy.
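For the curious, the crawl itself was nothing exotic. Here is a minimal sketch of the idea in Python (my script was in Perl, and the function names and URLs here are illustrative, not my original code): the Wayback Machine serves each snapshot at a URL built from a capture timestamp and the original page address, so “discovering the URLs” and downloading them is a short loop.

```python
import time
import urllib.request

WAYBACK = "https://web.archive.org/web"

def snapshot_url(timestamp, original_url):
    """Build a Wayback Machine snapshot URL.

    The "id_" suffix after the timestamp asks the Archive for the
    page's original bytes, without the navigation banner that the
    Wayback Machine normally injects into archived HTML.
    """
    return f"{WAYBACK}/{timestamp}id_/{original_url}"

def fetch_all(snapshots, pause=1.0):
    """Download each (timestamp, url) snapshot, sleeping between
    requests so as not to put too much load on the Archive's servers."""
    pages = {}
    for ts, url in snapshots:
        with urllib.request.urlopen(snapshot_url(ts, url)) as resp:
            pages[url] = resp.read()
        time.sleep(pause)  # be polite: one request per second
    return pages
```

That is the whole trick: build the URL, fetch it, wait, repeat.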

I think this was completely legal. But in today’s environment of fear and prosecutorial intimidation, who can be sure? I own the copyright in my own posts, I had the permission of the ISP to create the archive, and the implied license that all of the contributors gave to LawMeme would almost certainly cover this backup. But almost certainly is not absolutely certainly. Maybe some AUSA wants to build a career taking down professors, putting me in the crosshairs.

Or take the Internet Archive’s terms of service. By using the site, I supposedly promised not “to copy offsite any part of the Collections without written permission.” The site’s FAQ qualifies this statement a bit, adding, “However, you may use the Internet Archive Wayback Machine to locate and access archived versions of a site to which you own the rights.” Again, I was confident that this covered me. But confidence is not certainty. I assumed that no one would care to press the question. After Aaron, is that such a safe assumption?

I can’t imagine that the Internet Archive would have a problem with what I did. Recreating lost websites for the sake of the public and posterity is completely consistent with Brewster Kahle’s expansive humanist vision of digital archiving. But JSTOR quickly made its peace with Aaron, and that didn’t save him. Would Brewster’s blessing save me from the wrath of the feds?

Indeed, my script waited a second between each download. I didn’t want to put too much of a load on the Archive’s servers. But a cyber-Javert could describe it as an attempt to evade detection. Then, to get the webpages to display right in the LawMeme archive, I wrote another script to delete the bits of HTML added by the Internet Archive to the pages in its archive. Was that an effort to hide my tracks?

Another one of Paul’s papers presciently predicted the way our computer misuse statutes were vindictively turned against Aaron. In The Myth of the Superuser, Paul describes how these laws are written to protect against a mythic bogeyman, the all-powerful demented superhacker, capable of breaking into and destroying any computer system, bent on sowing chaos and devastation online. But the laws are used to punish minor misdeeds by unthreatening defendants. Imagine Mr. McGregor training a howitzer on Peter Rabbit and you have the idea.

Aaron’s Law is a start, but the problems with our computer crime laws, and with criminal law in general, run much, much deeper. The Department of Justice thinks millions of parents who made Facebook accounts for their children are federal criminals. Read the majority opinion in United States v. Nosal and ask yourself whether you’ve fudged your age on a dating site, or let someone else use your account, or used a workplace computer to check the baseball scores. Judge Kozinski noted, skeptically, “The government assures us that, whatever the scope of the CFAA, it won’t prosecute minor violations.” Tell that to Aaron’s family.

Legal interpretation takes place in a field of pain and death. This is true in several senses. Legal interpretive acts signal and occasion the imposition of violence upon others: A judge articulates her understanding of a text, and as a result, somebody loses his freedom, his property, his children, even his life. Interpretations in law also constitute justifications for violence which has already occurred or which is about to occur. When interpreters have finished their work, they frequently leave behind victims whose lives have been torn apart by these organized, social practices of violence. Neither legal interpretation nor the violence it occasions may be properly understood apart from one another. …

Precisely because it is so extreme a phenomenon, martyrdom helps us see what is present in lesser degree whenever interpretation is joined with the practice of violent domination. Martyrs insist in the face of overwhelming force that if there is to be continuing life, it will not be on the terms of the tyrant’s law. Law is the projection of an imagined future upon reality. Martyrs require that any future they possess will be on the terms of the law to which they are committed (God’s law). And the miracle of the suffering of the martyrs is their insistence on the law to which they are committed, even in the face of world-destroying pain. Their triumph—which may well be partly imaginary—is the imagined triumph of the normative universe—of Torah, Nomos,—over the material world of death and pain. Martyrdom is an extreme form of resistance to domination. As such it reminds us that the normative world-building which constitutes “Law” is never just a mental or spiritual act. A legal world is built only to the extent that there are commitments that place bodies on the line. The torture of the martyr is an extreme and repulsive form of the organized violence of institutions. It reminds us that the interpretive commitments of officials are realized, indeed, in the flesh. As long as that is so, the interpretive commitments of a community which resists official law must also be realized in the flesh, even if it be the flesh of its own adherents.

If the prosecutor is obliged to choose his cases, it follows that he can choose his defendants. Therein is the most dangerous power of the prosecutor: that he will pick people that he thinks he should get, rather than pick cases that need to be prosecuted. With the law books filled with a great assortment of crimes, a prosecutor stands a fair chance of finding at least a technical violation of some act on the part of almost anyone. In such a case, it is not a question of discovering the commission of a crime and then looking for the man who has committed it, it is a question of picking the man and then searching the law books, or putting investigators to work, to pin some offense on him. It is in this realm—in which the prosecutor picks some person whom he dislikes or desires to embarrass, or selects some group of unpopular persons and then looks for an offense, that the greatest danger of abuse of prosecuting power lies. It is here that law enforcement becomes personal, and the real crime becomes that of being unpopular with the predominant or governing group, being attached to the wrong political views, or being personally obnoxious to or in the way of the prosecutor himself.

In times of fear or hysteria political, racial, religious, social, and economic groups, often from the best of motives, cry for the scalps of individuals or groups because they do not like their views. Particularly do we need to be dispassionate and courageous in those cases which deal with so-called “subversive activities.” They are dangerous to civil liberty because the prosecutor has no definite standards to determine what constitutes a “subversive activity,” such as we have for murder or larceny. Activities which seem benevolent and helpful to wage earners, persons on relief, or those who are disadvantaged in the struggle for existence may be regarded as “subversive” by those whose property interests might be burdened or affected thereby. Those who are in office are apt to regard as “subversive” the activities of any of those who would bring about a change of administration. Some of our soundest constitutional doctrines were once punished as subversive. We must not forget that it was not so long ago that both the term “Republican” and the term “Democrat” were epithets with sinister meaning to denote persons of radical tendencies that were “subversive” of the order of things then dominant.

Aaron Swartz’s extraordinary life was lived at 8x fast-forward. In the course of little more than a decade, he packed in every stage in a “normal” person’s career. After some early explorations, he found success in his chosen field, only to realize that the good life was not the life for him. There followed a period of wandering, learning, experimenting, in which he did some things almost as afterthoughts, things that bore fruit when he wasn’t even looking. And then, after his years in the wilderness, Aaron found causes and commitment, a way into organizing and politics. Nothing in his successes or his failures was so unusual; he just did it all before most of us have done any of it.

The word I have seen again and again today to describe Aaron is “gentle.” If he was slightly maladapted to the world, he happened to be suited to the world as it ought to have been.

Aaron Swartz, hacker wunderkind and digital activist, killed himself yesterday. He was 26. Aaron was a friend, and more than that, he was one of my heroes. No one I have known better embodied the bumper-sticker motto to “be the change you wish to see in the world.” It is hard to believe he is gone.

I use his work every day—twice over. Aaron helped write the RSS specification used to syndicate blog posts. He was 14. If you use a feed reader, Aaron’s work makes it work. He was also the chief—the only—beta tester on John Gruber’s Markdown tool for writing webpages using a simple plain-text syntax. He was 18. I write all of my blog posts, including this one, using Markdown, which remains a masterpiece of clean and minimalist design. I’m sorry that Jottit—a super-basic wiki using Markdown—never went anywhere, because it’s also a remarkably fun and easy-to-use tool.

Aaron didn’t play well with large institutions. He attended Stanford for a year and dropped out because he felt his classmates weren’t intellectual enough. He went through Y Combinator’s startup bootcamp and wound up on the Reddit team early enough that he was a founder in all but name. But when Reddit was bought by Condé Nast, Aaron didn’t last long. He was 20.

Instead, he directed his energy into finding things wrong with the world and fixing them. He helped found Creative Commons, making it technically and legally easy for people who want to share their work to do so. He was 16. He founded Demand Progress, a firebrand netroots lobbying group that fought for online civil liberties. He was 24. And he helped Larry Lessig launch Change Congress (now Rootstrikers), Lessig’s networked campaign to reduce the influence of money in politics. He was 22.

And, most famously, Aaron took direct, individual action to liberate America’s court documents. The federal courts use an electronic filing system, called PACER. Everything is accessible to the public, but at a fee of ten cents a page. The money far exceeds the costs of running the electronic filing system; the courts are actually violating federal law by diverting the fees to cover their other expenses. Aaron believed that these public-domain documents should be genuinely public.

So when a team at Princeton developed RECAP, a browser tool for PACER users to contribute to a public archive of these documents, Aaron personally downloaded millions of filings for the archive. He did it by going to a library that had been approved for fee-free use of PACER. The officers who approved this public trial of PACER had presumably not expected that the public would actually access the documents it was entitled to access, and the trial was quickly terminated. They sicced the FBI on Aaron, too, who was more amused by the attention than anything else. He was 23.

But this informal resume badly misrepresents who Aaron was, because Aaron was also a funny, passionate, playful, thoughtful, true American original. His blog shows his agile, slightly perverse mind at work: deconstructing the underlying political vision of the Batman trilogy, offering advice on “getting better at life.” (I like to think of Aaron’s essays as what Paul Graham should have written.) His Twitter was more of the same, just pithier.

It was always a joy when Aaron dropped by my office or we met up for dinner. He was interested in everything, from puzzles to programming to history, and willing to explore the implications of anything. He had the same curiosity, wit, and commitment to rigor on display in XKCD’s What If?. He wasn’t going to follow anyone else’s path; he was going to drift and wander and live the modern bohemian 20-something life. But wherever he crossed through, people would be just a little bit happier, a little bit better at working together, a little bit more optimistic about the future.

Aaron was driven by a passionate vision that the world could and should be a better place, that computers and collaboration and sharing could and should help, that he could and should do something about it. But he was hard-eyed about the world, too, he had little patience for idealism without concrete action, or for action without a meaningful theory of change.

But if Aaron dedicated his life to the cause of information freedom, it may in the end have taken his life. At the time of his suicide, Aaron was facing federal felony computer intrusion charges. He carried a laptop into an MIT wiring closet and used it to download millions of academic articles from the scholarly archive JSTOR. It was a terribly stupid thing to do. His PACER stunt was legal through-and-through; Aaron even gleefully obtained his own FBI file from the pointless investigation. But the JSTOR downloads were trouble, and he was caught red-handed going to the closet, using his “bicycle helmet like a mask to shield his face.”

MIT and JSTOR backed away from the case; they had no further beef with Aaron once he stopped. But the United States attorney’s office, perhaps still smarting from the PACER affair, or harboring a grudge over his digital activism, decided to make an example out of him. Aaron was depressed about his pending trial; the charges carried theoretical maximums of decades in prison. If that was the chief cause of his suicide, then the U.S. government has caused a great evil in the course of trying to punish a much lesser one. He was almost certainly guilty of some computer-misuse misdemeanors at least, but to press such heavy felony charges against him was a serious misuse of prosecutorial discretion.

The last time I saw Aaron in person was over dinner in Cambridge. He was late, of course. We didn’t talk about his trial, or about any of his other data liberation exploits. Instead, we talked about puzzles, and teamwork, and coding, and politics. I was up for tenure that spring, and facing the prospect that for the first time in years I would be simply free to choose my projects, without any deadlines or institutions telling me what I ought to be doing. So I asked him, in essence, what I should do with my life, because Aaron seemed to have answered that same question for himself, with greater courage, in the face of greater uncertainty, and with greater success than anyone else I knew. He was 25.

I am so, so sorry that Aaron himself didn’t see it that way. I will miss him bitterly.

Correction: Aaron downloaded millions of PACER documents before the Princeton team started work on the RECAP archive.

I’m quoted today in a New York Times article on how the well-funded startups rushing to offer massive open online courses (MOOCs) are finding it hard to bring in much money, much less to turn a profit. None of their ideas so far—including licensing courses back to colleges, selling certificates of completion, or charging employers for access to students—has been a breakout success. My point is that we shouldn’t mistake the failure of MOOC business models for a failure of the MOOC educational model:

“No one’s got the model that’s going to work yet,” said James Grimmelmann, a New York Law School professor who specializes in computer and Internet law. “I expect all the current ventures to fail, because the expectations are too high. People think something will catch on like wildfire. But more likely, it’s maybe a decade later that somebody figures out how to do it and make money.”

I would add that it is possible that MOOCs will upend higher education even as no one makes much money offering them. Amazon is currently sucking lots of money out of the retail industry; that money doesn’t end up in Amazon’s coffers, it simply stays in consumers’ pockets. Online education could do to universities what Wikipedia did to encyclopedias, or what open source has done to many parts of the software industry, or what amateur shutterbugs are currently in the process of doing to professional stock photography. The prospect should both excite and terrify anyone who works in higher education.

There is a necessary, but necessarily tricky provision in Google’s commitment to let websites opt out of commercial search:

Exercise of this option will not … (2) be used as a signal in determining conventional search results on the google.com search results page.

Some such provision has to be in the commitment, because severe demotion in search rankings is the practical equivalent of delisting a site. Something on the thousandth page of results does not exist from searchers’ point of view. So riddle me this: how will this promise be enforced?

Delisting is binary; it can be observed by issuing a sufficiently precise search. If I search for “hamburgers in topeka ks” and I don’t see any organic Yelp results, that might be innocent. But if I search for “yelp reviews” and I don’t see Yelp, something has gone very wrong.

But demotions can be subtler. A devious Google engineer could kick Yelp to the fiftieth page of results for any search containing the name of a recognizable item of food, but leave “yelp” unaffected. Yelp might suspect that something is wrong, but not be able to prove it just by issuing search queries and observing the results. So a demotion can be much harder to detect than a delisting.

The demotion I just described would leave behind a smoking gun: the search algorithm would have a few lines of code that special-cased the Yelp penalty. If Google has to show the FTC its algorithm, then FTC staff can — at least in theory — spot the malevolent tweak. But how would the FTC know to ask? And by what legal process would it be able to demand access to the algorithms?

These may be significant consequences of the FTC’s decision not to insist on a consent order. If the FTC suspects a violation of a consent order, it has contempt remedies available: it can run to court, reopen the investigation where it left off, and send in the geeks. But a mere voluntary commitment: for that, the FTC would need to spin up its enforcement apparatus from much closer to a standstill. Which leads me to suspect that the FTC isn’t actually planning to do so anytime soon.

On the other hand, I can also make a strong case that Google could not as a company lightly afford to put this particular commitment into a consent order. Here’s why. Imagine the Google engineer is even more devious. There is no “punish Yelp” part of the algorithm. Instead, the engineers know which sites have opted out—as a class, they’ll often be easy to identify—and they look for general-purpose algorithmic tweaks that just so happen to disproportionately hurt the Yelps of the web, the sites that have opted out. There is no explicit discrimination, just intentional discrimination in effect.

If one is seriously concerned about this scenario, then every algorithmic change is potentially suspect. You need the FTC looking over Google’s shoulder constantly, and the FTC needs to be in a position to challenge any tweak it’s concerned about. But this is precisely the kind of comprehensive regulatory review of search algorithms that Google has been fighting ferociously against in the search-bias space. That’s the victory that Google won today: not to have to say “Mother may I” each time it changes its search algorithms. Full-on enforcement of the promise not to retaliate for opting out of commercial search would involve the same kind of scrutiny as comprehensive observation by a Federal Search Commission.

So now I think I may have a handle on why today’s commitments took the form they did. Letting websites say no to commercial search without saying no to general search seems like a compromise, a self-contained solution to a self-contained problem. But the equivalence of demotion and delisting is a two-way street. To really make the opt-out stick, it’s necessary to say something about demotion as well as about delisting. But once you’re genuinely worried about demotion, there’s no escaping the full complexity and controversy of search algorithm oversight. Maybe not immediately, but quite possibly down the road. Google recognized this, and for that reason was intensely concerned that the opt-out commitment not be the nose that brings the regulatory camel into the tent.

For this reason, I predict that today’s settlement will be unstable.
By the end of its five-year term, one of two things will have happened. Either the FTC and the world will have stopped caring about the opt-out, and it will be broadly accepted that search engines have a free, or at least not closely watched, hand here. Or the search-bias investigation will have started up again as the regulators insist on ongoing, perhaps quite intrusive, oversight.

So ends the Federal Trade Commission’s long and contentious investigation into Google. Out of the four serious issues on the table, Google walks away cleanly on one (“search bias”), the FTC gets a clear victory on one (“standards-essential patents”), and Google makes mushy-mouthed “commitments” on the remaining two (“vertical opt-out” and “ad portability”). But the issue on which the FTC let Google walk—“biasing” its search results to favor its own content over competitors’—was far and away the most important. The mood over at the Googleplex has to be pretty good right now.

Standards-Essential Patents

The FTC didn’t start out focused on Google’s use of patents. But as the smartphone industry descended into a post-apocalyptic Hobbesian landscape in which scavengers picked over the shells of burnt-out companies for usable patents and litigators started waving around patent arsenals of fearful destructiveness, the FTC followed the smell of gunpowder to its source. Google acquired Motorola in large part for its patent portfolio.

One key piece of Google’s strategy, like that of everyone else holding smartphone patents of immense value, was to seek injunctions against other companies’ phones. These injunctions, however, undermine a key premise of the standardization process that makes these wonderful gizmos feasible: fair, reasonable, and nondiscriminatory (FRAND) licensing for patents that are “essential” to implementing a standard. FRAND licensing is in effect a pact by the companies putting a standard together that once they agree on a standard, none of them will try to hold the others for ransom by brandishing its patents like a C-list rapper. FRAND licensing doesn’t mean free licensing, but it does mean being willing to license at all, rather than asking the authorities to drop competitors’ phones into a Blendtec.

Recent judicial decisions—including this Richard Posner barn-burner involving Motorola—have been skeptical of injunctions on standards-essential patents. The FTC concluded, 4-1, that the practice is an unfair method of competition, and Google agreed not to engage in it in the future.

A cynic might say that Google threw in the towel because the winds in the land were already blowing against standards-essential patent injunctions. An even bigger cynic might add that Google and its Motorola marionette have tended to be at the wrong end of the patent stick more often than they’ve been at the right end, so Google in particular has even less to lose, and might even gain from entrenching the rule it just agreed to as an industry-wide one. A truly epic cynic might ask whether Google turned to standards-essential patents in the first place because the rest of its larder was so bare. A mere realist would look at today’s settlement as good news for Apple: one more Jersey barrier has been cleared from the path of the iPhone juggernaut.

Ad Portability

One of the defining features of the search industry is that it’s a two-sided market. In addition to the consumers who use Google, there are the advertisers who actually pay for it. Google’s first and most important money factory was its ability to match advertisements to specific search queries. In the years since, Google’s AdWords program has grown into a huge and complicated system that lets advertisers customize almost everything about their ads and when they’ll be displayed. Other search engines that compete with Google for users and for advertisers have their own similar systems; competition on the advertiser side thus involves advertisers transferring campaigns from one search engine to another.

Google’s critics alleged that Google restricted the use of AdWords in one clever and crucial respect: “The AdWords API Client may not offer a functionality that copies data between Google and a Third Party.” That is, you can advertise on both Google and Bing, but you can’t use a program to copy your Google AdWords campaign over to Bing. This provision and a few related ones, it was alleged, made AdWords into the Roach Motel of search advertising. A medium-sized advertiser—one big enough to have a campaign that would be painful to replicate by hand but small enough to depend on third-party tools to manage its campaign—would have a hard time switching out.

I have seen defenses of this rule. Indeed, FTC Commissioner Rosch offers one in his dissent from the commission’s statement closing the investigation: the rule “ensur[es] that third-party intermediaries take advantage of the unique features available on AdWords.” And even if the rule has few benefits, neither Rosch nor his colleague Commissioner Ohlhausen thought that it harmed consumers in a way that brought it within the FTC’s antitrust enforcement powers.

But this was never going to be the hill on which Google made its last desperate stand. However much Google gains from encouraging advertisers to create uniquely tailored campaigns by limiting the use of third-party tools, it has more to lose from a protracted legal battle. The only question is why Google didn’t simply delete the term earlier to make the issue go away. But today’s settlement gives the answer: a bargaining chip can only be bargained away once. Google held on to the term so it would have something to give the FTC as part of a compromise that preserved the powers Google really cares about.

Vertical Search Opt-Out

Google has had a series of long-running disputes with websites over just how it uses the content they put online. Google does offer an opt-out for websites. In fact, it offers three: a robots.txt file, META tags in webpages’ HTML, and its own Webmaster Tools forms. Any of these opt-outs can be used on a page-by-page basis, so I could, for example, ask Google to index this blog’s front page, but not the archives.
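As a sketch of how the first two opt-outs work in practice (the `/archives/` path here is hypothetical, standing in for whatever section a site wants excluded):

```
# robots.txt — tell Google's crawler to skip the archives but crawl everything else
User-agent: Googlebot
Disallow: /archives/

<!-- Or, on a single page, a META tag in that page's HTML <head>: -->
<meta name="googlebot" content="noindex">
```

The robots.txt rule keeps Googlebot from fetching matching pages at all, while the META tag lets the page be crawled but asks Google not to list it in results; both operate page-by-page, which is exactly why they can’t express “index me, but don’t reuse my snippets in your vertical.”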

But what these protocols don’t do is let websites opt out of particular uses that Google might make of the pages it indexes. So if you’re Yelp, and you didn’t want Google copying short snippets of user reviews for its own local search, your only practical option was to tell Google to go away entirely. But that would mean giving up your Google traffic, and one significant way that users find sites like Yelp is through Google searches in the first place. From Yelp’s point of view, Google was using Yelp’s own content to compete with it. Newspaper publishers have regularly raised the same complaint about Google’s snippets in Google News: they like the traffic Google provides but not Google’s competing product. The publishers tried their own alternative proposal for more granular opt-outs: ACAP. But it was badly designed and was never adopted by the one group whose adoption mattered: search engines.

Under pressure from the FTC, Google has now agreed to offer an opt-out from one of its vertical search properties. The details are interesting—and, as usual, all of the interesting details are in the footnotes. First, the opt-out only applies to “Covered Webpages,” which “have the primary purpose of connecting users with merchants.” Thus, Yelp can opt out of Google Local without opting out of Google, but if you want to opt out of Google Image Search without opting out of Google, no such luck. Thus, this rule really only applies to transactional vertical searches: ones whose goal is to find and buy something. It only really protects other organizing intermediaries, like Yelp and Expedia. Google can still strong-arm content producers, like newspapers and blog empires, to the fullest extent allowed by law.

Second, the opt-out is site-wide. You can opt out example.com as a whole, but not sub.example.com or example.com/sub by itself. I wouldn’t be surprised if there’s a back-end reason for this related to how Google’s crawlers and search-box integration systems work. It also doesn’t seem like a dealbreaker from the websites’ side of the table.

The opt-out changes the terms of the bargain between Google and websites moderately, but not very dramatically. Websites have always had the ability to opt out entirely, so the question is what kinds of concessions they can extract from Google in exchange for not taking their ball and going home. Google has proven willing to dicker over details in the past; it has changed its crawling rules to play nicely with newspaper paywalls, for example.

Here, Google conceded the essential point long ago: it gave Yelp the opt-out it wanted. Today’s concession institutionalizes this practice and makes it broadly available to other websites. But Google was clearly prepared to make this retreat. Call it the recognition of the inevitable, call it a reasonable compromise, or call it a concession offered to help the FTC save face, something like this has been waiting in the wings for months.

Search Bias

The center of the investigation, and of the complaints against Google for the past four years, has been “search bias”: the accusation that Google deliberately slants its search results to favor its own sites (like its own local results and flight search) over its competitors (like Yelp and Expedia). Last year, FTC staff wrote a 100-page report that reportedly recommended suing Google to prohibit search bias. But today, the FTC by a 5-0 vote decided not to take any action against Google over search bias. According to the FTC, then, whatever bias Google has engaged in is fine.

This outcome is a giant middle finger extended in the direction of the Google critics, like FairSearch, who have been calling for stringent FTC action on search bias. (Just yesterday, FairSearch put up a plangent blog post telling the FTC that there was “no reason to rush” its investigation.) I cannot overstate the extent to which the anti-Google case was premised on search bias. It was the hub to which all the other issues connected. Google’s vacuum cleaner approach to personal information and its systematic manipulation of its search results were the two constant themes Google’s enemies used to explain its actions and to appeal for government intervention.

But today’s settlement directly repudiates the search bias claims. It doesn’t say, “Google might be doing something bad but we didn’t find a smoking gun,” or “Google is doing something bad but not something the FTC can prevent.” No, the settlement says, “We looked, and what Google is doing is good.” If the final FTC statement had been any more favorable to Google, I’d be checking the file metadata to see whether Google wrote it. Just look what the FTC concluded:

Google was pure of heart: “The totality of the evidence indicates that, in the main, Google adopted the design changes that the Commission investigated to improve the quality of its search results, and that any negative impact on actual or potential competitors was incidental to that purpose.”

Google helped users when it helped itself: “Notably, the documents, testimony and quantitative evidence the Commission examined are largely consistent with the conclusion that Google likely benefited consumers by prominently displaying its vertical content on its search results page.”

The data agree with Google: “Analyses of ‘click through’ data showing how consumers reacted to the proprietary content displayed by Google also suggest that users benefited from these changes to Google’s search results.”

Everyone else did it, too: “We also note that other competing general search engines adopted many similar design changes, suggesting that these changes are a quality improvement with no necessary connection to the anticompetitive exclusion of rivals.”

Users win when (some) websites lose: “For example, for shopping queries, Google demoted all but one or two comparison shopping properties from the first page of Google’s search results to a later page. … These changes resulted in significant traffic loss to the demoted comparison shopping properties, arguably weakening those websites as rivals to Google’s own shopping vertical. On the other hand, these changes to Google’s search algorithm could reasonably be viewed as improving the overall quality of Google’s search results because the first search page now presented the user with a greater diversity of websites.”

Relevance is subjective: “Reasonable minds may differ as to the best way to design a search results page and the best way to allocate space among organic links, paid advertisements, and other features. And reasonable search algorithms may differ as to how best to rank any given website.”

Google needs a free hand: “Challenging Google’s product design decisions in this case would require the Commission – or a court – to second-guess a firm’s product design decisions where plausible procompetitive justifications have been offered, and where those justifications are supported by ample evidence.”

The decision to drop the search bias investigation, and this statement all but lauding Google, were made by 5-0 votes. That’s three Democrats and two Republicans, at the end of an investigation the FTC fought hard to get, for which it hired a veteran outside litigator, and for which its own staff was raring to go. If they just wanted to make the case go away, it could have been dropped more quietly, without the encomia to Google’s search-engine design. No, the end result of the FTC investigation is a prominent public vindication for Google.

The public case against Google—and hence also the legal case against Google in these uncharted antitrust waters—is now immeasurably weaker than it was before Google’s critics pushed so hard for an investigation. This is one of the biggest lobbying backfires I have ever seen. In fact, an ironist might argue that the lobbying backfired because it was so aggressive. Certainly it put Google on the defensive, hiring lobbyists and proxies. But it also seems to have raised hackles inside the FTC, undercutting critics’ credibility.

The FTC will “continue to monitor” Google on search bias, but further action seems unlikely unless something especially dramatic happens to illustrate bad bias in action. Google is still in negotiations with the EU over the same slate of issues, and there have been reports that the EU was preparing to take a tougher line than the FTC. (State attorneys general, also on the case, may well also try to bring their own actions against Google.) So Google might now take this outcome across the pond to push for a similar resolution there. Or, if the EU is able to extract bigger concessions, the FTC could end up looking gun-shy.

One Last Thought

One interesting piece of timing is that yesterday, the Senate confirmed legal scholar Josh Wright to be an FTC commissioner. Wright, who has written Google-funded papers defending Google from antitrust arguments over search bias, has recused himself from all Google-related matters for two years. So if the FTC had waited until Wright took his seat, it would have faced the risk that it would end up deadlocked 2-2 on this high-profile case. On the main issue—search bias—this wouldn’t have been a problem, but the FTC was fragmented over whether a voluntary “commitment” by Google was an appropriate remedy on vertical opt-out. Being down a Commissioner would have increased the risks of deadlock somewhere in the case.