Google Spain and the “Right to Be Forgotten”

The Court of Justice of the European Union (CJEU) has decided the Google Spain case, which involves the “right to be forgotten” on the Internet. The case was brought by Mario Costeja González, a lawyer who, back in 1998, had unpaid debts that resulted in the attachment and public auction of his real estate. Notices of the auctions, including Mr. Costeja’s name, were published in a Spanish newspaper that was later made available online. Google indexed the newspaper’s website, and links to pages containing the announcements appeared in search results when Mr. Costeja’s name was queried. After failing in his effort to have the newspaper publisher remove the announcements from its website, Mr. Costeja asked Google not to return search results relating to the auction. Google refused, and Mr. Costeja filed a complaint with the Spanish data protection authority, the AEPD. In 2010, the AEPD ordered Google to de-index the pages. In the same ruling, the AEPD declined to order the newspaper publisher to take any action concerning the primary content, because the publication of the information by the press was legally justified. In other words, it was legal in the AEPD’s view for the newspaper to publish the information but a violation of privacy law for Google to help people find it. Google appealed the AEPD’s decision, and the appeal was referred by the Spanish court to the CJEU for a decision on whether Google’s publication of the search results violated the EU Data Protection Directive.

The question presented to the CJEU by the case was “whether [the Directive is] to be interpreted as enabling the data subject to require the operator of a search engine to remove from the list of results displayed following a search made on the basis of his name links to web pages published lawfully by third parties and containing true information relating to him, on the ground that that information may be prejudicial to him or that he wishes it to be ‘forgotten’ after a certain time.” The CJEU has now answered that question in the affirmative:

[I]f it is found, following a request by the data subject…, that the inclusion in the list of results displayed following a search made on the basis of his name of the links to web pages published lawfully by third parties and containing true information relating to him personally is, at this point in time, … inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes of the processing at issue carried out by the operator of the search engine, the information and links concerned in the list of results must be erased.

The search engine operator’s obligation, the CJEU went on to say, is triggered whether or not “the information in question in the list of results causes prejudice to the data subject.” It is limited, however, by a preponderant public interest in access to the information in question, if, for example, the subject is a public figure.

Google Spain is not the first case involving search and the “right to be forgotten” in the EU. In January of this year, a German court ordered Google to purge its image search results of six images showing former FIA president Max Mosley participating in a “Nazi-themed” sex party. The German decision followed a similar ruling in France involving the same photos. The Mosley photos, to my mind, present a closer privacy case than the Costeja auction announcements because they are intimate in nature and were not a matter of public record until they became fodder for scandal sheets. Whatever the circumstances of Mr. Costeja’s financial difficulties, they resulted in official action against his property. And, as any lawyer in the U.S. knows, an assessment of one’s character and fitness to practice law entails inquiry into how one handles money and debt. That may not be true for lawyers practicing in the EU, but it’s certainly true here.

The evolving law in the EU concerning the “right to be forgotten” is problematic not only from the perspective of intermediary liability but also from the perspective of the integrity of the historical record. The CJEU’s embrace of a double standard with respect to the information (i.e., it was permissible for the newspaper to publish it in 1998 but not for Google to link to it in 2014) raises a difficult question about the law’s role in empowering people to sculpt their public information profiles to conform with an image of themselves that they want the world to see. There is no question in Mr. Costeja’s case that the information at issue was true or that it was appropriately publicized by the newspaper. Does damaging public information become private simply by virtue of the passage of time? How stale does information have to be to be considered “irrelevant or no longer relevant”? And what is the standard for measuring relevance? Relevant to what, to whom, or for what purpose? I can only imagine how the cottage industry of online reputation management will grow in the face of this expanding “right to be forgotten.” Search intermediaries will be more than ever curators of the content they index, which is a development that I, as a consumer of information and a user of search, don’t welcome.

There is, undeniably, a certain brutality to the fact that information on the Internet can live forever. And there are surely circumstances in which private personal information that has become public is so damaging when weighed against its truth value that it shouldn’t be kept alive through persistent links. I worry, however, about the broad scope the CJEU seems to be creating for the “right to be forgotten.” As this door opens wider, the burdens on search intermediaries are becoming both greater and more nebulous, and the threat to the integrity of the digital historical record is increasing.

Comments

It’s the erasure that bothers me (perhaps wrongly). Google and other search engines tinker with the order of search results all the time for their own business reasons, so I don’t think I would have a problem with (under certain conditions) a judicial determination contributing to the downranking of some pages. Especially since most of us don’t commit news very often, and thus anything spectacularly bad that happens to us (whether our fault or not) will persist for a long, long time.

On the other hand, this case suggests that exactly the wrong people will be taking advantage of compelled amnesia — insofar as the Streisand effect allows them to. For example, I wouldn’t be surprised if searches on Mr. Costeja from now on contained a huge pile of top references to his insistence on purging references to his financial troubles from the internet — until he sues to purge those stories as well…

Vinge’s “Friends of Privacy” to the rescue: several percent of the people on the planet were at that Nazi sex party after their stuff got auctioned. (It was a stressful situation for all, so of course everyone had to unwind.) Though less popular, in the apartment upstairs from there, another few hundred million people were at the Bolshevik-themed sex party a week later, after their drunk driving convictions. I think I was at the Star Trek themed sex party after that tense, close statutory rape acquittal that you all read about in the papers… except it was so long ago, I don’t remember whether or not that really happened! Did it really happen, or do I just remember reading about it through Google News back in 2024 and thought it so hilarious that I incorporated it into my little mythos? Damn you, Friends of Privacy!

But conceptually, how does this differ from US laws that say credit report agencies have to delete a bankruptcy from a report on a person after a number of years? That bankruptcy is a historical fact — is not putting it on a credit report, no matter how long ago it was, a “threat to the integrity of the digital historical record”? Can a credit report agency now start opposing such laws as problematic under the justification that they run a “financial search engine”?

Conceptually, I guess it doesn’t differ. Congress made a policy judgment in bankruptcy law that the interest of a bankrupt person in having a “fresh start” outweighs the interest of potential future creditors in knowing the past bankrupt’s full and accurate credit history. The CJEU is making a similar judgment in its interpretation of the Data Protection Directive. In both cases, the law is requiring the erasure of accurate historical information, and to that extent, the erasure alters the historical record. One can agree or disagree over whether such erasures are good policy. My concern about the CJEU decision is that the parameters it sets for when such erasures are justified are much too broad and indistinct.

But how are the parameters different from standard judicial concepts such as “fair use” or “in the public interest” or “reckless disregard for truth” or “due diligence”? If we can have removal of links for copyright infringement, based on not passing the notoriously complex, vague, difficult-to-understand lawyer battlefield of “fair use”, how does this even rate on that scale?

It’s a fair question. Generally, we leave it to courts to work out difficult, standards-based analyses of things like fair use. Making sure fair use rights are respected has been one of the most difficult problems associated with scaling and automating online copyright enforcement. In the copyright context, intermediaries understand their removal obligations through the safe harbor provisions of Section 512 of the Copyright Act (the DMCA), which sets out clear parameters for when an online service provider (OSP) must remove allegedly infringing content if it wants to avoid secondary liability for infringement. From the OSP’s side, an assessment of fair use doesn’t factor into removal decisions within the framework of the DMCA; if the OSP gets a compliant notice of infringement from a copyright owner, the material has to come down if the OSP wants to be protected by the safe harbor.

There is, however, a fair use safety valve in the DMCA, in the form of a provision that allows a user who believes his or her content has been wrongfully taken down to file a counter-notice with the OSP and have the content restored. When an OSP receives a compliant counter-notice, it has to notify the copyright owner that it will restore the content unless the copyright owner timely files suit against the user. If the copyright owner decides to sue, then the user’s assertion of fair use will ultimately be evaluated by a judge. If the copyright owner elects not to sue, then the counter-noticed material gets restored. Under the DMCA, if the OSP follows the protocol, it’s off the hook to the user with respect to the initial removal and to the copyright owner with respect to restoration of the material prompted by receipt of the counter-notice.
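The notice/counter-notice protocol described above can be sketched as a small decision procedure. This is an illustrative toy model only, not any OSP’s actual implementation; the names (`State`, `resolve`) are hypothetical.

```python
# Toy state machine for the DMCA Section 512 notice / counter-notice
# flow described above. Purely illustrative; real compliance involves
# deadlines, notice-validity checks, and repeat-infringer policies.

from enum import Enum, auto

class State(Enum):
    LIVE = auto()            # content is up, no compliant notice received
    REMOVED = auto()         # taken down after a compliant notice
    RESTORED = auto()        # put back after an uncontested counter-notice
    IN_LITIGATION = auto()   # copyright owner sued; a judge will decide

def resolve(notice: bool, counter_notice: bool, owner_sues: bool) -> State:
    """Walk the safe-harbor protocol for one piece of content."""
    if not notice:
        return State.LIVE            # no compliant notice: nothing happens
    if not counter_notice:
        return State.REMOVED         # OSP removes; user does not push back
    if owner_sues:
        return State.IN_LITIGATION   # fair use gets evaluated by a judge
    return State.RESTORED            # owner declines to sue: content goes back up

# The "fair use safety valve": a counter-notice forces the copyright
# owner either to sue or to let the material be restored.
print(resolve(notice=True, counter_notice=True, owner_sues=False))  # State.RESTORED
```

Note how the sketch makes the structural point visible: fair use never enters the OSP’s own branch logic; it only surfaces in the `IN_LITIGATION` state, before a judge.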

There is no analogous “fair use” safety valve in the CJEU’s decision in the Google Spain case. My concern about the decision is that it doesn’t give search engine operators any clear parameters for when they have to eliminate links to contested content. Without clear parameters, the inclination will be for the search engine operator to avoid risking liability by simply removing everything that it’s asked to remove, no questions asked. Pretty quickly, that takes us down a bad road in terms of censorship and expressive freedoms for the third parties who are the original source of the (lawful) information and for other third parties to whom the information might conceivably have a legitimate value or relevance. There’s really nobody to push back on a requested “right to be forgotten” removal, and there’s no reason for the search engine operator to go out on a limb to maintain links that might fall in a gray area.

“If the copyright owner decides to sue, then the user’s assertion of fair use will ultimately be evaluated by a judge.”

Copyright laws dictating the need for erasure were my specific thought when you made the assertion that deleting historical records “is problematic not only from the perspective of intermediary liability but also from the perspective of the integrity of the historical record.” So, I am glad someone brought it up… saves me a lot of typing.

You go on to make an assertion that copyright lawsuits ultimately are evaluated by a judge. In the entire history of the DMCA, I bet you could count on one hand the copyright lawsuits based on an initial DMCA takedown that have actually been evaluated by a judge. Most of the cases get settled out of court, and most of those settlements are fair users paying what amounts to blackmail money because it is cheaper to pay the blackmailer than to duke it out in court. Something that a Freedom to Tinker contributor should be aware of, as that is brought up a lot in this blog.

However, that said, your final paragraph of analysis there, answers my own query into why you feel this is problematic, and from that standpoint I agree. If there is no oversight on the other end, the trend will be for search engines to remove anything that they get asked to remove. And I see how that could become problematic.

I would address that with exactly what you ask for: “clear parameters for when they have to eliminate links to contested content.” But that is not for a court to decide; that would be for the governing bodies to dictate (e.g., the U.S. Congress in the US and whatever the Spanish equivalent is in Spain), and then, like you said, let the courts handle the specific cases after that.


My analysis and reasoning, from nothing more than your post, is that this court did get it right in this instance, however. This man clearly had a reason to want that content removed from search.

“You go on to make an assertion that copyright lawsuits ultimately are evaluated by a judge. In the entire history of the DMCA, I bet you could count on one hand the copyright lawsuits based on an initial DMCA takedown that have actually been evaluated by a judge. Most of the cases get settled out of court, and most of those settlements are fair users paying what amounts to blackmail money because it is cheaper to pay the blackmailer than to duke it out in court.”

You’re right that few lawsuits resulting from the notice and counter-notice framework in Section 512 of the DMCA ever get filed. In practice, very few counter-notices ever get sent to OSPs. The fact remains, however, that the counter-notice provision in the DMCA is a mechanism for protecting fair use. Or, at least, it was intended to be so. That’s really the point I was trying to make.

The cases to which (I think?) you refer as blackmail schemes are not section 512 cases; they’re mass John Doe P2P file sharing cases that involve so-called copyright trolls. P2P users and networks fall outside the scope of notice and takedown in section 512(c), so that’s a related, but different conversation to have. I’ve written on mass John Doe litigation as a consequence of the DMCA’s failure to scale for enforcement in the P2P context. If you’re interested in the article, you can find it at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1739970.

Prof. Bridy’s closing paragraph misses the point, I think, in two respects. First, it is irrelevant that the information persists. The Internet is replete with persistent records viewed regularly by approximately nobody. Second, search engines have assumed the burden of assessing relevancy, or so they proclaim. This ruling doesn’t suddenly impose it upon them. I assume Google routinely explores the questions in the penultimate paragraph here: “[W]hat is the standard for measuring relevance? Relevant to what, to whom, or for what purpose?” It answers these questions in terms that suit its business model, rather than, as in this case, in terms of Mr. Costeja’s view of relevancy. If there is a genuine legal distinction between public figures and private figures, and if Mr. Costeja is among the latter, then it hardly seems appropriate for Google’s interests to ignore that distinction. (In the age of viral YouTube posts, I’d say it’s the blurring of that distinction that complicates this case.) Google and others argue that Google “merely points to” the information. (See here, for instance: http://www.wired.co.uk/news/archive/2014-05/13/right-to-be-forgotten-ruling) This is laughable. As commenter Paul notes above, Google and other search engines obviously tinker with results and the algorithms that generate and order them.

In terms of persistence, I referred in the post not to the persistence of the underlying information, which will of course remain for nobody to find if it is never indexed, but to the persistence of the link to the information. It’s the link that keeps the information alive and easily accessible, so it’s the link that must be severed for the forgetting to occur.
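To make the link-severing mechanics concrete: the CJEU remedy is keyed to searches “made on the basis of his name,” so one can think of de-indexing as a suppression list scoped to (name, URL) pairs rather than deletion of the underlying page. The sketch below is a hypothetical toy, not how any real search engine implements removals; all names and URLs in it are invented.

```python
# Toy sketch of query-scoped de-indexing. The underlying page stays in
# the index; only the link returned for a name-based query is severed.
# All entries below are hypothetical examples.

suppressed = {("mario costeja", "http://example.com/auction-notice")}

def search(query: str, index: dict[str, list[str]]) -> list[str]:
    """Return indexed URLs for a query, minus links suppressed for that name."""
    hits = index.get(query.lower(), [])
    return [url for url in hits if (query.lower(), url) not in suppressed]

index = {
    "mario costeja": ["http://example.com/auction-notice",
                      "http://example.com/law-practice"],
}

# The auction page still exists and is still indexed; it has simply
# become unreachable through a search on the data subject's name.
print(search("Mario Costeja", index))  # ['http://example.com/law-practice']
```

The point of the sketch is the asymmetry the post describes: nothing is erased at the source, yet for practical purposes the information is “forgotten” because the name-keyed path to it is gone.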

In terms of Google’s “objectivity” with respect to search results, I recognize that Google’s algorithms assess relevancy and that Google already curates search results, hence my comment that search engines will be curating “more than ever.” My friend James Grimmelmann, whose work I’m sure many FTT readers know, has written very intelligently about how points of view inhere in search. I highly recommend his article called The Google Dilemma.

With respect to judgments about relevancy, Google’s existing metric for relevancy, as I understand it, is relevancy to the terms of the query based in part on what Google knows from the searcher’s past interaction with Google’s products and in part on “black boxy” factors that undoubtedly have to do with “what suits its business model.” The relevancy that the CJEU is now making Google and other search engine operators assess is much more open-ended and difficult to ground.

I recognize the severity of the technical and policy difficulties of implementing an “inadequate, irrelevant, or excessive” standard. Still, I’m not convinced that it isn’t salutary for search engines to have to pay more attention to curation in cases like these. This is not because I share the CJEU’s views of the privacy afforded to an EU citizen, but because the judgment requires search engines to take a more nuanced view of relevancy. (I haven’t yet read the Grimmelmann piece.) One unfortunate blunt outcome of this ruling for end-users appears to be not that search engines must merely downgrade the relevancy rankings pertaining to individuals who successfully dispute them, but that they must entirely eliminate results pointing to the “irrelevant” information (the link “must be severed,” as you put it). I wonder whether the case would have arisen had hits for a search for Mr. Costeja not appeared among the first few pages of “relevant” results, i.e., had a more sophisticated relevancy algorithm been in effect. But the judgment is not clear about this. It only mentions that a searcher “would obtain links to” the offending stories. Now, of course, it’s too late to test.

Offhand, I don’t know whether or not La Vanguardia, the newspaper whose old stories about Mr. Costeja were a subject of the dispute, is indexed by any third-party service other than a search engine. If so, would this ruling require it, too, to remove index records available freely over the ‘net? Available commercially over the ‘net? In print?

Theoretically, I guess that in some cases it is useful to be able to forget certain information, and it’s an interesting intellectual exercise to think about how it might be done. On the other hand, I guess I’m more concerned at the logistics and unintended consequences of actually implementing such a capability. Do we trust Google or Bing or whoever to sit in judgment on whether or not a specific document should be hidden or not? The link scrubbing algorithms are sure to make lots of unadvertised assumptions on our behalf (as search algorithms already do). Who’s going to draw the lines about what is forgettable and what is not, and under which conditions? Do we really want to put Google in the position of making policy decisions about which records we don’t get to see at all? As it is, I have to be a little suspicious about search rankings. Completely removing certain links from search results just gives search engines an additional level of control over our perceptions.

Freedom to Tinker is hosted by Princeton's Center for Information Technology Policy, a research center that studies digital technologies in public life. Here you'll find comment and analysis from the digital frontier, written by the Center's faculty, students, and friends.