Primer on European Union’s Right To Be Forgotten (Excerpt from My Internet Law Casebook) + Bonus Linkwrap

I haven’t yet written about the ECJ Right to Be Forgotten ruling directly, though I’ve already referenced it on the blog a few times. The ruling itself came out during a business trip when I was tied up, so I missed the initial news cycle and the flood of articles on the topic. Since then, I simply haven’t had a chance to organize my thoughts cohesively.

As part of updating my Internet Law casebook, I finally put something together. Although I don’t normally cover international topics in my Internet Law course, I felt I had to acknowledge the significance of this event. (In contrast, the words “Garcia v. Google” appear nowhere in the text, because I still hope that mistake will be fixed soon). As a result, I wrote the following overview/primer about the topic, and I thought it was worth sharing here. After the excerpt, I’ve included a linkwrap of some articles that caught my attention on the right to be forgotten.

I think the primer makes my views about the right to be forgotten clear, but I did try to tone down my rhetoric given this excerpt is part of a casebook, not a blog post. If, for whatever reason, it isn’t clear, I am NOT A FAN of the right to be forgotten generally, and especially how the ECJ implemented it. I imagine I’ll use saltier language in future discussions of the issue.

Now, the excerpt:

The “right to be forgotten” (RTBF) is a weird term. We cannot make people “forget” things, but we can require database operators to “forget” content by deleting it or making it more obscure.

Europe has done exactly that. In Google Spain SL, Google Inc. v Agencia Española de Protección de Datos, Mario Costeja González, Case No. C-131/12 (2014), the European Court of Justice (ECJ) reached four important conclusions.

1) Privacy Directive Applies. First, the ECJ concluded that “the activity of a search engine consisting in finding information published or placed on the internet by third parties, indexing it automatically, storing it temporarily and, finally, making it available to internet users according to a particular order of preference” constituted “the processing of personal data,” so as data “controllers,” search engines must comply with the 1995 European Data Privacy Directive (95/46/EC).

This conclusion represents search engine exceptionalism. Search engines are subject to the RTBF, while other online media enterprises aren’t. The court made this ruling because Google argued it wasn’t a media enterprise, presumably to navigate away from other unwanted regulatory consequences. Still, a legal distinction between search engines and other media enterprises doesn’t make sense to many folks. See Eric Goldman, Search Engine Bias and the Demise of Search Engine Utopianism, 8 Yale J. L. & Tech. 188 (2006).

2) Sales Office Confers Jurisdiction. Second, the ECJ concluded that Google’s advertising sales office in Spain meant that Google was processing personal data (its search database) within the European Union.

3) Search Results De-Indexing. The ECJ concluded that search engines must “remove from the list of results displayed following a search made on the basis of a person’s name links to web pages, published by third parties and containing information relating to that person,” even if that information remains on the third party website, and even if the third party lawfully published the information.

4) Limit to De-Indexing Right. The rights of people to erase search results about them “override…not only the economic interest of the operator of the search engine but also the interest of the general public in having access to that information upon a search relating to the data subject’s name.” However, these rights can be trumped by the “preponderant interest of the general public in having, on account of its inclusion in the list of results, access to the information in question,” such as the person’s role “in public life.”

The court’s inscrutable language, and its failure to specify the exact grounds on which a person may request erasure of search results, baffled everyone trying to figure out exactly what the court wanted, including search engines and the data protection authorities who will hear complaints about the search engine’s decisions. Google [FN1] has interpreted this ruling to mean that individuals can ask Google to remove links to “irrelevant, outdated, or otherwise objectionable” information from search results for the individual’s name; but Google won’t de-index a search result if “there’s a public interest in the information,” such as information “about financial scams, professional malpractice, criminal convictions, or public conduct of government officials.”

[FN1] We’ll focus on Google’s responses because it lost the case and has 90%+ market share in most European countries.

Google’s interpretation means that a de-indexing request affects only searches that use the person’s name as the search keywords. Searches for other keywords could still display those search results. Furthermore, Google will de-index the search result only from its European databases; i.e., the search result will be scrubbed from google.es but not google.com (more on that shortly). Meanwhile, even if Google de-indexes the search result, the original source material will remain online. Thus, if an online newspaper article is subject to a de-indexing request, Google might remove the search result from the name search, but the original article is still available at the online newspaper. This is one reason the “right to be forgotten” is a misnomer—the newspaper doesn’t have to “forget” the article but the search engine does.

Google’s de-indexing standards—content that is “irrelevant, outdated, or otherwise objectionable” and not “in the public interest”—are obviously problematic. What makes information “irrelevant,” and to whom? Even if 99% of searchers would find the information irrelevant, what should we do about the 1% of searchers who would find it relevant? Similarly, how do we determine that information is outdated, “otherwise objectionable” or not in the public interest? It’s not easy to figure out what information has historical significance, and the significance of information might change over time (i.e., something that appears irrelevant at one point in time might be a crucial piece of information at a different time).

We know that the ECJ doesn’t want search engines to rely on the initial publishers’ judgments of what’s relevant or important. In Costeja’s case, he complained about an article referencing him that was published in a traditional newspaper, which we normally assume only publishes items of “public interest” (at least to its local audience). Yet, the ECJ’s view is that Google must de-index the newspaper article for searches on Costeja’s name. Thus, the ECJ has required a for-profit enterprise to interpret its inherently subjective standards—without any knowledge about the underlying facts.

Furthermore, Google will make its decisions on an ex parte basis, based solely on representations by requesting individuals. The de-indexed publisher will not get a chance to plead its case before Google makes its decision; nor can the de-indexed publisher or public interest advocates effectively protest Google’s decisions after the fact.

In contrast, if Google denies an individual’s de-indexing request, the individual can “appeal” Google’s decision to a data protection authority. Given Google’s incentives when evaluating de-indexing requests—potential punishment for denying a de-indexing request, no immediate complaints for accepting a de-indexing request—odds are high that Google will err on the side of de-indexing.

The ECJ ruling shocked many Americans because American law would not permit a similar result. As an online content publisher, Google is protected by the First Amendment’s free speech and free press clauses. Thus, any regulatory effort to tell Google what to include or exclude in its search index is almost certainly unconstitutional. See, e.g., Search King, Inc. v. Google Technology, Inc., 2003 WL 21464568 (W.D. Okla. 2003); Langdon v. Google, Inc., 474 F. Supp. 2d 622 (D. Del. 2007); Zhang v. Baidu.com, Inc., 2014 WL 1282730 (S.D.N.Y. 2014); Eugene Volokh & Donald M. Falk, First Amendment Protection For Search Engine Search Results, April 20, 2012.

Furthermore, Section 230 (both (c)(1) and (c)(2)) statutorily immunizes search engines for their indexing decisions, including their refusal to de-index content (even if that content is tortious). See, e.g., Maughan v. Google Technology, Inc., 143 Cal. App. 4th 1242 (Cal. App. Ct. 2006); Murawski v. Pataki, 514 F. Supp. 2d 577 (S.D.N.Y. 2007); Shah v. MyLife.Com, Inc., 2012 WL 4863696 (D. Or. 2012); Merritt v. Lexis Nexis, 2012 WL 6725882 (E.D. Mich. 2012); Nieman v. Versuslaw, Inc., 2012 WL 3201931 (C.D. Ill. 2012); Getachew v. Google, Inc., 491 Fed. Appx. 923 (10th Cir. 2012); Mmubango v. Google, Inc., 2013 WL 664231 (E.D. Pa. 2013); O’Kroley v. Fastcase Inc., 2014 WL 2197029 (M.D. Tenn. 2014).

Collectively, U.S. law makes it clear that search engines, including google.com, cannot be legally compelled to implement a right to be forgotten. [FN2] This emphasizes that U.S. law and European law have diverged widely, and may be on a collision course.

[FN2] In 2013, California passed an “online eraser” law that requires user-generated content websites to let minors remove their posts. This law has not been subject to a constitutional challenge, but the law could violate the websites’ First Amendment interests. See Eric Goldman, California’s New ‘Online Eraser’ Law Should Be Erased, Forbes Tertium Quid Blog, Sept. 24, 2013.

Putting the legal issues aside, the “right to be forgotten” sounds quite appealing, at least superficially. After all, who wouldn’t prefer having greater control over his or her public identity and reputation? In the first couple of months, Google received over 90,000 de-indexing requests, and one study found that “74% of U.S. adults would delete themselves from search results if they could.” Greg Sterling, Survey: 74% Of U.S. Adults Would Delete Themselves From Search Results If They Could, Marketing Land, July 16, 2014. Furthermore, why would Google want to keep “irrelevant” or “outdated” information in its database? We expect Google to deliver relevant results, so perhaps the law is just requiring Google to do what it should have been doing voluntarily anyway.

Still, Google might rationally conclude that information is rarely irrelevant or outdated, at least to some of its users, or that it doesn’t want to make such nuanced decisions. Assuming Google wants to keep the search results subject to de-indexing requests, the right to be forgotten might be viewed as a regulatory effort to make search engines dumber than their technical capacity.

Will searchers notice this deliberate dumbing-down of search engine capabilities? If the de-indexed results are truly irrelevant or outdated, then searchers won’t notice or care about their absence. On the other hand, if the de-indexed results were in fact relevant, then searchers may notice that Google isn’t as good as it used to be, and they will start looking for tools that do a better job meeting their needs.

The fact that European users can search any of Google’s search databases worldwide creates some interesting possibilities for how Google or European citizens could work around the ECJ ruling. Google has made the decision that its U.S.-based index, google.com, will not honor de-indexing requests. European citizens who dislike their localized Google service could double-check the search at google.com. Indeed, if de-indexing requests become so numerous that they overwhelm the integrity of Google’s European databases, Europeans could choose to migrate all of their search queries to google.com, which would completely defeat the point of the ECJ’s ruling.

The ECJ opinion didn’t limit its ruling to Google’s European offices or operations. Thus, Google’s decision to honor de-indexing requests only in Europe may not comply with the ECJ opinion. It’s possible EU regulators will insist on global de-indexing. If Google isn’t required to de-index globally, another possibility is that we’ll see greater Internet balkanization; i.e., European Internet access providers could be required to block access to any Google index that doesn’t honor de-indexing requests.

Recognizing the possibility that local Google searchers could access a foreign Google index instead of the local scrubbed index, in 2014 a British Columbia court ordered Google to de-index an allegedly defamatory search result from its google.com database in addition to its google.ca database. Equustek Solutions Inc. v. Jack, 2014 BCSC 1063 (B.C. Sup. Ct. 2014).

[UPDATE: A reader helpfully clarified that the underlying action in the Equustek case is a trade secret misappropriation, not a defamation claim.]

The court order, though logical in the context of BC law, highlights the fundamental problem of the Internet’s global publication capability. If other countries make the same order, then Google may be forced to honor a de-indexing request from the most “restrictive” jurisdiction in the world. In other words, an individual unhappy with a search result could forum-shop to find a country where his/her objection to the search result is legally cognizable, and then use that court order to de-index the search result worldwide—even from jurisdictions where the search result was legitimate. So, for example, U.S. residents who want to scrub search results from could bring a defamation lawsuit in British Columbia; and if they win, they could get a worldwide de-indexing order. Unless the B.C. ruling is overturned, you might consider building up your expertise on B.C. defamation law because there will likely be substantial client demand.


Bonus Track: A Linkwrap

Some of the links about the EU Right to Be Forgotten that caught my attention:

* The opinion

* Writing for the Guardian, Viktor Mayer-Schönberger said: “Such a deletion right has existed for 20 years, and very few of us have used it. There is little reason to believe that will change. Moreover, search engines don’t have to redesign themselves to comply. Google is already handling millions of deletion requests for copyright violations every month, so even a couple of hundred insisting individuals won’t make much of a difference.”

Latest tally: over 90,000 takedown requests, a majority of which have resulted in takedowns without any DPA proceeding. And I don’t trust anyone who thinks the prevailing copyright takedown practices are a good precedent for the “careful balancing” contemplated by the opinion.

* Writing for the New York Times, Jonathan Zittrain argued in favor of allowing people to “comment” on their vanity search results, like Google News used to do. I believe Zittrain first started arguing for this at least as early as 2007. See Zittrain’s further amplifications and his initial thoughts.

A couple of months later, Zittrain revisited the issue. He wrote: “But once we’ve gone so far as to allow a properly adversarial process in deciding upon takedowns, we highlight the incongruity of having Google – or any private party, for that matter – as a decision maker about rights. To place Google in that role is to diminish Europe’s sovereign power, not enhance it, even if the role is compelled by European authorities. It turns a rights problem into a customer service issue, and one that Google and others in its position no doubt rightly disdain.” He proposes a solution: “redaction decisions [should] be limited in time. Successful claimants should register and maintain an email address for a reminder that a redaction is about to expire. Prior to expiration a claimant should have to seek to renew the redaction. That way the memory hole is temporary rather than permanent – and a redaction must be justified to account for changing circumstances.”

* Zeynep Tufekci raises questions about how the right to be forgotten principles could be used to “forget” genocides.

* Danny Sullivan’s initial roundup of the law. Also, 10 People Who Want To Be Forgotten By Google, From An Attempted Murderer To A Cyberstalker.

* Another Guardian editorial, “Only the powerful will benefit from the ‘right to be forgotten’”

* WSJ Blog: “‘Right to Be Forgotten’ Is a Foreign Concept in America”

* At Wired, Evan Selinger and Woody Hartzog criticize the nomenclature “right to be forgotten” (as well as “erasure” rights).

* Larry Page of Google fears that the EU right to be forgotten will be used as a trojan horse for further government-mandated censorship.

* Marketing Land: “Survey: 74% Of US Adults Would Delete Themselves From Search Results If They Could”

* 26 Questions EU Regulators Want Google to Answer. Google’s answers.

* UK Parliament’s report on the law.

* Wikipedia pages censored in European search results

* The Switch: “a story that’s more than a decade old vanishes from Google. Then the newspaper that wrote the story writes another story saying the first story was deleted from Google. Then, inexplicably, the newspaper deletes its own story about the story about the content that somebody decided should be scrubbed from search results. Then that repeats. Twice.”