Bracha Responds re. Search Engine Regulation

By Eric Goldman

Last week I blogged on the new paper by Frank Pasquale and Oren Bracha advocating for substantive regulation of search engine operations. In my critique, I said:

My biggest beef with the paper is that it focuses principally on a search engine’s intentional and manual biasing of its algorithmic rankings to suppress/omit a specific website for illegitimate reasons (i.e., I think the authors are OK with search engine anti-spam efforts, at least if they are motivated for that purpose). While illegitimate suppression is an analytically interesting issue (not dissimilar to a situation where a newspaper editorial team is out to “get” a particular individual or company), the paper doesn’t offer much empirical evidence showing search engines actually engage (or have engaged) in the behavior that the paper seeks to redress. Thus, the paper may build a strong theoretical construct to attack a non-existent practice.

Oren responded to me privately because my comments are still shut down from my last comment spam attack, and this led to a good email exchange. With Oren’s permission, I’m sharing our back-and-forth:



The point about the lack of empirical evidence of relatively targeted discrimination by search engines is a good one.

Do search engines engage in targeted discrimination?

There are some suspicious signs or hints that they occasionally do. But this is anecdotal at best. The fact is that we don’t know.

In fact, it is nearly impossible to know.

This is an important part of the problem and this is our point!

Search engines’ practices form a “black box” nearly inaccessible to outsiders.

The basic technology is asymmetric: the search engine has all the information about its practices, and the user or listed entity has almost none. As a matter of business practice, search engines are very secretive and protective about information concerning their algorithms and ranking practices, and for good reasons, some of which are quite legitimate. The law, too, plays its part in creating this veil of secrecy. Trade secret law affords protection to such information. The sweeping rejection of any cause of action for manipulation minimizes the prospect that the information will ever be divulged.

The net result is an opaque veil which is very hard to pierce. We simply don’t know what search engines do. This is part of a broader, and to my mind, troubling phenomenon of “the closed hood society” where areas of information crucial to public policy are shut away behind a secrecy wall.

To put it in fancy legal theory terms, there is a classic article by Felstiner, Abel and Sarat called “The Emergence and Transformation of Disputes: Naming, Blaming, Claiming.” It explains how, before a legal dispute can ever arise, there have to be conditions that allow claimants to recognize the wrong (naming), associate it with the culprit (blaming), and claim redress (claiming). In the current black-box world of search engines there are some first signs of naming, but blaming and claiming are almost impossible.

Our first important point in the paper is that, while there are some legitimate reasons for search engines’ secrecy, someone has to be allowed to peer into the black box and scrutinize their practices, to ascertain whether there is a problem. Toward this end we discuss several mechanisms that can balance search engines’ legitimate interest in secrecy and the need to look inside the black box.

This is the first necessary step. It will enable whichever institution is in charge to know, in specific cases and more generally, whether there is a need to proceed to step [2] and do something about targeted manipulation. But a sweeping dismissal of step [2], by assuming that search engines can do whatever they want with “their” rankings, prevents us from ever getting to step [1]: looking inside the black box and answering the empirical question that Eric correctly identifies as an unanswered one.

Concentrated power is suspicious. Concentrated power that operates in the dark is even more suspicious.

Oren Bracha



Do a global replace of the words “search engine” below with “newspaper editor” and does your analysis change one bit? If not, what are the implications for your article? In other words, do you know what method or criteria your newspaper editors (or any other print publisher) use to decide which stories to cover, how many column-inches to give stories, and where to place them in the newspaper (i.e., what stories go on the front page with a big headline; what stories get reduced to a “notes” section of the paper)? If you don’t know, does it matter? I think explaining how/why search engine placement decisions differ from newspaper editorial decisions will make explicit a set of key assumptions about the ways that people “consume” media and the nature of the trust we as media consumers repose in our media intermediaries. Eric.



You ask: “Do a global replace of the words ‘search engine’ below with ‘newspaper editor’ and does your analysis change one bit?”

The answer is yes and no.

“No” because, as we explain at length in the paper, the search-engines-as-intermediaries debate is really a reincarnation of the familiar debate about the mass media. In the early days of the Net it was common to think and argue that the Internet, with its “democratizing” effect, would do away with the intermediaries and solve all the problems of the mass media system. It seems a bit naive now, but there are newer, more sober versions of this line of thought (e.g., Benkler’s argument about filtration from below in his book may be read that way). It turns out that search engines are the new intermediaries that replicate many of the difficulties raised by the old ones. Thus the debate over search engines has the same general patterns as that over traditional mass media. However, there are differences of degree and nuance between the two that may convince, in the search engine context, even some who were concerned by mass media but were ultimately unconvinced that anything should be done.

Which brings me to the “yes” or to some of the differences between the two contexts.

First, one important difference seems to be concentration. Traditional mass media is concentrated enough, but I don’t think it is as concentrated as the search engine market (especially if the yardstick is newspapers). I would be much less worried in the absence of a picture in which a handful of titans control the bulk of the market. The point is not merely the existence of gatekeepers, but the fact that a very small number of them control huge chunks of the market. In this respect search engines seem, at the moment, worse than the mass media.

Second, it is somewhat disingenuous for search engines to claim that they are just like newspaper editors. In other contexts search engines strongly maintain that they are merely conduits and not media outlets. They need that in order to justify sweeping immunity, as in the case of DMCA 512 and CDA 230. But they cannot have it both ways. Search engines cannot be “just conduits” for purposes of immunities and “media outlets” when it comes to regulating their discriminatory practices. If you ask me, they are right when they claim to be closer to mere conduits. Of course, both conduits and media outlets have a gatekeeping element, but search engines as conduits seem to be located at a deeper layer of the system. Also, they are, and are perceived as, less associated with the content toward which they channel people.

Third, a related point concerns the First Amendment implications. It’s impossible to develop here the full array of arguments for and against First Amendment protection of search engines’ discretion/discrimination, but one point is directly related to their distinction from more traditional media. Even if one starts from the (controversial but good law) Tornillo premise that interfering with the media’s absolute discretion about what to carry would be prohibited forced speech, that does not mean that there are good reasons to apply this rule to search engines. The common rationale of Tornillo and all its various extensions was that the “carrier” is associated with the speech or content. However, because search engines, unlike a newspaper, for example, are generally perceived and experienced as conduits rather than speakers or media outlets, they are simply not associated with the content toward which they point people. Most people do not associate Google with the content they find using its search engine, just as they do not associate the content of a telephone conversation with their telephone service provider. Hence, the Tornillo line of cases is distinguishable, and there is less doctrinal and substantive reason to shield search engines’ discretion.

The list could be extended, but I’ll stop here. In short, despite the general common patterns, there are nuanced but important differences between search engines and other, more traditional media. Most of those differences, I think, point in the direction of a stronger justification for imposing some scrutiny on search engines’ discretion to discriminate and manipulate.

Oren Bracha



[Note: I hate to cut off this interesting discussion, but Oren said in the previous reply that I could have the last word. As I’m sure you know, I’m always going to take an offer like that!]

I think your weakest argument is that people *think* of search engines as conduits. Even if this is true, I guarantee that in the near future people will realize that search engines are active content mediators. When this consumer perception changes, you will have a tough time distinguishing Tornillo. Also, assuming you read the newspapers, does it bother you that you don’t know how your editors make decisions?

It’s also pretty weak to argue that newspapers in the 1970s (i.e., at the time of Tornillo) were less concentrated than the search engine market today, both as a matter of local market concentration and regarding switching costs/procuring substitutes. Eric.

[Note: As I said, comments are down, but if you want to chime in on this debate, feel free to email me and I will post your comments.]



I can’t speak to the state of newspapers in the 1970s, but I do have some insight into search engines. From the data I have, there are three that show up, and only one of those really matters. Some events earlier this year drove home the fact that if you run a web business these days, you stay on Google’s good side or you don’t have a business. While I don’t have any reason to believe that Google is specifically targeting anyone in particular, I can echo the fact that it’s impossible to know. I’ve spent a lot of time researching how to stay on Google’s good side, and it’s all rumor, innuendo, and guesswork.



I don’t know a ton about newspapers in the 1970s, but I know something (I come from a family of newspaper people). Media concentration among print media in particular was a high-profile issue in the 1965-1975 time period. During the 1960s, a number of cities with multiple daily newspapers turned into one- or two-newspaper towns. The reaction was a bit of regulatory intervention from Congress called the Newspaper Preservation Act of 1970, which created conditions for exempting Joint Operating Agreements from the Sherman Act. A JOA enabled two newspapers in a city to combine advertising, printing, and distribution facilities while preserving separate editorial operations. Post-NPA, there were close to 30 JOAs in existence; today, there are a dozen or fewer.

The points being:

The newspaper market of the 1970s was more competitive than today’s search engine market, but only if you consider “the market” to be essentially national. In any given geographic area, editorial competition was on the way out, and in JOA contexts it was preserved only artificially, via regulation. Many local markets were effectively monopolized by one paper.

The specific regulatory intervention of the NPA didn’t open up the editorial black box, as Oren and Frank’s paper would do in the case of search engines. The First Amendment had a talismanic authority in the newspaper context. But background conditions of the industry assumed that left to their own devices, journalists would compete like crazy anyway; the point of the regulation was to enable them to do so in spite of economic pressures that would otherwise kill media firms. I take Oren and Frank’s ultimate point to be essentially the same; their regulatory proposal is a cure for a related kind of market failure.

I note, however, that the history here doesn’t necessarily make me sympathetic to their proposal. The NPA was viewed at the time as the product of interest group capture (the primary sponsors were believed to be handmaidens of big media interests), and the subsequent history of the media industry shows that little could have saved the idea of multiple independent daily papers in any but a handful of American cities (New York, Chicago, Boston, DC). It’s hard to argue that the NPA was a good idea in cost/benefit terms.



Frank has posted more thoughts in reply to my comments here.



Seth posts some comments at his blog. An excerpt:

But the authors’ specific attempts to find a hairsplit for search engines (my paraphrase here) – secret algorithms, or overblown marketing claims, or Google-is-God perceptions, or defining it as not discussion among citizens – just seem to me to be playing to the discomfort that some liberal-arts types have with anything involving technology.

August 15: Seth adds more thoughts here.



Greg Sterling picks up the discussion at Search Engine Land. An excerpt: “While fairness in search results sounds good to some in the abstract, the practical implementation of such a regulatory scheme is where it all might break down and wreak havoc.”