George Mason “Law and Economics of Search Engines and Online Advertising” Conference Recap

By Eric Goldman

Last week I participated in a conference entitled “The Law and Economics of Search Engines and Online Advertising” at George Mason Law School, sponsored by Google. In light of this week’s disclosures about the FTC, state AGs and Senate Judiciary Committee all circling around Google looking to carve it up like a turkey, the event was exceptionally timely.

Because Google sponsored the conference, this event was much more Google-partisan than we normally see at an “academic” event. (Note: George Mason deals with all sides; it held an analogous event sponsored by Microsoft in 2009.) Two speakers were Google employees; at least two other speakers expressly acknowledged their financial support from Google; and several others were unabashedly Google-friendly, even if they don’t explicitly enjoy any Google largesse.

As usual, these talk notes are my filtered impressions of the day’s proceedings and not verbatim transcriptions. You should double-check anything you want to cite or quote.

Panel 1: Network Effects in Search?

Geoff Manne: Google faces competition for advertising from online and offline sources. Online ads are a small part of the global ad industry’s revenue. Ex: Pepsi didn’t advertise during the 2009 Super Bowl and instead put that money into social media advertising. For online ads, Google competes with other search engines, social media sites, and other sites that aggregate eyeballs. Google’s ads also face competition from its own organic search results. Finally, there’s a cross-elasticity of demand between SEO and paid ads.

Network effects: ad networks aren’t just selling consumer quantity; they are selling their ability to deliver relevant consumers. The ad quality score helps mediate ad relevancy for consumers. Advertisers pay only for clicks, not for access to the entire ad network’s population. The auction format eliminates any uninternalized externalities. [In Q&A, Katz pushed back on whether this is an “externality.”] Advertisers’ willingness to pay is based on improvements in click quality, not a competition-reducing network effect.
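To make the quality-score mechanics concrete, here is a simplified sketch of a quality-weighted second-price auction of the general kind Manne is describing. The numbers, the $0.01 increment, and the single-dimension ranking are my illustrative assumptions, not Google’s actual system:

```python
# Simplified sketch of a quality-weighted second-price (GSP) ad auction.
# All names and numbers are illustrative, not Google's actual system.

def run_auction(ads):
    """ads: list of (name, bid_cpc, quality_score). Returns ranked ads
    with the CPC each would pay if clicked."""
    # Rank by bid * quality score, so relevance competes with money.
    ranked = sorted(ads, key=lambda a: a[1] * a[2], reverse=True)
    results = []
    for i, (name, bid, qs) in enumerate(ranked):
        if i + 1 < len(ranked):
            # Pay just enough to beat the next ad's rank score.
            _, nbid, nqs = ranked[i + 1]
            price = (nbid * nqs) / qs + 0.01
        else:
            price = 0.01  # nominal reserve price for the last slot
        results.append((name, round(min(price, bid), 2)))
    return results

print(run_auction([("A", 2.00, 0.9), ("B", 3.00, 0.5), ("C", 1.00, 0.8)]))
# [('A', 1.68), ('B', 1.61), ('C', 0.01)]
```

Note how ad A wins the top slot with a lower bid because its higher quality score makes it more valuable per impression; this is the sense in which the auction “sells” relevance rather than raw eyeballs.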

Michael Katz: Market characteristics: (1) weak or non-existent network effects; (2) low switching costs, with no evidence that personalization has had a lock-in effect; and (3) room for product differentiation. If a new search engine enters the market, searchers can move, and it doesn’t matter whether other searchers do (no coordination between searchers is required). Because consumers and advertisers can try multiple vendors for free and without switching costs, multiple competitors should be able to survive. For advertisers, the auction format permits them to try multiple sites—there’s no quantity discount motivating them to consolidate their business with one vendor.

Stanley Liebowitz: Clearly no network effects for searchers. The story is more complicated for advertisers. In the 1990s, we were concerned that lock-in effects would allow an inferior offering to become the winner. If there were network effects among advertisers, he might worry more than Michael. But he hasn’t found examples where network effects cause consumers to get stuck with the wrong products. Many past claims to the contrary look silly now. We can’t predict where markets are going.

Q&A

Bill Page: does a search algorithm improve with a greater number of searches? Liebowitz: he would call that an economy of scale, which could lead to natural monopoly. But are we worried about getting stuck with the wrong natural monopoly? Katz: is Google pushing its competitors below the minimum competitive scale? Manne: how many consumers make decisions based on failed long tail searches? [At Epinions, failed long tail searches were make-or-break. Damien Geradin made this point in Q&A.]

Manne: Bing’s cashback program for searchers who bought from advertisers was an example of how new entrants can buy market share.

Liebowitz: he’s a little surprised by consumer stickiness. Why don’t consumers try different search engines more frequently?

My Q: advertisers face transaction costs dealing with multiple publishers/ad networks, and as a result, they may require a minimum quantity of eyeballs to consider working with a publisher/network. How would those transaction costs affect the competitive environment? Katz: larger advertisers are more likely to multi-home with large search engines [multi-home being jargon for using multiple vendors simultaneously]. [My note: even if this is true for the larger search engines, start-ups could be frozen out from ad dollars]. Manne: all search engines in the market face this same problem. Katz: VCs are willing to provide enough capital to let companies ramp up enough scale. [I wasn’t totally satisfied with these answers. I think there’s a possible paper topic on understanding the transaction costs that advertisers face in vetting and managing multiple ad networks and publishers and how those transaction costs help or hinder competitive forces.]

Panel 2: Defining the Relevant Market

Dan Rubinfeld: Internet advertising has developed rapidly and is likely to keep changing rapidly.

Does online advertising constrain offline advertising? Not clear. His gut: yes, but not in every case. Some advertisers, because of their geography or industry, may find offline advertising more useful.

Does offline advertising constrain online advertising? Also not clear. There are lots of offline advertising options, and many advertisers use multiple advertising options.

Focusing just on search vs. non-search online advertising markets. In DoubleClick, the FTC said they are two separate markets. But it’s too simple to say that display ads are for branding and search advertising for direct marketing. Many advertisers use search advertising to build their brands, so they may be the same market.

Damien Geradin: In Google/DoubleClick, the FTC concluded that offline and online advertising markets are different. Why? (1) Online ads can reach more targeted audiences in a more effective way. (2) Reporting mechanisms let advertisers see their ad performance. (3) Differences in pricing mechanisms. This was confirmed in Microsoft/Yahoo. EU lawyers think the market definitions are already settled, so why are we still talking about this?

Search ads v. other online ads. Damien thinks they are different:

* Not demand-side substitutes. French Competition Authority (2010): search advertising is a distinct relevant market. Google/Yahoo 2008: search advertising and search syndication are different markets. DoubleClick 2007: search and display are different markets. Damien: (1) different characteristics and intended uses of search and non-search ads (brand vs. direct marketing); (2) different pricing methods (CPC vs. CPM; see the sketch after this list); (3) ROI from search ads is higher than non-search ads because it can be better tracked.

* Not supply-side substitutes. A publisher/network faces substantial costs and delays in switching from running display ads to offering search advertising.

* Algorithmic search doesn’t constrain search advertising. SEO isn’t a substitute for advertisers because SEO is less effective than ads at driving traffic, search engines change algorithms, and search ads yield better conversion.
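To make the CPC vs. CPM distinction from the first bullet concrete, here is some illustrative arithmetic (all numbers are mine, not the panel’s). A CPC buy can be restated as an effective CPM via eCPM = CPC × CTR × 1000:

```python
# Illustrative comparison of the two pricing models: CPC (cost per
# click), typical of search ads, vs. CPM (cost per thousand
# impressions), typical of display ads. Numbers are invented.

def ecpm_from_cpc(cpc, ctr):
    """Effective CPM of a CPC ad: expected cost per 1,000 impressions."""
    return cpc * ctr * 1000

search_cpc, search_ctr = 1.50, 0.02   # $1.50/click, 2% click-through
display_cpm = 4.00                    # $4.00 per 1,000 impressions

print(f"Search eCPM: ${ecpm_from_cpc(search_cpc, search_ctr):.2f}")  # $30.00
print(f"Display CPM: ${display_cpm:.2f}")
# The search ad costs far more per impression but is only paid on
# demonstrated intent (a click), one reason head-to-head comparison
# of the two models is hard.
```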

Catherine Tucker: results of a study on alcohol advertising restrictions:

* online display ads had the largest impact on consumers in geographic jurisdictions that legally restricted out-of-home ads

* the Internet reduces the effectiveness of these local regulations

* thus, online ads substitute for offline advertising

Another study: how offline advertising affects online ad pricing. Personal injury lawyers will pay more for online advertising when there is a ban on direct (offline) marketing to victims.

Her conclusion: “offline advertising appears to regulate both the effectiveness and pricing of online advertising.” This is consistent with other literature that online activity is integrated into an entire marketing campaign.

Q&A

Baye: He was the FTC chief economist during the DoubleClick review. Facebook wasn’t even on the commission’s radar screen at the time. [Eric: This isn’t surprising; the market continues to change rapidly.]

Geradin: Google has way more advertisers than Bing.

Q: if ROI is higher in search ads, doesn’t that indicate a price constraint? [The discussion was odd, but it seems that if search ad ROI were truly higher, advertisers should shift dollars from lesser-performing ads to search ads and arbitrage away the ROI advantage. So either some factor is preventing advertisers from doing this, advertisers don’t know their ROIs well enough to realize they should be shifting their dollars, or the ROI argument is wrong. A toy illustration of the arbitrage logic follows.]
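Here is that toy budget-allocation model, entirely my own construction: each marginal dollar goes to whichever channel currently offers the higher marginal return, with diminishing returns assumed in both channels.

```python
import math

def marginal_return(base, spent):
    # Diminishing returns: each extra dollar earns less than the last.
    return base / math.sqrt(spent + 1)

budget, spend = 100, {"search": 0.0, "display": 0.0}
base = {"search": 3.0, "display": 1.5}  # assumed starting ROI per dollar

for _ in range(budget):
    # Each dollar goes to whichever channel currently returns more.
    best = max(spend, key=lambda ch: marginal_return(base[ch], spend[ch]))
    spend[best] += 1

print(spend)  # most, but not all, budget shifts to search
print({ch: round(marginal_return(base[ch], spend[ch]), 3) for ch in spend})
# At the optimum the *marginal* ROIs converge even though search
# started out far ahead -- a persistent ROI gap suggests a friction
# or a measurement problem.
```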

Manne Q: even if government agencies have made judgments about market distinctions in the past, we should later check whether they got them right.

Berin Szoka: what if Google synced with other search engines so that advertisers could have a one-stop place to manage their campaigns across all engines? Would that raise antitrust questions, or end them? [There was a lot of consensus in the room that this would be a good thing.]

Lunch: Mark Paskin, Google Engineer

Search has moved far beyond 10 blue links. When the database was static, people thought search was about looking for encyclopedic information. Google’s freshness has changed this. There are 1 trillion documents on the web, 1 billion searches/day, and 1 million new spam pages created every day.

In A/B tests of search relevancy, 25% of queries produce unresolved disagreement among raters about the best ranking. This suggests there’s no single “right” order.
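Paskin didn’t describe the statistics behind “irresolute,” but one standard way to test whether raters meaningfully prefer one ranking over another is a two-sided binomial sign test. This sketch is my construction, not Google’s methodology:

```python
from math import comb

def sign_test_p(prefers_a, prefers_b):
    """Two-sided exact binomial test of H0: raters are indifferent
    between rankings A and B."""
    n, k = prefers_a + prefers_b, max(prefers_a, prefers_b)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

print(sign_test_p(14, 6))   # ~0.115: not significant -> "irresolute"
print(sign_test_p(18, 2))   # ~0.0004: a clear preference
```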

Google’s ranking signals include how often query terms appear, where the query terms appear on the page, and document-quality signals like PageRank. No single signal is enough, and hand-coded knowledge can’t handle diverse queries.
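PageRank itself is public (Brin and Page, 1998), so a toy version can show what a “document quality” signal looks like. This minimal power iteration is nothing like the production signal:

```python
# Minimal PageRank power iteration on a toy link graph.

def pagerank(links, damping=0.85, iters=50):
    """links: {page: [pages it links to]}. Returns page -> score."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            targets = outs or pages  # dangling pages spread rank evenly
            for q in targets:
                new[q] += damping * rank[p] / len(targets)
        rank = new
    return rank

toy = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print({p: round(r, 3) for p, r in pagerank(toy).items()})
# "c" scores highest: it collects links from both "a" and "b".
```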

Ways that Google exercises manual controls over rankings:

* security. Ex: warning that a site might harm the user’s computer if they click on the link

* legal issues, such as child porn and copyright infringement

* exception lists = when the algorithm fails but the error is easy to correct manually. Ex: essex.edu was blocked by SafeSearch because the domain contains the string “sex”

* spam. Google uses manual actions to deal with the whack-a-mole problem

Google decides which algorithm changes to explore by looking at “losses,” which it defines as opportunities to improve results. One solution: reinterpret searches by substituting synonyms, but those reinterpretations depend on context. Ex: GM cereal vs. GM motors vs. GM university vs. GM tomatoes.

Google’s process for making algorithmic changes:

Hypothesize an idea for changing the algorithm => implement the idea in a sandbox => generate a sample of before/after differences => send the differences to carefully trained external raters (who act as “proxies for users”) and look for statistically significant differences in ratings, as well as the best and worst results from the changes => divert a tiny slice of live traffic to the sandbox and see where users click => forward the data to a data quality analyst, who prepares an independent report => forward the report to the launch committee, which approves or denies the change. Launch considerations: (1) benefit to users, (2) simplicity of implementation, (3) efficient use of resources. Factors that aren’t considerations: how the change will affect ads or other monetization, how partners/clients rank, external metrics. There is a wall between search and ads: people in search don’t talk to people in ads about work, and the divisions are in different buildings.
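As a schematic of the process above, here is a sketch in which each stage is a gate that can kill a candidate change. The stage names track the talk; the code structure, field names, and thresholds are my invention:

```python
def launch_pipeline(change, stages):
    """Run a candidate change through gating stages; any failure kills it."""
    for name, gate in stages:
        if not gate(change):
            return f"{change['idea']}: rejected at '{name}'"
    return f"{change['idea']}: launched"

stages = [
    # Trained external raters score before/after result diffs.
    ("rater side-by-side", lambda c: c["rater_delta"] > 0),
    # A tiny slice of live traffic sees the sandboxed change.
    ("live click experiment", lambda c: c["click_delta"] > 0),
    # Launch committee weighs user benefit and implementation cost.
    ("launch committee", lambda c: c["user_benefit"] and c["simple_enough"]),
]

candidate = {"idea": "synonym expansion", "rater_delta": 0.8,
             "click_delta": 0.002, "user_benefit": True, "simple_enough": True}
print(launch_pipeline(candidate, stages))  # synonym expansion: launched
```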

Last year: 13,311 algorithmic change ideas led to 8,157 side-by-side A/B experiments, which led to 2,800 click evaluations, which produced 516 algorithm changes.
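For concreteness, the pass-through rates implied by those numbers:

```python
# Funnel figures from the talk; the rate arithmetic is mine.
funnel = [("ideas", 13311), ("side-by-side experiments", 8157),
          ("click evaluations", 2800), ("launched changes", 516)]
for (_, prev), (name, n) in zip(funnel, funnel[1:]):
    print(f"{name}: {n} ({n / prev:.0%} of the prior stage)")
print(f"overall: {funnel[-1][1] / funnel[0][1]:.1%} of ideas ship")
# side-by-side experiments: 61%, click evaluations: 34%,
# launched changes: 18%; overall, roughly 3.9% of ideas ship.
```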

Dealing with misspellings: some users missed Google’s “did you mean” suggestion and clicked on low-quality results. Even in situations where Google was quite sure the consumer meant something different than what they wrote, 5% of users missed Google’s prompt. In response, Google changed the presentation to show results for the correction: it told the user, showed the substitute results page, and gave the user a prompt to go back to the original results. Some users got frustrated by that. For a while, Google tried a compromise: show the top 2 results for the suggested query, but below that, show results for the original query. Their way of thinking about it: when it comes to misinterpreting misspellings, a loss hurts searchers more than a win helps them. Google’s standard for misspelling changes: it should get 50 searcher wins for every searcher loss. Now: when Google is very sure of the correction, it shows the replacement results plus the option to get the original results; otherwise, it gives the original results with a “did you mean?” prompt.
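Here is a sketch of what that 50:1 standard could look like as a decision rule. The threshold comes from the talk; the probability framing and the UI strings are my guesses at a plausible formulation:

```python
WIN_LOSS_THRESHOLD = 50  # 50 searcher wins per searcher loss (from the talk)

def correction_ui(p_correction_right):
    """p_correction_right: estimated probability the corrected query is
    what the searcher meant. Returns which results page to show."""
    p_wrong = 1 - p_correction_right
    if p_wrong == 0 or p_correction_right / p_wrong >= WIN_LOSS_THRESHOLD:
        # Very confident: show corrected results, offer a link back.
        return "results for correction + link to original query"
    # Not confident enough: keep original results, just suggest.
    return "original results + 'did you mean?' prompt"

for p in (0.999, 0.99, 0.9):
    print(p, "->", correction_ui(p))
# 0.999 and 0.99 clear the 50:1 bar and get substituted;
# 0.9 (only 9:1) falls back to the prompt.
```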

Baye Q: does Google show location-specific results? A: location is a big part of the user’s context.

Q: how long does Google retain user queries? A: if you have 2x the users in a month, it’s the same as retaining data for 2 months (but he noted the challenge of seasonal queries). He thinks personalization isn’t essential because web information is so rich.

Berin Q: can Microsoft replicate what Google does? A: in theory, but Google’s algorithm is really good, and that’s a key differentiator. Also, Google has a good system for improving efficiently.

Berin Q: does it matter that Microsoft has a smaller sample size for experiments? A: there are diminishing returns from bigger datasets.

Pasquale Q: does Google’s algorithm give preference to its own pages? A: no, and he didn’t agree with the question’s framing. Ex: if Google prioritizes image search results, those pictures are still third-party info.

Q: how do you find testers? A: Google uses third-party contractors to find testers. They must have a minimum educational level. Q: does Google screen out testers who have a pro-Google bias?

Panel 3: Competition and Search Markets

Ben Edelman did his usual shtick.

Randy Picker: His talk generally related to how platforms try to extend their boundaries and what happens when two platform providers collide. His talk slides.

The EU’s remedy ordering Microsoft to distribute its OS without a media player completely failed. Almost no one took the version without the media player. But this was irrelevant to the development of the media player space—Apple rolled up the space and left Microsoft behind.

Then, the EU’s deal with Microsoft re: IE: an order to let users choose among different browsers (5 shown first, but 12 available in total). Confusing for users.

Google Buzz was an attempt to leverage its platform to strengthen its identity. Google is now in FTC Hell for 20 years.

The ITA acquisition was conditioned on neutral treatment and protections against information spillover.

Paul Liu (Google Economist): How to define a market when the core product is free?

Search queries can be:

* navigational (30% of queries). Alternative options for searchers: typing into the URL bar, selecting a bookmark, clicking on a link. 68% of users did both direct navigation and navigational queries in a week. Google ran an unintended A/B test when it accidentally flagged all sites as malware. At the peak of the incident, this led to a 9% increase in direct navigation (equivalently, 80% of navigational queries shifted to direct navigation).

* informational (50% of queries). Alternative options for searchers: other search engines, other websites (especially brand-name trusted sites), mobile apps. Many well-known branded sites get a small fraction of their traffic from search engines.

* transactional (products/services) (about 20% of queries, maybe half product and half service). Alternative options for product searches: other search engines, marketplaces, retailers, shopping comparison sites, review sites, social networks, official stores/malls, mobile apps. Many alternatives don’t depend on search engines for their traffic. Mix of online and offline activity: 51% of consumers research online and buy in-store; 32% research online, visit an offline store, then buy online.

Service queries (e.g., trip planning, local services). Alternative options for service queries: other search engines, review/vertical sites, offline (yellow pages, word-of-mouth, radio, TV, magazines, local advertising), mobile apps (many sites have their own apps). Click distribution for these searches: local organic results 28%, other organic results 67%, Google Places 5%. Some sites in this group get a lot of traffic from search (CitySearch gets nearly half).

Most of Google’s search revenue comes from transactional queries. If Google degrades search result quality, users will switch to transactional alternatives—buy from Amazon, research travel on TripAdvisor, book restaurants on OpenTable, buy services at Groupon. If Google loses even a small percentage of its transactional queries, that would cause a disproportionate decrease in Google’s revenues.
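A toy calculation of Liu’s elasticity point. The query-volume split comes from his talk; the revenue shares are my assumptions, chosen only to reflect his claim that transactional queries drive most search revenue:

```python
# Query shares from Liu's talk; revenue shares are assumed for illustration.
query_share = {"navigational": 0.30, "informational": 0.50, "transactional": 0.20}
revenue_share = {"navigational": 0.05, "informational": 0.15, "transactional": 0.80}

lost = 0.10  # suppose degraded quality costs 10% of transactional queries
print(f"query volume lost: {lost * query_share['transactional']:.1%}")    # 2.0%
print(f"revenue lost:      {lost * revenue_share['transactional']:.1%}")  # 8.0%
# A 2% loss of total queries translates into an 8% loss of revenue
# under these assumptions -- the disproportionate effect Liu describes.
```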

Q&A

Liu in Q&A: switching between travel CRSs was much harder than switching between search engines. Google always focuses on user experience, so the changes are what consumers want.

There was some discussion about how some of Google’s competitors engage in the same practices as Google. If so, does that indicate those practices aren’t the result of market power but instead benefit consumers?

Then, as expected, the Q&A devolved into heavy questioning of Ben’s assertions and lighter but still active questioning for Paul.

Panel 4: Potential Costs and Benefits of Search Regulation

Frank Pasquale: He’s interested in transparency. His troubles with Google: the algorithm is a black box. Opacity + market concentration = concern. Frank’s core message for Google: if you’re ashamed about what you’re doing such that you don’t want it disclosed, then don’t do it.

Google’s duality: Is Google a platform for innovation, or an innovator? Is it a conduit or a content provider? We should be skeptical of a blanket antitrust exception and blanket First Amendment protection.

Regarding technological innovation: Google’s unassailable competitive advantage is its ability to promote itself + its ability to impose black-box penalties on others. Like a pharaoh, Google may be strangling its competitors in the crib. It’s inconsistent to say that competition is one click away while aggregating big search data.

Regarding the First Amendment: Google says it’s protected by the First Amendment. But that’s discordant with claiming to be enough of a conduit to take advantage of 47 USC 230 and 17 USC 512. [I didn’t understand this point; neither of those statutes depends on “conduit” status.] When Google-owned properties show up in organic search results, he thinks that’s like an undisclosed ad.

Examples of Google’s transparency efforts:

* Webmaster Forum

* They show up at SEO conferences

* Discovery in litigation

* StopBadware participation

Still, it’s not easy to see what they are doing. This is a general problem with companies driven by trade secret protection rather than patents.

Google’s definition of duplicate content: by its own definition, Google’s search engine looks like duplicate content.

His solutions:

* more technical expertise among government regulators so they can do better auditing

* more due process for penalized sites

* publicly funded alternatives

Eric Goldman. I blogged the notes from my talk yesterday.

David Balto: Where’s the beef?

Search engines democratize information and make markets more efficient.

There isn’t enough emphasis on the remedies for problems with search engines.

Consumer sovereignty: choice, transparency, lack of conflicts of interest. Google’s conduct aligns with all three elements of consumer sovereignty. [I thought this was an odd argument because Google’s opponents are contesting it on all three elements.]

Search providers’ conduct aligns with consumers’ interests: accuracy, relevancy, disclosure, self-regulation. [Again, I think this is exactly what people are contesting.]

Google’s algorithm is a fair umpire—it calls balls and strikes the right way. Google is agnostic as to the source of content. Google’s focus is matching consumer demand.

Evaluating cures:

* relevancy is inherently subjective

* a transparent adjudication system would be ineffective

* increased disclosure is unnecessary

* consumers do not want regulation of search

Q&A

Frank’s Q: maybe the Google/CIA co-venture is an alternative model to a fully government-owned search engine. [I thought this was a disconcerting example!]

Randy Picker Q: we may see a government-operated search engine from the detritus of the Google Book Search settlement. The parties may develop a public digital library, and it may not be indexable by commercial search engines.

Berin Q: Frank is talking about Fannie Mae/Freddie Mac meets NPR meets search.

Josh Wright Q: why not just apply the standard antitrust model looking for consumer harm? [I didn’t answer this Q at the event, but I think it’s a useful framework. If no one can show consumer harm, then there’s no problem that antitrust law needs to fix. However, this only addresses antitrust issues; there may be other problems where a different standard of harm is useful.]

Damien Q to Balto: is your argument that we should just trust Google? David’s A: he sees industries where consumer interests aren’t prioritized, and Google acts as if the consumer could walk away.

Damien Q to Frank: nationalizing a search engine doesn’t seem like a good idea. In antitrust, the first remedy is to ask the company to stop the offending practice.

Liebowitz Q to Balto: advertisers are the consumers, so Google tries to maximize revenue from them. This is where the problem would be located, not on the searcher side. Eric’s A: I wonder if the auction model for advertiser pricing partially ameliorates those concerns.

Mark Paskin Q for Frank: how would transparency affect gaming/spam? Frank: not arguing for complete transparency, just oversight of the algorithm. This is a general concern about the competitive restrictions when a company relies on trade secret protection.

Q: the EU remedy on Internet Explorer was ridiculous. Assume any intervention in search would also be ridiculous. How could we deal with a situation where only one competitor in a market is regulated while others aren’t? Frank: we do long-term regulatory oversight of marketplace actors. Ex: ASCAP. [Once again, I wasn’t sure this was a good exemplar.]