“Economics of Privacy” Conference Recap

By Eric Goldman

Earlier this month, I attended an event at the University of Colorado Boulder called “The Economics of Privacy,” sponsored by the Silicon Flatirons Center. A couple of photos from the event: 1, 2. As usual, these notes reflect my impressions of the discussion. They aren’t verbatim transcriptions, so please double-check before attributing anything to anyone.

Paul Ohm was the principal event organizer. He offered a thesis: the legal academy has ignored economics and markets in its privacy scholarship. This is because, a decade ago, privacy scholarship got rooted in consumer autonomy. As a result, we are letting waves of new economics discussions go past without incorporating them into privacy scholarship. He thinks this is a missed opportunity. This conference was intended to fix that.

Keynote: Alessandro Acquisti

Can market forces adequately “protect” information privacy? Answer: a resounding “it depends.”

Notifying consumers isn’t good enough. Less than 3% read privacy policies; people don’t understand them; people assume “privacy policy” implies privacy protection; if people actually read the policies, we lose significant social resources in the opportunity costs of their time; and outright deceptive bypassing of policies can go unpunished.

Consumer control is illusory. In fact, by making people feel more in control, consumers may take greater privacy risks.

Can self-regulation protect privacy? Alessandro thinks probably not. Hyperbolic discounting means consumers will take the immediate benefits and ignore future costs/risks. Further, technology keeps changing: consumers who try to optimize for the current technology must then learn the newer technology, which is overwhelming. Thus, the empirics of privacy show that hurdles in decision-making render self-regulatory solutions untenable.
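A quick numerical gloss on the hyperbolic-discounting point (my own illustration, not a formula presented at the conference), using the standard one-parameter hyperbolic form:

```latex
\[
  V = \frac{A}{1 + kD}
  % V: how a future cost or benefit A feels today; D: delay; k: discount parameter
\]
% Example: k = 0.1 per day, A = 100, D = 30 days  =>  V = 100/(1 + 3) = 25
```

With those illustrative numbers, a privacy harm valued at 100 that arrives in a month feels like 25 today, so even a modest immediate convenience can outweigh it. That asymmetry is the bias toward immediate gratification Alessandro is pointing to.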

Where do we go from here? Currently, unless there’s a quantifiable economic harm, there’s no legally recognizable harm. However, by focusing on tradeoffs, we’ve lost the non-economic benefits of privacy, like personal autonomy. The lack of adequate consumer protection also leads to socially wasteful investments, ex post damages, a shrinking share of consumer surplus, and other costs. We can do better than telling consumers that they need to “quantify the privacy costs incurred or be quiet.” Privacy-enhancing technologies allow both data sharing and data protection. We should put the burden of proof on data holders: prove you can’t provide the same services with less data, or be quiet. Finally, he rejects the privacy fatalism that “data is the price for content.” In fact, consumers pay when advertisers use the data to develop manipulative marketing.

First Panel

Lior Strahilevitz. Information asymmetries led to 19th century English workhouses (like homeless shelters). The government wouldn’t provide welfare payments because recipients knew better than the government whether they were worthy, so workhouses were an alternative to providing wasted welfare. The consequence of this information asymmetry was the growth of government services and poor living conditions.

India is experiencing something similar. To address this, India is collecting biometric information on its poor (the “AADHAAR”). Some Indians feel this data collection is empowering—it gives them an identity.

Homogeneity enables mass-market products, but precludes catering to idiosyncratic needs. On the other hand, we should favor serendipitous exchanges between disparate people, and that’s essential for us to function as a society.

Lior is concerned that people will buy products for signaling purposes, not because they want the goods. For example, it turns out that people who buy felt pads for their furniture are good credit risks. Knowing this, people might buy felt pads to send false signals. Peppet’s comment: signaling is exhausting. We’re always communicating through our actions, and that’s tiring. It’s rational for consumers to respond by just deleting their Facebook accounts entirely.

Alessandro’s comment: matching systems will never be perfect; they will always make errors. But if decision-makers overly rely on the technologies, we may not be able to protect ourselves from these errors.

Lorrie Cranor. In 1996, there was a lot of talk about notice-and-choice and the unreadability of privacy policies, but the thought was that privacy seals and P3P could save privacy policies. We’re in that exact same place today, and the technology hasn’t changed much. In fact, the current Do-Not-Track technology is lower-tech than P3P was.

What went wrong with P3P? Five years of haggling led to a computer-readable language for privacy policies. It’s still incorporated into Microsoft Internet Explorer, but it only focuses on cookie-blocking decisions. To avoid Microsoft’s cookie blocking, sites adopted P3P policies. At least a third of P3P policies had errors, including those of major sites (Amazon, Facebook), so P3P may be counterproductive (i.e., consumers relying on P3P will not have their preferences effectuated). She hopes regulators will investigate.
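For context, a site’s P3P “compact policy” is just an HTTP response header that browsers like Internet Explorer parse when making cookie decisions. Here is a minimal sketch (my own illustration, not something presented at the conference) of how one could check what a site is asserting; the URL is a placeholder, and the tokens shown in the comment are merely examples of the P3P vocabulary:

```python
# Minimal sketch: fetch a page and read its P3P compact policy header, if any.
import urllib.request

def get_p3p_tokens(url):
    """Return the site's P3P compact policy tokens, or None if absent."""
    with urllib.request.urlopen(url) as response:
        header = response.headers.get("P3P")  # e.g. 'CP="CAO DSP COR ..."'
    if not header or "CP=" not in header:
        return None
    cp = header.split("CP=", 1)[1].strip()
    if cp.startswith('"'):
        cp = cp[1:].split('"', 1)[0]  # keep only the quoted token list
    return cp.split()

if __name__ == "__main__":
    print(get_p3p_tokens("https://www.example.com/"))  # placeholder URL
```

Cranor’s point is that even when a site sends such a header, the tokens often misdescribe the site’s actual practices, so a browser dutifully acting on them can leave consumers worse off than if the header were absent.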

Based on our experiences with P3P, online behavioral advertising tools aren’t promising. Companies aren’t providing clear policies to consumers or working opt-outs; consumers don’t recognize the icon; and consumers won’t click on it because they expect to get more ads, not to opt-out. She has a feeling of déjà vu: privacy tools empower consumers, but when people inevitably lose interest in developing the tools, privacy issues will become moribund again.

In contrast, incorporating automated privacy information into search results made consumers more aware of privacy concerns, and consumers showed they were willing to pay extra for additional privacy benefits.

Julie Cohen. The term “information privacy market” is weird. The market doesn’t produce information privacy; it produces information that’s used for market segmentation and risk management. There are social costs of information privacy markets: do we need less of the outputs from these markets?

Deeply held ideological considerations drive privacy norms. Many of us are socialized to believe that more information is better. This skews the discussion as privacy advocates try to get around this norm.

We should be skeptical of information collection practices. Social benefits don’t necessarily grow as information becomes more precise. Gaps in knowledge lead to serendipitous matches that benefit society.

Innovation is used as an excuse to stiff-arm regulators, on the theory that it’s too complicated for regulators to assess. We’re bad at valuing systemic risks.

Scott Peppet. He sees parallels between Occupy Wall Street and the concerns about privacy. We don’t know how companies are tracking us, and that lack of knowledge makes us uncomfortable. Our economy is built on data, but we don’t understand how that system works. Data collectors are getting big, and we don’t know what they are doing. Perhaps some data collectors get too big to fail—we couldn’t let Facebook’s database go through bankruptcy.

Q from Berin Szoka: why isn’t the common law system adequate to deal with exigencies? For example, the FTC can enforce P3P misrepresentations even if a private lawsuit fails in court. Why do we need additional regulation?

Q&A on self-regulation

Lorrie: self-regulatory model requires enforcement. We have some leaders in the industry doing a great job, but they aren’t getting the requisite enforcement backup.

Alessandro: self-regulation doesn’t work because it relies on notice-and-consent, and that doesn’t work. Instead, he would like to see self-regulation include broader deployment of PETs.

Peppet: he expected to find role-model privacy intermediaries such as infomediaries, but has failed to (see my 2005 blog post on the absence of infomediaries). Even companies that are leaders on privacy have unreadable privacy policies. His hypothesis: it’s more profitable to disrespect privacy.

Strahilevitz: self-regulation is best for handling data that’s been recently collected, not historical data. No one has a good response for dealing with new data uses enabled by evolving technologies. Data retention may be an appropriate place for government regulation.

Keynote: Joe Farrell (speaking for himself, not the FTC)

Economics assumes consumer sovereignty. Consumers have wants; the marketplace supplies them. His starting point: consumers value privacy. It’s hard to measure how much. We shouldn’t ask why or how much. We should ensure the market doesn’t thwart their desires.

If we focus on consumer sovereignty, notice-and-choice should work. This minimizes the need to figure out how much consumers value privacy and why; it enables competition on privacy; and the market can cater to consumers’ preference heterogeneity. Notice-and-choice is difficult, but we should try to fix it. However, even experts can’t tell what will happen to privacy in the future; and consumers can’t tell how their information disclosures are affected by information disclosures of other consumers.

Taxonomy of consumer data uses:

• Order fulfillment (responding to a consumer request). For consumers’ mail orders, it’s not surprising that the retailer will tell the shipper your address. This directly serves the transaction the consumer wanted, and it’s unthreatening. Leave this out of the regulation.

• Profitable re-uses that the consumer may not directly like. Need to distinguish between deals consumers would be willing to strike (data-for-content) and unacceptable deals.

When marketers deceive consumers, it trains them not to trust anyone. This is a harm to society. Ad hoc, case-based enforcement doesn’t fix this harm.

Teaching consumers is hard, even if both parties are motivated. This is the basic problem with “disclosures.” But when advertisers don’t have full incentive to be forthcoming, consumers are even less likely to learn.

When the market price is zero, it’s hard for consumers to discount the price further to reflect the costs of privacy risks. Micro-payments actually solve this problem (we saw some of these advantages with the move from broadcast TV to cable TV), but micro-payment service providers create their own privacy paradox.

We should be open to private law solutions, such as trustworthy intermediaries or the adoption of liability-type commitments.

Panel 2

Ryan Calo moderated this discussion, which didn’t have presentations. Because I was part of the panel, my notes are a little sketchy.

Aleecia McDonald: Definition of behavioral advertising = advertising based on data collected about individuals (the websites they visited, their search terms) that is used to create a profile that triggers ads. Behavioral advertising can be done on a third-party or first-party (e.g., Amazon) basis. Some folks believe that online behavioral advertising means only the third-party variety.

Laura Kornish: Can self-regulation work? The Behavioral Advertising icon has been around for a year. The icon and the information it links to don’t do a good job of answering why the ads are appearing. It’s not working so well, and she’s not sure why. It depends on whether educating consumers about behavioral advertising is a technical challenge; if it is, the icon probably isn’t salvageable. In contrast, it could work if consumers got clear information about why they are getting the ads.

Eric Goldman: the point of advertising is a conversation between marketers who want to sell and consumers who want to buy. If behavioral advertising improves the conversation, there’s no problem that regulation needs to fix.

Seth Levine: He doesn’t favor regulation. As an investor, he doesn’t see companies trying to create containers for consumer data to give to marketers. He does see entrepreneurs trying to fix the fact that publishers let a lot of data leak out to advertisers.

Eric: publishers need to manage the trust relationship on behalf of readers. It’s weird to me how few publishers take this responsibility seriously.

Aleecia: There’s currently a schism between EU and US about holding first party data controllers responsible for third party actions.

Catherine Tucker discussed her paper. The punchline: EU advertising effectiveness decreased by 65% compared to the US due to privacy regulations. Small unobtrusive ads were particularly affected because these are more informational and need to be more relevant. Blaring intrusive ads weren’t affected. Most adversely affected websites: general news sites, not niche-y sites (probably because contextual targeting on niche sites was a passable substitute for behavioral advertising).

Seth: an ad impression based on data about the consumer is 3x-10x more valuable than an ad impression without consumer data. Online brand advertising isn’t very effective, so the Internet relies on direct response advertising. If brand advertising worked online, there would be less motivation for behavioral advertising.

Aleecia: Q to Catherine. What legislation caused the difference in ad performance, especially because the EU directive isn’t being enforced?

Catherine: She focused on the 2002 EU directive, but the rules were rolled out over time, and advertisers were uncertain about its implementation. Some advertisers pulled away from using cookies due to the uncertainty. Health ads, in particular, were much less effective.

Aleecia: Catherine’s study is good news for privacy advocates. It shows regulation can work.

Eric: it “worked” how? Some of the adverse consequences from privacy regulation: more intrusive ads, and some matches were foreclosed in the marketplace.

Aleecia: if regulation results in fewer beacons and tracking, this is a good result for healthcare data.

Seth: the advertising marketplace is big enough to incent investment in innovation.

Eric: the best way to spur innovation: give immunities and safe harbors. [I have a blog post in process making this point in greater detail.] The privacy plaintiffs’ bar is imposing a huge tax on advertising privacy innovation today.

Seth: existing technologies allow private/anonymous browsing. Fewer than 5% of users turn it on, and they usually do so in the middle of the day, perhaps to hide information from their employers.

Aleecia: some consumers want to block ads, but the dominant reason for blocking ads is privacy concerns. Many of the tools are flat-out unusable. 6% of browsers have adopted DNT. On mobile, 17% have adopted DNT (and this is hard for them to do). Definition of DNT = allows users to put up their hand and request privacy. It’s not a technical mechanism; it’s just an HTTP header. What should websites do when the header is present? That’s still being discussed.
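Since DNT really is nothing more than a header, here is a minimal sketch (my own illustration, not something demonstrated at the conference) of what a website sees when a browser sends DNT: 1; what the site should then do with that signal is exactly the unresolved question:

```python
# Minimal sketch: a tiny WSGI app that checks for the Do Not Track header.
from wsgiref.simple_server import make_server

def app(environ, start_response):
    dnt = environ.get("HTTP_DNT")  # the DNT request header; "1" if the user raised their hand
    if dnt == "1":
        body = b"Do Not Track requested; honoring it is a policy choice, not a technical one.\n"
    else:
        body = b"No Do Not Track preference expressed.\n"
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()  # visit http://localhost:8000/
```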

Eric: the devil of DNT is in the details. We’ll know how important/useful DNT is when we see what websites do when they know consumers have raised their hand.

Catherine: consumers don’t understand online behavioral advertising, so they need protection, but maybe consumers are ahead of regulation and thus regulation would be redundant.

Seth: Solutions to privacy issues should be technology-based. If you’re 18 and don’t have a Facebook account, you’re dead. But Facebook does a terrible job with monetization: they have a huge audience but get only a small percentage of online ad dollars.

Peter Swire Q: getting consumers to adopt PETs is hard, so 5%-17% adoption is huge. Also, consider Julie Cohen’s right to read anonymously.

Seth: we would all agree that we should have a user-driven right to read anonymously.

Panel 3

Scott Peppet. Ways to connect digital identity to physical identity:

• Facial recognition. We can now do searches using a face as the search query.

• Iris recognition. The technology can read irises on the run. If the technology became widely installed, it could do highly accurate individual identification.

• Car chips measure usage of cars. Insurance companies will find this information useful.

• Biometric devices. Your scale can broadcast your weight; it can even post to Twitter. It may be entertaining to measure oneself, but that data has substantial commercial value, and marketers may be willing to pay to get it.

• Smart goods. A sweater has been chipped to provide interested consumers with background information about the exact sheep whose wool was used.

Ways to tie digital space to physical space:

• Augmented reality. A smartphone can provide this functionality. A car can display information on the windshield.

• Pranav Mistry’s Sixth Sense.

Berin Szoka. Lessig outlined a dystopian view that code will become a perfect form of control. In contrast, the Supreme Court has said that technology expands consumers’ capacity to choose. So, does technology empower or enslave?

First Amendment is baseline for the (lack of) regulation of information. Government can and should punish fraud and deception. Government can validly compel disclosure of objective factual statements (Cass Sunstein’s “smart disclosure”). With proper narrow tailoring, government can intervene in other situations—user empowerment tools, limiting government, educating consumers.

Chris Hoofnagle. He favors competition-enhancing enforcement. Problem: privacy policies that are internally inconsistent; they say “we don’t share” and then say they work with third-party marketers. He also favors an enforcement action that says companies can’t force tracking onto consumers. If a consumer has manifested intent not to be tracked, companies can’t undo that. Also, companies are resistant to working with privacy agents, where consumers pay someone to help them opt out; the companies want to confirm the consumer’s intent. Companies can’t imagine that consumers don’t want their advertising.

Peter Swire. He worries about security. There’s no way to fix theft of biometrics. Iris scans can be defeated by a high-quality print of a third party’s iris.

What if data = speech? (Sorrell v. IMS Health). He reads Sorrell to say that many privacy laws are subject to heightened scrutiny. Ex: the FCRA says CRAs can’t report credit data more than 7 years old. This limits speech by limiting data. Thus, arguably it’s both a speaker- and content-based restriction.

Berin: he hopes Sorrell will bring more rigor to legislative drafting. The Vermont statute didn’t have any showing of harm. He doesn’t think all privacy statutes are dead, but he hopes the ruling will encourage an emphasis on less restrictive measures.

Chris: Sorrell involved a dumb law, but most privacy laws are dumb because corporate lobbyists muck up well-meaning legislative proposals. He thinks libertarians should hate the Sorrell ruling—the government forced the collection of information and then it was shared with the private sector.

Berin: He doesn’t mind the government data collection in Sorrell because he believes the private sector would have generated the information anyway. Sorrell has no bearing on government-compelled disclosure.

Fernando Laguarda. The Sorrell decision was a reaction to a poorly drafted statute. Information dissemination is speech.

Paul Ohm’s Conversation with Julie Brill

Paul: it’s the one-year anniversary of the FTC’s privacy report. What’s happened since then?

Julie: the FTC has spoken loud and clear on social networks (Facebook, Google Buzz). It’s brought some good cases on behavioral advertising and COPPA. The report didn’t preview the FTC’s directions; instead, it describes the problems the FTC has been running into when it brings enforcement actions, especially with notice-and-choice and consumer harm. It sums up where the FTC has been.

The report’s basic principles:

• Companies should build privacy into their foundation

• Simplify notice-and-choice. For example, on mobile devices, privacy policies are too long and not readable. Give more layered notices. Companies are burying the most important disclosures in the policy.

• Transparency. Give consumers more information about the company’s practices, but also show the data that the company has collected about the consumer and give them the right to correct. Analogy: FCRA. Data brokers that don’t come under FCRA should still give access to consumers.

What’s happened since the report? A majority of commissioners have embraced “Do Not Track.” A lot of technological development has occurred in a year—DNT technology, browser-based restrictions, BA icon.

Paul: What does Do Not Track mean, and who enforces any violations?

Julie: there isn’t consensus on what “do not track” means. A header-based solution is one way for consumers to express their preferences. But will websites honor the header? Another solution: the blacklist/whitelist built into Microsoft’s browser. Advertisers feel that is more draconian. The icon-based system is another solution.

She believes Do Not Track efforts have to cover data collection and retention in addition to tracking. When the FTC issues its final report (maybe by the end of 2011), she hopes it will include data collection.

Who decides—self-regulatory groups or browser companies? Once promises are made to consumers, FTC and state AGs need to enforce.

Paul: academics like Alessandro and Lorrie expressed a lot of skepticism about notice-and-choice. Should the FTC still be pushing it?

Julie: FTC commissioners typically agree about the FTC’s specific enforcement actions. Most opinions are unanimous, especially on privacy and consumer protection enforcement. When commissioners are debating theory, as opposed to a specific enforcement action involving a particular company, they disagree more. She thinks the commissioners disagree about notice-and-choice. We shouldn’t throw out notice-and-choice, nor should we throw out PII, but Julie is a skeptic on notice-and-choice. Consumers aren’t the least-cost avoiders. The safety analogy is useful: just as we don’t want consumers policing aircraft safety, consumers shouldn’t be policing privacy. She would like more dashboards for consumers.

Paul: how do we resolve any specific privacy problem—self-regulation, FTC, Congress—who?

Julie: this is a big question, and there’s no single answer. She likes the bully pulpit; she raises her eyebrows a lot! This can lead to a lot of dialogue between industry and the FTC. The FTC has to account for the political environment. Legislative discussions look different now than a year ago; Congressionally enacted regulation isn’t realistic right now because Congress doesn’t have the bandwidth. The industry is a little emboldened because it knows the FTC can’t get Congress to act.

Eric’s Q: there have been lots of Internet privacy lawsuits, but they are routinely getting tossed. How does this affect the FTC’s calculation about whether or not to intervene?

Julie: privacy lawsuits based on deception still require the plaintiffs to show consumer damages. The FTC and state AGs aren’t bound by this restriction. The FTC also has authority over unfair acts. Unfairness requires a balancing of economic interests, and harm is essential to the balancing. But what about embarrassment, such as an unwanted outing or an unwanted Facebook photo posting? The FTC report argues that the recognized harms should be expanded. About a decade ago, Eli Lilly had a website for Prozac users. When it decided to shut down the website, it included everyone’s email address in the announcement. This was a huge breach. The FTC said it was either a deceptive or an unfair act. She thinks it was really an unfairness case; it was wedged into the deception prong.