FTC Privacy Roundtable Recap

By Eric Goldman

[Introductory note: I have repeatedly criticized the FTC on this blog, and this post may implicitly criticize them as well. At the same time, I want to share a couple of compliments for the FTC. First, the FTC did a terrific job preparing for this event. For the panel I participated on, we had two official group organizing calls, plus at least three individual calls with me. I can’t recall another event with more pre-event preparation. Second, I remain consistently impressed with the dedication of the FTC staff attorneys. The FTC attorneys I’ve met uniformly seem to be trying to do the right thing, even if bright minds might disagree about what that is.]

Last week, the FTC held the second of three privacy roundtables at UC Berkeley. A large crowd (I estimate 200+ people) showed up, and I know that many other people watched online. Between the event itself and my conversations with the FTC folks beforehand, I took away a few meta-observations:

1) The FTC is Facebook-obsessed. FTC staff kept citing Facebook examples. It’s clear that the FTC is paying extraordinarily close attention to Facebook.

2) The FTC has embraced the idea of “data as currency.” The concept is that online services that don’t make consumers pay with cash instead make consumers “pay” by providing their personal data. This didn’t come up much at the second roundtable, although I understand it was a big issue at the first.

It’s a little dispiriting to see this argument gain traction. I have repeatedly criticized this concept before (see my Coasean Analysis of Marketing and Data Mining and Attention Consumption articles), so I will only briefly recap its deficiencies here. Basically, the concept treats the provision of personal data as an automatic detriment to the consumer, which makes the exchange a zero-sum game: just as with a transfer of cash, the service provider wins at the consumer’s expense. Although consumers may suffer negative consequences from providing their personal data to service providers, the overall concept is wrong because many service provider-consumer relationships are “win-win,” where both the consumer and the service provider are better off due to the data transfer. I build some economic formulas in my articles to explain these scenarios with more rigor. Win-win can occur, for example, if the service provider can provide better services to the consumer based on access to personal data; personalized search is one example. Ultimately, any policy proposals predicated on treating data as currency are likely to overregulate by reducing or eliminating potential win-win scenarios.
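To make the win-win scenario concrete, here is a toy calculation in Python. The numbers are made up for illustration; they are not the formulas from my articles, which develop the model with more care:

```python
# Toy model of a data-for-service exchange (illustrative numbers only,
# not the formulas from the Coasean Analysis / Attention Consumption articles).

def consumer_surplus(service_value, privacy_cost):
    """What the consumer nets: value received minus the cost of sharing data."""
    return service_value - privacy_cost

def provider_surplus(data_revenue, service_cost):
    """What the provider nets: revenue enabled by the data minus service cost."""
    return data_revenue - service_cost

# Without personal data: a generic service and no data-enabled revenue.
generic = (consumer_surplus(service_value=5, privacy_cost=0),
           provider_surplus(data_revenue=0, service_cost=4))

# With personal data: personalization raises the service's value by more
# than the consumer's privacy cost, and the data pays for the service.
personalized = (consumer_surplus(service_value=9, privacy_cost=2),
                provider_surplus(data_revenue=6, service_cost=4))

print(generic)       # (5, -4): the provider loses money, so the service may never exist
print(personalized)  # (7, 2): both sides gain -- a win-win, not a zero-sum transfer
```

Treating the data transfer as a pure currency payment counts only the privacy cost and misses the added service value, which is exactly where the data-as-currency framing goes wrong.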

3) The term “privacy enhancing technologies” or PETs lacks a consensus definition. Because we didn’t agree on what qualifies as a PET, we couldn’t determine whether they had been successful.

Construed narrowly as add-on technologies that guard against specific vectors of privacy intrusion, PETs have clearly failed as a mass-market offering. Hardcore privacy folks may seek out tools that advance their interests, and they may even be willing to pay for those tools, but most folks don’t care enough to pursue such solutions, even those available for free. (I highlight this tension in my 2002 Forbes editorial.)

However, if we construe PETs more broadly, they have been massively successful. For example, I would consider anti-spam/anti-spyware/anti-virus software to be PETs. Obviously those software programs have other benefits, such as security protection, but they solve a variety of privacy-related problems too. For example, my Gmail spam filter learns my preferences and, over time, blocks some types of unwanted emails (such as repeat emails meant for other “egoldman”s like Emma Goldman) from showing up in my inbox. Similarly, PETs have been incorporated into web browsers, where they provide default protection to their users. If we can get past the one-off, single-vector conception of PETs, we may find lots of successful examples.
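For a flavor of how that “learning” works, here is a deliberately minimal sketch of the naive-Bayes-style token scoring that many spam filters build on. This is my simplification for illustration, not Gmail’s actual algorithm:

```python
from collections import Counter

# Minimal naive-Bayes-style spam scorer (a simplification for illustration;
# not Gmail's actual algorithm).
spam_counts, ham_counts = Counter(), Counter()

def train(text, is_spam):
    """Learn from a message the user has marked as spam or not-spam."""
    (spam_counts if is_spam else ham_counts).update(text.lower().split())

def spam_score(text):
    """Score in [0, 1]; higher means the message looks more like marked spam."""
    spam_ev = ham_ev = 1.0  # smoothing so unseen tokens don't divide by zero
    for token in text.lower().split():
        spam_ev += spam_counts[token]
        ham_ev += ham_counts[token]
    return spam_ev / (spam_ev + ham_ev)

train("meeting notes for the privacy roundtable", is_spam=False)
train("your account emma goldman newsletter offer", is_spam=True)

print(spam_score("emma goldman newsletter signup"))     # high: resembles marked spam
print(spam_score("notes from the privacy roundtable"))  # low: resembles wanted mail
```

The privacy payoff is the same one I get from Gmail: the filter quietly absorbs my individual preferences and enforces them by default, which is exactly what stand-alone PETs never persuaded the mass market to do.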

4) The online “privacy” dialogue hasn’t advanced very far in the past 15 years. I felt like much of the 2010 roundtable’s discussion would have been apropos 15 years ago. For example, instead of discussing cookies in 1995, in 2010 we are discussing flash cookies and supercookies. There’s no real difference in the underlying principles; we’re simply at a new point in the technological arms race. Just as technology evolved to provide user control over cookies, it will eventually catch up to flash cookies and supercookies and super-duper-cookies or whatever the next iteration of persistent client-side identifiers is called. Unless we look past the specific technological implementations and focus on broader concepts, we are doomed to repeat the same conversations.
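To see why each iteration is the same conversation in new clothes, consider this toy simulation of identifier “respawning,” the trick at the heart of flash cookies and supercookies. The dictionaries are plain-Python stand-ins for browser storage, not real browser code:

```python
# Simulation of identifier "respawning" (the flash-cookie/supercookie trick).
# The dicts are stand-ins for browser storage, not real browser code.

http_cookies = {}  # the store users know how to clear
flash_lso = {}     # a secondary store most users never think to clear

def visit_site(http, lso):
    """On each visit, the site re-syncs its identifier across every store."""
    uid = http.get("uid") or lso.get("uid") or "uid-12345"
    http["uid"] = uid  # respawn the identifier into whichever store was cleared
    lso["uid"] = uid
    return uid

print(visit_site(http_cookies, flash_lso))  # "uid-12345" assigned on first visit

http_cookies.clear()                        # the user "deletes cookies"...
print(visit_site(http_cookies, flash_lso))  # ..."uid-12345" is back, respawned
```

Swap in whatever storage vector comes next and nothing about the underlying identification principle changes, which is why debating each vector one at a time gets us nowhere.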

5) Due to the semantic ambiguity of the word “privacy,” “privacy” inquiries are guaranteed to fail. Ultimately, I found much of the roundtable discussion unenlightening because the “privacy” umbrella is too broad and ambiguous. From my perspective, the term “privacy” is so ambiguous that it is fatal to any productive conversation; I just don’t understand what it means. As a result, at the roundtable, panelists were simultaneously discussing privacy, security, anonymity and a variety of other concepts. The result was a jumbled doctrinal mess and a lot of talking past each other.

At the same time, the “privacy” umbrella hindered the inclusion of non-privacy concepts that might have helped overcome the déjà vu tendency. The panel titles were:

Panel 1: “technology and privacy”

Panel 2: “privacy implications of social networking and other platform providers”

Panel 3: “privacy implications of cloud computing”

Panel 4: “privacy implications of mobile computing”

Panel 5: “technology and policy”

My latest project on reputation is relevant to the issues discussed at the roundtable, but where does “reputation” fit into these panels? Everywhere, and nowhere. Similarly, I was hoping to discuss the implications of 47 USC 230(c)(2), the immunization for filtering technologies, but where does that fit in? I hoped to discuss it in the first panel, but we ran out of time. Using a classic “privacy” structure for the discussion implicitly prevents these important non-privacy considerations from emerging. As a result, this structure almost guarantees a “same old, same old” discussion by precluding new concepts from joining the discourse.

Before the panels, lame-duck Commissioner Pamela Jones Harbour gave some opening remarks. She expressed displeasure with Facebook’s resetting of privacy defaults and disagreed with Mark Zuckerberg’s quoted remarks that the technology change reflects emerging social attitudes. She also gave a lengthy shout-out to Paul Ohm’s paper on de-anonymization/re-identification of non-PII. Note that we will have an evening panel event featuring Paul Ohm at SCU on April 7. Please put that on your calendar now. Paul’s paper is already affecting the considerations of FTC Commissioners; come hear what the fuss is about.

After Commissioner Harbour, David Vladeck (head of the FTC’s Bureau of Consumer Protection) gave some opening remarks as well. He summarized three conclusions from the first roundtable:

* Consumers don’t understand commercial information-collection practices (e.g., data brokers, behavioral targeting).

* Lengthy privacy policies aren’t effective, but privacy disclosures are important.

* Consumers care about privacy.

He concluded his remarks with an ominous threat. He noted that the FTC continues to bring privacy-related enforcement actions, and in particular (a quote from his prepared remarks) “we are currently examining practices that undermine the effectiveness of tools consumers can use to opt out of behavioral advertising, and we hope to announce law enforcement actions in this area this year.” I’m not sure what this means. Perhaps the FTC is fed up with NAI’s behavioral ad network opt-out tool? I have not been able to make the tool work properly for years.
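If the target is cookie-based opt-out tools, one structural weakness is easy to show. The sketch below is a simplified simulation of an NAI-style opt-out mechanism, not its actual implementation; my understanding is that the opt-out itself is stored as a cookie, so it lives in the same cookie jar that privacy-conscious users routinely clear:

```python
# Why cookie-based behavioral-ad opt-outs are fragile: a simplified
# simulation of an NAI-style mechanism, not its actual implementation.

cookie_jar = {}

def opt_out_of_networks():
    """The opt-out tool works by setting an opt-out cookie per ad network."""
    for network in ("adnet-a", "adnet-b"):
        cookie_jar[f"{network}-optout"] = "true"

def may_track(network):
    """A network checks for its own opt-out cookie before tracking."""
    return cookie_jar.get(f"{network}-optout") != "true"

opt_out_of_networks()
print(may_track("adnet-a"))  # False: the opt-out is in effect

cookie_jar.clear()           # a privacy-conscious user clears all cookies...
print(may_track("adnet-a"))  # True: ...and the opt-out vanished with them
```

Whether that fragility, broken tools like the one I kept fighting with, or something else entirely is what the FTC has in its sights, we will apparently find out this year.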

Finally, I’ll mention a few thoughts from the social networking panel, which featured Erika Rottenberg of LinkedIn, Nicole Wong of Google and Tim Sparapani of Facebook. Given all the Facebook-bashing throughout the day, Tim was in the hot seat!

One of Tim’s talking points was that 35% of users customized their privacy settings in response to Facebook’s privacy default resetting and its subsequent requirement that they review the settings. 35% user participation would be a remarkably high percentage for any website, and it’s incredible for Facebook: with 350M claimed users, that’s over 120 million people.

Tim’s other talking points didn’t go over as well. He claimed that there are no barriers to entry for other social networking sites. This is technically true but woefully incomplete. It could very well be that the optimal number of social networking sites that consumers can actively embrace is precisely one, and there are good reasons to believe that social networking sites experience powerful network effects. See, e.g., the Reuters article about the tipping point between MySpace and Facebook.

Further, although the friendship relations are sticky, Facebook’s real stickiness comes from the self-published content on Facebook that cannot be exported to another site. Tim completely flubbed the question about data portability from Facebook, slavishly espousing his talking point that Facebook will delete user accounts on request, a non sequitur that made most people in the audience quietly groan. We all understand that Facebook will kill content upon request, but the question on the table was how Facebook will allow users to move their extensive content to a competitor. Tim ducked that question because Facebook doesn’t enable it. Facebook does not offer a front door for data portability, and it has been shutting the backdoor by suing folks like Power.com who try to create an unsanctioned portability method. To be clear, I’m not 100% convinced that Power.com is the good guy in that dispute, but I’m pretty confident that Facebook doesn’t tolerate backdoor data portability.

Even so, I think Facebook’s biggest threat is itself. Few users will get so mad that they will delete their accounts (I still have my Orkut and Friendster accounts, for example). Instead, Facebook should be concerned that users will simply reduce their usage because they get burned out or lose trust in Facebook. Ultimately this will cause users to migrate elsewhere, so the end game for Facebook could be a whimper, not a bang.

As an example of this latter phenomenon, Tim’s talking points claimed that Facebook gives users control over whom they share each piece of data with at the moment they publish it. He rightly praised this granularity, but I am still grumbly that Facebook killed the setting that kept my comments and likes off my profile page. Now, if I don’t want those items to show, I have to delete each one manually. So I do have control over my publications, as Tim touted, but the additional transaction costs cause me to comment on and like other posts less frequently than I used to. This seems like more of a bug than a feature in my book.

In contrast to Facebook, Nicole Wong hammered the point that Google embraces data portability and builds it into the design of many of its services. As she said (I’m paraphrasing her), because users can leave with a click, we have to do better with every product every day, and it makes us build better products. That’s the spirit! Facebook, are you listening?