Barnes on Adware Contracts
By Eric Goldman
Wayne Barnes, a law professor at Texas Wesleyan University School of Law, has posted “Rethinking Spyware: Questioning the Propriety of Contractual Consent to Online Surveillance” to SSRN.
The first 50 pages largely recap the technology and the case law. If you’re new to the area, this is an admirable summary of what’s been happening. If you’re already pretty familiar with the adware space and the case law, you might start at page 50.
Starting around page 50, the paper gets especially interesting. It applies various Restatement and UCC provisions to the contracting process, and it tries to establish the proposition that adware-mediated online surveillance is so intrusive–as intrusive as a human stalking you in physical space–that we should simply ban adware. Or, if we choose a lesser step, we should require adware vendors to make extreme disclosures and then require consumers to provide repeated consent just to make sure they really meant it.
I had many critiques of the paper, which I emailed to Wayne privately. With Wayne’s permission, I have included my global structural comments at the end of this post so that you can “look over our shoulders” to see what I suggested to him.
One thing I agree with Wayne about: the contract formation process with adware raises important questions about the validity of private ordering in this context. In other words, do we really believe consumers when they say “Yes” to adware? If we don’t–and there are some good reasons why we might not–then we may have a defect in private ordering. However, I’m not yet convinced that’s a big problem. First, I’m not sure such a defect is unique to adware–I think we have many private ordering crises throughout the law of contract, and we’ve resolved this issue before (maybe not satisfactorily, but we’re not dealing with new issues here either). Second, even if there is a defect, I’m not 100% sure that there’s any cure that’s better than the current situation. Wayne’s paper is a commendable first step in addressing these questions, but I think a lot more theoretical and empirical work will need to be done on this topic before we reach really satisfactory resolutions.
The abstract:

The spyware epidemic has reached new heights on the Internet. Computer users are increasingly burdened with programs they did not knowingly or consciously install, which place strains on their computers’ performance, and which also trigger annoying “pop-up” advertisements of products or services which have been determined to match the users’ preferences. The users’ purported preferences are determined, in turn, by the software continuously monitoring every move the consumer makes as she “surfs the Internet.” The public overwhelmingly disapproves of spyware which is surreptitiously placed on computers in this manner, and also largely disapproves of the pop-up advertising paradigm. As a result, there have been many legislative proposals, on a state and federal level, to address the spyware problem. All of the proposals assume that, if knowing and effective consent to spyware installation is granted by the consumer, then the software is lawful. Existing case law would seem to provide a means for corroboration of this conclusion. However, the implications of allowing such profound and invasive surveillance appear to be largely ignored in all of the proposals and discussion concerning spyware. This may be because of the “problem of perspective” concerning online activities, as first highlighted by Professor Orin Kerr. This article seeks to illuminate the true nature of the spyware bargain, and questions the propriety of sanctioning such “surveillance bargains” under principles of contract law. Such bargains may often be unenforceable because a term allowing continual surveillance may be beyond the range of reasonable expectations of most consumers. Even if not, however, the privacy implications are such that we as a society may wish to condemn such “bargains to be spied upon,” and conclude that such contracts should simply be unenforceable as a matter of public policy, and therefore banned.
My conceptual global comments to Wayne about the paper (cut and pasted from my email to Wayne):
1) I think you did an admirable job recapping the debate over the words “adware” and “spyware.” However, in the end, I was still confused throughout the paper about exactly what “spyware” meant to you. It may simply be that you are hypothesizing a unique type of software that both (a) watches behavior, and (b) triggers ads. However, does your hypothesized model also report the captured data back to a home base, or does the data just sit in a directory on the user’s computer? (And does it matter?) In the most narrow form of hypothesized software you might be discussing, I wonder if anyone is actually doing what you’re concerned about. In the broadest interpretation of your hypothesized software, I think in many cases you’re really discussing adware (as I would use the term), not spyware. Perhaps this confusion was mine alone, but I was confused throughout the paper about exactly what were the essential attributes of the software you were characterizing as “spyware.”
2) A centerpiece argument in your paper is that electronic surveillance is fully equivalent to human surveillance. I know that some agree with you, but this point deserves considerable attention. I think, if I understand your argument (and it depends exactly on the key attributes of your definition of spyware), that you think that software aggregating electronic behavior onto a hard drive but not telling anyone (and no human ever cognitively considering that data) is as bad as a person physically being in the same room as another person, watching them have sex, masturbate, douche, go to the bathroom, inject themselves with cocaine or engage in the myriad other personal activities that one might want to keep “private.” Note that while a person can engage in cybersex, there are some qualitative and cognizable differences between cybersex and physical-space sex or masturbation…aren’t there? And while I personally feel uncomfortable with the idea of a stranger (or even my son) watching me defecate, exactly how can the computer monitor a person going to the bathroom?
[Note, I address the inconsequence of inchoate data collection in http://papers.ssrn.com/sol3/papers.cfm?abstract_id=685241]
If there are some activities that are conducted in physical space that aren’t replicable in cyberspace, or if the idea of a computer recording electronic behavior is different from having a person in the bathroom with me while I’m defecating, doesn’t this undermine your analogy on Page 52? And if that analogy is weakened, I think that also weakens the various conclusions you draw from it.
Your analogy on Page 52 also assumes that a person is stalking another person “for whatever purpose.” I agree that would be creepy, but that’s not the right analogy to spyware, which is monitoring behavior putatively to deliver contextually relevant results that would create positive utility for me. I wouldn’t want someone tailing me for no reason, but I might very well want people tailing me if they were going to help me. Thus, I disagree with your point on Page 53, where you say no one would voluntarily seek to be tailed–I can think of several situations where that’s not true because the tailer would provide valuable services to the tailee–for example, medical care, nannies/butlers, stars with entourages.
Because of this general problem of distinguishing the physical from the virtual, your analogy to the Article 9 repossession law didn’t work for me. I’m not an Article 9 expert, but I can think of several distinctions. With repossession, there is a risk of physical altercations that create a breach of the peace. There’s also the risk that the repossessor lacks the right to do what they are doing, thus depriving a legitimate owner of possession and use of their chattel in a way that is difficult to distinguish from an outright theft/conversion of the chattel.
All told, I personally thought your analogy and your virtual/physical distinction (or lack thereof) didn’t clarify things for me, but instead raised lots more questions.
You might also note that there is a large literature on this general question–you cited a couple of articles (like Dan Hunter’s article), but I’m sure you know that the literature on the physical/virtual distinctions in Cyberlaw, and developing the appropriate analogies, is very rich.
3) You repeatedly take potshots at spyware because it is typically bundled with “modest-value” applications. There are three issues with this.
a) There is always a risk when an outsider tries to value the subjective utility a party receives from a transaction. This is why the law doesn’t question the adequacy of “consideration.” But you were willing to make an across-the-board assessment of the subjective value that users place on the bundled applications. You might want to acknowledge more explicitly the basis on which you form that conclusion of subjective value.
b) If you are trying to cost-account for the benefits consumers receive, you need to include any ads received that create positive utility.
c) Current practices are changing rapidly. So whatever the current value proposition is, we should be reluctant to make any predictions about future value propositions.
See https://blog.ericgoldman.org/archives/2005/10/does_anyone_rea.htm for more on this point.
4) I have an idiosyncratic hot button. Personally, I am completely unpersuaded by any policy rationale in the Internet space about preserving the “sanctity of the home.” I know this meme has propagated widely, but in the online context, it’s nonsensical. Spyware can be installed on home computers and office computers. If the law depends on the “sanctity of the home,” then it should make a difference–home-installed spyware bad, office-installed spyware not as bad. Is that what you intend? Also, assume spyware is installed on a laptop that the owner shuttles between the office, home and a cybercafe. Now what? I think there may be underlying policy concerns masked by the “sanctity of the home” argument that deserve to be unpacked and examined more closely. Otherwise, I personally find the rationale very empty.
5) Based on the problems of defining spyware, and tying it to your core objection about surveillance, I think your proposed solution covers a wide range of software–virus checkers, parental filtering software, Windows XP, Google Desktop’s Sidebar. Are you really proposing that all of these software programs should be banned because of their surveillance capacity? If so, you might want to acknowledge that expressly. If not, then I’m not sure I understood how you distinguish between the objectionable features of spyware and the ubiquitous monitoring capacities of software.