Ezor on Email Blocklists
By Eric Goldman
Jonathan Ezor has posted a short paper (10 pages + endnotes), Busting Blocks: Appropriate Legal Remedies For Wrongful Inclusion In Spam Filters Under U.S. Law, to SSRN. The article tackles the thorny issues created by email blocklist services, focusing specifically on volunteer-run blocklists. It discusses an email marketer’s recourse when it is incorrectly listed as a spammer on a spam blocklist, including defamation and intentional interference with prospective business relationship claims, as well as the limits of those claims under 47 USC 230(c)(1) and 230(c)(2). Ezor concludes that blocklist vendors should use objective criteria, should offer an appeals process to correct mistaken listings, and should be surgical in blocklisting IP addresses. He also concludes that vendors should be:
held to professional standards of conduct, including objectivity, reasonable care, and (to the extent their activities cause harm) accountability. The alternative, relying on their good faith and internal procedures, is no longer acceptable, given how critical e-mail has become.
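For readers unfamiliar with the mechanics, a “listing” on most email blocklists is simply a DNS record: a receiving mail server reverses the octets of the sender’s IP address and queries that name against the blocklist’s zone, treating a successful lookup as “listed.” Below is a minimal sketch of that lookup in Python; the zone name dnsbl.example.org is a hypothetical placeholder, not a real blocklist, and real mail servers layer policy decisions on top of this check.

```python
import socket

def is_listed(ip_address, dnsbl_zone="dnsbl.example.org"):
    """Check whether an IPv4 address appears on a DNS-based blocklist (DNSBL).

    The IP's octets are reversed and prepended to the blocklist zone;
    a successful A-record lookup (conventionally returning 127.0.0.x)
    means the address is listed, while NXDOMAIN means it is not.
    """
    reversed_ip = ".".join(reversed(ip_address.split(".")))
    query_name = f"{reversed_ip}.{dnsbl_zone}"
    try:
        socket.gethostbyname(query_name)  # resolves only if the IP is listed
        return True
    except socket.gaierror:               # NXDOMAIN or lookup failure: not listed
        return False

if __name__ == "__main__":
    # 192.0.2.1 is a documentation-only address used here purely as an example.
    print(is_listed("192.0.2.1"))
```

The sketch also illustrates why Ezor’s call for “surgical” listings matters: because the unit of blocking is an IP address (or an entire range), an overbroad entry can silently cut off every sender who happens to share that address space.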
The issues raised by blocklist services are complex, and they span a variety of online rating services, including spyware filters, Google’s PageRank, and eBay’s feedback forum. On the one hand, filters are simply part of the opinion “industry,” and they add significant value by centralizing behavior monitoring, because it’s too expensive for each of us to independently form our own opinions.
On the other hand, by ceding control to filter vendors, we have to trust that those vendors will make good choices. There have been plenty of examples of filter vendors making questionable choices: the RBL was notorious for being arbitrary and unresponsive, I’ve heard many complaints from software vendors upset about being characterized as adware/spyware, and even more complaints from websites unhappy about the operation of Google’s PageRank filter. So the centralization of opinion formation can carry significant private (and perhaps social) costs if done poorly, and I’m not entirely convinced that the market for centralized opinions is particularly efficient.
Thus, opinion vendors can have a lot of power but may not be fully accountable for wielding that power unwisely. Despite this, I favor the production of such opinions, so from a legal standpoint I think filter vendors should be broadly protected for their choices. At the same time, as consumers of filters, we need to be vigilant about which filters we trust.