Denver University “Cyber Civil Rights” Symposium Recap
By Eric Goldman
The week before Thanksgiving, I attended an unusual symposium sponsored by the Denver University Law Review entitled “Cyber Civil Rights: New Challenges for Civil Rights and Civil Liberties in our Networked Age.” The symposium covered standard Cyberlaw topics, but the raison d’être was University of Maryland law professor Danielle Citron’s two recent articles on online harassment of women: “Law’s Expressive Value in Combating Cyber Gender Harassment” (Michigan Law Review) and “Cyber Civil Rights” (Boston University Law Review). It is unusual for a law school to celebrate another school’s professor and her research, especially when the professor is fairly junior. Nevertheless, Danielle’s participation brought together academics from both the Cyberlaw and civil rights communities, which provided a rare and interesting mix of folks.
First Panel
Danielle Citron started off by recapping her two papers. Online participation, such as blogging, is essential to professional standing, and employers are reviewing online profiles of prospective employees as part of their hiring considerations. However, women are being targeted for abuse online. These attacks are harming women by changing their online and offline activities, reducing their job opportunities, and causing women to change their gender representations online. Further, folks are trivializing these problems. Women are underreporting the attacks, and law enforcement only intervenes when there are offline harms. New laws can serve an expressive function to communicate that online attacks against women are socially unacceptable. The new laws can validate women’s feelings that they have been harmed and encourage law enforcement to pursue more cases.
Commenting on the papers, Robert Kaczorowski of Fordham Law (and Danielle’s stepdad) made an extended analogy between the Ku Klux Klan and cybermobs.
Wendy Seltzer asked if we could deemphasize the effect of words rather than prohibit them. Danielle responded that we don’t know how seriously to take any particular threat.
An audience member asked whether there is a difference between mobs and individual actors who are just taking advantage of being anonymous. Danielle answered that groups can become more extreme online. I think this point deserves more exploration: a series of uncoordinated individual decisions to “pile on” to an attack can look like a coordinated attack to the victim. This is part of why I thought the KKK references were puzzling: KKK activities are clearly coordinated, while online attacks against women can succeed without any coordination or ongoing connection between the attackers.
Paul Ohm argued that legal solutions are better for cyber civil rights problems than technological solutions. Paul discussed what he labeled “Felten’s Third Law.” (He doesn’t know of two earlier laws named for Ed Felten; he just assumes they exist given Ed’s impressive and influential oeuvre.) As articulated by Paul, Felten’s Third Law is that in Cyberlaw conflicts, lawyers love technical solutions and technologists love legal solutions. In other words, we love the solution we don’t know because we assume it has to be better than the one we do. As both a law professor and technologist, Paul picks law over technology for these problems.
Paul categorically rejects any technical solution that would create a “fully identified Internet.” For example, we should not mandate server log retention because we know the logs will be co-opted to regulate other forms of unwanted content, not just online harassment.
Wendy Seltzer discussed the unintended consequences of legal intervention. For example, mandatory Internet filtering in school libraries hasn’t stopped kids from bypassing the filters, but it has facilitated a marketplace for improving filtering technologies that has benefited repressive regimes. Another example: anti-circumvention technology fails to restrict copying but has reduced innovation around DRMed content. Wendy also noted how norms can help curb abuses. For example, while there are online cesspools, she praised Wikipedia’s evolving guidelines for living people’s biographies.
In response, Danielle admitted that her solutions need to be more surgical. She said she might consider moving from a notice-and-takedown model to a notice-and-preserve model for intermediaries.
Second Panel
This panel was composed of three women academics from the civil rights community, so it was a noticeable shift from the typical Cyberlaw academic discussion.
Mary Anne Franks is a University of Chicago Bigelow Fellow and soon-to-be full-time law professor. She expressed our collective disappointment that cyberspace isn’t a utopia that allows people to escape offline discrimination and harassment. She lamented that women can lose control of their identities online, such as when someone creates a fake online profile in their names.
She then addressed how cyberspace is unique/special/different with respect to gender harassment. Many commentators try to duck cyberspace exceptionalism, so it was refreshing to see her tackle the issue squarely. Existing offline discrimination/harassment laws assume interactions between repeat players at work and school; online harassment can be divorced totally from any existing social networks. However, because the online activities still harm targeted individuals at work and school, we should treat the harms the same. Offline, there are switching costs to changing jobs or schools; online, search engines’ consolidation of results for searches on a person’s name creates a different type of switching cost. In terms of supervisory power, she thinks web operators have analogous control to employers or school administrators. Thus, when web operators receive notice of online harassment, they should have a duty to do something about it. Offline, employers can develop a variety of responses and policies to combat workplace harassment. Web operators should have similar latitude; for example, they can delete offending posts or suspend/ban accounts.
Helen Norton, a University of Colorado law professor, did not share Danielle’s optimism (expressed in her first article) that existing discrimination laws can curb online harassment. Instead, Helen thinks a new civil rights statute is needed, but she might limit its remedies to exclude money damages. Helen is pessimistic that there will be regulation any time soon, noting that it can take years to enact civil rights legislation. Helen would also like to see more precise definitions of the exact harms that women are experiencing only online.
Nancy Ehrenreich, a Denver University law professor, began her talk by saying that we should not overstate the Internet’s benefits. She then clarified that we should not assume that disadvantaged folks can overcome barriers online. For example, we impose cultural categories on people in every interaction, so even if people try to mask their identity online, they can’t really escape. She wondered why we aren’t talking about an anti-discrimination law for the web. Her concern is that discrimination denies individuals access to the Internet.
In Q&A, Paul Ohm observed that civil rights scholars often invoke free speech as the countervailing concern to their desired regulations, but Cyberlaw scholars are often more interested in other “generative” effects of the Internet, such as new business models, new labor models and new modes of production.
Third Panel
James Grimmelmann (see his slides) started with the Skanks in NYC case. In that case, the defendant criticized someone else in her social network on a blog, calling the plaintiff (among other unflattering things) a “skank.” The plaintiff sued to obtain the blogger’s identity. After a successful unmasking, the plaintiff dropped the lawsuit, having successfully publicly shamed the blogger.
James hypothesized that this unmasking and shaming was an appropriate remedy: the blogger got shamed (like “an eye for an eye”), and unmasking is a better outcome than other legal remedies like damage suits. James then posited a thought exercise that would provide plaintiffs with an expedited unmasking procedure if they dropped any damages claims. This would have a number of benefits. Unmasking curbs online harassment and is especially effective at busting online mobs. Also, an unmasking remedy avoids messy debates over the First Amendment’s scope, and it may be more desirable than trying to hold online providers liable.
Having advanced his own strawman, James then cut it down. In some cases, defamation remedies may be more desirable, and plaintiffs may not know that until they learn the putative wrongdoer’s identity. In other cases, plaintiffs who just want unmasking would appreciate a lower legal hurdle. Also, we provide legal protection for anonymity for good reasons.
James’ lessons from the thought exercise: we should consider ways to decouple an unmasking remedy from litigation. At the same time, we need to protect defendants from pretextual unmasking; in some cases, retaliation is a big concern, and we should incorporate this concern into the unmasking decision.
From Chris Wolf’s talk (see his full remarks), the most interesting thing I learned is that 18 states have laws banning wearing masks in public, enacted to suppress KKK activities. This was the second speaker’s KKK reference of the day, and it made me wonder if we were experiencing some variation of Godwin’s Law.
Fourth Panel
Viva Moffat observed that secondary liability issues generate the most heat in online harassment discussions. She expressed concern that imposing legal duties on third parties may not help law’s norm-shaping effect, and it’s not appropriate to impose liability just because the provider has deeper pockets or the direct actor can’t be found. She also suggested that imposing liability on third parties creates a greater risk of collateral damage than direct liability. [Note: I would like to know more about this last assertion. I suspect we cannot make a utilitarian calculation a priori]. As a result, she favors focusing more efforts on sharpening direct liability.
Ed Felten talked about identifying and anonymizing online activity. He explained the usual sequence of events in chasing bad online content:
log file => IP address => identity => justice
But the IP address => identity step breaks down when users use an anonymizing proxy, or when the user’s network uses network address translation (NAT, as in home wireless routers or coffee shops), so that all connected devices’ requests share a single public IP address. He said that a majority of Internet connections use NAT.
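To make the NAT point concrete, here is a minimal sketch (my illustration, not Ed’s; the addresses, ports, and device names are hypothetical) of why a server log alone cannot distinguish the devices sharing one connection:

```python
# Hypothetical illustration: several devices behind one NAT router share a
# single public IP address, so a web server's log cannot tell them apart.

PUBLIC_IP = "203.0.113.7"  # the one address all three devices appear to use

# The router's private translation table, which the website never sees and
# which is typically discarded quickly.
nat_table = {
    54001: ("192.168.1.10", "Alice's laptop"),
    54002: ("192.168.1.11", "Bob's phone"),
    54003: ("192.168.1.12", "coffee-shop guest"),
}

# What the web server actually records for each request: identical entries.
for _ in nat_table:
    print(f'{PUBLIC_IP} - - "GET /page HTTP/1.1" 200')

# The log file => IP address step still works, but IP address => identity
# dead-ends here: every log entry shows 203.0.113.7, and only the router's
# NAT table (if it was retained at all) maps source ports back to devices.
for source_port, (private_ip, device) in nat_table.items():
    print(f"router-only mapping: port {source_port} -> {private_ip} ({device})")
```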
Because IP address tracebacks can dead-end at the intermediary, an IP address can reveal too little information. However, even when users aren’t investigatory targets, IP addresses can reveal too much information, such as geolocation. This paradox—IP addresses simultaneously reveal both too much and too little information—reflects that the IP address system was built for routing, not identification. So could we design a better authenticating technology?
He then conducted a “semi-realistic” thought experiment of a new technological “tag” that could be used instead of IP addresses. This tag could have the following attributes:
* can be placed by any intermediary
* conveys no information about the sender unless unwrapped by the intermediary (presumably for good legal cause)
* unwrapping the tag yields the best identity information the intermediary has
* the tag’s use is voluntary as a technical matter
* the tag is removable as a technical matter
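To make the thought experiment concrete, here is a minimal sketch (my gloss, not Ed’s actual design) of one way such a tag could work: the intermediary encrypts its best identity information with a key only it holds, so the tag is opaque to everyone else until that same intermediary unwraps it. The `Intermediary` class, method names, and sample data are hypothetical, and the Python cryptography package’s Fernet recipe is used purely for illustration.

```python
# Hypothetical sketch of the "tag" idea: an intermediary wraps its best
# identity information so the tag itself reveals nothing, but the issuing
# intermediary can unwrap it later (presumably under legal process).
from cryptography.fernet import Fernet

class Intermediary:
    def __init__(self):
        self.key = Fernet.generate_key()   # held only by the intermediary

    def make_tag(self, best_identity_info: str) -> bytes:
        # The tag is opaque to everyone downstream; attaching it is voluntary
        # and it can be stripped, mirroring the attributes listed above.
        return Fernet(self.key).encrypt(best_identity_info.encode())

    def unwrap_tag(self, tag: bytes) -> str:
        # Only the issuing intermediary can perform this step.
        return Fernet(self.key).decrypt(tag).decode()

isp = Intermediary()
tag = isp.make_tag("subscriber account #12345, cable modem MAC aa:bb:cc")
print(tag)                  # conveys nothing about the sender by itself
print(isp.unwrap_tag(tag))  # yields the intermediary's best identity info
```

Encrypting with the intermediary’s own key is just one way to satisfy the “conveys no information unless unwrapped” attribute; a real system would still have to work out who counts as an intermediary and what legal process triggers unwrapping.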
I then batted clean-up. A summary of my remarks:
Today’s conversation has revisited long-standing Cyberlaw issues, such as:
* anonymity v. accountability, and who should be responsible for online content and actions
* cyberspace as a physical place. See, e.g., Noah v. AOL (an online discrimination case), National Federation of the Blind v. Target (also an online discrimination case) and Estavillo v. Sony
* cyberspace exceptionalism and cyberspace utopianism (on the latter point, see my article on search engine utopianism)
* when is the optimal time to regulate rapidly evolving technology? Early, when the technology is still in its infancy, or later, when market forces and new technological evolutions may have cured the early problems?
Danielle’s articles convinced me that women are experiencing serious harms online that men—including me—could easily trivialize. Danielle’s articles also convinced me that online harassment has strong parallels to the 1970s legal evolution of workplace harassment doctrines, where a big part of the battle was to get people to take the harms seriously.
While I find a lot of descriptive value in Danielle’s work, the normative implications are not as clear. As usual with attempts to regulate rapidly evolving technology, there are many important but overwhelmingly hard definitional challenges, such as who is an “intermediary,” what are “online mobs” and what constitutes online “harassment.” For example, I do not think the Skanks in NYC incident is an online harassment case or an “attack,” but James Grimmelmann’s talk assumed those characterizations.
While we can debate what should be the right level of regulatory intervention, we should not overlook that Congress already enacted a law squarely governing intermediary liability for online harassment: 47 USC 230. The angst that prompted this conference (bad behavior online) is the logical consequence of 230’s broad immunity. The statute enables websites to adopt policies that they will not police user content or retain server logs of user activity. These choices aren’t a surprise or a per se abuse of the immunity; instead, they are the unavoidable implications of Congress’ action.
We might question Congress’ wisdom in adopting 230, but we should not diminish its potential importance to the Internet as we know it. [In Q&A, Chris Wolf asked about the comparative experience in countries that don’t have such broad immunity. In those countries, we know that websites take down user content much more freely, and I believe that the most interesting UGC innovations are all taking place here in the US, not in countries with more restrictive UGC liability.] I can, at most, show correlation and not causation, but I believe 230 is one of the main causal reasons why the Internet has succeeded so well.
When I speak around the country about 230, I often encounter folks who generally accept 230’s immunity scope but want just one new exception, i.e., their pet topic. If everyone got their “just one” exception, the law would be eviscerated. (I said it would be Swiss-cheesed to death; maybe I should have said it would be overcome by a thousand duck bites.) I’m not rejecting new exceptions categorically (each should be considered on its own merits), but in the aggregate 230’s immunization benefits are actually quite precarious. I believe 230 works precisely because of its strength and simplicity, so adding more exceptions could significantly reduce its efficacy.
I concluded my remarks by observing that online harassment is a subspecies of bullying and uncivil behavior in our society. While we can and should work to curb online harassment, I am more interested in addressing bullying and incivility in all their forms, wherever they take place.
In this regard, I have been impressed by how my son’s school is proactively addressing bullying. See more about this effort, called Project Cornerstone. The school is teaching kids not to bully or to tolerate being bullied, and the project gives bullied kids tools to go on the offensive against bullies. There’s no guarantee that anti-bullying programs will work in the short or long run, but I remain hopeful because online harassment today may partially reflect the fact that many current Internet users never got any anti-bullying education. Perhaps, then, online harassment issues will naturally abate (without any regulatory intervention) as new generations of Internet users, better educated about bullying, come onto the Internet.
Following my remarks, we had more Q&A.
Paul Ohm Q: Some cyber folks argue against secondary liability because they believe that a victim can pursue a direct action, but Ed’s talk suggests that user anonymity will continue to be possible.
Mary Anne Franks: civil rights isn’t about individual claims because victims have to bear too high a burden to pursue claims. Instead, civil rights are about changing large-scale social norms. The goal is to achieve anti-discrimination by any means necessary. Thus, civil rights scholars have already discussed and concluded that it’s appropriate to impose liability on intermediaries like employers and schools.
Danielle: intermediaries are the lowest cost avoiders.
James Grimmelmann: no, the harassers are the lowest cost avoiders. Civil rights folks would get more support from the Cyberlaw crowd if they focused their regulatory desires towards intermediaries who are in active concert with the bad actors.
Danielle’s Wrap-Up
We all agree that:
* education can make a big difference
* online communities need to self-police
* there are numerous limits to using the law as a solution, including that lawsuits don’t make sense and that 230 immunizes intermediaries.
We don’t agree on what to do next. There are First Amendment limits, and technology doesn’t offer any panaceas.