The “EARN IT” Act Is Another Terrible Proposal to “Reform” Section 230
About the Drafts
A first draft of the bill leaked in late January. Most of the initial commentary focused on that version. A subsequent draft circulated privately in early February. I’m not aware of that version being publicly posted, and I haven’t gotten authorization to post it. Nevertheless, I’m writing my blog post in reference to the second draft. There appear to be some important changes in the second draft, some of which I’ll flag below. It’s possible/probable that the bill will see further changes when it’s officially introduced.
Nomenclature note: the bill targets what we historically called “child pornography.” The bill uses the phrase “online child sexual exploitation material,” but to its discredit, the bill never defines that term. A more modern and possibly more expansive term for “child pornography” is “child sexual abuse material.” This post uses the acronym “CSAM” to describe this category of content.
What the Bill Says
Creating a New Commission
The law would create a new commission called the “National Commission on Online Child Sexual Exploitation Prevention,” consisting of the following members:
- US DOJ AG or designee, who will act as Commission chairperson
- Homeland Security Secretary or designee
- FTC Chair or designee
- 2 law enforcement representatives
- 2 prosecutors
- 2 representatives of NGOs “providing services for victims of online child sexual exploitation”
- 2 technologists
- 2 representatives of large Internet services with 30M+ monthly registered U.S. users [“registered” isn’t defined in the statute, and it’s much more restrictive than the more typical “monthly active users” (MAU) metric. For example, I doubt Wikipedia, one of the top 10 most-trafficked websites, would qualify under this standard.]
- 2 representatives of small Internet services with <10M monthly registered U.S. users
- 1 representative with “experience in consumer protection matters related to privacy and data security representing civil society or public interest organizations.”
Note: this adds up to 16 spots, though the bill says the commission has 15 members. The last bullet, the privacy expert spot, was added in the most recent redline. It appears the drafters intended to expand the commission to 16 but failed to update the number from 15.
To mitigate partisan bias, the seats other than the first three would be allocated as follows: 3 appointed by the Senate Majority Leader, 3 by the Senate Minority Leader, 3 by the House Speaker, and 3 by the House Minority Leader (note the continuing math problem: 12 appointed seats plus the 3 ex officio seats equals the stated 15, but the enumerated list requires 16). I don’t really understand how these folks will divvy up the rights to designate particular seats, and many of the positions seem like they should be inherently nonpartisan.
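To make the seat arithmetic concrete, here is a minimal tally as I read the second draft (the bloc labels are my shorthand, not statutory language):

```python
# Seat counts as enumerated in the second draft (labels are my shorthand).
seats = {
    "AG (chair)": 1, "DHS Secretary": 1, "FTC Chair": 1,
    "law enforcement": 2, "prosecutors": 2, "victim-services NGOs": 2,
    "technologists": 2, "large Internet services": 2,
    "small Internet services": 2, "privacy/consumer protection": 1,
}

enumerated = sum(seats.values())      # 16 seats listed in the bill
stated_size = 15                      # the size the bill states
ex_officio = 3                        # AG, DHS, and FTC serve automatically
congressional_appointments = 4 * 3    # 3 picks each for 4 congressional leaders

print(enumerated, stated_size)                               # 16 vs. 15
print(enumerated - ex_officio, congressional_appointments)   # 13 seats to fill, 12 picks
```

Either way the numbers don’t reconcile: the appointment process fills 12 seats, the ex officio members fill 3 more, and the enumerated list requires 16.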
The Commission’s Charge
In __ months (a placeholder says 18), “the Commission shall develop and submit to the Attorney General recommended best practices that providers of interactive computer services may choose to engage in to prevent the online sexual exploitation of children and reduce the proliferation of online child sexual exploitation material.” (I put “best” in quotes due to the Commission’s procedural defects, which aren’t likely to yield “best” practices). The bill enumerates 11 specific topics for the “best” practices to address. The “best” practices are supposed to reflect the nature of different services, including that some services publish content to the public, and consider costs/technical limits as well as impacts on competition and privacy. Nothing is supposed to compel services to look for CSAM. Any Commission vote requires 11 yes votes. The Commission is supposed to update its work every 2 years.
In the first draft, the Commission’s reports went to the AG, who could do whatever he wanted with them. It even appeared that the AG could substitute in his own “best” practices for the Commission’s recommendations (the prior draft said the AG could unilaterally “modify” the recommendations). I believe the second draft changes this. It now reads that “the Attorney General, upon agreement with the Secretary of Homeland Security and the Chairman of the Federal Trade Commission, shall approve or deny the recommended best practices.” The revised language is better than giving the AG carte blanche, but still, how will this work? Will this require unanimity among the 3? Majority (2 of 3)? Does the AG have the final say but must consult? In a later paragraph, the bill says “Any denial of the recommended best practices by the Attorney General under paragraph (1) shall be accompanied by public written findings setting forth the basis for, and reasons supporting, the denial.” This seems to imply that the AG has unilateral rejection authority, but my instinct is that the latter provision didn’t get properly refreshed in the second draft and should be updated to reflect the triumvirate decision-making. Otherwise, the provisions don’t seem to cohere.
Note: it’s terrible committee design for the final approvers of the Commission’s work product (the AG, DHS Secretary, and FTC Chair) also to sit as voting Commission members. Other members will be reluctant to speak or vote against the final approvers because of the likely futility. This increases the odds that the Commission will simply rubber-stamp the final approvers’ agendas.
Also new to the second draft: the approved “best” practices automatically go into effect 90 days after Congress is notified of them, though Congress can disapprove them. 90 days sounds like an unreasonably short turnaround time for Internet services to change their practices, especially complex services run by a small number of employees. That reminds me a little of the mad (and extremely expensive) scramble that the CCPA’s too-quick go-live date caused for California businesses–except this process is scheduled to repeat every two years, with a potentially constant flow of significant new obligations. During this 90-day period, Congress may hold hearings and vote to disapprove, so the mad scramble may be wasted effort. A better design would provide a longer phase-in period that commences only after all approvals/disapprovals have been completed.
Furthermore, getting Congress to do anything in 90 days seems impractical, especially in light of the incredibly and unusually specific Congressional disapproval process spelled out in the bill. The second draft added about 10 pages (about 1/3 of the total bill length) of highly technical and detailed procedures governing Congressional disapproval. For example, as I read it, in the House only the majority or minority leader can introduce a disapproval motion; other Congressmembers can’t. These hoops make it even more challenging for Congress to ever disapprove a Commission decision. That’s likely a feature, not a bug, to the bill drafters, who apparently don’t want Congress to disapprove anything.
Certification of “Best” Practices
Every year, an Internet service can self-certify compliance with the approved “best” practices. If the DOJ believes a certification is false, it can send a civil investigative demand (CID). The person who signs a false self-certification can be criminally prosecuted. Good luck finding a corporate officer who wants to sign that certification!
Section 230 Amendments
Like FOSTA, the bill curtails Section 230 to allow new civil and state criminal claims:
- civil claims pursuant to 18 USC 2255 or state law for activity that violates 18 USC 2252 or 2252A (which cover CSAM distribution or receipt) or 2255(a)(2) (which appears to be a typo; 2255(a) doesn’t have any subparts). The law reduces the requisite scienter in the underlying statutes from “knowing” to “reckless” violation.
- any state criminal prosecutions for conduct that violates 18 USC 2252 or 2252A
The bill allows Internet services to “earn” back a safe harbor to this newly exposed liability by maintaining a current “best” practices certification on file with the DOJ or by adopting other “reasonable measures…to prevent the use of the interactive computer service for the exploitation of minors.” It’s hard to imagine why any service would rely on door #2. The legal standard is exceptionally vague (“exploitation of minors”). Also, much like the DMCA’s safe harbor helped reconfigure how courts think about the prima facie elements of secondary copyright infringement, courts are likely to reference the Commission’s “best” practices when defining what constitutes reasonable measures. The bill could say that courts shouldn’t consider the “best” practices when determining reasonable measures, but doesn’t.
The Section 230 amendments take effect the earlier of 1 year after the “best” practices become effective or __ years (4 is the current placeholder). This is a poison pill of sorts. It ensures the Section 230 scaleback will happen even if no “best” practices are effective. That could happen because the Commission never agrees on recommendations, the AG/approvers never approve the recommendations, or Congress disapproves the “best” practices. If no “best” practices are approved in 4 years, the only applicable “safe” harbor will be the “reasonable measures” language, which isn’t defined and will require development of a brand new body of common law. If we don’t actually believe that Internet services can confidently rely on the “reasonable measures” safe harbor, the Section 230 carveback’s automatic implementation will cause additional avoidable chaos. An easy fix would be to implement the carveback on the LATER of 4 years or 1 year after “best” practices adoption, but I suspect the poison pill was intentional.
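A toy date calculation (with a purely hypothetical enactment date) shows why “earlier of” functions as a poison pill while “later of” would not:

```python
from datetime import date, timedelta

# Hypothetical dates for illustration only; "4 years" tracks the placeholder.
enactment = date(2020, 7, 1)
best_practices_effective = None          # assume no "best" practices are ever approved

four_year_deadline = enactment + timedelta(days=4 * 365)
one_year_after_practices = (best_practices_effective + timedelta(days=365)
                            if best_practices_effective else None)

# As drafted ("earlier of"): the carveback triggers at the 4-year mark even
# though no practices exist, leaving only the vague "reasonable measures" harbor.
candidates = [d for d in (four_year_deadline, one_year_after_practices) if d]
as_drafted = min(candidates)

# Alternative ("later of"): the carveback would wait until practices actually exist.
as_fixed = max(candidates) if one_year_after_practices else None

print(as_drafted)   # 2024-06-30
print(as_fixed)     # None -- no trigger until "best" practices are approved
```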
Like FOSTA, the bill nominally retains the Section 230(c)(2)(A) defense. During the SESTA-FOSTA debates, I explained to Congress why that defense is meaningless. (I even proposed language to make it meaningful!) One of several reasons: 230(c)(2)(A) isn’t likely to protect trying-and-failing, and that’s the likely grounds for any newly tenable EARN IT Act claims.
Expanded Reporting Safe Harbor
Existing law requires Internet services to report known instances of CSAM to NCMEC. A safe harbor, 18 USC 2258B, insulates them from liability for making these legally obligated reports. The bill would expand and clarify the 2258B safe harbor to include compliance with search warrants/court orders or voluntary research to combat CSAM or reporting. It would be remarkable if anyone feels safe relying on this safe harbor to conduct CSAM research–it’s an extremely hazardous area of study.
Problems with the Bill (Selected)
I’ve already mentioned a few tactical problems with the bill above, and I could spend many thousands more words on structural problems with this bill. Instead, I’ll highlight “just” 6 of the many big problems.
Section 230 Already Doesn’t Apply to CSAM, So Removing Section 230 Won’t Motivate Internet Services to Fight CSAM Harder
The EARN IT Act implicitly assumes that Section 230 currently inhibits Internet services from combating CSAM more aggressively. Otherwise, why condition Section 230 on the services doing more? (I’ll address more cynical explanations shortly).
Unfortunately, this makes no sense. Due to several attributes that are unique to CSAM compared to other types of awful content, fighting CSAM already gets the highest levels of attention from virtually all Internet services:
- Section 230 does not apply to federal criminal prosecutions, so Internet services have never had Section 230 immunity for federal CSAM crimes. That exposure helps motivate Internet services to combat CSAM. Indeed, the US DOJ actively prosecutes CSAM cases–2,776 cases in 2013–so Internet services won’t gamble on underenforcement.
- As already mentioned, Internet services must report known CSAM items to NCMEC (18 U.S.C. § 2258A). Congress expanded the 2258A reporting obligation in 2018. The statutory expansion possibly explains the N.Y. Times’ 2019 breathless panic that CSAM reports had doubled in the past year. Due to the 2258A reporting duty, many Internet services have designated employees responsible for redressing CSAM, which ensures that Internet services pay more attention to CSAM than other types of awful content.
- CSAM is the paradigmatic example of awful content that can be identified solely by examining the content itself, without any additional context. This makes CSAM the ideal case for automated filtering. Over a decade ago, Microsoft developed an automated filter, PhotoDNA, which it now makes freely available to other industry participants. Thus, unlike many other types of awful content, a non-proprietary CSAM automated filter is already widely adopted across the industry.
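For readers unfamiliar with context-free content matching, here is a minimal sketch of the hash-matching idea behind tools like PhotoDNA. It uses an ordinary cryptographic hash purely for illustration; PhotoDNA’s actual algorithm is a perceptual hash designed to survive re-encoding, resizing, and similar alterations, and in practice the hash lists are shared across the industry rather than assembled from scratch by each service.

```python
import hashlib

# Hypothetical hash list of known illegal images; the value below is a placeholder.
known_bad_hashes = {"3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"}

def matches_known_item(image_bytes: bytes) -> bool:
    """Return True if an uploaded file exactly matches a known item.

    A production system would use a perceptual hash (robust to minor edits)
    rather than an exact cryptographic hash like SHA-256.
    """
    return hashlib.sha256(image_bytes).hexdigest() in known_bad_hashes
```

The point for this post’s argument: because the match depends only on the file itself, the filter needs no understanding of surrounding context, which is what makes CSAM unusually amenable to automated detection.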
Virtually all Internet services consider CSAM the most pernicious type of user-supplied content and already apply zero-tolerance policies. It’s laughable to imply that Internet services are blasé about CSAM on their networks. Of course Internet services could do more to suppress awful content generally, but those steps aren’t specific to CSAM and are likely to affect wide swathes of legitimate UGC.
In sum, in light of the anti-CSAM efforts already being deployed, exactly what new anti-CSAM steps will be motivated by the removal of Section 230 immunity? The EARN IT Act appears to be motivated more by other considerations than by actually helping combat CSAM or protect CSAM victims.
The Effects of New Civil and State Criminal Liability Are Unclear
Making Internet services civilly liable for “recklessly” distributing CSAM opens up a Pandora’s box of unknown consequences. Congress recently enacted the “Amy, Vicky, and Andy Child Pornography Victim Assistance Act of 2018” (18 USC 2259) to clarify, and likely expand, civil liability for CSAM. I’m not sure how that change will affect the liability of Internet services. At minimum, the “reckless” scienter standard increases defense and adjudication costs, regardless of the claims’ merit, because plaintiffs will always plead facts to claim recklessness. This would eliminate Section 230’s critical procedural benefit of avoiding expensive and time-consuming judicial inquiries into scienter.
State criminal enforcement creates additional uncertainties. 20 years ago, NY AG Spitzer prosecuted an Internet access provider, BuffNET, for hosting USENET groups that carried CSAM from third-party non-subscribers. What should we make of that example? It implies that Section 230 has never prevented state CSAM prosecutions. At the same time, the fact that there have not been similar prosecutions in 20 years suggests that perhaps Spitzer overreached and such prosecutions are not actually appropriate or defensible.
(As I’ve explained elsewhere, there are other good reasons why Section 230 restricts state prosecutions without restricting federal prosecutions).
To further confound the matter, FOSTA recently opened up Section 230 for state criminal prosecutions and civil claims related to sex trafficking. As far as I know, no state prosecutors have used these new tools in the past 2 years, and civil plaintiffs only just started trying in the last couple of months. Apparently, there wasn’t urgent pent-up demand for FOSTA’s Section 230 carveback. Will the CSAM carveback face similar dynamics? I’m not sure.
Ignoring Results from Related Efforts
As I’ve already mentioned, Congress has amended CSAM law recently, including the expanded Internet service reporting requirement and the expanded civil CSAM liability. FOSTA is also partially related. It would make sense to absorb the lessons from these policy changes. Are they already redressing the problems? If so, why do we need more laws? If not, what went wrong with the recent policy changes, and are we sure this bill will avoid their failings?
In particular, the early assessments of FOSTA are horrendous. It appears to have made many communities worse off, with no countervailing benefits to other communities. The EARN IT Act copies some of FOSTA’s structure, so it seems likely to repeat some of FOSTA’s errors.
The Encryption Battles, Redux
In December 2019, AG Barr spoke to NAAG and linked together tech power, Section 230, CSAM, and encryption. It was jarring to hear AG Barr target Section 230, because the DOJ is the only entity in the world that can automatically bypass any Section 230 defenses (due to 230’s federal criminal prosecution exception). Thus, the DOJ never has been restricted by Section 230 and has minimal first-hand experience with it. Indeed, the DOJ has had numerous high-profile enforcement successes unfettered by Section 230, including its 2007 $31M bust of search engines for accepting allegedly illegal gambling ads, its 2011 bust of Google for accepting allegedly illegal pharmaceutical ads that led to a half-billion-dollar payment, and the life sentence of Ross Ulbricht for running Silk Road.
Instead, Barr’s remarks reignite long-standing battles over encryption. Law enforcement has sought to eviscerate online encryption for at least a quarter-century. The fear is that the EARN IT Act creates a new way to attack encryption: stripping Section 230 immunity from any service that deploys encryption. The Commission’s composition–packed with law enforcement representatives–stokes that fear.
[I’ll treat “encryption” as a binary state of either accessible or inaccessible to unintended audiences, but that’s a gross oversimplification]
Any 230/encryption linkage makes no sense. First, Internet services usually don’t need Section 230 protection for private messages (encrypted or not) they automatically transmit. I’m not aware of a specific immunity covering this activity, but it would be hard to come up with a prima facie case where a provider of private messaging was liable for automatically processing private messages. In fact, services usually are legally blocked from examining private message contents except in specific circumstances, so those services have extremely limited tools to redress awful private message content. Thus, I’m struggling to imagine the circumstances where private message providers would stop adding their own encryption layer to private messages for fear of losing Section 230 immunity.
Second, the Section 230/encryption deal won’t prevent users from encrypting the private messages they send. That content will be completely opaque to the services, and it’s virtually impossible to imagine how Internet services could be liable for encrypted messages. I guess the Commission could adopt a “best” practice that Internet services must block/ban all user-encrypted content to get Section 230, but I don’t know how that could work technically or why that would matter to the Internet services.
Third, non-private (publicly posted) content generally can’t be meaningfully encrypted, because it has to be readable by its intended public audience, and CSAM is rarely shared non-privately for obvious reasons. So an anti-encryption “best” practice wouldn’t reach CSAM this way either.
Thus, I don’t see how conditioning 230 on Internet services’ dropping encryption would redress the CSAM problem. If acted upon by the Internet services, however, it would make it easier to intercept any private messages (including messages containing CSAM) that senders don’t encrypt themselves. This wouldn’t be restricted to CSAM; it would be an open door to intercepting messages of all flavors. And it wouldn’t be restricted to U.S. law enforcement; other state actors and malefactors would be delighted to listen in as well. Thus, if the EARN IT Act links 230 to no-encryption, the Internet would become less secure for users, and more users would be motivated to encrypt their messages.
There is good reason to fear that the EARN IT Act is cynically invoking CSAM as a Trojan horse for discouraging encryption deployment. It just seems like the Trojan horse doesn’t actually solve any of its purported problems.
The Commission Composition Won’t Produce “Best” Practices
Beyond encryption bans, there are many other bad ideas that could emerge as “best” practices. For example, taking inspiration from the UK Defamation Act, the Commission could predicate Section 230’s immunity from CSAM claims on Internet services having validated identifying information about violating users. Such a “best” practice would force Internet services to authenticate all users (because the services don’t know in advance which one(s) will commit CSAM crimes). That would functionally eliminate anonymous online content and create a honeypot database for law enforcement, plaintiffs, and malefactors.
Age authentication is another obvious vector for a bad “best” practice. Some states already (unsuccessfully) tried to mandate age verification in the Backpage battles. The Commission could create a “best” practice for Internet services to verify the age of anyone depicted in any “pornographic” image or video as a precondition for Section 230 immunity. Because Internet services cannot possibly do this, a “best” practice like this would immediately eliminate most publicly available pornography on U.S. servers, whether or not constitutionally protected.
Note: following Tam and Brunetti, I think it violates the First Amendment for the government to withhold a valuable privilege based on preconditions that, if promulgated as standalone obligations, would violate the First Amendment. So I offer up some bad “best” practices without opining on whether those practices could survive a constitutional challenge.
Even if the Commission doesn’t try to adopt terrible “best” practices, I don’t expect any approved practices actually will be “best” for the Internet due to the Commission’s composition. Recall that 6 “no” votes will be required to stop any proposal.
The Internet services get 4 seats total, so they alone cannot spike any bad ideas. Furthermore, the odds that the Internet services will agree with each other aren’t high. In particular, big Internet services embrace regulatory requirements that thwart their competition and entrench their positions. That’s why Facebook keeps begging for government regulation (Zuckerberg just did it again this weekend). Accordingly, the big Internet services may not oppose costly/onerous law enforcement proposals that hurt their competition. The bill’s nod to competitive concerns won’t inhibit this dynamic at all.
One obvious omission from the Commission’s composition: advocates for free speech or for the users who aren’t disseminating CSAM. Without those voices, the Commission’s recommendations will surely override those interests.
Even when the large and small Internet service representatives vote with each other, they’ll struggle to shape the recommendations. Law enforcement has 7 votes, and the 2 NGO votes will inevitably side with law enforcement. That leaves only 3 votes up for grabs to assemble the necessary 11 yes or 6 no votes: the technologists and the privacy representative. We shouldn’t assume the technologists will side with the Internet companies. Many technologists are pro-censorship. The privacy representative’s predispositions are also hard to anticipate: they may be skeptical of efforts to undermine encryption, but privacy advocates often favor regulating Internet services. Due to the voting structure, I expect the Commission will recommend “best” practices that are pernicious for Internet services and all their users.
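A back-of-the-envelope tally of the voting blocs (assuming, as above, that the NGO seats vote with law enforcement) shows why the swing seats decide everything:

```python
# Voting-bloc arithmetic for the 16-seat Commission (bloc assignments are my
# assumptions based on the discussion above, not anything in the bill text).
law_enforcement_bloc = 3 + 2 + 2   # AG/DHS/FTC + law enforcement reps + prosecutors = 7
ngo_seats = 2                      # assumed to vote with law enforcement
internet_service_seats = 2 + 2     # large + small services = 4
swing_seats = 2 + 1                # technologists + privacy representative = 3

yes_needed = 11                    # votes required to adopt a recommendation
no_needed = 6                      # votes required to block one (16 - 11 + 1)

aligned_yes = law_enforcement_bloc + ngo_seats   # 9 likely yes votes
print(yes_needed - aligned_yes)                  # 2: swing votes law enforcement needs to adopt
print(no_needed - internet_service_seats)        # 2: swing votes the services need to block
```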
Section 230 Shouldn’t Be a “Privilege” to be Earned
It has become a popular DC meme to view Section 230 as a “payoff” for doing something the regulators want. This regulatory approach treats the loss of Section 230 immunity as the consequence of noncompliance with the regulators’ demands. However, without Section 230, many Internet services will be overrun by litigation and go out of business–an outcome that typically will be disproportionate to the associated legal violation. If Congress affirmatively wants Internet services to do something, it should mandate that directly–and create just and proportionate remedies for any violation–without otherwise modifying Section 230.
The “Principles”
Last year, 53 academics and 28 organizations issued a statement of 7 principles to evaluate new revisions to Section 230. There’s so much wrong with this bill that I didn’t need to run through the 7 principles to find the problems. I encourage you to do the exercise yourself FWIW.
Selected Other Commentary
- Emma Llanso, “Privacy, Free Expression, and Security Threatened by Graham Proposal”: “Preventing the exploitation of children is an unquestionably important aim. However, there is little evidence that Section 230, which protects intermediaries such as website operators from liability for the information their users post, is a barrier to those efforts.”
- Riana Pfefferkorn, “The EARN IT Act: How to Ban End-to-End Encryption Without Actually Banning It”: “This bill is trying to convert your anger at Big Tech into law enforcement’s long-desired dream of banning strong encryption. It is a bait-and-switch. Don’t fall for it.”
- Berin Szoka, “Lindsey Graham’s Sneak Attack On Section 230 And Encryption: A Backdoor To A Backdoor?”: The bill is a “monstrous, Rube-Goldberg-esque legal contraption.”
- Alan Rozenshtein, “Congress, Not the Attorney General, Should Decide the Future of Encryption”: “the question of whether to permit ubiquitous encryption is the sort of high-level policy decision that is best handled not by the executive branch but by Congress, which best represents the public and its different constituencies and interests.”
- Mark Rasch, “Bill to Prevent Child Porn Could Kill Encryption”: the law “may have the effect of prohibiting the kinds of good security practices that prevent unauthorized access to medical records, credit card records, bank records and other personal information, and may make the internet a hell of a lot less secure.”
The EFF asks you to take action against the EARN IT Act.
Conclusion
What is it about Section 230 that it attracts such dreadful reform proposals? FOSTA has been an unmitigated disaster; Sen. Hawley and Rep. Gosar’s bills full-throatedly embrace censorship; the PLAN Act is pure rent-seeking by hotels; and then this bill comes along. These reforms aren’t just terrible; they encapsulate everything that’s broken in Congress today.
The EARN IT Act shares a common philosophy with the UK Online Harms proposal. In both cases, to combat awful online content, the government helps develop a set of industry-wide standards that are the only viable path to avoiding ruinous liability. See my critique of the UK Online Harms proposal. I characterized the UK Online Harms proposal as one of the most enthusiastically pro-censorship proposals to come from an industrialized Western country. The EARN IT Act is a close second.