The EARN IT Act Partially Repeals Section 230, But It Won’t Help Children
I previously blogged a pre-introduction version of the EARN IT Act. The bill underwent substantial changes from the February draft–a few good, mostly not–so I’m doing a full blog post on the bill as introduced rather than updating my prior post. Still, you may want to read my prior post first to establish the context.
Nomenclature note: as I mentioned in my previous post, the bill uses the term “child sexual abuse material,” or CSAM, instead of the historical term “child pornography.” Not only does the bill adopt the term CSAM, but it proposes to substitute CSAM for “child pornography” throughout the entire U.S. Code; 22 pages of the bill are devoted to a global find-and-replace of the existing code to make that change.
The bill repeals Section 230’s immunity for publishing user-generated content with respect to state criminal prosecutions and civil claims related to CSAM. That repeal takes place no later than four years from passage. However, defendants can nevertheless “earn” a safe harbor for those newly exposed claims by: (1) implementing “reasonable measures relating to the [11 topics within the EARN IT Act’s scope] to prevent the use of the interactive computer service for the exploitation of minors,” or (2) complying with any “best” practices recommendations that are developed by a newly created commission and then approved by Congress. The new commission is called the “National Commission on Online Child Sexual Exploitation Prevention,” but I’ll call it the “Censorship Board.” I put the word “best” practices in quotes because the Censorship Board’s recommendations aren’t likely to be “best.” We can’t yet know the full scope or perniciousness of the “best” practices, but those recommendations are likely to be wide-ranging and could dramatically reconfigure the current Internet. If no “best” practices ever obtain Congressional approval, the Section 230 partial repeal nevertheless occurs in four years, leaving “reasonable measures” as the only statutory safe harbor.
Detailed Bill Analysis
Censorship Board’s Membership
The board will have 19 members:
- DOJ AG or designee. This person will chair the Censorship Board.
- Secretary of Homeland Security or designee.
- FTC Chairperson or designee. I’ll call these first three positions the “agency heads” cohort.
- 2 law enforcement representatives with experience investigating online child sexual exploitation crimes.
- 2 prosecutors with the same experience.
- 4 online child sexual exploitation victims or representatives of victim advocacy groups. The express reference to victims is new to this draft. Victims have an essential perspective that needs to be heard, but other members will be reluctant to disagree with whatever they say, which will almost certainly inhibit further deliberation.
- 2 experts in constitutional law, consumer protection, or privacy. These areas of expertise do not inherently relate to one another, so it’s not clear who a paradigmatic appointee in this category would be.
- 2 technologists with “experience in computer science or software engineering related to matters of cryptography, data security, or artificial intelligence.” The reference to cryptography expertise is one of many clues that the bill is targeting encryption.
- 2 representatives from Internet companies with 30M+ “registered monthly users.” The industry standard measure is “MAUs,” or “monthly active users.” In contrast, the term “registered monthly users” is ambiguous. Perhaps it means 30M+ registered users who use the service in a month?
- 2 representatives from Internet companies with <10M registered monthly users.
Excluding the agency heads cohort, the bill divides the remaining members into four cohorts of 4 members each:
- the law enforcement cohort (law enforcement/prosecutors)
- the victim cohort
- the experts cohort (the legal and technologist experts)
- the companies cohort
To nominally reduce partisanship, each of the following partisan leaders can appoint one member of each of the four cohorts: Senate majority leader, Senate minority leader, Speaker of the House, and House minority leader. If a vacancy occurs, it appears that the partisan leader who initially picked that person can fill the vacancy. Appointments last for 5 years and must be made within 90 days.
In the prior draft, the commission had 16 members total, and the draft before that had 15. The most recent expansion reflects two additional victim representatives and one more legal expert. The introduced bill also clears up how the partisan leaders divide their appointments, which was unspecified in the prior draft.
Censorship Board’s Duties
In 18 months, the Censorship Board is required to submit to the DOJ AG “recommended best practices that providers of interactive computer services may choose to engage in to prevent, reduce, and respond to the online sexual exploitation of children, including the enticement, grooming, sex trafficking, and sexual abuse of children and the proliferation of online child sexual abuse material.” The reference to “choice” is intentionally ironic. By threatening the legal shield that allows UGC services to exist, the bill ensures that Internet companies won’t have any real choice at all.
The board’s subject matter scope has expanded from the prior draft and now reaches far beyond CSAM. Indeed, given its broad charge, I’m not clear what is OUTSIDE the board’s scope. The board is specifically supposed to address 11 statutorily enumerated topics; example topics include “preventing, identifying, disrupting, and reporting child sexual exploitation,” age-gating, and content moderator training. The board’s recommendations are supposed to account for alternative approaches to reflect the company’s size and business model and whether it publishes UGC or allows private UGC messages; as well as things like cost and competition.
Board approval requires 14 votes, meaning that 6 votes are enough to block a recommendation. That threshold means the four Internet company representatives cannot unilaterally block any bad ideas, even if they vote as a bloc. My prior post explains why we shouldn’t assume bloc voting among the companies cohort. Among other things, the larger companies’ representatives will be likely to support expensive “best” practices that they can afford and other industry players can’t; and pro-regulation/anti-Section 230 technology companies with minimal UGC exposure, like IBM or Oracle, could be selected to further stack the deck against the Internet. Even if they do bloc-vote, the Internet companies still need to draw 2 votes from the other 15 members to block any bad proposals. The most likely source of those votes is the experts cohort, though I expect the experts cohort will be pro-regulation as well. Due to this dynamic and the overwhelming presence of members from the law enforcement community, any adopted “best” practices likely will reflect the agenda of law enforcement.
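The blocking arithmetic above can be sketched in a few lines (the member counts come from the bill as described in this post; the bloc-voting assumption is mine):

```python
# Voting arithmetic for the 19-member board described in the bill.
TOTAL_MEMBERS = 19
APPROVAL_THRESHOLD = 14   # votes required to adopt a recommendation
COMPANIES_COHORT = 4      # company representatives, assumed to vote as a bloc

# A recommendation fails only if it falls short of 14 yes-votes,
# i.e. if at least 19 - 14 + 1 = 6 members withhold their votes.
votes_needed_to_block = TOTAL_MEMBERS - APPROVAL_THRESHOLD + 1
extra_votes_needed = votes_needed_to_block - COMPANIES_COHORT

print(votes_needed_to_block)  # 6
print(extra_votes_needed)     # 2 -- votes the companies must draw from the other 15 members
```

In other words, even a unified companies cohort falls 2 votes short of a blocking minority.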
Remember that if no recommendations emerge from the Censorship Board, the partial Section 230 repeal still takes effect in four years. This poison pill will motivate recalcitrant Internet companies to “take a deal” and eventually accept odious provisions to avoid even worse outcomes.
The board is required to update its recommendations at least every 5 years. This is a change from the prior draft, which required updates every 2 years. The longer time period reduces the risk of a never-ending stream of regulatory changes; but 5 years is a long time on the Internet, and “best” practices will fray over time. The fact that neither time period feels right is a sign that the bill’s underlying architecture is troublesome.
The bill specifies that “Nothing in this Act or the amendments made by this Act shall be construed to require a provider of an interactive computer service to search, screen, or scan for instances of online child sexual exploitation.” This language is new since the last draft; it might be there to reduce the odds that Internet services will be deemed government agents for Fourth Amendment purposes. However, it seemingly conflicts with other parts of the bill.
Approval of the Censorship Board’s Recommendations
The board sends its recommendations to the DOJ AG, who can then either approve or deny the recommendations “upon agreement with” the DHS Secretary and FTC Chair. I still can’t tell if the AG has unilateral approval power, if 2 of the 3 agency heads decide the fate, or if all 3 must be in agreement about approval/denial. Among other sources of confusion, Section 4(b)(2) articulates the criteria that “the Attorney General” should consider in approving/denying–but why are these criteria only directed to the AG if the DHS/FTC representatives also must “agree”?
Either way, as my prior post mentioned, it’s a basic design flaw to have the people who will approve/deny any board recommendations also vote as regular members. Board members will be inclined to vote in line with the agency heads because there’s little point in contravening their votes.
Congressional Enactment of Approved Recommendations
The bill could have just formed a commission of experts and stopped there, allowing Congress to evaluate the commission’s recommendations in regular order. It didn’t. Instead, the bill tries to force the commission’s recommendations into becoming Congressionally-approved law through unusual and dubious procedural tricks.
The prior bill draft made any AG-approved recommendations automatically effective unless Congress disapproved, and 1/3 of that draft’s text was spent providing a streamlined process to discourage Congress’ disapproval. That procedure got scrapped. In a slight improvement, this bill now specifies that no board recommendations become effective until approved by Congress. However, due to the poison pill, if Congress never approves any of the recommendations, then Section 230 gets partially repealed in four years.
The bill has 8 pages specifying how Congress can evaluate the approved recommendations. The basic goal is to limit the time and scope of Congress’ review to increase the odds of verbatim passage. One piece that I found especially odd: the bill REQUIRES specified future members of Congress to introduce the approved recommendations. This is not my area of expertise, but I don’t understand how Congress can force future members of Congress to introduce bills that they don’t want to introduce. That seems like a dead-hand control over the autonomy and authority of future Congressmembers.
Certification of “Best” Practices
Within a year after Congress approves “best” practices, Internet companies can earn the statutory safe harbor by certifying that they don’t have any “material non-compliance” with the “best” practices. This timing is an improvement from the prior draft, which had the compliance clock running during Congress’ disapproval period.
The AG will publish a list of companies that have filed such certifications. This is a new provision, and I’m not sure about its purpose. Maybe it makes it easier for plaintiffs to pick whom to sue?
The AG can send civil investigative demands (CIDs) when there is reason to believe that a certification is false, and knowingly submitting false statements in a certification will be a crime.
Partial Repeal of Section 230’s Immunity
The bill would repeal Section 230’s immunity for civil claims and state criminal charges for activity that violates 18 USC 2252 or 2252A, which relate to CSAM distribution and receipt. (There is still an erroneous reference to 2255(a)(2), a provision that doesn’t exist). Note the mismatch of the Censorship Board’s charge and the Section 230 clawback: the repeal only relates to CSAM, but the board’s recommendations will cover activities far beyond CSAM.
In place of the repealed portion of Section 230, the bill creates a new statutory safe harbor for companies that filed a timely compliance statement with the DOJ, meaning that the company has claimed to have satisfied the “best” practices approved by Congress. The bill also creates a second statutory safe harbor for implementing “reasonable measures” to address the board’s full scope, which extends beyond CSAM. As I mentioned in my prior post, the “reasonable measures” standard is unhelpful to Internet companies because they have no idea what those measures are, and it will take many lawsuits and years of common law development to begin to map out its contours. No company would choose to rely on the “reasonable measures” prong.
Internet companies would also retain Section 230(c)(2)(A) defenses, though as I’ve explained before, that provision is worthless in this context.
Expansion of Civil Liability
The bill would amend 18 USC 2255 to reduce the civil liability for Internet services to require only reckless, instead of knowing, violations. I have questions about the Constitutionality of this amendment after the SAVE Act tried a similar change (see Backpage v. Lynch). The reduced scienter might apply to Internet service “willful blindness,” such as reducing screening efforts or encouraging encryption usage by users. (More on encryption below).
The bill would require Internet services to disclose any available identifying and location information for minors depicted in known CSAM items, as well as any location information for alleged perpetrators. There is also more legal protection for CSAM researchers and more freedom for NCMEC to share hashes with other entities.
Some Reasons Why the Bill Is Terrible Policy
It Won’t Curb CSAM or Help Victims. I’m feeling deja vu from the SESTA/FOSTA debacle. Those bills were sold as ways of preventing sex trafficking and giving financial recourse to victims. However, FOSTA hasn’t helped solve any of its targeted problems, and it has created many problems as FOSTA opponents predicted.
This bill similarly won’t reduce CSAM or help victims:
- Section 230 has never restricted federal criminal prosecutions, and that exposure has already motivated Internet companies to treat CSAM seriously. Thus, new state criminal or civil actions won’t necessarily motivate new behavior by Internet companies.
- Because of this, Internet companies already treat CSAM as toxic content and devote more resources to combat it than any other type of illegal content. I’m not sure what CSAM-specific “best” practices could be advanced that most Internet companies aren’t already doing.
- To the extent that CSAM or other online child abuse takes place via private messages that Internet companies ordinarily are legally restricted from reviewing, the Internet companies have little ability to address the problematic content.
- Furthermore, Internet companies already may rely on legal doctrines other than Section 230 to protect them from liability for private messaging. If so, mucking with Section 230 won’t change their behavior.
Meanwhile, there are many existing problems with CSAM enforcement, such as the many referrals to NCMEC that are not being pursued due to budget constraints. If we’re serious about combating CSAM, fixing those problems seems like a key step.
The seeming mismatch of the bill’s goals and effects exposes the bill’s cynical justification. If the Censorship Board isn’t likely to materially move the needle on efforts to combat CSAM, then it’s almost certain that the bill isn’t actually intended to address CSAM problems.
The Censorship Board’s Recommendations Are Likely to Censor the Internet. The board’s recommendations could cover an infinite number of subjects. There is really no limit on the crazy preconditions the Censorship Board could impose as the price of reduced liability.
Most folks fear that the Censorship Board will require Internet services not to provide end-to-end encryption for private messages over their networks (or mandate a law enforcement backdoor, which is functionally the same thing). AG Barr has expressed his desire for this outcome, and the Censorship Board provides an indirect way to achieve what probably isn’t politically palatable to pursue directly. The recklessness scienter standard could further mean that Internet services cannot carry incoming messages that are already encrypted. If the EARN IT Act’s real goal is to eliminate encryption, the bill drafters are using CSAM as a MacGuffin to advance that objective.
In my prior post, I gave examples of other possible “best” practices recommendations that could emerge from the Censorship Board:
- Internet services must obtain, verify, and retain valid contact information for all users to qualify for the safe harbor. This would improve enforcement against violators, but it would cover all UGC, not just CSAM, so it would effectively eliminate unattributed UGC. This would reduce or eliminate whistleblowing and discourage other high-value/high-risk content.
- Internet services must authenticate the age of anyone depicted in pornography to confirm that everyone depicted is of legal age. This would effectively eliminate UGC pornography because Internet services cannot reliably or cost-effectively do this authentication.
- Age-gating is specifically enumerated in the board’s scope. A “best” practice could require Internet services to authenticate the age of all users and offer restricted or no services to minors. This would shrink the Internet for children and probably for adults as well.
I can’t imagine the potential universe of pernicious regulations that could emerge from the Censorship Board; my mind isn’t that creative or evil. Even so, think about an Internet where your private messages cannot be encrypted (or are subject to a backdoor), every UGC site authenticates your age and identity, and anything you post that might be potentially sexually suggestive is banned. How would that Internet look different than today’s? Is that the world you want, especially if it wasn’t clear that any of those measures actually helped combat CSAM or protect victims? If this vision of the Internet troubles you, this bill should worry you a lot.
It Is Possibly Unconstitutional. This bill could trigger constitutional scrutiny in multiple ways, including:
- the mandate that future Congressmembers must introduce bills against their will
- the imposition of liability for reckless distribution of CSAM
- the conditioning of the immunity/safe harbor on potentially unconstitutional “best” practices recommended by the Censorship Board. I think Tam and Brunetti limit the government’s ability to do so.
- efforts to ban encryption
- the possibility that, by conducting searches and reporting as mandated by the government, Internet services become government agents, exposing the evidence they provide to Fourth Amendment challenges
This bill plays on our sympathies about techlash and CSAM to potentially impose more censorship on the Internet. It’s a slick form of political theater, but its depraved core reminds us why Americans universally have a low opinion of Congress.
This bill has the potential to move quickly, so NOW is the time to speak out. I’m angry that one of my Senators, Dianne Feinstein, signed on as a co-sponsor. If you’re a Californian, give her office a call and ask her to reconsider. (202) 224-3841. The EFF also has a way to take action.