Comments on the “Protecting Constitutional Rights from Online Platform Censorship Act”

A tsunami of new Section 230 reform bills is coming soon. The early previews suggest they will be just as terrible as the bills from the 116th Congress.

This bill comes from Rep. DesJarlais (R-TN), who voted against certifying the presidential election results. As usual for Section 230 reform bills, the bill’s name, “Protecting Constitutional Rights from Online Platform Censorship Act” (HR 83), means the exact opposite of what it does. Governments censor, not “platforms”; and at this point, the “platforms” are the entities most needing protection of their “constitutional rights” from Congressional censorship. A more accurate bill title might be something like “Stripping Constitutional Rights from Online Platforms Because This Is a Censorship Act.”

What the Bill Says

First, the bill says that it’s unlawful for “internet platforms” to take any “action to restrict access to or the availability of protected material of a user of such platform.” The bill defines “protected material” as “material that is protected under the Constitution or otherwise protected under Federal, State, or local law.” Essentially, this provision creates a new legal concept of unlawful content moderation.

Second, the bill creates a new cause of action for unlawful content moderation, and affected users can get statutory damages between $10,000 and $50,000.

Third, the bill eliminates the existing Section 230(c)(2) text entirely.

Why It’s Terrible

Five problems with the bill:

The Bill’s Drafting is Terrible. Some examples:

  • The term “platform” is undefined. Which “providers or users of interactive computer services” (230’s defined term) does this bill apply to?
  • The bill doesn’t make the private right of action the exclusive remedy for unlawful content moderation. Other unspecified remedies may be available. Does that include criminal prosecution?
  • How can material be unprotected by the “Constitution” yet still protected by other laws? If a state law “protecting” material has been ruled unconstitutional, is it still unlawful to remove the material?
  • What material is “Constitutionally protected”? Some content categories (e.g., obscenity, incitement, true threats) are typically unprotected by the First Amendment, but virtually all speech regulations receive some level of constitutional scrutiny. Also, the bill didn’t restrict itself to the First Amendment, so could other parts of the Constitution “protect” content? For example, the IP clause authorizes Congress to provide patent and copyright protection. Is that Constitutional protection for the bill’s purposes?
  • The bill doesn’t distinguish between amateur and professionally produced content. Could the new unlawful content moderation cause of action apply to professionally produced content licensed by “platforms” (including B2B licenses)?

The Bill is Unconstitutional. Miami Herald v. Tornillo held that publishers can’t be compelled to publish content they don’t want to publish. This bill makes it unlawful to reject or remove that same content, a direct conflict with publishers’ First Amendment-protected editorial discretion. The bill would unquestionably fail in court. See Manhattan Community Access Corp. v. Halleck.

The Bill Protects Garbage Content. The First Amendment protects hate speech, uncivil remarks, trolling, and many other types of anti-social material. Do we really want to ban services from cleaning up those materials? “Platforms” would become even junkier than Parler.

The Bill Demands Error-Free Content Moderation (which is impossible). Material that isn’t First Amendment-protected is almost certainly criminalized. Per Section 230(e)(1), Section 230 doesn’t apply to federal criminal prosecutions, so services already feel compelled to remove content that isn’t First Amendment-protected by the time they “know” about it (and often earlier). However, if the material is First Amendment-protected, the bill says it’s unlawful to remove it, with each mistake costing $10,000+. To navigate these overlapping risks of criminal liability and statutory damages, the bill essentially requires “platforms” to make error-free judgments about every item of content in their databases. (To illustrate the stakes with hypothetical numbers: a service reviewing a million items a day with 99.9% accuracy still makes 1,000 mistakes daily, or at least $10 million a day in statutory damages exposure.) Error-free content moderation isn’t possible, especially at scale, so “platforms” would be in a no-win position. In turn, platforms wouldn’t choose to leave more content up (as the bill seemingly contemplates); they would choose to stop being “platforms.” This bill would be a hard shove towards shutting down the Internet.

Eliminating Section 230(c)(2)(B) Would Damage Cybersecurity. Putting aside the debates about Section 230(c)(2)(A)’s efficacy, I have yet to see anyone advocate for repealing Section 230(c)(2)(B), for good reason. A number of industries depend heavily on Section 230(c)(2)(B), including anti-spam, anti-spyware, and anti-virus software vendors. I’ve covered the risks of narrowing Section 230(c)(2)(B) extensively in the context of the Enigma v. Malwarebytes lawsuit. Basically, diminishing Section 230(c)(2)(B) reduces anti-threat software vendors’ ability to vigorously protect their users, which jeopardizes cybersecurity.

Conclusion

It’s tempting to dismiss the bill as a stunt to rile up DesJarlais’ voter base or bolster his #MAGA brand. But given the relentless (and usually misdirected) public denigration of Section 230 recently, I have no more patience for awful Section 230 reform proposals, even unserious ones. Congressional staffers, paid by our tax dollars, spent time working on this 💩 instead of the thousand other things that need immediate Congressional attention. Congressmembers who misallocate those resources are part of the structural problems facing our country, not part of the solution.

UPDATE: Jess Miers’ coverage of the bill.