Comments on NTIA’s Petition to the FCC Seeking to Destroy Section 230

As requested by the Trump anti-Section 230 Executive Order from May, NTIA submitted a 57-page petition to the FCC asking the FCC to make rules interpreting Section 230. The FCC, in turn, has put the petition out for public comment (Docket RM-11862), with the initial comment submission period ending September 2.

I previously described NTIA’s petition this way:

Normally we expect a government agency like NTIA to provide an intellectually honest assessment of the pros/cons of its actions and not engage in brazen partisan advocacy. Not anymore. This petition reads like an appellate brief that would get a C- in a 1L legal writing course. It demonstrated a poor understanding of the facts, the law, and the policy considerations; and it ignored obvious counterarguments. The petition is not designed to advance the interests of America; it is designed to burn it all down. I hope the FCC will reject it, but even in that case, malefactors will cite NTIA’s petition in support of their deliberate misrepresentations of Section 230.

I’m not doing my more typical comprehensive takedown of the petition because it’s not a serious attempt at making policy. Instead, I decided just to address some of the petition’s worst parts.

* * *

NTIA: “A handful of large social media platforms delivering varied types of content over high-speed internet have replaced the sprawling world of dial-up Internet Service Providers (ISPs) and countless bulletin boards hosting static postings.”

My corrections: This statement implicitly assumes that Google and Facebook have become the Internet. However, compared to the 1990s, today’s Internet has vastly more high-quality online content sources; it’s way easier to find diverse publishers; and the lock-in effects of Google and Facebook are trivial compared to the lock-in effects we faced when using siloed BBSs and walled-garden commercial online services.


NTIA: “with artificial intelligence and automated methods of textual analysis to flag harmful content now available, unlike at the time of Stratton Oakmont, Inc., platforms no longer need to manually review each individual post but can review, at much lower cost, millions of posts. Thus, the fundamental assumptions driving early section 230 interpretation are antiquated and lack force, thus necessitating a recalibration of section 230 protections to accommodate modern platforms and technologies”

My corrections: No one uses “artificial intelligence” today for content moderation. Large services may use machine learning for content moderation purposes, but those systems are expensive to build and maintain, so they are unusual or non-existent at smaller services. Furthermore, machine learning systems often prioritize the queue of items for human review rather than cut humans out of the loop. So the passage’s factual proposition–that machines magically solve content moderation problems–is fictional.

From the beginning, Section 230 had three interrelated objectives: (1) prevent the moderator’s dilemma, (2) reduce barriers to entry, which enhances short- and long-term competition, and (3) preserve the freedom to innovate so that the technology and business practices could keep evolving. It’s popular to say that times have changed since 1996, but the truth is that all three of these rationales remain fully valid today. In particular, Section 230 “recalibrations,” like NTIA’s proposal, would freeze the current industry configuration and thwart new UGC innovations from emerging. I oppose this because I feel like we’re closer to the beginning of the UGC innovation cycle rather than the end.


NTIA: “The Commission itself has previously recognized the importance of enabling ‘the widest possible dissemination of information from diverse and antagonistic sources.'”

My corrections: Diverse and antagonistic sources need not all reside on the same service. Section 230 facilitates the diversity of publishers in the Internet ecosystem; but they might exist on separate services, often catering to niche/specialist audiences, that would be undermined if they are forced to carry “diverse and antagonistic” content not fit for their audiences.


NTIA: “large online platforms appear to engage in selective censorship that is harming our national discourse.”

My corrections: Private publishers don’t engage in censorship. They engage in editorial discretion. That freedom of the press is expressly protected by the First Amendment. In contrast, the government can, and all too often does, engage in “selective censorship”–as Trump has personally embraced repeatedly as President (see also the TikTok and WeChat EOs)–that is patently unconstitutional. If NTIA is truly concerned about “selective censorship that is harming our national discourse,” it has many high-value targets to redress in other executive agencies.


NTIA: “Unfortunately, few academic empirical studies exist of the phenomenon of social media bias.”

My corrections: So….in other words…there is no credible evidence that the big social media services like Facebook are biased against conservatives??? This is technically true, though there is plenty of evidence that conservatives have worked the refs and skewed Facebook IN FAVOR OF conservatives. Normally, I would suggest that if there’s a key gap in the academic literature, taxpayer dollars should be used to fill the gap. But what government entity could produce or sponsor credible research right now? Not NTIA, that’s for sure.


NTIA: “nothing in the law’s history, purpose or text allows for the conclusion that internet platforms should avoid all responsibility for their own editing and content-moderating decisions.”

My corrections: A strawman argument. The “all” claim is obviously false. Section 230 has several statutory exceptions that allow for responsibility for a service’s editing and content-moderating decisions. Furthermore, no one claims that Section 230 insulates *all* editing and content-moderating decisions. For example, if a UGC site edited a user post saying “John isn’t a thief” to read “John is a thief,” Section 230 wouldn’t apply.


NTIA: “By expanding protections beyond defamation, these courts extend to platforms a privilege to ignore laws that every other communications medium and business must follow and that are no more costly or difficult for internet platforms to follow than any other business.”

My corrections: Section 230 is an exceptionalist statute. THAT’S THE POINT. Section 230 facilitates the emergence of socially beneficial content and activity that never existed offline, such as social media, consumer reviews, online marketplaces, how-to and cat videos on YouTube, and a crowdsourced encyclopedia. So the costly/difficulty comparison is a false equivalency that is helpful only if you are trying to destroy the exceptionalist benefits we get.


NTIA: “The Commission should promulgate a regulation to clarify the relationship between the two provisions so that section 230(c)(1) does not render section 230(c)(1) superfluous.”

My corrections: Did you spot the painful typo? The second 230(c)(1) reference was supposed to be 230(c)(2). Oops. (More mockable typos are coming).

The caselaw has already explained the interaction between Section 230(c)(1) and Section 230(c)(2) and why a broad interpretation of Section 230(c)(1) does not make Section 230(c)(2) superfluous. See Barnes v. Yahoo (9th Cir. 2009):

Crucially, the persons who can take advantage of [230(c)(2)(A)] liability are not merely those whom subsection (c)(1) already protects, but any provider of an interactive computer service. See § 230(c)(2). Thus, even those who cannot take advantage of subsection (c)(1), perhaps because they developed, even in part, the content at issue, see Roommates, 521 F.3d at 1162-63, can take advantage of subsection (c)(2) if they act to restrict access to the content because they consider it obscene or otherwise objectionable. Additionally, subsection (c)(2) also protects internet service providers from liability not for publishing or speaking, but rather for actions taken to restrict access to obscene or otherwise objectionable content.

The Ninth Circuit reiterated this conclusion two months ago in Fyk v. Facebook. If NTIA aspired to provide an intellectually rigorous analysis, it would have engaged with this long-standing, and recently reaffirmed, Ninth Circuit interpretation.


NTIA: “the FCC should make clear that section 230(c)(1) applies to liability directly stemming from the information provided by third-party users. Section 230(c)(1) does not immunize a platforms’ own speech, its own editorial decisions or comments, or its decisions to restrict access to content or its bar user from a platform. Second, section 230(c)(2) covers decisions to restrict content or remove users.”

My corrections: This is slippery advocacy because it commingles true and false statements:

  • Claim: 230(c)(1) applies to “liability directly stemming from the information provided by third-party users.” Response: This restates the law. Section 230(c)(1) expressly applies only to “information provided by another information content provider.”
  • Claim: “Section 230(c)(1) does not immunize a platforms’ own speech.” Response: This also restates the law. The “platform’s own speech” isn’t information provided by another information content provider.
  • Claim: “Section 230(c)(1) does not immunize a platforms’…own editorial decisions or comments.” Response: The reference to “own comments” again restates the law. However, the reference to a platform’s “own editorial decisions” breaks new and troubling ground. Publishing/not publishing third-party content is always an editorial decision, so this seemingly seeks to eliminate Section 230(c)(1) entirely.
  • Claim: “Section 230(c)(1) does not immunize a platforms’…decisions to restrict access to content or its bar user from a platform.” Response: Another typo (“its bar user”). This is another attempt to ignore the Barnes precedent prioritizing Section 230(c)(1) over Section 230(c)(2)(A).

Even if the last bullet point became the prevailing interpretation of Section 230(c)(1), NTIA doesn’t acknowledge that the substantive outcomes probably wouldn’t change. In Langdon v. Google, the plaintiff sued search engines for rejecting his ads. The court rejected his claim on both Section 230(c)(2)(A) and First Amendment grounds. Indeed, the First Amendment categorically protects an online publisher’s decisions to restrict access to content or refuse to permit third parties to use its publication tools. Carving back Section 230(c)(1) would not change this First Amendment backstop; it would just increase everyone’s adjudication costs. (Note: the NTIA petition ignored the First Amendment entirely, another indicator that it wasn’t seriously engaging with the law).


NTIA: “If ‘otherwise objectionable’ means any material that any platform ‘considers’ objectionable, then section 230(b)(2) offers de facto immunity to all decisions to censor content.”

My corrections: FFS, the “230(b)(2)” reference is another typo. It should be “230(c)(2).” The petition makes this same erroneous substitution on page 28, so it wasn’t just a slip of the fingers. I have never seen this typo from anyone who actually understands Section 230. It’s so frustrating when our tax dollars are used to fund a B-team’s work on this petition (sorry for the pun).

As for the substance, this is another misuse of the term “censor content.” Also, Matt Schruers explains why “otherwise objectionable” is broad but not unlimited.


NTIA: “Good faith requires transparency about content moderation disputes processes.”

My corrections: This purported requirement is complete fiction. See Holomaxx Technologies v. Microsoft Corp., 783 F. Supp. 2d 1097 (N.D. Cal. 2011):

Nor does Holomaxx cite any legal authority for its claim that Microsoft has a duty to discuss in detail its reasons for blocking Holomaxx’s communications or to provide a remedy for such blocking. Indeed, imposing such a duty would be inconsistent with the intent of Congress to ‘remove disincentives for the development and utilization of blocking and filtering technologies.’


NTIA: “Interactive computer services that editorialize particular user comments by adding special responses or warnings appear to develop and create content in any normal use of the words. Analogously, district courts have concluded that when interactive computer services’ “employees . . . authored comments,” the interactive computer services would become content providers. In addition, prioritization of content under a variety of techniques, particularly when it appears to reflect a particularly [yes, another typo…] viewpoint, might render an entire platform a vehicle for expression and thus an information content provider.”

My corrections: More strawman arguments. Everyone agrees that “special responses or warnings” and “employee authored content” are first-party content not protected by Section 230. When Twitter fact-checks Trump’s lies, Twitter accepts liability for the wording of its fact-check statement and any supporting first-party content.

It’s slippery to commingle that discussion with the “prioritization of content,” which is an editorial judgment about third-party content that is clearly covered by Section 230 and has been from the beginning. The caselaw emphatically refutes the claim that prioritization of third-party content converts the UGC site into “an information content provider” of that content. Instead, such a standard would necessarily eliminate Section 230 for most or all UGC sites.

The verb “editorializing” is also slippery. What exactly does it mean? Does it mean adding first-party content that comments on third-party content? Or the ways in which a site characterizes third-party content? Or the general exercise of editorial discretion? The verb collapses these distinct activities in an attempt to put all of them outside Section 230.


NTIA: “For purposes of 47 U.S.C. § 230(f)(3), “responsible, in whole or in part, for the creation or development of information” includes substantively contributing to, modifying, altering, presenting or prioritizing with a reasonably discernible viewpoint, commenting upon, or editorializing about content provided by another information content provider.”

My corrections: This is the petition’s payload. This position seeks to overturn about 1,000 court opinions interpreting Section 230, to pernicious effect. Plaintiffs can always allege that a UGC site engaged in one or more of these activities. That ensures, at minimum, that a case will survive a motion to dismiss (thus forcing websites to endure expensive discovery and possibly a trial — or simply settle bogus cases because it’s cheaper to do so); but more likely, this legal standard would eliminate Section 230 for every UGC site that currently relies on it. If NTIA had tried to fairly engage the jurisprudence or conduct a reasonable statutory interpretation, it could not possibly have reached this conclusion.


NTIA: “One of the animating concerns for section 230 was court decisions holding online platforms liable as publishers for third-party speech, when in fact they were merely passive bulletin boards.”

My corrections: The reality is 100% opposite of this claim. In Stratton Oakmont v. Prodigy from 1995, the court punished Prodigy because it sought to do more than be a “passive” bulletin board. The legislative history explains how Section 230 sought to fix that ruling:

One of the specific purposes of this section is to overrule Stratton-Oakmont v. Prodigy and any other similar decisions which have treated such providers and users as publishers or speakers of content that is not their own because they have restricted access to objectionable material

Jeff Kosseff further corrected the record on this point.


NTIA: “when a platform moderates outside of section 230(c)(2)(A), section 230(c)(1) does not provide an additional, broader immunity that shields content takedowns more generally. Such affirmative acts are outside of the scope of (c)(1). Second, when a platform reviews third-party content already displayed on the internet and affirmatively vouches for it, editorializes, recommends, or promotes such content on the basis of the content’s substance or message, the platform receives no section 230(c)(1) immunity.”

My corrections: I previously discussed Barnes’ characterization of Section 230(c)(1) as Section 230’s primary operative provision. NTIA seeks to reverse that and make Section 230(c)(2)(A) the primary operative protection for UGC sites, turning Section 230(c)(1) into the gap-filler. This disregards hundreds of cases reaching a contrary conclusion.

The verbs in the third sentence are more slippery advocacy, including:

  • “affirmatively vouches for it”: what does this mean? The “vouching” verb does not exist in the Section 230 caselaw. It also makes no sense. The act of publishing content, and providing context around it, necessarily “vouches” for it.
  • “editorializes”: I already explained how this verb incoherently collapses discrete functions into one.
  • “recommends” and “promotes”: these verbs are also ambiguous. All content publication necessarily recommends and promotes the content; and of course UGC sites provide navigational aids to help consumers find relevant content.


NTIA: “information about an interactive computer service provider’s content moderation policies would help entities design filtering products that could improve the providers’ implementation of those policies, or assist consumers in remedying the gaps they may see in the providers’ policies”

My corrections: My PACT Act post explains some problems created by government-compelled transparency about UGC site operations.

The benefits specified here are incoherent. What does it mean to “help entities design filtering products that could improve the providers’ implementation of those policies”? UGC services can and do transact with third-party vendors to assist with content moderation, but that will be handled by contract, not mandated transparency. Maybe NTIA thinks third-party software would do client-side filtering of UGC site content? No widely used program does that today.

And what does it mean to “assist consumers in remedying the gaps they may see in the providers’ policies”? How can consumers “remedy” the gaps? Through litigation? Consumer advocacy? Technologically? I have no idea what this language contemplates.


NTIA: “Consumers today have a one-way relationship with social media transparency; platforms know everything about consumers, but consumers know very little about how or why platforms exercise influence or direct control over consumers’ speech.”

My corrections: This sounds like a gotcha, but actually, it describes pretty much every content publisher. Readers/consumers have little insight into how any content publisher makes its editorial decisions, while publishers know a lot about their customers through marketing research and the sale of reader demographics to advertisers.

* * *

The petition starts with this quote:

As Ben Franklin stated, “[w]hoever would overthrow the Liberty of a Nation, must begin by subduing the Freeness of Speech.”

Like most Trump administration rhetoric, this is pure projection. The U.S. government, through efforts like the NTIA petition (and the anti-Section 230 EO; and the anti-TikTok EO; and….), is working hard to subdue the freeness of our speech; and the current administration would be thrilled to overthrow the liberty of our nation. The fact that government employees at NTIA–whose salaries I am forced to pay with my tax dollars–are eagerly advancing censorial authoritarianism highlights how much work we have to do to “make America great again.”

* * *

Personnel/procedure note: a majority of the FCC commissioners must vote to authorize a notice of proposed rule-making (NPRM), the next procedural step to act on the NTIA petition. President Trump had renominated Commissioner O’Rielly for a second term; however, after O’Rielly publicly criticized the Trump anti-Section 230 EO, Trump pulled his renomination. If O’Rielly steps down from the commission before his term expires at the beginning of the year, or recuses himself (which he will be effectively required to do while searching for another job), the FCC will have only four voting commissioners. Two of those commissioners, Starks and Rosenworcel, have clearly expressed skepticism about the NTIA petition, so it seems impossible to get 3 affirmative votes to approve the NPRM. There’s no chance that a new FCC commissioner replacing O’Rielly will be approved before Trump’s current term ends. By torpedoing O’Rielly, Trump seemingly ensured that the NTIA petition could not progress before the end of his term.

Superficially, this appears to be an own-goal: Trump’s retribution against O’Rielly prevents Trump from advancing his anti-Section 230 EO. But that assumes Trump actually cares if the FCC proceeds with the rule-making. Why assume that? The EO was always about campaigning to his base, and sacking O’Rielly was a simple act of punishing disloyalty. This is how far the federal government has sunk: Trump makes policy moves to manifest his vanity, not to build our country.