The 5th Circuit Puts the 1st Amendment in a Blender & Whips Up a Terrible #MAGA Kool-Aid–NetChoice v. Paxton

[This is a 6k+ word blog post that was joyless to write and most likely will be joyless to read.]

If you want a distillation of this decision, consider this line: “Far from justifying pre-enforcement facial invalidation, the Platforms’ obsession with terrorists and Nazis proves the opposite.” Yes, a federal judge literally thinks the Internet services are “obsessed” with “censoring” Nazis and terrorists. If that’s your kind of Kool-Aid, then you will like this opinion. Otherwise, bring tissues as you read how an appellate court remixes the rule of law to rationalize government censorship.

Background

In 2021, Florida and Texas passed laws regarding “social media platform censorship.” The bills stylized Internet services as the titular “censors” that needed curbing. In fact, “censorship” occurs when the government tells publishers what they can and can’t publish. Thus, the bills’ titles were unintentionally accurate–the bills themselves are the censorship–and in that sense the bills self-admitted their unconstitutionality.

NetChoice and CCIA filed lawsuits against portions of both laws, and federal district courts in both Florida and Texas granted injunctions. The legal challenges then diverged. The Florida law went to the 11th Circuit, which upheld most of the injunction but excluded some editorial transparency components from the injunction based on a misreading of Zauderer (I will have more to say about Zauderer soon). Florida and the plaintiffs will cross-appeal the 11th Circuit ruling to the Supreme Court in the very near future.

The Texas law went to the Fifth Circuit, a well-known graveyard for the rule of law, and in May 2022 the panel stayed the injunction without issuing an opinion explaining why. That was rogue behavior by the panel. Dissolving the injunction changed the status quo, but the panel didn’t explain why, leaving the plaintiffs with no clear basis for an appeal. When courts change the status quo, due process requires that they explain their reasons so that the explanation can be contested in further proceedings. (And, ironically, the upheld Texas law requires Internet publishers to provide explanations for their editorial decisions, but the Fifth Circuit viewed an explanation for its own decision supporting that law as optional.)

NetChoice and CCIA appealed the Fifth Circuit’s unexplained ruling to the Supreme Court via the “shadow docket,” and in June, the Supreme Court restored the injunction pending the issuance of the Fifth Circuit opinion. Justice Alito wrote a dissent joined by two other justices, saying that the law was swell, and Justice Kagan voted with Alito but provided no explanation for her vote.

On Friday, the Fifth Circuit issued its decision that explains why it lifted the injunction. The decision consists of three opinions from a three-judge panel: Judge Oldham’s majority opinion, Judge Jones’ concurrence (mostly), and Judge Southwick’s concurrence on the transparency piece and dissent on the rest. The issuance of this decision now tees up the more typical appellate options, which could include an appeal to the Fifth Circuit en banc, an appeal to the Supreme Court’s shadow docket for an emergency restoration of the injunction, or an appeal to the Supreme Court through the normal certiorari petition process. Judge Oldham’s lead opinion goes out of its way to embrace a circuit conflict (“The Platforms urge us to follow the Eleventh Circuit’s NetChoice opinion. We will not.”), but I don’t think the Supreme Court needs any additional motivation to grant cert. I expect the Florida, and possibly the Texas, laws will produce a Supreme Court opinion this term. The fate of UGC hangs in the balance.

The Texas law has four main provisions. Here’s where they stand after the Fifth Circuit’s ruling:

  • mandatory editorial transparency requirements. The panel voted 3-0 to lift the injunction based on a misapplication of Zauderer. This is the same outcome as the 11th Circuit.
  • digital due process requirements, including an appellate process for aggrieved users. The panel improperly treated this as a subset of editorial transparency and also unanimously lifted the injunction.
  • restrictions on viewpoint-biased content moderation. The panel voted 2-1 to lift the injunction for multiple reasons. However, only one vote (Oldham) endorsed the common carriage justification. (It was too crazy for Judge Jones, and that’s saying a lot). This vote diverged from the 11th Circuit opinion.
  • a ban on email service providers deploying anti-spam filters unless they give appellate rights to all filtered senders. No one has yet challenged this provision, so it was never enjoined and remains available for AG Paxton to enforce. I am not aware of any such enforcements, and holy hell will rain down on anyone who causes email services to turn off their spam filters.

This post will focus on Judge Oldham’s terrible majority opinion.

Structural Problems With the Opinion

This section highlights five structural problems with the opinion:

  • The Indeterminate Universe of “Social Media Platforms”
  • Treating Private Action Like State Action
  • Denigrating the Constitutional Importance of Curation
  • Congress Can’t Circumscribe Constitutional Rights
  • The Judge Mangles Zauderer and NIFLA

(With a bonus note about the problems of Trump’s Federalist Society judges).

The Indeterminate Universe of “Social Media Platforms”

It’s a simple question: which entities need to comply with the law? The judge says “The law regulates platforms with more than 50 million monthly active users (“Platforms”), such as Facebook, Twitter, and YouTube.” The “such as” reference implies that the law reaches other services beyond those three. In contrast, Texas falsely claimed that ONLY these three services are covered. It was a brazen lie.

According to this Wikipedia entry, the following US-based social media services (beyond the big three) have over 100M MAUs: TikTok, Snapchat, Pinterest, Reddit, Quora, Skype, LinkedIn, Microsoft Teams, imo/PageBites (who?), Picsart (who?), Discord, Twitch, Stack Exchange (who?), Zoom, and iMessage/FaceTime. Some other services with over 50M MAUs (thanks to Daphne Keller’s sleuthing): Wikipedia, Glassdoor, Vimeo, and Steam. The definition of “social media platform” reaches all UGC services, not just “social media” as traditionally conceived. So I’m sure other US-based UGC services clear the 50M MAU threshold (Minecraft and Roblox are two likely examples). Also, I’m not including the many foreign-based services over 50M MAU, but some of them may be subject to the Texas law as well.

Despite this uncertainty, the judge flatly asserted: “The plaintiff trade associations represent all the Platforms covered by HB 20.” How does the judge know that? It’s almost certainly false. Based on his empirically unsupported assumption, the judge claims: “the entities subject to HB 20 are large, well-heeled corporations that have hired an armada of attorneys from some of the best law firms in the world to protect their censorship rights.” Among the wide-ranging universe of UGC services with 50M+ MAUs, this statement is decidedly not true.

The judge then uses the trade association status against the plaintiffs, saying: “To the extent that these representations vary between Platforms, that further cuts against the propriety of this facial, pre-enforcement challenge…So the Platforms may not now rely on individualized facts to claim that, for example, one Platform operates like a newspaper even if the others don’t.” In other words, the judge doesn’t care about the services’ heterogeneity because it would mess up his narrative. This emboldens the judge to misleadingly treat one service’s past public statement as if it spoke for all of its competitors too.

Treating Private Action Like State Action

As I mentioned at the beginning, the Texas and Florida bills define “censorship” to mean an Internet service’s application of its editorial discretion to user content. This rhetorical move misleads the public into equating private editorial decisions with government censorship (which we should all oppose). It’s all deceptive word games, though. It’s Conlaw 101 that the First Amendment protections for free speech and free press only restrict the actions of state actors, not private actors. So when the government tells private actors what they can or can’t publish, that’s “censorship.” When private actors decide what they choose to publish, that’s the publisher’s “freedom of speech” and “freedom of the press.” By attacking private “censorship,” the government is actually attacking the private entity’s constitutionally protected rights.

Most judges understand this distinction intuitively because they learned as 1Ls that the Constitution only restricts state action, not private action. And yet…this judge embraces the deceptive sophistry and repeatedly equates private entity publication decisions with government speech restrictions. This leads to a turn of phrase the judge thought was worth repeating three times:

  • “if Section 7 chills anything, it chills censorship.”
  • “Section 7 does not chill speech; instead, it chills censorship”
  • “Section 7 chills no speech whatsoever. To the extent it chills anything, it chills censorship. That is, Section 7 might make censors think twice before removing speech from the Platforms in a viewpoint-discriminatory manner”

It’s such a clever phrase that maybe the judge should get a trademark for it. 😑

By tossing a foundational Conlaw 101 principle into the trash, the judge can then repeatedly rail against private entity “censorship”:

  • “we reject the idea that corporations have a freewheeling First Amendment right to censor what people say”
  • “the Platforms have censored what Texas contends is pure political speech”
  • “the Platforms have pointed to no case applying the overbreadth doctrine to protect censorship rather than speech”
  • “Section 7 does not operate as a prior restraint on the Platforms’ speech—even if one accepts their characterization of censorship as speech….The Platforms operate “the modern public square,” Packingham, 137 S. Ct. at 1737, and it is they—not the government—who seek to defend viewpoint-based censorship in this litigation.”
  • “the Platforms cannot invoke “editorial discretion” as if uttering some sort of First Amendment talisman to protect their censorship”
  • “We reject the Platforms’ attempt to extract a freewheeling censorship right from the Constitution’s free speech guarantee. The Platforms are not newspapers. Their censorship is not speech.”

Judge Jones also embraces this nonsense: “What the statute does, as Judge Oldham carefully explains, is ensure that a multiplicity of voices will contend for audience attention on these platforms. That is a pro-speech, not anti-free speech result.”

Seriously? It is never “pro-speech” for the government to dictate the editorial decisions of publishers. This is one of those day-is-night sophistry moves that is a top reason why laypeople hate lawyers. By coopting the term “censorship,” the opinion seeks to normalize government censorship.

Denigrating the Constitutional Importance of Curation

The judge believes curatorial decisions deserve no Constitutional protection. Thus, in the judge’s alt-version of our republic, the only parties with Constitutionally protected speech rights are individual speakers. Internet services encroach on those speakers’ rights by making Constitutionally unprotected curatorial decisions. That leads to statements like:

  • “We reject the Platforms’ efforts to reframe their censorship as speech. It is undisputed that the Platforms want to eliminate speech—not promote or protect it. And no amount of doctrinal gymnastics can turn the First Amendment’s protections for free speech into protections for free censoring.” [I believe this is supposed to be another trademarkable turn of phrase, but it’s only clever if you’ve drunk enough Kool-Aid.]
  • “The Platforms are nothing like the newspaper in Miami Herald. Unlike newspapers, the Platforms exercise virtually no editorial control or judgment. The Platforms use algorithms to screen out certain obscene and spam-related content. And then virtually everything else is just posted to the Platform with zero editorial control or judgment….). Thus the Platforms, unlike newspapers, are primarily “conduit[s] for news, comment, and advertising.”” [Conduits. Really?]
  • In a footnote: “The Platforms never suggest their algorithms somehow exercise substantive, discretionary review akin to newspaper editors.” [Is there only one Constitutionally canonical way to exercise editorial discretion?]
  • “the Platforms permit any user who agrees to their boilerplate terms of service to communicate on any topic, at any time, and for any reason. And as noted above, virtually none of this content is meaningfully reviewed or edited in any way.” [The word “meaningfully” is doing a lot of work in that sentence.]
  • “there’s no First Amendment right for censors to select their targets.” [If the judge is referring to “publishers” as “censors” and “editorial curation” as “select their targets,” then that’s exactly what the First Amendment permits.]
  • “the applicable inquiry is whether Section 7 forces the Platforms to speak or interferes with their speech. Section 7 does neither of those things. It therefore passes constitutional muster.”
  • “Herbert involved discovery into how an editor selected, composed, and edited a particular story. See 441 U.S. at 156–57. But the Platforms, of course, neither select, compose, nor edit (except in rare instances after dissemination) the speech they host. So even if there was a different rule for disclosure requirements implicating a newspaper-like editorial process, that rule would not apply here because the Platforms have no such process.”
  • “A person’s social media feed is “curated” in the same sense that his mail is curated because the postal service has used automated screening to filter out hazardous materials and overweight packages, and then organized and affixed a logo to the mail before delivery. And it has never been true that content-agnostic processing, organizing, and arranging of expression generate some First Amendment license to censor.” [The judge is obviously wrong when he caricatures how social media services “curate.” His framing focuses only on the power to exclude, not the power to uprank or downrank (see the sketch after this list). That postal service analogy is flat-out terrible. The postal service doesn’t sort the mail I get so that one envelope is in front of the other. The judge simply ignores the full range of curatorial decisions.]
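
To make the point concrete, here is a minimal, hypothetical sketch of a feed-curation pipeline (the field names, scores, and thresholds are invented for illustration and are not any platform’s actual system). Even this toy version shows that curation involves at least two distinct editorial judgments: which content to exclude and, just as importantly, which content readers see first–the part the postal analogy pretends doesn’t exist.

```python
# Toy sketch of feed curation (hypothetical fields and thresholds, not any real platform's system).
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    spam_score: float        # hypothetical classifier output, 0.0-1.0
    relevance: float         # hypothetical personalization score, 0.0-1.0
    policy_violation: bool   # hypothetical output of a moderation review

def curate_feed(posts: list[Post]) -> list[Post]:
    # Step 1: exclusion -- the only step the postal-service analogy acknowledges.
    kept = [p for p in posts if not p.policy_violation and p.spam_score < 0.8]
    # Step 2: ordering -- deciding what readers see first is itself an editorial
    # judgment; downranking borderline material is still "curation" even though
    # nothing is removed.
    return sorted(kept, key=lambda p: p.relevance - 0.5 * p.spam_score, reverse=True)
```

Both steps embody exactly the kind of judgment the opinion insists doesn’t exist.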

Judge Jones even gets into this action: “it is ludicrous to assert, as NetChoice does, that in forbidding the covered platforms from exercising viewpoint-based “censorship,” the platforms’ “own speech” is curtailed…It is hard to construe as “speech” what the speaker never says, or when it acts so vaguely as to be incomprehensible….The Texas statute regulates none of their verbal “speech.”” [Verbal speech??? Seriously?]

One Judge Oldham quote in particular deserves more careful review:

If the Platforms wanted the same protections, they could’ve used the same ex ante curation process. Early online forums and message boards often preapproved all submissions before transmission….the Platforms made a judgment that jettisoning editorial discretion to allow instantaneous transmission would make their Platforms more popular, scalable, and commercially successful. The Platforms thus disclaimed ex ante curation—precisely because they wanted users to speak without editorial interference. That decision has consequences. And it reinforces that the users are speaking, not the Platforms.

Say what now? The judge apparently is saying that if the Internet services had done prescreening all along, their curation would have qualified for constitutional protection and perhaps Texas could not have enacted its content moderation obligations. However, the Texas law applies to both pre- and post-publication content moderation, and the judge is saying that it’s OK for Texas to impose the restriction now. So the judge seems to be saying that prescreening would have gotten more constitutional protection if it had been adopted pre-legislation than it would now get if adopted post-legislation. I’m pretty sure the Constitutionality of a statute does not depend on whether the publisher adopted the practices in question before or after passage. Also, note that the judge is basically instructing Internet services how they should have run their editorial operations (i.e., prescreening > post-publication curation). That’s about as blatant as government censorship can get.

In dissent, Judge Southwick has a serviceable rebuttal to this anti-curatorial nonsense:

The First Amendment, though, is what protects the curating, moderating, or whatever else we call the Platforms’ interaction with what others are trying to say….The majority’s perceived censorship is my perceived editing….I see the Platforms’ curating or moderating as the current equivalent of a newspaper’s exercise of editorial discretion…the majority’s research and reasoning supports the Platforms’ contention that First Amendment protections attend the publishing process as well as the actual published content.

Congress Can’t Circumscribe Constitutional Rights

The judge says that Congress’ adoption of Section 230 should influence how we interpret the Constitution. Wait…what? Nations adopt Constitutions to restrict or define the authority of legislatures, and they typically make the Constitution harder to amend than ordinary legislation to ensure that legislatures don’t change those restrictions whenever it suits them. This nature of Constitutions, as distinct from legislation, is stuff 5th graders learn here in California (I’m not sure what they teach in Texas).

As a result, a legislature can’t tell courts how to interpret the Constitution. That would be like legislatively amending the Constitution, which is contrary to the whole point of adopting a Constitution in the first place. These are very fundamental principles underlying the American system of checks and balances among the three branches of government.

Nevertheless, the judge says: “if § 230(c)(1) is constitutional, how can a court recognize the Platforms as First-Amendment-protected speakers or publishers of the content they host?…§ 230 reflects Congress’s factual determination that the Platforms are not “publishers.”” If Congress says that Internet services aren’t publishers, that designation can control wherever Congress has the power to legislate. It does not mean that Congress has defined the term for Constitutional purposes. Congress literally lacks that power.

The judge keeps trying: “§ 230 is nothing more (or less) than a statutory patch to a gap in the First Amendment’s free speech guarantee. Given that context, it’s strange to pretend that § 230’s declaration that Platforms “shall [not] be treated as . . . publisher[s]” has no relevance in the First Amendment context….” No, what’s strange is thinking the legislature can define the Constitution. Can you imagine how this judge would respond if Congress tried to provide its interpretations of the phrase “well regulated Militia” or “arms” in the Second Amendment? Plus, this is a misunderstanding of Section 230. Section 230 does more than fill a gap–it’s also a procedural fast lane to reach the result the First Amendment would otherwise compel.

The Judge Mangles Zauderer and NIFLA

Zauderer is a 1980s commercial speech case that subjects a very specific type of compelled commercial disclosure to relaxed constitutional scrutiny. Pro-regulatory forces have twisted Zauderer into a magic wand authorizing regulators to compel businesses to disclose whatever information they want. This overreads Zauderer and its thin Supreme Court progeny (in 37 years, only one other Supreme Court case has actually applied the Zauderer test). I have a much-needed article clarifying the application of Zauderer to editorial transparency laws that I hope to make public in a couple of weeks. For now, for more on the constitutional problems with editorial transparency, see my Hastings Law Journal article.

As I explain in my forthcoming article, there are five preconditions before a compelled commercial disclosure law qualifies for Zauderer’s relaxed scrutiny. What the judge said about each:

  • (1) the regulation applies to ad copy. The judge ignored this prerequisite, even though it was reiterated in NIFLA.
  • (2) the regulation seeks to prevent ad copy from containing material omissions that would deceive consumers. The judge waters this requirement down, saying the “disclosure requirements must be reasonably related to a legitimate state interest, like preventing deception of consumers. See ibid. Texas argues—and the Platforms do not dispute—that Section 2 advances the State’s interest in “enabl[ing] users to make an informed choice” regarding whether to use the Platforms.” But the judge ignores the deception requirement; he simply points to the statute’s pretextual self-serving claim that it enables informed consumer choices, which isn’t the same as preventing deception.
  • (3) the required disclosure is uncontroversial. The judge acknowledges this prerequisite but doesn’t explain why the law met this standard.
  • (4) the required disclosure is purely factual information. The judge acknowledges this prerequisite but doesn’t explain why the law met this standard.
  • (5) the regulation addresses disclosures of the terms on which the regulated entity offers its goods or services. The judge didn’t address this requirement at all.

To recap, the judge didn’t actually show that the Texas law satisfied ANY of the Zauderer preconditions for relaxed Constitutional scrutiny, let alone all of them. 🤷‍♂️

[I’ve reluctantly decided to delete my discussion about NIFLA because I do a more precise job analyzing that case in this paper.]

Finally, the judge says that Zauderer applies to the mandatory appeals process. But appeals are not a disclosure obligation, so Zauderer shouldn’t apply to them at all. Instead, they should be subject to strict scrutiny because they dictate to publishers how to run their editorial operations–something that I believe has never been tried with traditional media. Can you imagine if a book publisher had to give appellate rights to authors who submit unsolicited manuscripts? Or if law reviews had to give appellate rights to law professors whose articles they reject? 🤣

A Note About Trump’s Federalist Society Judges

I read thousands of opinions a year, and I often can spot on sight an opinion by a Trump appointee from the Federalist Society (which includes Judge Oldham) without looking at the judge’s name. When those opinions stand out, it’s usually for two reasons. First, the opinions contain a breezy and casual arrogance that condescendingly denigrates all opponents as idiots or enemies of the state. As just one example from this opinion, the judge castigates the plaintiffs’ “wooden metaphysical literalism”–even though that’s a pretty apt description for most of the judge’s analysis (as the saying goes, every accusation is an admission). Second, the opinions weaponize the rule of law selectively. On some points, they demand exacting fidelity to the rules; on other points, they simply wing it or create a pastiche of citations to second-tier authority while disregarding the directly on-point rules or precedent. But…selective adherence to the rule of law is actually the opposite of the rule of law. We will be dealing with opinions like this for decades…

Additional Critiques of the Opinion

You might choose to stop reading the blog post here. You’ve gotten about 80% of the total value of the blog post already. This section comments on additional specific points in Judge Oldham’s terrible opinion.

* * *

* An overall note: none of the opinions cited the Reno v. ACLU Supreme Court decision, even though it expressly said that regulators can’t treat the Internet like broadcasting or telephony for First Amendment purposes. As the judge kept rotating through broadcasting and telephony analogies, he ignored the Supreme Court precedent that told him not to do that.

* “Section 7 provides a narrow remedial scheme….any fear of chilling is made even less credible by HB 20’s remedial scheme. Not only are criminal sanctions unavailable; damages are unavailable. It’s hard to see how the Platforms—which have already shown a willingness to stand on their rights—will be so chilled by the prospect of declaratory and injunctive relief that a facial remedy is justified.”

The court says the remedies are “narrow” because damages aren’t available for the censorship provision, only injunctions, attorneys’ fees, and costs. But, injunctions that restrict or compel speech are not “narrow”–they strike at the very heart of the First Amendment’s protections.

* “the judicial power vested in us by Article III does not include the power to veto statutes.”

This is a strawman. The plaintiffs asked the court to determine that the law violates the Constitution. As every 1L learns, usually in the first week of Conlaw 1, judges’ power to adjudicate Constitutional limits on legislative authority was confirmed in 1803 in Marbury v. Madison (but maybe #MAGA is coming for Marbury too?).

* “Invalidate-the-law-now, discover-how-it-works-later judging is particularly troublesome when reviewing state laws….The respect owed to a sovereign State thus demands that we look particularly askance at a litigant who wants unelected federal judges to countermand the State’s democratically accountable policymakers.”

Is the judge saying federalism gives the states a get-out-of-the-Constitution-free card? To be clear, the “democratically accountable policymakers” do not deserve deference when they disrespect the Constitution–which they do often, which, as every 1L learns, is why we have Constitutional limits on the legislative and executive branches.

* “HB 20’s prohibitions on censorship will cultivate rather than stifle the marketplace of ideas that justifies the overbreadth doctrine in the first place.”

This is demonstrably false. It has been proven over and over again that UGC services that do not curb trolls, spammers, and griefers go into a downward spiral that drives productive conversations away.

* “the Platforms principally argue against HB 20 by speculating about the most extreme hypothetical applications of the law,” calling those “whataboutisms” (a phrase used twice in the opinion).

It’s the district court’s job to do fact-finding. This appellate judge surely has no basis, other than his own intuition, to claim that the plaintiffs’ arguments are “hypothetical.” Indeed, the services would happily invite the judge to do content moderation for just 15 minutes and see if the judge still thinks the services’ concerns are “extreme hypotheticals.”

* “Far from justifying pre-enforcement facial invalidation, the Platforms’ obsession with terrorists and Nazis proves the opposite.”

THE JUDGE SAID WHAT??? Would the judge prefer the Internet services do less to curb terrorism and Nazis? The judge doubles down, saying “The Supreme Court has instructed that “[i]n determining whether a law is facially invalid,” we should avoid “speculat[ing] about ‘hypothetical’ or ‘imaginary’ cases.” Wash. State Grange, 552 U.S. at 449–50. Overbreadth doctrine has a “tendency . . . to summon forth an endless stream of fanciful hypotheticals,” and this case is no exception.” Where is the factual support to indicate that the concerns about unrestricted content from terrorists or Nazis are “fanciful hypotheticals”? This is trivially easy to disprove.

* “the Platforms have virtually unlimited space for speech, so Section 7’s hosting requirement does nothing to prohibit the Platforms from saying what they want to say.”

The “unlimited space” claim is misleading. The law isn’t just about storage space on servers. The law restricts efforts to de-amplify user content, and that’s a zero-sum game for a scarce resource, i.e., only 1 item of content can be ranked first.

* “no category of Platform speech can trigger any additional duty—or obviate an existing duty—under Section 7. And Section 7 does not create a special privilege for those who disagree with the Platforms’ views….Rather, it gives the exact same protection to all Platform users regardless of their viewpoint.”

Given the expansive definition of “censor” to include content ranking, the service literally cannot give equal treatment to all “viewpoints” because only 1 item can have the top ranking.
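
A toy example (the numbers and field names are invented purely for illustration) shows why ranked presentation is inherently zero-sum: whatever scoring rule a service uses, exactly one item lands in the top slot, so “equal treatment” of every viewpoint in a ranked feed is a mathematical impossibility, not a policy choice.

```python
# Hypothetical scores -- purely illustrative, not any platform's actual data or formula.
posts = [
    {"viewpoint": "A", "engagement": 0.91},
    {"viewpoint": "B", "engagement": 0.87},
    {"viewpoint": "C", "engagement": 0.64},
]

# Any ranking rule produces exactly one #1 result; the other viewpoints are,
# by definition, "downranked" relative to it.
ranked = sorted(posts, key=lambda p: p["engagement"], reverse=True)
print([p["viewpoint"] for p in ranked])  # ['A', 'B', 'C'] -- only one viewpoint gets the top slot
```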

* “the Supreme Court’s cases do not carve out “editorial discretion” as a special category of First-Amendment-protected expression.”

Miami Herald v. Tornillo referred to “editorial control and judgment,” but the judge split hairs to distinguish that from “editorial discretion.”

* “the shopping mall in PruneYard and law schools in Rumsfeld could have changed the outcomes of those cases by simply asserting a desire to exercise “editorial discretion” over the speech in their forums.”

Ugh. PruneYard and Rumsfeld are about activities on real property. The property owners never “published” anything. They are not comparable to, you know, PUBLISHERS.

* “the Platforms can’t just shout “editorial discretion!” and declare victory.”

Actually, if this judge respected Constitutional law, they could.

* “an entity that exercises “editorial discretion” accepts reputational and legal responsibility for the content it edits.”

It’s true that publishers accepted common law liability for their editorial decisions. But legislatures can vary those common law principles. Also, an Internet service faces significant “reputational consequences” for the “content it edits”–mishandling lawful-but-awful content can be a make-or-break decision for the service.

* “the Platforms strenuously disclaim any reputational or legal responsibility for the content they host.”

Services can’t disclaim reputational responsibility. Consumers decide how the services’ choices affect their reputation.

* “editorial discretion involves “selection and presentation” of content before that content is hosted, published, or disseminated.”

Why? Where does the Constitution say this? Or is the judge saying that new technologies can never adopt different editorial practices and get First Amendment protection? A reminder that Reno v. ACLU is relevant here.

The judge adds: “The Platforms offer no Supreme Court case even remotely suggesting that ex post censorship constitutes editorial discretion akin to ex ante selection.” OK, show me the Supreme Court precedent saying that ex post “censorship” doesn’t constitute “editorial discretion.” That doesn’t exist either. The question has never been asked of the Supreme Court. The fact that we’re dealing with terra nova legal issues doesn’t make the absence of precisely-on-point Supreme Court precedent the gotcha this judge thinks it is. As every 1L learns, the whole point of the common law is to apply the legal principles to new facts.

* “§ 230(c)(2) only considers the removal of limited categories of content, like obscene, excessively violent, and similarly objectionable expression.”

This is an intentionally deceptive paraphrase. Section 230(c)(2)(A) talks about “otherwise” objectionable, not “similarly” objectionable, and many courts have held that “otherwise” does not mean “similar.” The court responds with a generic “ejusdem generis” explanation rather than engaging with the caselaw rejecting those arguments. Also, the judge completely ignores the statutory exclusions to Section 230 (federal criminal prosecutions, IP, ECPA, FOSTA).

* I’m not going to deconstruct all of the common carriage discussion because it was just one judge’s (terrible) dicta. Still, I will point out the sloppiness of the judge’s terminology in saying “Platforms are communications firms” just like the communications firms (telegraphs, telephony, mail, etc.) that are indeed subject to common carriage obligations. The judge calls them “communications firms” because they “enable[] users to communicate with other users” (recall, this was the unprecedented aspect of the Internet in the Reno v. ACLU opinion), so “the Platforms are no different than Verizon or AT&T.” First of all, really? Equating “edge” providers with “access” providers undoes the entire telecommunications regulatory scheme, but I guess this judge has to break a few eggs to make an omelet. Second, the Reno v. ACLU opinion rejected the equation of websites with telephony service providers.

* There’s also this nugget in the common carriage discussion: “each Platform has an effective monopoly over its particular niche of online discourse.”

As any student in Econ 101 will quickly point out, if there is more than one player in the market, then by definition there is no monopoly. The judge tries to overcome this basic economics principle by arguing that “it’s difficult or impossible for a competitor to reproduce the network that makes an established Platform useful to its user.” To be clear, unique customer bases or unique product niches do not define a monopolist, because then most businesses would be monopolists by definition. To bolster this fiction, the judge says “political pundits need access to Twitter.” Ironically, Trump has found a new audience AND MADE MORE MONEY post-Twitter deplatforming.

* The judge cites Lochner 8 times, and in a footnote, he rejects the plaintiffs’ request “for the more drastic remedy of invalidation of an economic regulation.” No, the plaintiffs are asking the court to reject a SPEECH regulation, and it’s condescending, insulting, and frankly embarrassing to position this law as a simple economic regulation.

* The judge characterizes “requirements to publish an acceptable use policy and disclose certain information about the Platforms’ content management and business practices” as “one-and-done” disclosures.

Huh? These disclosures are not one-time obligations. They are perpetual obligations, continuously triggered by every change in the service’s editorial operations, no matter how minor. The way the judge marginalizes these obligations as one-and-done, when they are NEVER DONE, is condescending, insulting, and embarrassing. Furthermore, it is impossible to make these disclosures to the law’s satisfaction for reasons I’ve explained in my Hastings LJ piece. Essentially, any undisclosed editorial policy shows that the AUP wasn’t complete, and it’s impossible to disclose every editorial policy.

* The judge says that the plaintiffs may have shown that “some of the transparency report’s disclosures, if interpreted in a particularly demanding way by Texas, might prove unduly burdensome due to unexplained limits on the Platforms’ technical capabilities. But none of these contingencies have materialized.”

Can you guess why none of these contingencies have materialized? Maybe it’s because THE LAW HAS BEEN ENJOINED UNTIL NOW, AND THE JUDGE IS WRITING THE OPINION THAT WILL LIFT THAT INJUNCTION.

* In Herbert v. Lando, the Supreme Court says that legislatures may not subject a publisher’s “editorial process to private or official examination merely to satisfy curiosity or to serve some general end such as the public interest.” (I discuss this language in the Hastings LJ article). This judge distinguishes the language because the Lando court authorized the plaintiffs’ requested discovery. It’s correct that the Supreme Court authorized DISCOVERY in that case. The whole point of the Court’s warning was that a statutory compelled-disclosure obligation is NOT THE SAME AS DISCOVERY for multiple reasons, including the lack of judicial supervision and the fact that the defamation plaintiff will have made a prima facie showing of the defendant’s falsity before getting access to discovery. This distinction between discovery in litigation and ex ante statutory compelled disclosures is obvious in the Lando opinion. So the judge’s gotcha actually just confirms the judge misread ANOTHER key Supreme Court precedent.

* The judge acknowledges that Florida’s law has a “consistency” obligation, but the judge simply ignores that requirement when comparing the Florida and Texas laws, saying there’s nothing in the Florida law like the prohibition on viewpoint-discriminatory content moderation and thus the laws are distinguishable. I guess one way to make distinctions between laws is to ignore the inconvenient parts.

Implications

Judge Oldham’s opinion helps crystallize the issues in this case for the Supreme Court. I can’t imagine the three “liberal” justices will vote to back any opinion that thinks Internet services are obsessing about Nazis and terrorists, even though Justice Kagan voted against reinstating the injunction during the shadow docket round. The judge’s flippancy should cement their votes. The Internet’s future rests on whether two of the remaining six justices have sufficiently avoided the #MAGA Kool-Aid. I don’t love those odds, but I am trying to stay optimistic.

I trust the stakes of this case are clear. If Internet services are obligated to engage in “viewpoint-neutral” content moderation, then they cannot do content moderation at all. (There is also the impossibility of distinguishing legal from illegal content perfectly, a point I explain here). Without content moderation, the trolls, spammers, and griefers take over, and everyone who wants to have productive conversations leaves. At that point, the existing incumbent services give up on UGC and transition into professionally produced in-licensed content. To fund the licenses, the services will adopt paywalls. This paywalled professionally produced content will lead to fewer people getting to speak, less diversity among the speakers, less diverse content, less free access to content, and a deepened digital divide. Virtually everyone loses in this scenario. #MAGA/#MTGA.

Case Citation: NetChoice, LLC v. Paxton, 2022 WL 4285917 (5th Cir. Sept. 16, 2022)

Case library (see also NetChoice’s library, the Court Listener page, and the Supreme Court page):