Another Texas Online Censorship Law Partially Enjoined–CCIA v. Paxton
After segregating minors, the services must then block them from accessing content that “promotes, glorifies, or facilitates” the following categories:
(A) “suicide, self-harm, or eating disorders”;
(B) “substance abuse”;
(C) “stalking, bullying, or harassment”;
(D) “grooming, trafficking, child pornography, or other sexual exploitation or abuse”; and
(E) material that qualifies as obscenity for minors under Texas Penal Code § 43.24.
The law also provides privacy-style protections for minors, including a ban on targeted advertising, and gives parents control over minors’ online activities. It also imposes transparency obligations for content ordering algorithms.
The court enjoins the content blocking requirements and defers on the rest.
Scrutiny Level
The court says HB18 is “a content- and speaker-based regulation, targeting DSPs whose primary function is to share and broadcast social speech.” The law excludes certain speakers (“news, sports, commerce, and provider-generated content”) while treating UGC more harshly, which triggers strict scrutiny. Cites to NetChoice v. Yost and NetChoice v. Fitch. The court explains:
HB 18 discriminates based on the type of content provided on a medium, not just the type of medium. A DSP that allows users to socially interact with other users but “primarily functions to provide” access to news or commerce is unregulated. An identical DSP, with the exact same medium of communication and method of social interaction, but “primarily functions to provide” updates on what a user’s friends and family are doing (e.g., through Instagram posts and stories), is regulated. If there is a difference between the regulated DSP and unregulated DSP, it is the content of the speech on the site, not the medium through which that speech is presented.
Application of Strict Scrutiny
Following the Supreme Court’s instructions in Moody, the court walks through each provision and applies the appropriate level of scrutiny.
The Data Privacy, Parental Control, and Disclosure Provisions
The court says these provisions “are largely unrelated to First Amendment expression.” I’m confused by this. Targeted advertising, for example, always involves First Amendment expression. Also, as I discuss in my Segregate-and-Suppress paper, parental control provisions are highly problematic from a speech standpoint. The court says “it is not clear that a law requiring parents to be allowed to access and change their children’s privacy settings implicates First Amendment concerns,” but this is clearly an incorrect statement when it comes to minors’ consumption of content (plus it ignores the problem of deploying a credible parental authentication solution). And more generally, any obligation for publishers to age-authenticate users is categorically constitutionally problematic, and without the segregation of minors, there’s no way to apply minor-specific rules at all. The court sidesteps all of these obvious problems. Perhaps CCIA/NetChoice will need to hammer these points again in further proceedings. The problems with the age authentication and parental authentication provisions aren’t severable from the rest of the law either, so I hope these provisions will fall eventually.
Monitoring-and-Filtering Provision
The law’s content blocking provision isn’t the least bit subtle. The court says: “The monitoring-and-filtering requirements explicitly identify discrete categories of speech and single them out to be filtered and blocked. That is as content based as it gets.”
The court questions whether Texas has a compelling interest in blocking some of the verboten content, “such as regulating content that might advocate for the deregulation of drugs (potentially “promoting” “substance abuse”) or defending the morality of physician-assisted suicide (likely “promoting” “suicide”)…. Much of the regulated topics are simply too vague to even tell if it is compelling. Terms like “promoting,” “glorifying,” “substance abuse,” “harassment,” and “grooming” are undefined, despite their potential wide breadth and politically charged nature.”
The court says the provision isn’t narrowly tailored and that Texas didn’t adopt the least restrictive means. The ambiguous terminology is also overbroad, so “HB 18 will likely filter out far more material than needed to achieve Texas’s goal.” Plus, the law is underinclusive because it only restricts UGC, not content from first-party publishers. The law also “threatens to censor social discussions of controversial topics,” an especially pernicious outcome because it cuts teens off from key parts of the Internet. “Texas also prohibits minors from participating in the democratic exchange of views online. Even accepting that Texas only wishes to prohibit the most harmful pieces of content, a state cannot pick and choose which categories of protected speech it wishes to block teenagers from discussing online.”
Vagueness
The plaintiffs challenged the DSP definition as vague because it turns on those services’ “primary” function. The court says the argument is a stretch, and “the term ‘socially interact’ is capable of a limited common-sense meaning.” (Which is…? I have no idea.) The court also says that the plaintiffs’ members seem to be clear that they are regulated under the law. Yes, but they are just exemplars of an entire industry with uncertain boundaries. The court says the companies can take that issue up in an as-applied challenge.
The court is more troubled by the blocking provisions’ verbs “promote, glorify, and facilitate.” As just one example, the court notes that “pro-LGBTQ content might be especially targeted for ‘grooming'”–a reasonable fear given Texas’ weaponization of LGBTQ status in its culture wars.
As a gotcha, Paxton argued that the allegedly vague terms show up in some companies’ TOSes. The court’s response is withering: “a self-policed rule does not suffer the same vagueness problems as a state-backed proscription. Facebook may know their internal definition of “glorify” but the company cannot be assured that Paxton or Texas courts will abide by the same definition.”
Section 230
The court says:

HB 18’s monitoring and filtering requirements, unlike HB 1181, do impose liability based on the type of content that a website hosts. A website only filters out content that falls into certain categories of prohibited speech (e.g., “glorifying” an “eating disorder” and “promoting” “substance abuse”). The monitoring-and-filtering requirements necessarily derive their liability from the type of content a site displays.
Courts have been skeptical of Section 230 as a sword to strike down legislation (e.g., NetChoice v. Reyes), so it’s nice to see that work here.
Conclusion
Like the recent 9th Circuit ruling in NetChoice v. Bonta, this ruling lives in a hypothetical alternative universe where the court disregards the age-authentication mandate. But if the age-authentication mandate itself is unconstitutional–which I think it is–then all of the associated provisions should be unimplementable due to the impossibility of sorting minors from adults. By sidestepping the law’s core defect, the opinion occupies a liminal space that should be resolved by the Free Speech Coalition v. Paxton appeal (I will be filing an amicus brief in that case, drawing material from my Segregate-and-Suppress article).
If the non-enjoined provisions don’t fall eventually, they could pose huge problems for the Internet. I won’t go through those problems in detail now because of the liminal ambiguity about age authentication, but obviously other legislatures and courts will take great interest in any space where they are free to regulate.
While the FSC v. Paxton appeal is pending at the Supreme Court, this ruling will get appealed to the Fifth Circuit, which has yet to meet a censorship law it doesn’t like. That decision, whichever direction it goes, also seems destined for the Supreme Court. The Moody v. NetChoice case went through the exact same step-by-step process, and we might experience déjà vu at each step.
Case Citation: Computer & Communications Industry Association v. Paxton, 1:24-cv-00849-RP (W.D. Tex. Aug. 30, 2024)