Court Enjoins the Utah “Minor Protection in Social Media Act”–NetChoice v. Reyes
Utah’s Minor Protection in Social Media Act contains two major provisions. First, it requires social media companies to conduct age assurance of their users to a 95% accuracy rate, along with an appeals process for misclassified users.
Second, once minors are segregated:
- social media companies must “set default privacy settings to prioritize maximum privacy” (giving specific examples of regulated settings), and parental consent is required to make any adjustments to those settings.
- social media companies must disable features that prolong user engagement.
- social media companies’ TOSes are presumed to include an assurance of confidentiality for minors’ data (with numerous statutory exceptions, waivable only with parental consent).
If you’re new to seeing these “but think of the kids” bills, this disjointed mélange of undertheorized policy ideas is typical nowadays. Legislators don’t actually know what effects any one of these changes would have, let alone the effects of the package all at once. Indeed, laws like this are very likely to HURT children in MANY different ways, and the Utah legislature and governor couldn’t care less about those pernicious effects.
In my prior post on this case, I covered how the court denied efforts to enjoin the Utah law on Section 230 and dormant commerce clause grounds. While I think both doctrines play a role here, it was always clear that the First Amendment was going to be the real action. And as expected, the court indeed enjoined the law on First Amendment grounds.
Content Restriction
The law’s attempt to pretextually hide its censorship behind a privacy framing fails miserably:
there is no dispute the Act implicates social media companies’ First Amendment rights. The speech at issue in this case—the speech social media companies engage in when they make decisions about how to construct and operate their platforms—is protected speech
The court says the Utah law draws distinctions between social media companies and the rest of the Internet. This distinction “singles out social media companies based on the ‘social’ subject matter ‘of the material [they] disseminate[],’” which is a content-based distinction. Utah argued that the law doesn’t suppress any specific topic. The court responds:
the focus of NetChoice’s challenge is that the Central Coverage Definition restricts social media companies’ abilities to collage user-generated speech into their “own distinctive compilation[s] of expression.”
The court repeatedly endorses social media companies’ editorial rights, which was never in doubt in my mind but certainly benefited from the Moody endorsement.
Strict Scrutiny
Compelling State Interest
The court says that Utah hasn’t shown a compelling state interest.
The court rejects the state’s evidence about the purported harms of social media usage. “Defendants have not provided evidence establishing a clear, causal relationship between minors’ social media use and negative mental health impacts.” Utah cited the 2023 Surgeon General’s report, but the court says that report “offers a much more nuanced view of the link between social media use and negative mental health impacts than that advanced by Defendants.” In other words, the court actually read the report and understood it, unlike the pro-censorship advocates who cite that report for propositions it expressly can’t support.
Utah also offered a declaration from Dr. Jean Twenge (SDSU) about the purported harms, but the court says “the majority of the reports she cites show only a correlative relationship between social media use and negative mental health impacts.” Any causal inferences from the Twenge declaration apply only to limited user subpopulations and limited issues (girls and body image, though even that is contestable, as I will explain in my Segregate-and-Suppress paper).
The court also notes that while the law purports to protect minors’ privacy, the “personal information a minor might choose to share on a social media service—the content they generate—is fundamentally their speech.” In other words, the law affirmatively HARMS minors by taking away their freedom of speech.
Finally, to the extent the law is justified as empowering parents, the court says there are other ways to address that interest.
Narrow Tailoring
The court says the state has “not shown existing parental controls are an inadequate alternative to the Act…parents are unaware of parental controls, do not know how to use parental controls, or simply do not care to use parental controls.” The state hasn’t tried to fix that information gap. Indeed, from my perspective, parents receive woefully inadequate education about how to guide their children through the digital era.
With respect to the allegedly addictive attributes of social media, the court says:
Defendants do not offer any evidence that requiring social media companies to compel minors to push “play,” hit “next,” and log in for updates will meaningfully reduce the amount of time they spend on social media platforms. Nor do Defendants offer any evidence that these specific measures will alter the status quo to such an extent that mental health outcomes will improve and personal privacy risks will decrease.
In other words, the legislature had precisely zero clue about what this law would actually do in the field and whether it would help or harm children. Nice.
The court says the law is also underinclusive because minors can still spend as much time as they like on social media, so the law doesn’t address its purported goal of protecting minors from excessive usage. The law also doesn’t curb minors’ overuse of other Internet services, i.e., the law does “not account for the wider universe of platforms that utilize the features they take issue with, such as news sites and search engines.”
The court summarizes:
Defendants have not identified why the Act’s scope is not constrained to social media platforms with significant populations of minor users, or social media platforms that use the addictive features fundamental to Defendants’ well-being and privacy concerns
As usual, those judge-crafted hypothetical laws would also likely be unconstitutional.
Implications
User-Plaintiffs. The court says that the user-plaintiffs lacked standing due to lack of redressability. That’s because the law only regulates the behavior of social media companies, not users, and how the social media companies translate the law to affect users is within their constitutionally protected editorial discretion. In support of this, the court says in a footnote: “Courts have uniformly held, social media companies are private entities, and the public does not have a First Amendment right to use their platforms.” Cites to: O’Handley v. Weber, Prager Univ. v. Google, DeLima v. Google, Fed. Agency of News v. Facebook, Davison v. Facebook, and Nyabwa v. FaceBook.
How Many Ways Did the Utah Legislature Fail? This law was a compendium of unconstitutional and undertheorized censorial ideas: age authentication! parental approval! restrictions on how social media companies publish content! unsupported distinctions between different players in the Internet ecosystem! And so much more.
One possible takeaway from this ruling is that legislatures cannot cleave social media companies from the larger universe of UGC publishers and expect to justify the distinctions. Social media-specific censorship should always fail because it’s underinclusive. (As I’ve long said, if regulators can’t define it, they can’t regulate it). But legislatures also can’t impose the censorial restriction across all UGC publishers without being massively overinclusive. In this respect, legislatures can never size the regulation’s scope perfectly–but that’s because they shouldn’t be pursuing censorial policies in the first place.
As an indication that the Utah law can fail many different ways, the court says in a footnote:
the Act also appears overinclusive in so far as it affects users’ speech. For example, the Act’s age assurance provision, which applies to all users, broadly burdens adult users’ ability to “access a broad range of protected speech on a broad range of covered websites.” Fitch, 2024 WL 3276409 at *12. As another example, the Act’s restrictions on minors’ ability to connect with those outside their immediate networks broadly burdens minors’ ability to share and receive protected speech
A reminder that, this term, the Supreme Court will reconsider age authentication requirements in the Free Speech Coalition v. Paxton case. The court could have struck down the law entirely on the age authentication issue alone, and that consideration could come back into play if this ruling doesn’t hold.
H8rs Gonna H8. In passing censorial “but think of the kids” laws, legislatures routinely cherrypick evidence to tell a story of how social media harms children. As this case shows, that cherrypicked evidence didn’t pass muster when evaluated by an independent court.
Worse, the cherrypicking is condemnable as a foundation for policymaking because it completely disregards the pro-social and beneficial effects of social media, of which there are many. As I will explain in my Segregate-and-Suppress paper, legislatures are giving a massive F-U to the communities who would be hurt by losing those benefits. If we’re going to have a “debate” about regulating social media, we have to seriously account for social media’s benefits and the difficulties of balancing tradeoffs among different Internet user communities. Anyone who pushes for social media regulation without doing benefit-side accounting proves that they don’t care about children as much as they think.
Case Citation: NetChoice LLC v. Reyes, 2024 WL 4135626 (D. Utah Sept. 10, 2024). Case library.