Court Enjoins Ohio’s Law Requiring Parental Approval for Children’s Social Media Accounts–NetChoice v. Yost

Ohio enacted the “Parental Notification by Social Media Operators Act,” Ohio Rev. Code § 1349.09. The law requires certain websites and services to obtain verifiable parental consent before children may register or create an account.

The regulated entities are those that:

  • “target” children under 16 or are “reasonably anticipated to be accessed by children.” As with COPPA, the law enumerates 11 considerations the AG can use to decide if a service is oriented towards kids.
  • have Ohio users
  • allow users to do all of the following:
    • interact socially with other users
    • construct a public or semipublic profile (the “semipublic” term in a legal rule always makes me laugh)
    • “Populate a list of other users with whom an individual shares or has the ability to share a social connection”
    • “Create or post content viewable by others” [what’s the difference between “interact socially” and “create/post content”?]

Obviously, these definitions reach most user-generated content (UGC) services, not just “social media” in the classic sense. However, the Act excludes review websites and “[c]omments incidental to content posted by an established and widely recognized media outlet, the primary purpose of which is to report news and current events.”
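To make the definitional structure concrete, here is a minimal sketch (my own illustration, with hypothetical attribute names that are my shorthand, not the statute’s text) of how the coverage elements appear to combine: the audience prongs are disjunctive, the four feature requirements are conjunctive, and the carve-outs subtract from the result.

```python
from dataclasses import dataclass

@dataclass
class Service:
    # Hypothetical labels for the statutory elements (my shorthand, not the Act's).
    targets_under_16: bool              # "targets" children under 16
    anticipated_child_access: bool      # "reasonably anticipated to be accessed by children"
    has_ohio_users: bool
    social_interaction: bool            # users can interact socially with other users
    semipublic_profiles: bool           # public or semipublic profiles
    connection_lists: bool              # lists of users sharing a social connection
    content_viewable_by_others: bool    # create/post content viewable by others
    is_review_site: bool                # carve-out
    is_incidental_news_comments: bool   # carve-out for established news outlets

def is_regulated(s: Service) -> bool:
    """Sketch of how the Act's coverage elements appear to combine."""
    audience = s.targets_under_16 or s.anticipated_child_access  # either prong suffices
    features = all([  # the Act requires ALL four features
        s.social_interaction,
        s.semipublic_profiles,
        s.connection_lists,
        s.content_viewable_by_others,
    ])
    exempt = s.is_review_site or s.is_incidental_news_comments
    return audience and s.has_ohio_users and features and not exempt

# Example: a generic UGC forum that doesn't target kids is still covered.
forum = Service(
    targets_under_16=False, anticipated_child_access=True, has_ohio_users=True,
    social_interaction=True, semipublic_profiles=True,
    connection_lists=True, content_viewable_by_others=True,
    is_review_site=False, is_incidental_news_comments=False,
)
print(is_regulated(forum))  # True
```

As the sketch suggests, the “reasonably anticipated to be accessed by children” prong does most of the work: a service need not court a young audience at all to be swept in.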

Unlike many of the other online child safety laws, this law does not explicitly require regulated entities to deploy age authentication. Indeed, the AG’s office “assured the court at the Rule 65.1 conference that it did not intend to enforce an age verification requirement.” (The legislature could have made that explicit in the law, but didn’t.) Even so, the law still compels regulated entities to sort users by age, and it uses a more stringent test than COPPA (under-16s rather than COPPA’s under-13s). The reality is that most websites are likely to have some 15-year-old users, even if the site isn’t designed to appeal to that audience. Thus, almost all UGC websites must consider their risk of violating the law, and most will see few ways to mitigate that risk other than to deploy age authentication and screen out the under-16s.

Ohio AG Yost claims the law is about contracts, not censorship, i.e., the law helps parents supervise the contracts their kids enter into. This framing would be a lot more credible if the requirement weren’t deployed exclusively against services that allow users to talk with each other. (Recall that the term “social media” is literally in the bill title.) Thus, given the speech implications of the targeted services, Yost’s explanation looks clearly pretextual.

NetChoice sought an eleventh-hour TRO to block the law. The lateness of the request forces the court to do a rush job, which it does admirably. The court finds that NetChoice has associational standing and grants the TRO. Some of the highlights:

Due Process: Void for Vagueness

The court has concerns about how the regulated entities are defined:

the Act purports to apply to operators that “target[] children” or are “reasonably anticipated to be accessed by children.” On its face, this expansive language would leave many operators unsure as to whether it applies to their website. The legislature’s apparent attempt at clarity is also unilluminating. The Act provides an eleven-factor list that the Attorney General or a court may use to determine if a website is indeed covered, which includes malleable and broad-ranging considerations like “[d]esign elements” and “[l]anguage.”

The Act also contains an eyebrow-raising exception for “established” and “widely recognized” media outlets whose “primary purpose” is to “report news and current events”…the Act also provides no guardrails or signposts for determining which media outlets are “established” and “widely recognized.” Such capacious and subjective language practically invites arbitrary application of the law.

It’s so unusual to see a legislature adopt censorial regulations using capacious and subjective language that supports arbitrary enforcement. 🙄 Would Ohio’s AG really weaponize the law’s subjective language to advance goals that aren’t actually in Ohioans’ interests? 🙄

First Amendment: Restrictions on Protected Speech

The court thinks that the law is likely to be subject to strict scrutiny because it’s both a speaker-based and content-based restriction:

On its face, the Act distinguishes between different websites—exempting some and targeting others—and therefore, appears speaker-based….The Act’s exemption of “widely recognized” “media outlets” and product review sites bolsters this conclusion…

Particularly relevant here is the Supreme Court’s holding that even if “the state has the power to enforce parental prohibitions”—for example, enforcing a parent’s decision to forbid their child to attend an event—“it does not follow that the state has the power to prevent children from hearing or saying anything without their parents’ prior consent.” As the Court explained, “[s]uch laws do not enforce parental authority over children’s speech and religion; they impose governmental authority, subject only to a parental veto.” The Act appears to be exactly that sort of law. And like other content-based regulations, these sorts of laws are subject to strict scrutiny.

Naturally, if strict scrutiny applies, the law isn’t likely to survive the constitutional challenge:

it is unlikely that the government will be able to show that the Act is narrowly tailored to any ends that it identifies. Foreclosing minors under sixteen from accessing all content on websites that the Act purports to cover, absent affirmative parental consent, is a breathtakingly blunt instrument for reducing social media’s harm to children. The approach is an untargeted one, as parents must only give one-time approval for the creation of an account, and parents and platforms are otherwise not required to protect against any of the specific dangers that social media might pose.

And even if the government can show that the interest in question is indeed protecting minors from entering into contracts, the Act’s inclusions and exemptions are not tailored to that end. For example, the Act would arguably permit a minor to create an account, subject to contract, with the New York Times, raising similar concerns to the ones involved in contract formation with Facebook, which the Act appears to target. In other words, the Act appears, at this juncture, to be both underinclusive and overinclusive, irrespective of the government interest at stake.

The court is taking potshots at the legislative drafting choices, which might seem a little like Monday-morning quarterbacking. However, had the legislature redrafted the law to anticipate and avoid these concerns, the court would have taken potshots at the fixes. In other words, the problem isn’t just this incarnation of the policy objective; EVERY incarnation of the policy objective is likely to be speaker-based, content-based, and under- and overinclusive. Because the court can’t say that, it leaves open some room for censorial-minded regulators to keep trying new iterations, rather than forcing them to abandon the censorship quest and instead carefully consider what mix of regulatory and social initiatives would maximize the value of online communications while minimizing the harm to user subpopulations.

Implications

“Protect the kids” laws are often pretextual. One of government’s most important roles is to protect vulnerable populations, and children are viewed as among the most vulnerable in our society. Thus, constituents/voters generally are inclined to support any legal efforts that purport to protect children. Knowing this inclination, regulators can cynically co-opt the “protect children” policy mantra to advance laws that clearly do not actually protect children–or that will even make children less safe. If no one is able to convince voters of the regulators’ duplicity, the regulators face no downside to such manipulation.

In a techlash era, the pretextual adoption of “protect kids online” justifications is being widely abused to advance regulators’ censorial goals. Regulators will keep doing this so long as it keeps working politically, even if it means passing blatantly censorial laws that will waste taxpayers’ money to defend in court.

REQUEST FOR A FAVOR: I know it’s common to call laws purporting to advance child safety “well-meaning” as a way of making a positive concession before unleashing a critical assessment. Don’t do that unless you are convinced that the proposer actually was trying to protect children and did the requisite research to understand the many conflicting tradeoffs and risks of counterproductive outcomes. It’s not “well-meaning” to wade into the complex world of child safety without doing your homework, and it’s definitely not well-meaning when the effort is pretextual. For that reason, I won’t concede that the Ohio law was “well-meaning.” It’s not a close call in my mind.

“Parental approval” laws can be highly problematic. The Ohio law isn’t expressly about protecting the kids. Neither “child” nor “children” even appears in the law’s title. LOL.

Instead, this law fits into the broader initiative of “parental rights,” i.e., giving parents more power over their children’s activities. Parents know their children better than anyone else, so parents are often the best deciders of their children’s interests. However, there are times when parents’ interests may be adverse to their children’s interests; abortion rights and LGBTQ issues are two prominent examples. Furthermore, parental approval can be complicated, such as with divorced parents who disagree with each other, or children who are in the foster care system or have guardianships that may be qualitatively different from a parent-child relationship. In other words, the “parental rights” framing assumes a single paradigm of the parent-child relationship that oversimplifies matters considerably. When it comes to something as important as teenagers’ independent self-expression, giving “parents” all the power does not strike the right balance.

Mandatory age authentication in sheep’s clothing. AG Yost made an important concession when he claimed that the state wouldn’t require age authentication. It wasn’t much of a give because, for now, it’s clear that such mandates are unconstitutional. Soon enough, we’ll see if the Supreme Court still agrees. But this law is a sign of how legislatures are slipperily trying to impose mandatory age authentication while preserving plausible deniability that they are doing so. I hope courts won’t fall for it. This one didn’t (so far).

Case citation: NetChoice, LLC v. Yost, 2024 WL 104336 (S.D. Ohio Jan. 9, 2024)

Selected Related Blog Posts

* Louisiana’s Age Authentication Mandate Avoids Constitutional Scrutiny Using a Legislative Drafting Trick–Free Speech Coalition v. LeBlanc
* Comments on the Ruling Declaring California’s Age-Appropriate Design Code (AADC) Unconstitutional–NetChoice v. Bonta
* Two Separate Courts Reiterate That Online Age Authentication Mandates Are Unconstitutional
* Do Mandatory Age Verification Laws Conflict with Biometric Privacy Laws?–Kuklinski v. Binance
* Five Ways That the California Age-Appropriate Design Code (AADC/AB 2273) Is Radical Policy
* An Interview Regarding AB 2273/the California Age-Appropriate Design Code (AADC)
* Op-Ed: The Plan to Blow Up the Internet, Ostensibly to Protect Kids Online (Regarding AB 2273)
* A Short Explainer of How California’s Age-Appropriate Design Code Bill (AB2273) Would Break the Internet
* Will California Eliminate Anonymous Web Browsing? (Comments on CA AB 2273, The Age-Appropriate Design Code Act)
* Minnesota Wants to Ban Under-18s From User-Generated Content Services