Two Separate Courts Reiterate That Online Age Authentication Mandates Are Unconstitutional

[I will blog the NetChoice v. Bonta ruling very soon.]

Many state legislatures draft Internet regulations without any genuine concern for whether or not the laws violate the First Amendment. This isn’t a partisan thing; both Democrats and Republicans do it. As a result, state legislatures, both red and blue, are producing a flood of Internet censorship laws that will tie up the courts for years.

Many recent laws essentially mirror the “protect the kids online” initiatives of the 1990s and early 2000s, all of which failed as unconstitutional. In 1996, Congress passed the Communications Decency Act (CDA) to restrict minors’ access to indecent and patently offensive content online. The CDA essentially required websites to authenticate user age. The CDA was struck down in 1997 in a landmark case, Reno v. ACLU. Among other things, the Supreme Court rejected the arguments that legislatures could treat the Internet like telephony or broadcasting. Congress then passed the Child Online Protection Act (COPA) in 1998, which was CDA 2.0 but limited to commercial websites and with tighter definitions of the restricted content. The Supreme Court essentially struck down COPA in 2004 in Ashcroft v. ACLU, saying that filtering solutions were less restrictive than server-side content controls (it took another 5 years of litigation before the challenges were fully resolved).

As state legislatures continue to copy “protect kids online” laws that were struck down decades ago, unsurprisingly they are getting the same (negative) results. In Arkansas, a law requiring parental consent before minors sign up for social media accounts didn’t survive intermediate scrutiny. In Texas, the court applied strict scrutiny to strike down a law requiring commercial websites whose content is 1/3 or more pornography to age-authenticate all users and block minors. [As I will blog soon, California’s law requiring online businesses to advance the “best interests” of children also failed a constitutional challenge.]

These rulings may seem obvious given the precedent, but they confirm that technological and social changes over the intervening years have not changed the results. The principal remaining question: what will happen to these rulings on appeal?

* * *

Note About the States’ “Expert” Tony Allen

Both Arkansas and Texas retained Tony Allen from the UK as an “expert” witness. It’s fair to say that Allen’s testimony didn’t strengthen the states’ cases. Allen championed the merits of the UK Online Safety bill (my critique about that endeavor), even though the bill breaks new ground in online censorship that will inspire repressive regimes worldwide. Perhaps with a touch of national pride/hubris, Allen repeatedly denigrated the Texas and Arkansas laws as inferior to the UK bill. It really hurts a party’s position when their expert witness supports the opposition, so Allen’s cocksure approach inadvertently did the Internet a solid. 🙏 However, I don’t expect his “expert” services will be in high demand going forward LOL.

* * *

Arkansas Act 689 of 2023

NetChoice LLC v. Griffin, 2023 WL 5660155 (W.D. Ark. Aug. 31, 2023)

Arkansas’ law says minors need parental consent before creating social media accounts. A simple goal to state, but virtually impossible to implement constitutionally.

Void for Vagueness. The court says the statutory definition of “social media company” is impermissibly vague, especially its reference to a service’s “primary purpose.” The court was unimpressed by the state’s condescending response:

The State argues that Act 689’s definitions are clear and that “any person of ordinary intelligence can tell that [Act 689] regulates Meta, Twitter[,] and TikTok.” But what about other platforms, like Snapchat? David Boyle, Snapchat’s Senior Director of Products, stated in his Declaration that he was not sure whether his company would be regulated by Act 689. He initially suspected that Snapchat would be exempt until he read a news report quoting one of Act 689’s co-sponsors who claimed Snapchat was specifically targeted for regulation.

So the legislator didn’t agree with the state. Neither did the state’s expert, Tony Allen, who said he thought the definition covered Snapchat. So I guess the state thinks the legislator and its own expert aren’t “people of ordinary intelligence”? And because the judge disagreed, the state also implied the judge was not a “person of ordinary intelligence.” Nice.

Further picking apart the definition of “social media platform,” the court says it’s vague to refer to the service’s “substantial function” but also include key exceptions, making services uncertain whether they fit into the exceptions or not.

It’s not surprising that the statute uses a vague definition of social media. Past statutory definitions of the phrase have been all over the map, such as California’s definition of “social media” as all digital data, even if stored on a hard drive, which is really kind of the opposite of social media. I don’t believe it’s possible to create legally precise definitions of “social media” that don’t include all UGC services. My perennial reminder: if you can’t define it, you can’t regulate it.

Having shredded the definition of “social media platform,” the court next turns to the impossibility of authenticating parental status:

Act 689 also fails to define what type of proof will be sufficient to demonstrate that a platform has obtained the “express consent of a parent or legal guardian.” If a parent wants to give her child permission to create an account, but the parent and the child have different last names, it is not clear what, if anything, the social media company or third-party servicer must do to prove a parental relationship exists. And if a child is the product of divorced parents who disagree about parental permission, proof of express consent will be that much trickier to establish—especially without guidance from the State….

once Act 689 goes into effect, the companies will err on the side of caution and require detailed proof of the parental relationship. As a result, parents and guardians who otherwise would have freely given consent to open an account will be dissuaded by the red tape and refuse consent—which will unnecessarily burden minors’ access to constitutionally protected speech.

We have known for a long time that authenticating parental status online is error-prone and not scalable. For example, this issue remains a challenge with COPPA, passed in 1998. Tony Allen, the state’s “expert,” agreed that it’s hard to authenticate parents online, yet another instance where his testimony hurt the state’s case.

Burdens on Users’ Rights

The court says that if the state’s goal was to protect children from lawful-but-awful speech, strict scrutiny applies. The state responded that the law was a “location restriction,” analogous to restrictions on minors entering bars or casinos. Really??? In her concurrence/dissent to Reno v. ACLU, Justice O’Connor claimed that content regulations could be analogized to “cyber-zoning” rules. That argument fell flat in 1997 and hasn’t gotten better with time.

Thus, the court responds: “the primary purpose of a social media platform is to engage in speech, and the State stipulated that social media platforms contain vast amounts of constitutionally protected speech for both adults and minors.” In contrast, a bar’s purpose is to get people drunk. I don’t know why it’s so difficult to see the differences between online speech venues and offline retail establishments, but I keep seeing people embrace this obviously terrible analogy.

(The state also crazily argued that if a shopping mall had a restaurant with a bar, the mall would need to screen minors before they could enter the mall. The court says flatly, “Clearly, the State’s analogy is not persuasive.” Seriously, if you actually believe kids should be screened before entering shopping malls because a bar is located somewhere in the mall, you probably should not be doing Internet Law).

The court hammers the definition of social media platform again, especially the fact that it’s riddled with exceptions:

Act 689’s definitions and exemptions do seem to indicate that the State has selected a few platforms for regulation while ignoring all the rest. The fact that the State fails to acknowledge this causes the Court to suspect that the regulation may not be content-neutral

Disingenuous/pretextual defenses of the law? Inconceivable.

Benefits/Burdens Analysis

The court thinks the law should ultimately be subject to strict scrutiny, but at this preliminary stage of the litigation, it gave the state a break and applied only intermediate scrutiny. This is a blessing in disguise, because a law failing even the lesser standard of scrutiny makes it harder for other legislatures to succeed.

The court acknowledges the state’s important interest in protecting kids.

With respect to burdens on adults, the court says:

Requiring adult users to produce state-approved documentation to prove their age and/or submit to biometric age-verification testing imposes significant burdens on adult access to constitutionally protected speech and “discourage[s] users from accessing [the regulated] sites.” [Cite to ACLU v. Reno, Supreme Court 1997]. Age-verification schemes like those contemplated by Act 689 “are not only an additional hassle,” but “they also require that website visitors forgo the anonymity otherwise available on the internet.” [Cite to American Booksellers Foundation v. Dean, 2d Cir. 2003; ACLU v. Mukasey, 3d Cir. 2008]

(Yes, in 2023, we are relitigating issues that were resolved in 1997).

The court credited the concerns, surfaced in the COPA litigation, that age authentication creates security risks. “It is likely that many adults who otherwise would be interested in becoming account holders on regulated social media platforms will be deterred—and their speech chilled—as a result of the age-verification requirements.”

With respect to burdens on children, age authentication and parental consent chill their access to content:

Act 689 bars minors from opening accounts on a variety of social media platforms, despite the fact that those same platforms contain vast quantities of constitutionally protected speech, even as to minors. It follows that Act 689 obviously burdens minors’ First Amendment Rights [cite to Brown v. Entertainment Merchants]

Narrow Tailoring

The court says the law isn’t narrowly tailored because “the connection between these harms and ‘social media’ is ill defined by the data.” For example, the state’s evidence indicated YouTube was a source of harms, but the definition of “social media platform” excluded YouTube. Tony Allen, the state’s “expert,” criticized the law for excluding too many services, including Kik and other messaging services. The state claimed it based the list of targeted services on the ones most frequently mentioned by NCMEC…but that data wasn’t regressed for service size, and…

Act 689 regulates Facebook and Instagram, the platforms with the two highest numbers of reports. But, the Act exempts Google, WhatsApp, Omegle, and Snapchat—the sites with the third-, fourth-, fifth-, and sixth-highest numbers of reports. Nextdoor is at the very bottom of NCMEC’s list, with only one report of suspected child sexual exploitation all year, yet the State’s attorney noted during the hearing that Nextdoor would be subject to regulation under Act 689

The court says dryly: “if the State claims Act 689’s inclusions and exemptions come from the data in the NCMEC article, it appears the drafters of the Act did not read the article carefully.” 💀

The court again criticizes the definition of “social media platform,” this time the arbitrary $100M annual revenue cutoff (see this article for more on size-based definitions of Internet services):

None of the experts and sources cited by the State indicate that risks to minors are greater on platforms that generate more than $100 million annually. Instead, the research suggests that it is the amount of time that a minor spends unsupervised online and the content that he or she encounters there that matters. However, Act 689 does not address time spent on social media; it only deals with account creation. In other words, once a minor receives parental consent to have an account, Act 689 has no bearing on how much time the minor spends online. Using the State’s analogy, if a social media platform is like a bar, Act 689 contemplates parents dropping their children off at the bar without ever having to pick them up again. The Act only requires parents to give express permission to create an account on a regulated social media platform once. After that, it does not require parents to utilize content filters or other controls or monitor their children’s online experiences—something Mr. Allen believes is the real key to keeping minors safe and mentally well on social media.

[This implies that state-imposed restrictions on social media usage, such as time limits or curfew hours, might fare better from a constitutional standpoint…? I don’t think that’s right.]

With regard to the argument that parental consent implies parental supervision of children’s social media usage, the court says coldly: “there is no evidence of record to show that a parent’s involvement in account creation signals an intent to be involved in the child’s online experiences thereafter.”

This leads the court to conclude: “Act 689 is not narrowly tailored to target content harmful to minors. It simply impedes access to content writ large.” In other words, this law imposes censorship at a mass scale–and with no countervailing benefit: “If the legislature’s goal in passing Act 689 was to protect minors from materials or interactions that could harm them online, there is no compelling evidence that the Act will be effective in achieving those goals.”

The court compares the Arkansas law and the UK Online Safety bill. To be clear, the UK Online Safety bill is terrible, and it unquestionably would violate the US Constitution many times over. Here, the court highlights how Arkansas’ law doesn’t solve any problem. The court says the U.K. Online Safety bill attempts to protect kids from accessing harmful content, rather than restricting account creation, and restricting harmful content access is more consistent with Supreme Court precedent. (This latter point is an unnecessary digression and obviously wrong, because the CDA and COPA were both unconstitutional). The court summarizes: “Age-gating social media platforms for adults and minors does not appear to be an effective approach when, in reality, it is the content on particular platforms that is driving the State’s true concerns.”

Despite the brief wacky turn, the court concludes that age authentication can’t be saved per the 2004 Ashcroft v. ACLU decision:

Age-verification requirements are more restrictive than policies enabling or encouraging users (or their parents) to control their own access to information, whether through user-installed devices and filters or affirmative requests to third-party companies

Texas HB 1181

Free Speech Coalition, Inc. v. Colmenero, 2023 WL 5655712 (W.D. Tex. Aug. 31, 2023)

[UPDATE: On September 19, the 5th Circuit–doing 5th Circuit things–administratively stayed the injunction without issuing a substantive opinion, thus putting the law back into effect for now. This is the same shenanigans that occurred with the NetChoice v. Paxton case, and like that case, this move will likely prompt the plaintiffs to appeal to the Supreme Court’s shadow docket. As I say below, the 5th Circuit is “where the rule of law goes to die,” and putting the law back into effect without a written opinion is prime evidence of that. So shady.]

Texas’ law requires commercial websites to age-authenticate users and block minors when 1/3 or more of their material is “sexual material harmful to minors.” If these provisions sound familiar, it’s because this is basically a redux of COPA. The law also requires those websites to make consumer disclosures about harms caused by pornography.

Standing of Foreign Websites

The court says that foreign publishers can challenge the constitutionality of US laws. “The constitutional rights of foreign companies operating in the United States is particularly important in the First Amendment context….To hold otherwise would drastically expand the government’s ability to restrict ideas based on their content or viewpoint. States could ban, for example, the Guardian or the Daily Mail based on their viewpoint, because those newspapers are based in the United Kingdom. Alternatively, those websites could be subject to relaxed defamation laws without any First Amendment protection.”

In a FN, the court explains:

Defendant repeatedly suggests that Plaintiffs should not be able to avail themselves of First Amendment protections when they have not availed themselves of personal jurisdiction in Texas…. foreign pornography websites have been held subject to U.S. jurisdiction in other contexts. But if there was any doubt, purposeful availment would likely be established when a website knowingly accepts driver’s license data from a state resident, transmits that data to the state, and then proceeds to grant that visitor access to the site, as H.B. 1181 requires. [cite to Zippo and Johnson v. TheHuffingtonPost.com, Inc., 21 F.4th 314, 318 (5th Cir. 2021)]. At any rate, it is the threat of enforcement, not the existence of personal jurisdiction, that would lead to First Amendment chill.

The court implies that age authentication using government documents would likely subject the website to personal jurisdiction in every state where authenticated users are located, unless the website bounces all users from that state. This isn’t how personal jurisdiction is supposed to work, but it highlights how proliferation of age authentication mandates could potentially impact personal jurisdiction law.

Level of Scrutiny

This is not a hard question:

Just like COPA, H.B. 1181 regulates beyond obscene materials. As a result, the regulation is based on whether content contains sexual material. Because the law restricts access to speech based on the material’s content, it is subject to strict scrutiny

To overcome this, the state invoked Scalia’s dissent in Ashcroft v. ACLU. The court says LOLNO. The state also argued that all of the regulated content is obscene, another argument that has failed repeatedly.

Similar to Arkansas, the state tried the mockable argument that the law was a time/place/manner restriction. The court responds: “the notion is plainly foreclosed by ACLU v. Reno.” In a FN, the court blows up the state’s ridiculous physical space analogy:

H.B. 1181 does not operate like the sort of “strip club” restriction that Defendant analogizes to. It does not just regulate the virtual equivalent of strip clubs or adult DVD stores. Rather, a more apt analogy would be that H.B. 1181 forces movie theaters to catalog all movies that they show, and if at least one-third of those movies are R-rated, H.B. 1181 would require the movie theater to screen everyone at the main entrance for their 18+ identification, regardless of what movie they wanted to see.

In other words, mandatory online age authentication is the equivalent of requiring both toddlers and adults to show their IDs to see a G-rated movie just because the theater is showing R-rated movies elsewhere. Quit it with the dumb offline analogies already, please.

Narrow Tailoring–Underinclusive

The court says the law is “dramatically” and “severely” underinclusive because:

  • Search engines can still direct children to porn.
  • The law only applies to websites subject to Texas personal jurisdiction. Thus, the law doesn’t address foreign content. The court sidesteps the interstate commerce issue, but it too seems like a problem.
  • The 1/3 cutoff excludes many social media services, so Reddit, Tumblr, Facebook, and Instagram may have lots of porn that the law doesn’t address. “The problem, in short, is that the law targets websites as a whole, rather than at the level of the individual page or subdomain. The result is that the law will likely have a greatly diminished effect because it fails to reduce the online pornography that is most readily available to minors.” The 1/3 cutoff is obviously arbitrary and thus under/overinclusive.
  • The compelled disclosures are misdirected because the regulated websites already screen out kids. Thus, “a health disclaimer, ostensibly designed for minors, will be seen by adults visiting Pornhub, but not by minors visiting pornographic subreddits.”

Narrow Tailoring–Ambiguous

The law applies equally to minors from ages 0-17, even though what’s patently offensive, and what has social value, varies by age in practice:

A website dedicated to sex education for high school seniors, for example, may have to implement age verification measures because that material is “patently offensive” to young minors and lacks educational value for young minors. Websites for prurient R-rated movies, which likewise are inappropriate and lacking artistic value for minors under the age of 17, would need to implement age verification (and more strangely, warn visitors about the dangers of pornography).

This is a very hard problem to draft around, because scaling the legal obligations to reflect what’s offensive based on user age increases the law’s vagueness.

The court also questions how services are supposed to compute the 1/3 ratio. For example, “the law offers no guidance as to how to calculate the ‘one-third’—whether it be the number of files, total length, or size.” The court notes that if the 1/3 is based on the quantity of materials, then websites can easily just add non-sexual content to game their numbers. (This sounds like a good use of generative AI–manufacture high volumes of nonsense content purely to satisfy Texas’ anti-porn censorship).
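To make the computation problem concrete, here’s a minimal sketch (the catalog, the numbers, and the choice of fields are all invented for illustration; HB 1181 specifies none of this) of how the same site can land on different sides of the 1/3 line depending on which metric you pick:

```python
# Hypothetical catalog: (is_sexual_material, runtime_minutes, size_mb).
# Every entry is invented; HB 1181 doesn't say which metric controls.
catalog = [
    (True, 10, 500),    # short, high-bitrate explicit clips
    (True, 8, 400),
    (True, 12, 600),
    (False, 120, 200),  # long, low-bitrate non-sexual videos
    (False, 150, 250),
    (False, 180, 300),
    (False, 90, 150),
    (False, 60, 100),
    (False, 30, 50),
]

def sexual_share(metric):
    """Fraction of the catalog that is 'sexual material' under a given metric."""
    total = sum(metric(item) for item in catalog)
    flagged = sum(metric(item) for item in catalog if item[0])
    return flagged / total

print(f"by item count: {sexual_share(lambda i: 1):.1%}")     # 33.3% -- right at the line
print(f"by runtime:    {sexual_share(lambda i: i[1]):.1%}")  # ~4.5% -- far below it
print(f"by file size:  {sexual_share(lambda i: i[2]):.1%}")  # ~58.8% -- far above it
```

The same catalog triggers the law under one reading and not the others, and padding the library with non-sexual filler drives the count-based ratio down, which is exactly the gaming opportunity the court flags.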

The court also questions what it means to use “commercially reasonable” age verification.

Narrow Tailoring–Overbreadth

This isn’t complicated: “Courts have routinely struck down restrictions on sexual content as improperly tailored when they impermissibly restrict adult’s access to sexual materials in the name of protecting minors….Plaintiffs are likely to succeed on their overbreadth and narrow tailoring challenge because H.B. 1181 contains provisions largely identical to those twice deemed unconstitutional in COPA….Despite this decades-long precedent, Texas includes the exact same drafting language previously held unconstitutional.” Yes, we are literally doing this all over again.

To overcome this obvious barrier, the state argued that the Supreme Court upheld dial-a-porn restrictions in Sable. This exact argument failed in Reno v. ACLU, so why not try it again? Hello Rule 11.

The court summarizes:

The law sweeps far beyond obscene material and includes all content offensive to minors, while failing to exempt material that has cultural, scientific, or educational value to adults only. At the same time, the law allows other websites to show and display explicit material, as long as they have two-thirds non-obscene content. The result is that H.B. 1181’s age verification is not narrowly tailored and fails strict scrutiny.

Least Restrictive Means–Risks of Age Authentication

Ashcroft v. ACLU held that client-side filtering was less restrictive than server-side controls. The state argued that the factual predicates to that case are dated. The court agrees about that… “But as determined by the facts on the record and presented at the hearing, age verification laws remain overly restrictive. Despite changes to the internet in the past two decades, the Court comes to the same conclusion regarding the efficacy and intrusiveness of age verification as the ACLU courts did in the early 2000s.” 💥

The court credits the privacy risks of age authentication: “adults must affirmatively identify themselves before accessing controversial material, chilling them from accessing that speech. Whatever changes have been made to the internet since 2004, these privacy concerns have not gone away, and indeed have amplified.”

The court also credits concerns about sexual privacy. The court notes that Texas has criminalized gay sex, and concerns about future prosecutions will deter users from accessing LGBTQ-focused pornography if they must go through age authentication.

The state argued that the law requires the deletion of authentication data. The court says this “assumes that consumers will (1) know that their data is required to be deleted and (2) trust that companies will actually delete it,” and the court says both premises are “dubious.” The court also notes that intermediaries don’t have the deletion obligation. Thus, “it is the deterrence that creates the injury, not the actual retention.”

Plus, “the First Amendment injury is exacerbated by the risk of inadvertent disclosures, leaks, or hacks…The First Amendment injury does not just occur if the Texas or Louisiana DMV (or a third-party site) is breached. Rather, the injury occurs because individuals know the information is at risk. Private information, including online sexual activity, can be particularly valuable because users may be more willing to pay to keep that information private, compared to other identifying information….It is the threat of a leak that causes the First Amendment injury, regardless of whether a leak ends up occurring.”

This is trenchant criticism of all of the vendors who keep claiming–almost certainly falsely–that they have privacy-protected age authentication solutions. If users are deterred by the potential risks, it doesn’t matter what steps the vendors actually take.

I note that these privacy and security concerns are screamingly obvious to anyone who understands Internet privacy, but obviously the state and pro-censorship forces want everyone to ignore them. Kudos to the judge for grasping it so clearly, and credit surely goes to the litigation team for connecting the dots for the judge.

The court summarizes:

In short, while the internet has changed since 2004, privacy concerns have not. Defendant offers its digital verification as more secure and convenient than the ones struck down in COPA and the CDA. This simply does not match the evidence and declarations supported in the parties’ briefing. Users today are more cognizant of privacy concerns, data breaches have become more high-profile, and data related to users’ sexual activity is more likely to be targeted.

In other words, the argument against mandatory online age authentication has gotten STRONGER in the past 25 years, not weaker, despite the technological evolutions. This is a powerful retort to all of the pro-censorship advocates who keep assuming that technology has magically solved all problems with age authentication. It has not–and cannot.

Least Restrictive Alternatives–Other Options

The plaintiffs cited regulatory alternatives to age authentication:

  • IAP blocking until an adult opts out (I hate this idea–ugh)
  • adult controls on children’s devices

The state’s “expert” Tony Allen tried to argue that age authentication is better than these options, but the court says Allen’s assertions aren’t in the actual statutory requirements. ¯\_(ツ)_/¯

The state’s other evidence boomerangs as well:

Defendant’s own study suggests several ways that H.B. 1181 is flawed. As the study points out, pop-up ads, not pornographic websites, are the most common forms of sexual material encountered by adolescents. The study also confirms that blocking pornographic websites and material altogether is extremely difficult to accomplish through “legal bans.” And most crucially, the study highlights the importance of content filtering alongside parental intervention as the most effective method of limiting any harm to minors. Defendant cannot claim that age-verification is narrowly tailored when one of their own key studies suggests that parental-led content-filtering is a more effective alternative.

The court notes the benefits of client-side filtering:

  • It allows for item-specific blocking, rather than sitewide blocking (see the sketch after this list).
  • It lets parents configure age-appropriate blocks, and this spurs healthy parent-child dialogues.
  • “content filtering also comports with the notion that parents, not the government, should make key decisions on how to raise their children.”
  • Unlike geography-based age verification, it still works when minors use Tor or VPNs to circumvent geographic restrictions.
  • It restricts content from foreign websites, “while age verification is only effective as far as the state’s jurisdiction can reach.”
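To illustrate the item-specific point, here’s a minimal sketch (the domains, paths, and blocklist structure are all hypothetical; real parental-control tools operate at the device, browser, or network level and are far more robust) of a client-side filter that blocks an individual section of a site while leaving the rest accessible:

```python
# Hypothetical client-side filter: blocks individual pages/sections,
# not entire sites, regardless of where the site is hosted.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"adult-site.example"}            # sitewide blocks, when warranted
BLOCKED_SECTIONS = {("forum.example", "/r/nsfw")}   # page/section-level blocks

def allowed(url: str) -> bool:
    """Return True if the URL passes the parent-configured filter."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if host in BLOCKED_DOMAINS:
        return False
    for domain, path_prefix in BLOCKED_SECTIONS:
        if host == domain and parsed.path.startswith(path_prefix):
            return False
    return True

print(allowed("https://forum.example/r/nsfw/post123"))  # False: blocked section
print(allowed("https://forum.example/r/science"))       # True: rest of the site stays open
```

Because the filter runs on the minor’s device, it applies to foreign and domestic sites alike, which is the court’s point about age verification reaching only as far as the state’s jurisdiction.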

The state claimed parents don’t use filtering, but the court tartly responds: “Defendant has not pointed to any measures Texas has taken to educate parents about content filtering…. Texas cannot show that content filtering would be ineffective when it has detailed no efforts to promote its use.” (I’ll note that there may be other deficiencies in the Texas educational system LOL). The court also notes that the state could impose penalties on parents who don’t use filtering, but I’m pretty sure that’s not correct–this conflicts with the claim the court just made “that parents, not the government, should make key decisions on how to raise their children.”

The court summarizes: “changes to the internet since 2003 have made age verification more—not less—cumbersome than alternatives. Parental controls are commonplace on devices….even as Defendant’s technical expert noted at the hearing, content filtering is designed for parents and caretakers to be easy to use in a family. The technical knowledge required to implement content-filtering is quite basic, and usually requires only a few steps.”

The court concludes: “The complete failure of the legislature to consider less-restrictive alternatives is fatal at the preliminary injunction stage….it is clear that age verification is considerably more intrusive while less effective than other alternatives.”

Compelled Speech

With respect to the mandatory consumer disclosures, the court has no problem classifying them as compelled speech.

The court says the disclosures “are content-based, regardless of whether they regulate commercial activity.” The court notes that the disclosures must be shown to existing customers, so the disclosures reach beyond proposed commercial transactions, and the court thinks that the state couldn’t impose similar restrictions on newspapers’ landing pages. (I agree, but I would have loved to see the court expand the online/offline comparisons). The court summarizes: “To ignore the content-based nature of the regulation overall would be to allow the government to regulate disfavored speech with less scrutiny, so long as the government only targets the commercial aspects of that speech.”

(I must confess that this part of the opinion confused me, but I blame the anarchistic nature of the law of compelled speech).

The mandatory disclosures don’t survive strict scrutiny. The court again notes that the disclosures only target adults who pass the age verification process, so there’s a fit mismatch with the justification that the disclosures protect kids. “A law cannot be narrowly tailored to the state’s interest when it targets the group exactly outside of the government’s stated interest.”

The court also calls out the state for not showing how the disclosures actually solve any problem. The disclosures are written at a level above most kids’ reading levels, and the specified words “have not been shown to reduce or deter minors’ access to pornography.”

The court separately says the mandatory disclosures don’t survive Central Hudson intermediate scrutiny if that applied. For example, “the disclosures state scientific findings as a matter of fact, when in reality, they range from heavily contested to unsupported by the evidence.” Also, the “mere fact that non-obscene pornography greatly offends some adults does not justify its restriction to all adults.” The court concludes: “if the interest is in protecting children, then it may arguably be substantial, but it is advanced indirectly. If the interest is in changing adults’ attitudes on porn and sexuality, then the state cannot claim a valid, substantial interest.”

The court rejects the application of Zauderer. (Remember, Zauderer’s relaxed scrutiny is a boon to censorship-minded regulators). The court says the messages are unduly burdensome, “the warnings themselves are somewhat deceptive” (in a FN, the court adds “Ironically, while Zauderer allowed the government to regulate deceptive speech, here, it is the government’s message that is potentially deceptive”), “Texas’s health disclosures are either inaccurate or contested by existing medical research” (in a FN, the court adds “many of [the state’s] claims are entirely without support”), and the messages are “deeply controversial,” especially the “deep controversy regarding the benefits and drawbacks of consumption of pornography and other sexual materials.” In other words, the compelled disclosures lie to consumers, which proves they won’t help consumers.

(For more on the inapplicability of Zauderer to mandatory online disclosures by publishers, see this article).

Section 230

The court says some relatively novel things about Section 230 and foreign websites: “foreign website Plaintiffs may claim the protection of Section 230 when failing to do so would subject them to imminent liability for speech that occurs in the United States. Because the foreign website Plaintiffs host content provided by other parties, they receive protection under Section 230.” Although some plaintiffs create their own content, other plaintiffs who publish third-party content are entitled to an injunction based on Section 230.

(For more on Section 230’s extraterritorial reach, see this article by Anupam Chander).

(The court’s analysis of Section 230 is accurate, but its organization and presentation of the Section 230 issues were suboptimal. That makes the ruling vulnerable on appeal).

Irreparable Harm

The court identifies some of the irreparable harms created by the law:

  • non-recoverable compliance costs, including age verification costs of $40k for 100k visits (roughly $0.40 per visit).
  • violation of First Amendment rights. “A party cannot speak freely when they must first verify the age of each audience member, and this has a particular chilling effect when the identity of audience members is potentially stored by third parties or the government.”
  • association with an unwanted compelled message, including loss of goodwill.
  • non-recoverable litigation costs for claims preempted by Section 230.

Note About the Judge

The judge is David Alan Ezra, who is a district court judge in Hawaii but has been designated a judge in the Western District of Texas to help with the workload. Judge Ezra is a Reagan appointee. I imagine some Texans would happily buy him a one-way ticket back to Hawaii.

Conclusion

There would be more to celebrate in these two rulings if we weren’t reduxing the legislative and judicial battles already fought and won 25 years ago. Plus, there are dozens of other pending and forthcoming cases where courts could reach different results.

Further, neither of these district court rulings will be the final word. As the Texas court expressly noted, the state made numerous arguments hoping the Supreme Court will reverse its existing precedent. The Arkansas case will be appealed to the 8th Circuit; the Texas case will be appealed to the 5th Circuit (where the rule of law goes to die); and regardless of who wins on appeal, both will be appealed to the US Supreme Court. So we are years away from knowing if the latest spate of censorship has mortally wounded the Internet.

Even so, today we can celebrate the good news that, as our founders contemplated, judges will keep cool heads when legislators do not.