Courts Enjoin Internet Censorship Laws in Louisiana and Arkansas
[I have a mondo draft roundup blog post, coming soon, covering a lot of segregate-and-suppress rulings. For now, I’ve prioritized coverage of these two rulings due to their importance. I’ll discuss the Louisiana law first, then the Arkansas law. Warning: this post is 5k words 😴.]
* * *

“[s]ocial media compan[ies]” providing “[s]ocial media platform[s]” must impose age-verification, parental consent and parental controls, a prohibition on certain data-collection and advertising to minors, and restrictions on direct-messaging between adults and minors whose accounts are not “already connected . . . on the [social media] service.”
In other words, a typical segregate-and-suppress law with a parental consent/control kicker.
In an excellent 94-page opinion, the court rules for NetChoice on all major issues. The court summarizes its conclusions:
(1) NetChoice has standing to challenge the Act on behalf of its members and their users. (2) NetChoice is entitled to summary judgment on the basis of its First Amendment as-applied challenge. And (3) NetChoice has shown that it is entitled to permanent injunctive relief.
This opinion is so long and makes so many good points that I’m going to quote the best parts with occasional light commentary.
As-Applied Challenge
the Act implicates the First Amendment in two distinct but related ways: (1) The Act burdens covered NetChoice members’ ability to publish speech on their respective platforms—and to publish such speech to minor users specifically. And (2) the Act burdens prospective and current users’ ability to access and to engage in speech on covered platforms.
__
by its design, the Act mostly burdens access to protected speech….Because the Act’s coverage definition is content-based, strict scrutiny applies to all provisions….given either standard—strict or intermediate scrutiny—the Act fails.
__
most of the Act’s restrictions are “all or nothing” proposals. For example, a prospective adult or minor user of X (Twitter) must verify her age, at which point she will gain access to more or less the entire platform, including its potentially harmful (but not necessarily unprotected) content. But if she balks at the age-verification process, then she will be barred from the platform, including its many forms of fully protected speech and its many opportunities to engage therein. [The court cites an example of how AG Murrill’s social media pages would become off-limits in that situation.] Likewise, if a prospective minor user obtains one parent’s consent to create a Reddit account, then he will gain access to forums where users freely disseminate potentially harmful content. If he does not obtain his parent’s consent, however, then he will be barred from the entire platform. He will not be able to seek homework help from a forum dedicated to mathematics. He will not be able to learn new painting techniques from a forum dedicated to art. He will not be able to share his most recent song—or solicit feedback—from a forum dedicated to making music.
Legislatures need to stop treating social media as if it’s categorically bad. It has many pro-social uses, including for minors. The legislatures freely throw those benefits, and the users who derive those benefits, into the trash.
__
the Act is at once under-inclusive and over-inclusive. It is under-inclusive in that minors (1) can encounter the same potentially harmful content on unregulated websites and (2) can gain more or less unrestricted access to covered platforms upon satisfying the one-time age-verification and parental-consent requirements. It is over-inclusive in that it burdens a large amount of protected speech—for minors and adults alike. And it is seemingly largely redundant of existing parental controls, meaning that there is a less-restrictive alternative: Louisiana can encourage the use of available tools….The degree of the Act’s over- and under-inclusivity is such that it fails intermediate scrutiny as well.
The court addressed strict and intermediate scrutiny simultaneously. This makes the opinion more appeal-proof (except in the Fifth Circuit, which doesn’t understand or apply US law).
__
the age-verification and parental-consent requirements would nonetheless trigger First Amendment scrutiny because of their indirect but substantial burdens to (accessing and engaging in) speech. …what Louisiana’s law plainly targets, if indirectly, is access to speech on covered social media platforms, not access to accounts themselves
This is a good example of how the FSC v. Paxton decision didn’t bless all age authentication mandates (but there’s a non-trivial chance the Supreme Court will give that categorical green light in the next case it hears).
__
Defendants’ comparison to similar restrictions for “alcohol, consumable hemp products, lottery tickets, employment, handguns, fireworks, body piercings, or tattoos” also falls flat. Notably, most of these items do not qualify as speech. And even the tattoo analogy is unpersuasive—at a minimum, because of the clear difference in tailoring. That is, the Act’s age-verification and parental-consent requirements encompass far more protected speech than similar restrictions for tattoos. To complete Defendants’ analogy, the Act would be like requiring all people to verify their ages—and all minors to obtain parental consent—before getting tattoos and before looking at other people’s tattoos.
The Court prefers the analogy of a supermarket or a strip mall. A supermarket sells alcohol, tobacco, and lottery tickets, but it mostly sells other items (e.g., food, cleaning supplies, hygiene products). Similarly, a strip mall may include a liquor store, a store that sells sporting goods (e.g., firearms), etc. But prospective customers do not need to verify their respective ages in order to enter a supermarket or gain access to a strip mall. Nor do minor prospective customers need parental consent to enter or gain access to such places.
This reiterates the discussion in my Segregate-and-Suppress paper and my amicus brief in the FSC v. Paxton case.
__
The court expressly distinguishes FSC v. Paxton because this statute isn’t limited to porn:
In upholding that age-verification requirement, however, the Court recognized repeatedly that pornography is unprotected speech for minors and that, as a result, States “may prevent children from accessing [it].”
Also, the court notes that FSC v. Paxton would still require intermediate scrutiny, not rational basis scrutiny.
__
The Act’s age-verification and parental-consent requirements fail strict and intermediate scrutiny. Even if the Court accepts that Defendants have a compelling interest “in protecting the physical and psychological well-being of minors,” Defendants have not established a causal relationship between social media use and health harms to minors…
even if Defendants had established a causal relationship between social media use and health harms to minors, manifold tailoring issues would nonetheless compel the Court to find that the Act’s age-verification and parental-consent requirements fail any form of heightened scrutiny. Most notably, the age-verification and parental-consent requirements are both over- and under-inclusive. As NetChoice has noted, parents already have several ways to control their children’s access to—and to monitor their children’s use of—social media platforms. And unlike the Texas law at issue in Free Speech Coalition, the Act here burdens an enormous amount of protected speech—“substantially more. . . than is necessary to further [Louisiana’s] interest.” At the same time, the Act does not regulate identical speech on smaller social media platforms (i.e., platforms with fewer than five million account holders) and websites with excepted “predominant or exclusive function[s].”
This is another court that didn’t accept the correlation = causation argument.
__
The upshot is that, without restriction, a 13-year-old can access, engage with, and proliferate potentially harmful content on a social media platform with 4,999,999 account holders or any website—regardless of user-ship—where social interaction is a major function but not the predominant function. But unless and until that same 13-year-old verifies his age and obtains one parent’s consent, he cannot access, engage with, or proliferate benign—even instructive or useful—speech, such as Facebook posts by politicians and other public persons or entities. If the 13-year-old clears the Act’s threshold barriers to covered platforms like Facebook, however, he not only gains access to such benign speech. He also gains more or less unrestricted access to whatever potentially harmful content exists on such platforms…
__
By regulating what material these companies can publish (i.e., advertising based on prohibited data versus advertising based on age and location), the advertising prohibition functions as a content-based restriction, in which case it triggers strict scrutiny. At a minimum, though, the restriction triggers intermediate scrutiny given its burden on covered NetChoice members’ speech.
__
If Defendants are to carry their burden of justifying the advertising prohibition, they cannot point to the same broad claim of causation [of social media harms]. Rather, they must show a more specific causal relationship—between targeted advertising based on data other than age and location and harms to minors. Nowhere, however, do Defendants posit such a relationship. …Neither Dr. Twenge nor Defendants define “inappropriate advertisements.” They do not indicate which platforms publish such advertisements. They do not link such advertisements to the collection or use of data besides age and location. They do not suggest that the advertising prohibition will reduce the number of inappropriate advertisements, much less explain how. They do not discuss less-restrictive alternatives, such as allowing minors to opt out of targeted advertising (or all advertising).
__
The direct-messaging restrictions are over-inclusive in that they burden, if not thwart, many innocuous—even productive—forms of communication. Defendants do not deny, for example, NetChoice’s hypothetical where, on account of the restrictions, a minor cannot “reach[] out to local politicians” in order to engage in political speech. Nor do Defendants seriously dispute that such restrictions are largely redundant of existing parental controls and content moderation. At the same time, the direct-messaging restrictions are under-inclusive in that they do not limit minors’ communication with adults on unregulated websites. For example, under the Act, a minor using a platform with 4,999,999 account holders can receive and reply to unsolicited messages from adult users. The same minor using a platform with 5,000,000 account holders cannot. Lastly, the restrictions are under-inclusive in that minors can skirt them easily (e.g., by connecting with adults on covered platforms prior to exchanging messages with them).
For more on the problems with making size-based regulatory distinctions between Internet services, see this paper.
__
Distinguishing NetChoice v. Bonta:
California’s law underscores several of this Court’s concerns with the tailoring of the Act’s direct-messaging restrictions, namely: (1) The direct-messaging restrictions do not alter adults’ ability to view and to comment on minors’ posts. (2) Adults and minors can communicate without restriction so long as they connect with one another on covered platforms. And (3) adults and minors can communicate without restriction on unregulated websites.
__
even if the Act’s direct-messaging restrictions only trigger intermediate scrutiny, they do not automatically satisfy it. In fact, the same concerns of over- and under-inclusivity apply with full force. Defendants have not demonstrated that the restrictions here are even substantially related to their stated interest.
__
Vagueness
the Court finds that the Act (writ large) is unconstitutionally vague because the scope of the term “[s]ocial media platform” is unclear…The phrase “predominant or exclusive function” is undefined. In the Court’s view, it is also nebulous.
My old maxim: if you can’t define it, you can’t regulate it.
__
If enough people stop using a covered platform for social interaction, is it no longer covered by the Act?…Accepting that, at present, Twitch’s predominant function is social interaction, what happens if tomorrow (or next week, next month, next year) more people use the platform for its interactive gaming function? Will the Act cease to cover Twitch? And again, where is the line between these functions? The Act does not say…
if platforms like Twitch are covered by the Act, then the Act’s under-inclusivity becomes all the more apparent. At a minimum, a user would have to verify his age in order to watch a Twitch stream of his favorite massively multiplayer online videogame. But the same user could play the videogame itself (e.g., make a profile, populate a friends list, chat with friends and strangers) without having to satisfy any of the Act’s requirements
Again, size-based Internet regulations are routinely going to create weird edge cases.
__
Personnel note: Louisiana relied on Jean Twenge and Tony Allen as experts.
Allen has appeared on the blog before in other failed efforts to support segregate-and-suppress laws. This is also not the first time a court has repeatedly cited Allen’s words in support of NetChoice’s arguments. 🤷‍♂️
As for Dr. Twenge, the court says “NetChoice has pointed out that, inter alia, Dr. Twenge: (1) could not supply consensus definitions of ‘social media’ or ‘mental health,’ (2) conceded that she did not have any ‘formal criteria’ for including studies in—or excluding them from—her expert report, and (3) included only studies showing correlation, not causation.”
What’s Next?
This case will be appealed to the Fifth Circuit, which has almost never met a censorship law it didn’t like and has zero interest in the rule of law. As savvy as this ruling is, the odds of it withstanding Fifth Circuit review seem low.
What happens after that is anyone’s guess. In the wake of Moody and FSC v. Paxton, the Supreme Court has ensured it will receive a steady stream of cert petitions related to segregate-and-suppress laws. But those rulings, along with TikTok v. Garland, collectively leave no clear indication of what the Court will do next.
Case Citation: NetChoice v. Murrill, 2025 WL 3634112 (M.D. La. Dec. 15, 2025)
* * *
Now onto the Arkansas law:
This case involves Arkansas Act 901 of 2025, which imposes liability on social media platforms for addiction and self-harm, including through a private right of action. The court grants a preliminary injunction except with respect to the private right of action, which, as usual, evades prospective judicial review (see my discussion about the depravity of this constitutional workaround). Standout passages from the opinion:
__
There is no doubt that users engage in constitutionally protected speech on these platforms. Platforms themselves also engage in a range of protected expression on their platforms.
__
there is no doubt that unfettered social media access can harm minor users. And there is evidence to support the conclusion reached by the Arkansas legislature that unfettered social media access can also harm adult users. But under existing First Amendment jurisprudence, the scope of that harm relative to the efforts social media platforms take to prevent it is largely beside the point.
__
the Act regulates pretty much everything a social media platform does.
__
One important difference between this case and Moody is that the challenged laws in Moody restricted the removal or suppression of user speech, while Act 901 does the opposite, restricting the dissemination or promotion of user speech. This raises different First Amendment concerns because forcing a platform to carry user speech it would rather remove burdens the platform’s speech, but not the user’s, while forcing a platform to suppress user speech it would rather promote burdens both the platform’s and the user’s speech.
__
Starting with the likely constitutional applications, Defendants failed to identify any.
💥
__
Section 1503, however, always premises liability on the content of the speech to which a user is exposed—“online content promoting, or otherwise advancing, self-harm or suicide”—and therefore has no application that is exempt from First Amendment scrutiny…
three of § 1502’s four prohibited results (drugs, eating disorders, and suicide) impose content-based restrictions on platforms’ editorial discretion and on users’ speech. And the fourth (social media addiction), even assuming it is content neutral, is not narrowly tailored to further the State’s interests.
__
A user who searched for cat videos in the past will be recommended cat videos in the future. But YouTube should probably know that recommending videos about nooses to a user who searched “how to tie a noose” in the past could cause that user to attempt suicide. If YouTube treats “how to tie a noose” like any other search for purposes of its recommendation algorithm, it may be subject to liability under Act 901. To avoid that liability, YouTube cannot maintain its preferred recommendation algorithm and must instead incorporate the State’s editorial judgment by treating a “how to tie a noose” search differently than a cat search because of the search’s content…
certain types of content, like the noose example, are known to be associated with certain results. When platforms know that this type of content is posted on their platforms, § 1502 requires them to exercise reasonable care to ensure that their designs, algorithms, and features do not promote it, thereby removing platforms’ ability to, for example, use content-agnostic recommendation algorithms…A law that prohibits platforms from pushing “certain types of content” but allows them to push other types of content is a content-based law.
The court makes an empirical assumption about why people research suicide-related themes and how they respond to that content. I would have liked to see more thoughtful analysis about the range of possibilities in these circumstances, including the possibility that such research reduces ultimate harms.
I highlight the last point: “A law that prohibits platforms from pushing ‘certain types of content’ but allows them to push other types of content is a content-based law.” This means that any regulatory attempt to dictate how algorithms work will ALWAYS affect the algorithms’ outputs and thus will ALWAYS be a content-based restriction.
__
Section 1502 imposes liability on platforms for disseminating content in a way that the platform “should have known” would cause any Arkansas user to purchase a controlled substance, develop an eating disorder, or attempt suicide—even if the vast majority of Arkansas users exposed to that content or dissemination method would not respond in those manners. By imposing liability on a platform any time the protected speech by or on that platform results in a specified “harm” to a single Arkansas viewer that the platform should have anticipated, § 1502 impermissibly limits the online posting and promotion of protected speech that is not harmful to most viewers. The State cannot force platforms to censor potentially sensitive, but protected, speech as to all users for the benefit of some subset of particularly susceptible users.
As I discuss in my Segregate-and-Suppress article, minor subpopulations often have conflicting informational needs. Catering to one subpopulation may disadvantage other subpopulations. The court recognizes this tradeoff and cites it against the law’s constitutionality.
__
Defendants have not met their burden to show that § 1503 is the least restrictive means of accomplishing the State’s interest in preventing suicide.
__
both operative provisions of Act 901 are substantially underinclusive because they do not restrict “identical content that is communicated through other media.” Thus, 13 Reasons Why, a TV show which graphically depicts a suicide, can remain on Netflix, but YouTube could be liable for showing a user clips of the same. Similarly, Netflix can glorify thin bodies and cast exclusively thin actresses, but Instagram may be liable if it promotes those actresses’ posts thereby causing a user to develop an eating disorder. Defendants respond that these non-social media platforms do not have “the same type of emerging research and whistleblowers indicating that [they] are causing the same widespread harm that social media is causing.” But Defendants’ own evidence indicates that consumption of other types of digital media also risks the prohibited results….
underinclusivity is especially concerning because Act 901 does not generally bar minors’ access to speech promoting suicide or tending to cause a prohibited result; instead, it limits only their access to forums in which to discuss—rather than merely view—this speech.
The state is engaging in classic Internet exceptionalism–treating the Internet worse than other media for identical content.
__
some designs or features that may make a platform “addictive” do not implicate platforms’ editorial discretion, and prohibiting or restricting these features would impose little, if any, burden on users’ speech. For example, § 1502 might prohibit “infinite scroll.” The use of infinite scroll by platforms can hardly be considered expressive, and replacing infinite scroll with click to load more or pagination would still provide users easy access to the same amount of speech as infinite scroll. This change may help reduce compulsive use by providing stopping cues to users, furthering the State’s asserted interest without substantially burdening users’ speech. The same analysis applies to autoplay. Section 1502’s addiction component is likely constitutional as applied to both these features
I strongly disagree with the court’s speculation here. “Infinite scroll” and “auto-play” should absolutely be protected editorial decisions about the best way to present content. It’s like a legislature telling print newspapers how to present their editorial content and ads, such as banning stickers on the front page or forcing the newspaper to artificially categorize articles into topical groupings that don’t reflect the newspaper’s preferred taxonomy. The editorial decision-making is the same in each case–and equally protected by the First Amendment.
__
the more pressing addiction driver, the use of intermittent variable rewards, is achieved through designs, algorithms, and features that present more difficult questions. Social feedback features, e.g., likes, shares, or comments, by their very nature provide randomly timed rewards because they depend upon the independent action of another user, not the platform. Prohibiting this social feedback because it causes addiction in some users burdens the speech of the many other users for whom this feedback is not addictive.
Platforms also use algorithms to order the content they display, and many of these algorithms incidentally or intentionally incorporate intermittent variable rewards.
[in a footnote, the court adds “even the simple reverse chronological algorithm may create an intermittent variable reward schedule for users because, as a result of the independent actions of the accounts a user follows, the posts displayed will naturally vary in how interesting or “rewarding” they are”].
I found the court’s distinction between intermittent variable rewards and auto-play/infinite scrolling unpersuasive. They are equally part of a publisher’s editorial toolkit.
__
In a world where billions of pieces of content are posted on social media every day, social media would be functionally useless as a “vast democratic forum[ ]” if platforms were not allowed to use any algorithm—any system—for selecting and ordering content to display to users. If a social media platform was a library, banning algorithms would be roughly equivalent to requiring books be placed on shelves at random. Such a prohibition would burden users’ (or library patrons’) First Amendment rights by making it significantly more difficult to access speech a user wishes to receive, so a state probably could not constitutionally ban algorithms for the organization of speech (on social media or elsewhere) altogether….
a platform may know that an algorithm is addictive in some habitual users and still wish to give more temperate users access to algorithmically curated content, but § 1502 directs that the platform “shall not use” the algorithm at all.
We need more editorial curation, not less.
__
I didn’t understand this footnote:
It is possible that some algorithms—especially purely or primarily time-tracking engagement-based algorithms like TikTok’s—could be prohibited with minimal burden on the more temperate user, who could instead access the same content of interest by following specific users or topic. Some algorithmic features, like mixing in content the algorithm identifies as less interesting to maintain the intermittent variable reward schedule, could also be prohibited without burdening users’ access to speech, although such prohibitions may burden platforms’ speech to the extent the less interesting content reflects platforms’ editorial preferences. And requiring that users opt in to addictive features, rather than forbidding them altogether, would also be less burdensome on users’ speech
It’s a no from me on all of these alternatives. This all sounds well within the scope of a publisher’s prerogative. In particular, forcing users to opt into a feature essentially dictates the design of the default feature, which I don’t think legislatures can do when it comes to constitutionally protected publications.
__
the problem with § 1502 is not so much that its prohibited results are vague, but that it fails to specify a standard of conduct to which platforms can conform and its violation entirely depends upon the sensitivities of some unspecified user and a judge or jury’s determination about what the platform “should have known” about those sensitivities. Such a law is unconstitutionally vague
I feel like this is true with every segregate-and-suppress law. Essentially, if one sensitive user is in the audience, that user’s presence vetoes the tool for everyone, even if there are vastly more people who benefit from the tool. There are no Pareto-optimal options here, only tradeoffs, and the government forcing publishers to make a different tradeoff choice is constitutionally impermissible.
__
The state argued that § 1502 imposes an actual knowledge standard. This is obviously wrong; § 1502 specifies a constructive knowledge standard. The court says:
In the sensitive First Amendment context, objective [negligence] standards like this one do not tend to decrease the threat posed by vague laws….There is no narrowing construction the Court can give to § 1502 that would be consistent with its text and would render it constitutional—Defendants don’t even bother to suggest one
In contrast, § 1503(a) requires willfulness+, so the court says it’s not vulnerable to a void-for-vagueness challenge.
__
The court rejects Anderson v. TikTok as precedent: “Immunizing platforms from being treated as publishers only when they are not acting like publishers would nullify Section 230 and contradict its plain text.”
__
NetChoice is not likely to succeed on its facial preemption challenge because some applications of Act 901 are consistent with Section 230. In some potential applications, e.g., designs that make it difficult to delete one’s account, liability is not premised on a platform’s editorial decisions about third-party content at all. In others, e.g., appearance-altering filters, Snapchat streaks, the platform’s own posts, the platform itself is acting as an information content provider, and Section 230 does not immunize information content providers from liability for their own content.
__
This case will be appealed to the Eighth Circuit, rather than the Fifth Circuit. That’s nominally good news because the Fifth Circuit is so terrible, but I have no idea what to expect on appeal.
Case Citation: NetChoice v. Griffin, 2025 WL 3634088 (W.D. Ark. Dec. 15, 2025)
* * *
Blog Posts on Segregate-and-Suppress Obligations
- Challenge to Maryland’s “Kid Code” Survives Motion to Dismiss–NetChoice v. Brown
- My Testimony Against Mandatory Online Age Authentication
- Read the Published Version of My Paper Against Mandatory Online Age Authentication
- Prof. Goldman’s Statement on the Supreme Court’s Demolition of the Internet in Free Speech Coalition v. Paxton
- Court Permanently Enjoins Ohio’s Segregate-and-Suppress/Parental Consent Law–NetChoice v. Yost
- Arkansas’ Social Media Safety Act Permanently Enjoined—NetChoice v. Griffin
- Why I Emphatically Oppose Online Age Verification Mandates
- California’s Age-Appropriate Design Code (AADC) Is Completely Unconstitutional (Multiple Ways)–NetChoice v. Bonta
- Another Conflict Between Privacy Laws and Age Authentication–Murphy v. Confirm ID
- Recapping Three Social Media Addiction Opinions from Fall (Catch-Up Post)
- District Court Blocks More of Texas’ Segregate-and-Suppress Law (HB 18)–SEAT v. Paxton
- Comments on the Free Speech Coalition v. Paxton SCOTUS Oral Arguments on Mandatory Online Age “Verification”
- California’s “Protecting Our Kids from Social Media Addiction Act” Is Partially Unconstitutional…But Other Parts Are Green-Lighted–NetChoice v. Bonta
- Section 230 Defeats Underage User’s Lawsuit Against Grindr–Doll v. Pelphrey
- Five Decisions Illustrate How Section 230 Is Fading Fast
- Internet Law Professors Submit a SCOTUS Amicus Brief on Online Age Authentication–Free Speech Coalition v. Paxton
- Court Enjoins the Utah “Minor Protection in Social Media Act”–NetChoice v. Reyes
- Another Texas Online Censorship Law Partially Enjoined–CCIA v. Paxton
- When It Comes to Section 230, the Ninth Circuit is a Chaos Agent–Estate of Bride v. YOLO
- Court Dismisses School Districts’ Lawsuits Over Social Media “Addiction”–In re Social Media Cases
- Ninth Circuit Strikes Down Key Part of the CA Age-Appropriate Design Code (the Rest is TBD)–NetChoice v. Bonta
- Mississippi’s Age-Authentication Law Declared Unconstitutional–NetChoice v. Fitch
- Indiana’s Anti-Online Porn Law “Is Not Close” to Constitutional–Free Speech Coalition v. Rokita
- Fifth Circuit Once Again Disregards Supreme Court Precedent and Mangles Section 230–Free Speech Coalition v. Paxton
- Snapchat Isn’t Liable for Offline Sexual Abuse–VV v. Meta
- 2023 Quick Links: Censorship
- Court Enjoins Ohio’s Law Requiring Parental Approval for Children’s Social Media Accounts–NetChoice v. Yost
- Many Fifth Circuit Judges Hope to Eviscerate Section 230–Doe v. Snap
- Louisiana’s Age Authentication Mandate Avoids Constitutional Scrutiny Using a Legislative Drafting Trick–Free Speech Coalition v. LeBlanc
- Section 230 Once Again Applies to Claims Over Offline Sexual Abuse–Doe v. Grindr
- Comments on the Ruling Declaring California’s Age-Appropriate Design Code (AADC) Unconstitutional–NetChoice v. Bonta
- Two Separate Courts Reiterate That Online Age Authentication Mandates Are Unconstitutional
- Minnesota’s Attempt to Copy California’s Constitutionally Defective Age Appropriate Design Code is an Utter Fail (Guest Blog Post)
- Do Mandatory Age Verification Laws Conflict with Biometric Privacy Laws?–Kuklinski v. Binance
- Why I Think California’s Age-Appropriate Design Code (AADC) Is Unconstitutional
- An Interview Regarding AB 2273/the California Age-Appropriate Design Code (AADC)
- Op-Ed: The Plan to Blow Up the Internet, Ostensibly to Protect Kids Online (Regarding AB 2273)
- A Short Explainer of Why California’s Social Media Addiction Bill (AB 2408) Is Terrible
- A Short Explainer of How California’s Age-Appropriate Design Code Bill (AB2273) Would Break the Internet
- Is the California Legislature Addicted to Performative Election-Year Stunts That Threaten the Internet? (Comments on AB2408)
- Omegle Denied Section 230 Dismissal–AM v. Omegle
- Snapchat Isn’t Liable for a Teacher’s Sexual Predation–Doe v. Snap
- Will California Eliminate Anonymous Web Browsing? (Comments on CA AB 2273, The Age-Appropriate Design Code Act)
- Minnesota Wants to Ban Under-18s From User-Generated Content Services
- California’s Latest Effort To Keep Some Ads From Reaching Kids Is Misguided And Unconstitutional (Forbes Cross-Post)
- Backpage Gets Important 47 USC 230 Win Against Washington Law Trying to Combat Online Prostitution Ads (Forbes Cross-Post & More)
- Backpage Gets TRO Against Washington Law Attempting to Bypass Section 230–Backpage v. McKenna
- MySpace Wins Another 47 USC 230 Case Over Sexual Assaults of Users–Doe II v. MySpace
- MySpace Gets 230 Win in Fifth Circuit–Doe v. MySpace
- Website Isn’t Liable When Users Lie About Their Ages–Doe v. SexSearch