Courts Enjoin Internet Censorship Laws in Louisiana and Arkansas
[I have a mondo draft roundup blog post, coming soon, covering a lot of segregate-and-suppress rulings. For now, I’ve prioritized coverage of these two rulings due to their importance. I’ll discuss the Louisiana law first, then the Arkansas law. Warning: this post is 5k words 😴.]
* * *
The first case involves Louisiana’s Secure Online Child Interaction and Age Limitation Act ("the Act"). La. S.B. 162, Act No. 456 (codified at La. R.S. §§ 51:1751–1759). The bill summarizes its scope:
"[s]ocial media compan[ies]" providing "[s]ocial media platform[s]" must impose age-verification, parental consent and parental controls, a prohibition on certain data-collection and advertising to minors, and restrictions on direct-messaging between adults and minors whose accounts are not "already connected . . . on the [social media] service."
In other words, a typical segregate-and-suppress law with a parental consent/control kicker.
In an excellent 94-page opinion, the court rules for NetChoice on all major issues. The court summarizes its conclusions:
(1) NetChoice has standing to challenge the Act on behalf of its members and their users. (2) NetChoice is entitled to summary judgment on the basis of its First Amendment as-applied challenge. And (3) NetChoice has shown that it is entitled to permanent injunctive relief.
This opinion is so long and makes so many good points that I’m going to quote the best parts with occasional light commentary.
As-Applied Challenge
the Act implicates the First Amendment in two distinct but related ways: (1) The Act burdens covered NetChoice members’ ability to publish speech on their respective platforms – and to publish such speech to minor users specifically. And (2) the Act burdens prospective and current users’ ability to access and to engage in speech on covered platforms.
__
by its design, the Act mostly burdens access to protected speech….Because the Act’s coverage definition is content-based, strict scrutiny applies to all provisions….given either standard – strict or intermediate scrutiny – the Act fails.
__
most of the Act’s restrictions are "all or nothing" proposals. For example, a prospective adult or minor user of X (Twitter) must verify her age, at which point she will gain access to more or less the entire platform, including its potentially harmful (but not necessarily unprotected) content. But if she balks at the age-verification process, then she will be barred from the platform, including its many forms of fully protected speech and its many opportunities to engage therein. [The court cites an example of how AG Murrill’s social media pages would become off-limits in that situation.] Likewise, if a prospective minor user obtains one parent’s consent to create a Reddit account, then he will gain access to forums where users freely disseminate potentially harmful content. If he does not obtain his parent’s consent, however, then he will be barred from the entire platform. He will not be able to seek homework help from a forum dedicated to mathematics. He will not be able to learn new painting techniques from a forum dedicated to art. He will not be able to share his most recent song – or solicit feedback – from a forum dedicated to making music.
Legislatures need to stop treating social media as if it’s categorically bad. It has many pro-social uses, including for minors. The legislatures freely throw those benefits, and the users who derive those benefits, into the trash.
__
the Act is at once under-inclusive and over-inclusive. It is under-inclusive in that minors (1) can encounter the same potentially harmful content on unregulated websites and (2) can gain more or less unrestricted access to covered platforms upon satisfying the one-time age-verification and parental-consent requirements. It is over-inclusive in that it burdens a large amount of protected speech – for minors and adults alike. And it is seemingly largely redundant of existing parental controls, meaning that there is a less-restrictive alternative: Louisiana can encourage the use of available tools….The degree of the Act’s over- and under-inclusivity is such that it fails intermediate scrutiny as well.
The court addressed strict and intermediate scrutiny simultaneously. This makes the opinion more appeal-proof (except in the Fifth Circuit, which doesn’t understand or apply US law).
__
the age-verification and parental-consent requirements would nonetheless trigger First Amendment scrutiny because of their indirect but substantial burdens to (accessing and engaging in) speech. …what Louisianaâs law plainly targets, if indirectly, is access to speech on covered social media platforms, not access to accounts themselves
This is a good example of how the FSC v. Paxton decision didn’t bless all age authentication mandates (but there’s a non-trivial chance the Supreme Court will give that categorical green light in the next case it hears).
__
Defendants’ comparison to similar restrictions for "alcohol, consumable hemp products, lottery tickets, employment, handguns, fireworks, body piercings, or tattoos" also falls flat. Notably, most of these items do not qualify as speech. And even the tattoo analogy is unpersuasive – at a minimum, because of the clear difference in tailoring. That is, the Act’s age-verification and parental-consent requirements encompass far more protected speech than similar restrictions for tattoos. To complete Defendants’ analogy, the Act would be like requiring all people to verify their ages – and all minors to obtain parental consent – before getting tattoos and before looking at other people’s tattoos.
The Court prefers the analogy of a supermarket or a strip mall. A supermarket sells alcohol, tobacco, and lottery tickets, but it mostly sells other items (e.g., food, cleaning supplies, hygiene products). Similarly, a strip mall may include a liquor store, a store that sells sporting goods (e.g., firearms), etc. But prospective customers do not need to verify their respective ages in order to enter a supermarket or gain access to a strip mall. Nor do minor prospective customers need parental consent to enter or gain access to such places.
This reiterates the discussion in my Segregate-and-Suppress paper and my amicus brief in the FSC v. Paxton case.
__
The court expressly distinguishes FSC v. Paxton because this statute isn’t limited to porn:
In upholding that age-verification requirement, however, the Court recognized repeatedly that pornography is unprotected speech for minors and that, as a result, States "may prevent children from accessing [it]."
Also, the court notes that FSC v. Paxton would still require intermediate scrutiny, not rational basis scrutiny.
__
The Act’s age-verification and parental-consent requirements fail strict and intermediate scrutiny. Even if the Court accepts that Defendants have a compelling interest "in protecting the physical and psychological well-being of minors," Defendants have not established a causal relationship between social media use and health harms to minors…
even if Defendants had established a causal relationship between social media use and health harms to minors, manifold tailoring issues would nonetheless compel the Court to find that the Act’s age-verification and parental-consent requirements fail any form of heightened scrutiny. Most notably, the age-verification and parental-consent requirements are both over- and under-inclusive. As NetChoice has noted, parents already have several ways to control their children’s access to – and to monitor their children’s use of – social media platforms. And unlike the Texas law at issue in Free Speech Coalition, the Act here burdens an enormous amount of protected speech – "substantially more . . . than is necessary to further [Louisiana’s] interest." At the same time, the Act does not regulate identical speech on smaller social media platforms (i.e., platforms with fewer than five million account holders) and websites with excepted "predominant or exclusive function[s]."
This is another court that didn’t accept the correlation = causation argument.
__
The upshot is that, without restriction, a 13-year-old can access, engage with, and proliferate potentially harmful content on a social media platform with 4,999,999 account holders or any website – regardless of user-ship – where social interaction is a major function but not the predominant function. But unless and until that same 13-year-old verifies his age and obtains one parent’s consent, he cannot access, engage with, or proliferate benign – even instructive or useful – speech, such as Facebook posts by politicians and other public persons or entities. If the 13-year-old clears the Act’s threshold barriers to covered platforms like Facebook, however, he not only gains access to such benign speech. He also gains more or less unrestricted access to whatever potentially harmful content exists on such platforms…
__
By regulating what material these companies can publish (i.e., advertising based on prohibited data versus advertising based on age and location), the advertising prohibition functions as a content-based restriction, in which case it triggers strict scrutiny. At a minimum, though, the restriction triggers intermediate scrutiny given its burden on covered NetChoice members’ speech.
__
If Defendants are to carry their burden of justifying the advertising prohibition, they cannot point to the same broad claim of causation [of social media harms]. Rather, they must show a more specific causal relationship – between targeted advertising based on data other than age and location and harms to minors. Nowhere, however, do Defendants posit such a relationship. …Neither Dr. Twenge nor Defendants define "inappropriate advertisements." They do not indicate which platforms publish such advertisements. They do not link such advertisements to the collection or use of data besides age and location. They do not suggest that the advertising prohibition will reduce the number of inappropriate advertisements, much less explain how. They do not discuss less-restrictive alternatives, such as allowing minors to opt out of targeted advertising (or all advertising).
__
The direct-messaging restrictions are over-inclusive in that they burden, if not thwart, many innocuous – even productive – forms of communication. Defendants do not deny, for example, NetChoice’s hypothetical where, on account of the restrictions, a minor cannot "reach[] out to local politicians" in order to engage in political speech. Nor do Defendants seriously dispute that such restrictions are largely redundant of existing parental controls and content moderation. At the same time, the direct-messaging restrictions are under-inclusive in that they do not limit minors’ communication with adults on unregulated websites. For example, under the Act, a minor using a platform with 4,999,999 account holders can receive and reply to unsolicited messages from adult users. The same minor using a platform with 5,000,000 account holders cannot. Lastly, the restrictions are under-inclusive in that minors can skirt them easily (e.g., by connecting with adults on covered platforms prior to exchanging messages with them).
For more on the problems with making size-based regulatory distinctions between Internet services, see this paper.
__
Distinguishing NetChoice v. Bonta:
California’s law underscores several of this Court’s concerns with the tailoring of the Act’s direct-messaging restrictions, namely: (1) The direct-messaging restrictions do not alter adults’ ability to view and to comment on minors’ posts. (2) Adults and minors can communicate without restriction so long as they connect with one another on covered platforms. And (3) adults and minors can communicate without restriction on unregulated websites.
__
even if the Act’s direct-messaging restrictions only trigger intermediate scrutiny, they do not automatically satisfy it. In fact, the same concerns of over- and under-inclusivity apply with full force. Defendants have not demonstrated that the restrictions here are even substantially related to their stated interest.
__
Vagueness
the Court finds that the Act (writ large) is unconstitutionally vague because the scope of the term "[s]ocial media platform" is unclear…The phrase "predominant or exclusive function" is undefined. In the Court’s view, it is also nebulous.
My old maxim: if you can’t define it, you can’t regulate it.
__
If enough people stop using a covered platform for social interaction, is it no longer covered by the Act?…Accepting that, at present, Twitch’s predominant function is social interaction, what happens if tomorrow (or next week, next month, next year) more people use the platform for its interactive gaming function? Will the Act cease to cover Twitch? And again, where is the line between these functions? The Act does not say…
if platforms like Twitch are covered by the Act, then the Act’s under-inclusivity becomes all the more apparent. At a minimum, a user would have to verify his age in order to watch a Twitch stream of his favorite massively multiplayer online videogame. But the same user could play the videogame itself (e.g., make a profile, populate a friends list, chat with friends and strangers) without having to satisfy any of the Act’s requirements
Again, size-based Internet regulations are routinely going to create weird edge cases.
__
Personnel note: Louisiana relied on Jean Twenge and Tony Allen as experts.
Allen has appeared on the blog before in other failed efforts to support segregate-and-suppress laws. Nor is this the first time a court has repeatedly cited Allen’s words in support of NetChoice’s arguments. 🤷‍♂️
As for Dr. Twenge, the court says "NetChoice has pointed out that, inter alia, Dr. Twenge: (1) could not supply consensus definitions of “social media” or “mental health,” (2) conceded that she did not have any “formal criteria” for including studies in – or excluding them from – her expert report, and (3) included only studies showing correlation, not causation."
What’s Next?
This case will be appealed to the Fifth Circuit, which has almost never met a censorship law it didn’t like and has zero interest in the rule of law. As savvy as this ruling is, the odds of it withstanding Fifth Circuit review seem low.
What happens after that is anyone’s guess. In the wake of Moody and FSC v. Paxton, the Supreme Court has ensured it will receive a steady stream of cert petitions related to segregate-and-suppress laws. Rulings like Moody, TikTok v. Garland, and FSC v. Paxton collectively leave no clear indication of what the Court will do next.
Case Citation: NetChoice v. Murrill, 2025 WL 3634112 (M.D. La. Dec. 15, 2025)
* * *
Now onto the Arkansas law:
This case involves Arkansas Act 901 of 2025, which imposes liability on social media platforms for addiction/self-harm, including via a private right of action. The court grants a preliminary injunction except with respect to the private right of action, which, as usual, evades prospective judicial review (see my discussion about the depravity of this constitutional workaround). Standout passages from the opinion:
__
There is no doubt that users engage in constitutionally protected speech on these platforms. Platforms themselves also engage in a range of protected expression on their platforms.
__
there is no doubt that unfettered social media access can harm minor users. And there is evidence to support the conclusion reached by the Arkansas legislature that unfettered social media access can also harm adult users. But under existing First Amendment jurisprudence, the scope of that harm relative to the efforts social media platforms take to prevent it is largely beside the point.
__
the Act regulates pretty much everything a social media platform does.
__
One important difference between this case and Moody is that the challenged laws in Moody restricted the removal or suppression of user speech, while Act 901 does the opposite, restricting the dissemination or promotion of user speech. This raises different First Amendment concerns because forcing a platform to carry user speech it would rather remove burdens the platformâs speech, but not the userâs, while forcing a platform to suppress user speech it would rather promote burdens both the platformâs and the userâs speech.
__
Starting with the likely constitutional applications, Defendants failed to identify any.
🔥
__
Section 1503, however, always premises liability on the content of the speech to which a user is exposed – "online content promoting, or otherwise advancing, self-harm or suicide" – and therefore has no application that is exempt from First Amendment scrutiny…
three of § 1502’s four prohibited results (drugs, eating disorders, and suicide) impose content-based restrictions on platforms’ editorial discretion and on users’ speech. And the fourth (social media addiction), even assuming it is content neutral, is not narrowly tailored to further the State’s interests.
__
A user who searched for cat videos in the past will be recommended cat videos in the future. But YouTube should probably know that recommending videos about nooses to a user who searched "how to tie a noose" in the past could cause that user to attempt suicide. If YouTube treats "how to tie a noose" like any other search for purposes of its recommendation algorithm, it may be subject to liability under Act 901. To avoid that liability, YouTube cannot maintain its preferred recommendation algorithm and must instead incorporate the State’s editorial judgment by treating a "how to tie a noose" search differently than a cat search because of the search’s content…
certain types of content, like the noose example, are known to be associated with certain results. When platforms know that this type of content is posted on their platforms, § 1502 requires them to exercise reasonable care to ensure that their designs, algorithms, and features do not promote it, thereby removing platforms’ ability to, for example, use content-agnostic recommendation algorithms…A law that prohibits platforms from pushing "certain types of content" but allows them to push other types of content is a content-based law.
The court makes an empirical assumption about why people research suicide-related themes and how they respond to that content. I would have liked to see more thoughtful analysis about the range of possibilities in these circumstances, including the possibility that such research reduces ultimate harms.
I highlight the last point: "A law that prohibits platforms from pushing “certain types of content” but allows them to push other types of content is a content-based law." This means that any regulatory attempt to dictate how algorithms work will ALWAYS affect the algorithms’ outputs and thus will ALWAYS be a content-based restriction.
__
Section 1502 imposes liability on platforms for disseminating content in a way that the platform "should have known" would cause any Arkansas user to purchase a controlled substance, develop an eating disorder, or attempt suicide – even if the vast majority of Arkansas users exposed to that content or dissemination method would not respond in those manners. By imposing liability on a platform any time the protected speech by or on that platform results in a specified "harm" to a single Arkansas viewer that the platform should have anticipated, § 1502 impermissibly limits the online posting and promotion of protected speech that is not harmful to most viewers. The State cannot force platforms to censor potentially sensitive, but protected, speech as to all users for the benefit of some subset of particularly susceptible users.
As I discuss in my Segregate-and-Suppress article, minor subpopulations often have conflicting informational needs. Catering to one subpopulation may disadvantage other subpopulations. The court recognizes this tradeoff and cites it against the law’s constitutionality.
__
Defendants have not met their burden to show that § 1503 is the least restrictive means of accomplishing the Stateâs interest in preventing suicide.
__
both operative provisions of Act 901 are substantially underinclusive because they do not restrict "identical content that is communicated through other media." Thus, 13 Reasons Why, a TV show which graphically depicts a suicide, can remain on Netflix, but YouTube could be liable for showing a user clips of the same. Similarly, Netflix can glorify thin bodies and cast exclusively thin actresses, but Instagram may be liable if it promotes those actresses’ posts thereby causing a user to develop an eating disorder. Defendants respond that these non-social media platforms do not have "the same type of emerging research and whistleblowers indicating that [they] are causing the same widespread harm that social media is causing." But Defendants’ own evidence indicates that consumption of other types of digital media also risks the prohibited results….
underinclusivity is especially concerning because Act 901 does not generally bar minors’ access to speech promoting suicide or tending to cause a prohibited result; instead, it limits only their access to forums in which to discuss – rather than merely view – this speech.
The state is engaging in classic Internet exceptionalism–treating the Internet worse than other media for identical content.
__
some designs or features that may make a platform "addictive" do not implicate platforms’ editorial discretion, and prohibiting or restricting these features would impose little, if any, burden on users’ speech. For example, § 1502 might prohibit "infinite scroll." The use of infinite scroll by platforms can hardly be considered expressive, and replacing infinite scroll with click to load more or pagination would still provide users easy access to the same amount of speech as infinite scroll. This change may help reduce compulsive use by providing stopping cues to users, furthering the State’s asserted interest without substantially burdening users’ speech. The same analysis applies to autoplay. Section 1502’s addiction component is likely constitutional as applied to both these features
I strongly disagree with the court’s speculation here. “Infinite scroll” and “auto-play” should absolutely be protected editorial decisions about the best way to present content. It’s like a legislature telling print newspapers how to present their editorial content and ads, such as banning stickers on the front page or forcing the newspaper to artificially categorize articles into topical groupings that don’t reflect the newspaper’s preferred taxonomy. The editorial decision-making is the same in each case–and equally protected by the First Amendment.
__
the more pressing addiction driver, the use of intermittent variable rewards, is achieved through designs, algorithms, and features that present more difficult questions. Social feedback features, e.g., likes, shares, or comments, by their very nature provide randomly timed rewards because they depend upon the independent action of another user, not the platform. Prohibiting this social feedback because it causes addiction in some users burdens the speech of the many other users for whom this feedback is not addictive.
Platforms also use algorithms to order the content they display, and many of these algorithms incidentally or intentionally incorporate intermittent variable rewards.
[in a footnote, the court adds "even the simple reverse chronological algorithm may create an intermittent variable reward schedule for users because, as a result of the independent actions of the accounts a user follows, the posts displayed will naturally vary in how interesting or “rewarding” they are"].
I found the court’s distinction between intermittent variable rewards and auto-play/infinite scrolling unpersuasive. They are equally part of a publisher’s editorial toolkit.
__
In a world where billions of pieces of content are posted on social media every day, social media would be functionally useless as a "vast democratic forum[ ]" if platforms were not allowed to use any algorithm – any system – for selecting and ordering content to display to users. If a social media platform was a library, banning algorithms would be roughly equivalent to requiring books be placed on shelves at random. Such a prohibition would burden users’ (or library patrons’) First Amendment rights by making it significantly more difficult to access speech a user wishes to receive, so a state probably could not constitutionally ban algorithms for the organization of speech (on social media or elsewhere) altogether….
a platform may know that an algorithm is addictive in some habitual users and still wish to give more temperate users access to algorithmically curated content, but § 1502 directs that the platform "shall not use" the algorithm at all.
We need more editorial curation, not less.
__
I didn’t understand this footnote:
It is possible that some algorithms – especially purely or primarily time-tracking engagement-based algorithms like TikTok’s – could be prohibited with minimal burden on the more temperate user, who could instead access the same content of interest by following specific users or topics. Some algorithmic features, like mixing in content the algorithm identifies as less interesting to maintain the intermittent variable reward schedule, could also be prohibited without burdening users’ access to speech, although such prohibitions may burden platforms’ speech to the extent the less interesting content reflects platforms’ editorial preferences. And requiring that users opt in to addictive features, rather than forbidding them altogether, would also be less burdensome on users’ speech
It’s a no from me on all of these alternatives. This all sounds well within the scope of a publisher’s prerogative. In particular, forcing users to opt into a feature essentially dictates the design of the default feature, which I don’t think legislatures can do when it comes to constitutionally protected publications.
__
the problem with § 1502 is not so much that its prohibited results are vague, but that it fails to specify a standard of conduct to which platforms can conform and its violation entirely depends upon the sensitivities of some unspecified user and a judge or jury’s determination about what the platform "should have known" about those sensitivities. Such a law is unconstitutionally vague
I feel like this is true of every segregate-and-suppress law. Essentially, if one sensitive user is in the audience, that user’s presence vetoes the tool for everyone, even if vastly more people benefit from the tool. There are no Pareto-optimal options here, only tradeoffs, and the government forcing publishers to make a different tradeoff choice is constitutionally impermissible.
__
The state argued that § 1502 imposes an actual knowledge standard. This is obviously wrong; § 1502 specifies a constructive knowledge standard. The court says:
In the sensitive First Amendment context, objective [negligence] standards like this one do not tend to decrease the threat posed by vague laws….There is no narrowing construction the Court can give to § 1502 that would be consistent with its text and would render it constitutional – Defendants don’t even bother to suggest one
In contrast, § 1503(a) requires willfulness+, so the court says it’s not vulnerable to a void-for-vagueness challenge.
__
The court rejects Anderson v. TikTok as precedent: “Immunizing platforms from being treated as publishers only when they are not acting like publishers would nullify Section 230 and contradict its plain text.”
__
NetChoice is not likely to succeed on its facial preemption challenge because some applications of Act 901 are consistent with Section 230. In some potential applications, e.g., designs that make it difficult to delete one’s account, liability is not premised on a platform’s editorial decisions about third-party content at all. In others, e.g., appearance-altering filters, Snapchat streaks, the platform’s own posts, the platform itself is acting as an information content provider, and Section 230 does not immunize information content providers from liability for their own content.
__
This case will be appealed to the Eighth Circuit, rather than the Fifth Circuit. That’s nominally good news because the Fifth Circuit is so terrible, but I have no idea what to expect on appeal.
Case Citation: NetChoice v. Griffin, 2025 WL 3634088 (W.D. Ark. Dec. 15, 2025)
* * *
Blog Posts on Segregate-and-Suppress Obligations
- Challenge to Maryland’s "Kid Code" Survives Motion to Dismiss–NetChoice v. Brown
- My Testimony Against Mandatory Online Age Authentication
- Read the Published Version of My Paper Against Mandatory Online Age Authentication
- Prof. Goldman’s Statement on the Supreme Court’s Demolition of the Internet in Free Speech Coalition v. Paxton
- Court Permanently Enjoins Ohio’s Segregate-and-Suppress/Parental Consent Law–NetChoice v. Yost
- Arkansas’ Social Media Safety Act Permanently Enjoined–NetChoice v. Griffin
- Why I Emphatically Oppose Online Age Verification Mandates
- California’s Age-Appropriate Design Code (AADC) Is Completely Unconstitutional (Multiple Ways)–NetChoice v. Bonta
- Another Conflict Between Privacy Laws and Age Authentication–Murphy v. Confirm ID
- Recapping Three Social Media Addiction Opinions from Fall (Catch-Up Post)
- District Court Blocks More of Texas’ Segregate-and-Suppress Law (HB 18)–SEAT v. Paxton
- Comments on the Free Speech Coalition v. Paxton SCOTUS Oral Arguments on Mandatory Online Age "Verification"
- California’s "Protecting Our Kids from Social Media Addiction Act" Is Partially Unconstitutional…But Other Parts Are Green-Lighted–NetChoice v. Bonta
- Section 230 Defeats Underage User’s Lawsuit Against Grindr–Doll v. Pelphrey
- Five Decisions Illustrate How Section 230 Is Fading Fast
- Internet Law Professors Submit a SCOTUS Amicus Brief on Online Age Authentication–Free Speech Coalition v. Paxton
- Court Enjoins the Utah "Minor Protection in Social Media Act"–NetChoice v. Reyes
- Another Texas Online Censorship Law Partially Enjoined–CCIA v. Paxton
- When It Comes to Section 230, the Ninth Circuit is a Chaos Agent–Estate of Bride v. YOLO
- Court Dismisses School Districts’ Lawsuits Over Social Media "Addiction"–In re Social Media Cases
- Ninth Circuit Strikes Down Key Part of the CA Age-Appropriate Design Code (the Rest is TBD)–NetChoice v. Bonta
- Mississippi’s Age-Authentication Law Declared Unconstitutional–NetChoice v. Fitch
- Indiana’s Anti-Online Porn Law "Is Not Close" to Constitutional–Free Speech Coalition v. Rokita
- Fifth Circuit Once Again Disregards Supreme Court Precedent and Mangles Section 230–Free Speech Coalition v. Paxton
- Snapchat Isn’t Liable for Offline Sexual Abuse–VV v. Meta
- 2023 Quick Links: Censorship
- Court Enjoins Ohio’s Law Requiring Parental Approval for Children’s Social Media Accounts–NetChoice v. Yost
- Many Fifth Circuit Judges Hope to Eviscerate Section 230–Doe v. Snap
- Louisiana’s Age Authentication Mandate Avoids Constitutional Scrutiny Using a Legislative Drafting Trick–Free Speech Coalition v. LeBlanc
- Section 230 Once Again Applies to Claims Over Offline Sexual Abuse–Doe v. Grindr
- Comments on the Ruling Declaring California’s Age-Appropriate Design Code (AADC) Unconstitutional–NetChoice v. Bonta
- Two Separate Courts Reiterate That Online Age Authentication Mandates Are Unconstitutional
- Minnesota’s Attempt to Copy California’s Constitutionally Defective Age Appropriate Design Code is an Utter Fail (Guest Blog Post)
- Do Mandatory Age Verification Laws Conflict with Biometric Privacy Laws?–Kuklinski v. Binance
- Why I Think California’s Age-Appropriate Design Code (AADC) Is Unconstitutional
- An Interview Regarding AB 2273/the California Age-Appropriate Design Code (AADC)
- Op-Ed: The Plan to Blow Up the Internet, Ostensibly to Protect Kids Online (Regarding AB 2273)
- A Short Explainer of Why California’s Social Media Addiction Bill (AB 2408) Is Terrible
- A Short Explainer of How California’s Age-Appropriate Design Code Bill (AB2273) Would Break the Internet
- Is the California Legislature Addicted to Performative Election-Year Stunts That Threaten the Internet? (Comments on AB2408)
- Omegle Denied Section 230 Dismissal–AM v. Omegle
- Snapchat Isn’t Liable for a Teacher’s Sexual Predation–Doe v. Snap
- Will California Eliminate Anonymous Web Browsing? (Comments on CA AB 2273, The Age-Appropriate Design Code Act)
- Minnesota Wants to Ban Under-18s From User-Generated Content Services
- California’s Latest Effort To Keep Some Ads From Reaching Kids Is Misguided And Unconstitutional (Forbes Cross-Post)
- Backpage Gets Important 47 USC 230 Win Against Washington Law Trying to Combat Online Prostitution Ads (Forbes Cross-Post & More)
- Backpage Gets TRO Against Washington Law Attempting to Bypass Section 230–Backpage v. McKenna
- MySpace Wins Another 47 USC 230 Case Over Sexual Assaults of Users–Doe II v. MySpace
- MySpace Gets 230 Win in Fifth Circuit–Doe v. MySpace
- Website Isn’t Liable When Users Lie About Their Ages–Doe v. SexSearch
