Too Many Courts Are Letting States Take Wrecking Balls to the Internet (Roundup)
'Tis the season for Internet censorship. More accurately, Internet censorship is now a four-season sport in state legislatures. There is not a stereotypical red state/blue state divide. Instead, the "divide" is between pro-censorship and anti-censorship legislators. You can count the number of anti-censorship legislators nationally on a single hand.
This asymmetry has unleashed a flood of misguided state Internet censorship laws, many of them in the "segregate-and-suppress" genre. This blog post covers the legal challenges to a dozen state censorship laws.
[My backlog ties directly to my 2-week trip to China in the second half of June. I couldn't blog some rulings that came out during the trip, and subsequent rulings came faster than I could clear my backlog. Censorship is a big business nowadays].
X v. Ellison, 2025 WL 3459641 (D. Minn. Dec. 2, 2025)
This law targets political "deepfakes." The court says X lacks Article III standing to challenge the law:
Attorney General Ellison asserts that X Corp.'s pleadings do not demonstrate that X Corp. itself ever acts with any particular mens rea when deepfakes are disseminated on X….there is no showing in the pleadings that X Corp. has ever received a report of a deepfake through its own technology (whatever that is) or through its "partnerships with third parties" (whoever those are)….the Court cannot conclude as a matter of law that X Corp. has plausibly demonstrated that it disseminates deepfakes knowingly or with reckless disregard…
The pleadings do not demonstrate that X Corp. ever disseminates deepfakes with the specific intent to injure a candidate or influence the result of an election….X Corp. is agnostic about a user's motivation for disseminating the deepfake. And at no point does X Corp. allege or put forward any evidence that it allows deepfakes to continue circulating on X to injure a particular candidate or to influence the result of an election. Indeed, X Corp. alleges that it largely plays an uninvolved, backseat role when it comes to addressing deepfakes, instead ceding content moderation to X users who can offer Community Notes, comments, and replies to clarify potentially deceptive AI-generated media….even if X Corp. is aware of a political deepfake disseminated on X, the Terms of Service make clear that X Corp. does not endorse the intent of the user who posted the deepfake….
nowhere does X Corp. allege or show that it enters into the Terms of Service with X's users for the purpose of disseminating deepfakes. Nothing in the Terms of Service reflects any purpose – either on the part of X Corp. or X's users – to disseminate deepfakes.
Computer & Communications Industry Association v. Uthmeier, 2025 WL 3458571 (11th Cir. Nov. 25, 2025)
This is a challenge to Florida HB3, a segregate-and-suppress law that bans the availability of certain "addictive" social media features to minors. This is a rough ruling for free speech. (I discuss the district court ruling below).
The appeals court summarizes:
We agree with the district court that HB3 likely implicates the First Amendment by singling out protected expressive activities. We also agree that HB3 is likely content neutral because it is not facially content based, nor does it have a content-discriminatory purpose. We conclude, however, that HB3's limited restrictions likely satisfy the intermediate scrutiny test for content-neutral regulations, so the Attorney General has made a strong showing he is likely to succeed on the merits.
With respect to the law's content neutrality:
HB3 is facially content neutral. It restricts social media platforms' ability to contract with children under a certain age if those platforms include certain addictive features. Neither its definition of "social media platform" nor of "addictive features" makes any reference to the type of content involved….HB3 defines social media platforms by reference to a form of expression, not a subject matter.
The court talks about parental control as a less-restrictive alternative:
Florida should allow parents to oversee their children's social media use, the district court opined, rather than stepping in and restricting access itself. The court's lengthy discussion of alternatives already available to parents only highlights Florida's purpose in enacting HB3: it found that parental controls were not working
And what about the law's age authentication mandate? The court says FSC v. Paxton blessed it:
We acknowledge that the age verification requirement imposes some burden on adult speech (thus implicating First Amendment scrutiny in the first place). But the Supreme Court has recently clarified that age verification does not automatically trigger strict scrutiny because it does not constitute a "ban on speech to adults."
When FSC v. Paxton first came out, too many experts initially took the position that the damage wasn't too bad because the opinion purportedly cabined its effects to adult content. Opinions like this expose that fallacy. The court delinks the adult content predicate and unleashes unrestrained censorship.
Judge Rosenbaum dissents, saying HB3 is "plainly unconstitutional on its face." As you can imagine, I think she's right and the majority is wrong.
She writes:
the Act purports to regulate the speech of everyone who uses the covered social media websites. For minors, it acts as a categorical ban on speech (and access to speech) on covered social media platforms. And it forces the platform to demand identifying information from all users, including adults. In doing so, it chills countless users' speech on deeply personal, political, religious, and familial matters – reaching the heart of what the First Amendment was designed to protect in the first place.
My Segregate-and-Suppress paper raises these objections and many others.
She disagrees the law is content-neutral:
The Act directly regulates expression in at least two ways. First, the Act inserts the state of Florida into users' decisions about whether and how to speak online….By preventing a user from accessing a platform, the State restricts the user's access to the unique features of that platform. In doing so, the State interferes with the message, form, and content of speech the user seeks to engage in. When a user can't join a platform, the user can't access speech that may be available on only that platform or communicate with the audience that may be present on only that platform.
Second, the law interferes with the platforms' editorial choices about which users can join the platform and about what content to display to those users….Verification requirements can directly impact the content that the platform hosts.
She finds another angle to reject content neutrality:
the Act determines whether a platform is covered based on what content that platform permits. Generally speaking, if a platform involves public user-to-user speech, then the platform is covered; if it doesn't, it's not. That's a content-based law, triggering strict scrutiny.
This is pernicious, especially for minors:
Florida prohibits minors from holding accounts on all social media platforms that allow user-to-user expression. So minors can't participate in or contribute to discourse in the entire online forum – whether through the "form" of liking or commenting on somebody's post, posting, contributing to a message board, or sending a direct message. That's the effect of the law's prohibition on minors from creating or holding accounts on these websites. Excluding individuals wholesale from a particular forum doesn't merely regulate one "form" of speech; it regulates all forms of speech in that forum…
When it comes to speaking online, the Act effectively prohibits many minors from speaking at all.
Perhaps a contributing factor to the mental and emotional distress experienced by minors today is that the government keeps telling them to STFU…?
NetChoice, LLC v. Bonta, 2025 WL 2600007 (9th Cir. Sept. 9, 2025)
This lawsuit involves California's "Protecting Our Kids from Social Media Addiction Act" (SB 976), a segregate-and-suppress law. The segregation requirement is inchoate because the law deputizes the AG to make rules about how to do age authentication (which hasn't happened yet), so it won't go into effect until 2027. Meanwhile, the law contains a package of suppression requirements, some of which the district court enjoined. But the law's most important suppression requirement is its ban on personalized feeds for minors, which the Ninth Circuit doesn't enjoin. This permits the government to control how publishers publish content and readers consume content.
The court summarizes the statute:
An addictive feed consists of media shared by other users in which that media is shown to a user based on the user's unique information and online activity. This means that, with some exceptions, covered platforms cannot use any information provided by minor users to decide what content to show them…
the more an algorithm implements human editorial directions, the more likely it is to be expressive for First Amendment purposes. An algorithm that promotes a platform's own message to users is likely to be protected speech. [TikTok v. Garland] Such an algorithm, after all, is not unlike traditional media curated by human editors. [Moody]…
On the other hand, an algorithm that "respond[s] solely to how users act online," merely "giving them the content they appear to want," probably is not expressive. [Moody, including Barrett's concurrence]. Personalized algorithms might express a platform's unique message to the world, or they might reflect users' revealed preferences to them.
This opinion is obviously ignorant of media theory. It assumes users' preferences are static and not shaped by the media. It assumes users' preferences are capable of being revealed, without distortion, by a mirror. It also ignores that it is an editorial judgment to implement an algorithm that purports only to reveal a user's undistorted preferences.
The court continues: "California's use of 'social media' platform as statutory shorthand does not render the Act content based, since it applies to websites whether they facilitate social interaction or other forms of content." The law singles out social media publishers for different treatment than other publishers. This is a speaker-based restriction. The court later says that not all speaker-based laws are content-based, but the court should have explained the distinction more fully.
On an as-applied basis, the court subjects the restriction on displaying like counts to strict scrutiny. The like-count display restriction fails under NetChoice v. Bonta (9th Cir. 2024).
On an as-applied basis, the court subjects the requirement that minors' accounts be defaulted to private mode to intermediate scrutiny. "In private mode, minors cannot conform their social media habits to maximize interaction and approval of a worldwide audience. This logically serves the end of protecting minors' mental health by reducing screentime and habit-forming platform usage….California took a relatively nuanced approach."
The challenge to the age authentication requirement isn't ripe (the age authentication method won't come online until 2027 or later).
With respect to minors' access to algorithmic feeds, the court says:
The only speech regulated by this provision is the speech made up of the algorithms themselves. All other content – including third-party content created by other users – remains available to minors. Minors can still search for it or follow its creator, after all.
This is terrible. Among other defects, the court establishes a false dichotomy between hosting and promotion. Both are forms of publishing content, and each is an editorial expression. To say that hosting and promotion are substitutes for each other is asinine.
Thus, the court dodges the facial challenge because it believes that "all we recognize is that some personalized recommendation algorithms may be expressive, while others are not, and that inquiry is fact intensive," so there may be constitutional applications of the law. This is grossly wrong. All algorithmic choices are expressive, and the state has stripped away some of those expressive choices from the editorial toolkit.
The court rejects the overbreadth facial challenge because the law's potential reach is so expansive that it's impossible to tell who is covered (the court asks: "ESPN.com? wsj.com? neopets.com? chess.com? Airbnb? Or any number of thousands of platforms"). In other words, the more poorly drafted and overreaching the law is, the more it's immunized from First Amendment challenges??? The Moody precedent was an unfortunate gift to bad legislators.
This opinion sucked.
Kohls v. Bonta, 2025 WL 2495613 (E.D. Cal. Aug. 29, 2025)
This case is a First Amendment challenge to AB 2839, which regulates political synthetic content a/k/a "deepfakes." The court had previously preliminarily enjoined the law.
The court now makes the injunction permanent because "AB 2839 discriminates based on content, viewpoint, and speaker and targets constitutionally protected speech." The court adds:
While it is true that AB 2839 has constitutional applications to the extent that defamatory or fraudulent speech falls under its umbrella, this is only because its scope is so elastic that it penalizes wholesale categories of speech, sweeping in both protected and unprotected speech. Thus, the statute's potential unconstitutional applications would regularly outweigh its constitutional ones.
Thanks for passing yet another censorship law, California legislature!
Chamber of Commerce v. Lierman, No. 24-1727 (4th Cir. Aug. 15, 2025)
This involves Maryland's digital ad tax ("Maryland thus became the first state in the country to tax the revenues companies produce by advertising on the internet."). The law both imposed a tax on digital ads and prohibited publishers from separately disclosing this tax "by means of a separate fee, surcharge, or line-item." As the court says, "If companies pass on the cost of the tax, they must do so in silence – keeping customers in the dark about why prices have gone up and thereby insulating Maryland from political responsibility."
CCIA v. Uthmeier, 4:24cv438-MW/MAF (N.D. Fla. June 3, 2025)
[This decision got overturned by the 11th Circuit CCIA v. Uthmeier opinion discussed above.]
A Florida segregate-and-suppress law, H.B. 3, "prohibits some social media platforms from allowing youth in the state who are under the age of 14 to create or hold an account on their platforms, and similarly prohibits allowing youth who are 14 or 15 to create or hold an account unless a parent or guardian provides affirmative consent for them to do so." This is one of the many anti-"addiction" segregate-and-suppress laws.
The court summarizes:
Although this Court today finds that Florida's challenged law is likely unconstitutional, it does not doubt that parents and legislators in the state have sincere concerns about the effects that social media use may have on youth, nor does it render parents or the State powerless to address those concerns. For example, this Order leaves in place new provisions of Florida law that require covered social media platforms to terminate any account held by a youth under 16 in the state upon the request of a parent or guardian. Instead, like other district courts around the country, this Court simply recognizes that the First Amendment places stringent requirements on the State to avoid substantially burdening speech unless the State can show that doing so is necessary to achieve its significant interests.
Of specific interest, the court says that the law triggers intermediate scrutiny. The court says the law isn't content-based: "Social speech, or speech generated by users on social media platforms, can touch on any conceivable topic, message, or idea, and accordingly does not seem akin to the kind of category that the Supreme Court discussed in City of Austin."
The court says the law nevertheless fails intermediate scrutiny:
Even assuming the significance of the State's interest in limiting the exposure of youth to websites with "addictive features," the law's restrictions are an extraordinarily blunt instrument for furthering it. As applied to Plaintiffs' members alone, the law likely bans all youth under 14 from holding accounts on, at a minimum, four websites that provide forums for all manner of protected speech: Facebook, Instagram, YouTube, and Snapchat. It also bans 14- and 15-year-olds from holding accounts on those four websites absent a parent's affirmative consent, a requirement that the Supreme Court has clearly explained the First Amendment does not countenance [cite to Brown]
The court cites the Packingham decision for its rejection of categorical bans on social media:
Social media websites, according to the Court, "provide the most powerful mechanisms available to a private citizen to make his or her voice heard." This is perhaps especially true for youth, for whom the available forums for speech are generally more limited than for adults, who tend to have freer rein to decide for themselves where to go and how to spend their time…
Youth are people, not mere people-in-waiting or extensions of their parents. They have their own interests, ideas, and minds. Not only that, but they are citizens in training. The responsibilities and privileges of citizenship are significant, and our constitutional system is better served when its citizens build those muscles over time, beginning when they are young, rather than all at once the day they come of age.
The court also cites the law's parental veto power, which the plaintiffs didn't challenge, as sufficient to protect children's interests. My Segregate-and-Suppress paper explains the many problems with parental consent/veto rights.
NetChoice LLC v. Carr, 1:25-cv-02422-AT (N.D. Ga. June 25, 2025)
This is a pre-FSC v. Paxton ruling, so it may not be good law any more.
The court summarizes Georgia's segregate-and-suppress law:
The Protecting Georgia's Children on Social Media Act of 2024 is undoubtedly aimed at protecting young people from these dangers. In attempting to do so, it also implicates at least three distinct First Amendment interests: (1) it restricts the rights of Georgia's minors to access a vital forum of information and conversation; (2) it chills the rights of all Georgians to engage in anonymous speech online; and (3) it impedes social media platforms' ability to communicate with their users. Those burdens cannot comport with the First Amendment.
The court emphasizes point 2: "a universal age verification requirement for Georgians using social media would potentially all but kill anonymous speech online….requiring users to tie their online views to their identity would undoubtedly chill speech – and, likely, would disproportionately chill speech on the most controversial issues."
Section 230 also is in play:
SB 351 requires that social media platforms prohibit "[t]he display of any advertising in the minor account holder's account based on such minor account holder's personal information." But that requirement "would ultimately [result in] liability 'for decisions relating to the monitoring, screening, and deletion of content from its network – actions quintessentially related to a publisher's role.' . . . In such a case, section-230 immunity would likely attach." Plaintiff is thus likely to succeed on the merits of its Section 230 challenge to the Advertising Provision, which would impose liability on social media platforms for their decisions (or lack thereof) on "whether to publish, withdraw, postpone or alter" specific advertisements to young users
The court concludes:
SB 351 purports to regulate minors' social media use for their own good. But the State has carved out dozens of arbitrary exemptions and, in oral argument, downplayed the rigor of its parental consent requirements. The State certainly has the authority and ability to provide children and parents with education, resources, and tools to understand the detriments of social media and engage with the internet in a healthier and more productive way. But the challenged portions of SB 351 do not do so. Instead, the Act curbs the speech rights of Georgia's youth while imposing an immense, potentially intrusive burden on all Georgians who wish to engage in the most central computerized public fora of the twenty-first century. This cannot comport with the free flow of information the First Amendment protects.
NetChoice LLC v. Fitch, 2025 WL 1709668 (S.D. Miss. June 18, 2025)
This is another pre-FSC ruling. It already reached the Supreme Court's shadow docket, where Justice Kavanaugh wrote: "NetChoice has, in my view, demonstrated that it is likely to succeed on the merits – namely, that enforcement of the Mississippi law would likely violate its members' First Amendment rights under this Court's precedents….under this Court's case law as it currently stands, the Mississippi law is likely unconstitutional." Despite that, even Justice Kavanaugh voted to let the law – a law he thinks is unconstitutional – go into effect. How messed up is that?
Some quotes from the (now-deprecated) opinion:
- "The facial distinctions H.B. 1126 draws based on the message a particular digital service provider conveys, or the more subtle content-based restrictions based upon the speech's function or purpose, render the Act content-based, and therefore subject to strict scrutiny"
- "the Attorney General has not shown that the alternative suggested by NetChoice, the private tools currently available for parents to monitor their children online would be insufficient to secure the State's objective of protecting children"
- "the Act requires all users (both adults and minors) to verify their ages before creating an account to access a broad range of protected speech on a broad range of covered websites. This burdens the First Amendment rights of adults using the websites of Netchoice's covered members"
- "H.B. 1126 similarly requires only one parent or guardian's consent to create an account with a covered member's website, and it does not explain how the parent or guardian relationship is to be verified"
NetChoice v. Bonta, 2025 WL 1918742 (N.D. Cal. July 11, 2025)
The case involves a constitutional challenge to a state law parallel to the INFORM Consumers Act:
Title 1.4D requires online marketplaces to collect and periodically verify certain identification, contact, and banking information from "high-volume third-party sellers," defined as sellers making a minimum number of sales meeting certain statutory criteria that are processed through the online marketplace. If a high-volume third-party seller fails to provide information to the online marketplace as required by Title 1.4D, the online marketplace must suspend any future sales activity of the seller pending the seller's compliance.
The court says:
A straightforward comparison between the INFORM Act and the amended version of California Civil Code § 1749.8(b) makes clear that the two are in opposition and incompatible. Section § 1749.8(b) now requires an online marketplace to count both transactions made through the online marketplace and transactions made outside of the online marketplace when determining who is a high-volume third-party seller, while the INFORM Act requires an online marketplace to count only transactions made through the online marketplace. Both requirements cannot be satisfied simultaneously.
With respect to another provision of the law:
California Civil Code § 1749.8.9(b)(1)(D) requires online marketplaces to "[m]aintain internal written policies, systems, and staff to monitor listings in order to affirmatively prevent and detect organized retail crime." Based on information obtained through the required monitoring, an online marketplace must take certain actions, including alerting law enforcement if the online marketplace "knows or should know that a third-party seller is selling or attempting to sell stolen goods to a California resident." The "alerting" requirement can be fulfilled only by monitoring third-party sellers' online content. In addition, online marketplaces must establish policies prohibiting the sale of stolen goods on the online marketplace, and impose "consequences for knowingly selling stolen goods on the online marketplace, including, but not limited to, suspension or termination of the seller's account." That is, the online marketplace must moderate third-party content, deciding whether to publish or to withdraw it from the platform. The Court finds that NetChoice is likely to prevail on its preemption claim grounded in Section 230 (Count 2) as to SB 1144 Section 6's addition of California Civil Code §§ 1749.8.9(a), (b)(1)(A), and (b)(1)(D).
The other provisions of California Civil Code § 1749.8.9 do not implicate an online marketplace's role as a publisher. For example, § 1749.8.9(b)(1)(B) requires that an online marketplace provide a mechanism to allow individuals to report suspicious conduct by third-party sellers, while § 1749.8.9(b)(1)(C) requires that an online marketplace provide a mechanism to facilitate communication with law enforcement. Those requirements do not involve publication of third-party content and thus does not trigger Section 230. While § 1749.8.9(b)(2) requires publication of the policy and mechanism mandated by other subsections, such publication is not of third-party content. Finally, § 1749.8.9(c) merely states the effective date of the new statutory section.
The State's opposition focuses largely on provisions of SB 1144 that are not implicated by NetChoice's Section 230 claim, and for that reason those arguments are not persuasive. The Court likewise is not persuaded by the State's attempts to characterize controlling case law as less than definite with respect to Section 230's bar on liability flowing from a state law requiring an online service to publish, withdraw, moderate, or monitor third-party content. In the Court's view, Grindr, Calise, Estate of Bride, and Barnes make clear that the identified provisions of California Civil Code § 1749.8.9 fall within Section 230's preemption clause. Finally, the State's reliance on legislative purpose is misplaced where, as here, the text of Section 230 is clear as construed by controlling case law.
Volokh v. James, Docket No. 23-356 (2d Cir. August 1, 2025)
The constitutionality of the Hateful Conduct Law depends on how the statute is interpreted. If either substantive provision of the statute requires Plaintiffs to adopt or incorporate the State's definition of "hateful conduct," which includes constitutionally protected speech, then we review the statute under (at least) intermediate scrutiny, and it fails. But if Plaintiffs can comply with the Hateful Conduct Law by disclosing a content moderation policy that does not incorporate or affirmatively encompass the statute's definition of "hateful conduct" and by providing a general mechanism for reporting content-related complaints, then we review the statute under the more relaxed standard set forth in Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, 471 U.S. 626 (1985), and its progeny, and the statute survives constitutional scrutiny. Whether the statute can support the latter, constitutional interpretation is a question best left to the New York Court of Appeals. We thus defer decision and CERTIFY to the New York Court of Appeals questions regarding the requirements of the statute.
National Retail Federation v. James, 2025 WL 2848212 (S.D.N.Y. Oct. 8, 2025)
New York's algorithmic pricing disclosure law qualifies for Zauderer treatment, which it survives.
Free Speech Coalition v. Knudsen, 2025 WL 2240252 (D. Mont. Aug. 6, 2025)
Montana amended its anti-porn segregate-and-suppress law to remove any AG enforcement, so that the law can only be enforced via private rights of action (the "bounty" approach). This one "neat" trick eliminates the plaintiffs' ability to prospectively challenge the law's constitutionality. Nice one, Montana legislature.
District of Columbia v. Facebook, Inc., 2025 WL 2166018 (D.C. Ct. App. July 31, 2025)
This is a consumer protection action over Facebook's Cambridge Analytica scandal:
In the District's telling, Facebook violated the CPPA by unintentionally misleading consumers about which of their data was accessible to third-party applications through a Facebook user's friends and about Facebook's enforcement capabilities for auditing third-party applications. The District also alleged that Facebook made a material omission by failing to disclose to users that their data had been obtained in violation of Facebook's policies
The appellate court reversed the trial court's grant of summary judgment to Facebook, saying "CPPA claims based on unintentional misrepresentations need only be proved by a preponderance of the evidence, as opposed to clear and convincing evidence."
BONUS: "Attorney General James Releases Proposed Rules for SAFE for Kids Act to Restrict Addictive Social Media Features and Protect Children Online." I don't have the fortitude to read this. Maybe you do.
* * *
Blog Posts on Segregate-and-Suppress Obligations
- Texas Judge Enjoins App Store Authentication Law – CCIA and SEAT v. Paxton
- Courts Enjoin Internet Censorship Laws in Louisiana and Arkansas
- Challenge to Maryland's "Kid Code" Survives Motion to Dismiss – NetChoice v. Brown
- My Testimony Against Mandatory Online Age Authentication
- Read the Published Version of My Paper Against Mandatory Online Age Authentication
- Prof. Goldman's Statement on the Supreme Court's Demolition of the Internet in Free Speech Coalition v. Paxton
- Court Permanently Enjoins Ohio's Segregate-and-Suppress/Parental Consent Law – NetChoice v. Yost
- Arkansas' Social Media Safety Act Permanently Enjoined – NetChoice v. Griffin
- Why I Emphatically Oppose Online Age Verification Mandates
- California's Age-Appropriate Design Code (AADC) Is Completely Unconstitutional (Multiple Ways) – NetChoice v. Bonta
- Another Conflict Between Privacy Laws and Age Authentication – Murphy v. Confirm ID
- Recapping Three Social Media Addiction Opinions from Fall (Catch-Up Post)
- District Court Blocks More of Texas' Segregate-and-Suppress Law (HB 18) – SEAT v. Paxton
- Comments on the Free Speech Coalition v. Paxton SCOTUS Oral Arguments on Mandatory Online Age "Verification"
- California's "Protecting Our Kids from Social Media Addiction Act" Is Partially Unconstitutional…But Other Parts Are Green-Lighted – NetChoice v. Bonta
- Section 230 Defeats Underage User's Lawsuit Against Grindr – Doll v. Pelphrey
- Five Decisions Illustrate How Section 230 Is Fading Fast
- Internet Law Professors Submit a SCOTUS Amicus Brief on Online Age Authentication – Free Speech Coalition v. Paxton
- Court Enjoins the Utah "Minor Protection in Social Media Act" – NetChoice v. Reyes
- Another Texas Online Censorship Law Partially Enjoined – CCIA v. Paxton
- When It Comes to Section 230, the Ninth Circuit is a Chaos Agent – Estate of Bride v. YOLO
- Court Dismisses School Districts' Lawsuits Over Social Media "Addiction" – In re Social Media Cases
- Ninth Circuit Strikes Down Key Part of the CA Age-Appropriate Design Code (the Rest is TBD) – NetChoice v. Bonta
- Mississippi's Age-Authentication Law Declared Unconstitutional – NetChoice v. Fitch
- Indiana's Anti-Online Porn Law "Is Not Close" to Constitutional – Free Speech Coalition v. Rokita
- Fifth Circuit Once Again Disregards Supreme Court Precedent and Mangles Section 230 – Free Speech Coalition v. Paxton
- Snapchat Isn't Liable for Offline Sexual Abuse – VV v. Meta
- 2023 Quick Links: Censorship
- Court Enjoins Ohio's Law Requiring Parental Approval for Children's Social Media Accounts – NetChoice v. Yost
- Many Fifth Circuit Judges Hope to Eviscerate Section 230 – Doe v. Snap
- Louisiana's Age Authentication Mandate Avoids Constitutional Scrutiny Using a Legislative Drafting Trick – Free Speech Coalition v. LeBlanc
- Section 230 Once Again Applies to Claims Over Offline Sexual Abuse – Doe v. Grindr
- Comments on the Ruling Declaring California's Age-Appropriate Design Code (AADC) Unconstitutional – NetChoice v. Bonta
- Two Separate Courts Reiterate That Online Age Authentication Mandates Are Unconstitutional
- Minnesota's Attempt to Copy California's Constitutionally Defective Age Appropriate Design Code is an Utter Fail (Guest Blog Post)
- Do Mandatory Age Verification Laws Conflict with Biometric Privacy Laws? – Kuklinski v. Binance
- Why I Think California's Age-Appropriate Design Code (AADC) Is Unconstitutional
- An Interview Regarding AB 2273/the California Age-Appropriate Design Code (AADC)
- Op-Ed: The Plan to Blow Up the Internet, Ostensibly to Protect Kids Online (Regarding AB 2273)
- A Short Explainer of Why California's Social Media Addiction Bill (AB 2408) Is Terrible
- A Short Explainer of How California's Age-Appropriate Design Code Bill (AB2273) Would Break the Internet
- Is the California Legislature Addicted to Performative Election-Year Stunts That Threaten the Internet? (Comments on AB2408)
- Omegle Denied Section 230 Dismissal – AM v. Omegle
- Snapchat Isn't Liable for a Teacher's Sexual Predation – Doe v. Snap
- Will California Eliminate Anonymous Web Browsing? (Comments on CA AB 2273, The Age-Appropriate Design Code Act)
- Minnesota Wants to Ban Under-18s From User-Generated Content Services
- California's Latest Effort To Keep Some Ads From Reaching Kids Is Misguided And Unconstitutional (Forbes Cross-Post)
- Backpage Gets Important 47 USC 230 Win Against Washington Law Trying to Combat Online Prostitution Ads (Forbes Cross-Post & More)
- Backpage Gets TRO Against Washington Law Attempting to Bypass Section 230 – Backpage v. McKenna
- MySpace Wins Another 47 USC 230 Case Over Sexual Assaults of Users – Doe II v. MySpace
- MySpace Gets 230 Win in Fifth Circuit – Doe v. MySpace
- Website Isn't Liable When Users Lie About Their Ages – Doe v. SexSearch
