A Massive Roundup of Section 230 Decisions
This post also owes its origins to my 2-week trip to China in June. Section 230 decisions started backing up while I was on the trip and never stopped accruing. In total, this post covers about 30 decisions in 7k+ words. Whew! Some of these decisions are real stinkers, too.
Doe v. City of Warwick, 2025 WL 2197311 (D.R.I. Aug. 1, 2025)
This case involves a third-party service that collects anonymous citizen tips for law enforcement. The service is called Tip411 and is offered by Citizen Observer. The city of Warwick adopted Tip411. Doe claims that Roe anonymously submitted harassing tips to Tip411. All of the tips proved false, but the tips caused law enforcement to confront Doe in an aggressive manner.
Doe sued Citizen Observer for negligently designing its service. Citizen Observer invoked Section 230. The court says that Doe properly stated a products liability claim:
His claims are based on the Tip411 product; that is, he is asserting product liability, negligence, and breach of warranty claims based on Citizen Observer’s own conduct in developing, marketing, and selling an allegedly defective law enforcement tool. His claims are also focused on the absence of adequate warnings to Tip411 users and Citizen Observer’s failure to provide municipal trainings. Reading the allegations in Mr. Doe’s complaint and taking the facts stated as true, the Court finds that Mr. Doe claims against Citizen Observer are product liability claims based on its conduct in defectively designing and failing to warn and/or train foreseeable users and breach of warranty of the Tip411 product.
Hmm…this seems problematic. For example, what “warnings” would have changed Roe’s behavior? And Citizen Observer is supposed to teach law enforcement how to do its policing work?
It goes downhill from there:
Illogically, Citizen Observer also asserts that it acts as a passive message board and/or server host. Mr. Doe agrees with the latter, asserting that Citizen Observer does not take part in any of the communication that is directed through their platform in anyway, as they do not monitor, filter, or address the tips that travel through the application. Because it has been established that a publisher takes part in “reviewing, editing, and deciding whether to publish or to withdraw from publication third-party content[,]” Mr. Doe asserts that it is impossible for Citizen Observer to be immune as a publisher and/or speaker of Mr. Roe’s posts when it acts as a passive message board and does not review, edit, or monitor what posts are published on its platform. The Court finds that Mr. Doe’s claims do not treat Citizen Observer as a publisher and therefore it is not immune from his state-law claims.
No. Just no. Section 230 protects the decision not to edit (a leave-up decision) just as much as the decision to edit (remove). And “conduits” get just as much Section 230 protection as web hosts. For example, IAPs aren’t liable for third-party content flowing through their network (230’s definition of ICS expressly includes IAPs). So this is clearly wrong. Let’s hope the court gets on track in the next round.
Chabot v. Frazier, 2025 WL 2164002 (Tex. Ct. App. July 30, 2025)
Chabot contends Frazier’s claims for defamation relating to Chabot’s republication of the December 2023 DMN and WFAA.com articles are barred by section 230 of the Communications Decency Act (the CDA)….Simply put, the CDA generally bars defamation and libel claims against an entity that merely passively permits the publishing (or, here, the republishing) of another’s content. GoDaddy.com, LLC v. Toups, 429 S.W.3d 752, 755 (Tex. App.—Beaumont 2014, pet. denied). Chabot maintains that the website is a provider of an interactive computer service as defined by the CDA, that the content at issue was provided by another information content provider, and Frazier’s allegations improperly seek to treat Chabot as a publisher of the content posted on the website
Frazier argues that Chabot is not entitled to immunity for his publication of the 2023 WFAA.com article because Chabot did not act neutrally when he republished the article under the headline “Collin County Rep. Fred Frazier Dishonorably Discharged from DPD” after he had been informed of the article’s inaccuracies and after WFAA had published an updated and corrected article. Frazier asserts that instead Chabot acted as an information content provider by republishing the article.
Under the limited record here and viewing the evidence in the light most favorable to Frazier, we conclude Chabot did not establish as a matter of law immunity under the CDA
Ugh, this line: “the CDA generally bars defamation and libel claims against an entity that merely passively permits the publishing (or, here, the republishing) of another’s content.” The phrase “passively permits the publishing” is gibberish. Publishing is never passive!
In a footnote, the court adds: “Where a defendant contributes to and shapes the content of the information at issue, there is no immunity under the CDA.” I’ve complained before about the nonsensical and illogical “content shaping” exception to Section 230. Seeing this bad meme perpetuate is painful.
This case seems to cover some of the same ground as the D’Alonzo case from 20 years ago, which is so old that the lawyers probably had no idea it existed. I’ve repeatedly posted about how Section 230 can apply to verbatim content republication. Too bad the court didn’t get the memo.
Stearns v. Google Inc., 2025 WL 2391555 (N.D. Cal. Aug. 18, 2025)
He alleges that he performed a Google search on May 11, 2019, which unwittingly returned images of child pornography which formed the basis of federal charges that were subsequently filed against him. Plaintiff was convicted and sentenced to 11 years….
section 230 of the CDA generally immunizes entities like search engines from liability for claims involving how these entities do or do not moderate content created by others….The CDA would preclude any claim like plaintiff’s even if he stated a claim under state law [cite to Ginsberg v. Google]
Riganian v. LiveRamp Holdings, Inc., 2025 WL 2021802 (N.D. Cal. July 18, 2025)
This is a class-action lawsuit alleging “LiveRamp has tracked, compiled, and analyzed vast quantities of their personal, online, and offline activities to build detailed ‘identity profiles’ on them for sale to third parties.” With respect to Section 230:
- “Plaintiffs are asking LiveRamp ‘to moderate its own content.'”
- “CDA immunity does not apply when the defendant contributes to or shapes the content at issue.” Ugh, the content “shaping” meme again….
- “The Data Marketplace does not consist only of user-generated content…[LiveRamp] is the ‘information content provider’ of the [Data Marketplace] dossiers because it is ‘responsible, in whole or in part, for the creation or development of’ those dossiers.”
U.S. v. EZLynk, SEZC, 149 F.4th 190 (2d Cir. August 20, 2025)
The district court ruling in this case was so interesting that I based my Fall 2024 Internet Law final exam around it.
EZ Lynk is a type of app store offering apps (called “tunes”) that customize cars. The app store includes many defeat device apps designed to overcome the manufacturer’s emission control efforts, i.e., to run a more polluting car. The district court ruled that the app store qualified for Section 230 protection. The Second Circuit disagrees.
The Second Circuit credits the following allegations that EZ Lynk materially contributed to the alleged unlawful activity:
EZ Lynk “directly and materially” contributed to the development of delete tunes disseminated through the EZ Lynk System. It worked with delete-tune creator PPEI “in the early stages of testing the EZ Lynk System[,] approximately two years before the system’s launch in 2016,” and again previewed the updated device before its launch in 2018. Several of the posts cited in the Complaint explicitly refer to drivers installing PPEI-provided delete tunes through the EZ Lynk System, and PPEI jointly administers the EZ Lynk Forum Facebook group, helping drivers troubleshoot the installation of their delete tunes using the EZ Lynk System. The Complaint also alleges EZ Lynk “work[ed] with” and “collaborated with” delete-tune creator GDP Tuning before the EZ Lynk System was publicly available
OK, but the apps/tunes are still third-party content, no? Relying heavily on the problematic LeadClick case, the Second Circuit says the allegations:
raise the reasonable inference that Appellees deliberately courted – i.e., “recruited” – delete-tunes creators and “collaborated with” them to ensure that their delete tunes would be compatible with and available to users of the EZ Lynk System. Under that inference, Appellees “did not merely act as . . . neutral intermediar[ies]” between the delete tunes creators and vehicle owners “but instead ‘specifically encourage[d] development of what [was] offensive about the content.’”
I mean, isn’t this what all app stores do? To ensure good consumer experiences, app stores provide a set of technical specifications for compatible apps, review the apps for various standards, and otherwise exercise content moderation over the apps’ availability. So does this mean that all app stores are not “neutral intermediaries” (ugh) of any “illegal” apps available in their app stores?
I think the court was likely responding to the problematic nature of defeat devices and not intending to doom all app stores, but its sloppy handling of Section 230 for app stores leaves plenty of room for future plaintiff mischief. 📉
Gibralter LLC v. DMS Flowers, LLC, 2025 WL 2623293 (E.D. Cal. Sept. 11, 2025)
This is a trademark dispute between floral businesses that spilled over to Teleflora, which provides an ecommerce platform.
With respect to the state law claims (“Unfair and Deceptive Trade Practice, Common Law Trademark Infringement and Unfair Competition, and Trademark Dilution and Injury to Business Reputation”), the court says Section 230 applies to Teleflora’s liability:
The FAC alleges that Teleflora’s online platform enables third parties to sell their products through “estores” on an affiliate network such that Teleflora qualifies as an “interactive computer service provider” under the CDA…. [Cite to the Parts.com v. Yahoo decision from a dozen years ago.]
A party is not an information content provider outside the ambit of CDA immunity unless it creates or develops the offending content in whole or in part. Plaintiffs’ allegations establish at most that Teleflora controls, supervises, monitors, and profits from the offending content – not that it created or developed that content.
The court applies Section 230 to state IP claims but it spends no time justifying that decision, which is correct in the Ninth Circuit but not well-accepted elsewhere.
Bodin v. City of New Orleans, 2025 WL 2589590 (E.D. La. Sept. 8, 2025)
This is a challenge to New Orleans’ rules for short-term rentals. The court rejects Airbnb’s challenges based on Section 230 (emphasis added):
The 2024 Ordinance requires Airbnb to verify the registration status of each listing “before any booking transaction is facilitated,” and to reverify each listing “at least every 30 days of the prior verification” and whenever Airbnb “knows or should know” that any data relevant to verification has changed, essentially requiring Airbnb to monitor the registration status of all of its New Orleans listings to identify changes that are potentially material to verification. Airbnb alleges that by forcing it to engage in verifying the registration status of a third-party listing, the 2024 Ordinance treats Airbnb as a publisher of third-party content in conflict with § 230. Airbnb claims that the 2024 Ordinance runs further afoul of § 230 by effectively requiring Airbnb to remove listings when it cannot verify that the host is eligible to list the property….
The 2024 Ordinance does not operate against Airbnb’s role as a publisher of third-party STR listings but rather against its conduct as a booking agent between users and hosts for which Airbnb earns a fee. The 2024 Ordinance does not require Airbnb to monitor or delete anything from its website. Airbnb remains free without penalty to allow as many unlawful STR listings on its website as it chooses to. The 2024 Ordinance simply precludes Airbnb from collecting a fee, in other words profiting, for booking an STR transaction that includes a non-permitted (unlawful) STR. Airbnb may very well determine that for its business model the most effective means of compliance will be to review its website so as to remove unpermitted host listings from its site but the 2024 Ordinance does not compel that action….. Because the verification requirement of the 2024 Ordinance does not treat Airbnb as the speaker or publisher of third party content, the CDA is not implicated.
Oh come on.
Greater Las Vegas Short-Term Rental Association v. Clark County, 2025 WL 2608146 (D. Nev. Aug. 28, 2025)
The regulation at issue “directly imposes verification, monitoring, and deactivation obligations on hosting platforms.” The court accepts Airbnb’s Section 230 challenge using a Calise duties analysis:
The Court agrees with Plaintiffs that the “duty to monitor” springs from Airbnb’s status as a publisher of host listings….platforms like Airbnb are only required to monitor the content of host listings if they are licensed to do business in Clark County… [Note: I didn’t understand this discussion]
Plaintiffs contend unlike the Santa Monica ordinance in HomeAway, the Clark County Ordinance requires that postings be “verified prior to publication,” “monitored to ensure they contain certain information,” or “removed when certain conditions are met.” The Court is persuaded that these requirements distinguish the Clark County Ordinance from the ordinance at issue in HomeAway. Moreover, at the Hearing, Defendant conceded that the provisions in question do impose a duty on platforms like Airbnb to monitor content.
It looks like the plaintiffs win here because Clark County imposed liability upon publication, rather than only at the time of booking?
Onwuka v. Twitter Inc., 2023 Cal. Super. LEXIS 113496 (Cal. Superior Ct. Dec. 12, 2023)
The court summarizes: “plaintiff is unhappy with defendant’s editorial and/or publishing processes” (i.e., alleging racial discrimination in its content moderation practices). In light of the Murphy v. Twitter case, this is an easy Section 230 dismissal. “Defendant’s content rules are typical publisher conduct….Defendant’s policy that required plaintiff to check a box admitting that he violated defendant’s rules to unlock his account–even if unfair or untrue–is such publishing conduct….All of the content that plaintiff claims defendant required him or others to remove (and all of the content in plaintiff’s locked account) is created and posted by plaintiff and others, not defendant.”
Espinha v. Elite Universe, Inc., 2025 Cal. Super. LEXIS 42223 (Cal. Superior Ct. July 24, 2025)
In support of the first cause of action, Plaintiffs allege Defendant operates a website on which a user accused Plaintiffs of working “to protect and advance the interests of a network of illegal . . . scam artists”, and Defendant refused to remove the posts even though the user who made them agreed to do so. As Defendant points out, the Communications Decency Act of 1996 immunizes Defendant from liability…Even if Plaintiffs allege actionable claims for defamation against the person who made the posts on Defendant’s website, Defendant is not liable for maintaining the website. Moreover, “[w]here. . . an internet intermediary’s relevant conduct in a defamation case goes no further than the mere act of publication—including a refusal to depublish upon demand, after a subsequent finding that the published content is libelous—section 230 prohibits this kind of directive.”…
Plaintiffs also rely on Liapes v. Facebook (2023) 95 Cal.App.5th 910, which is not on point. In that case, the Court of Appeal held the Communications Decency Act of 1996 does not immunize a social media platform acting as an information content provider by requiring users to disclose their age and gender to design and create an advertising system which required advertisers to exclude delivery to users based on those characteristics. In the instant case, Plaintiffs’ allegations are simply Defendant permitted a user’s post to remain on its site. Plaintiffs do not allege facts to show Defendant acted as an information content provider—”that is, someone ‘responsible in whole or in part, for the creation or development’ of the content at issue.”
Day v. TikTok, Inc., 2022 U.S. Dist. LEXIS 34380 (N.D. Ill. Feb. 28, 2022)
The plaintiff complained about videos uploaded by another user. An obvious Section 230 case. A meritless FOSTA workaround also failed.
Amy v. Apple, 5:24-cv-08832-NW (N.D. Cal. Oct. 15, 2025)
This is a putative class action brought against Apple, Inc. by individuals depicted in Child Sexual Abuse Material (“CSAM”) shared using Apple’s technology and hosted on Apple’s servers. Named Plaintiffs Amy and Jessica (using pseudonyms) allege violations under 18 U.S.C. §§ 2252, 2252A, and 2255 as well as violations of products liability and negligence state laws….
Plaintiffs allege that Apple’s failure to implement NeuralHash or any other child safety features capable of detecting known CSAM on its products caused Plaintiffs to be injured because CSAM depicting them was received, possessed, and distributed using Apple products. Apple could have designed its products to protect and avoid injury to child victims of known CSAM, and Apple knew or should have known that CSAM depicting Amy and Jessica would continue to spread through Apple’s products without Apple implementing proactive detection technologies. Despite this knowledge, Apple avoided design changes that would have increased safety and reduced the injury to CSAM victims. Plaintiffs allege that Apple’s failure to implement any known CSAM detection is a design defect because Apple can safely implement readily available features to prevent the spread of known CSAM but has continuously failed to do so.
The court points to the Doe v. Apple decision, which alleged similar claims on similar facts, and “Plaintiffs rely on the same arguments and analyses that the Court rejected previously.” The court points out that the plaintiffs have problems with Apple’s alleged scienter and the applicability of Section 230.
Paul v. Brattin, 2025 WL 2845390 (W.D. Mo. Oct. 7, 2025)
This is a claim that retweeting created false light liability:
Mr. Richard Brattin, a Missouri State Senator, reposted an X post originally authored by Deep Truth Intel. The post featured a photo of Mr. Loudermill handcuffed on the curb and stated, “The Kansas City Chiefs Super Bowl Parade shooter has been identified as 44-year-old Sahil Omar, an illegal immigrant.” Mr. Brattin’s repost added “@POTUS CLOSE THE BORDER.” Contrary to Mr. Brattin’s post, Mr. Loudermill was not an illegal immigrant or connected to the shooting.
The court correctly says that Section 230 doesn’t apply to Brattin’s addition (“@POTUS CLOSE THE BORDER”) because that’s first-party content. However, Brattin’s addition isn’t false light on its own or in context, so the court should have dismissed the claim. Instead we get this:
Mr. Brattin created his own X post for which Ms. Paul seeks to hold him liable. There are no allegations about the content of the Deep Truth Intel post, only Mr. Brattin’s. The face of the Amended Complaint does not seek to hold Mr. Brattin liable for the Deep Truth Intel post. Ms. Paul’s false light claim is plausible on its face. Mr. Brattin is not entitled to immunity under the CDA for his own post
Paul v. Hoskins, 2025 WL 2845388 (W.D. Mo. Oct. 7, 2025)
Same facts as the prior squib, except a DIFFERENT Missouri State Senator, Hoskins, retweeted the same post with this caption:
Fact – President Biden’s @POTUS open border policies & cities who promote themselves as Sanctuary Cities like @KansasCity invite violent illegal immigrants into the U.S. Fact – Violent illegal immigrants with guns are exactly why we need the 2A. I have the right to protect my … show more
[What is up with all of the Missouri State Senators grandstanding about immigration using false facts? I know the answer to that question, but it’s still disgusting.]
In this case, Hoskins’ caption actually referred to violent illegal immigrants, so the false light claim is more plausible. It too survived a 230 dismissal attempt.
Byrd v. Google LLC, No. 2023 L 013005 (Ill. Cir. Ct. October 31, 2025)
Plaintiff has failed to provide facts as to how Google has defamed him or violated his right of publicity. Google does not deny that these articles pop up when a search is made for Plaintiff, but Google is not the party that has written these articles or published the pictures. Additionally, the Court finds that under United States Code, “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Thus, the Court finds Google cannot be treated as the publisher of articles that have been published online about Plaintiff, even if they may show up when using their services.
Zenon v. Google, Inc., No. CV-014025/23 (N.Y. Civil Ct. March 25, 2024)
This is a scammy ads case. “As to Google, plaintiff alleges that it allowed Reckon to advertise on its site; that it received payment from Reckon for advertising; and that it did nothing to prevent, alter or remove the content of Reckon’s advertisement. Yet these are precisely the editorial functions immunized by Section 230.”
Nordheim v. LinkedIn Corp., 2025 WL 3145293 (N.D. Cal. Oct. 21, 2025)
Another failed pro se account termination case, this time against LinkedIn:
According to Plaintiff, an individual, Aaron Laks, made harmful and false accusations against Plaintiff on LinkedIn. Plaintiff reported Laks’ accusations to LinkedIn but LinkedIn failed to intervene or investigate and, instead, suspended Plaintiff’s account without cause. Plaintiff also alleges that “Linkedin still retains and displays defamatory content” and that LinkedIn banned Plaintiff “due to false reports”. He claims all of his claims stem from the alleged harm that he incurred as a result. Plaintiff thus seeks to hold Defendant liable as a publisher for failing to remove content posted by a third-party and for temporarily barring Plaintiff from accessing or controlling his own content….
the content he is concerned with was either created by Laks or by Plaintiff, not LinkedIn…Although Plaintiff complains about the actions LinkedIn took or failed to take with respect to the content created by Laks, or by temporarily preventing Plaintiff from responding to that content, he does not allege any content created by LinkedIn.
Mann v. Meta Platforms, Inc., 2025 WL 3255009 (N.D. Cal. Aug. 18, 2025)
The allegations in Mann’s amended complaint are substantially identical to the allegations in his initial complaint – Meta exposed him to third-party content on Facebook relating to drug use that Mann found distressing….for the reasons stated in the court’s OSC, § 230 bars Mann’s claims.
Mann’s citation to the testimony of Meta’s CEO does not compel a different result. A statement that “Facebook no longer serves its original purpose” and is “now a showcase where the algorithm is in charge” does not render Meta responsible for third-party content on the Facebook platform. This does not amount to a specific promise to remove meth-related third-party content such that § 230 immunity does not apply.
Atlas Data Privacy Corp. v. We Inform LLC, 2025 WL 2444153 (D.N.J. Aug. 25, 2025)
This is a challenge to “Daniel’s Law,” a notice-and-takedown law that permits certain government officials and family members to remove their contact information from online sites. With respect to Section 230:
The court at this early stage has little information about the activities of these four defendants relevant to CDA immunity…At oral argument on the motions to dismiss, defendants candidly conceded that they do not operate platforms where third parties simply post information. Defendants seek out and compensate others for providing the home addresses appearing on their websites.
Even if defendants do not create the home addresses and unlisted telephone numbers of covered persons, the court has insufficient evidence in the record to determine whether they develop it. Defendants We Inform, Infomatics, and The People Searchers acknowledge in affidavits that they “provide comprehensive reports” to their consumers. Smarty similarly attached to its motion screenshots of its website, which state that Smarty “meticulously craft[s] personalized solutions tailored to every facet of [its] customers’ business needs.” The screenshots also provide that its service will “[f]ill in missing data & unlock additional information about any validated street address” that a user searches. To determine whether defendants develop the information in issue and whether they have immunity under the CDA must await discovery.
Niedziela v. Viator Inc., 2025 WL 2732916 (C.D. Cal. Sept. 25, 2025)
A woman suffered serious personal injuries when a tree branch fell on her on a tour booked through Viator (a TripAdvisor subsidiary). Viator defended on Section 230. The court says “the right on which Niedziela’s claim is premised relates to Defendants’ status as publishers of the Waterfall Gardens Tour listing, not a separate or independent right.” Then, citing Calise, the court says:
to the extent that Niedziela’s negligence claim is premised upon Defendants’ failure to warn, Section 230 does not immunize Defendants from liability because Niedziela does not seek to hold Defendants liable for failing to vet or monitor third-party conduct. To the extent that Niedziela’s negligence claim is premised upon Defendants’ advertisement of the Waterfall Gardens Tour or the inclusion of the Waterfall Gardens Tour on the Viator website, however, Niedziela does seek to hold Defendants liable as speakers or publishers, and Section 230 applies
The court also says Viator may have materially contributed to the listing’s content because it added a certification badge (the “Badge of Excellence”) to the listing:
The Court is not persuaded that the Badge of Excellence is an aggregate metric akin to that in Kimzey. The star rating system in Kimzey was a pure aggregation metric that did not include Yelp’s own impressions about the quality of a business. Niedziela, in contrast, alleges that the Badge of Excellence reflects Viator’s evaluation of the quality of the Waterfall Gardens Tour, which included conclusions that Viator drew as a part of its intense vetting process. To the extent that Defendants dispute the truth of Niedziela’s allegations about the criteria reflected in the award of the Badge of Excellence or the role of Viator’s vetting process in deciding whether to award the Badge of Excellence to a tour listing, those disputes are not appropriate for resolution at this stage of the litigation….
even if Defendants are correct that the Badge of Excellence was awarded based upon “objective” criteria such as whether a tour permitted mobile booking, those criteria reflected Viator’s determinations about what conditions affected the quality of a tour experience, not third-party determinations. Thus, unlike a neutral aggregation tool, the Badge of Excellence credited particular postings based upon Viator’s assessment of those postings….for the purpose of the instant Motion, the Badge of Excellence constitutes Viator’s material contribution to the Waterfall Gardens Tour listing, such that Defendants can be held liable as creators of that content even if other content in the listing was provided by a third party
Terms like “neutral aggregation tool” are a good tipoff that the court has lost the jurisprudential plot.
Also, as a matter of law, “Viator’s Terms of Use were not reasonably conspicuous, and Niedziela is not bound by the exculpatory clause contained therein.”
Prior blog post. The court sets up the facts:
As Plaintiffs explain, each Defendant operates an app store through which social casinos are available for download. Each Defendant also requires apps downloaded from their respective stores to use their respective payment processing system for in-app purchases. Each Defendant then takes a thirty percent cut of every in-app transaction. Whenever Defendants process a virtual chip purchase in a social casino, say Plaintiffs, they are contributing to the problem by unlawfully facilitating illegal gambling transactions
The court previously denied Section 230 for payment processing, but authorized interlocutory appellate review, which the Ninth Circuit declined. The defendants took another run at Section 230, citing the intervening Calise precedent. It doesn’t change the answer:
The crux of the statutory claims in these cases is that Defendants were prohibited from processing in-app payments for social casino apps. Payment processing is not an act of publishing. It is a transaction, one that is “distinct, internal, and nonpublic.” Of course, payment processing activities may be an important part of publishing activity. But that does not make payment processing a publishing activity. Instead, it is better viewed as a generic business activity common to virtually all companies, publishers or not, just like hiring workers or paying taxes…Limits on Defendants’ ability to process certain payments does not interfere with Defendants’ ability to publish third-party apps by offering them in their app stores or by making in-app content available. One can understand this point by recognizing that the duties imposed by these statutes apply equally to dedicated payment processors such as PayPal, Square, and Stripe even though those companies are plainly not publishers. A duty that applies equally to non-publishers does not treat a defendant as a publisher.
The defendants argued that they would have to monitor the activities of the apps to avoid liability. The court is unmoved:
Defendants can choose to stop offering their own payment processing and allow app developers to use the services of dedicated third-party processors. In this way, Defendants can avoid all the issues raised by Plaintiffs’ claims without so much as glancing at any app’s content….
monitoring does not become necessary just because it “would be the best option from a business standpoint” or would be the “most practical compliance option.”…Perhaps if the termination of their payment processing services would pose an existential threat to Defendants, or if it would prevent Defendants from engaging in their publishing activities, then such termination would not be an acceptable alternative to monitoring.
I wonder about any opinion where the court’s answer is essentially “you can avoid liability by exiting the industry.”
The defendants argued that they only provide neutral tools (ugh). The court responds:
While the Ninth Circuit has recognized a neutral tools analysis for Section 230, it has consistently situated that analysis under the third prong of the immunity test—whether content is provided by a third party. This is because the neutral tools analysis informs whether the defendant is a “creator or developer” of content, i.e., whether the content is the defendant’s or another’s.
I’ve repeatedly criticized the “neutral tools” doctrine as an oxymoron, and this narrowing construction by the court is even more dubious. I wonder how the Ninth Circuit will view this doctrinal move by the court.
The court certifies the case for interlocutory appeal once again. It points out in detail various doctrinal problems with Calise, essentially baiting the Ninth Circuit to fix the doctrinal mess it made in Calise. This case will reach the Ninth Circuit eventually, one way or another.
Google LLC v. Latam Airlines Group S.A., 2025 WL 2721690 (N.D. Cal. Sept. 24, 2025)
This case involves two videos that a user uploaded to YouTube criticizing Latam Airlines. In 2018, a Brazilian court held that the videos defamed Latam and ordered their removal from YouTube Brazil. In a series of rulings in 2024 and 2025, the Brazilian Supreme Court ordered the videos removed globally. Google sought a declaration from a US court that it need not comply with the global removal order in the US.
The court says Google’s Section 230 argument can support its preliminary injunction request:
- YouTube is an ICS provider.
- The videos came from a third party.
- The Brazilian global removal order would treat Google as the publisher of third-party content. Cite to Google v. Equustek.
The court also says the SPEECH Act protects Google because Brazilian defamation law doesn’t require plaintiffs to show actual malice.
Fleites v. MindGeek S.A.R.L., 2025 WL 2902301 (C.D. Cal. Sept. 25, 2025)
This is a very long FOSTA opinion involving CSAM on Pornhub. Citing Doe v. MindGeek (C.D. Cal. 2021) and Doe #1 v. MG Freesites (N.D. Ala. 2022), the court denies a Section 230 defense because MindGeek is partially responsible for the content development:
Plaintiff claims that MindGeek reviewed, uploaded, categorized, tagged, optimized for user preference and disseminated the videos of Plaintiff. MindGeek also purportedly uploaded the optimized, tagged, and categorized video to its other tubesites. While the Court agrees that Plaintiff’s pleadings as to MindGeek’s involvement in the videos as specific to her leave more to be desired, the Court finds that these allegations paired with the general allegations found in the rest of the SAC detailing MindGeek’s tools that are not neutral in nature but rather encourage criminality are sufficient at this stage of the litigation when all reasonable inferences are drawn in favor of Plaintiff.
With respect to the FOSTA beneficiary liability claims, the court says Doe v. Reddit governs only Section 230’s FOSTA exception, which isn’t applicable because the court rejected Section 230 on other grounds. Thus, the court will accept constructive knowledge arguments on the prima facie elements, arguments that would otherwise be foreclosed if the FOSTA exception governed the case.
R.Q.U. v. Meta Platforms, Inc., 2025 Cal. Super. LEXIS 70297 (Cal. Superior Ct. Nov. 5, 2025)
An outgrowth of the state court social media addiction case.
the fact that a design feature like “infinite scroll” led a user to harmful content does not mean that there can be no liability for harm arising from the design feature itself. Here, there is evidence that the infinite scroll feature itself caused some harm to Moore…Moore has testified that the “endless scroll” feature has caused her to use Defendants’ applications much more than she would have without that feature
Gas Drawls, LLC v. Whaleco, Inc., 2025 U.S. Dist. LEXIS 254999 (C.D. Cal. Dec. 5, 2025)
The plaintiff enforces the IP rights of rapper Daniel Dumile Thompson, better known as MF DOOM. This is a trademark enforcement case. With respect to the state IP claims:
Plaintiff characterizes Temu as an information content provider on the ground that it is “responsible” for the product listings and allegedly alters and advertises them. These conclusory assertions do not plausibly allege that Temu is a content provider for the reasons discussed above—i.e., Plaintiff provides no factual basis to infer that Temu materially contributed to the alleged infringement. Thus, the state-law intellectual property claims, as alleged, are barred under § 230.
Also interesting:
Plaintiff contends that Temu is directly liable because it knowingly offers “MF DOOM” as a search keyword that triggers the display of the infringing listings. But Plaintiff does not explain how Temu “offered” the keyword, and the FAC itself states that Plaintiff’s counsel found the listings by typing “MF DOOM” into the search bar. It is therefore unclear that Temu did anything other than provide a search tool for its platform.
State v. TikTok Inc., 2025 WL 2399525 (N.C. Bus. Ct. Aug. 19, 2025)
The North Carolina AG sued TikTok for addicting minors. The court starts out with a standard anti-230 trope:
when section 230 says not to treat an internet platform “as the publisher or speaker of” others’ content, it means not to burden the platform with traditional publisher liability. The statute’s reach ends there. It does not relieve internet publishers “from all potential liability” or provide “an all purpose get-out-of-jail-free card for businesses that publish user content on the internet.”
TikTok doesn’t qualify for Section 230:
Neither of the State’s theories seeks to hold ByteDance liable for monitoring, altering, or removing user content, or for failing to do those things. The thrust of the unfairness theory is that ByteDance purposely designed TikTok to be addictive to minors. If what the complaint says is true, TikTok is packed with features—autoplay, endless scrolling, social rewards, and more—that exploit minors’ developmental immaturity and neurological susceptibility to intermittent, variable rewards. And TikTok addiction allegedly disrupts healthy sleep habits and social interactions, causing insidious psychological harms to teens and children. This theory has more in common with products liability than publisher liability, resting as it does on an alleged duty not to design and offer a product that endangers a vulnerable population…
It is no answer to say, as ByteDance does, that addicted minors spend their time on TikTok viewing third-party content. ByteDance’s business is, after all, to host and display user videos. Nearly everything it does is connected in some way to its users’ content. But it and other social-media platforms “continue to face the prospect of liability, even for their ‘neutral tools,’ so long as plaintiffs’ claims do not blame them for the content that third parties generate with those tools.” The State’s unfairness theory neither blames ByteDance for its users’ content nor aims to hold it accountable in its capacity as a publisher of that content. The theory instead seeks to hold ByteDance liable “for its own injurious conduct” in “creating and employing tools to addict young users.”…
the State’s unfairness theory treats ByteDance as a product designer, not a publisher, and faults it for offering a combination of features and social rewards that foster compulsive use by minors. Unlike Bride, ByteDance’s liability does not turn on user content or its failure to remove or suppress that content. This sort of anti-addiction claim therefore does not implicate section 230…
Section 230 gives internet platforms wide latitude to moderate content. But it does not shield them from liability for breaching their promises or misrepresenting their content-moderation activities.
Relying on Justice Barrett’s Moody concurrence, the court also rejects the First Amendment defense: “The algorithm does not convey a message by its programmer; it simply bows to user preferences and propensities….a reasonable person would understand TikTok’s video feed to reflect a given user’s content choices as opposed to ByteDance’s own creative expression or editorial judgment.” So much judicial ignorance about how algorithms work!
The court concludes:
If the State’s allegations are true, ByteDance has intentionally addicted millions of children to a product that is known to disrupt cognitive development, to cause anxiety, depression, and sleep deprivation, and (in the worst cases) to exacerbate the risk of self-harm. Federal law does not immunize this conduct, the First Amendment does not bless it, and North Carolina’s laws and courts are not powerless to address it.
Patterson v. United Network for Organ Sharing, 2022 WL 23024110 (D.S.C. Mar. 7, 2022)
A patient sued the organ donor matching network for facilitating a liver match with the wrong blood type. The court rejects the network’s Section 230 defense:
the Court declines to find that United Network is entitled to blanket immunity under the CDA, as it appears to the Court that United Network’s duties clearly exceed those of an interactive computer service provider as contemplated by the CDA. In other words, accepting all well-pleaded allegations of Plaintiff’s complaint as true, matching Plaintiff with an incompatible donor goes beyond merely hosting a computer service that parties use to post information.
Martin v. Care.com, Inc., 2025 IL App (1st) 250913-U (Ill. Ct. App. Dec. 15, 2025)
Care.com helps families hire in-home caregivers. Care made numerous public statements touting the safety of its caregivers, including that it performs background checks. However, Care didn’t screen for past incidents of child abuse. After Care’s referral, the plaintiffs retained Dunwoody as a nanny. Allegedly, Dunwoody had a history of child abuse and injured the plaintiffs’ baby. Dunwoody blamed the dad for the baby’s injuries, which had major consequences for him. Eventually, the state investigation exonerated the parents. The parents sued Care.com over its promises about screening. The district court dismissed on Section 230 grounds. The appeals court reverses.
With respect to Care’s marketing statements:
The corresponding obligation of Care.com is not to make misleading statements to consumers in the solicitation of business. Complying with this duty certainly cannot be said to require Care.com to moderate what caregivers communicate about their backgrounds through its platform….Care.com’s ability to satisfy its statutory duty under this cause of action stems from the statements Care.com itself chooses to make to consumers on its website. Accordingly, success on this cause of action does not require it to be treated as a publisher or speaker of content posted on its platform by third parties.
The negligent misrepresentation claims reach the same place:
it is Care.com’s own undertaking to have background checks performed on all potential caregivers and to inform customers of this when soliciting their business. Nothing other than a business decision requires Care.com to do this; it could simply allow potential caregivers to use its platform to communicate their background and qualifications to other customers and place the entire burden of conducting background checks on customers. If Care.com had done only the latter, then arguably publisher liability would be the only source of a duty from which liability could be imposed in a negligence claim. However, because Care.com undertook to have background checks performed on all potential caregivers and to make statements to customers about what these background checks entailed, its duty to customers such as the plaintiffs derives from its own actions and statements in this regard. In our view, Care.com’s ability to comply with this duty does not require it to moderate content or communications made by third parties through its Internet platform. Accordingly, success on this claim does not require it to be treated as a publisher or speaker in contravention of section 230(c)(1).
The court distinguishes Doe v. Grindr because
in totality, the statements by Care.com on its website are more specific than the statement at issue in Doe. More importantly, though, we find the statements at issue in this case to refer to Care.com’s own undertaking to ensure that potential caregivers undergo background checks prior to interacting with other customers. When we view these in their light most favorable to the plaintiffs, these statements simply are not about moderating content that is posted to or communicated through an Internet platform. For example, when Care.com states that “[w]e ensure potential account holders are screened and evaluated against our conduct and eligibility standards” by being “background-checked through our CareCheck process,” the court views this as a statement about Care.com’s own undertaking to its customers, which has nothing to do with the actions of a publisher concerning third-party content posted on the Internet.
The court rejects the argument that Section 230 applies if publication of third-party content was a but-for cause.
Sosa v. AT&T, 2025 WL 3719229 (N.D. Cal. Dec. 23, 2025)
“The only conduct Sosa complains of by YouTube is YouTube’s decisions regarding whether to takedown his video, when to put his video back up, and what ranking to give his video. These are ‘quintessential’ publishing decisions giving YouTube immunity to state law tort claims under Section 230.”
Kostov v. Go Daddy LLC, 2025 Ariz. Super. LEXIS 1282 (Ariz. Superior Ct. Oct. 8, 2025)
the Communications Decency Act does bar the majority of this lawsuit. Section 230 of the CDA provides immunity to interactive computer services providers against liability arising from content created by third parties. This includes requests for injunctive relief, such as removal of content.
That immunity applies because (a) Defendants provide or use an interactive computer service, (b) Ms. Kostov’s claims, for the most part, treat Defendants as the publisher or speaker of the information, and (c) the information comes from another content provider. Registering domains and hosting websites fall into the first prong. Ms. Kostov’s request for damages and injunctive relief show that she is treating Defendants as the speaker and/or publisher of the harmful statements. And, as Ms. Kostov’s complaint suggests, Defendants did not create any of the content.
The CDA, therefore, bars nearly all of Ms. Kostov’s claims. That includes defamation, negligence, cyberstalking, cyber-harassment, and injunctive relief.
What it does not clearly bar, however, is the demand for registrant information. That request appears in the recently filed amended complaint. Although Defendants addressed the amendments in their reply, this request was overlooked. Because Defendants have not addressed it, the Court declines to dismiss it.
This Court recognizes the significant difficulties Ms. Kostov has endured with the content and with efforts to have it removed. This Court, however, cannot circumvent established law, even if GoDaddy has process for reporting abuse. This Court’s order does not require GoDaddy, however, to sit idly by.
Doe v. DeLuca, 2025 Vt. Super. LEXIS 700 (Vt. Superior Ct. Dec. 15, 2025)
Doe alleges (1) “YouTube LLC is contributorily liable for the unauthorized commercial use of Plaintiff’s likeness by providing the platform and failing to remove the infringing content after notice”; and (2) “YouTube LLC facilitated the commercial use of Plaintiff’s likeness without Plaintiff’s consent, violating Plaintiff’s right to control the commercial exploitation of their identity.” These allegations treat YouTube wholly as a “publisher” or “speaker” of the videos made and posted by DeLuca. Doe effectively concedes YouTube’s status as an “interactive computer service,” and his allegations do not in any way challenge DeLuca’s status as “another information content provider.” Doe has not alleged that YouTube was “responsible, in whole or in part, for the creation or development of DeLuca’s video….
YouTube’s insertion of advertisements into DeLuca’s video does not remove Section 230 immunity for YouTube. A defendant must do more to meet the “material contribution” test.
Distinguishing Forrest v. Meta, the court adds: “Providing ‘neutral tools’ for DeLuca to post his video does not eliminate Section 230 immunity for YouTube, where it otherwise ‘did absolutely nothing to encourage the posting of . . . [allegedly actionable] content.’” [Insert my oft-repeated objection to the “neutral tools” phrase.]
The court acknowledges Section 230’s IP exception, but says the publicity rights claim arises under a privacy statute; and even if it didn’t, the court would follow the Ninth Circuit’s ccBill decision and preclude state IP claims. This is a surprising move given that most courts outside the Ninth Circuit have diverged from it on this point.
Despite the Calise case, the court reaches back to pre-Calise precedent to hold that Section 230 also applies to breach of contract claims:
Doe alleges that: (1) he “reviewed YouTube’s terms of service agreement”; (2) he “submitted a report to YouTube LLC to remove the [DeLuca] video”; (3) “YouTube LLC failed to remove the content”; and (4) “YouTube LLC failed to adhere to its own terms of service when it failed to remove the reported video.” As the cases above make clear, Doe’s allegations, while framed as a breach of contract claim, nevertheless go to the heart of YouTube’s actions as a publisher — Doe complains that YouTube published DeLuca’s videos when it should not have. Calise’s plain language shows that Section 230 applies to this sort of allegation that would “oblige[] the defendant to ‘monitor third-party content’—or else face liability—then that too is barred by § 230(c)(1).”
Also, “YouTube’s ToS do not create promises which it could have breached in the way that Doe alleges.”
Extra: “a now commonplace occurrence like DeLuca’s recording by cell phone of Doe in public and posting it online without more does not constitute as matter of law the sort of objectively outrageous conduct required for an IIED claim.”
BONUS: Gonzalez v. Viator Tours Inc., 2025 WL 2420943 (D. Mass. Aug. 20, 2025). A woman suffered a slip-and-fall on a third-party excursion booked through Viator/TripAdvisor. Instead of relying on Section 230, the court dismisses the case on prima facie grounds:
- “the amended complaint does not plausibly allege that Viator or Tripadvisor were responsible for, or had control over, the operation of the catamaran tour or the placement of the ramp….Viator and Tripadvisor had no duty to take reasonable care in the operation of the Sunfos tour because Gonzalez does not allege that they had a role in, or control over, its operation”
- No duty to warn because no special relationship.
- No negligent selection claim outside of employment/IC relationship. Also, the complaint didn’t allege that “Sunfos was an unsafe or inexperienced catamaran operator” or “why Viator or Tripadvisor knew or should have known Sunfos to have such a reputation.”
