Social Media Services Aren’t Liable for Buffalo Mass-Shooting–Patterson v. Meta

The New York state intermediate appeals court has issued a significant ruling dismissing four lawsuits that sought to hold many social media services (Facebook, Instagram, Snap, Google, YouTube, Discord, Reddit, Twitch, Amazon and 4Chan) liable for the 2022 Buffalo mass-shooting. See my prior blog post on the lower court ruling.

In a divided 3-2 ruling, the majority says that both Section 230 and the First Amendment negate the plaintiffs’ claims that the shooting was caused by the shooter’s addiction to and radicalization by social media. (The shooter embraced the “Great Replacement Theory” based on his exposure to it on social media). The majority opinion is filled with powerful speech-protective lines; some standouts:

  • “section 230 is the scaffolding upon which the Internet is built.”
  • “Without section 230, the diversity of information and viewpoints accessible through the Internet would be significantly limited.”
  • “There is no strict products liability exception to section 230”
  • “the interplay between section 230 and the First Amendment gives rise to a ‘Heads I Win, Tails You Lose’ proposition in favor of the social media defendants.”
  • “section 230 immunity and First Amendment protection are not mutually exclusive, and in our view the social media defendants are protected by both. Under no circumstances are they protected by neither”
  • “dismissal at the pleading stage is essential to protect free expression under Section 230. Dismissal after years of discovery and litigation (with ever mounting legal fees) would thwart the purpose of section 230”
  • “the motion court’s ruling, if allowed to stand, would gut the immunity provisions of section 230 and result in the end of the Internet as we know it”

* * *

The Majority Opinion

The plaintiffs argued they were suing over the services’ design, not any specific content. Iterating on a case-positioning move that worked for the Lemmon plaintiffs, the plaintiffs here tried to narrow the case to focus solely on design:

Plaintiffs concede that, despite its abhorrent nature, the racist content consumed by the shooter on the Internet is constitutionally protected speech under the First Amendment, and that the social media defendants cannot be held liable for publishing such content. Plaintiffs further concede that, pursuant to section 230, the social media defendants cannot be held liable merely because the shooter was motivated by racist and violent third-party content published on their platforms. According to plaintiffs, however, the social media defendants are not entitled to protection under section 230 because the complaints seek to hold them liable as product designers, not as publishers of third-party content.

The majority rejects this case positioning and says the case is not just about product design. For example, the majority says: “plaintiffs’ strict products liability causes of action against the social media defendants fail because they are based on the nature of content posted by third parties on the social media platforms. [Cites to Force and MP]…There is no strict products liability exception to section 230.” The majority explains why the plaintiffs’ claims are predicated on third-party content:

the complaints, from beginning to end, explicitly seek to hold the social media defendants liable for the racist and violent content displayed to the shooter on the various social media platforms. Plaintiffs do not allege, and could not plausibly allege, that the shooter would have murdered Black people had he become addicted to anodyne content, such as cooking tutorials or cat videos.

Instead, plaintiffs’ theory of harm rests on the premise that the platforms of the social media defendants were defectively designed because they failed to filter, prioritize, or label content in a manner that would have prevented the shooter’s radicalization. Given that plaintiffs’ allegations depend on the content of the material the shooter consumed on the Internet, their tort causes of action against the social media defendants are “inextricably intertwined” with the social media defendants’ role as publishers of third-party content

Every time I see a social media addiction case, I ask: what are the plaintiffs allegedly “addicted” to? Inevitably, the answer is “third-party content.” That’s why Section 230 should routinely apply to social media addiction claims.

To get around this, the plaintiffs invoked the Third Circuit’s Anderson v. TikTok case. Like most courts outside the Third Circuit, the majority disagrees with the Anderson ruling:

We do not find Anderson to be persuasive authority. If content-recommendation algorithms transform third-party content into first-party content, as the Anderson court determined, then Internet service providers using content-recommendation algorithms (including Facebook, Instagram, YouTube, TikTok, Google, and X) would be subject to liability for every defamatory statement made by third parties on their platforms. That would be contrary to the express purpose of section 230

Rejecting the Anderson precedent leads the majority into a long exposition of the interplay between Section 230 and the First Amendment. I believe much of this tangent is dicta, but it still helps explain why the First Amendment backfills Section 230 here.

The majority validates the First Amendment’s applicability to the plaintiffs’ claims, so getting around Section 230 wouldn’t really help the plaintiffs:

even if we were to follow Anderson and conclude that the social media defendants engaged in first-party speech by recommending to the shooter racist content posted by third parties, it stands to reason that such speech (“expressive activity” as described by the Third Circuit) is protected by the First Amendment under Moody….

the complaints here allege that the social media defendants served the shooter material that they chose for him for the purpose of maximizing his engagement with their platforms. Thus, per Moody, the social media defendants are entitled to First Amendment protection for third-party content recommended to the shooter by algorithms.

The majority emphatically rejects that there is a “but the algorithms” exception to the First Amendment:

Every method of displaying content involves editorial judgments regarding which content to display and where on the platforms. Given the immense volume of content on the Internet, it is virtually impossible to display content without ranking it in some fashion, and the ranking represents an editorial judgment of which content a user may wish to see first. All of this editorial activity, accomplished by the social media defendants’ algorithms, is constitutionally protected speech.

The majority concludes its Anderson tangent with this powerful summation:

the interplay between section 230 and the First Amendment gives rise to a “Heads I Win, Tails You Lose” proposition in favor of the social media defendants. Either the social media defendants are immune from civil liability under section 230 on the theory that their content-recommendation algorithms do not deprive them of their status as publishers of third-party content, per Force and M.P., or they are protected by the First Amendment on the theory that the algorithms create first-party content, as per Anderson. Of course, section 230 immunity and First Amendment protection are not mutually exclusive, and in our view the social media defendants are protected by both. Under no circumstances are they protected by neither.

Now back to the main Section 230 thread. The plaintiffs invoked Lemmon v. Snap to get around Section 230. The majority rejects that:

The Ninth Circuit made clear, however, that the plaintiffs “would not be permitted under § 230 (c) (1) to fault Snap for publishing other Snapchat-user content (e.g., snaps of friends speeding dangerously) that may have incentivized . . . dangerous behavior”. Here, in contrast, plaintiffs seek to do just that, i.e., to hold the social media defendants liable for content posted by other people that allegedly incentivized dangerous behavior by the shooter.

The majority also rejects Judge Katzmann’s dissent in Force v. Facebook:

To the extent that Chief Judge Katzmann concluded that Facebook’s content-recommendation algorithms similarly deprived Facebook of its status as a publisher of third-party content within the meaning of section 230, we believe that his analysis, if applied here, would ipso facto expose most social media companies to unlimited liability in defamation cases. That is the same problem inherent in the Third Circuit’s first-party/third-party speech analysis in Anderson. Again, a social media company using content-recommendation algorithms cannot be deemed a publisher of third-party content for purposes of libel and slander claims (thus triggering section 230 immunity) and not at the same time a publisher of third-party content for strict products liability claims.

As a final reason to reject the case, the majority questions whether the plaintiffs could establish causation (this causation discussion also appears to be dicta):

If plaintiffs’ causes of action were based merely on the shooter’s addiction to social media, which they are not, they would fail on causation grounds. It cannot reasonably be concluded that the allegedly addictive features of the social media platforms (regardless of content) caused the shooter to commit mass murder, especially considering the intervening criminal acts by the shooter, which were “not foreseeable in the normal course of events” and therefore broke the causal chain. It was the shooter’s addiction to white supremacy content, not to social media in general, that allegedly caused him to become radicalized and violent

The majority doesn’t cite the Taamneh opinion, even though that case dealt with somewhat analogous circumstances.

The majority opinion concludes with this summary:

We believe that the motion court’s ruling, if allowed to stand, would gut the immunity provisions of section 230 and result in the end of the Internet as we know it. This is so because Internet service providers who use algorithms on their platforms would be subject to liability for all tort causes of action, including defamation. Because social media companies that sort and display content would be subject to liability for every untruthful statement made on their platforms, the Internet would over time devolve into mere message boards.

Although the motion court stated that the social media defendants’ section 230 arguments “may ultimately prove true,” dismissal at the pleading stage is essential to protect free expression under Section 230 (see Nemet Chevrolet, Ltd., 591 F3d at 255 [the statute “protects websites not only from ‘ultimate liability,’ but also from ‘having to fight costly and protracted legal battles’ “]). Dismissal after years of discovery and litigation (with ever mounting legal fees) would thwart the purpose of section 230.

While everyone of goodwill condemns the shooter’s actions and the vile content that motivated him to assassinate Black people simply because of the color of their skin, there is in our view no reasonable interpretation of section 230 that allows plaintiffs’ tort causes of action to survive as against the social media defendants, who are entitled to immunity under the statute as the publishers of third-party content on their platforms.

The Dissent

The dissenting judges would accept many of the censorship arguments that plaintiffs routinely advocate for:

plaintiffs do not seek to hold defendants liable for any third-party content; thus, we conclude that those causes of action do not implicate section 230 of the Communications Decency Act (section 230) or the First Amendment. Even if section 230 were implicated, however, we conclude that the use of an algorithm to push disparate content to individual end users constitutes the “creation or development of information,” which could subject defendants to liability, and is not the type of editorial or publishing decision that would fall within the ambit of section 230…

Plaintiffs assert that defendants had a duty to warn the public at large and, in particular, minor users of their platforms and their parents, of the addictive nature of the platforms. They thus claim that defendants could have utilized reasonable alternate designs, including: eliminating “autoplay” features or creating a “beginning and end to a user’s ‘[f]eed’ ” to prevent a user from being able to “infinite[ly]” scroll; providing options for users to self-limit time used on a platform; providing effective parental controls; utilizing session time notifications or otherwise removing push notifications that lure the user to re-engage with the application; and “[r]emoving barriers to the deactivation and deletion of accounts.” These allegations do not seek to hold defendants liable for any third-party content; rather, they seek to hold defendants liable for failing to provide basic safeguards to reasonably limit the addictive features of their social media platforms, particularly with respect to minor users….

the use of design functions, such as algorithmic models that “autoplay” videos or create an “infinite feed,” constitutes the “creation or development of information” that would render defendants first-party content providers and, thus, not immune from liability under section 230

Summarizing this passage and more, the dissent embraces a bevy of plaintiffs’ standard justifications for imposing liability: Design is not about content; services are engaging in conduct, not content publishing; using algorithms “develops” third-party content and thus turns it into first-party content; services should warn users about [something?]; services should redesign their functionality to satisfy the plaintiffs’ specifications; “but the algorithms”; services are products; social media services override the free will of their users.

The dissent doesn’t think Moody applies: “Government-imposed content moderation laws that specifically prohibit social media companies from exercising their right to engage in content moderation is a far cry from private citizens seeking to hold private actors responsible for their defective products in tort.” From my perspective, when the “defective products” in question are speech venues, the dissent’s argument makes a distinction without a difference.

The dissent concludes:

logic and the law compel the conclusion that the social media platforms in question are products, and that the manufacturers of those products can be held liable in products liability. Defendants are multi-billion-dollar corporations that derive their revenue from maximizing user engagement on their platforms. They alone control the manufacture and distribution of their respective social media platforms. They are uniquely positioned to know of—and prevent—the harm posed by social media addiction generally and specifically in minors. Once injected into the stream of commerce, their platforms are uniform for all users. That users exchange their data as opposed to currency to use those platforms does not, in our view, vitiate their true nature as products….

The conduct at issue in this case is far from any editorial or publishing decision; defendants utilize functions, such as machine learning algorithms, to push specific content on specific individuals based upon what is most apt to keep those specific users on the platform. Some receive cooking videos or videos of puppies, while others receive white nationalist vitriol, each group entirely ignorant of the content foisted upon the other. Such conduct does not “maintain the robust nature of Internet communication” or “preserve the vibrant and competitive free market that presently exists for the Internet” contemplated by the protections of immunity but, rather, only serves to further silo, divide and isolate end users by force-feeding them specific, curated content designed to maximize engagement.

Implications

What Does Justice Look Like?

The Buffalo mass-murder was a shocking and devastating tragedy (even though mass-shootings occur regularly in the US). The shooting victims and their families deserve justice in at least three forms.

First, the wrongdoers should be held accountable for their contributions to the crimes. This process is already working; the shooter has pled guilty to his state law crimes and is still being prosecuted for his federal crimes. However, as both a legal and moral matter, the wrongdoers do not include social media. Treating content publishers as causal contributors to offline murders extends liability principles too far.

Second, the government has a responsibility to validate the victims’ experiences and double down on its efforts to keep its constituents safe, especially from gun violence.

Third, anyone endorsing, expressing tacit support for, or declining to condemn the Great Replacement Theory should be held accountable for their positions. That unfortunately includes far too many politicians. Their express or tacit support for the Great Replacement Theory contributes to an ecosystem that normalizes virulent hatred and encourages violence. These politicians’ positions are both ignorant and truly sickening; and the fact that they are often politically rewarded because of, or despite, those positions, rather than condemned and ostracized, only compounds the problem.

The Future of Free Speech Online

The majority opinion stands in contrast to the rapid degradation of the First Amendment and Section 230 throughout the rest of the judicial system. After the TikTok and FSC rulings, it’s clear that a majority of the Supreme Court will not protect online speech; indeed, those rulings wiped out much of the speech gains from Moody and more. And courts throughout the country have systematically carved back Section 230 in structurally significant ways. At this point, plaintiffs can find precedent supporting over a dozen different Section 230 workarounds. Thus, it’s disorienting to see a throwback opinion like this. To me, the majority’s conclusions are so obvious and intuitive that I still can’t understand how any courts are reaching contrary conclusions. But they are.

Given the case’s stakes and the 3-2 split at the Appellate Division, there is no doubt this opinion will be appealed to New York’s highest state court, the Court of Appeals. Indeed, I would expect that court’s ruling to be appealed to the US Supreme Court, regardless of who wins. I have no predictions about how this case will fare in further proceedings. I think the majority clearly got the law right and the dissent clearly got the law wrong, but that doesn’t ensure the defendants will preserve their victory on appeal.

I also don’t think this opinion is likely to persuade the other judges who are issuing plaintiff-favorable social media addiction or “but the algorithms” decisions. At this point, many judges are firmly committed to the techlash. Also, judges might make a jurisprudential distinction based on the victims’ identity. In this case, the victims had to trace the legal responsibility back through an independent perpetrator (the shooter). In the teen self-harm cases, the teens are arguing that social media services are victimizing them directly.

Case Citation: Patterson v. Meta Platforms, Inc., 2025 WL 2092260 (N.Y. App. Div. July 25, 2025). The majority opinion’s author is Judge Stephen Lindley.

The related rulings (all pointing back to this ruling): Salter v. Meta Platforms, Inc., 2025 WL 2092189 (N.Y. App. Div. July 25, 2025); Stanfield v. Mean LLC, 2025 WL 2092121 (N.Y. App. Div. July 25, 2025); Jones v. Mean LLC, 2025 WL 2092152 (N.Y. App. Div. July 25, 2025).