Section 230 Protects Services That Permit Anonymous Third-Party Posts–Bride v. Snap

This case involves two “anonymous messaging” apps, Yolo and LMK. Both allegedly target teen audiences. “Plaintiffs allege they received harassing messages in response to their benign posts on Defendants’ applications and did not receive comparable messages on other platforms in which user identities were revealed.” Both apps allegedly were not responsive enough to complaints or unmasking requests, including not following their own purported policies.

The plaintiffs sued the apps for “(1) strict product liability based on a design defect; (2) strict product liability based on a failure to warn; (3) negligence; (4) fraudulent misrepresentation; (5) negligent misrepresentation; (6) unjust enrichment; (7) violation of the Oregon Unlawful Trade Practices Act; (7) violation of the New York General Business Law § 349; (8) violation of the New York General Business Law § 350; (9) violation of the Colorado Consumer Protection Act; (10) violation of the Pennsylvania Unfair Trade Practices Law; (11) violation of the Minnesota False Statement in Advertising Act; and (12) violation of California Business and Professions Code §§ 17200 & 17500.” The defendants successfully defend on Section 230 grounds.

ICS Provider. “Plaintiffs do not meaningfully challenge Defendants’ status.”

Publisher/Speaker Claims. “although Plaintiffs frame user anonymity as a defective design feature of Defendants’ applications, Plaintiffs fundamentally seek to hold Defendants liable based on content published by anonymous third parties on their applications….While Plaintiffs urge that preventing users from posting anonymously is unrelated to the content users of Defendants’ applications generate, these ‘decisions about the structure and operation of a website are content-based decisions’ under Section 230.”

To get around this, the plaintiffs cited Lemmon v. Snap. The court doesn’t agree:

Though Plaintiffs seek to characterize anonymity as a feature or design independent of the content posted on Defendants’ applications, the theories underlying Plaintiffs’ claims essentially reduce to holding Defendants liable for publishing content created by third parties that is allegedly harmful because the speakers are anonymous. Imposing such a duty would “necessarily require [Defendants] to monitor third-party content,” e.g., in the form of requiring Defendants to ensure that each user’s post on their applications is traceable to a specifically identifiable person.

Third-Party Content. Despite the rhetorical moves to position the lawsuit as being about the defendants’ design choices, this is actually an easy case. “Defendants did not create or develop the harassing and explicit messages that led to the harm suffered by Plaintiffs; the sending users did.”

Claims Covered. Section 230 applies to all twelve of the plaintiffs’ claims. A couple of specifics:

  • The false advertising claims don’t escape 230: “Had those third-party users refrained from posting harmful content, Plaintiffs’ claims that Defendants falsely advertised and misrepresented their applications’ safety would not be cognizable.” This is the latest entry in the confusing jurisprudence about when 230 applies to first-party marketing representations that are rendered untrue by users’ activities.
  • Despite Doe v. Internet Brands, 230 applies to the failure-to-warn claims because “Plaintiffs’ theory would require the editing of third-party content, thus treating Defendants as a publisher of content.”

The court rejects the invocation of Henderson: “To the extent the Fourth Circuit’s decision in [Henderson], in which the Fourth Circuit Court of Appeal reinterpreted its prior conception of “publication” under § 230(c)(1) in [Zeran], is implicated here, the court finds it unpersuasive in light of [the] broader view adopted by the Ninth Circuit.” This court is right that Henderson wasn’t a persuasive opinion, but this adds to the evidence of a circuit split…

Implications

This case relates to a tragedy, a teen suicide after online harassment. I share the heartbreak. However, this case’s litigation approach is problematic.

First, the complaint assumes that the anonymous messages created liability for someone, but that may not be the case. “Harassment” isn’t always illegal/tortious. This case may be an attempt to impose liability on intermediaries for lawful-but-awful content. This is the direction the UK is headed, but it won’t work here in the US.

Second, the complaint asked the court to ban the apps from the Internet entirely, largely because they enable anonymous/pseudonymous conversations. I link this lawsuit to the broader efforts to impose know-your-customer (KYC) obligations on Internet services. However, anonymity and pseudonymity play important roles in Internet discourse. Lawsuits like this are intended to override those options. That’s unconstitutional, and it’s not the proper role of the judicial system.

Third, this lawsuit overlaps the broader censorial efforts to impose liability for social media addiction, including an MDL on that topic in the Northern District of California. I don’t expect the MDL to succeed, but it’s clear plaintiffs are trying different ways to remix the basic concerns. It would be better if we resolved the MDL before we see the proliferation of different litigation approaches on the same legal issues.

Finally, this lawsuit is obviously about third-party content. Plaintiffs wielded a variety of recent anti-230 precedents, such as Lemmon and Henderson, to obfuscate that basic point, but the court didn’t get confused. Anything the plaintiffs are complaining about exists only because of third-party content, which makes it a straightforward Section 230 case. The Supreme Court may rip open a huge hole in this legal principle in Gonzalez, but based on standard 230 jurisprudence, this lawsuit never had a chance.

I want to reiterate this passage from the opinion:

Plaintiffs’ claims essentially reduce to holding Defendants liable for publishing content created by third parties that is allegedly harmful because the speakers are anonymous. Imposing such a duty would “necessarily require [Defendants] to monitor third-party content,” e.g., in the form of requiring Defendants to ensure that each user’s post on their applications is traceable to a specifically identifiable person.

This is an interesting (though not unprecedented) interpretation of Section 230. It essentially says that 230 prevents the mandating of author attribution because doing so would require third-party content monitoring. Think about the applications of this principle to something like the CA AADC, which requires websites–including entities that qualify for Section 230–to authenticate user (and by implication, author) age. Does a requirement to authenticate authors’ ages similarly constitute the monitoring of third-party content? This could be one of the many legal problems with the AADC.

I’m sure this opinion will be appealed. Meanwhile, the battle is raging in the MDL, the recent lawsuits brought by school districts over teens’ social media addiction, other lawsuits, and the panoply of state bills that will be introduced in 2023. So this is not the final word on the matter.

Case citation: Bride v. Snap, Inc., 2023 U.S. Dist. LEXIS 5481 (C.D. Cal. Jan. 10, 2023). The complaint.