Court Rejects an Attempt to Create a Common-Law Notice-and-Takedown Scheme–Bogard v. TikTok
The plaintiffs allege they notified YouTube and TikTok about videos that allegedly violated the services’ rules, and the services didn’t take action on those notifications despite making various promises to do so. These arguments revisit well-trodden legal ground, but the plaintiffs tried a modest innovation. The lawsuit purports to focus on the allegedly defective operation of the services’ reporting tools, but the plaintiffs’ goal was to hold the services accountable for their alleged inaction in response to some reports. In other words, the plaintiffs are trying to use venerable legal doctrines to create a common-law notice-and-takedown scheme. This doctrinal move doesn’t work. The court dismisses the case entirely, with leave to amend.
Strict Products Liability. The plaintiffs claimed that the services’ reporting features were defectively designed. The court responds: “Plaintiffs do not clearly identify the ‘product’ at issue or the ‘design defect’ it allegedly contains.” The court says that reporting tools might be legally classifiable as a “product” (cite to the Social Media Addiction case, but I vigorously disagree with any ruling that conflates chattel-based rules with intangibles). However, the court says that’s essentially a doctrinal bait-and-switch: plaintiffs actually object “to Defendants’ decisions, after receiving Plaintiffs’ reports, to remove or not remove certain videos; [not] to the functionality of the reporting tool itself.” Thus,
As framed by Plaintiffs, the alleged “defect” is not “content-agnostic”…The crux of Plaintiffs’ allegations is that the Defendants’ reporting systems are defective because Plaintiffs’ reports do not produce the outcomes that Plaintiffs believe they should—i.e. removal of the reported videos. Thus, to remedy the alleged defect, Defendants would have to change the content posted on their platforms. Such allegations fail to state a claim under products liability law.
The court also questions whether the plaintiffs alleged any cognizable harm from the functioning of the reporting tools (as opposed to the content that didn’t get removed).
Negligence. “The Court is not persuaded that Plaintiffs have plausibly alleged any Defendant assumed the obligation of a ‘first response hotline,’ such as 911 dispatcher or suicide prevention hotline, and thereby assumed a corresponding duty of care.” The plaintiffs cited the Ninth Circuit’s promissory estoppel exception to Section 230 from Barnes, but the court wonders how the plaintiffs detrimentally relied on any reporting feature or were prevented from reporting videos to law enforcement.
Misrepresentation. Pointing to multiple statements from disparate sources, the plaintiffs claimed that the services publicly “convey the following representations: (1) that Defendants ‘review and act upon harms that violate their policies,’ (2) in a way that is meaningful and ‘accurate enough to respond to a majority of harms,’ (3) such that their platforms will be free of certain content.” The court responds:
Many of the statements simply describe what content is allowed on the platforms. Indeed, it is difficult to imagine how such statements of policy could be considered “false” for purposes of Plaintiffs’ claims. [Cites to Doe v. Grindr and Lloyd v. Facebook] The Court is not persuaded that such “not allowed” statements of policy are equivalent to a representation that Defendants’ platforms do not have content that violates Defendants’ policies or guidelines.
In other words, the plaintiffs repackaged one of the most venerable plaintiff arguments for getting around Section 230: that a court should treat a service’s negative behavioral covenants on users’ content as if they were promises that such content will never appear on the site. Once again, this tactic didn’t work.
The court also says the plaintiffs failed to point to any specific videos that weren’t removed but should have been. “Moreover, to the extent determination of whether a video violates a policy or guideline or contains specific prohibited content requires a subjective determination that must be made by Defendants, even these affirmative ‘we remove’ statements may not be susceptible of being ‘true’ or ‘false.'” The court didn’t close the thought, but the obvious implication is that the plaintiffs want the court to second-guess how the services apply their stated editorial policies when moderating content. This kind of judicial usurpation of services’ editorial discretion would be bad news, but the court doesn’t open up that door.
State UDAPs. The state consumer claims fail for various reasons, including difficulty connecting the reporting tools’ failings to any losses suffered by the plaintiffs.
Section 230. Thus far, the entire lawsuit failed on prima facie grounds, before the court reached Section 230. Once again, amending or repealing Section 230 wouldn’t change the outcome of this case. Still, the court says that many of the plaintiffs’ claims are also barred by Section 230.
With respect to the strict products liability and negligence claims: “Plaintiffs’ theory of liability is that Defendants designed a reporting tool that is defective because objectionable content persists on their platforms even after it is reported; in other words, the breach of duty alleged is Defendants’ failure to remove reported videos. These are precisely the circumstances in which Section 230 applies.”
With respect to the misrepresentation claims:
the duty at issue arises from Defendants’ alleged promises about how they handle prohibited content on their platforms, including prohibited content reported to them. However, considering what this asserted duty “requires [Defendants] to do,” it appears that fulfillment of the duty Plaintiffs say Defendants undertook in making the alleged representations would necessarily require Defendants to change how they moderate content posted by third parties—i.e. to remove all reported videos. Unlike Bride, where the representations at issue involved banning or unmasking users who posted objectionable content—implicating duties other than those of a publisher—it is difficult to see how Plaintiffs’ misrepresentation claims treat Defendants as anything other than a publisher or speaker of third-party content.
This highlights the problems with the Bride case. It’s mockable to say that banning user-authors for posting objectionable content isn’t a publisher function. That holding forces this court to distinguish Bride by saying that removing content is a publisher function but removing users isn’t. The distinction isn’t at all persuasive, but the problem lies with Bride, not with this court.
Implications. I think this lawsuit previews the future of Section 230 litigation. The plaintiffs assembled a variety of typical anti-Section 230 arguments and packaged them as entries from the ever-growing list of known Section 230 workarounds, like “product design” and “misrepresentation,” purportedly targeting the reporting function rather than the services’ editorial decisions. This judge wasn’t fooled by the repackaging, but other judges with an anti-230 bent would be more receptive to it. But even if a court wanted to get around Section 230, this ruling shows that the case also lacks prima facie merit, which makes Section 230 just a doctrinal fast lane to the same outcome.
Case Citation: Bogard v. TikTok Inc., 2025 U.S. Dist. LEXIS 32959 (N.D. Cal. Feb. 24, 2025).