District Court Again Rejects Plaintiffs’ Attempts to Manufacture Common Law Notice-and-Takedown Duties–Bogard v. TikTok
This is a quirky lawsuit designed to subvert Section 230, the First Amendment, and traditional common law. I previously summarized the case:
This lawsuit purports to focus on the allegedly defective operation of the services’ reporting tools, but the plaintiffs’ goal is to hold the services accountable for their alleged inaction in response to some reports. In other words, the plaintiffs are trying to use venerable legal doctrines to create a common-law notice-and-takedown scheme.
The court previously dismissed the case. Prior blog post. The plaintiffs filed an amended complaint that contains few new fact allegations. Instead, the plaintiffs tried a different framing of those facts. The court summarizes the gist: the plaintiffs “challenge only Defendants’ automated reporting tools and not other aspects of Defendants’ content review and reporting processes.” It doesn’t work.
Product Liability–Design Defect
“Plaintiffs contend that Defendants’ reporting tools are defective because Plaintiffs’ reports do not produce the outcomes that Plaintiffs believe they should—i.e., a determination that the reported videos violate Defendants’ Community Guidelines.” The court implies that the plaintiffs think of “reporting tools” as a kind of automated censorbot, i.e., “Plaintiffs appear to contend that each reporting tool…itself determines whether such a violation exists.”
The court is confused by the allegations because, in the end, all objections involve the content non-removal decisions: “the design defect Plaintiffs describe is bound entirely to Defendants’ content moderation decisions, including how closely the determinations of the automated reporting tools track human review decisions.” The court says California law doesn’t support this.
Negligence
The plaintiffs’ reframed theory: “by offering reporting tools and soliciting user reports, Defendants assumed a duty to review the reports, and a further duty to respond to such reports or to warn users of the danger of making reports that do not result in the removal of reported content.” The court says California law doesn’t impose this duty. However, I will note that various state and international laws are increasingly requiring such duties, such as laws imposing a mandatory explanation of content decisions (sometimes coupled with a right to appeal).
Misrepresentation
The court reiterates: “the challenged statements that merely describe content that is allowed or not allowed on the platform, or that describe Defendants’ general policies, are not ‘equivalent to a representation that Defendants’ platforms do not have content that violates Defendants’ policies or guidelines.’”
The court says that TOS promises to remove violative content could be actionable, but the plaintiffs never alleged that they relied on these promises.
Lack of Proximate Causation
to the extent Plaintiffs allege they suffered emotional harm from watching distressing videos prior to reporting them, their independent and voluntary decision to search out such videos cannot be attributed to Defendants, and Defendants cannot be said to cause that harm. While Plaintiffs assert that they would not have used Defendants’ reporting tools had they known the tools were “defective,” Plaintiffs do not allege that they would not have searched for and watched the videos at all. Indeed, Plaintiffs allege that they are highly motivated to identify harmful content posted on Defendants’ platforms and get Defendants to remove such content.
Section 230

Citing the Calise decision, the court does a claim-by-claim analysis of Section 230 to identify the duty embedded in each claim.
Strict products liability. “the focus of Plaintiffs’ claim is that Defendants undertook a duty to moderate third-party content based on receiving reports from users via the reporting tool and then failed to accurately determine that the reported content should be removed. As such, the alleged duty that forms the basis of Plaintiffs’ product-defect claim “stem[s] from [Defendants’] status as a publisher.””
Also, “to avoid liability Defendants would need to alter their reporting tools to do a better job of moderating (i.e. removing) content reported by users, or alter their “responses” to Plaintiffs’ reports.”
The court distinguishes the confounding Doe v. Twitter decision, which excluded claims over a “reporting mechanism architecture” from 230:
Plaintiffs do not challenge the ease with which Defendants’ reporting tools may be accessed, but rather how good the reporting tools (or Defendants) are at determining whether reported content violates Defendants’ Community Guidelines and should therefore be removed. Such content moderation is “quintessential publisher conduct,” and Defendants cannot be held liable for it.
Negligence.
to the extent Plaintiffs contend that Defendants have a duty arising from Defendants’ role as manufacturers or providers of a product, and a corresponding obligation to remove content, this claim is barred by Section 230(c)(1) for the same reasons Plaintiffs’ claim 1 is barred. Likewise, to the extent Plaintiffs contend that Defendants have a duty to protect others because they assumed responsibility for creating a reporting system that reliably resulted in removal of all prohibited content, such a claim also is barred under Section 230(c)(1) because it attempts to hold Defendants responsible for their inadequate content moderation generally or for their incorrect determinations about whether content should be removed from their platforms.
The court again distinguishes Doe v. Twitter: “Unlike the negligence claim addressed in Doe I, where Twitter had a statutory duty to promptly report CSAM to NCMEC once it obtained actual knowledge of such material, Plaintiffs here identify no duty independent of Defendants’ duty as a publisher of third-party content.”
Misrepresentation.
because fulfillment of the alleged duty to not make false or misleading representations would not necessarily require Defendants to make any changes to their content moderation practices or to any other publisher-related conduct, Section 230 would not bar Plaintiffs’ misrepresentation claims as to the three “review and remove” statements
(but these misrepresentation claims failed on their prima facie elements).
First Amendment
to the extent the Court has concluded that Defendants are entitled to Section 230 immunity for the statements and conduct Plaintiffs challenge, for the same reasons, the Court finds that such statements and conduct are also protected by the First Amendment. However, as to the three “review and remove” statements Plaintiffs challenge, the Court does not find that these statements are protected by the First Amendment.
A sub silentio rejection of Anderson v. TikTok.
Implications
The court dismisses the case with prejudice, so I assume the next stop is the Ninth Circuit.
“Review and remove” statements are liability traps. Now that Calise and YOLO enable plaintiffs to bypass Section 230 for any promise-based claims, services have extra reasons to scrub their site disclosures to ensure they aren’t making any statements that remotely resemble a promise related to content moderation. As this case illustrates, any description of the service’s content moderation should not imply or promise that violative content will be subjected to any specific remedies, or else courts may treat those statements as a backdoor guarantee of outcomes enforceable even by those not in privity. This is something services can fix today.
Doe v. Twitter didn’t blow up this case. The Doe v. Twitter court had no idea what it was doing when it said that 230 didn’t apply to claims over “reporting mechanism architecture,” and that ruling opened the door for many unmeritorious plaintiff arguments. The court here could have read it to mean that any claims over reporting functionality automatically bypass 230. That didn’t happen this time, but the Doe v. Twitter case still holds a lot of potential to cause serious mischief.
No discussion of Take It Down Act. The plaintiffs failed to create a common law notice-and-takedown system. However, since the court’s initial dismissal, Congress enacted the Take It Down Act, which provides a new statutory notice-and-takedown mechanism. The opinion doesn’t mention this development at all, continuing to leave us in the dark about how that new statutory mechanism interacts with the plaintiffs’ arguments.
Case Citation: Bogard v. TikTok Inc., 2025 WL 3637035 (N.D. Cal. Dec. 15, 2025)
