Facebook Can’t Shake Lawsuit Over OnlyFans Bribery Allegations–Dangaard v. Meta
This lawsuit involves troubling allegations that Facebook executives (allegedly, Nick Clegg, Nicola Mendelsohn, and Cristian Perrella) took bribes from OnlyFans-related entities to spike Facebook and Instagram posts that promoted competitors of OnlyFans. Allegedly, the spiking included naming the plaintiffs on the services’ lists of “dangerous individuals or organizations,” which then fed into GIFCT (the Global Internet Forum to Counter Terrorism) to block the plaintiffs on other sites too.
These allegations sound so fantastical that they are hard to believe, yet the plaintiffs have introduced evidence–including a whistleblower report–that Judge Alsup found persuasive enough to defeat a motion to dismiss. You probably know by now that Judge Alsup gives plaintiffs the benefit of the doubt on motions to dismiss, only to hammer them on summary judgment if their evidence doesn’t hold up to scrutiny. So I’m not reading too much yet into Judge Alsup’s decision, but it’s nevertheless a disappointing ruling for Facebook.
Section 230
Facebook claimed Section 230 protects its content moderation decisions. The court disagrees: “Meta defendants are alleged to have done more than merely demote or remove information provided by third parties (i.e., plaintiffs’ accounts and posts)….Meta defendants are alleged to have purposefully designed their platforms to filter posts and accounts in an anticompetitive manner….this order finds only that, when automated content-moderation tools are allegedly designed to facilitate unlawful conduct, the claims survive CDA defenses.”
Ugh, the court is making some difficult doctrinal moves here.
First, the court’s rhetoric smooths out some analytical difficulties to make this result sound cleaner than it really is. The plaintiffs can’t show that Facebook or Instagram “designed their tools to facilitate unlawful conduct.” The allegations are that rogue Facebook employees intentionally misclassified some accounts for their own personal profit. In other words, nothing about Facebook’s or Instagram’s tools was miscalibrated; the alleged problem was that legitimately designed tools were misused by employees who allegedly weren’t following company rules. Now, reread the above passage and you can see how it masks this distinction.
Second, the Ninth Circuit’s Enigma opinion indicated that Section 230 should step back in the face of allegations that a service made filtering decisions based on anti-competitive animus. Yet here, the plaintiffs were not competitors of Facebook, so the court quietly expanded the scope of anti-competitive animus to include situations where a non-competitor helped third parties (i.e., the plaintiffs’ competitors). Ugh.
The court then has some unhelpful and pernicious dicta about policy considerations:
- “Because “[n]othing in [the CDA] shall be construed to prevent any State from enforcing any State law that is consistent with [the CDA],” this order cannot construe the CDA to bar plaintiffs’ claims of unfair competition on the Internet.” What? Invoking 230’s state law preemption provision is basically tautological and makes NO sense.
- “Meta defendants cannot help OnlyFans violate laws of general applicability and hide behind the CDA to avoid liability itself.” Again, what? Section 230 absolutely protects against “laws of general applicability” when those laws seek to impose liability for 230-protected activity.
- “Meta defendants could have employed Section 230(c)(2) to attempt to defend themselves — they claim to be removing obscene material from their platforms in good faith, which is what Section 230(c)(2) immunizes. But they instead chose Section 230(c)(1) to shield themselves. To approve Meta defendants’ CDA defense would make Section 230(c)(1) a backdoor to CDA immunity — “contrary to the CDA’s history and purpose.”” That sentence also makes no sense. A defendant claiming 230(c)(1) isn’t taking a backdoor to CDA immunity–it’s literally the front door. Plus, it’s disingenuous to say that Facebook/Instagram could claim 230(c)(2) when there is no possible way they can show that bribed employees improperly removing content for profit would constitute “good faith” removals.
To be fair, Section 230 is an awkward fit for allegations of bribery, even if the alleged misconduct involved content moderation decisions. At the same time, the court clearly struggled to explain why Section 230 didn’t fit, and it created a lot of unnecessary messiness in the Section 230 discussion. That gives plaintiffs lots of opportunities for misleading selective quoting.
First Amendment
“the Supreme Court has held that the First Amendment does not immunize anticompetitive conduct….Meta defendants are allegedly removing posts and accounts linked to all adult entertainment websites except for OnlyFans. If that is true, then Meta defendants are helping OnlyFans to achieve an unlawful monopoly in the online adult entertainment business.”
(Again, the court quietly sidesteps the problem of the lack of horizontal or vertical competition between the plaintiffs and Facebook. ¯\_(ツ)_/¯)
Vicarious Liability for Employee Acts
If the bribery allegations are true, Facebook will throw the employees under the bus. But Alsup isn’t ready for that yet. “It is premature to conclude that those accepting bribes were involved in a frolic of their own so as to immunize Meta itself.”
Then there’s this chilling statement:
The employment of individuals within Meta defendants’ content-moderation and security teams predictably and plausibly creates the risk that employees will intentionally and tortiously remove certain content from Meta defendants’ platforms. Such employees have a duty to filter content. In the performance of that duty, it is plausibly foreseeable that an employee would abuse his power for his own benefit….even if no benefit flows to Meta defendants, that alone would not preclude liability.
Whoa, plaintiffs are gonna love that. By necessity, content moderation employees have the power to publish/depublish content, and of course services know that (it’s the only way they can do their jobs). So what exactly is the service supposed to do to protect against the possibility of bribery of content moderators? Indeed, every business faces the risk that someone will bribe their employees to get undeserved benefits. Now what?
Ugh, tough ruling for Facebook. But I wonder how the summary judgment opinion will read.
Case citation: Dangaard v. Instagram LLC, 2022 WL 17342198 (N.D. Cal. Nov. 30, 2022). The complaint.
UPDATE: In July 2023, the court dismissed OnlyFans from the lawsuit due to a lack of personal jurisdiction. Dangaard v. Instagram LLC, 2023 WL 4869234 (N.D. Cal. July 31, 2023).