Court Greenlights TikTok Content Moderators’ Lawsuit–Young v. ByteDance
TikTok outsources some of its content moderation/review to third-party business process outsourcers (BPOs), Atrium and Telus. Two BPO-employed reviewers claim they suffered psychological harm from their work. The lawsuit claims the BPOs were TikTok’s proxies. Allegedly:
TikTok provided all “instructional” and “training” materials. TikTok required content moderators to use its proprietary “TCS” software to review its videos, and it exercised “full control” over that software. And TikTok imposed punishing quantity and accuracy quotas: Some moderators viewed thousands of videos each day, and they were required to have an accuracy rate of up to 95%. TikTok enforced these quotas by “constantly surveil[ling]” moderators through its software.
The reviewers allege that TikTok did not sufficiently mitigate the harm from their review work, such as by “muting their audio or changing their color or resolution.” They also claim that graphic videos would unexpectedly show up in the wrong queues.
Ordinarily, TikTok should not be liable for any harms suffered by employees of its independent contractors. Any claims should be directed to the BPO, not the BPO’s clients. However, there is an exception when the client “retains control.” The court says this may have happened here:
According to the complaint, TikTok required all content moderators to use its proprietary TCS software. TikTok had “full control” over that software, including control over how the videos were displayed and how the audio was streamed. It is highly plausible that this control was exclusive: Based on the allegations in the complaint, it does not seem that the contractor could use its own software or tinker with TikTok’s. Given that control, TikTok had a duty to use reasonable care related to the software. And accepting the allegations as true, TikTok failed to adopt reasonable safeguards in the software that could mitigate the harm from the videos.
The court accepts the plaintiffs’ allegations about potential mitigation steps, including making the videos less graphic when reviewed and better sorting of graphic and non-graphic videos. The court also cites the allegation that “TikTok promised its moderators that they could opt out of child pornography by using the queue system, but that system is allegedly faulty.” Finally, the court cites the allegations that TikTok created harm by setting unreasonable productivity standards.
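For readers who want to see what such safeguards look like in software terms, here is a minimal Python sketch. It is purely hypothetical (it is not TikTok’s TCS software and is not drawn from the record beyond the complaint’s general description of the mitigations); the function name and parameters are my own illustrative assumptions.

```python
# Hypothetical sketch only (not TikTok's TCS software or any code in the record).
# It shows the kind of display-side safeguards the complaint describes:
# muting audio and reducing color/resolution before a video reaches a reviewer.
import numpy as np

def apply_review_safeguards(frame: np.ndarray,
                            grayscale: bool = True,
                            downscale_factor: int = 4,
                            mute_audio: bool = True) -> dict:
    """Return a reviewer-facing rendering of one video frame (HxWx3 array).

    All names and parameters are illustrative assumptions.
    """
    out = frame.astype(np.float32)
    if grayscale:
        # Collapse RGB to luminance, then broadcast back to three channels.
        luminance = out @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
        out = np.repeat(luminance[..., None], 3, axis=2)
    if downscale_factor > 1:
        # Crude resolution reduction: keep every Nth pixel in each dimension.
        out = out[::downscale_factor, ::downscale_factor, :]
    return {"frame": out.astype(np.uint8), "audio_muted": mute_audio}

# Example: render a random 720p frame for review, muted, in quarter-resolution grayscale.
safe_view = apply_review_safeguards(
    np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8))
```

The only point of the sketch is that the mitigations the complaint describes are ordinary display-level transformations, which is presumably why the plaintiffs frame their absence as a failure of reasonable care.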
The court also recognizes a second exception to the general rule for the supply of unsafe equipment, i.e., the allegedly faulty sorting software for reviewer queues.
* * *
Similar lawsuits by content moderators have failed, including Garrett-Alfred v. Facebook and Aguilo v. Cognizant. I’m not sure why this one succeeded when the others didn’t, other than that the plaintiffs may have framed the case and facts differently. Does this ruling open up the possibility that content reviewers’ claims for psychological trauma could succeed? If so, the applicable legal standard could affect literally tens of thousands of potential plaintiffs, so the stakes are pretty high.
The court’s standard makes it unclear how services can have humans performing content review without getting sued. Can services mitigate their liability simply by putting the right disclosures or contracts in place? That should be easy enough to fix. Perhaps industry-standard wellness practices will become sufficiently well-accepted that courts will recognize them as “best practices” and reject any claims against services that adopt them. (As alleged, it sounds like the BPOs may not have been deploying best practices for worker wellness, but we need to hear the BPOs’ side of the story before drawing any conclusions.)
Otherwise, the court’s standard seems to doom services. For example, the plaintiffs alleged that TikTok set unreasonable productivity standards. Is the court going to micromanage services by dictating what counts as a “reasonable” productivity standard? Similarly, the court accepted the plaintiffs’ arguments that items were misplaced into the wrong queues. Of course they were; human moderation is needed to fix automated errors, including queue assignments (as the sketch below illustrates). If the court’s conclusion is that services must perfectly route reviewable content into the proper queues without any human intervention, then services will always fail this legal standard.
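To illustrate why queue assignments can never be error-free, here is a toy Python simulation. Everything in it is an assumption made for illustration (the classifier, the 5% base rate, the 0.8 threshold); it is not a description of TikTok’s system or the facts of the case.

```python
# Toy simulation, entirely hypothetical: nothing here reflects TikTok's actual
# queueing system or the record in the case. An assumed classifier score routes
# each video to a "graphic" or "general" review queue; because no classifier is
# perfectly accurate, some graphic videos inevitably land in the general queue.
import random

GRAPHIC_THRESHOLD = 0.8  # assumed routing cutoff, purely for illustration

def route_to_queue(classifier_score: float) -> str:
    """Route a video based on an automated graphic-content score."""
    return "graphic_queue" if classifier_score >= GRAPHIC_THRESHOLD else "general_queue"

random.seed(0)
misrouted = 0
for _ in range(10_000):
    truly_graphic = random.random() < 0.05   # assume 5% of uploads are graphic
    noise = random.gauss(0, 0.15)            # imperfect model confidence
    score = (0.9 if truly_graphic else 0.2) + noise
    if truly_graphic and route_to_queue(score) == "general_queue":
        misrouted += 1

print(f"Graphic videos misrouted to the general queue: {misrouted} of ~500")
```

However the threshold is tuned, the misrouting count never reaches zero, and that residual is precisely the content that reviewers encounter unexpectedly.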
Finally, a reminder that regulators routinely prefer human content moderation over automated systems, despite the potential toll that preference imposes on the content reviewers. This may be another example of regulators being willfully blind to the ways their Internet policies hurt subcommunities that the regulators aren’t a part of.
Case citation: Young v. ByteDance, Inc., 2023 WL 3484215 (N.D. Cal. May 15, 2023)