Court Doesn’t Expect YouTube to Moderate Content Perfectly–Newman v. Google
This is one of several ideologically motivated lawsuits against YouTube for allegedly engaging in “discriminatory” content moderation. The initial cohort of plaintiffs was conservatives (Prager); then, as a purported “gotcha,” the law firm added LGBTQ (Divino) and people-of-color (Newman) plaintiff cohorts. By experimenting with more sympathetic plaintiff demographics, I assume the law firm hoped to create better precedent that it could then weaponize to help conservatives object to content moderation and more deeply entrench their existing privilege. However, as I observed before, it’s not actually possible to “discriminate” against every subpopulation of user-authors, because discrimination-against-everyone is really discrimination-against-no-one. Thus, I always felt the litigation ploy acted as an adverse admission by the plaintiffs. But courts don’t always use facts like that for petard-hoisting, instead grounding their rulings in legal doctrines and admissible evidence. And the precedent is indeed stacked against any account-termination or content-removal plaintiffs.
After 5 tries, the Divino LGBTQ lawsuit finally failed last month. And after a remarkable 6 tries, the Newman race-based lawsuit has now failed too (prior blog post). In both cases, the high-concept and splashy constitutional issues fizzled out long ago. As the Newman court summarized, “this case has shed its intentional discrimination and constitutional claims, becoming—first and foremost—a breach of contract dispute.”
The court discusses this language that YouTube added to its community guidelines in 2021:
We enforce these Community Guidelines using a combination of human reviewers and machine learning, and apply them to everyone equally—regardless of the subject or the creator’s background, political viewpoint, position, or affiliation
I do not understand YouTube’s decision to add this language while it had all of these discrimination lawsuits pending. What was YouTube thinking???
The court says this language could be an enforceable promise:
the statement reads like a guarantee that users can expect identity-neutral treatment from YouTube when they use its service. Moreover, the statement is definite enough for the Court to ascertain YouTube’s obligation under the contract (it must avoid identity-based differential treatment in its content moderation) and to determine whether it has performed or breached that obligation.
This sets up YouTube for a major own-goal. Yet, the court bails YouTube out. [Tip to YouTube: PLEASE PLEASE PLEASE DELETE THIS LANGUAGE FROM YOUR COMMUNITY GUIDELINES IF YOU HAVEN’T ALREADY DONE SO.]
The court says: “the plaintiffs must do more than gesture at plausible ideas in the abstract. They must allege sufficient factual content to give rise to a reasonable inference that their content has been treated in a racially discriminatory manner by YouTube’s algorithm.”
The centerpiece of the plaintiffs’ allegations is a chart comparing 32 of the plaintiffs’ restricted works to 58 unrestricted works by “white” submitters. The court shreds this chart.
As a proxy for determining that submitters were “white,” the plaintiffs identified videos from “large corporations.” This is a non sequitur, and the court easily disregards these videos. The court calls other video comparisons “downright baffling.” Still other comparisons actually undercut the plaintiffs’ arguments because the restricted videos apparently deserved more moderation than their comparators, “which dramatically undermines the inference that the differential treatment was based on the plaintiff’s race.”
This leaves only “a scarce few” video comparisons as “even arguably viable,” but that’s not enough to support the contract breach claim (emphasis added):
the complaint provides no context as to how the rest of these users’ videos are treated, and it would be a stretch to draw an inference of racial discrimination without such context. It may be that other similarly graphic makeup videos by Ley have not been restricted, while other such videos by the white comparator have been restricted. If so, this would suggest only that the algorithm does not always get it right. But YouTube’s promise is not that its algorithm is infallible. The promise is that it abstains from identity-based differential treatment.
The issue of error rates is critical to any allegations of identity-based discriminatory content moderation. A large service like YouTube with an exceptionally high content moderation accuracy rate will make many millions of moderation errors–not because of discrimination but because of the inevitable limitations and possible arbitrariness of content moderation. As the court indicates, it’s not realistic to demand perfect content moderation, so that’s not the appropriate baseline for assessing whether content moderation has been done on a discriminatory basis.
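To make that scale concrete, here is a minimal back-of-the-envelope sketch in Python. The decision volume and accuracy rate are hypothetical numbers chosen purely for illustration, not YouTube’s actual figures:

```python
# Back-of-the-envelope arithmetic: near-perfect moderation still yields
# enormous error counts at scale. Both inputs below are hypothetical
# illustrations, not YouTube's real numbers.
daily_decisions = 50_000_000  # assumed moderation decisions per day
accuracy = 0.999              # assumed 99.9% accuracy rate

daily_errors = daily_decisions * (1 - accuracy)
print(f"Errors per day:  {daily_errors:,.0f}")        # 50,000
print(f"Errors per year: {daily_errors * 365:,.0f}")  # ~18,250,000
```

Even at an (implausibly optimistic) 99.9% accuracy rate, the hypothetical service makes tens of thousands of errors a day and millions a year, none of which imply discrimination.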
So, exactly what proof would the court accept to show identity-based discriminatory content moderation? The court says the 32/58 video comparison was too small a sample to generate reliable results. Yet, the court also said that general statistical evidence of site-wide discrimination wouldn’t matter. So I guess the judge would credit a large number of plaintiffs with a large enough corpus of compared works to achieve statistically reliable results? Or perhaps there is no way for plaintiffs to plead discrimination without smoking-gun evidence of individual discriminatory decisions.
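For a sense of why a 32-versus-58 comparison is statistically shaky, here is a small simulation sketch, assuming (purely for illustration) a 50% baseline restriction rate. It shows how often a sizable rate gap between two groups appears by chance even when both groups are treated identically:

```python
# Simulation: with samples of 32 and 58 videos, sizable gaps in restriction
# rates arise by chance alone, even with zero discrimination. The 50%
# baseline restriction rate is a hypothetical chosen for illustration.
import random

def chance_of_gap(n_a=32, n_b=58, p_restrict=0.5, gap=0.15, trials=100_000):
    """Fraction of trials in which two identically treated groups show a
    restriction-rate gap of at least `gap` purely from sampling noise."""
    hits = 0
    for _ in range(trials):
        rate_a = sum(random.random() < p_restrict for _ in range(n_a)) / n_a
        rate_b = sum(random.random() < p_restrict for _ in range(n_b)) / n_b
        if abs(rate_a - rate_b) >= gap:
            hits += 1
    return hits / trials

# Roughly 1 in 6 runs shows a 15-point gap with no discrimination at all.
print(f"{chance_of_gap():.1%}")
```

Under these assumptions, a 15-percentage-point disparity shows up about one time in six with no discrimination whatsoever, which is why small comparison sets prove so little.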
On that front, the plaintiffs’ other key piece of evidence came from a 2017 meeting between YouTube “queer” creators and Google’s Vice President of Product Management, Johanna Wright. Allegedly, Wright said that Google was filtering content “belonging to individuals or groups based on gender, race, religion, or sexual orientation.” At another meeting, YouTube allegedly admitted that it removed content from non-white submitters at a higher rate than from white submitters. The court says these allegations “do not come close to making up for the glaring deficiencies in the plaintiffs’ chart”:
First, the allegations are vague as to what exactly was said. For example, the complaint purports to quote Wright, but it is not clear where Wright’s words end and the plaintiffs’ recitation of legal buzzwords begins. Similarly, the plaintiffs attribute a great many (buzzword-laden) statements to YouTube’s representatives but barely quote them.
Second, and more importantly, these alleged admissions were made in 2017, four years before YouTube added its promise to the Community Guidelines. In machine-learning years, four years is an eternity. There is no basis for assuming that the algorithm in question today is materially similar to the algorithm in question in 2017. That’s not to say it has necessarily improved—for all we know, perhaps it has worsened. The point is that these allegations are so dated that their relevance is, at best, attenuated. Finally, these allegations do not directly concern any of the plaintiffs or their videos. They are background allegations that could help bolster an inference of race-based differential treatment if it were otherwise raised by the complaint. But, in the absence of specific factual content giving rise to the inference that the plaintiffs themselves have been discriminated against, there is no inference for these background allegations to reinforce.
I wonder again: what evidence could plaintiffs have marshaled to show actionable discriminatory content moderation?
To be clear, I favor high evidentiary barriers to claims of discriminatory content moderation. It should not be possible to establish identity-based discriminatory content moderation based solely on inferences. Otherwise, every group can easily find a statistician who will crunch a dataset to show, with low p-values, that moderation of one content category isn’t perfectly equal to moderation of another. This isn’t discrimination; it’s an inevitable consequence of editorial decisions at scale (especially if the service doesn’t always know its authors’ demographics).
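A quick sketch of that point, using invented counts at platform scale: with tens of millions of observations, even a gap of a twentieth of a percentage point in restriction rates yields an astronomically small p-value.

```python
# With platform-scale data, a trivially small rate difference produces a
# vanishingly small p-value. All counts are invented for illustration.
from scipy.stats import chi2_contingency

# Hypothetical: two groups of 10M videos each, restricted at 1.00% vs. 1.05%.
table = [
    [100_000, 9_900_000],  # group A: restricted, not restricted
    [105_000, 9_895_000],  # group B: restricted, not restricted
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"p-value: {p_value:.1e}")  # astronomically small (~1e-28)
```

Statistical “significance” at this scale is nearly automatic; it says nothing about whether the service intended to treat any group differently.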
(I also question whether the law can restrict publishers’ ability to discriminate in their editorial decisions. That topic is expressly at issue in the Florida and Texas social media censorship cases.)
So, plaintiffs need smoking gun evidence, or it’s an easy case to dismiss. As I wrote about this case in 2021:
the plaintiffs claim that YouTube engaged in race-based discriminatory content moderation, but there’s no way for plaintiffs to prove this because there’s no baseline of what “unbiased” content moderation looks like. Instead, an unavoidable truth of content moderation: everyone believes that the Internet services are “biased” against them…but it’s impossible for Internet services to be biased against everyone. Without a simple-to-apply legal defense that content moderation is always and inevitably biased and the law offers no remedy for that, we will experience a repetitive cycle of plaintiff attempts to weaponize the law against that truth.
In future cases alleging discriminatory content moderation, maybe courts should order the plaintiffs to pay the defendants’ attorneys’ fees, especially when a plaintiff gets 6 bites at the apple.
With this dismissal, the case is now ready for its appeal to the Ninth Circuit, which has always been its inevitable destination. There, this case will join the Divino case. I would like to think that the Ninth Circuit will find these cases easy to affirm, but as indicated by the troubling Vargas ruling, anything could happen when the Ninth Circuit considers the intersection of discrimination claims and editorial decision-making.
One oddity: the court instructs both parties “not to remove or otherwise make unavailable the videos cited in the complaint until the appellate process has run its course.” I understand that the court is trying to preserve the evidence for the appellate court, but this is actually an unconstitutional order to keep publishing content that either party may determine does not meet its editorial standards.
Case citation: Newman v. Google LLC, 2023 WL 5282407 (N.D. Cal. Aug. 17, 2023). The CourtListener page.