What Should Photo Repositories Do About Blackface Photos?–Thompson v. Shutterstock
Blackface depictions have a long racist history. In 2020, Facebook banned them as hate speech when the images “caricature” Black people. At the same time, the “Black Peter” character is a long-standing (though increasingly controversial) part of Sinterklaas celebrations in the Netherlands, and older images of people in blackface are essential to understanding our (racially dubious) history. Thus, as with most efforts to define “bad” content for content moderation purposes, the legitimacy of publishing blackface photos “depends.”
Today’s case involves the racial discrimination complaints of Thompson, an African-American employee at the photo repository Shutterstock, especially in the emotionally raw aftermath of the 2020 murder of George Floyd. (Long-time readers may recall that this blog had a several-year arrangement with Shutterstock to use its stock photos to illustrate blog posts; that arrangement ended in 2016.) Among other concerns, Thompson pointed to blackface photos in Shutterstock’s photo repository in his racial discrimination lawsuit. The court’s discussion of the topic:
Offensive images can contribute to a hostile work environment. See Banks v. Gen. Motors, LLC, 81 F.4th 242, 264–65 (2d Cir. 2023) (finding that offensive images such as nooses and Confederate flags contributed to a hostile work environment). Images portraying blackface are certainly offensive. Plaintiff alleges that many of these images remained in Shutterstock’s content library in violation of Shutterstock’s own content policy. Additionally, Plaintiff alleges that he reported these issues to Shutterstock’s social media team and to Beecher and was told that Shutterstock only maintained the policy to make the organization look good and did not actually care about enforcing the policy. Finally, Plaintiff alleges that, while Shutterstock removed some of the offensive images, Shutterstock allowed many of the blackface images to remain in its content library and that Shutterstock employees continued to “distribute inaccurate and offensive guidance regarding Shutterstock’s content policy.” It is plausible that the presence of offensive blackface images in Shutterstock’s content library, combined with selective application of Shutterstock’s own content moderation policy in response to a complaint from a Black employee, could contribute to a racially hostile work environment. Therefore, this conduct may count toward a hostile work environment claim against Shutterstock.
However, Shutterstock’s refusal to remove some blackface images from its content library was not sufficiently severe, on its own, to create a hostile work environment. The placement of an offensive image or object in a workplace may be severe enough to create a hostile work environment. See Banks, 81 F.4th at 265 (“A reasonable jury could find that even a single placement of [a noose] – imbued as it is with historical gravity as a symbol and tool of actual violence – directly at the workstation of a Black employee could amount to severe conduct sufficient to support an inference that the workplace is hostile to Black employees.”). However, Plaintiff’s allegations lack sufficient detail regarding the nature and context of the images for the Court to infer that they rise to this level of severity. Nor does Plaintiff allege that he or other Shutterstock employees regularly encountered these images during the course of their employment. Given this lack of detail, and absent other indicia of the severity or pervasiveness of the blackface images, the Court does not find it plausible that Shutterstock’s refusal to remove some blackface images from its content library rose to the level of a hostile work environment.
(I’m not exactly sure what the court is saying here. Is it saying that blackface photos in the content database could have contributed to a hostile work environment, but Thompson didn’t plead enough detail in his complaint? Or is the court saying that the continued publication of the photos could have contributed to a hostile work environment, but an employer could cure the problem with a sufficiently responsive takedown policy? Or something else?)
To me, this case is a microcosm of our general debates over content moderation, except that it arose in a slightly unusual posture. It involves an employee discrimination complaint, rather than the more common external complaints about content moderation, and it suggests that an employee might have better legal standing to complain about a service’s content moderation than outsiders do. In that respect, it resembles the business-process outsourcing (BPO) cases over content reviewers’ working conditions, which turn on the reviewers’ exposure to terrible content.
I would like to know more about the images Shutterstock didn’t remove and why. If those images relate to Black Peter or depict historical uses of blackface, they might still “comply” with Shutterstock’s policies or qualify as reasonable exceptions. In that case, counting the continued availability of those photos as evidence of a hostile work environment raises an obvious First Amendment problem; it would become a tool for back-door censorship of Constitutionally protected material. It also would put judges in the uncomfortable and Constitutionally dubious position of second-guessing the legitimacy of Shutterstock’s content moderation decisions.
(Note: currently in the US, Shutterstock has the legal freedom to ban all blackface photos, even photos with social or historical value. If I were in charge, that’s probably the decision I would make, given the images’ long-standing racist implications. However, consider how such a ban would interact with any “must-carry” laws. For example, would Thompson have a claim under the Florida social media censorship law that Shutterstock didn’t moderate content “consistently”? Would uploaders have a claim under the Texas social media censorship law that banning blackface uploads constitutes “viewpoint discrimination,” especially if Shutterstock didn’t ban other types of cultural appropriation? As you can see, the stakes in the imminent NetChoice Supreme Court decisions are extremely high.)
Finally, this case brings to mind the statutes requiring Internet services to enforce their policies as written. As you already know, it’s impossible to do content moderation perfectly, so plaintiffs can always find examples where a service allegedly did not live up to its stated policy. This case illustrates one way those examples can surface (via an employee complaint), but the implications are the same: if Internet services are liable for doing content moderation imperfectly, the liability exposure poses an existential risk.
Case Citation: Thompson v. Shutterstock, Inc., 2024 WL 2943813 (S.D.N.Y. June 10, 2024).
Selected Other Cases Involving Shutterstock