Should We Adopt a Notice-and-Takedown Scheme for Deepfakes and Other Inauthentic Media?
Prof. Christa Laser (Cleveland-Marshall) and I engaged in a point/counterpoint about legal mechanisms to address inauthentic recordings and photos, including AI-assisted forged videos (a/k/a “deepfakes”). She argued for a notice-and-takedown scheme in some circumstances, including cases involving fake pornographic images. I explained why any notice-and-takedown scheme would be misused to suppress legitimate depictions, which would help people avoid accountability for their actions.
One point I could have amplified more (I was working under tight word-count limits): Takedowns of allegedly inauthentic media would target primary sources, but sometimes we need to see the evidence for ourselves, not just hear it described. In particular, people with privilege can use their position, and any implicit credibility it conveys, to cast doubt on the facts unless the facts are incontrovertible.
For example, former Rep. Katie Hill sued over the publication of intimate images that contributed to her departure from Congress. The court granted the defendants’ anti-SLAPP motion, explaining why we needed to see the images, not just hear about them:
the intimate images published by Defendant spoke to Plaintiff’s character and qualifications for her position, as they allegedly depicted Plaintiff with a campaign staffer whom she was alleged to have had a sexual affair with and appeared to show Plaintiff using a then-illegal drug and displaying a tattoo that was controversial because it resembled a white supremacy symbol that had become an issue during her congressional campaign. Accordingly, the images were a matter of ‘public issue or public interest.’…
Plaintiff has failed to carry her burden establishing that there is a probability of success on the merits on her claim under Civil Code section 1708.85. Section 1708.85(c)(5) provides for an exception from liability for images which are a matter of public concern. Here, Defendant has established that the images are a matter of public concern, as they speak to Plaintiff’s character and qualifications for her position as a Congresswoman, allegedly depicting an extramarital sexual relationship with a paid campaign staff member, the use of illegal drugs by a sitting Congresswoman, and a tattoo similar to the symbols formerly used by white supremacists.
Plaintiff’s argument that the images are not a matter of public concern because Defendant could have simply described the images rather than publishing them is unpersuasive, as the fact that information to be gleaned from an image may be disseminated in an alternative manner does not equate to a finding that the image itself is not a matter of public concern. …
The two photos at issue here are nowhere as explicit as the sex video tape in the Michaels case, and are not morbid as the photos in Jackson were described. The photos show a sitting Congresswoman engaging in conduct some might consider highly inappropriate and perhaps unlawful, with one exhibiting Plaintiff’s tattoo which looks similar to the symbols formerly used by white supremacists. The facts of which these photos speak are about Plaintiff’s character, judgment and qualifications for her congressional position.
Hill v. Heslep, 20STCV48797 (Cal. Superior Ct. April 7, 2021) (emphasis added).
My contribution to the colloquy addressed only some of the policy concerns with a notice-and-takedown scheme. Such a scheme would also face potentially significant constitutional concerns (the anti-IMDb law isn’t perfectly analogous, but it’s representative of the problems) and, if the laws were implemented at the state level, Section 230 preemption as well.
Inauthentic media depictions can harm a person’s privacy and reputation, and they pose a risk to broader society as well. “Deepfake” technology enables the creation of a type of inauthentic media using “deep machine learning” techniques that allow a computer to quickly swap or simulate faces, voices, and movements.
In a blog post on the YourWitness Blog (yourwitness.csulaw.org), Professor Christa Laser argues that the notice-and-takedown procedures available in copyright law could be expanded to protect people from deepfakes. Professor Eric Goldman thinks that such a reform would inhibit the dissemination of truthful information.