ChatGPT Defeats Defamation Lawsuit Over Hallucination–Walters v. OpenAI
Mark Walters "is a nationally prominent radio show host who hosts two nationally syndicated radio programs and identifies himself as 'the loudest voice in America fighting for gun rights.'"
Riehl is a journalist. Both Walters and Riehl are associated with the Second Amendment Foundation (Walters as East Coast media spokesperson, Riehl as a board member). Riehl is familiar with the tendency of Generative AI to hallucinate and saw OpenAI’s multiple disclaimers about that risk. Riehl asked ChatGPT to summarize a lawsuit involving the foundation. ChatGPT incorrectly indicated that Walters had been accused of embezzling funds from the foundation. Riehl knew that ChatGPT’s claim was fishy and that the ChatGPT version he was using had a training data cutoff date that preceded the lawsuit’s filing. Within 90 minutes, Riehl knew that ChatGPT had hallucinated the Walters accusation.
Nevertheless, Walters sued OpenAI for defaming him to Riehl. The case goes nowhere.
The Court’s Opinion
No Defamatory Statement
The court says "a reasonable reader in Riehl’s position could not have concluded that the challenged ChatGPT output communicated 'actual facts'" because the lawsuit was filed after ChatGPT’s training cutoff date and ChatGPT had displayed disclaimers about hallucinations. Thus,
a reasonable user like Riehl – who was aware from past experience that ChatGPT can and does provide "flat-out fictional responses," and who had received the repeated disclaimers warning that mistaken output was a real possibility – would not have believed the output was stating "actual facts" about Walters without attempting to verify it…
Because Riehl did not believe the output, it did not communicate defamatory meaning as a matter of law.
Negligence Scienter
Walters can’t show that OpenAI was negligent about the veracity of the facts:
Walters has identified no evidence of what procedures a reasonable publisher in OpenAI’s position would have employed based on the skill and experience normally exercised by members of its profession. Nor has Walters identified any evidence that OpenAI failed to meet this standard.
Walters argued that it was negligent to release ChatGPT to the public knowing it would hallucinate. The court says that’s not the right standard:
Walters has not identified any case holding that a publisher is negligent as a matter of defamation law merely because it knows it can make a mistake, and for good reason. Such a rule would impose a standard of strict liability, not negligence, because it would hold OpenAI liable for injury without any “reference to ‘a reasonable degree of skill and care’ as measured against a certain community.”…
Walters’ argument would mean that an AI developer like OpenAI could not operate a large language model like ChatGPT at all, no matter the care it took to reduce the risk of errors, without facing liability for any mistaken output the model generated. That is not a negligence standard.
Actual Malice Scienter
“Walters qualifies as a public figure given his prominence as a radio host and commentator on constitutional rights, and the large audience he has built for his radio program. He admits that his radio program attracts 1.2 million users for each 15-minute segment.”
With respect to the disclosure about the alleged embezzlement, the court classifies Walters as a limited purpose public figure.
The court then says:
the undisputed evidence establishes that OpenAI did not act with "actual malice." As OpenAI’s expert Dr. White explained – whose evidence Walters did not attempt to rebut – "OpenAI has gone to great lengths to reduce hallucination in ChatGPT and the various LLMs that OpenAI has made available to users through ChatGPT."…
OpenAI made substantial efforts – which OpenAI’s expert, in unrebutted testimony, characterized as "industry-leading" – to avoid errors of this kind and to warn users that such errors might occur and that users should evaluate output to identify any errors that might exist.
Protip: if you’re going to take on a multi-billion dollar giant over its core business, you might want to rebut its experts.
Damages
“Walters conceded at his deposition that he did not incur actual damages and is not seeking actual damages here.” Dafuq??? What are we even doing here? He can’t get punitive damages because he didn’t make the necessary retraction request. Presumed damages are irrelevant because Walters admitted he wasn’t damaged. Also, presumed and punitive damages would require actual malice because the hallucination related to a matter of public concern.
Implications
Wow, this lawsuit was terrible. The court rejects it three separate ways: no defamatory statement, no scienter about the facts on OpenAI’s part, and no damages. In light of that trifecta of futility, Walters never should have brought this case. In particular, he should have dismissed the case immediately after he admitted in his deposition that he suffered no damages.
The whole lawsuit had a lot of “OK, boomer” energy, i.e., “hey, that tarnation roboty thingy had a hallucination that made no difference to my life…time to sue!” Life is too short to file lawsuits like this. Our country and world are going to shit, and this is what we’re spending our time on??? If there’s a legal mechanism to do so, I think the court should fee-shift OpenAI’s (presumably massive) attorneys’ fees to Walters for wasting everyone’s time. (The defense team had at least six lawyers across three firms, including what I suspect are at least two lawyers in the $2k+/hr club.)
Because the case was so terrible, we can only glean limited insights from the opinion about when Generative AI model-makers might be liable for hallucinations. Some of the possibly persuasive implications of this ruling:
- If the query submitter knows that Generative AI hallucinates, that knowledge may reduce the likelihood that a court will treat the model’s output as a defamatory statement of fact.
- The court credited OpenAI’s disclaimers about hallucinations. If those disclaimers are sufficient to negate defamation liability, then this litigation genre is moot.
- The court credited industry-standard practices in evaluating if OpenAI was negligent about the hallucinations. However, this is a mixed bag: as the big dog, OpenAI essentially sets its own industry standard, so OpenAI will likely always satisfy that standard while smaller players without OpenAI’s resources might be at a disadvantage. Plus, any time courts reference industry standards, that raises expensive fact questions (what is the industry? what are the standards? should the defendant have adopted new or niche innovations that work better than prevailing standards?) that can help turn litigation into lawfare.
- The court implies that taking anti-hallucination steps categorically negates actual malice.
- The court rejected the argument that a model-maker is strictly liable for every hallucination merely because it knows its model can hallucinate.
While those payoffs aren’t nothing, I feel stupider after losing a holiday weekend afternoon reading this opinion that so easily swatted down such a bogus case. Maybe Walters should have consulted ChatGPT about the odds of his legal success before suing.
Case Citation: Walters v. OpenAI, LLC, 23-A-04860-2 (Ga. Superior Ct. May 19, 2025). The complaint.