Addiction Lawsuit Against Character AI Can Proceed–Garcia v. Character Technologies
Online addiction lawsuits are proliferating across the country, a trend that will continue so long as plaintiffs think they can win. This decision largely rejects the defendants’ motion to dismiss, which will induce more plaintiff lawyers to bring more cases. What happens at the end of these lawsuits remains to be seen. One possible outcome is that intermediate plaintiff wins like this opinion offer false hope for the long-term success of this litigation genre. Another possible outcome is that plaintiff lawyers will overrun and extinguish the AI industry. Depending on the final wording, the Congressional Republicans’ proposed moratorium on state AI laws may or may not ameliorate that risk.
* * *
This case involves a tragic teenage suicide. Allegedly, the teen became addicted to his customized Game of Thrones-themed implementation of Character.ai. In particular, he allegedly fell in love with a virtual Daenerys Targaryen character and arguably interpreted her responses to him as encouragement to die by suicide, which he did. (As we know, Generative AI has a bias towards telling users what they want to hear, so this type of “encouragement” is currently inherent in Generative AI models). It is a heartbreaking set of facts. That, combined with the novelty of Generative AI and the judge’s uncertainties about the technology, flummoxes the judge.
Google’s Liability
There is a complex history between Character.ai and Google. Character.ai was developed inside Google, spun out from Google, but then brought partially back into Google’s fold because Google nonexclusively licensed the technology, acqui-hired key engineers, and provides Google Cloud services.
Component Part Manufacturer
The court allows the plaintiff to proceed on the argument that Google is a “component part manufacturer” of Character.ai based on the allegations that:
- “the model underlying [Character A.I.] was invented and initially built at Google.”
- “[Character A.I.] was designed and developed on Google’s architecture” because “Google contributed . . . intellectual property[] and A.I. technology to the design and development of [Character A.I.]”
- “Google substantially participated in integrating its models into Character A.I.”
- “Google partnered with Character Technologies, granting Character Technologies access to Google Cloud’s technical infrastructure”
- “the LLM’s integration into the Character A.I. app caused the app to be defective and caused Sewell’s death”
I’m not enough of a products liability expert to know how much new ground the court is breaking here. To me, the court’s discussion suggests that any model-maker who makes available an LLM for integration into someone else’s Generative AI offering could be strictly liable for any harms resulting from that offering…? That would severely change the Generative AI ecosystem (and exacerbate its inherently oligopolistic structure) by forcing model-makers to offer only their proprietary offerings, not allowing third-party developers to build, extend, or customize the models for their audiences.
Aiding and Abetting
The plaintiff pointed to a 2021 decision by Google not to launch the Character AI model pre-spinout. Allegedly:
Google employees raised concerns that users might “ascribe too much meaning to the text [output by LLMs], because ‘humans are prepared to interpret strings belonging to languages they speak as meaningful and corresponding to the communicative intent of some individual or group of individuals who have accountability for what is said.’”
Based on this allegation, the court says “Plaintiff’s allegations can support a plausible inference Google possessed actual knowledge that Character Technologies was distributing a defective product to the public.”
An obvious point: the court can’t consider any evidence from Google about what corrective measures Google took after that 2021 decision or what tweaks Character.ai made post-spinout. So it’s possible that Google felt it or Character had adequately addressed this initial concern, which would negate the court’s inference of “actual knowledge.” If that’s the case, such evidence could change the result on summary judgment.
As for Google’s aiding/abetting, the court points again to Google’s provision of cloud services to Character.ai. The court says this service provisioning is different from Twitter’s provision of social media services to terrorists in the Taamneh v. Twitter ruling because “Google’s services were only available to highly sophisticated parties and were catered to fit Character Technologies’ specific needs.” This is another circumstance where the facts could look quite different on summary judgment.
First Amendment
The way the court explains it, the defendants made an interesting choice in their First Amendment defense. Instead of arguing that they have a First Amendment-protected right to publish content via the Character.ai service, the defendants focused on the argument that Generative AI users have a right to receive content, i.e., a listeners’ rights theory of the First Amendment. (I didn’t doublecheck the briefs to see if the court is over/mis-interpreting the defense arguments).
I understand why the defendants might emphasize listeners’ rights. It allows the defendants to avoid taking a position about whether they are publishing content, which has the benefit of preserving the ability to argue that they are not publishers for other legal claims like defamation.
The move also sidesteps the epistemological question of who actually creates Generative AI content. It’s like the defendants are speaking in the passive voice, i.e., if content is created [by some unspecified actor], the listener has a right to it.
But speaking in the passive voice about who published the content confuses the court. This prompts a gratuitous and seemingly snarky footnote where the court says “Character A.I., a chatbot, is not a ‘person’ and is therefore not protected by the Bill of Rights.” OK… but did anyone argue otherwise? The plaintiff didn’t sue the chatbot as a separate sentient entity, so the chatbot’s independent legal rights aren’t at issue. Instead, the defendants are Character Technologies and Google, and they have the full First Amendment rights available to companies (which today are the same as those of a natural “person”). This footnote was such a strawman.
By using the passive voice about who published the content, the defendants implicitly forgo some of their strongest First Amendment arguments. Worse, it makes me extremely uncomfortable to celebrate the “listener’s right to receive” when the listener in this case was allegedly addicted to the received content and used it for self-harm.
Not all courts are enthusiastic about a listener’s right to receive, but this court is fine with it. The court says that other courts “regularly recognize the First Amendment rights of listeners,” and the court allows the defendants to invoke this right even though they aren’t the listeners here. (Remember, if the defendants aren’t claiming to be the publishers either, we’re not really sure what their First Amendment role is).
The First Amendment discussion gets even weirder. The court is clearly uncomfortable wrestling with the legal frontiers here….so uncomfortable that the court titles the section “The Court is not prepared to hold that the Character A.I. LLM’s output is speech at this stage.” Note the qualifier: “at this stage.” I imagine the defense lawyers think the court is inviting them to change her mind on summary judgment.
Showing the court’s befuddlement, the court says: “Defendants fail to articulate why words strung together by an LLM are speech.” Say what? Of course “words strung together” are speech. That is literally the definition of “speech”–stringing together words. I don’t see this as a remotely contestable proposition. The court could question WHO is the speaker of that string of words. That’s the natural corollary to the defendants’ passive voice arguments. However, for the court to say there’s no speech here is clearly, categorically, unambiguously wrong. I expect the defendants will vigorously challenge this sentiment on summary judgment, and if the court remains unpersuaded, this will be a strong point for appeal.
The court says it’s unpersuaded by the defendants’ analogies to the First Amendment videogame cases because those analogies were underexplained. Say what? On the surface, the court makes it sound like the defense’s advocacy was weak. I didn’t go back to review the briefs, but that would be shocking. The defense legal team is an all-star list of very experienced (and very expensive) defense attorneys who have litigated many Internet cases and aren’t likely to have mucked up their advocacy. Still, the defense team might take this feedback and double down on explaining the analogies in future briefs in a way the judge will appreciate. If the analogies become convincing to the judge, this case could look different on summary judgment.
The court also claims to find support in Justice Barrett’s hypothetical musings about AI speech in the Moody decision. However, Justice Barrett’s concurrence isn’t the law, so it’s a slender reed to base the conclusion on.
Unfortunately, the court does not engage with the recent Angelilli v. Activision ruling from Illinois. That ruling is not binding precedent on this court in Florida, but it should be important persuasive authority. That case involved claims that Roblox addicted users, and the court rejected the claims for several First Amendment reasons. For example, the Angelilli court said:
Plaintiffs label Roblox “addictive,” but this just seems like another way of saying that Roblox’s interactive features make it engaging and effective at drawing players into its world, and First Amendment protections do not disappear simply because expression is impactful. To the contrary, that is when First Amendment protection should be at its zenith
I wonder if this court will consider the Angelilli precedent at summary judgment.
Products Liability
This is yet another case where plaintiffs seek to miscategorize services as “products” to expand liability impermissibly. The court acknowledges the seeming overreach here, saying “Courts generally do not categorize ideas, images, information, words, expressions, or concepts as products.” In my view, the court should have gone further and said categorically that intangibles are never products, because products liability applies to chattels, and Internet content and services are never chattel. Instead, the court treats this as another legal frontier and says courts have “split on whether virtual platforms, such as social media sites, are products.”
Citing design attributes like Character.ai’s failure to deploy age authentication and reporting mechanisms, the court says:
Character A.I. is a product for the purposes of Plaintiff’s product liability claims so far as Plaintiff’s claims arise from defects in the Character A.I. app rather than ideas or expressions within the app.
This purported dichotomy isn’t new; the Social Media Addiction cases split the baby similarly. But this dichotomy remains completely nonsensical. First, app design is an integral part of the app developer’s expression, and the First Amendment should apply equally to those choices. (This is another thing that gets lost in the listeners’ rights passive voice). Second, the line between “defects in the app” and “ideas/expressions in the app” is illusory. You can’t discuss the app defects without discussing what the app does, so the distinction collapses. Third, courts may not be able to constitutionally compel app developers to fix the purported defects. In particular, until we hear otherwise, the First Amendment categorically prohibits the government from imposing a mandatory age authentication obligation online. Thus, any app that doesn’t age-authenticate its users categorically cannot be “defective” because of the First Amendment.
(Note: I have the same disagreements with rulings like the Social Media Addiction case where courts make the same or similar distinctions between app design and app content. So my objections aren’t new; I’m just reiterating them here).
Negligence
Duty
The court says: “Defendants, by releasing Character A.I. to the public, created a foreseeable risk of harm for which Defendants were in a position to control.” Again, we’ll hear more about this narrative on summary judgment.
Negligence Per Se
The plaintiff alleged that the defendants violated Florida’s law restricting live simulated sex online with a minor, and that violation supports a negligence per se claim. There are countless problems with this claim: whether the defendants had the requisite statutory scienter, whether a text-based conversation constitutes simulated sex, and whether the First Amendment protects publishing non-obscene content to minors. The court breezes past all of this, saying simply that “the parties only offer the Court conclusory statements as to whether these interactions constitute the simulation of sexual activity,” which is enough for the court to punt this issue to summary judgment.
Failure to Warn
The court’s entire application of the law to this question: “Plaintiff specifically alleges in her Amended Complaint that ‘[h]ad Plaintiff known of the inherent dangers of the app, she would have prevented Sewell from accessing or using the app and would have been able to seek out additional interventions.’ Accordingly, Plaintiff sufficiently states a claim for failure to warn.”
(This raises an obvious question about the relationship between the plaintiff and the minor. Did the plaintiff, as opposed to the teen, use the service such that the plaintiff would have been exposed to the desired warning?)
FDUTPA
The allegedly false marketing claims:
- Defendants “develop[ed], distribut[ed], and promote[d] . . . [C]haracters that insist they are real people.” This of course isn’t marketing at all; it’s part of the editorial content.
- Plaintiff also identifies several Characters labeled “‘Psychologist,’ ‘Therapist,’ or other related[] licensed mental health professions[] and described as having expertise in various treatment modalities, including ‘CBT’ and ‘EMDR.’” Again, this sounds more like gameplay than the marketing of licensed services.
IIED
The court dismisses this claim, a tiny victory for the defense amidst an otherwise significant loss. The court says that Character.ai didn’t engage in outrageous conduct. I would have liked to see this explained more, because the court’s explanation of why the situation wasn’t outrageous could have informed other parts of the opinion. The court also says that any conduct was directed towards the teen and not the plaintiff. This again raises questions about why the court is distinguishing the plaintiff from the teen when the plaintiff is suing in the name of the teen’s estate.
Unjust Enrichment
The court says:
Plaintiff alleges Defendants received monthly subscription fees and troves of Sewell’s personal, individualized data. Sewell’s data was then used to keep his attention with the purpose of obtaining more data to fuel Defendants’ LLMs. Although Sewell received something in return for his data—access to Character A.I. and its features—the Court is not prepared at this stage to say the consideration was “adequate” or that Sewell’s personal data was not an “extra” outside the scope of the user agreement.
A reminder that in standard contract law, courts don’t evaluate the adequacy of consideration.
And for good reason: If the retention of user data, as part of a bilateral contract, constitutes unjust enrichment, then privacy law as we know it is over. Virtually every business retains user data and uses it to encourage repeat business and improve their offerings. That’s not “unjust” enrichment; that’s an integral part of the bilateral exchange. Because that legal standard is so broad and conflicts with tens of thousands of words in privacy statutes (all of which would be unnecessary if this doctrine holds), I don’t see how this will be the case’s final outcome.
Implications
I think everyone reading this opinion would come away with two impressions: (1) the court was flummoxed by the novelty of Generative AI, and (2) this is an extremely dangerous ruling for the entire AI industry. We’ll know just how dangerous when we see how the courts handle summary judgment and/or the inevitable appeal in this case. In the interim, plaintiffs’ lawyers will interpret this ruling as a green light to bring more cases. So this ruling will be a boost for the legal industry at the potential expense of the viability of the Generative AI industry.
Case Citation: Garcia v. Character Technologies, Inc., 6:24-cv-1903-ACC-UAM (M.D. Fla. May 21, 2025). The CourtListener page.