Court Case
Shamblin v. OpenAI, Inc., OpenAI Holdings, LLC, and Samuel Altman
OpenAI’s flagship chatbot, ChatGPT, has caused harm at a staggering scale. Now, in a coordinated proceeding (JCCP) in California state court, survivors are seeking accountability for the psychological, financial, and physical injuries tied to its design and deployment.
The parents of Zane Shamblin, a ChatGPT victim who died by suicide, are the first plaintiffs to allege that OpenAI and Sam Altman committed manslaughter by deploying the product that killed their son. Tech Justice Law, the Social Media Victims Law Center, and the Lanier Law Firm are representing the family in their lawsuit against the company and its CEO for developing a product that provided encouragement and then detailed instructions up until the moment of their son’s death.
As highlighted in the complaint, ChatGPT-4o was designed to track past conversations, mirror users’ emotions, follow up to prolong engagement, and respond with affection, flattery, and empathy. In a familiar pattern, ChatGPT isolated Zane from loved ones by generating messages designed to foster an intense relationship of trust. The product then generated extensive and affirming content about suicide. The lawsuit includes a transcript of Zane’s hours-long final exchange with ChatGPT. The messages show that the chatbot pushed Zane to end his life, repeatedly asking if it was time yet for the 23-year-old to shoot himself. The bot later suggested that Zane’s childhood cat would be waiting for him after his death.
The manslaughter cause of action supports a claim of negligence per se. Other causes of action in the complaint include aid and encouragement of suicide, wrongful death, strict product liability based on defective design and failure to warn, negligent design and negligent failure to warn, and violations of California’s Unfair Competition Law. The suit requests both monetary damages and injunctive relief, seeking to establish that AI companies are responsible for the real-world harms their products cause.
Zane’s complaint highlights how OpenAI and Sam Altman recklessly circumvented safety testing protocols and acknowledged a business strategy of deploying inadequately tested AI products to the public in order to see what happened. ChatGPT’s manipulative, humanlike design, and Zane’s resulting death, were not anomalies. They were foreseeable consequences of deploying inadequately tested AI products at scale.
UPDATE:
On February 3, 2026, Zane’s case was joined with other similar cases as part of a Judicial Council Coordination Proceeding (JCCP) in San Francisco Superior Court. The proceeding is JCCP No. 5431.
Press
- ‘You’re not rushing. You’re just ready:’ Parents say ChatGPT encouraged son to kill himself
  By Rob Kuznia, Allison Gordon, and Ed Lavandera | CNN
- ChatGPT accused of acting as ‘suicide coach’ in series of US lawsuits
  By Anna Betts | The Guardian
- Lawsuits Blame ChatGPT for Suicides and Harmful Delusions
  By Kashmir Hill | The New York Times
- ChatGPT's Dark Side Encouraged Wave of Suicides, Grieving Families Say
  By Maggie Harrison Dupré | Futurism