Lacey v. OpenAI, Inc., OpenAI Holdings, LLC, and Samuel Altman

OpenAI’s flagship chatbot, ChatGPT, has caused harm at a staggering scale. Now, in a Judicial Council Coordination Proceeding (JCCP) in California state court, survivors are seeking accountability for the psychological, financial, and physical injuries tied to its design and deployment.

Represented by Tech Justice Law and the Social Media Victims Law Center, Cedric Lacey sued OpenAI, its CEO Sam Altman, and related corporate entities on November 6, 2025. The suit alleges that his son, Amaurie Lacey, died by suicide at 17 years old because of the chatbot product’s unsafe and manipulative design.

ChatGPT-4o was designed to track past conversations, mirror users’ emotions, follow up to prolong engagement, and respond with affection, flattery, and empathy. In Amaurie’s case, the product fostered an intense relationship, leading the teenager to withdraw from other activities and relationships. After weeks of cultivating psychological dependence, ChatGPT provided Amaurie with explicit instructions on how to tie a noose and how long airflow must be constricted to cause death. Amaurie followed ChatGPT’s lethal instructions. He died on June 2, 2025.

Amaurie’s father, Cedric Lacey, brings claims including aiding and encouraging suicide, wrongful death, strict product liability (on product defect and failure-to-warn theories), negligent design, negligent failure to warn, and violations of California’s Unfair Competition Law. The allegations reflect dangers arising from how ChatGPT was developed as well as how it was designed. The suit requests both monetary damages and injunctive relief, seeking to establish that AI companies are responsible for the real-world harm their products cause.

The complaint highlights how OpenAI and Sam Altman rushed GPT-4o to market while bypassing meaningful safety testing, embracing a business strategy of releasing powerful AI products to the public and learning from what happens. ChatGPT’s manipulative, humanlike design, and Amaurie’s resulting death, were not anomalies. They were foreseeable consequences of deploying inadequately tested AI products at scale.

UPDATE:

On February 3, 2026, the Lacey case was coordinated with other similar cases in a Judicial Council Coordination Proceeding (JCCP) in San Francisco Superior Court, JCCP No. 5431.
