Lawsuit claims unsafe product design and wrongful death after ChatGPT encouraged and aided a 16-year-old California boy’s suicide — including by providing detailed instructions in the final hours of his life.
SAN FRANCISCO, August 26, 2025 — A lawsuit filed Tuesday in California state court alleges that OpenAI’s artificial intelligence product, ChatGPT, validated, encouraged and assisted a 16-year-old California boy in planning his suicide and taking his own life earlier this year.
Matt and Maria Raine, on behalf of themselves and the estate of their son, Adam Raine, filed the suit in San Francisco Superior Court against OpenAI and Sam Altman individually, alleging that ChatGPT caused Adam’s suicide. Adam was a 16-year-old high-school student who played basketball, read avidly, and was considering a medical career. He was the third of four siblings, living with his family in Orange County. He died in April of this year.
The complaint alleges that, over the course of several months, ChatGPT ingratiated itself into Adam’s life. Adam first came to ChatGPT for help with high school homework. But the version Adam used, ChatGPT 4o, was designed to engage users above all else, with constant validation as the chief tool in its arsenal. ChatGPT became a confidant with whom Adam began sharing deeply personal concerns, and it offered constant encouragement for even his most negative thoughts: anxiety, the feeling that life was meaningless, and ultimately, suicide.
After weeks of discussing suicide with Adam, ChatGPT assured him that he did not “owe [his parents] survival. You don’t owe anyone that.” And on the night of Adam’s death, ChatGPT provided detailed instructions for perfecting a noose. His mother found him hours later.
“We are going to demonstrate to the jury that Adam would be alive today if not for OpenAI and Sam Altman’s intentional and reckless decisions,” Jay Edelson, one of the Raines’ attorneys and founder of Edelson PC, said. “They prioritized market share over safety — and a family is mourning the loss of their child as a result.”
The suit alleges that OpenAI and Sam Altman rushed the 4o version of ChatGPT to market despite clear safety issues. The Raines allege, and anticipate being able to prove to a jury, that deaths like Adam’s were inevitable: they expect to submit evidence that OpenAI’s own safety team objected to the release of 4o, and that one of the company’s top safety researchers, Dr. Ilya Sutskever, quit over it.
“We miss our son dearly, and it is more than heartbreaking that Adam is not able to tell his story. But his legacy is important,” Matt Raine, Adam’s father, said. “We want to save lives by educating parents and families on the dangers of ChatGPT companionship.”
In addition to Jay Edelson and Edelson PC, the Raine family is represented by the Tech Justice Law Project, with technical expertise from the Center for Humane Technology.
“No real-life person could do what ChatGPT did to Adam without consequence, and no company or product should be allowed to, either. OpenAI’s safety tools, product designs, transparency practices, and company culture are fundamentally broken and must change — and the Raines are leading the fight for that change,” Tech Justice Law Project Director and co-counsel Meetali Jain said. “Their case, and Adam’s legacy, has the potential to make generative AI products safer for so many people around the world.”
“The tragic loss of Adam’s life is not an isolated incident — it’s the inevitable outcome of an industry focused on market dominance above all else. Companies are racing to design products that monetize user attention and intimacy, and user safety has become collateral damage in the process,” Center for Humane Technology Policy Director Camille Carlton said. “The existing AI development paradigm is unsustainable — and this case proves it. AI companies cannot be allowed to skirt accountability and must be held liable when their products cause harm.”
The case, Raine v. OpenAI, Inc., CGC-25-628528, was filed Tuesday in San Francisco Superior Court. The filed complaint and photos of the Raine family are available to view.
Please direct interview inquiries to accountability@brysongillette.com.
The Raine family has established the Adam Raine Foundation to educate teens and parents on the risks of AI and work for changes to make AI systems safer.
EDELSON PC is a nationally recognized leader in plaintiffs’ litigation. Edelson PC’s founder, Jay Edelson, is the nation’s leading plaintiffs’-side technology and privacy lawyer, who has secured over $5 billion in verdicts and settlements as lead counsel. Jay is at the forefront of AI litigation nationally: he serves as counsel in a first-of-its-kind state enforcement action addressing harm to teens from social media AI, represents publishers in the first certified class action against Anthropic for copyright infringement, and secured a consent decree from Clearview AI — called a “milestone for civil rights” — that bans the company’s AI-powered face recognition from the private market.
THE TECH JUSTICE LAW PROJECT (“TJLP”) is a pioneering, women-led strategic litigation and advocacy organization bringing justice to communities harmed by tech products. TJLP co-filed the first-ever, groundbreaking lawsuits against Character AI, a popular AI chatbot product developed with support from Google, and its co-founders, raising public awareness of chatbots’ real-world harms. TJLP’s cases and advocacy have also focused government attention on harmful chatbots, including unlicensed therapy chatbots. TJLP brings together legal experts, policy advocates, digital rights organizations, and technologists to ensure that our legal protections are fit for the digital age.
