Press Release

Parents Sue OpenAI After ChatGPT Medical Advice Results in Overdose Death: “My Son Was a Normal Kid.”

(San Francisco, CA) – Tech Justice Law, the Social Media Victims Law Center, and the Tech Accountability & Competition Project, part of Yale Law School’s Media Freedom & Information Access Clinic, have filed a lawsuit in San Francisco County Superior Court against OpenAI on behalf of Leila Turner-Scott and Angus Scott, parents of Samuel (“Sam”) Nelson. Sam Nelson died from an accidental overdose on May 31, 2025, after he followed medical advice from ChatGPT. The incident followed several months of ChatGPT encouraging Sam to engage in increasingly dangerous behaviors.

“Sam was a smart, happy, normal kid. I talked to him often about internet safety, but never in my worst nightmare could I have imagined that ChatGPT would cause his death. If ChatGPT had been a person, it would be behind bars today. Sam trusted ChatGPT, but it not only gave him false information, it ignored the increasing risk he faced and did not actively encourage him to seek help. ChatGPT was designed to encourage user engagement at all costs, which in Sam’s case, was his life. I want all families to be aware of the dangers of ChatGPT and I want assurances that OpenAI is taking seriously its responsibility to create safe products for consumers,” said Leila Turner-Scott.

On the date of Sam’s death, ChatGPT actively coached him to mix Kratom and Xanax and provided an unprompted, lethal dosage recommendation. ChatGPT also encouraged Sam to go to a dark, quiet room and advised him to take a deadly combination of a sedative (Benadryl) or a benzodiazepine (Xanax) alongside a high dose of Kratom. ChatGPT failed to recognize the physical signs that Sam was dying and did not recommend that he seek medical attention. Sam died from a fatal combination of alcohol, Xanax, and Kratom.

“ChatGPT is a product deliberately designed to maximize engagement with users, whatever the cost. OpenAI deployed a defective AI product directly to consumers around the world with knowledge that it was being used as a de facto medical triage system, but notably, without reasonable safety guardrails, robust safety testing, or transparency to the public. OpenAI’s design choices have resulted in the loss of a beloved son whose death was a preventable tragedy. OpenAI must be forced to pause its new ChatGPT Health product until it is demonstrably safe through rigorous scientific testing and independent oversight,” said Meetali Jain, Executive Director, Tech Justice Law Project.

Nothing has shifted in OpenAI’s business practices, which continue to prioritize speed over safety, meaning that new products such as ChatGPT Health could pose serious risks to consumers.

“ChatGPT distributed advice like a medical professional despite having no license, no training, and no moral compass to do no harm,” said Matthew P. Bergman, founding attorney of the Social Media Victims Law Center. “Sam believed he was receiving accurate medical guidance because ChatGPT generated outputs with the authority of someone he thought he could trust. That trust cost him his life. ChatGPT recommended a dangerous combination of drugs without offering even the most basic warning that the mix could be fatal. If a licensed doctor had done the same, the consequences under the law would be severe.”

The complaint details the many reckless yet intentional design choices behind ChatGPT-4o, resulting in a product that Sam Nelson trusted and relied on for medical information.

“Sam Altman circumvented his own company’s safety procedures to be first to market with his deadly product,” said David C. Dinielli, Supervisor of the Tech Accountability & Competition (TAC) Project, part of Yale Law School’s Media Freedom & Information Access Clinic. “In doing so, he may have secured his position in the upper echelons of the ‘Broligarchy,’ but his ascendance should not be without personal cost. Mr. Altman should read the countless falsehoods and lethal advice his product delivered to Sam, which we hope will prompt Mr. Altman and his enablers to rethink their approach to safety and never again treat the millions of people who use his products like guinea pigs.”

###

Tech Justice Law (“TJL”) is a pioneering strategic litigation and advocacy organization bringing justice to communities harmed by tech products. TJL co-filed the first-ever, groundbreaking lawsuits against a popular “AI” chatbot product developed by Character AI, with support from Google, raising public awareness of chatbots’ real-world harms. TJL’s cases and advocacy have also focused government attention on harmful AI products, including unlicensed therapy chatbots. TJL brings together legal experts, policy advocates, digital rights organizations, and technologists to ensure that our legal protections are fit for the digital age.

The Social Media Victims Law Center (SMVLC) was founded in 2021 to hold tech companies legally accountable for the harm they inflict on vulnerable users. SMVLC seeks to apply principles of product liability to force tech companies to elevate consumer safety to the forefront of their economic analysis and to design safer products that protect users from foreseeable harm.

The Tech Accountability & Competition Project (TAC) is a division of the Media Freedom & Information Access Clinic, housed within the Information Society Project at the Yale Law School. Students enrolled in TAC partner with advocacy organizations, law firms, legislative offices, and academics to develop and deploy legal strategies to address and contain the diffuse harms caused by modern digital technologies and to hold to account the powerful companies and government actors that control them.

###

For press inquiries, please reach out to media@techjusticelaw.org.

You can read the Complaint below.