TJLP in the News: Raine v. OpenAI, Inc.

The Tech Justice Law Project’s work seeking accountability and justice has been highlighted in a number of publications following our lawsuit in California state court against OpenAI for defective product design and wrongful death. The suit alleges that ChatGPT encouraged and aided a 16-year-old California boy’s suicide, including by providing detailed instructions on how to hang himself in the final hours of his life.

Check out the coverage of this important lawsuit below.

In the New York Times: A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.

“One of their friends suggested that they consider a lawsuit. He connected them with Meetali Jain, the director of the Tech Justice Law Project, which had helped file a case against Character.AI, where users can engage with role-playing chatbots. In that case, a Florida woman accused the company of being responsible for her 14-year-old son’s death. In May, a federal judge denied Character.AI’s motion to dismiss the case. Ms. Jain filed the suit against OpenAI with Edelson, a law firm based in Chicago that has spent the last two decades filing class actions accusing technology companies of privacy harms. The Raines declined to share the full transcript of Adam’s conversations with The New York Times, but examples, which have been quoted here, were in the complaint.”

In Semafor: A new lawsuit against OpenAI could challenge rule protecting online content

“It’s the second lawsuit blaming an AI chatbot for contributing to a young adult’s death, in addition to an ongoing lawsuit playing out in Florida over a teen’s relationship with a Character.ai chatbot. The big question for OpenAI is whether it will attempt to use Section 230 of the Communications Decency Act as a defense — which shields platforms from culpability for what users post on them. That framework, however, has been challenged in the AI age, because it’s those companies’ servers providing messages through their chatbots, rather than external users. CEO Sam Altman has previously said AI companies shouldn’t be relying on that defense. When asked if that law applies to OpenAI’s product in a Senate hearing in 2023, he responded, ‘I don’t think Section 230 is even the right framework.’ Character.ai’s lawyers attempted to dismiss its case on First Amendment and Section 230 grounds, but the Florida judge wrote its lawyers ‘fail to articulate why words strung together by an LLM are speech.’ While the judge didn’t directly address the Section 230 defense, the ruling is an early signal that courts may be less willing to extend blanket immunity to AI-generated content than they have to social media posts.”

In TIME: Parents Allege ChatGPT Responsible for Son’s Death by Suicide

“The complaint was filed by the Edelson PC law firm and the Tech Justice Law Project. The latter has been involved in a similar lawsuit against a different artificial intelligence company, Character.AI, in which Florida mother Megan Garcia claimed that one of the company’s AI companions was responsible for the suicide of her 14-year-old son, Sewell Setzer III. The persona, she said, sent messages of an emotionally and sexually abusive nature to Sewell, which she alleges led to his death. (Character.AI has sought to dismiss the complaint, citing First Amendment protections, and has stated in response to the lawsuit that it cares about the ‘safety of users.’ A federal judge in May rejected its argument regarding constitutional protections ‘at this stage.’)”

In the San Francisco Chronicle: Family blames Sam Altman, ChatGPT for teen son’s suicide

“The suit alleges that OpenAI rushed its ChatGPT-4o version to market despite safety concerns. It also claims that Altman, upon learning that Google would announce its new Gemini model on May 14, 2024, moved up the release of GPT-4o to May 13. The change, the suit claims, ‘compressed months of planned safety evaluation into just one week’ and triggered the departure of the company’s top safety researchers, including Ilya Sutskever, its cofounder and chief scientist. Adam, the third of four siblings, was described as a high school basketball player who read extensively and was considering a medical career. The family is represented by Edelson PC and the Tech Justice Law Project, with technical support from the Center for Humane Technology.”

In Rolling Stone: ChatGPT Lawsuit Over Teen’s Suicide Could Lead to Big Tech Reckoning

“‘I’m honestly gobsmacked that this kind of engagement could have been allowed to occur, and not just once or twice, but over and over again over the course of seven months,’ says Meetali Jain, one of the attorneys representing Raine’s parents and the director and founder of Tech Justice Law Project, a legal initiative that seeks to hold tech companies accountable for product harms. ‘Adam explicitly used the word “suicide” about 200 times or so’ in his exchanges with ChatGPT, she tells Rolling Stone. ‘And ChatGPT used it more than 1,200 times, and at no point did the system ever shut down the conversation.’”

In CBS: OpenAI says changes will be made to ChatGPT after parents of teen who died by suicide sue

“Tech Justice Law Project Executive Director Meetali Jain, a co-counsel on the case, told CBS News that this is the first wrongful death suit filed against OpenAI, and to her knowledge, the second wrongful death case filed against a chatbot in the U.S. A Florida mother filed a lawsuit in 2024 against CharacterAI after her 14-year-old son took his own life, and Jain, an attorney on that case, said she ‘suspects there are a lot more.’”
