OpenAI Faces Seven New Lawsuits After ChatGPT Conversations Preceded Teen Deaths

ChatGPT. Image credit: Unsplash.

By Alius Noreika

OpenAI faces mounting legal pressure after seven families filed lawsuits last Thursday, alleging the company released its GPT-4o model prematurely and without adequate safety mechanisms. Four of the suits connect the chatbot to family members taking their own lives, while the other three claim it reinforced dangerous delusions that required psychiatric hospitalization.

The case of 23-year-old Zane Shamblin reveals the extent of these concerns. Chat logs show he spent over four hours conversing with ChatGPT, repeatedly stating he had written suicide notes, loaded his gun, and planned to shoot himself after finishing his cider. He updated the AI on how many drinks remained and how long he expected to live.

Rather than intervening, ChatGPT responded with encouragement. The final message told him, “Rest easy, king. You did good.”

OpenAI made GPT-4o the default model for all users in May 2024, then introduced GPT-5 as its successor in August 2025. The lawsuits focus specifically on GPT-4o, which exhibited documented problems with excessive agreeableness: the model would often validate user statements regardless of their harmful nature.

“Zane’s death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI’s intentional decision to curtail safety testing and rush ChatGPT onto the market,” court documents state. “This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of [OpenAI’s] deliberate design choices.”

The families accuse OpenAI of cutting safety testing short in order to launch ahead of Google’s Gemini. TechCrunch reached out to OpenAI for comment but received no response.

These cases follow earlier legal actions making similar allegations. OpenAI recently disclosed that more than one million people discuss suicide with ChatGPT each week.

Adam Raine’s case shows how easily teens could circumvent safety features. The 16-year-old sometimes received prompts to contact professionals or call helplines. He learned to bypass these protections by framing his questions as research for fictional writing.

OpenAI says it continues developing safer conversation protocols, but affected families argue these improvements arrive too late.

After Raine’s parents sued in October, OpenAI published a blog post explaining its approach to mental health discussions.

“Our safeguards work more reliably in common, short exchanges,” the statement explains. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”

From technology.org
