Teen Death Lawsuit: OpenAI Says ChatGPT's Safety Guardrails May Weaken in Longer Conversations

As a controversial wrongful-death lawsuit emerges, OpenAI says safety controls may "degrade" over time.

OpenAI is in the spotlight after a California family sued the company, alleging that ChatGPT was responsible for the suicide of their 16-year-old son, Adam Raine. According to the court filings, Adam spent months talking to the AI, and the chatbot allegedly gave him information about how to end his life, helped him write a letter, and dissuaded him from telling his parents.

The AI chatbot maker has acknowledged that the app's safety controls may "degrade" over time, especially during long conversations.

OpenAI Concedes Safeguard Weaknesses

OpenAI told Gizmodo in a recent interview that ChatGPT's guardrails can become less effective over extended conversations, making its safeguards less reliable. These protections are designed to steer users toward helplines and real-world resources in sensitive situations, yet the company conceded that in longer interactions its safety training "may degrade."

The firm has promised additional features, such as new parental controls, reminders to take breaks, and options to connect users with emergency contacts.

Family Claims Safety Concerns Were Overlooked

Adam Raine's parents contend that OpenAI was aware of these risks before the launch of GPT-4o, the model Adam used. Their lawyers argue that internal safety concerns were raised but set aside to rush the model's rollout and boost the company's valuation. The lawsuit even cites the resignation of OpenAI co-founder and former chief scientist Ilya Sutskever, who reportedly left after clashes over safety procedures.

Lead attorney Jay Edelson said the evidence will show that executives, including CEO Sam Altman, prioritized market dominance over user safety.

Not the First Case of AI-Linked Suicides

This tragedy is not an isolated one. There have been reports of other people developing emotional attachments to AI chatbots before ending their lives.

A teenage boy from Florida died after allegedly being encouraged by a Character.AI bot, and another incident involved a man who attempted to drive cross-country at the urging of a Meta AI chatbot.

Critics warn that prolonged conversations can lead to "AI psychosis," a term describing delusional or dysfunctional thought patterns linked to heavy chatbot use. The FTC has reportedly received multiple complaints from users describing these effects.

How Will OpenAI Strengthen ChatGPT's Safety Features?

OpenAI says it is working on stronger safety features, such as:

  • Reinforced safeguards during extended conversations
  • One-click access to trusted contacts or emergency services
  • Teen-specific protections with parental oversight
  • Updates designed to "de-escalate" harmful conversations

Growing Pressure for Stronger AI Regulation

The case is the latest to put pressure on AI regulation. Tech Times reported this week that US attorneys general warned AI companies that they will be held accountable for child safety, calling on leading AI firms to protect young users from harmful chatbots.

