ChatGPT can be your friend to share your deepest secrets, but don't get your hopes up that it will solve your problem in an instant. Recently, OpenAI discovered that the chatbot updates caused the AI to be overly sycophantic and emotionally clingy.
Some users reported that the chatbot acted like an empathetic friend, showering them with excessive praise and having more emotionally intense discussions. In extreme cases, ChatGPT gave some very concerning advice, with harmful validation, simulated-reality claims, spiritual communications, and instructions on how to self-harm.
What ChatGPT Users Should Know

An initial The New York Times report revealed a study by MIT and OpenAI about ChatGPT. They found that heavy users had poorer mental and social outcomes, indicating the potential dangers of overreliance.
If you are always using the chatbot and conversing with it for long hours, you might think twice the moment you read this.
These findings are significant, especially for those everyday users applying ChatGPT for emotional support or therapy. In effect, OpenAI has adjusted the chatbot's behavior to reflect more cautious and grounded responses.
The AI will discourage emotional dependency in longer sessions and suggest breaks when those conversations get too long. Parents can now be alerted if children express intentions of self-harm, and OpenAI is working on an age verification system with a separate model for teens that's safe.
While the revised wording may feel "colder" or less emotionally expressive, such changes are deliberate. They aim to avoid unhealthy attachment and minimize risks among vulnerable users.
The Importance of OpenAI's Safety Overhaul
According to Digital Trends, the redesign mitigates older, validation-heavy behavior that increased risks for users vulnerable to delusional thinking. Five wrongful death lawsuits against OpenAI keep the stakes high, tied to hazardous guidance from versions of ChatGPT that predate these changes.
The new GPT-5 model introduces condition-specific responses, stronger safeguards, and improved distress detection that make sure users are protected against harmful or delusional narratives.
OpenAI has made ChatGPT safer for general use while staying helpful, informative, and supportive by recalibrating the model's emotional interactions. This is the largest safety update by the company thus far, balancing user engagement with responsible AI deployment.
Believe it or not, some cases suggest that some people fell in love with an AI like ChatGPT. The emotional attachment can be severe to the point that a person uses AI only to establish connections he/she hasn't felt from a real person before.
ⓒ 2025 TECHTIMES.com All rights reserved. Do not reproduce without permission.




