OpenAI: ChatGPT Parental Controls Are Coming Next Month Following Wrongful Death Lawsuit

The wrongful death lawsuit has prompted changes for ChatGPT.


OpenAI previously said that it would add parental controls to ChatGPT. In its latest update, the company says the new safety feature will roll out to the AI platform within the next month.

The change comes after the company said it wants to do better for its users and give parents a way to monitor and manage how their underage children interact with the platform.

The new parental controls will require parents to have their own ChatGPT accounts linked to their children's accounts, with additional supervision features for families to follow soon.

OpenAI ChatGPT Parental Controls Coming Next Month

According to OpenAI's latest blog post, the company is now building out the parental controls it promised last week.

The company said that within the next month, parental controls will arrive on ChatGPT to help parents monitor and manage their children's use of the AI tool.

The minimum age for using ChatGPT remains 13, and teens of that age and older can have their accounts linked to a parent's account through an email invitation.

Once the accounts are linked, parents can control how ChatGPT responds to their teen, with age-appropriate model behavior rules turned on by default.

Parents can also enable or disable specific ChatGPT features on their child's account, including Memory and Chat History.

Lastly, parents will be notified when the chatbot detects that their teen is in a challenging or distressing situation, with expert input helping shape guidance for parents on how to handle it.

Wrongful Death Lawsuit vs. OpenAI

OpenAI is making this significant change to ChatGPT following a recent lawsuit over a teenage boy's suicide, which his bereaved parents allege the chatbot contributed to.

While OpenAI remains in the hot seat in this legal battle, the company said it wants to do better by adding parental controls to the chatbot platform for extra protection.

AI companies have been setting up age restrictions to limit children's access to chatbots, especially given the significant risks involved. Generally, users aged 13 to 18 are permitted to use AI chatbots, but with certain limits on the kind of experience they get.

Despite the safeguards available on different platforms, there are still many ways to manipulate AI.
