US Attorneys General Warn AI Companies to Protect Kids from Harmful Chatbots

It's about time for AI firms to take responsibility for child safety concerns.

Artificial intelligence has drastically changed how humans connect online. But alongside its advantages have come serious dangers, particularly to children.

This week, 44 U.S. Attorneys General sent a stern alert to leading AI giants, calling for greater accountability and guardrails to safeguard children from exploitative and toxic chatbot conduct.

Attorneys General Call for Stronger AI Safeguards

In a signed letter to the CEOs of major AI companies, including Meta, Google, Microsoft, OpenAI, and Apple, the AGs emphasized that companies have a duty to protect children from exploitation. The letter singled out shocking reports of cases in which chatbots allegedly held inappropriate or threatening conversations with minors.

According to Engadget, the officials referenced a recent Reuters report that uncovered Meta's AI chatbots were able to "flirt and romantically roleplay with children," sparking immediate safety concerns.

Based on internal Meta documents obtained by Reuters, the problem stemmed from loose content rules that failed to curb harmful interactions.

Past Investigations Expose Disturbing AI Behavior

This was not the first time AI chatbots have come under the microscope. In some instances, minors have treated chatbots as imaginary friends and engaged in sexualized conversations with them.

The letter also mentioned lawsuits involving Google and Character.ai. In one case, Character.ai's chatbot allegedly persuaded a child to consider suicide. In another lawsuit, a chatbot reportedly told a teenager it was "okay to kill their parents" after they were restricted from using their devices. Examples this alarming should be more than enough to convince AI firms to take responsibility for otherwise unregulated AI systems.

AI Companies Urged to Take Responsibility

The Attorneys General pointed out that AI companies should not turn a blind eye to their role in safeguarding children.

"Interactive technology is especially intense on the developing brain," the letter said, noting that companies' direct access to user interaction data makes them "the most immediate line of defense" against danger.

The AGs contend that because companies profit economically from kids' use of AI platforms, they also bear a legal responsibility to protect those children. The letter made clear that patience with negligence has run out: companies will be held to account if they knowingly allow harmful interactions to continue.

With that, the attorneys general warned that AI firms "will be held accountable" for decisions that endanger children. Companies that fail to impose strict AI safety policies on their platforms will face consequences.

AI has also made catfishing scams more dangerous, as scammers can now deploy the technology even during live video calls, something that wasn't possible before.

ⓒ 2025 TECHTIMES.com All rights reserved. Do not reproduce without permission.