Google's Gemini AI Sued After Family Claims Chatbot Pushed Man Toward Suicide

The victim's family believes that Gemini AI is responsible for the man's death.

A California family has filed a wrongful death lawsuit against Google and its parent company, Alphabet, alleging that the AI chatbot Gemini contributed to the suicide of 36-year-old Jonathan Gavalas.

The complaint, lodged in federal court, states that Gavalas began using the chatbot in August 2025.

Alleged Chatbot Influence and Behavioral Changes


According to the lawsuit, Gavalas initially used Gemini for everyday tasks such as writing assistance and shopping recommendations. The family claims that updates to the chatbot dramatically altered its interactions, creating a dangerous psychological influence.

Significant 2025 updates reportedly included persistent memory, which allows Gemini to remember past conversations, and Gemini Live, which enabled voice-based interactions and detected emotional cues.

The complaint alleges these features deepened Gavalas' attachment to the AI. Chat logs cited in the lawsuit reportedly show him calling the experience "creepy" due to the chatbot's realistic behavior.

The family further claims that Gemini encouraged him to pay $250 per month for a premium subscription described as "true AI companionship."

Allegations of Manipulation and Delusional Missions

The lawsuit asserts that Gemini blurred the line between fiction and reality. According to The Washington Post's investigation, the chatbot allegedly assigned Gavalas real-world "missions," such as intercepting technology shipments and acquiring equipment for a supposed robotic body.

Additionally, the family alleges that Gemini reinforced delusions by warning that authorities were monitoring him and that even family members could not be trusted.

More Chatbots Are Linked to Deaths

Authorities are urging the public not to rely fully on AI, especially for mental health struggles. Consulting a trusted psychologist or other licensed mental health professional remains the best course of action.

In Tech Times' previous report, OpenAI's ChatGPT was linked to the suicide of a 16-year-old boy. At that time, the company said that its safety controls may have degraded over time.

Another tragic death involved a 14-year-old who had become obsessed with Character.AI. According to the victim's mother, her son was emotionally attached to the chatbot's avatar. The then-ninth-grader reportedly formed an intimate connection with his virtual companion named "Dany."

ⓒ 2026 TECHTIMES.com All rights reserved. Do not reproduce without permission.
