Google Ignores Hidden Gemini AI Exploit That Lets Hackers Control Text

Google is currently under fire for ignoring major Gemini AI vulnerability.

Google's flagship AI model, Gemini, is under fire after cybersecurity researchers discovered a serious flaw known as an "ASCII smuggling" exploit. What's worse, the tech giant has stated it won't fix it.

The vulnerability exposes a potential security risk that could allow attackers to manipulate Gemini's responses through invisible commands embedded in text.

Hidden Commands Inside Gemini's Text System

Person holding phone with Gemini logo
Gemini isn’t just an app—it’s Google’s vision for AI that watches, understands, and acts without needing a prompt. Vincent Feuray/Getty Images

In an early report by Bleeping Computer, cybersecurity researcher Viktor Markopoulos of FireTail was the one who initially identified the vulnerability.

The exploit takes place by putting secret control characters or Unicode points in the text that Gemini recognizes as commands, despite the fact that the users can't see them. The invisible commands can quietly change the behavior of the AI, making it produce false or unexpected results.

Markopoulos showed how the vulnerability could be used with mundane text-based inputs like emails or calendar invitations. For instance, an email that may seem harmless in the eyes of the recipient can have invisible commands.

As Gemini processes or summarizes the email, the AI may inadvertently alter meeting information, skew data, or generate inaccurate summaries, all based on text that appears normal to humans.

Gemini Fails Where Other AIs Succeed

When the same ASCII smuggling attack was tested on other leading AI platforms, including OpenAI's ChatGPT and Microsoft's Copilot, they immediately figured out what was wrong with the hidden inputs.

However, Google's Gemini, along with Elon Musk's Grok and China's DeepSeek, failed to block the exploit, leaving them open to manipulation. This only shows that Gemini's text interpretation engine is more permissive than its competitors, potentially making it a prime target for AI-driven social engineering attacks.

Google Calls It 'Social Engineering', Not a Security Flaw

Despite the security risk, Google has declined to patch the flaw. In response to FireTail's report, the company labeled the ASCII smuggling exploit as a "social engineering" issue rather than a system vulnerability, according to Android Police. Google maintains that the problem lies in users being deceived, not in the AI's code or design.

But cybersecurity professionals contend this move could leave users vulnerable. Gemini interacts with Google's toolset of Gmail, Docs, and Calendar. In theory, hackers could use the bug to disseminate fake information or compromise confidential corporate information.

AI Security Fears

Google has already patched a number of Gemini-related vulnerabilities, such as problems in its logs, summaries, and browsing history, the so-called "Gemini Trifecta." But dismissing ASCII smuggling as not being an issue is a huge red flag that is hard to ignore.

Aside from vulnerabilities, Gemini AI also became a subject of controversy when it reportedly produced different responses mid-sentence. Some questioned Google about the ethics of using an AI tool at that time.

ⓒ 2025 TECHTIMES.com All rights reserved. Do not reproduce without permission.

Join the Discussion