Google's AI Overviews feature is on pace to serve tens of millions of incorrect answers every hour, even though a new study finds its summaries are accurate about 90 percent of the time.
The analysis, conducted by open-source AI company Oumi and reported by The New York Times, evaluated thousands of AI Overview responses and concluded that Google's AI generally provides correct, well‑sourced information in 9 out of 10 cases.
At first glance that sounds like a strong result, but when applied to the more than 5 trillion searches Google is expected to handle in 2026, the remaining 10 percent quickly scales into a flood of bad information. Popular Science notes that this error rate translates into "tens of millions of questionable answers each hour," or hundreds of thousands of errors every minute.
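The scaling behind those figures is straightforward back-of-envelope arithmetic. The sketch below reproduces it under the article's own assumptions: roughly 5 trillion searches in 2026, a 10 percent error rate, and, as an upper bound, treating every search as producing an AI Overview.

```python
# Back-of-envelope check of the error-rate scaling described above.
# Assumptions taken from the article (not independently verified):
#   - ~5 trillion Google searches projected for 2026
#   - ~10% of AI Overview answers contain errors
#   - every search is treated as producing an AI Overview (upper bound)

SEARCHES_PER_YEAR = 5_000_000_000_000  # ~5 trillion
ERROR_RATE = 0.10                      # ~1 in 10 answers wrong

errors_per_year = SEARCHES_PER_YEAR * ERROR_RATE
errors_per_hour = errors_per_year / (365 * 24)
errors_per_minute = errors_per_hour / 60

print(f"{errors_per_hour:,.0f} errors per hour")    # tens of millions
print(f"{errors_per_minute:,.0f} errors per minute")  # hundreds of thousands
```

Run as written, this works out to roughly 57 million errors per hour and about 950,000 per minute, consistent with the "tens of millions each hour" and "hundreds of thousands every minute" figures quoted above.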
The study also sheds light on where those wrong answers come from. Oumi's researchers found that AI Overviews frequently draw on social platforms and user‑generated content, with Facebook emerging as the second‑most‑cited source and Reddit the fourth.
Inaccurate answers leaned even more on Facebook, citing it in 7 percent of wrong responses versus 5 percent of correct ones, suggesting that low‑quality or context‑poor posts can quietly shape what many users see as an authoritative summary.
In some cases, the system appears to misstate or oversimplify information from otherwise reliable sources, producing a distorted version of what the underlying article actually says.
Experts warn that this combination of scale and subtle error is especially risky because of where AI Overviews appear. The summaries sit at the very top of Google's results page, often above traditional blue links, and are presented in a confident, conversational tone, Odyssey News Media reported.
That positioning encourages users to accept the answer at a glance, without clicking through to verify the details, turning each misstep into a piece of misinformation that can spread quickly across social media and everyday conversations.
The report also highlights how the system can be gamed. Because AI Overviews pull from pages that appear credible to Google's ranking systems, bad actors can create polished blogs filled with false claims, then drive artificial traffic to boost their visibility.
If those posts are treated as legitimate sources, the AI may repeat their made‑up facts in a clean, authoritative paragraph at the top of search results. Researchers and digital rights advocates argue that this makes AI Overviews not just an occasional nuisance, but a new vector for disinformation at global scale, according to Futurism.
ⓒ 2026 TECHTIMES.com All rights reserved. Do not reproduce without permission.