Google recently faced criticism after its new artificial intelligence model, Gemini, produced historically inaccurate images. The tool generated depictions of Second World War German soldiers and Vikings of varying ethnicities, prompting a backlash from social media users.

Following the incident, the search engine giant has temporarily paused the feature.

Temporary Block on Image Generation

(Photo: Steve Johnson from Unsplash)
Google admitted it had "missed the mark" after Gemini's AI image generator produced inaccurate depictions of historical figures.

In response to the controversy, Google announced a temporary halt to Gemini's image generation feature, specifically for images of people. This decision comes as the tech giant acknowledges the need to address issues with accuracy and bias within its AI models.

 

Google Wants to Roll Out Improved Version of Gemini

While Google did not cite specific examples, the company emphasized its commitment to resolving the recent issues with Gemini's image generation.

Jack Krawczyk, a senior director at Google, acknowledged that adjustments were necessary for the model to accurately reflect its global user base.

"We're working to improve these kinds of depictions immediately. Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here," Krawczyk said in a report by CNBC.

Challenges in Historical Context

Krawczyk further highlighted the complexity of historical contexts, noting that nuanced adjustments were required to ensure accurate portrayals of historical figures. Despite the challenges, Google remains dedicated to refining its AI tools to mitigate bias effectively.

Addressing Bias in AI

The incident underscores broader concerns regarding bias in artificial intelligence, particularly its impact on marginalized communities. Previous investigations have revealed instances of bias in image generators, highlighting the need for ongoing efforts to eliminate prejudice from AI systems.

Continued Research and Improvement

Andrew Rogoyski, an expert from the Institute for People-Centered AI, emphasized the difficulty of mitigating bias in deep learning and generative AI. He noted that while mistakes are inevitable, ongoing research and the implementation of diverse approaches offer hope for improvement in the future.

Earlier this month, Gemini and Microsoft's AI chatbot appeared to hallucinate statistics about Super Bowl LVIII. According to a previous report, the chatbots fabricated data whenever they were asked about the game.

These examples show that AI is far from perfect. It is flawed to the point that a Stanford University study described its hallucinations as "pervasive and disturbing."

There is nothing wrong with using AI tools, but when they compromise the authenticity of their subjects, it is time for the companies behind them to improve them.


