In recent months, artificial intelligence (AI) image generators have evolved significantly and can now create realistic-looking images that are often difficult to distinguish from real photographs.

Recent viral hits have shaken the media: an AI-generated image of Donald Trump's supposed arrest, which circulated days before a grand jury charged him with multiple counts of business fraud, and another of the Pope in a fabulous white designer puffer coat, which appeared only days after he was reported sick with lung disease. Images like these are making it increasingly difficult to distinguish between real and artificial pictures.

How to Spot AI-Generated Images

In a Scientific American piece, S. Shyam Sundar, a Pennsylvania State University researcher who studies the psychological impact of media technologies, remarks that the ability of AI image generators is astounding. Even so, there are still telltale signs that give the algorithms away. For example, AI systems may struggle to render realistic-looking hands, producing mangled appendages with too many fingers.

But it did not take long for AI image generators to evolve past those flaws.

AI image generators previously struggled to mimic human hands and often produced unrealistic-looking appendages. Thanks to recent advances in AI technology, however, some systems, such as Midjourney V5, can now render hands convincingly, removing one of the most obvious giveaways and marking a significant step forward in AI image generation.

Even in its early stages, the tool has already helped someone win first place in a fine arts competition. Think about it.

AI image generators have also been helped by the sheer number of images available to train them on, as well as by improvements in data-processing infrastructure and user interfaces.

The Concerning Speed of AI Development

A recent experiment by psychologists at Lancaster University in England tested whether volunteers could distinguish passport-style headshots created by an AI system called StyleGAN2 from photographs of real people. The results were worrying: participants could not reliably tell the synthetic faces from the real ones.

Imperfect as they are, AI systems trained to spot fake images may be the answer to the problem of AI systems fooling humans.

Read Also: Open Letter to Pause AI Developments Beyond GPT-4 Signed by Woz, Elon Musk, and MORE-But Why?

These AI detection programs work by collecting datasets of authentic and AI-generated images and using them to train machine-learning models to tell the two apart. However, most detectors still fail to recognize fakes produced by generators they were not trained on.
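As a rough illustration of that training setup, here is a minimal sketch, assuming PyTorch/torchvision and a hypothetical folder layout of authentic versus AI-generated images; the folder names, model choice, and hyperparameters are illustrative assumptions rather than any specific detector's implementation.

```python
# Minimal sketch of an AI-image detector: a binary classifier trained on
# folders of authentic vs. AI-generated images. Folder layout and
# hyperparameters are illustrative assumptions, not a real product.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Assumed directory layout (hypothetical):
#   data/train/real/*.jpg   authentic photographs
#   data/train/fake/*.jpg   AI-generated images
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Fine-tune a small pretrained backbone for the two-class task
# (ImageFolder orders classes alphabetically: 0 = fake, 1 = real).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(3):  # a few epochs, just for illustration
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch + 1}: last batch loss {loss.item():.4f}")
```

Because a classifier like this only learns the artifacts present in its training data, images from a generator it has never seen can slip past it, which is exactly the weakness described above.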

Social media platforms need to take action against AI-generated content on their sites. Users, too, should treat visual information skeptically, asking whether it might be false, AI-generated, or harmful before sharing it.

The progress of AI image generators is a major concern because it threatens the credibility of visual media. As the technology races ahead, detection methods must be developed that can keep pace.

Lastly, the rapid advancement of this technology raises a myriad of new dangers, as criminals may soon exploit AI-generated images for nefarious purposes, creating a new breed of challenges for people online.

Stay posted here at Tech Times.

Related Article: AI Detectors: Can You Spot AI-Generated Content? The Math Says No

 
