Artificial intelligence has shown that it can make everyday life more comfortable through its many applications.

Despite its usefulness, experts noted in a recent study that humans now have a hard time distinguishing computer-generated faces from real ones.

Identifying AI-Generated Faces Is a Challenge


According to a recent report by Fast Company, the researchers focused on how accurately humans can tell an AI-generated synthetic image from a real face.

University of California, Berkeley professor Hany Farid has watched computer-generated imagery surge over the years. As deep learning matured, he also observed how generative adversarial networks (GANs) transformed the creation of realistic photos.
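For readers unfamiliar with the technique Farid refers to, a GAN pits two neural networks against each other: a generator that turns random noise into samples, and a discriminator that tries to tell those samples from real ones. The toy sketch below is our own illustration in PyTorch, not code from the study; it trains on a simple 2-D Gaussian instead of face images so it runs without any dataset, but the adversarial loop is the same idea that powers realistic face generators.

```python
# Minimal GAN sketch (illustrative only): a generator learns to map random
# noise to samples the discriminator can no longer tell apart from "real"
# data. Here the "real" data is a toy 2-D Gaussian, not face images.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),  # outputs a logit: real vs. fake
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    # "Real" samples: a Gaussian centered at (2, 2).
    real = torch.randn(64, data_dim) + 2.0
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call its samples real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

As training progresses, the generator's samples drift toward the real distribution; scale this dynamic up to millions of face photos and deep convolutional networks, and you get the photorealistic results the study examined.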

Farid also pointed out that, compared with how traditional CGI progressed, GANs and deepfakes have improved far faster over the past few years. With that in mind, he and his team set out to examine just how hard it has become to detect realistic fakes.

"Fraudulent online profiles are a good example. Fraudulent passport photos. Still photos have some nefarious usage.But where things are going to get really gnarly is with videos and audio," Farid said.

Another researcher, Sophie Nightingale of Lancaster University in England, joined Farid in the study. She likewise wanted to know how easily AI-generated faces could fool people trying to spot the real ones.

Related Article: Scientists Use AI to Determine Fakes: How to Spot Deepfake Videos-- Look at Their Eyes!

How Experts Conducted the Experiment

In the study, the researchers conducted three experiments to see whether people could judge the authenticity of different photos. All the synthetic images were created with Nvidia's StyleGAN2.
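Sampling a synthetic face from a pretrained StyleGAN2 network follows roughly the pattern below. This is a hedged sketch based on NVIDIA's reference stylegan2-ada-pytorch code, not the study's own pipeline; the checkpoint filename is a placeholder, and the pickle is assumed to contain an exponential-moving-average generator under the key 'G_ema' as in that repository.

```python
# Sketch of sampling one synthetic face from a pretrained StyleGAN2 generator.
# Assumes NVIDIA's stylegan2-ada-pytorch repository is importable and that
# 'ffhq.pkl' (placeholder filename) is a checkpoint in that repo's format.
import pickle
import torch

with open('ffhq.pkl', 'rb') as f:
    G = pickle.load(f)['G_ema'].cuda()   # generator network

z = torch.randn([1, G.z_dim]).cuda()      # random latent vector
c = None                                  # no class labels for face models
img = G(z, c)                             # [1, 3, H, W], values roughly in [-1, 1]

# Convert to an 8-bit image array for saving or display.
img = (img.clamp(-1, 1) + 1) * 127.5
img = img[0].permute(1, 2, 0).to(torch.uint8).cpu().numpy()
```

Each new latent vector z yields a different, never-before-seen face, which is how the researchers could assemble a large pool of synthetic images for the experiments.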

Participants were shown faces drawn from a set of 800 images and asked to classify each one as real or fake. Their average accuracy in detecting the photos was 48.2 percent, close to chance.

In the second trial, Farid and Nightingale gave the participants tips on how to tell computer-generated images from non-AI-based ones. With this additional guidance, accuracy in detecting the realistic fakes improved to 59 percent.

Farid noted that even with those hints, people still struggled to pick out the real faces from their fake counterparts.

The difficulty in spotting the fakes did not come as a complete surprise to the researchers. What stood out was a small but significant difference in how trustworthy participants rated the images, with the synthetic faces scoring slightly higher than the real ones.

To keep the comparison fair, Farid relied on a mathematical model to find a similar real face for every AI-generated one, matching them by the subject's ethnicity and facial expression.
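The article does not detail the model Farid used. One plausible approach, sketched below purely as an assumption, is to embed every face with a face-recognition network and pair each synthetic face with its closest real counterpart by cosine similarity; the embeddings here are random placeholders standing in for real model outputs.

```python
# Hypothetical sketch of pairing each synthetic face with its closest real face.
# The embeddings are random placeholders; in practice they would come from a
# face-recognition model, with attributes such as ethnicity and expression
# further constraining the candidate pool, as the article describes.
import numpy as np

rng = np.random.default_rng(0)
synthetic = rng.normal(size=(400, 512))   # one 512-D embedding per synthetic face
real = rng.normal(size=(400, 512))        # embeddings of the real-face pool

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Cosine similarity between every synthetic embedding and every real embedding.
sim = normalize(synthetic) @ normalize(real).T
best_match = sim.argmax(axis=1)           # index of the closest real face for each synthetic one

print(best_match[:10])
```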

For now, the uncanny resemblance of AI-generated faces to real ones shows how easily AI can deceive people. Nightingale warned about the potential harm to users; dating scams, for instance, could become more rampant because of it.

To view the study, titled "AI-synthesized faces are indistinguishable from real faces and more trustworthy," click here. In a separate report, Toolbox covered the AI trends expected to evolve in 2022.

For another AI-related story, read our latest report about Clearview's plan to collect 100 billion photos of people all over the world.

Read Also: Audio Deepfake: This AI Voice from Sonantic Can Flirt With You

This article is owned by Tech Times

Written by Joseph Henry 
