
In a wide-ranging interview with International Business Times' Isaiah McCall, Ari Abelson, co-founder and president of Open Origins, warns that artificial intelligence will soon outpace the human ability to distinguish real images and videos from AI-generated ones.
Artificial intelligence is rapidly transforming the internet's visual landscape, making it increasingly difficult to distinguish real photos and videos from synthetic ones, according to Ari Abelson, co-founder and president of media authenticity company Open Origins.
Joining International Business Times' Visionary Voices series, Abelson said the most significant development in artificial intelligence over the past year has not been the race toward artificial general intelligence, but the explosive improvement in AI-generated media.
"I think things are moving incredibly fast—faster than anyone could reasonably predict," Abelson said. "But interestingly, not always in the ways that are being publicly emphasized."
While much of the public discussion around AI has focused on predictions that machines could soon replace large portions of human labor, Abelson said a more immediate shift is already taking place in the form of photorealistic AI-generated images, videos and text.
Systems capable of producing convincing media are improving so quickly that humans may soon lose the ability to reliably tell the difference between authentic content and synthetic creations, he said.
"Right now, when we scroll through social media, it's already extremely difficult to distinguish between content written by a person and content generated by AI," Abelson said. "The same is becoming true for images and videos."
Originally, many experts believed that moment would arrive closer to the end of the decade. But Abelson said the timeline appears to be accelerating.
"By the end of this year—and certainly heading into 2027—I believe humans will essentially lose the ability to reliably tell the difference between AI-generated media and real human content," he said.
The rapid improvement of AI-generated visuals has created both creative opportunities and new risks.
On platforms like TikTok and X, users frequently share surreal AI-generated videos—such as celebrities appearing in impossible or comedic scenarios—that are clearly intended as entertainment. Abelson said this type of content can function much like cartoons or fictional storytelling.
"AI can be an incredible creative tool," he said. "Someone could generate a full music video based on an idea they had in a dream within seconds."
The challenge emerges when synthetic media becomes indistinguishable from reality.
Without clear signals that indicate whether content is authentic or artificially generated, highly realistic deepfakes could be used to spread misinformation or damage reputations. In extreme cases, Abelson warned, fabricated videos depicting political leaders could escalate geopolitical tensions if they circulate widely before they can be debunked.
"AI itself isn't inherently good or bad—it's a neutral tool," he said. "The problem is that we currently lack reliable systems for distinguishing authentic content from synthetic content."
Open Origins was founded to address that problem by developing systems that verify the origin of photos and videos at the moment they are captured. The company's technology aims to create a permanent record showing whether a piece of media was produced by a human camera or generated artificially.
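Abelson does not describe Open Origins' implementation, but the general idea of capture-time provenance can be sketched with a keyed hash: a device signs a digest of the raw media the moment it is captured, and anyone can later check that a file still matches its original record. This is a minimal illustration only; the key name, record format, and functions below are hypothetical, and a real system would use hardware-protected keys and public-key signatures rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical sketch of capture-time provenance, not Open Origins' actual design.
# In practice the signing key would live in tamper-resistant camera hardware.
DEVICE_KEY = b"example-device-secret"

def sign_at_capture(image_bytes: bytes) -> dict:
    """Create a provenance record at the moment the media is captured."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    tag = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": tag}

def verify_origin(image_bytes: bytes, record: dict) -> bool:
    """Check that media is unmodified and matches its capture-time record."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    expected = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == record["sha256"] and hmac.compare_digest(expected, record["signature"])

photo = b"raw sensor data from a real camera"
record = sign_at_capture(photo)
print(verify_origin(photo, record))            # True: unmodified original
print(verify_origin(photo + b"edit", record))  # False: altered after capture
```

Any edit to the file changes its hash, so the record no longer verifies; AI-generated media, having no capture-time record at all, simply fails the check.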
As AI tools become more powerful and widely available, Abelson believes establishing trustworthy verification systems will be critical for journalism, historical archives and the broader information ecosystem.
"Ultimately, the goal is that when someone encounters an image or video online," he said, "they can check whether it has a verifiable origin point or whether it may be synthetic."
About Our Visionary Voice, Ari Abelson

Over the last decade, Ari has helped startups build growth and community strategies. He has a background in mis- and disinformation research, having collaborated with major tech companies and governments to combat misinformation. Ari holds an MSc from the London School of Economics and has previously worked with Moonshot CVE and LSHTM, contributing to projects commissioned by the MoD and Facebook.
Originally published on IBTimes
© Copyright IBTimes 2024. All rights reserved.