AI-Generated Protest Videos Made With OpenAI's Sora 2 Spark Outrage, Confusion Online

Even on close inspection, the protest videos look real.

OpenAI's Sora 2 makes creating videos easier and more convincing than ever, but not everyone sees it as a useful tool. Some see it as technology that blurs the line between reality and deepfakes.

While the newest AI video generator lets users create and share highly realistic AI videos, it has also fueled a wave of misinformation after a series of AI-generated protest videos surfaced online, many of which appear to push political propaganda.

OpenAI's Sora 2 Fuels Misinformation Boom

Since Sora 2's release, social media platforms like TikTok, Facebook, and Instagram have been flooded with fake videos portraying violent protests and confrontations between demonstrators and federal agents. According to Gizmodo, some of the most viral clips feature fabricated scenes that many users, including public figures, have mistaken for real footage.

One of the most popular videos, viewed more than 40 million times on Instagram, features a black-clad protester shouting at a soldier, only to be pepper-sprayed. The AI-generated soldier quips "Sergeant Pepper," a line that drew laughter and applause from right-wing commenters online. Although the clip carries a visible Sora watermark, many viewers did not recognize it as an AI product.

AI-Generated Political Propaganda on the Rise

Another viral Sora video shows demonstrators yelling "no queso, no cheese," a racist mockery of the actual chant "no justice, no peace." The clip, which collected millions of views across platforms, was picked up by users spreading political misinformation. People shared the video with captions such as "Liberals acting like clowns – goodbye FAFO," reinforcing polarizing language.

What is most disturbing about these forged protest videos is how easy they are to merge with real news footage.

Sora-generated content is now being spliced together with legitimate protest clips, further blurring the line between real and fake for viewers. When used improperly, this wave of deepfakes can amplify political manipulation and online propaganda.

Actual Violence Overlooked for AI Illusions

As AI-created clips go viral, real incidents of violence at protests continue to receive far less coverage. For example, Rev. David Black, a Chicago pastor, was pepper-sprayed by federal agents while participating in a peaceful protest. Likewise, a woman in Portland was sprayed while speaking quietly to police.

Even when there is real evidence of police brutality, these incidents are dwarfed by AI fakes propelling political narratives. Critics say that sharing such forged videos creates a distorted reality, one that justifies violent crackdowns while depicting protesters as dangerous or deserving of violence.

AI Deepfakes Threaten Public Trust

The Sora 2 scandal points to a growing problem: separating authentic events from AI fakes. Though OpenAI places visible watermarks on videos generated by Sora, these are frequently overlooked or cropped out before the clips are reposted. With AI-generated media growing increasingly difficult to identify, experts predict a looming "truth crisis" in online communications.

Sora 2's capabilities demonstrate the power of generative AI, but their abuse reveals its dark side. Users should understand how to use the tool responsibly, spreading facts rather than misinformation. The consequences become disastrous when a deepfake video spreads across social media.

Not everyone verifies the news, even when checking it online takes only a couple of clicks.

ⓒ 2025 TECHTIMES.com All rights reserved. Do not reproduce without permission.
