Twitter is rolling out tests for Community Notes for Media, which will apply the platform's sourced fact checks to specific photos and video clips. This comes after a fake image went viral claiming to show an explosion near the Pentagon.

(Photo: SEBASTIEN BOZON/AFP via Getty Images)
The Twitter logo is pictured on a screen reflected by mirrors in Mulhouse, eastern France, on May 30, 2023.

Community Notes for Media

As artificial intelligence dominates the internet, several platforms have released features to flag posted content that may be fake. Twitter joins this trend as the company rolls out the Community Notes for Media beta. According to a report from Engadget, the feature will add context to potentially misleading photos and videos on the platform.

Contributors can now add information specifically related to an image, which will appear as a fact check below the tweet. Twitter specifically mentioned that the feature was made to identify AI-generated fake images, which can be confusing, frightening, or amusing, and which often go viral as a result.

Twitter users will see notes from contributors marked as "About the Image," a slight departure from the original Community Notes. Context ratings will also be included to help identify cases where a contributor's note may not apply to a specific tweet.

At the moment, the feature only supports tweets with a single image; Twitter is still working to expand it to videos and to tweets with multiple images and videos. The platform will monitor how notes on media are used as the feature rolls out today.

Image Matching

To address the viral spread of such photos, Twitter aims for the notes to automatically appear on "recent and future" copies of the same image, even when they are shared by different users in different tweets. However, Slash Gear reported that the company clarified it will take some time to perfect its image matching.

The company stated, "It's currently intended to err on the side of precision when matching images, which means it likely won't match every image that looks like a match to you. We will work to tune this to expand coverage while avoiding erroneous matches."
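Erring "on the side of precision" typically means matching only near-identical copies of an image rather than anything that merely looks similar. Twitter has not disclosed its actual algorithm, but the idea can be sketched with a simple perceptual hash (a difference hash, or dHash) and a strict distance threshold; everything below is an illustrative assumption, with images modeled as plain grids of grayscale values.

```python
# Hypothetical sketch of precision-first image matching via a
# difference hash (dHash). This is NOT Twitter's disclosed method.

def dhash(pixels):
    """Difference hash: 1 bit per pixel pair, set when the left
    pixel is brighter than its right neighbor.
    `pixels` is an 8x9 grid of 0-255 grayscale values."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits  # 8 rows x 8 comparisons = 64 bits

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def is_match(h1, h2, max_distance=2):
    # A strict (low) threshold errs on the side of precision:
    # re-uploads of the same image match, different images do not.
    return hamming(h1, h2) <= max_distance

# Toy 8x9 "images": a smooth gradient, an exact re-upload,
# and a genuinely different (mirrored) image.
original = [[(r * 9 + c) * 3 % 256 for c in range(9)] for r in range(8)]
repost = [row[:] for row in original]        # exact copy, re-shared
different = [row[::-1] for row in original]  # mirrored: not the same image

h_orig = dhash(original)
print(is_match(h_orig, dhash(repost)))     # True: copy is matched
print(is_match(h_orig, dhash(different)))  # False: no erroneous match
```

Raising `max_distance` would expand coverage to lightly edited copies (crops, recompression) at the cost of more false matches, which mirrors the precision/coverage trade-off the company describes.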


Additionally, Community Notes as a whole is far from perfect, based on its track record. The feature can sometimes fail to fact-check or debunk false claims, and as contributors have pointed out, it is not impervious to errors or to perpetuating common misconceptions.

Based on a report from The Verge, Twitter introduced the feature after a fake picture of a Pentagon explosion was shared last week by verified accounts, including Twitter Blue subscribers. An AI-generated image of Pope Francis in hypebeast streetwear also circulated widely and went viral before users learned it was fake.


Written by Inno Flores

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.