In the wake of recent terrorist attacks, Facebook has unveiled how it intends to ensure that terrorist organizations aren't able to spread their message.

Terrorists Have No Voice On Social Media

The recent terrorist attacks, primarily focused in Europe, have prompted governments, news anchors, and average citizens to question technology's role in helping terrorists spread their message and bring in new recruits. While the ultimate blame lies with the terrorists, some have argued that Facebook and other social media sites could do more to stop the spread of terrorist activity.

In response to these reports, Facebook has published a blog post clarifying its stance on the issue of terrorism. The company makes it clear that terrorist messaging and recruitment are not tolerated on the site, that it works quickly to remove pro-terrorist content, and that it alerts law enforcement when it believes it has discovered a threat to public safety.

Facebook acknowledged that fighting terrorism on a social media platform is challenging, but the post said the company feels the need to be cautious so as not to mislead the public into believing there is an easy fix for the problem. That being said, the company has shared some of its plans, including how it is using artificial intelligence to track terrorist activity on the site.

How Artificial Intelligence Can Be Used To Fight Terrorism

Facebook discussed several ways in which AI can be used in the fight against the spread of terrorist messaging.

• The first is image matching. If a user attempts to upload a photo or other image that matches previously removed terrorist-related content, the system will flag it and prevent it from being uploaded. This means that once Facebook has removed one piece of pro-terrorist material, it becomes harder for other accounts to upload duplicates, and propaganda is often silenced before it can even reach the site. A simplified sketch of this kind of matching appears after this list.

• One of Facebook's more experimental approaches to terrorism is language understanding. Facebook is currently using previously removed pro-terrorist content to train an algorithm that helps its AI uncover similar pro-terrorist posts in the future. The system is still in its early phases, but Facebook is hopeful that its accuracy will improve with time. A toy example of this kind of text classification appears after this list.

• As is often the case in the real world, online terrorists tend to organize themselves into groups. A fortunate side effect of this is that it can make terrorists easier to track. Once Facebook finds one pro-terrorist account, it can use signals such as friend lists, liked pages, and shared attributes to determine whether other accounts are associated with known terrorists, as sketched after this list.

• Facebook didn't provide many details on how it handles fake accounts created by terrorist supporters, but it said it is constantly working to filter out pro-terrorist accounts. The company also said it shares data across its various platforms to ensure that none of its properties, including WhatsApp and Instagram, are used to promote terrorism.
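
To give a sense of how image matching works, here is a minimal sketch in Python. Facebook has not published its implementation, so the fingerprint store (known_terrorist_hashes) and the gatekeeper function (screen_upload) are hypothetical, and a real system would use perceptual hashes that survive resizing and re-encoding rather than the exact SHA-256 digests used here.

    import hashlib

    # Hypothetical store of fingerprints of previously removed images.
    # A production system would use perceptual hashes that tolerate
    # re-encoding; exact SHA-256 digests are only an illustration.
    known_terrorist_hashes = {"<digest of a removed image>"}

    def image_fingerprint(image_bytes: bytes) -> str:
        """Return a fingerprint for an uploaded image."""
        return hashlib.sha256(image_bytes).hexdigest()

    def screen_upload(image_bytes: bytes) -> bool:
        """Allow the upload only if it does not match removed content."""
        return image_fingerprint(image_bytes) not in known_terrorist_hashes

In this toy version, a byte-for-byte duplicate of any removed image is rejected at upload time, which mirrors the behavior the post describes.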
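The language-understanding approach can be illustrated with a toy text classifier trained on previously removed content, sketched below with scikit-learn. Facebook has not disclosed its models, so the example posts, labels, and pipeline here are purely illustrative.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy training data: text previously removed as pro-terrorist (label 1)
    # versus ordinary posts (label 0).
    posts = ["example of removed propaganda text", "photos from my vacation"]
    labels = [1, 0]

    # A simple bag-of-words classifier stands in for Facebook's models.
    classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
    classifier.fit(posts, labels)

    # Score a new post; a high probability would flag it for review.
    probability = classifier.predict_proba(["a new post to screen"])[0][1]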
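Finally, the group-detection idea can be illustrated with a small scoring function over shared friends and liked pages. The Account record and association_score function below are hypothetical; they only show how shared attributes might be turned into a signal, not how Facebook actually weighs them.

    from dataclasses import dataclass, field

    @dataclass
    class Account:
        # Hypothetical account record carrying the signals named above.
        friends: set = field(default_factory=set)
        liked_pages: set = field(default_factory=set)

    def association_score(candidate: Account, known_bad: Account) -> float:
        """Fraction of the candidate's friends and liked pages that are
        shared with a known pro-terrorist account."""
        shared = len(candidate.friends & known_bad.friends)
        shared += len(candidate.liked_pages & known_bad.liked_pages)
        total = len(candidate.friends) + len(candidate.liked_pages)
        return shared / total if total else 0.0

    # Accounts scoring above a threshold would be queued for human review.
    suspect = Account(friends={"a", "b"}, liked_pages={"page1"})
    known = Account(friends={"b", "c"}, liked_pages={"page1"})
    needs_review = association_score(suspect, known) > 0.5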
