Facebook is creating a library of deepfakes (videos or audio doctored with artificial intelligence) to help researchers identify new ways to detect manipulated content online.

Facebook's Deepfake Library

The social media site hopes the plan will address concerns that increasingly realistic deepfakes will be used to spread misinformation ahead of the 2020 U.S. presidential election.

"'Deepfake techniques, which present realistic AI-generated videos of real people doing and saying fictional things, have significant implications for determining the legitimacy of information presented online," reads the blog post published on Thursday, Aug. 5. "Yet the industry doesn't have a great data set or benchmark for detecting them. We want to catalyze more research and development in this area and ensure that there are better open source tools to detect deepfakes."

According to the site, the data set will include the original video and imagery of the people in it, as well as the manipulated footage. The company will hire paid actors, who have consented to take part in the project, to create the deepfake videos. No Facebook user data will be used in the data set.

Deepfake Detection Challenge

The data set is only part of a larger effort to address the looming threat of deepfake technology. Facebook has partnered with Microsoft and several academic institutions to launch the Deepfake Detection Challenge — a contest that encourages researchers to develop technology that can effectively spot videos that have been altered with the intent to mislead the viewer.

Facebook is dedicating more than $10 million to fund research collaboration and prizes to encourage wider participation.

Facebook has previously been criticized for not being quick enough to take down deepfake videos from its platform. In an interview in June, CEO Mark Zuckerberg explained that the difficulty lies in defining what constitutes a deepfake.

"This is a topic that can be very easily politicized," he stated. "People who don't like the way that something was cut often will argue that it did not reflect the true intent, or it was misinformation."

Earlier this year, the social media site refused to remove a manipulated video of Nancy Pelosi. Facebook said the false video, which showed the House Speaker slurring her words, was instead downranked, meaning it would be shown to fewer people. Pelosi slammed the decision, accusing the company of becoming "willing enablers of Russian interference" in U.S. elections.
