Deepfake video clips of movie remakes that do not actually exist led about half of the participants in a new study to falsely remember the films, researchers have found. Simple text descriptions of the same fake movies produced similar rates of false memory.

The study was conducted by Gillian Murphy from University College Cork, Ireland, and colleagues from Lero, the Science Foundation Ireland Research Centre for Software.


Deepfake Movie Remakes

Deepfake videos are manipulated or altered videos created using artificial intelligence (AI) technology. They typically replace a person's face or voice in an existing video with someone else's, often producing highly realistic and convincing visual and audio results.

To examine the potential risks and benefits of deepfake technology, Murphy and her team invited 436 participants to complete an online survey. 

The survey included watching deepfake videos of fictional movie remakes featuring different actors, such as Will Smith replacing Keanu Reeves as Neo in "The Matrix" and Brad Pitt and Angelina Jolie in "The Shining." 

Real remakes like "Charlie & The Chocolate Factory" and "Total Recall" were also included for comparison. In some cases, participants read text descriptions of the remakes rather than watching the deepfake videos.

The participants were not informed that the deepfakes were fabricated until later in the survey. 

As observed in earlier studies, both the deepfake videos and the written descriptions resulted in participants developing false memories of the fabricated movie remakes, with an average of 49 percent of participants genuinely believing in the existence of each fictitious film. 

Interestingly, some participants even expressed memories of the fake remakes surpassing the quality of the original movies. 


Can Deepfakes Alter Memory?

Notably, the false memory rates produced by the text descriptions were comparable to those produced by the videos, suggesting that deepfake technology has no exclusive ability to manipulate memory compared with simpler methods.

The study also revealed that most participants expressed discomfort with the idea of using deepfake technology to recast films. Concerns were raised about potential disrespect to artistic integrity and the disruption of shared social experiences associated with movies.

These findings have implications for developing and regulating deepfake technology in the film industry. The researchers emphasized that while deepfakes raise serious concerns, such as non-consensual pornography and bullying, this study showed they are not inherently more potent at distorting memories than other, non-technical means.

Deepfake technology uses machine learning algorithms and deep neural networks to analyze a person's facial features, expressions, and speech patterns and map them onto source footage. This process allows for the creation of videos that appear authentic but contain fabricated content.

These videos have gained attention due to their potential misuse and the ethical concerns raised. They can be used to create fake videos that appear to show individuals saying or doing things they never actually did.

The technology has raised concerns over its potential to spread misinformation, manipulate public opinion, and harm individuals' reputation and privacy. The team's findings were published in PLOS ONE.


