The growing influence of artificial intelligence (AI) and AI-enhanced social media has sparked concerns about their potential to manipulate reality and influence public opinion. 

In response, Harris Eyre, a neuroscience expert at Rice University's Baker Institute for Public Policy, advocates a digital self-defense system to protect individuals and communities from AI-driven misinformation and its impact on collective intelligence.

(Photo: EZEQUIEL BECERRA/AFP via Getty Images)
A Costa Rican secondary school student uses a Neurolabs headset to measure his brain's electromagnetic activity while performing mental exercises and viewing visual stimuli during the project's launch in San Jose on March 7, 2019. Costa Rica was among the first six countries chosen for Neurolabs, the first global study of neuroscience applied to education.

'Neuroshield' to Protect the Public from AI

In a new report, Eyre emphasized the need for regulatory measures to control advanced AI and AI-enhanced social media platforms, which have the potential to distort reality and spread false information, thereby challenging the democratic functioning of societies. 

The rise of deepfakes, especially during election seasons, has amplified the urgency of addressing these concerns and safeguarding the integrity of information.

Eyre proposed the development of a "neuroshield," a multifaceted approach to counter the negative effects of AI on our cognitive processes and collective well-being. The neuroshield would involve three key components: a code of conduct for information objectivity, regulatory protections, and an educational toolkit for citizens.

The first aspect of the neuroshield focuses on establishing a "code of conduct" among publishers, journalists, media leaders, and opinion makers to support the objectivity of information. 

While acknowledging the importance of social and political freedom in interpreting facts, Eyre emphasized the need to protect undeniable truths from being distorted by ambiguous or misleading narratives. 

The second aspect revolves around implementing regulatory protections to ensure that AI model providers are held accountable and maintain transparency. Drawing inspiration from the proposed European AI Act, Eyre advocates for policies addressing AI technologies' potential biases and vulnerabilities. 
Educational Toolkit for Citizens

The third component of the neuroshield entails creating an educational toolkit for citizens, especially young people heavily engaged on social media platforms. 

This toolkit, developed in collaboration with neuroscientists, aims to strengthen cognitive freedom and empower individuals to distinguish factual information from disinformation.

By equipping people with fact-checking skills and an awareness of cognitive biases, the toolkit serves as a defense against the manipulative tactics employed by AI-driven misinformation. 

"It is critical for both policymakers and brain scientists to advance this policy approach," Eyre said in a statement.

"The proposed European AI Act is an example of foreseeing how AI model providers can be held accountable and maintain transparency. By closely involving neuroscientists in planning and rolling out the Neuroshield, the US can ensure that the best existing insights about the functioning of our cognition are taken into account," he added.
ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.