During the COVID-19 pandemic, when misinformation spread widely across the internet, social media giants like Facebook rolled out new policies to stop the flow of false information, though those efforts were later shown to fall short.

A recent study, "The Efficacy of Facebook's Vaccine Misinformation Policies and Architecture During the COVID-19 Pandemic," published in Science Advances, raised alarming questions about Facebook's efforts to combat vaccine misinformation.

Research Reveals Disturbing Trends

As Phys.org reports, the research, led by experts at the George Washington University in collaboration with researchers from Johns Hopkins University, examined Facebook's response to COVID-19 vaccine misinformation.

David Broniatowski, lead study author and an associate professor of engineering management and systems engineering at GW, pointed out, "To effectively tackle misinformation and other online harms, we need to move beyond content and algorithms to also focus on design and architecture."

Facebook's architecture, designed to foster connections among users over shared interests, inadvertently allowed misinformation to flourish. 

The platform's structural elements, including fan pages promoting brands and community celebrities, enabled various influencers to reach vast audiences. 

These influencers then formed groups explicitly designed as communities where members could share information on topics like vaccine hesitancy, including misinformation.

A Troubling Discovery

The study's most troubling finding was that despite Facebook's substantial efforts to remove anti-vaccine content during the pandemic, overall engagement with such content did not significantly decrease and, in some cases, even increased.

Lorien Abroms, a professor of public health at GW's Milken Institute School of Public Health and a study author, remarked, "This finding, that people were equally likely to engage with vaccine misinformation before and after Facebook's extensive removal efforts, is incredibly concerning."

"It shows the difficulty that we face as a society in removing health misinformation from public spaces," Abroms added.

Failed Content Moderation

Intriguingly, the study also highlighted disparities in how content producers on both sides of the vaccine debate used Facebook. 

Anti-vaccine content producers were found to be more effective in coordinating content delivery across pages, groups, and users' news feeds.

Facebook's algorithms and content removal efforts struggled to counteract this coordinated dissemination, raising concerns about how effective such policies can be against determined misinformation spreaders.

Worse yet, the anti-vaccine content that remained on Facebook grew even more misleading, featuring sensationalist false claims about vaccine side effects that were often too new to be fact-checked in real time.

To compound the issue, the study revealed a form of "collateral damage" where pro-vaccine content might have been inadvertently removed due to the platform's policies. As a result, vaccine-related content on Facebook became more politically polarized.

An Analogy to Building Design

Broniatowski's comparison of Facebook's architecture to building design draws attention to a possible solution. 

He suggests that social media platform designers should collaborate to develop a set of "building codes" for their platforms, informed by scientific evidence, to reduce online harms. 

Stay posted here at Tech Times.
