The 2024 global elections are on the horizon, and prominent US-based tech platforms are walking back policies aimed at curbing misinformation, raising concerns among experts.

YouTube and Facebook, among others, are adjusting their approaches to policing content, signaling a shift away from their roles as internet gatekeepers, according to AFP.

These shifts coincide with a period marked by layoffs, cost-cutting moves, and pressure from conservative groups who argue that companies like Meta and Google are infringing on free speech.

Consequently, tech giants are relaxing content moderation, reducing trust and safety teams, and even reinstating accounts known for spreading unverified information. This was precisely the case when Elon Musk took the reins at Twitter, which has since been renamed X.


'Election Tsunami'

Researchers warned that these adjustments may undermine the platforms' ability to combat the anticipated surge in misinformation during the more than 50 major elections set to take place worldwide in the coming year, not only in the United States but also in India, Africa, and the European Union.

The Global Coalition for Tech Justice, a watchdog, remarked that social media companies are not ready for the 2024 "election tsunami." They pointed out that while these companies focus on profits, democracies are left exposed to potential threats like violent coup attempts, hate speech, and election interference. 

In June, YouTube announced that it would cease removing content that falsely asserted the 2020 US presidential election was tainted by "fraud, errors, or glitches." 

The decision drew strong criticism from experts who combat misinformation, though YouTube contended that such removals might inadvertently stifle political discussion.

In November, X stated that it would no longer actively enforce its policy against COVID-related misinformation. Following Elon Musk's acquisition of the platform, many previously suspended accounts known for disseminating false information have been reinstated.

The platform also disclosed last month that it would allow paid political advertisements from US candidates. This development has sparked concerns regarding the potential spread of misinformation and hate speech in the forthcoming election. 


'Era of Recklessness'

Nora Benavidez from the nonpartisan group Free Press told AFP that Elon Musk's "control over Twitter has helped usher in a new era of recklessness by large tech platforms."

"We're observing a significant rollback in concrete measures companies once had in place," Benavidez added. 

These platforms are also under pressure from conservative US advocates who accuse them of collaborating with the government to censor or suppress right-leaning content under the guise of fact-checking.

In the past, Facebook's algorithm automatically reduced the visibility of flagged posts that were deemed false or misleading by third-party fact-checking partners, including AFP. 

Recently, Facebook gave US users greater control, allowing them to elevate or demote such content and potentially handing them more influence over the platform's algorithm. The deeply divided political landscape in the US has turned content moderation on social media platforms into a contentious topic.

Just recently, the US Supreme Court put a temporary hold on an order that limited the Biden administration's ability to prompt social media companies to take down content it deems misinformation.

