Facebook released a statement Oct. 21 detailing a loosening of its censorship policy. The company will now allow explicit and offensive posts if it finds them newsworthy.

The announcement immediately caused a stir, largely because it carves out exceptions to the company's own rules against graphic content such as nudity and violence.

"We're going to begin allowing more items that people find newsworthy, significant, or important to the public interest — even if they might otherwise violate our standards," Joel Kaplan, vice president for global public policy at Facebook, and Justin Osofsky, vice president for global operations and media partnerships at Facebook, said in a joint statement.

Observers note that the statement is largely ambiguous, which could lead to confusing or even inconsistent censorship standards. The company merely pointed to a contextualized evaluation of newsworthiness based on local culture and global practice. How that evaluation will work remains unclear, which is particularly vexing to some since local norms and global standards often conflict with each other.

Reports also suggest that Mark Zuckerberg overruled his team and bent the company's rules to allow some of Donald Trump's posts, which editors had decided qualified as hate speech. The situation, according to The Wall Street Journal, grew so tense that some employees threatened to quit.

To be fair, Facebook maintains that it is working on clearer guidelines and consulting stakeholders to keep its community safe while remaining open to expression.

The policy announcement can also be read as a direct response to a recent snafu in which Facebook removed a Swedish breast cancer awareness video. The platform was also criticized for taking down footage of Philando Castile's death.

As more information emerges, it appears that Facebook's own algorithm is largely to blame for these censorship mistakes. The company dismissed some of its human editors last August in favor of a proprietary algorithm that automatically flags and removes content violating a set of criteria. The same technology has also been blamed for promoting fake news. Facebook has so far attributed past errors to technical glitches.

At present, it is not clear how the social network will implement its new censorship policy. Will it rehire the team of human editors or merely tweak its algorithm? The answer must also be reconciled with other commitments, such as the company's participation in an international media consortium that addresses fake news and hoaxes.
