Facebook researchers claim that Mark Zuckerberg and other Facebook executives ignored the rules they proposed to address alleged bias in Instagram's automated account removal system, which appeared to disproportionately target Black users.

According to an NBC News report on Thursday, July 23, which Gizmodo also covered, higher-ups even told the researchers, who requested anonymity, to stop conducting any research into racial bias in Facebook's moderation tools.

The issue is rooted in Facebook's attempt to make its automated moderation systems race-neutral by building an algorithm that, in effect, says, "You know, I don't really see color."

While the company's community standards on hate speech hold critical remarks against privileged and marginalized groups to the same bar, in practice its content moderation tools detect hate speech directed at white people at a much higher rate than hate speech directed at Black people.

In their study, the Facebook researchers found that, compared with white users, Black Instagram account holders are about 50% more likely to have their accounts automatically disabled for terms-of-service infractions such as posting hate speech and bullying.

 "The world treats Black people differently from white people," one employee told NBC, adding that we are "making choices on the wrong side of history" if we choose to treat everyone the same way.

In July 2019, another Facebook employee shared a chart on an internal forum suggesting that the company's tools "disproportionately defend white men."

The chart, which was later leaked to NBC News, showed that Facebook automatically took down more hate speech against white people than users actually reported. In other words, even when users did not find a post offensive enough to flag, Facebook's tools still deleted it.


Using the same tools, Facebook proactively removed fewer hate speech posts targeting marginalized groups, including Black, transgender, and Jewish users, than users reported. This means that even when users deemed certain posts offensive, Facebook's automated tools were not detecting them.

The proposed rules were eventually scrapped, and Instagram instead adopted an updated version of the moderation tool. Employees were even barred from testing the revised tool for racial bias.

Facebook's response to the report

Facebook claimed that the researchers used a flawed methodology. However, it did not deny that it issued a moratorium on probing racial bias in its moderation tools. In an interview with NBC, Facebook's VP of Growth and Analytics, Alex Schultz, said the decision was based on ethical and methodological concerns.

Facebook said that it is currently looking for better methods of testing its products for racial bias. In his interview with NBC, Schultz said that racial bias on Facebook's platforms is a "very charged topic," but that the company has significantly increased its investment in investigating algorithmic bias and its effects on hate speech moderation.

The company announced earlier this week that it had created new teams to investigate the racial impact of its products. These teams will compare how Facebook and Instagram algorithms affect Black and other minority users versus white users.

Facebook spokeswoman Carolyn Glanville said in a statement that the company is "actively investigating how to measure and analyze internet products."

Glanville said that leaders sought a "standard and consistent approach" to prevent biased and negligent work, and set up a project to develop one.
