TikTok wants to make its moderation process faster and more effective. The social media platform will now let automated systems remove certain videos without an initial human review.

TikTok Automated Systems

In the United States and Canada, TikTok will begin using automated reviewing systems to remove videos that contain sex, nudity, graphic content, violence, illegal activities, and other violations of its minor safety policy.

If the automated system detects a video in those categories, it will be pulled from the platform right away, and the creator of the video will be given a chance to appeal to a human moderator.

Until now, the social media platform has run all videos by human moderators in the United States before pulling them, according to The Verge. 


The change is intended to limit the volume of "distressing videos" that moderators are required to watch and to give them more time for clips that are trickier to assess, such as misinformation, which needs context to review properly.

Moderators at other social media companies, such as Facebook, have reportedly developed PTSD-like symptoms from the videos they were required to watch every day.

Issues of Discrimination

According to Axios, the announcement from the social media platform is also part of its effort to provide more transparency around TikTok video moderation.

However, automated systems are not perfect, and that is an issue TikTok may have to face next: some communities could be hit hardest by erroneous automated takedowns.

The social media app has a history of discriminatory moderation and was criticized for removing the intersex hashtag twice.

In 2019, the social media site was also accused of limiting the reach of people with disabilities, especially those with Down Syndrome and facial disfigurement.

According to Netzpolitik.org, the social media platform intentionally limited the reach of differently-abled users to protect them from potential bullying on the platform. However, the public pointed out that what TikTok did was a form of discrimination.

The issue was compounded by the platform's moderators, who were forced to make quick judgments about users' physical and mental traits before flagging their videos.

A spokesperson from TikTok said that the social media app was aware that the rules were flawed, and it was a poor attempt to prevent conflict on the platform.

Flaws in Automated Systems

TikTok has been testing the automated system in other countries, including Pakistan and Brazil, and said that only 5% of the videos its system removed were later found not to violate any of the app's rules.

TikTok removes hundreds of thousands of videos every day. During the first three months of 2021, the social media app removed 8,540,088 videos in the United States alone, so it is possible that thousands of videos were pulled by mistake.

A TikTok spokesperson said that human moderators will still be working behind the scenes. They will be in charge of reviewing appeals, community reports, and other videos flagged by the automated system.

TikTok will start rolling out the automated review system over the next couple of weeks.


This article is owned by Tech Times

Written by Sophie Webster

ⓒ 2021 TECHTIMES.com All rights reserved. Do not reproduce without permission.