(Photo : Unsplash)

TikTok has finally implemented technology to automatically remove videos that are inappropriate for minors, backed by a zero-tolerance policy. The software frees human staff to focus on content that needs closer attention, such as misinformation and hate speech.

Accounts found to be posting explicit or violent videos will be removed automatically by the technology, particularly those sharing child abuse content.

TikTok to Partly Automate Its Review System

The company is beginning to automate its review system, blocking videos that feature graphic, illegal, and other objectionable content that violates its minor safety policy, as reported by Business Insider.

The move is intended to reduce the overwhelming number of videos that human moderators would otherwise have to review.

Eric Han, TikTok's head of US safety, announced the change, which will enforce the minor safety policy in the US and Canada.

The transition would allow human moderators to give more attention to videos that contain bullying, racism, hate speech, misinformation, and similar content. Before the transition, human moderators were responsible for vetting every flagged video and deciding whether it should be removed as inappropriate.


The Technology Isn't Perfect

The company has also acknowledged that no video-filtering technology is perfect.

To address this, creators whose videos are removed can appeal the decision to TikTok directly.

Moderators working for major platforms have suffered post-traumatic stress disorder from reviewing the content posted to those services.

In one case, a former Facebook moderator who claimed to review 1,000 pieces of content per night sued the company over its failure to shield staff from horrific content.

TikTok's human moderators will still need to review community reports and appeals and remove content that violates its policies. Since some videos can still slip past moderators, the automated review system should be a big help to TikTok's safety team.

Sanctions for Policy Breakers

TikTok will now suspend the accounts of creators who frequently violate its policies. Suspended users will be unable to upload videos, post comments, or edit their profiles for 24 to 48 hours.

The company also enforces a zero-tolerance policy for content depicting child abuse: the video is removed automatically, and the offending user is permanently banned from the platform.

According to a report TikTok published online, initial testing of the automated system started in Brazil and Pakistan.

The report indicated a total of almost 12 million video removals in the US alone, followed by roughly eight million and seven million removals in Pakistan and Brazil, respectively.

Over 8.5 million videos were removed in the US during the first quarter of 2021, and TikTok has said the figure will continue to rise once the automated review rolls out in the next few weeks.

Until that rollout, human moderators will continue to filter all explicit and provocative content manually.


This article is owned by Tech Times

Written by Alec G.
