Twitter will prompt users before they post a potentially offensive tweet, in an effort to curtail harassment on the platform.

(Photo : REUTERS/Mike Blake)
The Twitter App loads on an iPhone in this illustration photograph taken in Los Angeles, California, U.S., July 22, 2019.

The company is testing a new feature that lets users rethink or edit a tweet that uses "harmful" language before it goes out.

Although harassment on Twitter certainly does not just happen in the "heat of the moment," any efforts that may "reduce toxicity" and create a harmonious social media platform are welcome. 


Twitter users have mixed responses to the company's tweet

One user thinks it is a valid suggestion. Another user wrote, "This is a good idea, it will create less conflict and less situations when people have a second chance to think rationally before posting."

"A very useful tool and about time. You should add this as a permanent feature," another user responded to the tweet.

However, others question the validity of the measure. "Who are the gods that get to decide what words are harmful?" asks one user. Another user airs the same concern: "Who decides what is right and what is wrong??? BTW please add edit option so we can correct our tweet instead of deleting for all spelling and grammatical mistakes we can correct tweet," the reply said.

Meanwhile, the majority of the replies request an edit button so users can revise their tweets in general. "We just want an edit option and folders to organize our bookmarks," said one user. "I'd rather have an edit button after I hit send for misspellings. Then anything said in the heated moment could be changed at any time. But that's just me maybe," suggested another user.


Twitter has not yet said what types of language trigger the alert, or who decides whether a given word is offensive.

Instagram nudges

However, such an effort is not new to social media fans. In July, Instagram ran a similar trial as part of its struggle against online bullying and its push to encourage positivity. In that test, users received a warning and a nudge before sharing a potentially abusive post.

"This intervention gives people a chance to reflect and undo their comment and prevents the recipient from receiving the harmful comment notification," Adam Mosseri, head of Instagram, wrote in a blog post.

"It's our responsibility to create a safe environment on Instagram. This has been an important priority for us for some time, and we are continuing to invest in better understanding and tackling this problem," he added.

On December 16, the company announced in another blog post that the results of the trial were promising, "and we've found that these types of nudges can encourage people to reconsider their words when given a chance."

With companies operating on skeleton crews during the coronavirus pandemic, this kind of moderation across massive social media platforms is particularly relevant right now.

All of the major social networks have increased their reliance on AI detection while their tech workers are away from the office. Facebook, for its part, has decided that content moderators will be among the first employees to return to the workplace.


ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.