Facebook Messenger is rolling out a new feature to combat scammers and imposters. The company will flag certain potentially bad actors and provide users with safety alerts about suspicious accounts.

The move is the latest example of Facebook using automation to address its myriad content moderation problems. Facebook also says the change takes Messenger a step closer to being end-to-end encrypted by default.


How this works

A crucial aspect of how it works is that Facebook limits how much of users' messages its AI actually reads to detect scams. Instead, the system flags possible scams and harassment based on how accounts behave.

"We may use signals user reports or reported content to inform the machine learning models," a Facebook spokesperson told Recode, "but we don't do so proactively. "We primarily use behavioral signals to determine when to surface a safety notice."

If an account uses a name that closely resembles one of your friends', or an account belonging to an adult sends a large volume of messages or friend requests to minors, that could trigger a notice. Because the moderation tool does not read message content, it is designed to keep working under the end-to-end encryption that Facebook has long said it will roll out on Messenger.
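Facebook has not disclosed how its behavioral models actually work, but a minimal, purely illustrative sketch of this kind of content-free flagging might look like the following Python. Every name, signal, and threshold here is a hypothetical stand-in, not Facebook's implementation:

```python
# Hypothetical sketch of behavior-based flagging. Facebook's real signals,
# thresholds, and models are not public; this only illustrates the idea of
# deciding from account behavior rather than message content.
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class Account:
    display_name: str
    is_adult: bool
    messages_to_minors: int = 0  # illustrative behavioral counter


def looks_like_friend(candidate: str, friend_names: list[str],
                      threshold: float = 0.9) -> bool:
    """Return True if a display name is nearly identical to a friend's name."""
    return any(
        SequenceMatcher(None, candidate.lower(), name.lower()).ratio() >= threshold
        for name in friend_names
    )


def should_surface_safety_notice(account: Account,
                                 friend_names: list[str]) -> bool:
    """Decide purely from behavioral signals -- no message content is read."""
    impersonation_signal = looks_like_friend(account.display_name, friend_names)
    minor_outreach_signal = account.is_adult and account.messages_to_minors > 20
    return impersonation_signal or minor_outreach_signal


# Example: an unknown adult account whose name nearly matches a friend's
suspect = Account(display_name="Jane Sm1th", is_adult=True)
print(should_surface_safety_notice(suspect, ["Jane Smith", "Bob Lee"]))  # True
```

Because a check like this inspects only names and activity counts, it could in principle run without ever decrypting a message body, which is consistent with Facebook's stated goal of keeping the feature compatible with end-to-end encryption.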

Notably, Facebook will not automatically block potentially fake or scammy accounts flagged by this new automated method. Instead, Messenger will show a safety notice warning the user and letting them block the person or read up on how to protect themselves from scams. The tactic is similar to Facebook's recent push to nudge users who have liked coronavirus hoaxes.

No more scammers, please?

This latest Messenger tool attempts to solve a well-documented problem. Con artists continue to find creative and convincing ways of impersonating users' friends and family to trick people into giving up their money or financial information. One of Facebook's new safety notices even specifically advises "refusing requests for money." More recently, these ploys have taken advantage of the Covid-19 pandemic, with scammers promoting fake treatments and charity efforts. The problem has gotten bad enough that the Better Business Bureau has issued a public warning.

This new effort to tackle Messenger's sizable scam problem does not extend to the misinformation that has long festered across Facebook's platforms. Fake news and conspiracy theories have found new breeding ground amid the Covid-19 pandemic, and Facebook has made policy changes to curb them since the outbreak began. Even so, despite misinformation's prevalence on private messaging apps like Messenger and WhatsApp, the new safety notices do not apply to fake news.


Facebook's latest safety alerts also reflect yet another effort to use technology to mitigate its biggest security challenges. The company has employed AI in attempts to address problems as wide-ranging as extremist activity and recruiting for illicit drug trafficking, though none of those violations has gone away entirely. Most recently, Facebook leaned on its AI-powered content moderation systems again as the pandemic pulled the company's human reviewers out of their offices, while warning that those systems might make more mistakes.

The feature has been rolling out on Android devices since March, and Facebook claims it has been flagging potentially suspicious activity to more than 1 million people a week. It will begin expanding to iOS users next week.
