Meta's Facebook can monitor people's private messages for harmful content such as child exploitation, and its systems can flag phrases such as "going to kill" or "going to shoot," but they cannot reliably judge the context behind them.

As a result, the company largely has to rely on user reports to detect such threats. On Wednesday, Meta revealed that the warnings preceding the Texas incident were sent via private one-to-one text messages on Messenger.

(Photo: OLIVIER DOULIERY/AFP via Getty Images) In this photo illustration, a person looks at a smartphone with a Facebook app logo displayed in the background, on August 17, 2021, in Arlington, Virginia.

Private One-to-One Text Messages

Gov. Greg Abbott revealed at a news conference that warnings were posted on Facebook before the brutal attack at Robb Elementary School in Uvalde on Tuesday, May 24. In particular, authorities found that the suspect himself had made clear threats on the platform before the incident took place.

However, Facebook spokesman Andy Stone clarified in a tweet that "the messages Gov. Abbott described were private one-to-one text messages that were discovered after the terrible tragedy occurred."

Stone added that the company is working closely with law enforcement in its ongoing investigation.

It is worth noting, then, that the threats were not posted publicly on Facebook but were sent via Messenger.

However, as ABC News explained, Facebook can monitor these private messages. Its artificial intelligence systems can detect harmful content and links containing malware, and they can flag threatening messages.

Interpreting those threatening words, however, is a painstaking process: a flagged phrase can just as easily be a song lyric, a joke, or satire. Because its systems have a difficult time verifying which flags are genuine warnings, Facebook generally acts only on user reports.

But even this monitoring could soon be scrapped, as Meta plans to bring end-to-end encryption to the messaging systems of Facebook and Instagram next year.

Read Also: Meta Debuts 'Privacy Center' for Facebook, Centers on 5 Primary Focuses-Limited Release on Desktop

How End-to-End Encryption Could Affect Situations Like the Texas Incident

With end-to-end encryption in place, no one other than the sender and the recipient would be able to access the messages. WhatsApp, which is also owned by Meta, already uses this kind of encryption.
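As a rough sketch of the concept (using the open-source PyNaCl library purely for illustration, not Meta's actual implementation), a message is encrypted on the sender's device with keys that only the two parties hold, so the platform relaying it sees nothing but ciphertext:

# Minimal end-to-end encryption sketch using PyNaCl (libsodium bindings).
# Illustrative only -- this is not Meta's actual protocol.
from nacl.public import PrivateKey, Box

# Each user generates a key pair; private keys never leave their devices.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts with their private key and the recipient's public key.
sending_box = Box(sender_key, recipient_key.public_key)
ciphertext = sending_box.encrypt(b"See you at 7?")

# The relaying server only ever handles the ciphertext and cannot decrypt it.
# Only the recipient, holding the matching private key, can read the message.
receiving_box = Box(recipient_key, sender_key.public_key)
plaintext = receiving_box.decrypt(ciphertext)
print(plaintext.decode())  # -> "See you at 7?"

Because the server in the middle never holds the private keys, scanning message content server-side, as Facebook's AI systems do today, becomes technically impossible.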

If Meta pushes through with this encryption, it will no longer be able to monitor its users' private messages and detect threats such as the warnings that preceded the Texas incident.

A recent Meta-commissioned paper, reported by ABC News, underscored the benefits of such privacy but still acknowledged harms such as users abusing the encryption to exploit children, conduct human trafficking, and amplify hate speech.

Apple's end-to-end encryption, for instance, has drawn the company into disputes with the Justice Department over its messaging privacy.

Back in 2019, after a tragic incident involving U.S. sailors at a Navy installation, investigators were unable to access the suspect's iPhones because they were encrypted.

In the same year, several social media and technology organizations sent an open letter to Mark Zuckerberg, imploring him to halt the expansion of Facebook's encryption, arguing that it could compromise "communications freedom, public safety, and democratic values."

Related Article: Facebook Meta | What Does the Metaverse Mean for User Privacy and Security?

This article is owned by Tech Times

Written by Joaquin Victor Tacla

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.