A team of researchers from four leading universities has found a way to use machine learning technology to flag risky conversations on Instagram without having to eavesdrop on them. 

The research examined which types of data input, including metadata, text, and image features, are most effective for training machine learning models to identify risky conversations.

(Photo: OLIVIER DOULIERY/AFP via Getty Images) In this photo illustration, a person looks at a smartphone with an Instagram logo displayed on the screen, on August 17, 2021, in Arlington, Virginia.

Identifying Risky Conversations

The findings revealed that metadata attributes, such as conversation duration and participant engagement level, can be used to detect risky conversations.

In the experiment, the program identified risky conversations with 87% accuracy using only these sparse, anonymized details.
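
To make concrete what these sparse, anonymized details could look like, here is a minimal sketch of metadata feature extraction in Python. The record fields and function name are illustrative assumptions, not taken from the published study, and no message content is read at any point.

```python
from datetime import datetime

def extract_metadata_features(messages):
    """Compute content-free features for one conversation.

    `messages` is a list of dicts with hypothetical fields:
    {"sender": opaque id, "timestamp": datetime, "has_image": bool}.
    Message text is never accessed.
    """
    timestamps = sorted(m["timestamp"] for m in messages)
    duration_sec = (timestamps[-1] - timestamps[0]).total_seconds()
    # Gaps between consecutive messages approximate response time.
    gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    return {
        "duration_sec": duration_sec,
        "num_messages": len(messages),
        "num_participants": len({m["sender"] for m in messages}),
        "num_images": sum(m["has_image"] for m in messages),
        "avg_response_sec": sum(gaps) / len(gaps) if gaps else 0.0,
    }

# Example: a two-person exchange with one image.
convo = [
    {"sender": "u1", "timestamp": datetime(2021, 8, 17, 12, 0, 0), "has_image": False},
    {"sender": "u2", "timestamp": datetime(2021, 8, 17, 12, 0, 40), "has_image": True},
    {"sender": "u1", "timestamp": datetime(2021, 8, 17, 12, 2, 0), "has_image": False},
]
print(extract_metadata_features(convo))
```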

By identifying the most useful types of data input for these models, the study can help improve their accuracy and effectiveness in detecting potentially harmful conversations, such as those involving cyberbullying, hate speech, or online harassment.

The team collected and analyzed more than 17,000 private chats from 172 Instagram users aged 13-21. The participants labeled each conversation as "safe" or "unsafe."

They used this data to develop a program that can identify risky conversations using metadata, such as conversation length, number of users involved, number of messages sent, response time, and number of images sent. This program could operate even if Instagram conversations were encrypted. 
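
A plausible shape for such a metadata-only classifier, sketched with scikit-learn on synthetic data: the feature columns mirror the attributes named above, but the model choice and the placeholder data are assumptions, not the team's published pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for labeled conversations: each row holds the
# metadata attributes named in the article. Real labels would come from
# the participants' "safe"/"unsafe" annotations; these are placeholders.
n = 1000
X = np.column_stack([
    rng.exponential(3600, n),  # conversation length (seconds)
    rng.integers(2, 6, n),     # number of users involved
    rng.integers(1, 200, n),   # number of messages sent
    rng.exponential(120, n),   # mean response time (seconds)
    rng.integers(0, 20, n),    # number of images sent
])
y = rng.integers(0, 2, n)      # 1 = "unsafe", 0 = "safe" (random here)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# With random placeholder labels this hovers near chance; the 87%
# figure reported in the study comes from its real labeled data.
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Because message content never enters the feature matrix, a pipeline of this shape could, in principle, run even over end-to-end encrypted conversations, which is the property the researchers highlight.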

"One way to address this surge in bad actors, at a scale that can protect vulnerable users, is automated risk-detection programs," said Afsaneh Razi, Ph.D., an assistant professor in Drexel's College of Computing & Informatics, who was a co-author of the research.

"But the challenge is designing them in an ethical way that enables them to be accurate, but also non-privacy invasive. It is important to put the younger generation's safety and privacy as a priority when implementing security features such as end-to-end encryption in communication platforms." 


Protecting Young Users, Ensuring Their Privacy

At present, regulators and platform providers are struggling to protect young social media users from bullying and harassment while ensuring their privacy.

Instagram is the most commonly used social media platform among 13- to 21-year-olds in the US, and some studies claim that harassment on the platform is driving an increase in depression, especially among teenage girls, along with other mental health problems and eating disorders.

Platforms have faced mounting pressure to protect users' privacy since the Cambridge Analytica scandal and the passage of EU privacy laws.

However, this makes it harder for the platforms to use automated technology to detect and prevent risks. The team's system offers a way to do both: protect young users while maintaining their privacy.

The team, led by researchers from Drexel University, Boston University, Georgia Institute of Technology, and Vanderbilt University, published their findings in the Proceedings of the ACM on Human-Computer Interaction.

