Search engines, such as Google, Microsoft Bing, DuckDuckGo, Yahoo, and AOL, can serve as gateways to harmful content that glorifies, celebrates, or offers instruction on self-harm or suicide, according to the UK's communications regulator, Ofcom.

"Search engines are often the starting point for people's online experience, and we're concerned they can act as one-click gateways to seriously harmful self-injury content," Ofcom's online safety policy development director, Almudena Lara, said in a statement.

Search Results Led to Glorified Self-Harm Content

Research conducted by the Network Contagion Research Institute on behalf of Ofcom revealed that approximately 22% of the links analyzed in search results led to content that either glorified or provided instructions on self-harm, suicide, or eating disorders.

The study involved entering common search terms related to self-injury, as well as cryptic phrases used by online communities to disguise their meaning.

Over 37,000 result links from the five major search engines were analyzed, raising significant concerns about the availability of harmful content on the internet.

Across the search engines, the study found that one in five results (22%) linked, in a single click, to content celebrating or providing guidance on non-suicidal self-injury, suicide, or eating disorders.

Furthermore, 19% of the top links on the first page of search results, rising to 22% among the top five page-one results, led to content promoting or encouraging these harmful behaviors.

Particular risk was associated with image searches, which yielded the highest proportion of harmful or extreme results (50%), followed by web pages (28%) and videos (22%). 

Notably, images were found to be more likely to inspire acts of self-injury, and detection algorithms struggled to distinguish between visuals glorifying self-harm and those shared in recovery or medical contexts, according to Ofcom.

The use of deliberately obscured search terms by online communities posed a significant challenge, with individuals six times more likely to encounter harmful content related to self-injury when using cryptic terms.

On a positive note, one in five (22%) search results were categorized as "preventative," signposting users to content focused on providing help, such as mental health services or educational material about self-injury risks. 

Search Engines Like Google Are Gateways to Harmful Content

Lara expressed concern about search engines that serve as gateways to harmful content, especially for children. She emphasized the need for search services to understand the potential risks and the effectiveness of their protection measures.

"Search services need to understand their potential risks and the effectiveness of their protection measures - particularly for keeping children safe online - ahead of our wide-ranging consultation due in Spring," Lara noted

Responding to these findings, Ofcom highlighted that search services must be prepared to fulfill their requirements under the Online Safety Act.

In the spring, Ofcom plans to open a consultation on its Protection of Children Codes of Practice, which will set out the practical steps that user-to-user and search services can take to meet their children's safety duties.
