The UK's communications regulator, Ofcom, is launching a consultation in response to the growing number of children as young as five going online.

This move comes amid concerns over the potential risks posed to children online, prompting Ofcom to explore the role of artificial intelligence (AI) and automated tools in safeguarding young users. 

Kids Playing Connected (Photo: Thomas Quinn from Pixabay)

Children Aged 5 to 7 Are Joining the Internet

The consultation aims to assess how AI and automated tools are used now, and could be used in the future, to identify and remove illicit online content, particularly to shield children from harmful material and to detect sexual abuse imagery.

Ofcom's recently released findings shed light on the growing online presence of young children. A quarter (24%) of children aged 5 to 7 now own smartphones, while three-quarters (76%) use tablets.

Additionally, around a third of children in this age group go online to send messages or make voice/video calls, and a similar proportion engage with social media platforms. 

Notable platforms experiencing an uptick in usage among children aged 5 to 7 include WhatsApp, TikTok, Instagram, and Discord, with a concurrent rise in online gaming activities.

In response to these evolving digital habits among children, Ofcom will open consultations in the coming weeks to set out a comprehensive strategy for strengthening online child protection.

Moreover, Ofcom will undertake an additional consultation later this year specifically focused on harnessing automated tools, including AI, to identify illegal content and content posing significant risks to children.



Parental Guidance for Using the Internet

Despite parents increasingly engaging with social media alongside their children, 32% of young children go online on their own.

This trend is mirrored by a growing willingness among parents to let their children create social media profiles before they reach the platforms' minimum age requirements.

"The research suggests a disconnect between older children's exposure to potentially harmful content online, and what they share with their parents about their online experiences. A third (32%) of 8-17s say they have seen something worrying or nasty online in the last 12 months, but only 20% of parents of this age group report their child telling them they had seen something online that scared or upset them in the same time frame," Ofcom's report reads.

Notably, efforts to safeguard children's online experiences extend beyond parental oversight and encompass legal obligations imposed on technology companies under the Online Safety Act. 


Under this legislation, tech firms are mandated to implement measures to safeguard children from harmful online content, and Ofcom is poised to oversee compliance with these regulations.

As part of its commitment to upholding online safety standards, Ofcom will roll out consultations on several fronts, including developing a Children's Safety Code of Practice and exploring AI-driven tools to mitigate risks from illegal content and content harmful to children's well-being.




ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.