According to researchers from the Oxford Martin Programme on Ethical Web and Data Architectures (EWADA) at the University of Oxford, there is a pressing need for a more thoughtful approach to integrating ethical principles into the development and regulation of AI, particularly where children are concerned.


Are Children Being Left Behind by AI?

One major issue identified by the researchers is the lack of consideration for the developmental aspects of childhood. The complexity and diversity of children's needs, including age, developmental stage, background, and personality, are often overlooked.

Another challenge the researchers highlight is the minimal attention given to the role of guardians, such as parents, in childhood. Parents are often assumed to hold a traditional, authoritative role, a view that neglects the evolving dynamics of the digital world and the need for a more balanced perspective.

Moreover, there's a dearth of child-centered evaluations that prioritize children's best interests and rights. Quantitative assessments dominate discussions around safety and safeguarding in AI systems, but the researchers claim that these often fail to account for the nuanced developmental needs and long-term well-being of children.

Lastly, the absence of a coordinated, cross-sectoral approach to formulating ethical AI principles for children hampers the implementation of impactful changes. 

AI Tools to Assist Children

The researchers cite real-life examples to illustrate these challenges. While AI technologies are increasingly employed to keep children safe online by identifying inappropriate content, the team highlights a gap in integrating safeguarding principles into AI innovations, particularly those built on large language models (LLMs).

In collaboration with the University of Bristol, the researchers are developing tools to assist children with conditions like ADHD. These tools are designed with careful consideration of children's needs, ensuring interfaces that align with their daily routines, digital literacy skills, and preferences for simplicity and effectiveness.

To address these challenges, the researchers propose several recommendations. They emphasize the need for increased involvement of key stakeholders, including parents, guardians, AI developers, and children themselves. 

Direct support for industry designers and developers of AI systems is also crucial, fostering their engagement in ethical AI principles implementation.

Additionally, the establishment of legal and professional accountability mechanisms centered on children's interests is essential, according to the researchers. 


AI Incorporation in Children's Lives

Dr. Jun Zhao, Oxford Martin Fellow, Senior Researcher at the University's Department of Computer Science, and lead author of the paper, said, "The incorporation of AI in children's lives and our society is inevitable. While there are increased debates about who should ensure technologies are responsible and ethical, a substantial proportion of such burdens falls on parents and children to navigate this complex landscape."

"This perspective article examined existing global AI ethics principles and identified crucial gaps and future development directions. These insights are critical for guiding our industries and policymakers. We hope this research will serve as a significant starting point for cross-sectoral collaborations in creating ethical AI technologies for children and global policy development in this space."

The perspective article of the researchers was published in the journal Nature Machine Intelligence. 



ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.