Pedophiles are reportedly exploiting artificial intelligence (AI) technology to produce and distribute highly realistic child sexual abuse images.

Uncovered by a recent BBC investigation, this alarming trend raises urgent concerns about the ethical challenges posed by advancing AI technology and the need for stronger measures to combat the proliferation of illegal content.

(Photo: Lionel BONAVENTURE / AFP) Screens displaying the logo of Stable Diffusion, an artificial intelligence application created by Stability AI, pictured on April 26, 2023 in Toulouse, southwestern France.

The AI Dilemma: Exploiting the Potential of Stable Diffusion

Using AI software called Stable Diffusion, which was initially intended for art and graphic design, individuals are able to create lifelike depictions of child abuse.

With Stable Diffusion, users input word prompts describing any desired image, and the program generates it accordingly. However, the BBC investigation reveals that this technology is now being exploited to create distressing images of child sexual abuse, including the rape of infants and toddlers.


Patreon and Pixiv's 'Zero Tolerance'

Access to these explicit images has been facilitated through platforms like Patreon, where some individuals pay subscriptions to view them. While Patreon claims to have a "zero tolerance" policy regarding such imagery, the National Police Chiefs' Council has criticized platforms that profit from this content while shirking moral responsibility.

Online child abuse investigation teams in the UK have already encountered such content, shedding light on the severity of the issue. Since the advent of AI-generated images, there has been a significant increase in explicit content involving not just young girls but also toddlers.

It is essential to note that a computer-generated "pseudo image" depicting child sexual abuse is treated as illegal material in the UK, carrying the same legal consequences as real photographs or videos. 

Ian Critchley, who leads on child safeguarding at the National Police Chiefs' Council (NPCC), warned against the misconception that no harm is caused when synthetic images are created. He noted that pedophiles could progress from viewing such images to committing actual abuse against live children.

The sharing of these abuse images follows a three-step process: pedophiles create the images using AI software, then promote them on platforms such as Pixiv, a Japanese picture-sharing website predominantly used by artists sharing manga and anime. These Pixiv accounts then provide links directing customers to more explicit images, which can be accessed through platforms like Patreon.

The fact that Pixiv is hosted in Japan, where the sharing of sexualized cartoons and drawings of children is legal, enables creators to use it as a platform for promoting their work. 

They join groups and use hashtags to reach a wider audience. Pixiv has said it is committed to addressing the issue and has banned all photo-realistic depictions of sexual content involving minors.

Production of Child Abuse Images on an Industrial Scale

Stability AI, the UK company leading the global collaboration behind the creation of the AI image generator Stable Diffusion, affirmed that it prohibits the use of its products for illegal or immoral purposes. 

However, an earlier open-source version of Stable Diffusion released last year enabled users to bypass filters and train the software to produce any image, including illegal content. As AI technology continues to advance rapidly, concerns are rising regarding its potential risks to privacy, human rights, and safety.

Critchley raised an additional concern, saying the abundance of realistic AI or "synthetic" images could hinder the process of identifying real victims of abuse and divert valuable resources away from genuine cases.

He stressed that society is at a critical juncture where it must decide whether the internet and technology will remain a fantastic opportunity for young people or become a more dangerous place.

He noted that immediate action is necessary to address this pressing issue and protect vulnerable individuals from the dark underbelly of AI innovation. 

