Recent findings from the National Center for Missing & Exploited Children (NCMEC) highlight a disturbing surge in child sexual exploitation online. 

The NCMEC's annual CyberTipline report, released on Tuesday, April 16, reveals a sharp rise in multiple forms of abuse, including the dissemination of child sexual abuse material (CSAM) and financial sexual extortion.

Shocking Statistics Involving AI-Generated Content and Child Abuse

In 2023, reports of online child abuse rose by more than 12% over the previous year, surpassing 36.2 million. While the majority of tips concerned CSAM such as photos and videos, there was also a disturbing increase in financial sexual extortion, in which predators coerce children into sending explicit images or videos and then extort them for money.

Emergence of AI-Generated CSAM

A concerning development highlighted by the NCMEC is the emergence of images and videos generated by artificial intelligence (AI) depicting child sexual exploitation. 

The center received 4,700 reports of AI-generated CSAM, a category it began tracking only in 2023. This new trend poses significant challenges for law enforcement and impedes the identification of real child victims.

"The NCMEC is deeply concerned about this quickly growing trend, as bad actors can use artificial intelligence to create deepfaked sexually explicit images or videos based on any photograph of a real child or generate CSAM depicting computer-generated children engaged in graphic sexual acts. For the children seen in deepfakes and their families, it is devastating," the NCMEC report states.

Online Child Sexual Abuse Material Has No Place in the US

Creating and distributing child sexual abuse material, including AI-generated content, is illegal in the United States. However, the majority of reported incidents originate outside the US, underscoring the global nature of this issue. 

In 2023, more than 90% of reported CSAM was uploaded from outside the United States, according to The Guardian.

Corporate Responsibility and Reporting Obligations

Tech giants play a crucial role in combating online child sexual exploitation. Platforms like Facebook, Instagram, and WhatsApp are among the top reporters of CSAM incidents to the NCMEC. 

However, the quality of those reports often lags far behind their volume, highlighting the need for improved reporting standards and closer collaboration between tech companies and law enforcement.

Call to Action

The NCMEC report emphasizes the urgent need for action from Congress and the global tech community to address this growing threat. 

Despite efforts to combat child sexual exploitation online, there remains a critical need for legislative measures and enhanced cooperation to protect vulnerable children and hold perpetrators accountable.

Earlier this month, parents of students at St. Thomas Aquinas Catholic Secondary School in London, Ontario, Canada, were alarmed by a troubling trend on campus.

At the time, Tech Times reported that AI-manipulated nude photos of some students were making the rounds on Telegram. Many of the students said the images were fabricated, calling out the irresponsible use of AI apps to edit their original photos.

Joseph Henry
