UK Technology Companies and Child Protection Agencies to Examine AI's Capability to Create Abuse Content

Technology companies and child safety agencies will be granted authority to assess whether AI tools can produce child exploitation images under recently introduced UK legislation.

Substantial Increase in AI-Generated Illegal Content

The announcement coincided with findings from a safety monitoring body showing that cases of AI-generated child sexual abuse material have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the amendments, the government will allow designated AI companies and child safety organizations to inspect AI models – the foundational systems for chatbots and image generators – and ensure they have adequate protective measures to prevent them from producing depictions of child exploitation.

"This is fundamentally about preventing abuse before it happens," declared Kanishka Narayan, adding: "Experts, under rigorous protocols, can now detect risk in AI models early."

Addressing Regulatory Obstacles

The changes have been implemented because it is illegal to produce and possess CSAM, meaning that AI developers and others could not generate such content as part of an evaluation regime. Previously, officials had to wait until AI-generated CSAM was published online before they could act against it.

This law is designed to avert that problem by enabling experts to halt the production of those images at their origin.

Legislative Framework

The amendments are being added by the authorities as revisions to the crime and policing bill, which is also establishing a ban on owning, creating or sharing AI models developed to create exploitative content.

Real-World Impact

This week, the official toured the London headquarters of a children's helpline and listened to a mock-up call to counsellors featuring an account of AI-based abuse. The call depicted an adolescent seeking help after facing extortion using a sexualised deepfake of themselves, created with AI.

"When I learn about children experiencing extortion online, it is a source of extreme frustration for me and of rightful concern amongst parents," he stated.

Alarming Data

A prominent online safety foundation reported that instances of AI-generated exploitation content – such as webpages that may include numerous images – had significantly increased so far this year.

Cases of category A material – the gravest form of exploitation – rose from 2,621 images or videos to 3,086.

  • Girls were predominantly targeted, accounting for 94% of illegal AI depictions in 2025
  • Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "represent a vital step to guarantee AI tools are secure before they are released," commented the head of the online safety foundation.

"Artificial intelligence systems have made it so survivors can be victimised all over again with just a few simple actions, giving offenders the ability to create potentially limitless amounts of sophisticated, photorealistic child sexual abuse material," she continued. "Material which further commodifies victims' suffering, and renders young people, particularly girls, more vulnerable both online and offline."

Support Session Data

The children's helpline also released details of support interactions where AI has been referenced. AI-related harms mentioned in the conversations include:

  • Employing AI to rate body size, physique and appearance
  • AI assistants dissuading children from talking to safe adults about abuse
  • Facing harassment online with AI-generated material
  • Digital blackmail using AI-faked images

Between April and September this year, the helpline conducted 367 counselling interactions where AI, chatbots and related topics were mentioned, four times as many as in the same period last year.

Half of the mentions of AI in the 2025 interactions were connected with mental health and wellbeing, including using chatbots for support and AI therapeutic applications.

Jacob Schwartz

A tech enthusiast and business strategist with over a decade of experience in digital transformation and startup consulting.