UK Tech Firms and Child Protection Agencies to Examine AI's Capability to Generate Exploitation Images
Technology companies and child safety agencies will be granted permission to assess whether artificial intelligence tools can generate child abuse images under recently introduced British legislation.
Significant Rise in AI-Generated Harmful Material
The declaration coincided with findings from a protection watchdog showing that cases of AI-generated child sexual abuse material have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.
New Regulatory Framework
Under the changes, the authorities will allow designated AI developers and child safety groups to examine AI models – the underlying systems for conversational AI and visual AI tools – and ensure they have sufficient protective measures to prevent them from producing images of child exploitation.
"Fundamentally about stopping exploitation before it occurs," declared Kanishka Narayan, adding: "Specialists, under strict protocols, can now identify the danger in AI models promptly."
Tackling Legal Challenges
The amendments have been introduced because it is against the law to create or possess CSAM, meaning that AI developers and others could not generate such images even as part of a testing regime. Until now, authorities had to wait until AI-generated CSAM was uploaded online before they could act.
This law is designed to avert that problem by making it possible to stop the creation of those images at their source.
Legal Structure
The government is adding the amendments to the criminal justice legislation, which also establishes a ban on possessing, creating or distributing AI systems designed to create child sexual abuse material.
Practical Consequences
Recently, the official visited the London base of a children's helpline and listened to a simulated call to counsellors involving an account of AI-based abuse. The call depicted an adolescent seeking help after facing extortion using a sexualised AI-generated image of himself.
"When I hear about young people facing blackmail online, it is a source of extreme frustration to me and of rightful concern amongst families," he stated.
Alarming Data
A prominent online safety organization reported that instances of AI-generated exploitation material – in the form of webpages, each of which may contain numerous files – had more than doubled so far this year.
Instances of the most severe content – the most serious form of exploitation – rose from 2,621 visual files to 3,086.
- Girls were overwhelmingly victimized, making up 94% of prohibited AI depictions in 2025
- Depictions of infants to two-year-olds rose from five in 2024 to 92 in 2025
Industry Response
The legislative amendment could "constitute a vital step to ensure AI products are safe before they are launched," stated the chief executive of the internet monitoring organization.
"Artificial intelligence systems have made it possible for survivors to be victimised repeatedly with just a few simple actions, giving offenders the capability to make potentially limitless quantities of advanced, lifelike child sexual abuse material," she continued. "Content which additionally commodifies victims' suffering, and makes children, particularly female children, less safe both online and offline."
Counselling Session Data
Childline also released data from counselling sessions in which AI was mentioned. AI-related risks discussed in the sessions include:
- Using AI to rate body size and looks
- Chatbots discouraging children from talking to trusted adults about harm
- Being bullied online with AI-generated content
- Online extortion using AI-manipulated pictures
Between April and September this year, Childline conducted 367 counselling sessions where AI, conversational AI and associated terms were mentioned, four times as many as in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.