Tech firms and child protection organizations will receive authority to evaluate whether artificial intelligence systems can produce child abuse images under recently introduced British legislation.
The announcement coincided with revelations from a protection watchdog showing that cases of AI-generated CSAM have more than doubled in the last twelve months, growing from 199 in 2024 to 426 in 2025.
Under the amendments, the government will allow designated AI companies and child safety organizations to inspect AI models – the underlying technology behind conversational AI and image generators – to ensure they have adequate safeguards against producing images of child sexual abuse.
"This is fundamentally about preventing exploitation before it occurs," stated Kanishka Narayan, adding: "Under strict conditions, specialists can now identify the danger in AI systems early."
The amendments have been introduced because it is against the law to create or possess CSAM, meaning that AI developers and others cannot generate such images as part of a testing regime. Until now, officials had to wait until AI-generated CSAM was published online before they could act on it.
The law is designed to avert that problem by helping to stop the production of such material at its source.
The government is introducing the changes as amendments to the criminal justice legislation, which also brings in a prohibition on possessing, creating or distributing AI models designed to produce exploitative content.
Recently, the minister visited the London base of Childline and listened to a simulated call to counsellors involving an account of AI-based abuse. The interaction depicted a teenager seeking help after being blackmailed with a sexualised AI-generated image of himself.
"When I hear about young people experiencing blackmail online, it is a cause of extreme frustration in me and of justified anger amongst parents," he said.
A leading online safety organization stated that instances of AI-generated exploitation content – such as online pages that may include multiple files – had more than doubled so far this year.
Instances of the most severe content – the most serious form of exploitation – rose from 2,621 images or videos to 3,086.
The legislative amendment could "constitute a crucial step to ensure AI tools are secure before they are launched," stated the head of the online safety foundation.
"Artificial intelligence systems mean that victims can be targeted repeatedly with just a few clicks, giving offenders the ability to make potentially endless amounts of advanced, photorealistic exploitative content," she continued. "Material which further commodifies victims' trauma, and renders young people, particularly girls, less safe both online and offline."
Childline also published details of counselling sessions in which AI was referenced, covering a range of AI-related risks.
Between April and September this year, Childline conducted 367 support sessions in which AI, conversational AI and associated terms were discussed – four times as many as in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.