Technology companies and child safety agencies will be allowed to test whether AI tools can generate child sexual abuse material under newly introduced UK laws.
The announcement came as a safety watchdog revealed that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Under the changes, the government will allow approved AI developers and child protection groups to examine AI systems – the foundational technology for chatbots and image generators – and verify they have sufficient safeguards to stop them from producing images of child exploitation.
"Ultimately about stopping abuse before it occurs," stated the minister for AI and online safety, noting: "Specialists, under rigorous conditions, can now detect the danger in AI systems early."
The changes were needed because it is illegal to create and possess CSAM, meaning AI developers and others could not generate such content even as part of a testing process. Previously, authorities had to wait until AI-generated CSAM had been published online before dealing with it.
The legislation aims to prevent that problem by helping to stop the production of such images at the source.
The government is introducing the amendments to criminal justice legislation that also establishes a ban on possessing, creating or distributing AI models designed to generate child sexual abuse material.
The minister recently toured the London base of a children's helpline and listened to a mock call to counsellors based on an account of AI-related exploitation. The call portrayed a teenager seeking help after being blackmailed with a sexualised AI-generated image of themselves.
"When I learn about children facing blackmail online, it is a cause of extreme frustration in me and justified concern amongst families," he stated.
A prominent internet monitoring organisation said that reports of AI-generated abuse material – each of which can refer to a webpage containing multiple files – had more than doubled so far this year.
Instances of the most severe category of material – the most serious form of abuse – rose from 2,621 images or videos to 3,086.
The law change could "constitute a crucial step to guarantee AI tools are safe before they are released", said the chief executive of the online safety organisation.
"Artificial intelligence systems have made it so survivors can be victimised repeatedly with just a simple actions, providing offenders the ability to create potentially limitless quantities of advanced, photorealistic exploitative content," she added. "Material which additionally exploits survivors' trauma, and renders young people, particularly female children, more vulnerable both online and offline."
Childline also published details of counselling sessions in which AI was mentioned, covering a range of AI-related harms.
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related terms were mentioned, four times as many as over the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.