According to the Internet Watch Foundation (IWF), the depiction of child sexual abuse through artificial intelligence (AI) has become “significantly more realistic.” This finding highlights alarming advances in AI technology that are being exploited to create illegal imagery depicting minors.
In its latest annual report, the IWF noted a staggering 380% surge in reports of AI-generated child sexual abuse imagery in the UK, jumping from 51 cases in 2023 to 245 in 2024. These reports accounted for a total of 7,644 images and several videos, indicating that a single URL can host multiple instances of this illicit material.
A substantial portion of the reported images fell into the “category A” classification, which describes the most severe forms of child sexual abuse content, encompassing penetrative sexual acts and sadistic behavior. This category represented 39% of the actionable AI-generated material analyzed by the IWF.
In response to these troubling trends, the UK government announced in February that possessing, creating, or distributing AI tools specifically designed to generate child sexual abuse material will soon be illegal. This decision aims to close a concerning legal loophole that has raised significant alarm among law enforcement and child safety advocates. Furthermore, individuals found in possession of manuals instructing others on effectively using AI tools for the creation of abusive content will also face legal repercussions.
The IWF, which operates globally as well as in the UK, reported a concerning trend of AI-generated imagery appearing on openly accessible parts of the internet rather than being confined to the “dark web,” which requires specialized browsers to access. The quality of these AI-generated images can be so convincing that even trained IWF analysts struggle to distinguish them from actual photographs and videos.
The annual report also revealed record levels of webpages hosting child sexual abuse material, with the IWF logging 291,273 incidents in 2024—a 6% increase compared to the previous year. Most of the victims depicted in these reports were identified as girls.
To combat the proliferation of such abuse material, the IWF has introduced a new safety tool accessible for free to smaller websites. This initiative aims to assist these platforms in identifying and mitigating the spread of harmful content. The tool, known as Image Intercept, is equipped to detect and block images found in an IWF database containing 2.8 million digitally marked criminal images. This move is designed to ensure compliance with the recently implemented Online Safety Act, which includes measures for protecting children and combating illegal content, such as child sexual abuse material.
Derek Ray-Hill, interim chief executive of the IWF, emphasized that providing this tool free of charge represents a “major moment in online safety.” Additionally, technology secretary Peter Kyle remarked that the growth of AI-generated abuse and sextortion—where minors are threatened after sharing intimate images—highlights the constantly evolving dangers facing young people online. He characterized the Image Intercept tool as a “powerful example of how innovation can be part of the solution” in enhancing safety for children in digital spaces.
Source
www.theguardian.com