Calls for Stricter Regulations on AI-Generated Content Involving Child Victims
The ongoing issue of AI-generated videos featuring child murder victims has reached a critical point, prompting Denise Fergus, the mother of murdered toddler James Bulger, to appeal for new legislation to combat this disturbing trend. Ms. Fergus has expressed her frustration with social media platforms like TikTok, stating that her requests to remove such content have largely gone unanswered.
Ms. Fergus highlighted that videos depicting a digital representation of her son discussing his abduction remain accessible, despite being flagged as offensive. The government maintains that these types of content are unlawful under the Online Safety Act and should be removed promptly by the respective platforms.
A TikTok representative said the platform actively works to remove AI-generated videos that violate its community guidelines, claiming that 96% of harmful content is identified proactively, before users report it.
Investigations by the BBC uncovered similar AI-generated videos on platforms such as YouTube and Instagram. Both of these platforms affirmed that they have taken down content deemed in violation of their policies.
Ms. Fergus expressed her belief that the current laws do not provide sufficient power to enforce the removal of harmful AI-generated content. “It’s just words at the moment,” she stated, calling for more decisive action.
She described the videos as “absolutely disgusting,” stressing the emotional toll they take on families affected by such tragedies. “When you see that image, it stays with you,” she said, highlighting the deep psychological impact these representations can have.
James Bulger was murdered in 1993 by two 10-year-olds, Jon Venables and Robert Thompson, who lured him away from his mother. His body was discovered days later, and the case became one of the most notorious in British criminal history, making Venables and Thompson the youngest convicted murderers in the UK's modern legal history.
The Nature of Disturbing Content
The unsettling AI videos under scrutiny typically portray animated versions of child victims narrating their own tragic stories. Reports from the BBC indicate that various accounts share this type of content, seemingly for profit through increased views and clicks.
YouTube has acknowledged its guidelines prohibit depictions of deceased individuals recounting the circumstances of their deaths, with a representative disclosing that channels violating these rules, such as Hidden Stories, have been terminated.
Ms. Fergus condemned the misuse of AI technology, stating, “How sick is that? It’s just corrupt. It’s just weird and it shouldn’t be done.” A government source indicated that individuals sharing such videos may face prosecution under public order offences relating to obscene or grossly offensive material.
The government characterized the use of technology for such disturbing depictions as “vile,” asserting that the Online Safety Act was designed to classify these videos as illegal content warranting immediate removal.
In 2023, the previous Conservative administration enacted the Online Safety Act, which aims to hold social media companies accountable for protecting users against harmful content. Ofcom is responsible for overseeing compliance with this law and has been developing guidelines for platforms regarding the enforcement of these protections.
Kym Morris, chair of the James Bulger Memorial Trust, suggested that the Online Safety Act be amended to specifically address protections against harmful AI-generated content. She emphasized the need for clear definitions, accountability measures, and legal ramifications for those who exploit technological advancements to inflict harm on victims and their families.
Her sentiment echoed a broader concern: “This is not about censorship – it’s about protecting dignity, truth, and the emotional wellbeing of those directly impacted by horrific crimes,” she observed.
Future Legislative Considerations
Proposals to compel social media platforms to remove certain types of “legal-but-harmful” content were considered during the drafting of the Online Safety Act but were ultimately abandoned over censorship concerns. Advocates for online safety argue that the law now needs updating to close loopholes and better protect vulnerable people.
Technology Secretary Peter Kyle recently remarked that he had “inherited an unsatisfactory legislative settlement” in the Online Safety Act, signaling an openness to future changes. However, reports indicate that there are currently no plans to introduce new legislation specifically targeting AI-generated content.
A government source revealed that a more focused AI bill intended to regulate advanced AI models may be on the horizon for later this year. Meanwhile, Jemimah Steinfeld, CEO of Index on Censorship, pointed out that many of these AI videos likely breach existing laws. However, she cautioned against hastily amending regulations that might inadvertently restrict legitimate content.
While arguing for a balance between regulation and freedom of expression, Steinfeld acknowledged the burden carried by families like Ms. Fergus’s: “To have to relive what she’s been through, again and again, as tech improves, I can’t imagine what that feels like,” she said.
Ofcom reiterated that platforms must evaluate whether reported content violates UK law and respond accordingly while also ensuring that appropriate measures are enforced to protect users from illegal material.
Source
www.bbc.com