Artificial intelligence (AI) images of child sexual abuse risk “flooding the internet”, the British NGO Internet Watch Foundation has warned, as cited by The Guardian.


The organization said it had identified nearly 3,000 such images in breach of British law, explaining that in some cases existing photographs of victims of actual abuse were used to create the new images.

In other cases, AI technology has been used to create images of celebrities who have been “de-aged” to appear as children and then depicted in sexual abuse scenarios.

Other examples uncovered by the Internet Watch Foundation included the use of AI tools to take images of clothed children found on the internet and depict them unclothed.

Susie Hargreaves, the IWF’s chief executive, said her organisation’s “worst nightmares” had “come true”.

“Earlier this year, we warned that AI-generated images could soon become indistinguishable from real-life photographs of children who have been sexually abused, and that we could start to see the distribution of such images in much greater numbers. We’re past that point,” Hargreaves said.

Images created by artificial intelligence are sold on the dark web

“The scary thing is that we see how criminals deliberately train artificial intelligence on images of real victims who have already been abused. Children who have been raped in the past are now being inserted into new scenarios because someone, somewhere, wants to see it,” she added.

The findings were published by the IWF in its latest report, based on an investigation of forums on the “dark web” — the hidden part of the internet that can only be accessed with specialized browsers and, in the case of many sites, with usernames and passwords.

The alarm raised by the IWF comes as AI-based applications and tools for generating and processing images have developed rapidly, even as public attention has focused more on conversational “bots” such as ChatGPT and Bing.

Although the best-known automated image-generation programs, such as Midjourney, have built-in safeguards to block requests for illegal content, criminals often use them in conjunction with other, less restricted tools that lack such protections.