November 17, 2023
AI-generated images of child sexual abuse may flood the internet. Now a watchdog is calling for action

NEW YORK (AP) – The already alarming spread of child sexual abuse images on the internet could get much worse if nothing is done to put checks on artificial intelligence tools that generate deepfake images, a watchdog agency warned Tuesday.

In a written report, the UK’s Internet Watch Foundation is urging governments and technology providers to act quickly before a flood of AI-generated images of child sexual abuse overwhelms law enforcement investigators and vastly widens the pool of potential victims.

“We’re not talking about the harm it might do,” said Dan Sexton, the watchdog group’s chief technology officer. “This is happening right now and it needs to be addressed right now.”

In a first-of-its-kind case in South Korea, a man was sentenced in September to 2 1/2 years in prison for using artificial intelligence to create 360 virtual child abuse images, according to the Busan District Court in the country’s southeast.

In some cases, children use these tools on each other. At a school in southwestern Spain, police are investigating teenagers’ alleged use of a phone app to make their fully clothed schoolmates appear naked in photos.

The report reveals a dark side of the race to build generative AI systems that let users describe in words what they want to produce – from emails to novel artwork or videos – and have the system spit it out.

If left unchecked, the deluge of deepfake child sexual abuse images could bog down investigators as they try to rescue children who turn out to be virtual characters. Perpetrators could also use the images to groom and coerce new victims.

Sexton said IWF analysts discovered the faces of famous children online as well as a “massive demand to create more images of children who have already been abused, possibly years ago.”

“They’re taking existing real-world content and using it to create new content for these victims,” he said. “It’s just incredibly shocking.”

Sexton said his charity, which is focused on combating child sexual abuse, first began fielding reports about abusive AI-generated imagery earlier this year. That led to an investigation into forums on the so-called dark web, a part of the internet hosted on an encrypted network and accessible only through tools that provide anonymity.

What IWF analysts found were abusers sharing tips and marveling at how easy it was to turn their home computers into factories for creating sexually explicit images of children of all ages. Some are also trying to profit from such images, which look increasingly lifelike.

“What we’re starting to see is this explosion of content,” Sexton said.

While the IWF’s report is intended to flag a growing problem more than to offer prescriptions, it calls on governments to strengthen laws to make it easier to combat AI-generated abuse. It is particularly aimed at the European Union, where there is a debate over surveillance measures that could automatically scan messaging apps for suspected child sexual abuse images even if the images are not previously known to law enforcement.

One focus of the group’s work is to prevent people who have already been victims of sexual abuse from being abused again through the redistribution of their images.

The report says technology vendors could do more to make it harder for the products they’ve built to be used in this way, although that’s complicated by the fact that some of the tools are difficult to put back in the bottle.

A crop of new AI image generators introduced last year wowed the public with their ability to conjure up whimsical or photorealistic images on command. But most are not favored by producers of child sexual abuse material because they contain mechanisms to block it.

Technology vendors that have closed AI models, with full control over how they are trained and used — such as OpenAI’s image generator DALL-E — have been more successful in blocking abuse, Sexton said.

In contrast, one tool favored by producers of child sexual abuse images is the open-source Stable Diffusion, developed by London-based startup Stability AI. When Stable Diffusion appeared on the scene in the summer of 2022, a subset of users quickly learned how to use it to create nudity and pornography. While most of that material depicted adults, it was often nonconsensual, such as when the tool was used to create celebrity-inspired nude images.

Stability later rolled out new filters that block unsafe and inappropriate content, and a license to use Stability’s software comes with a ban on illegal use.

In a statement released on Tuesday, the company said it “strictly prohibits any misuse for illegal or immoral purposes” on its platforms. “We strongly support law enforcement efforts against those who misuse our products for illegal or nefarious purposes,” the statement said.

However, users can still access older versions of Stable Diffusion, which is “largely the software of choice … for people who create explicit content involving children,” said David Thiel, chief technologist at the Stanford Internet Observatory, another watchdog group that studies the problem.

The IWF report acknowledges the difficulty of trying to criminalize AI image-generating tools themselves, even those that are “fine-tuned” to produce infringing material.

“You can’t regulate what people do on their computers, in their bedrooms. It’s not possible,” Sexton added. “So how do you get to the point where they can’t use openly available software to create malicious content like this?”

Most AI-generated images of child sexual abuse would be considered illegal under current laws in the US, UK and elsewhere, but it remains to be seen whether law enforcement has the tools to combat them.

A British police official said the report shows the effects already being witnessed by officers working to identify victims.

“We see children groomed, we see perpetrators making their own images to their own specifications, we see the production of AI images for commercial gain – all of which normalizes the rape and abuse of real children,” Ian Critchley, child protection lead for the National Police Chiefs’ Council, said in a statement.

The IWF’s report is timed ahead of a global AI safety gathering next week hosted by the UK government that will include high-profile attendees including US Vice President Kamala Harris and tech leaders.

“Although this report paints a bleak picture, I am optimistic,” IWF CEO Susie Hargreaves said in a prepared written statement. She said it’s important to communicate the realities of the problem to “a broad audience because we need to have discussions about the darker side of this amazing technology.”

___

O’Brien reported from Providence, Rhode Island. Associated Press writers Barbara Ortutay in Oakland, Calif., and Hyung-jin Kim in Seoul, South Korea, contributed to this report.

