Artificial intelligence (AI) is being used to produce “astoundingly realistic” imagery of child sexual abuse, which is then being distributed online.
This is the warning of analysts at the Internet Watch Foundation, a UK-based charity responsible for identifying and removing child sexual abuse material from the internet.
In the UK, as in the US, AI-generated, so-called “pseudo images” of child sexual abuse are illegal.
As the charity notes, “far from being a victimless crime, this imagery can normalise and ingrain the sexual abuse of children. It can also make it harder to spot when real children may be in danger.”
The Internet Watch Foundation is calling on Prime Minister Rishi Sunak to treat this threat as a “top priority” when he hosts the first global AI summit later this year.
Internet Watch Foundation chief executive Susie Hargreaves said: “AI is getting more sophisticated all the time.
“Offenders are now using AI image generators to produce sometimes astoundingly realistic images of children suffering sexual abuse.
“For members of the public, some of this material would be utterly indistinguishable from a real image of a child being sexually abused. Having more of this material online makes the internet a more dangerous place.
“We are not currently seeing these images in huge numbers, but it is clear to us the potential exists for criminals to produce unprecedented quantities of life-like child sexual abuse imagery.
“This would be potentially devastating for internet safety and for the safety of children online.
“We have a chance, now, to get ahead of this emerging technology, but legislation needs to be taking this into account, and must be fit for purpose in the light of this new threat.”
The warning comes following a five-week study by the foundation into the threats posed by emerging AI technology, during which it began recording, for the first time, instances of AI-generated child sexual abuse material reported to its hotline.
In total, the charity investigated 29 reports of URLs suspected to harbor AI-generated child sexual abuse imagery, including reports submitted by members of the public.
Of these, the Internet Watch Foundation determined that seven did indeed contain such images, including Category A and Category B material depicting children as young as three to six years old.
(In this context, Category A material is defined as that which involves penetrative sexual activity or sadism. Category B material involves non-penetrative sexual activity.)
Alongside identifying and removing a number of instances of AI-generated child sexual abuse material, the foundation’s analysts also discovered an online “manual” written by offenders that instructs other criminals on how to train the AI and refine prompts so that it returns more realistic results.
Dan Sexton, the foundation’s chief technical officer, added that analysts have also seen evidence online that offenders are finding ways to circumvent safety measures built into AI image generators that are intended to stop them producing sexualized images of children.
He said: “The Internet Watch Foundation’s primary mission is to protect children. If a child can be identified and safeguarded, that is always the number one priority for analysts.
“Our worry is that, if AI imagery of child sexual abuse becomes indistinguishable from real imagery, there is a danger that [our] analysts could waste precious time attempting to identify and help law enforcement protect children that do not exist.
“This would mean real victims could fall between the cracks, and opportunities to prevent real life abuse could be missed.”
Alongside the request for government action, Ms Hargreaves is also calling on AI companies and developers to help prevent the abuse of their platforms.
She said: “We know criminals can and do exploit any new technology, and we are at a crossroads with AI.
“The continued abuse of this technology could have profoundly dark consequences — and could see more and more people exposed to this harmful content.
“Depictions of child sexual abuse, even artificial ones, normalise sexual violence against children. We know there is a link between viewing child sexual abuse imagery and going on to commit contact offenses against children.”
She concluded: “My worry is if this material becomes more widely and easily available, and can be produced at will — at the click of a button — we are heading for a very dangerous place in the future.”