Parents warned over rise in AI-generated child abuse material to ‘embarrass’ and ‘bully’ classmates
Parents have been issued an urgent warning over a rise in AI-generated deepfakes made by students to “embarrass or bully classmates”.
Advancements in technology and AI have paved the way for the creation of hyper-realistic, fake pornographic content, known as deepfakes, which can portray someone doing something that never happened.
The AFP has sounded the alarm over a rise in the technology being used to create child abuse material (CAM), with a 48-year-old Victorian man jailed last year after he created more than 790 “realistic child abuse images” using AI.
The man was charged with one count of producing child abuse material and using a carriage service to transmit child abuse material before he was jailed for 13 months.
AFP Commander Helen Schneider said the ability to produce such a large number of images, and the volume of data involved, was a real challenge for the AFP, as investigators were left to “analyse and painstakingly sort through a lot of images” in order to bring offenders before the court.
“When people are producing mass amounts … it consumes our resources a lot,” Commander Schneider told NewsWire.
“Over 790 images – that’s a lot of data.”
She said AI-generated CAM was becoming increasingly realistic, which made it difficult for the AFP to ensure it was not pouring resources into investigating images “where there is actually no real child at risk”.
She said the AFP wanted to instead focus resources “to identify children and remove them from harm”.
Children creating child abuse material with AI to “embarrass or bully classmates”
Also of particular concern is the rise in students using the technology.
“A lot of young people are using this technology to embarrass or bully classmates, which is a real concern for us,” Commander Schneider said.
“I know young people are very digitally literate in today’s world, obviously people in general are very curious about new technology.”
However, she said that curiosity could open the door to using the technology to break the law.
A student from southwestern Sydney allegedly made deepfake pornography of female students using artificial intelligence and images sourced from social media, while a student from a school in Victoria’s northwest allegedly created graphic nude images of about 50 girls from the school last June.
Fake sexual images of a female teacher were also circulated around another school in Melbourne’s southeast last May.
Commander Schneider said the “entry level to use this type of technology was decreasing”, which made it “more accessible from a capability perspective”.
“AI technology is increasingly accessible and I think it’s more accessible because it’s really integrated into a lot of the platforms used by Australians every day,” she said.
She said young people may be unaware that using AI to create deepfakes – including images, videos or files of a real person – that depict someone under the age of 18 in an abusive situation is in fact producing CAM.
As the school holidays come to a close, Commander Schneider urged parents, guardians and trusted adults to have “regular, open, non-judgmental” conversations with children about this issue.
Research done by the Australian Centre to Counter Child Exploitation in 2022 found only about half of parents talked to their children about online safety.
“We need to talk about this technology and understand how it might be misused,” she said.
“Make them understand the misuse of this technology to create images of someone that is in an abusive situation … is producing child abuse material.
“Whether it’s real or not, it still constitutes an offence under Australian law.”
She encouraged parents, guardians and trusted adults to check out the AFP-led education program ThinkUKnow, which has free resources to “assist parents and carers navigate these conversations, and information on where to get help if your child is a victim”.
Anyone with information about people involved in child abuse has been urged to contact the ACCCE, while anyone with information about abuse happening now or children at risk should contact triple-0.
Originally published as Parents warned over rise in AI-generated child abuse material to ‘embarrass’ and ‘bully’ classmates