Teen boys at New Jersey school accused of creating AI deepfake nudes of female classmates
An investigation is underway after teen girls at a New Jersey high school discovered that their male classmates had used artificial intelligence websites to create fake nudes of them, which the boys then shared and circulated.
Parents at Westfield High School alerted authorities in their affluent town after their daughters learned about the “deepfake” nudes on 20 October. The girls had suspected that their classmates had been hiding something, but it wasn’t until one of their sophomore classmates allegedly admitted to having received the fake nudes that they realised the scope of the situation.
“I am terrified by how this is going to surface and when. My daughter has a bright future and no one can guarantee this won’t impact her professionally, academically or socially,” Dorota Mani, whose 14-year-old daughter’s photo was used to create one of the AI-generated nudes, told The Wall Street Journal.
At a conference between local legislators and families impacted by the incident, Ms Mani’s daughter said that the girls in her class felt humiliated and some even opted to delete their social media, the WSJ reports. The female students have not seen the photos that their classmates shared in group chats and the high school declined to confirm to the WSJ whether school administrators had reviewed those conversations.
The Independent has reached out to Westfield High School and local police for comment.
“We’re aware that there are creepy guys out there,” Ms Mani’s daughter reportedly said during the meeting. “...but you’d never think one of your classmates would violate you like this.”
It is believed the teen boys used AI-powered online tools to create the deepfakes. It is still unclear whether they will face any disciplinary action.
“This is a very serious incident,” principal Mary Asfendis said in an email addressed to parents and obtained by the WSJ. “New technologies have made it possible to falsify images and students need to know the impact and damage those actions can cause to others.”
Four families have already filed reports with police, but the number of students whose pictures were used remains unknown.
In the last few years, the use of AI has prompted concerns about the spread of disinformation. Porn created using the technology now makes up more than 90 per cent of “deepfakes” circulating online, per fraud-detecting firm Sensity AI.
Yet the legislation in place to curb the misuse of AI remains new and insufficient. On Thursday, the UK held a two-day summit with international leaders in attendance, where officials tried to lay a foundation for regulating AI.
Earlier this year, TikTok said that all deepfakes or manipulated content showing realistic scenes must be labelled to indicate they are fake or altered, and that deepfakes of private figures and young people are no longer allowed. Meta, as well as adult sites like OnlyFans and Pornhub, also participates in an online tool, called Take It Down, that allows teens to report explicit images and videos of themselves and have them removed from the internet.
Some of the most used image generators don’t allow users to create pornographic images, according to the WSJ, but alternatives are easily available online.
While it is still too early to determine what legal pathways are available for students at Westfield High School, intellectual property attorney Natalie Elizarof told the WSJ that child sex abuse laws may apply in this situation.
“To be in a situation where you see young girls traumatised at a vulnerable stage of their lives is hard to witness,” Westfield Mayor Shelley Brindle told the WSJ.