The rise of deepfake cyberbullying is a growing problem for schools

Schools are facing a growing problem with students using artificial intelligence to turn innocent photos of classmates into sexually explicit deepfakes.

The spread of manipulated photos and videos can create a nightmare for victims.

The challenge schools face was highlighted this fall when artificial intelligence-generated nude photos swept through a middle school in Louisiana. Two boys were eventually charged, but not before one of the victims was expelled for starting a fight with a boy she accused of creating images of her and her friends.

“Although the ability to alter images has been available for decades, the advent of artificial intelligence has made it easier for anyone to alter or create such images without any training or experience,” Lafourche Parish Sheriff Craig Webre said in a news release. “This incident highlights a serious concern that all parents must address with their children.”

Here are the main takeaways from The Associated Press’ story about the rise of AI-generated nude images and how schools are responding.

Republican state Sen. Patrick Connick, who authored the legislation, said the prosecution stemming from the deepfakes at the Louisiana middle school is believed to be the first under the state’s new law.

This law is one of many across the country targeting deepfakes. In 2025, at least half the states enacted legislation addressing the use of generative artificial intelligence to create images and sounds that appear realistic but are fabricated, according to the National Conference of State Legislatures. Some laws address simulated child sexual abuse material.

Students have also been prosecuted in Florida and Pennsylvania and expelled in places like California. A fifth-grade teacher in Texas was also accused of using artificial intelligence to create child sexual abuse material depicting his students.

Deepfakes started as a way to mock politicians and celebrities. Until the last few years, people needed some technical skill to make them look realistic, said Sergio Alexander, a research associate at Texas Christian University who has written about the issue.

“Now, you can do it on an app, you can download it on social media, and you don’t have to have any technical experience at all,” he said.

He described the scale of the problem as astonishing. The National Center for Missing and Exploited Children said the number of AI-generated child sexual abuse images reported to its CyberTipline rose from 4,700 in 2023 to 440,000 in just the first six months of 2025.

Sameer Hinduja, co-director of the Cyberbullying Research Center, recommends that schools update their policies on AI-generated deepfakes and get better at explaining them. That way, “students don’t think staff and teachers are completely oblivious, which may make them feel like they can act with impunity,” he said.

He said many parents assume schools are addressing the issue when they are not.

“A lot of them are very unaware and ignorant,” said Hinduja, who is also a professor in Florida Atlantic University’s School of Criminology and Criminal Justice. “We hear about ostrich syndrome, just kind of burying their heads in the sand, hoping this doesn’t happen among their young people.”

AI deepfakes are different from traditional bullying because instead of a text or nasty rumor, there’s a video or photo that often goes viral and then keeps resurfacing, creating a cycle of trauma, Alexander said.

He added that many victims suffer from depression and anxiety.

“They literally shut down their accounts because it makes it seem like, you know, there’s no way they can prove this isn’t real — because it seems 100% real,” he said.

Parents can start the conversation by casually asking their children if they’ve seen any funny fake videos online, Alexander said.

Take a moment to laugh at some of them, like Bigfoot chasing hikers, he said. From there, parents can ask their children: “Have you thought about what it would be like to be in this video, even if the video is funny?” Then parents can ask whether any of their classmates have made a fake video, even a harmless one.

“Based on the numbers, I guarantee they would say they know someone,” he said.

If kids encounter things like deepfakes, they need to know they can talk to their parents without getting in trouble, said Laura Tierney, founder and CEO of The Social Institute, which works to educate people about responsible use of social media and has helped schools develop policies. She said many children fear their parents will overreact or take their phones away.

She uses the acronym SHIELD as a road map for how to respond. The “S” stands for “stop” and do not forward the material. “H” stands for “huddle” with a trusted adult. The “I” is for “inform” any social media platforms where the photo is posted. The “E” is a cue to collect “evidence,” such as who posted the photo, but not to download anything. The “L” is for “limit” access to social media. “D” is a reminder to “direct” victims to help.

“I think the fact that this acronym has six steps shows that this issue is really complex,” she said.

___

AP’s education coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.
