Report: Policies Needed to Curb Deepfake Sexual Harassment

A report from the nonprofit Center for Democracy and Technology recommends that schools update their sexual harassment policies to better address deepfakes, which have become a common problem in schools across the U.S.

Illicit deepfake images of students are now circulating in U.S. schools, and most districts are ill-equipped to respond, according to a September report from the Center for Democracy and Technology (CDT), a nonprofit group focused on the impact of technology on civil rights.

Deepfakes are fabricated photos, videos or audio recordings that appear authentic thanks to advances in artificial intelligence. Deepfake "nonconsensual intimate imagery" (NCII), as the report labels it, depicts a real person’s likeness or voice in a sexually explicit manner without their consent.

In the past year, 15 percent of high school students have heard about deepfake NCII depicting someone from their school, according to the CDT report. That number is based on responses to online surveys completed by 1,316 high school students this summer. Among these students, only 10 percent said their school provides guidance for victims of such imagery.

CDT President and CEO Alexandra Reeve Givens said sexually explicit deepfakes add to the existing problem of authentic NCII in schools. Thirty percent of the high school students surveyed said they had heard about authentic NCII depicting someone from their school.

“Sadly, in the past school year, the rise of generative AI has collided with a long-standing problem in schools: the act of sharing nonconsensual intimate imagery,” Givens said in a public statement. “In the digital age, kids desperately need support to navigate tech-enabled harassment, and schools hold important power to help curb these harms.”

Along with students, CDT also surveyed 1,006 teachers and 1,028 parents of both middle and high school students for the report. More than half the teachers said they had not heard from their school or district about how to handle incidents involving authentic or deepfake NCII. Only 4 percent of surveyed parents said their child’s school had communicated with them about risks and penalties for students who share sexually explicit deepfakes of other students without their consent.

A key recommendation from the report advises that, at a minimum, "schools should update their Title IX policies to explicitly include online conduct that creates a hostile environment for students at school, including NCII, and meaningfully communicate these policies to teachers, students and parents.”

Title IX, which is enforced by the U.S. Department of Education’s Office for Civil Rights, requires schools that receive federal funding to protect their students and staff against sex-based discrimination, including sex-based harassment and sexual violence. April 2024 amendments to the Title IX regulations updated the definition of online sexual harassment to include “the nonconsensual distribution of intimate images (including authentic images and images that have been altered or generated by artificial intelligence technologies).”

CDT also recommends training for teachers and Title IX coordinators on how to report incidents of NCII, both deepfake and authentic; how to protect the privacy of the students involved; and how best to support the victims of such imagery.

“Schools should also adopt educational preventative measures around NCII by taking actions like directly addressing the issue in curriculum, or including it in broader sexual harassment or digital citizenship efforts,” the report states. “Overall, a larger emphasis on proactive efforts to curb this conduct is needed to prevent worse outcomes for all those involved.”