New State Laws Address Sexual Deepfakes of Minors

This year, 18 states passed laws that make clear that sexual deepfakes depicting minors are a crime. Experts say schools should update their policies to account for these AI-generated images as well.

Eighteen states passed laws this year against sexual deepfakes that depict minors, according to government relations firm MultiState Associates. That number is up from just two states in 2023, as lawmakers rush to close legal loopholes around this new kind of crime.

Deepfakes are photos, videos or audio recordings that are not real but appear authentic due to advances in artificial intelligence. They often involve a real person’s image or voice without their consent. When that person is under 18, the deepfake depicts a minor.

Laws against child sexual abuse material (CSAM), the preferred term for child pornography, have long been on the books, but those enacted prior to the rise of AI do not mention deepfakes specifically.

While federal law prohibits CSAM, including computer-generated images that “appear to depict an actual, identifiable minor,” it does not specifically address deepfake CSAM or how courts should handle cases in which a minor’s face has been superimposed on a sexually explicit image.

Meanwhile, such cases are on the rise. The number of sexual deepfakes that depict minors on one dark web forum more than doubled to 5,547 images from September to March, according to a study by the U.K.-based Internet Watch Foundation.

However, a lack of legal clarity can make cases that involve deepfake CSAM harder to prosecute, according to Max Rieper, director and counsel for technology and privacy at MultiState Associates.

“It’s a computer-generated image that may be based on real people, but it’s not a real person, and usually the statutes say you can’t have sexual content of a known minor, which implies a real person,” Rieper said. “It’s difficult for prosecutors to navigate, so just tightening that up, making sure there aren’t any loopholes, I think has been a priority for lawmakers in several states.”

STATES TAKE ACTION

Among the 20 states that now have laws specifically prohibiting sexual deepfakes of minors, most simply added AI-generated images to the definition of CSAM in existing statutes and extended existing criminal penalties, including fines and jail time, to people who create, possess or distribute these images. The punishments tend to be the same for real and deepfake CSAM but vary from state to state.

In California, for example, a first-time offender who possesses real or deepfake CSAM faces up to one year in jail, a fine of not more than $2,500 or both. Fines and prison time increase based on prior CSAM convictions, the number of images found, whether the minors depicted appear to be under age 12, whether the images portray sexual sadism or masochism, and whether the material is exchanged with or sold to anyone.

In Louisiana, anyone convicted of creating or possessing a sexual deepfake that depicts a minor faces at least five years in prison, a fine of not more than $10,000 or both. If the deepfake CSAM is exchanged with or sold to anyone, the penalty increases to at least 10 years in prison, a fine of not more than $50,000 or both.

Ten states have enacted laws that ban only sexual deepfakes of adults, typically by expanding existing “revenge porn” statutes to include AI-generated images. These laws do not address cases where the sexual deepfake depicts a minor and, therefore, leave potential legal loopholes, Rieper said.

GUIDANCE FOR SCHOOLS

Experts say schools should follow in the footsteps of the 20 states that have specifically addressed deepfake CSAM by updating their own policies to reflect that these AI-generated images are cropping up in U.S. schools.

In the past year, 15 percent of high school students have heard about sexual deepfakes that depict someone from their school, according to a report by the nonprofit Center for Democracy and Technology (CDT).

If schools fail to manage such incidents properly, they could be found in violation of Title IX of the Education Amendments of 1972, which requires schools that receive federal funding to protect their students against sexual harassment.

An April 2024 update to the federal regulations that implement Title IX expanded the definition of sexual harassment to include “the nonconsensual distribution of intimate images (including authentic images and images that have been altered or generated by artificial intelligence technologies).”

The CDT report suggests training for teachers and Title IX coordinators on how to report deepfake CSAM and support victims of these images. CDT also advises school leaders to educate staff, students and families on the dangers of deepfakes.

Tien Le, a lawyer for school districts in California, echoed that recommendation, particularly the call to educate students.

“Just informing students of the potential consequences and also of the potential impact of deepfakes on those who are depicted in the videos and the images is one way to reach students who may be doing this as a prank without understanding the gravity of their actions,” he said.

Whether students who make and spread deepfake CSAM are suspended, expelled, sent to counseling or given other consequences is up to school leaders, Le said, but they must report the issue to law enforcement immediately. Because the perpetrators of deepfake CSAM in schools are often minors themselves, he said, how police handle these cases will depend on the details of the situation and the laws of the state.

Brandi Vesco is a staff writer for the Center for Digital Education. She has a bachelor’s degree in journalism from the University of Missouri and has worked as a reporter and editor for magazines and newspapers. She’s located in Northern Nevada.