
Lawmakers Introduce Bipartisan Bill to Fight AI Deepfakes

A bipartisan bill led by U.S. representatives from Iowa and Massachusetts attempts to crack down on the growing threat and distribution of sexually explicit “deepfakes” on digital platforms.

(TNS) — A bipartisan bill led by Republican Iowa U.S. Rep. Ashley Hinson and a Massachusetts Democrat attempts to crack down on the growing threat and distribution of sexually explicit “deepfakes” on digital platforms.

As first reported by Politico, Hinson, of Marion, and U.S. Rep. Jake Auchincloss, of Massachusetts, introduced legislation that would strip Section 230 protections from Big Tech companies that fail to remove “deepfake” pornography, including material generated by artificial intelligence, from their platforms.

The digitally manipulated images and videos falsely depict someone saying or doing something they never said or did. They often take the form of sexually explicit photos and videos spread online.

The digitally altered material frequently merges a victim’s face with a body in a pornographic video, using generative artificial intelligence models to create audio, video and images that look and sound realistic.

In 2023, the FBI warned of a sharp increase in deepfakes being used in “sextortion” schemes, in which a person threatens to distribute sexually explicit images and videos unless the victim pays.

Deepfakes have targeted public figures like celebrities, politicians and influencers. But cases also have sprung up at middle and high schools, where students have fabricated explicit images of female classmates and shared the doctored pictures.

Hinson’s bill seeks to protect Americans’ privacy by holding online platforms — such as Facebook, X (formerly Twitter) and YouTube — responsible for cracking down on intimate deepfakes. If they fail to do so, the companies could lose their legal immunity from lawsuits over content on their platforms.

“We know deepfakes and other AI-altered and AI-generated content present a significant and growing threat to our ability to trust what we see online, as well as the potential for bad actors to create malicious deepfake content that can cause serious harm to victims,” Hinson told reporters during a weekly conference call Thursday.

The New York Times reported that students in several states have used widely available “nudification” apps to create convincing AI porn of their classmates that they’ve posted online.

What the bill would do

The Intimate Privacy Protection Act aims to hold Big Tech accountable for addressing harmful content by amending Section 230 of the Communications Decency Act of 1996, which shields online platforms from legal liability for third-party content created and posted by their users on their services.

The bill would strip Big Tech of immunity in cases where platforms fail to combat cyberstalking, digital forgeries and intimate privacy violations.

Under the legislation, tech platforms would have a legal obligation to act responsibly in protecting others from harm, including maintaining a “reasonable process” for preventing such privacy violations and “a clear and accessible process” for reporting, investigating and removing harmful content within 24 hours.

The bill also would impose data logging requirements to ensure that victims have access to data for legal proceedings, as well as require a process for removing or blocking content that a court determines to be unlawful.

“Big Tech companies shouldn’t be able to hide behind Section 230 if they aren’t protecting users from deepfakes and other intimate privacy violations,” Hinson said.

Hinson’s bill comes as Congress and legislators across the country consider how to respond to the emerging problem fueled by artificial intelligence technology and the harms of social media on young people.

“I think it’s just become way too easy for these bad actors to not only create this content, but circulate these inappropriate images, these deepfakes, online,” Hinson said. “As a mom, this really worries me. We are seeing our kids grow up in a totally different time. We are in kind of uncharted territory with the rise of social media and how things can quickly spread.

“So we have to work together to ensure we’re protecting kids from these dangers online, while still ensuring that our Big Tech companies are doing their part to keep our users, and particularly minors, safe online.”

Iowa efforts

States, including Iowa, also have moved to crack down on “deepfake” pornography in recent years.

Iowa state lawmakers this year passed and Gov. Kim Reynolds signed into law legislation criminalizing the dissemination, distribution and posting of pornographic images and videos that have been digitally altered to falsely depict another person.

A person who distributes a digitally altered image or video that portrays a person fully or partially nude or engaging in a sex act is guilty of harassment under the new state law. The offense is an aggravated misdemeanor punishable by up to two years in prison and a fine between $855 and $8,540.

It also requires anyone 18 years or older convicted of the new crime to register as a sex offender.

U.S. Senate bills

The U.S. Senate last month passed legislation that would give victims the ability to sue anyone who creates, shares or receives nonconsensual sexually explicit deepfakes depicting them.

Separate legislation passed by the Senate would criminalize the distribution of private sexually explicit or nude images online.

On Tuesday, senators overwhelmingly passed a pair of online safety bills designed to protect children from dangerous online content.

Under the bill, social media platforms would have to provide minors with options to protect their information, limit other users from communicating with children and disable certain features that “increase, sustain, or extend the use” of the platform — such as autoplay for videos or platform rewards.

And it would require companies to give users dedicated pages on which to report harmful content.

First Amendment

Civil rights, civil liberties and privacy organizations argue the Kids Online Safety Act, or KOSA, would violate the First Amendment by enabling the federal government to dictate what information minors can access online.

The American Civil Liberties Union contends the bill will limit minors’ access to vital resources, and silence important online conversations for all ages. The ACLU also has raised concerns about how the bill could be used to limit adults’ ability to express themselves freely online or access diverse viewpoints.

Hinson said her bill explicitly states that it is not to be construed in a way that violates First Amendment rights.

“There’s a lot of bipartisan momentum here when it comes to protecting our kids online, so I’ll continue working to build consensus on that and strike that right balance,” she said. “… We were very clear to try to balance that protection with making sure we’re holding bad actors accountable.”

© 2024 Waterloo-Cedar Falls Courier (Waterloo, Iowa). Distributed by Tribune Content Agency, LLC.