One of the Republican bills goes as far as to create criminal penalties for creators and sharers of such content. The other two create civil offenses, meaning the deepfakes’ originators might have to pay up but wouldn’t find themselves incarcerated.
The three bills reflect a surge in user-friendly, artificial intelligence-backed technology that can create convincing digital replicas of real people, from politicians to celebrities to private individuals. The end products range from the humorous and campy to outright character assassination. In just the last month, a robocall deepfake impersonating President Joe Biden reportedly urged New Hampshire voters not to vote in a primary, and pornographic deepfakes of pop superstar Taylor Swift were viewed millions of times on social media.
The proposed state legislation raises thorny questions about where constitutionally protected speech ends and a new, futuristic form of synthetic speech that can be more reminiscent of fraud begins.
One bill, proposed by House Republicans this month, would outlaw the nonconsensual creation or distribution of a “convincingly altered” deepfake to harass, extort, threaten, or cause emotional, reputational, or economic harm to a person. A first offense would be a misdemeanor, with subsequent offenses charged as felonies.
The two lead sponsors – Republican Reps. Kevin Miller of Newark and Steve Demetriou of Bainbridge Twp. – could not be reached Monday. Two House Democrats have signed on as cosponsors.
Another bill, also from House Republicans, similarly prohibits the creation and distribution of malicious deepfakes, defined as technologically doctored content intended to harm the subject through the conduct it depicts. The bill, however, calls for only civil penalties and offers exemptions for parodies and for political ads that disclose the presence of manipulated media. Rep. Adam Mathews, a Republican from Lebanon, described the bill in an interview as protecting every Ohioan’s right to their name, image and likeness.
The bill exempts content that the average viewer would recognize as inauthentic, as well as political communications that explicitly disclose the message has been altered “in a manner that renders it fictionalized and inauthentic.” While media-editing technology has been around for years, Mathews said, AI has made it far easier for anyone to create fictionalized content that can destroy the reputation of anyone, famous or not.
“The time is now, especially when you’re seeing it come into the news world, that we must move forward with it,” he said.
A third bill, proposed by House Democrats, would create civil liability for those who create or share deepfakes for purposes of influencing the results of an election. The bill creates exemptions, however, if the creators disclose to viewers or listeners that the underlying message has been manipulated.
Rep. Joe Miller, a Democrat from Amherst, said that, like any tool, AI can be used to improve the world or to destroy it. He said he’s open to a bill focused on deepfakes that target those outside the political arena, and to adding criminal penalties for those who maliciously create or share deepfakes. But he said protecting the “sacred” process of free and fair American elections is a good starting point.
“It’s only going to get worse before it gets better,” he said. “We have a new technology, like anything, where we don’t really have good rules or parameters about how we use it.”
Six states have enacted policies regulating the use of “generative AI” in campaigns, according to the National Conference of State Legislatures. Some have prohibited the use of such technology in campaigns writ large. Others have required disclaimers.
None of the three deepfake bills has yet received a hearing in the House.