Artificial intelligence is making it easier for politicians to mimic the voices of their rivals or create fake images to bash their opponents.
One ad supporting Gov. Ron DeSantis used an AI version of former President Donald Trump’s voice to read one of his social media posts. Another pro-DeSantis video is widely seen as using AI-generated images to show Trump hugging Dr. Anthony Fauci, the COVID-19 adviser loathed by many conservatives.
Lawmakers, political consultants, academics and tech giants are taking notice of the powerful role AI could play in the future.
“I don’t think we are ready for it,” said Kevin Cate, a Tallahassee-based media consultant who makes ads for Democrats. “I don’t think the laws are ready for it. It is potentially disastrous for self-governance.”
Cate said he doesn’t use AI when making his ads, but it is something he is watching as it becomes harder to distinguish between fact and fiction in today’s political landscape.
“You could potentially incite violence, distrust,” he said. “It is kind of ripping at the moral fabric of the truth.”
Lawmakers in Tallahassee and Washington are exploring ways to tackle the issue.
One bill (SB 850) filed by state Sen. Nick DiCeglie, R-Indian Rocks Beach, would require political ads created with AI to include a disclaimer that the content was “created in whole or in part with the use of generative artificial intelligence.” Violators would be subject to fines under a state statute that allows civil penalties up to $2,500 per count.
In a prepared statement, DiCeglie said he filed the bill to address the “rising concern of deceptive campaign advertising.”
“The increasing access to sophisticated AI-generated content threatens the integrity of elections by facilitating the dissemination of misleading or completely fabricated information that appears more realistic than ever,” he said. “The technology that produces this content has advanced rapidly and outpaced government regulation.”
Another measure (HB 757) by state Rep. Alex Andrade, R-Pensacola, would create a new avenue for people to sue for defamation if AI-generated content casts them in a false light.
A third proposal (SB 972) by Sen. Joe Gruters, R-Sarasota, would create an advisory council to study the use of artificial intelligence in state government.
State lawmakers will kick off their 60-day legislative session on Jan. 9.
At the federal level, politicians on both sides of the aisle are grappling with AI. One bipartisan measure in Congress would ban the use of deceptive AI-generated content in political ads seeking to influence federal elections.
Tech giants are also under pressure to act. Meta, the parent company of Facebook and Instagram, and Google, which owns YouTube, announced recently they will require disclosures for political ads created with AI.
Political ad makers have long used ominous music, blurry images and deceptive editing to manipulate voters and the facts. But AI could make it even easier to produce videos or audio of politicians saying something they never said. These so-called deepfake clips are increasingly difficult to recognize as hoaxes.
Regulating AI in politics will be difficult because problematic content is often produced by social media influencers protected under the First Amendment or foreign actors seeking to influence U.S. policy, said Joshua Scacco, a political scientist at the University of South Florida who studies political communications.
“There is this whole other ecosystem of individuals — content creators — who you can’t touch with this,” he said.
Other politicians and political groups have made headlines for using AI. The Republican National Committee released an AI-generated ad depicting a dystopian future if President Joe Biden is reelected. Shamaine Daniels, a Democratic congressional candidate in Pennsylvania, has been using an “artificial intelligence volunteer” to call voters, Politico reported.
AI-generated content is covered by existing libel and defamation law, but the technology will likely raise constitutional questions that will need to be settled in the courts, said Jane Bambauer, a University of Florida law professor who specializes in the First Amendment.
Disclaimer requirements for political ads could invite legal challenges that the government is chilling speech by putting a stigma on AI-generated content, she said. Supporters, though, could argue the disclosures are merely informational.
“We need to work out the rules of the road. … It is going to take time to know what the nature of the risks are,” Bambauer said.
© 2023 Orlando Sentinel. Distributed by Tribune Content Agency, LLC.