So far this year, at least 14 states have introduced bills that address AI-generated election disinformation. The proposals take various approaches, but many would require disclosing the use of generative AI. Some would ban spreading AI-fabricated election content within a certain window before an election; others would require a disclaimer, and some would require the depicted individual’s consent.
These measures are legally promising because courts are unlikely to view them as conflicting with the First Amendment, said Shana Broussard, a commissioner of the Federal Election Commission (FEC), during the Brookings panel.
But disclaimers in particular have limits.
“... Letting us know that we're about to be manipulated does not necessarily protect us from the potential harm of manipulation,” said Darrell West, a senior fellow with the Brookings Institution Center for Technology Innovation.
Digital watermarking is a similar tool: some AI systems embed an identifier in generated media that reveals it as fabricated. Watermarking is helpful but unlikely to solve the problem, Broussard said. It’s unclear whether people will check for watermarks or even understand what they mean.
And there are ways bad actors can remove watermarks, said Soheil Feizi, an associate professor and director of the Reliable AI Lab at the University of Maryland, College Park.
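To make that embed-and-erase cycle concrete, here is a minimal, hypothetical Python sketch. The least-significant-bit scheme, the 256-bit identifier and the 90 percent match threshold are illustrative assumptions for this toy example, not how any production watermarking system actually works:

```python
import numpy as np

# Toy illustration only: hide a known 256-bit identifier in the
# least-significant bits (LSBs) of an image's pixels. Real generative-AI
# watermarks are far more sophisticated and robust than this.
rng = np.random.default_rng(0)
WATERMARK = rng.integers(0, 2, size=256)  # the hypothetical identifier

def embed(image: np.ndarray) -> np.ndarray:
    """Overwrite the LSB of the first 256 pixels with the watermark bits."""
    flat = image.flatten()  # flatten() returns a copy, so this is safe
    flat[: WATERMARK.size] = (flat[: WATERMARK.size] & 0xFE) | WATERMARK
    return flat.reshape(image.shape)

def check(image: np.ndarray) -> bool:
    """Declare 'watermarked' if at least 90% of the LSBs match the pattern."""
    bits = image.flatten()[: WATERMARK.size] & 1
    return (bits == WATERMARK).mean() > 0.9

image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed(image)
print(check(marked))   # True: the watermark is detected

# "Removal" in the sense Feizi describes: even mild re-noising,
# of the kind any re-encoding step can introduce, wipes the signal.
noise = rng.integers(-2, 3, size=marked.shape)
noisy = np.clip(marked.astype(int) + noise, 0, 255).astype(np.uint8)
print(check(noisy))    # False: the watermark is gone
```

The fragility is the point of the example: because the toy watermark lives in each pixel’s lowest bit, even tiny perturbations destroy it, which is one reason Broussard and Feizi doubt watermarking alone will solve the problem.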
The reverse is also true: bad actors can trick deepfake detectors into flagging authentic media, wrongfully casting doubt on real content.
“I can take a real image and maybe add a small amount of noise to it, and it will be flagged as a watermarked image,” Feizi said.
As a result, today’s deepfake detectors “have a huge false positive rate,” Feizi said, adding that the technology might never become fully reliable.
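Feizi’s noise attack can be illustrated with an equally hypothetical sketch. Here the “detector” is a naive stand-in that flags images with unusually high high-frequency energy; the detector, its threshold and the noise level are all assumptions made for illustration, not any real detection system:

```python
import numpy as np

rng = np.random.default_rng(1)

def high_freq_energy(image: np.ndarray) -> float:
    """Mean squared difference between neighboring pixels."""
    img = image.astype(float)
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    return float((dx ** 2).mean() + (dy ** 2).mean())

THRESHOLD = 50.0  # hypothetical cutoff for this toy detector

def toy_detector(image: np.ndarray) -> bool:
    """Flag an image as 'AI-generated' if high-frequency energy is too high."""
    return high_freq_energy(image) > THRESHOLD

# A smooth "real" photo stand-in: a gentle horizontal gradient.
real = np.tile(np.linspace(0, 255, 64), (64, 1))
print(toy_detector(real))       # False: passes as authentic

# Add a small amount of noise (a few intensity levels out of 255),
# visually negligible but enough to cross the detector's threshold.
perturbed = real + rng.normal(0, 6, size=real.shape)
print(toy_detector(perturbed))  # True: authentic image falsely flagged
```

In this caricature, a perturbation too small to notice by eye is enough to flip the verdict, the same failure mode that makes real detectors prone to false positives.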
Another persistent challenge is that generative AI developed outside the U.S. would not be subject to this country’s digital watermarking laws, so such rules would do little to stop disinformation produced abroad, Feizi said.
As such, multiple defenses are needed. Some secretaries of state, for example, are working to counter misinformation about election processes by publishing accurate information of their own, Broussard said.
Digital literacy programs or an FEC guide on how to detect bias in political advertising could also help, said Matt Perault, a professor at the University of North Carolina’s School of Information and Library Science. Perault also advised Facebook on topics like AI during a stint as head of its global policy development team.
He proposed focusing policy on addressing the harms deepfake disinformation might cause, rather than trying to moderate specific technologies.
In a co-authored report, Perault recommended that the federal government outlaw voter suppression. Discriminatory uses of AI could also violate existing federal civil rights law, he said, and the report recommended allocating more funding to better enforce that law.
New rules could also come from the FEC.
Last year, the FEC received a petition asking it to prevent candidates from using AI to make deceptive campaign ads. Petitioners suggested that an existing regulation covering fraudulent misrepresentation could apply, and they sought an amendment clarifying that the regulation covers AI election deception, too.
Ultimately, the FEC received 2,400 public comments on the idea, the majority supporting regulation. The federal agency is still discussing potential rulemaking.
Stepping back, panelists noted that AI-boosted disinformation will likely play out differently in smaller races compared to the presidential election.
For one, voters tend to see less media coverage, information or political advertising about down-ballot races, according to Perault’s report. Given the relative information vacuum, deceptive AI-generated political ads can have more influence.
In contrast, presidential disinformation is most likely to aim at changing people’s behavior, as with the New Hampshire robocall that tried to discourage voting, West said. That’s because most voters already hold cemented views of the candidates, making their beliefs harder to sway.
Still, the Electoral College means disinformation only has to influence a small share of voters in a few states to have an impact. Some predict the presidential election will be decided by fewer than 100,000 votes across two or three states.
“If we just had a popular direct vote for the president, I actually would be much less worried about disinformation,” West said.