U.S. Senator Voices Concern Over AI Election Disinformation

Millions may have already received phone calls generated by artificial intelligence, promoting or disparaging a candidate using faked, but familiar voices, said Sen. Richard Blumenthal.

(Image: Two shadowy, computer-generated human heads superimposed over the American flag. Shutterstock)
(TNS) — As Election Day looms, Sen. Richard Blumenthal wants voters to be aware that any campaign ads they see or hear in the coming weeks might not be real.

At a Senate hearing last year, Blumenthal played a recording that sounded like him speaking but was instead made using artificial intelligence.

"It sounded exactly like me, but its content and the voice itself were totally a deep fake, and we did it in Connecticut with a recording that was done by my communication staff, who are not the least bit tech savvy," Blumenthal said on Tuesday, making a point of how easily the technology can be used and deployed.

In New Hampshire, political consultant Steve Kramer has been charged with 13 felony counts of voter suppression and 13 misdemeanor counts of impersonation of a candidate for creating deep-fake recordings of President Joe Biden using AI ahead of the state primary.

"He was apprehended," Blumenthal said of Kramer, who faces a $6 million fine and could be sentenced up to seven years in prison if convicted. "But a lot of the perpetrators may be more sophisticated and much more difficult to apprehend."

Millions of people may have already received phone calls generated by artificial intelligence, promoting or disparaging a candidate using faked, but familiar voices, said Blumenthal, whose term does not expire until 2029.

"AI technology can reproduce my face and voice, saying things I would never think of articulating," he said. "This artificial intelligence technology now is so sophisticated it can be used by a layperson with little knowledge about tech."

Now Blumenthal, a Democrat who has served in the Senate since 2011, is proposing both rules changes and federal legislation to curb the use of artificial intelligence to spread misinformation.

Though robocalls are already banned without the express permission of the person being called, Blumenthal has proposed an FCC rule change that would require callers to disclose the use of AI, both when that permission is granted and on each AI-generated call.

The FCC has jurisdiction over telephone calls and texts, but there is little the agency can do when it comes to advertisements on television or on social media.

So, Blumenthal has also cosponsored legislation, the AI Transparency in Elections Act, which would "require disclaimers on political ads with images, audio, or video that are substantially generated by AI," according to a release from his office.

"These deep fakes are a pernicious, persistent and immediate danger of distortion and deception," he said.

AI technology has "really come into its own as a tactic for mass use by political candidates and others," Blumenthal said, adding that his concern is not only domestic in nature.

"Without going into classified information," Blumenthal said, "our adversaries are already making plans to use this technology, AI-generated messages, ads, in ways that would undermine our election integrity, and they may aim at a specific candidate but, overall, their intent is to divide and undermine our electorate."

Earlier this year, State Sen. James Maroney, D-Milford, co-chairman of the legislative General Law Committee, attempted unsuccessfully to pass similar legislation at the state level.

"People need to be able to trust what they're seeing," Maroney said in April.

That bill would have criminalized deep fake pornography and political shams, though it died when Gov. Ned Lamont threatened to veto the measure if it had been approved by the state legislature.

"I'm not convinced you want Connecticut to be the first one to do it," Lamont said of the proposed bill at the time. "I think you don't begin to know what the potential is for AI, so it's pretty tough to regulate something that you're just beginning to get some feel for."

A similar bill was approved shortly after in Colorado, requiring disclosure of the use of AI, among other provisions. It goes into effect in 2026.

Though his proposed federal legislation is bipartisan, Blumenthal said, "The chances of passage are higher here in Connecticut and other states, because it is complicated and big tech has less power."

"The real obstacle right now to effective safeguards and protections for consumers from the abuses of artificial technology are big tech which are spending literally billions, tens of billions, investing in more advanced models of artificial intelligence," he said. "They want the Wild West that exists right now. In this election season, artificial technology is in the Wild West. Anything goes, because we have no effective protections requiring disclosure of the use of artificial intelligence technology and penalties for distortive deep fakes or impersonation."

There is little to distinguish an AI-generated phone call, picture or video from the genuine article, but Blumenthal said voters should be suspicious of any major changes or surprising content in images and audio, such as "something a candidate says that seems contrary to their past positions."

"There are almost no telltale signs looking at the ad or listening to the robocall or seeing the text, but anything that seems unexpected, like the change of a voting place or the cancellation of an election, should be checked with town hall before you take action," he said. "Ads can be done on Election Day or near Election Day, and are very difficult to correct because they're done under the demands of deadlines."

© 2024 The Middletown Press, Conn. Distributed by Tribune Content Agency, LLC.