Concern Over AI Interfering With Elections Remains Strong

Elections officials and law enforcement officers are hashing out how to stop the threat by tracing who is behind false information and issuing accurate information to the public.

(TNS) — When Contra Costa County’s elections staff met with local police and an FBI agent to plan defenses and responses to voting-related threats for the 2024 election year, an unusual new risk had been added to the mix: Silicon Valley’s blockbuster product, generative artificial intelligence.

In one mock scenario, a news report highlighted a problem at a local polling station, seemingly in an attempt to keep people from voting. But the report was fake, created by a bad actor using AI to sow misinformation. Elections officials and law enforcement officers hashed out how to stop the threat by tracing who was behind the report and issuing accurate information to the public.

Now, on the eve of next week’s Super Tuesday primaries, AI-risk discussions are occurring in elections departments around the Bay Area and across the country, especially after a faked version of President Joe Biden’s voice was used in a January robocall to deter voting in New Hampshire’s primary. California Attorney General Rob Bonta joined other state AGs in condemning the AI meddling, which Bonta said had potential to damage “the integrity of our voting process.”

Less than three weeks after news of the fake-Biden robocalls broke in January, the U.S. Federal Communications Commission made it illegal to use AI-generated voices for unsolicited robocalls, with agency chairwoman Jessica Rosenworcel citing use of the technology by “bad actors” to “misinform voters” as well as to commit extortion and imitate celebrities.

In Santa Clara County, elections officials are plugged into information-sharing networks with agencies around the country and are tracking the potential for the new AI technology to affect elections here, said assistant registrar of voters Matt Moreles. He and his colleagues worry little about AI-enabled hacking of voting systems or alteration of results, because defenses there are robust; their greater concern is the use of AI-generated material to deceive voters.

“It’s just about spreading misinformation and confusion,” Moreles said.

Artificial intelligence, after creeping into everyday life via tools such as Apple's Siri assistant and assisted-driving technologies, burst into prominence with the 2022 public release of ChatGPT, a generative AI bot from San Francisco startup OpenAI. Other companies soon followed with products that generate realistic text, sound and imagery in response to user prompts.

The explosive growth has raised a range of concerns: copyright infringement by companies hoovering up online data to "train" their software, replacement of human workers by AI, students cheating on exams, and the spread of fake material as propaganda or political misinformation.

“Misinformation is definitely something to worry about in this election cycle,” said UC Berkeley political science professor Susan Hyde. Election deception is not new — efforts to discourage voting have taken place for decades, Hyde said. But AI can be used to spread false information faster and wider than was possible in years past.

“We should watch out for foreign interference — that’s been around for a while,” Hyde said. “We should worry about partisan actors ranging from the local to the national.”

AI provides new tools for seeding the voting population with convincing, election-related falsehoods that can ripple through social and family networks where people may believe false information because the source is close to them, Hyde said. Misinformation that attacks the legitimacy of elections can lead people to conclude that U.S. democracy is a sham, and they may become more receptive to “cult-of-personality” candidates and the hyper-partisan view that “we must win at all costs,” Hyde said.

Marci Andino, a senior director at the Center for Internet Security, said she expected AI-aided interference in this year’s elections, peaking as the November general election nears.

The federal Cybersecurity and Infrastructure Security Agency warns the technology could be used to spread false voting information by text, email, social media channels or publications. “AI tools could be used to make audio or video files impersonating election officials that spread incorrect information to the public about the security or integrity of the elections process,” the agency said in a bulletin about 2024 election security. “AI-generated content, such as compromising deep-fake videos, could be used to harass, impersonate, or delegitimize election officials.”

Convincing but false election results could be generated and used to manipulate public opinion, the agency advised. Systems, too, could be compromised if voice cloning is used to impersonate election-office staff and gain access to "sensitive election administration or security information," the agency warned. Or AI could create "a fake video of an election vendor making a false statement that calls the security of election technologies into question," the agency said.

Chief among the worries of AI consultant Reuven Cohen is the use of generative AI to manufacture “apathy as a weapon” by persuading people not to vote.

“It’s actually easier to make someone do nothing than do something,” said Toronto-based Cohen, who advises Fortune 500 companies.

Newly released software allows cheap, easy generation of realistic videos, and election meddlers can buy data from the dark web allowing them to target people according to demographics, buying habits, or psychological profiles, Cohen said.

“It’s a thousand times difference between where we were in the last election and where we are today in terms of raw ability to do this,” Cohen said. “The ease of access is the part that’s concerning.”

Reliable information is key to preventing AI-driven damage to elections, officials said. They urged members of the public to seek out government elections websites and official social media channels, call local elections offices, and rely on credible news sources to confirm or reject information arriving from other sources.

The news isn’t all bad. No evidence so far exists that AI-boosted propaganda could affect the outcome of an election, said Georgetown University researcher Josh Goldstein.

© 2024 MediaNews Group, Inc. Distributed by Tribune Content Agency, LLC.