Nevada Bill Would Require AI Transparency in Political Ads

Lawmakers this session will consider whether to mandate that political campaigns disclose when they use artificial intelligence in ads to create a realistic depiction of something that never took place.

(TNS) — Nevada lawmakers will consider a bill requiring political campaigns to disclose when they use artificial intelligence in ads to alter the reality of a situation.

The bill would require a disclosure if AI or other digital software is used in a campaign ad to create realistic depictions of something that never actually happened. For example, if the bill is signed into law, the phrase "This image has been manipulated" would need to be the largest text on a mailer. Similar requirements address newspaper, radio and TV ads.

The 2025 Legislature begins next month.

The disclosure would only be required when "synthetic media" is used to create "a fundamentally different understanding" of the edited content, meaning red-eye fixes and other small photo touch-ups wouldn't trigger the requirement. The legislation would carry a maximum $50,000 penalty for those who violate the rules.

The proposed bill was submitted on behalf of Nevada Secretary of State Cisco Aguilar. It would require that a copy of ads containing the disclaimer be filed with his office.

In April, Aguilar said there had been "no progress" on helping state and local government officials understand threats brought by AI.

"You can't ever rely on the federal government," he told the Sun. "We have to be responsible for ourselves, and we have to take the initiative."

The Nevada Legislature will have to play catch-up on the oversight of artificial intelligence during the upcoming session. Forty-seven states proposed nearly 500 bills related to AI last year, according to the National Conference of State Legislatures.

But Aguilar sees waiting as an advantage.

"So really, what it allowed us to do is go through an election process to see what the potential challenges are, understanding what those challenges are, but then understanding what other states have done and taking the best of what existed," Aguilar said.

The proposed bill is modeled after a Washington state law, which Aguilar said properly balanced free speech concerns and the responsibility of ensuring voters get truthful information.

Nevada lawmakers have also been keeping an eye on other states, submitting 10 bill draft requests on the technology for the upcoming session.

"AI in general is a hot topic, whether it's AI in education, AI in small businesses, large businesses," said Assemblywoman Erica Mosca, D-Las Vegas.

Starting this month, Mosca will chair the Committee on Legislative Operations and Elections, which would take up the proposed AI campaign ad bill.

"I think it's important that it's at least considered this session ... because I know that this is what's happening in real time," she said. "It's important to ... not stifle innovation but also figure out how we are able to catch bad actors."

Before President Joe Biden dropped out of the 2024 presidential race, a robocall using his AI-generated voice told New Hampshirites to "save" their vote for the November election during an already-confusing state primary. The man behind the audio told The New York Times he used free AI software to generate Biden's voice.

In Nevada, former North Las Vegas Mayor John Lee said he was targeted with AI-generated audio while running for Congress.

He sued Republican primary opponent David Flippo, who has denied involvement, over a website allegedly hosting the deepfake. The audio was ostensibly of Lee speaking to a woman about having sex with her and her 13-year-old daughter, the Nevada Independent reported. A trial is set for September, Clark County District Court records show.

Despite incidents like those, AI still made less of an impact on the election than many experts thought, said Andrew Hall, a senior fellow studying elections at Stanford University's Hoover Institution.

"Maybe it's wise that they're trying to get ahead of this problem," Hall said of AI election laws. "But as of now, there's not yet very compelling evidence that this is a problem."

What's been more common is people posting obviously fake AI-generated images to make a political point.

Most recently, the Democrats' X account posted an AI-generated image Dec. 20 of Elon Musk walking Trump like a dog on a leash, a reference to the multibillionaire Tesla CEO's role in blowing up negotiations to avoid a government shutdown.

One reason Americans may not be falling for AI-generated political content is that people are set in their views, making it more difficult to change their minds, Hall said.

Research being conducted on the believability of certain content has found that people are "already quite skeptical about what they see," he said.

"Another possible reason, though, and this is more pessimistic, would be that it's still hard to make a super-compelling fake video. Not that many people know how to do it," Hall said. "Maybe two or four years from now, as it gets easier and easier to do, maybe we will see more of it."

©2025 the Las Vegas Sun. Distributed by Tribune Content Agency, LLC.