One more recent service her company offers is "deepfake defense," cracking down on security threats brought on by artificially generated impersonations.
It took March's staff 22 minutes to realize the person they were talking to on a video call, who looked and sounded like her, wasn't her. March gave up her likeness to test her colleagues, and AI security leaders in town last week for CES said those tests were becoming increasingly easy to fail.
Panelists listed AI-generated media's ongoing malicious use cases: propagating disinformation, generating nude images of people without their consent and scamming elderly people with the voice of a relative.
"Most of the time you see deepfakes being used for malicious purposes, but there are some exceptions where they're also being used for good," such as improving video quality, said German Lancioni, chief AI scientist at McAfee. "It really doesn't matter if you're in the public or private sector, we're all quite exposed to deepfakes."
At the same time, it has never been easier to make AI-generated images, audio and video.
"We're beyond democratization," said Andy Parsons, director of content authenticity at Adobe. "These things are free, they're open source models (and) they're trivial to use."
Former North Las Vegas Mayor John Lee said he was hit with a deepfake during the 4th Congressional District's Republican primary. In a lawsuit going to court later this year, he alleged one of his opponents conspired to generate deepfaked audio of his voice talking about wanting to have sex with a mother and her young child.
Most other states have at least proposed legislation in 2024 to tamp down the political impact of deepfakes, and Nevada is following suit this year.
In newly proposed legislation, Secretary of State Cisco Aguilar's office supported fines for political ads that fail to disclose realistic AI-generated media that distorts the reality of a situation.
The most prominent way to combat deepfakes is with labeling, either by attaching a label to the media before it's published or by spotting a deepfake afterward and labeling it then. Companies like Adobe have also been pushing to add an internal history to all digital files, whether deepfaked or not, in a process called provenance.
However, detecting AI has always been an arms race, with creators scrubbing out whatever defects got their work caught previously. So far, "the bad guys ... have won," Parsons said.
"I'm often asked, 'Well, until we have provenance on all of our media,' which is not going to happen overnight, 'what can we do (to tell what's a deepfake)?' " he said. "The answer — increasingly, unfortunately — is there's not much you can do."
But people can still evaluate the information they're receiving online.
Deepfakes are designed to "provoke an emotional response" and are usually made in isolation, Lancioni said. That typically means that a deepfake scam won't be published with other information on the Internet to back it up.
"Start by questioning the content," he said. "It's as simple as that."
Outside of systemic change, taking the extra online search to verify a story can make the difference.
"Typically it's a celebrity or an influencer or a politician that is being used for the deepfake," Lancioni added. "So you should ask the question: Is this person typically saying this kind of stuff or doing that? Because if the answer is no, then it's likely a deepfake."
©2025 the Las Vegas Sun, Distributed by Tribune Content Agency, LLC.