
Kamala Harris AI Ad Raises Election Deepfake Questions

A video that used artificial intelligence voice cloning to mimic Vice President Kamala Harris' voice in a parody campaign ad has raised concerns about how AI may be used to spread election disinformation.

(TNS) — Artificial intelligence voice-cloning technology recently used to mimic Vice President Kamala Harris' voice in a parody campaign video has raised questions about how AI can be used to spread election disinformation, a problem politicians have aimed to tackle for years.

The video, originally created by YouTuber Mr Reagan, gained widespread attention Friday when it was shared on X by tech billionaire Elon Musk, the Associated Press reported.

Musk did not clarify that the video was a parody until Sunday.

AI-altered photos and videos, known as deepfakes, are a rising concern ahead of the upcoming election.

"The AI-generated voice is very good," Hany Farid, digital forensics expert at the University of California, Berkeley, told the AP in an email. "Even though most people won't believe it is VP Harris' voice, the video is that much more powerful when the words are in her voice."

Texas House Speaker Dade Phelan was the victim of a similar situation earlier this year when a deepfake photo depicting him hugging former U.S. House Speaker Nancy Pelosi was printed on a campaign mailer. The doctored image was based on a photo of Pelosi hugging House Democratic Leader Hakeem Jeffries.

Texas was one of the first states to file legislation addressing the use of AI in elections.

Texas Senate Bill 751, signed into law in 2019, makes fabricating a deceptive video with the intent to influence an election outcome a Class A misdemeanor. However, the law only prohibits the use of deepfake videos within 30 days of an election and does not apply to photos or audio recordings.

Several members of Congress have said the law needs to be updated to keep pace with the ever-evolving use of AI.

Harris was among the high-profile politicians whose voices were replicated by leading generative AI audio tools in a recent experiment by the Center for Countering Digital Hate. In a report, the CCDH said that "popular AI voice cloning tools failed to prevent the generation of election disinformation in politicians' voices 80% of the time."

As a result of its findings, the CCDH recommends that existing election laws be leveraged and updated to prevent AI-generated harm. The organization also suggests that social media platforms introduce emergency measures to prevent the creation and spread of election-related disinformation.

(c)2024 the Houston Chronicle Visit the Houston Chronicle at www.chron.com Distributed by Tribune Content Agency, LLC.