Opinion: Public Must Be Warned About AI Political Ads

While 20 states, including Minnesota, have formalized rules to govern deepfakes, the federal government needs to step up to protect people from blatant lies that they can't easily detect.

[Image: Two shadowy, computer-generated human heads superimposed over the American flag. Shutterstock]
(TNS) — Elon Musk continues to destroy any semblance of objectivity and fair, open discussion on X, the social media platform formerly known as Twitter, which he bought in 2022.

Last month, Musk reposted a manipulated version of a Vice President Kamala Harris campaign ad that had her making damaging statements about outgoing President Joe Biden, including calling him senile.

The original manipulated video carried a disclaimer that it was a parody, but Musk didn't bother mentioning that when he reposted it to hundreds of millions of followers on X.

It's not the only false information the platform has spewed out.

As soon as Biden quit his reelection campaign, false information produced by Grok, an artificial intelligence program on X, was sent out to unsuspecting followers. The false posts claimed, among other things, that Harris could not appear on the ballot in a number of states because she had missed their filing deadlines. The fake information quickly spread to many other platforms.

It's no surprise that Musk's personal political views put him at odds with Biden and Harris. Musk told The Wall Street Journal that he has committed around $45 million a month to a pro-Trump super PAC.

But he should then stop pretending that the social media platform he owns is open to impartial debate.

Musk's platform isn't the only source of misinformation rolling out of social media.

While 20 states, including Minnesota, have formalized rules to govern deepfakes, the federal government needs to step up to protect people from blatant lies that they can't easily detect.

There is a bill in Congress that would require disclaimers on ads, audio and images created by AI, stating they were created by artificial intelligence. There is also legislation that would "ban the use of artificial intelligence to generate materially deceptive content falsely depicting federal candidates in political ads to influence federal elections."

Musk and other operators of social media have shown that self-regulation doesn't work.

Tens of millions of voters across the country will get information about the upcoming elections from social media. They need to be skeptical of what they read and see. But social media executives have an obligation to vet information on their sites. If they won't flag or remove obviously false posts on their own, laws need to be in place to ensure they do.

© 2024 The Free Press (Mankato, Minn.). Distributed by Tribune Content Agency, LLC.