
How the Election Misinformation Landscape Is Shifting

Brookings Institution panelists considered how the proliferation of generative AI tools, the weakening of social media platforms' trust and safety teams, and a drawdown in federal communications with social media firms will affect the 2024 elections.

Closeup of the word "misinform" in a dictionary entry. (Shutterstock)
State and local governments are gearing up for the first presidential election since generative AI tools like DALL-E and ChatGPT entered the scene. Researchers speaking recently at a Brookings Institution panel debated just how significantly the technology will affect the spread of false narratives, and how it and other factors are changing the information environment in 2024.

Fears of generative AI-fueled disinformation intensified after incidents like the use of a cloned voice of President Joe Biden in an attempt to discourage voting in New Hampshire.

Generative AI tools could be used maliciously for everything from creating misleading news articles and social media posts, to fabricating video “evidence” of people trying to rig elections, to tying up government offices by spamming them with mass records requests, according to recent research from panelist Valerie Wirtschafter. Political candidates might also try to dismiss authentic but unflattering recordings by claiming they are deepfakes.

But while AI-fabricated content is a challenge to information integrity, it doesn’t appear to have hit a crisis point. Per Wirtschafter’s research, since ChatGPT’s launch, AI-generated media has accounted for only 1 percent of the posts that X users flagged as misleading under the platform’s Community Notes program. Such findings suggest that online spaces are not currently seeing “an overwhelming flood” of generated content. They also underscore that more traditional forms of mis- and disinformation, such as images and videos taken out of context, continue to present a real problem.
Brookings Institution panelists Valerie Wirtschafter, Olga Belogolova, Quinta Jurecic, Laura Edelson and Arvind Narayanan listen as moderator Amy Liu introduces the event. (Screenshot)
Generative AI isn’t the only new factor affecting the online information landscape. Another significant change from 2020 is that many social media platforms have reduced their content moderation, with X and Meta both heavily cutting their trust and safety teams, said Quinta Jurecic, senior editor of Lawfare and fellow in Governance Studies at Brookings. The federal government is providing less help here, too: Conservative pressure has prompted it to pull back from warning social media platforms about potential foreign disinformation. Federal officials stopped alerting Meta about foreign election interference campaigns in July 2023, for example.

A core piece of fighting misinformation is ensuring the public has access to reliable, trustworthy sources of information — which often includes the local newspaper. But local journalism has long been embattled, and Jurecic said one fear is that media companies eager to use GenAI to cut costs will worsen that problem.

“I’m not a person who thinks that we’re going to be able to replace all reporters with AI. But I am worried that there are people who own media companies who think that,” Jurecic said.

Society is still readjusting its understanding of what “fake” looks like in a world where generative AI exists. But people have been through such shifts before, resetting their expectations after Photoshop emerged and after earlier methods of photographic trickery came to light, said Northeastern University Assistant Professor Laura Edelson, who studies “the spread of harmful content through large online networks.”

In today’s media environment, how realistic an image or video seems is no longer an indicator of how authentic it is, said Princeton computer science professor and Director of the Center for Information Technology Policy Arvind Narayanan. Instead, people will likely look to the credibility of content’s source to determine whether to trust it. Some social media platforms are taking steps that can help users assess credibility, he said. For example, X’s Community Notes feature lets qualifying users attach clarifying, contextualizing notes to images and videos that appear in posts. That’s “a big step forward,” Narayanan said, even if the degradation of X’s blue checkmarks was “a big step backward.”

Meta has also promised to start labeling AI-generated images, and Jurecic said it will be important to study the impact of such interventions. For example, researchers will want to find out whether people start to automatically trust anything without a label, whether they remain wary that the system could miss flagging something, and whether people still re-share content marked as GenAI-created. Even so, what matters most in fighting deception isn’t whether content was created with the aid of AI, but whether it’s being framed and presented in an honest manner, she added.

Perhaps one of the most helpful parts of a program to label generative AI in social media feeds is that it helps the average person stay aware of just how realistic the latest synthetic media has become, Narayanan said. GenAI is rapidly improving, and not everyone can easily keep themselves up to date on its newest capabilities, but this kind of intervention can help by reaching people in their day-to-day lives.

Panelists also pointed to early explorations into whether generative AI could be used to help improve the trustworthiness of online information. For example, former OpenAI trust and safety team lead Dave Willner and former Meta civic integrity product team lead Samidh Chakrabarti suggest in a recent paper that large language models (LLMs) might eventually be able to help sites enforce their content moderation policies at scale. But policies have to be rewritten in exacting ways to be understandable to LLMs, and new technological developments are needed before such an application is practical, the authors said.
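The basic loop the authors describe is easy to picture, even if the details remain unsettled. The Python sketch below is a hypothetical illustration of that loop, not code from Willner and Chakrabarti's paper: a plainly written policy is placed in the prompt, the model is asked to return one of a few fixed labels, and anything ambiguous falls back to human review. The policy text, the labels and the call_model placeholder are all assumptions made for illustration.

    # Hypothetical sketch: an LLM is handed a written moderation policy and a
    # post, and asked to return a label. "call_model" is a stand-in for
    # whatever chat-style LLM API a platform actually uses; the policy text
    # and labels are illustrative, not drawn from the paper.

    POLICY = (
        "Policy: election-process misinformation. Flag content that makes "
        "specific, verifiably false claims about how, when or where to vote "
        "(for example, wrong election dates or fake polling-place closures). "
        "Do not flag opinion, labeled satire or general political criticism."
    )

    def build_prompt(post_text: str) -> str:
        # Combine the written policy and the post into one instruction.
        return (
            f"{POLICY}\n\n"
            "Classify the post below as exactly one of: VIOLATES, ALLOWED, UNSURE. "
            "Answer with the single label only.\n\n"
            f"Post: {post_text}"
        )

    def moderate(post_text: str, call_model) -> str:
        # Route anything unexpected to UNSURE, i.e., to human review.
        answer = call_model(build_prompt(post_text)).strip().upper()
        return answer if answer in {"VIOLATES", "ALLOWED", "UNSURE"} else "UNSURE"

    if __name__ == "__main__":
        # Toy stand-in "model" so the sketch runs without any API key.
        fake_model = lambda prompt: "VIOLATES" if "polls close" in prompt.lower() else "ALLOWED"
        print(moderate("Reminder: polls close at noon tomorrow, not 8 p.m.!", fake_model))  # VIOLATES
        print(moderate("I thought the debate was boring.", fake_model))  # ALLOWED

The need to spell a policy out this explicitly, with concrete examples, a narrow set of labels and a default to human review, reflects the authors' point that policies must be rewritten in exacting ways before an LLM can be expected to apply them reliably.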
Jule Pattison-Gordon is a senior staff writer for Governing and former senior staff writer for Government Technology, where she specialized in cybersecurity. Jule also previously wrote for PYMNTS and The Bay State Banner and holds a B.A. in creative writing from Carnegie Mellon. She’s based outside Boston.