
What Can AI Do in the War Against Misinformation?

Savvy journalists flagging unreliable content, trusted local practitioners spreading accurate information, and AI tools charting the spread of manipulated narratives are all being leveraged in the fight against misinformation.

A Facebook alert about misleading information. (Shutterstock/Wachiwit)
The misinformation and disinformation landscape is rapidly evolving, with false narratives continuing to hamper pandemic response, undermine voter confidence and serve as a weapon in Russia’s war against Ukraine.

Organizations are bringing both human power and advanced technologies to the battle against misinformation.

Attempts to prevent and mitigate false narratives center on pushing out reliable information so that individuals have an easy way to get the truth; debunking content to correct misconceptions; and flagging untrustworthy sources to reduce the likelihood that people will read their content.

PRE-BUNKING STRIKES EARLY


Fact-checking is best done by people rather than AI, because only humans can understand the nuances and complexities of false narratives, said Sarah Brandt, executive vice president of partnerships at NewsGuard.

“It’s just not a topic that AI can really detect at scale,” she said.

NewsGuard provides a browser extension that displays indicators next to news story links to inform users whether articles come from trustworthy or untrustworthy publications.
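
A rough sketch of how such an extension might work appears below; the ratings table, domain lookup and badge styling are hypothetical illustrations, not NewsGuard's actual implementation. The core idea is a content script that scans a page's links, looks up each publisher's domain and attaches a trust indicator.

// Hypothetical content-script sketch (TypeScript). The ratings data
// and lookup are illustrative stand-ins, not NewsGuard's real system.
type Rating = "green" | "red" | "unrated";

// Stand-in for a ratings service keyed by publisher domain.
const RATINGS: Record<string, Rating> = {
  "example-news.com": "green",
  "example-rumors.net": "red",
};

function ratingFor(url: string): Rating {
  try {
    const host = new URL(url).hostname.replace(/^www\./, "");
    return RATINGS[host] ?? "unrated";
  } catch {
    return "unrated"; // malformed URL
  }
}

// Decorate every external link on the page with a colored badge.
function annotateLinks(): void {
  for (const link of document.querySelectorAll<HTMLAnchorElement>("a[href^='http']")) {
    const rating = ratingFor(link.href);
    if (rating === "unrated") continue;
    const badge = document.createElement("span");
    badge.textContent = rating === "green" ? " \u2713" : " \u2717";
    badge.style.color = rating === "green" ? "green" : "red";
    badge.title = `Source rated ${rating}`;
    link.after(badge);
  }
}

annotateLinks();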

Public libraries in districts of Illinois, Iowa and Pennsylvania have been using NewsGuard, per the company's recent Social Impact Report. The American Federation of Teachers also recently purchased a license, which will allow the organization to provide the tool to its 1.7 million members and their students and colleagues.

NewsGuard aims to help users recognize unreliable information before they read it. The idea is to stop false conceptions from taking root in people’s minds in the first place and reduce the likelihood that someone fails to notice fact checks.

“We pre-bunk because we provide information about a source directly when a user encounters that source and its content, so that immediately when an article is published, you can see our rating of that source and get important context,” Brandt said. “You don't have to wait hours, or even days, for that piece of content to be manually fact-checked.”

NewsGuard ranks online publications with green or red icons to indicate general reliability or unreliability, and lets users view more detailed explanations that rate the publications’ credibility and transparency against a set of criteria.
Well-meaning publishers can — and do — use these ratings as road maps for how to improve, and NewsGuard says that, by the end of 2021, 1,801 of the 2,733 poorly rated sites had responded with changes.
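
That criteria-based approach lends itself to a simple weighted-checklist model. The sketch below is a toy version; the criterion names, point weights and threshold are illustrative placeholders rather than the company's actual rubric.

// Toy weighted-criteria scoring in the spirit of a green/red rating.
// Criterion names, weights and the threshold are placeholders.
interface Criterion {
  name: string;
  weight: number; // points awarded if the site meets the criterion
}

const CRITERIA: Criterion[] = [
  { name: "Does not repeatedly publish false content", weight: 22 },
  { name: "Gathers and presents information responsibly", weight: 18 },
  { name: "Regularly corrects errors", weight: 12.5 },
  { name: "Discloses ownership and financing", weight: 7.5 },
];

// Which criteria a given site satisfies, keyed by criterion name.
type Evaluation = Record<string, boolean>;

function score(evaluation: Evaluation): number {
  return CRITERIA.reduce(
    (total, c) => total + (evaluation[c.name] ? c.weight : 0),
    0,
  );
}

// Sites at or above the threshold get a green icon; below it, red.
function icon(evaluation: Evaluation, threshold = 40): "green" | "red" {
  return score(evaluation) >= threshold ? "green" : "red";
}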

TAKES A HUMAN TO CATCH A LIE?


Evaluating the Internet’s content is no small task, and the company says it had rated more than 7,400 news domains by the end of 2021. These domains receive about 95 percent of all social media engagements with news sites, Brandt said.

Rather than turn to AI to help manage the load, the organization relies on journalists. Brandt said reporters’ firm understanding of journalistic best practices makes them well-suited for the task.

To keep up with the fast pace at which misinformation is released, and to cover the vast amount of digital news across an array of countries, the organization rates publishers on their practices instead of trying to review and rate individual articles, Brandt said.

Still, some misinformation will always slip through defensive measures, and some local governments are turning their attention to combating false narratives that residents may already have absorbed.

Nevada’s Clark County and California’s San Diego County declared COVID-19 misinformation a public health crisis last year. San Diego County's declaration said it would aim to better understand and counter misconceptions through efforts like training health workers and residents to distinguish factual information from “opinion and personal stories,” and working with trusted local partners to bring reliable information to their communities.

Putting the plan into action, San Diego County has so far launched an educational website about evaluating COVID-19 information. The board also follows up on public meetings by documenting any pandemic-related misinformation raised during public comment periods and convening a panel of doctors to discuss those claims during public Zoom sessions the following day, per KPBS.

AI TRACKS THE LANDSCAPE


In addition to tackling prevalent false narratives, some organizations are looking to chart where distorted information is starting to crop up next.

This is one area where AI has a role to play, according to Wasim Khaled, CEO and co-founder of Blackbird.AI. Blackbird has done some work for the Department of Defense, Khaled said, and it focuses on analyzing how artificially manipulated narratives about clients in public or private sectors are forming and spreading.

This approach aims to give organizations insight into “signals of emerging risks,” Khaled told Government Technology.

The idea is to use an AI platform to analyze online information flows and the context surrounding them, revealing what narratives are developing and whether any are doing so unusually.

The AI platform “deconstruct[s] the massive flow of online information and attempts to [risk] score and highlight the underlying dynamics,” Khaled said. “It’s been used to analyze national security threats and manipulation of brand narratives, and tag unsafe and toxic content.”

The platform might be used to review all publicly available online information about an organization or topic, or might be directed to analyze a particular arena, such as an organization’s Facebook page, to understand the conversations developing in the comments section, Khaled said.

The AI can examine factors like what stories are forming online about an organization or topic, how the stories “flow” through online networks and which communities and individuals engage with the narratives. It also examines whether bots, as opposed to humans, are spreading content and whether influential social media voices are engaging with particular stories.

The tool doesn’t determine whether content is true or false, only whether it’s being spread in abnormal ways, such as being repeated by bots, “polarized communities” or accounts that have been flagged for harmful content. AI can process such data quickly to give timely warnings about emerging narratives that may be false or harmful.
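
Blackbird.AI has not published its models, but the idea of scoring how content spreads, rather than whether it is true, can be illustrated with a toy risk function that combines a few such signals. Every signal name and weight below is hypothetical.

// Toy risk score for how a narrative spreads, independent of its truth.
// Signal names and weights are hypothetical, not Blackbird.AI's model.
interface SpreadSignals {
  botShare: number;            // fraction of engaging accounts judged automated (0-1)
  flaggedAccountShare: number; // fraction previously flagged for harmful content (0-1)
  polarizationIndex: number;   // concentration of engagement in polarized communities (0-1)
  burstiness: number;          // posting-rate spike relative to baseline, normalized to 0-1
}

function riskScore(s: SpreadSignals): number {
  // Weighted sum clamped to [0, 1]; the weights are illustrative.
  const raw =
    0.35 * s.botShare +
    0.25 * s.flaggedAccountShare +
    0.2 * s.polarizationIndex +
    0.2 * s.burstiness;
  return Math.min(1, Math.max(0, raw));
}

// A narrative crossing a chosen threshold would trigger a timely warning.
const example: SpreadSignals = {
  botShare: 0.6,
  flaggedAccountShare: 0.3,
  polarizationIndex: 0.7,
  burstiness: 0.8,
};
console.log(riskScore(example).toFixed(2)); // ~0.59: worth a closer look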
Jule Pattison-Gordon is a senior staff writer for Governing and former senior staff writer for Government Technology, where she specialized in cybersecurity. Jule also previously wrote for PYMNTS and The Bay State Banner and holds a B.A. in creative writing from Carnegie Mellon. She’s based outside Boston.