
How often do AI search engines cite incorrect sources?

Answer: Sixty percent of the time.

If you’re using a generative AI (GenAI) program for news searches, you probably want to double-check the results. A recent study from the Columbia Journalism Review’s Tow Center for Digital Journalism has found that GenAI-powered search tools get things wrong more often than they get them right.

In a study of eight such AI-powered tools, the team ran 1,600 queries in which they fed the bots direct excerpts from actual news articles. They then asked the programs to identify the source article's headline, original publisher, publication date and URL.

Perplexity had the lowest error rate, providing incorrect information in 37 percent of the queries it received. ChatGPT Search answered 134 of its 200 queries incorrectly, a 67 percent error rate. The highest error rate came from Grok 3, at 94 percent. The researchers also noted that all eight tools consistently offered incorrect or speculative answers rather than declining to answer when they lacked sufficient information.

You can read more details and results from the study here.