
New AI Tool Finds Sources to Combat Unsupported Claims in Writing

Software that detects AI use and plagiarism in writing now includes a function to assess the credibility of claims in a body of text, surfacing Internet sources that either support or contradict the author's claims.

GPTZero, an AI detection software tool that universities can integrate with their learning management systems, launched a new feature last week aimed at combating misinformation and improving the factual integrity of a user's writing.

As described in a recent news release, the company's new Source Finder function uses AI to scan text for claims that might need supporting evidence and offers links to Internet sources that could back them up. Alternatively, it might notify the author that some Internet sources contradict their argument, or surface sources that relate to a claim the author is making but don't definitively support or disprove it.

This new feature comes at a time when tech companies are placing more emphasis on sourcing generative AI’s statements. OpenAI’s ChatGPT, for example, started providing links to web sources relevant to its generated responses in October 2024 for some users, then rolled out that feature for the free version of ChatGPT last month. Up until then, a user had to verify the information on their own, according to a news release from OpenAI.

This extra step in fact-checking has been a concern for academics, especially with the propensity of large language models to hallucinate, or generate plausible-sounding but incorrect information, which can include fabricating sources.

“Misinformation is not a new problem,” GPTZero's news release said. “But in an era where both AI and the Internet have allowed for the rapid generation and distribution of info, you’ll find people making loud, pointed arguments based on evidence that is of dubious origin — or worse, without evidence at all.”

The release said Source Finder uses a data set of over 220 million scholarly articles, preprints — early versions of scientific papers — and real-time news.

“We actively do NOT try to recommend AI-generated content, due to their unreliability, and we label sources that are potentially including AI-generated content,” the news release says.

But catching AI-generated content is difficult. Despite growing demand among educators for an easy way to tell which assignments are student-crafted and which aren’t, some studies have shown AI detection tools still aren’t very effective at catching AI-generated text. OpenAI announced its own detection tool called Classifier in January 2023 and took it down six months later “due to its low rate of accuracy.”