
Why isn’t OpenAI releasing its accurate watermarking tool?

Answer: It’s not because it isn’t good enough.

(Image: the silhouette of a person holding a smartphone in front of the OpenAI logo. Shutterstock)
Finding ways to determine whether a piece of text was written by AI has been a popular pursuit for almost as long as text-generating AI has existed. But programs capable of doing so have lagged far behind their text-generating counterparts, or else the companies behind them have held off on releasing them.

That’s the case for OpenAI, whose ChatGPT is one of the most popular AI chatbots. The company has reportedly developed a tool that can detect whether something was written by ChatGPT with 99.99 percent accuracy; it reportedly works by subtly adjusting how ChatGPT chooses words, embedding a watermark in the text that a companion detector can later spot. However, the company has been undecided for almost a year on whether to release the tool for public use because of its potential risks.

One of these is the possibility of user backlash: 69 percent of ChatGPT users told The Wall Street Journal they feared such a tool would be unreliable and could lead to false accusations, and 30 percent said they would switch to a different AI if OpenAI introduced such a system. Other risks the company is concerned about include “susceptibility to circumvention by bad actors and the potential to disproportionately impact groups like non-English speakers,” according to an OpenAI spokesperson.