OpenAI removes tool that could detect when a text is written by AI

26 Jul 2023


One professor from Australia took to Twitter to say that if OpenAI can’t detect AI-written text then ‘there’s probably no hope for outsiders like Turnitin’.

OpenAI has quietly sunset a tool that was intended to distinguish between AI-generated and human-written text because it was no longer sufficiently accurate.

In a recent update to a blog from January, OpenAI wrote that its AI classifier tool is no longer available “due to its low rate of accuracy” and that the company is currently working on improving the technology for possible future use.

“We are working to incorporate feedback and are currently researching more effective provenance techniques for text,” the update reads. “[We] have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.”

The AI classifier first launched in late January this year. OpenAI said at the time that while it is “impossible” to reliably detect all AI-written text, good tools “can inform mitigations for false claims” that AI-generated text was written by a human.

This could have applications for detecting automated AI-written misinformation campaigns, identifying instances of academic dishonesty in university settings and even exposing AI chatbots posing as humans.

“It should not be used as a primary decision-making tool, but instead as a complement to other methods of determining the source of a piece of text,” OpenAI said at the time.

Just last week, thousands of authors signed a letter written by the Authors Guild calling on the likes of OpenAI, Alphabet and Meta to stop using their work to train AI models without “consent, credit or compensation”. Before that, the US Federal Trade Commission slammed OpenAI with an investigation of its AI practices.

In order to douse some of the flames, OpenAI joined six other US companies – including Google, Meta and Microsoft – in making voluntary commitments to the White House (such as watermarking AI content) to ensure robust AI security measures as world leaders grapple with the rapid rise of AI.

The shutdown of the AI classifier is seen as a cause for concern by some. Toby Walsh, a professor of artificial intelligence at the University of New South Wales in Sydney, thinks the move is a blow to AI text detection more generally.

“If the company that builds these chatbots give up on detecting chatbots (with all their inside information on weights, guardrails etc) then there’s probably no hope for outsiders like Turnitin [a plagiarism detection software] to detect real versus fake text reliably,” Walsh tweeted.


Vish Gain was a journalist with Silicon Republic
