OpenAI has recently announced that it will shut down its AI detection tool
Citing its low accuracy rate, OpenAI has announced that it will discontinue its AI text detection tool, which was designed to differentiate between human-written and AI-generated text. In an updated blog post, the company stated that it is working to incorporate feedback and research more effective techniques for verifying the provenance of text.
As it shuts down the text classifier, OpenAI is shifting its focus to developing and deploying mechanisms that let users identify AI-generated audio and visual content. The company has yet to disclose specifics of how these mechanisms will work.
According to the blog post, in evaluations on a “challenge set” of English texts, the classifier correctly identified only 26% of AI-written text as “likely AI-written” (true positives), while incorrectly labeling human-written text as AI-written 9% of the time (false positives). Its reliability typically improved as the length of the input text increased. OpenAI noted that the classifier was nonetheless significantly more reliable on text from recent AI systems than its previously released classifier.
The classifier had notable limitations and inaccuracies from the outset. Users were required to manually input at least 1,000 characters of text, which the tool then analyzed and classified as either AI- or human-written. Even so, its performance fell well short of what reliable detection would require.
Adding to the company’s challenges, OpenAI recently saw the departure of its trust and safety leader, and the Federal Trade Commission (FTC) has opened an investigation into the company’s information and data vetting practices. OpenAI declined to comment beyond the details in its blog post.
The post Why OpenAI Shuts Down its AI Detection Tool? appeared first on Analytics Insight.