Citing poor accuracy, OpenAI has discontinued a tool designed to distinguish human writing from AI-generated text.
OpenAI announced the decision to shut down its AI classifier as of July 20th in an updated blog post. The company stated, “We are working to incorporate feedback and are currently researching more useful provenance techniques for text.”
As it retires the tool for identifying AI-produced text, OpenAI stated that it intends to “develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.” What those mechanisms might look like, however, is unclear at this time.
OpenAI openly acknowledged that the classifier had never been very effective at identifying AI-generated material and cautioned that it could produce false positives, meaning human-written text mistakenly flagged as AI-generated. Before the update announcing the shutdown, OpenAI had said the classifier could improve as it gathered more data.
People rushed to understand the technology after OpenAI’s ChatGPT exploded onto the scene and became one of the fastest-growing apps ever. A number of industries have expressed concern over AI-generated writing and artwork, particularly educators, who worry that students may stop studying and simply have ChatGPT complete their assignments. Citing worries about accuracy, safety, and cheating, New York schools even banned access to ChatGPT on school property.
Studies have shown that AI-generated material, such as tweets, can be more persuasive than text written by humans, raising concerns about AI-driven misinformation. Since governments haven’t yet worked out how to regulate AI, it falls to individual groups and organizations to set their own rules and build their own safeguards against the flood of computer-generated text. So far, no one seems to have answers for how to handle it all, not even the company that helped kick off the generative AI frenzy in the first place. Even if some fraudsters are caught, distinguishing AI output from human work will only become harder.
Amid the Federal Trade Commission’s investigation into how OpenAI vets information and data, the company has also just lost its trust and safety leader. Beyond its blog post, OpenAI declined to comment.