Sam Altman, CEO of OpenAI, leaves lunch during the Allen & Company Sun Valley Conference on July 6, 2022 in Sun Valley, Idaho.
Kevin Dietsch | Getty Images News | Getty Images
Artificial intelligence research startup OpenAI on Tuesday unveiled a tool that is designed to determine whether text is human-generated or computer-written.
The release comes two months after OpenAI captured the public’s attention with the introduction of ChatGPT, a chatbot that generates text appearing to have been written by a person in response to a prompt. Riding the wave of attention, Microsoft last week announced a multibillion-dollar investment in OpenAI and said it would incorporate the startup’s AI models into its consumer and business products.
Schools have rushed to limit the use of ChatGPT over concerns that the software could harm learning. OpenAI CEO Sam Altman has said education changed in the past after technologies such as calculators emerged, but he also said there could be ways the company could help teachers detect text written by AI.
The new OpenAI tool is error-prone and a work in progress, company employees Jan Hendrik Kirchner, Lama Ahmad, Scott Aaronson and Jan Leike wrote in a blog post, noting that OpenAI would like feedback on the classifier from parents and teachers.
“In our evaluations on a ‘challenge set’ of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as ‘likely AI-written,’ while incorrectly labeling human-written text as AI-written 9% of the time (false positives),” the OpenAI employees wrote.
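To make those two percentages concrete, here is a minimal, purely illustrative sketch (not OpenAI’s code) showing how the reported figures map onto a confusion matrix. The sample counts are hypothetical, chosen only so the rates come out to the 26% and 9% quoted above.

```python
def classifier_rates(true_positives, false_negatives, false_positives, true_negatives):
    """Return (true-positive rate, false-positive rate) as fractions.

    TPR: share of AI-written samples correctly flagged as AI-written.
    FPR: share of human-written samples wrongly flagged as AI-written.
    """
    tpr = true_positives / (true_positives + false_negatives)
    fpr = false_positives / (false_positives + true_negatives)
    return tpr, fpr

# Hypothetical evaluation: 100 AI-written and 100 human-written samples.
# 26 AI samples are flagged "likely AI-written"; 9 human samples are
# wrongly flagged, matching the rates OpenAI reported.
tpr, fpr = classifier_rates(26, 74, 9, 91)
print(f"TPR: {tpr:.0%}, FPR: {fpr:.0%}")  # TPR: 26%, FPR: 9%
```

In other words, the classifier misses nearly three-quarters of AI-written text while still occasionally accusing human writers, which is why OpenAI calls it a work in progress.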
This is not the first attempt to detect whether text came from a machine. Earlier this month, Princeton University student Edward Tian announced a tool called GPTZero, noting on its website that it was made for educators. OpenAI itself released a detector in 2019 alongside a large language model, or LLM, that was less sophisticated than the one underlying ChatGPT. The new version is better equipped to handle text from recent AI systems, the employees wrote.
The new tool is not good at parsing inputs containing fewer than 1,000 characters, and OpenAI does not recommend using it on languages other than English. Also, AI-written text can be edited slightly to prevent the classifier from correctly determining that it is not primarily the work of a human, the employees wrote.
Even in 2019, OpenAI made clear that identifying synthetic text is no easy task. The company intends to keep pursuing the challenge.
“Our work on AI-generated text detection will continue and we look forward to sharing improved methods in the future,” Hendrik Kirchner, Ahmad, Aaronson, and Leike wrote.
WATCH: China’s Baidu is developing an AI-powered chatbot to compete with OpenAI, according to a report