ChatGPT: program detects stolen texts written by artificial intelligence

OpenAI, the company behind the ChatGPT artificial intelligence program, released a new tool yesterday designed to determine if any text was written by an AI system.

However, the lab warns that the software isn’t entirely reliable and, in fact, is often wrong – at least for now.

The OpenAI update follows an international wave of concerns about copying and plagiarism in academia.

In a post on the lab’s official blog, the company encourages anyone interested to use the new software, which has been “trained” to distinguish text written by humans from text composed by various intelligent systems (not just ChatGPT).

OpenAI researchers acknowledge that it is impossible to detect texts written by artificial intelligence systems with 100% accuracy. They add, however, that good classification tools can pick up telltale "footprints" and other signs of AI use.

They said the software could be useful in cases of suspected “dishonesty” at an academic level, as well as detecting AI-powered chatbots impersonating humans.

Examiners return to paper and pencil

The software's developers admit that the recognition program is "not entirely reliable": it correctly identified only 26% of the English-language AI-written texts submitted for verification. Texts actually written by people are also a problem, since the program classified 9% of the human-written texts submitted for verification as the product of artificial intelligence.
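To see what those two figures imply in practice, here is an illustrative back-of-the-envelope calculation. The 26% and 9% rates come from the article; the class of 100 essays with 20 AI-written submissions is a purely hypothetical scenario chosen for the example.

```python
def expected_flags(n_ai, n_human, tpr=0.26, fpr=0.09):
    """Expected number of texts the classifier flags as AI-written,
    using OpenAI's reported detection rate (26%) and
    false-positive rate (9%)."""
    true_flags = n_ai * tpr      # AI-written texts correctly flagged
    false_flags = n_human * fpr  # human-written texts wrongly flagged
    return true_flags, false_flags

# Hypothetical batch of 100 essays: 20 AI-written, 80 human-written.
tp, fp = expected_flags(20, 80)
print(tp, fp)  # roughly 5.2 true flags vs 7.2 false flags
```

Under these assumptions the classifier would wrongly accuse more human authors than it would catch AI ones, which is why the lab cautions against using it as a primary decision-making tool.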

"The reliability of the classifier generally increases as the length of the text increases. Compared to the classifier we released earlier, this new one is significantly more reliable on text from more recent AI systems."

Since access to ChatGPT has become public, it has caused a wave of concern among educational institutions around the world about possible fraud in exams and other assessment processes.

Academics in the UK are being urged to reconsider how they assess students in their courses, while some universities have banned electronic devices altogether and returned to pen-and-paper exams to keep students away from the AI system.

(Image source: Unsplash)

A lecturer at Deakin University in Australia calculated that almost 20% of the papers she graded over the summer were written using artificial intelligence.

Some scientific journals have also banned the use of ChatGPT for writing articles.

For long texts and only in English

OpenAI notes that the classification tool has several limitations and is unreliable for texts shorter than 1,000 characters. The researchers also stress that the software should only be used on text written in English, as it performs "significantly worse" in other languages and is unreliable when reviewing code.

"It should not be used as a primary decision-making tool, but as a complement to other methods of determining the origin of a text," OpenAI points out, asking educational institutions to share their classroom ChatGPT use cases with the company.

Most institutions have reacted negatively, banning the use of such AI systems, but there are exceptions: three major universities in South Australia updated their rules last month to allow the use of AI tools such as ChatGPT, provided their use is disclosed in writing.


Author: newsroom

Source: Kathimerini
