Most generative AI text detectors fall short; even OpenAI's detector fails


(Photo: Shutterstock)

Almost all of the detectors designed to identify text written by generative artificial intelligence (AI) such as ChatGPT have been found to fall short. Even OpenAI's own detector received a failing grade.

TechCrunch, an American technology media outlet, reported on the 16th (local time) the results of directly testing seven generative AI text detectors that have recently emerged.

According to the report, TechCrunch ran OpenAI's 'classifier', AI Writing Check, GPTZero, Copyleaks, GPT Radar, CatchGPT, and Originality.AI against texts written by Anthropic's Claude in eight categories.

The Claude-written samples included an encyclopedia entry on Mesoamerica (the region of civilizations spanning southern Mexico and Central America roughly 3,000 years ago), a marketing email, a news article, a cover letter for a paralegal job application, a resume for a software engineer, and an essay outline on the merits of gun control.

For reference, OpenAI itself disclosed at launch that its classifier correctly identified only 26% of AI-written text as "likely AI-written," while incorrectly labeling 9% of human-written text as AI-written. Nonetheless, the company claimed it was far more reliable than existing classifiers.

But the test results told a different story.

In the first test, the Mesoamerica encyclopedia entry, every detector except GPTZero gave a wrong answer. Next, all seven failed on the marketing email, while GPTZero and CatchGPT correctly identified the university assignment and the essay outline. In addition, GPTZero caught the Claude-written news article, GPTZero and CatchGPT caught the cover letter, and CatchGPT caught the resume.

In total, the seven detectors managed only 9 correct answers across the 8 items (a 16% accuracy rate), and 5 of those came from GPTZero. CatchGPT followed with three, and OpenAI's classifier got only one right. Four detectors scored zero.
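The 16% figure follows from the tally above: 7 detectors judging 8 samples each makes 56 judgments in total. A minimal sketch of the arithmetic (the per-detector counts mirror the article's breakdown; the zero-scoring detectors are the remaining four tools named earlier):

```python
# Tally reported in the TechCrunch test: correct calls per detector,
# out of 8 Claude-written samples each (GPTZero 5, CatchGPT 3,
# OpenAI classifier 1, the other four detectors 0).
correct_by_detector = {
    "GPTZero": 5,
    "CatchGPT": 3,
    "OpenAI classifier": 1,
    "AI Writing Check": 0,
    "Copyleaks": 0,
    "GPT Radar": 0,
    "Originality.AI": 0,
}

total_judgments = len(correct_by_detector) * 8      # 7 detectors x 8 samples = 56
total_correct = sum(correct_by_detector.values())   # 9
accuracy = total_correct / total_judgments

print(f"{total_correct}/{total_judgments} = {accuracy:.0%}")  # 9/56 = 16%
```

Rounded to the nearest whole percent, 9/56 is the 16% the article reports.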

GPTZero, which recorded the highest accuracy, is a tool developed last month by Princeton University student Edward Tian.

TechCrunch attributed this result to the fact that the detectors are themselves trained language models. While generative AI improves day by day, the detectors learn from older text, so TechCrunch predicted that detection will only become more difficult.

As a result, generative AI detectors may be the only fallback available at this point, but they are unlikely to be reliable.

TechCrunch concluded that "the only conclusion is that there is no silver bullet to capture AI-generated text," and "probably never will be."

Reporter Lim Dae-jun

