
AI detector that judges by AI if English is poor


(Photo = shutterstock)

As the quantity of content generated by artificial intelligence (AI) increases, tools are being developed to screen it. Demand for such tools is growing, especially in the academic field.

Nonetheless, it is troubling to find that many of these tools have a bias that misjudges texts written by non-native English speakers as having been created by AI. This is expected to be controversial as another form of discrimination.

The Guardian reported on the 10th (local time) that researchers at Stanford University confirmed this trend by testing English essays written by non-native English speakers with AI text detectors, and published a related paper in the data science journal ‘Patterns’.

A research team led by Professor James Zou of the Department of Biomedical Data Science ran seven AI text detection tools, known as “GPT detectors”, on 91 essays written by non-native English speakers and 88 essays written by native English-speaking American students.

The detection tools flagged more than half of the essays written by non-native speakers for the TOEFL, an English language proficiency test, as AI-generated, and one of the tools judged 98% of them to be AI-written. By contrast, more than 90% of essays written by native English-speaking American eighth graders were classified as written by humans.

In this regard, the researchers focused on the ‘perplexity’ of the text. This is a technical measure of how “confused” an AI language model is when it tries to guess the next word.


A language model generates a sentence by probabilistically guessing the word that follows each preceding word. If the next word is easy to guess, perplexity is low. AI text detectors use this in reverse, judging text to be AI-generated when the words in a sentence have low perplexity.

The researchers found that the non-native speakers’ essays used in the experiment had low perplexity. They pointed out that detection tools could therefore disadvantage non-native speakers, who have no choice but to express themselves in a more limited range of language than native speakers do.
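The detection heuristic described above can be sketched with a minimal perplexity calculation. This is a hypothetical illustration of the concept, not the code of any actual detector; the per-word probabilities below are made-up stand-ins for what a language model would assign.

```python
import math

def perplexity(token_probs):
    # Perplexity is the exponential of the average negative
    # log-probability the model assigned to each word.
    n = len(token_probs)
    avg_neg_log_prob = -sum(math.log(p) for p in token_probs) / n
    return math.exp(avg_neg_log_prob)

# Hypothetical per-word probabilities: predictable, formulaic phrasing
# gets high probabilities (low perplexity), while unusual phrasing
# gets low probabilities (high perplexity).
predictable_text = [0.9, 0.8, 0.85, 0.9]
unusual_text = [0.2, 0.1, 0.3, 0.15]

print(perplexity(predictable_text) < perplexity(unusual_text))  # True
```

A detector built on this logic would flag the low-perplexity text as AI-generated, which is exactly why limited, formulaic vocabulary from non-native writers can be misclassified.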

The researchers stressed that misidentifying students’ essays, job seekers’ applications, or scholars’ papers as AI products could have serious repercussions. They also found that the performance of AI text detectors was lower than expected, and advised caution in relying on their results.

‘GPT detectors’ refer to AI detectors developed to screen out text written with ‘ChatGPT’, which has become a problem in education. Current tools include those developed by ‘GPTZero’, CopyLeaks, Sapling, and HF Space. However, the researchers did not disclose the names of the products used in the experiment.

Reporter Jeong Byeong-il jbi@aitimes.com
