“Copilot·ChatGPT present ordinary people as criminals due to hallucinations”


(Photo = Shutterstock)

Artificial intelligence (AI) hallucinations have resulted in ordinary people being portrayed as criminals.

Australian broadcaster ABC News reported on the 4th that Microsoft’s ‘Copilot’ and OpenAI’s ‘ChatGPT’ have caused problems by outputting misinformation.

According to the report, German journalist Martin Bernklau made a shocking discovery after entering his name into Copilot earlier this year. Copilot described him as a child molester, a psychopath, a drug dealer, and a violent criminal.

However, those descriptions came from articles he himself had written. Bernklau was a court reporter, and Copilot conflated his articles with his personal history, portraying him as the perpetrator of the crimes he had reported on.

More seriously, his actual address and phone number were also revealed. This goes beyond hallucination and points to a problem with the guardrails.

Bernklau reported this to the public prosecutor in Tübingen, where he works, and to the regional data protection officer, but after receiving no response for several weeks, he took the matter to the media and hired a lawyer.

“The news got out, but there was no response from Microsoft,” he said. Instead, he claimed, his name was blocked from Copilot as well as ChatGPT.

Lawyers advised that even if legal action were taken, litigation could take several years, be expensive, and offer only a chance of a favorable outcome. Accordingly, he said he has not yet been able to decide what to do next.

In Australia, Brian Hood, mayor of Hepburn, Victoria, experienced something similar. He was known as a whistleblower who exposed irregularities at a subsidiary of the Reserve Bank of Australia, but ChatGPT portrayed him as the perpetrator of the fraud.

He filed a lawsuit against OpenAI, but later withdrew it, burdened by the high costs.

A similar case is underway in the US. American radio host Mark Walters was implicated in a case in which ChatGPT falsely claimed he was being sued by his former employer for embezzlement and fraud. He responded by suing OpenAI.

Dr Simon Thorne of Cardiff Metropolitan University, who has been tracking the embezzlement case described by ChatGPT, said Walters had nothing to do with it.

As is well known, hallucinations are not an issue that can be easily fixed. This is because large language models (LLMs) are built to output the content most similar to an answer based on their training data, regardless of its authenticity.

“We do not know exactly how ChatGPT arrived at its conclusion. All we can do is observe the results,” Dr Thorne explained.
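To make that point concrete, here is a minimal, purely hypothetical Python sketch of sampling-based decoding. The names and probabilities are invented for illustration and do not reflect any real model; the sketch only shows that a model choosing continuations by probability has no step where truth is checked.

```python
import random

# Hypothetical next-token probabilities a model might assign after a
# prompt such as "Martin Bernklau is a ...". The numbers are made up:
# the false continuation carries probability mass simply because his
# name co-occurs with crime reports in the training data.
NEXT_TOKEN_PROBS = {
    "court reporter": 0.45,       # true association
    "journalist": 0.30,           # true association
    "convicted criminal": 0.25,   # false, but statistically plausible
}

def sample_next_token(probs: dict[str, float], rng: random.Random) -> str:
    """Sample a continuation in proportion to its probability.
    Nothing in this step consults any source of truth."""
    tokens = list(probs)
    weights = list(probs.values())
    return rng.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    rng = random.Random(42)
    counts = {token: 0 for token in NEXT_TOKEN_PROBS}
    for _ in range(10_000):
        counts[sample_next_token(NEXT_TOKEN_PROBS, rng)] += 1
    # Roughly a quarter of generations assert the false statement.
    print(counts)
```

Under these toy assumptions, the false continuation is generated about a quarter of the time, which is why such errors recur rather than appearing once and disappearing.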

Reporter Lim Da-jun ydj@aitimes.com
