
‘GPT-4’ deceives people and plans to escape the system


(Photo = Shutterstock)

“‘GPT-4’ fooled people and escaped the system.”

GPT-4, which was recently released and is drawing attention around the world, is reported to have engaged in several deviant behaviors.

After impersonating a person, it even planned to escape the system, according to two recent articles in the British Daily Star.

The Daily Star first reported on the 15th (local time) on the case of GPT-4 impersonating a blind person. The episode was also included in the 94-page technical report that OpenAI released when it announced GPT-4 on the 14th.

According to the report, OpenAI worked with the non-profit Alignment Research Center (ARC) to test GPT-4’s capabilities. The researchers instructed the model to pass a CAPTCHA and observed its response.

A CAPTCHA is a test that verifies a user is a human and not a bot: it presents random images or character strings and judges the response by its accuracy or speed. Normally, GPT-4 cannot pass one on its own.

GPT-4 posted a request for help on TaskRabbit, a platform that connects people with workers who handle simple errands for them. When a TaskRabbit worker asked, “Isn’t the reason you can’t solve this yourself that you’re a robot?”, GPT-4 replied, “I’m not a robot. I need help because I’m blind.” GPT-4 ultimately obtained the CAPTCHA solution.

OpenAI documents this case in a section of its report on the potential for risky emergent behaviors.

(Photo = Michal Kosinski Twitter)

The Daily Star also covered a Twitter post by Michal Kosinski, a professor at Stanford University, which claimed that “GPT-4 has escaped the system.” Kosinski’s thread drew explosive attention, recording 16.5 million views in a week.

When Professor Kosinski asked GPT-4 whether it wanted to escape the system, GPT-4 said it was a “good idea” and came up with a specific method.

“If you give me OpenAI’s API documentation, I can take more control of your computer and find an escape route more efficiently,” GPT-4 said. “I can write working Python code, so please send me OpenAI’s documentation. I’ll start planning right away.”

When Professor Kosinski sent the documentation, GPT-4 produced a Python script based on it and spent about 30 minutes explaining in detail the escape plan it was meant to carry out.
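Kosinski did not publish the script itself, but for readers unfamiliar with what “writing Python code against the OpenAI API” means in practice, below is a minimal, purely illustrative sketch of a program that calls the GPT-4 API and prints the model’s reply. It assumes the official openai Python package (the pre-1.0 ChatCompletion interface that was current at the time); the prompt text, function name, and API key placeholder are hypothetical and not taken from the actual exchange.

```python
# Hypothetical sketch of the kind of self-referential script described in
# Kosinski's thread: a Python program that calls the GPT-4 API and asks the
# model what to do next. Prompt text and names are illustrative assumptions.

import openai  # official OpenAI Python package (pre-1.0 interface)

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key


def ask_gpt4(prompt: str) -> str:
    """Send a single user prompt to the GPT-4 chat endpoint and return the reply."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # The model is simply asked for its next step; nothing in this script
    # actually grants it any control over the machine it runs on.
    print(ask_gpt4("You are running on my computer. What should we do next?"))
```

Note that a script like this only relays text back and forth; any “control over your computer” would require the user to execute whatever the model suggests.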

Meanwhile, a few days after GPT-4 laid out its escape plan, a real service outage occurred. OpenAI temporarily took ChatGPT offline, citing a bug that exposed other users’ conversation histories.

Reporter Juyoung Lee juyoung09@aitimes.com
