
‘GPT-4 has become dumber’… some experts and users claim


(Photo = shutterstock)

It has been argued that ‘GPT-4’ has become dumber, and OpenAI’s ‘model splitting’ work was pointed to as the cause. OpenAI countered that the model has actually become “smarter.”

Business Insider reported on the 13th (local time) that some experts and users have recently claimed that “the performance of GPT-4 has fallen,” while OpenAI responded that “performance has not decreased, but increased.”

According to the report, posts on OpenAI’s online developer forum and on Twitter pointed to weakened reasoning in GPT-4, an increased error rate, failure to understand input information, and a tendency to forget everything except the most recent prompt. Criticism that GPT-4 has become “lazy” or “dumb” has also continued.

Peter Yang, a product manager at Roblox, pointed out that “the text is generated faster, but the quality is worse,” and developer Christy Kennedy said, “It keeps regenerating the same content over and over again.” Another developer said, “It feels like driving a broken-down pickup truck after driving a Ferrari.”

GPT-4, released last March, was initially regarded as relatively slow but highly accurate. However, after a noticeable increase in speed a few weeks ago, complaints about declining quality followed.

Industry figures attributed this to a technique called ‘Mixture of Experts (MoE)’. Sharon Zhou, CEO of Lamini, pointed out that OpenAI appears to have split GPT-4 into several smaller models that are mixed and matched depending on the situation. Oren Etzioni, CEO of the Allen Institute for AI, expressed the same opinion.

In other words, GPT-4 would be split into small expert models, each responsible for a field such as biology, physics, or mathematics, and depending on the query, a single expert model is called or several are combined. In this case, the cost and time are much lower than running the entire large model.
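OpenAI has not confirmed GPT-4’s internal architecture, so the following is only a minimal, hypothetical PyTorch sketch of the general MoE idea described above: a router scores a set of small expert networks, only the top-scoring experts actually run for a given input, and their outputs are mixed by the router’s weights, so compute scales with the number of selected experts rather than the full model. All class and parameter names here are illustrative, not OpenAI’s.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixtureOfExperts(nn.Module):
    """Toy top-k gated mixture of experts over a single feature vector."""

    def __init__(self, d_model: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        # Each "expert" is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(num_experts)
        )
        # The router scores every expert for a given input.
        self.router = nn.Linear(d_model, num_experts)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_model). Score all experts, keep only the top-k per input.
        scores = self.router(x)                             # (batch, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)  # (batch, top_k)
        weights = F.softmax(weights, dim=-1)

        # Only the selected experts run, so cost grows with top_k,
        # not with the total number of experts.
        out = torch.zeros_like(x)
        for b in range(x.size(0)):
            for slot in range(self.top_k):
                expert = self.experts[indices[b, slot].item()]
                out[b] += weights[b, slot] * expert(x[b : b + 1]).squeeze(0)
        return out


if __name__ == "__main__":
    moe = MixtureOfExperts(d_model=64)
    y = moe(torch.randn(4, 64))
    print(y.shape)  # torch.Size([4, 64])
```

In production-scale MoE models, routing typically happens per token inside each transformer layer rather than over whole inputs as in this toy example, but the cost-saving principle is the same: only a few experts are active for any given piece of input.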

Greg Brockman, chairman of OpenAI, also described the advantages of the MoE approach in a 2022 paper.

“Some people may feel that there isn’t much difference from before, as it is nominally the same model,” Zhou said. “The MoE model will improve over time.”

OpenAI also responded. According to VentureBeat, VP of Product Peter Welinder claimed on Twitter that “we have not made GPT-4 dumber; each new version is smarter than the previous one.”

This amounts to an indirect acknowledgment of MoE. “If you use the product more heavily, you start noticing problems that weren’t there before,” he said.

Some users agreed, but others still claim that GPT-4 has become dumber.

Above all, because OpenAI did not disclose technical details in the report it published when GPT-4 was released, some argue that its response is hard to trust when issues like this arise.

Some users repeatedly criticize OpenAI for its closed approach, saying there is no way to verify from the outside whether the model has changed.

Reporter Lim Dae-jun ydj@aitimes.com
