Reports have emerged that OpenAI’s next-generation flagship model, ‘GPT-5’, also referred to as ‘Orion’, is facing significant release delays. Although most of the main points were already known, it was newly revealed that pre-training for the model has been conducted twice.
The Wall Street Journal (WSJ), citing anonymous insiders, reported on the 20th (local time) that OpenAI has run into one problem after another with the Orion project, which is why its release continues to be delayed.
According to the report, OpenAI has been developing the new model for more than 18 months but has not achieved the performance improvement expected over GPT-4. Whether it is smart enough to merit the name GPT-5 will largely come down to intuition and the judgment of technologists, and so far the signs are not good.
In addition, despite efforts to integrate reasoning models, the overall pace of development is slower than expected, raising concerns that artificial intelligence (AI) development has reached a plateau.
Most of this is consistent with what The Information reported on the 9th of last month: as the pace of GPT model improvement slows, OpenAI is shifting its strategy, focusing on strengthening reinforcement learning in post-training and on inference capabilities rather than dramatically increasing model performance through pre-training. After that news, the view took hold that the ‘scaling law’ of large language models (LLMs) had reached its limit.
However, the new report adds that OpenAI has repeated pre-training at least twice. Each pre-training run took about six months, and each was reported to cost around $500 million (about 725 billion won).
It is true that performance has improved over the previous model, but the results fall short relative to the total investment of roughly $1 billion (about 1.45 trillion won).
This also lines up with earlier reports that announced GPT-5 would launch this year.
OpenAI launched GPT-4 in March of last year and is known to have begun training GPT-5 at the end of last year. There were then predictions that GPT-5 would be released in August, and a more recent report said Orion would be released in December. In the end, it appears that the model trained in the first half of the year and the two models trained in the second half each failed to meet expectations.
It was also revealed that all sorts of methods were used in the training process, which likewise matches previous reports. However, more detailed information has emerged than in the report of about 40 days ago.
OpenAI has secured high-quality data through contracts with more than 20 large global media outlets over the past year. However, when this did not produce clear pre-training results, people were hired to create new data by writing code or solving math problems, and synthetic data generated by ‘o1’ was also used.
It was also said within OpenAI that a lack of supercomputing infrastructure was part of the reason. The recent push to build the world’s largest data center, deploying 50,000 of NVIDIA’s ‘Blackwell’ chips in collaboration with startup Crusoe and Oracle, and the development of its own AI chip in partnership with Broadcom and TSMC, are all explained as part of this effort.
Of course, even if a large-scale data center were built immediately, it would still take at least six months to repeat the model’s pre-training. The new data center under construction in Texas, USA, is scheduled to be completed in the first half of next year.
It is not known whether OpenAI will discard the second pre-trained model and spend an enormous sum of money building a new one, or whether it will strengthen post-training and release the model at an acceptable level.
OpenAI declined to comment. However, when launch rumors emerged in December, it stated that “there are no plans to release models called Orion or GPT-5 within this year.”
Meanwhile, Google is also known to have suffered similar performance issues. Nevertheless, Google pushed ahead with the launch of ‘Gemini 2.0’ on the 11th. Instead of claiming that artificial general intelligence (AGI)-level performance had been achieved, it focused on “models optimized for AI agents.”
There have also been reports that OpenAI will announce an agent called ‘Operator’ in January next year.
Reporter Lim Da-jun ydj@aitimes.com