Anthropic CEO: “The scaling laws still hold… Superintelligence will emerge in 2-3 years.”


CEO Dario Amodei (Photo = YouTube, captured from the Lex Fridman channel video ‘Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity’)

Dario Amodei, CEO of Anthropic, pushed back against recently raised claims that the growth of large language models (LLMs) is stagnating because the limits of the ‘scaling laws’ have been reached. He emphasized that LLM performance will continue to improve and that ‘superintelligence’ will emerge within a few years.

On the 12th (local time), Amodei appeared on the podcast of well-known YouTuber and technology commentator Lex Fridman and spoke for five hours about superintelligence and the state of Anthropic’s development.

What drew the most attention was the question of limits to LLM performance improvement. OpenAI and Google have recently pointed out that there is a limit to how far LLM performance can be raised through pre-training alone, even by investing in more computing infrastructure and data. As a result, OpenAI and Google are reportedly seeking to increase the share of post-training, strengthen reasoning capabilities, and improve performance through supervised learning.

But Amodei insisted that scaling models is still the path to more capable AI. “I can’t really base it on anything other than inductive reasoning to say that the next few years will be like the last 10,” he said. “Perhaps the growth will continue, and there may be some magic that we haven’t yet been able to explain theoretically.”

However, he also named strengthening reasoning and post-training as good approaches. He added, “The scaling hypothesis seems to be about big networks. Big data leads to intelligence. We have documented scaling laws in many domains other than language.” This refers to a kind of ‘large world model (LWM)’ that learns from images and videos.

Even if performance gains slow, investment in infrastructure for pre-training is expected to continue. Anthropic is expected to spend billions of dollars next year to build clusters for model training, and hundreds of billions of dollars by 2027.

He also admitted that model alignment, which is a strength of Anthropic, is very difficult.

“It’s really hard to control the model, especially to steer its behavior in a very direct way,” he said. “It’s like a game of ‘whack-a-mole’: you hit one and another pops up without you noticing. There are many cases that slip by without being caught.”

Nevertheless, he predicted that Anthropic or other companies would create superintelligence by 2026 or 2027. “It’s becoming increasingly difficult to find any reason or logic why superintelligence won’t emerge,” he said.

However, he said his real concern is not whether superintelligence will emerge, but what will happen as a result.

“I’m concerned that the emergence of superintelligence could lead to the economy and power being concentrated in specific companies,” he said. “The real problem is the abuse of power through this.”

Reporter Lim Da-jun ydj@aitimes.com
