Altman: “We have already figured out how to build AGI… Our next goal is to achieve superintelligence.”


(Photo = Shutterstock)

Sam Altman, CEO of OpenAI, declared that the company would go beyond artificial general intelligence (AGI) and develop superintelligence. This is his third such statement since the new year began, and it appears OpenAI is now making it official that achieving AGI is treated as a fait accompli and that its target is being adjusted upward.

Altman said on the 6th (local time), through a post on his personal blog, that the company is now shifting its goal to developing superintelligence.

The post follows an interview with Bloomberg the previous day, in which he recounted his journey over the past 10 years, including the founding of OpenAI, the launch of ChatGPT, and his ouster by the board of directors. It appears to be intended to supplement the interview.

Among these remarks, the part that drew the most attention concerned AGI and superintelligence (ASI).

“We are now confident we know how to build AGI as it has been traditionally understood,” Altman said.

“We believe that in 2025, the first AI agents will join the workforce and make a real difference to companies’ output,” he continued. In other words, when the performance of today’s reasoning models is combined with agent systems, they can demonstrate human-level capabilities.

Altman also emphasized this point in the interview. He explained that “AGI has been reached when AI can do what a highly skilled human can do.” Reasoning models such as o1, which have already surpassed human knowledge in specific fields, gain autonomy through agent systems; this, he explained, is the way to build AGI.

He added, “We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word.”

“With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn create great abundance and prosperity,” he said.

This statement extends his remarks from last year that AGI is getting closer, and in particular follows the announcement made on the 5th.

Meanwhile, another point that stands out from both the blog post and the Bloomberg interview is the argument that “AI risks may become real, but the best way to address them is to release products and learn from experience.” This is reasoning he has put forward before: delaying a model’s release is not the answer, since no amount of safety testing in advance can make it perfect.

He also dismissed a question about the possibility of Elon Musk harassing him under the Trump administration, saying, “I don’t think so.”

In addition, he cited the board’s appointment of Emmett Shear as CEO as the most shocking moment of his dismissal by OpenAI’s board of directors. “At that moment, I thought everything was over. It was truly shocking,” he recalled.

Reporter Park Chan cpark@aitimes.com
