Dario Amodei, co-founder of Anthropic, has posted an article about what will happen when artificial general intelligence (AGI) arrives. Following Stanford University professor Fei-Fei Li's recent statement that "I do not know much about AGI," this text explains AGI in detail.
Amodei posted the lengthy article, running to 35 A4 pages, on his website on the 11th (local time). Its title is 'Machines of Loving Grace.'
The main content is a detailed explanation of five areas where AI can directly improve the quality of human life: ▲biology and health, ▲neuroscience and mental health, ▲economic development and poverty, ▲peace and governance, and ▲work and meaning.
Put simply, AI can contribute to reducing physical and mental illness, poverty, and inequality. Among these points, the claim that draws the most attention is that if AI is used well, the human lifespan could increase to 150 years, nearly double what it is today.
Before making these claims, he emphasized that he is neither an AI pessimist nor an AI doomsayer. Just as most people underestimate how radical the benefits of AI could be, he said, they also underestimate how serious the risks could be. He stressed that the only obstacle to a positive future with AI is safety, which is why Anthropic focuses on it.
He said he does not like the term AGI and would instead use the term 'powerful AI.' He also explained that he tried to avoid using the term in a promotional, exaggerated, or sci-fi sense.
Specifically, he pointed out, "I lose interest in the way many AI alarmists and AI company leaders talk about the post-AGI world," adding that "they talk as if it is their mission alone to make it happen, like a prophet leading people to salvation." He continued, "I think it is dangerous to see corporations as unilaterally shaping the world, and I think it is dangerous to view practical technological goals from an inherently religious perspective."
This is the same tone as Professor Li, who opposed the overuse of the term AGI. It also means he opposes the position that AI will destroy humanity.

Unlike Professor Li, however, Amodei went on to explain AGI in detail.
He introduced powerful AI as "an AI model with the following properties, likely similar in form to today's LLMs, though it may be based on a different architecture, involve multiple interacting models, and be trained differently."
First, he expects it to have powerful intelligence, smarter than a Nobel Prize winner. It would also be equipped with AI agent capabilities, allowing it to surpass even the most capable humans in handling tasks.
It is expected to have autonomy, such as planning on its own and asking questions of humans when needed, and it may even be able to control physical tools, robots, and equipment through computers.
The model could absorb information and generate actions at roughly 10 to 100 times the speed of a human, he explained, limited only by the response times of the physical world or the software it interacts with.
Finally, rather than being a single system, it would be made up of millions of copies, each able to act independently on unrelated tasks or, when needed, all working together the way humans collaborate, with subgroups possibly fine-tuned to excel at specific tasks. He described this as "a country of geniuses in a datacenter."

Amodei went on to highlight the need to proactively manage risks while offering a practical approach for AI to reshape the five critical areas listed above, from health and neuroscience to governance.
Even if the first four issues are well resolved, one important question still remains: "How can humans find meaning if AI does everything?" He confessed that this problem was actually the most difficult.
Nevertheless, he pointed out, "It is a mistake to believe that work done by humans is meaningless simply because AI can do it better." Most people are not the best in the world at any particular field and do not really care about that. He also argued that the meaning of work comes from connection with other humans, regardless of economics.
He predicts that if AI replaces humans in 90% of fields, the role of humans will grow in the remaining 10%. Also, even if humans are less efficient or more costly than AI, he believes that if humans produce more meaningful content than AI, they will retain a comparative advantage. Therefore, even when the era of the 'country of geniuses in a datacenter' arrives, he expects an economy in which humans participate to be maintained for the time being.
He said that at some point, when human labor loses all value and AI approaches the point where wealth must be distributed to humans, a 'Whuffie system' could be introduced. This is a term from Cory Doctorow's science fiction, referring to a way of distributing wealth according to a person's social reputation or level of respect.
Finally, he said, "I think the value of culture is a winning strategy, because culture is the sum of millions of small decisions that have moral force and tend to get everyone on the same side."
This means we must find our way based on common sense that everyone agrees with. AI, he added, will give us the opportunity to reach our destination faster.
Meanwhile, Amodei thanked DeepMind CEO Demis Hassabis, winner of the 2024 Nobel Prize in Chemistry, saying, "Thanks for showing us the way." He did not mention University of Toronto professor Geoffrey Hinton, the physics prize winner and AI pessimist.
Reporter Lim Da-jun ydj@aitimes.com