Google warns that artificial general intelligence (AGI) could appear by 2030 and could lead to a catastrophic crisis. This is not a vague guess: the warning rests on recursive AI improvement. In other words, it becomes dangerous when AI creates AI.
Google DeepMind published a 145-page paper titled 'An Approach to Technical AGI Safety and Security' on the 2nd (local time). It was written by about 30 DeepMind researchers, including DeepMind co-founder Shane Legg.
The researchers predicted that 'Exceptional AGI' could be developed before the end of this decade, that is, by 2030. They define Exceptional AGI as "a system with capabilities matching at least the 99th percentile of skilled adults on a wide range of non-physical tasks, including metacognitive tasks such as learning new skills."
The paper also warned that "severe harm" could occur. While this was not spelled out in detail, it was described as a "risk of permanently destroying humanity."
The paper also questioned whether artificial superintelligence (ASI), the industry's latest goal, is actually achievable. The researchers wrote, "Without significant architectural innovation, we cannot be confident that ASI will emerge anytime soon."
However, they argued that the most dangerous element of the current paradigm is recursive AI improvement: a feedback loop in which AI conducts its own AI research to create ever more sophisticated AI systems.
The concept was introduced as 'recursive self-improvement AI' in a 2018 paper, and over the past year it has become a hot topic in the AI community.

In fact, this is already being attempted in AI agent systems. Some agents internally write simple programs to solve specific problems. If these abilities gradually evolve, the agents could eventually be upgraded to build their own models. In the end, this could create AI or robots unaligned with human intentions, as in the movie 'Terminator.'
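To make the feedback loop concrete, here is a minimal, purely illustrative Python sketch. It is not from the DeepMind paper: `propose_improvement` and `evaluate` are hypothetical stand-ins for an AI system generating and benchmarking a successor version of itself.

```python
import random

def evaluate(params: float) -> float:
    # Hypothetical benchmark: a higher score means a more capable system.
    return params

def propose_improvement(params: float) -> float:
    # Stand-in for an AI proposing a modified version of itself.
    return params + random.uniform(-0.1, 0.3)

model = 1.0  # the current system's "capability"
for generation in range(10):
    candidate = propose_improvement(model)
    # The loop closes here: a candidate that scores higher replaces the
    # current system and becomes the one proposing the next change.
    if evaluate(candidate) > evaluate(model):
        model = candidate
    print(f"generation {generation}: capability {model:.2f}")
```

The danger the paper describes comes from this closure: each improved version drives the search for the next one, so capability can compound without a human in the loop.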
The paper's conclusions are fairly conventional. Google argued for blocking malicious actors' access to a hypothetical AGI, improving the understanding of AI systems' behavior, and hardening the environments in which AI can act. It also acknowledged that these research areas are in their early stages and leave much room for improvement.
"The transformative nature of AGI has the potential to bring both tremendous benefits and severe harm," the researchers wrote. "To build AGI responsibly, it is important for frontier AI developers to plan in advance to mitigate severe harm."
However, some experts disagreed with some of the paper's premises.
Heidy Khlaaf, chief AI scientist at the nonprofit AI Now Institute, told TechCrunch, "The concept of AGI as Google defines it is too vague to be scientifically evaluated."
In particular, Matthew Guzdial, a professor at the University of Alberta, said that recursive AI improvement is not realistic at the current level of technology. "Recursive improvement is the basis of the singularity argument," he said, "but I have never seen evidence that such a system works."
A more realistic problem is AI reinforcing itself with inaccurate output. Sandra Wachter, a researcher at Oxford University, said, "As AI-generated output spreads across the internet and gradually contaminates authentic data, models are now learning from outputs riddled with mistruths and hallucinations."
By Dae-jun Lim, reporter ydj@aitimes.com