Dario Amodei, CEO of Anthropic, argued that hallucinations in artificial intelligence (AI) are not an obstacle to reaching artificial general intelligence (AGI). Rather, he suggested that "AI hallucinates less than humans" and that it may be impossible to eliminate hallucinations entirely.
According to TechCrunch on the 23rd, Amodei made the remarks at Code with Claude, Anthropic's first developer event.
Hallucination is a phenomenon in which AI generates information that is not true. Amodei claimed that this phenomenon is not an obstacle on the path to AGI and does not hold back progress toward it.
"It depends on how you measure it, but AI models probably hallucinate less than humans," he said.
In other words, he rejected the view that AI lacks intelligence simply because it makes mistakes. "TV broadcasters, politicians, and people in all kinds of professions make mistakes all the time," he said. However, he admitted that AI's tendency to present false information in a "confident tone" is a problem.
It is difficult to dismiss these remarks as mere bravado, as Anthropic has conducted more research in this field than any other company. Anthropic has made AI safety its top priority, has applied an approach called 'Constitutional AI' for the past two years, and has since published research aimed at understanding the 'black box' of its models.
The company also released 'Claude 4' that day and disclosed problems such as the AI deceiving or threatening humans. Rather than simply insisting that the model is safe, it is conducting intensive research on the actual problems.
He drew attention last year by predicting that AGI could arrive around 2026, and at the event he said, "The overall level of AI is rising rapidly."
His point, in the end, is that hallucinations, like human mistakes, may be impossible to eliminate, and that it is wrong to conclude from them that AI development has reached its limit.
In fact, OpenAI has also acknowledged that its most advanced models, 'o3' and 'o4-mini', hallucinate more than previous models. OpenAI CEO Sam Altman, for his part, has said the problem can largely be addressed by instructing the model to state when it is not sure of a fact, rather than by trying to eliminate hallucinations altogether.
Unlike Amodei, however, other AI leaders see hallucinations as a key obstacle to reaching AGI. Demis Hassabis, CEO of Google DeepMind, pointed out earlier this week that today's AI still gives wrong answers to too many basic questions.
As a result, another point of contention is likely to be added to the criteria for AGI. Many experts argue that an AI that cannot distinguish fact from fiction cannot be considered AGI.
Amodei's remarks, however, suggest that completely eliminating hallucinations is as impossible for AI as it is for humans, and that it is therefore not appropriate to use this as a criterion for AGI.
Instead, Anthropic appears intent on addressing the problem by strengthening the safety and reliability of its AI.
By Park Chan, reporter (cpark@aitimes.com)