“Developing AI that identifies AI risks”… ‘Godfather of AI’ Bengio joins AI safety project


(Photo = yoshuabengio.org/)

Yoshua Bengio, a professor at the University of Montreal, has joined a British government-led artificial intelligence (AI) safety project that aims to develop AI models that can identify risks in other AI systems.

MIT Technology Review reported on the 7th (local time) that Professor Bengio has joined a project called ‘Safeguarded AI’, backed by the UK government through its Advanced Research and Invention Agency (ARIA).

The project, which will cost £59 million over the next four years, is building an AI system that can calculate a quantitative index, or “risk rating,” of an AI model’s impact on the real world. The idea is to supplement human testing with mathematical analysis of the potential harm of new systems.

The goal is to build AI safety mechanisms by combining scientific world models, which are simulations of the world, with mathematical proofs. These include descriptions of what the AI does, and humans are responsible for verifying that the model’s safety checks are correct.
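The project has not published code, so the sketch below is purely illustrative of the reported idea: a world model simulates what happens when an AI system acts, and a checker aggregates the simulated outcomes into a quantitative risk score that gates the action. Every name here (`world_model`, `risk_rating`, the harm threshold) is a hypothetical stand-in, not part of Safeguarded AI.

```python
import random
from dataclasses import dataclass

# Hypothetical sketch only: none of these names come from the Safeguarded AI
# project. It illustrates scoring an AI model's real-world impact by
# simulating its actions inside a "world model".

@dataclass
class Outcome:
    harm: float  # estimated harm of one simulated rollout, in [0, 1]

def world_model(action: str, seed: int) -> Outcome:
    """Stand-in for a scientific world model: simulates what happens
    in a toy world when the AI takes `action`."""
    rng = random.Random(seed)
    base = {"approve_loan": 0.1, "deploy_update": 0.3}.get(action, 0.5)
    return Outcome(harm=min(1.0, max(0.0, rng.gauss(base, 0.05))))

def risk_rating(action: str, n_rollouts: int = 1000) -> float:
    """Quantitative 'risk rating': average simulated harm across rollouts."""
    return sum(world_model(action, s).harm for s in range(n_rollouts)) / n_rollouts

HARM_THRESHOLD = 0.25  # assumed policy limit; humans would set and audit this

for action in ("approve_loan", "deploy_update"):
    score = risk_rating(action)
    verdict = "allowed" if score <= HARM_THRESHOLD else "blocked"
    print(f"{action}: risk={score:.3f} -> {verdict}")
```

In the actual project, the “mathematical proofs” component would presumably replace this statistical averaging with formal guarantees about the world model’s predictions; the averaging here is only a placeholder for whatever quantitative evaluation the team builds.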

Professor Bengio will join the project as its scientific director and provide scientific advice.

He argued that the complexity of advanced large language models (LLMs) makes it inevitable to use AI to protect AI. “That’s the only way, because at some point the models become too complex,” he said. “Even with the models that exist today, we cannot interpret their answers as a sequence of reasoning steps that humans can understand.”

He also noted that tech companies have no mathematical way to guarantee that AI systems will behave as programmed, a problem that could lead to catastrophic results.

Professor Bengio is a well-known AI safety advocate. Last October, he published a paper with scholars such as Stuart Russell calling on AI companies and governments to invest more in safety and ethics research. He also chairs the international scientific report on advanced AI safety, which involves 30 countries, the European Union (EU), and the UN.

“We’re running into a fog that may have a cliff behind it,” he said. “It could be years, it could be decades, and we don’t know how bad it is. But we need to build tools to clear the fog and make sure that if there is a cliff, we don’t fall over it.”

Meanwhile, Safeguarded AI is one of the UK government’s key projects to solidify its position as a leader in the field of AI safety. To that end, the UK hosted the first ‘AI Safety Summit’ last November.

Reporter Im Dae-jun ydj@aitimes.com
