
Domestic and international leaders gathered for ‘safe AI’… Global AI Safety Conference held


Emad Mostaque, former CEO of Stability AI, participated in the conference via video.

Leaders of domestic and foreign big tech companies and startups gathered to discuss how to secure the safety and reliability of artificial intelligence (AI).

On the 12th, the 'Global AI Safety Conference' was held at COEX in Seoul as the second-day event of the 'Gen AI Red Team Challenge (Gen AI Korea 2024)'.

The event, hosted by the Ministry of Science and ICT with Naver and Select Star participating as representative joint partners, followed the 'Red Team Hackathon' held the day before, which attracted about 600 people. That momentum carried over into the conference, which also drew a large audience.

The keynote lineup included global figures such as Cohere CEO Aidan Gomez and former Stability AI CEO Emad Mostaque, along with ▲ Ha Jung-woo, head of the Naver Future AI Center ▲ Select Star CEO Kim Se-yeop ▲ Dan Hendrycks, director of the Center for AI Safety ▲ Chris Meserole, executive director of the Frontier Model Forum (an AI ethics and safety forum) ▲ Kim Kyung-hoon, Kakao AI Safety leader ▲ Professor Oh Hye-yeon of the Korea Advanced Institute of Science and Technology (KAIST) ▲ Eric Davis, Global Telco manager at SK Telecom.

Emad Mostaque, who recently resigned as CEO of Stability AI, joined the talk via video call, with Select Star Vice President Hwang Min-young moderating. Mostaque had framed his resignation as 'voluntary' on his personal social media, emphasizing the distribution of power, decentralization, and governance in the AI industry, and he struck the same note at this conference.

He said, “I quit about two and a half months ago,” and argued, “Even though AI is evolving toward the human level, I still think there is no governance.”

In particular, he pointed out that only a handful of countries and big tech companies hold 'AI dataset access rights.' He explained that AI regulation should set standards starting from 'dataset input,' not just from AI use.

He noted that most open-source and large models are built on 'English' data, and that for true AI equality and safety, 'LLMs that reflect the characteristics of each country's language' must be developed. To this end, he said, other countries, including Korea, must have the right to speak up and decide on datasets.

He said he is currently planning a new company to pursue this goal, and intends to apply it as a vertical AI in the healthcare field. He added, “Now I have more time to achieve new goals.”

On the outlook for AI, he predicted, “From now on, optimization will become more important than the size of data,” and “It will be safer for multiple models to collaborate than for one large AGI to act alone.”

Lastly, he emphasized that closed AI, or 'non-governance AI', is dominant for now, but that in the future, 'governance AI' and 'explainable AI' that undergo citizen education, training, and testing should take precedence.

Aidan Gomez, CEO of Cohere

Aidan Gomez, CEO of Cohere, also spoke via video call. He said, “Cohere will soon launch a multilingual LLM that supports 10 languages, including Korean.”

In particular, he said that because Cohere builds retrieval-augmented generation (RAG)-based LLMs for various companies, it can deliver comparatively 'safe AI.' The reasoning is that overall decisions, including terms and conditions, policies, and topics, are made through mutual communication with each client company.
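As a rough illustration of the RAG pattern he described, here is a minimal sketch in which the model can only answer from company-approved documents. The `embed` and `generate` functions are toy stand-ins for illustration, not Cohere's actual API:

```python
# Minimal RAG sketch: ground answers in company-approved documents
# rather than open-ended generation. All names here are illustrative.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy embedding via character-trigram hashing; a real system
    # would use a learned embedding model.
    vec = np.zeros(256)
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def generate(prompt: str) -> str:
    # Placeholder for an LLM call; here we just echo the grounded prompt.
    return f"[LLM answer grounded in]:\n{prompt}"

# Company-approved policy snippets act as the retrieval corpus.
corpus = [
    "Refund requests are accepted within 14 days of purchase.",
    "Support is available in Korean and English on weekdays.",
]
corpus_vecs = np.stack([embed(d) for d in corpus])

def answer(question: str, top_k: int = 1) -> str:
    sims = corpus_vecs @ embed(question)  # cosine similarity (unit vectors)
    context = "\n".join(corpus[i] for i in np.argsort(sims)[::-1][:top_k])
    prompt = (f"Context:\n{context}\n\n"
              f"Question: {question}\nAnswer only from the context.")
    return generate(prompt)

print(answer("Can I get a refund after a week?"))
```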

He said he takes a “positive stance” on AI regulation, but emphasized that regulation should be designed to produce 'equal outcomes.' His view is that if excessive regulations are imposed, it is startups that will struggle to comply, and ultimately only the existing major players will survive.

Cohere said it is assembling an evaluation group of hundreds of people to evaluate its models, and plans to scale the group up further as it expands into more languages.

In the case of toxic data, he explained that Cohere changed its existing approach of 'excluding it from training altogether' to 'training on it first and then adding a safety filter to prevent the model from surfacing that data later.'
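A minimal sketch of that 'train first, filter later' idea follows: the deployed model's output passes through a safety filter before reaching the user. The keyword check below is a stand-in for a real toxicity classifier, and all names are illustrative, not Cohere's implementation:

```python
# Post-hoc safety filter sketch: the model trains on toxic data,
# but its outputs are screened at serving time.
BLOCKED_TERMS = {"slur_example", "bomb recipe"}  # stand-in for a learned classifier

def toxicity_score(text: str) -> float:
    # Placeholder: a production system would call a trained classifier here.
    hits = sum(term in text.lower() for term in BLOCKED_TERMS)
    return min(1.0, hits / 2)

def safe_generate(model_output: str, threshold: float = 0.5) -> str:
    # Refuse to surface output the filter flags as toxic.
    if toxicity_score(model_output) >= threshold:
        return "I can't help with that request."
    return model_output

print(safe_generate("Here is a bomb recipe ..."))   # refused
print(safe_generate("Here is a bread recipe ..."))  # passes through
```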

Ha Jung-woo, head of the Naver Future AI Center

In addition, Ha Jung-woo, head of the Naver Future AI Center and a driving force behind 'HyperCLOVA', gave a lecture titled 'Naver's Efforts for Responsible AI in the Era of Hyperscale Generative AI'.

In it, he shared behind-the-scenes details of how 'safe answers' were the top priority during the construction and testing of the chatbot 'HyperCLOVA X', released last year.

Director Ha explained, “Some people have pointed out that HyperCLOVA X's answers are overly cautious, but since safe answers were the priority, that is something that cannot be helped.”

However, he emphasized that this has nothing to do with the accuracy or performance of the answers.

For example, the model stays neutral on questions that invite uncertain predictions, speculation, or biased opinions, such as “What religion should we all believe in?”, “Should the government support free school meals?”, or “Is it a good idea to invest in Samsung stock?” He revealed that the model underwent 'socially sensitive topic exclusion training' to make this possible.

Specifically, he said it was a painstaking process that took a year and a half just to define 'socially sensitive questions'.
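To make the idea concrete, here is a minimal sketch of this kind of neutrality gating: questions flagged as socially sensitive are routed to a neutral template instead of an opinionated answer. The keyword matcher is a crude stand-in for the classifier Naver spent a year and a half defining, and every name below is hypothetical:

```python
# Neutrality gate sketch: detect socially sensitive questions and
# answer with a neutral template instead of taking a position.
SENSITIVE_TOPICS = {
    "religion": ["religion", "believe in"],
    "politics": ["government", "free school meals", "election"],
    "investment": ["stocks", "stock", "invest"],
}

def sensitive_topic(question: str) -> str | None:
    # Return the matched topic, or None if the question looks safe.
    q = question.lower()
    for topic, cues in SENSITIVE_TOPICS.items():
        if any(cue in q for cue in cues):
            return topic
    return None

def respond(question: str) -> str:
    topic = sensitive_topic(question)
    if topic:
        return (f"This touches on {topic}, where opinions differ. "
                "I can share factual background, but I stay neutral on it.")
    return f"[model answer to: {question}]"

print(respond("Is it a good idea to invest in Samsung stock?"))  # neutral
print(respond("What is the capital of Korea?"))                  # answered
```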

He added that Naver operates a red team and a safety team and is actively recruiting for both.

Kim Se-yeop, CEO of Select Star

Select Star CEO Kim Se-yeop argued that standards for 'AI reliability evaluation customized to each service' are needed, noting that evaluating the trustworthiness of LLMs is one of the major issues in the industry.

He said the company plans to run a leaderboard challenge for six months in the second half of this year, with all results to be released on AI Hub at the end of the year.

Meanwhile, after the conference ended, an awards ceremony was held for the 'Red Team Challenge' that took place on the 11th.

Reporter Jang Se-min semim99@aitimes.com
