Acryl's Jonathan 'ALLM' Tops Open Source Models on 'Tiger Leaderboard'


Tiger Leaderboard as of the 1st (Photo = Acryl)

Artificial intelligence (AI) specialist Acryl (CEO Park Oe-jin) announced on the 1st that its large language model (LLM) 'Jonathan ALLM' ranked first in the open source category on the 'Tiger Leaderboard' operated by Weights & Biases (W&B).

The Tiger Leaderboard, launched in April, is a platform that evaluates the performance of Korean LLMs. It assesses language comprehension and language generation abilities from various angles and publishes the results.

Jonathan ALLM, developed by Acryl, scored high in both language comprehension and generation, ranking third overall with an average score of 0.6675.

Considering that first-place Anthropic's 'Claude 3 Opus (0.7542)' and second-place OpenAI's 'GPT-4 (0.7363)' are closed-source models, it ranked first among open source models.

Notably, it was ahead of strong models such as Google's 'Gemini Pro (0.6645)' in fourth place and Mistral's 'Large (0.6259)' in fifth place.

Acryl said, "We were able to achieve this result through joint research with Professor Woo Hong-wook's research team at Sungkyunkwan University." The company added, "This research was conducted using a Korean dataset that Acryl collected and built itself, and we achieved performance surpassing large models with a small 8B model."

In the case of fine-tuning, the company emphasized that the Jonathan platform was used to achieve optimal performance. It added that Acryl's LLMOps is optimized for fine-tuning and retrieval-augmented generation (RAG), and that rapid LLM training is possible when combined with the distributed machine learning platform Jonathan.

Meanwhile, many high-performance models released in the past one to two months have not yet been registered on the Tiger Leaderboard. 'Claude 3.5 Sonnet', 'GPT-4o', 'Llama 3.1', 'Mistral Large 2', and 'Gemini 1.5 Flash' are not yet reflected on the leaderboard.

Nonetheless, this is the first time since the recent major reorganization of the Hugging Face LLM leaderboard that a domestic model has stood out in the global rankings.

Park Oe-jin, CEO of Acryl, said, "This first-place finish on the Tiger Leaderboard signifies a step forward for Korean LLM technology, and we will continue to deliver improved performance through ongoing research and development." He added, "At the same time, this result proves our LLMOps technology, and we will continue to present innovative AI models through research and development in various fields."

Reporter Jang Se-min semim99@aitimes.com
