‘Solar Pro Preview’, the early test version of Upstage’s ‘Solar Pro’ model released as open source on the 11th, took first place on the Hugging Face Open LLM Leaderboard. This comes about a year after Upstage first topped the leaderboard for open source model performance in August of last year.
Upstage (CEO Kim Seong-hun) announced on the 19th that the Solar Pro Preview model achieved first place on the Open LLM Leaderboard among models with under 70B parameters immediately after its release.
It currently ranks third among models with under 70B parameters and fifteenth overall.
This is considered a remarkable achievement, especially given that benchmark difficulty increased significantly when the global leaderboard was reorganized for Season 2. With only 22 billion (22B) parameters, the model recorded an average score of 39.61, showing performance comparable to much larger language models (LLMs).
Since it is still a test version, its performance is expected to improve further going forward.
“We are delighted that our latest model has once again proven its world-class performance on the global leaderboard,” said an Upstage official. “In particular, Solar Pro outperformed GPT-4o Mini on the EQ-Bench and MAGI-Hard evaluations, which measure LLMs’ emotional intelligence and advanced reasoning capabilities, surpassing the GPT-4 family for the first time.”
He continued, “With this, Upstage Solar has established a unique position among lightweight language models, not only in Korea but also in comparison with big tech companies.”
Meanwhile, Upstage first took first place on Hugging Face’s open source LLM leaderboard in August of last year. Early this year, fine-tuned versions of Solar built by many developers swept the top 10 spots on the leaderboard.
The EQ-Bench and Open LLM Leaderboard results can be checked on their respective websites.
Reporter Jang Se-min semim99@aitimes.com