Saltlux achieves first place on the Hugging Face LLM leaderboard among models under 35B

First place achieved in the 35B-and-under category on the Open LLM Leaderboard (Photo = Saltlux)

Saltlux (CEO Gyeong-il Lee), an artificial intelligence (AI) specialist, announced on the 15th that its LLM 'LUXIA 21.4B' achieved first place in the 35B-parameters-or-less category of the Hugging Face Open Large Language Model (LLM) leaderboard.

Its average score was 77.74 points. Even among all models regardless of size, it ranked 4th, beating models with 50B or more parameters. This demonstrated its 'economic feasibility' by delivering excellent capability with relatively few parameters.

Specifically, it was reported that its scores on the 'common sense ability (HellaSwag)' and 'reasoning ability (ARC)' tests were 91.88 and 77.47 points, respectively, setting new all-time highs.

The LUXIA 21.4B model is currently attracting attention, recording more than 500 downloads within just three days of being uploaded to Hugging Face. The company also announced plans to release the base version of the LUXIA 21.4B model.

Meanwhile, in the company's own Korean-language evaluation using the National Information Society Agency (NIA) AI Hub evaluation dataset, the Korean LUXIA model achieved RAG (retrieval-augmented generation) and summarization scores surpassing 'GPT-3.5 Turbo'.

Based on this strong performance, including over 90% accuracy in RAG-based question answering, the company is rapidly pursuing projects to deploy the LLM with various institutions and companies.
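The article does not describe Saltlux's pipeline, but the general RAG pattern it refers to can be sketched briefly. Below is a minimal Python illustration under stated assumptions: the toy corpus, the bag-of-words "embedding", and the prompt-assembly step are all hypothetical stand-ins for illustration, not Saltlux's actual system; a real deployment would use a neural embedding model and an LLM to generate the final answer.

```python
# Minimal sketch of retrieval-augmented generation (RAG):
# retrieve the passages most similar to the query, then ground
# the model's answer in them. All data and helpers are illustrative.
from collections import Counter
import math

# Hypothetical document store; a real system would index far more text.
CORPUS = [
    "Saltlux is a Korean AI company led by CEO Gyeong-il Lee.",
    "LUXIA 21.4B topped the under-35B category of the Open LLM leaderboard.",
    "RAG grounds an LLM's answer in documents retrieved for the query.",
]

def embed(text: str) -> Counter:
    """Toy bag-of-words vector; real systems use a neural embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank corpus passages by similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(CORPUS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble the grounded prompt an LLM would receive in a RAG pipeline."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("Which model ranked first in the under-35B category?"))
```

In a production system the final prompt would be passed to the LLM itself; accuracy figures such as the 90% cited above would then measure how often the model's grounded answers are correct.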

In addition, through industry-academia cooperation with Konkuk University's Natural Language Processing Laboratory (Professor Hak-soo Kim), the company said it is researching new model training methods and knowledge editing technology that can significantly improve the recency, factuality, and safety of information.

A Saltlux official said, "At this year's Saltlux AI Conference, our annual event, we will officially introduce an innovative LLM that overwhelmingly surpasses the current LUXIA," adding, "We will continue to secure super-gap competitiveness to dominate the global AI market."

Reporter Jang Se-min semim99@aitimes.com
