
MS Open-Sources ‘Phi 3.5’ SLM Series… “From Reasoning to Image Analysis”

Microsoft (MS) has released a new series of small language models (SLMs) called 'Phi 3.5'. According to benchmark results, it outperforms Google's 'Gemini 1.5', Meta's 'Llama 3.1', and OpenAI's 'GPT-4o Mini' in...

Apple Releases Benchmark Tool to Measure LLMs’ Real Capabilities… “Open Source and Closed Models Both Fall Far Short”

Apple has released a new benchmark tool that measures the actual capabilities of artificial intelligence (AI) large language models (LLMs). Test results for major models showed that open source models are...

Yoon Seok-jin, Liner USA General Manager: “AI Search Winner Will Be Determined by Source Selection Ability”

"Liner has been working to offer users with the search information they need for the past eight years. I imagine that the source selection ability of artificial intelligence (AI) search is by far the...

LG Unveils 7.8B-Parameter Open Source Model ‘EXAONE 3.0’… “Outperforms Big Tech Models of the Same Class”

LG has released its latest model, 'EXAONE 3.0', as open source. The company emphasized that the small language model (SLM) with 7.8 billion parameters outperforms similarly sized global open source models such as 'Llama 3.1...

The Most Powerful Open Source LLM Yet: Meta LLAMA 3.1-405B

Memory Requirements for Llama 3.1-405B: Running Llama 3.1-405B requires substantial memory and computational resources. GPU Memory: the 405B model can use up to the full 80GB of GPU memory per A100 GPU for efficient inference. Using Tensor...
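
The arithmetic behind that claim is easy to sketch. The snippet below is a rough, weights-only estimate of how many 80GB A100s a 405B-parameter model needs at common precisions; it ignores KV cache, activations, and runtime overhead, and the precision choices are illustrative assumptions rather than figures from the article.

```python
# Back-of-envelope weight-memory estimate for a 405B-parameter model.
# Weights only: KV cache, activations, and framework overhead are ignored.
PARAMS = 405e9            # 405 billion parameters
A100_MEMORY_GB = 80       # memory per A100 GPU

for name, bytes_per_param in [("FP16/BF16", 2), ("FP8/INT8", 1), ("INT4", 0.5)]:
    weights_gb = PARAMS * bytes_per_param / 1e9
    gpus = -(-weights_gb // A100_MEMORY_GB)      # ceiling division
    print(f"{name:9s}: ~{weights_gb:,.0f} GB of weights -> at least {gpus:.0f} x 80 GB A100s")

# Approximate results:
#   FP16/BF16: ~810 GB -> at least 11 A100s
#   FP8/INT8 : ~405 GB -> at least 6 A100s
#   INT4     : ~202 GB -> at least 3 A100s
```

At full 16-bit precision the weights alone exceed the memory of ten A100s, which is why tensor parallelism across a multi-GPU node (or quantization) is effectively mandatory for this model.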

Acryl’s ‘Jonathan Allm’ Tops Open Source Category on ‘Tiger Leaderboard’

Artificial intelligence (AI) specialist Acryl (CEO Park Oe-jin) announced on the 1st that its large language model (LLM) 'Jonathan Allm' ranked first in the open source category on the 'Tiger Leaderboard' operated by Weight...

Google Launches Ultra-Small Open Source Model ‘Gemma 2 2B’… “Outperforms GPT-3.5 and Mixtral 8x7B”

Google has open-sourced an on-device artificial intelligence (AI) model with 2.6 billion parameters. Google claims that the model outperforms larger models such as OpenAI's 'GPT-3.5' and Mistral's 'Mixtral 8x7B'. VentureBeat reported on the 31st...

The Only Guide You Need to Fine-Tune Llama 3 or Any Other Open Source Model

Fine-tuning large language models (LLMs) like Llama 3 involves adapting a pre-trained model to specific tasks using a domain-specific dataset. This process leverages the model's pre-existing knowledge, making it efficient and cost-effective in comparison...
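
As a rough illustration of that process, the sketch below fine-tunes a Llama-family model with LoRA adapters using Hugging Face transformers and peft. The model id, dataset file, and hyperparameters are placeholders chosen for the example, not settings taken from the guide above.

```python
# Minimal LoRA fine-tuning sketch (illustrative; model id, data file, and
# hyperparameters are assumptions, not recommendations from the article).
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Meta-Llama-3-8B"        # assumed model id (gated on the Hub)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token       # Llama tokenizers ship without a pad token

model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Attach low-rank adapters so only a small fraction of weights are trained.
lora_cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"],
                      task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()              # typically well under 1% of all weights

# Tokenize a hypothetical domain-specific corpus (one JSON object per line,
# each with a "text" field).
dataset = load_dataset("json", data_files="domain_corpus.jsonl", split="train")
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)
tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama3-lora", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1,
                           learning_rate=2e-4, bf16=True, logging_steps=10),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("llama3-lora-adapter")    # saves only the small adapter weights
```

Training only the low-rank adapters keeps GPU memory and compute far below full fine-tuning, which is the efficiency and cost advantage the excerpt refers to.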
