Meta unveils ‘LLaMA’, a large-scale language model like GPT3… joins the generative AI war


(Photo = shutterstock)

Meta has also joined the generative artificial intelligence (AI) war in earnest. However, instead of releasing products such as search or chatbots, it chose to release the large-scale language model (LLM) that underlies them.

Meta announced on its official blog on the 24th (local time) that it would openly release ‘LLaMA (Large Language Model Meta AI)’. Meta CEO Mark Zuckerberg also said via Instagram, “LLaMA shows a lot of promise in text generation, conversation, data summarization, mathematics, protein structure prediction, and more,” adding, “Meta is committed to this open research model.”

Accordingly, Meta will provide LLaMA under a non-commercial license to researchers, academia, government, and civic groups. It has not yet been applied to Meta products such as Facebook and Instagram, Bloomberg reported.

As an advantage of LLaMA, Meta cited that although it has fewer parameters than OpenAI’s or Google’s LLMs, it can achieve strong performance with far less computing power because efficiency is raised by training on high-quality data. The aim is to make it easy for anyone to use.

LLaMA comes in four sizes: the base 65B model (65 billion parameters), along with 7B (7 billion), 13B (13 billion), and 33B (33 billion) versions.

OpenAI’s ‘GPT-3.0’ and ‘GPT-3.5 (ChatGPT)’ have 175 billion parameters, and Google’s ‘PaLM’ has 540 billion parameters.

However, Meta explained that quality was improved by increasing the number of tokens (units of text data) used to train the LLM rather than by increasing the parameter count: 1.4 trillion tokens were used for LLaMA 65B and 33B, and 1 trillion even for LLaMA 7B, the smallest model.
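
The compute argument here is as much about serving cost as about training: the price of generating each token with a dense transformer grows roughly in proportion to the parameter count. The snippet below is a rough, hypothetical back-of-the-envelope sketch (not from the article) using the common ~2·N FLOPs-per-token approximation for inference; the parameter counts are the ones quoted above.

```python
# Back-of-the-envelope illustration (not from the article): dense transformer
# inference costs roughly 2 * N floating-point operations per generated token,
# where N is the parameter count, so a smaller model is cheaper to serve.

MODELS = {  # parameter counts cited in the article
    "LLaMA-7B": 7e9,
    "LLaMA-13B": 13e9,
    "LLaMA-33B": 33e9,
    "LLaMA-65B": 65e9,
    "GPT-3 (175B)": 175e9,
    "PaLM (540B)": 540e9,
}


def flops_per_token(params: float) -> float:
    """Approximate forward-pass FLOPs per generated token for a dense decoder."""
    return 2 * params


baseline = flops_per_token(MODELS["GPT-3 (175B)"])
for name, n in MODELS.items():
    cost = flops_per_token(n)
    # Report each model's per-token cost relative to GPT-3's 175B parameters.
    print(f"{name:>12}: ~{cost:.1e} FLOPs/token "
          f"({cost / baseline:.2f}x GPT-3's per-token cost)")
```

On this rough estimate, LLaMA 13B costs under a tenth of GPT-3 per generated token, which is the kind of efficiency Meta is pointing to when it says smaller models trained on more data can match larger ones.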

In addition, Meta said it used text from the 20 most widely spoken languages in the world for training.

Meta said that by sharing the LLM in this way, many researchers will be able to apply it to various fields through fine-tuning. It added that if many developers use LLaMA and share their test results, it will greatly help solve the problems of existing chatbots. To this end, it will also share the “LLaMA model card,” which details its approach to “responsible AI.”
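
The article does not say which tooling researchers would use. As a purely illustrative sketch, the snippet below assumes the research weights have been obtained under Meta’s license and converted to the Hugging Face transformers format (which provides LlamaForCausalLM and LlamaTokenizer classes); the local checkpoint path is a placeholder. Loading and prompting the model like this is the usual first step before any fine-tuning.

```python
# Hypothetical sketch: loading a locally converted LLaMA checkpoint for text
# generation with Hugging Face transformers. Assumes the weights were obtained
# under Meta's non-commercial research license and converted to the
# transformers format; the directory path below is a placeholder.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

MODEL_DIR = "/path/to/llama-7b-hf"  # placeholder path to a converted 7B checkpoint

tokenizer = LlamaTokenizer.from_pretrained(MODEL_DIR)
model = LlamaForCausalLM.from_pretrained(
    MODEL_DIR,
    torch_dtype=torch.float16,  # half precision keeps the 7B model on one GPU
    device_map="auto",          # needs the `accelerate` package for placement
)

prompt = "Summarize why training a smaller language model on more tokens can pay off:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=False)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```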

Meta emphasized its vision for AI in a conference call earlier this month. At the time, CEO Zuckerberg devoted more than half of the presentation to AI, saying, “Generative AI is the technology that will lead us now, and the metaverse is the technology of the future.”

Regarding this, Gil Luria, chief analyst at D.A. Davidson, told Reuters, “Today’s announcement from Meta appears to be a step toward testing its generative AI capabilities so that they can be implemented in future products.”

Meanwhile, Meta unveiled the LLM ‘OPT-175B’ in May of last year and, based on it, released a chatbot called ‘BlenderBot 3’ in August. In November, it launched ‘Galactica’, an AI tool that generates scientific papers, but took the service down after three days when it was found to produce inaccurate and racist text.

Reporter Lim Dae-jun ydj@aitimes.com
