Cerebras releases self-developed small language models as open source


(Photo: Cerebras)

Artificial intelligence (AI) chip developer Cerebras has released seven small language models, trained on its own supercomputer 'Andromeda', as open source.

The language models, released by the company free of charge, are AI models that function similarly to 'ChatGPT'. However, the number of parameters used for training ranged from 111 million to 13 billion, which is very small compared with the 175 billion used in ChatGPT.

According to Reuters, Cerebras announced on the 28th (local time) that it had released these models for free to promote more collaboration within the AI research and development community.

In this regard, CEO Andrew Feldman said, "There is a tendency not to open-source AI technologies, which is not surprising because development costs a great deal of money," explaining that the models are nonetheless being released free of charge.

Cerebras explained that the small language models unveiled this time can be additionally trained or customized on Nvidia systems, and that training the largest model took a little over a week.

In addition, it said that models with fewer parameters can be deployed on phones or smart speakers, while models with more parameters can be used on personal computers or servers.

Large language models with many parameters can perform more complex generative functions, while small language models reduce training cost and time and have the advantage of being easier to integrate into real applications.
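As a rough illustration of why parameter count matters for deployment, the storage needed for model weights alone scales linearly with the number of parameters. The sketch below uses 16-bit precision (2 bytes per parameter), a common assumption not stated in the article, together with the model sizes the article reports:

```python
# Rough memory footprint of model weights at 16-bit precision.
# The 2-bytes-per-parameter figure assumes fp16/bf16 weights; actual
# deployments may use other precisions or quantization.
def weight_memory_gb(num_params: int, bytes_per_param: int = 2) -> float:
    """Approximate weight storage in gigabytes."""
    return num_params * bytes_per_param / 1e9

# Parameter counts taken from the article.
sizes = {
    "Cerebras smallest (111M)": 111_000_000,
    "Cerebras largest (13B)": 13_000_000_000,
    "ChatGPT scale (175B)": 175_000_000_000,
}
for name, n in sizes.items():
    print(f"{name}: ~{weight_memory_gb(n):.1f} GB")
# → Cerebras smallest (111M): ~0.2 GB
# → Cerebras largest (13B): ~26.0 GB
# → ChatGPT scale (175B): ~350.0 GB
```

At these figures, only the smallest models fit comfortably in the memory of a phone or smart speaker, which matches the deployment split the article describes.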

Jeong Byeong-il, reporter jbi@aitimes.com

