[AI & Big Data Show] Shin Jeong-gyu, CEO of Lablup: “The environmental constraints on AI will steadily decrease.”

Shin Jeong-gyu, CEO of Lablup, giving a lecture at the ‘THE WAVE Seoul’ conference.

On the 19th, Shin Jeong-gyu, CEO of Lablup, gave a lecture titled ‘Connecting the dots of generation: resource-intensive AI workloads at the desktop level’ in the ‘Generative Artificial Intelligence (AI)’ session on the first day of ‘THE WAVE Seoul’, a side event of the 13th ‘Smart Tech Korea’.

The ‘Artificial Intelligence & Big Data Show’, hosted by the Korea Intelligent Information Industry Association (AIIA), Exporum, and AI Times, is held alongside ‘Smart Tech Korea’, the nation’s largest cutting-edge technology exhibition; together they host some 40 conference sessions, including Techcon and The Wave Seoul, where the above lecture was presented.

Accordingly, Lablup, which has GPU partitioning technology, also participated and presented the current status and outlook of local AI and on-device AI, whose environmental constraints are steadily being reduced.

CEO Shin Jeong-gyu said, “As concerns about dependence on NVIDIA GPUs have recently been raised, ‘local AI’ is rapidly re-emerging,” emphasizing that “it is because conditions such as cost are now being met.”

Specifically, he cited whisper.cpp, a tool that supports running OpenAI’s Whisper on the CPU, and explained that llama.cpp “has become so famous that if an LLM works offline, it can be said to be based on llama.cpp.”

He explained that ‘hardware design’, ‘model lightweighting and compression’, and ‘model inference software’ will be the key technical points for the revival of on-device and local AI going forward.

CEO Shin Jeong-gyu said, “We predict the external NPU market could be revitalized by 2026. At the same time, with Google, Microsoft, and NVIDIA all announcing ARM-based CPUs, a large competitive market will be established.”

He continued, “In the past, there was a time when we had to ask where to get 200 GPUs. Recently, more than 20,000 GPUs were used to develop Llama 3, and looking at recent trends, changes in the AI environment will continue to accelerate.”

Reporter Jang Se-min semim99@aitimes.com