GIST, NVIDIA Conduct Multi-Node GPU Programming Training

GIST students practice training deep learning models on multi-node GPUs. (Photo = GIST)

Gwangju Institute of Science and Technology (GIST, President Lim Ki-cheol) announced on the 26th that it held a deep learning model training session (DLI Day) together with its Supercomputing Center (Director Kim Jong-won) and NVIDIA.

To make efficient use of its multi-node clustering service of 32 or more GPUs, the GIST Supercomputing Center invited NVIDIA experts and conducted ‘multi-node GPU programming training’ at the X+AI Studio on the first floor of the AI Graduate School.

A total of 28 people, including faculty, students, and external researchers from Korea Aerospace University, Hallym University, Chonnam National University, and Chosun University, practiced the latest deep learning technologies and methodologies on supercomputers through the HPC-AI shared infrastructure (Dream-AI) built at GIST.
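For readers unfamiliar with what multi-node GPU programming involves, the sketch below shows a minimal data-parallel training loop spread across several GPU nodes using PyTorch's DistributedDataParallel. It is an illustrative assumption, not material from the GIST/NVIDIA course; the model, dataset, and launch settings are hypothetical.

```python
# Minimal multi-node data-parallel training sketch (PyTorch DDP).
# Hypothetical illustration; not taken from the GIST/NVIDIA course materials.
# Launch one copy per node, e.g. with torchrun:
#   torchrun --nnodes=4 --nproc_per_node=8 --rdzv_backend=c10d \
#            --rdzv_endpoint=<head-node>:29500 ddp_sketch.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for every process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # A toy model; real training would load an actual network and dataset.
    model = torch.nn.Linear(1024, 10).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for step in range(100):
        x = torch.randn(32, 1024, device=local_rank)
        y = torch.randint(0, 10, (32,), device=local_rank)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()  # gradients are all-reduced across every GPU in the job
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```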

GIST has maintained a close cooperative relationship with NVIDIA since inviting the company’s large language model (LLM) research and development experts in August of last year to provide training on methodology for building Korean-language models.

Director Kim Jong-won said, “The GIST Supercomputing Center is a specialized ultra-high-performance computing center for autonomous driving designated by the Ministry of Science and ICT, and it operates the highest-performance supercomputer among domestic educational and research institutions,” adding, “We plan to continue promoting global open collaboration among academia, industry, and research institutes so that we can provide HPC-specific education in the fields of mobility and digital twins in the future.”

Reporter Park Soo-bin sbin08@aitimes.com
