
Korea’s largest supercomputing infrastructure built at Gwangju AI Data Center


The AI data center under construction in Gwangju High-Tech District 3 (Photo=Artificial Intelligence Industry Convergence Project Group)

The AI Industry Convergence Project Group (Director Kim Jun-ha) is pushing a plan to significantly expand the performance and capacity of its AI data center, scheduled to open in October, by deploying the H100, Nvidia's high-end GPU due for release in the second half of the year.

The project group announced on the 11th that it plans to additionally deploy H100 GPUs, storage, and network equipment with a budget of 90 billion won.

Although the exact quantity has not yet been confirmed, a total of about 1,000 H100 GPUs are expected to be deployed, expanding capacity by 67 petaflops (PF).
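For a sense of where the 67PF figure may come from: it is consistent with roughly 67 teraflops per GPU, a number in the range Nvidia quotes for the H100 depending on precision and SKU, though the article does not specify. A minimal sanity check in Python:

```python
# Back-of-the-envelope check of the announced expansion. GPU count is from
# the article; the per-GPU throughput is an assumption, since the article
# does not state precision or SKU.
gpus = 1_000                # planned H100 count per the article
tflops_per_gpu = 67         # assumed ~67 TFLOPS per H100 (precision unspecified)
added_pf = gpus * tflops_per_gpu / 1_000   # teraflops -> petaflops
print(f"added capacity = {added_pf:.0f} PF")   # 67 PF, matching the article
```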

The AI Data Center, under construction in the artificial intelligence-centered industrial convergence complex in Gwangju High-Tech District 3, currently provides 9.3PF of computing resources to 121 companies through cloud services.

In October, it plans to add the newly announced H100 infrastructure and open with a total capacity of 88.5PF. That is more than three times the 25.7PF of ‘Nurion’, the supercomputer No. 5 operated by the Korea Institute of Science and Technology Information (KISTI) National Supercomputing Center.
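The “more than three times” comparison follows directly from the article’s own figures:

```python
# Ratio of the planned Gwangju capacity to KISTI's Nurion (figures from the article).
total_pf = 88.5    # planned total capacity after the H100 expansion
nurion_pf = 25.7   # KISTI supercomputer No. 5 'Nurion'
print(f"{total_pf / nurion_pf:.1f}x")   # 3.4x -> "more than three times"
```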

Through this, the project group plans to provide domestic companies with the supercomputing infrastructure needed to develop hyperscale artificial intelligence (AI) models at a global level and help them build global competitiveness.

Companies can apply to use the AI data center’s computing resources through the ‘AI Integrated Support Service Platform’.

The H100 is a product family based on the ‘Hopper’ architecture, regarded as the core of Nvidia’s inference platform for generative AI. It was unveiled at ‘GTC 2023’, the developer conference Nvidia held last month.

Compared with the current A100 family, it boasts up to 12 times faster performance. Microsoft’s cloud service ‘Azure’ is currently testing it in preview.

In addition, after the AI data center officially opens in October, the project group will provide storage services for collecting, analyzing, and processing the big data held by AI companies, as well as services supporting pre-commercialization testing after AI training.

“Since this month, we have been providing A100 GPU systems to support the development of complex AI models. After October, domestic AI companies will be able to directly experience the H100 system, which is three times faster than the A100,” said Director Kim Jun-ha. “We look forward to contributing to the development of the ecosystem.”

Reporter Hojeong Na hojeong9983@aitimes.com
