
Price of the ‘H100’ soars amid the generative AI boom… asking prices of up to 60 million won per unit


Nvidia H100 GPU (Photo = Nvidia)

The price of the H100, Nvidia’s GPU module for supercomputers, is reportedly skyrocketing amid the explosion of generative artificial intelligence (AI) services such as ‘ChatGPT’.

According to a CNBC report on the 14th (local time), H100 units sold on eBay for around $36,000 (about 47 million won) as recently as last year, but prices have recently soared to as much as $46,000 (about 60 million won), a rise of roughly $10,000 (about 13 million won) in a matter of months.

The H100 is Nvidia’s latest GPU based on the ‘Hopper’ architecture, released in October of last year. Up to 256 of them can be linked together to accelerate exascale workloads, and a dedicated Transformer Engine can handle large language models with trillions of parameters.

The recent price surge is attributed to the rapid increase in the number of companies trying to develop their own large language models as ChatGPT’s popularity exploded. Demand far exceeds supply.

In fact, although Nvidia released the H100 GPU in October of last year, it has so far supplied it mainly to high-priority customers. In particular, even after the graphics cards themselves are produced, assembling them into modules takes considerable time, so Nvidia has not yet been able to meet the growing demand.

In addition, Nvidia plans to mass-produce the DGX, a data center supercomputer that links up to eight H100 modules, around October, and to launch its own rental service at the same time. The supply shortage does not appear likely to be resolved easily.

Meanwhile, most of the GPUs used in recent supercomputer builds have been A100s based on the ‘Ampere’ architecture. An A100 module costs around $10,000 (about 13 million won) per unit. The supercomputer OpenAI used to develop ChatGPT is also equipped with this GPU.

The data center DGX that Nvidia announced earlier this year, offered as a rental service for $37,000 (about 48 million won) per month, is also infrastructure built on the A100. The DGX service equipped with the H100 is expected to cost at least four times as much.
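As a rough illustration only, the short sketch below applies the article’s “at least four times” figure to the quoted $37,000 monthly A100-based DGX rental price; the exchange rate used for the won conversion is an assumption, not a figure from Nvidia or CNBC.

# Back-of-the-envelope estimate based only on figures quoted in this article.
# Assumptions (not from Nvidia): the "at least 4x" multiplier applies directly
# to the $37,000/month A100-based DGX rental price, and 1 USD ≈ 1,300 KRW.

A100_DGX_MONTHLY_USD = 37_000   # A100-based DGX rental price per month (quoted above)
H100_MULTIPLIER = 4             # "at least four times" (quoted above)
KRW_PER_USD = 1_300             # assumed exchange rate

h100_monthly_usd = A100_DGX_MONTHLY_USD * H100_MULTIPLIER
h100_monthly_krw = h100_monthly_usd * KRW_PER_USD

print(f"Estimated H100 DGX rental: at least ${h100_monthly_usd:,}/month "
      f"(about {h100_monthly_krw / 1e6:.0f} million won)")
# -> Estimated H100 DGX rental: at least $148,000/month (about 192 million won)

Under these assumptions, the H100-based service would start at roughly $148,000 (about 190 million won) per month.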

Chan Park, cpark@aitimes.com
