Today at GTC Paris, we’re excited to announce Training Cluster as a Service, in collaboration with NVIDIA, to make large GPU clusters easily accessible to research organizations around the world, so that they can train the foundation models of tomorrow in every domain.
Making GPU Clusters Accessible
Many gigawatt-scale GPU supercluster projects are being built to train the next generation of AI models. This can make it seem that the compute gap between the “GPU poor” and the “GPU rich” is quickly widening. But the GPUs are out there, as hyperscalers, regional clouds and AI-native cloud providers all rapidly expand their capacity.
How, then, can we connect AI compute capacity with the researchers who need it? How can we enable universities, national research labs and companies around the world to build their own models?
This is what Hugging Face and NVIDIA are tackling with Training Cluster as a Service – making GPU clusters accessible, with the flexibility to pay only for the duration of training runs.
Any of the 250,000 organizations on Hugging Face can request the GPU cluster size they need, when they need it.
How it works
To get started, you can request a GPU cluster on behalf of your organization at hf.co/training-cluster
Training Cluster as a Service integrates key components from NVIDIA and Hugging Face into a whole solution:
- NVIDIA Cloud Partners provide capacity for the latest NVIDIA accelerated computing, such as NVIDIA Hopper and NVIDIA GB200, in regional datacenters, all centralized within NVIDIA DGX Cloud
- NVIDIA DGX Cloud Lepton – announced today at GTC Paris – provides researchers with easy access to the provisioned infrastructure, and enables scheduling and monitoring of training runs
- Hugging Face developer resources and open source libraries make it easy to get training runs started.
Once your GPU cluster request is accepted, Hugging Face and NVIDIA will collaborate to source, price, provision and set up your GPU cluster according to your size, region and duration requirements.
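As a minimal sketch of that last step, a multi-node job on a provisioned cluster can be started with Hugging Face’s open-source `accelerate` library. The node count, IP address and `train.py` script below are hypothetical placeholders, not details of the service:

```shell
# Hypothetical sketch: launching a multi-node training run with Hugging Face's
# open-source `accelerate` library. Assumes 4 nodes with 8 GPUs each; the IP
# address, port and script name are placeholder values.

# On the main node (machine rank 0):
accelerate launch \
  --multi_gpu \
  --num_machines 4 \
  --num_processes 32 \
  --machine_rank 0 \
  --main_process_ip 10.0.0.1 \
  --main_process_port 29500 \
  train.py

# Repeat on each worker node, incrementing --machine_rank (1, 2, 3).
```

In practice, cluster schedulers typically template this command so each node fills in its own rank automatically.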
Clusters at Work
Advancing Rare Genetic Disease Research with TIGEM
The Telethon Institute of Genomics and Medicine – TIGEM for short – is a research center dedicated to understanding the molecular mechanisms behind rare genetic diseases and developing novel treatments. Training new AI models opens a new path to predicting the effect of pathogenic variants and to drug repositioning.
AI offers new ways to investigate the causes of rare genetic diseases and to develop treatments, but our domain requires training new models. Training Cluster as a Service made it easy to obtain the GPU capacity we needed, at the right time.
— Diego di Bernardo, Coordinator of the Genomic Medicine program at TIGEM
Advancing AI for Mathematics with Numina
Numina is a non-profit organization building open-source, open-dataset AI for reasoning in math – and winner of the 2024 AIMO Progress Prize.
We’re on track toward our objective of building open alternatives to the best closed-source models, such as DeepMind’s AlphaProof. Computing resources are our bottleneck today – with Training Cluster as a Service we will be able to reach our goal!
— Yann Fleureau, cofounder of Project Numina
Advancing Material Science with Mirror Physics
Mirror Physics is a startup creating frontier AI systems for chemistry and materials science.
Together with the MACE team, we’re working to push the limits of AI for chemistry. With Training Cluster as a Service, we’re producing high-fidelity chemical models at unprecedented scale. This is going to be a major step forward for the field.
— Sam Walton Norwood, CEO and founder at Mirror
Powering the Diversity of AI Research
Training Cluster as a Service is a new collaboration between Hugging Face and NVIDIA to make AI compute more accessible to the global community of AI researchers.
Access to large-scale, high-performance compute is essential for building the next generation of AI models across every domain and language. Training Cluster as a Service will remove barriers for researchers and companies, unlocking the ability to train the most advanced models and push the boundaries of what’s possible in AI.
— Clément Delangue, cofounder and CEO of Hugging Face
Integrating DGX Cloud Lepton with Hugging Face’s Training Cluster as a Service gives developers and researchers a seamless way to access high-performance NVIDIA GPUs across a broad network of cloud providers. This collaboration makes it easier for AI researchers and organizations to scale their AI training workloads while using familiar tools on Hugging Face.
— Alexis Bjorlin, VP of DGX Cloud at NVIDIA
Enabling AI Builders with NVIDIA
We’re excited to collaborate with NVIDIA to offer Training Cluster as a Service to Hugging Face organizations – you can get started today at hf.co/training-cluster
Today at GTC Paris, NVIDIA announced many new contributions for Hugging Face users, from agents to robots!
