Building for an Open Future

By Jeff Boudier and Simon Pagezy

Today, we’re thrilled to announce a new and deeper partnership with Google Cloud, to enable companies to build their own AI with open models.

“Google has made some of the most impactful contributions to open AI, from the OG transformer to the Gemma models. I believe in a future where all companies will build and customize their own AI. With this new strategic partnership, we’re making it easy to do on Google Cloud,” says Jeff Boudier of Hugging Face.

“Hugging Face has been the driving force enabling companies large and small all over the world to access, use, and customize what is now more than 2 million open models, and we’ve been proud to contribute over 1,000 of our models to the community,” says Ryan J. Salva, Senior Director of Product Management at Google Cloud. “Together we will make Google Cloud the best place to build with open models.”



A Partnership for Google Cloud customers

Google Cloud customers use open models from Hugging Face across many of its leading AI services. In Vertex AI, the most popular open models are ready to deploy in a couple of clicks within Model Garden. Customers who want greater control over their AI infrastructure can find a similar model library available in GKE AI/ML, or use pre-configured environments maintained by Hugging Face. Customers also run AI inference workloads with Cloud Run GPUs, enabling serverless open model deployments.

The common thread: we work with Google Cloud to build seamless experiences that fully leverage the unique capabilities of each service, to offer customers real choice.



The Gateway to Open Models – A Fast Lane for Google Cloud Customers

Usage of Hugging Face by Google Cloud customers has grown 10x over the last three years, and today this translates into tens of petabytes of model downloads every month, across billions of requests.

To make sure Google Cloud customers have the best experience building with models and datasets from Hugging Face, we’re working together to create a CDN Gateway for Hugging Face repositories, built on top of both Hugging Face’s Xet-optimized storage and data transfer technologies and Google Cloud’s advanced storage and networking capabilities.

This CDN Gateway will cache Hugging Face models and datasets directly on Google Cloud to significantly reduce download times and strengthen model supply chain robustness for Google Cloud customers. Whether you’re using Vertex AI, GKE, Cloud Run, or simply building your own stack on Compute Engine VMs, you’ll benefit from faster time-to-first-token and simplified model governance.



A partnership for Hugging Face customers

Hugging Face Inference Endpoints is the easiest way to go from model to deployment in just a couple of clicks. Through this deepened partnership, we will bring the unique capabilities and price performance of Google Cloud to Hugging Face customers, starting with Inference Endpoints. Expect more and newer instance types, as well as price drops!

We will make sure all the fruits of our product and engineering collaboration become easily available to the 10 million AI builders on Hugging Face. Going from a model page to deploying on Vertex Model Garden or GKE should only take a couple of steps. Using a private model securely hosted in an Enterprise organization on Hugging Face should be as easy as working with public models.

TPUs, Google’s custom AI accelerator chips now in their seventh generation, have been steadily improving in performance and software stack maturity. We want to make sure Hugging Face users can fully benefit from the current and next generations of TPUs when they build AI with open models. We’re excited to make TPUs as easy to use as GPUs for Hugging Face models, thanks to native support in our libraries.

Moreover, this new partnership will enable Hugging Face to leverage Google’s industry-leading security technology to make the millions of open models on Hugging Face safer. Powered by VirusTotal, Google Threat Intelligence, and Mandiant, this joint effort aims to secure models, datasets, and Spaces as you use the Hugging Face Hub every day.



Building the open future of AI together

We want to see a future where every company can build its own AI with open models and host it within its own secure infrastructure, with full control. We’re excited to make this future happen with Google Cloud. Our deep collaboration will accelerate this vision, whether you’re using Vertex AI Model Garden, Google Kubernetes Engine, Cloud Run, or Hugging Face Inference Endpoints.

Is there something you’d like us to create or improve thanks to our partnership with Google? Let us know in the comments!



