
All hype aside, it's hard to deny the profound impact that AI is having on society and businesses. From startups to enterprises to the public sector, every customer we talk to is busy experimenting with large language models and generative AI, identifying the most promising use cases, and steadily bringing them to production.
The #1 comment we get from customers is that no single model will rule them all. They understand the value of building the best model for each use case to maximize its relevance to company data while optimizing the compute budget. Of course, privacy and intellectual property are also top concerns, and customers want to make sure they maintain complete control.
As AI finds its way into every department and business unit, customers also realize the need to train and deploy many different models. In a large multinational organization, this could mean running hundreds, even thousands, of models at any given time. Given the pace of AI innovation, newer and higher-performance model architectures will also lead customers to replace their models faster than expected, reinforcing the need to train and deploy new models in production quickly and seamlessly.
All of this can only happen with standardization and automation. Organizations cannot afford to build models, tools, and infrastructure from scratch for each new project. Fortunately, the last few years have seen some very positive developments:
- Model standardization: the Transformer architecture is now the de facto standard for Deep Learning applications like Natural Language Processing, Computer Vision, Audio, Speech, and more. It's now easier to build tools and workflows that perform well across many use cases.
- Pre-trained models: hundreds of thousands of pre-trained models are just a click away. You can discover and test them directly on Hugging Face and quickly shortlist the most promising ones for your projects.
- Open-source libraries: the Hugging Face libraries let you download pre-trained models with a single line of code (see the sketch after this list), and you can start experimenting with your data in minutes. From training to deployment to hardware optimization, customers can rely on a consistent set of community-driven tools that work the same everywhere, from their laptops to their production environment.
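
To make that "single line of code" claim concrete, here is a minimal sketch using the transformers pipeline API; the checkpoint name is just an illustrative example, and any compatible model from the Hub would work the same way:

```python
# Minimal sketch: downloading and running a pre-trained model with the
# transformers library. The checkpoint name below is illustrative only.
from transformers import pipeline

# This one line pulls the model and tokenizer from the Hugging Face Hub
# (cached locally after the first call).
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Run inference on a sample sentence.
print(classifier("Open-source tooling makes experimentation fast."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```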
In addition, our cloud partnerships let customers use Hugging Face models and libraries at any scale without worrying about provisioning infrastructure and building technical environments. This makes it much easier to get high-quality models out the door at a rapid pace without having to reinvent the wheel.
Following up on our collaborations with AWS on Amazon SageMaker and Microsoft on Azure Machine Learning, we're thrilled to work with none other than IBM on their new AI studio, watsonx.ai. watsonx.ai is the next-generation enterprise studio for AI builders to train, validate, tune, and deploy both traditional ML and new generative AI capabilities, powered by foundation models.
IBM decided that open source should be at the core of watsonx.ai. We couldn't agree more! Built on Red Hat OpenShift, watsonx.ai will be available both in the cloud and on-premises. This is great news for customers who cannot use the cloud because of strict compliance rules, or who prefer to work with their confidential data on their own infrastructure. Until now, these customers often had to build their own in-house ML platform. They now have an open-source, off-the-shelf alternative that they can deploy and manage using standard DevOps tools.
Under the hood, watsonx.ai also integrates many Hugging Face open-source libraries, such as transformers (100k+ GitHub stars!), accelerate, and peft, as well as our Text Generation Inference server, to name a few. We're pleased to partner with IBM and to collaborate on the watsonx AI and data platform so that Hugging Face customers can work natively with their Hugging Face models and datasets to multiply the impact of AI across businesses.
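
To give a flavor of how two of those libraries fit together, here is a generic sketch that wraps a transformers model with a LoRA adapter from peft. It is not how watsonx.ai wires these libraries internally; the base model and hyperparameters are placeholders:

```python
# Generic illustration of combining transformers with peft (LoRA).
# Not watsonx.ai internals; model name and hyperparameters are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "gpt2"  # illustrative checkpoint

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# Configure a small LoRA adapter: only the low-rank matrices are trained,
# which keeps fine-tuning cheap compared to updating all of the weights.
lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # e.g. trainable params are a small % of total
```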
In addition, IBM has developed its own collection of Large Language Models, and we will work with their team to open-source them and make them easily available on the Hugging Face Hub.
To learn more, watch Dr. Darío Gil, SVP and Director of IBM Research, and our CEO Clem Delangue announce our collaboration, walk through the watsonx platform, and present IBM's suite of Large Language Models in their IBM THINK 2023 keynote.
Our joint team is hard at work as we speak. We can't wait to show you what we've been up to! One of the most iconic technology companies joining forces with an up-and-coming startup to tackle AI in the enterprise… who would have thought?
Fascinating times. Stay tuned!
