Cross-posted from the Gradio blog.
The Hugging Face Model Hub hosts more than 10,000 machine learning models submitted by users. You'll find all kinds of natural language processing models that, for example, translate between Finnish and English or recognize Chinese speech. More recently, the Hub has expanded to include models for image classification and audio processing as well.
Hugging Face has always worked to make models accessible and easy to use. The transformers library makes it possible to load a model in just a few lines of code. Once a model is loaded, it can be used to make predictions on new data programmatically. But it's not just programmers who are using machine learning models! An increasingly common scenario in machine learning is demoing models to interdisciplinary teams or letting non-programmers use models (to help identify biases, failure points, etc.).
The Gradio library lets machine learning developers create demos and GUIs from machine learning models very easily, and share them for free with collaborators as easily as sharing a Google Docs link. Now, we're excited to share that the Gradio 2.0 library lets you load and use almost any Hugging Face model with a GUI in just one line of code. Here's an example:
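A minimal sketch of what that one line looks like, assuming the Gradio 2.x API where `Interface.load` with a `"huggingface/"` prefix pulls a model's demo from the Model Hub (the model name here is just an illustration):

```python
import gradio as gr

# The "huggingface/" prefix tells Gradio to fetch the model from the
# Hugging Face Model Hub and build an interface for it automatically.
gr.Interface.load("huggingface/gpt2").launch()
```

Running this opens a browser-based demo where anyone can type a prompt and see the model's output, with no further code required.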
By default, this uses Hugging Face's hosted Inference API (you can supply your own API key or use public access without one). Alternatively, you can run pip install transformers and compute the model's predictions locally if you'd like.
Want to customize the demo? You can override any of the default parameters of the Interface class by passing in your own:
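For instance, a sketch of a customized demo, assuming Gradio 2.x's `gr.inputs.Textbox` component and the `title`/`description` parameters of `Interface` (the model and labels here are illustrative):

```python
import gradio as gr

# Override the auto-generated input component and add a title/description
gr.Interface.load(
    "huggingface/gpt2",
    inputs=gr.inputs.Textbox(lines=5, label="Input Text"),
    title="Text Generation Demo",
    description="Enter a prompt and the model will continue it.",
).launch()
```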
But wait, there's more! With 10,000 models already on the Model Hub, we see models not just as standalone pieces of code, but as Lego pieces that can be composed and mixed to create more sophisticated applications and demos.
For example, Gradio lets you load multiple models in parallel (imagine you want to compare four different text generation models from Hugging Face to see which one is best for your use case):
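A sketch of the parallel case, assuming Gradio 2.x's `gr.Parallel`, which takes several loaded interfaces and shows their outputs side by side for the same input (the four model names are just examples of text generation models on the Hub):

```python
import gradio as gr

# One shared input box; each model's generation appears side by side
gr.Parallel(
    gr.Interface.load("huggingface/distilgpt2"),
    gr.Interface.load("huggingface/gpt2"),
    gr.Interface.load("huggingface/EleutherAI/gpt-neo-1.3B"),
    gr.Interface.load("huggingface/EleutherAI/gpt-neo-2.7B"),
).launch()
</imports>
```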
Or put your models in series. This makes it easy to build complex applications out of multiple machine learning models. For example, here we can build an application to translate and summarize Finnish news articles in three lines of code:
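A sketch of the series case, assuming Gradio 2.x's `gr.Series`, which pipes the output of each interface into the next (here, a Finnish-to-English translation model feeding an English summarization model; both model names are real Hub models used for illustration):

```python
import gradio as gr

# Step 1 translates Finnish to English; step 2 summarizes the English text
gr.Series(
    gr.Interface.load("huggingface/Helsinki-NLP/opus-mt-fi-en"),
    gr.Interface.load("huggingface/facebook/bart-large-cnn"),
).launch()
```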
You can even mix multiple models in series compared to each other in parallel (we'll let you try that yourself!). To try any of this out, just install Gradio (pip install gradio) and pick a Hugging Face model you want to try. Start building with Gradio and Hugging Face 🧱⛏️