Streamlit lets you visualize datasets and build demos of Machine Learning models in a neat way. In this blog post we will walk you through hosting models and datasets and serving your Streamlit applications in Hugging Face Spaces.
Building demos for your models
You can load any Hugging Face model and build nice UIs using Streamlit. In this particular example we will recreate "Write with Transformer" together. It's an application that lets you write anything using transformers like GPT-2 and XLNet.
We won't dive deep into how the inference works. You only need to know that you have to specify some hyperparameter values for this particular application. Streamlit provides many components for you to easily implement custom applications. We will use some of them to receive input hyperparameters inside the inference code.
- The `.text_area` component creates a nice area to input sentences to be completed.
- The Streamlit `.sidebar` method lets you accept variables in a sidebar.
- The `slider` is used to take continuous values. Don't forget to give `slider` a step, otherwise it will treat the values as integers.
- You can let the end user input integer values with `number_input`.
import streamlit as st

# default text shown in the text area
default_value = "See how a modern neural network auto-completes your text 🤗 This site, built by the Hugging Face team, lets you write a whole document directly from your browser, and you can trigger the Transformer anywhere using the Tab key. It's like having a smart machine that completes your thoughts 😀 Get started by typing a custom snippet, check out the repository, or try one of the examples. Have fun!"

sent = st.text_area("Text", default_value, height = 275)

# generation hyperparameters collected in the sidebar
max_length = st.sidebar.slider("Max Length", min_value = 10, max_value=30)
temperature = st.sidebar.slider("Temperature", value = 1.0, min_value = 0.0, max_value=1.0, step=0.05)
top_k = st.sidebar.slider("Top-k", min_value = 0, max_value=5, value = 0)
top_p = st.sidebar.slider("Top-p", min_value = 0.0, max_value=1.0, step = 0.05, value = 0.9)
num_return_sequences = st.sidebar.number_input('Number of Return Sequences', min_value=1, max_value=5, value=1, step=1)
The inference code returns the generated output; you can simply print it out using `st.write`.
st.write(generated_sequences[-1])
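For reference, here is a minimal sketch of what such inference code could look like, using the `transformers` text-generation pipeline with GPT-2 and `st.cache_resource` (available in recent Streamlit versions) so the model loads only once. The function and variable names are our own illustration, not the exact code of Write with Transformer:

from transformers import pipeline

# Load the generation pipeline once and reuse it across Streamlit reruns.
@st.cache_resource
def load_generator():
    return pipeline("text-generation", model="gpt2")

generator = load_generator()

# Feed the hyperparameters collected in the sidebar into the pipeline.
outputs = generator(
    sent,
    max_length=max_length,
    temperature=temperature,
    top_k=top_k,
    top_p=top_p,
    num_return_sequences=num_return_sequences,
    do_sample=True,
)
# generated_sequences[-1] is then displayed with st.write, as shown above.
generated_sequences = [output["generated_text"] for output in outputs]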
Here’s what our replicated version looks like.

You can check out the full code here.
Showcase your Datasets and Data Visualizations
Streamlit provides many components to help you visualize datasets. It works seamlessly with 🤗 Datasets, pandas, and visualization libraries such as matplotlib, seaborn and bokeh.
Let's start by loading a dataset. A new feature in Datasets, called streaming, allows you to work immediately with very large datasets, eliminating the need to download all of the examples and load them into memory.
import pandas as pd
import streamlit as st
from datasets import load_dataset

# stream the dataset instead of downloading it entirely
dataset = load_dataset("merve/poetry", streaming=True)
df = pd.DataFrame.from_dict(dataset["train"])
If you have structured data like mine, you can simply use `st.dataframe(df)` to show your dataset. There are many Streamlit components to plot data interactively. One such component is `st.bar_chart()`, which I used to visualize the most used words in the poem contents.
st.write("Most appearing words including stopwords")
st.bar_chart(words[0:50])
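The snippet above assumes a `words` Series holding word counts. One possible way to build it from the dataset's `content` column (our own sketch, assuming the column is named `content` as in merve/poetry) is:

# Hypothetical helper: count every whitespace-separated token in the poems,
# sorted from most to least frequent (stopwords included).
words = pd.Series(" ".join(df["content"]).lower().split()).value_counts()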
If you'd like to use libraries like matplotlib, seaborn or bokeh, all you have to do is put `st.pyplot()` at the end of your plotting script.
st.write("Variety of poems for every creator")
sns.catplot(x="creator", data=df, kind="count", aspect = 4)
plt.xticks(rotation=90)
st.pyplot()
You can see the interactive bar chart, dataframe component, and hosted matplotlib and seaborn visualizations below. You can check out the code here.
Hosting your Projects in Hugging Face Spaces
You can simply drag and drop your files as shown below. Note that you need to include your additional dependencies in the requirements.txt. Also make sure the Streamlit version declared for the Space matches the one you have locally. For seamless usage, refer to the Spaces API reference.
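For example, a Space for the demos above could declare its Streamlit version in the YAML block at the top of the README.md and list the remaining libraries in requirements.txt. The version numbers and package list below are placeholders for illustration; adjust them to your own app:

# README.md front matter (sketch; keys follow the Spaces configuration reference)
sdk: streamlit
sdk_version: 1.25.0   # should match the Streamlit version you develop with locally
app_file: app.py

# requirements.txt (sketch; list only the extra libraries your app imports)
transformers
torch
datasets
pandas
matplotlib
seaborn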
There are so many components and packages you can use to demonstrate your models, datasets, and visualizations. You can get started here.