TL;DR: Daggr is a new, open-source Python library for building AI workflows that connect Gradio apps, ML models, and custom functions. It automatically generates a visual canvas where you can inspect intermediate outputs, rerun individual steps, and manage state for complex pipelines, all in a few lines of Python code!
Background
If you’ve built AI applications that combine multiple models or processing steps, you know the pain: chaining API calls, debugging pipelines, and losing track of intermediate results. When something goes wrong in step 5 of a 10-step workflow, you often have to re-run everything just to see what happened.
Most developers either build fragile scripts that are hard to debug or turn to heavy orchestration platforms designed for production pipelines, not rapid experimentation.
We have been working on Daggr to solve problems we kept running into when building AI demos and workflows:
Visualize your code flow: Unlike node-based GUI editors, where you drag and connect nodes visually, Daggr takes a code-first approach. You define workflows in Python, and a visual canvas is generated automatically. This means you get the best of both worlds: version-controllable code and visual inspection of intermediate outputs.
Inspect and Rerun Any Step: The visual canvas is not just for show. You can inspect the output of any node, modify inputs, and rerun individual steps without executing the entire pipeline. This is invaluable when you’re debugging a 10-step workflow and only step 7 is misbehaving. You can even provide “backup nodes” – replacing one model or Space with another – to build resilient workflows.
First-Class Gradio Integration: Since Daggr is built by the Gradio team, it works seamlessly with Gradio Spaces. Point to any public (or private) Space and you can use it as a node in your workflow. No adapters, no wrappers: just reference the Space name and API endpoint.
State Persistence: Daggr automatically saves your workflow state (input values, cached results, canvas position) so you can pick up where you left off. Use “sheets” to maintain multiple workspaces within the same app.
Getting Started
Install daggr with pip or uv; it only requires Python 3.10 or higher:
pip install daggr
uv pip install daggr
Here’s a simple example that generates an image and removes its background. Check out the Space’s API reference at the bottom of the Space page to see which inputs it takes and which outputs it yields. In this example, the Space returns both the original image and the edited image, so we surface only the edited image.
import random
import gradio as gr
from daggr import GradioNode, Graph
image_gen = GradioNode(
    "hf-applications/Z-Image-Turbo",
    api_name="/generate_image",
    inputs={
        "prompt": gr.Textbox(
            label="Prompt",
            value="A cheetah sprints across the grassy savanna.",
            lines=3,
        ),
        "height": 1024,
        "width": 1024,
        "seed": random.random,
    },
    outputs={
        "image": gr.Image(label="Generated Image"),
    },
)

bg_remover = GradioNode(
    "hf-applications/background-removal",
    api_name="/image",
    inputs={
        "image": image_gen.image,
    },
    outputs={
        "original_image": None,
        "final_image": gr.Image(label="Final Image"),
    },
)

graph = Graph(
    name="Transparent Background Generator",
    nodes=[image_gen, bg_remover],
)
graph.launch()
That’s it. Run this script and you get a visual canvas automatically launched and served on port 7860, as well as a shareable live link, showing both nodes connected, with inputs you can modify and outputs you can inspect at each step.
Node Types
Daggr supports three types of nodes:
GradioNode calls a Gradio Space API endpoint or a locally served Gradio app. When you pass run_locally=True, Daggr automatically clones the Space, creates an isolated virtual environment, and launches the app. If local execution fails, it gracefully falls back to the remote API.
node = GradioNode(
    "username/space-name",
    api_name="/predict",
    inputs={"text": gr.Textbox(label="Input")},
    outputs={"result": gr.Textbox(label="Output")},
)
node = GradioNode(
    "hf-applications/background-removal",
    api_name="/image",
    run_locally=True,
    inputs={"image": gr.Image(label="Input")},
    outputs={"final_image": gr.Image(label="Output")},
)
FnNode — runs a custom Python function:
def process(text: str) -> str:
    return text.upper()

node = FnNode(
    fn=process,
    inputs={"text": gr.Textbox(label="Input")},
    outputs={"result": gr.Textbox(label="Output")},
)
InferenceNode — calls a model via Hugging Face Inference Providers:
node = InferenceNode(
    model="moonshotai/Kimi-K2.5:novita",
    inputs={"prompt": gr.Textbox(label="Prompt")},
    outputs={"response": gr.Textbox(label="Response")},
)
Sharing Your Workflows
Generate a public URL with Gradio’s tunneling:
graph.launch(share=True)
For permanent hosting, deploy on Hugging Face Spaces using the Gradio SDK: just add daggr to your requirements.txt.
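As a rough sketch of what such a deployment can look like (the Space contents below are illustrative, reusing the background-removal Space from earlier, not something prescribed by Daggr), the Space only needs an app.py that builds and launches a graph, plus a requirements.txt containing the single line daggr:

# app.py - a minimal sketch of a Daggr workflow deployed as a Gradio-SDK Space.
# The accompanying requirements.txt would contain a single line: daggr
import gradio as gr
from daggr import GradioNode, Graph

bg_remover = GradioNode(
    "hf-applications/background-removal",  # any public (or private) Space works here
    api_name="/image",
    inputs={"image": gr.Image(label="Input")},
    outputs={"final_image": gr.Image(label="Output")},
)

graph = Graph(name="Background Remover", nodes=[bg_remover])
graph.launch()  # Spaces run app.py; the canvas becomes the Space's UI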
End-to-End Example with Different Nodes
We’ll now build an app that takes in an image and generates a 3D asset. This demo runs on daggr 0.4.3. Here are the steps:
- Take an image and remove the background: For this, we will clone the BiRefNet Space and run it locally.
- Downscale the image for efficiency: We’ll write a simple function for this with FnNode.
- Generate an image in 3D-asset style for better results: We’ll use InferenceNode with the FLUX.2-klein-4B model on Inference Providers.
- Pass the output image to a 3D generator: We’ll send the output image to the TRELLIS.2 Space hosted on Spaces.
Spaces that are run locally might move models to CUDA (with .to("cuda")) or use ZeroGPU inside the application file. To disable this behavior and run the model on CPU (useful if you have a machine without an NVIDIA GPU), duplicate the Space you want to use, remove those calls, and clone your duplicate instead.
The resulting graph looks like the one below.
Let’s write the first step, the background remover. We’ll clone this Space and run it locally. The Space runs on CPU and takes ~13 seconds per run. You can swap in this app if you have an NVIDIA GPU.
import gradio as gr

from daggr import FnNode, GradioNode, InferenceNode, Graph

background_remover = GradioNode(
    "merve/background-removal",
    api_name="/image",
    run_locally=True,
    inputs={
        "image": gr.Image(),
    },
    outputs={
        "original_image": None,
        "final_image": gr.Image(label="Final Image"),
    },
)
For the second step, we need to write a helper function that downscales the image and pass it to FnNode.
import uuid
from typing import Any

from PIL import Image

from daggr.state import get_daggr_files_dir


def downscale_image_to_file(image: Any, scale: float = 0.25) -> str | None:
    # Open the image, clamp the scale factor, resize, and write the result
    # to Daggr's files directory so downstream nodes can read it by path.
    pil_img = Image.open(image)
    scale_f = max(0.05, min(1.0, float(scale)))
    w, h = pil_img.size
    new_w = max(1, int(w * scale_f))
    new_h = max(1, int(h * scale_f))
    resized = pil_img.resize((new_w, new_h), resample=Image.LANCZOS)
    out_path = get_daggr_files_dir() / f"{uuid.uuid4()}.png"
    resized.save(out_path)
    return str(out_path)
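Before wiring the helper into the graph, you can sanity-check it on its own; this is just an illustrative snippet that assumes some local image such as test.png exists:

# Illustrative standalone check of the helper; "test.png" is a placeholder path.
out_file = downscale_image_to_file("test.png", scale=0.5)
print(out_file)  # path to a half-resolution PNG inside Daggr's files directory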
We can now pass the function in to initialize the FnNode.
downscaler = FnNode(
    downscale_image_to_file,
    name="Downscale image for Inference",
    inputs={
        "image": background_remover.final_image,
        "scale": gr.Slider(
            label="Downscale factor",
            minimum=0.25,
            maximum=0.75,
            step=0.05,
            value=0.25,
        ),
    },
    outputs={
        "image": gr.Image(label="Downscaled Image", type="filepath"),
    },
)
We’ll now write the InferenceNode with the Flux model.
flux_enhancer = InferenceNode(
    model="black-forest-labs/FLUX.2-klein-4B:fal-ai",
    inputs={
        "image": downscaler.image,
        "prompt": gr.Textbox(
            label="prompt",
            value="Transform this into a clean 3D asset render",
            lines=3,
        ),
    },
    outputs={
        "image": gr.Image(label="3D-Ready Enhanced Image"),
    },
)
When deploying apps with InferenceNode to Hugging Face Spaces, use a fine-grained Hugging Face access token with only the “Make calls to Inference Providers” permission enabled.
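Locally, one way to make such a token available (a sketch using the standard huggingface_hub login flow; HF_TOKEN here is an environment variable you export yourself) looks like this:

import os

from huggingface_hub import login

# Authenticate with a fine-grained token that only has the
# "Make calls to Inference Providers" permission enabled.
login(token=os.environ["HF_TOKEN"])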
The last node is the 3D generation step, which queries the TRELLIS.2 Space on Hugging Face.
trellis_3d = GradioNode(
    "microsoft/TRELLIS.2",
    api_name="/image_to_3d",
    inputs={
        "image": flux_enhancer.image,
        "ss_guidance_strength": 7.5,
        "ss_sampling_steps": 12,
    },
    outputs={
        "glb": gr.HTML(label="3D Asset (GLB preview)"),
    },
)
Chaining them together and launching the app is as simple as this:
graph = Graph(
    name="Image to 3D Asset Pipeline",
    nodes=[background_remover, downscaler, flux_enhancer, trellis_3d],
)

if __name__ == "__main__":
    graph.launch()
You can find the complete example running in this Space. To run it locally, you only need to grab app.py, install the requirements, and log in to the Hugging Face Hub.
Next Steps
Daggr is in beta and intentionally lightweight. APIs may change between versions, and while we persist workflow state locally, data loss is possible during updates. If you have feature requests or find bugs, please open an issue here. We’re looking forward to your feedback! Share your daggr workflows on socials and tag Gradio for a chance to be featured. Check out all of the featured works here.


