Intermediate
You have trained a machine learning model. It runs beautifully in a Jupyter notebook, produces accurate predictions, and you are proud of it. But when your product manager asks “can I try it?” the answer is an awkward “let me email you the results.” Building a proper web interface used to mean learning Flask, writing HTML forms, handling AJAX requests, and deploying a server. Gradio eliminates all of that. You describe your function’s inputs and outputs, and Gradio generates the UI automatically.
Gradio is an open-source Python library that wraps any Python function in a web interface within minutes. You define what goes in (text, images, audio, dataframes) and what comes out, and Gradio creates the appropriate input and output components automatically. It also generates a shareable public URL with one extra parameter, so you can send a link to anyone in the world without touching server configuration.
This article covers the core Gradio concepts: creating basic interfaces, using the Blocks API for custom layouts, handling images and audio, building tabbed multi-model demos, and deploying to Hugging Face Spaces. By the end, you will have a fully functional image classifier interface and a text processing pipeline — both shareable via a public URL.
Gradio in Python: Quick Example
The simplest Gradio app wraps a single Python function. Here is a text sentiment classifier that runs in under 10 lines:
# quick_gradio.py
import gradio as gr

def analyze_text(text: str) -> str:
    words = text.lower().split()
    positives = {"good", "great", "excellent", "happy", "love", "amazing"}
    negatives = {"bad", "terrible", "awful", "hate", "horrible", "poor"}
    pos = sum(1 for w in words if w in positives)
    neg = sum(1 for w in words if w in negatives)
    if pos > neg:
        return f"Positive ({pos} positive words found)"
    elif neg > pos:
        return f"Negative ({neg} negative words found)"
    return "Neutral (no strong signals)"

demo = gr.Interface(fn=analyze_text, inputs="text", outputs="text",
                    title="Simple Sentiment Analyser")
demo.launch()
To run it:
pip install gradio
python quick_gradio.py
Output (terminal + browser):
Running on local URL: http://127.0.0.1:7860
A browser tab opens with a text input field, an "Analyze" button,
and a text output box. Type "This is great!" and click -- you get:
"Positive (1 positive words found)"
This is the core Gradio pattern: any Python function in, web UI out. The inputs="text" and outputs="text" shorthand auto-creates a text area and a text output box. The sections below cover more specific component types, the Blocks API for custom layouts, and real model integration.
What Is Gradio and How Does It Differ from Streamlit?
Gradio is designed specifically around the input-process-output pattern. You give it a function, describe what types flow in and out, and it renders a form-based UI. Streamlit takes a different approach — it renders a script line by line, making it better for dashboards with complex layouts and state. Gradio excels at model demos.
| Feature | Gradio | Streamlit |
|---|---|---|
| Best use case | Model demos, input-output UIs | Dashboards, data exploration |
| UI model | Function-centric (inputs -> outputs) | Script-centric (top to bottom) |
| Sharing | Built-in share=True public URL | Requires deployment |
| Hugging Face integration | First-class (Spaces) | Supported but not native |
| Custom layouts | Blocks API | Columns, containers |
| State management | State components | st.session_state |
The share=True parameter is Gradio’s killer feature for demos — it creates a temporary public URL (valid for 72 hours) that tunnels through Gradio’s servers, so a model running on your laptop becomes accessible worldwide without any port forwarding or cloud deployment.
Interface Components: Inputs and Outputs
Gradio has specialised component classes for different data types. Using the full component class (instead of the shorthand string) gives you fine-grained control over labels, placeholders, file types, and styling.
Text and Number Components
# text_number_demo.py
import gradio as gr

def process(name: str, age: int, bio: str) -> str:
    return f"Hello {name}, age {age}.\nBio: {bio[:100]}..."

demo = gr.Interface(
    fn=process,
    inputs=[
        gr.Textbox(label="Your Name", placeholder="Enter full name"),
        gr.Number(label="Age", value=25, minimum=1, maximum=120),
        gr.Textbox(label="Short Bio", lines=4, placeholder="Tell us about yourself..."),
    ],
    outputs=gr.Textbox(label="Result", lines=3),
    title="User Profile Demo",
    description="Fill in the form and click Submit.",
    examples=[["Alice", 30, "Python developer who loves data science."]],
)
demo.launch()
Output (browser):
Form with name field, number stepper, and multi-line bio area
A "Result" output box below
An "Examples" row at the bottom -- click any example to auto-fill the form
The examples parameter is particularly useful for demos: it pre-populates the form with sample inputs so visitors can see your model working immediately without having to type anything. Each item in the list corresponds to one set of inputs in the same order as the inputs list.
Image and Audio Components
Gradio shines when your function processes images or audio. The gr.Image() component accepts uploads, webcam captures, or URLs and delivers a NumPy array to your function automatically.
# image_demo.py
import gradio as gr
import numpy as np
from PIL import Image

def apply_filter(image: np.ndarray, filter_type: str) -> np.ndarray:
    """Apply a simple image filter and return the result."""
    img = Image.fromarray(image)
    if filter_type == "Grayscale":
        img = img.convert("L").convert("RGB")
    elif filter_type == "Flip Horizontal":
        img = img.transpose(Image.FLIP_LEFT_RIGHT)
    elif filter_type == "Rotate 90":
        img = img.rotate(90, expand=True)
    elif filter_type == "Invert":
        arr = np.array(img)
        img = Image.fromarray(255 - arr)
    return np.array(img)

demo = gr.Interface(
    fn=apply_filter,
    inputs=[
        gr.Image(label="Input Image", type="numpy"),
        gr.Dropdown(
            choices=["Grayscale", "Flip Horizontal", "Rotate 90", "Invert"],
            label="Filter",
            value="Grayscale"
        ),
    ],
    outputs=gr.Image(label="Result"),
    title="Image Filter Demo",
)
demo.launch()
Output (browser):
Image upload zone (drag-and-drop or click to browse)
Dropdown with 4 filter options
Result image panel that updates after you click Submit
Upload any JPEG or PNG to see the filter applied
The type="numpy" parameter tells Gradio to convert the uploaded image to a NumPy array before passing it to your function. The alternative is type="pil" for a PIL Image object or type="filepath" to get the saved path. Always specify the type explicitly — the default may change between Gradio versions.
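To make the contract concrete, here is a minimal sketch (gradio itself is not needed to run it) of what your function receives with type="numpy": an (H, W, 3) uint8 RGB array that you can manipulate with ordinary NumPy operations. The invert function is a hypothetical stand-in for your own processing code.

```python
import numpy as np

def invert(image: np.ndarray) -> np.ndarray:
    # With type="numpy", Gradio delivers an (H, W, 3) uint8 array like this one
    return 255 - image  # per-channel inversion stays within uint8 range

# Simulate a tiny 2x2 all-black "image" as Gradio would pass it in
sample = np.zeros((2, 2, 3), dtype=np.uint8)
result = invert(sample)
print(result.shape, result.dtype)  # (2, 2, 3) uint8
```

Whatever array your function returns is rendered back into the output gr.Image() the same way.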
The Blocks API: Custom Layouts
The gr.Interface() shortcut is great for single-function demos, but when you need multiple functions, tabs, or custom column layouts, use the gr.Blocks() API. Blocks gives you full control over the layout with a context manager syntax.
# blocks_demo.py
import gradio as gr

def uppercase(text: str) -> str:
    return text.upper()

def count_words(text: str) -> str:
    words = text.split()
    return f"Words: {len(words)}, Characters: {len(text)}, Sentences: {text.count('.')}"

def reverse_text(text: str) -> str:
    return text[::-1]

with gr.Blocks(title="Text Tools") as demo:
    gr.Markdown("# Text Processing Toolkit")
    gr.Markdown("Three text utilities in one app.")
    with gr.Tab("Uppercase"):
        inp1 = gr.Textbox(label="Input", placeholder="Type something...")
        out1 = gr.Textbox(label="Uppercase Output")
        btn1 = gr.Button("Convert")
        btn1.click(fn=uppercase, inputs=inp1, outputs=out1)
    with gr.Tab("Word Count"):
        inp2 = gr.Textbox(label="Input", lines=5)
        out2 = gr.Textbox(label="Stats")
        btn2 = gr.Button("Count")
        btn2.click(fn=count_words, inputs=inp2, outputs=out2)
    with gr.Tab("Reverse"):
        with gr.Row():
            inp3 = gr.Textbox(label="Original Text")
            out3 = gr.Textbox(label="Reversed Text")
        btn3 = gr.Button("Reverse")
        btn3.click(fn=reverse_text, inputs=inp3, outputs=out3)

demo.launch()
Output (browser):
App with three tabs: Uppercase, Word Count, Reverse
Each tab has its own input, output, and button
Reverse tab uses a side-by-side Row layout
Clicking the button triggers only that tab's function -- no full page re-run
In Blocks, the trigger is explicit: btn.click(fn=..., inputs=..., outputs=...). This event-driven model is more efficient than Streamlit’s full-script-rerun approach — only the specific function connected to the clicked button executes. This matters for slow model inference where you want button 1 to trigger model A and button 2 to trigger model B independently.
Integrating a Hugging Face Model
Gradio was designed to work seamlessly with the Hugging Face ecosystem. The transformers pipeline API produces a callable that Gradio can wrap directly. Here is a complete sentiment analysis demo using a real pre-trained model.
# hf_sentiment_demo.py
import gradio as gr
from transformers import pipeline

# Load once at startup -- not inside the function
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english"
)

def classify_sentiment(text: str) -> dict:
    """Returns a {label: confidence} dict for the gr.Label component."""
    if not text.strip():
        return {"No input": 1.0}
    result = classifier(text)[0]
    # gr.Label expects a {class_name: confidence} mapping
    opposite = "NEGATIVE" if result["label"] == "POSITIVE" else "POSITIVE"
    return {result["label"]: result["score"],
            opposite: 1 - result["score"]}

demo = gr.Interface(
    fn=classify_sentiment,
    inputs=gr.Textbox(label="Enter a sentence", placeholder="This movie was fantastic!"),
    outputs=gr.Label(num_top_classes=2, label="Sentiment"),
    title="Sentiment Analysis",
    description="Powered by DistilBERT fine-tuned on SST-2.",
    examples=[
        ["I absolutely love this product, it works great!"],
        ["This was a waste of money, terrible quality."],
        ["The package arrived on time."],
    ],
    allow_flagging="never",
)
demo.launch(share=False)  # Set share=True for a public URL
Output (browser):
Text input with example sentences below
Label component showing a bar chart of POSITIVE vs NEGATIVE confidence
Clicking an example auto-fills the input
First run downloads the DistilBERT model (~260 MB) from Hugging Face Hub
Loading the model outside the function is critical for performance. If classifier = pipeline(...) were inside classify_sentiment(), the 260 MB model would be reloaded from disk on every single button click (and re-downloaded if the local cache were missing). Loading at module startup means it happens once and stays in memory for all subsequent requests.
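The difference is easy to demonstrate without downloading anything. In this sketch, load_model is a hypothetical stand-in for pipeline(), and a counter records how often the expensive load runs:

```python
# load_once_sketch.py -- illustrates the module-level loading pattern.
# load_model is a hypothetical stub standing in for pipeline().
LOAD_COUNT = 0

def load_model():
    global LOAD_COUNT
    LOAD_COUNT += 1          # in real code: deserialise hundreds of MB of weights
    return lambda text: {"label": "POSITIVE", "score": 0.99}

classifier = load_model()    # runs once, when the script starts

def classify(text: str) -> dict:
    return classifier(text)  # every click reuses the in-memory model

classify("first click")
classify("second click")
print(LOAD_COUNT)  # 1 -- the expensive load happened exactly once
```

Had load_model() been called inside classify(), LOAD_COUNT would equal the number of clicks instead.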
Real-Life Example: Multi-Model Text Analysis Dashboard
This project combines three NLP tasks in a single Blocks app: language detection, readability scoring, and keyword extraction — all without requiring a GPU or paid API.
# text_analysis_dashboard.py
import gradio as gr
import re
from collections import Counter

def detect_language_heuristic(text: str) -> str:
    """Simple heuristic language detection (no external library needed)."""
    common = {
        "en": {"the", "and", "is", "in", "it", "of", "to", "a"},
        "es": {"el", "la", "los", "en", "es", "de", "que", "un"},
        "fr": {"le", "la", "les", "un", "une", "est", "en", "de"},
        "de": {"der", "die", "das", "ist", "in", "und", "ein", "zu"},
    }
    words = set(text.lower().split())
    scores = {lang: len(words & vocab) for lang, vocab in common.items()}
    best = max(scores, key=scores.get)
    return f"Detected: {best.upper()} (score: {scores[best]})"

def readability_score(text: str) -> str:
    """Flesch Reading Ease approximation."""
    sentences = max(len(re.split(r'[.!?]+', text)), 1)
    words = text.split()
    word_count = max(len(words), 1)
    syllables = sum(max(len(re.findall(r'[aeiouAEIOU]', w)), 1) for w in words)
    score = 206.835 - 1.015 * (word_count / sentences) - 84.6 * (syllables / word_count)
    score = max(0, min(100, score))
    if score >= 70:
        level = "Easy (suitable for most readers)"
    elif score >= 50:
        level = "Moderate (some education required)"
    else:
        level = "Difficult (academic or technical audience)"
    return f"Flesch Score: {score:.1f}/100 -- {level}"

def extract_keywords(text: str, top_n: int = 10) -> str:
    """Extract top keywords by frequency (excluding stopwords)."""
    stopwords = {"the", "a", "an", "is", "in", "it", "of", "to", "and", "or", "for",
                 "was", "be", "with", "as", "at", "by", "from", "on", "are", "this"}
    words = re.findall(r'\b[a-zA-Z]{4,}\b', text.lower())
    filtered = [w for w in words if w not in stopwords]
    counts = Counter(filtered).most_common(top_n)
    return "\n".join(f"{word}: {count}" for word, count in counts) or "No keywords found"

with gr.Blocks(title="Text Analysis Dashboard", theme=gr.themes.Soft()) as demo:
    gr.Markdown("## Text Analysis Dashboard")
    gr.Markdown("Paste any text to analyse its language, readability, and keywords.")
    with gr.Row():
        text_input = gr.Textbox(
            label="Input Text",
            lines=8,
            placeholder="Paste any text here...",
            scale=2
        )
        with gr.Column(scale=1):
            lang_output = gr.Textbox(label="Language")
            read_output = gr.Textbox(label="Readability")
    keywords_output = gr.Textbox(label="Top Keywords", lines=6)
    analyse_btn = gr.Button("Analyse", variant="primary")

    def analyse_all(text):
        return detect_language_heuristic(text), readability_score(text), extract_keywords(text)

    analyse_btn.click(
        fn=analyse_all,
        inputs=text_input,
        outputs=[lang_output, read_output, keywords_output]
    )
    gr.Examples(
        examples=["Python is a high-level programming language known for its clear syntax."],
        inputs=text_input
    )

demo.launch()
Output (browser):
Large text input on the left, two output boxes stacked on the right
Full-width keywords box below
Primary blue "Analyse" button
Example text pre-loaded at the bottom
Paste "Python is a high-level programming language..." and click:
Language: Detected: EN (score: 3)
Readability: Flesch Score: 54.2/100 -- Moderate
Keywords: language: 2 / python: 1 / high: 1 / level: 1
The key pattern here is a single function analyse_all() that calls the three individual functions and returns a tuple. Gradio maps each element of the return tuple to the corresponding output component in the outputs list. This avoids three separate button clicks — one analysis run populates all three panels simultaneously.
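The positional mapping works with plain Python, so it can be checked without launching anything. This hypothetical analyse function mirrors the pattern: one call, three return values, each destined for a different output component.

```python
# Sketch of the tuple-return pattern: Gradio matches each returned
# element to the matching entry in outputs=[...] by position.
def analyse(text: str):
    return text.upper(), len(text.split()), text[::-1]

upper, n_words, reversed_text = analyse("hello world")
print(upper, n_words, reversed_text)  # HELLO WORLD 2 dlrow olleh
```

Returning a list works the same way; what matters is that the number of returned values equals the length of the outputs list.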
Frequently Asked Questions
How do I share a Gradio app publicly?
Pass share=True to demo.launch(). Gradio creates a temporary tunnel through its servers and prints a URL like https://abc123.gradio.live. This URL is valid for 72 hours and accessible from any device with internet access. For a permanent public URL, deploy to Hugging Face Spaces: create a new Space, select “Gradio” as the SDK, upload your script and requirements.txt, and the Space handles hosting for free.
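For reference, a minimal Gradio Space typically needs just two uploaded files (the dependency list below is an assumption based on the sentiment demo in this article; pin the versions you actually tested):

```text
my-space/
├── app.py            # your Gradio script; Spaces runs it and serves `demo`
└── requirements.txt  # one dependency per line, e.g.
                      #   gradio
                      #   transformers
                      #   torch
```

Creating the Space through the web UI also generates a README.md with the SDK metadata, so no extra configuration is needed.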
Can I add authentication to a Gradio app?
Yes. Pass auth=("username", "password") to demo.launch() to enable HTTP basic auth, or pass a list of tuples for multiple users: auth=[("alice", "pass1"), ("bob", "pass2")]. For OAuth or token-based auth, deploy to Hugging Face Spaces, which supports organisation-level access restrictions through the Spaces settings panel without any code changes.
Can Gradio handle asynchronous functions?
Yes — Gradio supports async def functions natively. If your function calls an async API (like aiohttp or an async database client), define it as async def my_fn(text): ... and Gradio runs it in its event loop automatically. This is useful for long-running model inference where you want the UI to remain responsive while waiting for the result.
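As a minimal sketch, here is the shape of such a handler; classify_async is a hypothetical name, and the sleep stands in for a real awaitable call. Gradio awaits it for you; outside Gradio you drive it manually with asyncio.run:

```python
import asyncio

# Hypothetical async handler; Gradio would await this in its own event loop.
async def classify_async(text: str) -> str:
    await asyncio.sleep(0.01)  # stand-in for an aiohttp request or async DB query
    return f"processed: {text}"

# Outside Gradio, run it manually to see the result:
print(asyncio.run(classify_async("hello")))  # processed: hello
```

Wiring it up is identical to the synchronous case: gr.Interface(fn=classify_async, ...).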
How do I handle file downloads in Gradio?
Return a file path string from your function and use gr.File() as the output component. Gradio serves the file and gives the user a download link. For example, if your function generates a PDF report and saves it to a temp file, return the temp file path and the output component handles the rest. Use Python’s tempfile module to create the temp path: with tempfile.NamedTemporaryFile(suffix=".pdf", delete=False) as f: ....
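A minimal sketch of that pattern, using a plain-text report so it runs anywhere (a PDF generator would follow the same path-returning shape); generate_report is a hypothetical name:

```python
import tempfile

# Write a report to a temp file and return its path.
# Wiring outputs=gr.File() to this function makes the file downloadable.
def generate_report(text: str) -> str:
    with tempfile.NamedTemporaryFile(
        mode="w", suffix=".txt", delete=False, encoding="utf-8"
    ) as f:
        f.write(f"Report\n======\n{text}\n")
        return f.name

path = generate_report("analysis complete")
print(path.endswith(".txt"))  # True
```

Note delete=False: the file must outlive the with block so Gradio can serve it after the function returns.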
My model is slow — how do I handle concurrent requests?
Call demo.queue() before demo.launch(). The queue serialises requests, shows users their position in the queue, and prevents timeouts on long-running functions. Without queueing, a slow inference function blocks all other requests. For GPU models, consider max_batch_size on demo.queue() to batch concurrent requests together for efficiency.
Conclusion
Gradio transforms the “it works in my notebook” problem into “here is a link to try it yourself” in minutes. You have covered the full workflow: wrapping simple functions with gr.Interface(), building custom multi-tab layouts with gr.Blocks(), handling images and audio components, integrating Hugging Face models, and deploying with share=True. The text analysis dashboard demonstrates how to combine multiple functions into a single clean interface.
The next logical extensions are deploying the dashboard to Hugging Face Spaces for permanent hosting, connecting a real Hugging Face model for language detection, and adding a gr.State() component to keep a history of previous analyses in the session. All three require only small additions to the code you have already written.
Explore the full component library and layout options in the official Gradio documentation.