How To Use uv: The Fast Python Package Manager

Beginner

Python package management is one of the most critical parts of Python development. Whether you’re installing libraries, managing dependencies, or creating reproducible environments, you need a reliable package manager. For years, pip has been the de facto standard, but it’s slow, fragmented, and sometimes frustrating to use. Enter uv—a blazing-fast Python package manager written in Rust that replaces pip, virtualenv, poetry, and even pyenv with a single, unified tool.

In this comprehensive guide, we’ll explore uv from the ground up. You’ll learn how to install it, use it to manage projects and dependencies, understand how it differs from traditional tools, and discover why developers are rapidly adopting it. By the end, you’ll understand why uv is being called “the next-generation Python package manager.”

What is uv?

uv is a modern Python package manager that’s designed to be ridiculously fast. Created by Astral, the makers of Ruff (the Python linter you might already be using), uv combines the functionality of pip, virtualenv, poetry, and pyenv into one cohesive tool. But the headline feature is speed.

Here’s what uv is NOT: a clone of pip that works the same way but faster. It’s a rethinking of what a Python package manager should be, designed from scratch in Rust with modern parallelization, caching, and dependency resolution.

Why Choose uv?

  • 10-100x faster: Installation is dramatically faster due to Rust performance and parallel downloads
  • Single tool: Replaces pip, virtualenv, poetry, and pyenv—no more context switching
  • Dependency resolution: Lightning-fast conflict detection and resolution
  • Cross-platform: Works on Windows, macOS, and Linux without modification
  • Built for modern Python: Designed with Python 3.8+ in mind from the start
  • Zero configuration needed: Works out of the box with sensible defaults

Installing uv

Installing uv is incredibly simple. On macOS or Linux, just run:

curl -LsSf https://astral.sh/uv/install.sh | sh

On Windows, use:

powershell -c "irm https://astral.sh/uv/install.ps1 | iex"

That’s it. uv is now installed and ready to use.

Basic uv Commands

Creating a New Project

To create a new Python project with uv, simply run:

uv init my_project

This creates a new directory with a basic project structure. Recent uv versions create a flat layout (main.py alongside pyproject.toml) by default; passing --package produces the src layout shown here:

my_project/
├── .python-version      # Python version specification
├── pyproject.toml       # Project configuration
└── src/
    └── my_project/
        └── __init__.py

Adding Dependencies

To add a package to your project:

uv add requests

This automatically:

  • Resolves the dependency
  • Installs it
  • Updates your pyproject.toml
  • Creates a uv.lock file for reproducibility
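For a concrete picture, after `uv add requests` the dependency appears in your project file roughly like this (the project name and version pin are illustrative):

```toml
# pyproject.toml (excerpt)
[project]
name = "my_project"
version = "0.1.0"
dependencies = [
    "requests>=2.31.0",
]
```

The exact lower bound uv records depends on the latest release at install time; the pinned, resolved version lives in uv.lock.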

Installing from pyproject.toml

To install all dependencies from your pyproject.toml:

uv sync

This ensures exact version matching for reproducibility.

Running Python Scripts

With uv, you don’t need to manually activate virtual environments:

uv run python script.py

uv automatically creates and uses the appropriate environment.

Advanced Usage

Python Version Management

uv can download and manage Python interpreters itself. To pin a specific Python version when creating a project:

uv init my_project --python 3.11

In an existing project, uv python pin 3.11 writes the version to .python-version.

To list available Python versions:

uv python list

Working with Virtual Environments

Create an environment explicitly:

uv venv

Activate it like you normally would:

source .venv/bin/activate  # On Windows: .venv\Scripts\activate

Pre-Release and Development Versions

To allow pre-release versions during dependency resolution:

uv add package_name --prerelease allow

Comparing with Pip and Poetry

Here’s how uv stacks up against traditional tools:

Feature                   | uv             | pip                              | poetry
Installation Speed        | 10-100x faster | Baseline                         | 2-3x faster than pip
Single Tool               | Yes            | No (needs virtualenv, pip-tools) | Yes
Lock File                 | Yes (uv.lock)  | No (requires pip-tools)          | Yes (poetry.lock)
Ease of Use               | Very Easy      | Moderate                         | Very Easy
Performance               | Excellent      | Good                             | Good
Python Version Management | Built-in       | Requires pyenv                   | Requires pyenv

Real-World Example: Setting Up a Data Science Project

Here’s how you’d set up a complete data science project with uv:

# Create the project
uv init data_science_project

# Enter the directory
cd data_science_project

# Add scientific computing dependencies
uv add numpy pandas scikit-learn jupyter matplotlib

# Add development dependencies (optional)
uv add --dev pytest pytest-cov black

# Run Jupyter notebooks
uv run jupyter notebook

# Run tests
uv run pytest

Notice how simple that is? No manual environment activation, no separate commands for different tools. Everything flows naturally.

Frequently Asked Questions

Is uv production-ready?

Absolutely. While it’s relatively new, it’s being used in production by many organizations. The Astral team is committed to stability, and it continues to improve with every release.

Will uv replace pip?

Eventually, yes. Many Python developers are switching to uv. However, pip will likely remain the standard for a while. The Python ecosystem moves slowly, and that’s a good thing.

Can I use uv alongside pip?

You shouldn’t mix package managers in the same environment, but you can use uv for some projects and pip for others.

What about compatibility?

uv is compatible with PyPI and all standard Python packages. There’s no special “uv-only” ecosystem—it works with everything pip does.

Conclusion

uv represents the future of Python package management. It’s fast, simple, and incredibly well-designed. Whether you’re building a small script, a data science project, or a large production application, uv makes package management feel effortless. If you haven’t tried it yet, I highly recommend giving it a shot. Your development workflow will thank you.

Key Takeaways:

  • uv is a faster, more unified replacement for pip, virtualenv, and pyenv
  • Installation is a single command
  • Project setup and dependency management are incredibly straightforward
  • It’s production-ready and actively maintained
  • Making the switch is low-risk—it’s compatible with the existing Python packaging ecosystem, and you can always go back to pip

How To Build a REST API with FastAPI in Python

Intermediate

Building modern web applications often requires a robust API layer, and FastAPI has emerged as one of the most powerful and developer-friendly frameworks for this task. If you’ve been working with Flask or Django and found yourself wanting something faster, more intuitive, and built specifically for modern Python, FastAPI is exactly what you’ve been waiting for. In this tutorial, we’ll walk through creating production-ready REST APIs from scratch, complete with validation, error handling, and real-world examples.

You don’t need to be an API expert to follow along. We’ll start with a simple “Hello World” endpoint and progressively build toward a complete CRUD application. By the end of this guide, you’ll understand how to design endpoints, validate user input with Pydantic, structure your code professionally, and handle errors gracefully.

Here’s what we’ll cover: we’ll install FastAPI and Uvicorn, create our first endpoint, explore path and query parameters, implement request validation using Pydantic models, build a complete CRUD Todo API, handle errors properly, define response models, and finally answer the most common questions developers ask when getting started. Let’s build something real.

Quick Example: Your First FastAPI Application

Before we dive into the details, here’s a fully functional FastAPI application that you can run right now. This will give you a taste of how simple and elegant FastAPI can be:

# main.py
from fastapi import FastAPI
from pydantic import BaseModel
from typing import Optional

app = FastAPI()

class Item(BaseModel):
    name: str
    description: Optional[str] = None
    price: float

@app.get("/")
def read_root():
    return {"message": "Hello, FastAPI!"}

@app.get("/items/{item_id}")
def read_item(item_id: int, query_param: Optional[str] = None):
    return {"item_id": item_id, "query_param": query_param}

@app.post("/items/")
def create_item(item: Item):
    return {"created_item": item}

Output: When you run this with uvicorn main:app --reload, you’ll have a fully functional API server with automatic documentation at http://localhost:8000/docs. This is the magic of FastAPI–your code is simultaneously your documentation.

What is FastAPI and Why Use It?

FastAPI is a modern, fast web framework for building APIs with Python 3.8+ (early releases supported 3.6). Created by Sebastián Ramírez, it combines the simplicity of Flask with the power of Django REST Framework, while being incredibly performant. FastAPI automatically generates interactive API documentation (Swagger UI and ReDoc), validates request data using Pydantic models, and leverages type hints to make your code self-documenting.

The “fast” in FastAPI refers to both development speed and runtime performance. Development is fast because you write less boilerplate code and get built-in validation. Runtime is fast because it’s built on top of Starlette and uses async/await natively. Unlike older frameworks, FastAPI was designed from the ground up for modern Python async programming.

Here’s how FastAPI compares to other popular frameworks:

Feature             | FastAPI        | Flask    | Django REST
Setup Complexity    | Minimal        | Minimal  | Moderate
Built-in Validation | Yes (Pydantic) | No       | Yes
Async Support       | Native         | Limited  | Limited
Auto Documentation  | Yes            | No       | Optional
Performance         | Very High      | High     | High
Learning Curve      | Low            | Very Low | Moderate

FastAPI wins when you need a modern, high-performance API with minimal setup. It’s particularly valuable for microservices, data science applications, and any project where you want to move fast without sacrificing code quality.

Installing FastAPI and Uvicorn

FastAPI is just a framework–it needs a web server to run. The most common choice is Uvicorn, an ASGI server that’s fast, simple, and perfect for development and production. Let’s get everything installed.

First, make sure you have Python 3.8 or higher installed. You can check your version with python --version. Then, create a virtual environment to keep your project dependencies isolated:

# Setup (bash/zsh)
python -m venv fastapi_env
source fastapi_env/bin/activate  # On Windows: fastapi_env\Scripts\activate

# Install FastAPI and Uvicorn (quote the extras so zsh doesn't expand the brackets)
pip install fastapi "uvicorn[standard]"

Output: The installation will complete in seconds. The [standard] extras include additional features like WebSocket support and automatic reload.

That’s it! You now have everything needed to build professional REST APIs. The next step is to create your first application file and start writing endpoints.

Creating Your First Endpoint

An endpoint is a URL on your API that clients can request. FastAPI uses Python decorators to define endpoints in a way that’s both elegant and powerful. Let’s create a simple application with a few endpoints:

# app.py
from fastapi import FastAPI

app = FastAPI(
    title="My First API",
    description="A simple API to learn FastAPI",
    version="1.0.0"
)

@app.get("/")
def read_root():
    """This is the root endpoint"""
    return {"message": "Welcome to FastAPI!"}

@app.get("/about")
def read_about():
    """Learn more about this API"""
    return {
        "app_name": "My First API",
        "version": "1.0.0",
        "author": "Your Name"
    }

@app.post("/data")
def receive_data(data: str):
    """Receive data from client"""
    return {"received": data}

Output: Save this as app.py and run uvicorn app:app --reload. Visit http://localhost:8000 in your browser and you’ll see the welcome message. Then visit http://localhost:8000/docs to see the interactive API documentation that FastAPI generates automatically.

The --reload flag makes Uvicorn automatically restart when you change your code–perfect for development. Each decorator specifies the HTTP method (@app.get, @app.post) and the path. The function name doesn’t matter to the API, but descriptive names help readability. The docstring automatically becomes the endpoint description in the documentation.

Path Parameters and Query Parameters

APIs need flexibility to handle different requests. Path parameters are part of the URL itself (like /users/123), while query parameters come after a question mark (like /users?page=1&limit=10). FastAPI makes both incredibly easy to implement.
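To make the distinction concrete, a query string is just key-value pairs encoded after the `?`. You can see the decomposition FastAPI performs using nothing but the standard library:

```python
from urllib.parse import urlparse, parse_qs

url = "/users?page=1&limit=10"
parsed = urlparse(url)

# The path portion (where path parameters would live)
print(parsed.path)             # /users
# The query portion, decoded into a dict of lists
print(parse_qs(parsed.query))  # {'page': ['1'], 'limit': ['10']}
```

FastAPI does this parsing for you and additionally converts each value to the type declared in your function signature.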

Path parameters are the most common way to identify a specific resource. When you want users to retrieve a specific item by ID, you use a path parameter:

# parameters.py
from fastapi import FastAPI

app = FastAPI()

@app.get("/users/{user_id}")
def get_user(user_id: int):
    """Get a specific user by ID"""
    return {"user_id": user_id, "name": f"User {user_id}"}

@app.get("/posts/{post_id}/comments/{comment_id}")
def get_comment(post_id: int, comment_id: int):
    """Get a specific comment on a specific post"""
    return {
        "post_id": post_id,
        "comment_id": comment_id,
        "text": "This is a comment"
    }

Output: When you request /users/42, the API returns {"user_id": 42, "name": "User 42"}. FastAPI automatically extracts the user_id from the URL and validates that it’s an integer. If someone sends /users/abc, FastAPI automatically returns a 422 validation error without any extra code from you.

Query parameters are optional filtering and pagination options. They appear after a question mark in the URL:

# query_params.py
from fastapi import FastAPI
from typing import Optional

app = FastAPI()

@app.get("/search")
def search(
    query: str,
    page: int = 1,
    limit: int = 10,
    sort_by: str = "relevance"
):
    """Search with pagination and sorting"""
    return {
        "query": query,
        "page": page,
        "limit": limit,
        "sort_by": sort_by,
        "results": []
    }

@app.get("/products")
def list_products(
    category: Optional[str] = None,
    min_price: float = 0,
    max_price: float = 1000,
    in_stock: bool = True
):
    """List products with optional filters"""
    return {
        "category": category,
        "price_range": {"min": min_price, "max": max_price},
        "in_stock": in_stock,
        "products": []
    }

Output: A request to /search?query=python&page=2&limit=20 will parse all three parameters correctly. Query parameters with default values are optional–you can call /search?query=python and page and limit will use their defaults. This is how you build flexible, user-friendly APIs.

Request Body with Pydantic Models

When clients need to send complex data to your API (like creating a new user or updating a product), you use request bodies. This is where Pydantic models become invaluable. Pydantic automatically validates the incoming data against your model definition, converts types, and provides helpful error messages if something is wrong.

A Pydantic model is a Python class that defines the structure of your data. Here’s how to use them for request bodies:

# models.py
from fastapi import FastAPI
from pydantic import BaseModel, Field
from typing import Optional

app = FastAPI()

class User(BaseModel):
    username: str
    email: str
    full_name: Optional[str] = None
    age: Optional[int] = None

class Product(BaseModel):
    name: str = Field(..., min_length=1, max_length=100)
    description: str = Field(default="", max_length=500)
    price: float = Field(..., gt=0, le=999999)
    stock: int = Field(default=0, ge=0)
    tags: list = Field(default_factory=list)

@app.post("/users")
def create_user(user: User):
    """Create a new user with validation"""
    return {
        "message": f"User {user.username} created successfully",
        "user": user
    }

@app.post("/products")
def create_product(product: Product):
    """Create a new product with detailed validation"""
    return {
        "message": f"Product '{product.name}' created",
        "product": product
    }

@app.put("/users/{user_id}")
def update_user(user_id: int, user: User):
    """Update a user by ID"""
    return {
        "user_id": user_id,
        "message": "User updated",
        "user": user
    }

Output: When you send this JSON to the /products endpoint:

{
  "name": "Python Book",
  "price": 29.99,
  "stock": 50,
  "tags": ["programming", "python", "learning"]
}

FastAPI validates that the price is positive, the stock is non-negative, and the name exists and is between 1 and 100 characters. If you send "price": -10, it rejects it with a clear error message. This validation is automatic–you don’t write any custom validation code. The Field function provides fine-grained control: gt=0 means “greater than zero”, le=999999 means “less than or equal to”, min_length and max_length validate string length.

CRUD Operations

CRUD stands for Create, Read, Update, and Delete–the four fundamental database operations. Let’s build a complete CRUD API for managing a collection of articles. This example uses an in-memory list to keep things simple, but in production you’d use a real database like PostgreSQL:

# crud_api.py
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import List, Optional
from datetime import datetime

app = FastAPI(title="Article API")

class Article(BaseModel):
    id: Optional[int] = None
    title: str
    content: str
    author: str
    created_at: Optional[datetime] = None

# In-memory storage (replace with database in production)
articles_db = []
article_counter = 1

@app.get("/articles", response_model=List[Article])
def list_articles(skip: int = 0, limit: int = 10):
    """Get all articles with pagination"""
    return articles_db[skip:skip + limit]

@app.get("/articles/{article_id}", response_model=Article)
def get_article(article_id: int):
    """Get a specific article by ID"""
    for article in articles_db:
        if article.id == article_id:
            return article
    raise HTTPException(status_code=404, detail="Article not found")

@app.post("/articles", response_model=Article)
def create_article(article: Article):
    """Create a new article"""
    global article_counter
    article.id = article_counter
    article.created_at = datetime.now()
    article_counter += 1
    articles_db.append(article)
    return article

@app.put("/articles/{article_id}", response_model=Article)
def update_article(article_id: int, updated_article: Article):
    """Update an existing article"""
    for i, article in enumerate(articles_db):
        if article.id == article_id:
            updated_article.id = article_id
            updated_article.created_at = article.created_at
            articles_db[i] = updated_article
            return updated_article
    raise HTTPException(status_code=404, detail="Article not found")

@app.delete("/articles/{article_id}")
def delete_article(article_id: int):
    """Delete an article by ID"""
    global articles_db
    initial_length = len(articles_db)
    articles_db = [a for a in articles_db if a.id != article_id]
    if len(articles_db) == initial_length:
        raise HTTPException(status_code=404, detail="Article not found")
    return {"message": f"Article {article_id} deleted successfully"}

Output: This API provides all four CRUD operations:

  • CREATE: POST to /articles with a JSON body to create a new article
  • READ: GET /articles to list all, or GET /articles/1 to get a specific one
  • UPDATE: PUT to /articles/1 to modify an article
  • DELETE: DELETE to /articles/1 to remove an article

The response_model parameter tells FastAPI what shape the response should be, enabling automatic validation of your response and generation of proper API documentation. The HTTPException with status code 404 is the standard way to signal that a resource wasn’t found.

Error Handling and Status Codes

Professional APIs don’t just return data–they communicate what went wrong when something fails. HTTP status codes are the standard way to do this. FastAPI makes error handling straightforward with the HTTPException class and customizable exception handlers.

Here are the most important HTTP status codes for your API:

  • 200 OK: Request succeeded, here’s your data
  • 201 Created: Resource was successfully created
  • 204 No Content: Success but no data to return (like DELETE)
  • 400 Bad Request: Client sent invalid data
  • 401 Unauthorized: Authentication required
  • 403 Forbidden: Not allowed to access this resource
  • 404 Not Found: Resource doesn’t exist
  • 422 Unprocessable Entity: Validation failed (FastAPI uses this automatically)
  • 500 Internal Server Error: Server error
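These codes and their standard reason phrases are also available in Python's standard library, which helps keep handler code readable:

```python
from http import HTTPStatus

# Look up the standard reason phrase for each code listed above
for code in (200, 201, 204, 400, 401, 403, 404, 422, 500):
    print(code, HTTPStatus(code).phrase)
```

FastAPI's fastapi.status module exposes the same codes as named constants (status.HTTP_404_NOT_FOUND and friends), which you'll see used below.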

Let’s implement comprehensive error handling:

# error_handling.py
from fastapi import FastAPI, HTTPException, status
# @validator is Pydantic v1 style; it still works in v2 (deprecated),
# where @field_validator is preferred
from pydantic import BaseModel, validator
from typing import Optional

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float
    quantity: int

    @validator('price')
    def price_must_be_positive(cls, v):
        if v <= 0:
            raise ValueError('Price must be greater than 0')
        return v

    @validator('quantity')
    def quantity_must_be_positive(cls, v):
        if v < 0:
            raise ValueError('Quantity cannot be negative')
        return v

items_db = {
    1: Item(name="Laptop", price=999.99, quantity=5),
    2: Item(name="Mouse", price=29.99, quantity=50)
}

@app.get("/items/{item_id}")
def get_item(item_id: int):
    """Get item, with proper error handling"""
    if item_id not in items_db:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail=f"Item with ID {item_id} not found"
        )
    return items_db[item_id]

@app.post("/items", status_code=status.HTTP_201_CREATED)
def create_item(item: Item):
    """Create new item with 201 response"""
    new_id = max(items_db.keys()) + 1 if items_db else 1
    items_db[new_id] = item
    return {"id": new_id, "item": item}

@app.delete("/items/{item_id}", status_code=status.HTTP_204_NO_CONTENT)
def delete_item(item_id: int):
    """Delete item, returning 204 No Content"""
    if item_id not in items_db:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail="Item not found"
        )
    del items_db[item_id]
    return None

@app.post("/checkout/{item_id}")
def checkout(item_id: int, quantity: int):
    """Purchase item with inventory check"""
    if item_id not in items_db:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail="Item not found"
        )

    item = items_db[item_id]
    if quantity > item.quantity:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=f"Insufficient stock. Only {item.quantity} available"
        )

    if quantity <= 0:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail="Quantity must be positive"
        )

    item.quantity -= quantity
    return {
        "status": "success",
        "item": item.name,
        "quantity": quantity,
        "total": quantity * item.price
    }

Output: When you request an item that doesn't exist, you get a proper 404 response with a descriptive message. When you try to buy more than available, you get a 400 error explaining the shortage. The validators in the Pydantic model automatically reject invalid data before your handler code even runs.

Response Models

Response models define the structure of your API responses and enable several powerful features: automatic response validation, filtering, serialization, and documentation. Even though your code might work with complex objects, response models let you control exactly what gets sent to the client.

Consider a real-world example where your database contains sensitive information you shouldn't expose:

# response_models.py
from fastapi import FastAPI
from pydantic import BaseModel
from typing import List, Optional
from datetime import datetime

app = FastAPI()

class UserInDB(BaseModel):
    """Full user data with sensitive info"""
    id: int
    username: str
    email: str
    password_hash: str  # Never expose this!
    created_at: datetime

class UserResponse(BaseModel):
    """What we return to clients"""
    id: int
    username: str
    email: str
    created_at: datetime

class BlogPost(BaseModel):
    title: str
    content: str
    author: str

class BlogPostResponse(BaseModel):
    """Response with computed fields"""
    title: str
    content: str
    author: str
    word_count: Optional[int] = None
    excerpt: Optional[str] = None

# Simulated database with sensitive data
users_db = {
    1: UserInDB(
        id=1,
        username="alice",
        email="alice@example.com",
        password_hash="$2b$12$...hashed...",
        created_at=datetime.now()
    )
}

@app.get("/users/{user_id}", response_model=UserResponse)
def get_user(user_id: int):
    """
    Get user - only returns safe fields.
    Password hash is never exposed even though
    the internal representation includes it.
    """
    if user_id not in users_db:
        from fastapi import HTTPException
        raise HTTPException(status_code=404, detail="User not found")
    return users_db[user_id]

@app.post("/posts", response_model=BlogPostResponse)
def create_post(post: BlogPost):
    """
    Create post - response includes computed fields
    like word_count and excerpt that aren't in the request
    """
    content = post.content
    word_count = len(content.split())
    excerpt = content[:100] + "..." if len(content) > 100 else content

    return {
        "title": post.title,
        "content": content,
        "author": post.author,
        "word_count": word_count,
        "excerpt": excerpt
    }

Output: When you request a user, the response only contains the fields defined in UserResponse. The password hash from the database is automatically filtered out. When you create a post, the response includes computed fields like word_count and excerpt even though the client never sent them. This is the power of response models--they decouple your internal data structures from your API contract.

Real-Life Example: Building a Todo API

Let's tie everything together with a complete, production-ready Todo API that demonstrates all the concepts we've learned. This is a practical example you can extend with a real database like PostgreSQL or MongoDB:

# todo_api.py
from fastapi import FastAPI, HTTPException, status
from pydantic import BaseModel, Field
from typing import List, Optional
from datetime import datetime
from enum import Enum

app = FastAPI(
    title="Todo API",
    description="A complete todo management API",
    version="1.0.0"
)

class PriorityLevel(str, Enum):
    low = "low"
    medium = "medium"
    high = "high"

class TodoCreate(BaseModel):
    title: str = Field(..., min_length=1, max_length=200)
    description: Optional[str] = Field(None, max_length=1000)
    priority: PriorityLevel = PriorityLevel.medium
    due_date: Optional[datetime] = None

class TodoResponse(BaseModel):
    id: int
    title: str
    description: Optional[str]
    priority: PriorityLevel
    completed: bool
    due_date: Optional[datetime]
    created_at: datetime
    completed_at: Optional[datetime] = None

todos_db: List[TodoResponse] = []
todo_counter = 1

@app.get("/todos", response_model=List[TodoResponse])
def list_todos(
    completed: Optional[bool] = None,
    priority: Optional[PriorityLevel] = None,
    skip: int = 0,
    limit: int = 20
):
    """Get todos with optional filtering"""
    results = todos_db

    if completed is not None:
        results = [t for t in results if t.completed == completed]
    if priority:
        results = [t for t in results if t.priority == priority]

    return results[skip:skip + limit]

@app.get("/todos/{todo_id}", response_model=TodoResponse)
def get_todo(todo_id: int):
    """Get a specific todo"""
    for todo in todos_db:
        if todo.id == todo_id:
            return todo
    raise HTTPException(status_code=404, detail="Todo not found")

@app.post("/todos", response_model=TodoResponse, status_code=status.HTTP_201_CREATED)
def create_todo(todo: TodoCreate):
    """Create a new todo"""
    global todo_counter
    new_todo = TodoResponse(
        id=todo_counter,
        title=todo.title,
        description=todo.description,
        priority=todo.priority,
        completed=False,
        due_date=todo.due_date,
        created_at=datetime.now()
    )
    todo_counter += 1
    todos_db.append(new_todo)
    return new_todo

@app.put("/todos/{todo_id}", response_model=TodoResponse)
def update_todo(todo_id: int, todo_update: TodoCreate):
    """Update an existing todo"""
    for i, todo in enumerate(todos_db):
        if todo.id == todo_id:
            updated = TodoResponse(
                id=todo.id,
                title=todo_update.title,
                description=todo_update.description,
                priority=todo_update.priority,
                completed=todo.completed,
                due_date=todo_update.due_date,
                created_at=todo.created_at
            )
            todos_db[i] = updated
            return updated
    raise HTTPException(status_code=404, detail="Todo not found")

@app.patch("/todos/{todo_id}/complete", response_model=TodoResponse)
def complete_todo(todo_id: int):
    """Mark a todo as complete"""
    for i, todo in enumerate(todos_db):
        if todo.id == todo_id:
            todo.completed = True
            todo.completed_at = datetime.now()
            todos_db[i] = todo
            return todo
    raise HTTPException(status_code=404, detail="Todo not found")

@app.delete("/todos/{todo_id}", status_code=status.HTTP_204_NO_CONTENT)
def delete_todo(todo_id: int):
    """Delete a todo"""
    global todos_db
    initial_length = len(todos_db)
    todos_db = [t for t in todos_db if t.id != todo_id]
    if len(todos_db) == initial_length:
        raise HTTPException(status_code=404, detail="Todo not found")

Output: This Todo API supports:

  • Creating todos with title, description, priority, and optional due date
  • Listing all todos with filtering by completion status and priority
  • Getting individual todos by ID
  • Updating todo details
  • Marking todos as complete (with completion timestamp)
  • Deleting todos
  • Proper HTTP status codes for each operation
  • Full validation of input data through Pydantic models

Run this with uvicorn todo_api:app --reload and visit http://localhost:8000/docs to see the interactive documentation. You can test every endpoint right from your browser. To use this in production, replace the in-memory todos_db list with actual database calls to PostgreSQL, MongoDB, or any other database your project uses.

Frequently Asked Questions

Should I use async functions in FastAPI?

Use async functions when your endpoint does I/O-bound operations like database queries, API calls, or file operations. These operations have "wait time" during which your server can handle other requests. For CPU-bound operations (heavy calculations), use regular synchronous functions. FastAPI handles both seamlessly--just use async def when appropriate. Most real-world APIs benefit from async because they spend time waiting for databases and external services.
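The payoff for I/O-bound work can be seen with plain asyncio: two simulated 0.1-second database calls run concurrently, so the total wait is about 0.1 seconds rather than 0.2 (fake_db_query is a stand-in for illustration, not a FastAPI API):

```python
import asyncio
import time

async def fake_db_query(delay: float) -> float:
    # Simulates waiting on a database or external service
    await asyncio.sleep(delay)
    return delay

async def main():
    start = time.perf_counter()
    # Both "queries" wait at the same time instead of back to back
    results = await asyncio.gather(fake_db_query(0.1), fake_db_query(0.1))
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
print(results, f"{elapsed:.2f}s")  # elapsed is roughly 0.1s, not 0.2s
```

This is exactly what happens inside an async def endpoint: while one request awaits the database, the event loop serves other requests.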

How do I add custom validation to Pydantic models?

Use the @validator decorator from Pydantic. The validator function runs after type conversion and can raise a ValueError with a custom message. You can validate a single field or multiple fields by passing multiple field names to the decorator. FastAPI automatically includes validation errors in the 422 response.
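A minimal sketch, assuming Pydantic v2 (where the decorator is @field_validator; v1 used @validator). The model and field names here are made up for illustration:

```python
from pydantic import BaseModel, ValidationError, field_validator

class Signup(BaseModel):
    username: str
    age: int

    @field_validator("username")
    @classmethod
    def username_not_blank(cls, v: str) -> str:
        # Runs after type conversion; raise ValueError to reject
        if not v.strip():
            raise ValueError("username must not be blank")
        return v

ok = Signup(username="alice", age=30)

try:
    Signup(username="   ", age=30)
except ValidationError as exc:
    print("rejected:", exc.errors()[0]["msg"])
```

In a FastAPI endpoint, the ValidationError is caught for you and returned to the client as a 422 response.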

How do I secure my FastAPI endpoints?

FastAPI includes built-in support for OAuth2, JWT tokens, and HTTP Basic authentication. Use the Security dependency from FastAPI to protect your endpoints. For a quick start with JWT: install python-jose and passlib, create login endpoints that return tokens, and use Depends() to require tokens on protected endpoints. Never store passwords in plain text--always use password hashing with libraries like bcrypt.

How do I handle CORS (Cross-Origin) requests?

CORS is needed when your frontend runs on a different domain than your API. FastAPI includes CORSMiddleware out of the box (import it from fastapi.middleware.cors) and you register it with app.add_middleware()--no extra package is required. Specify which origins are allowed, which HTTP methods they can use, and which headers are permitted. This prevents unauthorized websites from accessing your API while allowing your legitimate frontend to communicate.

What's the best way to integrate a database?

Use SQLAlchemy for relational databases (PostgreSQL, MySQL); for async access, reach for `tortoise-orm` or SQLAlchemy's async engine with `asyncpg`. FastAPI works great with both synchronous and asynchronous database libraries. Create a separate database.py file with connection logic, use dependency injection with Depends() to pass database sessions to your endpoints, and keep database models separate from Pydantic models to decouple your API contract from your database schema.
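The dependency described above has a simple generator shape. This sketch uses stdlib sqlite3 in memory so it runs anywhere; in a real app `get_db` would yield a SQLAlchemy session and endpoints would receive it via `Depends(get_db)`:

```python
import sqlite3

def get_db():
    """Yield a connection, then always close it -- the generator shape
    FastAPI expects from a Depends() dependency."""
    db = sqlite3.connect(":memory:")
    try:
        yield db
    finally:
        db.close()

# FastAPI would drive this generator for us; here we step through it by hand
gen = get_db()
db = next(gen)
db.execute("CREATE TABLE todos (id INTEGER PRIMARY KEY, title TEXT)")
db.execute("INSERT INTO todos (title) VALUES (?)", ("write tests",))
row = db.execute("SELECT title FROM todos").fetchone()
print(row[0])  # write tests
gen.close()    # runs the finally block, closing the connection
```

The `finally` block guarantees cleanup even if the endpoint raises, which is why the yield-based dependency is the standard pattern for database sessions.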

How do I test my FastAPI endpoints?

Use the TestClient from FastAPI and pytest. Create a test file that imports your app, creates a client with TestClient(app), and makes requests to your endpoints. The test client simulates HTTP requests without actually running a server. Test successful requests, validation failures, authorization errors, and edge cases. FastAPI makes testing straightforward because TestClient lets you exercise even async endpoints from plain synchronous test code.

How do I deploy a FastAPI application to production?

Don't use the development server in production. Use Gunicorn with Uvicorn workers: gunicorn app:app --workers 4 --worker-class uvicorn.workers.UvicornWorker. For containerization, create a Dockerfile with Python, install dependencies, and run Gunicorn. Deploy to any platform that supports Docker: AWS ECS, Google Cloud Run, Heroku, DigitalOcean App Platform, or Kubernetes. Make sure your environment variables for secrets, database URLs, and API keys are set properly--never hardcode them.

Conclusion

You've now learned how to build professional REST APIs with FastAPI. You understand decorators and path parameters, you can validate complex data structures with Pydantic models, you know how to implement complete CRUD operations, and you can handle errors gracefully with proper HTTP status codes. These foundations will serve you well whether you're building a simple microservice or a complex distributed system.

The real power of FastAPI lies in its simplicity and performance. Your code is simultaneously your tests, documentation, and API specification. The validation is automatic. The performance is exceptional. What used to take dozens of lines of boilerplate in older frameworks now takes a few lines of elegant Python.

Your next steps: start building real projects with FastAPI, integrate a real database when your app grows beyond in-memory storage, add authentication with JWT tokens, and deploy to your preferred cloud platform. The FastAPI documentation at https://fastapi.tiangolo.com/ is comprehensive and worth reading as you tackle more advanced topics like WebSockets, background tasks, and database migrations.

  • Building a Web Scraper with Beautiful Soup and Requests
  • Django REST Framework: Build Powerful APIs with Django
  • Database Modeling with SQLAlchemy in Python
  • Deploying Python Applications with Docker and Kubernetes
  • Async Programming in Python with asyncio
  • Testing Python Code with pytest and Mock
  • Building a Real-Time Chat Application with FastAPI and WebSockets
  • API Authentication with JWT Tokens in Python

How To Use Python 3.14 Template Strings (T-Strings) for Safe Interpolation

Intermediate

Python 3.14 introduces a powerful new feature called T-Strings (Template Strings) that revolutionizes how you handle string interpolation. If you’ve been frustrated with the limitations of f-strings when dealing with security-sensitive operations like database queries or HTML generation, T-Strings offer an elegant solution. Unlike traditional f-strings that immediately evaluate and return strings, T-Strings return Template objects that give you fine-grained control over how values are processed.

Don’t worry if you’re new to the concept–by the end of this tutorial, you’ll understand exactly when and how to use T-Strings in your projects. We’ll start with practical examples, explore the underlying mechanisms, and then dive into real-world use cases like SQL injection prevention and HTML escaping.

In this guide, we’ll cover the syntax of T-Strings, the Template protocol they implement, how to process templates with custom functions, and several practical applications that will make your code more secure and maintainable. We’ll also explore how this feature compares to existing string formatting methods in Python.

Quick Example

Before diving into the theory, let’s see T-Strings in action. This example demonstrates the fundamental difference between f-strings and T-Strings:

# quick_tstring_demo.py
from __future__ import annotations

# T-String creates a Template object, not a plain string
user_input = "Robert'; DROP TABLE students; --"
database_query = t"SELECT * FROM users WHERE id = {user_input}"

print(f"Template type: {type(database_query)}")
print(f"Template strings attr: {database_query.strings}")
print(f"Template values attr: {database_query.values}")

# You can process the template safely
def escape_sql_value(value):
    """Escape value for SQL injection prevention"""
    escaped = str(value).replace("'", "''")
    return f"'{escaped}'"

# Process the template and build the safe query
safe_parts = [database_query.strings[0]]
for i, value in enumerate(database_query.values):
    safe_parts.append(escape_sql_value(value))
    safe_parts.append(database_query.strings[i + 1])

final_query = "".join(safe_parts)
print(f"\nFinal query: {final_query}")

Output:

Template type: <class 'string.templatelib.Template'>
Template strings attr: ('SELECT * FROM users WHERE id = ', '')
Template values attr: ("Robert'; DROP TABLE students; --",)

Final query: SELECT * FROM users WHERE id = 'Robert''; DROP TABLE students; --'

Notice how the T-String defers the actual string assembly, allowing us to sanitize values before they’re combined. This is the core advantage of T-Strings over f-strings.

What Are T-Strings?

T-Strings, defined in PEP 750, introduce a new string prefix `t` that transforms string literals into Template objects instead of plain strings. This small change has significant implications for security and control over string interpolation.

Here’s a comparison of different string formatting approaches in Python:

Approach                                  Returns    Evaluated At       Best Use Case
f-string:     `f"Hello {name}"`           str        Statement time     Simple output, logging
T-String:     `t"Hello {name}"`           Template   Never (deferred)   Security-sensitive, custom processing
.format():    `"Hello {}".format(name)`   str        Call time          Legacy code, simple formatting
% formatting: `"Hello %s" % name`         str        Call time          Very old codebases

The key insight is that T-Strings separate the specification of a template (which values go where) from the actual interpolation of values into the template. This separation allows you to apply custom processing logic before combining strings and values.

T-String Basic Syntax

Creating Template Objects

The syntax for creating a T-String is straightforward–use the `t` prefix just like you would use `f` for an f-string:

# tstring_syntax.py
from __future__ import annotations

# Basic T-String
product = "Laptop"
price = 1299.99

template = t"Product: {product}, Price: ${price}"

# Check the type
print(f"Type: {type(template)}")
print(f"Is it a Template? {type(template).__name__}")

# Access the components
print(f"Strings: {template.strings}")
print(f"Values: {template.values}")

Output:

Type: <class 'string.templatelib.Template'>
Is it a Template? Template
Strings: ('Product: ', ', Price: $', '')
Values: ('Laptop', 1299.99)

The Template object stores the literal string parts and the values separately. The `strings` attribute is a tuple of the static parts, and `values` is a tuple of the interpolated values.

Accessing Template Parts

Once you have a Template object, you can inspect its structure in detail. Templates provide multiple ways to access their components:

# template_inspection.py
from __future__ import annotations

username = "alice"
timestamp = "2026-04-01 14:30:00"
template = t"User {username} logged in at {timestamp}"

# Direct attribute access
print("Strings part:", template.strings)
print("Values part:", template.values)

# Using .interpolations for detailed interpolation info
print("\nInterpolation details:")
for interpolation in template.interpolations:
    print(f"  Value: {interpolation.value}")
    print(f"  Expression: {interpolation.expression}")
    print(f"  Conversion: {interpolation.conversion}")
    print(f"  Format spec: {interpolation.format_spec}")
    print()

Output:

Strings part: ('User ', ' logged in at ', '')
Values part: ('alice', '2026-04-01 14:30:00')

Interpolation details:
  Value: alice
  Expression: username
  Conversion: None
  Format spec: ''

  Value: 2026-04-01 14:30:00
  Expression: timestamp
  Conversion: None
  Format spec: ''

The Interpolation objects give you access to the original expression as a string, the conversion flag (if any), and the format specification. This information is crucial when building custom processors.

The Template Protocol

Implementing the Template Protocol

A template processor is simply a callable that accepts a Template, walks its `strings` and `values` in order, and decides how each value is rendered before joining the pieces back together. Wrapping that walk in a class lets you apply the same escaping policy to many templates systematically.

# custom_template_processor.py
from __future__ import annotations
from typing import Any

class SecureFormatter:
    """Processor that escapes all template values for safe output"""

    def __init__(self, escape_func):
        self.escape_func = escape_func

    def __call__(self, template):
        """Process a template object and return a safe string"""
        result_parts = []

        # Start with the first literal string
        result_parts.append(template.strings[0])

        # Process each value with the escape function
        for i, value in enumerate(template.values):
            escaped_value = self.escape_func(value)
            result_parts.append(str(escaped_value))
            result_parts.append(template.strings[i + 1])

        return "".join(result_parts)

# Define an HTML escaping function
def escape_html(value):
    """Escape HTML special characters"""
    replacements = {
        "&": "&",
        "<": "<",
        ">": ">",
        '"': """,
        "'": "'"
    }
    result = str(value)
    for char, escaped in replacements.items():
        result = result.replace(char, escaped)
    return result

# Use the processor
html_formatter = SecureFormatter(escape_html)
user_comment = ""
template = t"User comment: {user_comment}"
safe_html = html_formatter(template)
print(f"Safe output: {safe_html}")

Output:

Safe output: User comment: <script>alert('XSS')</script>

This example shows how you can create a custom processor that takes any escape function and applies it uniformly to all values in a template. This is one of the primary strengths of T-Strings over f-strings.

Processing Templates with Custom Functions

Building Safe Template Processors

Now let’s build more sophisticated template processors for real-world scenarios. The key is to leverage the separation of strings and values that T-Strings provide:

# advanced_processors.py
from __future__ import annotations
from typing import Callable

class TemplateProcessor:
    """Base processor for handling template objects"""

    def __init__(self, value_handler: Callable):
        self.value_handler = value_handler

    def process(self, template):
        """Process a template with custom handling for each value"""
        parts = [template.strings[0]]

        for i, value in enumerate(template.values):
            processed_value = self.value_handler(value)
            parts.append(str(processed_value))
            parts.append(template.strings[i + 1])

        return "".join(parts)

# Example 1: JSON safe processor
def jsonify(value):
    """Convert value to JSON-safe representation"""
    if isinstance(value, str):
        escaped = value.replace('"', '\\"')
        return f'"{escaped}"'
    elif isinstance(value, bool):
        return "true" if value else "false"
    elif value is None:
        return "null"
    else:
        return str(value)

# Example 2: URL safe processor (basic)
def urlencode(value):
    """Simple URL encoding"""
    import urllib.parse
    return urllib.parse.quote(str(value))

# Create processors
json_processor = TemplateProcessor(jsonify)
url_processor = TemplateProcessor(urlencode)

# Use cases
config_data = '{"admin": true}'
search_term = "python 3.14"

json_template = t"var data = {config_data};"
url_template = t"https://example.com/search?q={search_term}"

print("JSON output:", json_processor.process(json_template))
print("URL output:", url_processor.process(url_template))

Output:

JSON output: var data = "{\"admin\": true}";
URL output: https://example.com/search?q=python%203.14

By encapsulating the processing logic, you create reusable processors that can handle multiple templates with consistent behavior.

SQL Injection Prevention with T-Strings

Why SQL Injection Prevention Matters

SQL injection is one of the most critical security vulnerabilities in web applications. It occurs when untrusted input is concatenated directly into SQL queries. T-Strings provide an elegant mechanism to prevent this by forcing you to process all values before they enter your SQL.

# sql_safe_queries.py
from __future__ import annotations

class SQLQueryBuilder:
    """Safe SQL query builder using T-Strings"""

    def __init__(self):
        self.query_parts = []

    @staticmethod
    def escape_value(value):
        """Escape values for SQL"""
        if value is None:
            return "NULL"
        elif isinstance(value, bool):
            return "TRUE" if value else "FALSE"
        elif isinstance(value, (int, float)):
            return str(value)
        else:
            # Escape single quotes
            escaped = str(value).replace("'", "''")
            return f"'{escaped}'"

    def build_from_template(self, template):
        """Build a safe query from a T-String template"""
        query_parts = [template.strings[0]]

        for i, value in enumerate(template.values):
            escaped = self.escape_value(value)
            query_parts.append(escaped)
            query_parts.append(template.strings[i + 1])

        return "".join(query_parts)

# Example usage with dangerous input
builder = SQLQueryBuilder()

# Safe input
user_id = 42
query1 = builder.build_from_template(t"SELECT * FROM users WHERE id = {user_id}")
print("Safe query:", query1)

# Dangerous input that would fail with f-strings
email = "test@example.com'; DELETE FROM users; --"
query2 = builder.build_from_template(t"SELECT * FROM users WHERE email = {email}")
print("Protected query:", query2)

Output:

Safe query: SELECT * FROM users WHERE id = 42
Protected query: SELECT * FROM users WHERE email = 'test@example.com''; DELETE FROM users; --'

Notice how the dangerous SQL injection attempt is neutralized–the single quote in the input is escaped to two quotes, and the entire value is wrapped in quotes, making it a literal string value rather than executable SQL code.

Using T-Strings with Parameterized Queries

For database operations, you typically want to use parameterized queries (prepared statements) rather than string concatenation. T-Strings make this cleaner:

# parameterized_queries.py
from __future__ import annotations

class ParameterizedQueryBuilder:
    """Build parameterized queries using T-Strings"""

    def build_query(self, template):
        """Extract placeholders and parameters from T-String"""
        placeholders = []
        parameters = []

        for i, value in enumerate(template.values):
            placeholders.append(f"${i + 1}")  # PostgreSQL style
            parameters.append(value)

        # Reconstruct query with placeholders
        query_parts = [template.strings[0]]
        for i, placeholder in enumerate(placeholders):
            query_parts.append(placeholder)
            query_parts.append(template.strings[i + 1])

        query = "".join(query_parts)
        return query, tuple(parameters)

# Usage
builder = ParameterizedQueryBuilder()

user_id = 42
email = "alice@example.com"
template = t"SELECT * FROM users WHERE id = {user_id} AND email = {email}"

query, params = builder.build_query(template)
print(f"Query: {query}")
print(f"Parameters: {params}")

# This would then be executed as: cursor.execute(query, params)

Output:

Query: SELECT * FROM users WHERE id = $1 AND email = $2
Parameters: (42, 'alice@example.com')

Using parameterized queries is the gold standard for SQL safety, and T-Strings make it easy to construct these queries while keeping your code readable.

HTML Escaping and Content Security

Preventing Cross-Site Scripting (XSS) Attacks

Just as T-Strings help prevent SQL injection, they’re equally valuable for preventing XSS attacks when generating HTML. The process is identical–escape user input before it enters the template output:

# html_template_rendering.py
from __future__ import annotations
import html

class HTMLRenderer:
    """Render HTML templates safely with T-Strings"""

    @staticmethod
    def render(template):
        """Render T-String template as safe HTML"""
        parts = [template.strings[0]]

        for i, value in enumerate(template.values):
            # Use html.escape for automatic entity encoding
            escaped = html.escape(str(value))
            parts.append(escaped)
            parts.append(template.strings[i + 1])

        return "".join(parts)

# Example: User-generated content
renderer = HTMLRenderer()

username = ""
user_bio = "I love "

welcome_html = renderer.render(t"

Welcome, {username}!

") bio_html = renderer.render(t"

{user_bio}

") print("Rendered welcome:", welcome_html) print("Rendered bio:", bio_html)

Output:

Rendered welcome: 

Welcome, <img src=x onerror='alert("XSS")'>!

Rendered bio:

I love <script>alert('hack')</script>

The `html.escape()` function automatically converts dangerous characters like `<`, `>`, and quotes into their HTML entity equivalents. Combined with T-Strings, this creates a clean, declarative way to generate safe HTML from user input.

Building Safe Template Systems

For more complex HTML generation, you can build template systems that enforce safety at the framework level:

# safe_template_system.py
from __future__ import annotations
import html

class SafeHTMLTemplate:
    """A template class that automatically escapes all interpolations"""

    def __init__(self, content_template):
        self.template = content_template
        self.escaper = html.escape

    def render(self):
        """Render the template with automatic escaping"""
        parts = [self.template.strings[0]]

        for i, value in enumerate(self.template.values):
            escaped_value = self.escaper(str(value))
            parts.append(escaped_value)
            parts.append(self.template.strings[i + 1])

        return "".join(parts)

# Usage in a web framework context
user_data = {
    "name": "Alice",
    "title": "<b>Admin</b>",
    "bio": "Python <developer>"
}

# Create safe templates
name_template = SafeHTMLTemplate(t"{user_data['name']}")
title_template = SafeHTMLTemplate(t"{user_data['title']}")
bio_template = SafeHTMLTemplate(t"{user_data['bio']}")

# Render all safely
print(name_template.render())
print(title_template.render())
print(bio_template.render())

Output:

Alice
&lt;b&gt;Admin&lt;/b&gt;
Python &lt;developer&gt;

Notice how the HTML markup in the user data is escaped, preventing any script injection while preserving the intended content.

Advanced: Custom Processors and DSLs

Building Domain-Specific Languages

T-Strings enable the creation of domain-specific languages (DSLs) by allowing custom processing of templates. For example, you could build a templating language for configuration files, data validation, or custom syntax:

# custom_dsl_processor.py
from __future__ import annotations

class ConfigProcessor:
    """Process T-Strings as configuration templates"""

    def __init__(self):
        self.variables = {}

    def register_variable(self, name, value):
        """Register a variable for substitution"""
        self.variables[name] = value

    def process_template(self, template):
        """Process template with variable replacement and formatting"""
        parts = [template.strings[0]]

        for i, value in enumerate(template.values):
            # Apply custom processing based on type
            if isinstance(value, bool):
                processed = "yes" if value else "no"
            elif isinstance(value, (list, tuple)):
                processed = ", ".join(str(v) for v in value)
            elif isinstance(value, dict):
                processed = "; ".join(f"{k}={v}" for k, v in value.items())
            else:
                processed = str(value)

            parts.append(processed)
            parts.append(template.strings[i + 1])

        return "".join(parts)

# Usage in a configuration context
processor = ConfigProcessor()

debug_mode = True
log_level = "INFO"
features = ["auth", "api", "websocket"]
db_config = {"host": "localhost", "port": 5432}

config_template = t"""
Debug mode: {debug_mode}
Log level: {log_level}
Enabled features: {features}
Database config: {db_config}
"""

config_output = processor.process_template(config_template)
print("Generated config:")
print(config_output)

Output:

Generated config:
Debug mode: yes
Log level: INFO
Enabled features: auth, api, websocket
Database config: host=localhost; port=5432

This demonstrates how T-Strings allow you to build sophisticated text generation systems with custom rules for different data types.

Real-World Example: Building a Log Formatter

Let’s build a practical logging system that uses T-Strings to format log messages with automatic context escaping and structuring:

# structured_logging.py
from __future__ import annotations
import json
from datetime import datetime
from enum import Enum

class LogLevel(Enum):
    DEBUG = "DEBUG"
    INFO = "INFO"
    WARNING = "WARNING"
    ERROR = "ERROR"

class StructuredLogger:
    """Logger that formats messages using T-Strings"""

    def __init__(self, service_name):
        self.service_name = service_name
        self.logs = []

    def _create_log_entry(self, level, template):
        """Create a structured log entry from a T-String template"""
        # Build the message
        message_parts = [template.strings[0]]
        context = {}

        for i, value in enumerate(template.values):
            # Use expression as context key
            interpolation = template.interpolations[i]
            key = interpolation.expression or f"arg{i}"

            # Store in context dict
            context[key] = value

            # Add to message
            message_parts.append(str(value))
            message_parts.append(template.strings[i + 1])

        message = "".join(message_parts)

        # Create structured log entry
        log_entry = {
            "timestamp": datetime.now().isoformat(),
            "service": self.service_name,
            "level": level.value,
            "message": message,
            "context": context
        }

        return log_entry

    def log(self, level, template):
        """Log a message with automatic context capture"""
        entry = self._create_log_entry(level, template)
        self.logs.append(entry)
        print(json.dumps(entry, indent=2))

# Usage
logger = StructuredLogger("auth-service")

user_id = 12345
username = "alice_smith"
ip_address = "192.168.1.100"

logger.log(LogLevel.INFO, t"User {username} (ID: {user_id}) logged in from {ip_address}")

failed_attempts = 5
max_attempts = 10
logger.log(LogLevel.WARNING, t"User {username} has {failed_attempts} failed login attempts (max: {max_attempts})")

Output:

{
  "timestamp": "2026-04-01T12:30:45.123456",
  "service": "auth-service",
  "level": "INFO",
  "message": "User alice_smith (ID: 12345) logged in from 192.168.1.100",
  "context": {
    "username": "alice_smith",
    "user_id": 12345,
    "ip_address": "192.168.1.100"
  }
}
{
  "timestamp": "2026-04-01T12:30:46.234567",
  "service": "auth-service",
  "level": "WARNING",
  "message": "User alice_smith has 5 failed login attempts (max: 10)",
  "context": {
    "username": "alice_smith",
    "failed_attempts": 5,
    "max_attempts": 10
  }
}

This example shows how T-Strings enable automatic context extraction and structured logging, capturing not just the formatted message but also the individual values and their names for later analysis.

How to Try Python 3.14 T-Strings Today

Since Python 3.14 is still in development as of this writing, you’ll need to run Python from the development version. Here’s how to get started:

# Installation options for trying T-Strings

# Option 1: Build from source (Linux/macOS)
git clone https://github.com/python/cpython.git
cd cpython
./configure --prefix=$HOME/python314
make
make install

# Option 2: Use Docker
docker run -it python:3.14-dev bash

# Option 3: Download pre-built alpha/beta releases
# Visit https://www.python.org/downloads/
# Look for 3.14 alpha or beta versions

# Once installed, verify T-String support:
python3.14 -c "t = t'Test'; print(type(t))"

The official Python downloads page provides alpha and beta releases as they become available. Join the community discussions at discuss.python.org if you want to provide feedback on T-Strings and PEP 750.

Frequently Asked Questions

Are T-Strings backwards compatible with older Python versions?

No, T-Strings are a new feature in Python 3.14 and will not work in earlier versions; the `t` prefix is a syntax error on older interpreters, so it cannot be backported with a `__future__` import. If you need to support older Python versions, you'll need to either use f-strings or implement your own template processor.

What’s the performance impact of using T-Strings instead of f-strings?

T-Strings have a slightly higher memory footprint because they create Template objects rather than immediately evaluating to strings. However, if you’re processing templates (which is the entire point), the overhead is minimal compared to the safety benefits. For simple one-off templates where you don’t need processing, f-strings remain slightly more efficient.

Can I combine T-Strings with f-strings in the same code?

Absolutely! There’s no conflict between using both. Use f-strings for simple formatting and T-Strings when you need custom processing. In fact, many real applications will use both depending on context. Remember that you cannot use `f` and `t` prefixes together on the same string literal.

How do custom format specs work with T-Strings?

Format specs like `{value:.2f}` are captured in the Interpolation object’s `format_spec` attribute. Your custom processor can then apply these format specifications when processing the template. Here’s a quick example:

# format_spec_example.py
from __future__ import annotations

def format_aware_processor(template):
    parts = [template.strings[0]]

    for i, value in enumerate(template.values):
        interpolation = template.interpolations[i]
        format_spec = interpolation.format_spec

        if format_spec:
            formatted = format(value, format_spec)
        else:
            formatted = str(value)

        parts.append(formatted)
        parts.append(template.strings[i + 1])

    return "".join(parts)

price = 19.5
quantity = 5
template = t"Price: ${price:.2f}, Qty: {quantity:03d}"
result = format_aware_processor(template)
print(result)

Output:

Price: $19.50, Qty: 005

Can T-Strings contain other T-Strings?

Yes, you can nest T-Strings, but the outer template will contain a Template object as one of its values rather than a string. You would need to process the inner template first, or create a processor that handles nested Template objects specially. Most use cases don’t require this complexity.
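To make that concrete without requiring a 3.14 interpreter, here is a sketch using a hand-rolled stand-in class with the same `strings`/`values` shape as a real Template; a real processor would check `isinstance(value, Template)` from `string.templatelib` instead:

```python
class FakeTemplate:
    """Stand-in with the same strings/values shape as a 3.14 Template."""
    def __init__(self, strings, values):
        self.strings = strings
        self.values = values

def render(template) -> str:
    parts = [template.strings[0]]
    for i, value in enumerate(template.values):
        # Recurse when a value is itself a (fake) template
        if isinstance(value, FakeTemplate):
            parts.append(render(value))
        else:
            parts.append(str(value))
        parts.append(template.strings[i + 1])
    return "".join(parts)

# Equivalent to: inner = t"Hello, {name}!"; outer = t"Message: {inner}"
inner = FakeTemplate(("Hello, ", "!"), ("Alice",))
outer = FakeTemplate(("Message: ", ""), (inner,))
print(render(outer))  # Message: Hello, Alice!
```

The recursion is the only extra ingredient; everything else is the same strings-and-values walk used throughout this article.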

How do multiline T-Strings work?

T-Strings support multiline strings just like regular Python strings. Use triple quotes for multiline templates:

# multiline_template.py
from __future__ import annotations

name = "Bob"
email = "bob@example.com"

template = t"""
User Profile:
Name: {name}
Email: {email}
"""

print(template.strings)
print(template.values)

Output:

('\nUser Profile:\nName: ', '\nEmail: ', '\n')
('Bob', 'bob@example.com')

Conclusion

T-Strings represent a significant evolution in Python’s string handling capabilities, particularly for security-sensitive applications. By deferring the combination of string parts and values, they enable custom processing that’s impossible with f-strings, making your code more secure against injection attacks and more flexible for advanced use cases.

The key advantages are clear: T-Strings naturally support SQL injection prevention, HTML escaping, URL encoding, and custom domain-specific languages through a clean, consistent API. Whether you’re building web applications, CLI tools, or data processing pipelines, understanding and leveraging T-Strings will improve both the security and maintainability of your code.

For more information, consult the official Python documentation for PEP 750 at https://peps.python.org/pep-0750/ and the standard Template protocol documentation in the Python standard library.

Explore these related topics to deepen your Python expertise:

  • String Formatting in Python: A Complete Guide (f-strings, .format(), and legacy methods)
  • SQL Injection: Prevention Strategies and Best Practices
  • Web Security in Python: CSRF, XSS, and CORS
  • Building Custom DSLs with Python
  • Advanced Template Engines: Jinja2, Mako, and Cheetah

How To Mock API Calls in Python

Intermediate

Your Python application talks to external APIs — fetching weather data, processing payments, sending notifications, pulling user profiles from third-party services. But when you write tests, you do not want those tests to actually hit the internet. Real API calls are slow, flaky, cost money, and make your test results depend on whether some server halfway around the world is having a good day. Mocking API calls lets you test your code’s logic in complete isolation, with predictable responses that run in milliseconds.

Python’s standard library includes everything you need through the unittest.mock module. You do not need to install anything extra to get started — patch, MagicMock, and Mock are all built in. For more advanced scenarios, the third-party responses library provides an elegant way to mock the requests library specifically. Both approaches work seamlessly with pytest.

In this article, we will cover everything you need to mock API calls in Python. We will start with a quick example, then explain how mocking works under the hood. From there, we will walk through patching with decorators and context managers, configuring mock return values and side effects, verifying that calls were made correctly, using the responses library for request-level mocking, and handling error scenarios. We will finish with a complete real-life project that tests a GitHub user profile fetcher end to end.

Mocking an API Call in Python: Quick Example

Here is the simplest possible example of mocking an API call. We have a function that fetches a user from an API, and a test that replaces the HTTP call with a fake response.

# quick_mock_example.py
from unittest.mock import patch, MagicMock

import requests

def get_user(user_id):
    """Fetch a user from the API."""
    response = requests.get(f"https://jsonplaceholder.typicode.com/users/{user_id}")
    response.raise_for_status()
    return response.json()

# Test it without hitting the real API
@patch("requests.get")
def test_get_user(mock_get):
    mock_response = MagicMock()
    mock_response.json.return_value = {"id": 1, "name": "Leanne Graham"}
    mock_response.raise_for_status.return_value = None
    mock_get.return_value = mock_response

    user = get_user(1)
    assert user["name"] == "Leanne Graham"
    mock_get.assert_called_once_with("https://jsonplaceholder.typicode.com/users/1")

if __name__ == "__main__":
    test_get_user()
    print("Test passed!")

Output:

$ python quick_mock_example.py
Test passed!

The @patch decorator replaced requests.get with a MagicMock before the test ran, and restored the real function afterward. We configured the mock to return a fake JSON response, then verified that our function called the right URL. The entire test runs without any network access, making it fast and reliable.
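Because MagicMock creates attributes on demand, the fake response above can also be configured in a single chained line, without naming an intermediate mock. A small standalone sketch of that shorthand:

```python
from unittest.mock import MagicMock

# MagicMock auto-creates attributes, so return values can be chained:
# "calling the mock returns an object whose .json() returns this dict".
mock_get = MagicMock()
mock_get.return_value.json.return_value = {"id": 1, "name": "Leanne Graham"}

response = mock_get("https://jsonplaceholder.typicode.com/users/1")
print(response.json())  # {'id': 1, 'name': 'Leanne Graham'}
```

The explicit `mock_response` style in the test above is more readable when you also need to configure raise_for_status; the chained style is handy for one-liners.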

Want to learn about context managers, side effects, error simulation, and the responses library? Keep reading — we cover all of that below.

What Is Mocking and Why Mock API Calls?

Mocking is a testing technique where you replace a real object with a fake one that behaves however you configure it. In the context of API calls, mocking means replacing the HTTP client (usually requests.get or requests.post) with a controlled substitute that returns predetermined responses without making any network requests.
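Before any patching is involved, a mock on its own is just a configurable stand-in. Here is a minimal sketch using only unittest.mock (the fetch_weather method name is made up for illustration):

```python
from unittest.mock import MagicMock

# A MagicMock accepts any call and records it; you decide what it returns.
fake_client = MagicMock()
fake_client.fetch_weather.return_value = {"temp_c": 21, "condition": "sunny"}

result = fake_client.fetch_weather("London")
print(result["condition"])               # sunny
print(fake_client.fetch_weather.called)  # True
```

Patching, covered below, is simply the mechanism for swapping a mock like this into your real code's namespace during a test.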

Here is why mocking API calls matters for any serious Python project:

| Problem with real API calls in tests | How mocking solves it |
| --- | --- |
| Tests are slow (network round-trips) | Mocks return instantly from memory |
| Tests fail when the API is down or rate-limited | Mocks always respond predictably |
| Tests cost money (paid APIs charge per call) | Mocks are free; no HTTP requests leave your machine |
| Tests depend on external data that changes | Mocks return the exact data you specify |
| Tests cannot simulate errors easily | Mocks can raise any exception on demand |
| Tests require authentication tokens | Mocks bypass all authentication |

The core principle is simple: your tests should verify that your code handles API responses correctly. They should not verify that the API itself is working — that is the API provider’s job. Mocking draws a clean boundary between your logic and the external world.

Patching With the Decorator Pattern

The most common way to mock an API call is the @patch decorator from unittest.mock. It temporarily replaces a specified object with a MagicMock for the duration of the test, then restores the original when the test finishes.

# github_client.py
import requests

def get_repo_stars(owner, repo):
    """Fetch the star count for a GitHub repository."""
    url = f"https://api.github.com/repos/{owner}/{repo}"
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    data = response.json()
    return data["stargazers_count"]

def is_popular(owner, repo, threshold=1000):
    """Check if a repository has more stars than the threshold."""
    stars = get_repo_stars(owner, repo)
    return stars >= threshold

# test_github_client.py
from unittest.mock import patch, MagicMock
from github_client import get_repo_stars, is_popular

@patch("github_client.requests.get")
def test_get_repo_stars(mock_get):
    mock_response = MagicMock()
    mock_response.json.return_value = {"stargazers_count": 54321}
    mock_response.raise_for_status.return_value = None
    mock_get.return_value = mock_response

    stars = get_repo_stars("python", "cpython")
    assert stars == 54321

@patch("github_client.requests.get")
def test_is_popular_true(mock_get):
    mock_response = MagicMock()
    mock_response.json.return_value = {"stargazers_count": 5000}
    mock_response.raise_for_status.return_value = None
    mock_get.return_value = mock_response

    assert is_popular("pallets", "flask") is True

@patch("github_client.requests.get")
def test_is_popular_false(mock_get):
    mock_response = MagicMock()
    mock_response.json.return_value = {"stargazers_count": 50}
    mock_response.raise_for_status.return_value = None
    mock_get.return_value = mock_response

    assert is_popular("someone", "small-project") is False

Output:

$ pytest test_github_client.py -v
========================= test session starts =========================
collected 3 items

test_github_client.py::test_get_repo_stars PASSED
test_github_client.py::test_is_popular_true PASSED
test_github_client.py::test_is_popular_false PASSED

========================= 3 passed in 0.02s ==========================

Notice the patch target is "github_client.requests.get", not "requests.get". This is the single most common mistake with @patch: you must patch where the object is looked up, not where it is defined. Since github_client.py imports requests and calls requests.get, you patch it inside github_client's namespace.
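The lookup rule is easiest to see with the other import style. The sketch below builds a throwaway module (hypothetical name "my_app") that uses `from json import dumps`, then shows that patching "json.dumps" misses it while patching "my_app.dumps" works:

```python
import sys
import types
from unittest.mock import patch

# Build a throwaway module that binds its own name `dumps`
# via `from json import dumps`.
my_app = types.ModuleType("my_app")
exec(
    "from json import dumps\n"
    "def serialize(obj):\n"
    "    return dumps(obj)\n",
    my_app.__dict__,
)
sys.modules["my_app"] = my_app

# Wrong target: my_app holds its own reference named `dumps`,
# so replacing the attribute on the json module does not touch it.
with patch("json.dumps", return_value="MOCKED"):
    print(my_app.serialize({"a": 1}))  # the real json.dumps still runs

# Right target: patch the name inside my_app's own namespace.
with patch("my_app.dumps", return_value="MOCKED"):
    print(my_app.serialize({"a": 1}))  # MOCKED
```

The same logic explains the github_client case above: it does `import requests`, so the name looked up at call time is `requests` inside github_client, and "github_client.requests.get" is the correct target.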

Patching With Context Managers

Sometimes the decorator pattern is too broad — you only need the mock active for a few lines, not the entire test. The with statement gives you finer control over exactly when the mock is active.

# test_context_manager.py
from unittest.mock import patch, MagicMock
from github_client import get_repo_stars

def test_patch_as_context_manager():
    with patch("github_client.requests.get") as mock_get:
        mock_response = MagicMock()
        mock_response.json.return_value = {"stargazers_count": 999}
        mock_response.raise_for_status.return_value = None
        mock_get.return_value = mock_response

        stars = get_repo_stars("test", "repo")
        assert stars == 999

    # After the with block, requests.get is real again

Output:

$ pytest test_context_manager.py -v
========================= test session starts =========================
collected 1 item

test_context_manager.py::test_patch_as_context_manager PASSED

========================= 1 passed in 0.01s ==========================

The context manager approach is especially useful when your test needs to verify behavior both with and without the mock in the same test function. Inside the with block, the mock is active. Outside it, the original object is restored. This gives you precise control that the decorator cannot match.
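The restore-on-exit behavior is easy to demonstrate with any patched function. A quick standalone sketch using json.dumps instead of an HTTP call:

```python
import json
from unittest.mock import patch

# Inside the block json.dumps is a mock; on exit the original is restored.
with patch("json.dumps", return_value="fake"):
    assert json.dumps({"a": 1}) == "fake"

# Outside the block, the real function is back.
assert json.dumps({"a": 1}) == '{"a": 1}'
print("original restored after the with block")
```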

Side Effects: Simulating Errors and Dynamic Responses

Real APIs do not always return happy responses. They time out, return 500 errors, send malformed JSON, and rate-limit your requests. The side_effect parameter on a mock lets you simulate all of these scenarios so your code handles failures gracefully.

# test_error_handling.py
from unittest.mock import patch, MagicMock
import requests
from github_client import get_repo_stars

@patch("github_client.requests.get")
def test_api_timeout(mock_get):
    mock_get.side_effect = requests.exceptions.Timeout("Connection timed out")

    try:
        get_repo_stars("python", "cpython")
        assert False, "Should have raised Timeout"
    except requests.exceptions.Timeout:
        pass  # Expected behavior

@patch("github_client.requests.get")
def test_api_404(mock_get):
    mock_response = MagicMock()
    mock_response.raise_for_status.side_effect = requests.exceptions.HTTPError(
        "404 Not Found"
    )
    mock_get.return_value = mock_response

    try:
        get_repo_stars("nonexistent", "repo")
        assert False, "Should have raised HTTPError"
    except requests.exceptions.HTTPError:
        pass  # Expected behavior

@patch("github_client.requests.get")
def test_api_returns_different_responses(mock_get):
    response_1 = MagicMock()
    response_1.json.return_value = {"stargazers_count": 100}
    response_1.raise_for_status.return_value = None

    response_2 = MagicMock()
    response_2.json.return_value = {"stargazers_count": 200}
    response_2.raise_for_status.return_value = None

    mock_get.side_effect = [response_1, response_2]

    assert get_repo_stars("owner", "repo1") == 100
    assert get_repo_stars("owner", "repo2") == 200

Output:

$ pytest test_error_handling.py -v
========================= test session starts =========================
collected 3 items

test_error_handling.py::test_api_timeout PASSED
test_error_handling.py::test_api_404 PASSED
test_error_handling.py::test_api_returns_different_responses PASSED

========================= 3 passed in 0.02s ==========================

When side_effect is an exception class or instance, the mock raises that exception when called. When it is a list, the mock returns each item in sequence on successive calls. This is incredibly powerful for testing retry logic, fallback behavior, and error recovery paths that would be nearly impossible to trigger with real API calls.
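There is a third form worth knowing: when side_effect is a callable, the mock invokes it with the same arguments it received and returns the result, letting the fake response depend on the request. A small sketch (the URLs are just illustrative):

```python
from unittest.mock import MagicMock

def fake_get(url, **kwargs):
    """Return a different fake response depending on the requested URL."""
    response = MagicMock()
    if "cpython" in url:
        response.json.return_value = {"stargazers_count": 60000}
    else:
        response.json.return_value = {"stargazers_count": 1}
    return response

mock_get = MagicMock(side_effect=fake_get)

big = mock_get("https://api.github.com/repos/python/cpython")
small = mock_get("https://api.github.com/repos/someone/tiny")
print(big.json()["stargazers_count"])    # 60000
print(small.json()["stargazers_count"])  # 1
```

This pattern is useful when the code under test hits several endpoints through the same patched function and you want one mock to serve them all.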

Verifying How Your Code Calls the API

Mocking is not just about controlling what comes back from the API — it also lets you verify exactly how your code called it. Did it use the right URL? Did it send the correct headers? Did it call the API the expected number of times? The mock object records every call for inspection.

# notification_service.py
import requests

def send_notification(user_email, message, urgent=False):
    """Send a notification via the company API."""
    payload = {
        "to": user_email,
        "body": message,
        "priority": "high" if urgent else "normal"
    }
    headers = {"Authorization": "Bearer fake-token-123"}
    response = requests.post(
        "https://api.notifications.internal/send",
        json=payload,
        headers=headers,
        timeout=5
    )
    response.raise_for_status()
    return response.json()

# test_notification_service.py
from unittest.mock import patch, MagicMock, call
from notification_service import send_notification

@patch("notification_service.requests.post")
def test_sends_correct_payload(mock_post):
    mock_response = MagicMock()
    mock_response.json.return_value = {"status": "sent", "id": "msg-123"}
    mock_response.raise_for_status.return_value = None
    mock_post.return_value = mock_response

    result = send_notification("alice@example.com", "Hello!", urgent=True)

    mock_post.assert_called_once_with(
        "https://api.notifications.internal/send",
        json={
            "to": "alice@example.com",
            "body": "Hello!",
            "priority": "high"
        },
        headers={"Authorization": "Bearer fake-token-123"},
        timeout=5
    )
    assert result["status"] == "sent"

@patch("notification_service.requests.post")
def test_normal_priority_by_default(mock_post):
    mock_response = MagicMock()
    mock_response.json.return_value = {"status": "sent"}
    mock_response.raise_for_status.return_value = None
    mock_post.return_value = mock_response

    send_notification("bob@example.com", "Update available")

    actual_call = mock_post.call_args
    assert actual_call.kwargs["json"]["priority"] == "normal"

Output:

$ pytest test_notification_service.py -v
========================= test session starts =========================
collected 2 items

test_notification_service.py::test_sends_correct_payload PASSED
test_notification_service.py::test_normal_priority_by_default PASSED

========================= 2 passed in 0.01s ==========================

The assert_called_once_with method checks both that the mock was called exactly once and that it received the exact arguments you specified. For more flexible inspection, call_args gives you the actual positional and keyword arguments from the most recent call. This is how you verify that your code is building the right request body, sending the correct headers, and using proper timeout values — all without any network traffic.
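When your code calls the API more than once, call_args_list records every call in order, and each entry exposes .args and .kwargs (Python 3.8+). A minimal standalone sketch:

```python
from unittest.mock import MagicMock, call

mock_post = MagicMock()
mock_post("https://example.com/send", json={"to": "alice"})
mock_post("https://example.com/send", json={"to": "bob"})

# Full call history, in order.
assert mock_post.call_count == 2
assert mock_post.call_args_list[0].kwargs["json"] == {"to": "alice"}

# call_args always holds the most recent call; compare with call().
assert mock_post.call_args == call("https://example.com/send", json={"to": "bob"})
print("all calls recorded")
```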

The responses Library: Mocking at the HTTP Level

While unittest.mock works at the Python object level (replacing requests.get itself), the responses library works at the HTTP level — it intercepts outgoing HTTP requests and returns configured responses. This is closer to how the real code works and requires less boilerplate for request-heavy tests.

# test_with_responses.py
import responses
import requests

def fetch_todos(user_id):
    """Fetch todos for a user from the API."""
    url = f"https://jsonplaceholder.typicode.com/todos?userId={user_id}"
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    todos = response.json()
    return [t for t in todos if not t["completed"]]

@responses.activate
def test_fetch_incomplete_todos():
    responses.add(
        responses.GET,
        "https://jsonplaceholder.typicode.com/todos",
        json=[
            {"id": 1, "userId": 1, "title": "Buy groceries", "completed": False},
            {"id": 2, "userId": 1, "title": "Walk the dog", "completed": True},
            {"id": 3, "userId": 1, "title": "Write tests", "completed": False},
        ],
        status=200
    )

    incomplete = fetch_todos(1)
    assert len(incomplete) == 2
    assert incomplete[0]["title"] == "Buy groceries"
    assert incomplete[1]["title"] == "Write tests"

@responses.activate
def test_fetch_todos_server_error():
    responses.add(
        responses.GET,
        "https://jsonplaceholder.typicode.com/todos",
        json={"error": "Internal Server Error"},
        status=500
    )

    try:
        fetch_todos(1)
        assert False, "Should have raised HTTPError"
    except requests.exceptions.HTTPError:
        pass

Output:

$ pytest test_with_responses.py -v
========================= test session starts =========================
collected 2 items

test_with_responses.py::test_fetch_incomplete_todos PASSED
test_with_responses.py::test_fetch_todos_server_error PASSED

========================= 2 passed in 0.03s ==========================

The @responses.activate decorator intercepts all HTTP requests made through the requests library during the test. You register expected responses with responses.add(), specifying the HTTP method, URL, response body, and status code. If your code tries to make a request to an unregistered URL, responses raises a ConnectionError, which catches accidental real API calls. Install it with pip install responses.

Combining Mocks With pytest Fixtures

When multiple tests share the same mock setup, pytest fixtures eliminate the repetition. You can create fixtures that set up mocks and inject them into any test that needs them.

# test_with_fixtures.py
import pytest
from unittest.mock import patch, MagicMock
from github_client import get_repo_stars, is_popular

@pytest.fixture
def mock_github_api():
    """Fixture that patches requests.get for GitHub API tests."""
    with patch("github_client.requests.get") as mock_get:
        mock_response = MagicMock()
        mock_response.raise_for_status.return_value = None
        mock_get.return_value = mock_response
        yield {"mock_get": mock_get, "mock_response": mock_response}

def test_repo_with_many_stars(mock_github_api):
    mock_github_api["mock_response"].json.return_value = {"stargazers_count": 50000}
    assert get_repo_stars("big", "project") == 50000

def test_repo_with_few_stars(mock_github_api):
    mock_github_api["mock_response"].json.return_value = {"stargazers_count": 3}
    assert get_repo_stars("tiny", "project") == 3

def test_popular_repo(mock_github_api):
    mock_github_api["mock_response"].json.return_value = {"stargazers_count": 9999}
    assert is_popular("big", "project", threshold=5000) is True

def test_unpopular_repo(mock_github_api):
    mock_github_api["mock_response"].json.return_value = {"stargazers_count": 10}
    assert is_popular("tiny", "project", threshold=5000) is False

Output:

$ pytest test_with_fixtures.py -v
========================= test session starts =========================
collected 4 items

test_with_fixtures.py::test_repo_with_many_stars PASSED
test_with_fixtures.py::test_repo_with_few_stars PASSED
test_with_fixtures.py::test_popular_repo PASSED
test_with_fixtures.py::test_unpopular_repo PASSED

========================= 4 passed in 0.02s ==========================

The fixture uses yield inside the with patch() context manager, which means the mock is active while the test runs and automatically cleaned up afterward. Each test only needs to configure the specific return value it cares about — the common setup (patching, creating the mock response, wiring up raise_for_status) is handled once in the fixture. Put shared fixtures like this in conftest.py to make them available across multiple test files.

Real-Life Example: Testing a GitHub Profile Fetcher

Let us build a complete module that fetches GitHub user profiles and formats them for display, then write a comprehensive test suite covering happy paths, error handling, and edge cases.

# github_profile.py
import requests

class GitHubProfileError(Exception):
    """Custom exception for GitHub profile fetching errors."""
    pass

class GitHubProfile:
    API_BASE = "https://api.github.com"

    def __init__(self, username):
        self.username = username
        self._data = None

    def fetch(self):
        """Fetch the user profile from GitHub API."""
        try:
            response = requests.get(
                f"{self.API_BASE}/users/{self.username}",
                headers={"Accept": "application/vnd.github.v3+json"},
                timeout=10
            )
            response.raise_for_status()
            self._data = response.json()
        except requests.exceptions.Timeout:
            raise GitHubProfileError(f"Timeout fetching profile for {self.username}")
        except requests.exceptions.HTTPError as e:
            if e.response.status_code == 404:
                raise GitHubProfileError(f"User '{self.username}' not found")
            raise GitHubProfileError(f"API error: {e}")
        return self

    def summary(self):
        """Return a formatted summary string."""
        if not self._data:
            raise GitHubProfileError("Profile not fetched yet. Call fetch() first.")
        # GitHub returns null (None) for empty fields, so use `or` as the
        # fallback; a .get() default only covers keys that are absent entirely.
        name = self._data.get("name") or self.username
        bio = self._data.get("bio") or "No bio available"
        repos = self._data.get("public_repos", 0)
        followers = self._data.get("followers", 0)
        return f"{name} | {repos} repos | {followers} followers | {bio}"

    @property
    def is_prolific(self):
        """Check if the user has more than 50 public repos."""
        if not self._data:
            return False
        return self._data.get("public_repos", 0) > 50

Now the test suite that exercises the full class:

# test_github_profile.py
import pytest
from unittest.mock import patch, MagicMock
import requests
from github_profile import GitHubProfile, GitHubProfileError

@pytest.fixture
def mock_api():
    with patch("github_profile.requests.get") as mock_get:
        mock_response = MagicMock()
        mock_response.raise_for_status.return_value = None
        mock_get.return_value = mock_response
        yield {"get": mock_get, "response": mock_response}

@pytest.fixture
def sample_profile_data():
    return {
        "login": "octocat",
        "name": "The Octocat",
        "bio": "GitHub mascot and occasional developer",
        "public_repos": 85,
        "followers": 12000
    }

# --- Fetch Tests ---

def test_fetch_sets_data(mock_api, sample_profile_data):
    mock_api["response"].json.return_value = sample_profile_data
    profile = GitHubProfile("octocat").fetch()
    assert profile._data == sample_profile_data

def test_fetch_uses_correct_url(mock_api, sample_profile_data):
    mock_api["response"].json.return_value = sample_profile_data
    GitHubProfile("torvalds").fetch()
    mock_api["get"].assert_called_once_with(
        "https://api.github.com/users/torvalds",
        headers={"Accept": "application/vnd.github.v3+json"},
        timeout=10
    )

def test_fetch_timeout(mock_api):
    mock_api["get"].side_effect = requests.exceptions.Timeout("timed out")
    with pytest.raises(GitHubProfileError, match="Timeout"):
        GitHubProfile("slowuser").fetch()

def test_fetch_user_not_found(mock_api):
    error_response = MagicMock()
    error_response.status_code = 404
    mock_api["response"].raise_for_status.side_effect = (
        requests.exceptions.HTTPError(response=error_response)
    )
    with pytest.raises(GitHubProfileError, match="not found"):
        GitHubProfile("ghost").fetch()

# --- Summary Tests ---

def test_summary_format(mock_api, sample_profile_data):
    mock_api["response"].json.return_value = sample_profile_data
    profile = GitHubProfile("octocat").fetch()
    result = profile.summary()
    assert "The Octocat" in result
    assert "85 repos" in result
    assert "12000 followers" in result

def test_summary_without_fetch():
    profile = GitHubProfile("someone")
    with pytest.raises(GitHubProfileError, match="not fetched"):
        profile.summary()

def test_summary_missing_bio(mock_api):
    mock_api["response"].json.return_value = {
        "login": "minimal", "name": "Min User",
        "public_repos": 1, "followers": 0
    }
    profile = GitHubProfile("minimal").fetch()
    assert "No bio available" in profile.summary()

# --- Property Tests ---

@pytest.mark.parametrize("repo_count, expected", [
    (100, True),
    (51, True),
    (50, False),
    (0, False),
])
def test_is_prolific(mock_api, repo_count, expected):
    mock_api["response"].json.return_value = {"public_repos": repo_count}
    profile = GitHubProfile("user").fetch()
    assert profile.is_prolific is expected

def test_is_prolific_without_fetch():
    profile = GitHubProfile("someone")
    assert profile.is_prolific is False

Output:

$ pytest test_github_profile.py -v
========================= test session starts =========================
collected 12 items

test_github_profile.py::test_fetch_sets_data PASSED
test_github_profile.py::test_fetch_uses_correct_url PASSED
test_github_profile.py::test_fetch_timeout PASSED
test_github_profile.py::test_fetch_user_not_found PASSED
test_github_profile.py::test_summary_format PASSED
test_github_profile.py::test_summary_without_fetch PASSED
test_github_profile.py::test_summary_missing_bio PASSED
test_github_profile.py::test_is_prolific[100-True] PASSED
test_github_profile.py::test_is_prolific[51-True] PASSED
test_github_profile.py::test_is_prolific[50-False] PASSED
test_github_profile.py::test_is_prolific[0-False] PASSED
test_github_profile.py::test_is_prolific_without_fetch PASSED

========================= 12 passed in 0.03s =========================

This test suite demonstrates all the mocking techniques from the article working together. The mock_api fixture handles common setup, side_effect simulates timeouts and HTTP errors, assert_called_once_with verifies the request details, and parametrize covers the boundary cases for is_prolific. Every test runs without internet access and completes in milliseconds.

Frequently Asked Questions

Should I use unittest.mock or the responses library?

Use unittest.mock when you need general-purpose mocking that works with any library or object, not just HTTP calls. Use responses when you are specifically testing code that uses the requests library and want a cleaner syntax for defining mock HTTP responses. For most projects, start with unittest.mock since it is built in and covers all use cases. Add responses when you have many request-heavy tests and the mock setup becomes repetitive.

Why does my patch not seem to work?

The most common reason is patching the wrong target. You must patch where the object is used, not where it is defined. If your module my_app.py does import requests and calls requests.get, you patch "my_app.requests.get", not "requests.get". If the module does from requests import get, you patch "my_app.get" instead. Check your import style and make sure the patch target matches it.

How do I mock async API calls with aiohttp or httpx?

For aiohttp, use the aioresponses library, which works like responses but for async HTTP. For httpx, use respx. Both follow the same pattern: register expected URLs with mock responses, run your async code, and verify the calls. You can also use unittest.mock.AsyncMock (Python 3.8+) for general async mocking with patch.
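AsyncMock behaves like MagicMock except that its calls must be awaited, and it records awaits rather than plain calls. A minimal sketch with a made-up fetch_data method on an injected client:

```python
import asyncio
from unittest.mock import AsyncMock

async def get_profile(client, user_id):
    """Code under test: awaits an injected async client (hypothetical API)."""
    return await client.fetch_data(f"users/{user_id}")

# AsyncMock attributes are themselves awaitable mocks.
client = AsyncMock()
client.fetch_data.return_value = {"id": 1, "name": "Leanne Graham"}

profile = asyncio.run(get_profile(client, 1))
print(profile["name"])  # Leanne Graham
client.fetch_data.assert_awaited_once_with("users/1")
```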

Can I mock only some API calls and let others go through?

With unittest.mock, you can use side_effect with a function that conditionally returns a mock or calls the real implementation. With responses, call responses.add_passthru("https://allowed-api.com") to let requests matching that prefix reach the network while mocking everything else. However, mixing real and mocked calls in tests is generally a sign that you should split the test into separate unit and integration tests.

How many things should I mock in a single test?

Mock only the external boundaries — the things that cross your application’s edge (HTTP calls, database queries, file I/O, system clocks). Do not mock internal functions or classes within your own codebase unless you have a specific reason. Over-mocking makes tests brittle because they break whenever you refactor internal code, even if the external behavior stays the same. A good rule of thumb: if you are mocking more than two things in one test, the function under test might be doing too much and should be refactored.

Conclusion

We covered the complete toolkit for mocking API calls in Python: patching with decorators and context managers, configuring return values and side effects with MagicMock, verifying call arguments with assert_called_once_with, using the responses library for HTTP-level mocking, combining mocks with pytest fixtures, and simulating errors like timeouts and 404s. The GitHub profile project showed how all these techniques work together in a realistic codebase.

Try extending the GitHub profile tests as practice: add a method that fetches the user’s repositories, handle pagination, or add caching with a TTL. Each new feature gives you more opportunities to practice mocking different response shapes and error conditions.

For the complete unittest.mock documentation, visit the official Python docs at docs.python.org/3/library/unittest.mock. For the responses library, see its GitHub page at github.com/getsentry/responses.