
Every Python developer learns to use with open('file.txt') as f early on, but few explore the full power of context managers beyond file handling. If you have ever needed to manage database connections, acquire locks, temporarily change settings, or ensure cleanup code always runs, the contextlib module is your toolkit. It turns complex resource management into clean, readable code.

The contextlib module is part of Python’s standard library, so there is nothing to install. It provides decorators and utilities that let you create context managers without writing a full class with __enter__ and __exit__ methods. The most powerful tool is @contextmanager, which turns a simple generator function into a fully functional context manager.

In this tutorial, you will learn how to create custom context managers with @contextmanager, manage multiple resources with ExitStack, suppress specific exceptions cleanly, redirect output streams, and build reusable resource management patterns. By the end, you will write context managers as naturally as you write regular functions.

Custom Context Manager: Quick Example

Here is how to create a timing context manager that measures how long a block of code takes to run:

# quick_timer.py
from contextlib import contextmanager
import time

@contextmanager
def timer(label="Block"):
    start = time.perf_counter()
    yield
    elapsed = time.perf_counter() - start
    print(f"{label} took {elapsed:.4f} seconds")

# Use it
with timer("Data processing"):
    total = sum(range(1_000_000))
    print(f"Sum: {total}")

Output:

Sum: 499999500000
Data processing took 0.0312 seconds

The @contextmanager decorator transforms the generator function into a context manager. Everything before yield runs on entry (like __enter__), and everything after yield runs on exit (like __exit__). The yield itself is where your with block executes. Note that in this simple version, if the with block raises an exception, the code after yield never runs; a later section shows how to guarantee cleanup with try/finally.

What is contextlib and Why Use It?

Context managers in Python follow the protocol defined by the __enter__ and __exit__ magic methods. Any object with these two methods can be used with the with statement. The contextlib module provides shortcuts that save you from writing boilerplate class definitions for simple use cases.

Without contextlib, creating a context manager requires a full class with two methods. With it, you can do the same thing in a few lines using a generator function. This matters because context managers are everywhere in professional Python code — they manage database transactions, HTTP sessions, temporary files, thread locks, and any resource that needs guaranteed cleanup.
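To make the comparison concrete, here is a sketch of that boilerplate: a class-based version of the timer from the quick example, written against the raw __enter__/__exit__ protocol. The Timer class name and its label parameter are illustrative, not part of any library:

```python
# class_timer.py -- the class-based equivalent of the earlier timer
import time

class Timer:
    """Context manager implemented with the full protocol."""
    def __init__(self, label="Block"):
        self.label = label

    def __enter__(self):
        # Runs on entry to the with block
        self.start = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Runs on exit, even if the block raised
        self.elapsed = time.perf_counter() - self.start
        print(f"{self.label} took {self.elapsed:.4f} seconds")
        return False  # do not suppress exceptions

with Timer("Counting"):
    total = sum(range(100_000))
```

Compare this to the five-line generator version above: same behavior, roughly three times the code.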

| Approach | Lines of Code | Best For |
| --- | --- | --- |
| Class with __enter__/__exit__ | 10-20 | Complex state management, reusable libraries |
| @contextmanager decorator | 5-10 | Simple setup/teardown, one-off utilities |
| ExitStack | 3-5 | Dynamic number of resources, conditional cleanup |

The rule of thumb: if your context manager is straightforward setup-yield-cleanup, use @contextmanager. If it needs to manage dynamic or conditional resources, reach for ExitStack.

The @contextmanager Decorator

The @contextmanager decorator is the workhorse of the module. It takes a generator that yields exactly once and turns it into a context manager. Here is a practical example that temporarily changes the working directory:

# change_dir.py
from contextlib import contextmanager
import os

@contextmanager
def change_directory(path):
    """Temporarily change the working directory."""
    original = os.getcwd()
    try:
        os.chdir(path)
        yield path
    finally:
        os.chdir(original)

# Usage
print(f"Before: {os.getcwd()}")
with change_directory("/tmp") as new_dir:
    print(f"Inside: {os.getcwd()}")
    print(f"Yielded: {new_dir}")
print(f"After: {os.getcwd()}")

Output:

Before: /home/user/project
Inside: /tmp
Yielded: /tmp
After: /home/user/project

The try/finally block inside the generator ensures cleanup happens even if an exception occurs within the with block. Whatever value you yield becomes the value bound by as in the with statement.

Handling Errors in Context Managers

When an exception occurs inside a with block, it gets thrown into the generator at the yield point. You can catch it, handle it, or let it propagate:

# error_handling.py
from contextlib import contextmanager

@contextmanager
def safe_operation(name):
    """Context manager that logs errors without suppressing them."""
    print(f"Starting: {name}")
    try:
        yield
    except Exception as e:
        print(f"Error in {name}: {type(e).__name__}: {e}")
        raise  # Re-raise to let caller handle it
    finally:
        print(f"Finished: {name}")

# Normal usage
with safe_operation("calculation"):
    result = 42 / 2
    print(f"Result: {result}")

print("---")

# With an error
try:
    with safe_operation("bad calculation"):
        result = 42 / 0
except ZeroDivisionError:
    print("Caught the error outside")

Output:

Starting: calculation
Result: 21.0
Finished: calculation
---
Starting: bad calculation
Error in bad calculation: ZeroDivisionError: division by zero
Finished: bad calculation
Caught the error outside

The finally block guarantees cleanup runs whether the operation succeeds or fails. If you want to suppress the exception (prevent it from propagating), do not re-raise it — but be careful, as silently swallowing exceptions makes debugging difficult.

Managing Multiple Resources with ExitStack

When you need to manage a variable number of resources — like opening multiple files determined at runtime — ExitStack is the right tool:

# exitstack_demo.py
from contextlib import ExitStack
import tempfile
import os

def process_multiple_files(filenames):
    """Open and process multiple files safely."""
    with ExitStack() as stack:
        # Open all files -- ExitStack closes them all on exit
        files = [
            stack.enter_context(open(f, 'w'))
            for f in filenames
        ]
        
        # Write to each file
        for i, f in enumerate(files):
            f.write(f"Content for file {i}\n")
            print(f"Wrote to {filenames[i]}")
        
        print(f"All {len(files)} files open simultaneously")
    # All files are closed here, even if an error occurred
    print("All files closed")

# Create temp files for demo
temp_dir = tempfile.mkdtemp()
filenames = [os.path.join(temp_dir, f"file_{i}.txt") for i in range(3)]
process_multiple_files(filenames)

# Verify files are written and closed
for f in filenames:
    with open(f) as fh:
        print(f"{os.path.basename(f)}: {fh.read().strip()}")

Output:

Wrote to /tmp/tmpXXXXXX/file_0.txt
Wrote to /tmp/tmpXXXXXX/file_1.txt
Wrote to /tmp/tmpXXXXXX/file_2.txt
All 3 files open simultaneously
All files closed
file_0.txt: Content for file 0
file_1.txt: Content for file 1
file_2.txt: Content for file 2

The key advantage of ExitStack is that it handles cleanup for all registered resources, even if opening a later resource fails. Without it, you would need deeply nested with statements or manual cleanup logic.
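ExitStack can also register plain cleanup functions, not just context managers, via its callback() method. The sketch below (the demo function and its events list are illustrative) shows that registered callbacks run in last-in, first-out order when the block exits:

```python
# exitstack_callback.py -- register arbitrary cleanup with ExitStack
from contextlib import ExitStack

def demo():
    events = []
    with ExitStack() as stack:
        # Callbacks run LIFO on exit, like nested with blocks unwinding
        stack.callback(events.append, "cleanup A")
        stack.callback(events.append, "cleanup B")
        events.append("work")
    return events

print(demo())  # ['work', 'cleanup B', 'cleanup A']
```

The LIFO order matters: resources acquired later often depend on resources acquired earlier, so they must be released first.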

Suppressing Exceptions with suppress()

Sometimes you want to ignore specific exceptions cleanly. Instead of writing try/except: pass, use contextlib.suppress():

# suppress_demo.py
from contextlib import suppress
import os

# Without suppress (verbose)
try:
    os.remove("nonexistent_file.txt")
except FileNotFoundError:
    pass

# With suppress (clean)
with suppress(FileNotFoundError):
    os.remove("nonexistent_file.txt")

# Multiple exception types
with suppress(FileNotFoundError, PermissionError):
    os.remove("/protected/file.txt")

print("Cleanup complete -- no crashes")

Output:

Cleanup complete -- no crashes

Use suppress() when you genuinely do not care about the exception — like deleting a file that might not exist, or disconnecting a client that might already be disconnected. Do not use it to hide errors you should be handling.

Redirecting Output with redirect_stdout

The redirect_stdout and redirect_stderr context managers temporarily redirect output streams. This is useful for capturing output from third-party libraries or silencing noisy functions:

# redirect_demo.py
from contextlib import redirect_stdout, redirect_stderr
import io

# Capture stdout to a string
buffer = io.StringIO()
with redirect_stdout(buffer):
    print("This goes to the buffer")
    print("So does this")

captured = buffer.getvalue()
print(f"Captured {len(captured)} characters:")
print(repr(captured))

# Silence stderr
with redirect_stderr(io.StringIO()):
    import warnings
    warnings.warn("This warning is silenced")

Output:

Captured 37 characters:
'This goes to the buffer\nSo does this\n'

This pattern is especially useful in testing, where you need to verify what a function prints without modifying the function itself.
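Here is a minimal sketch of that testing pattern. The greet function and the capture_output helper are illustrative stand-ins for whatever code you need to test:

```python
# capture_for_test.py -- verify what a function prints
from contextlib import redirect_stdout
import io

def greet(name):
    print(f"Hello, {name}!")

def capture_output(func, *args):
    """Run func and return everything it printed to stdout."""
    buffer = io.StringIO()
    with redirect_stdout(buffer):
        func(*args)
    return buffer.getvalue()

output = capture_output(greet, "Ada")
assert output == "Hello, Ada!\n"
```

The function under test never knows it was redirected, so no modification or mocking of print is needed.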

Real-Life Example: Database Transaction Manager

Let us build a practical transaction manager that handles database connections, commits on success, and rolls back on failure — all with proper resource cleanup:

# transaction_manager.py
from contextlib import contextmanager, ExitStack
import sqlite3
import os
import tempfile

@contextmanager
def database_connection(db_path):
    """Manage a database connection lifecycle."""
    conn = sqlite3.connect(db_path)
    try:
        yield conn
    finally:
        conn.close()

@contextmanager
def transaction(conn):
    """Manage a database transaction with auto-commit/rollback."""
    cursor = conn.cursor()
    try:
        yield cursor
        conn.commit()
        print("Transaction committed")
    except Exception as e:
        conn.rollback()
        print(f"Transaction rolled back: {e}")
        raise

@contextmanager
def managed_database(db_path):
    """Complete database session with connection and transaction."""
    with ExitStack() as stack:
        conn = stack.enter_context(database_connection(db_path))
        cursor = stack.enter_context(transaction(conn))
        yield cursor

# Demo
db_path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Successful transaction
with managed_database(db_path) as cursor:
    cursor.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
    cursor.execute("INSERT INTO users (name, email) VALUES (?, ?)", ("Alice", "alice@example.com"))
    cursor.execute("INSERT INTO users (name, email) VALUES (?, ?)", ("Bob", "bob@example.com"))

# Read back
with managed_database(db_path) as cursor:
    cursor.execute("SELECT * FROM users")
    for row in cursor.fetchall():
        print(f"  User: {row}")

# Failed transaction (rollback)
print("\nAttempting bad insert:")
try:
    with managed_database(db_path) as cursor:
        cursor.execute("INSERT INTO users (name, email) VALUES (?, ?)", ("Charlie", "charlie@example.com"))
        raise ValueError("Simulated error -- transaction should rollback")
except ValueError:
    pass

# Verify rollback worked
with managed_database(db_path) as cursor:
    cursor.execute("SELECT COUNT(*) FROM users")
    count = cursor.fetchone()[0]
    print(f"Users after rollback: {count} (Charlie was NOT added)")

Output:

Transaction committed
  User: (1, 'Alice', 'alice@example.com')
  User: (2, 'Bob', 'bob@example.com')
Transaction committed

Attempting bad insert:
Transaction rolled back: Simulated error -- transaction should rollback
Users after rollback: 2 (Charlie was NOT added)
Transaction committed

This example composes three context managers: database_connection handles the connection lifecycle, transaction handles commit/rollback logic, and managed_database combines both using ExitStack. The composition pattern keeps each context manager focused on a single responsibility while providing a convenient combined interface.

Frequently Asked Questions

When should I use a class-based context manager instead of @contextmanager?

Use a class when you need to store state between __enter__ and __exit__, when the context manager will be reused as a library component, or when you need the __exit__ method’s exception arguments to make decisions about exception handling. Use @contextmanager for simple setup-teardown patterns where a generator function is more readable.
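As a sketch of that last point, here is an illustrative class (the IgnoreErrors name is made up for this example) that inspects __exit__'s exception arguments and decides whether to suppress:

```python
# exit_args.py -- __exit__ receives exception details and can suppress
class IgnoreErrors:
    """Suppress only the exception types passed to the constructor."""
    def __init__(self, *exc_types):
        self.exc_types = exc_types

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Returning True suppresses the exception; False lets it propagate
        return exc_type is not None and issubclass(exc_type, self.exc_types)

with IgnoreErrors(KeyError):
    {}["missing"]  # KeyError is suppressed
print("Still running")
```

With @contextmanager you would express the same idea as a try/except around the yield, but the explicit exception arguments make fine-grained decisions easier to read in a class.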

Does contextlib work with async code?

Yes. Python 3.7+ includes contextlib.asynccontextmanager for async generators, and AsyncExitStack for managing async resources. The patterns are identical — just use async with instead of with and async def instead of def.
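A minimal sketch of the async variant, mirroring the timer from earlier (the timed_section name is illustrative):

```python
# async_cm.py -- async context manager via asynccontextmanager
import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def timed_section(label):
    """Time an async with block using the event loop clock."""
    loop = asyncio.get_running_loop()
    start = loop.time()
    try:
        yield
    finally:
        print(f"{label} took {loop.time() - start:.4f} seconds")

async def main():
    async with timed_section("sleep"):
        await asyncio.sleep(0.01)

asyncio.run(main())
```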

Can I nest context managers?

Yes, and there are multiple ways: nested with statements, comma-separated managers in a single with statement (with A() as a, B() as b:), or ExitStack for dynamic nesting. The comma syntax is preferred for a fixed number of managers; ExitStack is preferred when the number is determined at runtime.
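The comma syntax can be seen in this small sketch (the tag helper is made up for illustration); note that the managers are entered left to right and exited in reverse:

```python
# multi_with.py -- two managers in one with statement
from contextlib import contextmanager

@contextmanager
def tag(name):
    print(f"<{name}>")   # entry
    yield name
    print(f"</{name}>")  # exit

# "inner" is entered after "outer" and exited before it
with tag("outer") as a, tag("inner") as b:
    print(f"inside {a}/{b}")
```

Output:

```
<outer>
<inner>
inside outer/inner
</inner>
</outer>
```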

Can I reuse a context manager instance?

It depends. @contextmanager-based managers are single-use — entering the same instance a second time raises a RuntimeError, because the underlying generator is already exhausted. Class-based managers can be made reusable by resetting their state in __enter__. If you need to use a generator-based manager again, simply call the decorated function again to get a fresh instance.

What is contextlib.closing() for?

The closing() wrapper calls .close() on an object when the with block exits. Use it for objects that have a close() method but do not implement the context manager protocol natively — like urllib.request.urlopen() in older Python versions or custom connection objects.
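A minimal sketch with a toy connection object (the Connection class is made up for illustration; it has close() but does not implement the context manager protocol):

```python
# closing_demo.py -- wrap an object that only has close()
from contextlib import closing

class Connection:
    """Toy object with close() but no __enter__/__exit__."""
    def __init__(self):
        self.open = True

    def close(self):
        self.open = False

conn = Connection()
with closing(conn) as c:
    print(f"Open inside: {c.open}")   # True
print(f"Open after: {conn.open}")     # False -- closing() called close()
```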

Conclusion

The contextlib module transforms resource management from boilerplate-heavy class definitions into clean, expressive patterns. We covered @contextmanager for generator-based managers, ExitStack for dynamic resource composition, suppress() for clean exception ignoring, and redirect_stdout for output capture. The transaction manager example showed how these tools compose into production-ready patterns.

Try extending the database transaction manager with connection pooling, nested savepoints, or retry logic. For the complete API reference, see the official contextlib documentation.