Intermediate

You have written a function that works perfectly. Then one day it starts running slowly in production, and you have no idea which function is the bottleneck. Or maybe you need to log every time a critical function gets called, but you do not want to litter every function body with print() statements. These are the exact problems that Python decorators solve — they let you wrap extra behavior around your functions without touching the function code itself.

The good news is that decorators are built into Python’s core syntax. You do not need to install anything — just standard Python 3. The concept uses closures and first-class functions, which sounds intimidating, but the pattern is surprisingly simple once you see it in action. By the end of this article you will be writing your own decorators from scratch with confidence.

In this article we will start with a quick working example so you can see the payoff immediately. Then we will break down what decorators actually are and how they work under the hood. From there we will build a logging decorator, a timing decorator, and a retry decorator, then learn how to stack multiple decorators together. We will also cover functools.wraps — a small but critical detail that most tutorials skip. Finally, we will tie everything together with a real-life performance monitoring toolkit you can drop into any project.

Python Decorators for Logging: Quick Example

Let us jump straight into a working example. Here is a decorator that automatically logs every time a function is called, including the arguments it received and the value it returned. You can copy this code, run it, and see the result immediately.

# quick_example.py
import functools

def log_calls(func):
    """Decorator that logs function calls with arguments and return values."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"CALL: {func.__name__}({args}, {kwargs})")
        result = func(*args, **kwargs)
        print(f"RETURN: {func.__name__} -> {result}")
        return result
    return wrapper

@log_calls
def add(a, b):
    return a + b

print(add(3, 5))
print(add(10, 20))

Output:

CALL: add((3, 5), {})
RETURN: add -> 8
8
CALL: add((10, 20), {})
RETURN: add -> 30
30

With just the @log_calls line above the function definition, every call to add() now prints a log message automatically. The function itself has no idea it is being logged — it just does its job and returns the sum. That separation of concerns is the entire point of decorators. The functools.wraps call inside the decorator preserves the original function’s name and docstring, which we will explain in detail later.

Want to go deeper? Below we cover exactly how this pattern works, how to build timing and retry decorators, and how to combine multiple decorators on a single function.

What Are Python Decorators and Why Use Them?

A decorator is a function that takes another function as input, adds some behavior to it, and returns a new function. Think of it like gift wrapping — the present inside (your original function) stays exactly the same, but the wrapping paper (the decorator) adds something extra on the outside. When someone opens the gift, they get both the wrapping experience and the present itself.

In Python, functions are first-class objects. That means you can pass a function as an argument to another function, return a function from a function, and assign a function to a variable. Decorators take advantage of all three of these capabilities. The @decorator_name syntax above a function definition is just shorthand — writing @log_calls above def add() is exactly the same as writing add = log_calls(add) after the function definition.
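To prove that equivalence to yourself, here is a tiny sketch that decorates a function manually, without the @ syntax (the shout decorator here is a toy example of ours):

```python
# manual_decoration.py — the @ syntax is shorthand for reassignment
def shout(func):
    """Toy decorator that upper-cases the wrapped function's return value."""
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

def greet(name):
    return f"hello, {name}"

# Exactly equivalent to writing @shout above the def:
greet = shout(greet)

print(greet("alice"))  # HELLO, ALICE
```

After the reassignment, the name greet points at the wrapper, and the original function is only reachable through the closure — which is precisely what the @ syntax does behind the scenes.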

Here is when decorators are the right tool for the job versus when you should use something else:

Use Case | Decorator? | Why
--- | --- | ---
Add logging to multiple functions | Yes | Same behavior applied to many functions without repeating code
Measure execution time | Yes | Timing logic stays separate from business logic
Retry on failure | Yes | Retry policy wraps around the function cleanly
Input validation | Yes | Validate arguments before the function runs
Change what a function returns | Maybe | Can work, but modifying return values can be confusing to callers
Complex state management | No | Use a class instead — decorators should be stateless or nearly so

The key principle is that decorators work best when you want to add the same cross-cutting behavior to multiple functions. If you find yourself copying the same three lines of logging code into every function, that is a strong signal that a decorator would clean things up. Let us start building decorators from scratch.

Basic Decorator Syntax in Python

Before we build useful decorators, let us understand the basic pattern. A decorator is a function that accepts a function as its argument, defines an inner function (called a wrapper) that adds behavior, and returns the wrapper. Here is the simplest possible decorator that does nothing except call the original function.

# basic_decorator.py
def my_decorator(func):
    """A minimal decorator that wraps a function without changing behavior."""
    def wrapper(*args, **kwargs):
        # You could add behavior here (before the call)
        result = func(*args, **kwargs)
        # You could add behavior here (after the call)
        return result
    return wrapper

@my_decorator
def greet(name):
    return f"Hello, {name}!"

print(greet("Alice"))
print(greet("Bob"))

Output:

Hello, Alice!
Hello, Bob!

The wrapper function uses *args and **kwargs to accept any combination of positional and keyword arguments. This is important because the decorator needs to work with functions that have different signatures — a function with two parameters, a function with five parameters, or a function with keyword-only parameters should all work the same way. The result = func(*args, **kwargs) line calls the original function with whatever arguments were passed in, and return result passes the return value back to the caller.

Now let us replace those placeholder comments with real, useful behavior. We will start with a logging decorator.

Building a Logging Decorator

A logging decorator captures information about function calls as they happen. This is incredibly useful for debugging — instead of sprinkling print() statements throughout your code and then removing them later, you simply add or remove the @log_calls decorator. Here is a robust version that formats the output nicely and handles both positional and keyword arguments.

# logging_decorator.py
import functools
from datetime import datetime

def log_calls(func):
    """Log every call to the decorated function with timestamp, args, and result."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        # Format arguments for readable output
        arg_parts = [repr(a) for a in args]
        kwarg_parts = [f"{k}={v!r}" for k, v in kwargs.items()]
        all_args = ", ".join(arg_parts + kwarg_parts)

        print(f"[{timestamp}] CALL: {func.__name__}({all_args})")
        result = func(*args, **kwargs)
        print(f"[{timestamp}] RETURN: {func.__name__} -> {result!r}")
        return result
    return wrapper

@log_calls
def calculate_discount(price, discount_percent=10):
    """Calculate the discounted price."""
    discount_amount = price * (discount_percent / 100)
    return round(price - discount_amount, 2)

# Test with positional and keyword arguments
print(calculate_discount(99.99))
print()
print(calculate_discount(49.99, discount_percent=25))

Output:

[2026-03-13 10:30:00] CALL: calculate_discount(99.99)
[2026-03-13 10:30:00] RETURN: calculate_discount -> 89.99
89.99

[2026-03-13 10:30:00] CALL: calculate_discount(49.99, discount_percent=25)
[2026-03-13 10:30:00] RETURN: calculate_discount -> 37.49
37.49

Notice how the decorator captures the function name, the exact arguments passed (including keyword arguments like discount_percent=25), and the return value. The !r format specifier uses repr() so strings show up with quotes, making it easy to distinguish "hello" from hello in your logs. The timestamp tells you exactly when each call happened, which is essential for debugging timing-related issues in production. You can easily swap print() for Python’s built-in logging module to write these messages to a file instead of the console.
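As a sketch of that swap, here is the same decorator routed through the logging module instead of print() (the basicConfig format string is just one reasonable choice):

```python
# logging_module_version.py — same decorator, using the logging module
import functools
import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger(__name__)

def log_calls(func):
    """Log every call via the logging module (file/console/handler-agnostic)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        all_args = ", ".join(
            [repr(a) for a in args] + [f"{k}={v!r}" for k, v in kwargs.items()]
        )
        logger.info("CALL: %s(%s)", func.__name__, all_args)
        result = func(*args, **kwargs)
        logger.info("RETURN: %s -> %r", func.__name__, result)
        return result
    return wrapper

@log_calls
def add(a, b):
    return a + b

add(3, 5)
```

Because the messages now go through a logger, you can redirect them to a file, adjust verbosity with log levels, or silence them entirely by configuring handlers — without touching the decorator again.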


Building a Timing Decorator

Performance matters, and the first step to improving performance is measuring it. A timing decorator wraps a function and records how long it takes to execute. This is far more convenient than manually adding time.time() calls at the start and end of every function you want to profile.

# timing_decorator.py
import functools
import time

def timer(func):
    """Measure and print the execution time of the decorated function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start_time = time.perf_counter()  # High-precision timer
        result = func(*args, **kwargs)
        end_time = time.perf_counter()
        elapsed = end_time - start_time
        print(f"⏱ {func.__name__} took {elapsed:.4f} seconds")
        return result
    return wrapper

@timer
def slow_operation():
    """Simulate a slow operation."""
    total = 0
    for i in range(1_000_000):
        total += i * i
    return total

@timer
def fast_operation():
    """Simulate a fast operation."""
    return sum(range(1000))

print(f"Result: {slow_operation()}")
print(f"Result: {fast_operation()}")

Output:

⏱ slow_operation took 0.0892 seconds
Result: 333332833333500000
⏱ fast_operation took 0.0000 seconds
Result: 499500

We use time.perf_counter() instead of time.time() because it is the highest-resolution clock available on your operating system. This matters when measuring fast functions: on some platforms, time.time() updates only every few milliseconds and can report 0.0 seconds for an operation that actually takes 0.3 milliseconds. The :.4f format gives us four decimal places, which is enough precision for most profiling needs. If you need sub-millisecond accuracy, change it to :.6f for microsecond resolution.
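One-off timings can be noisy, so a variant that accumulates every measurement for later inspection is a small extension. Here is a sketch, where the call_times attribute name is our own invention:

```python
# timer_with_stats.py — timer variant that keeps every measurement
import functools
import time

def timer_with_stats(func):
    """Timer that records each elapsed time on the wrapper for later analysis."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        wrapper.call_times.append(time.perf_counter() - start)
        return result
    wrapper.call_times = []  # inspect min/max/sum after your workload runs
    return wrapper

@timer_with_stats
def work(n):
    return sum(range(n))

for _ in range(5):
    work(10_000)

print(f"calls: {len(work.call_times)}, total: {sum(work.call_times):.6f}s")
```

Storing the list on the wrapper itself keeps the statistics attached to the function they describe, so work.call_times is available wherever the function is.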

Using functools.wraps for Metadata Preservation

You may have noticed that every decorator we have written includes @functools.wraps(func) on the wrapper function. This is not optional — without it, the decorated function loses its identity. Let us see exactly what goes wrong when you skip it.

# without_wraps.py
def bad_decorator(func):
    """Decorator WITHOUT functools.wraps — demonstrates the problem."""
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@bad_decorator
def calculate_tax(amount, rate=0.08):
    """Calculate sales tax on a purchase amount."""
    return round(amount * rate, 2)

# Check the function's identity
print(f"Function name: {calculate_tax.__name__}")
print(f"Docstring: {calculate_tax.__doc__}")
print(f"Help output:")
help(calculate_tax)

Output:

Function name: wrapper
Docstring: None
Help output:
Help on function wrapper in module __main__:

wrapper(*args, **kwargs)

The function thinks its name is wrapper and its docstring is None. This breaks debugging tools, documentation generators, and any code that inspects function metadata. Now let us see the same decorator with functools.wraps applied.

# with_wraps.py
import functools

def good_decorator(func):
    """Decorator WITH functools.wraps — preserves the original function's metadata."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@good_decorator
def calculate_tax(amount, rate=0.08):
    """Calculate sales tax on a purchase amount."""
    return round(amount * rate, 2)

# Check the function's identity
print(f"Function name: {calculate_tax.__name__}")
print(f"Docstring: {calculate_tax.__doc__}")
print(f"Help output:")
help(calculate_tax)

Output:

Function name: calculate_tax
Docstring: Calculate sales tax on a purchase amount.
Help output:
Help on function calculate_tax in module __main__:

calculate_tax(amount, rate=0.08)
    Calculate sales tax on a purchase amount.

With @functools.wraps(func), the decorated function retains its original name, docstring, and even its parameter signature in the help output. This is a one-line addition that prevents hours of confusion. Every decorator you write should include it — no exceptions. The rule is simple: if your decorator defines a wrapper function, put @functools.wraps(func) on the line directly above it.


Decorators That Accept Arguments

Sometimes you want your decorator to be configurable. For example, a logging decorator where you can set the log level, or a retry decorator where you can specify the number of retries. This requires an extra layer of nesting — a function that returns a decorator, which returns a wrapper. It sounds complicated, but the pattern is consistent once you see it.

# decorator_with_args.py
import functools

def log_with_level(level="INFO"):
    """Decorator factory that accepts a log level argument."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            arg_parts = [repr(a) for a in args]
            kwarg_parts = [f"{k}={v!r}" for k, v in kwargs.items()]
            all_args = ", ".join(arg_parts + kwarg_parts)
            print(f"[{level}] Calling {func.__name__}({all_args})")
            result = func(*args, **kwargs)
            print(f"[{level}] {func.__name__} returned {result!r}")
            return result
        return wrapper
    return decorator

@log_with_level("DEBUG")
def fetch_user(user_id):
    """Simulate fetching a user from a database."""
    return {"id": user_id, "name": "Alice", "email": "alice@example.com"}

@log_with_level("WARNING")
def delete_user(user_id):
    """Simulate deleting a user — dangerous operation."""
    return f"User {user_id} deleted"

print(fetch_user(42))
print()
print(delete_user(7))

Output:

[DEBUG] Calling fetch_user(42)
[DEBUG] fetch_user returned {'id': 42, 'name': 'Alice', 'email': 'alice@example.com'}
{'id': 42, 'name': 'Alice', 'email': 'alice@example.com'}

[WARNING] Calling delete_user(7)
[WARNING] delete_user returned 'User 7 deleted'
User 7 deleted

The key difference is that log_with_level("DEBUG") is not the decorator itself — it is a decorator factory that returns the actual decorator. When Python sees @log_with_level("DEBUG"), it first calls log_with_level("DEBUG"), which returns the decorator function. Then Python applies that decorator function to fetch_user. This three-layer structure (factory → decorator → wrapper) is the standard pattern for any decorator that needs configuration.

Building a Retry Decorator

Network calls fail. APIs time out. Databases hiccup. A retry decorator automatically re-runs a function when it raises an exception, with a configurable number of attempts and delay between retries. This is one of the most practical decorators you will ever write.

# retry_decorator.py
import functools
import time
import random

def retry(max_attempts=3, delay=1.0):
    """Retry a function up to max_attempts times with a delay between retries."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            last_exception = None
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception as e:
                    last_exception = e
                    if attempt < max_attempts:
                        print(f"⚠ {func.__name__} failed (attempt {attempt}/{max_attempts}): {e}")
                        print(f"  Retrying in {delay}s...")
                        time.sleep(delay)
                    else:
                        print(f"✗ {func.__name__} failed after {max_attempts} attempts: {e}")
            raise last_exception
        return wrapper
    return decorator

@retry(max_attempts=3, delay=0.5)
def unreliable_api_call():
    """Simulate an API call that fails randomly."""
    if random.random() < 0.7:  # 70% chance of failure
        raise ConnectionError("Server unavailable")
    return {"status": "ok", "data": [1, 2, 3]}

# Set seed for reproducible output
random.seed(42)
try:
    result = unreliable_api_call()
    print(f"Success: {result}")
except ConnectionError as e:
    print(f"Final failure: {e}")

Output:

⚠ unreliable_api_call failed (attempt 1/3): Server unavailable
  Retrying in 0.5s...
⚠ unreliable_api_call failed (attempt 2/3): Server unavailable
  Retrying in 0.5s...
✗ unreliable_api_call failed after 3 attempts: Server unavailable
Final failure: Server unavailable

The retry decorator stores the last exception in last_exception so it can re-raise it if all attempts fail. This way the caller still gets the original exception type (ConnectionError) rather than a generic error. The time.sleep(delay) adds a pause between retries, which is important for network operations — hammering a server that just failed with immediate retries usually makes things worse. In production, you would typically use exponential backoff (doubling the delay each time) instead of a fixed delay.
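Here is a minimal sketch of that exponential-backoff variant, doubling the delay after each failed attempt (the factor of 2 is a common but arbitrary choice):

```python
# retry_backoff.py — retry with exponentially growing delays
import functools
import time

def retry_backoff(max_attempts=3, base_delay=0.5):
    """Retry with delays of base_delay, 2x base_delay, 4x base_delay, ..."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise  # out of attempts: re-raise the last exception
                    delay = base_delay * (2 ** (attempt - 1))
                    print(f"Attempt {attempt} failed, retrying in {delay}s...")
                    time.sleep(delay)
        return wrapper
    return decorator

@retry_backoff(max_attempts=4, base_delay=0.1)
def flaky():
    """Toy function that fails on its first two calls."""
    flaky.calls += 1
    if flaky.calls < 3:
        raise ConnectionError("boom")
    return "ok"

flaky.calls = 0
print(flaky())  # succeeds on the third attempt
```

The bare raise inside the except block re-raises the exception currently being handled, so the caller still sees the original ConnectionError rather than a wrapped substitute.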

Stacking Multiple Decorators

One of the most powerful features of decorators is that you can stack them. When you put multiple @decorator lines above a function, they are applied from bottom to top — the decorator closest to the function definition runs first, and the outermost decorator wraps everything. This lets you combine logging, timing, and retry logic on a single function.

# stacking_decorators.py
import functools
import time

def log_calls(func):
    """Log function calls."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"LOG: Calling {func.__name__}")
        result = func(*args, **kwargs)
        print(f"LOG: {func.__name__} returned {result!r}")
        return result
    return wrapper

def timer(func):
    """Time function execution."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"⏱ {func.__name__} took {elapsed:.4f}s")
        return result
    return wrapper

@log_calls   # Applied second (outermost)
@timer       # Applied first (innermost)
def process_data(items):
    """Process a list of items with a simulated delay."""
    time.sleep(0.1)  # Simulate work
    return [item.upper() for item in items]

result = process_data(["hello", "world", "python"])

Output:

LOG: Calling process_data
⏱ process_data took 0.1003s
LOG: process_data returned ['HELLO', 'WORLD', 'PYTHON']

The execution order matters. Because @timer is closest to the function, it wraps process_data first. Then @log_calls wraps the already-timed version. When you call process_data(), the log decorator runs first (printing the "Calling" message), then the timer starts, then the actual function runs, then the timer stops (printing the elapsed time), and finally the log decorator prints the return value. If you reversed the order, the timer would measure the time including the logging overhead, which is usually not what you want.


Real-Life Example: Performance Monitoring Toolkit


Let us tie everything together into a practical project. This performance monitoring toolkit gives you three decorators that you can drop into any Python project: @monitor for combined logging and timing, @retry for fault tolerance, and @validate_types for runtime type checking. Together they form a lightweight monitoring layer for any application.

# performance_toolkit.py
import functools
import time
from datetime import datetime

def monitor(func):
    """Combined logging and timing decorator for production monitoring."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        timestamp = datetime.now().strftime("%H:%M:%S")
        arg_parts = [repr(a) for a in args]
        kwarg_parts = [f"{k}={v!r}" for k, v in kwargs.items()]
        signature = ", ".join(arg_parts + kwarg_parts)

        print(f"[{timestamp}] → {func.__name__}({signature})")
        start = time.perf_counter()

        try:
            result = func(*args, **kwargs)
            elapsed = time.perf_counter() - start
            print(f"[{timestamp}] ← {func.__name__} returned {result!r} ({elapsed:.4f}s)")
            return result
        except Exception as e:
            elapsed = time.perf_counter() - start
            print(f"[{timestamp}] ✗ {func.__name__} raised {type(e).__name__}: {e} ({elapsed:.4f}s)")
            raise
    return wrapper

def validate_types(**expected_types):
    """Validate argument types at runtime before the function executes."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Check keyword arguments against expected types
            for param_name, expected_type in expected_types.items():
                if param_name in kwargs:
                    value = kwargs[param_name]
                    if not isinstance(value, expected_type):
                        raise TypeError(
                            f"{param_name} must be {expected_type.__name__}, "
                            f"got {type(value).__name__}"
                        )
            return func(*args, **kwargs)
        return wrapper
    return decorator

# --- Use the toolkit ---

@monitor
def fetch_user_profile(user_id):
    """Simulate fetching a user profile from a database."""
    time.sleep(0.05)  # Simulate database query
    users = {1: "Alice", 2: "Bob", 3: "Charlie"}
    if user_id not in users:
        raise ValueError(f"User {user_id} not found")
    return {"id": user_id, "name": users[user_id], "active": True}

@monitor
@validate_types(amount=float, description=str)
def process_payment(amount=0.0, description=""):
    """Process a payment transaction."""
    time.sleep(0.02)  # Simulate payment processing
    return {"status": "completed", "amount": amount, "ref": "TXN-001"}

# Run the toolkit
print("=== Performance Monitoring Toolkit Demo ===\n")

# Successful call
profile = fetch_user_profile(1)
print(f"Got profile: {profile}\n")

# Successful payment
payment = process_payment(amount=29.99, description="Monthly subscription")
print(f"Payment result: {payment}\n")

# Failed call (user not found)
try:
    fetch_user_profile(999)
except ValueError:
    print("(Error handled gracefully)\n")

# Type validation failure
try:
    process_payment(amount="not a number", description="Bad payment")
except TypeError as e:
    print(f"Type error caught: {e}")

Output:

=== Performance Monitoring Toolkit Demo ===

[10:30:00] → fetch_user_profile(1)
[10:30:00] ← fetch_user_profile returned {'id': 1, 'name': 'Alice', 'active': True} (0.0503s)
Got profile: {'id': 1, 'name': 'Alice', 'active': True}

[10:30:00] → process_payment(amount=29.99, description='Monthly subscription')
[10:30:00] ← process_payment returned {'status': 'completed', 'amount': 29.99, 'ref': 'TXN-001'} (0.0201s)
Payment result: {'status': 'completed', 'amount': 29.99, 'ref': 'TXN-001'}

[10:30:00] → fetch_user_profile(999)
[10:30:00] ✗ fetch_user_profile raised ValueError: User 999 not found (0.0501s)
(Error handled gracefully)

Type error caught: amount must be float, got str

This toolkit demonstrates how decorators keep your business logic clean. The fetch_user_profile function only cares about looking up users — it knows nothing about logging, timing, or error formatting. All of that cross-cutting behavior lives in the decorators. You could add @monitor to every function in your application with a single line per function, giving you instant visibility into how your code performs. To extend this project, try adding a decorator that caches results (memoization) or one that limits how many times a function can be called per minute (rate limiting).
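As a starting point for the caching extension, a hand-rolled memoization decorator might look like this sketch (in real code, functools.lru_cache does this for you, with eviction and keyword support on top):

```python
# memoize.py — minimal memoization decorator (hashable positional args only)
import functools

def memoize(func):
    """Cache results keyed by the positional arguments."""
    cache = {}
    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper

@memoize
def fib(n):
    """Naive Fibonacci — exponential without caching, linear with it."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040, computed with only 31 distinct calls
```

Note the recursive calls go through the decorated name fib, so they hit the cache too — that is what collapses the exponential call tree into a linear one.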

Frequently Asked Questions

What is the difference between a decorator and a regular function call?

A decorator wraps a function at definition time, not at call time. When you put @timer above a function, the wrapping happens once when Python loads the file. After that, every call to the function automatically goes through the wrapper. A one-off call like timer(my_func)(3, 5) does the same thing but only for that single call. Decorators give you permanent, reusable wrapping with clean syntax.

Can you use a class as a decorator instead of a function?

Yes. Any callable object can be a decorator, and classes are callable (calling a class creates an instance). You define __init__ to accept the function and __call__ to act as the wrapper. Class-based decorators are useful when you need to maintain state between calls, such as counting how many times a function has been called. For stateless decorators like logging and timing, function-based decorators are simpler and more common.
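A sketch of that call-counting pattern, where functools.update_wrapper plays the role that @functools.wraps plays for function-based decorators:

```python
# class_decorator.py — a class as a decorator, keeping state between calls
import functools

class CountCalls:
    """Class-based decorator that counts how many times a function runs."""
    def __init__(self, func):
        functools.update_wrapper(self, func)  # preserve name/docstring
        self.func = func
        self.count = 0

    def __call__(self, *args, **kwargs):
        self.count += 1
        return self.func(*args, **kwargs)

@CountCalls
def ping():
    """Return a canned response."""
    return "pong"

for _ in range(3):
    ping()

print(f"{ping.__name__} was called {ping.count} times")
```

Because @CountCalls replaces ping with a CountCalls instance, the counter lives on that instance and persists across calls — exactly the kind of state a plain closure makes awkward.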

How do you debug a decorated function?

Always use @functools.wraps(func) in your decorators — this is the single most important step. Without it, tracebacks show the wrapper function name instead of your actual function name, making debugging nearly impossible. If you need to temporarily remove a decorator for debugging, just comment out the @decorator line. You can also access the original unwrapped function via decorated_func.__wrapped__, which functools.wraps sets automatically.
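For example, given any decorator that uses functools.wraps, you can reach the original function through __wrapped__ (the noisy decorator here is a toy example):

```python
# wrapped_access.py — bypassing a decorator via __wrapped__
import functools

def noisy(func):
    """Toy decorator that announces each call."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print("wrapped call")
        return func(*args, **kwargs)
    return wrapper

@noisy
def square(x):
    return x * x

print(square(4))              # goes through the wrapper, prints "wrapped call"
print(square.__wrapped__(4))  # calls the original directly, no announcement
```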

Do decorators add performance overhead?

Yes, but it is typically negligible. Each decorated call adds the overhead of one extra function call plus whatever your wrapper does. For a simple logging decorator, this is microseconds. The overhead only matters if you are decorating a function that gets called millions of times in a tight loop. In that case, measure with time.perf_counter() and decide whether the convenience is worth the cost. For most applications — web servers, CLI tools, data processing scripts — the overhead is invisible.

Can decorators work with async functions?

Yes, but you need to make the wrapper function async too. Use async def wrapper(*args, **kwargs) and result = await func(*args, **kwargs) inside the decorator. If you want a single decorator that works with both sync and async functions, use import asyncio and check asyncio.iscoroutinefunction(func) to decide whether to use await. The functools.wraps pattern works the same way for async decorators.
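A sketch of a dual-mode timer following that pattern, choosing the right wrapper at decoration time:

```python
# async_timer.py — one timing decorator for both sync and async functions
import asyncio
import functools
import time

def timer(func):
    """Time sync and async functions with the appropriate wrapper."""
    if asyncio.iscoroutinefunction(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = await func(*args, **kwargs)  # must await the coroutine
            print(f"{func.__name__} took {time.perf_counter() - start:.4f}s")
            return result
    else:
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            print(f"{func.__name__} took {time.perf_counter() - start:.4f}s")
            return result
    return wrapper

@timer
async def fetch():
    await asyncio.sleep(0.05)  # simulate network I/O
    return "data"

print(asyncio.run(fetch()))
```

The check happens once, when the decorator is applied, so each call pays no branching cost — and a plain def decorated with the same @timer falls through to the synchronous wrapper.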

Does the order of stacked decorators matter?

Absolutely. Decorators are applied bottom-to-top (closest to the function first), but their wrappers execute top-to-bottom when the function is called. If you stack @log_calls on top of @timer, the timer measures only the function itself; reverse the order and the timer also measures the logging overhead. The general rule is: put the decorator whose behavior should run first (outermost) on top, and put the decorator that should run closest to the actual function on the bottom.

Conclusion

In this article we covered everything you need to start using Python decorators effectively in your own projects. We started with the basic decorator pattern — a function that takes a function and returns a wrapper function. We built a logging decorator that captures function calls with timestamps and arguments, a timing decorator that measures execution time with time.perf_counter(), and a retry decorator that gracefully handles transient failures. We also covered functools.wraps (always use it), decorator factories for configurable decorators, and how stacking multiple decorators affects execution order.

The performance monitoring toolkit in the real-life example gives you a foundation you can build on. Try extending it with a caching decorator using functools.lru_cache, a rate limiter that tracks calls per time window, or an authentication decorator for a web application. Decorators are one of Python's most elegant features — once you get comfortable with the pattern, you will find uses for them everywhere.

For a deeper dive into decorators, closures, and first-class functions, check out the official Python documentation on decorators and the functools module documentation.