Python’s functools module is a treasure chest of higher-order functions that transform how you write and compose functions. Whether you need to cache expensive computations, create partial function applications, or build powerful decorators, functools provides elegant solutions that make your code cleaner and faster.
If you have ever written the same wrapper logic around multiple functions, or wished you could “freeze” some arguments into a function call, functools is exactly what you need. It sits in the standard library, so there is nothing extra to install — just import and go.
In this tutorial, you will master the most practical functools tools with hands-on examples you can use in real projects today. We will cover caching, partial application, function composition, comparison helpers, and more.
Quick Answer
The functools module provides higher-order functions for working with callable objects. The most commonly used tools are @lru_cache for memoization, partial() for freezing function arguments, reduce() for cumulative operations, and @wraps for building proper decorators. Import with from functools import lru_cache, partial, reduce, wraps.
Quick Example
from functools import lru_cache

@lru_cache(maxsize=128)
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(50))
print(fibonacci.cache_info())
12586269025
CacheInfo(hits=48, misses=51, maxsize=128, currsize=51)
Without the cache, computing fibonacci(50) would take an impossibly long time due to exponential recursive calls. With @lru_cache, it returns instantly because each unique input is computed only once.
What Is the Functools Module?
The functools module is part of Python's standard library and provides functions that act on or return other functions. The name stands for "function tools," and every utility in the module helps you work with callables more effectively.
Higher-order functions are functions that take other functions as arguments or return functions as results. This is a core concept in functional programming, and functools brings these ideas into Python in a practical, Pythonic way. You do not need to adopt a fully functional style — you can sprinkle these tools into your existing object-oriented or procedural code wherever they help.
The module has been part of Python since version 2.5 and has gained powerful additions over the years. Python 3.8 added cached_property, and Python 3.9 added cache as a simpler shorthand for unbounded LRU caching.
Core Functools Tools
lru_cache — Automatic Memoization
The @lru_cache decorator caches function results based on the arguments passed. LRU stands for "Least Recently Used," meaning when the cache reaches its maximum size, the oldest unused entries get evicted first.
from functools import lru_cache
import time

@lru_cache(maxsize=256)
def expensive_lookup(user_id):
    # Simulate a slow database query
    time.sleep(0.5)
    return {"id": user_id, "name": f"User_{user_id}", "active": True}

# First call takes 0.5 seconds
start = time.time()
result = expensive_lookup(42)
print(f"First call: {time.time() - start:.3f}s -> {result}")

# Second call is instant (cached)
start = time.time()
result = expensive_lookup(42)
print(f"Cached call: {time.time() - start:.6f}s -> {result}")

# Check cache statistics
print(expensive_lookup.cache_info())

# Clear the cache when needed
expensive_lookup.cache_clear()
First call: 0.500s -> {'id': 42, 'name': 'User_42', 'active': True}
Cached call: 0.000002s -> {'id': 42, 'name': 'User_42', 'active': True}
CacheInfo(hits=1, misses=1, maxsize=256, currsize=1)
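To see the eviction policy itself, here is a minimal sketch with a deliberately tiny cache (the double function is illustrative): once a third unique argument arrives, the least recently used entry is dropped and must be recomputed on its next use.

```python
from functools import lru_cache

@lru_cache(maxsize=2)
def double(n):
    print(f"computing double({n})")
    return n * 2

double(1)   # miss: computed and cached
double(2)   # miss: cache now holds 1 and 2
double(1)   # hit: 1 becomes the most recently used entry
double(3)   # miss: cache is full, so 2 (least recently used) is evicted
double(2)   # miss again: 2 was evicted and must be recomputed
print(double.cache_info())  # CacheInfo(hits=1, misses=4, maxsize=2, currsize=2)
```

Note that the hit on double(1) is what saves it from eviction: recency, not insertion order, decides which entry goes first.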
cache — Unbounded Memoization
Python 3.9 introduced @cache as a simpler alternative to @lru_cache(maxsize=None). It caches every unique call forever, which is perfect when you know the set of possible inputs is bounded.
from functools import cache

@cache
def factorial(n):
    if n <= 1:
        return 1
    return n * factorial(n - 1)

print(factorial(10))
print(factorial(20))  # Reuses the cached factorial(10)
print(factorial.cache_info())
3628800
2432902008176640000
CacheInfo(hits=1, misses=20, maxsize=None, currsize=20)
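One constraint to keep in mind: both @cache and @lru_cache build the cache key from the arguments, so every argument must be hashable. A minimal sketch of the usual workaround, converting a list to a tuple before the call (total_length is an illustrative name):

```python
from functools import cache

@cache
def total_length(words):  # words must be hashable, e.g. a tuple
    return sum(len(w) for w in words)

data = ["alpha", "beta", "gamma"]
# total_length(data) would raise TypeError: unhashable type: 'list'
print(total_length(tuple(data)))  # 14
```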
partial — Freeze Function Arguments
The partial() function creates a new callable with some arguments pre-filled. This is incredibly useful when you need to pass a function somewhere that expects fewer arguments than your function takes.
from functools import partial

def power(base, exponent):
    return base ** exponent

# Create specialized functions
square = partial(power, exponent=2)
cube = partial(power, exponent=3)

print(square(5))
print(cube(4))

# Practical example: configuring a logger
def log_message(level, component, message):
    print(f"[{level}] {component}: {message}")

# Create component-specific loggers
auth_log = partial(log_message, component="AUTH")
db_log = partial(log_message, component="DATABASE")

auth_log("INFO", message="User logged in")
auth_log("WARNING", message="Failed login attempt")
db_log("ERROR", message="Connection timeout")
25
64
[INFO] AUTH: User logged in
[WARNING] AUTH: Failed login attempt
[ERROR] DATABASE: Connection timeout
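Positional arguments can be frozen too (from the left), and every partial object exposes what it captured through its .func, .args, and .keywords attributes, which helps with debugging. A short sketch reusing the power function from above:

```python
from functools import partial

def power(base, exponent):
    return base ** exponent

# Positional arguments are frozen from the left
two_to_the = partial(power, 2)
print(two_to_the(10))            # 1024

# Inspect what the partial has captured
print(two_to_the.func.__name__)  # power
print(two_to_the.args)           # (2,)
print(two_to_the.keywords)       # {}
```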
reduce — Cumulative Operations
The reduce() function applies a two-argument function cumulatively to the items of a sequence, reducing it to a single value. It processes items left to right, carrying the result forward at each step.
from functools import reduce
# Sum of all numbers (same as built-in sum)
numbers = [1, 2, 3, 4, 5]
total = reduce(lambda a, b: a + b, numbers)
print(f"Sum: {total}")
# Find maximum value (same as built-in max)
largest = reduce(lambda a, b: a if a > b else b, numbers)
print(f"Max: {largest}")
# Flatten nested lists
nested = [[1, 2], [3, 4], [5, 6]]
flat = reduce(lambda a, b: a + b, nested)
print(f"Flattened: {flat}")
# Build a dictionary from pairs
pairs = [("name", "Alice"), ("age", 30), ("city", "Melbourne")]
result = reduce(lambda d, pair: {**d, pair[0]: pair[1]}, pairs, {})
print(f"Dict: {result}")
Sum: 15
Max: 5
Flattened: [1, 2, 3, 4, 5, 6]
Dict: {'name': 'Alice', 'age': 30, 'city': 'Melbourne'}
A note on readability: sum(), max(), min(), and any() are clearer for the common cases they cover. Reach for reduce only when you need a custom accumulation pattern that has no built-in equivalent.
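The optional third argument (the initializer) matters most with empty sequences: without it, reduce() raises a TypeError on an empty iterable; with it, the initializer is simply returned. A quick sketch:

```python
from functools import reduce
import operator

# Without an initializer, an empty sequence is an error
try:
    reduce(operator.add, [])
except TypeError as exc:
    print(f"Empty sequence: {exc}")

# With an initializer, the result is well-defined
print(reduce(operator.add, [], 0))            # 0
print(reduce(operator.mul, [1, 2, 3, 4], 1))  # 24
```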
wraps — Build Proper Decorators
When you write a decorator, the wrapper function replaces the original function's metadata (name, docstring, signature). The @wraps decorator preserves this metadata, which is essential for debugging and documentation tools.
from functools import wraps
import time

def timing_decorator(func):
    @wraps(func)  # Preserves func's __name__, __doc__, etc.
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.4f}s")
        return result
    return wrapper

@timing_decorator
def process_data(items):
    """Process a list of items and return the total."""
    return sum(items)

# The function metadata is preserved
print(f"Name: {process_data.__name__}")
print(f"Doc: {process_data.__doc__}")
print(f"Result: {process_data(range(1000000))}")
Name: process_data
Doc: Process a list of items and return the total.
process_data took 0.0234s
Result: 499999500000
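To see what @wraps is actually protecting, here is a side-by-side sketch of the same pass-through decorator with and without it (the function names are illustrative):

```python
import functools

def without_wraps(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

def with_wraps(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@without_wraps
def greet():
    """Say hello."""

@with_wraps
def farewell():
    """Say goodbye."""

# Without @wraps, the wrapper's own metadata leaks through
print(greet.__name__, greet.__doc__)        # wrapper None
print(farewell.__name__, farewell.__doc__)  # farewell Say goodbye.
```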
cached_property — One-Time Computed Attributes
The @cached_property decorator turns a method into a property that is computed once and then cached as a normal attribute. This is perfect for expensive calculations that do not change after the object is created.
from functools import cached_property
import statistics

class DataAnalysis:
    def __init__(self, data):
        self._data = list(data)

    @cached_property
    def mean(self):
        print("Computing mean...")
        return statistics.mean(self._data)

    @cached_property
    def std_dev(self):
        print("Computing standard deviation...")
        return statistics.stdev(self._data)

    @cached_property
    def summary(self):
        print("Building summary...")
        return {
            "count": len(self._data),
            "mean": self.mean,
            "std_dev": self.std_dev,
            "min": min(self._data),
            "max": max(self._data)
        }

analysis = DataAnalysis(range(1, 10001))
print(analysis.mean)     # Computes and caches
print(analysis.mean)     # Returns cached value (no "Computing..." message)
print(analysis.summary)  # Triggers mean cache hit, computes std_dev
Computing mean...
5000.5
5000.5
Building summary...
Computing standard deviation...
{'count': 10000, 'mean': 5000.5, 'std_dev': 2886.896, 'min': 1, 'max': 10000}
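Because the cached value is stored as a regular instance attribute, it does not track later changes to the underlying data. Deleting the attribute with del forces a recompute on the next access, as this small sketch (with an illustrative Sensor class) shows:

```python
from functools import cached_property

class Sensor:
    def __init__(self):
        self._readings = [1.0, 2.0, 3.0]

    @cached_property
    def average(self):
        return sum(self._readings) / len(self._readings)

s = Sensor()
print(s.average)   # 2.0, computed and cached

s._readings.append(10.0)
print(s.average)   # still 2.0: the cached value is stale

del s.average      # drop the cached attribute
print(s.average)   # 4.0, recomputed from the new data
```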
total_ordering — Complete Comparison Methods
The @total_ordering class decorator lets you define just __eq__ and one ordering method (__lt__, __le__, __gt__, or __ge__), and it fills in the rest automatically.
from functools import total_ordering

@total_ordering
class Version:
    def __init__(self, major, minor, patch):
        self.major = major
        self.minor = minor
        self.patch = patch

    def __eq__(self, other):
        return (self.major, self.minor, self.patch) == \
               (other.major, other.minor, other.patch)

    def __lt__(self, other):
        return (self.major, self.minor, self.patch) < \
               (other.major, other.minor, other.patch)

    def __repr__(self):
        return f"Version({self.major}.{self.minor}.{self.patch})"

versions = [Version(2, 1, 0), Version(1, 9, 5), Version(2, 0, 1), Version(1, 9, 5)]
print(sorted(versions))
print(Version(2, 0, 0) >= Version(1, 9, 9))
print(Version(1, 0, 0) <= Version(1, 0, 0))
[Version(1.9.5), Version(1.9.5), Version(2.0.1), Version(2.1.0)]
True
True
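One refinement worth making in production code: have the comparison methods return NotImplemented for unrelated types, so Python can fall back to the other operand's comparison instead of raising an AttributeError. A sketch of the same Version class with that guard:

```python
from functools import total_ordering

@total_ordering
class Version:
    def __init__(self, major, minor, patch):
        self.key = (major, minor, patch)

    def __eq__(self, other):
        if not isinstance(other, Version):
            return NotImplemented  # let Python try the other operand
        return self.key == other.key

    def __lt__(self, other):
        if not isinstance(other, Version):
            return NotImplemented
        return self.key < other.key

print(Version(1, 0, 0) == "1.0.0")           # False, not an AttributeError
print(Version(1, 2, 0) < Version(1, 10, 0))  # True
```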
Real-Life Project: Building a Plugin System with Functools
Let us build a practical plugin system for a web application that uses several functools features together. This system registers handler functions, caches their results, and supports partial configuration.
from functools import wraps, partial, lru_cache, reduce
from collections import defaultdict
import time

class PluginRegistry:
    """A plugin system using functools for caching and composition."""

    def __init__(self):
        self._plugins = defaultdict(list)
        self._middleware = []

    def register(self, event_type, priority=0):
        """Decorator to register a handler for an event type."""
        def decorator(func):
            @wraps(func)
            def wrapper(*args, **kwargs):
                return func(*args, **kwargs)
            wrapper._priority = priority
            self._plugins[event_type].append(wrapper)
            self._plugins[event_type].sort(
                key=lambda f: f._priority, reverse=True
            )
            return wrapper
        return decorator

    def add_middleware(self, middleware_func):
        """Add a middleware function that processes all events."""
        self._middleware.append(middleware_func)

    @lru_cache(maxsize=64)
    def get_handlers(self, event_type):
        """Get cached tuple of handlers for an event type."""
        return tuple(self._plugins.get(event_type, []))

    def emit(self, event_type, data):
        """Emit an event through middleware then to handlers."""
        # Apply middleware chain using reduce
        processed = reduce(
            lambda d, mw: mw(event_type, d),
            self._middleware,
            data
        )
        handlers = self._plugins.get(event_type, [])
        results = []
        for handler in handlers:
            result = handler(processed)
            if result is not None:
                results.append(result)
        return results

# Create registry and register plugins
registry = PluginRegistry()

# Middleware: add timestamp to all events
def timestamp_middleware(event_type, data):
    return {**data, "timestamp": time.time()}

# Middleware: log all events
def logging_middleware(event_type, data):
    print(f"  [LOG] Event '{event_type}' with keys: {list(data.keys())}")
    return data

registry.add_middleware(timestamp_middleware)
registry.add_middleware(logging_middleware)

@registry.register("user.login", priority=10)
def validate_login(data):
    """Check if the user credentials are valid."""
    if data.get("username") and data.get("password"):
        return {"status": "validated", "user": data["username"]}
    return {"status": "invalid"}

@registry.register("user.login", priority=5)
def record_login(data):
    """Record the login attempt for analytics."""
    return {"recorded": True, "user": data.get("username")}

@registry.register("data.transform", priority=10)
def normalize_data(data):
    """Normalize string fields to lowercase."""
    return {
        k: v.lower() if isinstance(v, str) else v
        for k, v in data.items()
    }

# Using partial to create pre-configured event emitters
emit_login = partial(registry.emit, "user.login")
emit_transform = partial(registry.emit, "data.transform")

# Emit events
print("Login event:")
results = emit_login({"username": "alice", "password": "secret123"})
for r in results:
    print(f"  Result: {r}")

print("\nTransform event:")
results = emit_transform({"Name": "BOB", "City": "MELBOURNE", "age": 25})
for r in results:
    print(f"  Result: {r}")
Login event:
  [LOG] Event 'user.login' with keys: ['username', 'password', 'timestamp']
  Result: {'status': 'validated', 'user': 'alice'}
  Result: {'recorded': True, 'user': 'alice'}
Transform event:
  [LOG] Event 'data.transform' with keys: ['Name', 'City', 'age', 'timestamp']
  Result: {'Name': 'bob', 'City': 'melbourne', 'age': 25, 'timestamp': 1712745600.0}
Common Pitfalls and Troubleshooting
| Problem | Cause | Solution |
|---|---|---|
| TypeError: unhashable type with lru_cache | Passing a list or dict as argument to cached function | Convert to tuple or frozenset before passing |
| Memory growing with @cache | Unbounded cache stores every unique call | Use @lru_cache(maxsize=N) to limit cache size |
| cached_property not updating | Value computed once and stored as attribute | Delete the attribute with del obj.prop to force recompute |
| Decorated function loses metadata | Missing @wraps in decorator | Add @wraps(func) to every decorator wrapper |
| reduce gives unexpected result | Missing initial value argument | Pass initializer as third argument to reduce() |
| partial kwargs overridden | Caller passes same keyword argument | Document which args are frozen or use positional args |
Frequently Asked Questions
What is the difference between @cache and @lru_cache?
@cache is equivalent to @lru_cache(maxsize=None). It stores every unique call result forever without evicting old entries. Use @lru_cache(maxsize=N) when you want to limit memory usage by keeping only the N most recent unique results. For most applications, @lru_cache with a reasonable maxsize is the safer choice.
Can I use @lru_cache on instance methods?
Yes, but with a caveat. The self parameter becomes part of the cache key, meaning each instance gets its own cache entries. For instance-level caching, consider @cached_property instead. For class-level caching, use a module-level function or a custom descriptor.
Is reduce just a for loop in disguise?
Functionally, yes — reduce performs the same cumulative operation you could write with a for loop. However, reduce expresses the intent more declaratively. Use reduce when the accumulation pattern is clear and concise. If the logic is complex or needs multiple lines, a regular for loop is more readable and Pythonic.
How do I invalidate a cached_property?
Delete the attribute using del instance.property_name. The next access will recompute and re-cache the value. This works because cached_property stores the result as a regular instance attribute that shadows the descriptor.
When should I use partial() instead of a lambda?
Use partial() when you want to freeze arguments of an existing function — it is more readable, preserves the original function's metadata, and works better with pickling. Use a lambda when you need a quick inline expression or when the logic goes beyond simple argument freezing. In general, partial is preferred for configuration-style currying.
Conclusion
The functools module gives you powerful tools that make Python functions more flexible and efficient. You learned how @lru_cache and @cache can dramatically speed up expensive or recursive functions, how partial() creates specialized versions of general functions, how reduce() handles cumulative operations, and how @wraps keeps your decorators well-behaved.
These tools work beautifully together, as the plugin system example showed. Start by adding @lru_cache to your most expensive functions and @wraps to your decorators — those two changes alone will improve most Python projects. From there, explore partial() and cached_property as your needs grow.