You have probably written hundreds of Python functions, but have you ever wondered what happens when a function is defined inside another function — and the inner function remembers the outer function’s variables even after the outer function has finished running? That is a closure, and it is one of the most powerful and underused features in Python.
Closures let you create lightweight, stateful functions without defining a class. They are used extensively in decorators, callback systems, event handlers, and factory patterns. Understanding closures also unlocks a deeper understanding of how Python’s scoping rules actually work.
In this tutorial, you will learn how nested functions work, what closures are and how Python creates them, the LEGB scoping rule, the nonlocal keyword for modifying enclosed variables, and practical patterns where closures replace classes. By the end, you will be using closures confidently in your own projects.
Python Closures: Quick Example
Here is the simplest possible closure — a function that remembers a greeting prefix and uses it every time you call it.
```python
# quick_closure.py
def make_greeter(prefix):
    def greet(name):
        return f"{prefix}, {name}!"
    return greet

hello = make_greeter("Hello")
howdy = make_greeter("Howdy")

print(hello("Alice"))
print(howdy("Bob"))
print(hello("Charlie"))
```
Output:

```
Hello, Alice!
Howdy, Bob!
Hello, Charlie!
```
The make_greeter function returns the inner greet function. Even though make_greeter has finished executing, the returned greet function still remembers the prefix value it was created with. That is a closure — the inner function “closes over” the variable from its enclosing scope. Let us explore how this works under the hood.
What Are Closures and Nested Functions?
A nested function (also called an inner function) is simply a function defined inside another function. The outer function is sometimes called the enclosing function. In Python, functions are first-class objects — you can pass them as arguments, return them from other functions, and assign them to variables.
A closure is a special case of a nested function: it is a function object that remembers values from its enclosing lexical scope even when the enclosing function is no longer active. For a closure to exist, three conditions must be met: there must be a nested function, the nested function must reference a variable from the enclosing scope, and the nested function must be made available outside the enclosing scope, typically by returning it (passing it to another function as a callback works just as well).
| Concept | Definition | Example |
|---|---|---|
| Nested function | Function defined inside another function | def outer(): def inner(): ... |
| Free variable | Variable used in inner function but defined in outer | prefix in the greeter example |
| Closure | Inner function + its free variables | The returned greet function |
| Cell object | Python’s internal storage for free variables | Accessible via __closure__ |
You can verify that a function is a closure by inspecting its __closure__ attribute. If it returns None, the function is not a closure. If it returns a tuple of cell objects, each cell contains one of the free variables the function has closed over.
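A few lines of introspection make these pieces concrete. This sketch reuses the make_greeter example from above:

```python
def make_greeter(prefix):
    def greet(name):
        return f"{prefix}, {name}!"
    return greet

def plain(name):
    return f"Hi, {name}!"

hello = make_greeter("Hello")

# A closure carries a tuple of cell objects, one per free variable
print(hello.__closure__[0].cell_contents)  # Hello
# The names of the captured variables live on the code object
print(hello.__code__.co_freevars)          # ('prefix',)
# An ordinary function closes over nothing
print(plain.__closure__)                   # None
```

The cell_contents attribute gives you the live value inside the cell, which is handy when debugging decorators and factory functions.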
Understanding LEGB Scoping Rules
Python resolves variable names using the LEGB rule, checking each scope in order until it finds the variable. Understanding this rule is essential for understanding how closures capture variables.
```python
# legb_demo.py
x = "global"

def outer():
    x = "enclosing"
    def inner():
        x = "local"
        print(f"Inner sees: {x}")
    inner()
    print(f"Outer sees: {x}")

outer()
print(f"Module sees: {x}")
```
Output:

```
Inner sees: local
Outer sees: enclosing
Module sees: global
```
LEGB stands for Local, Enclosing, Global, Built-in. When Python encounters a variable name, it first checks the local scope (inside the current function), then the enclosing scope (any outer functions), then the global scope (module level), and finally the built-in scope (Python’s built-in names like print and len). A closure captures variables from the Enclosing scope — the “E” in LEGB.
The nonlocal Keyword
By default, you can read enclosed variables from within a closure, but you cannot reassign them. If you try to assign a new value to an enclosed variable, Python creates a new local variable instead. The nonlocal keyword tells Python to look in the enclosing scope for the variable, allowing you to modify it.
```python
# nonlocal_demo.py
def make_counter():
    count = 0
    def increment():
        nonlocal count
        count += 1
        return count
    def get_count():
        return count
    return increment, get_count

increment, get_count = make_counter()

print(increment())
print(increment())
print(increment())
print(f"Final count: {get_count()}")
```
Output:

```
1
2
3
Final count: 3
```
Without nonlocal count, the line count += 1 would raise an UnboundLocalError because Python would treat count as a local variable being referenced before assignment. The nonlocal declaration explicitly tells Python that count lives in the enclosing scope and should be modified there. This is what makes closures stateful — they can maintain and update state across calls.
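To see that failure mode for yourself, drop the nonlocal declaration. The augmented assignment makes count local to increment, so the read half of count += 1 fails on the first call (a minimal sketch; make_broken_counter is a hypothetical name for this variant):

```python
def make_broken_counter():
    count = 0
    def increment():
        count += 1  # assignment makes count local here, so the read fails
        return count
    return increment

broken = make_broken_counter()
try:
    broken()
except UnboundLocalError as exc:
    print(f"UnboundLocalError: {exc}")  # exact message varies by Python version
```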
Practical Closure Patterns
Closures are not just a theoretical concept — they solve real problems more elegantly than alternatives. Here are three patterns you will use regularly.
Factory Functions
A factory function creates and returns specialized functions. This is cleaner than creating a class when you just need a callable with some configuration baked in.
```python
# factory_demo.py
def make_multiplier(factor):
    def multiply(number):
        return number * factor
    return multiply

double = make_multiplier(2)
triple = make_multiplier(3)
to_cents = make_multiplier(100)

print(f"Double 5: {double(5)}")
print(f"Triple 5: {triple(5)}")
print(f"$4.99 in cents: {to_cents(4.99)}")

# Apply to a list
prices = [9.99, 14.50, 3.25]
prices_in_cents = list(map(to_cents, prices))
print(f"Prices in cents: {prices_in_cents}")
```
Output:

```
Double 5: 10
Triple 5: 15
$4.99 in cents: 499.0
Prices in cents: [999.0, 1450.0, 325.0]
```
Memoization Cache
Closures can maintain a cache dictionary that persists across calls, implementing memoization without global variables or classes.
```python
# memoize_demo.py
def memoize(func):
    cache = {}
    def wrapper(*args):
        if args not in cache:
            print(f"  Computing {func.__name__}{args}")
            cache[args] = func(*args)
        else:
            print(f"  Cache hit for {func.__name__}{args}")
        return cache[args]
    wrapper.cache = cache  # Expose the enclosed cache for inspection
    return wrapper

@memoize
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(f"fib(6) = {fibonacci(6)}")
print(f"fib(4) = {fibonacci(4)}")
print(f"Cache size: {len(fibonacci.cache)} entries")
```
Output:

```
  Computing fibonacci(6,)
  Computing fibonacci(5,)
  Computing fibonacci(4,)
  Computing fibonacci(3,)
  Computing fibonacci(2,)
  Computing fibonacci(1,)
  Computing fibonacci(0,)
  Cache hit for fibonacci(1,)
  Cache hit for fibonacci(2,)
  Cache hit for fibonacci(3,)
  Cache hit for fibonacci(4,)
fib(6) = 8
  Cache hit for fibonacci(4,)
fib(4) = 3
Cache size: 7 entries
```
Event Handlers and Callbacks
Closures are perfect for callbacks that need context. Instead of creating a class just to hold one piece of state, create a closure.
```python
# callback_demo.py
def make_logger(log_level):
    messages = []
    def log(message):
        entry = f"[{log_level.upper()}] {message}"
        messages.append(entry)
        print(entry)
    def get_logs():
        return messages.copy()
    def clear():
        nonlocal messages
        messages = []
    log.get_logs = get_logs
    log.clear = clear
    return log

error_log = make_logger("error")
info_log = make_logger("info")

error_log("Database connection failed")
error_log("Retry attempt 1")
info_log("Server started on port 8080")
error_log("Retry succeeded")

print(f"\nError log has {len(error_log.get_logs())} entries")
print(f"Info log has {len(info_log.get_logs())} entries")
```
Output:

```
[ERROR] Database connection failed
[ERROR] Retry attempt 1
[INFO] Server started on port 8080
[ERROR] Retry succeeded

Error log has 3 entries
Info log has 1 entries
```
Each logger maintains its own independent list of messages because each call to make_logger creates a new messages list in a new enclosing scope. This is the same isolation you would get from separate class instances, but with less boilerplate.
Closures vs Classes: When to Use Each
A common question is whether to use a closure or a class. Both can maintain state, but they have different strengths.
```python
# closure_vs_class.py

# Closure approach
def make_accumulator_closure(initial=0):
    total = initial
    def add(value):
        nonlocal total
        total += value
        return total
    return add

# Class approach
class Accumulator:
    def __init__(self, initial=0):
        self.total = initial

    def add(self, value):
        self.total += value
        return self.total

# Both work the same way
closure_acc = make_accumulator_closure(10)
class_acc = Accumulator(10)

print(f"Closure: {closure_acc(5)}, {closure_acc(3)}")
print(f"Class: {class_acc.add(5)}, {class_acc.add(3)}")
```
Output:

```
Closure: 15, 18
Class: 15, 18
```
| Criteria | Use a Closure | Use a Class |
|---|---|---|
| State complexity | 1-3 variables | Many attributes |
| Methods needed | 1-2 functions | Multiple methods |
| Inheritance | Not needed | Need to subclass |
| Serialization | Not needed | Need pickle/JSON |
| Debugging | Simple state | Need inspection |
| Use case | Decorators, callbacks, factories | Domain objects, complex state |
The rule of thumb: if your "object" has one main action and minimal state, a closure is simpler. If it has multiple methods, complex state, or needs to participate in inheritance, use a class.
Real-Life Example: Building a Rate Limiter with Closures
Let us build a practical rate limiter that tracks function calls and enforces a maximum number of calls per time window. This demonstrates closures maintaining complex state across multiple calls.
```python
# rate_limiter.py
import time

def rate_limit(max_calls, window_seconds):
    call_timestamps = []
    def decorator(func):
        def wrapper(*args, **kwargs):
            nonlocal call_timestamps
            now = time.time()
            # Remove timestamps outside the window
            call_timestamps = [t for t in call_timestamps if now - t < window_seconds]
            if len(call_timestamps) >= max_calls:
                wait_time = window_seconds - (now - call_timestamps[0])
                print(f"Rate limited! Try again in {wait_time:.1f}s")
                return None
            call_timestamps.append(now)
            remaining = max_calls - len(call_timestamps)
            print(f"Call allowed ({remaining} remaining in window)")
            return func(*args, **kwargs)
        wrapper.get_usage = lambda: len([t for t in call_timestamps if time.time() - t < window_seconds])
        return wrapper
    return decorator

@rate_limit(max_calls=3, window_seconds=5)
def fetch_data(query):
    return f"Results for: {query}"

# Simulate rapid API calls
for i in range(5):
    result = fetch_data(f"query_{i}")
    if result:
        print(f"  Got: {result}")
    time.sleep(0.5)

print(f"\nCurrent usage: {fetch_data.get_usage()} calls in window")
```
Output:

```
Call allowed (2 remaining in window)
  Got: Results for: query_0
Call allowed (1 remaining in window)
  Got: Results for: query_1
Call allowed (0 remaining in window)
  Got: Results for: query_2
Rate limited! Try again in 3.5s
Rate limited! Try again in 3.0s

Current usage: 3 calls in window
```
This rate limiter uses three levels of closures: rate_limit captures the configuration (max_calls, window_seconds), decorator captures the function being decorated, and wrapper does the actual work using the call_timestamps list from the enclosing scope. Each decorated function gets its own independent rate limit state because each call to rate_limit creates a new call_timestamps list.
Frequently Asked Questions
What exactly makes a function a closure?
A function becomes a closure when it is defined inside another function and references variables from the enclosing function's scope. The closure "closes over" those variables, keeping them alive even after the enclosing function returns. You can check if a function is a closure by inspecting its __closure__ attribute -- if it is not None, the function is a closure.
Why do closures in loops all share the same variable?
This is the most common closure pitfall. When you create closures inside a loop, they all reference the same loop variable, not a copy of it. By the time the closures run, the loop variable has its final value. Fix this by using a default argument: lambda i=i: i captures the current value of i at each iteration.
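A quick demonstration of both the pitfall and the default-argument fix:

```python
# Pitfall: every lambda closes over the same loop variable i
late = [lambda: i for i in range(3)]
print([f() for f in late])    # [2, 2, 2] - all share i's final value

# Fix: a default argument is evaluated once, at definition time
early = [lambda i=i: i for i in range(3)]
print([f() for f in early])   # [0, 1, 2]
```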
Are closures slower than regular functions?
The overhead is negligible. Accessing a free variable goes through a cell object, which adds one extra pointer dereference compared to a plain local variable; in practice the difference is rarely measurable. CPython has dedicated bytecode instructions for free-variable access (LOAD_DEREF and STORE_DEREF), and closures power decorators and callbacks throughout the standard library, so the mechanism is well optimized.
Do closures cause memory leaks?
Closures keep their free variables alive as long as the closure exists, which can prevent garbage collection of those variables. This is rarely a problem in practice, but if your closure captures a large object (like a database connection or a huge list), be aware that the object will not be garbage collected until the closure itself is collected.
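You can watch this lifetime behavior with a weak reference. In CPython, deleting the closure drops the last strong reference and the captured object is collected immediately (a sketch; BigObject and make_closure are hypothetical names, and BigObject stands in for any large captured object):

```python
import weakref

class BigObject:
    """Stand-in for something expensive, like a huge list or a connection."""

def make_closure():
    big = BigObject()
    tracker = weakref.ref(big)  # observes big without keeping it alive
    def use():
        return big              # closing over big keeps it alive
    return use, tracker

use, tracker = make_closure()
print(tracker() is not None)    # True: the closure still holds big
del use                         # drop the closure...
print(tracker() is None)        # True in CPython: big was collected
```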
Are decorators just closures?
Most decorators are implemented as closures, yes. The decorator function takes the original function as an argument and returns a wrapper closure that adds behavior before or after calling the original. However, decorators can also be implemented as classes with a __call__ method -- the decorator pattern is about the wrapping behavior, not the specific implementation technique.
Conclusion
You have learned how Python closures work from the ground up -- starting with nested functions and the LEGB scoping rule, through the nonlocal keyword for modifying enclosed state, to practical patterns like factory functions, memoization caches, and rate limiters. Closures give you stateful functions without the ceremony of defining a class.
Try refactoring one of your existing classes that has a single method and minimal state into a closure-based factory function. You will be surprised how much simpler the code becomes. For advanced closure patterns, explore Python's functools module, which provides closure-based utilities like lru_cache, partial, and wraps.
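As a starting point, here is how those three functools utilities line up with the patterns from this tutorial: lru_cache is a production-grade version of the hand-rolled memoize decorator, partial covers many factory-function cases, and wraps keeps a wrapped function's metadata intact inside your own closure-based decorators (shout is a hypothetical example decorator):

```python
from functools import lru_cache, partial, wraps

# lru_cache: memoization without writing the cache logic yourself
@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040, with each value computed only once

# partial: bakes arguments in, much like make_multiplier
int_from_hex = partial(int, base=16)
print(int_from_hex("ff"))  # 255

# wraps: preserves __name__ and __doc__ on the wrapper closure
def shout(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return f"hello, {name}"

print(greet("alice"))   # HELLO, ALICE
print(greet.__name__)   # greet (without wraps, it would be 'wrapper')
```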