Beginner
For most serious applications, you will need some form of persistent storage: storage that still exists after your application stops running. For new developers, it can be daunting to decide which option to go for. Is a simple flat file enough? When should you use a database? Which database should you use? With so many options available, the choice can feel overwhelming.
This is a starting guide to the many data storage options available to you and how to decide between them. One thing to keep in mind: if your application is planned (or likely) to scale over time, your underlying data will grow with it. A flat file may be quick and easy to implement, but as your data grows, a relational database may serve you better, even though it takes a little more effort to set up. Let's look at this in more detail.

What are the possible ways to store data?
There are many methods of persistent storage you can use, ranging from simple files that you write yourself to dedicated database services. First, here is an overview of the main options:
- File: you store the data in a text-based file in a format such as CSV (comma-separated values), JSON, or others
- Python pickle: a mechanism that lets you save a Python data structure directly to a file, then load it back the next time your program runs. You can do this with the standard library module `pickle`
- Config files: similar to files and pickles in that the data is stored on disk, but intended to be edited directly by a user
- SQLite database: a database you can run queries against, but where the data is stored in a single local file rather than managed by a separate server
- Postgres (or other SQL-based database): a database service run as a separate program; you send SQL queries to the service and get data back efficiently. SQL databases are great for structured, table-like (spreadsheet-like) data, where you search by category fields, for example
- Key-value database (e.g. Redis, one of the most famous): exactly what it sounds like: you search by a key and get back a value, which can be a single value or a set of fields associated with that key. It works much like a dictionary in Python, with the benefit of persistent storage
- Graph database (e.g. Neo4j): stores data in a structure built for navigating relationships. Queries that are cumbersome in a relational database (requiring many intermediary join tables) become trivial in a graph query language such as Cypher
- Text search (e.g. Elasticsearch): a purpose-built database that is extremely fast at searching strings and long text
- Time series database (e.g. InfluxDB): ideal for IoT-style data where each record is keyed by a timestamp and you query in time blocks; common operations such as aggregating, searching, and slicing data are built into the query language
- NoSQL document database (e.g. MongoDB, CouchDB): also runs as a separate service, but is designed for "unstructured" (non-table-like) data such as text documents, which you search in a free-form way, for example by text strings
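To make the file-based options above concrete, here is a minimal sketch (file names are illustrative) that stores the same record three ways: as a JSON text file, as a pickle, and as a row in an SQLite database:

```python
# storage_options_sketch.py -- illustrative only; file names are arbitrary
import json
import pickle
import sqlite3

record = {"name": "alice", "score": 42}

# 1. Plain text file (JSON): human-readable, editable in any text editor
with open("record.json", "w") as f:
    json.dump(record, f)
with open("record.json") as f:
    assert json.load(f) == record

# 2. Pickle: saves the Python object directly, but only Python can read it back
with open("record.pkl", "wb") as f:
    pickle.dump(record, f)
with open("record.pkl", "rb") as f:
    assert pickle.load(f) == record

# 3. SQLite: a queryable database stored in a single local file
conn = sqlite3.connect("records.db")
conn.execute("CREATE TABLE IF NOT EXISTS scores (name TEXT, score INTEGER)")
conn.execute("INSERT INTO scores VALUES (?, ?)", (record["name"], record["score"]))
conn.commit()
row = conn.execute("SELECT score FROM scores WHERE name = ?", ("alice",)).fetchone()
print(row[0])  # 42
conn.close()
```

The JSON file stays editable in any text editor, the pickle round-trips arbitrary Python objects but is Python-only, and SQLite adds SQL querying on top of a single local file.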
There is no one persistent storage mechanism that fits all: each has pros and cons, and your purpose (or "use case") determines which works best for you.
| Storage option | Setup | Editable outside Python | Volume | Read Speed | Write Speed | Inbuilt Redundancy |
| --- | --- | --- | --- | --- | --- | --- |
| File | None: create a file from your Python code | Yes, for text-based formats | Small | Slow | Slow | No (manual) |
| Python pickle | None: create it from your Python code | No, Python only | Small | Slow | Slow | No (manual) |
| Config file | Optional: you can create the config file beforehand | Yes, with any text editor | Small | Slow | Slow | No (manual) |
| SQLite database | None: the database file is created automatically | Yes, via the sqlite3 console or GUI clients | Small-Med | Slow-Med | Slow-Med | No (manual) |
| Relational SQL database | Separate server installation | Yes, via the SQL console or other SQL clients | Large | Fast | Fast | Yes, requires extra setup |
| NoSQL column database | Separate server installation | Yes, via an external client | Very large | Very fast | Very fast | Yes, inbuilt |
| Key-value database | Separate server installation | Yes, via an external client | Very large | Very fast | Fast-Very fast | Yes, requires extra setup |
| Graph database | Separate server installation | Yes, via an external client | Large | Med | Med | Yes, requires extra setup |
| Time series database | Separate server installation | Yes, via an external client | Very large | Very fast | Fast | Yes, requires extra setup |
| Text search database | Separate server installation | Yes, via an external client | Very large | Very fast | Fast | Yes, requires extra setup |
| NoSQL document database | Separate server installation | Yes, via an external client | Very large | Very fast | Fast | Yes, requires extra setup |

A big disclaimer here: for some of these entries, the more accurate answer is "it depends". For example, redundancy is inbuilt in some relational databases (such as Oracle RAC enterprise databases), while for others you can set it up yourself at the infrastructure level. To keep the guidance simple, I've made the table more prescriptive than the products strictly warrant. If you would like to dive deeper, please don't rely purely on the table above: look into the documentation of the particular database product you are considering, or reach out to me and I'm happy to provide some advice.
Summary
There are in fact plenty of SaaS-based options for databases and persistent storage popping up, which is exciting. These newer SaaS options (for example Firebase, restdb.io, anvil.works) save you the heavy lifting, but there may be times you still want to manage your own database. This may be because you want to keep your data yourself, or simply to save costs when you already have an environment on your own laptop or are paying a fixed price for a virtual machine. In those cases, managing your own persistent storage may be more cost effective than paying for another SaaS. That said, don't discount the SaaS options altogether, as they at least handle things like backups and security updates for you.
How To Use Python Decorators: A Complete Guide
Intermediate
You’ve probably seen the @ symbol above function definitions in Python code and wondered what it does. That’s a decorator — one of Python’s most powerful and elegant features. Decorators let you wrap a function with additional behavior (logging, caching, access control, rate limiting, timing) without modifying the function’s code. They’re the reason you can add authentication to a Flask route with a single line, or enable caching with @functools.lru_cache.
Decorators are a pure Python feature — no installation required. They’re built on Python’s first-class functions (functions that can be passed as arguments and returned from other functions). Once you understand how decorators work mechanically, you’ll be able to read and write the patterns used by virtually every Python framework, from Django’s @login_required to FastAPI’s @app.get() to pytest’s @pytest.fixture.
In this tutorial, you’ll learn how decorators work from first principles, how to use functools.wraps to preserve function metadata, how to write parameterized decorators (decorators that take arguments), how to stack multiple decorators, how to use class-based decorators, and how to apply these techniques in real-world scenarios like timing, retry logic, and access control.
Decorators: Quick Example
Here’s the simplest useful decorator — one that logs when a function is called:
```python
# decorator_quick.py
import functools

def log_calls(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}({args}, {kwargs})")
        result = func(*args, **kwargs)
        print(f"{func.__name__} returned {result}")
        return result
    return wrapper

@log_calls
def add(a, b):
    return a + b

# This is equivalent to: add = log_calls(add)
result = add(3, 4)
print(f"Final result: {result}")

# The function's identity is preserved
print(f"Function name: {add.__name__}")
```
Output:
```
Calling add((3, 4), {})
add returned 7
Final result: 7
Function name: add
```
The @log_calls syntax is shorthand for add = log_calls(add). The decorator receives the original function, returns a new wrapper function that adds behavior before and after calling the original, and replaces the name add with the wrapper. The @functools.wraps(func) line copies the original function’s name, docstring, and other metadata onto the wrapper — always include this.
How Decorators Work: First Principles
To truly understand decorators, you need to understand that in Python, functions are objects — they can be passed as arguments and returned from other functions. This is called “first-class functions.” Decorators are just a syntax shortcut for a function transformation pattern.
```python
# first_class_functions.py

# Functions can be passed as arguments
def apply_twice(func, value):
    return func(func(value))

def double(x):
    return x * 2

result = apply_twice(double, 3)
print(f"Apply twice: {result}")  # 3 -> 6 -> 12

# Functions can be returned from other functions
def make_multiplier(n):
    def multiplier(x):
        return x * n
    return multiplier  # Returns the inner function

triple = make_multiplier(3)
print(f"Triple 5: {triple(5)}")  # 15

# The decorator pattern manually, without @ syntax
def shout(func):
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        return result.upper() + "!!!"
    return wrapper

def greet(name):
    return f"Hello, {name}"

# Without @ syntax -- same result
greet = shout(greet)
print(greet("alice"))  # HELLO, ALICE!!!
```
Output:
```
Apply twice: 12
Triple 5: 15
HELLO, ALICE!!!
```
The key insight: @shout above a function definition is exactly equivalent to writing greet = shout(greet) after the definition. The @ syntax just makes it more readable and places the decoration visually near the function definition where it belongs.
Always Use functools.wraps
Without @functools.wraps(func), your decorator replaces the original function’s metadata with the wrapper’s. This causes problems with debugging, documentation, and tools that inspect function names. Always include it:
```python
# wraps_example.py
import functools

# WITHOUT functools.wraps -- breaks function identity
def bad_decorator(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

# WITH functools.wraps -- preserves identity
def good_decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@bad_decorator
def my_function_bad():
    """This function does something important."""
    pass

@good_decorator
def my_function_good():
    """This function does something important."""
    pass

print(f"Bad decorator name: {my_function_bad.__name__}")
print(f"Bad decorator docstr: {my_function_bad.__doc__}")
print()
print(f"Good decorator name: {my_function_good.__name__}")
print(f"Good decorator docstr: {my_function_good.__doc__}")
```
Output:
```
Bad decorator name: wrapper
Bad decorator docstr: None

Good decorator name: my_function_good
Good decorator docstr: This function does something important.
```
Practical Decorator Examples
Timing Functions
A timer decorator measures how long a function takes to execute — great for performance monitoring and identifying bottlenecks:
```python
# timer_decorator.py
import functools
import time

def timer(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.4f} seconds")
        return result
    return wrapper

@timer
def slow_function():
    time.sleep(0.1)
    return "done"

@timer
def sum_million():
    return sum(range(1_000_000))

slow_function()
result = sum_million()
print(f"Sum result: {result:,}")
```
Output:
```
slow_function took 0.1002 seconds
sum_million took 0.0312 seconds
Sum result: 499,999,500,000
```
Retry Logic
A retry decorator automatically re-runs a function if it raises an exception — essential for network calls, database operations, and any code that can fail transiently:
```python
# retry_decorator.py
import functools
import time
import random

def retry(max_attempts=3, delay=1.0, exceptions=(Exception,)):
    """Decorator factory: retries a function up to max_attempts times."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            last_error = None
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions as e:
                    last_error = e
                    print(f"Attempt {attempt}/{max_attempts} failed: {e}")
                    if attempt < max_attempts:
                        time.sleep(delay)
            raise last_error
        return wrapper
    return decorator

# Simulated unreliable function (fails roughly 70% of the time)
call_count = 0

@retry(max_attempts=5, delay=0.1, exceptions=(ValueError,))
def unreliable_api_call():
    global call_count
    call_count += 1
    if random.random() < 0.7:
        raise ValueError(f"API timeout on call #{call_count}")
    return f"Success on call #{call_count}"

random.seed(42)
result = unreliable_api_call()
print(f"Final result: {result}")
```
Output:
```
Attempt 1/5 failed: API timeout on call #1
Attempt 2/5 failed: API timeout on call #2
Attempt 3/5 failed: API timeout on call #3
Attempt 4/5 failed: API timeout on call #4
Final result: Success on call #5
```
Notice the decorator factory pattern: retry(max_attempts=5, delay=0.1) returns a decorator, which then returns a wrapper. This is a three-level nesting -- outer function configures, middle function receives the function to decorate, inner function is what actually runs. This is the standard pattern for parameterized decorators.
Parameterized Decorators
When your decorator needs configuration (like the number of retries in the example above), you add one more level of nesting -- a "decorator factory" that takes the parameters and returns the actual decorator:
```python
# parameterized_decorator.py
import functools

def repeat(n):
    """Call the decorated function n times."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            results = []
            for _ in range(n):
                results.append(func(*args, **kwargs))
            return results
        return wrapper
    return decorator

@repeat(3)
def say_hello(name):
    return f"Hello, {name}!"

results = say_hello("Alice")
for r in results:
    print(r)
```
Output:
```
Hello, Alice!
Hello, Alice!
Hello, Alice!
```
Stacking Multiple Decorators
You can apply multiple decorators to the same function by stacking them. They apply from bottom to top (closest to the function first):
```python
# stacking_decorators.py
import functools
import time

def timer(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f" [timer] {func.__name__}: {time.perf_counter() - start:.4f}s")
        return result
    return wrapper

def log_result(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        print(f" [log] {func.__name__} returned: {result}")
        return result
    return wrapper

# Applied bottom-up: log_result wraps the original,
# then timer wraps log_result's wrapper
@timer
@log_result
def compute(x, y):
    return x ** y

result = compute(2, 10)
print(f"Final result: {result}")
```
Output:
```
 [log] compute returned: 1024
 [timer] compute: 0.0001s
Final result: 1024
```
Real-Life Example: Access Control Decorators
Here's a practical access control system using decorators -- the same pattern used by web frameworks for route authentication:
```python
# access_control.py
import functools

# Simulated current user session
current_user = {'name': 'alice', 'roles': ['user', 'editor'], 'logged_in': True}

def login_required(func):
    """Decorator that requires the user to be logged in."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if not current_user.get('logged_in'):
            print(f"Access denied: login required for {func.__name__}")
            return None
        return func(*args, **kwargs)
    return wrapper

def require_role(role):
    """Decorator factory: requires the user to have a specific role."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if role not in current_user.get('roles', []):
                print(f"Access denied: '{role}' role required for {func.__name__}")
                return None
            return func(*args, **kwargs)
        return wrapper
    return decorator

@login_required
def view_dashboard():
    return f"Dashboard for {current_user['name']}"

@login_required
@require_role('admin')
def delete_user(user_id):
    return f"Deleted user {user_id}"

@login_required
@require_role('editor')
def publish_post(post_id):
    return f"Published post {post_id}"

# Alice is logged in and has 'editor' but not 'admin'
print(view_dashboard())
print(delete_user(42))
print(publish_post(101))

# Simulate a logged-out user
current_user['logged_in'] = False
print(view_dashboard())
```
Output:
```
Dashboard for alice
Access denied: 'admin' role required for delete_user
None
Published post 101
Access denied: login required for view_dashboard
None
```
(The `None` lines appear because the wrappers return `None` on denied access, and the results are passed to `print`.)
This is the exact pattern used by Flask's @login_required and Django's @permission_required. The decorators are reusable across any number of functions -- add access control to a new function by adding one line above its definition. The stacked @login_required @require_role('admin') means the user must pass both checks: logged in AND has the required role.
Frequently Asked Questions
When should I use a decorator instead of a helper function?
Use a decorator when you want to add the same cross-cutting behavior (logging, timing, validation, caching) to multiple functions without repeating the logic. If you find yourself writing the same "before" and "after" code in many functions, that's a strong signal to extract it into a decorator. For one-off or highly specific behavior, a regular helper function is simpler.
Can I use a class as a decorator?
Yes -- any callable can be a decorator. A class with a __call__ method works as a decorator. Class-based decorators are useful when you need to maintain state between calls (like call counts or cached results). Define __init__(self, func) to receive the function and __call__(self, *args, **kwargs) to wrap it. To preserve the wrapped function's metadata, call functools.update_wrapper(self, func) in __init__ -- the class-based equivalent of @functools.wraps.
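As a minimal sketch, here is a class-based decorator that counts calls (the name CountCalls is arbitrary):

```python
# class_decorator.py
import functools

class CountCalls:
    """Class-based decorator that counts how many times a function is called."""
    def __init__(self, func):
        functools.update_wrapper(self, func)  # class equivalent of @functools.wraps
        self.func = func
        self.count = 0

    def __call__(self, *args, **kwargs):
        self.count += 1   # state persists across calls on the instance
        return self.func(*args, **kwargs)

@CountCalls
def greet(name):
    return f"Hello, {name}"

greet("alice")
greet("bob")
print(greet.count)      # 2
print(greet.__name__)   # greet (preserved by update_wrapper)
```

After decoration, `greet` is a CountCalls instance, so the call count lives on the decorated object itself.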
Do decorators work on class methods?
Yes, but with one caveat: the first argument of instance methods is self. Since decorators use *args, **kwargs, this is handled automatically. However, @staticmethod and @classmethod are themselves decorators. When stacking with them, always place @staticmethod or @classmethod outermost (on top, farthest from the def), with your own decorator closest to the def.
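A short sketch of this stacking order (log_calls stands in for any custom decorator):

```python
# method_decorators.py
import functools

def log_calls(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

class Calculator:
    @log_calls
    def add(self, a, b):   # instance method: self flows through *args
        return a + b

    @staticmethod          # @staticmethod goes on top (outermost)
    @log_calls
    def multiply(a, b):
        return a * b

calc = Calculator()
print(calc.add(2, 3))             # 5
print(Calculator.multiply(4, 5))  # 20
```

If you swapped the order and put @log_calls above @staticmethod, log_calls would receive a staticmethod object rather than a plain function, which fails on older Python versions and is confusing on newer ones.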
What is @functools.lru_cache and when should I use it?
@functools.lru_cache(maxsize=128) memoizes a function's return values -- if the function is called again with the same arguments, it returns the cached result instead of recomputing. Use it for pure functions (no side effects) that are called repeatedly with the same inputs. It's especially powerful for recursive functions like Fibonacci where the same sub-problems repeat many times.
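A quick sketch of the effect on a recursive Fibonacci function:

```python
# lru_cache_example.py
import functools

@functools.lru_cache(maxsize=None)  # cache every distinct argument
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))            # returns instantly; each fib(n) is computed once
print(fib.cache_info())    # hit/miss statistics for the cache
```

Without the cache, this naive recursion takes exponential time; with it, each distinct fib(n) is computed exactly once and every repeat lookup is a cache hit.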
Why does my IDE show wrong type hints after applying a decorator?
Without @functools.wraps, the decorated function's signature shows as (*args, **kwargs) -- losing the original type hints. With @functools.wraps, the function identity is preserved, but the signature the type checker sees is still the wrapper's. For full type hint preservation in decorated functions, use typing.ParamSpec and typing.Concatenate (Python 3.10+) to annotate the wrapper correctly.
Conclusion
Decorators are one of Python's most powerful code-reuse mechanisms. In this tutorial, you learned how Python's first-class functions make decorators possible, why @functools.wraps(func) is essential in every decorator, how to write practical decorators for timing, retry logic, and logging, how to create parameterized decorators using a decorator factory pattern, how to stack multiple decorators on a single function, and how the access control pattern mirrors real framework implementations.
The access control project is a foundation you can extend: add role inheritance, time-based access restrictions, or rate limiting. Every web framework you'll encounter -- Flask, Django, FastAPI -- relies heavily on decorators for its most important features.
For deeper coverage, see the functools module documentation and PEP 318 which introduced decorator syntax to Python.
Further Reading: For more details, see the Python sqlite3 documentation.
Frequently Asked Questions
What are the main data storage options in Python?
Python supports flat files (text, CSV, JSON), databases (SQLite, PostgreSQL, MySQL), key-value stores (Redis, shelve), pickle serialization, and cloud storage. The best choice depends on data size, structure, and access patterns.
When should I use SQLite vs a full database?
Use SQLite for single-user apps, prototypes, and small-to-medium datasets. Switch to PostgreSQL or MySQL for concurrent multi-user access, complex queries at scale, or production-grade reliability.
How do I save Python objects to disk?
Use pickle for Python-specific serialization, json for interoperable data, shelve for dictionary-like persistent storage, or databases for structured data. For data analysis, pandas can save to CSV, Parquet, or HDF5.
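A small sketch of shelve, which gives you a persistent dictionary backed by a file (the file name app_state is arbitrary):

```python
# shelve_example.py
import shelve

# Write: a shelf behaves like a dict whose values survive program exit
with shelve.open("app_state") as db:
    db["user"] = {"name": "alice", "score": 42}
    db["history"] = [1, 2, 3]

# Read back, e.g. in a later run of the program
with shelve.open("app_state") as db:
    print(db["user"]["name"])   # alice
    print(sorted(db.keys()))    # ['history', 'user']
```

Under the hood, shelve pickles each value, so it shares pickle's limitation: the data is readable only from Python.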
Is JSON or CSV better for storing data?
JSON handles nested, hierarchical data well. CSV is simpler for tabular, flat data. Use JSON for API data and configuration; use CSV for datasets and spreadsheet-compatible exports.
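To illustrate the difference, here is a sketch (file names are arbitrary) writing the same records as flat CSV and as nested JSON:

```python
# json_vs_csv.py
import csv
import json

rows = [
    {"name": "alice", "score": 42},
    {"name": "bob", "score": 17},
]

# CSV: flat, tabular, spreadsheet-friendly
with open("scores.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "score"])
    writer.writeheader()
    writer.writerows(rows)

# JSON: can wrap the rows in nested structure that CSV cannot express
payload = {"source": "example", "rows": rows}
with open("scores.json", "w") as f:
    json.dump(payload, f, indent=2)

# Caveat: CSV has no types, so values come back as strings
with open("scores.csv") as f:
    first = next(csv.DictReader(f))
print(first)  # {'name': 'alice', 'score': '42'}
```

Note the round-trip difference: JSON preserves the integer 42, while CSV returns the string "42" and leaves type conversion to you.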
How do I choose between file storage and a database?
Use file storage for simple, single-user scenarios. Use a database when you need querying, indexing, concurrent access, or ACID transactions. SQLite bridges both worlds for simpler applications.