Intermediate
You have written a Python function that works perfectly — until someone changes a dependency, refactors a helper, or feeds it unexpected input. Without tests, you discover these problems the same way your users do: in production. Testing is how professional developers protect their code from silent regressions, and pytest is the tool that makes writing tests almost enjoyable.
The great news is that pytest requires zero boilerplate to get started. There are no test classes to inherit from, no special assertion methods to memorize, and no XML configuration files to maintain. You write plain Python functions that start with test_, use regular assert statements, and pytest handles the rest — discovery, execution, rich failure diffs, and detailed reporting. Install it with pip install pytest and you are ready to go.
In this article, we will cover everything you need to start testing Python code with pytest. We will begin with a quick example, then explain why pytest beats the built-in unittest module for most projects. From there, we will walk through writing basic tests, using fixtures to manage setup and teardown, parametrizing tests to cover multiple inputs, mocking external dependencies, and organizing test files. We will finish with a complete real-life project that tests a shopping cart module end to end.
Testing Python Code With pytest: Quick Example
Here is the fastest way to see pytest in action. We will create a tiny function and a test for it, then run pytest from the command line.
# test_quick.py

def add(a, b):
    return a + b

def test_add_positive_numbers():
    assert add(2, 3) == 5

def test_add_negative_numbers():
    assert add(-1, -1) == -2

def test_add_mixed():
    assert add(-1, 1) == 0
Output:
$ pytest test_quick.py -v
========================= test session starts =========================
collected 3 items
test_quick.py::test_add_positive_numbers PASSED
test_quick.py::test_add_negative_numbers PASSED
test_quick.py::test_add_mixed PASSED
========================= 3 passed in 0.01s ==========================
That is the entire workflow: write functions that start with test_, use assert to check results, and run pytest from the terminal. No classes, no inheritance, no ceremony. When a test fails, pytest shows you exactly what the expected and actual values were, which makes debugging fast.
Want to learn about fixtures, parametrize, mocking, and real-world project testing? Keep reading — we cover all of that below.
What Is pytest and Why Use It?
pytest is a testing framework for Python that makes it simple to write small, readable tests — and scales to support complex functional testing for applications and libraries. It has been the dominant Python testing tool for years, used by major open-source projects like Flask, requests, and pandas.
Python ships with a built-in testing module called unittest, which works fine but requires more boilerplate. Here is how the two compare:
| Feature | pytest | unittest |
|---|---|---|
| Test discovery | Automatic (files/functions starting with test_) | Requires TestCase classes |
| Assertions | Plain assert statements | self.assertEqual, self.assertTrue, etc. |
| Setup/Teardown | Fixtures (flexible, composable) | setUp/tearDown methods |
| Parametrization | Built-in @pytest.mark.parametrize | Requires third-party library or loops |
| Plugin ecosystem | 1,000+ plugins (coverage, mock, async, etc.) | Limited built-in extensions |
| Output on failure | Rich diffs showing exact values | Basic assertion error messages |
| Boilerplate | Minimal (plain functions) | Heavy (classes, inheritance, special methods) |
The biggest practical advantage is readability. A pytest test reads like regular Python code, which means teammates who have never written tests before can understand what each test checks just by reading it. This matters more than any technical feature because tests that nobody reads are tests that nobody maintains.
Writing Basic Tests
Let us start with the fundamentals. We will create a small module with a few functions, then write tests for each one. This pattern — source code in one file, tests in another — is how real Python projects are organized.
# calculator.py

def divide(a, b):
    """Divide a by b, raising ValueError for zero division."""
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

def is_even(n):
    """Return True if n is even."""
    return n % 2 == 0

def clamp(value, minimum, maximum):
    """Clamp value between minimum and maximum."""
    return max(minimum, min(value, maximum))
Now let us write tests for these functions. Notice how each test function has a descriptive name that explains what scenario it checks. This makes the test output act as living documentation for your code.
# test_calculator.py
import pytest

from calculator import divide, is_even, clamp

def test_divide_positive():
    assert divide(10, 2) == 5.0

def test_divide_returns_float():
    result = divide(7, 2)
    assert result == 3.5

def test_divide_by_zero_raises():
    with pytest.raises(ValueError, match="Cannot divide by zero"):
        divide(10, 0)

def test_is_even_with_even():
    assert is_even(4) is True

def test_is_even_with_odd():
    assert is_even(7) is False

def test_clamp_within_range():
    assert clamp(5, 0, 10) == 5

def test_clamp_below_minimum():
    assert clamp(-5, 0, 10) == 0

def test_clamp_above_maximum():
    assert clamp(15, 0, 10) == 10
Output:
$ pytest test_calculator.py -v
========================= test session starts =========================
collected 8 items
test_calculator.py::test_divide_positive PASSED
test_calculator.py::test_divide_returns_float PASSED
test_calculator.py::test_divide_by_zero_raises PASSED
test_calculator.py::test_is_even_with_even PASSED
test_calculator.py::test_is_even_with_odd PASSED
test_calculator.py::test_clamp_within_range PASSED
test_calculator.py::test_clamp_below_minimum PASSED
test_calculator.py::test_clamp_above_maximum PASSED
========================= 8 passed in 0.02s ==========================
Two key patterns to notice here. First, pytest.raises is a context manager that catches expected exceptions — you use it to verify that your code fails correctly when it should. The match parameter checks the exception message with a regex pattern, so you catch the right error for the right reason. Second, each test checks exactly one behavior. This makes failures precise: when test_clamp_below_minimum fails, you know instantly that the lower-bound logic is broken, not some vague “calculator is broken” signal.
Fixtures: Managing Test Setup and Teardown
Real-world tests need shared setup: database connections, temporary files, sample data objects. Copying setup code into every test function creates duplication and makes tests fragile. Fixtures solve this by letting you define reusable setup logic that pytest automatically injects into any test that needs it.
# test_user_service.py
import pytest

class User:
    def __init__(self, name, email, role="member"):
        self.name = name
        self.email = email
        self.role = role

    def promote(self):
        self.role = "admin"

    def display_name(self):
        return f"{self.name} ({self.role})"

@pytest.fixture
def sample_user():
    """Create a fresh user for each test."""
    return User("Alice", "alice@example.com")

@pytest.fixture
def admin_user():
    """Create a pre-promoted admin user."""
    user = User("Bob", "bob@example.com")
    user.promote()
    return user

def test_new_user_is_member(sample_user):
    assert sample_user.role == "member"

def test_promote_changes_role(sample_user):
    sample_user.promote()
    assert sample_user.role == "admin"

def test_admin_display_name(admin_user):
    assert admin_user.display_name() == "Bob (admin)"

def test_member_display_name(sample_user):
    assert sample_user.display_name() == "Alice (member)"
Output:
$ pytest test_user_service.py -v
========================= test session starts =========================
collected 4 items
test_user_service.py::test_new_user_is_member PASSED
test_user_service.py::test_promote_changes_role PASSED
test_user_service.py::test_admin_display_name PASSED
test_user_service.py::test_member_display_name PASSED
========================= 4 passed in 0.01s ==========================
Each fixture is a function decorated with @pytest.fixture. When a test function includes the fixture name as a parameter, pytest calls the fixture function and passes the return value into the test automatically. Every test gets a fresh instance, so test_promote_changes_role modifying the user does not affect test_member_display_name. This isolation is what makes fixtures so much better than shared global state.
Parametrize: Testing Multiple Inputs Without Repetition
When you need to test the same logic with different inputs, writing separate test functions for each case gets tedious fast. The @pytest.mark.parametrize decorator lets you run a single test function across multiple input-output combinations, keeping your test suite compact and easy to extend.
# test_string_utils.py
import pytest

def slugify(text):
    """Convert text to a URL-friendly slug."""
    return text.lower().strip().replace(" ", "-")

@pytest.mark.parametrize("input_text, expected", [
    ("Hello World", "hello-world"),
    ("Python Is Great", "python-is-great"),
    (" spaces ", "spaces"),
    ("ALLCAPS", "allcaps"),
    ("already-slugged", "already-slugged"),
])
def test_slugify(input_text, expected):
    assert slugify(input_text) == expected
Output:
$ pytest test_string_utils.py -v
========================= test session starts =========================
collected 5 items
test_string_utils.py::test_slugify[Hello World-hello-world] PASSED
test_string_utils.py::test_slugify[Python Is Great-python-is-great] PASSED
test_string_utils.py::test_slugify[ spaces -spaces] PASSED
test_string_utils.py::test_slugify[ALLCAPS-allcaps] PASSED
test_string_utils.py::test_slugify[already-slugged-already-slugged] PASSED
========================= 5 passed in 0.01s ==========================
The decorator takes a comma-separated string of parameter names and a list of tuples with the values. Each tuple becomes a separate test case with its own pass/fail status. If you later realize you need to handle punctuation in slugs, you just add another tuple to the list — no new function needed. The test output also labels each case with the inputs, making it obvious which specific scenario broke when something fails.
Mocking External Dependencies
Tests should be fast, deterministic, and independent of external systems. If your function calls an API, reads from a database, or sends an email, you do not want your tests to actually do those things. Mocking replaces those external dependencies with controlled fakes so you can test your logic in isolation.
Python includes unittest.mock in the standard library, and pytest works with it seamlessly. The key tool is patch, which temporarily replaces an object during a test.
# weather.py
import requests

def get_temperature(city):
    """Fetch current temperature for a city from a weather API."""
    response = requests.get(
        f"https://api.weather-service.com/current?city={city}"
    )
    data = response.json()
    return data["temperature"]

def weather_advice(city):
    """Return clothing advice based on temperature."""
    temp = get_temperature(city)
    if temp > 30:
        return "Wear shorts and sunscreen"
    elif temp > 15:
        return "A light jacket should do"
    else:
        return "Bundle up, it is cold"
# test_weather.py
from unittest.mock import patch, MagicMock

from weather import weather_advice

@patch("weather.requests.get")
def test_hot_weather_advice(mock_get):
    # Configure the mock to return a hot temperature
    mock_response = MagicMock()
    mock_response.json.return_value = {"temperature": 35}
    mock_get.return_value = mock_response

    result = weather_advice("Sydney")

    assert result == "Wear shorts and sunscreen"
    mock_get.assert_called_once()

@patch("weather.requests.get")
def test_cold_weather_advice(mock_get):
    mock_response = MagicMock()
    mock_response.json.return_value = {"temperature": 5}
    mock_get.return_value = mock_response

    result = weather_advice("Moscow")
    assert result == "Bundle up, it is cold"

@patch("weather.requests.get")
def test_mild_weather_advice(mock_get):
    mock_response = MagicMock()
    mock_response.json.return_value = {"temperature": 20}
    mock_get.return_value = mock_response

    result = weather_advice("London")
    assert result == "A light jacket should do"
Output:
$ pytest test_weather.py -v
========================= test session starts =========================
collected 3 items
test_weather.py::test_hot_weather_advice PASSED
test_weather.py::test_cold_weather_advice PASSED
test_weather.py::test_mild_weather_advice PASSED
========================= 3 passed in 0.03s ==========================
The @patch decorator replaces requests.get inside the weather module with a MagicMock object for the duration of the test. We configure the mock’s .json() return value to simulate different API responses. Note the patch target: we pass "weather.requests.get" (where the name is looked up), not "requests.get" (where it is defined); patching the wrong location is the most common mocking mistake. This setup means our tests never make real HTTP requests, run in milliseconds, and work even when there is no internet connection. The assert_called_once() check verifies that our code actually tried to call the API — catching bugs where the function might skip the API call entirely.
Organizing Test Files in a Real Project
As your project grows, you need a consistent structure for your test files. The standard convention is to mirror your source directory with a tests/ directory. Each source module gets a corresponding test file.
# Example project structure
my_project/
    src/
        __init__.py
        calculator.py
        user_service.py
        weather.py
    tests/
        __init__.py
        test_calculator.py
        test_user_service.py
        test_weather.py
        conftest.py
    pyproject.toml
Output:
$ pytest tests/ -v
========================= test session starts =========================
collected 15 items
tests/test_calculator.py::test_divide_positive PASSED
tests/test_calculator.py::test_divide_returns_float PASSED
tests/test_calculator.py::test_divide_by_zero_raises PASSED
tests/test_user_service.py::test_new_user_is_member PASSED
tests/test_user_service.py::test_promote_changes_role PASSED
tests/test_weather.py::test_hot_weather_advice PASSED
...
========================= 15 passed in 0.05s ==========================
The conftest.py file is a special pytest file that holds shared fixtures available to all test files in the same directory. You do not need to import from it — pytest discovers and loads it automatically. Put fixtures that multiple test files need (like database connections or test data factories) in conftest.py instead of duplicating them across files.
# tests/conftest.py
import pytest

@pytest.fixture
def sample_users():
    """Shared fixture available to all test files."""
    return [
        {"name": "Alice", "email": "alice@example.com", "role": "admin"},
        {"name": "Bob", "email": "bob@example.com", "role": "member"},
        {"name": "Charlie", "email": "charlie@example.com", "role": "member"},
    ]
Any test file in the tests/ directory can now use sample_users as a fixture parameter without any import statement. This keeps your test setup DRY and makes it easy for new team members to find shared test data in one place.
Useful pytest Command-Line Flags
Knowing a few key command-line flags makes pytest significantly more productive in daily development. Here are the ones you will use constantly.
# Run only tests matching a keyword pattern
$ pytest -k "divide" -v
# Stop on first failure (great for debugging)
$ pytest -x
# Show print() output from tests (normally captured)
$ pytest -s
# Run only tests that failed last time
$ pytest --lf
# Show the 5 slowest tests
$ pytest --durations=5
# Run tests in parallel (requires pytest-xdist)
$ pytest -n auto
Output (for -k “divide”):
$ pytest -k "divide" -v
========================= test session starts =========================
collected 15 items / 12 deselected / 3 selected
tests/test_calculator.py::test_divide_positive PASSED
tests/test_calculator.py::test_divide_returns_float PASSED
tests/test_calculator.py::test_divide_by_zero_raises PASSED
========================= 3 passed, 12 deselected in 0.02s ===========
The -k flag is invaluable when you are working on a specific feature and want to run only the related tests. The -x flag prevents pytest from running hundreds of tests after the first failure — when something is broken, you want to see the first failure immediately. And --lf (last failed) re-runs only the tests that failed in your previous run, giving you a tight feedback loop during debugging.
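If you find yourself typing the same flags every run, you can persist them in pyproject.toml under the [tool.pytest.ini_options] table. A possible starting point (the exact options are a matter of taste, and this assumes tests live in a tests/ directory):

```toml
[tool.pytest.ini_options]
# Flags applied to every pytest invocation
addopts = "-v --strict-markers"
# Only collect tests from this directory
testpaths = ["tests"]
```

With this in place, running bare pytest from the project root behaves like pytest tests/ -v --strict-markers.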
Real-Life Example: Testing a Shopping Cart Module
Let us put everything together in a realistic scenario. We will build a simple shopping cart module and write a comprehensive test suite that uses fixtures, parametrize, and exception testing.
# cart.py

class Product:
    def __init__(self, name, price):
        if price < 0:
            raise ValueError("Price cannot be negative")
        self.name = name
        self.price = price

class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, product, quantity=1):
        """Add a product to the cart."""
        if quantity < 1:
            raise ValueError("Quantity must be at least 1")
        self.items.append({"product": product, "quantity": quantity})

    def total(self):
        """Calculate the total price of all items."""
        return sum(
            item["product"].price * item["quantity"]
            for item in self.items
        )

    def item_count(self):
        """Return the total number of individual items."""
        return sum(item["quantity"] for item in self.items)

    def remove(self, product_name):
        """Remove all entries of a product by name."""
        before = len(self.items)
        self.items = [
            item for item in self.items
            if item["product"].name != product_name
        ]
        if len(self.items) == before:
            raise ValueError(f"Product '{product_name}' not in cart")

    def apply_discount(self, percent):
        """Return the total after a percentage discount."""
        if not 0 <= percent <= 100:
            raise ValueError("Discount must be between 0 and 100")
        return self.total() * (1 - percent / 100)
Now the test suite that exercises every method, edge case, and error condition:
# test_cart.py
import pytest

from cart import Product, ShoppingCart

# --- Fixtures ---

@pytest.fixture
def apple():
    return Product("Apple", 1.50)

@pytest.fixture
def laptop():
    return Product("Laptop", 999.99)

@pytest.fixture
def cart():
    return ShoppingCart()

@pytest.fixture
def loaded_cart(apple, laptop):
    cart = ShoppingCart()
    cart.add(apple, quantity=3)
    cart.add(laptop, quantity=1)
    return cart

# --- Product Tests ---

def test_product_creation():
    product = Product("Widget", 9.99)
    assert product.name == "Widget"
    assert product.price == 9.99

def test_product_negative_price():
    with pytest.raises(ValueError, match="Price cannot be negative"):
        Product("Bad", -5.00)

# --- Cart Basic Tests ---

def test_empty_cart_total(cart):
    assert cart.total() == 0

def test_empty_cart_count(cart):
    assert cart.item_count() == 0

def test_add_single_item(cart, apple):
    cart.add(apple)
    assert cart.item_count() == 1
    assert cart.total() == 1.50

def test_add_multiple_quantity(cart, apple):
    cart.add(apple, quantity=5)
    assert cart.item_count() == 5
    assert cart.total() == 7.50

def test_add_invalid_quantity(cart, apple):
    with pytest.raises(ValueError, match="Quantity must be at least 1"):
        cart.add(apple, quantity=0)

# --- Loaded Cart Tests ---

def test_loaded_cart_total(loaded_cart):
    expected = (1.50 * 3) + (999.99 * 1)  # 1004.49
    assert loaded_cart.total() == pytest.approx(expected)

def test_loaded_cart_count(loaded_cart):
    assert loaded_cart.item_count() == 4

# --- Remove Tests ---

def test_remove_item(loaded_cart):
    loaded_cart.remove("Apple")
    assert loaded_cart.item_count() == 1
    assert loaded_cart.total() == pytest.approx(999.99)

def test_remove_nonexistent_item(loaded_cart):
    with pytest.raises(ValueError, match="not in cart"):
        loaded_cart.remove("Banana")

# --- Discount Tests ---

@pytest.mark.parametrize("percent, expected_total", [
    (0, 1004.49),
    (10, 904.041),
    (50, 502.245),
    (100, 0.0),
])
def test_discount(loaded_cart, percent, expected_total):
    assert loaded_cart.apply_discount(percent) == pytest.approx(
        expected_total, rel=1e-2
    )

def test_invalid_discount(loaded_cart):
    with pytest.raises(ValueError, match="Discount must be between"):
        loaded_cart.apply_discount(110)
Output:
$ pytest test_cart.py -v
========================= test session starts =========================
collected 16 items
test_cart.py::test_product_creation PASSED
test_cart.py::test_product_negative_price PASSED
test_cart.py::test_empty_cart_total PASSED
test_cart.py::test_empty_cart_count PASSED
test_cart.py::test_add_single_item PASSED
test_cart.py::test_add_multiple_quantity PASSED
test_cart.py::test_add_invalid_quantity PASSED
test_cart.py::test_loaded_cart_total PASSED
test_cart.py::test_loaded_cart_count PASSED
test_cart.py::test_remove_item PASSED
test_cart.py::test_remove_nonexistent_item PASSED
test_cart.py::test_discount[0-1004.49] PASSED
test_cart.py::test_discount[10-904.041] PASSED
test_cart.py::test_discount[50-502.245] PASSED
test_cart.py::test_discount[100-0.0] PASSED
test_cart.py::test_invalid_discount PASSED
========================= 16 passed in 0.03s =========================
This test suite demonstrates how fixtures, parametrize, and exception testing work together in a real project. The loaded_cart fixture composes two other fixtures (apple and laptop) to build a realistic test scenario without duplicating setup code. The parametrized discount test covers four boundary conditions in five lines instead of twenty. And pytest.approx handles floating-point comparison so you do not get bitten by rounding errors when comparing monetary totals.
Frequently Asked Questions
How do I run a single test file or a single test function?
To run a single file, pass its path directly: pytest test_calculator.py. To run a specific function within that file, use double-colon syntax: pytest test_calculator.py::test_divide_positive. You can also use -k "keyword" to match test names by pattern, which is faster when you cannot remember the exact function name but know it involves "divide".
What is the difference between a fixture and a regular helper function?
A fixture is managed by pytest's dependency injection system — you declare it as a parameter and pytest calls it for you, handles cleanup, and ensures fresh instances per test. A regular helper function is just a Python function you call manually. Use fixtures when you need automatic setup/teardown or when multiple test files share the same setup logic. Use helpers for simple utility operations that do not need lifecycle management.
How do I check code coverage with pytest?
Install pytest-cov with pip install pytest-cov, then run pytest --cov=src tests/ where src is your source directory. It will print a coverage report showing which lines and branches are exercised by your tests. For an HTML report that lets you see uncovered lines visually, add --cov-report html and open the generated htmlcov/index.html in your browser.
Can pytest run unittest-style test classes?
Yes, pytest is fully backwards compatible with unittest.TestCase classes. It will discover and run them alongside pytest-style functions. This makes migration gradual — you can start writing new tests in pytest style while your old unittest tests continue to work. Over time, you can convert them to pytest functions to take advantage of fixtures and parametrize.
How do I skip a test or mark it as expected to fail?
Use @pytest.mark.skip(reason="Not implemented yet") to skip a test unconditionally. Use @pytest.mark.skipif(condition, reason="...") to skip based on a runtime condition (like the Python version or OS). For tests that document a known bug, use @pytest.mark.xfail — pytest runs them but does not count them as failures, and it will alert you when the bug is fixed and the test starts passing unexpectedly.
Conclusion
We covered the complete pytest workflow in this article: writing basic tests with assert, managing test setup with fixtures, testing multiple inputs with @pytest.mark.parametrize, mocking external dependencies with unittest.mock.patch, organizing test files with conftest.py, and using command-line flags for productive debugging. The shopping cart project showed how all these pieces fit together in a realistic codebase.
Try extending the shopping cart tests as practice: add a coupon code feature, implement a maximum cart size, or add a loyalty points calculation. Each new feature is an opportunity to write the test first (test-driven development) and watch it go from red to green.
For the complete pytest documentation, including advanced features like custom markers, plugin development, and async testing, visit the official docs at docs.pytest.org.