
Why Logging Matters in Python

You’re debugging a production issue, but your application is silent. You added a few print() statements weeks ago, the messages got buried in the terminal, and now you have no idea what’s happening. Or worse: your app is logging to console, but the logs disappear the moment the process restarts. You need a way to capture what your application is doing—when it’s doing it, at what severity level, and where it should be recorded.

This is where Python’s built-in logging module becomes essential. Unlike print() statements, which are crude and leave no trace once you delete them, the logging module is a professional-grade system designed for production applications. It ships with the standard library, requires no external dependencies, and provides granular control over message levels, formatting, and output destinations.

In this article, you’ll learn how to set up the logging module to output messages simultaneously to both your console (for immediate feedback during development) and to a file (for long-term record-keeping and debugging). We’ll cover logging levels, handlers, formatters, log rotation to prevent massive log files, and the patterns used in real multi-module projects. By the end, you’ll understand how to instrument your code with logging that developers trust.

How To Set Up Logging: Quick Example

Here’s a minimal example that outputs log messages to both console and file:

# quick_logging_example.py
import logging

# Create a logger
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

# File handler
file_handler = logging.FileHandler("app.log")
file_handler.setLevel(logging.DEBUG)

# Console handler
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)

# Formatter
formatter = logging.Formatter(
    "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
file_handler.setFormatter(formatter)
console_handler.setFormatter(formatter)

# Add handlers to logger
logger.addHandler(file_handler)
logger.addHandler(console_handler)

# Log some messages
logger.debug("Debug message (goes to file only)")
logger.info("Info message (goes to both)")
logger.warning("Warning message (goes to both)")
logger.error("Error message (goes to both)")
logger.critical("Critical message (goes to both)")

Output (to console):

2026-03-29 14:22:15,342 - __main__ - INFO - Info message (goes to both)
2026-03-29 14:22:15,343 - __main__ - WARNING - Warning message (goes to both)
2026-03-29 14:22:15,344 - __main__ - ERROR - Error message (goes to both)
2026-03-29 14:22:15,344 - __main__ - CRITICAL - Critical message (goes to both)

Output (written to app.log):

2026-03-29 14:22:15,341 - __main__ - DEBUG - Debug message (goes to file only)
2026-03-29 14:22:15,342 - __main__ - INFO - Info message (goes to both)
2026-03-29 14:22:15,343 - __main__ - WARNING - Warning message (goes to both)
2026-03-29 14:22:15,344 - __main__ - ERROR - Error message (goes to both)
2026-03-29 14:22:15,344 - __main__ - CRITICAL - Critical message (goes to both)

Notice the key pattern: we created a logger, attached two separate handlers (one for files, one for console), set different levels for each, and applied a formatter that includes timestamps and severity levels. This is the foundation for everything that follows. The sections below show you how to customize each piece.

Debug Dee examining floating log entries through a magnifying glass
Good logs are how you debug code you wrote six months ago and forgot about.

What is Python Logging and Why Use It?

The logging module is Python’s standard library tool for recording events that happen during program execution. Unlike print statements, logging provides:

  • Severity levels — categorize messages by importance (DEBUG, INFO, WARNING, ERROR, CRITICAL)
  • Multiple outputs — send logs to files, console, email, syslog, or custom handlers simultaneously
  • Formatting control — include timestamps, function names, line numbers, and custom metadata
  • Filtering — selectively log messages based on logger name, level, or custom criteria
  • No side effects — unlike print, you can leave logging code in production without cluttering output

The alternative—using print() for debugging—breaks down immediately:

Aspect | print() Statements | logging Module
Disable in production | Must manually remove | Adjust level, keep code in place
Output destination | Always stdout | File, console, email, or custom
Timestamps | Manual string concatenation | Automatic, customizable format
Severity levels | None | DEBUG, INFO, WARNING, ERROR, CRITICAL
Performance | Always evaluates | Can be filtered; lazy evaluation
Multi-module coordination | No built-in support | Hierarchical logger names

The logging module is designed for exactly what you need: professional-grade event recording that stays in your code indefinitely.

Understanding Logging Levels

Python’s logging module defines five standard severity levels, plus NOTSET, which tells a logger to defer to its parent’s level. Each level has a numeric value, and loggers will only record messages at or above their configured level:

Level | Numeric Value | When to Use | Example
DEBUG | 10 | Detailed diagnostic info for debugging | Variable values, function entry/exit, loop iterations
INFO | 20 | General informational messages | Application startup, config loaded, request received
WARNING | 30 | Something unexpected or potentially harmful | Deprecated API usage, missing optional config, retrying failed request
ERROR | 40 | A serious problem; some operation failed | File not found, API returned 500, database connection lost
CRITICAL | 50 | A very serious error; program may not continue | Out of memory, permissions denied, unrecoverable system error

When you set a logger’s level to INFO, it will log INFO, WARNING, ERROR, and CRITICAL messages—but not DEBUG messages. This is how you control verbosity.

# logging_levels_demo.py
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

# Add a console handler so we can see output
handler = logging.StreamHandler()
handler.setLevel(logging.WARNING)
formatter = logging.Formatter("%(levelname)s - %(message)s")
handler.setFormatter(formatter)
logger.addHandler(handler)

# These will NOT appear (level is below WARNING)
logger.debug("This is a debug message")
logger.info("This is an info message")

# These WILL appear
logger.warning("This is a warning message")
logger.error("This is an error message")
logger.critical("This is a critical message")

Output:

WARNING - This is a warning message
ERROR - This is an error message
CRITICAL - This is a critical message

Notice: the logger itself has one level (DEBUG), but the console handler has a different level (WARNING). You can filter messages at multiple levels—first at the logger, then at each handler. This is crucial for sending different messages to different outputs (e.g., all DEBUG messages to a debug log file, only ERROR+ to a critical alert file).
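
For example, here’s a minimal sketch of that split; the file names debug.log and errors.log are illustrative:

# two_file_handlers.py
import logging

logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)  # the logger passes everything to its handlers

# One file receives everything...
debug_handler = logging.FileHandler("debug.log")
debug_handler.setLevel(logging.DEBUG)

# ...the other receives only ERROR and CRITICAL
error_handler = logging.FileHandler("errors.log")
error_handler.setLevel(logging.ERROR)

formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
debug_handler.setFormatter(formatter)
error_handler.setFormatter(formatter)

logger.addHandler(debug_handler)
logger.addHandler(error_handler)

logger.debug("Recorded in debug.log only")
logger.error("Recorded in both files")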

Handlers and Formatters: Controlling Where and How Logs Go

A logger is just a container. The actual work happens in handlers and formatters:

  • Handler — an output destination. FileHandler writes to a file, StreamHandler writes to console, etc.
  • Formatter — defines how log messages are formatted: which fields to include (timestamp, function name, etc.) and in what order

You create a handler, assign a formatter to it, set a level, and attach it to a logger. A single logger can have multiple handlers, each with different levels and formatters.

Creating a StreamHandler (Console Output):

# stream_handler_example.py
import logging

logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)

# Create a console handler
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)

# Format: timestamp, logger name, level, message
formatter = logging.Formatter(
    "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
console_handler.setFormatter(formatter)

logger.addHandler(console_handler)

logger.info("Application started")
logger.warning("This is a warning")
logger.error("An error occurred")

Output:

2026-03-29 14:25:30,123 - myapp - INFO - Application started
2026-03-29 14:25:30,124 - myapp - WARNING - This is a warning
2026-03-29 14:25:30,125 - myapp - ERROR - An error occurred

The %(asctime)s token automatically includes a timestamp. Other useful tokens include %(funcName)s (the function name), %(lineno)d (line number), and %(module)s (the module name, i.e. the source filename without its extension).

Creating a FileHandler (File Output):

# file_handler_example.py
import logging

logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)

# Create a file handler
file_handler = logging.FileHandler("app.log")
file_handler.setLevel(logging.DEBUG)

formatter = logging.Formatter(
    "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
file_handler.setFormatter(formatter)

logger.addHandler(file_handler)

logger.debug("Debug: application starting")
logger.info("Info: loading configuration")
logger.warning("Warning: deprecated API used")
logger.error("Error: failed to connect to database")

After running this, check your app.log file. All four messages will be there because the file handler’s level is DEBUG.

Output (written to app.log):

2026-03-29 14:27:01,456 - myapp - DEBUG - Debug: application starting
2026-03-29 14:27:01,457 - myapp - INFO - Info: loading configuration
2026-03-29 14:27:01,458 - myapp - WARNING - Warning: deprecated API used
2026-03-29 14:27:01,459 - myapp - ERROR - Error: failed to connect to database

Sudo Sam directing log traffic at an intersection
Handlers are traffic directors: DEBUG takes the file fork, ERROR takes the console.

Logging to Console and File Simultaneously

The most common pattern in production is to send all logs to a file (for permanent record) and only show WARNING+ messages on the console (for immediate visibility during operation). Here’s how:

# console_and_file_logging.py
import logging
import os

# Create a logger
logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)

# Create log directory if it doesn't exist
log_dir = "logs"
if not os.path.exists(log_dir):
    os.makedirs(log_dir)

# File handler: captures all messages
file_handler = logging.FileHandler(os.path.join(log_dir, "app.log"))
file_handler.setLevel(logging.DEBUG)

# Console handler: shows only warnings and above
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.WARNING)

# Shared formatter for both handlers
formatter = logging.Formatter(
    "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S"
)
file_handler.setFormatter(formatter)
console_handler.setFormatter(formatter)

# Attach handlers to logger
logger.addHandler(file_handler)
logger.addHandler(console_handler)

# Log messages at different levels
logger.debug("Starting application initialization")
logger.info("Configuration loaded successfully")
logger.info("Database connection established")
logger.warning("API response time is higher than usual")
logger.error("Failed to write to cache, continuing without cache")
logger.critical("Memory usage exceeded safe threshold")

Output (to console):

2026-03-29 14:30:12 - myapp - WARNING - API response time is higher than usual
2026-03-29 14:30:12 - myapp - ERROR - Failed to write to cache, continuing without cache
2026-03-29 14:30:12 - myapp - CRITICAL - Memory usage exceeded safe threshold

Output (written to logs/app.log):

2026-03-29 14:30:12 - myapp - DEBUG - Starting application initialization
2026-03-29 14:30:12 - myapp - INFO - Configuration loaded successfully
2026-03-29 14:30:12 - myapp - INFO - Database connection established
2026-03-29 14:30:12 - myapp - WARNING - API response time is higher than usual
2026-03-29 14:30:12 - myapp - ERROR - Failed to write to cache, continuing without cache
2026-03-29 14:30:12 - myapp - CRITICAL - Memory usage exceeded safe threshold

This pattern is powerful: you get a permanent record of everything (including the debug messages developers need when troubleshooting), while the console stays clean during normal operation, surfacing only the problems that need immediate attention the moment they occur.

Custom Log Formatting with Timestamps and Metadata

The formatter string controls what information appears in each log message. The most useful format tokens are:

Token | Meaning | Example
%(asctime)s | Timestamp (human-readable) | 2026-03-29 14:30:12,456
%(name)s | Logger name | myapp.database
%(levelname)s | Severity level | INFO, WARNING, ERROR
%(message)s | The actual log message | Database query completed
%(funcName)s | Name of function that logged | connect_to_db
%(filename)s | Source filename | database.py
%(lineno)d | Line number in source | 42
%(module)s | Module name | database
%(process)d | Process ID | 12345
%(thread)d | Thread ID | 140256789012345

Here are some practical format examples:

# formatting_examples.py
import logging

logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)

# Example 1: Detailed format with function and line number
handler1 = logging.StreamHandler()
formatter1 = logging.Formatter(
    "%(asctime)s [%(levelname)s] %(funcName)s:;%(lineno)d - %(message)s"
)
handler1.setFormatter(formatter1)

# Example 2: Compact format (good for production)
formatter2 = logging.Formatter(
    "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)

# Example 3: Include module name (useful in multi-file projects)
formatter3 = logging.Formatter(
    "[%(asctime)s] %(module)s - %(levelname)s - %(message)s"
)

# Example 4: ISO 8601 timestamp with timezone
handler4 = logging.StreamHandler()
formatter4 = logging.Formatter(
    "%(asctime)s - %(levelname)s - %(message)s",
    datefmt="%Y-%m-%dT%H:%M:%S"
)
handler4.setFormatter(formatter4)

logger.addHandler(handler1)

def process_payment(user_id):
    logger.info(f"Processing payment for user {user_id}")
    logger.debug("Validating card information")
    logger.info("Payment submitted to processor")
    return True

process_payment(12345)

Output (Example 1 format):

2026-03-29 14:32:45,123 [INFO] process_payment:55 - Processing payment for user 12345
2026-03-29 14:32:45,124 [DEBUG] process_payment:56 - Validating card information
2026-03-29 14:32:45,125 [INFO] process_payment:57 - Payment submitted to processor

Controlling Log File Size with Log Rotation

If your application runs 24/7 and logs every request, your log files can grow huge fast, eating disk space and slowing down anything that tries to read or grep them. The solution is RotatingFileHandler, which caps file size and automatically rolls old logs into numbered backups:

# File: rotating_logger.py
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("payments")
logger.setLevel(logging.DEBUG)

# Max 5 MB per file, keep 3 old files (app.log.1, app.log.2, app.log.3)
handler = RotatingFileHandler(
    "app.log",
    maxBytes=5 * 1024 * 1024,
    backupCount=3,
)
handler.setFormatter(logging.Formatter(
    "%(asctime)s [%(levelname)s] %(message)s"
))
logger.addHandler(handler)

# Simulate heavy logging
for i in range(100_000):
    logger.info(f"Processed request {i}")

When app.log hits 5 MB, the handler renames it to app.log.1, shifts older backups up the chain, and starts a fresh app.log. Once backupCount is reached, the oldest file is deleted. You get bounded disk usage with no manual cleanup.

For time-based rotation — one log file per day, week, or hour — use TimedRotatingFileHandler instead:

from logging.handlers import TimedRotatingFileHandler

# Roll over at midnight every day, keep 14 days of history
handler = TimedRotatingFileHandler(
    "app.log",
    when="midnight",
    interval=1,
    backupCount=14,
)

This is ideal for compliance scenarios where you need a clean audit trail per day, or for shipping logs to a daily archive bucket.
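
A complete daily-rotation setup might look like the sketch below; the audit.log name is illustrative. With when="midnight", rotated files get a date suffix by default, e.g. audit.log.2026-03-29:

# timed_rotation_example.py
import logging
from logging.handlers import TimedRotatingFileHandler

logger = logging.getLogger("audit")
logger.setLevel(logging.INFO)

# One file per day, 14 days of history
handler = TimedRotatingFileHandler(
    "audit.log",
    when="midnight",
    interval=1,
    backupCount=14,
)
handler.setFormatter(logging.Formatter("%(asctime)s [%(levelname)s] %(message)s"))
logger.addHandler(handler)

logger.info("Daily audit entry recorded")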

Logging Exceptions and Tracebacks

One of the most common logging mistakes is catching an exception and only writing the error message — losing the traceback that tells you where things went wrong. Compare these two patterns:

# Bad — just the message, no traceback
try:
    result = risky_operation()
except Exception as e:
    logger.error(f"Operation failed: {e}")

# Good — full traceback automatically included
try:
    result = risky_operation()
except Exception:
    logger.exception("Operation failed")

logger.exception() is shorthand for logger.error(msg, exc_info=True). It records the message AND the full stack trace, so when you’re debugging at 2 AM you can see exactly which line raised, what the call chain was, and which third-party library was involved. Always use logger.exception() inside except blocks.
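
Here’s a runnable sketch, with a ZeroDivisionError standing in for whatever risky_operation() might raise, followed by roughly what gets recorded:

# exception_logging_demo.py
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def risky_operation():
    return 1 / 0  # stand-in failure

try:
    result = risky_operation()
except Exception:
    logger.exception("Operation failed")

Output (approximately):

ERROR:__main__:Operation failed
Traceback (most recent call last):
  File "exception_logging_demo.py", line 11, in <module>
    result = risky_operation()
  File "exception_logging_demo.py", line 8, in risky_operation
    return 1 / 0  # stand-in failure
ZeroDivisionError: division by zero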

You can also force a traceback on lower-severity log calls with exc_info=True:

try:
    cache.get(key)
except CacheTimeout:
    logger.warning("Cache miss with timeout, falling back to DB", exc_info=True)
    return db.query(key)

Pick your log levels. Stick to them. Future you will read them.

Logging Across Multiple Modules

In real applications you have dozens of modules, and you want logs to show which one wrote each message. The convention is logger = logging.getLogger(__name__) at the top of every file. __name__ resolves to the dotted module path, so logs from app/services/payments.py appear under the logger name app.services.payments.

# File: app/services/payments.py
import logging

logger = logging.getLogger(__name__)  # name = "app.services.payments"

def charge_card(amount):
    logger.info("Charging card for $%s", amount)
    # ... charge logic ...

The benefit: in main.py (or wherever you configure logging) you can route specific modules to different handlers, set finer-grained levels, or silence noisy third-party libraries:

# File: main.py
import logging

# Root logger — catches everything at INFO+
logging.basicConfig(level=logging.INFO)

# Quiet down a noisy third-party library
logging.getLogger("urllib3").setLevel(logging.WARNING)

# Turn on DEBUG just for our payments module
logging.getLogger("app.services.payments").setLevel(logging.DEBUG)

This pattern scales — instead of editing logging calls in every file, you control verbosity from one place.

Structured Logging with JSON

Plain-text logs are great for tailing in a terminal, but if you ship logs to a centralized system (Elasticsearch, Datadog, Loki, CloudWatch), JSON-structured logs are dramatically easier to query. Each log line becomes a parsed record with searchable fields instead of regex-matchable strings.

The simplest path is python-json-logger:

# Install: pip install python-json-logger

import logging
from pythonjsonlogger import jsonlogger

logger = logging.getLogger("api")
handler = logging.StreamHandler()
handler.setFormatter(jsonlogger.JsonFormatter(
    "%(asctime)s %(name)s %(levelname)s %(message)s"
))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("user signup", extra={"user_id": 4231, "plan": "pro"})

Output:

{"asctime": "2026-03-29 15:01:22,847", "name": "api", "levelname": "INFO", "message": "user signup", "user_id": 4231, "plan": "pro"}

Now in your log aggregator you can filter by plan = "pro" directly, no regex required. The extra={} parameter is the secret — anything you pass there becomes a top-level JSON field.
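
If you want a field on every record without repeating extra={} at each call site, one option is a small logging.Filter that stamps each record; the environment field here is just an example. Since python-json-logger folds non-standard record attributes into its output (the same mechanism extra={} relies on), the field appears on every JSON line:

# environment_filter.py
import logging
from pythonjsonlogger import jsonlogger

class EnvironmentFilter(logging.Filter):
    # Attach a static field to every record passing through the handler
    def filter(self, record):
        record.environment = "production"  # becomes a top-level JSON field
        return True  # returning True keeps the record

logger = logging.getLogger("api")
handler = logging.StreamHandler()
handler.setFormatter(jsonlogger.JsonFormatter(
    "%(asctime)s %(name)s %(levelname)s %(message)s"
))
handler.addFilter(EnvironmentFilter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("cache warmed")  # output includes "environment": "production"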

Production Logging Best Practices

A few rules that pay back tenfold once your application is live and you’re not the only one debugging it:

  • Use lazy string formatting. Write logger.info("Got %s rows", count), not logger.info(f"Got {count} rows"). The lazy form only builds the string if the log level is actually enabled — important when DEBUG logs are off in production.
  • Don’t log secrets. Audit your messages for tokens, passwords, full credit card numbers, or PII. Centralized log storage is often broader-access than your production database.
  • Pick one log level per environment. DEBUG locally, INFO in staging, WARNING in production. Don’t mix.
  • Always include identifiers. Every log line tied to a user action should carry the user ID, request ID, or correlation ID. Logs without identifiers are noise.
  • Configure once, in one place. Use logging.config.dictConfig() with a config dict (or a YAML file) at app startup (a minimal example follows this list). Don’t sprinkle basicConfig() calls throughout the codebase.
  • Test that logs are being written. A surprising number of production outages are made worse by “we didn’t have any logs” — usually because someone called logging.basicConfig() after another module had already configured the root logger, and the second call silently no-ops.
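
Here’s a minimal dictConfig sketch of the console-plus-rotating-file setup from earlier; the handler names and file path are illustrative:

# logging_setup.py
import logging.config

LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "standard": {
            "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "level": "WARNING",
            "formatter": "standard",
        },
        "file": {
            "class": "logging.handlers.RotatingFileHandler",
            "level": "DEBUG",
            "formatter": "standard",
            "filename": "app.log",
            "maxBytes": 5 * 1024 * 1024,
            "backupCount": 3,
        },
    },
    "root": {
        "level": "DEBUG",
        "handlers": ["console", "file"],
    },
}

logging.config.dictConfig(LOGGING_CONFIG)

Call this once at startup, and every logger obtained with logging.getLogger(__name__) across the codebase inherits the configuration.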

Common Logging Pitfalls

Three patterns to watch for:

1. Calling logging.basicConfig() after another module has logged. basicConfig() only adds handlers if the root logger has none. The fix: configure logging as the very first thing in main.py, before importing your modules.

2. Duplicate log messages. If you accidentally add the same handler twice — or if your code sets up logging on import and again in __main__ — every message prints twice. The fix: guard handler setup with if not logger.hasHandlers():, or rely on dictConfig, which rebuilds the configuration idempotently.

3. Logger.propagate surprises. Child loggers propagate to parents by default. If you add a console handler to app AND to app.services, messages from app.services appear twice. Set logger.propagate = False on the child or only add handlers at the root.
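
A quick sketch of pitfall 3 and its fix (logger names illustrative):

# propagate_demo.py
import logging

parent = logging.getLogger("app")
parent.addHandler(logging.StreamHandler())

child = logging.getLogger("app.services")
child.addHandler(logging.StreamHandler())
child.setLevel(logging.INFO)

child.info("hello")        # printed twice: child's handler, then app's

child.propagate = False
child.info("hello again")  # printed once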

FAQ

Q: What’s the difference between logger.info() and logger.debug()?
A: Severity. INFO is for “normal operational events I want to see in production” — startup, request completion, scheduled job ran. DEBUG is for verbose internal state useful when reproducing a bug locally. In production, DEBUG is usually off so the noise doesn’t drown out the signal.

Q: Should I use print() instead?
A: For one-off scripts, fine. For anything you’ll run more than once, no. print() can’t be filtered by severity, can’t be redirected to multiple destinations, doesn’t carry timestamps or module names, and writes to stdout which mingles with your application’s actual output.

Q: How do I log to a remote system like CloudWatch or Datadog?
A: Two common approaches. (1) Ship logs to a local file in JSON format and run a sidecar agent (CloudWatch Agent, Vector, Fluent Bit) that tails the file and forwards. (2) Use a Python handler that posts directly — watchtower for CloudWatch, or Datadog’s own Python library for Datadog. Option 1 is more resilient because it survives network blips.

Q: Why are my logs not appearing?
A: Most common cause: the root logger’s level is higher than the message level. Try logging.basicConfig(level=logging.DEBUG) at the very top of main.py. Second most common: another import called basicConfig() first and you didn’t notice.

Q: How do I correlate logs across services in a microservices setup?
A: Generate a UUID-based request ID at the API gateway, pass it through every downstream service in a header (X-Request-ID), and include it in every log line via extra={"request_id": ...}. When you’re debugging an issue, you grep the request ID across all services’ logs and see the full timeline.
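
As a minimal sketch (the logger name and format are illustrative; in a real service the ID would come from the incoming X-Request-ID header):

# request_id_demo.py
import logging
import uuid

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s [%(request_id)s] %(message)s",
)
logger = logging.getLogger("billing")

request_id = str(uuid.uuid4())  # normally taken from X-Request-ID

# Every call must supply request_id because the format string references it
logger.info("charge started", extra={"request_id": request_id})
logger.info("charge completed", extra={"request_id": request_id})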

Wrapping Up

Python’s logging module is one of those tools where the 90% solution is straightforward — call logging.basicConfig(), get a logger with logging.getLogger(__name__), write info/error messages — and the remaining 10% (rotation, JSON output, multi-handler routing) becomes important as soon as your application leaves your laptop. Get the basics right early and the advanced patterns are small additions, not refactors.

The official Python logging documentation has the full reference for everything covered here plus the more obscure handlers (SMTP, SysLog, HTTP). For tutorials on related topics, see the related articles section below.