
Every production Python application needs logging. Not print() statements that vanish when your script closes — real, structured logs with timestamps, severity levels, file rotation, and the ability to turn verbosity up or down without touching your code. When something goes wrong at 2am on a production server, your log file is the only witness. If all you left behind are a few print("here") calls, you’re debugging blind.

Python ships with a powerful, flexible logging module in its standard library. It’s built around a hierarchy of loggers, handlers, and formatters that you configure once and use everywhere. The learning curve is a bit steeper than print(), but the payoff — structured, timestamped, level-filtered, file-backed logs — is enormous. No third-party packages are required to get started.

In this article we’ll cover the five log levels, the basicConfig shortcut, named loggers and the logger hierarchy, handlers (console, file, rotating), formatters, logging from multiple modules, and a complete real-world logging setup for a data pipeline application. By the end, you’ll have a professional logging setup you can drop into any project.

Python Logging: Quick Example

Here’s the fastest way to get meaningful logging output with timestamps and levels:

# quick_logging.py
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(levelname)s - %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'
)

logging.debug('This is a debug message')
logging.info('Application started')
logging.warning('Disk space running low')
logging.error('Failed to connect to database')
logging.critical('Application cannot continue')

Output:

2026-04-16 09:00:01 - DEBUG - This is a debug message
2026-04-16 09:00:01 - INFO - Application started
2026-04-16 09:00:01 - WARNING - Disk space running low
2026-04-16 09:00:01 - ERROR - Failed to connect to database
2026-04-16 09:00:01 - CRITICAL - Application cannot continue

basicConfig() sets up the root logger — the single logging object all other loggers inherit from if not configured themselves. The format string uses special tokens like %(asctime)s, %(levelname)s, and %(message)s. In the sections below we’ll go beyond the root logger to set up named, per-module loggers and file handlers.

What Is the logging Module and Why Use It?

The logging module provides a standardized way to emit messages from your application at different severity levels. Unlike print(), log messages carry metadata (timestamp, level, logger name, file, line number), can be routed to multiple destinations simultaneously (console AND file), and can be filtered by level without changing any code.

Level    | Numeric Value | When to Use
DEBUG    | 10            | Detailed diagnostic info during development
INFO     | 20            | Confirmation that things are working as expected
WARNING  | 30            | Something unexpected happened, but the app continues
ERROR    | 40            | A serious problem — part of the app couldn’t run
CRITICAL | 50            | A severe error — the app may not be able to continue
The level you set on a logger or handler acts as a filter: only messages at that level or higher are processed. Set to DEBUG in development to see everything; set to WARNING or ERROR in production to reduce noise. No code changes required — just a config change.
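You can see the filter in action by collecting what a logger actually emits. A small sketch (the logger name and the toy ListHandler are made up for this demo):

```python
import logging

records = []

class ListHandler(logging.Handler):
    """Toy handler that collects formatted messages in a list."""
    def emit(self, record):
        records.append(record.getMessage())

logger = logging.getLogger('level_demo')
logger.setLevel(logging.WARNING)   # the filter: WARNING and above
logger.addHandler(ListHandler())
logger.propagate = False           # keep the demo out of the root logger

logger.debug('dropped')            # below WARNING -- filtered out
logger.info('dropped too')         # below WARNING -- filtered out
logger.warning('kept')             # passes the filter
logger.error('kept too')           # passes the filter

print(records)  # ['kept', 'kept too']
```

Only the WARNING and ERROR messages reach the handler; the DEBUG and INFO calls are discarded before any formatting or I/O happens.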

Named Loggers and the Logger Hierarchy

The best practice is to create a named logger for each module using __name__. This gives every log message a module-level identifier and lets you control logging granularity per module in large applications.

# named_logger.py
import logging

# Create a logger named after this module
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

# Create a console handler with formatting
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'
)
handler.setFormatter(formatter)
logger.addHandler(handler)

# Use the logger
logger.info('Module initialized')
logger.debug('Loading configuration from file')
logger.warning('Config file not found, using defaults')

Output:

2026-04-16 09:00:01 - __main__ - INFO - Module initialized
2026-04-16 09:00:01 - __main__ - DEBUG - Loading configuration from file
2026-04-16 09:00:01 - __main__ - WARNING - Config file not found, using defaults

Loggers form a hierarchy based on their names. A logger named myapp.database is a child of myapp, which is a child of the root logger. Messages propagate up the hierarchy by default — so configuring handlers on the root logger or a parent logger affects all children. This hierarchy is what makes it possible to configure logging once in your main module and have it work across all your imports.
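The parent/child relationships described above are observable directly, since getLogger is a registry rather than a factory. A quick sketch (the 'myapp' names are illustrative):

```python
import logging

root = logging.getLogger()               # the root logger (empty name)
parent = logging.getLogger('myapp')
child = logging.getLogger('myapp.database')

# Dotted names define the parent/child relationship.
print(child.parent is parent)   # True
print(parent.parent is root)    # True

# getLogger returns the same object for the same name, every time.
print(logging.getLogger('myapp') is parent)  # True
```

Because getLogger('myapp') always returns the same object, any module can retrieve an already-configured logger just by knowing its name.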

Logging to a File

Writing logs to a file ensures you have a record of what happened, even after the terminal session closes. The FileHandler writes log messages to a file you specify.

# file_logging.py
import logging

logger = logging.getLogger('myapp')
logger.setLevel(logging.DEBUG)

# Console handler -- only show WARNING and above in the terminal
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.WARNING)
console_handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))

# File handler -- write everything DEBUG and above to a file
file_handler = logging.FileHandler('app.log', encoding='utf-8')
file_handler.setLevel(logging.DEBUG)
file_handler.setFormatter(
    logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s',
        datefmt='%Y-%m-%d %H:%M:%S'
    )
)

logger.addHandler(console_handler)
logger.addHandler(file_handler)

# Now emit messages
logger.debug('Processing item 1')       # Only in file
logger.info('Item 1 processed OK')      # Only in file
logger.warning('Item 2 skipped')        # Console AND file
logger.error('Item 3 failed: timeout')  # Console AND file

Terminal output:

WARNING: Item 2 skipped
ERROR: Item 3 failed: timeout

app.log contents:

2026-04-16 09:00:01 - myapp - DEBUG - Processing item 1
2026-04-16 09:00:01 - myapp - INFO - Item 1 processed OK
2026-04-16 09:00:01 - myapp - WARNING - Item 2 skipped
2026-04-16 09:00:01 - myapp - ERROR - Item 3 failed: timeout

This dual-handler pattern is extremely common in production: the console shows only what operators need to see in real time (warnings and errors), while the file captures the full diagnostic history for post-mortem debugging.

Rotating Log Files

Log files grow indefinitely if nothing manages them. The RotatingFileHandler automatically rotates log files when they hit a size limit, keeping a configurable number of backup files. The TimedRotatingFileHandler rotates on a schedule (daily, hourly, etc.).

# rotating_logs.py
import logging
from logging.handlers import RotatingFileHandler, TimedRotatingFileHandler

logger = logging.getLogger('rotating_demo')
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')

# Rotate when file hits 1MB, keep 5 backups
size_handler = RotatingFileHandler(
    'app_size.log',
    maxBytes=1_000_000,   # 1 MB
    backupCount=5,
    encoding='utf-8'
)
size_handler.setFormatter(formatter)

# Rotate daily at midnight, keep 7 days of logs
time_handler = TimedRotatingFileHandler(
    'app_daily.log',
    when='midnight',
    interval=1,
    backupCount=7,
    encoding='utf-8'
)
time_handler.setFormatter(formatter)

logger.addHandler(size_handler)
logger.addHandler(time_handler)

for i in range(100):
    logger.info(f'Processing record {i}')

print('Logging complete. Check app_size.log and app_daily.log')

Output:

Logging complete. Check app_size.log and app_daily.log

When app_size.log reaches 1MB, it’s renamed to app_size.log.1, any existing backups shift to app_size.log.2 and beyond, and a fresh app_size.log is started. Backups beyond the backupCount are deleted automatically. For long-running services like web servers or data pipelines, TimedRotatingFileHandler with when='midnight' and backupCount=30 gives you a month of daily logs with zero maintenance.
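You can watch the rotation happen by shrinking maxBytes to a few hundred bytes. A sketch using a temporary directory (the logger and file names are made up for the demo):

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

# A tiny maxBytes so rotation triggers after just a few records.
log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, 'demo.log')

logger = logging.getLogger('rotation_demo')
logger.setLevel(logging.INFO)
logger.propagate = False
handler = RotatingFileHandler(log_path, maxBytes=200, backupCount=2, encoding='utf-8')
logger.addHandler(handler)

for i in range(50):
    logger.info('record number %d with some padding text', i)

handler.close()
created = sorted(os.listdir(log_dir))
print(created)  # ['demo.log', 'demo.log.1', 'demo.log.2']
```

Fifty records rotate the file many times, but backupCount=2 means only the two most recent backups survive; older ones are deleted as rotation proceeds.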

Logging Exceptions

One of the most valuable logging features is capturing full exception tracebacks. Use logger.exception() inside an except block — it logs the message at ERROR level and automatically appends the full traceback.

# exception_logging.py
import logging

logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
logger = logging.getLogger(__name__)

def divide(a, b):
    try:
        result = a / b
        logger.info(f'Divided {a} / {b} = {result}')
        return result
    except ZeroDivisionError:
        logger.exception(f'Failed to divide {a} by {b}')
        return None

divide(10, 2)
divide(10, 0)  # Will log the traceback

Output:

2026-04-16 09:00:01 - INFO - Divided 10 / 2 = 5.0
2026-04-16 09:00:01 - ERROR - Failed to divide 10 by 0
Traceback (most recent call last):
  File "exception_logging.py", line 9, in divide
    result = a / b
ZeroDivisionError: division by zero

logger.exception() is equivalent to logger.error(msg, exc_info=True). The traceback is automatically included — you don’t need to call traceback.format_exc() or format it yourself. This is the pattern every production application should use inside exception handlers.
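To see that the traceback really is appended to the handler output, you can route the logger into a string buffer and inspect it. A sketch (the 'exc_demo' name is made up):

```python
import io
import logging

# Capture log output in a string buffer so we can inspect it.
buf = io.StringIO()
logger = logging.getLogger('exc_demo')
logger.setLevel(logging.ERROR)
logger.propagate = False
logger.addHandler(logging.StreamHandler(buf))

try:
    int('not a number')
except ValueError:
    # Same effect as logger.exception('parse failed') inside an except block
    logger.error('parse failed', exc_info=True)

output = buf.getvalue()
print('Traceback' in output)   # True -- the full traceback was appended
print('ValueError' in output)  # True
```

The buffer ends up containing the message, the "Traceback (most recent call last):" block, and the exception type, with no manual traceback formatting.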

Real-Life Example: Data Pipeline Logger

Here’s a complete logging setup for a data pipeline that validates and transforms a list of records, with full logging at every stage. (The example focuses on the logging; the final write to the output file is left as a stub.)

# data_pipeline.py
import logging
from logging.handlers import RotatingFileHandler

def setup_logger(name, log_file, level=logging.DEBUG):
    """Create a configured logger with console and rotating file handlers."""
    logger = logging.getLogger(name)
    logger.setLevel(level)

    if logger.handlers:
        return logger  # Prevent duplicate handlers on re-import

    formatter = logging.Formatter(
        '%(asctime)s | %(name)-20s | %(levelname)-8s | %(message)s',
        datefmt='%Y-%m-%d %H:%M:%S'
    )

    # Console: WARNING and above
    console = logging.StreamHandler()
    console.setLevel(logging.WARNING)
    console.setFormatter(formatter)

    # File: everything, rotate at 500KB, keep 3 backups
    fh = RotatingFileHandler(log_file, maxBytes=500_000, backupCount=3, encoding='utf-8')
    fh.setLevel(logging.DEBUG)
    fh.setFormatter(formatter)

    logger.addHandler(console)
    logger.addHandler(fh)
    return logger

def process_records(records, logger):
    """Process a list of record dicts. Returns (success_count, error_count)."""
    success = 0
    errors = 0

    for i, record in enumerate(records):
        try:
            if 'name' not in record:
                raise ValueError('Missing required field: name')
            if not isinstance(record.get('age', 0), int):
                raise TypeError(f'age must be an integer, got {type(record["age"]).__name__}')

            # Simulate transform
            transformed = {
                'id': i + 1,
                'name': record['name'].strip().title(),
                'age': record['age'],
                'status': 'active'
            }
            logger.debug(f'Processed record {i+1}: {transformed["name"]}')
            success += 1

        except (ValueError, TypeError) as e:
            logger.warning(f'Skipping record {i+1}: {e}')
            errors += 1
        except Exception as e:
            logger.exception(f'Unexpected error on record {i+1}')
            errors += 1

    return success, errors

def run_pipeline(input_records, output_path):
    logger = setup_logger('pipeline', 'pipeline.log')

    logger.info(f'Pipeline started. Input records: {len(input_records)}')

    success, errors = process_records(input_records, logger)

    logger.info(f'Pipeline complete. Success: {success}, Errors: {errors}')
    if errors > 0:
        logger.warning(f'{errors} records skipped due to errors')

    return success, errors

# Run the pipeline
sample_data = [
    {'name': 'alice johnson', 'age': 30},
    {'name': 'bob smith', 'age': 25},
    {'age': 40},                         # Missing name -- will warn
    {'name': 'charlie brown', 'age': 'thirty'},  # Wrong type -- will warn
    {'name': 'diana prince', 'age': 28},
]

success, errors = run_pipeline(sample_data, 'output.json')
print(f'\nFinal result: {success} processed, {errors} skipped')
print('Check pipeline.log for full details')

Output:

2026-04-16 09:00:01 | pipeline             | WARNING  | Skipping record 3: Missing required field: name
2026-04-16 09:00:01 | pipeline             | WARNING  | Skipping record 4: age must be an integer, got str

Final result: 3 processed, 2 skipped
Check pipeline.log for full details

The setup_logger() function uses a guard (if logger.handlers: return logger) to prevent duplicate handlers when the function is called multiple times, which is a common gotcha in larger projects. The pipeline logs every step to the file (DEBUG level) while showing only warnings and errors on the console, giving operators a clean output while preserving the full diagnostic trail in the log file.

Frequently Asked Questions

When should I use basicConfig vs named loggers?

Use basicConfig() for scripts and quick tools where you just need output to the console. For any application with multiple modules, use named loggers (logging.getLogger(__name__)) so you can identify which module emitted each message and control logging per module. Named loggers are the standard for libraries and production code.

Why am I seeing duplicate log messages?

This almost always happens because you added a handler to a logger that also propagates to the root logger, which has its own handler. Fix it either by setting logger.propagate = False on your named logger, or by removing the handler from the root logger. The pattern if logger.handlers: return logger in a setup function also prevents duplicate handlers when the function is called more than once.
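The duplication and both fixes are easy to demonstrate by capturing root and child output separately. A sketch (the 'dup_demo' name is made up; clearing root handlers is just for the demo):

```python
import io
import logging

root_buf = io.StringIO()
child_buf = io.StringIO()

root = logging.getLogger()
root.handlers.clear()                       # start clean for this demo
root.addHandler(logging.StreamHandler(root_buf))

logger = logging.getLogger('dup_demo')
logger.addHandler(logging.StreamHandler(child_buf))

logger.warning('hello')                     # handled here AND propagated to root
print(child_buf.getvalue().count('hello'))  # 1
print(root_buf.getvalue().count('hello'))   # 1 -- this is the "duplicate"

logger.propagate = False
logger.warning('again')
print(root_buf.getvalue().count('again'))   # 0 -- no longer reaches root
```

One logging call, two handlers, two copies of the message, until propagation is switched off.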

How do I turn off all logging in production?

Call logging.disable(logging.CRITICAL) — it silences everything at CRITICAL and below (effectively everything). More typically, just set the production handler level to WARNING or ERROR rather than disabling logging entirely — you want errors logged even in production.

How do I configure logging from a config file?

Use logging.config.fileConfig('logging.ini') for INI-format config files, or logging.config.dictConfig(config_dict) for dictionary-based config (which you can load from a JSON or YAML file). Dictionary config is the modern approach — it’s more flexible and easier to version-control alongside your application code.
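A minimal dictConfig sketch, equivalent to the dual-handler setup used earlier in this article (the 'myapp' logger name is illustrative; the dict could just as well be loaded from a JSON file):

```python
import logging
import logging.config

config = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'standard': {
            'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        }
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'WARNING',
            'formatter': 'standard'
        }
    },
    'loggers': {
        'myapp': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False
        }
    }
}

logging.config.dictConfig(config)
logger = logging.getLogger('myapp')
logger.warning('configured from a dict')
```

Changing the production log level now means editing one value in a config file, with no code changes at all.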

Does logging slow down my application?

At DEBUG level with many log messages, yes, logging adds overhead — especially if writing to disk. In production, set the level to WARNING or ERROR so most logging.debug() and logging.info() calls return immediately without any I/O. For extremely hot code paths, check if logger.isEnabledFor(logging.DEBUG): before constructing expensive log messages.
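The isEnabledFor guard matters because the arguments to a logging call are evaluated before the level filter runs. A sketch showing the difference (expensive_summary is a made-up stand-in for a costly computation):

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger('perf_demo')

calls = []

def expensive_summary():
    """Stand-in for a costly computation we only want at DEBUG level."""
    calls.append(1)
    return 'summary'

# Even with lazy %-formatting, Python evaluates the argument first:
#   logger.debug('state: %s', expensive_summary())  # expensive_summary runs anyway

# Guarding with isEnabledFor skips the work entirely at WARNING level:
if logger.isEnabledFor(logging.DEBUG):
    logger.debug('state: %s', expensive_summary())

print(len(calls))  # 0 -- the expensive call never ran
```

For ordinary messages the guard is overkill; reserve it for log arguments that are genuinely expensive to build.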

Conclusion

Python’s logging module gives you a production-grade observability system built into the standard library. We covered the five log levels (DEBUG through CRITICAL), setting up named loggers with getLogger(__name__), combining console and file handlers with different levels, automatic log rotation with RotatingFileHandler and TimedRotatingFileHandler, capturing exception tracebacks with logger.exception(), and building a complete pipeline logger. Replace every print() statement in your applications with the appropriate logging call — your future debugging self will thank you.

Extend the data pipeline example by loading the logging configuration from a JSON file so it can be adjusted without code changes, or add a SMTPHandler to email you when a CRITICAL event fires. The logging module’s handler ecosystem is extensive.

See the official logging documentation and the logging cookbook for advanced patterns including thread-safe logging and multiprocessing log handlers.