Every Python developer starts with print() for debugging. It works fine when you are learning, but the moment your code runs in production — on a server, in a scheduled task, or as a background service — print statements become useless. They disappear when the terminal closes, they have no timestamps, no severity levels, and no way to separate important messages from noise. That is where Python’s built-in logging module comes in, and when you combine it with rotating file handlers, you get a production-grade logging system that manages itself.
The good news is that Python ships with everything you need. The logging module is part of the standard library, so there is nothing to install. It supports multiple log levels (DEBUG through CRITICAL), custom formatting, and a variety of handlers that control where your logs go — console, files, network sockets, email, and more. The RotatingFileHandler and TimedRotatingFileHandler are especially useful because they automatically manage log file sizes and rotation, preventing your disk from filling up.
In this article we will set up a complete logging system from scratch. We will start with a quick working example, then explain why logging beats print(), walk through log levels and formatting, set up size-based rotation with RotatingFileHandler, time-based rotation with TimedRotatingFileHandler, combine multiple handlers for simultaneous console and file logging, touch on structured JSON logging, and finish with a real-life project — a Production Application Logger class you can drop into any project.
Python Logging With Rotation: Quick Example
Here is a minimal setup that logs messages to both the console and a rotating file. The file automatically rolls over when it hits 1 MB, keeping the last 3 backups.
```python
# quick_logging.py
import logging
from logging.handlers import RotatingFileHandler

# Create logger
logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)

# Console handler
console = logging.StreamHandler()
console.setLevel(logging.INFO)

# Rotating file handler (1 MB max, keep 3 backups)
file_handler = RotatingFileHandler("app.log", maxBytes=1_000_000, backupCount=3)
file_handler.setLevel(logging.DEBUG)

# Formatter
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
console.setFormatter(formatter)
file_handler.setFormatter(formatter)

# Add handlers
logger.addHandler(console)
logger.addHandler(file_handler)

# Test it
logger.debug("This goes to the file only")
logger.info("This goes to both console and file")
logger.warning("Something might be wrong")
logger.error("Something is definitely wrong")
```

Output (console):

```
2026-03-13 14:30:00,123 - myapp - INFO - This goes to both console and file
2026-03-13 14:30:00,124 - myapp - WARNING - Something might be wrong
2026-03-13 14:30:00,124 - myapp - ERROR - Something is definitely wrong
```
Notice that the DEBUG message only appears in the file, not the console — because we set the console handler to INFO level. The file handler captures everything from DEBUG up. When app.log reaches 1 MB, it automatically renames to app.log.1, creates a fresh app.log, and deletes the oldest backup beyond 3 files. Zero maintenance required.
Want to go deeper? Below we cover why logging beats print, all five log levels, custom formatting, both types of rotation handlers, and a production-ready logger class you can reuse in any project.
Why Not Just Use print()?
The print() function is great for quick debugging, but it falls apart in any serious application. Understanding its limitations helps explain why the logging module exists and why every production codebase uses it.
Here is a side-by-side comparison of what you get with each approach:
| Feature | print() | logging |
|---|---|---|
| Timestamps | Manual (you add them yourself) | Automatic with formatters |
| Severity levels | None | DEBUG, INFO, WARNING, ERROR, CRITICAL |
| Output destination | stdout by default | Console, files, email, network, etc. |
| File rotation | Not possible | Built-in with handlers |
| Easy to disable | Must delete or comment out | Change one level setting |
| Thread safety | Not guaranteed | Built-in thread safety |
| Source tracking | Manual | Automatic (module, line number, function) |
| Production ready | No | Yes |
The most important difference is control. With logging, you can set your production server to WARNING level (ignoring all DEBUG and INFO messages) without changing a single line of code. You can send errors to a file while keeping info messages in the console. You can add email alerts for CRITICAL failures. None of this is possible with print().
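As a sketch of that control, the pattern below reads the threshold from an environment variable, so the same code logs everything locally and only warnings in production. The logger name "envdemo" and the variable name "LOG_LEVEL" are illustrative choices, not a convention of the logging module:

```python
import logging
import os

# Pick the log level from an environment variable; default to WARNING.
# Set LOG_LEVEL=DEBUG locally to see everything - no code changes needed.
level_name = os.environ.get("LOG_LEVEL", "WARNING")
logger = logging.getLogger("envdemo")
logger.setLevel(getattr(logging, level_name.upper(), logging.WARNING))

logger.debug("diagnostic detail")   # dropped entirely at WARNING level
logger.warning("disk usage high")   # still emitted
```

With print() you would have to wrap every call in an if statement to get the same behavior.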
Basic Logging Setup
The simplest way to start logging is with logging.basicConfig(), which configures the root logger with a single function call. This is fine for scripts and small programs, though for larger applications you will want the more flexible approach we show later.
```python
# basic_setup.py
import logging

# Configure the root logger
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s - %(levelname)s - %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S"
)

# These all use the root logger
logging.debug("Detailed information for diagnosing problems")
logging.info("Confirmation that things are working as expected")
logging.warning("Something unexpected happened, but the program still works")
logging.error("A more serious problem - something failed")
logging.critical("The program may not be able to continue")
```

Output:

```
2026-03-13 14:30:00 - DEBUG - Detailed information for diagnosing problems
2026-03-13 14:30:00 - INFO - Confirmation that things are working as expected
2026-03-13 14:30:00 - WARNING - Something unexpected happened, but the program still works
2026-03-13 14:30:00 - ERROR - A more serious problem - something failed
2026-03-13 14:30:00 - CRITICAL - The program may not be able to continue
```
The basicConfig() function is a convenience wrapper. The level parameter sets the minimum severity to capture — anything below this level is silently ignored. The format string controls what each log line looks like, using placeholder variables like %(asctime)s for the timestamp and %(levelname)s for the severity. The datefmt parameter controls the timestamp format.
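One wrinkle worth knowing: basicConfig() only configures the root logger once. If the root logger already has handlers, later calls are silently ignored unless you pass force=True (available since Python 3.8), which removes the existing handlers and reconfigures from scratch:

```python
import logging

# First call configures the root logger (force=True here just guarantees a
# clean slate even if something configured logging earlier)
logging.basicConfig(level=logging.INFO, force=True)

# A second plain call is a no-op - the root logger already has handlers
logging.basicConfig(level=logging.DEBUG)
assert logging.getLogger().level == logging.INFO   # still INFO

# force=True tears down the old handlers and reconfigures for real
logging.basicConfig(level=logging.DEBUG, force=True)
assert logging.getLogger().level == logging.DEBUG
```

This is a common source of confusion when basicConfig() "does nothing" - usually some import already configured logging first.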

Understanding Log Levels
Log levels are the filtering mechanism that makes logging so powerful. Each level has a numeric value, and the logger only processes messages at or above the configured threshold. Understanding when to use each level is critical for writing logs that are actually useful when you need them.
```python
# log_levels.py
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)-8s %(message)s")

# DEBUG (10) - Detailed diagnostic info, only useful during development
logging.debug("Processing user_id=42, payload_size=1024 bytes")

# INFO (20) - Routine operational messages
logging.info("Server started on port 8080")
logging.info("User 'alice' logged in successfully")

# WARNING (30) - Something unexpected but not broken
logging.warning("Disk usage at 85% - consider cleanup")
logging.warning("API response took 4.2s (threshold: 3.0s)")

# ERROR (40) - Something failed, but the app continues
logging.error("Failed to connect to database: Connection refused")
logging.error("Payment processing failed for order #1234")

# CRITICAL (50) - The app may crash or is in an unrecoverable state
logging.critical("Out of memory - shutting down worker process")
logging.critical("Security breach detected: unauthorized admin access")
```

Output:

```
DEBUG    Processing user_id=42, payload_size=1024 bytes
INFO     Server started on port 8080
INFO     User 'alice' logged in successfully
WARNING  Disk usage at 85% - consider cleanup
WARNING  API response took 4.2s (threshold: 3.0s)
ERROR    Failed to connect to database: Connection refused
ERROR    Payment processing failed for order #1234
CRITICAL Out of memory - shutting down worker process
CRITICAL Security breach detected: unauthorized admin access
```
A common production strategy is to log DEBUG and INFO to files (for post-mortem analysis) while only showing WARNING and above in the console (to avoid drowning operators in noise). The %-8s in the format string left-aligns the level name in an 8-character field, making the output easier to scan visually.
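Because levels are plain integers, the threshold check is a simple comparison, and the logging module can translate between names and numbers in both directions:

```python
import logging

# The five standard levels and their numeric values
print(logging.DEBUG, logging.INFO, logging.WARNING,
      logging.ERROR, logging.CRITICAL)   # 10 20 30 40 50

# getLevelName() maps numbers to names and names to numbers
assert logging.getLevelName(30) == "WARNING"
assert logging.getLevelName("ERROR") == 40

# A logger set to WARNING processes anything with a value >= 30
logger = logging.getLogger("levels_demo")
logger.setLevel(logging.WARNING)
assert not logger.isEnabledFor(logging.INFO)   # 20 < 30: filtered out
assert logger.isEnabledFor(logging.ERROR)      # 40 >= 30: processed
```

The isEnabledFor() check is also useful for skipping expensive work when a level is disabled, as discussed in the FAQ.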
RotatingFileHandler for Size-Based Rotation
The RotatingFileHandler automatically creates a new log file when the current one reaches a specified size. Old log files are renamed with numeric suffixes (.1, .2, etc.) and the oldest files beyond your backup count are deleted automatically. This prevents log files from growing unbounded and filling up your disk.
```python
# rotating_handler.py
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("rotating_demo")
logger.setLevel(logging.DEBUG)

# Create rotating handler: 500 KB max, keep 5 backups
handler = RotatingFileHandler(
    filename="demo.log",
    maxBytes=500_000,   # 500 KB per file
    backupCount=5,      # Keep demo.log.1 through demo.log.5
    encoding="utf-8"    # Always specify encoding
)
formatter = logging.Formatter(
    "%(asctime)s | %(levelname)-8s | %(name)s | %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S"
)
handler.setFormatter(formatter)
logger.addHandler(handler)

# Simulate logging activity
for i in range(1000):
    logger.info(f"Processing record {i}: status=OK, duration=0.{i % 100:02d}s")
    if i % 100 == 0:
        logger.warning(f"Batch {i // 100} checkpoint reached")

logger.info("Processing complete")
```

Output (in demo.log):

```
2026-03-13 14:30:00 | INFO     | rotating_demo | Processing record 0: status=OK, duration=0.00s
2026-03-13 14:30:00 | WARNING  | rotating_demo | Batch 0 checkpoint reached
2026-03-13 14:30:00 | INFO     | rotating_demo | Processing record 1: status=OK, duration=0.01s
...
```
After running this, you will see files like demo.log, demo.log.1, demo.log.2, and so on in your directory. The demo.log file is always the current, active log. When it hits 500 KB, the handler renames it to demo.log.1 (pushing the previous .1 to .2, and so on) and starts writing to a fresh demo.log. Files beyond demo.log.5 are automatically deleted. The total maximum disk usage is maxBytes × (backupCount + 1) — in this case, about 3 MB.
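You can watch the rotation happen in seconds by shrinking maxBytes. The sketch below (the file name "tiny.log" and logger name "tiny_demo" are arbitrary) writes to a temporary directory and then lists the surviving files:

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

# A tiny maxBytes makes rotation trivially observable
log_dir = tempfile.mkdtemp()
path = os.path.join(log_dir, "tiny.log")

logger = logging.getLogger("tiny_demo")
logger.setLevel(logging.INFO)
handler = RotatingFileHandler(path, maxBytes=200, backupCount=2, encoding="utf-8")
logger.addHandler(handler)

# ~900 bytes of messages forces several rollovers
for i in range(50):
    logger.info("message number %d", i)
handler.close()

# Only the active file plus at most backupCount=2 backups survive
files = sorted(f for f in os.listdir(log_dir) if f.startswith("tiny.log"))
print(files)  # ['tiny.log', 'tiny.log.1', 'tiny.log.2']
```

Note that the handler rolls over *before* a write would push the active file past maxBytes, so the active file itself never exceeds the limit.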
TimedRotatingFileHandler for Time-Based Rotation
Sometimes you want logs rotated by time rather than size — a new file every day, every hour, or every week. The TimedRotatingFileHandler handles this automatically. This is especially useful for daily log files that you can easily search by date.
```python
# timed_handler.py
import logging
from logging.handlers import TimedRotatingFileHandler

logger = logging.getLogger("timed_demo")
logger.setLevel(logging.DEBUG)

# Create timed rotating handler: rotate at midnight, keep 30 days
handler = TimedRotatingFileHandler(
    filename="daily.log",
    when="midnight",   # Rotate at midnight
    interval=1,        # Every 1 day
    backupCount=30,    # Keep 30 days of logs
    encoding="utf-8",
    utc=False          # Use local time, not UTC
)
# Customize the backup file suffix to include the date
handler.suffix = "%Y-%m-%d"

formatter = logging.Formatter(
    "%(asctime)s | %(levelname)-8s | %(funcName)s | %(message)s"
)
handler.setFormatter(formatter)
logger.addHandler(handler)

# Example usage
def process_order(order_id, amount):
    logger.info(f"Processing order #{order_id} for ${amount:.2f}")
    if amount > 1000:
        logger.warning(f"High-value order #{order_id}: ${amount:.2f}")
    logger.debug(f"Order #{order_id} details sent to payment gateway")

process_order(1001, 49.99)
process_order(1002, 1500.00)
process_order(1003, 25.50)
```

Output (in daily.log):

```
2026-03-13 14:30:00,123 | INFO     | process_order | Processing order #1001 for $49.99
2026-03-13 14:30:00,123 | DEBUG    | process_order | Order #1001 details sent to payment gateway
2026-03-13 14:30:00,124 | INFO     | process_order | Processing order #1002 for $1500.00
2026-03-13 14:30:00,124 | WARNING  | process_order | High-value order #1002: $1500.00
2026-03-13 14:30:00,124 | DEBUG    | process_order | Order #1002 details sent to payment gateway
2026-03-13 14:30:00,125 | INFO     | process_order | Processing order #1003 for $25.50
2026-03-13 14:30:00,125 | DEBUG    | process_order | Order #1003 details sent to payment gateway
```
At midnight, the handler renames daily.log to daily.log.2026-03-13 and creates a fresh daily.log. (For midnight rotation, "%Y-%m-%d" is already the default suffix, so the explicit handler.suffix line above is optional; if you set a truly custom suffix, also update the handler's extMatch regex so old backups can still be found and deleted.) The when parameter accepts several values: "S" for seconds, "M" for minutes, "H" for hours, "D" for days, "midnight" for rotation at midnight, and "W0" through "W6" for specific weekdays, where W0 is Monday. The %(funcName)s formatter variable automatically includes the function name where the log call was made, which is extremely useful for tracing issues across a large codebase.
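Each when value comes with a matching default suffix, and the handler pre-computes the next rollover time at construction. A quick hourly-rotation sketch (the path is illustrative) shows both:

```python
import logging
import os
import tempfile
import time
from logging.handlers import TimedRotatingFileHandler

# Hourly rotation keeping one day (24 files) of backups
path = os.path.join(tempfile.mkdtemp(), "hourly.log")
handler = TimedRotatingFileHandler(path, when="H", interval=1, backupCount=24)

# The next rollover is pre-computed as a Unix timestamp
assert handler.rolloverAt > time.time()

# With when="H", rotated files get an hour-resolution suffix by default
print(handler.suffix)  # %Y-%m-%d_%H
handler.close()
```

No timer thread is involved: the handler simply checks rolloverAt on each emit and rotates when the current time has passed it, so a completely idle logger does not rotate until the next message arrives.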

Multiple Handlers: Console AND File Logging
In practice, you almost always want both console output (for real-time monitoring) and file output (for historical records). Python’s logging module makes this easy — a single logger can have multiple handlers, each with its own level and format.
```python
# multi_handler.py
import logging
from logging.handlers import RotatingFileHandler

def setup_logger(name, log_file="app.log", console_level=logging.INFO, file_level=logging.DEBUG):
    """Create a logger with both console and rotating file handlers."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)  # Capture everything; handlers filter

    # Console handler - concise format, higher threshold
    console_handler = logging.StreamHandler()
    console_handler.setLevel(console_level)
    console_fmt = logging.Formatter("%(levelname)-8s %(message)s")
    console_handler.setFormatter(console_fmt)

    # File handler - detailed format, captures everything
    file_handler = RotatingFileHandler(
        log_file, maxBytes=5_000_000, backupCount=5, encoding="utf-8"
    )
    file_handler.setLevel(file_level)
    file_fmt = logging.Formatter(
        "%(asctime)s | %(levelname)-8s | %(name)s:%(funcName)s:%(lineno)d | %(message)s"
    )
    file_handler.setFormatter(file_fmt)

    logger.addHandler(console_handler)
    logger.addHandler(file_handler)
    return logger

# Usage
logger = setup_logger("myapp")

def connect_to_database(host, port):
    logger.debug(f"Attempting connection to {host}:{port}")
    logger.info(f"Connected to database at {host}:{port}")
    return True

def fetch_users():
    logger.debug("Executing SELECT * FROM users")
    logger.info("Fetched 42 users from database")
    logger.warning("Query took 2.3 seconds (threshold: 1.0s)")

connect_to_database("localhost", 5432)
fetch_users()
```

Output (console — concise):

```
INFO     Connected to database at localhost:5432
INFO     Fetched 42 users from database
WARNING  Query took 2.3 seconds (threshold: 1.0s)
```

Output (app.log — detailed):

```
2026-03-13 14:30:00,123 | DEBUG    | myapp:connect_to_database:30 | Attempting connection to localhost:5432
2026-03-13 14:30:00,124 | INFO     | myapp:connect_to_database:31 | Connected to database at localhost:5432
2026-03-13 14:30:00,125 | DEBUG    | myapp:fetch_users:35 | Executing SELECT * FROM users
2026-03-13 14:30:00,125 | INFO     | myapp:fetch_users:36 | Fetched 42 users from database
2026-03-13 14:30:00,126 | WARNING  | myapp:fetch_users:37 | Query took 2.3 seconds (threshold: 1.0s)
```
The key insight is that the logger level must be set to the lowest level you want to capture (DEBUG), and each handler then filters independently. The console handler only shows INFO and above, keeping the terminal clean. The file handler captures everything including DEBUG messages, giving you full diagnostic detail when you need to investigate an issue after the fact. The file format includes the function name and line number (%(funcName)s:%(lineno)d), which makes tracing bugs significantly faster.
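Levels are not the only per-handler filtering mechanism. A logging.Filter can drop records by content. In the sketch below, the class names DropHeartbeats and ListHandler are invented for illustration: the filter suppresses routine keepalive chatter on one handler, and a list-collecting handler makes the effect visible:

```python
import logging

class DropHeartbeats(logging.Filter):
    """Drop any record whose message mentions 'heartbeat'."""
    def filter(self, record):
        return "heartbeat" not in record.getMessage()

class ListHandler(logging.Handler):
    """Collect messages in a list so we can inspect what got through."""
    def __init__(self):
        super().__init__()
        self.messages = []
    def emit(self, record):
        self.messages.append(record.getMessage())

logger = logging.getLogger("filter_demo")
logger.setLevel(logging.INFO)
handler = ListHandler()
handler.addFilter(DropHeartbeats())
logger.addHandler(handler)

logger.info("heartbeat ok")       # same level, but dropped by the filter
logger.info("user signed in")     # passes through
print(handler.messages)           # ['user signed in']
```

Filters can also be attached to the logger itself, in which case they apply before any handler sees the record.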
Real-Life Example: Production Application Logger

Let us build a reusable AppLogger class that you can drop into any project. It combines everything we have covered — console and file handlers, rotating files, structured formatting, and exception logging — into a clean, configurable package.
```python
# app_logger.py
import os
import logging
import traceback
from logging.handlers import RotatingFileHandler

class AppLogger:
    """Production-ready logger with console and rotating file output."""

    def __init__(self, name, log_dir="logs", console_level="INFO",
                 file_level="DEBUG", max_bytes=10_000_000, backup_count=10):
        # Create log directory if it doesn't exist
        os.makedirs(log_dir, exist_ok=True)
        self.logger = logging.getLogger(name)
        self.logger.setLevel(logging.DEBUG)

        # Prevent duplicate handlers if called multiple times
        if self.logger.handlers:
            return

        # Console handler - concise, human-readable format
        console = logging.StreamHandler()
        console.setLevel(getattr(logging, console_level.upper()))
        console_fmt = logging.Formatter(
            "%(asctime)s | %(levelname)-8s | %(message)s",
            datefmt="%H:%M:%S"
        )
        console.setFormatter(console_fmt)

        # Rotating file handler - detailed, size-based rotation
        log_path = os.path.join(log_dir, f"{name}.log")
        file_handler = RotatingFileHandler(
            log_path, maxBytes=max_bytes, backupCount=backup_count, encoding="utf-8"
        )
        file_handler.setLevel(getattr(logging, file_level.upper()))
        file_fmt = logging.Formatter(
            "%(asctime)s | %(levelname)-8s | %(name)s:%(funcName)s:%(lineno)d | %(message)s"
        )
        file_handler.setFormatter(file_fmt)

        # Error-only file handler - quick access to errors
        error_path = os.path.join(log_dir, f"{name}_errors.log")
        error_handler = RotatingFileHandler(
            error_path, maxBytes=max_bytes, backupCount=5, encoding="utf-8"
        )
        error_handler.setLevel(logging.ERROR)
        error_handler.setFormatter(file_fmt)

        self.logger.addHandler(console)
        self.logger.addHandler(file_handler)
        self.logger.addHandler(error_handler)

    def debug(self, msg): self.logger.debug(msg)
    def info(self, msg): self.logger.info(msg)
    def warning(self, msg): self.logger.warning(msg)
    def error(self, msg): self.logger.error(msg)
    def critical(self, msg): self.logger.critical(msg)

    def exception(self, msg):
        """Log an error with full traceback."""
        self.logger.error(f"{msg}\n{traceback.format_exc()}")

# Demo usage
if __name__ == "__main__":
    log = AppLogger("myapp")
    log.info("Application started")
    log.debug("Loading configuration from config.json")
    log.info("Database connection established")

    # Simulate processing
    for i in range(5):
        log.info(f"Processing batch {i + 1} of 5")
        if i == 2:
            log.warning("Batch 3 had 12 skipped records")

    # Simulate an error with traceback
    try:
        result = 1 / 0
    except ZeroDivisionError:
        log.exception("Math operation failed")

    log.info("Application shutdown complete")
```

Output (console):

```
14:30:00 | INFO     | Application started
14:30:00 | INFO     | Database connection established
14:30:00 | INFO     | Processing batch 1 of 5
14:30:00 | INFO     | Processing batch 2 of 5
14:30:00 | INFO     | Processing batch 3 of 5
14:30:00 | WARNING  | Batch 3 had 12 skipped records
14:30:00 | INFO     | Processing batch 4 of 5
14:30:00 | INFO     | Processing batch 5 of 5
14:30:00 | ERROR    | Math operation failed
Traceback (most recent call last):
  File "app_logger.py", line 74, in <module>
    result = 1 / 0
ZeroDivisionError: division by zero
14:30:00 | INFO     | Application shutdown complete
```
This logger class gives you three outputs: a clean console for real-time monitoring, a detailed log file for everything, and a separate error-only file for quick problem diagnosis. The exception() method automatically captures the full Python traceback, which is invaluable for debugging production errors. The duplicate handler check (if self.logger.handlers) prevents the common bug where creating multiple instances of the same logger adds duplicate handlers, causing each message to appear multiple times.
You could extend this logger with JSON-formatted output for log aggregation tools like ELK Stack, email alerts for CRITICAL messages using SMTPHandler, or Slack notifications via a custom handler.
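As a sketch of the JSON extension, here is a minimal formatter using only the standard library. The class name and field names are illustrative; the python-json-logger package discussed in the FAQ below is a more complete option:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object per line."""
    def format(self, record):
        payload = {
            "time": self.formatTime(record, "%Y-%m-%dT%H:%M:%S"),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)

logger = logging.getLogger("json_demo")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)

logger.info("order processed")
# emits something like:
# {"time": "2026-03-13T14:30:00", "level": "INFO", "logger": "json_demo", "message": "order processed"}
```

Because format() returns a plain string, you can attach a formatter like this to any handler, including the rotating handlers above, so the JSON files rotate exactly like the plain-text ones.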
Frequently Asked Questions
When should I use basicConfig vs manual handler setup?
Use basicConfig() for quick scripts, one-file programs, and learning. It is a single function call that handles the most common case. Switch to manual handler setup (creating Logger, Handler, and Formatter objects explicitly) when you need multiple handlers with different levels, custom formatting per output, or when building a library or larger application. The manual approach gives you complete control.
How do I silence noisy logs from third-party libraries?
Third-party libraries like requests, urllib3, and boto3 often produce verbose DEBUG logs. Set their logger level to WARNING: logging.getLogger("urllib3").setLevel(logging.WARNING). This silences their DEBUG and INFO messages without affecting your own logging. You can also use logging.getLogger("urllib3").propagate = False to completely stop their messages from reaching your root logger.
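The one-liner from the answer above works because child loggers inherit their effective level from the parent, so raising "urllib3" quiets the whole urllib3.* tree at once:

```python
import logging

# Raise the threshold for one library's logger tree; your own loggers
# keep their levels. "urllib3" matches the library named above.
logging.getLogger("urllib3").setLevel(logging.WARNING)

# Child loggers such as urllib3.connectionpool inherit the parent's level
noisy = logging.getLogger("urllib3.connectionpool")
assert not noisy.isEnabledFor(logging.DEBUG)   # filtered
assert noisy.isEnabledFor(logging.WARNING)     # still shown
```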
What format variables are available in log formatters?
The most useful ones are: %(asctime)s for timestamp, %(levelname)s for level, %(name)s for logger name, %(funcName)s for function name, %(lineno)d for line number, %(filename)s for file name, %(message)s for the actual message, and %(process)d for process ID. You can combine them in any order. For production, include at minimum the timestamp, level, and message.
How do I log in JSON format for tools like ELK Stack?
Install the python-json-logger package (pip install python-json-logger) and use its JsonFormatter. Replace your standard formatter with JsonFormatter("%(asctime)s %(levelname)s %(name)s %(message)s"). This outputs each log line as a JSON object with those fields as keys, which tools like Elasticsearch, Splunk, and CloudWatch can parse automatically without custom regex patterns.
Is Python logging thread-safe?
Yes. The logging module uses locks internally to ensure that log messages from different threads do not interleave or corrupt each other. Each handler has its own lock. This means you can safely use the same logger from multiple threads without any additional synchronization. For multi-process applications, however, you need to be more careful — RotatingFileHandler can have issues when multiple processes write to the same file. Use QueueHandler with a separate logging process in that case.
Does logging slow down my application?
Logging has minimal overhead when configured correctly. The biggest performance tip is to use lazy formatting: write logger.debug("Processing %d items", count) instead of logger.debug(f"Processing {count} items"). With the first form, the string formatting only happens if DEBUG level is enabled. With the f-string, the formatting happens every time regardless of level. For most applications, logging overhead is negligible compared to I/O, network calls, or database queries.
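One subtlety with lazy formatting: it defers building the *string*, but the arguments you pass are still evaluated before the call. For genuinely expensive arguments, guard the call with isEnabledFor(). The helper name expensive_summary below is invented for illustration:

```python
import logging
import time

logger = logging.getLogger("perf_demo")
logger.setLevel(logging.WARNING)

calls = {"n": 0}

def expensive_summary():
    calls["n"] += 1
    time.sleep(0.1)  # stands in for a costly computation
    return "summary"

# Lazy formatting: the string is never built because DEBUG is disabled
logger.debug("count: %s", "cheap value")

# But an expensive argument would still run - so guard it explicitly
if logger.isEnabledFor(logging.DEBUG):
    logger.debug("report: %s", expensive_summary())

print(calls["n"])  # 0 - the guard prevented the expensive call entirely
```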
Conclusion
You now have a complete understanding of Python’s logging system. We covered why print() falls short in production, how to use basicConfig() for quick setup, the five log levels and when to use each one, custom formatting with Formatter, size-based rotation with RotatingFileHandler, time-based rotation with TimedRotatingFileHandler, combining multiple handlers for simultaneous console and file output, and a reusable AppLogger class for production applications.
The AppLogger class from the real-life example is ready to use in your own projects. Try extending it with JSON output for log aggregation, email alerts for critical errors, or integration with monitoring tools like Sentry. Proper logging is one of those investments that pays for itself the first time you need to debug a production issue at 2 AM.
For the complete reference on handlers, formatters, filters, and configuration, check out the official Python logging documentation and the Logging Cookbook for advanced patterns.