Every serious Python project needs logging, but the standard logging module requires you to configure handlers, formatters, and levels before you can write a single line to a file. By the time you have a working setup, you have written more boilerplate than actual code. Worse, exceptions logged with the standard module show a flat traceback — no colours, no local variable values, no easy way to tell which frame caused the crash. Debugging a production issue from a plain-text traceback feels like reading a crash report by torchlight.
Loguru is a third-party library that replaces all of that boilerplate with one import. Install it with pip install loguru and you get a pre-configured logger with coloured output, automatic exception tracing with variable values, file rotation, structured logging, and async support — all driven by a single logger.add() call instead of five classes and a dozen method calls.
In this article we will set up Loguru from scratch, learn its core logging levels and message formatting, configure file sinks with rotation and retention, capture rich exception tracebacks, log structured data, and build a real-life application logger for a FastAPI service. You will finish with a drop-in logging setup you can reuse across any project.
Loguru Quick Example
The fastest way to appreciate Loguru is to see how little setup it needs compared to the standard library:
```python
# quick_loguru.py
from loguru import logger
logger.debug("Checking config...")
logger.info("Application started")
logger.warning("Rate limit approaching: 95/100 requests used")
logger.error("Database connection failed")
logger.success("Cache warmed successfully")  # Loguru-only level
```
Output (colourised in terminal):
```
2026-04-27 08:12:34.501 | DEBUG    | __main__:<module>:3 - Checking config...
2026-04-27 08:12:34.502 | INFO     | __main__:<module>:4 - Application started
2026-04-27 08:12:34.502 | WARNING  | __main__:<module>:5 - Rate limit approaching: 95/100 requests used
2026-04-27 08:12:34.503 | ERROR    | __main__:<module>:6 - Database connection failed
2026-04-27 08:12:34.503 | SUCCESS  | __main__:<module>:7 - Cache warmed successfully
```
No setup, no basicConfig(), no getLogger(). Just import and log. Loguru automatically writes to stderr with timestamps, level names, the calling module and line number, and full colour coding per level. The extra SUCCESS level sits between INFO and WARNING — useful for confirming that an important step completed cleanly.
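You can confirm where SUCCESS sits by asking the logger for its level numbers — logger.level() returns the registered level object, including its numeric severity:

```python
from loguru import logger

print(logger.level("INFO").no)     # 20
print(logger.level("SUCCESS").no)  # 25 -- between INFO and WARNING
print(logger.level("WARNING").no)  # 30
```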
What Is Loguru and How Does It Differ?
The standard logging module works fine but was designed for maximum configurability, which means minimum convenience. You must create a logger, attach a handler, set a formatter, and set levels — four separate objects just to write a message to a file. Loguru’s philosophy is the opposite: one global logger object, all configuration through logger.add(), and sensible defaults for everything.
| Feature | logging (stdlib) | Loguru |
|---|---|---|
| Setup lines for file logging | 6-10 | 1 |
| Coloured terminal output | Needs extra lib | Built-in |
| Exception tracing with variables | No | Yes |
| File rotation by size/time | RotatingFileHandler | logger.add(…, rotation=) |
| Structured logging | Extra JSON formatter | logger.add(…, serialize=True) |
| SUCCESS level | No | Yes |
Loguru does not replace the standard library in production systems that integrate with third-party logging infrastructure (like Sentry or Datadog handlers). For those, you often bridge Loguru back to the stdlib. But for greenfield projects, scripts, microservices, and anything you write from scratch, Loguru is simply faster to ship.
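To make the table's first row concrete, here is the same file-logging setup in both APIs, as a sketch (the filename and format string are illustrative):

```python
# stdlib: four objects before the first message reaches a file
import logging

handler = logging.FileHandler("app.log")
handler.setFormatter(logging.Formatter("%(asctime)s | %(levelname)s | %(message)s"))
stdlib_logger = logging.getLogger("app")
stdlib_logger.setLevel(logging.DEBUG)
stdlib_logger.addHandler(handler)
stdlib_logger.info("hello from stdlib")

# Loguru: one call does the same job
from loguru import logger

logger.add("app.log", format="{time} | {level} | {message}")
logger.info("hello from loguru")
```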
Configuring Sinks: Terminal, File, and Rotation
In Loguru, a “sink” is any destination for log messages — a file, a terminal stream, a network endpoint, or a function. You add sinks with logger.add() and remove the default stderr sink if you want complete control:
```python
# sinks_demo.py
import sys
from loguru import logger

# Remove the default stderr sink
logger.remove()

# Add a clean stdout sink showing only INFO and above
logger.add(
    sys.stdout,
    level="INFO",
    colorize=True,
    format="{time:HH:mm:ss} | {level} | {message}",
)

# Add a rotating file sink -- new file every day, keep 7 days
logger.add(
    "logs/app_{time:YYYY-MM-DD}.log",
    level="DEBUG",
    rotation="00:00",     # rotate at midnight
    retention="7 days",   # delete logs older than 7 days
    compression="zip",    # compress old logs
    enqueue=True,         # thread-safe async writing
)

logger.debug("This goes only to the file (below INFO threshold for stdout)")
logger.info("This goes to both stdout and file")
logger.error("Error logged to both sinks")
```
Output (stdout):
```
08:12:34 | INFO | This goes to both stdout and file
08:12:34 | ERROR | Error logged to both sinks
```
The rotation parameter accepts time strings like "00:00" (midnight), size strings like "100 MB", or a timedelta. The retention parameter automatically deletes old files. enqueue=True makes writes happen in a background thread, which is essential for high-throughput applications where file I/O should not block the main thread. compression="zip" saves disk space by compressing rotated files immediately.
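A few more rotation and retention variants as a quick sketch — the file names and thresholds here are purely illustrative:

```python
from loguru import logger

# Rotate when the file exceeds a size threshold
logger.add("logs/big.log", rotation="100 MB")

# Rotate on a schedule expressed as a duration
logger.add("logs/weekly.log", rotation="1 week")

# retention also accepts an int: keep only the 10 most recent files
logger.add("logs/app.log", retention=10)
```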
Rich Exception Tracing
Loguru’s most impressive feature is exception logging. Use logger.exception() inside an except block, or use logger.opt(exception=True) — Loguru prints the full traceback including local variable values at every frame:
```python
# exception_demo.py
from loguru import logger

def parse_config(data: dict) -> dict:
    required = ["host", "port", "db_name"]
    result = {}
    for key in required:
        value = data[key]  # KeyError if missing
        result[key] = value
    return result

config_input = {"host": "localhost", "port": 5432}  # missing db_name
try:
    config = parse_config(config_input)
except KeyError:
    logger.exception("Config parsing failed -- check your .env file")
```
Output:
```
2026-04-27 08:12:35.100 | ERROR    | __main__:<module>:16 - Config parsing failed -- check your .env file
Traceback (most recent call last):
  File "exception_demo.py", line 14, in <module>
    config = parse_config(config_input)
    -- config_input = {'host': 'localhost', 'port': 5432}
  File "exception_demo.py", line 8, in parse_config
    value = data[key]
    -- data = {'host': 'localhost', 'port': 5432}
    -- key = 'db_name'
KeyError: 'db_name'
```
The -- variable = value lines after each frame are Loguru’s signature feature. When you are debugging a crash that happened in production six hours ago, seeing exactly what values were in scope when it crashed is the difference between a five-minute fix and a two-hour investigation. This alone is worth the dependency.
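If you want the same rich traceback at a level other than ERROR, the logger.opt(exception=True) form mentioned above does the job. A minimal sketch:

```python
from loguru import logger

try:
    1 / 0
except ZeroDivisionError:
    # Same traceback-with-variables output as logger.exception(),
    # but logged at WARNING instead of ERROR
    logger.opt(exception=True).warning("Recovered from a division error")
```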
Structured Logging and Context Binding
For machine-readable logs consumed by log aggregators like Elasticsearch or Datadog, enable JSON output with serialize=True. Use logger.bind() to attach context that flows through every subsequent log call in a request lifecycle:
```python
# structured_logging.py
import sys
from loguru import logger

# Replace the default coloured sink with a JSON-output sink
logger.remove()
logger.add(sys.stdout, serialize=True, level="INFO")

# Bind context for a specific request
request_logger = logger.bind(request_id="req-abc-123", user_id=42)
request_logger.info("Processing payment")
request_logger.info("Payment authorised", amount=99.95, currency="AUD")
```
Output (JSON, one object per line):
{"text": "Processing payment\n", "record": {"elapsed": {...}, "level": {"name": "INFO", ...}, "extra": {"request_id": "req-abc-123", "user_id": 42}, "message": "Processing payment", ...}}
{"text": "Payment authorised\n", "record": {..., "extra": {"request_id": "req-abc-123", "user_id": 42, "amount": 99.95, "currency": "AUD"}, ...}}
Every JSON log line contains the full record with the bound context values. Log aggregators can index extra.request_id and extra.user_id to give you per-request drill-down in your observability dashboard. The logger.bind() call returns a new logger instance — it does not modify the global logger, so you can safely use it inside async handlers or threads without affecting other request contexts.
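A quick sketch of that independence — each bind() call produces its own logger object, and the global logger stays untouched:

```python
from loguru import logger

worker_a = logger.bind(worker="A")
worker_b = logger.bind(worker="B")

worker_a.info("task started")   # extra contains {"worker": "A"}
worker_b.info("task started")   # extra contains {"worker": "B"}
logger.info("no context here")  # extra is still empty
```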
Real-Life Example: FastAPI Request Logger
Here is a complete logging setup for a FastAPI application with per-request context, file rotation, and exception capturing:
```python
# app_logger.py
import sys
import uuid

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
from loguru import logger

# ------------------------------------------------------------------
# Logging setup -- call once at startup
# ------------------------------------------------------------------
def setup_logging():
    logger.remove()  # clear defaults
    # Default req_id so logs outside a request don't break the format below
    logger.configure(extra={"req_id": "--------"})
    # Human-readable stdout for development
    logger.add(
        sys.stdout,
        level="DEBUG",
        colorize=True,
        format="{time:HH:mm:ss} | {level:<8} | {extra[req_id]:.8} | {message}",
    )
    # Rotating JSON file for production / log aggregation
    logger.add(
        "logs/api_{time:YYYY-MM-DD}.jsonl",
        level="INFO",
        serialize=True,
        rotation="100 MB",
        retention="30 days",
        compression="gz",
        enqueue=True,
    )

# ------------------------------------------------------------------
# FastAPI app with middleware for per-request logging
# ------------------------------------------------------------------
setup_logging()
app = FastAPI()

@app.middleware("http")
async def log_requests(request: Request, call_next):
    req_id = uuid.uuid4().hex[:8]
    req_logger = logger.bind(req_id=req_id, path=request.url.path)
    req_logger.info(f"START {request.method} {request.url.path}")
    try:
        response = await call_next(request)
        req_logger.info(f"END {response.status_code}")
        return response
    except Exception:
        req_logger.exception("Unhandled exception during request")
        return JSONResponse({"error": "internal server error"}, status_code=500)

@app.get("/items/{item_id}")
async def get_item(item_id: int):
    if item_id < 1:
        raise ValueError(f"item_id must be positive, got {item_id}")
    return {"item_id": item_id, "name": f"Item {item_id}"}
```
Sample stdout output for a request to /items/5:
```
08:12:35 | INFO     | 1a2b3c4d | START GET /items/5
08:12:35 | INFO     | 1a2b3c4d | END 200
```
Each request gets a short UUID prefix so you can grep the logs for a single request across multiple log lines. The except Exception in the middleware catches any unhandled error, logs the full traceback with local variables, and returns a clean 500 response to the client. To extend the system, call logger.add() again with a Slack or PagerDuty webhook sink that fires only on CRITICAL messages.
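As a sketch of that extension — the webhook URL below is a placeholder, requests is an extra dependency, and the payload shape is whatever your alerting service expects:

```python
import requests  # third-party HTTP client: pip install requests
from loguru import logger

WEBHOOK_URL = "https://hooks.slack.com/services/..."  # placeholder

def alert_sink(message):
    # Function sinks receive the formatted message; message.record
    # holds the structured record if you need level, time, etc.
    requests.post(WEBHOOK_URL, json={"text": str(message)}, timeout=5)

logger.add(alert_sink, level="CRITICAL")
```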
Frequently Asked Questions
How do I use Loguru alongside third-party libraries that use stdlib logging?
Most third-party libraries log via the stdlib logging module. To forward those messages into Loguru, intercept the stdlib root logger with a custom handler: create a class that subclasses logging.Handler, override emit() to re-emit the record's level and message through Loguru, and attach it with logging.basicConfig(handlers=[YourHandler()], level=0) so that every stdlib log call, at any level, flows through your Loguru sinks automatically.
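Here is a sketch of that handler, closely following the recipe in the Loguru documentation:

```python
import inspect
import logging

from loguru import logger

class InterceptHandler(logging.Handler):
    def emit(self, record: logging.LogRecord) -> None:
        # Map the stdlib level to a Loguru level name when one exists
        try:
            level = logger.level(record.levelname).name
        except ValueError:
            level = record.levelno

        # Walk back past the logging module so Loguru reports the real caller
        frame, depth = inspect.currentframe(), 0
        while frame and (depth == 0 or frame.f_code.co_filename == logging.__file__):
            frame = frame.f_back
            depth += 1

        logger.opt(depth=depth, exception=record.exc_info).log(level, record.getMessage())

logging.basicConfig(handlers=[InterceptHandler()], level=0, force=True)
```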
Can I add custom log levels beyond the built-ins?
Yes. Use logger.level("AUDIT", no=38, color="<yellow>", icon="@") to register a new level between WARNING (30) and ERROR (40). After registering, call it with logger.log("AUDIT", "User deleted account"). Custom levels appear in all sinks that have a level number at or below your custom level's number. This is useful for security audit logs that need to persist even when general INFO logging is disabled.
Is Loguru safe to use in async code?
Yes, with enqueue=True on your file sinks. Without it, multiple async tasks writing to the same file sink can interleave log lines. enqueue=True routes all writes through an internal queue processed by a dedicated thread, serializing them safely. The logger.bind() context is also async-safe because it returns a new logger object rather than mutating global state.
How do I capture Loguru output in pytest tests?
Add a Loguru sink that writes to a list or a StringIO buffer before the test, and remove it after. A clean pattern is a pytest fixture that calls logger.add(string_buffer) with level="DEBUG", yields the buffer for assertions, then calls logger.remove(handler_id) in teardown. You can then assert on string_buffer.getvalue() to verify that specific log messages were emitted during the test.
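A minimal version of that fixture might look like this (the fixture and test names are illustrative):

```python
# conftest.py
import io

import pytest
from loguru import logger

@pytest.fixture
def log_output():
    buffer = io.StringIO()
    handler_id = logger.add(buffer, level="DEBUG", format="{level} | {message}")
    yield buffer
    logger.remove(handler_id)

# test_cache.py
def test_warmup_logs_success(log_output):
    logger.success("Cache warmed successfully")
    assert "SUCCESS | Cache warmed successfully" in log_output.getvalue()
```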
What does logger.remove(0) actually do?
Loguru starts with one default sink — stderr with id 0. Calling logger.remove(0) removes that sink. After this call, no output goes anywhere until you add a new sink with logger.add(). This is the standard pattern when you want complete control over where logs go. Calling logger.remove() without an argument removes all sinks at once, which is useful during test teardown.
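The same pattern works for any sink you add yourself, since logger.add() returns the sink's id:

```python
from loguru import logger

handler_id = logger.add("logs/debug.log", level="DEBUG")
logger.debug("captured in logs/debug.log")
logger.remove(handler_id)  # detach just that sink; the others keep working
```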
Conclusion
Loguru reduces production logging from a configuration ceremony to a design decision. You have learned how to use the pre-configured global logger, add and customise sinks for terminal and rotating files, capture rich exception tracebacks with local variable values, bind per-request context for structured JSON output, and wire the whole thing into a FastAPI middleware. Every project from one-off scripts to microservices benefits from replacing print() and stdlib boilerplate with these patterns.
The natural next step is adding a Loguru sink that fires on ERROR and above and sends a Telegram or Slack notification. Grab the real-life FastAPI example, point its CRITICAL sink at the Telegram Bot API, and you have on-call alerting in under 20 lines. The official Loguru documentation at https://loguru.readthedocs.io/ has a comprehensive recipe section for sink integrations.