Intermediate
You have a Python worker process running in production. Kubernetes sends it a SIGTERM to gracefully shut it down before a deployment. Your process ignores it and gets hard-killed 30 seconds later, losing the job it was halfway through. Sound familiar? Signal handling is how long-running Python processes stay in control of their own fate — and it is built right into the standard library.
The signal module lets you register Python functions as handlers for Unix signals like SIGTERM, SIGINT, SIGHUP, and SIGUSR1. When the operating system delivers a signal to your process, Python interrupts the main thread and calls your handler. No threads needed, no polling loops — the OS does the heavy lifting.
In this article we will cover the most important signals and what they mean, how to register signal handlers with signal.signal(), how to implement graceful shutdown patterns, how to reload configuration on SIGHUP, and how signals interact with threads. By the end you will know how to write Python daemons and workers that handle signals correctly.
Python signal: Quick Example
Here is the simplest possible signal handler — catching Ctrl+C (SIGINT) and printing a friendly message instead of a traceback:
```python
# quick_signal.py
import signal
import sys
import time

def handle_sigint(signum, frame):
    print("\nCaught SIGINT (Ctrl+C). Exiting cleanly.")
    sys.exit(0)

# Register the handler
signal.signal(signal.SIGINT, handle_sigint)

print("Running. Press Ctrl+C to stop.")
while True:
    time.sleep(1)
    print("Working...")
```
Output (after pressing Ctrl+C):
```
Running. Press Ctrl+C to stop.
Working...
Working...
^C
Caught SIGINT (Ctrl+C). Exiting cleanly.
```
The handler function receives two arguments: signum (the signal number) and frame (the current stack frame). You can use signum to write a single handler for multiple signals.
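For example, here is a sketch of one function handling two different signals, using signal.raise_signal (Python 3.8+, POSIX signals assumed) to trigger them from inside the process:

```python
import signal

received = []

def handler(signum, frame):
    # signal.Signals(signum).name turns the raw number back into "SIGUSR1" etc.
    received.append(signal.Signals(signum).name)

# One function registered for two different signals
signal.signal(signal.SIGUSR1, handler)
signal.signal(signal.SIGUSR2, handler)

signal.raise_signal(signal.SIGUSR1)
signal.raise_signal(signal.SIGUSR2)
print(received)  # ['SIGUSR1', 'SIGUSR2']
```

Because the handler runs for both registrations, one function can route each signal to different behavior with a simple dispatch on signum.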
What Are Unix Signals?
Signals are asynchronous notifications sent by the OS or other processes to a running process. Think of them as the software analogue of hardware interrupts: the process stops what it is doing, runs the handler, and resumes. They are used for lifecycle management, inter-process communication, and error reporting.
| Signal | Default Action | Common Use |
|---|---|---|
| SIGINT (2) | Terminate | Ctrl+C in terminal |
| SIGTERM (15) | Terminate | Graceful shutdown request (kill, kubectl) |
| SIGHUP (1) | Terminate | Terminal hangup; reload config in daemons |
| SIGUSR1 (10) | Terminate | User-defined; dump stats, toggle debug |
| SIGUSR2 (12) | Terminate | User-defined; second custom action |
| SIGKILL (9) | Terminate (uncatchable) | Force-kill; cannot be caught or ignored |
| SIGCHLD (17) | Ignore | Child process state change |
Note: SIGKILL and SIGSTOP cannot be caught, blocked, or ignored. If the OS sends SIGKILL, your process dies immediately with no handler call. This is why SIGTERM exists — it is the polite request before the OS resorts to SIGKILL. (The signal numbers in the table are typical for x86-64 Linux; they can differ on other architectures.)
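You can verify the uncatchable part directly: asking Python to install a handler for SIGKILL is rejected by the kernel (a quick sketch, assuming a POSIX platform):

```python
import signal

try:
    signal.signal(signal.SIGKILL, lambda signum, frame: None)
    caught = False
except (OSError, ValueError) as e:
    # The kernel refuses the registration, typically with EINVAL
    caught = True
    print(f"Registration refused: {e}")
```

The same refusal applies to SIGSTOP; those two signals are reserved so an administrator always has a way to stop a runaway process.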
Implementing Graceful Shutdown
The most important signal pattern for production services is graceful shutdown on SIGTERM. Instead of dying instantly, the process finishes its current unit of work, flushes buffers, closes connections, and exits cleanly.
```python
# graceful_shutdown.py
import signal
import time
import threading

class Worker:
    def __init__(self):
        self.running = True
        self.current_job = None
        self._shutdown_event = threading.Event()
        # Register handlers
        signal.signal(signal.SIGTERM, self._handle_shutdown)
        signal.signal(signal.SIGINT, self._handle_shutdown)

    def _handle_shutdown(self, signum, frame):
        sig_name = signal.Signals(signum).name
        print(f"\n[{sig_name}] Shutdown requested. Finishing current job...")
        self.running = False
        self._shutdown_event.set()

    def process_job(self, job_id: int) -> None:
        self.current_job = job_id
        print(f" Processing job #{job_id}...")
        time.sleep(0.5)  # Simulate work
        self.current_job = None
        print(f" Job #{job_id} complete.")

    def run(self):
        print("Worker started. Send SIGTERM or Ctrl+C to stop.")
        job_id = 0
        while self.running:
            job_id += 1
            self.process_job(job_id)
            # Wait a bit between jobs; wake up immediately on shutdown
            self._shutdown_event.wait(timeout=0.2)
        print("Worker exited cleanly. Goodbye.")

if __name__ == "__main__":
    worker = Worker()
    worker.run()
```
Output (after sending SIGTERM with kill -TERM <pid>):
```
Worker started. Send SIGTERM or Ctrl+C to stop.
 Processing job #1...
 Job #1 complete.
 Processing job #2...

[SIGTERM] Shutdown requested. Finishing current job...
 Job #2 complete.
Worker exited cleanly. Goodbye.
```
The key design element is the self.running flag. The signal handler sets it to False and sets the threading.Event. The main loop checks self.running between jobs, so it finishes the current job before exiting. No job is lost, and no file is left half-written.
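The flag pattern is also easy to exercise in a unit test by raising the signal from inside the process. A minimal sketch (the class and job count are illustrative; signal.raise_signal needs Python 3.8+):

```python
import signal

class MiniWorker:
    def __init__(self):
        self.running = True
        self.jobs_done = 0
        signal.signal(signal.SIGTERM, self._stop)

    def _stop(self, signum, frame):
        # Handler only flips the flag; the loop decides when to exit
        self.running = False

    def run(self):
        while self.running:
            self.jobs_done += 1
            if self.jobs_done == 3:
                # Simulate an external `kill -TERM` arriving during the third job
                signal.raise_signal(signal.SIGTERM)
        return self.jobs_done

worker = MiniWorker()
print(worker.run())  # 3 -- the job that received the signal still completes
```

Because raise_signal delivers the signal to the calling thread synchronously, the handler runs before the loop condition is re-checked, which makes this pattern deterministic to test.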

Reloading Configuration on SIGHUP
Historically, SIGHUP was sent when a process's controlling terminal disconnected (“hung up”). For daemons, it became the convention for “reload your configuration without restarting.” This lets you update a config file and apply it to a running service without downtime.
```python
# config_reload.py
import signal
import json
import time
from pathlib import Path

CONFIG_PATH = Path("/tmp/app_config.json")

# Write an initial config file
CONFIG_PATH.write_text(json.dumps({"log_level": "INFO", "max_retries": 3}))

config = {}

def load_config():
    global config
    try:
        config = json.loads(CONFIG_PATH.read_text())
        print(f"Config loaded: {config}")
    except (json.JSONDecodeError, FileNotFoundError) as e:
        print(f"Config load failed: {e}. Keeping previous config.")

def handle_sighup(signum, frame):
    print("\n[SIGHUP] Reloading configuration...")
    load_config()

signal.signal(signal.SIGHUP, handle_sighup)
load_config()

print("Daemon running. Send SIGHUP to reload config.")
for i in range(5):
    print(f" Tick {i+1} -- log_level={config.get('log_level')}")
    time.sleep(1)
```
Output (after editing config and sending kill -HUP <pid>):
```
Config loaded: {'log_level': 'INFO', 'max_retries': 3}
Daemon running. Send SIGHUP to reload config.
 Tick 1 -- log_level=INFO
 Tick 2 -- log_level=INFO

[SIGHUP] Reloading configuration...
Config loaded: {'log_level': 'DEBUG', 'max_retries': 5}
 Tick 3 -- log_level=DEBUG
 Tick 4 -- log_level=DEBUG
```
Important: signal handlers run in the main thread and should be short and fast. The handler itself just calls load_config(), which is synchronous and quick. For expensive reloads (re-establishing database connections, re-reading large files), set a flag in the handler and do the actual work in the main loop instead.
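A sketch of that deferred-work pattern, with the expensive reload stubbed out (the names and config values are illustrative):

```python
import signal

reload_requested = False

def handle_sighup(signum, frame):
    global reload_requested
    reload_requested = True  # cheap: the handler only flips a flag

signal.signal(signal.SIGHUP, handle_sighup)

def expensive_reload():
    # Stand-in for re-reading files, rebuilding connection pools, etc.
    return {"log_level": "DEBUG"}

config = {"log_level": "INFO"}
for tick in range(3):
    # The costly work happens at a safe point in the main loop
    if reload_requested:
        config = expensive_reload()
        reload_requested = False
    if tick == 0:
        signal.raise_signal(signal.SIGHUP)  # simulate `kill -HUP <pid>`

print(config)  # {'log_level': 'DEBUG'}
```

The handler stays tiny and async-safe, and the reload itself runs with the rest of your program's invariants intact.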
Signals and Threads
Python delivers signals only to the main thread of the main interpreter. Worker threads cannot register signal handlers or receive signals directly. This is a critical constraint for multithreaded programs: if the main thread is blocked, for example waiting on a thread join, handler execution may be delayed until the blocking call returns.
```python
# signal_threads.py
import signal
import threading
import time

shutdown_event = threading.Event()

def handle_shutdown(signum, frame):
    print(f"\n[Signal {signum}] Stopping threads...")
    shutdown_event.set()  # Thread-safe way to signal workers

signal.signal(signal.SIGTERM, handle_shutdown)
signal.signal(signal.SIGINT, handle_shutdown)

def worker_thread(thread_id: int):
    while not shutdown_event.is_set():
        print(f" Thread {thread_id} working...")
        shutdown_event.wait(timeout=1.0)  # Interruptible sleep
    print(f" Thread {thread_id} exiting.")

threads = [threading.Thread(target=worker_thread, args=(i,)) for i in range(3)]
for t in threads:
    t.start()

# Main thread: keep the signal handlers alive and wait
while not shutdown_event.is_set():
    time.sleep(0.1)  # Short sleep keeps the main thread responsive to signals

for t in threads:
    t.join()
print("All threads stopped.")
```
Output (after Ctrl+C):
```
 Thread 0 working...
 Thread 1 working...
 Thread 2 working...
^C
[Signal 2] Stopping threads...
 Thread 0 exiting.
 Thread 1 exiting.
 Thread 2 exiting.
All threads stopped.
```
The pattern: the signal handler sets a threading.Event, and worker threads check the event with wait(timeout=...) instead of a bare time.sleep(). The wait() call returns as soon as the event is set, so threads exit promptly instead of sleeping through the shutdown signal.
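The main-thread restriction is enforced explicitly: calling signal.signal() from any other thread raises ValueError. A quick sketch:

```python
import signal
import threading

errors = []

def try_register():
    try:
        # Registration is only allowed in the main thread
        signal.signal(signal.SIGTERM, lambda signum, frame: None)
    except ValueError as e:
        errors.append(str(e))

t = threading.Thread(target=try_register)
t.start()
t.join()
print(errors)  # one ValueError message about the main thread
```

If a worker thread needs to change signal disposition, it has to hand that request to the main thread, for example via a queue.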

Real-Life Example: Self-Diagnosing Worker Daemon
This example combines SIGTERM graceful shutdown, SIGHUP config reload, and SIGUSR1 for dumping runtime stats -- a complete signal handling setup for a production daemon.
```python
# worker_daemon.py
import os
import signal
import time
from datetime import datetime

class Daemon:
    def __init__(self):
        self.running = True
        self.jobs_processed = 0
        self.start_time = datetime.now()
        self.config = {"worker_name": "default", "sleep_interval": 0.5}
        signal.signal(signal.SIGTERM, self._on_shutdown)
        signal.signal(signal.SIGINT, self._on_shutdown)
        signal.signal(signal.SIGHUP, self._on_reload)
        signal.signal(signal.SIGUSR1, self._on_stats)

    def _on_shutdown(self, signum, frame):
        print(f"\n[{signal.Signals(signum).name}] Graceful shutdown initiated.")
        self.running = False

    def _on_reload(self, signum, frame):
        print("\n[SIGHUP] Config reloaded (simulated).")
        self.config["worker_name"] = f"reloaded_{int(time.time())}"

    def _on_stats(self, signum, frame):
        elapsed = int((datetime.now() - self.start_time).total_seconds())
        rate = self.jobs_processed / elapsed if elapsed else 0
        print(f"\n[SIGUSR1] Stats -- jobs:{self.jobs_processed} uptime:{elapsed}s rate:{rate:.1f}/s")

    def run(self):
        print(f"Daemon '{self.config['worker_name']}' started. PID: {os.getpid()}")
        print("Signals: SIGTERM=shutdown, SIGHUP=reload, SIGUSR1=stats")
        while self.running:
            self.jobs_processed += 1
            time.sleep(self.config["sleep_interval"])
        print(f"Daemon stopped. Total jobs: {self.jobs_processed}")

if __name__ == "__main__":
    Daemon().run()
```
Output (interaction with multiple signals):
```
Daemon 'default' started.
Signals: SIGTERM=shutdown, SIGHUP=reload, SIGUSR1=stats

[SIGUSR1] Stats -- jobs:4 uptime:2s rate:2.0/s

[SIGHUP] Config reloaded (simulated).

[SIGUSR1] Stats -- jobs:9 uptime:4s rate:2.2/s

[SIGTERM] Graceful shutdown initiated.
Daemon stopped. Total jobs: 11
```
This three-signal pattern (SIGTERM, SIGHUP, SIGUSR1) covers the lifecycle of most production Python daemons. Add SIGUSR2 for a second diagnostic action -- for example, dumping a thread stack trace or triggering a garbage collection report.
Frequently Asked Questions
Does the signal module work on Windows?
Partially. On Windows, Python only supports a subset of signals: SIGTERM, SIGINT, SIGABRT, SIGFPE, SIGILL, SIGSEGV, and SIGBREAK. Unix-specific signals like SIGHUP, SIGUSR1, and SIGUSR2 are not available. If you are writing cross-platform code, check sys.platform before registering Unix-specific handlers.
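Rather than branching on sys.platform, one portable option is to probe for the signal attribute itself (a sketch; the helper name register_if_available is made up):

```python
import signal

def register_if_available(name: str, handler) -> bool:
    """Register handler for the named signal only if this platform defines it."""
    sig = getattr(signal, name, None)
    if sig is None:
        return False
    signal.signal(sig, handler)
    return True

# SIGHUP exists on POSIX but not on Windows
print(register_if_available("SIGHUP", lambda s, f: None))     # True on POSIX, False on Windows
print(register_if_available("SIGNOTREAL", lambda s, f: None)) # False everywhere
```

This keeps the rest of the code identical on every platform; missing signals simply become no-ops.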
What is safe to do inside a signal handler?
Keep signal handlers minimal. Setting a flag (self.running = False), setting a threading.Event, or calling sys.exit() are all safe. Avoid complex logic, blocking I/O, and acquiring locks. CPython defers Python-level handlers so they run between bytecode instructions, which protects interpreter internals, but a handler can still fire between any two statements of your own code -- and a handler that needs a lock the interrupted code already holds will deadlock.
How do signals work with asyncio?
Use loop.add_signal_handler(signal.SIGTERM, callback) instead of signal.signal() when working with asyncio. The asyncio-aware version integrates with the event loop so your callback runs as part of the event loop rather than interrupting it. Call loop.stop() inside the callback for a clean asyncio shutdown.
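A minimal sketch of that shutdown flow (POSIX only, since loop.add_signal_handler is not implemented on Windows); the external kill -TERM is simulated here with os.kill:

```python
import asyncio
import os
import signal

async def main():
    loop = asyncio.get_running_loop()
    stop = asyncio.Event()
    # Callback runs inside the event loop, not as an interrupting handler
    loop.add_signal_handler(signal.SIGTERM, stop.set)
    # Simulate an external `kill -TERM <pid>` shortly after startup
    loop.call_later(0.05, os.kill, os.getpid(), signal.SIGTERM)
    await stop.wait()
    return "stopped cleanly"

result = asyncio.run(main())
print(result)  # stopped cleanly
```

Waiting on an asyncio.Event (rather than calling loop.stop() abruptly) gives in-flight coroutines a natural point to finish and clean up.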
How do I ignore a signal?
Pass signal.SIG_IGN as the handler: signal.signal(signal.SIGHUP, signal.SIG_IGN). This tells the OS to silently discard the signal. Use this when you want a daemon to survive terminal hangups without reloading. To restore the default OS behavior, pass signal.SIG_DFL instead.
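A short sketch showing both constants in action, using signal.raise_signal (Python 3.8+) to prove the ignored signal really is discarded:

```python
import signal

signal.signal(signal.SIGHUP, signal.SIG_IGN)
signal.raise_signal(signal.SIGHUP)  # would terminate under SIG_DFL; discarded here
print("still alive")

# Restore the OS default (terminate on SIGHUP)
signal.signal(signal.SIGHUP, signal.SIG_DFL)
print(signal.getsignal(signal.SIGHUP) is signal.SIG_DFL)  # True
```

signal.getsignal() is the companion API: it returns the current disposition, which is handy for saving and restoring a handler around a critical section.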
How do I send a signal from Python?
Use os.kill(pid, signal.SIGTERM) to send a signal to another process. To send to the current process itself, use os.kill(os.getpid(), signal.SIGUSR1). This is useful in tests -- you can programmatically trigger a signal handler to verify it works correctly without relying on external tools.
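For instance, a tiny self-test sketch; signal.raise_signal (Python 3.8+) is a convenient alternative to os.kill(os.getpid(), ...):

```python
import os
import signal

calls = []
signal.signal(signal.SIGUSR1, lambda signum, frame: calls.append(signum))

# Send to this very process -- handy in unit tests
os.kill(os.getpid(), signal.SIGUSR1)

# Equivalent shortcut that skips the pid lookup
signal.raise_signal(signal.SIGUSR1)

print(len(calls))  # 2
```

POSIX guarantees that an unblocked signal sent to the calling process is delivered before os.kill() returns, so the assertion-style check is deterministic here.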
Conclusion
The signal module is the foundation of well-behaved Python daemons and long-running workers. We covered registering handlers with signal.signal(), implementing graceful shutdown on SIGTERM, reloading configuration on SIGHUP, using SIGUSR1 for runtime diagnostics, and coordinating signal handlers with worker threads via threading.Event. The daemon example brings all three patterns together in a reusable class structure.
Try adding a SIGUSR2 handler to your next worker that dumps a full stack trace of all running threads using traceback.print_stack() -- it is an invaluable debugging tool for stuck processes in production.
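If you do not want to hand-roll the dump, the standard library's faulthandler module can already write every thread's stack on a chosen signal (POSIX only; the temporary file here just captures the output for demonstration, where you would normally pass sys.stderr):

```python
import faulthandler
import signal
import tempfile

with tempfile.TemporaryFile(mode="w+") as f:
    # On SIGUSR2, dump a traceback for every thread to the given file
    faulthandler.register(signal.SIGUSR2, file=f, all_threads=True)
    signal.raise_signal(signal.SIGUSR2)  # simulate `kill -USR2 <pid>`
    f.seek(0)
    dump = f.read()
    faulthandler.unregister(signal.SIGUSR2)

print("Current thread" in dump)  # True -- the dump lists each thread's stack
```

Because faulthandler's handler is implemented in C and only writes to a file descriptor, it is safe to run even if the process is wedged inside Python-level locks.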
For the full API reference, see the Python signal documentation.