
How To Schedule Python Scripts To Run Automatically With cron and schedule

If you’re building Python applications, sooner or later you’ll want them to run on a schedule without you having to sit at your computer and trigger them manually. Whether it’s a daily backup, sending reports, checking for updates, or cleaning up temporary files, automation is a game-changer. In this tutorial, we’ll explore how to schedule Python scripts to run automatically using both Linux cron and the Python schedule library, plus we’ll touch on Windows Task Scheduler for our Windows users.

Quick Example (TLDR)

Here’s the fastest way to schedule a Python script using the schedule library:


# Install: pip install schedule
import schedule
import time

def my_job():
    # This code runs on schedule
    print("Job executed at", time.strftime("%Y-%m-%d %H:%M:%S"))

# Schedule the job to run every day at 10:00 AM
schedule.every().day.at("10:00").do(my_job)

# Keep the scheduler running
while True:
    schedule.run_pending()
    time.sleep(60)  # Check every minute if a job should run

Output:


Job executed at 2026-03-12 10:00:15
Job executed at 2026-03-13 10:00:12

Why Automate Your Python Scripts?

Let’s think about some real-world scenarios where automation saves you time and money:

  • Backups: Automatically backup your database every night without remembering
  • Reports: Generate and email reports every Monday morning at 9 AM
  • Data Processing: Process new files as they arrive, every hour
  • Monitoring: Check server health every 5 minutes and alert if something’s wrong
  • Cleanup: Delete temporary files older than 30 days, weekly

When you automate these tasks, you free up mental energy to focus on more important things. Plus, computers don’t sleep, so they can work 24/7 without complaining!

Method 1: Linux cron Jobs

If you’re running Linux or macOS, cron is a powerful scheduler that’s already built in. Let’s walk through how to set up a cron job.

Step 1: Create Your Python Script

First, let’s create a simple backup script:


# backup_script.py
import shutil
import os
from datetime import datetime

def backup_database():
    # Get current date and time
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    
    # Source and destination paths
    source = "/home/user/myapp/data.db"
    destination = f"/home/user/backups/data_{timestamp}.db"
    
    try:
        # Copy the database file
        shutil.copy2(source, destination)
        print(f"Backup successful: {destination}")
    except Exception as e:
        # Log errors for troubleshooting
        print(f"Backup failed: {e}")

if __name__ == "__main__":
    backup_database()

Output:


Backup successful: /home/user/backups/data_20260312_100000.db

Step 2: Make It Executable (Optional)

Because the crontab entry in the next step calls the Python interpreter explicitly, this step isn’t strictly required. If you want to run the script directly instead, add a shebang line (#!/usr/bin/env python3) as the first line of the script, then make it executable:


chmod +x /home/user/backup_script.py

Step 3: Edit Your Crontab

Open your cron configuration by running:


crontab -e

Add this line to run the backup every day at 2 AM:


# Run backup script every day at 2:00 AM
0 2 * * * /usr/bin/python3 /home/user/backup_script.py >> /var/log/backup.log 2>&1

The cron syntax is: minute hour day month weekday command

  • 0 2 * * * = Every day at 2:00 AM
  • /usr/bin/python3 = Full path to Python interpreter
  • /home/user/backup_script.py = Your script path
  • >> /var/log/backup.log 2>&1 = Log output to a file
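The five time fields can express most schedules you’ll need. A few more patterns you might adapt (the script paths here are placeholders, not files from this tutorial):

```shell
# Every 15 minutes
*/15 * * * * /usr/bin/python3 /home/user/health_check.py

# Every Monday at 9:00 AM (weekday field: 0 = Sunday, 1 = Monday)
0 9 * * 1 /usr/bin/python3 /home/user/send_report.py

# First day of every month at midnight
0 0 1 * * /usr/bin/python3 /home/user/monthly_cleanup.py
```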

Method 2: Python schedule Library (Recommended for Cross-Platform)

If you need more control or want your scheduler to run within your Python application, the schedule library is excellent:


# job_scheduler.py
import schedule
import time
import logging
from datetime import datetime

# Setup logging to track what happens
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(message)s'
)

def backup_job():
    # This runs every day
    logging.info("Starting daily backup...")
    # Your backup code here
    logging.info("Backup complete!")

def send_report():
    # This runs every Monday at 9 AM
    logging.info("Sending weekly report...")
    # Your email code here
    logging.info("Report sent!")

def cleanup_temp_files():
    # This runs every hour
    logging.info("Cleaning temporary files...")
    # Your cleanup code here
    logging.info("Cleanup complete!")

# Schedule the jobs
schedule.every().day.at("02:00").do(backup_job)
schedule.every().monday.at("09:00").do(send_report)
schedule.every().hour.do(cleanup_temp_files)

# Keep the scheduler running
if __name__ == "__main__":
    print("Scheduler started. Press Ctrl+C to stop.")
    while True:
        schedule.run_pending()
        time.sleep(60)  # Check every minute

Output (simulated run):


2026-03-12 02:00:00 - Starting daily backup...
2026-03-12 02:00:15 - Backup complete!
2026-03-12 09:00:00 - Sending weekly report...
2026-03-12 09:00:25 - Report sent!
2026-03-12 13:00:00 - Cleaning temporary files...
2026-03-12 13:00:05 - Cleanup complete!

Method 3: Windows Task Scheduler

Windows users can use Task Scheduler to run Python scripts automatically:

  1. Press Win + R and type taskschd.msc to open Task Scheduler
  2. Click “Create Basic Task”
  3. Give it a name: “My Python Backup”
  4. Set trigger (when to run): Daily at 2:00 AM
  5. Set action: Start a program
    • Program: C:\Python312\python.exe
    • Arguments: C:\Users\YourName\backup_script.py
  6. Click Finish
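If you prefer the command line, the same task can be created with the built-in schtasks tool (the paths are examples; adjust them to your Python install and script location):

```shell
schtasks /create /tn "My Python Backup" /tr "C:\Python312\python.exe C:\Users\YourName\backup_script.py" /sc daily /st 02:00
```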

Error Handling in Scheduled Tasks

When your script runs automatically, you won’t be there to see errors. That’s why logging is critical:


# robust_scheduler.py
import schedule
import time
import logging
import traceback
from datetime import datetime

# Configure detailed logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler('/var/log/my_jobs.log'),
        logging.StreamHandler()
    ]
)

def safe_job_wrapper(job_func, job_name):
    # Wrapper function that catches and logs errors
    def wrapper():
        try:
            logging.info(f"Starting: {job_name}")
            job_func()
            logging.info(f"Completed: {job_name}")
        except Exception as e:
            # Log the full error trace for debugging
            logging.error(f"Failed: {job_name}")
            logging.error(traceback.format_exc())
    return wrapper

def my_risky_job():
    # This might fail sometimes
    result = 10 / 1  # Would be 10 / 0 to cause error
    print(f"Calculation result: {result}")

# Schedule with error handling
safe_wrapper = safe_job_wrapper(my_risky_job, "my_risky_job")
schedule.every().hour.do(safe_wrapper)

if __name__ == "__main__":
    while True:
        schedule.run_pending()
        time.sleep(60)

Log Output (when error occurs):


2026-03-12 15:00:00 - INFO - Starting: my_risky_job
2026-03-12 15:00:00 - ERROR - Failed: my_risky_job
2026-03-12 15:00:00 - ERROR - Traceback (most recent call last):
  File "robust_scheduler.py", line 15, in wrapper
    job_func()
  File "robust_scheduler.py", line 28, in my_risky_job
    result = 10 / 0
ZeroDivisionError: division by zero

Real-Life Example: Automated Daily Backup Script

Let’s build a complete backup system that you can use right away:


# daily_backup.py
import schedule
import time
import os
import shutil
import logging
from datetime import datetime, timedelta

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s',
    filename='/var/log/backup.log'
)

class BackupManager:
    def __init__(self, source_dir, backup_dir, keep_days=7):
        # Initialize backup settings
        self.source_dir = source_dir
        self.backup_dir = backup_dir
        self.keep_days = keep_days
        
        # Create backup directory if it doesn't exist
        os.makedirs(backup_dir, exist_ok=True)
    
    def backup_files(self):
        # Create timestamped backup
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        backup_path = os.path.join(self.backup_dir, f"backup_{timestamp}.tar.gz")
        
        try:
            # Compress the entire directory
            shutil.make_archive(
                backup_path.replace('.tar.gz', ''),
                'gztar',
                self.source_dir
            )
            logging.info(f"Backup created: {backup_path}")
            
            # Clean old backups
            self.cleanup_old_backups()
            
        except Exception as e:
            logging.error(f"Backup failed: {e}")
    
    def cleanup_old_backups(self):
        # Remove backups older than keep_days
        cutoff_date = datetime.now() - timedelta(days=self.keep_days)
        
        for filename in os.listdir(self.backup_dir):
            # Only touch files this job created, so nothing else
            # in the directory gets deleted by accident
            if not filename.startswith("backup_"):
                continue
            filepath = os.path.join(self.backup_dir, filename)
            file_time = datetime.fromtimestamp(os.path.getmtime(filepath))
            
            if file_time < cutoff_date:
                try:
                    os.remove(filepath)
                    logging.info(f"Deleted old backup: {filename}")
                except Exception as e:
                    logging.error(f"Failed to delete {filename}: {e}")

# Create backup manager instance
backup = BackupManager("/home/user/myapp", "/home/user/backups")

# Schedule daily backup at 2 AM
schedule.every().day.at("02:00").do(backup.backup_files)

# Keep scheduler running
if __name__ == "__main__":
    logging.info("Backup scheduler started")
    while True:
        schedule.run_pending()
        time.sleep(60)

Output:


2026-03-12 02:00:00 - INFO - Backup created: /home/user/backups/backup_20260312_020000.tar.gz
2026-03-13 02:00:00 - INFO - Backup created: /home/user/backups/backup_20260313_020000.tar.gz
2026-03-15 02:00:00 - INFO - Deleted old backup: backup_20260305_020000.tar.gz

FAQ

Q1: How do I know if my scheduled job ran?

Use logging! Redirect output to a log file as shown in our examples. Check the log file to verify execution.

Q2: Can I schedule a job for every 30 minutes?

Yes! With the schedule library: schedule.every(30).minutes.do(my_job). With cron: */30 * * * * command

Q3: What's the difference between cron and the schedule library?

Cron is system-level and always running; the schedule library runs inside your Python process and stops when that process stops. Cron is better for production servers; schedule is better for cross-platform code, local testing, and development.

Q4: How do I stop a scheduled task?

For cron: crontab -e and delete the line. For schedule: Stop the Python process (Ctrl+C). For Windows: Open Task Scheduler and disable the task.

Q5: What if my Python script takes longer to run than scheduled?

With the schedule library, jobs run one at a time in the scheduler’s thread, so they never overlap, but a slow job delays everything scheduled after it. With cron, a new instance starts on time regardless, so overlapping runs are possible; guard against that with a lock file, or move heavy workloads to a task queue like Celery.
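If you do need a slow job to run without holding up the rest of the schedule, one common pattern is a small wrapper that launches the job in its own thread (a sketch; run_threaded is our own helper, not part of the schedule API):

```python
import threading

def run_threaded(job_func):
    # Start the job in a background thread so the scheduler
    # loop can keep checking the other jobs on time
    thread = threading.Thread(target=job_func)
    thread.start()
    return thread
```

You would register it with something like schedule.every().hour.do(run_threaded, my_slow_job). Keep in mind this re-allows overlapping runs of the same job, so the job must be safe to run concurrently.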

Conclusion

Scheduling Python scripts is one of the most practical skills you can develop. Whether you use cron for server environments, the schedule library for cross-platform applications, or Windows Task Scheduler for local automation, you now have the tools to build reliable automated systems. Start small with a simple daily task, add logging to track what happens, and expand from there. Your future self will thank you for automating those repetitive tasks!
