Intermediate

You have a remote server that needs daily log files collected, configuration files deployed, or database backups transferred. You could write shell scripts, but then you have no Python integration, no error handling, and no ability to build on the results programmatically. The paramiko library lets you do all of this in pure Python — SSH into a remote host, run commands, capture output, and transfer files with SFTP — without leaving your Python environment.

Paramiko is a pure-Python implementation of the SSH2 protocol. Install it with pip install paramiko. It provides two main classes: SSHClient for executing remote commands and SFTPClient for file transfers. Both support password authentication and key-based authentication (RSA, Ed25519, ECDSA). For testing without a real server, you can use a local Docker container or SSH into localhost if OpenSSH is installed.

In this article, you will learn how to connect to SSH servers with password and key authentication, run remote commands and capture stdout/stderr, transfer files with SFTP (upload, download, list, delete), handle common SSH errors defensively, and build a real-life deployment script that uses all of these features together.

Quick Example: Run a Remote Command

# quick_ssh.py
import os
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Connect -- replace with your actual server details.
# Note: paramiko does not expand ~, so expand the key path yourself.
client.connect(
    hostname="your-server.example.com",
    username="ubuntu",
    key_filename=os.path.expanduser("~/.ssh/id_rsa")
)

stdin, stdout, stderr = client.exec_command("uname -a && df -h /")
output = stdout.read().decode()
error = stderr.read().decode()
print(output)

client.close()

Output:

Linux my-server 5.15.0-1034-aws #38-Ubuntu SMP Mon Apr 3 16:14:06 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       30G   12G   17G  42% /

The three streams returned by exec_command work like file objects — call .read() to get all output, or iterate line by line for long-running commands. Always call client.close() when done, or better yet, use a with statement to ensure cleanup.
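For long-running commands, iterating line by line is often better than buffering everything with .read(). The helper below is a hypothetical sketch, not part of paramiko; because it works on any file-like object, it can be demonstrated here with io.StringIO standing in for the remote stdout:

```python
import io

def stream_lines(stream):
    """Yield lines from a file-like object, stripped of trailing newlines.
    Works on paramiko's stdout/stderr channel files or any file-like object."""
    for line in stream:
        yield line.rstrip("\n")

# Demo with an in-memory stream standing in for paramiko's stdout
fake_stdout = io.StringIO("Linux web-01\n/dev/xvda1  30G\n")
for line in stream_lines(fake_stdout):
    print(line)
```

With a real connection you would pass the stdout object returned by exec_command instead of the StringIO stand-in.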

What Is Paramiko and Why Use It?

Paramiko is a pure-Python SSH library that lets your programs speak the SSH2 protocol directly, without calling the system ssh binary. It is the SSH library underneath Fabric and is available as a connection backend in tools such as Ansible. The pure-Python implementation means it works everywhere Python works, with no external binary dependencies.

Feature             | subprocess + ssh         | paramiko
Programmatic output | Parse text output        | Structured streams
SFTP support        | External scp/sftp binary | Built-in SFTPClient
Key management      | System ssh-agent         | Full Python API
Multiple commands   | Multiple subprocesses    | Persistent connection
Error handling      | Parse exit codes         | Python exceptions
Portability         | Requires ssh binary      | Pure Python, any OS
subprocess.run(['ssh', ...]) works. Until it doesn't. paramiko gives you the socket.

Connecting to SSH Servers

Paramiko supports three authentication methods: password, private key file, and private key object. In production, always prefer key-based authentication — passwords in code are a security risk.

# ssh_connect.py
import paramiko
import os

def create_ssh_client(hostname: str, username: str,
                      key_path: str | None = None,
                      password: str | None = None) -> paramiko.SSHClient:
    """Create and return a connected SSHClient."""
    client = paramiko.SSHClient()
    # AutoAddPolicy: automatically accept the host key on first connect
    # For production, use RejectPolicy and pre-populate known_hosts
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    connect_kwargs = {
        "hostname": hostname,
        "username": username,
        "port": 22,
        "timeout": 10,
    }

    if key_path:
        # Expand ~ to home directory
        key_path = os.path.expanduser(key_path)
        connect_kwargs["key_filename"] = key_path
    elif password:
        connect_kwargs["password"] = password
    else:
        # Use SSH agent or default keys (~/.ssh/id_rsa, ~/.ssh/id_ed25519)
        connect_kwargs["allow_agent"] = True
        connect_kwargs["look_for_keys"] = True

    try:
        client.connect(**connect_kwargs)
        print(f"Connected to {username}@{hostname}")
        return client
    except paramiko.AuthenticationException:
        raise RuntimeError(f"Authentication failed for {username}@{hostname}")
    except paramiko.SSHException as e:
        raise RuntimeError(f"SSH error: {e}")
    except Exception as e:
        raise RuntimeError(f"Cannot connect to {hostname}: {e}")

# Test with a real server:
# client = create_ssh_client("myserver.example.com", "ubuntu", key_path="~/.ssh/id_rsa")
print("Connection helper defined -- replace hostname to test")

Output:

Connection helper defined -- replace hostname to test

The set_missing_host_key_policy(paramiko.AutoAddPolicy()) call automatically accepts unknown host keys on first connection. This is convenient but less secure than using RejectPolicy with a pre-populated known_hosts file. For production systems, load known_hosts with client.load_system_host_keys() and use RejectPolicy.
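To see what RejectPolicy checks against, it helps to know the known_hosts format: one entry per line with a hostname field, a key type, and a base64-encoded key. A small stdlib-only parser (an illustrative sketch; the key data below is fake) shows the structure:

```python
def parse_known_hosts_line(line: str):
    """Split one known_hosts entry into (hostnames, key_type, base64_key).
    The hostname field may be a comma-separated list, or hashed (|1|...)."""
    parts = line.strip().split(None, 2)
    if len(parts) < 3:
        return None  # comment, blank, or malformed line
    hosts, key_type, key = parts
    return hosts.split(","), key_type, key

# Fake key data for illustration only
entry = "example.com,203.0.113.5 ssh-ed25519 AAAAC3FakeKeyDataForIllustration"
hosts, key_type, key = parse_known_hosts_line(entry)
print(hosts, key_type)
```

In practice you would not parse this file yourself: client.load_system_host_keys() reads it for you, and RejectPolicy then refuses any host whose key is not present.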

Running Remote Commands

The exec_command() method runs a command in a new channel. The call itself returns immediately with three stream objects; the command runs asynchronously on the server, and .read() blocks until the command has produced all of its output.

# remote_commands.py
import paramiko

def run_command(client: paramiko.SSHClient, command: str,
                timeout: int = 30) -> tuple[str, str, int]:
    """Run a command and return (stdout, stderr, exit_code)."""
    stdin, stdout, stderr = client.exec_command(command, timeout=timeout)
    stdout_text = stdout.read().decode("utf-8").strip()
    stderr_text = stderr.read().decode("utf-8").strip()
    exit_code = stdout.channel.recv_exit_status()
    return stdout_text, stderr_text, exit_code

def run_commands_safely(client: paramiko.SSHClient, commands: list[str]) -> list[dict]:
    """Run multiple commands and collect results."""
    results = []
    for cmd in commands:
        out, err, code = run_command(client, cmd)
        status = "OK" if code == 0 else f"FAILED (exit {code})"
        results.append({"command": cmd, "output": out, "error": err,
                         "exit_code": code, "status": status})
        if code != 0:
            print(f"  WARNING: '{cmd}' {status}")
            if err:
                print(f"    stderr: {err}")
    return results

# Simulate output without a live server
print("Example results structure:")
example = [
    {"command": "hostname", "output": "web-server-01", "error": "", "exit_code": 0, "status": "OK"},
    {"command": "free -m", "output": "              total  used  free\nMem: 4096  2048  2048", "error": "", "exit_code": 0, "status": "OK"},
    {"command": "ls /nonexistent", "output": "", "error": "ls: cannot access '/nonexistent'", "exit_code": 2, "status": "FAILED (exit 2)"}
]
for r in example:
    print(f"  [{r['status']}] {r['command']}: {r['output'][:40] or r['error'][:40]}")

Output:

Example results structure:
  [OK] hostname: web-server-01
  [OK] free -m:               total  used  free
  [FAILED (exit 2)] ls /nonexistent: ls: cannot access '/nonexistent'

Always check the exit code — exec_command does not raise an exception if the remote command fails. Call stdout.channel.recv_exit_status() after reading all output to get the command’s return code. A non-zero exit code indicates failure, just like in a shell script.
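Since exec_command never raises on remote failure, it is convenient to convert exit codes into exceptions at one choke point. A hypothetical helper that pairs with a run_command-style function returning (stdout, stderr, exit_code):

```python
def ensure_success(command: str, stdout: str, stderr: str, exit_code: int) -> str:
    """Raise if a remote command failed; return its stdout otherwise."""
    if exit_code != 0:
        raise RuntimeError(f"'{command}' exited {exit_code}: {stderr or stdout}")
    return stdout

# Simulated results, as if returned by run_command()
print(ensure_success("hostname", "web-server-01", "", 0))
try:
    ensure_success("ls /nonexistent", "", "ls: cannot access '/nonexistent'", 2)
except RuntimeError as exc:
    print(f"caught: {exc}")
```

Calling ensure_success(cmd, *run_command(client, cmd)) then gives you shell-script-like fail-fast behavior with ordinary Python exception handling.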

stdout and stderr are separate streams. Reading them in the wrong order can deadlock you.

File Transfers with SFTPClient

Paramiko’s SFTPClient provides a file-system-like API for the remote server. You can upload files, download files, list directories, create directories, and delete files — all over the same SSH connection.

# sftp_operations.py
import paramiko
import os
from pathlib import Path

def sftp_operations_demo(client: paramiko.SSHClient):
    """Demonstrate SFTP operations."""
    sftp = client.open_sftp()

    try:
        # List remote directory
        files = sftp.listdir("/tmp")
        print(f"Files in /tmp: {files[:5]}")

        # Upload a local file to the remote server
        local_file = "/tmp/deploy_config.json"
        remote_file = "/tmp/deploy_config.json"

        # Create a test file locally
        with open(local_file, "w") as f:
            f.write('{"version": "1.2.0", "env": "production"}')

        sftp.put(local_file, remote_file)
        print(f"Uploaded {local_file} -> {remote_file}")

        # Check remote file size
        stat = sftp.stat(remote_file)
        print(f"Remote file size: {stat.st_size} bytes")

        # Download it back
        downloaded = "/tmp/downloaded_config.json"
        sftp.get(remote_file, downloaded)
        print(f"Downloaded to {downloaded}")
        with open(downloaded) as f:
            print(f"Content: {f.read()}")

        # Create a remote directory
        try:
            sftp.mkdir("/tmp/deploy_backup")
            print("Created /tmp/deploy_backup")
        except OSError:
            print("/tmp/deploy_backup already exists")

        # Delete the remote file
        sftp.remove(remote_file)
        print(f"Deleted {remote_file}")

    finally:
        sftp.close()

# Simulate without live server
print("SFTP operations:")
print("  sftp.put(local, remote)  -- upload file")
print("  sftp.get(remote, local)  -- download file")
print("  sftp.listdir(path)       -- list directory")
print("  sftp.mkdir(path)         -- create directory")
print("  sftp.stat(path)          -- get file metadata")
print("  sftp.remove(path)        -- delete file")
print("  sftp.rename(old, new)    -- move/rename file")

Output:

SFTP operations:
  sftp.put(local, remote)  -- upload file
  sftp.get(remote, local)  -- download file
  sftp.listdir(path)       -- list directory
  sftp.mkdir(path)         -- create directory
  sftp.stat(path)          -- get file metadata
  sftp.remove(path)        -- delete file
  sftp.rename(old, new)    -- move/rename file

For large file transfers, use the callback parameter of sftp.put() and sftp.get() to track progress: sftp.put(local, remote, callback=lambda sent, total: print(f"{sent}/{total}")). Always use the SFTP client inside a try/finally block to ensure sftp.close() is called even if an exception occurs.
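The callback is plain Python, so its logic can be sketched and tested without a server. Below, make_progress_callback is a hypothetical helper whose return value you would pass to sftp.put(..., callback=...); here we simulate the calls paramiko would make during a transfer:

```python
def make_progress_callback(label: str, report=print):
    """Build a callback suitable for sftp.put/get(callback=...).
    paramiko invokes it as callback(bytes_transferred, total_bytes)."""
    state = {"pct": -1}

    def callback(transferred: int, total: int):
        pct = int(transferred * 100 / total) if total else 100
        if pct != state["pct"]:  # only report when the percentage changes
            state["pct"] = pct
            report(f"{label}: {pct}% ({transferred}/{total} bytes)")

    return callback

# Simulate the calls paramiko would make during a 1000-byte transfer
cb = make_progress_callback("upload")
for sent in (250, 500, 1000):
    cb(sent, 1000)
```

Swapping report=print for a function that updates a tqdm bar gives you a proper terminal progress bar with no change to the transfer code.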

Real-Life Example: Automated Deployment Script

# deploy.py
import paramiko
import json
import os
from datetime import datetime

class Deployer:
    """Deploy application files to a remote server via SSH/SFTP."""

    def __init__(self, hostname: str, username: str, key_path: str):
        self.hostname = hostname
        self.username = username
        self.key_path = os.path.expanduser(key_path)
        self.client = None
        self.sftp = None

    def connect(self):
        self.client = paramiko.SSHClient()
        self.client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        self.client.connect(self.hostname, username=self.username,
                            key_filename=self.key_path, timeout=15)
        self.sftp = self.client.open_sftp()
        print(f"Connected to {self.username}@{self.hostname}")

    def run(self, command: str) -> tuple[str, int]:
        _, stdout, stderr = self.client.exec_command(command, timeout=60)
        out = stdout.read().decode().strip()
        err = stderr.read().decode().strip()
        code = stdout.channel.recv_exit_status()
        if code != 0 and err:
            print(f"  stderr: {err[:200]}")
        return out, code

    def upload(self, local_path: str, remote_path: str):
        self.sftp.put(local_path, remote_path)
        print(f"  Uploaded: {os.path.basename(local_path)} -> {remote_path}")

    def deploy(self, local_dir: str, remote_dir: str, app_name: str):
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        backup_dir = f"/tmp/{app_name}_backup_{timestamp}"

        print(f"\n=== Deploying {app_name} ===")

        # Step 1: Backup existing deployment
        out, code = self.run(f"test -d {remote_dir} && cp -r {remote_dir} {backup_dir} && echo backed_up || echo fresh_deploy")
        print(f"Backup: {out}")

        # Step 2: Create remote directory
        self.run(f"mkdir -p {remote_dir}")

        # Step 3: Upload files
        print("Uploading files:")
        for filename in os.listdir(local_dir):
            local_path = os.path.join(local_dir, filename)
            remote_path = f"{remote_dir}/{filename}"
            if os.path.isfile(local_path):
                self.upload(local_path, remote_path)

        # Step 4: Verify deployment
        out, code = self.run(f"ls -la {remote_dir}")
        print(f"Remote directory:\n{out}")

        print(f"=== Deployment complete ===\n")

    def close(self):
        if self.sftp:
            self.sftp.close()
        if self.client:
            self.client.close()

# Usage (replace with real server details):
print("Deployer class ready.")
print("Usage:")
print("  deployer = Deployer('myserver.com', 'ubuntu', '~/.ssh/id_rsa')")
print("  deployer.connect()")
print("  deployer.deploy('./dist/', '/var/www/myapp', 'myapp')")
print("  deployer.close()")

Output:

Deployer class ready.
Usage:
  deployer = Deployer('myserver.com', 'ubuntu', '~/.ssh/id_rsa')
  deployer.connect()
  deployer.deploy('./dist/', '/var/www/myapp', 'myapp')
  deployer.close()

This deployer pattern — connect once, run multiple commands and file transfers over the same connection — is much more efficient than opening a new SSH connection for each operation. Extend it by adding application restart logic (self.run("sudo systemctl restart myapp")), health checks, and rollback capability using the backup directory created in step 1.
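As a sketch of the rollback extension (a hypothetical helper, not part of the Deployer class above), the backup directory from step 1 can be restored with two shell commands, each of which you would feed to self.run() and abort on a non-zero exit code:

```python
def rollback_commands(remote_dir: str, backup_dir: str) -> list[str]:
    """Shell commands that restore the backup taken in deploy() step 1."""
    return [
        f"rm -rf {remote_dir}",
        f"cp -r {backup_dir} {remote_dir}",
    ]

for cmd in rollback_commands("/var/www/myapp", "/tmp/myapp_backup_20240101_120000"):
    print(cmd)
```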

One SSH connection. Multiple commands. Zero subprocess.run(['ssh', ...]) calls.

Frequently Asked Questions

Is AutoAddPolicy safe to use?

AutoAddPolicy automatically accepts any host key on first connection, making it vulnerable to man-in-the-middle attacks. For interactive development and trusted networks it is acceptable. For production, use paramiko.RejectPolicy() combined with client.load_system_host_keys() (loads from ~/.ssh/known_hosts) or client.load_host_keys(path) to use a specific known_hosts file. Only connect to hosts whose keys are already in the file.

Why does exec_command sometimes hang?

Deadlocks happen when a command produces more output than fits in the channel buffer and you read stdout and stderr sequentially: the remote process blocks waiting to write one stream while you wait to finish reading the other. Prevent this by setting a timeout in exec_command, reading both streams concurrently from separate threads, or merging them with Channel.set_combine_stderr(True) (open the channel yourself via transport.open_session() so you can enable it before running the command). Always set a timeout so a stuck command cannot hang your script indefinitely.
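The threaded approach needs only the stdlib to sketch. The helper below (an illustrative sketch) drains any two file-like streams concurrently; with paramiko you would pass the stdout and stderr returned by exec_command, but here io.StringIO stands in for them:

```python
import io
import threading

def read_both(stdout, stderr):
    """Read two file-like streams concurrently so neither can stall the other."""
    results = {}

    def drain(name, stream):
        results[name] = stream.read()

    t = threading.Thread(target=drain, args=("err", stderr))
    t.start()
    drain("out", stdout)  # read stdout on the current thread
    t.join()
    return results["out"], results["err"]

# Demonstrated with in-memory streams standing in for the SSH channel files
out, err = read_both(io.StringIO("stdout data"), io.StringIO("stderr data"))
print(out, "|", err)
```

Because stderr is drained on its own thread, the remote process can never block on a full stderr buffer while you are still reading stdout.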

How do I run sudo commands with paramiko?

Use exec_command("sudo -S command", get_pty=True) and write the password to stdin: stdin.write(password + "\n"); stdin.flush(). A more reliable approach is to configure passwordless sudo for the deployment user via /etc/sudoers with the specific commands they need. Storing passwords in code or passing them via stdin is a security risk in production environments.

How does paramiko compare to Fabric?

Fabric is built on top of paramiko and provides a higher-level API specifically for deployment tasks — running commands on multiple hosts, connecting with roles, managing connections automatically. If you are doing deployment automation, Fabric is worth evaluating. Paramiko gives you lower-level control and is better when you need to integrate SSH into a larger Python program that is not specifically about deployment.

How do I show upload progress?

Pass a callback function to sftp.put(): sftp.put(local, remote, callback=lambda bytes_transferred, total_bytes: print(f"{bytes_transferred}/{total_bytes}")). The callback receives the bytes transferred so far and the total file size. For a terminal progress bar, use tqdm: wrap the callback to update a tqdm progress bar instance.

Conclusion

Paramiko gives Python programs direct SSH and SFTP capabilities without spawning system processes. You have seen how to connect with key and password authentication using SSHClient, execute remote commands and read stdout/stderr/exit codes with exec_command, transfer files with SFTPClient.put() and get(), and structure a real deployment script that uses a persistent connection for efficiency.

As a next step, explore the fabric library (pip install fabric) which builds on paramiko with a higher-level deployment API, or use paramiko’s Transport class directly for advanced use cases like port forwarding and custom channels. Official documentation is at paramiko.org.