Beginner
Every application eventually needs configuration that changes between environments — API keys for development vs production, database URLs that are different on your laptop and the server, secret tokens that absolutely cannot be checked into version control. The naive solution is to hardcode these values directly in your Python files. The problem with that approach is not just security — it is also inflexibility. Every time you move the code to a different machine or environment, you have to edit the source file, which means tracking down which file, remembering the format, and hoping you do not accidentally commit the change.
python-dotenv solves this cleanly. You create a .env file in your project root containing key-value pairs, add .env to your .gitignore, and call load_dotenv() once at startup. From that point, all your config values are available through os.environ or os.getenv() exactly like regular environment variables, but loaded from your file instead of the shell. The library is a single pip install python-dotenv away and has no dependencies.
This article covers creating and formatting .env files, loading them with load_dotenv(), reading values with os.getenv() and its defaults, handling multiple environments (development, staging, production), override behavior, and a real-world configuration module pattern you can drop into any Python project. By the end, you will have a repeatable, secure configuration workflow that works for scripts, Flask apps, FastAPI services, and data pipelines alike.
Quick Example: Loading a .env File
First, install python-dotenv with pip install python-dotenv. Then create a .env file and a Python script in the same directory.
# .env (create this file in your project root)
DATABASE_URL=postgresql://localhost/myapp_dev
API_KEY=dev-key-abc123
DEBUG=true
MAX_CONNECTIONS=10
# quick_dotenv.py
import os
from dotenv import load_dotenv
# Load variables from .env into os.environ
load_dotenv()
# Read them like any environment variable
db_url = os.getenv("DATABASE_URL")
api_key = os.getenv("API_KEY")
debug = os.getenv("DEBUG") == "true"
max_conn = int(os.getenv("MAX_CONNECTIONS", "5"))
print(f"DB URL: {db_url}")
print(f"API Key: {api_key}")
print(f"Debug mode: {debug}")
print(f"Max connections: {max_conn}")
Output:
DB URL: postgresql://localhost/myapp_dev
API Key: dev-key-abc123
Debug mode: True
Max connections: 10
Three things to notice here. First, load_dotenv() is called before any os.getenv() call — that one line loads the entire file. Second, values are always strings, so you need to convert integers with int() and booleans by comparing to the string "true". Third, os.getenv("MAX_CONNECTIONS", "5") shows the default fallback pattern — if the key is missing from the environment, you get "5" instead of None.
What Is python-dotenv and How Does It Work?
python-dotenv reads key-value pairs from a .env file and adds them to os.environ, which is the standard Python dictionary mapping for environment variables. Once loaded, your code accesses config values through os.getenv() the same way it would access variables set in the shell or in your deployment platform. This means your application code does not need to know anything about dotenv specifically — it just reads from os.environ.
| Approach | How it works | Risk |
|---|---|---|
| Hardcoded values | Values in source code | Committed to version control, no per-environment flexibility |
| Shell environment variables | Set in shell before running | Easy to forget, not portable, not stored with project |
| .env + python-dotenv | Read from .env file at startup | None if .env is gitignored — portable, version-controlled template |
| Config management service | AWS Secrets Manager, Vault, etc. | Infrastructure dependency, more complex setup |
The critical workflow is: your actual .env file with real secrets is in .gitignore and never committed. You commit a .env.example file with the same keys but placeholder values. New team members copy .env.example to .env and fill in their own values. This is the standard approach used across Flask, FastAPI, Django, and most modern Python project templates.
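For instance, a committed .env.example matching the quick example above might look like this (placeholder values only, safe to commit):

```
# .env.example -- committed template; copy to .env and fill in real values
DATABASE_URL=postgresql://localhost/your_db_name
API_KEY=your-api-key-here
DEBUG=false
MAX_CONNECTIONS=5
```

A new teammate then runs cp .env.example .env and supplies their own values.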
.env File Format
The .env file format is simple: one KEY=value pair per line. Here are the main formatting rules python-dotenv handles.
# .env -- complete format reference
# Simple string values
APP_NAME=MyPythonApp
ENVIRONMENT=development
# Values with spaces -- wrap in quotes
WELCOME_MESSAGE="Hello, welcome to the app"
SERVER_DESCRIPTION='Python API server v2'
# Values with percent signs or format strings -- quote them
LOG_FORMAT="%(asctime)s %(name)s %(levelname)s %(message)s"
# Multiline values -- \n inside double quotes becomes a real newline
PRIVATE_KEY="line one\nline two"
# Numbers -- still stored as strings, convert in Python
PORT=8000
WORKER_COUNT=4
TIMEOUT=30.0
# Booleans -- stored as strings "true"/"false"
DEBUG=true
ENABLE_CACHE=false
# Empty value -- results in empty string
OPTIONAL_FEATURE=
# Comments -- lines starting with # are ignored
# DATABASE_URL=postgresql://localhost/old_db (commented out)
# Referencing other variables (variable expansion)
BASE_URL=http://localhost
API_ENDPOINT=${BASE_URL}/api/v1
# Export syntax also works (for shell compatibility)
export SECRET_KEY=abc123def456
The most important rule: values do not need quotes unless they contain spaces or special characters. Quotes are stripped from the value when loaded — so NAME="alice" becomes the string alice, not "alice".
load_dotenv() Options
The load_dotenv() function has several useful parameters for controlling how and which file gets loaded.
# dotenv_options.py
import os
from dotenv import load_dotenv
# Default: loads .env in current directory or any parent directory
load_dotenv()
# Load from a specific path
load_dotenv(dotenv_path="/path/to/config/.env.production")
# Override existing environment variables (default is False -- existing vars win)
load_dotenv(override=True)
# Load a .env file and return a dict instead of modifying os.environ
from dotenv import dotenv_values
config = dotenv_values(".env")
print(config) # OrderedDict with all key-value pairs
print(type(config["PORT"])) # str -- always strings
Output:
OrderedDict([('APP_NAME', 'MyPythonApp'), ('PORT', '8000'), ...])
<class 'str'>
The override=False default is important: if DATABASE_URL is already set in the shell environment (e.g., by your deployment platform), load_dotenv() will NOT overwrite it. This means your production environment can set real values, and the .env file provides development defaults. Set override=True only when you specifically want the file to take precedence over the shell.
Managing Multiple Environments
A common pattern is to have separate .env files for development, staging, and production, and load the correct one based on an environment variable or a naming convention.
# multi_env.py
import os
from dotenv import load_dotenv
from pathlib import Path
def load_environment_config():
    """Load the correct .env file based on APP_ENV."""
    env = os.getenv("APP_ENV", "development")
    env_file = Path(f".env.{env}")
    if env_file.exists():
        load_dotenv(dotenv_path=env_file)
        print(f"Loaded config from {env_file}")
    else:
        # Fall back to .env if the environment-specific file is not found
        load_dotenv()
        print(f"Loaded config from .env (no .env.{env} found)")
    return env
current_env = load_environment_config()
print(f"Running in: {current_env}")
print(f"Database: {os.getenv('DATABASE_URL', 'not set')}")
print(f"Debug: {os.getenv('DEBUG', 'false')}")
Your project structure would then look like this:
myproject/
    .env              # gitignored -- your local defaults
    .env.development  # gitignored -- dev-specific values
    .env.staging      # gitignored -- staging values
    .env.production   # gitignored -- DO NOT COMMIT
    .env.example      # committed -- template with placeholder values
    app.py
Run with different environments using: APP_ENV=staging python multi_env.py. This pattern mirrors what frameworks like Flask do (Flask historically used FLASK_ENV to select the environment; newer versions favor FLASK_DEBUG) and works well with Docker Compose, which supports an --env-file flag for injecting the right file at container startup.
Real-Life Example: Application Config Module
Here is a production-ready configuration module pattern. Instead of scattering os.getenv() calls throughout your code, centralize all config loading in one module with type conversion and validation.
# config.py -- drop this in any Python project
import os
from dotenv import load_dotenv
from dataclasses import dataclass
# Load .env at module import time -- happens once per process
load_dotenv()
@dataclass
class DatabaseConfig:
    url: str
    pool_size: int
    timeout: float

@dataclass
class AppConfig:
    name: str
    debug: bool
    port: int
    secret_key: str
    database: DatabaseConfig
    allowed_origins: list

def _require(key: str) -> str:
    """Get a required env var -- raise if missing."""
    value = os.getenv(key)
    if value is None:
        raise EnvironmentError(
            f"Required environment variable '{key}' is not set. Check your .env file."
        )
    return value

def load_config() -> AppConfig:
    return AppConfig(
        name=os.getenv("APP_NAME", "MyApp"),
        debug=os.getenv("DEBUG", "false").lower() == "true",
        port=int(os.getenv("PORT", "8000")),
        secret_key=_require("SECRET_KEY"),
        database=DatabaseConfig(
            url=_require("DATABASE_URL"),
            pool_size=int(os.getenv("DB_POOL_SIZE", "5")),
            timeout=float(os.getenv("DB_TIMEOUT", "30.0")),
        ),
        allowed_origins=os.getenv("ALLOWED_ORIGINS", "").split(","),
    )
# Usage: import config and access typed attributes
if __name__ == "__main__":
    # Create a minimal .env for demonstration
    import pathlib
    pathlib.Path(".env").write_text(
        "APP_NAME=DemoApp\nDEBUG=true\nPORT=5000\n"
        "SECRET_KEY=demo-secret-xyz\nDATABASE_URL=sqlite:///demo.db\n"
        "ALLOWED_ORIGINS=http://localhost:3000,http://localhost:5173\n"
    )
    # Reload: the load_dotenv() at the top of the module ran before
    # this demo file existed, so load again and let the file win
    load_dotenv(override=True)
    cfg = load_config()
    print(f"App: {cfg.name}")
    print(f"Debug: {cfg.debug}")
    print(f"Port: {cfg.port}")
    print(f"DB URL: {cfg.database.url}")
    print(f"Pool size: {cfg.database.pool_size}")
    print(f"Allowed origins: {cfg.allowed_origins}")
Output:
App: DemoApp
Debug: True
Port: 5000
DB URL: sqlite:///demo.db
Pool size: 5
Allowed origins: ['http://localhost:3000', 'http://localhost:5173']
The _require() helper raises a descriptive error immediately if a required variable is missing, so you get a clear error at startup instead of a cryptic None-related crash later. The typed AppConfig dataclass means your IDE knows the shape of all config values, which catches many mistakes at edit time rather than runtime.
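To see the fail-fast behavior in isolation, here is a standalone sketch of the same pattern (require_env and DEMO_SECRET_KEY are made-up names for this demo, not part of the config module above):

```python
import os

def require_env(key: str) -> str:
    # Raise a descriptive error at startup instead of crashing on None later
    value = os.getenv(key)
    if value is None:
        raise EnvironmentError(
            f"Required environment variable '{key}' is not set. Check your .env file."
        )
    return value

os.environ.pop("DEMO_SECRET_KEY", None)  # make sure it is unset for the demo
try:
    require_env("DEMO_SECRET_KEY")
    startup_failed = False
except EnvironmentError as exc:
    startup_failed = True
    print(f"Startup aborted: {exc}")

print(startup_failed)  # True
```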
Frequently Asked Questions
How do I make sure .env is never committed?
Add .env to your .gitignore file. The standard Python .gitignore template from GitHub already includes it. Also run git status to confirm the file shows as untracked before your first commit. For extra safety, you can add a pre-commit hook that scans for common secret patterns. The .env.example file with placeholder values is safe to commit and should be committed so other developers know what variables are required.
What happens if the variable is already set in the shell?
By default, load_dotenv() does not override existing environment variables. If DATABASE_URL is already set in your shell or by a Docker environment, the .env file value is ignored for that variable. This is the correct behavior for production deployments where the platform sets real credentials. Use load_dotenv(override=True) only when you explicitly want the file to take precedence over the shell.
Why are all values strings? How do I handle types?
Environment variables are always strings at the OS level — python-dotenv does not change this. The recommended pattern is to convert types at config load time, not at usage time. Use int(os.getenv("PORT", "8000")) for integers, float(os.getenv("TIMEOUT", "30.0")) for floats, and a string comparison like os.getenv("DEBUG", "false").lower() == "true" for booleans. Centralizing these conversions in a config module (as shown in the real-world example) prevents repeated conversion logic throughout your codebase.
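These conversions can be bundled into small helpers. The env_bool and env_int names below are hypothetical, not part of python-dotenv; this is one possible sketch:

```python
import os

def env_bool(key: str, default: bool = False) -> bool:
    # Accept common truthy spellings, case-insensitively; anything else is False
    return os.getenv(key, str(default)).strip().lower() in {"1", "true", "yes", "on"}

def env_int(key: str, default: int) -> int:
    return int(os.getenv(key, str(default)))

os.environ["DEBUG"] = "True"
os.environ["PORT"] = "8080"
os.environ.pop("MISSING_VAR", None)  # ensure it is unset for the demo

print(env_bool("DEBUG"))             # True -- comparison is case-insensitive
print(env_int("PORT", 8000))         # 8080
print(env_int("MISSING_VAR", 8000))  # 8000 -- default used
```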
How does python-dotenv work with Docker?
Docker Compose supports env_file: .env in the service definition, which loads your .env file as container environment variables. In this setup, python-dotenv is redundant — the variables are already in the environment before Python starts. Many projects use both: docker-compose sets variables from .env, and python-dotenv handles the local development case when running outside Docker. The load_dotenv() call is harmless when all variables are already set (because it does not override by default).
Does python-dotenv handle special characters in values?
Yes, but you need to quote values that contain spaces, special characters, or the # comment character. Use double quotes: PASSWORD="my#secret!with spaces". Inside single quotes everything is literal; inside double quotes, escape sequences such as \n are interpreted but # and spaces are preserved. Without quotes, a # preceded by whitespace starts an inline comment and truncates the value. As a rule of thumb, quote any value that is not a simple alphanumeric string.
How should I handle config in tests?
Create a .env.test file with test-specific values (test database, mock API keys) and load it explicitly in your test setup: load_dotenv(".env.test", override=True). Alternatively, use pytest’s monkeypatch fixture to set individual environment variables for specific tests: monkeypatch.setenv("DATABASE_URL", "sqlite:///:memory:"). This keeps test config isolated and prevents test runs from accidentally connecting to development or production resources.
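Outside pytest, the same isolation is available from the standard library via unittest.mock.patch.dict. A minimal sketch (get_db_url is a hypothetical function under test):

```python
import os
from unittest.mock import patch

def get_db_url() -> str:
    # Hypothetical function under test that reads config from the environment
    return os.getenv("DATABASE_URL", "not set")

# Override DATABASE_URL only inside the with-block; the original value
# (or its absence) is restored automatically on exit
with patch.dict(os.environ, {"DATABASE_URL": "sqlite:///:memory:"}):
    inside = get_db_url()

print(inside)  # sqlite:///:memory:
```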
Conclusion
python-dotenv is one of the first libraries you should add to any Python project that connects to external services. It solves the configuration problem cleanly: secrets stay out of source code, config changes between environments without editing files, and os.getenv() remains your single interface to all configuration regardless of where values come from. The .env file approach is so widely adopted that it is supported natively by Docker Compose, many CI/CD platforms, and virtually every Python web framework.
Start with the config module pattern from this article — centralize your load_dotenv() call, convert all types immediately, and use _require() for mandatory variables so failures are loud and clear at startup. Commit your .env.example, gitignore your .env, and your configuration workflow will be solid for any project. The full API reference is available at the python-dotenv PyPI page.