Skill Level: Intermediate
Introduction to Modern HTTP Requests
For years, the Python requests library has been the go-to solution for making HTTP requests. While requests is powerful and user-friendly, it has limitations that modern Python developers encounter daily. It doesn’t support async operations natively, lacks HTTP/2 support, and can feel sluggish when you’re handling dozens of concurrent requests. If you’ve found yourself wrestling with requests’ synchronous nature or spinning up ThreadPoolExecutor just to manage multiple requests, you’re not alone. Many developers hit a wall where they need something more capable.
Enter httpx — a modern HTTP client that drops in as a replacement for requests while adding powerful features like native async/await support, HTTP/2 capabilities, and streaming responses. The best part? If you already know requests, you’ll feel right at home with httpx. The API is remarkably similar, which means you can start using it immediately without relearning everything.
In this guide, we’ll walk through everything you need to know about httpx. We’ll start with a quick example to get you up and running, explore what makes httpx special compared to other HTTP libraries, and then dive into practical patterns you can use in your own projects. Whether you’re building a simple API client or managing complex async workflows, httpx has the tools you need. Let’s get started.
Quick Example: GET Request in 5 Lines
# quick_get.py
import httpx
response = httpx.get("https://jsonplaceholder.typicode.com/posts/1")
print(response.status_code)
print(response.json())
That’s it. If you’ve used requests before, you already know httpx. The synchronous API is nearly identical, but under the hood, httpx brings modern features to the table. Now let’s explore what makes httpx more powerful than its predecessor.

What is httpx and Why Use It?
httpx is a modern HTTP client library for Python that combines the simplicity of requests with advanced features like async support, HTTP/2, and more sophisticated timeout handling. Created by Tom Christie (the developer behind Starlette and Django REST Framework), httpx is built on a foundation that understands modern Python development patterns. It’s not just a requests replacement — it’s a redesign based on everything we’ve learned about making HTTP libraries in the 2020s.
The key differences matter when you’re building real applications. Unlike requests, httpx supports both synchronous and asynchronous code from the same library. You don’t need to install separate packages or maintain multiple code paths. It offers HTTP/2 support as an optional extra, which can mean better performance for services that speak it. It has built-in connection pooling, proper async context managers, and a cleaner API that feels more Pythonic.
Let’s compare httpx to other HTTP libraries in the Python ecosystem:
| Feature | httpx | requests | aiohttp | urllib3 |
|---|---|---|---|---|
| Synchronous API | Yes | Yes | No | Yes |
| Async/Await Support | Yes | No | Yes | No |
| HTTP/2 | Yes (optional) | No | No | No |
| Connection Pooling | Yes | Yes | Yes | Yes |
| Streaming Support | Yes | Yes | Yes | Yes |
| API Complexity | Low | Low | Medium | High |
| Drop-in requests Replacement | Mostly Yes | N/A | No | No |
httpx shines when you need synchronous and asynchronous code in the same project. Unlike aiohttp, which requires AsyncIO from the start, httpx lets you start simple and scale to async when you need it. Unlike requests, httpx doesn’t force you into threading patterns when you want to handle multiple requests concurrently. It’s the bridge between requests’ simplicity and aiohttp’s power.
Installing httpx
Installation is straightforward. httpx is available on PyPI and installs cleanly without forcing dependencies on you. For basic functionality, you only need one command.
# install_httpx.sh
pip install httpx
If you want HTTP/2 support, install the optional h2 dependency, which handles the HTTP/2 protocol details. (The underlying transport layer, httpcore, is a core dependency and is installed automatically.)
# install_httpx_with_http2.sh
pip install "httpx[http2]"
That’s all you need. Unlike some HTTP libraries, httpx doesn’t require compiling C extensions or installing system dependencies. It’s pure Python with optional performance enhancements.

Making GET Requests
GET requests are the foundation of HTTP. They retrieve data without side effects, and httpx makes them effortless. The basic pattern is identical to requests, but httpx adds subtle improvements, like a clearer exception hierarchy and a default timeout (5 seconds) so requests can’t hang indefinitely.
# get_requests.py
import httpx
# Simple GET request
response = httpx.get("https://jsonplaceholder.typicode.com/users")
print(f"Status: {response.status_code}")
print(f"Content-Type: {response.headers['content-type']}")
print(f"First user: {response.json()[0]['name']}")
# GET with query parameters
params = {"userId": 1}
response = httpx.get("https://jsonplaceholder.typicode.com/posts", params=params)
print(f"Posts for user 1: {len(response.json())}")
# GET with custom headers
headers = {"User-Agent": "MyApp/1.0"}
response = httpx.get("https://httpbin.org/headers", headers=headers)
print(response.json())
Notice how httpx handles query parameters naturally through the `params` dictionary. You don’t manually construct query strings or worry about URL encoding — httpx handles that behind the scenes. Headers work the same way, accepting a dictionary that httpx merges with the default headers. This consistent API means you can focus on your application logic instead of HTTP bookkeeping.
Making POST Requests
POST requests send data to the server. httpx supports multiple ways to send data: form-encoded, JSON, raw bytes, or streaming. Let’s explore the most common patterns.
# post_requests.py
import httpx

with httpx.Client() as client:
    # POST with JSON data
    data = {
        "title": "New Post",
        "body": "This is a test post",
        "userId": 1,
    }
    response = client.post(
        "https://jsonplaceholder.typicode.com/posts",
        json=data,
    )
    print(f"Created post with ID: {response.json()['id']}")

    # POST with form data
    form_data = {"username": "john", "password": "secret123"}
    response = client.post(
        "https://httpbin.org/post",
        data=form_data,
    )
    print(f"Form post status: {response.status_code}")

    # POST with custom timeout
    try:
        response = client.post(
            "https://httpbin.org/delay/10",
            timeout=5.0,
        )
    except httpx.TimeoutException:
        print("Request timed out after 5 seconds")
When making POST requests, use the `json` parameter for JSON data and the `data` parameter for form-encoded data; httpx automatically sets the correct Content-Type header for you. Using `Client()` as a context manager keeps a connection pool alive across multiple requests, which is more efficient than the module-level functions for repeated requests, and guarantees the pool is closed when you’re done. Timeouts are crucial for production code: they prevent your application from hanging if a server stops responding.

Using Async with httpx
This is where httpx truly shines. Async support is built in from the ground up, not bolted on as an afterthought. When you need many requests in flight at once, async/await lets you handle hundreds of concurrent connections with minimal memory overhead, something that would require threading or multiprocessing with requests.
# async_requests.py
import asyncio
import httpx

async def fetch_posts(client, user_id):
    """Fetch posts for a specific user"""
    response = await client.get(
        f"https://jsonplaceholder.typicode.com/posts?userId={user_id}"
    )
    return response.json()

async def fetch_multiple_users():
    """Fetch posts for multiple users concurrently, sharing one client"""
    async with httpx.AsyncClient() as client:
        tasks = [
            fetch_posts(client, user_id)
            for user_id in range(1, 6)
        ]
        results = await asyncio.gather(*tasks)
    for i, posts in enumerate(results, 1):
        print(f"User {i}: {len(posts)} posts")

# Run the async function
asyncio.run(fetch_multiple_users())
The `AsyncClient()` context manager handles resource cleanup automatically. Using `asyncio.gather()`, we fetch posts for five users concurrently in roughly the time it takes to fetch one. This pattern scales to thousands of concurrent requests without the overhead of creating threads. The key difference from synchronous code is minimal — just add `async` and `await` keywords.
# async_with_timeout.py
import asyncio
import httpx

async def fetch_with_timeout():
    """Fetch with explicit timeout configuration"""
    timeout = httpx.Timeout(10.0)  # 10 second timeout for all operations
    async with httpx.AsyncClient(timeout=timeout) as client:
        try:
            response = await client.get(
                "https://jsonplaceholder.typicode.com/posts/1"
            )
            print(f"Success: {response.status_code}")
        except httpx.TimeoutException:
            print("Request timed out")
        except httpx.RequestError as e:
            print(f"Network error: {e}")

asyncio.run(fetch_with_timeout())
Timeout handling in async contexts is critical. The `Timeout` object lets you set different timeouts for connection, read, write, and pool operations. This fine-grained control prevents your async application from hanging on unresponsive servers.
HTTP/2 Support
HTTP/2 is faster than HTTP/1.1 for many workloads because it multiplexes multiple requests over a single connection and compresses headers. With httpx, HTTP/2 is opt-in: install the h2 extra (`pip install "httpx[http2]"`) and pass `http2=True` when creating a client. The top-level functions like `httpx.get()` always use HTTP/1.1.
# http2_support.py
import httpx

# Top-level functions always use HTTP/1.1
response = httpx.get("https://httpbin.org/get")
print(f"HTTP Version: {response.http_version}")

# A default client also uses HTTP/1.1
client = httpx.Client()
response = client.get("https://httpbin.org/get")
print(f"HTTP Version (default): {response.http_version}")
client.close()

# Create a client with HTTP/2 enabled (requires the h2 extra)
client = httpx.Client(http2=True)
response = client.get("https://httpbin.org/get")
print(f"HTTP Version (with HTTP/2): {response.http_version}")
client.close()
The performance difference is subtle for single requests but becomes dramatic with concurrent requests over HTTP/2. Since HTTP/2 multiplexes requests on a single connection, you avoid the overhead of establishing multiple TCP connections. For APIs that support it, this can translate into noticeably faster performance in real-world scenarios.

Timeouts and Error Handling
Production code needs robust error handling. httpx provides clear exceptions for different failure scenarios, making it easy to distinguish between network problems, timeouts, and server errors.
# error_handling.py
import httpx

def fetch_with_fallback(url, fallback_url):
    """Fetch from primary URL with fallback"""
    try:
        response = httpx.get(url, timeout=5.0)
        response.raise_for_status()  # Raise exception for bad status codes
        return response.json()
    except httpx.TimeoutException:
        print(f"Timeout on {url}, trying fallback")
        return httpx.get(fallback_url).json()
    except httpx.HTTPStatusError as e:
        print(f"HTTP error: {e.response.status_code}")
        raise
    except httpx.RequestError as e:
        print(f"Request error: {e}")
        raise

try:
    data = fetch_with_fallback(
        "https://jsonplaceholder.typicode.com/posts/1",
        "https://jsonplaceholder.typicode.com/posts/2"
    )
    print(f"Fetched post: {data['title']}")
except Exception as e:
    print(f"All attempts failed: {e}")
httpx organizes exceptions in a clear hierarchy. `RequestError` is the base class for all request-related errors. `TimeoutException` indicates a timeout (connection, read, write, or pool). `HTTPStatusError` means the server responded with an error status code (4xx or 5xx). Using `raise_for_status()` automatically raises an exception for bad status codes, similar to requests.
# advanced_timeouts.py
import httpx

# Granular timeout control
timeout = httpx.Timeout(
    timeout=10.0,  # Default timeout
    connect=5.0,   # Connection timeout
    read=10.0,     # Read timeout
    write=10.0,    # Write timeout
    pool=10.0      # Connection pool timeout
)
client = httpx.Client(timeout=timeout)

# Override timeout for specific request
try:
    response = client.get(
        "https://httpbin.org/delay/2",
        timeout=2.0  # Override client timeout
    )
    print(f"Response: {response.status_code}")
except httpx.TimeoutException:
    print("Custom timeout exceeded")
finally:
    client.close()
Different operations need different timeout values. Connection timeouts should be shorter (5-10 seconds), while read timeouts depend on the expected response size. For large downloads, you might need 30+ second read timeouts. httpx lets you configure each separately.
Streaming Responses
When dealing with large files or streaming APIs, loading the entire response into memory is inefficient. httpx supports streaming, letting you process responses chunk by chunk.
# streaming_responses.py
import asyncio
import httpx

# Stream a large response
with httpx.stream("GET", "https://httpbin.org/bytes/1024") as response:
    print(f"Status: {response.status_code}")
    print(f"Content-Length: {response.headers.get('content-length')}")
    # Process response in chunks
    for chunk in response.iter_bytes(chunk_size=256):
        print(f"Received {len(chunk)} bytes")

# Stream line by line
with httpx.stream("GET", "https://httpbin.org/json") as response:
    for line in response.iter_lines():
        if line:
            print(f"Line: {line[:50]}...")

# Async streaming
async def async_stream():
    async with httpx.AsyncClient() as client:
        async with client.stream("GET", "https://httpbin.org/bytes/512") as response:
            async for chunk in response.aiter_bytes(chunk_size=128):
                print(f"Async received {len(chunk)} bytes")

asyncio.run(async_stream())
Streaming is essential for production applications that download large files or handle streaming APIs. The `iter_bytes()` method gives you raw bytes, while `iter_lines()` automatically splits on newlines — useful for newline-delimited JSON APIs. Async streaming with `aiter_bytes()` and `aiter_lines()` works the same way in async contexts.
Real-Life Example: Async API Data Aggregator
Let’s build a practical example that combines async requests, error handling, and structured data processing. Imagine you’re aggregating data from multiple APIs and want to do it efficiently.
# api_aggregator.py
import asyncio
import httpx
from datetime import datetime

class APIAggregator:
    """Aggregates data from multiple APIs concurrently"""

    def __init__(self, max_concurrent=5):
        self.max_concurrent = max_concurrent
        self.timeout = httpx.Timeout(10.0)

    async def fetch_post(self, client, post_id):
        """Fetch a single post"""
        try:
            response = await client.get(
                f"https://jsonplaceholder.typicode.com/posts/{post_id}",
                timeout=self.timeout
            )
            response.raise_for_status()
            return response.json()
        except (httpx.RequestError, httpx.HTTPStatusError) as e:
            print(f"Error fetching post {post_id}: {e}")
            return None

    async def fetch_post_comments(self, client, post_id):
        """Fetch comments on a post"""
        try:
            response = await client.get(
                f"https://jsonplaceholder.typicode.com/comments?postId={post_id}",
                timeout=self.timeout
            )
            response.raise_for_status()
            return response.json()
        except (httpx.RequestError, httpx.HTTPStatusError) as e:
            print(f"Error fetching comments: {e}")
            return []

    async def aggregate(self):
        """Aggregate data from multiple endpoints"""
        async with httpx.AsyncClient(timeout=self.timeout) as client:
            # Fetch posts concurrently
            post_tasks = [
                self.fetch_post(client, i)
                for i in range(1, 6)
            ]
            posts = await asyncio.gather(*post_tasks)
            # Fetch comments concurrently
            comment_tasks = [
                self.fetch_post_comments(client, i)
                for i in range(1, 3)
            ]
            comments = await asyncio.gather(*comment_tasks)
            return {
                "timestamp": datetime.now().isoformat(),
                "posts_fetched": len([p for p in posts if p]),
                "total_comments": sum(len(c) for c in comments if c),
                "sample_post": posts[0] if posts else None
            }

# Run the aggregator
async def main():
    aggregator = APIAggregator()
    results = await aggregator.aggregate()
    print(f"Aggregation completed at {results['timestamp']}")
    print(f"Posts fetched: {results['posts_fetched']}")
    print(f"Total comments: {results['total_comments']}")
    print(f"First post title: {results['sample_post']['title']}")

asyncio.run(main())
This example demonstrates several httpx patterns: creating an async client, managing concurrent requests with proper error handling, using consistent timeouts across all requests, and returning structured data. The `APIAggregator` class is reusable and extensible — you could add caching, retry logic, or progress tracking. In production, you’d likely add logging and more sophisticated error recovery.
Frequently Asked Questions
Q: Is httpx a drop-in replacement for requests?
A: Mostly yes. The synchronous API is nearly identical, so most requests code works with httpx unchanged. However, there are deliberate differences: httpx does not follow redirects by default (pass `follow_redirects=True` to restore the requests behavior), and it applies a default timeout where requests has none. Test thoroughly before migrating production code.
Q: Do I need to use async?
A: No. httpx works great synchronously, and you only need async when handling many concurrent requests. Start with synchronous code and migrate to async if profiling shows it’s beneficial.
Q: What’s the performance difference between httpx and requests?
A: For single requests, performance is similar. For concurrent requests, httpx with async is dramatically faster because it avoids thread overhead. HTTP/2 support also improves performance for compatible servers.
Q: How do I handle cookies and sessions?
A: httpx clients maintain cookies automatically. Use a persistent client for multiple requests to the same host to keep cookies and connection pooling active across requests.
Q: Can I use httpx with Django or Flask?
A: Yes, but use the synchronous API in request handlers since WSGI is synchronous. Use async httpx in ASGI applications like FastAPI or Django Async Views.
Q: How do I set up authentication?
A: httpx supports multiple auth methods. For basic auth: `httpx.get(url, auth=("user", "pass"))`. For bearer tokens: `headers={"Authorization": "Bearer token"}`. For custom auth, create a subclass of `httpx.Auth`.
Conclusion
httpx represents the future of HTTP requests in Python. It combines the simplicity of requests with modern features like async/await, HTTP/2, and better timeout handling. Whether you’re building a simple API client or managing complex concurrent request workflows, httpx has the tools you need without unnecessary complexity.
Start by installing httpx and replacing requests in a non-critical project. You’ll quickly discover why developers are switching. For more details, the official documentation is available at python-httpx.org.
Related Articles
- How To Handle Asynchronous Programming in Python with asyncio
- Understanding HTTP Status Codes: A Complete Guide
- Building REST APIs with FastAPI and Python
- Web Scraping with Python: Best Practices and Tools
- Error Handling and Logging in Production Python Applications