Skill Level: Intermediate
Serverless Computing Meets Python
AWS Lambda has fundamentally changed how developers deploy backend applications. Instead of managing servers, worrying about scaling, or paying for idle compute time, you write functions and let AWS handle the rest. In this guide, you’ll learn how to take a Python application, package it with dependencies, and deploy it to Lambda where it’ll handle thousands of concurrent requests without you touching a single EC2 instance.
Don’t let the “serverless” terminology intimidate you. You’re still writing Python code; the infrastructure complexity is just abstracted away. We’ll walk through every step from local development to production deployment, and you’ll understand exactly what’s happening at each stage. By the end, you’ll have a working Lambda function integrated with API Gateway, ready to scale to thousands of concurrent requests.
Here’s what we’re covering: setting up the AWS CLI, creating a Lambda handler function, managing dependencies with zip files and Lambda Layers, deploying via the command line, integrating with API Gateway for HTTP endpoints, configuring environment variables, and building a real-world URL shortener service. You’ll also see common pitfalls and how to avoid them.
Quick Example: Your First Lambda Function
Let’s skip the theory for a moment and get something working in five minutes. This is your first Lambda function:
# lambda_handler.py
import json
from datetime import datetime

def handler(event, context):
    """Simple Lambda handler that returns current time and a message"""
    return {
        'statusCode': 200,
        'body': json.dumps({
            'message': 'Hello from Lambda',
            'timestamp': datetime.now().isoformat()
        })
    }
Output:
{
  "statusCode": 200,
  "body": "{\"message\": \"Hello from Lambda\", \"timestamp\": \"2026-04-08T14:30:45.123456\"}"
}
That’s it. This function runs on Lambda, handles HTTP requests through API Gateway, and costs you nothing until someone actually invokes it. The event parameter carries the request data, and context provides runtime information such as the request ID and the remaining execution time.

Understanding AWS Lambda Architecture
Lambda is Amazon’s serverless compute service. You upload code, set memory and timeout limits, and AWS scales the function automatically based on incoming requests. You pay only for execution time, billed in 1ms increments. Unlike EC2, where you provision instances that run whether or not they’re busy, Lambda execution environments spin up and down on demand.
The fundamental unit is the handler function. AWS calls this function whenever an event triggers it: an HTTP request through API Gateway, an S3 upload, a scheduled EventBridge (CloudWatch Events) rule, or an SQS message. Your function receives the event details and must return a response.
| Aspect | Lambda | Traditional EC2 | Containerized Services |
|---|---|---|---|
| Scaling | Automatic, instant | Manual or ASG | Manual or orchestrator |
| Cold Starts | 50-500ms first call | None | Can be optimized |
| Pricing | Per invocation + duration | Per instance-hour | Per container hour |
| Management | Code only | Full OS control | Container image |
| Ideal Use | Bursty traffic, microservices | Consistent workloads | Complex deployments |
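The pricing row above is easy to reason about with a little arithmetic. Here's a back-of-envelope model in Python; the per-request and per-GB-second rates are illustrative example figures, so check current AWS pricing for your region before relying on them:

```python
# Rough Lambda cost model: a request charge plus a duration charge
# billed in GB-seconds. The rates below are illustrative examples.
PER_MILLION_REQUESTS = 0.20        # USD, example rate
PER_GB_SECOND = 0.0000166667       # USD, example rate

def monthly_cost(invocations, avg_ms, memory_mb):
    """Estimate monthly cost: request charge + duration charge."""
    request_cost = invocations / 1_000_000 * PER_MILLION_REQUESTS
    gb_seconds = invocations * (avg_ms / 1000) * (memory_mb / 1024)
    return request_cost + gb_seconds * PER_GB_SECOND

# e.g. 1M requests/month, 120ms average duration, 256MB memory:
print(f"${monthly_cost(1_000_000, 120, 256):.2f}")  # → $0.70
```

At those example rates, a million moderate requests a month costs well under a dollar, which is why Lambda shines for bursty traffic.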
Setting Up AWS CLI and Credentials
Before deploying anything, you need the AWS CLI configured with credentials. Install it if you haven’t already (pip install awscli gives you CLI v1; AWS distributes v2 as a standalone installer, and either works for this guide), then create an IAM user with Lambda deployment permissions.
# terminal
pip install awscli --upgrade
aws --version
Output:
aws-cli/1.32.0 Python/3.11.7 Linux/6.1.0-20 botocore/1.34.0
Now configure credentials. Generate an access key in the AWS IAM console, then run:
# terminal
aws configure
You’ll be prompted for an Access Key ID and Secret Access Key. Store them securely; never commit them to version control. The AWS CLI saves them in ~/.aws/credentials.
For deployment, your IAM user needs at least these permissions: lambda:CreateFunction, lambda:UpdateFunctionCode, iam:PassRole, and apigateway:*. Create an inline policy, or use the AWSLambda_FullAccess managed policy for development (the older AWSLambdaFullAccess policy is deprecated).
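If you prefer an explicit inline policy, the minimal document looks roughly like this. A sketch built in Python so it's easy to tweak; Resource "*" is a sandbox convenience and should be scoped down for real accounts:

```python
import json

# Sketch of a minimal deployment policy document. Resource "*" keeps the
# example short; narrow it to specific function and role ARNs in production.
deployment_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "lambda:CreateFunction",
            "lambda:UpdateFunctionCode",
            "lambda:UpdateFunctionConfiguration",
            "iam:PassRole",
            "apigateway:*"
        ],
        "Resource": "*"
    }]
}

# Write it out for: aws iam put-user-policy --policy-document file://policy.json
with open("policy.json", "w") as f:
    json.dump(deployment_policy, f, indent=2)
```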

Creating Your Lambda Handler Function
A Lambda handler is any Python function that AWS Lambda invokes. The signature must accept two parameters: event (contains request data) and context (runtime metadata).
# app.py
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    """
    Main Lambda handler function

    Args:
        event: dict containing request data from trigger source
        context: LambdaContext object with runtime info

    Returns:
        dict with statusCode and body for API Gateway
    """
    try:
        logger.info(f"Received event: {json.dumps(event)}")

        # Extract query parameters or body
        body = event.get('body', '{}')
        if isinstance(body, str):
            body = json.loads(body)

        name = body.get('name', 'World')
        response_data = {
            'message': f'Hello, {name}!',
            'request_id': context.aws_request_id,
            'function_name': context.function_name
        }

        return {
            'statusCode': 200,
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps(response_data)
        }
    except Exception as e:
        logger.error(f"Error: {str(e)}")
        return {
            'statusCode': 500,
            'body': json.dumps({'error': 'Internal server error'})
        }
Output:
{
  "statusCode": 200,
  "headers": {"Content-Type": "application/json"},
  "body": "{\"message\": \"Hello, Alice!\", \"request_id\": \"12345-abcde\", \"function_name\": \"my-python-app\"}"
}
Notice the structure: we return a dict with statusCode, headers, and body, the format API Gateway's proxy integration expects. The context object exposes runtime metadata such as aws_request_id, function_name, memory_limit_in_mb, and get_remaining_time_in_millis() for logging and timeout handling.
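You can exercise this handler locally before deploying anything. The sketch below uses a SimpleNamespace as a stand-in for the real LambdaContext (only the attributes the handler reads are faked) and inlines a trimmed copy of the handler so it runs on its own:

```python
# local_test.py - invoke the handler locally with a fake event and context.
import json
from types import SimpleNamespace

def lambda_handler(event, context):
    # Trimmed copy of the handler above, kept here so this file is self-contained
    body = event.get('body', '{}')
    if isinstance(body, str):
        body = json.loads(body)
    name = body.get('name', 'World')
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps({
            'message': f'Hello, {name}!',
            'request_id': context.aws_request_id,
            'function_name': context.function_name
        })
    }

# Fake the two context attributes the handler actually uses
fake_context = SimpleNamespace(aws_request_id='local-test', function_name='my-python-app')
fake_event = {'body': json.dumps({'name': 'Alice'})}

result = lambda_handler(fake_event, fake_context)
print(result['statusCode'])                    # 200
print(json.loads(result['body'])['message'])   # Hello, Alice!
```

The same trick works for any trigger: capture a sample event from CloudWatch Logs, save it as JSON, and feed it to the handler in a plain Python process.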
Packaging Dependencies for Lambda
Lambda accepts direct zip uploads up to 50MB, and the unzipped package must stay under 250MB (stage larger zips in S3). Installing dependencies locally and zipping them alongside your code is the standard approach. Note that libraries with compiled C extensions (numpy, psycopg2) must be built for Lambda's Linux environment; pure-Python libraries like requests work from any platform.
# terminal
mkdir lambda_package
cd lambda_package
cat > requirements.txt << 'EOF'
requests==2.31.0
python-dateutil==2.8.2
boto3==1.28.0
EOF
pip install -r requirements.txt -t .
cp ../app.py .
zip -r function.zip .
Output:
adding: app.py (deflated 45%)
adding: requests/ (stored 0%)
adding: requests/__init__.py (deflated 52%)
...
adding: botocore/data/sts/2011-06-15/service-2.json (deflated 78%)
31 files, 8.4 MB compressed into 2.1 MB
The key is installing dependencies with the -t flag, which places them in the current directory; Lambda's runtime finds them automatically when you import them. For larger packages (NumPy, TensorFlow), consider Lambda Layers: a function can attach up to 5 layers, and the combined unzipped size of the function plus all its layers must stay under the 250MB limit.
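Before uploading, it's worth checking your archive against both limits. A small sketch (the thresholds are the documented direct-upload and extracted-size limits; the path is whatever you named your zip):

```python
import os
import zipfile

ZIPPED_LIMIT = 50 * 1024 * 1024      # direct-upload limit for the zip itself
UNZIPPED_LIMIT = 250 * 1024 * 1024   # limit for the extracted package

def check_package(path):
    """Return (zipped_bytes, unzipped_bytes), warning if a limit is exceeded."""
    with zipfile.ZipFile(path) as zf:
        # Sum the uncompressed size of every member in the archive
        unzipped = sum(info.file_size for info in zf.infolist())
    zipped = os.path.getsize(path)
    if zipped > ZIPPED_LIMIT:
        print("zip too large for direct upload; stage it in S3 instead")
    if unzipped > UNZIPPED_LIMIT:
        print("extracted package exceeds the 250MB limit")
    return zipped, unzipped
```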

Deploying via AWS CLI
With your code zipped and dependencies included, deploy it. First, create an IAM role that Lambda can assume.
# terminal
# Create the trust policy
cat > trust-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
# Create the role
aws iam create-role \
  --role-name lambda-execution-role \
  --assume-role-policy-document file://trust-policy.json
# Attach basic execution policy for CloudWatch logs
aws iam attach-role-policy \
  --role-name lambda-execution-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
Output:
{
  "Role": {
    "RoleName": "lambda-execution-role",
    "Arn": "arn:aws:iam::123456789012:role/lambda-execution-role",
    "Path": "/",
    "CreateDate": "2026-04-08T10:30:00+00:00"
  }
}
Now deploy the function:
# terminal
aws lambda create-function \
  --function-name my-python-app \
  --runtime python3.11 \
  --role arn:aws:iam::123456789012:role/lambda-execution-role \
  --handler app.lambda_handler \
  --zip-file fileb://function.zip \
  --timeout 30 \
  --memory-size 256
Output:
{
  "FunctionName": "my-python-app",
  "FunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:my-python-app",
  "Runtime": "python3.11",
  "Handler": "app.lambda_handler",
  "CodeSize": 2157812,
  "MemorySize": 256,
  "Timeout": 30,
  "LastModified": "2026-04-08T10:35:22.000+0000"
}
The --handler parameter points to your function: filename.function_name. Updates are just as easy:
# terminal
aws lambda update-function-code \
  --function-name my-python-app \
  --zip-file fileb://function.zip
Integrating with API Gateway
Raw Lambda invocations are fine for internal triggers, but to handle HTTP requests, you need API Gateway. This acts as the front door, converting HTTP requests into Lambda events.
# terminal
# Create REST API
API_ID=$(aws apigateway create-rest-api \
  --name my-python-app-api \
  --description "API for Python Lambda app" \
  --query 'id' --output text)
echo "API ID: $API_ID"
# Get root resource
ROOT_ID=$(aws apigateway get-resources \
  --rest-api-id $API_ID \
  --query 'items[0].id' --output text)
# Create resource
RESOURCE_ID=$(aws apigateway create-resource \
  --rest-api-id $API_ID \
  --parent-id $ROOT_ID \
  --path-part "greet" \
  --query 'id' --output text)
# Create POST method
aws apigateway put-method \
  --rest-api-id $API_ID \
  --resource-id $RESOURCE_ID \
  --http-method POST \
  --authorization-type NONE
Then grant API Gateway permission to invoke Lambda (in production, add --source-arn to scope the permission to this specific API):
# terminal
aws lambda add-permission \
  --function-name my-python-app \
  --statement-id AllowAPIGatewayInvoke \
  --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com
Wire the API to Lambda and deploy:
# terminal
# Set Lambda as integration
aws apigateway put-integration \
  --rest-api-id $API_ID \
  --resource-id $RESOURCE_ID \
  --http-method POST \
  --type AWS_PROXY \
  --integration-http-method POST \
  --uri arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:123456789012:function:my-python-app/invocations
# Deploy the API
aws apigateway create-deployment \
  --rest-api-id $API_ID \
  --stage-name prod
Output:
{
  "id": "abc123def456",
  "createdDate": "2026-04-08T11:00:00+00:00"
}
Your API endpoint is now available at https://{api-id}.execute-api.us-east-1.amazonaws.com/prod/greet. Send a POST request with JSON body:
# terminal
curl -X POST https://abc123.execute-api.us-east-1.amazonaws.com/prod/greet \
  -H "Content-Type: application/json" \
  -d '{"name": "Alice"}'
Output:
{"message":"Hello, Alice!","request_id":"req-12345","function_name":"my-python-app"}

Managing Environment Variables and Secrets
Never hardcode API keys or database passwords. Lambda supports environment variables for configuration.
# terminal
aws lambda update-function-configuration \
  --function-name my-python-app \
  --environment Variables="{ENVIRONMENT=production,LOG_LEVEL=INFO,DATABASE_HOST=mydb.us-east-1.rds.amazonaws.com}"
Access them in your code:
# app.py
import os
import json

def lambda_handler(event, context):
    env = os.getenv('ENVIRONMENT', 'development')
    log_level = os.getenv('LOG_LEVEL', 'INFO')
    db_host = os.getenv('DATABASE_HOST')

    return {
        'statusCode': 200,
        'body': json.dumps({
            'environment': env,
            'db_host': db_host
        })
    }
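One refinement worth knowing: read configuration once at module scope rather than on every invocation. Module-level code runs only at cold start, so warm invocations reuse the parsed values. A sketch using the same variable names as above:

```python
import os
import json

# Module-scope code runs once per execution environment (cold start),
# so resolve configuration here instead of inside the handler.
CONFIG = {
    'environment': os.getenv('ENVIRONMENT', 'development'),
    'log_level': os.getenv('LOG_LEVEL', 'INFO'),
    'db_host': os.getenv('DATABASE_HOST'),
}

def lambda_handler(event, context):
    # Warm invocations reuse CONFIG without touching os.environ again
    return {'statusCode': 200, 'body': json.dumps(CONFIG)}
```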
For sensitive data, use AWS Secrets Manager or Systems Manager Parameter Store instead of plaintext environment variables:
# app.py
import json
import boto3

secrets_client = boto3.client('secretsmanager')

def get_database_password():
    """Retrieve password from Secrets Manager"""
    try:
        response = secrets_client.get_secret_value(
            SecretId='prod/database/password'
        )
        return json.loads(response['SecretString'])['password']
    except Exception as e:
        print(f"Error retrieving secret: {e}")
        raise

def lambda_handler(event, context):
    db_password = get_database_password()
    # Use password safely
    return {'statusCode': 200, 'body': 'OK'}
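Because each Secrets Manager call adds latency and cost, it's common to cache the secret for the lifetime of the execution environment. A sketch using functools.lru_cache; fetch_secret here is a local stand-in for the get_secret_value call above so the example runs without AWS:

```python
from functools import lru_cache

CALLS = {'count': 0}  # instrumentation, just to show the caching effect

def fetch_secret(secret_id):
    """Stand-in for secrets_client.get_secret_value + JSON parsing."""
    CALLS['count'] += 1
    return {'password': 'example-password'}

@lru_cache(maxsize=8)
def get_secret(secret_id):
    """First call fetches; later calls in the same warm container hit the cache."""
    return fetch_secret(secret_id)['password']
```

In a warm Lambda container the cache persists between invocations, so repeated calls skip the network round trip; add a TTL if your secrets rotate.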
Using Lambda Layers for Shared Dependencies
If you have multiple Lambda functions sharing libraries, Lambda Layers avoid duplication. A layer is a zip file containing code or libraries that all your functions can access.
# terminal
mkdir -p lambda_layer/python/lib/python3.11/site-packages
pip install requests numpy -t lambda_layer/python/lib/python3.11/site-packages/
cd lambda_layer
zip -r requests_numpy_layer.zip python
aws lambda publish-layer-version \
  --layer-name shared-dependencies \
  --zip-file fileb://requests_numpy_layer.zip \
  --compatible-runtimes python3.11
Output:
{
  "LayerVersionArn": "arn:aws:lambda:us-east-1:123456789012:layer:shared-dependencies:1",
  "Version": 1,
  "CodeSize": 12457283
}
Now attach this layer to your function:
# terminal
aws lambda update-function-configuration \
  --function-name my-python-app \
  --layers arn:aws:lambda:us-east-1:123456789012:layer:shared-dependencies:1
Your function immediately gains access to requests and numpy without bundling them in your deployment package.

Real-World Example: Serverless URL Shortener
Let's build a practical service that shortens URLs and redirects them. It uses DynamoDB for storage and API Gateway for HTTP endpoints.
# url_shortener.py
import json
import uuid
import boto3
import logging
from datetime import datetime, timedelta

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('url-mappings')
logger = logging.getLogger()

def generate_short_code(length=6):
    """Generate a random short code"""
    return str(uuid.uuid4())[:length]

def create_short_url(original_url, custom_alias=None):
    """Store mapping and return short code"""
    short_code = custom_alias or generate_short_code()
    try:
        table.put_item(
            Item={
                'short_code': short_code,
                'original_url': original_url,
                'created_at': datetime.now().isoformat(),
                'expires_at': (datetime.now() + timedelta(days=365)).isoformat(),
                'click_count': 0
            },
            ConditionExpression='attribute_not_exists(short_code)'
        )
        return short_code
    except Exception as e:
        logger.error(f"Error creating mapping: {e}")
        raise

def get_redirect_url(short_code):
    """Retrieve original URL and increment click count"""
    try:
        response = table.get_item(Key={'short_code': short_code})
        if 'Item' not in response:
            return None
        item = response['Item']
        # Increment click counter
        table.update_item(
            Key={'short_code': short_code},
            UpdateExpression='SET click_count = click_count + :inc',
            ExpressionAttributeValues={':inc': 1}
        )
        return item['original_url']
    except Exception as e:
        logger.error(f"Error retrieving URL: {e}")
        return None

def lambda_handler(event, context):
    """Handle shorten and redirect requests"""
    path = event.get('path', '/')
    method = event.get('httpMethod', 'GET')

    try:
        if path == '/shorten' and method == 'POST':
            # API Gateway sends "body": null when there is no body, so
            # fall back to '{}' before parsing
            body = json.loads(event.get('body') or '{}')
            original_url = body.get('url')
            custom_alias = body.get('alias')

            if not original_url:
                return {
                    'statusCode': 400,
                    'body': json.dumps({'error': 'URL is required'})
                }

            short_code = create_short_url(original_url, custom_alias)
            short_url = f"https://short.example.com/{short_code}"

            return {
                'statusCode': 201,
                'body': json.dumps({
                    'short_url': short_url,
                    'short_code': short_code
                })
            }

        elif path.startswith('/') and method == 'GET':
            short_code = path.lstrip('/')
            if not short_code:
                return {
                    'statusCode': 400,
                    'body': json.dumps({'error': 'Short code required'})
                }

            original_url = get_redirect_url(short_code)
            if not original_url:
                return {
                    'statusCode': 404,
                    'body': json.dumps({'error': 'Short URL not found'})
                }

            return {
                'statusCode': 301,
                'headers': {'Location': original_url}
            }

        else:
            return {
                'statusCode': 400,
                'body': json.dumps({'error': 'Invalid request'})
            }

    except Exception as e:
        logger.error(f"Handler error: {e}")
        return {
            'statusCode': 500,
            'body': json.dumps({'error': 'Internal server error'})
        }
Test Requests:
# Create short URL
curl -X POST https://api.example.com/shorten \
  -H "Content-Type: application/json" \
  -d '{"url": "https://www.example.com/very/long/article/path", "alias": "article123"}'
# Response
{"short_url":"https://short.example.com/article123","short_code":"article123"}
# Redirect
curl -L https://short.example.com/article123
# Follows 301 redirect to original URL
This example demonstrates several key Lambda patterns: DynamoDB integration, handling multiple HTTP methods, conditional writes to prevent duplicates, and atomic operations for click counting. Deploy it by creating a DynamoDB table first, then zipping the code with boto3 as a dependency.
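One gap worth closing before running this for real: the /shorten endpoint accepts any string as a URL. A hypothetical validator (is_valid_url is not part of the handler above) could reject junk before it reaches DynamoDB:

```python
from urllib.parse import urlparse

def is_valid_url(url):
    """Accept only absolute http(s) URLs with a host component."""
    try:
        parts = urlparse(url)
    except ValueError:
        return False
    return parts.scheme in ('http', 'https') and bool(parts.netloc)
```

Calling this at the top of the /shorten branch and returning a 400 on failure keeps the table free of unusable entries.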

Frequently Asked Questions
What causes cold starts and can I eliminate them?
Cold starts occur when Lambda initializes a new execution environment, typically adding 50-500ms of latency for Python. They happen when no warm environment is available. Provisioned Concurrency eliminates cold starts by keeping instances initialized, but costs extra. Otherwise, keep your deployment package small and your module-level initialization fast; warm starts (reusing an existing environment) add virtually no latency.
Can I test Lambda functions locally?
Use the AWS SAM CLI (Serverless Application Model) to run functions locally with Lambda emulation. Install it, then run sam local start-api to test API Gateway integration. You can also invoke functions directly with sam local invoke. It's not 100% identical to production AWS Lambda, but it's close enough for development.
How do I handle long-running tasks in Lambda?
Lambda has a 15-minute timeout maximum. For longer tasks, decouple using SQS or SNS: receive a quick acknowledgment, queue the work, then process asynchronously. Or run your code on EC2 or ECS triggered by Lambda. For data processing, consider AWS Batch or Step Functions for orchestration.
What's the difference between Lambda and containers?
Lambda is fully managed: you upload code and don't worry about infrastructure. Containers (ECS/EKS) give you more control but require managing clusters. Lambda scales automatically up to your account's concurrency limit; containers require capacity planning. Use Lambda for bursty microservices; use containers for consistent workloads or when you need OS-level control.
How do I debug Lambda functions in production?
CloudWatch Logs capture all print statements and exceptions. Use aws logs tail /aws/lambda/my-function --follow to stream logs. Add structured logging with JSON output for easier parsing. Lambda Insights provides additional metrics and performance analysis. X-Ray integrates with Lambda to trace requests across services.
Can I use async/await with Lambda?
Yes, Python async/await works in Lambda, but the handler function itself must be synchronous (the runtime doesn't await it). Drive async code from the handler with asyncio.run(). For genuinely asynchronous workflows, pair Lambda with SQS batch processing or invoke the function asynchronously with InvocationType=Event.
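A minimal sketch of that pattern: a synchronous handler that fans out async work with asyncio.gather. The asyncio.sleep(0) stands in for real async I/O such as aiohttp requests:

```python
import asyncio
import json

async def fetch_all(items):
    """Process items concurrently; each await is a placeholder for real I/O."""
    async def process(item):
        await asyncio.sleep(0)      # stand-in for an actual async call
        return item.upper()
    return await asyncio.gather(*(process(i) for i in items))

def lambda_handler(event, context):
    # The handler stays synchronous; asyncio.run drives the event loop
    results = asyncio.run(fetch_all(event.get('items', [])))
    return {'statusCode': 200, 'body': json.dumps(results)}
```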
Wrapping Up
Deploying Python to AWS Lambda eliminates infrastructure headaches. You've learned the complete pipeline: writing handlers, packaging dependencies, using Lambda Layers to share code, integrating with API Gateway for HTTP endpoints, managing secrets securely, and building real-world services like a URL shortener.
The serverless model isn't suitable for every workload: continuously running background services are cheaper on EC2, and processes exceeding 15 minutes need a different architecture. But for APIs, webhooks, microservices, and event-driven workflows, Lambda is hard to beat for simplicity and cost efficiency.
Next steps: explore Lambda@Edge for CDN functions, Step Functions for orchestrating multi-function workflows, and EventBridge for decoupling event sources. Check the official AWS Lambda documentation at docs.aws.amazon.com/lambda and the Boto3 documentation for Python SDK reference.
Official Resources
- AWS Lambda Developer Guide
- Lambda Handler Python Reference
- Working with Lambda Layers
- API Gateway Developer Guide
- Boto3 Documentation