Lesson 4 of 4
Lambda: Serverless Computing
Master AWS Lambda to run code without managing servers, including function creation, triggers, environment variables, and building serverless applications.
38 minutes
AWS Lambda lets you run code without provisioning or managing servers. You only pay for the compute time you consume - there's no charge when your code isn't running.
What is Serverless?
- No Server Management: AWS handles infrastructure
- Automatic Scaling: Scales from a few requests to thousands
- Pay Per Use: Only pay for execution time
- Event-Driven: Responds to events automatically
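These properties translate directly into the billing model: you are charged in GB-seconds (memory × duration) plus a small per-request fee. A minimal sketch of the arithmetic, using illustrative prices (roughly the published us-east-1 x86 rates, but check current AWS pricing before relying on them):

```python
# Estimate monthly Lambda cost from memory size, duration, and invocations.
# Prices are illustrative; verify against the current AWS pricing page.
PRICE_PER_GB_SECOND = 0.0000166667
PRICE_PER_MILLION_REQUESTS = 0.20

def monthly_cost(memory_mb, avg_duration_ms, invocations):
    # GB-seconds = memory in GB * duration in seconds * invocation count
    gb_seconds = (memory_mb / 1024) * (avg_duration_ms / 1000) * invocations
    compute = gb_seconds * PRICE_PER_GB_SECOND
    requests = (invocations / 1_000_000) * PRICE_PER_MILLION_REQUESTS
    return round(compute + requests, 2)

# A 256 MB function averaging 120 ms, invoked one million times a month:
print(monthly_cost(256, 120, 1_000_000))  # 0.7 (dollars)
```

Note how small the numbers are at moderate traffic; this is why "pay per use" matters for spiky or low-volume workloads.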
How Lambda Works
- Upload your code to Lambda
- Set up triggers (API Gateway, S3, DynamoDB, etc.)
- Lambda runs your code when triggered
- Automatically scales based on demand
- You pay only for compute time used
Key Concepts
Function
- Unit of deployment in Lambda
- Contains code and configuration
- Supports multiple languages (Node.js, Python, Java, Go, C#, Ruby)
Triggers
- Events that invoke Lambda functions
- API Gateway: HTTP requests
- S3: File uploads/changes
- DynamoDB: Database changes
- CloudWatch: Scheduled events
- SNS/SQS: Message queues
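Each trigger delivers its event in a service-specific JSON shape, so a handler can tell where an invocation came from by inspecting the payload. A small sketch (note the capitalization quirk: SNS uses `EventSource` where S3, SQS, and DynamoDB use `eventSource`):

```python
def identify_trigger(event):
    # S3, SNS, SQS, and DynamoDB deliver batched events under "Records";
    # API Gateway proxy events carry top-level HTTP keys instead.
    if "httpMethod" in event:
        return "api-gateway"
    records = event.get("Records", [])
    if records:
        # SNS spells the key "EventSource"; the others use "eventSource".
        return records[0].get("eventSource",
                              records[0].get("EventSource", "unknown"))
    return "unknown"

print(identify_trigger({"Records": [{"eventSource": "aws:s3"}]}))  # aws:s3
print(identify_trigger({"httpMethod": "GET", "path": "/"}))        # api-gateway
```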
Execution Environment
- Temporary runtime environment
- Includes memory allocation (128MB - 10GB)
- CPU scales with memory
- Max execution time: 15 minutes
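The context object the runtime passes to every handler exposes the time left in the invocation, which long-running functions can check to stop cleanly before hitting the timeout. A sketch, with a stub standing in for the real runtime-supplied context:

```python
class FakeContext:
    """Stub for the Lambda context object (the real one comes from the runtime)."""
    def __init__(self, remaining_ms):
        self._remaining_ms = remaining_ms

    def get_remaining_time_in_millis(self):
        return self._remaining_ms

def lambda_handler(event, context):
    processed = 0
    for item in event.get("items", []):
        # Bail out early if fewer than 5 seconds remain in this invocation.
        if context.get_remaining_time_in_millis() < 5000:
            break
        processed += 1  # stand-in for real per-item work
    return {"processed": processed}

print(lambda_handler({"items": [1, 2, 3]}, FakeContext(remaining_ms=60_000)))
```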
Layers
- Package dependencies separately
- Reuse across multiple functions
- Reduce deployment package size
Lambda Function Structure
Node.js Example
exports.handler = async (event) => {
  // Your code here
  const response = {
    statusCode: 200,
    body: JSON.stringify('Hello from Lambda!')
  };
  return response;
};
Python Example
def lambda_handler(event, context):
    # Your code here
    return {
        'statusCode': 200,
        'body': 'Hello from Lambda!'
    }
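Because a handler is just an ordinary function, you can smoke-test it locally by calling it with a hand-built event (passing None for the context is fine when the handler doesn't use it):

```python
def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': 'Hello from Lambda!'
    }

# Local smoke test: empty event, no context.
result = lambda_handler({}, None)
print(result['statusCode'], result['body'])  # 200 Hello from Lambda!
```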
Common Use Cases
API Backends
- REST APIs with API Gateway
- GraphQL APIs
- Microservices architecture
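A common pattern for small REST backends is a single function that dispatches on the method and path of the API Gateway proxy event. A minimal sketch (the route table and handler names are illustrative):

```python
import json

def list_users(event):
    return {"users": []}

def create_user(event):
    body = json.loads(event.get("body") or "{}")
    return {"created": body.get("name")}

# Map (HTTP method, path) pairs to handler functions.
ROUTES = {
    ("GET", "/users"): list_users,
    ("POST", "/users"): create_user,
}

def lambda_handler(event, context):
    handler = ROUTES.get((event["httpMethod"], event["path"]))
    if handler is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(handler(event))}

print(lambda_handler(
    {"httpMethod": "POST", "path": "/users", "body": '{"name": "Ada"}'}, None
)["body"])
```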
Data Processing
- Process S3 file uploads
- Transform data streams
- ETL operations
Automation
- Scheduled tasks (cron jobs)
- Infrastructure management
- Automated backups
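A scheduled (cron-style) invocation arrives as an EventBridge event whose time field carries the trigger timestamp. A sketch of a handler for a nightly job (the actual work is stubbed out):

```python
from datetime import datetime

def lambda_handler(event, context):
    # EventBridge scheduled events carry an ISO-8601 UTC timestamp in "time".
    fired_at = datetime.strptime(event["time"], "%Y-%m-%dT%H:%M:%SZ")
    # ... run the actual scheduled work here (backup, cleanup, report) ...
    return {"ran_at": fired_at.isoformat()}

print(lambda_handler(
    {"time": "2024-06-01T03:00:00Z", "detail-type": "Scheduled Event"}, None
))
```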
Real-Time Processing
- IoT data processing
- Real-time analytics
- Stream processing
Best Practices
- Keep functions small and focused
- Use environment variables for configuration
- Implement proper error handling
- Use layers for dependencies
- Monitor with CloudWatch Logs
- Set appropriate memory allocation
- Use provisioned concurrency for latency-sensitive apps
- Implement retry logic for failures
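The retry advice above often amounts to a small exponential-backoff wrapper; a sketch (delay values are illustrative, and real code should retry only errors known to be transient):

```python
import time

def with_retries(fn, attempts=3, base_delay=0.5):
    """Call fn, retrying with exponential backoff on failure."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Example: a flaky operation that succeeds on the third try.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(with_retries(flaky, base_delay=0.01))  # ok
```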
Cost Optimization
- Right-size memory allocation
- Reduce cold starts with provisioned concurrency
- Use Lambda@Edge for lower latency
- Monitor and optimize execution time
- Clean up unused functions
Code Example
# Create and deploy Lambda functions with AWS CLI
# Create a simple Lambda function (Node.js)
cat > index.js << 'EOF'
exports.handler = async (event) => {
  console.log('Event:', JSON.stringify(event, null, 2));
  const response = {
    statusCode: 200,
    headers: {
      'Content-Type': 'application/json',
      'Access-Control-Allow-Origin': '*'
    },
    body: JSON.stringify({
      message: 'Hello from Lambda!',
      input: event
    })
  };
  return response;
};
EOF
# Package the function
zip function.zip index.js
# Create IAM role for Lambda
aws iam create-role \
  --role-name lambda-execution-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "lambda.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'
# Attach basic execution policy
aws iam attach-role-policy \
  --role-name lambda-execution-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
# Create Lambda function
aws lambda create-function \
  --function-name my-first-lambda \
  --runtime nodejs18.x \
  --role arn:aws:iam::ACCOUNT_ID:role/lambda-execution-role \
  --handler index.handler \
  --zip-file fileb://function.zip \
  --memory-size 256 \
  --timeout 30
# Invoke the function (AWS CLI v2 needs the binary-format flag for a raw JSON payload)
aws lambda invoke \
  --function-name my-first-lambda \
  --cli-binary-format raw-in-base64-out \
  --payload '{"name":"World"}' \
  response.json
cat response.json
# Update function code
zip function.zip index.js
aws lambda update-function-code \
  --function-name my-first-lambda \
  --zip-file fileb://function.zip
# Set environment variables (quote the value so the shell doesn't expand the braces)
aws lambda update-function-configuration \
  --function-name my-first-lambda \
  --environment "Variables={DB_HOST=localhost,API_KEY=secret123}"
# Allow S3 to invoke the function (the trigger itself is then configured
# via the bucket's notification settings)
aws lambda add-permission \
  --function-name my-first-lambda \
  --statement-id s3-trigger \
  --action lambda:InvokeFunction \
  --principal s3.amazonaws.com \
  --source-arn arn:aws:s3:::my-bucket
# List all functions
aws lambda list-functions --output table
# Get function configuration
aws lambda get-function --function-name my-first-lambda
# Delete function
aws lambda delete-function --function-name my-first-lambda
# Python Lambda example for S3 processing
cat > lambda_function.py << 'EOF'
import json
from urllib.parse import unquote_plus

import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # Get bucket and key from the S3 event record
    # (object keys arrive URL-encoded, so decode them first)
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = unquote_plus(event['Records'][0]['s3']['object']['key'])
    print(f'Processing file: {bucket}/{key}')

    # Fetch the object from S3
    obj = s3.get_object(Bucket=bucket, Key=key)
    data = obj['Body'].read().decode('utf-8')

    # Process the data (example: count lines)
    line_count = len(data.split('\n'))

    return {
        'statusCode': 200,
        'body': json.dumps({
            'message': 'File processed',
            'lines': line_count
        })
    }
EOF