Modern AWS Deployment Strategies for Web Applications

Andrei Bespamiatnov

Introduction
AWS offers numerous deployment options for web applications, each with its own trade-offs in terms of complexity, cost, and scalability. In this comprehensive guide, I’ll walk through different deployment strategies, from simple static site hosting to sophisticated containerized deployments.
Static Site Deployment
Amazon S3 + CloudFront
Perfect for static sites, SPAs, and JAMstack applications.
// CDK example for static site deployment
import { Stack, StackProps, RemovalPolicy } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as s3deploy from 'aws-cdk-lib/aws-s3-deployment';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';

export class StaticSiteStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // S3 bucket for static content (private; CloudFront is the only reader)
    const siteBucket = new s3.Bucket(this, 'SiteBucket', {
      bucketName: 'my-static-site-bucket',
      publicReadAccess: false,
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
      removalPolicy: RemovalPolicy.DESTROY,
      autoDeleteObjects: true,
    });

    // CloudFront distribution in front of the bucket
    const distribution = new cloudfront.Distribution(this, 'SiteDistribution', {
      defaultBehavior: {
        origin: new origins.S3Origin(siteBucket),
        viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
        cachePolicy: cloudfront.CachePolicy.CACHING_OPTIMIZED,
      },
      defaultRootObject: 'index.html',
      errorResponses: [
        {
          httpStatus: 404,
          responseHttpStatus: 200,
          responsePagePath: '/index.html', // SPA routing
        },
      ],
    });

    // Upload the build output and invalidate the CDN cache on deploy
    new s3deploy.BucketDeployment(this, 'DeployWebsite', {
      sources: [s3deploy.Source.asset('./dist')],
      destinationBucket: siteBucket,
      distribution,
      distributionPaths: ['/*'],
    });
  }
}
Benefits and Use Cases
- Cost-effective: Pay only for storage and data transfer
- Global CDN: Fast content delivery worldwide
- Automatic scaling: Handles traffic spikes seamlessly
- Perfect for: React, Vue, Angular, static sites, documentation
Serverless Application Deployment
AWS Lambda + API Gateway
Ideal for APIs and server-side rendered applications with variable traffic.
# Serverless Framework example (serverless.yml)
service: my-web-api

provider:
  name: aws
  runtime: nodejs18.x
  region: us-east-1
  environment:
    NODE_ENV: production
    DATABASE_URL: ${env:DATABASE_URL}

functions:
  api:
    handler: src/handler.handler
    events:
      - http:
          path: /{proxy+}
          method: ANY
          cors: true
    timeout: 30
    memorySize: 512

plugins:
  - serverless-offline
  - serverless-webpack

custom:
  webpack:
    webpackConfig: webpack.config.js
    includeModules: true
// Express.js handler for Lambda
import express from 'express';
import serverless from 'serverless-http';

const app = express();
app.use(express.json());

app.get('/api/health', (req, res) => {
  res.json({ status: 'healthy', timestamp: new Date().toISOString() });
});

app.get('/api/users', async (req, res) => {
  try {
    // getUsersFromDatabase is a placeholder for your data-access layer
    const users = await getUsersFromDatabase();
    res.json(users);
  } catch (error) {
    res.status(500).json({ error: 'Internal server error' });
  }
});

// Export the serverless handler
export const handler = serverless(app);
Benefits and Considerations
- Pay per request: No idle server costs
- Automatic scaling: From 0 to thousands of concurrent executions
- Managed infrastructure: No server maintenance
- Cold starts: Initial request latency consideration
- Execution limits: 15-minute maximum execution time
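A common way to soften cold starts is to hoist expensive initialization (SDK clients, database connections) to module scope so warm invocations reuse it. Here is a minimal sketch of that pattern; `createDbConnection` and the connection type are illustrative stand-ins for a real client, not an AWS API:

```typescript
// Hypothetical expensive resource; stands in for a real DB client.
type DbConnection = { id: number };
let connectionCount = 0;

function createDbConnection(): DbConnection {
  // Imagine a TLS handshake and auth here; this should run only on cold start.
  connectionCount += 1;
  return { id: connectionCount };
}

// Module scope: survives across warm invocations of the same Lambda sandbox.
let cachedConnection: DbConnection | undefined;

export function getConnection(): DbConnection {
  if (!cachedConnection) {
    cachedConnection = createDbConnection();
  }
  return cachedConnection;
}

// Every warm invocation reuses the connection created on cold start.
export async function handler(): Promise<{ connectionId: number }> {
  const db = getConnection();
  return { connectionId: db.id };
}
```

The same idea applies to reading configuration, compiling templates, or loading ML models: anything done outside the handler body is paid for once per sandbox, not once per request.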
Container-Based Deployment
Amazon ECS with Fargate
Perfect for microservices and applications requiring more control over the runtime environment.
# Multi-stage Dockerfile
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage: only the build output and production dependencies
FROM node:18-alpine AS production
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
EXPOSE 3000
USER node
CMD ["node", "dist/server.js"]
// ECS service with CDK
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';

export class ContainerServiceStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // VPC spanning two availability zones
    const vpc = new ec2.Vpc(this, 'MyVpc', {
      maxAzs: 2,
    });

    // ECS cluster
    const cluster = new ecs.Cluster(this, 'MyCluster', {
      vpc,
      containerInsights: true,
    });

    // Task definition
    const taskDefinition = new ecs.FargateTaskDefinition(this, 'TaskDef', {
      memoryLimitMiB: 512,
      cpu: 256,
    });

    const container = taskDefinition.addContainer('web', {
      image: ecs.ContainerImage.fromRegistry('my-app:latest'),
      memoryLimitMiB: 512,
      logging: ecs.LogDrivers.awsLogs({
        streamPrefix: 'my-app',
      }),
      environment: {
        NODE_ENV: 'production',
      },
    });

    container.addPortMappings({
      containerPort: 3000,
      protocol: ecs.Protocol.TCP,
    });

    // Fargate service
    const service = new ecs.FargateService(this, 'Service', {
      cluster,
      taskDefinition,
      desiredCount: 2,
      assignPublicIp: false,
    });

    // Application Load Balancer
    const lb = new elbv2.ApplicationLoadBalancer(this, 'LB', {
      vpc,
      internetFacing: true,
    });

    const listener = lb.addListener('Listener', {
      port: 80,
    });

    listener.addTargets('ECS', {
      port: 80,
      targets: [service],
      healthCheck: { path: '/health' },
    });
  }
}
Auto Scaling Configuration
// Auto scaling based on CPU and memory (Duration comes from 'aws-cdk-lib')
const scalableTarget = service.autoScaleTaskCount({
  minCapacity: 2,
  maxCapacity: 10,
});

scalableTarget.scaleOnCpuUtilization('CpuScaling', {
  targetUtilizationPercent: 70,
  scaleInCooldown: Duration.minutes(5),
  scaleOutCooldown: Duration.minutes(2),
});

scalableTarget.scaleOnMemoryUtilization('MemoryScaling', {
  targetUtilizationPercent: 80,
});
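Target tracking works by keeping the metric near the target value: roughly, the new desired count is the current count scaled by how far the metric is above or below the target, clamped to the configured capacity bounds. A minimal sketch of that arithmetic (the function name is illustrative, and this is an approximation of the behavior, not the exact AWS algorithm):

```typescript
// Approximate desired task count under target tracking:
// desired ≈ ceil(currentCount * currentMetric / targetMetric),
// clamped to the configured min/max capacity.
function targetTrackingDesiredCount(
  currentCount: number,
  currentMetric: number, // e.g. average CPU utilization in percent
  targetMetric: number,  // e.g. targetUtilizationPercent
  minCapacity: number,
  maxCapacity: number,
): number {
  const raw = Math.ceil((currentCount * currentMetric) / targetMetric);
  return Math.min(maxCapacity, Math.max(minCapacity, raw));
}

// Two tasks averaging 90% CPU with a 70% target scale out to three tasks.
console.log(targetTrackingDesiredCount(2, 90, 70, 2, 10)); // 3
```

This is also why the cooldowns above are asymmetric: scaling out quickly (2 minutes) protects latency, while scaling in slowly (5 minutes) avoids flapping when traffic is bursty.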
Kubernetes on AWS (EKS)
For Complex Microservices Architectures
# kubernetes/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app
  labels:
    app: my-web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
    spec:
      containers:
        - name: web
          image: my-web-app:latest
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: "production"
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: database-url
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: my-web-app-service
spec:
  selector:
    app: my-web-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer
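The probes above assume the container actually serves /health and /ready, and that the two mean different things: liveness says "the process is up", readiness says "dependencies are available, send me traffic". A minimal sketch of those endpoints using Node's built-in http module; the `isReady` flag is an illustrative stand-in for real dependency checks such as a database ping:

```typescript
import * as http from 'http';

// Illustrative readiness flag; in practice this would reflect real
// dependency checks (database connection established, caches warm, etc.).
let isReady = false;

// Pure handler so the routing logic is easy to test in isolation.
export function probeResponse(path: string): { status: number; body: string } {
  if (path === '/health') {
    // Liveness: the process is up and able to answer requests.
    return { status: 200, body: JSON.stringify({ status: 'healthy' }) };
  }
  if (path === '/ready') {
    // Readiness: only accept traffic once dependencies are available.
    return isReady
      ? { status: 200, body: JSON.stringify({ status: 'ready' }) }
      : { status: 503, body: JSON.stringify({ status: 'starting' }) };
  }
  return { status: 404, body: JSON.stringify({ error: 'not found' }) };
}

export function markReady(): void {
  isReady = true;
}

const server = http.createServer((req, res) => {
  const { status, body } = probeResponse(req.url ?? '/');
  res.writeHead(status, { 'Content-Type': 'application/json' });
  res.end(body);
});

// Call after startup work (e.g. DB connection) completes.
export function start(port = 3000): http.Server {
  markReady();
  return server.listen(port);
}
```

Returning 503 from /ready until startup finishes lets Kubernetes hold traffic back during deploys without restarting the pod, while a failing /health triggers a restart.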
// EKS cluster with CDK
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as eks from 'aws-cdk-lib/aws-eks';
import * as ec2 from 'aws-cdk-lib/aws-ec2';

export class EKSClusterStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const cluster = new eks.Cluster(this, 'MyCluster', {
      version: eks.KubernetesVersion.V1_24,
      defaultCapacity: 0, // We'll add managed node groups instead
    });

    // Managed node group
    cluster.addNodegroupCapacity('ManagedNodes', {
      instanceTypes: [
        new ec2.InstanceType('t3.medium'),
        new ec2.InstanceType('t3.large'),
      ],
      minSize: 2,
      maxSize: 10,
      desiredSize: 3,
    });

    // Install the AWS Load Balancer Controller via Helm
    cluster.addHelmChart('AWSLoadBalancerController', {
      chart: 'aws-load-balancer-controller',
      repository: 'https://aws.github.io/eks-charts',
      namespace: 'kube-system',
      values: {
        clusterName: cluster.clusterName,
      },
    });
  }
}
CI/CD Pipeline Integration
GitHub Actions with AWS
# .github/workflows/deploy.yml
name: Deploy to AWS

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run tests
        run: npm test

      - name: Build application
        run: npm run build

      # For static site deployment
      - name: Deploy to S3
        run: aws s3 sync ./dist s3://${{ secrets.S3_BUCKET }} --delete

      - name: Invalidate CloudFront
        run: |
          aws cloudfront create-invalidation \
            --distribution-id ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }} \
            --paths "/*"

      # For container deployment
      - name: Build and push Docker image
        env:
          ECR_REGISTRY: ${{ secrets.ECR_REGISTRY }}
          ECR_REPOSITORY: my-web-app
          IMAGE_TAG: ${{ github.sha }}
        run: |
          aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin $ECR_REGISTRY
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG

      - name: Update ECS service
        run: |
          aws ecs update-service \
            --cluster my-cluster \
            --service my-service \
            --force-new-deployment
Monitoring and Observability
CloudWatch Integration
// Custom metrics and alarms (fragment from a stack constructor)
import * as cloudwatch from 'aws-cdk-lib/aws-cloudwatch';
import * as actions from 'aws-cdk-lib/aws-cloudwatch-actions';
import * as sns from 'aws-cdk-lib/aws-sns';

// SNS topic for alerts
const alertTopic = new sns.Topic(this, 'AlertTopic');

// Application Load Balancer metrics
const albTargetResponseTime = new cloudwatch.Metric({
  namespace: 'AWS/ApplicationELB',
  metricName: 'TargetResponseTime',
  dimensionsMap: {
    LoadBalancer: lb.loadBalancerFullName,
  },
  statistic: 'Average',
});

// Alarm for high response time, routed to the SNS topic
const responseTimeAlarm = new cloudwatch.Alarm(this, 'HighResponseTimeAlarm', {
  metric: albTargetResponseTime,
  threshold: 2, // seconds
  evaluationPeriods: 2,
  treatMissingData: cloudwatch.TreatMissingData.NOT_BREACHING,
});
responseTimeAlarm.addAlarmAction(new actions.SnsAction(alertTopic));

// Custom application metrics
const customMetric = new cloudwatch.Metric({
  namespace: 'MyApp/Performance',
  metricName: 'DatabaseQueryTime',
  statistic: 'Average',
});
Application Performance Monitoring
// Express.js middleware for custom metrics (AWS SDK v2)
import { Request, Response, NextFunction } from 'express';
import { CloudWatch } from 'aws-sdk';

const cloudwatch = new CloudWatch();

const metricsMiddleware = (req: Request, res: Response, next: NextFunction) => {
  const start = Date.now();
  res.on('finish', () => {
    const duration = Date.now() - start;
    // Send a custom metric to CloudWatch once the response is sent
    cloudwatch.putMetricData({
      Namespace: 'MyApp/Performance',
      MetricData: [
        {
          MetricName: 'RequestDuration',
          Value: duration,
          Unit: 'Milliseconds',
          Dimensions: [
            {
              Name: 'Method',
              Value: req.method,
            },
            {
              Name: 'Route',
              Value: req.route?.path || req.path,
            },
          ],
        },
      ],
    }).promise().catch(console.error);
  });
  next();
};

app.use(metricsMiddleware);
Cost Optimization Strategies
1. Right-sizing Resources
// Use appropriately sized tasks
const taskDefinition = new ecs.FargateTaskDefinition(this, 'TaskDef', {
  memoryLimitMiB: 512, // Start small
  cpu: 256, // Scale up as needed
});

// Implement auto-scaling to handle traffic variations
const scalableTarget = service.autoScaleTaskCount({
  minCapacity: 1, // Minimum for availability
  maxCapacity: 10, // Maximum for cost control
});
2. Use Spot Instances for Non-Critical Workloads
// ECS capacity provider backed by Spot instances
const spotCapacityProvider = new ecs.AsgCapacityProvider(this, 'SpotCapacityProvider', {
  autoScalingGroup: new autoscaling.AutoScalingGroup(this, 'SpotASG', {
    vpc,
    instanceType: new ec2.InstanceType('t3.medium'),
    machineImage: ecs.EcsOptimizedImage.amazonLinux2(),
    spotPrice: '0.05', // Maximum hourly bid in USD
    minCapacity: 0,
    maxCapacity: 10,
  }),
});

cluster.addAsgCapacityProvider(spotCapacityProvider);
3. Implement Lifecycle Policies
// S3 lifecycle policy for static assets
const siteBucket = new s3.Bucket(this, 'SiteBucket', {
  lifecycleRules: [
    {
      id: 'DeleteOldVersions',
      enabled: true,
      noncurrentVersionExpiration: Duration.days(30),
    },
    {
      id: 'TransitionToIA',
      enabled: true,
      transitions: [
        {
          storageClass: s3.StorageClass.INFREQUENT_ACCESS,
          transitionAfter: Duration.days(30),
        },
        {
          storageClass: s3.StorageClass.GLACIER,
          transitionAfter: Duration.days(90),
        },
      ],
    },
  ],
});
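To see why the transitions above matter, compare per-GB storage costs across classes. The prices below are illustrative placeholders (real S3 pricing varies by region and changes over time, and colder classes add retrieval fees), but the arithmetic is the point:

```typescript
// Illustrative per-GB monthly prices in USD; treat these as assumptions,
// not current AWS pricing. Check the S3 pricing page for real figures.
const PRICE_PER_GB = {
  standard: 0.023,
  infrequentAccess: 0.0125,
  glacier: 0.0036,
};

function monthlyStorageCost(gb: number, pricePerGb: number): number {
  return gb * pricePerGb;
}

// 100 GB of old build artifacts: a few dollars in Standard, well under
// a dollar once transitioned to Glacier.
const gb = 100;
console.log(monthlyStorageCost(gb, PRICE_PER_GB.standard).toFixed(2));
console.log(monthlyStorageCost(gb, PRICE_PER_GB.infrequentAccess).toFixed(2));
console.log(monthlyStorageCost(gb, PRICE_PER_GB.glacier).toFixed(2));
```

The savings are modest at small scale, but lifecycle rules are set-and-forget, so they pay off as versioned deploys accumulate.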
Choosing the Right Strategy
Decision Matrix
| Use Case | Traffic Pattern | Complexity | Recommended Strategy |
|---|---|---|---|
| Static Site/SPA | Any | Low | S3 + CloudFront |
| API with Variable Traffic | Unpredictable | Medium | Lambda + API Gateway |
| Microservices | Consistent | Medium-High | ECS Fargate |
| Complex Orchestration | High | High | EKS |
| Legacy Applications | Consistent | Low-Medium | EC2 + ALB |
Cost Considerations
- Static Sites: $1-10/month for most sites
- Serverless: Pay per request, great for variable traffic
- Containers: $30-200/month depending on size and usage
- Kubernetes: roughly $70+/month for the EKS control plane, plus worker node costs
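For the serverless line above, Lambda cost is dominated by request count and GB-seconds of compute. A rough estimator, using illustrative prices (treat the constants as assumptions and check current AWS pricing; this also ignores the free tier):

```typescript
// Illustrative prices in USD; placeholders, not guaranteed current pricing.
const PRICE_PER_MILLION_REQUESTS = 0.2;
const PRICE_PER_GB_SECOND = 0.0000166667;

function estimateLambdaMonthlyCost(
  requestsPerMonth: number,
  avgDurationMs: number,
  memoryMb: number,
): number {
  const requestCost =
    (requestsPerMonth / 1_000_000) * PRICE_PER_MILLION_REQUESTS;
  // GB-seconds = requests * duration (s) * memory (GB)
  const gbSeconds =
    requestsPerMonth * (avgDurationMs / 1000) * (memoryMb / 1024);
  return requestCost + gbSeconds * PRICE_PER_GB_SECOND;
}

// 1M requests/month at 100 ms average on a 512 MB function comes out to
// roughly a dollar a month under these assumed prices.
console.log(estimateLambdaMonthlyCost(1_000_000, 100, 512).toFixed(2));
```

Running the same numbers for your own traffic profile is the quickest way to decide whether serverless or an always-on container is the cheaper baseline.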
Conclusion
The key to successful AWS deployment is matching your strategy to your specific requirements:
- Start simple with static hosting or serverless
- Scale complexity as your needs grow
- Monitor costs and optimize continuously
- Implement proper CI/CD from the beginning
- Plan for monitoring and observability
Remember, you can always start with a simpler approach and evolve your architecture as your application and team mature. The important thing is to choose a strategy that aligns with your current needs while providing a clear path for future growth.
Each deployment strategy has its place, and the best choice depends on your specific requirements for scalability, cost, complexity, and team expertise. Start with what makes sense for your current situation, and don’t be afraid to evolve your approach as you learn and grow.