AWS Compute Optimizer: Right-Sizing EC2, Lambda, and ECS Automatically
Most AWS accounts run EC2 instances that are the wrong size. Not dramatically wrong — nobody runs an m5.24xlarge for a blog — but quietly, consistently over-provisioned. An instance that peaks at 15% CPU with 20% memory utilization is a candidate for downsizing. Compute Optimizer looks at 14 days of CloudWatch metrics and tells you which instances to resize, which Lambda functions need more or less memory, which EBS volumes are over-provisioned, and which ECS Fargate tasks have CPU and memory headroom.
It’s free to enable for basic metrics (14 days of CloudWatch data). The paid tier, Enhanced Infrastructure Metrics, extends the lookback to 93 days and improves recommendation accuracy for workloads with monthly cycles. This guide covers enabling Compute Optimizer, reading recommendations for each resource type, automating the analysis, and integrating with your cost management workflow.
Enabling Compute Optimizer
Compute Optimizer is opt-in and disabled by default. Enable it at the account level or across an AWS Organization:
# Enable for a single account
aws compute-optimizer update-enrollment-status \
--status Active
# Verify enrollment
aws compute-optimizer get-enrollment-status
# For organization-wide enrollment (run from management account)
aws compute-optimizer update-enrollment-status \
--status Active \
--include-member-accounts
# Check enrollment status across the org
aws compute-optimizer get-enrollment-statuses-for-organization \
--query 'accountEnrollmentStatuses[*].{Account:accountId,Status:status}'
After enabling, Compute Optimizer needs 14 days of CloudWatch metrics before it generates recommendations. New accounts or recently-launched instances will show status InsufficientData until the lookback window fills.
Enabling Enhanced Infrastructure Metrics extends the lookback to 93 days and costs $0.0003360215 per resource per hour, which works out to roughly $0.25 per instance per month. For instances with monthly traffic patterns (batch jobs, end-of-month reporting), 14 days misses most of the variation. The cost is negligible (10 instances is about $2.50/month) and the improved accuracy often pays for itself with a single correctly-sized instance.
# Enable Enhanced Infrastructure Metrics for specific instances
aws compute-optimizer put-recommendation-preferences \
--resource-type Ec2Instance \
--scope '{"name": "AccountId", "value": "123456789012"}' \
--enhanced-infrastructure-metrics Active
# Or for a specific Auto Scaling group
aws compute-optimizer put-recommendation-preferences \
--resource-type AutoScalingGroup \
--scope '{"name": "ResourceArn", "value": "arn:aws:autoscaling:us-east-1:123456789012:autoScalingGroup:xxx:autoScalingGroupName/my-asg"}' \
--enhanced-infrastructure-metrics Active
EC2 Instance Recommendations
Compute Optimizer classifies each EC2 instance into one of four findings:
- Over-provisioned: instance is larger than needed; downsizing saves money
- Under-provisioned: instance is too small; CPU, memory, or network is maxing out
- Optimized: current size is appropriate
- InsufficientData: not enough CloudWatch metrics to make a recommendation
# Get EC2 recommendations for all instances in the account
aws compute-optimizer get-ec2-instance-recommendations \
--query 'instanceRecommendations[*].{
Instance:instanceArn,
Finding:finding,
CurrentType:currentInstanceType,
Recommended:recommendationOptions[0].instanceType,
MonthlySavings:recommendationOptions[0].savingsOpportunity.estimatedMonthlySavings.value,
Currency:recommendationOptions[0].savingsOpportunity.estimatedMonthlySavings.currency
}' \
--output table
# Filter to only over-provisioned instances (highest priority for cost savings)
aws compute-optimizer get-ec2-instance-recommendations \
--filters name=Finding,values=Overprovisioned \
--query 'instanceRecommendations[*].{
Instance:instanceArn,
Current:currentInstanceType,
Recommended:recommendationOptions[0].instanceType,
CPUMax:utilizationMetrics[?name==`CPU`].value|[0],
Savings:recommendationOptions[0].savingsOpportunity.estimatedMonthlySavings.value
}' \
--output table
Each recommendation includes up to three alternative instance types with estimated monthly savings and the projected CPU/memory utilization at each size. The recommendation isn’t just “go smaller” — it accounts for burstable instance behavior, savings plans pricing if active, and whether you’re using the instance on-demand or with a commitment.
One thing to verify before acting on recommendations: Compute Optimizer doesn’t know about application-level constraints. An instance that looks CPU-idle at the CloudWatch level might be memory-constrained at the application level if the JVM heap is tuned to the current instance size. Always review recommendations against application metrics, not just OS-level ones.
Lambda Recommendations
Lambda right-sizing is often overlooked. Functions default to 128MB, and developers set memory manually without systematic analysis. Memory in Lambda affects both cost and performance — more memory means more CPU, so under-provisioned functions run slower and cost more per invocation.
# Get Lambda recommendations
aws compute-optimizer get-lambda-function-recommendations \
--query 'lambdaFunctionRecommendations[*].{
Function:functionArn,
Finding:finding,
CurrentMemory:currentMemorySize,
Recommended:memorySizeRecommendationOptions[0].memorySize,
CurrentDuration:utilizationMetrics[?name==`Duration`].value|[0]
}' \
--output table
# Detailed recommendation for a specific function
aws compute-optimizer get-lambda-function-recommendations \
--function-arns arn:aws:lambda:us-east-1:123456789012:function:my-api-handler \
--query 'lambdaFunctionRecommendations[0]'
Lambda findings are different from EC2:
- MemoryOverprovisioned: Reducing memory would maintain performance and cost less per invocation
- MemoryUnderprovisioned: The function is constrained — increasing memory would reduce duration enough to lower or maintain cost
- Optimized: Memory is appropriate for the workload
- Unavailable: Insufficient invocation data (function called fewer than ~50 times in the lookback period)
The MemoryUnderprovisioned case is counterintuitive. Increasing Lambda memory raises the per-second price, but it also allocates proportionally more CPU, which shortens execution time. At the standard x86 rate of $0.0000166667 per GB-second, a function that runs in 3 seconds at 256MB costs about $0.0000125 per invocation in compute; if it would finish in 1 second at 512MB, that drops to about $0.0000083. The higher memory configuration is actually cheaper. Compute Optimizer surfaces these cases automatically.
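The arithmetic is easy to verify. A quick sketch, assuming the published x86 rate of $0.0000166667 per GB-second and ignoring the per-request charge:

```shell
# Per-invocation compute cost = memory (GB) x duration (s) x $/GB-second
lambda_cost() {
  awk -v mb="$1" -v sec="$2" \
    'BEGIN { printf "%.7f\n", (mb / 1024) * sec * 0.0000166667 }'
}

lambda_cost 256 3   # 0.75 GB-s per call -> ~$0.0000125
lambda_cost 512 1   # 0.50 GB-s per call -> ~$0.0000083
```

Even though 512MB bills at twice the per-second rate, the shorter duration means fewer GB-seconds overall.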
EBS Volume Recommendations
EBS right-sizing has two components: volume type and volume size. The most common recommendation is migrating gp2 volumes to gp3, which costs 20% less and provides better baseline performance.
# Get EBS volume recommendations
aws compute-optimizer get-ebs-volume-recommendations \
--query 'volumeRecommendations[*].{
Volume:volumeArn,
Finding:finding,
CurrentType:currentConfiguration.volumeType,
CurrentSize:currentConfiguration.volumeSize,
RecommendedType:volumeRecommendationOptions[0].configuration.volumeType,
RecommendedSize:volumeRecommendationOptions[0].configuration.volumeSize,
MonthlySavings:volumeRecommendationOptions[0].savingsOpportunity.estimatedMonthlySavings.value
}' \
--output table
# Count volumes by finding type
aws compute-optimizer get-ebs-volume-recommendations \
--query 'volumeRecommendations[*].finding' \
--output text | sort | uniq -c | sort -rn
For the gp2 to gp3 migration specifically, you can do it with no downtime on a running instance:
# Modify a gp2 volume to gp3 in-place (no restart required).
# gp3 defaults: 3000 IOPS (the gp2 baseline for volumes under 1TB) and 125 MB/s
aws ec2 modify-volume \
--volume-id vol-0123456789abcdef0 \
--volume-type gp3 \
--iops 3000 \
--throughput 125
# Check modification progress
aws ec2 describe-volumes-modifications \
--volume-ids vol-0123456789abcdef0 \
--query 'VolumesModifications[0].{State:ModificationState,Progress:Progress}'
gp3's baseline is 3,000 IOPS and 125 MB/s at any volume size. gp2's baseline is 3 IOPS per GB (with a 100 IOPS floor), so a 100GB gp2 volume only gets 300 IOPS before burst credits. For small volumes, gp3 is both cheaper and faster.
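As a sanity check on the baseline math (us-east-1 list prices assumed: $0.10/GB-month for gp2, $0.08/GB-month for gp3):

```shell
# gp2 baseline IOPS: 3 per GB, floored at 100, capped at 16000
gp2_baseline_iops() {
  awk -v gb="$1" \
    'BEGIN { i = gb * 3; if (i < 100) i = 100; if (i > 16000) i = 16000; print i }'
}

gp2_baseline_iops 100   # 300 -- vs a flat 3000 on gp3
awk 'BEGIN { printf "100GB monthly: gp2 $%.2f, gp3 $%.2f\n", 100 * 0.10, 100 * 0.08 }'
```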
ECS Fargate Recommendations
Fargate task CPU and memory are defined at launch — there’s no automatic scaling of resources within a task. Over-provisioned Fargate tasks waste money on every running container.
# Get ECS Fargate recommendations
aws compute-optimizer get-ecs-service-recommendations \
--query 'ecsServiceRecommendations[*].{
Service:serviceArn,
Finding:finding,
CurrentCPU:currentServiceConfiguration.cpu,
CurrentMemory:currentServiceConfiguration.memory,
RecommendedCPU:serviceRecommendationOptions[0].cpu,
RecommendedMemory:serviceRecommendationOptions[0].memory,
Savings:serviceRecommendationOptions[0].savingsOpportunity.estimatedMonthlySavings.value
}' \
--output table
Fargate pricing is per vCPU-hour and per GB-hour. A task defined as 1 vCPU / 2GB running around the clock costs about $36/month at us-east-1 on-demand rates. If Compute Optimizer recommends dropping to 0.5 vCPU / 1GB (the task consistently uses <25% CPU and <40% memory), that falls to roughly $18/month per running task, with no performance impact at the measured utilization levels.
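A minimal sketch of that calculation, assuming us-east-1 x86 on-demand rates ($0.04048 per vCPU-hour, $0.004445 per GB-hour):

```shell
# Monthly Fargate task cost = (vCPU rate + memory rate) x ~730 hours
fargate_monthly() {
  awk -v cpu="$1" -v gb="$2" \
    'BEGIN { printf "%.2f\n", (cpu * 0.04048 + gb * 0.004445) * 730 }'
}

fargate_monthly 1 2     # 1 vCPU / 2GB   -> ~$36/month
fargate_monthly 0.5 1   # 0.5 vCPU / 1GB -> ~$18/month
```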
Exporting Recommendations to S3
For large accounts or automated workflows, export recommendations to S3 for processing:
# Export all EC2 recommendations to S3
aws compute-optimizer export-ec2-instance-recommendations \
--s3-destination-config bucket=my-optimizer-exports,keyPrefix=ec2/ \
--file-format Csv \
--include-member-accounts
# Export Lambda recommendations
aws compute-optimizer export-lambda-function-recommendations \
--s3-destination-config bucket=my-optimizer-exports,keyPrefix=lambda/ \
--file-format Csv
# The export is async — check job status
aws compute-optimizer describe-recommendation-export-jobs \
--query 'recommendationExportJobs[*].{JobId:jobId,Status:status,ResourceType:resourceType}'
The CSV export includes all recommendation details plus estimated savings. Running this weekly and loading into a spreadsheet or BI tool gives a running view of potential savings and tracks which recommendations have been acted on over time.
Integration with Cost Explorer
Compute Optimizer and Cost Explorer show different views of the same problem. Cost Explorer shows your actual spend history and savings plan coverage. Compute Optimizer shows what you could save by right-sizing. The full picture requires both.
A simple workflow: run Compute Optimizer exports weekly, filter to instances with estimated savings greater than $20/month, review against Cost Explorer to confirm the cost pattern, then resize during the next maintenance window. Focus on the highest-savings instances first — the 20% of instances that represent 80% of waste.
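The filtering step can be sketched with standard tools. A hypothetical example: it assumes you have dumped instance IDs and estimated monthly savings as tab-separated text (for instance via `--output text` on the earlier `get-ec2-instance-recommendations` query); the sample rows below are made up.

```shell
# Keep rows whose savings column (field 2) exceed $20/month, biggest first
filter_savings() {
  awk -F'\t' '$2 > 20' | sort -k2 -rn
}

printf 'i-aaa\t35.10\ni-bbb\t4.20\ni-ccc\t120.00\n' | filter_savings
# -> i-ccc (120.00) first, then i-aaa (35.10); i-bbb is dropped
```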
For EC2 instances that can’t be resized (production databases, legacy applications), look at savings plans and reserved instances in Cost Explorer to at least reduce the per-hour cost of the existing size. The AWS Savings Plans vs Reserved Instances guide covers the commitment strategy for workloads you’re keeping at their current size. For the cost monitoring and budgeting side, AWS Cost Explorer and Budgets covers setting up spend alerts before they become surprises.