AWS App Runner: Deploy Containerized Apps Without Managing Infrastructure

Written by Bits Lovers

AWS App Runner launched in 2021 to fill a real gap: you have a containerized web app or API, you want it running on AWS, and you don’t want to think about clusters, task definitions, load balancers, or auto-scaling groups. You give App Runner a container image or a GitHub repository, configure a few settings, and it handles everything — provisioning, scaling, TLS termination, health checks, and traffic routing. The resulting URL is live in minutes, not hours.

This guide covers how App Runner actually works, how to deploy from both container images and source code, VPC connectivity for private resources, auto-scaling behavior, and where App Runner fits (and doesn’t fit) compared to ECS Fargate, Lambda, and Elastic Beanstalk.

How App Runner Works

App Runner runs your workload in a fully managed compute layer. You never see the underlying infrastructure — no EC2 instances to choose, no ECS clusters to configure, no Kubernetes nodes to manage.

Behind the scenes, App Runner uses a shared compute fleet. Your service gets isolated containers with dedicated CPU and memory. Traffic hits an App Runner-managed load balancer, which routes to your container instances. Scaling is automatic based on concurrency — App Runner adds instances when request queue depth grows and removes them when traffic drops.

Two source types: container images and source code.

Container image source: App Runner pulls from Amazon ECR, private or public (ECR Public). Docker Hub and other third-party registries aren't supported as image sources; mirror such images into ECR first. You bring the image; App Runner runs it. This is the recommended path for production workloads, since you control exactly what's in the image.

Source code: App Runner connects to a GitHub repository via an App Runner connection, detects your runtime (Python, Node.js, Java, Go, .NET, Ruby, PHP), builds the image using a managed build environment, and deploys it. Useful for simpler workloads where you don’t want to maintain a Dockerfile.

Ports: App Runner expects your container to listen on a single port (default 8080). All traffic comes in on HTTPS 443 — App Runner handles TLS with an AWS-managed certificate. You can’t choose HTTP only.
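A minimal sketch of an app that fits that contract: one plain-HTTP port plus a cheap health path. The PORT environment variable is an assumed convention here (for image sources, the port App Runner routes to is whatever you set in ImageConfiguration.Port):

```python
# Minimal app matching App Runner's contract: listen on one port, serve HTTP.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Keep the health endpoint cheap and dependency-free.
        body = b"ok" if self.path == "/health" else b"hello from app runner"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # silence default per-request logging
        pass

def main():
    # Default matches App Runner's default port of 8080.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```

Note that TLS is not the container's job: App Runner terminates HTTPS in front of this plain-HTTP listener.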

Deploying from a Container Image

# Create an App Runner service from an ECR image
aws apprunner create-service \
  --service-name my-api \
  --source-configuration '{
    "ImageRepository": {
      "ImageIdentifier": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-api:latest",
      "ImageConfiguration": {
        "Port": "8080",
        "RuntimeEnvironmentVariables": {
          "DB_HOST": "rds.endpoint.us-east-1.rds.amazonaws.com",
          "ENV": "production"
        }
      },
      "ImageRepositoryType": "ECR"
    },
    "AutoDeploymentsEnabled": true,
    "AuthenticationConfiguration": {
      "AccessRoleArn": "arn:aws:iam::123456789012:role/AppRunnerECRRole"
    }
  }' \
  --instance-configuration '{
    "Cpu": "1 vCPU",
    "Memory": "2 GB",
    "InstanceRoleArn": "arn:aws:iam::123456789012:role/AppRunnerInstanceRole"
  }' \
  --health-check-configuration '{
    "Protocol": "HTTP",
    "Path": "/health",
    "Interval": 10,
    "Timeout": 5,
    "HealthyThreshold": 1,
    "UnhealthyThreshold": 5
  }'

AutoDeploymentsEnabled: true is the key setting that makes ECR image sources practical. When you push a new image to the configured tag, App Runner detects the digest change and automatically deploys it. For private ECR repositories, the AccessRoleArn must have ecr:GetAuthorizationToken, ecr:BatchGetImage, and ecr:GetDownloadUrlForLayer permissions.

The InstanceRoleArn is the IAM role your running container assumes. If your API calls DynamoDB, S3, or Secrets Manager, those permissions go here — same pattern as ECS task roles.
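The two roles are easy to conflate, so here is a sketch of their trust and permissions policies. The service principals (build.apprunner.amazonaws.com for the access role, tasks.apprunner.amazonaws.com for the instance role) come from the App Runner documentation; the JSON below is a minimal illustration, not a hardened policy:

```python
import json

def trust_policy(service_principal: str) -> dict:
    """Trust policy allowing an App Runner service principal to assume the role."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": service_principal},
            "Action": "sts:AssumeRole",
        }],
    }

# Access role: assumed by App Runner's build/deploy plane to pull from private ECR.
access_trust = trust_policy("build.apprunner.amazonaws.com")
ecr_pull_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ecr:GetAuthorizationToken",
            "ecr:BatchGetImage",
            "ecr:GetDownloadUrlForLayer",
        ],
        "Resource": "*",  # GetAuthorizationToken doesn't support resource scoping
    }],
}

# Instance role: assumed by your running container, like an ECS task role.
# Attach DynamoDB/S3/Secrets Manager permissions to this one.
instance_trust = trust_policy("tasks.apprunner.amazonaws.com")

print(json.dumps(access_trust, indent=2))
```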

# Monitor deployment status
aws apprunner describe-service \
  --service-arn arn:aws:apprunner:us-east-1:123456789012:service/my-api/abc123 \
  --query 'Service.{Status:Status,URL:ServiceUrl}'

# Poll until the service is running (the App Runner CLI doesn't ship a waiter)
until [ "$(aws apprunner describe-service \
  --service-arn arn:aws:apprunner:us-east-1:123456789012:service/my-api/abc123 \
  --query 'Service.Status' --output text)" = "RUNNING" ]; do
  sleep 15
done

The service URL (something like abc123.us-east-1.awsapprunner.com) is ready as soon as status reaches RUNNING.

Deploying from Source Code

# First, create a GitHub connection (one-time setup, requires browser auth)
aws apprunner create-connection \
  --connection-name github-connection \
  --provider-type GITHUB

# After completing OAuth in the console, get the connection ARN
CONNECTION_ARN=$(aws apprunner list-connections \
  --query 'ConnectionSummaryList[?ConnectionName==`github-connection`].ConnectionArn' \
  --output text)

# Create service from source code
aws apprunner create-service \
  --service-name my-python-api \
  --source-configuration "{
    \"CodeRepository\": {
      \"RepositoryUrl\": \"https://github.com/myorg/my-api\",
      \"SourceCodeVersion\": {
        \"Type\": \"BRANCH\",
        \"Value\": \"main\"
      },
      \"CodeConfiguration\": {
        \"ConfigurationSource\": \"API\",
        \"CodeConfigurationValues\": {
          \"Runtime\": \"PYTHON_3\",
          \"BuildCommand\": \"pip install -r requirements.txt\",
          \"StartCommand\": \"python app.py\",
          \"Port\": \"8080\",
          \"RuntimeEnvironmentVariables\": {
            \"ENV\": \"production\"
          }
        }
      }
    },
    \"AutoDeploymentsEnabled\": true,
    \"AuthenticationConfiguration\": {
      \"ConnectionArn\": \"$CONNECTION_ARN\"
    }
  }" \
  --instance-configuration '{"Cpu": "1 vCPU", "Memory": "2 GB"}'

ConfigurationSource: API means the build and start commands come from this API call. Alternatively, ConfigurationSource: REPOSITORY reads from an apprunner.yaml file at the root of your repo — useful for keeping the deployment config version-controlled with the code:

# apprunner.yaml
version: 1.0
runtime: python3
build:
  commands:
    build:
      - pip install -r requirements.txt
run:
  runtime-version: 3.12
  command: python app.py
  network:
    port: 8080
  env:
    - name: ENV
      value: production

The GitHub connection requires one manual OAuth step in the AWS console — there’s no CLI-only path for the initial authorization. Once the connection exists, everything else is automatable.

Auto-Scaling Configuration

App Runner’s scaling model is concurrency-based, not CPU- or memory-based. You configure a target for maximum concurrent requests per instance:

# Create an auto-scaling configuration
AUTOSCALING_ARN=$(aws apprunner create-auto-scaling-configuration \
  --auto-scaling-configuration-name my-api-scaling \
  --max-concurrency 100 \
  --min-size 1 \
  --max-size 25 \
  --query 'AutoScalingConfiguration.AutoScalingConfigurationArn' \
  --output text)

# Associate with service
aws apprunner update-service \
  --service-arn arn:aws:apprunner:us-east-1:123456789012:service/my-api/abc123 \
  --auto-scaling-configuration-arn $AUTOSCALING_ARN

MaxConcurrency: 100 means App Runner starts a new instance when an existing one is handling 100 simultaneous requests. Set this based on how many concurrent connections your application can handle without degrading. A CPU-bound service processing images might handle 10 concurrent requests well; a mostly-idle API waiting on database queries might handle 500.

MinSize: 1 is also the floor. App Runner doesn’t scale to zero: the minimum allowed value for MinSize is 1, so at least one instance stays provisioned at all times. When traffic drops, idle instances are paused rather than terminated; a paused instance keeps its memory provisioned (billed at the lower memory-only rate) and resumes quickly when a request arrives, avoiding Lambda-style cold starts. If a workload genuinely needs scale-to-zero, that’s a signal to consider Lambda instead.

MaxSize: 25 caps the instance count and therefore the worst-case cost. The default auto-scaling configuration also caps at 25 instances; set the value deliberately, because the cap is what stands between a traffic spike and a surprise bill.
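Choosing these numbers is easier with Little's law: concurrent requests ≈ arrival rate × average latency. A back-of-the-envelope sizing sketch, with illustrative numbers:

```python
import math

def required_instances(req_per_sec: float, avg_latency_sec: float,
                       max_concurrency: int,
                       min_size: int = 1, max_size: int = 25) -> int:
    """Estimate steady-state instance count via Little's law:
    concurrent requests ~= arrival rate * average latency."""
    concurrent = req_per_sec * avg_latency_sec
    needed = math.ceil(concurrent / max_concurrency) if concurrent > 0 else min_size
    # Clamp to the auto-scaling configuration's bounds.
    return max(min_size, min(needed, max_size))

# 2000 req/s at 200 ms average latency = ~400 concurrent requests.
# With MaxConcurrency 100, that is about 4 instances at steady state.
print(required_instances(2000, 0.2, 100))
```

The same arithmetic also shows when MaxSize will throttle you: a spike whose implied instance count exceeds the cap queues requests instead of scaling further.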

VPC Connector for Private Resources

By default, App Runner runs in AWS-managed VPCs with no access to your private resources (RDS, ElastiCache, internal services). A VPC connector gives App Runner egress into your VPC:

# Create VPC connector
CONNECTOR_ARN=$(aws apprunner create-vpc-connector \
  --vpc-connector-name my-api-connector \
  --subnets subnet-private-1a subnet-private-1b \
  --security-groups sg-apprunner-egress \
  --query 'VpcConnector.VpcConnectorArn' \
  --output text)

# Attach to service
aws apprunner update-service \
  --service-arn arn:aws:apprunner:us-east-1:123456789012:service/my-api/abc123 \
  --network-configuration "{
    \"EgressConfiguration\": {
      \"EgressType\": \"VPC\",
      \"VpcConnectorArn\": \"$CONNECTOR_ARN\"
    }
  }"

# Security group on RDS: allow port 5432 from App Runner's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-rds-database \
  --protocol tcp \
  --port 5432 \
  --source-group sg-apprunner-egress

The security group on the VPC connector controls outbound traffic from App Runner to your VPC. The security group on your RDS instance (or other resource) controls inbound — allow the App Runner SG as the source. Standard VPC security group rules apply; nothing special about App Runner here.

Note that the VPC connector only provides egress (outbound) connectivity from App Runner to your VPC. Inbound traffic to App Runner still comes through the App Runner load balancer, not through your VPC.
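When connector traffic doesn't arrive, a raw TCP reachability check run from inside the container separates security-group problems from application errors. A small sketch (the RDS hostname and port in the comment are placeholders):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """TCP-level reachability check: True if a connection can be established.
    A refusal or timeout usually points at security groups or routing,
    not at the application layer."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. can_reach("rds.endpoint.us-east-1.rds.amazonaws.com", 5432)
```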

Custom Domains

# Associate a custom domain (you must own the DNS)
aws apprunner associate-custom-domain \
  --service-arn arn:aws:apprunner:us-east-1:123456789012:service/my-api/abc123 \
  --domain-name api.example.com \
  --no-enable-www-subdomain

# Get the CNAME records to add to your DNS
aws apprunner describe-custom-domains \
  --service-arn arn:aws:apprunner:us-east-1:123456789012:service/my-api/abc123 \
  --query 'CustomDomains[].CertificateValidationRecords[]'

App Runner provisions an ACM certificate for the domain automatically. After the association, you get CNAME records to add to your DNS for certificate validation. Once the certificate validates, HTTPS traffic to your domain routes to the App Runner service. No certificate management, no load balancer configuration.
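If the DNS zone lives in Route 53, those validation records can be turned into a change batch mechanically. A sketch; the record in the example is a placeholder in the shape describe-custom-domains returns:

```python
import json

def change_batch(validation_records: list[dict]) -> dict:
    """Build a Route 53 UPSERT change batch from App Runner's
    CertificateValidationRecords (each record has Name, Type, Value)."""
    return {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": rec["Name"],
                    "Type": rec["Type"],  # CNAME for ACM validation
                    "TTL": 300,
                    "ResourceRecords": [{"Value": rec["Value"]}],
                },
            }
            for rec in validation_records
        ]
    }

# Placeholder record shaped like describe-custom-domains output:
records = [{"Name": "_abc.api.example.com.", "Type": "CNAME",
            "Value": "_xyz.acm-validations.aws."}]
print(json.dumps(change_batch(records), indent=2))
```

Write the output to a file and apply it with aws route53 change-resource-record-sets --hosted-zone-id YOUR_ZONE_ID --change-batch file://batch.json (zone ID is yours to fill in).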

Observability

App Runner sends logs and metrics to CloudWatch automatically. No agent configuration needed:

# View application logs
aws logs filter-log-events \
  --log-group-name /aws/apprunner/my-api/abc123/application \
  --start-time $(date -d '1 hour ago' +%s)000 \
  --filter-pattern "ERROR"

# App Runner service logs (deployment, scaling events)
aws logs filter-log-events \
  --log-group-name /aws/apprunner/my-api/abc123/service \
  --start-time $(date -d '30 minutes ago' +%s)000

# Key CloudWatch metrics for App Runner
aws cloudwatch get-metric-statistics \
  --namespace AWS/AppRunner \
  --metric-name RequestLatency \
  --dimensions Name=ServiceName,Value=my-api \
  --extended-statistics p99 \
  --period 300 \
  --start-time $(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ) \
  --end-time $(date -u +%Y-%m-%dT%H:%M:%SZ)

The metrics that matter most: RequestLatency (P99 is where problems hide), ActiveInstances (watch for unexpected scale-out), 2xxStatusResponses and 5xxStatusResponses (set alarms on the 5xx count), and Concurrency (how close you are to triggering scale-out).
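As an example of acting on those metrics, here is a sketch of the parameters for a 5xx alarm via boto3's put_metric_alarm. The alarm name, threshold, and SNS topic are illustrative choices, not prescriptions:

```python
def five_xx_alarm_params(service_name: str, sns_topic_arn: str) -> dict:
    """Parameters for cloudwatch.put_metric_alarm(): fire when the service
    returns more than 10 5xx responses in each of two 5-minute periods."""
    return {
        "AlarmName": f"{service_name}-5xx",
        "Namespace": "AWS/AppRunner",
        "MetricName": "5xxStatusResponses",
        "Dimensions": [{"Name": "ServiceName", "Value": service_name}],
        "Statistic": "Sum",
        "Period": 300,
        "EvaluationPeriods": 2,
        "Threshold": 10,
        "ComparisonOperator": "GreaterThanThreshold",
        "TreatMissingData": "notBreaching",  # no traffic should not mean alarm
        "AlarmActions": [sns_topic_arn],
    }

# With boto3:
# boto3.client("cloudwatch").put_metric_alarm(
#     **five_xx_alarm_params("my-api", "arn:aws:sns:us-east-1:123456789012:alerts"))
```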

Enable X-Ray tracing to get distributed traces through your application:

aws apprunner update-service \
  --service-arn arn:aws:apprunner:us-east-1:123456789012:service/my-api/abc123 \
  --observability-configuration '{
    "ObservabilityEnabled": true,
    "ObservabilityConfigurationArn": "arn:aws:apprunner:us-east-1:123456789012:observabilityconfiguration/DefaultConfiguration/1/abc"
  }'

X-Ray integration requires instrumenting your application code; App Runner doesn’t instrument automatically. For App Runner, AWS’s documented path is the AWS Distro for OpenTelemetry (ADOT) SDK configured to export traces to X-Ray.

App Runner vs ECS Fargate vs Lambda

The choice comes down to what you’re willing to trade:

App Runner: Zero cluster management, fastest time to production, concurrency-based scaling. Trade-off: limited configuration control, higher per-unit cost than Fargate at scale, no support for long-running background jobs, no EFS mounting, no GPU instances.

ECS Fargate: Full task definition control, lower cost at scale, supports every ECS feature (service discovery, EFS, GPU, complex networking). Trade-off: you manage the ECS service, load balancer, target groups, and auto-scaling policies separately.

Lambda: Best for event-driven, short-duration workloads. Sub-second billing granularity is unbeatable for infrequently invoked functions. Trade-off: 15-minute execution limit, cold starts, no persistent connections, payload size limits. For a web API that receives sustained traffic, Lambda cold starts can be a constant irritant that App Runner avoids entirely.

App Runner makes sense when: your team has a container and wants it running on the internet within 10 minutes, you don’t need the operational surface area of ECS, and the workload is a web API or frontend service (not a batch job or event processor).

Pricing

App Runner bills per instance in two states, active (vCPU plus memory) and provisioned-but-idle (memory only), plus build minutes for source code deployments.

For a 1 vCPU / 2 GB instance in us-east-1: $0.064 per vCPU-hour plus $0.007 per GB-hour, roughly $0.078/hour while active. A provisioned-but-idle instance is billed for memory only: 2 GB × $0.007 = $0.014/hour, about 18% of the active rate.

A service running MinSize 1 with one instance active 24/7 at 1 vCPU / 2 GB: $0.078 × 720 hours ≈ $56/month. That’s more than running a t3.small EC2 instance ($15.18/month) but includes the managed load balancer, TLS, auto-scaling, and zero operational overhead.

At higher scale, say 10 active instances, the cost gap between App Runner and ECS Fargate grows. Fargate at 10 × 1 vCPU / 2 GB costs roughly $355/month in compute ($0.04048 per vCPU-hour plus $0.004445 per GB-hour in us-east-1); App Runner at the same size costs roughly $560/month. The convenience premium is real.
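The comparison is simple enough to keep as arithmetic. A sketch assuming us-east-1 rates of $0.064/vCPU-hour and $0.007/GB-hour for App Runner and $0.04048/vCPU-hour and $0.004445/GB-hour for Fargate (these drift; check the current pricing pages):

```python
HOURS = 720  # hours in a 30-day month

def apprunner_monthly(vcpu: float, gb: float, active_instances: int) -> float:
    """Monthly cost of always-active App Runner instances (compute + memory)."""
    return active_instances * (vcpu * 0.064 + gb * 0.007) * HOURS

def fargate_monthly(vcpu: float, gb: float, tasks: int) -> float:
    """Monthly compute cost of equivalent Fargate tasks."""
    return tasks * (vcpu * 0.04048 + gb * 0.004445) * HOURS

print(round(apprunner_monthly(1, 2, 1), 2))   # one always-on 1 vCPU / 2 GB instance
print(round(apprunner_monthly(1, 2, 10), 2))  # ten instances
print(round(fargate_monthly(1, 2, 10), 2))    # same footprint on Fargate
```

Neither figure includes the ALB you would run in front of Fargate, which narrows the gap further for small services.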

For new services, prototypes, internal APIs, and teams without dedicated DevOps support, App Runner’s pricing is entirely reasonable. For high-throughput production services with stable load, Fargate is the better value. The ECS Fargate guide covers when to make that transition and how to migrate.

The Lambda cold starts guide covers the tradeoffs between Lambda and always-warm compute like App Runner in detail. If you’re building on top of a multi-VPC architecture, the VPC design patterns guide explains how VPC connectors fit into hub-and-spoke topologies.
