AWS PrivateLink: Private Connectivity Without NAT or VPN
The default path for a private EC2 instance to reach an AWS service like S3, Secrets Manager, or SSM is through a NAT gateway — $0.045/hour plus $0.045 per GB processed, with traffic taking an internet-routable path even though AWS keeps it on its backbone. PrivateLink offers the alternative: private IP connectivity to AWS services and your own services directly within your VPC, with no internet path involved, at a fraction of the NAT cost for high-volume endpoints.
This guide covers the two endpoint types (interface and gateway), when each applies, how to expose your own services via PrivateLink, DNS resolution inside VPCs, and the cost comparison that makes S3 and DynamoDB gateway endpoints a no-brainer for any production environment.
Gateway Endpoints vs Interface Endpoints
AWS has two types of VPC endpoints, and confusing them is the most common PrivateLink mistake.
Gateway endpoints are free. They add a route table entry in your subnet routing table that directs traffic for a service’s IP ranges to the endpoint instead of the internet gateway or NAT gateway. Gateway endpoints only support two services: S3 and DynamoDB. They don’t use private DNS or create ENIs — they’re just route table entries.
Interface endpoints (PrivateLink) create an Elastic Network Interface in your subnet with a private IP address. Traffic to the service routes through that ENI. They support almost every AWS service plus third-party PrivateLink services. They cost $0.01/hour per AZ plus $0.01 per GB.
The right mental model: gateway endpoints for S3 and DynamoDB (always use them — they’re free). Interface endpoints for everything else that needs to stay private.
Gateway Endpoints: S3 and DynamoDB
Create an S3 gateway endpoint in under a minute:
# Get your VPC ID and route table IDs
VPC_ID=vpc-0abc123
ROUTE_TABLES=$(aws ec2 describe-route-tables \
  --filters "Name=vpc-id,Values=$VPC_ID" \
  --query 'RouteTables[].RouteTableId' \
  --output text)

# Create S3 gateway endpoint
aws ec2 create-vpc-endpoint \
  --vpc-id $VPC_ID \
  --service-name com.amazonaws.us-east-1.s3 \
  --vpc-endpoint-type Gateway \
  --route-table-ids $ROUTE_TABLES

# Create DynamoDB gateway endpoint
aws ec2 create-vpc-endpoint \
  --vpc-id $VPC_ID \
  --service-name com.amazonaws.us-east-1.dynamodb \
  --vpc-endpoint-type Gateway \
  --route-table-ids $ROUTE_TABLES
After this, EC2 instances and Lambda functions in those subnets reach S3 and DynamoDB through AWS’s internal network without going through a NAT gateway. For a Lambda-heavy architecture calling DynamoDB frequently, the NAT gateway savings alone justify the five minutes of setup.
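The commands above hard-code us-east-1. Service names follow the pattern com.amazonaws.&lt;region&gt;.&lt;service&gt;, so a tiny helper keeps the same script portable across regions (the function name is illustrative, not an AWS tool):

```shell
# Build the regional service name for a VPC endpoint.
endpoint_service_name() {
  local region="$1" service="$2"
  echo "com.amazonaws.${region}.${service}"
}

endpoint_service_name us-west-2 s3        # com.amazonaws.us-west-2.s3
endpoint_service_name eu-west-1 dynamodb  # com.amazonaws.eu-west-1.dynamodb
```

Pass the result to --service-name and the rest of the create-vpc-endpoint invocation stays unchanged.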
Gateway endpoints also support endpoint policies — JSON documents that restrict what’s accessible through the endpoint:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": ["s3:GetObject", "s3:PutObject"],
    "Resource": "arn:aws:s3:::my-specific-bucket/*"
  }]
}
An endpoint policy that only allows access to specific buckets prevents credentials compromised on one instance from reaching buckets belonging to other teams.
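Attaching the policy is a separate call. A sketch that writes the document to a file, validates the JSON locally, and shows the attach command commented out since it requires credentials — the vpce-0abc123 endpoint ID is a placeholder:

```shell
# Write the endpoint policy from the section above to a temp file.
cat > /tmp/s3-endpoint-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": ["s3:GetObject", "s3:PutObject"],
    "Resource": "arn:aws:s3:::my-specific-bucket/*"
  }]
}
EOF

# Sanity-check the JSON before handing it to the API.
python3 -m json.tool /tmp/s3-endpoint-policy.json > /dev/null && echo "policy OK"

# Attach it to the gateway endpoint (placeholder endpoint ID):
# aws ec2 modify-vpc-endpoint \
#   --vpc-endpoint-id vpce-0abc123 \
#   --policy-document file:///tmp/s3-endpoint-policy.json
```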
Interface Endpoints for AWS Services
Interface endpoints cover the rest of the AWS service catalog. Common ones for production environments:
# Create endpoints for SSM Session Manager (three required)
for SERVICE in ssm ssmmessages ec2messages; do
  aws ec2 create-vpc-endpoint \
    --vpc-id $VPC_ID \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.$SERVICE \
    --subnet-ids subnet-private-1a subnet-private-1b \
    --security-group-ids sg-endpoints \
    --private-dns-enabled
done

# Secrets Manager endpoint
aws ec2 create-vpc-endpoint \
  --vpc-id $VPC_ID \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.secretsmanager \
  --subnet-ids subnet-private-1a subnet-private-1b \
  --security-group-ids sg-endpoints \
  --private-dns-enabled

# ECR endpoints (two required for container pulls)
for SERVICE in ecr.api ecr.dkr; do
  aws ec2 create-vpc-endpoint \
    --vpc-id $VPC_ID \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.$SERVICE \
    --subnet-ids subnet-private-1a subnet-private-1b \
    --security-group-ids sg-endpoints \
    --private-dns-enabled
done

# CloudWatch Logs endpoint (for log delivery without NAT)
aws ec2 create-vpc-endpoint \
  --vpc-id $VPC_ID \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.logs \
  --subnet-ids subnet-private-1a subnet-private-1b \
  --security-group-ids sg-endpoints \
  --private-dns-enabled
The security group on the endpoints controls which resources can use them. It should allow HTTPS (443) inbound from the security groups attached to your compute resources (EC2 instances, ECS tasks, Lambda functions in a VPC).
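Wiring that up looks like the following sketch — sg-endpoints and sg-app are placeholder IDs, and the DRY_RUN guard prints the command instead of calling AWS, so it runs without credentials:

```shell
# Allow HTTPS from the app tier's security group to the endpoint ENIs.
# DRY_RUN=1 echoes the command; set DRY_RUN=0 with real IDs and credentials.
DRY_RUN=1
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run aws ec2 authorize-security-group-ingress \
  --group-id sg-endpoints \
  --protocol tcp \
  --port 443 \
  --source-group sg-app
```

Referencing the app tier's security group as the source (rather than a CIDR) means new instances in that group get endpoint access automatically.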
DNS Resolution with Private DNS
The --private-dns-enabled flag is what makes interface endpoints transparent to application code. When enabled, the endpoint overrides the public DNS name for the service within your VPC. secretsmanager.us-east-1.amazonaws.com resolves to the endpoint’s private IP instead of the public service endpoint.
Your application code doesn’t change — it still calls secretsmanager.us-east-1.amazonaws.com, but within the VPC that hostname resolves to a private IP address. Traffic never leaves the AWS network.
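A quick way to confirm private DNS is working is to resolve the hostname from an instance and check whether the answer is an RFC 1918 address. A small classifier makes that check scriptable (illustrative helper, not an AWS tool):

```shell
# Classify an IPv4 address as RFC 1918 private or public.
is_private_ip() {
  case "$1" in
    10.*|192.168.*|172.1[6-9].*|172.2[0-9].*|172.3[01].*) echo private ;;
    *) echo public ;;
  esac
}

# From an instance in the VPC, you would feed it the resolver's answer:
#   is_private_ip "$(dig +short secretsmanager.us-east-1.amazonaws.com | head -1)"
is_private_ip 10.0.1.57     # private -> endpoint ENI, private DNS is working
is_private_ip 54.239.28.85  # public  -> still hitting the public endpoint
```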
Two VPC settings must be enabled for private DNS to work:
# Both must be true for private DNS on endpoints to function
aws ec2 modify-vpc-attribute \
  --vpc-id $VPC_ID \
  --enable-dns-support '{"Value": true}'

aws ec2 modify-vpc-attribute \
  --vpc-id $VPC_ID \
  --enable-dns-hostnames '{"Value": true}'
If you see connection timeouts from private instances to AWS services after creating endpoints, check these two settings first. They're the most common reason an endpoint that looks correctly configured still fails to resolve.
Creating Your Own PrivateLink Service
PrivateLink isn’t just for consuming AWS services. You can expose your own services privately to other VPCs or AWS accounts without VPC peering or Transit Gateway. The setup: put your service behind a Network Load Balancer, create a VPC endpoint service pointing to the NLB, and share it.
# Your service must be behind an NLB
NLB_ARN="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-service-nlb/abc123"

# Create the endpoint service
ENDPOINT_SERVICE=$(aws ec2 create-vpc-endpoint-service-configuration \
  --network-load-balancer-arns $NLB_ARN \
  --acceptance-required \
  --query 'ServiceConfiguration.ServiceId' \
  --output text)

# Allow specific accounts to create endpoints to your service
aws ec2 modify-vpc-endpoint-service-permissions \
  --service-id $ENDPOINT_SERVICE \
  --add-allowed-principals '["arn:aws:iam::999999999999:root"]'
The consumer account creates an interface endpoint pointing to your service name (com.amazonaws.vpce.us-east-1.vpce-svc-xxxx). With --acceptance-required set, each endpoint connection request must be approved by the service owner. Use --no-acceptance-required for trusted internal consumers.
This pattern works for:
- Internal platform services that multiple teams consume (authentication service, payment processor, internal API)
- SaaS products where you want to offer customers private connectivity without VPC peering (Datadog, Snowflake, and Confluent all use this)
- Microservices that need to cross account boundaries without Transit Gateway overhead
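On the consumer side, the whole setup is a single create-vpc-endpoint call against the shared service name. A sketch — the vpce-svc ID, VPC, subnet, and security group IDs are all placeholders, and the DRY_RUN guard prints the command rather than calling AWS:

```shell
# DRY_RUN=1 echoes the command; set DRY_RUN=0 with real IDs and credentials.
DRY_RUN=1
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# Consumer account: create an interface endpoint to the provider's service.
run aws ec2 create-vpc-endpoint \
  --vpc-id vpc-consumer123 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.vpce.us-east-1.vpce-svc-0example \
  --subnet-ids subnet-consumer-1a \
  --security-group-ids sg-consumer-endpoints
```

If the provider kept acceptance required, the connection sits in pendingAcceptance until the service owner approves it.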
Cost Comparison: PrivateLink vs NAT Gateway
The cost math matters for choosing between interface endpoints and NAT gateway:
NAT Gateway: $0.045/hour ($32.40/month) + $0.045/GB processed. For an ECS cluster pulling container images from ECR (say 200 GB/month), that’s $32.40 + $9 = $41.40/month per AZ.
ECR Interface Endpoints: Two endpoints (ecr.api + ecr.dkr) × 2 AZs × $0.01/hour = $28.80/month, plus $0.01/GB × 200 GB = $2, for $30.80/month total. Plus you need the S3 gateway endpoint (free) since ECR layers are stored in S3.
At 200 GB/month the endpoints save roughly 25% versus a single-AZ NAT gateway, and the gap widens with volume because endpoint data processing costs $0.01/GB versus $0.045/GB through NAT. If you keep the NAT gateway anyway for general internet egress, the endpoints only pay for themselves once the per-GB savings cover their fixed hourly cost — around 800 GB/month of ECR traffic at these rates.
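The arithmetic generalizes to any monthly volume. A quick calculator using the us-east-1 list prices and the same assumptions (720-hour month, two ECR endpoints in two AZs, one NAT gateway; the function name is illustrative):

```shell
# Monthly cost of NAT vs interface endpoints for a given GB/month of traffic.
# Rates: NAT $0.045/hr + $0.045/GB; endpoint ENI $0.01/hr per AZ + $0.01/GB.
compare_costs() {
  local gb="$1"
  awk -v gb="$gb" 'BEGIN {
    nat = 0.045 * 720 + 0.045 * gb      # one NAT gateway, hourly + per-GB
    ep  = 2 * 2 * 0.01 * 720 + 0.01 * gb  # 2 endpoints x 2 AZs, hourly + per-GB
    printf "%d GB/month: NAT $%.2f vs endpoints $%.2f\n", gb, nat, ep
  }'
}

compare_costs 200   # 200 GB/month: NAT $41.40 vs endpoints $30.80
compare_costs 1000  # 1000 GB/month: NAT $77.40 vs endpoints $38.80
```

Plug in your own CloudWatch metrics for NAT bytes processed to see where your workloads land.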
A practical approach: always deploy gateway endpoints (S3, DynamoDB) since they’re free. Deploy interface endpoints for services used heavily by private resources (SSM, Secrets Manager, ECR for ECS/EKS). Keep NAT gateway for internet access to external services.
The endpoint approach also removes the NAT gateway as a single point of failure (in single-AZ setups) and eliminates the attack surface of internet-routable paths for internal traffic.
Endpoint Policies for Access Control
Interface endpoints support the same endpoint policies as gateway endpoints. Scope what’s accessible through the endpoint to reduce blast radius:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "AWS": "arn:aws:iam::123456789012:role/AppRole"
    },
    "Action": [
      "secretsmanager:GetSecretValue",
      "secretsmanager:DescribeSecret"
    ],
    "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/*"
  }]
}
This endpoint policy restricts the Secrets Manager endpoint to read-only access on production secrets from a specific IAM role. Even if an attacker compromises an instance in the VPC, they can’t use the endpoint to create, delete, or modify secrets.
For EKS workloads running in private subnets, see the EKS networking guide for how pods interact with VPC endpoints. The SSM Session Manager guide covers the three endpoints required to enable Session Manager for private instances without NAT. And the IAM roles and policies guide covers how to combine endpoint policies with IAM policies for defense in depth.
The cost savings from gateway endpoints and the security improvement from interface endpoints both justify deployment in any environment beyond a basic proof-of-concept. Add them to your VPC Terraform module so every new VPC gets them automatically.