Amazon EFS vs EBS vs S3: Choosing the Right AWS Storage

Written by Bits Lovers

Three AWS storage services cover most production workloads, and the wrong choice is expensive. EBS costs $0.08/GB-month for gp3, EFS costs $0.30/GB-month in standard storage, and S3 costs $0.023/GB-month. Pick EFS for a workload that only needs single-instance block storage and you’re paying 3.75x the correct price. Pick S3 for a database and your latency numbers will end your career.

The services solve different problems at different layers of the stack. EBS is a network-attached block device that looks like a local disk to one instance at a time. Mount it, format it, run a filesystem on it — the OS has no idea it’s not a physical drive. EFS runs NFS: you can mount the same file system from 10,000 instances simultaneously across multiple AZs, and they all see the same directory tree. S3 is a different animal entirely — it’s an HTTP API for storing and retrieving named objects, not a filesystem. You don’t mount S3 and navigate directories; you call GetObject and PutObject.

Amazon EBS (Elastic Block Store)

EBS is what you reach for when an application needs a local disk. Databases (MySQL, PostgreSQL, MongoDB), Elasticsearch indexes, and any application that uses the local filesystem all need block storage. The key constraint: a gp3 or io2 volume attaches to one EC2 instance at a time. Multi-attach is available on io2 volumes but requires cluster-aware filesystems — not a transparent upgrade from single-attach.

Volume types for 2026:

| Type | IOPS | Throughput | Cost | Use Case |
|---|---|---|---|---|
| gp3 | 3,000–16,000 | 125–1,000 MB/s | $0.08/GB + $0.005/IOPS | Default choice |
| io2 Block Express | up to 256,000 | up to 4,000 MB/s | $0.125/GB + $0.065/IOPS | High-perf databases |
| st1 | up to 500 | up to 500 MB/s | $0.045/GB | Sequential throughput |
| sc1 | up to 250 | up to 250 MB/s | $0.015/GB | Cold data |

gp3 is the default for almost everything. The baseline 3,000 IOPS and 125 MB/s throughput is free — you only pay extra if you provision above baseline. A 100GB gp3 volume at baseline costs $8/month; the same 100GB on gp2 costs $10/month while delivering only 300 IOPS at baseline (gp2 provisions 3 IOPS per GB).
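The arithmetic is worth doing once. A quick sketch, assuming the $0.08/GB-month gp3 price quoted above and gp2's $0.10/GB-month list price (us-east-1):

```shell
# Monthly storage cost for a 100 GB volume at baseline performance
SIZE_GB=100
GP3_COST=$(awk "BEGIN { printf \"%.2f\", $SIZE_GB * 0.08 }")  # gp3: $0.08/GB-month, 3,000 IOPS included
GP2_COST=$(awk "BEGIN { printf \"%.2f\", $SIZE_GB * 0.10 }")  # gp2: $0.10/GB-month, only 300 IOPS at this size
echo "gp3: \$${GP3_COST}/month   gp2: \$${GP2_COST}/month"
```

The gap widens as volumes grow, which is why the gp2-to-gp3 migration shown below is usually a free win.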

# Create an optimized gp3 volume
aws ec2 create-volume \
  --availability-zone us-east-1a \
  --volume-type gp3 \
  --size 100 \
  --iops 6000 \
  --throughput 500 \
  --encrypted \
  --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=db-primary},{Key=Team,Value=platform}]'

# Migrate existing gp2 volume to gp3 in-place (no downtime)
aws ec2 modify-volume \
  --volume-id vol-0123456789abcdef0 \
  --volume-type gp3 \
  --iops 3000 \
  --throughput 125

# Monitor modification
aws ec2 describe-volumes-modifications \
  --volume-ids vol-0123456789abcdef0 \
  --query 'VolumesModifications[0].{State:ModificationState,Progress:Progress}'

EBS on EKS uses the EBS CSI driver with a StorageClass:

# gp3 StorageClass for EKS
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer   # Ensures volume created in same AZ as pod
reclaimPolicy: Delete
parameters:
  type: gp3
  iops: "3000"
  throughput: "125"
  encrypted: "true"
  kmsKeyId: arn:aws:kms:us-east-1:123456789012:key/your-kms-key-id

The WaitForFirstConsumer binding mode is critical on EKS: it delays volume creation until a pod is scheduled, ensuring the EBS volume is created in the same AZ as the node. Without it, a pod on us-east-1b can end up with a volume in us-east-1a, causing the pod to fail to start.
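As a sketch of how that plays out, here is a hypothetical PVC against the gp3 StorageClass above, plus a pod that consumes it (names and image are illustrative); the EBS volume is created only once the pod schedules, in that node's AZ:

```yaml
# Hypothetical PVC + pod using the gp3 StorageClass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce   # EBS: one node at a time
  storageClassName: gp3
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
    - name: postgres
      image: postgres:16
      env:
        - name: POSTGRES_PASSWORD
          value: example   # illustration only; use a Secret in practice
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: db-data
```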

Amazon EFS (Elastic File System)

EFS is NFS on demand. You mount it from multiple instances simultaneously, it scales storage automatically (no need to provision capacity), and you pay for what you use. The standard storage price is $0.30/GB-month — 3.75x more expensive than gp3 EBS per GB. That premium is justified when you actually need concurrent multi-instance access; it’s waste when you don’t.

Storage classes within EFS:

| Class | Price | Access |
|---|---|---|
| Standard | $0.30/GB-month | Frequent |
| Standard-IA | $0.025/GB-month + $0.01/GB data access fee | Infrequent |
| One Zone | $0.16/GB-month | Single AZ, frequent |
| One Zone-IA | $0.01/GB-month | Single AZ, infrequent |

Lifecycle management moves files to Infrequent Access automatically after N days of no access. For a file system with a mix of hot configuration files and cold log archives, lifecycle policies can reduce the effective cost to well under $0.10/GB.
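For a rough sense of the savings, take a hypothetical 100GB file system where 20% of the data stays hot in Standard and 80% has aged into Standard-IA (access fees excluded):

```shell
# Blended $/GB-month for a 20% Standard / 80% Standard-IA split
HOT=0.2
BLENDED=$(awk "BEGIN { printf \"%.3f\", $HOT * 0.30 + (1 - $HOT) * 0.025 }")
echo "Effective price: \$${BLENDED}/GB-month"
```

At that split the blended rate lands around $0.08/GB-month — comparable to gp3 EBS, while keeping EFS's shared access.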

# Create an EFS file system with lifecycle management
aws efs create-file-system \
  --performance-mode generalPurpose \
  --throughput-mode elastic \
  --encrypted \
  --tags Key=Name,Value=app-shared-storage

EFS_ID=$(aws efs describe-file-systems \
  --query 'FileSystems[?Tags[?Key==`Name` && Value==`app-shared-storage`]].FileSystemId' \
  --output text)

# Enable lifecycle: move to IA after 30 days, move back on access
aws efs put-lifecycle-configuration \
  --file-system-id $EFS_ID \
  --lifecycle-policies \
    TransitionToIA=AFTER_30_DAYS \
    TransitionToPrimaryStorageClass=AFTER_1_ACCESS

# Create a mount target in each subnet (one per AZ)
for subnet in subnet-aaa111 subnet-bbb222 subnet-ccc333; do
  aws efs create-mount-target \
    --file-system-id $EFS_ID \
    --subnet-id $subnet \
    --security-groups sg-efs-access
done
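Once the mount targets exist, instances in those subnets can mount the file system with the EFS mount helper from the amazon-efs-utils package. A sketch /etc/fstab entry (the file system ID is a placeholder — substitute your own):

```
fs-0123456789abcdef0:/  /mnt/efs  efs  _netdev,tls  0  0
```

The `tls` option enables encryption in transit via the mount helper's stunnel wrapper.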

Throughput modes matter. Bursting throughput scales with file system size — the baseline is 50 KiB/s per GiB stored, so a 100GB file system gets about 5 MB/s baseline and can burst to 100 MB/s while it has burst credits. Elastic throughput (the newer mode) automatically scales up to 3 GB/s of reads and 1 GB/s of writes regardless of size. For small file systems that occasionally need high throughput, Elastic mode is the right choice. Provisioned throughput gives a fixed MB/s for predictable workloads.
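Bursting mode's baseline follows from AWS's documented rate of 50 KiB/s per GiB stored, so it is easy to sanity-check for any size:

```shell
# Bursting-mode baseline throughput for a 100 GiB file system
SIZE_GIB=100
BASELINE=$(awk "BEGIN { printf \"%.1f\", $SIZE_GIB * 50 / 1024 }")  # 50 KiB/s per GiB
echo "${BASELINE} MiB/s baseline"
```

Roughly 5 MiB/s — enough for config files and small shared state, far too little for sustained data processing, which is why undersized Bursting-mode file systems stall once credits run out.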

EFS on EKS uses the EFS CSI driver with ReadWriteMany access:

# StorageClass for EFS
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap    # Uses EFS Access Points for isolation
  fileSystemId: fs-0123456789abcdef0
  directoryPerms: "700"
  basePath: "/"
  uid: "1000"
  gid: "1000"

---
# PVC with ReadWriteMany — works across multiple pods/nodes
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-content
  namespace: cms
spec:
  accessModes:
    - ReadWriteMany   # Multiple pods on different nodes can mount this
  storageClassName: efs-sc
  resources:
    requests:
      storage: 10Gi

ReadWriteMany is the key EFS differentiator in Kubernetes. EBS only supports ReadWriteOnce (single node). If you need a shared volume across multiple pods — WordPress uploads, ML model weights loaded by multiple inference pods, shared configuration — EFS is the answer.
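A sketch of what that enables, assuming the shared-content PVC above — a hypothetical Deployment whose three replicas, potentially on three different nodes, all mount the same EFS-backed volume:

```yaml
# Hypothetical deployment: every replica shares one ReadWriteMany volume
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cms-web
  namespace: cms
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cms-web
  template:
    metadata:
      labels:
        app: cms-web
    spec:
      containers:
        - name: web
          image: nginx:1.27
          volumeMounts:
            - name: uploads
              mountPath: /var/www/uploads
      volumes:
        - name: uploads
          persistentVolumeClaim:
            claimName: shared-content   # the ReadWriteMany PVC defined above
```

The same manifest with an EBS-backed ReadWriteOnce PVC would deadlock: only one node could attach the volume, and the other replicas would stay Pending.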

Amazon S3

S3 isn’t a filesystem. It’s an HTTP API for storing and retrieving objects — files with keys, not paths. You can’t mount S3 natively (tools like Mountpoint for Amazon S3 approximate it, with limitations). S3 is for data that flows through APIs: application assets, backups, data lake files, build artifacts, ML training datasets.

S3 storage classes:

| Class | Price | Retrieval | Use Case |
|---|---|---|---|
| Standard | $0.023/GB | Immediate | Active data |
| Standard-IA | $0.0125/GB + $0.01/GB retrieval | Immediate | Infrequent, critical |
| One Zone-IA | $0.01/GB + $0.01/GB retrieval | Immediate | Non-critical, infrequent |
| Glacier Instant | $0.004/GB + $0.03/GB retrieval | Milliseconds | Archives, quarterly access |
| Glacier Flexible | $0.0036/GB + $0.01/GB retrieval | 1–12 hours | Compliance archives |
| Deep Archive | $0.00099/GB + $0.02/GB retrieval | 12–48 hours | 7-year retention |

# Create a bucket with intelligent tiering for automatic class management
aws s3api create-bucket \
  --bucket my-app-assets \
  --region us-east-1

# Enable Intelligent-Tiering (automatically moves objects between tiers)
aws s3api put-bucket-intelligent-tiering-configuration \
  --bucket my-app-assets \
  --id main-tier \
  --intelligent-tiering-configuration '{
    "Id": "main-tier",
    "Status": "Enabled",
    "Tierings": [
      {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
      {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"}
    ]
  }'

# Lifecycle policy: delete incomplete multipart uploads after 7 days
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-app-assets \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "cleanup-incomplete-uploads",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7}
    }]
  }'

S3 latency is 1–100ms per operation (varies by object size and region). That’s fine for serving assets from CloudFront, reading batch input files, or loading model weights once at startup. It’s not fine for a database or an application that reads configuration on every request.

Decision Matrix

Pick the service based on access pattern, not cost alone:

| Scenario | Service | Reason |
|---|---|---|
| PostgreSQL database | EBS gp3 or io2 | Block storage, single instance, POSIX |
| Shared CMS uploads (WordPress) | EFS | Multiple web servers, ReadWriteMany |
| Static website assets | S3 + CloudFront | HTTP delivery, unlimited scale |
| EKS stateful single-pod workload | EBS via StorageClass | ReadWriteOnce, stays on one node |
| EKS shared config volume | EFS via StorageClass | ReadWriteMany across pods |
| ML training data (read-once batch) | S3 | Large files, parallel reads, cheap |
| CI/CD artifact storage | S3 | Build outputs, ephemeral, cheap |
| Video transcoding output | S3 | Large objects, streamed downstream |
| Elasticsearch data directory | EBS io2 | High IOPS random reads |
| Log aggregation from many pods | EFS or Kinesis | Concurrent writes |

The mental model that actually sticks: EBS is a drive, EFS is a network share, S3 is a bucket. Your app checks /var/data/config.json? That’s a drive — EBS. Your app NFS-mounts /shared/uploads across 20 web servers? That’s a network share — EFS. Your app calls s3.GetObject("config.json")? That’s a bucket — S3. If the distinction is still unclear for a specific workload, ask whether the application code uses filesystem calls (open, read, write) or HTTP calls. Filesystem calls need block or NFS storage; HTTP calls belong in S3.
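That heuristic is mechanical enough to express as a toy shell function — purely illustrative, classifying a data reference by its shape:

```shell
# Toy classifier: a filesystem path implies block/NFS storage; an s3:// URL implies the S3 API
storage_for() {
  case "$1" in
    s3://*) echo "S3" ;;            # object key behind an HTTP API
    /*)     echo "EBS or EFS" ;;    # POSIX path; needs a real filesystem behind it
    *)      echo "unknown" ;;
  esac
}

storage_for /var/data/config.json           # prints "EBS or EFS"
storage_for s3://my-app-assets/config.json  # prints "S3"
```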

Cost Comparison Example

100GB of data, 1 month:

| Service | Config | Cost |
|---|---|---|
| S3 Standard | 100GB stored | $2.30 |
| EBS gp3 | 100GB, baseline IOPS | $8.00 |
| EFS Standard | 100GB, frequent access | $30.00 |
| EFS Standard-IA | 100GB, infrequent | $2.50 + access fees |

S3 is 3.5x cheaper than EBS and 13x cheaper than EFS for storage cost alone. But that comparison ignores that EBS gives you microsecond IOPS latency and POSIX semantics, and EFS gives you shared concurrent access — capabilities S3 doesn’t offer at any price.
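The multiples fall straight out of the per-GB prices in the table:

```shell
# 100 GB for one month, storage cost only (us-east-1 prices from the table above)
S3=$(awk  "BEGIN { printf \"%.2f\", 100 * 0.023 }")
EBS=$(awk "BEGIN { printf \"%.2f\", 100 * 0.08 }")
EFS=$(awk "BEGIN { printf \"%.2f\", 100 * 0.30 }")
awk "BEGIN { printf \"EBS/S3 = %.1fx, EFS/S3 = %.1fx\n\", $EBS / $S3, $EFS / $S3 }"
```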

For Kubernetes workloads specifically, reach for the EBS CSI driver for stateful single-pod workloads and the EFS CSI driver for shared volumes — and note that Fargate pods support EFS but not EBS. Storage choices also feed directly into right-sizing: AWS Compute Optimizer typically flags gp2-to-gp3 migration as a recommendation for any account still running gp2 volumes.
