Everything you need to know about S3 Lifecycle Rules
If you’ve got data in S3 and you’re tired of managing it manually, lifecycle rules are worth knowing about. They let you automate what happens to objects over time - when they should move to cheaper storage, when to delete them, that sort of thing.
Overview of Amazon S3 Lifecycle Rules
Lifecycle rules are evaluated once a day and apply to everything in their scope. Each rule triggers one of two kinds of action once objects hit a certain age: a transition, which moves them to a cheaper storage class, or an expiration, which deletes them.
A practical example: say you want logs in a bucket to land in Glacier after a week, then disappear after 30 days. You can set that up and forget about it. No more manually cleaning up old files or wondering if you’re still paying for stuff you don’t need.
Rules key off object age, and you can scope them to the whole bucket or narrow them with a filter on prefix, tags, or object size.
Benefits: why people use lifecycle rules
The main reason is cost. S3’s standard storage isn’t cheap for long-term retention, and Glacier is significantly cheaper for stuff you don’t need to access often.
With lifecycle rules, you can:
- Move older file versions to infrequent access or archive storage classes
- Delete objects automatically after a retention period
- Keep your bucket from growing unbounded
I see teams use this most for log rotation, compliance data retention, and cost optimization on backup files.
Setting up a lifecycle rule
You can create lifecycle rules through the AWS Console, CLI, Terraform, or CloudFormation. Here’s how each approach works.
AWS Console
- Open the S3 console and pick your bucket
- Go to the Management tab, then find the Lifecycle rules section
- Click "Create lifecycle rule"
- Give it a name and decide the scope (whole bucket or a prefix)
- Choose your transitions - like moving objects to Standard-IA after 30 days, then to Glacier after 90
- Set expiration if you want objects deleted automatically
- Review and create
You can edit or delete rules anytime from the same page.
AWS CLI
The CLI command is put-bucket-lifecycle-configuration. Note that it replaces the bucket's entire lifecycle configuration in one call, so include every rule you want to keep. You'll need a JSON file with your rule definitions:
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://lifecycle-configuration.json
Here’s what the JSON looks like:
{
    "Rules": [
        {
            "Expiration": {
                "Days": 30
            },
            "Filter": {
                "Prefix": "logs/"
            },
            "Status": "Enabled",
            "Transitions": [
                {
                    "Days": 7,
                    "StorageClass": "GLACIER"
                }
            ]
        }
    ]
}
This rule targets objects with the “logs/” prefix. After a week, they move to Glacier. After 30 days, they’re gone.
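To confirm the rules landed the way you expect, you can read the configuration back with the companion get command (same bucket name as above):

aws s3api get-bucket-lifecycle-configuration --bucket my-bucket

It returns the active rules as JSON, which is handy for a quick sanity check after changes.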
Terraform
provider "aws" {
region = "us-west-2"
}
resource "aws_s3_bucket" "my_bucket" {
bucket = "my-bucket"
}
resource "aws_s3_bucket_lifecycle_configuration" "my_bucket_lifecycle" {
rule {
id = "example-rule"
prefix = "logs/"
status = "Enabled"
transition {
days = 7
storage_class = "GLACIER"
}
expiration {
days = 30
}
}
bucket = aws_s3_bucket.my_bucket.id
}
Run terraform init first, then terraform plan to preview, then terraform apply to create everything.
AWS CloudFormation
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-bucket
      LifecycleConfiguration:
        Rules:
          - Id: example-rule
            Prefix: logs/
            Status: Enabled
            Transitions:
              - TransitionInDays: 7
                StorageClass: GLACIER
            ExpirationInDays: 30
Note that in CloudFormation, lifecycle rules live in the bucket resource's LifecycleConfiguration property rather than a separate resource. Upload this template when creating a CloudFormation stack.
S3 Lifecycle Rules and S3 Object Lock
If you need to keep objects immutable for compliance, lifecycle rules work alongside S3 Object Lock. Here’s the pattern:
- Use a lifecycle rule to move objects to Glacier or Glacier Deep Archive
- Apply Object Lock to those objects with your required retention period
For example, you might move objects to Glacier after 30 days and lock them for 7 years. Once locked in compliance mode, nobody can delete or overwrite those versions until the retention period expires - not even users with full S3 permissions. Governance mode is looser: users granted s3:BypassGovernanceRetention can still remove the lock.
Object Lock requires a versioning-enabled bucket and is normally switched on when the bucket is created, so check the AWS documentation before building a workflow around it.
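Here's a minimal CLI sketch of the pattern. The bucket name, key, and retention date are placeholders, and it assumes you want compliance-mode retention on a single object version:

# Object Lock has to be enabled on the bucket itself, at creation time
aws s3api create-bucket --bucket my-locked-bucket --object-lock-enabled-for-bucket

# Lock one object version until the given date (compliance mode: irreversible)
aws s3api put-object-retention \
  --bucket my-locked-bucket \
  --key logs/2024-01-01.log \
  --retention '{"Mode": "COMPLIANCE", "RetainUntilDate": "2032-01-01T00:00:00Z"}'

Be careful experimenting with compliance mode - once applied, you're paying for that object until the date passes.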
Using lifecycle rules with object tagging
Tagging objects when you create lifecycle rules helps with organization and tracking. Tags let you group objects by department, project, or any scheme that fits your needs.
Beyond organization, tags give you a lens on access patterns: if objects carrying a particular tag turn out to be accessed rarely, you can tighten the rules that target that tag.
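Lifecycle filters can match on tags directly, not just prefixes. Here's a sketch of a rule that archives only objects carrying a particular tag - the prefix, tag key, and tag value are hypothetical:

{
    "Rules": [
        {
            "ID": "archive-finance-reports",
            "Filter": {
                "And": {
                    "Prefix": "reports/",
                    "Tags": [
                        { "Key": "department", "Value": "finance" }
                    ]
                }
            },
            "Status": "Enabled",
            "Transitions": [
                { "Days": 90, "StorageClass": "GLACIER" }
            ]
        }
    ]
}

The And wrapper is what lets you combine a prefix with one or more tags in a single filter.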
How lifecycle rules work with object versions
If you have versioning enabled on a bucket, lifecycle rules distinguish between the current version and noncurrent (older) versions of each object. Plain Transition and Expiration actions act on the current version; NoncurrentVersionTransition and NoncurrentVersionExpiration handle the rest. Keep this in mind when setting up rules.
A few things to know:
- Noncurrent-version actions only do something on buckets where versioning is (or was) enabled
- A plain transition or expiration acts on the current version only; to move or delete older versions, use NoncurrentVersionTransition and NoncurrentVersionExpiration, which count days from when a version became noncurrent
- Filters (prefix, tags, object size) still apply, but you can't target individual version IDs
- Expiring the current version in a versioned bucket adds a delete marker rather than removing data; the ExpiredObjectDeleteMarker action can clean up leftover markers
- Object Lock retention applies per object version, which is what makes it useful for compliance scenarios
The practical benefit is controlling storage costs on version-heavy buckets. Without lifecycle rules, old versions just sit there accumulating charges.
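As a sketch, here's what a rule that archives and eventually deletes old versions might look like - the empty prefix scopes it to the whole bucket, and the 30/365-day windows are arbitrary choices for illustration:

{
    "Rules": [
        {
            "ID": "trim-old-versions",
            "Filter": {
                "Prefix": ""
            },
            "Status": "Enabled",
            "NoncurrentVersionTransitions": [
                { "NoncurrentDays": 30, "StorageClass": "GLACIER" }
            ],
            "NoncurrentVersionExpiration": {
                "NoncurrentDays": 365
            }
        }
    ]
}

NoncurrentDays starts counting when a version stops being the latest, not when the object was first uploaded.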
Wrapping up
S3 lifecycle rules handle the automation side of storage management. Set them up correctly and you get:
- Automatic cost optimization through storage class transitions
- Cleanup of expired objects without manual work
- Support for versioning and Object Lock workflows
My recommendation: start simple. Pick one bucket, set up a basic rule to transition objects to Standard-IA after 90 days, and see how it goes. You can always add complexity later.
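As a starting point, that rule could look something like this (bucket-wide scope via an empty prefix; adjust the window to taste):

{
    "Rules": [
        {
            "ID": "starter-rule",
            "Filter": { "Prefix": "" },
            "Status": "Enabled",
            "Transitions": [
                { "Days": 90, "StorageClass": "STANDARD_IA" }
            ]
        }
    ]
}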
Remember to use object tagging to keep your rules organized and track what’s happening across your storage.