AWS S3 Lifecycle Policies: Cut Storage Costs by 60%
A practical guide to configuring S3 lifecycle rules that automatically transition objects between storage classes, with real cost analysis and Terraform examples.
S3 lifecycle policies are one of the highest-ROI optimizations available in AWS. Most teams are paying for Standard storage on data that hasn't been accessed in months. In this post I'll show you exactly how to configure lifecycle rules to move data down the storage tier ladder automatically.
The Storage Class Ladder
S3 offers a range of storage classes; the six that matter for lifecycle transitions are below, each with a different cost/retrieval trade-off (us-east-1 pricing at the time of writing):
| Class | Storage cost | Retrieval cost | Min duration | Best for |
|---|---|---|---|---|
| Standard | $0.023/GB-mo | Free | None | Active data |
| Intelligent-Tiering | $0.023/GB-mo + $0.0025 per 1K objects monitored | Free | None | Unknown access patterns |
| Standard-IA | $0.0125/GB-mo | $0.01/GB | 30 days | Infrequent access |
| One Zone-IA | $0.01/GB-mo | $0.01/GB | 30 days | Non-critical, infrequent |
| Glacier Instant Retrieval | $0.004/GB-mo | $0.03/GB | 90 days | Archives needing ms retrieval |
| Glacier Deep Archive | $0.00099/GB-mo | $0.02/GB | 180 days | Long-term cold storage |
The 30-day rule
S3 charges a minimum storage duration for the colder classes: 30 days for Standard-IA and One Zone-IA, 90 days for Glacier Instant Retrieval, and 180 days for Glacier Deep Archive. Transitioning objects before those minimums burns money rather than saving it, because you're billed for the full minimum even if the object is deleted sooner. Use lifecycle rules only for data that consistently lives longer than the minimum duration.
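To make the trap concrete, here's a back-of-the-envelope comparison for an object that only lives 15 days, using the us-east-1 rates from the table above (a sketch; substitute your own rates and lifetimes):

```
# Break-even check: Standard for 15 days vs. Standard-IA's 30-day minimum
awk 'BEGIN {
  std = 0.023; ia = 0.0125   # $/GB-month, from the table above
  days = 15                  # actual object lifetime
  printf "Standard for %d days:        $%.4f/GB\n", days, std * days / 30
  printf "Standard-IA, 30-day minimum: $%.4f/GB\n", ia
}'
```

At 15 days, the IA minimum-duration charge already exceeds the cost of simply leaving the object in Standard, and that's before counting the per-request transition fee.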
Calculating Your Potential Savings
Before touching any configuration, pull your S3 Storage Lens data or run this CLI command to see your current breakdown:
```
# Storage class distribution across a bucket
aws s3api list-objects-v2 \
--bucket your-bucket-name \
--query 'Contents[].{Key:Key,StorageClass:StorageClass,Size:Size}' \
--output json | jq '
group_by(.StorageClass) |
map({
class: .[0].StorageClass,
count: length,
total_gb: (map(.Size) | add) / 1073741824
})
'
```

On a typical application bucket I've seen:
- 70% of objects older than 90 days still sitting in Standard
- 15% accessed once and never again
- Only 15% genuinely hot
That 70% is free money waiting to be collected.
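To get the equivalent numbers for your own bucket, you can extend the same list-objects-v2 pipeline to count objects past the 90-day mark that are still in Standard. A sketch (the GNU-style date invocation is an assumption; macOS/BSD date uses different flags):

```
# Count objects older than 90 days that are still in STANDARD
aws s3api list-objects-v2 \
  --bucket your-bucket-name \
  --query 'Contents[].{LastModified:LastModified,StorageClass:StorageClass,Size:Size}' \
  --output json | jq --arg cutoff "$(date -u -d '90 days ago' +%Y-%m-%dT%H:%M:%S)" '
    (. // []) |
    map(select(.StorageClass == "STANDARD" and .LastModified < $cutoff)) |
    {count: length, total_gb: ((map(.Size) | add // 0) / 1073741824)}
  '
```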
Configuring Lifecycle Rules in Terraform
Here's a production-ready Terraform configuration I use for application log buckets:
```
resource "aws_s3_bucket_lifecycle_configuration" "app_logs" {
bucket = aws_s3_bucket.app_logs.id
rule {
id = "transition-to-ia-then-glacier"
status = "Enabled"
filter {
prefix = "logs/" # Apply only to log objects
}
# Move to IA after 30 days
transition {
days = 30
storage_class = "STANDARD_IA"
}
# Move to Glacier Instant after 90 days
transition {
days = 90
storage_class = "GLACIER_IR"
}
# Move to Deep Archive after 1 year
transition {
days = 365
storage_class = "DEEP_ARCHIVE"
}
# Delete after 7 years (compliance requirement)
expiration {
days = 2555
}
}
rule {
id = "abort-incomplete-multipart"
status = "Enabled"
filter {}
abort_incomplete_multipart_upload {
days_after_initiation = 7
}
}
}
```

Watch your minimum durations
If your objects are typically short-lived (e.g., ephemeral build artifacts under 30 days), don't use lifecycle rules — the minimum duration charges will exceed any savings.
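If you're not running Terraform, the same rules can be applied directly with the AWS CLI. A minimal sketch of the first rule (note that put-bucket-lifecycle-configuration replaces the bucket's entire existing lifecycle configuration, so include every rule you want to keep):

```
# Apply the tiering rule directly; this overwrites any existing lifecycle config
aws s3api put-bucket-lifecycle-configuration \
  --bucket your-bucket-name \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "transition-to-ia-then-glacier",
      "Status": "Enabled",
      "Filter": {"Prefix": "logs/"},
      "Transitions": [
        {"Days": 30,  "StorageClass": "STANDARD_IA"},
        {"Days": 90,  "StorageClass": "GLACIER_IR"},
        {"Days": 365, "StorageClass": "DEEP_ARCHIVE"}
      ],
      "Expiration": {"Days": 2555}
    }]
  }'
```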
Real Cost Example
For a bucket with 10TB of log data:
Before lifecycle policies:
- 10TB in Standard = $235.52/month
After lifecycle policies (steady state, 12 months in):
- 1TB in Standard (last 30 days) = $23.55
- 2TB in Standard-IA (30–90 days) = $25.60
- 3TB in Glacier Instant (90 days–1 year) = $12.29
- 4TB in Deep Archive (1+ years) = $4.06
Total after: $65.50/month — a 72% reduction
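The arithmetic is just the table rates multiplied by each tier's size, so it's easy to rerun with your own distribution:

```
# Recompute the before/after monthly costs from the table rates
awk 'BEGIN {
  gb = 1024                                # GB per TB
  before = 10*gb*0.023                     # 10 TB, all in Standard
  after  = 1*gb*0.023 + 2*gb*0.0125 + 3*gb*0.004 + 4*gb*0.00099
  printf "Before: $%.2f/mo  After: $%.2f/mo  Reduction: %.0f%%\n",
         before, after, 100*(1 - after/before)
}'
```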
Common Mistakes
- Transitioning objects smaller than 128 KB to IA/Glacier: Standard-IA and One Zone-IA bill every object as if it were at least 128 KB, so tiny files end up costing more in IA than they did in Standard.
- Not setting `abort_incomplete_multipart_upload`: failed multipart uploads accumulate quietly and cost real money (see the check after this list).
- Using lifecycle rules on versioned buckets without targeting noncurrent versions: current-version rules never touch old versions, so you need a separate noncurrent rule like this one:
```
# Don't forget noncurrent versions in versioned buckets
noncurrent_version_transition {
noncurrent_days = 30
storage_class = "STANDARD_IA"
}
noncurrent_version_expiration {
noncurrent_days = 90
}
```
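On the multipart-upload mistake: you can check whether a bucket is already carrying abandoned uploads with a standard s3api call:

```
# List in-flight (possibly abandoned) multipart uploads and when they started
aws s3api list-multipart-uploads \
  --bucket your-bucket-name \
  --query 'Uploads[].{Key:Key,Initiated:Initiated}' \
  --output table
```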
Verifying the Rules Are Working
After applying, monitor via S3 Storage Lens or CloudWatch:
```
# Check current storage class breakdown in CloudWatch
aws cloudwatch get-metric-statistics \
--namespace AWS/S3 \
--metric-name BucketSizeBytes \
--dimensions Name=BucketName,Value=your-bucket Name=StorageType,Value=StandardIAStorage \
--statistics Average \
--start-time 2026-01-01T00:00:00Z \
--end-time 2026-03-01T00:00:00Z \
--period 86400
```

Lifecycle rules run once per day, so you won't see immediate changes; check back the next day.
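It's also worth confirming that the configuration actually landed on the bucket:

```
# Show the lifecycle configuration currently attached to the bucket
aws s3api get-bucket-lifecycle-configuration --bucket your-bucket-name
```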
Conclusion
S3 lifecycle policies are genuinely one of the easiest wins in cloud cost optimization. Configure them once, and they run forever. The Terraform config above takes 15 minutes to apply and pays for itself in the first week.
Next up in this series: EC2 rightsizing using Compute Optimizer.