AWS Certified Solutions Architect Associate SAA-C03 Practice Question
A company ingests high-volume telemetry from IoT devices into an Amazon S3 bucket for real-time analytics. The data is accessed frequently for the first 30 days. After that, access is rare, but compliance rules require the data to be retained for a total of one year. What is the most cost-effective way to manage this data's lifecycle?
Move the data to S3 Standard-Infrequent Access after 30 days and delete it after one year.
Store the data in S3 Standard for 30 days, then transition it to S3 Glacier Deep Archive, and configure expiration to delete it after one year.
Transfer the data to EBS Cold HDD volumes after 30 days and delete it after one year.
Keep the data in S3 Standard for the entire year so it is always immediately available.
The most economical approach is to apply an S3 Lifecycle policy that keeps new objects in the S3 Standard storage class for the first 30 days (to support real-time analytics), automatically transitions them to S3 Glacier Deep Archive on day 31, and deletes them after 365 days.
Deep Archive is the lowest-priced S3 storage class (≈ $0.00099 per GB-month) and suits data that is rarely accessed.
The objects will remain in Deep Archive for ~335 days, satisfying its 180-day minimum-storage requirement with no early-deletion fees.
Alternatives such as Glacier Flexible Retrieval or Standard-IA cost several times more over the retention period. Moving the data to EBS Cold HDD volumes would shift it from object to block storage, which must be provisioned for capacity and attached to EC2 instances, increasing both cost and management complexity.
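As an illustration, a lifecycle rule matching this design could be applied with a short boto3 sketch like the one below. The bucket name and prefix (example-telemetry-bucket, telemetry/) are placeholders, not values from the question, and the same rule could equally be defined in the S3 console, the AWS CLI, CloudFormation, or Terraform.

```python
import boto3

# Assumes credentials and region are already configured in the environment.
s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-telemetry-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "telemetry-30d-deep-archive-1y-expire",
                "Status": "Enabled",
                # Limit the rule to the telemetry prefix (placeholder).
                "Filter": {"Prefix": "telemetry/"},
                # Objects stay in S3 Standard for 30 days, then transition
                # to Glacier Deep Archive.
                "Transitions": [
                    {"Days": 30, "StorageClass": "DEEP_ARCHIVE"}
                ],
                # Objects are deleted 365 days after creation, meeting the
                # one-year retention requirement.
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```

Because both the transition and the expiration are measured from object creation, the objects spend roughly 335 days in Deep Archive, comfortably past the 180-day minimum storage duration, so no early-deletion charges apply.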