AWS Certified Solutions Architect Associate Practice Test (SAA-C03)
Use the form below to configure your AWS Certified Solutions Architect Associate Practice Test (SAA-C03). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

AWS Certified Solutions Architect Associate SAA-C03 Information
AWS Certified Solutions Architect - Associate showcases knowledge and skills in AWS technology across a wide range of AWS services. The focus of this certification is the design of cost- and performance-optimized solutions and a strong understanding of the AWS Well-Architected Framework. This certification can enhance the career profile and earnings of certified individuals and increase their credibility and confidence in stakeholder and customer interactions.
The AWS Certified Solutions Architect - Associate (SAA-C03) exam is intended for individuals who perform a solutions architect role. The exam validates a candidate’s ability to design solutions based on the AWS Well-Architected Framework.
The exam also validates a candidate’s ability to complete the following tasks:
- Design solutions that incorporate AWS services to meet current business requirements and future projected needs
- Design architectures that are secure, resilient, high-performing, and cost optimized
- Review existing solutions and determine improvements

Free AWS Certified Solutions Architect Associate SAA-C03 Practice Test
- 20 Questions
- Unlimited
- Design Secure Architectures, Design Resilient Architectures, Design High-Performing Architectures, Design Cost-Optimized Architectures
Your client hosts their multimedia files on Amazon S3 and observes that these files are frequently accessed for up to 60 days after uploading. After 60 days, the access patterns decline sharply, but the client requires the files to be available for occasional access for at least one year. Which lifecycle policy should be applied to meet the client's need for cost optimization while maintaining file availability?
Transition objects to S3 One Zone-Infrequent Access after 60 days
Transition objects directly to S3 Glacier Flexible Retrieval after 60 days
Keep the objects stored in S3 Standard without transitioning them to other storage classes
Transition objects to S3 Standard-Infrequent Access after 60 days and to S3 Glacier Flexible Retrieval after one year
Answer Description
Configure an S3 Lifecycle rule that transitions the objects to S3 Standard-Infrequent Access (Standard-IA) 60 days after upload. Standard-IA offers the same millisecond latency and 11-nines durability as S3 Standard at a lower storage cost, with a 30-day minimum-storage duration that the workload already satisfies. Add a second transition to move the objects to S3 Glacier Flexible Retrieval (formerly "S3 Glacier") after they are 365 days old. Glacier Flexible Retrieval provides the lowest-cost archive tier that still allows retrieval in minutes to hours, which is adequate for the client's occasional access needs. S3 One Zone-IA is not chosen because, although it has the same durability as Standard-IA, it stores data in a single Availability Zone and offers lower availability (99.5%), making it less suitable for a primary copy that must remain available. Leaving the data in S3 Standard or moving it directly to Glacier after 60 days would either cost more or slow access during the high-access period.
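As an illustration only, a rule like the one described could be applied with the AWS SDK for Python (boto3); the bucket name and rule ID below are placeholders:
```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name used for illustration only.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-media-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "media-tiering",
                "Filter": {"Prefix": ""},   # apply to all objects in the bucket
                "Status": "Enabled",
                "Transitions": [
                    # Move to Standard-IA once the 60-day high-access window ends.
                    {"Days": 60, "StorageClass": "STANDARD_IA"},
                    # Archive to Glacier Flexible Retrieval after one year.
                    {"Days": 365, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```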
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the difference between S3 Standard, S3 Standard-IA, and S3 Glacier Flexible Retrieval?
Why is S3 One Zone-IA not recommended for this use case?
What happens if objects are moved directly to S3 Glacier Flexible Retrieval after 60 days?
A company wants to expose some internal services to external developers over the Internet. They need a solution that offers authentication/authorization, per-client rate limiting, and the ability to monitor and control usage. Which AWS service should they use to meet these requirements?
Use AWS Lambda to host the services and implement custom authentication and throttling logic.
Host the services on Amazon EC2 instances and use security groups for access control.
Use Amazon API Gateway to expose the services and enforce usage plans with API keys.
Deploy the services behind an Application Load Balancer and use Amazon Cognito for authentication.
Answer Description
Amazon API Gateway is a fully managed service that lets you create, publish, maintain, monitor, and secure APIs at any scale. It integrates with Amazon Cognito user pools, IAM, or Lambda authorizers for authentication and authorization. To meter and control consumption, you attach API keys to usage plans, which provide request quotas and throttling limits; detailed metrics and logs are pushed to Amazon CloudWatch for monitoring.
An Application Load Balancer with Amazon Cognito can authenticate users, but ALB has no built-in per-client throttling or usage-tracking features; you would need AWS WAF or another service for that.
Running the services directly on Amazon EC2 with security groups only restricts network traffic and provides none of the API-level monitoring or rate-limiting capabilities.
Using AWS Lambda alone would still require you to build or add separate authentication, metering, and throttling logic; API Gateway already provides these capabilities out of the box.
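For illustration, the usage-plan and API-key pieces might be wired together with boto3 roughly as follows; the API ID, stage, names, and limits are placeholder values:
```python
import boto3

apigw = boto3.client("apigateway")

# Hypothetical REST API ID and stage name.
plan = apigw.create_usage_plan(
    name="partner-tier",
    throttle={"rateLimit": 100.0, "burstLimit": 200},  # requests/sec per client
    quota={"limit": 100000, "period": "MONTH"},        # monthly request quota
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
)

key = apigw.create_api_key(name="partner-acme", enabled=True)

# Associate the key with the usage plan so its limits apply to this client.
apigw.create_usage_plan_key(
    usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY"
)
```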
Ask Bash
What is Amazon API Gateway and how does it manage authentication and throttling?
How do usage plans and API keys work in Amazon API Gateway?
How is Amazon API Gateway different from an Application Load Balancer for API management?
Which of the following statements correctly describes how a multi-tier architecture can improve the scalability of an application deployed on AWS?
Storing business logic and data in the same tier minimizes network hops, which inherently increases concurrent throughput.
Each layer (presentation, application, and data) can be scaled horizontally and independently, so increasing load on one layer does not require scaling the others.
All tiers run on the same Amazon EC2 instances, so autoscaling the instance group automatically scales every tier together.
Having a fixed 1:1 mapping between web servers and database servers simplifies capacity planning and removes the need for load balancing.
Answer Description
In a multi-tier (three-tier) architecture, the presentation, business logic, and data layers run on separate, independent infrastructure. Because the tiers are decoupled, each one can be scaled horizontally or vertically on its own to meet demand. This independence lets architects add web servers when user traffic grows, increase application-tier capacity for compute-intensive processing, or scale database read replicas, all without forcing changes to the other tiers. Designs that keep all tiers on the same instances or couple them with fixed ratios negate these benefits and limit scalability.
Ask Bash
What does horizontal scaling mean in a multi-tier architecture?
What types of AWS services are commonly used in a multi-tier architecture?
How does a decoupled architecture benefit application scalability?
A web startup is deploying an interactive platform that will experience fluctuating levels of traffic, with occasional surges during marketing campaigns and special events. The platform needs consistent compute power under normal conditions but must also be able to scale up swiftly and cost-effectively during peak times. Which service should the architect recommend to fulfill these compute requirements with elasticity?
Utilize a data processing managed service designed for handling sporadic heavy loads
Deploy the platform using a serverless function execution service
Provision a single compute instance with maximum capacity to handle traffic spikes
Implement an auto scaling service that dynamically adjusts the number of instances
Answer Description
Using an auto scaling service for elastic compute is the best fit for the scenario described, where consistent compute power is required with the flexibility to scale up during traffic surges. This service will automatically adjust the number of compute instances to handle the load, providing a balance between performance and cost. A serverless function execution platform, while ideal for certain event-driven patterns, does not provide a persistent baseline environment which web applications typically require. Using a larger fixed compute instance may lead to cost inefficiencies due to over-provisioning during non-peak times. Data processing managed services are not suitable as they are tailored for batch jobs and not interactive platforms with variable traffic.
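As a sketch of the recommended approach, a target tracking scaling policy could be attached to an existing Auto Scaling group with boto3; the group name and target value are assumptions:
```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical Auto Scaling group name; assumes the group already exists
# behind a load balancer with appropriate min/max capacity.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-platform-asg",
    PolicyName="keep-cpu-near-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # add/remove instances to hold average CPU near 50%
    },
)
```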
Ask Bash
What is auto scaling in AWS?
How does auto scaling differ from serverless architecture?
What are the benefits of auto scaling compared to provisioning a single large instance?
A scientific research institute needs to offload a large collection of genomic data sets from its on-premises servers to AWS. The data sets are seldom accessed, but when they are, a delay of several hours is acceptable. The institute requires a highly cost-effective solution for storing and retrieving these data sets, with a strong focus on minimizing storage costs. What method represents the MOST cost-optimized approach to store this data?
Implement a Storage Gateway with stored volumes to gradually move the data sets into Amazon S3 over a direct connection.
Leverage S3 Intelligent-Tiering to automatically optimize costs between frequent and infrequent access tiers for the data sets.
Store the genomic data sets using the S3 Glacier Deep Archive storage class after initial upload completion.
Utilize the Hadoop Distributed File System (HDFS) on Amazon EMR with occasional syncing to S3 for the data sets not actively in use.
Answer Description
Opting for the S3 Glacier or S3 Glacier Deep Archive storage class is ideal for rarely accessed data that can tolerate retrieval times of several hours, with Deep Archive being the most cost-effective for data that needs to be stored for a decade or longer. S3 Intelligent-Tiering is less cost-effective here because it adds per-object monitoring charges and is intended for unpredictable access patterns, whereas these data sets are known to be rarely accessed. Utilizing the Hadoop Distributed File System (HDFS) on Amazon EMR for infrequent access is incorrect because EMR is a compute service, not a storage service, and would incur ongoing compute charges. Using Storage Gateway to store the data directly in S3 would introduce unnecessary cost and complexity for a one-time migration need.
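For illustration, an object can be written straight into the Deep Archive storage class at upload time with boto3; the file, bucket, and key names are placeholders:
```python
import boto3

s3 = boto3.client("s3")

# Hypothetical names; uploading straight into Deep Archive means the object
# never accrues higher-cost S3 Standard storage charges.
s3.upload_file(
    Filename="sample_genome.bam",
    Bucket="example-genomics-archive",
    Key="datasets/sample_genome.bam",
    ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},
)
```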
Ask Bash
What is the difference between S3 Glacier and S3 Glacier Deep Archive?
What makes S3 Glacier Deep Archive cost-effective for long-term storage?
When should you use S3 Intelligent-Tiering instead of S3 Glacier?
A company is required to enforce strict access policies regarding the management of its encryption keys used to secure sensitive data at rest on Amazon S3. Which of the following is the BEST method to ensure that only a select group of senior security personnel can administer the keys?
Encapsulate the encryption keys using an additional layer of encryption with a separate master key.
Use AWS managed keys for S3 and rely on default encryption features to restrict key administration.
Create a Customer Managed Key in AWS KMS and restrict access to a specific IAM group assigned to the senior security personnel.
Implement automatic key rotation every three months for the encryption keys.
Answer Description
Creating a Customer Managed Key in AWS Key Management Service (KMS) and using IAM policies to restrict access to that key to a specific IAM group containing the senior security personnel provides a secure way to allow only authorized staff to manage the keys. This fits the principle of least privilege by specifically designating which IAM identities have permissions to perform key administrative operations. Using AWS managed keys does not provide the same level of granular control over access permissions. Encrypting the keys with another key would add complexity without solving the access control requirement. Rotating keys independently of access policies ensures keys are refreshed regularly but does not address who can manage key permissions.
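One hedged way to implement the restriction, assuming the key policy keeps the default statement that delegates permission management to IAM, is to attach an inline policy to the senior security personnel's IAM group; the group name and key ARN below are placeholders:
```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical group name and key ARN. KMS key policies cannot name IAM
# groups directly, so the group receives an identity-based policy instead.
admin_actions = [
    "kms:DescribeKey", "kms:EnableKey", "kms:DisableKey",
    "kms:PutKeyPolicy", "kms:ScheduleKeyDeletion", "kms:CancelKeyDeletion",
    "kms:EnableKeyRotation", "kms:DisableKeyRotation",
    "kms:TagResource", "kms:UntagResource",
]

iam.put_group_policy(
    GroupName="senior-security-admins",
    PolicyName="administer-data-at-rest-cmk",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": admin_actions,
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
        }],
    }),
)
```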
Ask Bash
What is AWS KMS?
What is the principle of least privilege?
How does a Customer Managed Key differ from an AWS Managed Key?
A Solutions Architect needs to determine the ideal memory allocation for a new AWS Lambda function that processes short bursts of data with low to moderate processing requirements. What should the Architect consider when selecting the Lambda function's memory size?
Select the lowest memory size since the function only processes short bursts of data.
Allocate the highest memory option to ensure the fastest execution time.
Choose a lower memory size that still provides adequate CPU power for the function's requirements, balancing cost and performance.
Allocate memory size based solely on the duration of the function's execution, regardless of processing requirements.
Answer Description
Increasing a Lambda function's memory allocation also increases the CPU available to it, which allows the function to complete execution faster. For moderate processing requirements it is therefore not necessary to allocate the highest option; the Architect should choose a memory size that provides sufficient CPU for efficient execution while avoiding the unnecessary cost of overprovisioning.
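As a small illustration, memory is the single knob that also controls CPU, so tuning usually means measuring duration and adjusting the setting; the function name below is a placeholder:
```python
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical function name. Start modest, measure duration in CloudWatch,
# and adjust: memory and CPU scale together, so a small bump can shorten
# execution enough to lower overall cost.
lambda_client.update_function_configuration(
    FunctionName="burst-data-processor",
    MemorySize=512,   # MB; raise only if measured duration justifies it
)
```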
Ask Bash
How does memory size impact CPU performance in AWS Lambda?
What happens to the cost if you allocate too much memory for an AWS Lambda function?
Why is it not ideal to choose the lowest memory size for AWS Lambda functions with moderate requirements?
An enterprise needs to ensure the encryption of sensitive data stored in their Amazon S3 buckets. The company has mandated that its own encryption keys must be used, and those keys must be capable of being rotated on a company-defined schedule and disabled immediately in the event of a security breach. Which of the following configurations should be implemented to meet these specific requirements?
Use an AWS-managed CMK in AWS KMS without enabling key rotation.
Create a customer-managed CMK in AWS KMS, use it to encrypt the S3 buckets (SSE-KMS), and manage rotation/disablement according to the company policy.
Use Amazon S3-managed keys (SSE-S3) for encryption and handle rotation outside of AWS.
Use an AWS-managed KMS key and rely on its automatic annual rotation.
Answer Description
Using a customer-managed AWS KMS key (CMK) satisfies the requirements because the organization controls the key material.
- Rotation: A customer-managed CMK can be rotated on demand or by enabling automatic rotation with a custom period from 90 to 2560 days (default 365 days), giving the company full control over the schedule.
- Disablement: The key owner can call the DisableKey operation to make the CMK unusable almost immediately if a compromise is suspected.
AWS-managed keys (the default aws/s3 key) rotate automatically every year and cannot be disabled or have their schedule changed. SSE-S3 uses keys fully managed by Amazon S3, so the customer cannot supply, rotate, or disable its own keys. Therefore, creating and using a customer-managed CMK for SSE-KMS on the S3 bucket is the only configuration that meets all stated requirements.
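A minimal boto3 sketch of this configuration is shown below; the bucket name is a placeholder, and the custom rotation period parameter requires a reasonably recent SDK version:
```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Create a customer-managed key the company fully controls.
key = kms.create_key(Description="company-controlled key for S3 data at rest")
key_arn = key["KeyMetadata"]["Arn"]

# Make it the bucket's default encryption key (SSE-KMS). Bucket name is a placeholder.
s3.put_bucket_encryption(
    Bucket="example-sensitive-data",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": key_arn,
            }
        }]
    },
)

# Rotate on the company's schedule (90-2560 days).
kms.enable_key_rotation(KeyId=key_arn, RotationPeriodInDays=90)

# Run only during an incident: disable the key immediately if a breach is suspected.
kms.disable_key(KeyId=key_arn)
```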
Ask Bash
What is a customer-managed CMK in AWS KMS?
How can you rotate keys in AWS KMS?
What is the difference between AWS-managed keys and customer-managed keys in AWS KMS?
An application is experiencing significant load on its database tier, particularly with read-heavy query operations that are impacting performance. Which service would best alleviate this issue by caching query results to enhance the performance and scalability of the application?
Amazon RDS Read Replicas
Amazon Simple Storage Service (S3)
Amazon RDS Multi-AZ deployment
Amazon ElastiCache
Answer Description
Amazon ElastiCache effectively addresses the challenge of reducing database load by caching query results. It supports caching strategies, both for Redis and Memcached, that can store frequently accessed data in-memory, thereby improving application performance by providing low-latency access to the data and reducing the load on the database tier. This aligns with the criteria specified for enhancing performance and scalability for read-heavy operations, making it the best option given the scenario described.
Amazon RDS Read Replicas are not the ideal choice because, while they can reduce the load by serving read requests, they are not a caching layer and do not offer the same low latency as an in-memory cache. Using RDS Multi-AZ would provide high availability but would not address the performance issue related to read-heavy queries as effectively as a caching layer. Amazon S3 is primarily used for object storage and is not suitable for caching database query results.
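To illustrate the caching layer, a simple cache-aside read against an ElastiCache for Redis endpoint might look like the following sketch; the endpoint and the database helper are placeholders:
```python
import json
import redis

# Hypothetical ElastiCache for Redis endpoint.
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def query_database(report_id: str) -> dict:
    # Placeholder for the real read-heavy query against the primary database.
    return {"report_id": report_id}

def get_report(report_id: str) -> dict:
    """Cache-aside read: serve from Redis when possible, fall back to the DB."""
    cached = cache.get(f"report:{report_id}")
    if cached is not None:
        return json.loads(cached)

    result = query_database(report_id)                            # expensive DB read
    cache.setex(f"report:{report_id}", 300, json.dumps(result))   # cache for 5 minutes
    return result
```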
Ask Bash
What is Amazon ElastiCache and how does it work?
How does ElastiCache differ from using RDS Read Replicas?
What are the main use cases for Amazon ElastiCache?
A financial-services company transfers about 20 TB of sensitive transaction data every night from its on-premises data center in New York to Amazon S3 buckets in the us-east-1 Region. The security team requires that the traffic never traverse the public Internet. The network team also needs deterministic, low-latency performance during the transfer window and is willing to provision physical connectivity if necessary.
Which AWS networking option MOST cost-effectively meets these requirements?
Order a dedicated 10 Gbps AWS Direct Connect connection at a nearby AWS Direct Connect location and configure a private virtual interface
Use AWS DataSync over the Internet with TLS encryption to copy the data to Amazon S3
Enable AWS Global Accelerator and route the data traffic through accelerator endpoints
Create an AWS Site-to-Site VPN connection over existing Internet circuits between the data center and a VPC
Answer Description
AWS Direct Connect provides a dedicated, private layer-2 or layer-3 connection that keeps traffic on the AWS global backbone. This bypasses the public Internet and delivers predictable, low-latency performance, which is ideal for large, daily data transfers. A Site-to-Site VPN is encrypted but still rides the Internet, so latency and jitter remain unpredictable. AWS Global Accelerator optimizes Internet paths for end-user traffic, not for data-center-to-AWS links. AWS DataSync over the Internet encrypts traffic but still uses public routes unless combined with Direct Connect, so it cannot meet the "never traverse the Internet" requirement on its own.
Ask Bash
What is AWS Direct Connect, and how does it ensure private connectivity?
Why is AWS Site-to-Site VPN not suitable for this use case?
How does AWS Global Accelerator differ from AWS Direct Connect in networking use cases?
Using Amazon EBS snapshots for backups is less cost-effective for storage than copying the same amount of data to Amazon S3 Standard-Infrequent Access (S3 Standard-IA).
True
False
Answer Description
The statement is true. In the us-east-1 region, standard EBS snapshots are billed at $0.05 per GB-month, whereas S3 Standard-Infrequent Access storage is $0.0125 per GB-month. Even though snapshots are incremental, holding the same quantity of backup data in the snapshot standard tier costs four times more than placing that data directly in S3 Standard-IA. Therefore, using the standard tier for EBS snapshots is less cost-effective for storing infrequently accessed backups compared to using S3 Standard-IA.
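As a quick back-of-the-envelope check using the per-GB prices cited above (verify current pricing before relying on them):
```python
# Rough monthly storage cost for 1 TiB of backup data at the cited us-east-1 prices.
GIB = 1024
ebs_snapshot_standard = 0.05    # USD per GB-month
s3_standard_ia = 0.0125         # USD per GB-month

print(f"EBS snapshot standard tier: ${GIB * ebs_snapshot_standard:,.2f}/month")  # 51.20
print(f"S3 Standard-IA:             ${GIB * s3_standard_ia:,.2f}/month")         # 12.80
# The snapshot standard tier costs four times more per GB-month.
```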
Ask Bash
What are Amazon EBS snapshots?
What is Amazon S3 Standard-Infrequent Access (S3 Standard-IA)?
Why are EBS snapshots stored in the standard tier more expensive than S3 Standard-IA?
A security engineer must protect sensitive data that is uploaded to an Amazon S3 bucket. The engineer's requirements are:
- Encrypt data in transit by allowing only SSL/TLS connections to the bucket.
- Encrypt data at rest with the customer-managed AWS KMS key arn:aws:kms:us-east-1:123456789012:key/abcd1234.
Which of the following statements best describes AWS best practice for meeting both requirements?
Using a bucket policy to require SSL/TLS is unnecessary because Amazon S3 automatically forces HTTPS; only default encryption needs to be enabled.
A bucket policy can enforce SSL/TLS, but it can require only the AWS-managed key (aws/s3); customer-managed keys cannot be specified in policy conditions.
Enforcing SSL/TLS and a specific customer-managed KMS key in the bucket policy aligns with AWS security best practices for protecting data in transit and at rest.
Enabling SSE-S3 encryption at rest makes enforcing SSL/TLS in transit redundant, so the bucket policy only needs to specify the AES256 header.
Answer Description
AWS best practice is to enforce both controls with a bucket policy that (1) denies all requests when the global condition key aws:SecureTransport is false, ensuring that only HTTPS traffic is allowed, and (2) denies PUT operations that do not specify server-side encryption with the designated customer-managed KMS key via the condition keys s3:x-amz-server-side-encryption and s3:x-amz-server-side-encryption-aws-kms-key-id. This configuration encrypts data both in transit and at rest while giving the organization full control and auditability over the CMK.
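A hedged sketch of such a bucket policy, using the condition keys named above, might look like this; the bucket name is a placeholder, and the key ARN comes from the scenario:
```python
import json
import boto3

s3 = boto3.client("s3")

bucket = "example-sensitive-uploads"   # placeholder bucket name
key_arn = "arn:aws:kms:us-east-1:123456789012:key/abcd1234"  # CMK from the scenario

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Deny any request that is not made over SSL/TLS.
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
        {   # Deny uploads that do not use SSE-KMS.
            "Sid": "DenyNonKmsEncryption",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
            },
        },
        {   # Deny uploads encrypted with any key other than the designated CMK.
            "Sid": "DenyWrongKmsKey",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption-aws-kms-key-id": key_arn
                }
            },
        },
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```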
Ask Bash
What is the aws:SecureTransport condition key used for?
How do customer-managed KMS keys work with S3 buckets?
Why is it necessary to enforce both SSL/TLS and server-side encryption?
A company operates under a multi-account strategy where one account is managed by the security engineers and another is operated by a separate team responsible for network administration. The security team needs to allow the network administration team's account access to a specific Amazon S3 bucket without broadening the access to other accounts. Which of the following is the MOST secure way to grant the required access?
Set up a bucket policy that limits access to the S3 bucket based on the source IP range of the network administration team's office location.
Attach a resource-based policy directly to the S3 bucket identifying the network administration team's account as the principal with the specified permissions.
Edit the S3 bucket's Access Control List (ACL) to include the user identifiers from the team handling network administration.
Implement a policy for individual users in the security engineers' account that grants permissions to the network administration team.
Answer Description
Attach a resource-based policy (bucket policy) to the S3 bucket that identifies the network administration team's AWS account as the principal and grants only the required permissions. A bucket policy is evaluated in the account that owns the resource and explicitly supports specifying an entire account in the Principal element, which cleanly limits access to that account.
IAM identity-based policies in the security engineers' account cannot by themselves grant principals from another account access to the bucket; a resource-based policy in the bucket owner's account is still required for cross-account access. Although legacy S3 ACLs can grant permissions to another AWS account via that account's canonical user ID, AWS now recommends disabling ACLs and using bucket policies for simpler management and finer-grained control. Restricting access by source IP address does not satisfy the requirement because any principal from any account could still reach the bucket if it originates from the allowed network range.
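For illustration, a bucket policy granting read-level access to the other account as a whole might look like the sketch below; the bucket name, account ID, and chosen actions are placeholder assumptions:
```python
import json
import boto3

s3 = boto3.client("s3")

bucket = "example-shared-bucket"          # placeholder bucket name
network_admin_account = "222233334444"    # placeholder account ID

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowNetworkAdminAccountReadOnly",
        "Effect": "Allow",
        # Naming the whole account as principal delegates to that account's
        # own IAM policies to decide which of its identities may use the access.
        "Principal": {"AWS": f"arn:aws:iam::{network_admin_account}:root"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            f"arn:aws:s3:::{bucket}",
            f"arn:aws:s3:::{bucket}/*",
        ],
    }],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```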
Ask Bash
What is a resource-based policy in AWS?
Why are bucket policies preferred over S3 ACLs (Access Control Lists)?
How does the Principal element work in an S3 bucket policy?
A company needs a scalable storage solution to store a vast and ever-increasing collection of raw image files. These files will be frequently accessed immediately after creation, but access will dramatically decrease after one month. Which Amazon storage service will BEST meet the business's need for scalability and cost-effectiveness over time?
Amazon S3 Intelligent-Tiering
Amazon Elastic File System (EFS)
Amazon Elastic Block Store (EBS)
Amazon S3 Standard
Answer Description
Amazon S3 Intelligent-Tiering is the best solution for this scenario because it automatically moves the data to the most cost-effective access tier without performance impact or operational overhead. This is ideal for data with unknown or changing access patterns. Amazon S3 Standard is not the most cost-effective for data that is infrequently accessed over time. Amazon EFS is not optimized for object storage and cost-effectiveness like S3 Intelligent-Tiering. Amazon EBS volumes are not suitable for object storage and are best for block storage for use with EC2 instances.
Ask Bash
What is Amazon S3 Intelligent-Tiering?
How does S3 Intelligent-Tiering differ from S3 Standard?
Why aren't EFS or EBS suitable for this use case?
Which of the following statements about designing Amazon EC2 Auto Scaling policies for different workload types is CORRECT?
AWS Auto Scaling cannot use custom CloudWatch metrics, so only predefined CPU-based metrics are supported.
CPU utilization is always the best, and usually the only metric you need for horizontal scaling across every workload.
Memory-bound applications may require Auto Scaling policies that trigger on memory usage or other custom metrics instead of CPU utilization.
Network-intensive workloads should scale based on CPU utilization because network metrics cannot be used in Auto Scaling policies.
Answer Description
CPU utilization is a useful default metric, but it is not adequate for every workload. Memory-bound applications, for example, may need policies that trigger on memory usage or other custom CloudWatch metrics you publish. AWS Auto Scaling also supports predefined metrics such as NetworkIn/NetworkOut and Application Load Balancer request count, as well as fully custom metrics, so you should choose the metric, or combination of metrics, that best reflects real resource saturation for the application. The statement about memory-bound applications is therefore correct; the other options incorrectly claim that CPU is always sufficient or that other metrics are unsupported.
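As an illustrative sketch, a target tracking policy can be driven by a custom memory metric, assuming the CloudWatch agent publishes mem_used_percent under the CWAgent namespace with the group-name dimension; the group name is a placeholder:
```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical group name; assumes the CloudWatch agent is configured to
# publish mem_used_percent for instances in this Auto Scaling group.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="memory-bound-app-asg",
    PolicyName="track-memory-70",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "mem_used_percent",
            "Namespace": "CWAgent",
            "Dimensions": [
                {"Name": "AutoScalingGroupName", "Value": "memory-bound-app-asg"}
            ],
            "Statistic": "Average",
        },
        "TargetValue": 70.0,  # scale out/in to hold average memory use near 70%
    },
)
```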
Ask Bash
What are custom CloudWatch metrics in AWS Auto Scaling?
How does AWS Auto Scaling handle memory-bound applications differently?
Can network metrics like NetworkIn/NetworkOut be used in AWS Auto Scaling policies?
A technology startup is building a social media analytics platform that experiences unpredictable bursts of traffic throughout the day. They require a compute solution that automatically adjusts to the varying load and optimizes costs. Which architecture should they implement?
Use Amazon Elastic Container Service (ECS) with Fargate Launch Type for serverless container orchestration.
Utilize Amazon EC2 instances with Auto Scaling groups to manage the varying load.
Deploy on Amazon Elastic Kubernetes Service (EKS) with horizontal pod autoscaling.
Deploy the application on AWS Lambda and utilize its automatic scaling and billing for only the compute time used.
Answer Description
AWS Lambda allows the platform to automatically scale compute capacity by running code in response to events. It is well-suited for unpredictable workloads because the startup pays only for the compute time consumed, thereby optimizing costs. Although Amazon EC2 with Auto Scaling can scale to meet demand, it is not as cost-optimized for burstable, sporadic workloads because instances continue running, and incurring charges, between bursts. ECS and EKS are container management services and, while they offer scaling capabilities, they do not offer the same level of cost optimization and event-driven scaling that AWS Lambda provides for the described use case.
Ask Bash
How does AWS Lambda automatically scale for unpredictable traffic bursts?
Why is AWS Lambda more cost-efficient compared to EC2 Auto Scaling for bursty workloads?
What is the key difference between AWS Lambda and Amazon ECS with Fargate in handling serverless environments?
A company needs a disaster-recovery solution that can bring its Amazon RDS database back to the latest state after a full regional outage. The Recovery Point Objective (RPO) must be no greater than 5 minutes, and the business also wants the shortest possible Recovery Time Objective (RTO). Which approach best meets these requirements?
Deploy the database manually in multiple Regions and handle data replication yourself
Enable a cross-Region read replica for the Amazon RDS instance and promote it during a disaster
Configure automatic backups of the Amazon RDS database to run every 6 hours
Create Amazon RDS snapshots every 12 hours
Answer Description
A cross-Region read replica keeps a near-real-time copy of the database in another AWS Region. Because updates are shipped asynchronously (typically within seconds), you can promote the replica to a standalone primary during a regional outage, keeping data loss well under the 5-minute RPO and achieving a short RTO (promotion usually takes minutes).
User-defined snapshot or backup schedules of 6 or 12 hours leave data gaps far larger than 5 minutes. Although Amazon RDS automated backups ship transaction logs every 5 minutes and can meet the RPO, restoring from those backups involves creating and initializing a new DB instance, resulting in a much longer RTO than promoting a ready-to-serve replica. Manual multi-Region replication similarly provides no built-in guarantee of meeting the 5-minute RPO and adds operational overhead.
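For illustration, the replica is created from the DR Region by referencing the source instance's ARN and is promoted only when disaster strikes; the Region, identifiers, and instance class below are placeholders:
```python
import boto3

# Create the replica from the DR Region; the source is referenced by ARN.
rds_dr = boto3.client("rds", region_name="us-west-2")   # hypothetical DR Region

rds_dr.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica",
    SourceDBInstanceIdentifier=(
        "arn:aws:rds:us-east-1:111122223333:db:orders-db"  # placeholder source ARN
    ),
    DBInstanceClass="db.r6g.large",
)

# Run only during a regional outage, once the replica exists and is current:
# promote the replica to a standalone primary that applications can write to.
rds_dr.promote_read_replica(DBInstanceIdentifier="orders-db-replica")
```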
Ask Bash
What is the difference between RPO and RTO?
What are cross-Region read replicas in Amazon RDS?
How does promoting a cross-Region read replica minimize RTO?
Your company has a collection of historical data that is rarely accessed but must be retained for legal and auditing purposes. Which Amazon S3 storage class is the most cost-effective choice for this use case?
Amazon S3 One Zone-Infrequent Access
Amazon S3 Glacier Deep Archive
Amazon S3 Standard
Amazon S3 Intelligent-Tiering
Answer Description
Amazon S3 Glacier and Amazon S3 Glacier Deep Archive are designed for data archiving and long-term backup at very low costs. Among the available storage classes, S3 Glacier Deep Archive provides the lowest cost option that meets the requirement of retaining data for long periods without frequent access. It is suitable for archiving data that may be retrieved once or twice a year.
Ask Bash
What is Amazon S3 Glacier Deep Archive?
How does S3 Glacier differ from S3 Glacier Deep Archive?
What are the retrieval options for S3 Glacier Deep Archive?
A company with multiple organizational accounts needs to provide its data analytics team, which operates in a dedicated account, with read-only access to specific object prefixes within a storage service managed by another account. What is the most secure way to configure this access?
Deploy network access control lists to enable selective object prefix traffic from the analytics team's account to the storage service.
Utilize key management service policies to allow analytics team's data processing applications to decrypt read-access data.
Attach managed policies to the analytics team's user accounts that specify read permissions on the object prefixes in the storage service.
Craft a resource-based policy on the storage buckets to grant read privileges on the specified object prefixes to the analytics team's account.
Answer Description
Applying a resource-based policy (bucket policy) to the bucket owned by one account to grant access to the other account is the most appropriate and secure method for this scenario. The bucket policy defines permissions at the resource level; in this case it grants read-only access to the specified object prefixes for the data analytics team's account. Directly attaching policies to users in the other account or relying on network-based access controls would not achieve cross-account access at the bucket level, especially with the granularity required for specific object prefixes.
Ask Bash
What is a resource-based policy in AWS?
How does S3 bucket policy enable cross-account access?
What are object prefixes in S3, and why are they important for access control?
A company uploads monthly reports to an S3 bucket. These reports are frequently accessed for the first 30 days, occasionally accessed for the next 60 days, and rarely thereafter. The company needs a cost-effective solution to store and access these reports. Which lifecycle policy should be applied to the objects to minimize storage costs while keeping them available for occasional access when needed?
Transition to S3 One Zone-Infrequent Access after 30 days and S3 Glacier Flexible Retrieval after 90 days
Transition to S3 Standard-Infrequent Access after 30 days and then to S3 Glacier Flexible Retrieval after 90 days
Keep the reports in S3 Standard without transitioning to another storage class
Use S3 Intelligent-Tiering without any transitions
Answer Description
The correct answer is to transition the reports to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days and then to S3 Glacier Flexible Retrieval (formerly known as S3 Glacier) after 90 days. S3 Standard-IA is cost-effective for data that is accessed less frequently but requires rapid access when needed. After 90 days, when the reports are rarely accessed, moving them to S3 Glacier Flexible Retrieval will further reduce storage costs. S3 One Zone-Infrequent Access is cheaper than S3 Standard-IA but does not offer the same level of availability and resilience, as it stores data in a single Availability Zone. S3 Intelligent-Tiering could also be used, but in this scenario, the access patterns are predictable, so utilizing lifecycle transitions to Standard-IA and Glacier Flexible Retrieval is more cost-effective.
Ask Bash
What is the difference between S3 Standard-IA and S3 One Zone-IA?
Why is S3 Glacier Flexible Retrieval chosen for long-term storage?
Why is S3 Intelligent-Tiering not the best choice in this scenario?
Smashing!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.