AWS Certified Solutions Architect Associate Practice Test (SAA-C03)

Use the form below to configure your AWS Certified Solutions Architect Associate Practice Test (SAA-C03). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

AWS Certified Solutions Architect Associate SAA-C03 Information

AWS Certified Solutions Architect - Associate showcases knowledge and skills in AWS technology across a wide range of AWS services. The certification focuses on the design of cost- and performance-optimized solutions and demonstrates a strong understanding of the AWS Well-Architected Framework. It can enhance the career profile and earnings of certified individuals and increase their credibility and confidence in stakeholder and customer interactions.

The AWS Certified Solutions Architect - Associate (SAA-C03) exam is intended for individuals who perform a solutions architect role. The exam validates a candidate’s ability to design solutions based on the AWS Well-Architected Framework.

The exam also validates a candidate’s ability to complete the following tasks:

  • Design solutions that incorporate AWS services to meet current business requirements and future projected needs
  • Design architectures that are secure, resilient, high-performing, and cost optimized
  • Review existing solutions and determine improvements

The exam content is organized into four domains:

  • Design Secure Architectures
  • Design Resilient Architectures
  • Design High-Performing Architectures
  • Design Cost-Optimized Architectures
Question 1 of 20

Your client hosts their multimedia files on Amazon S3 and observes that these files are frequently accessed for up to 60 days after uploading. After 60 days, the access patterns decline sharply, but the client requires the files to be available for occasional access for at least one year. Which lifecycle policy should be applied to meet the client's need for cost optimization while maintaining file availability?

  • Transition objects to S3 One Zone-Infrequent Access after 60 days

  • Transition objects directly to S3 Glacier Flexible Retrieval after 60 days

  • Keep the objects stored in S3 Standard without transitioning them to other storage classes

  • Transition objects to S3 Standard-Infrequent Access after 60 days and to S3 Glacier Flexible Retrieval after one year
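
For reference, the tiered transition described in the last option maps directly onto an S3 lifecycle configuration. A minimal sketch using boto3 (the bucket name is hypothetical):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="media-assets-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "media-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                "Transitions": [
                    # Move to Standard-IA once access drops off.
                    {"Days": 60, "StorageClass": "STANDARD_IA"},
                    # GLACIER is the API name for Glacier Flexible Retrieval.
                    {"Days": 365, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```

Standard-IA keeps millisecond retrieval for the occasional reads during the first year, while the later Glacier transition cuts costs once the one-year availability requirement has been met.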

Question 2 of 20

A company wants to expose some internal services to external developers over the Internet. They need a solution that offers authentication/authorization, per-client rate limiting, and the ability to monitor and control usage. Which AWS service should they use to meet these requirements?

  • Use AWS Lambda to host the services and implement custom authentication and throttling logic.

  • Host the services on Amazon EC2 instances and use security groups for access control.

  • Use Amazon API Gateway to expose the services and enforce usage plans with API keys.

  • Deploy the services behind an Application Load Balancer and use Amazon Cognito for authentication.
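
For reference, API Gateway usage plans tie API keys to throttling and quota limits, which is what per-client rate limiting and usage monitoring come down to. A minimal boto3 sketch (the API ID, stage, and names are hypothetical):

```python
import boto3

apigw = boto3.client("apigateway")

# Create a usage plan with per-client throttling and a monthly quota,
# attached to an already-deployed REST API stage.
plan = apigw.create_usage_plan(
    name="external-developers",
    throttle={"rateLimit": 10.0, "burstLimit": 20},  # requests/second and burst
    quota={"limit": 10000, "period": "MONTH"},       # requests per month
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
)

# Issue a key for one external developer and bind it to the plan.
key = apigw.create_api_key(name="partner-dev-key", enabled=True)
apigw.create_usage_plan_key(
    usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY"
)
```

Authentication and authorization would typically be layered on top with IAM, Amazon Cognito, or Lambda authorizers.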

Question 3 of 20

Which of the following statements correctly describes how a multi-tier architecture can improve the scalability of an application deployed on AWS?

  • Storing business logic and data in the same tier minimizes network hops, which inherently increases concurrent throughput.

  • Each layer (presentation, application, and data) can be scaled horizontally and independently, so increasing load on one layer does not require scaling the others.

  • All tiers run on the same Amazon EC2 instances, so autoscaling the instance group automatically scales every tier together.

  • Having a fixed 1:1 mapping between web servers and database servers simplifies capacity planning and removes the need for load balancing.

Question 4 of 20

A web startup is deploying an interactive platform that will experience fluctuating levels of traffic, with occasional surges during marketing campaigns and special events. The platform needs consistent compute power under normal conditions but must also be able to scale up swiftly and cost-effectively during peak times. Which service should the architect recommend to fulfill these compute requirements with elasticity?

  • Utilize a data processing managed service designed for handling sporadic heavy loads

  • Deploy the platform using a serverless function execution service

  • Provision a single compute instance with maximum capacity to handle traffic spikes

  • Implement an auto scaling service that dynamically adjusts the number of instances
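
For reference, the elasticity described here is typically implemented with an Auto Scaling group and a target tracking policy. A minimal boto3 sketch (the group name is hypothetical and assumed to already exist):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Add or remove EC2 instances to hold average CPU near 50%, giving steady
# baseline capacity plus fast scale-out during campaigns and events.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-platform-asg",  # hypothetical, pre-existing group
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```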

Question 5 of 20

A scientific research institute needs to offload a large collection of genomic data sets from its on-premises servers to AWS. The data sets are seldom accessed, but when they are, a delay of several hours is acceptable. The institute requires a highly cost-effective solution for storing and retrieving these data sets, with a strong focus on minimizing storage costs. What method represents the MOST cost-optimized approach to store this data?

  • Implement a Storage Gateway with stored volumes to gradually move the data sets into Amazon S3 over a direct connection.

  • Leverage S3 Intelligent-Tiering to automatically optimize costs between frequent and infrequent access tiers for the data sets.

  • Store the genomic data sets using the S3 Glacier Deep Archive storage class after initial upload completion.

  • Utilize the Hadoop Distributed File System (HDFS) on Amazon EMR with occasional syncing to S3 for the data sets not actively in use.
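
For reference, objects can be written straight into S3 Glacier Deep Archive at upload time, so the data never accrues S3 Standard charges. A minimal boto3 sketch (the bucket and file names are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Upload directly into the Deep Archive storage class; retrievals take
# hours, which matches the institute's stated access tolerance.
s3.upload_file(
    "genome_batch_0042.tar.gz",                # hypothetical local file
    "genomics-archive-bucket",                 # hypothetical bucket
    "datasets/genome_batch_0042.tar.gz",
    ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},
)
```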

Question 6 of 20

A company is required to enforce strict access policies regarding the management of its encryption keys used to secure sensitive data at rest on Amazon S3. Which of the following is the BEST method to ensure that only a select group of senior security personnel can administer the keys?

  • Encapsulate the encryption keys using an additional layer of encryption with a separate master key.

  • Use AWS managed keys for S3 and rely on default encryption features to restrict key administration.

  • Create a Customer Managed Key in AWS KMS and restrict access to a specific IAM group assigned to the senior security personnel.

  • Implement automatic key rotation every three months for the encryption keys.
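
One wrinkle worth knowing: a KMS key policy cannot name an IAM group as a principal, so the usual pattern is to grant the administrative actions through a policy attached to the group. A minimal boto3 sketch (the group name and key details are hypothetical):

```python
import boto3
import json

kms = boto3.client("kms")
iam = boto3.client("iam")

key = kms.create_key(Description="Customer managed key for S3 data at rest")
key_arn = key["KeyMetadata"]["Arn"]

# Administrative (not cryptographic) actions, scoped to this one key.
admin_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "kms:DescribeKey", "kms:EnableKey", "kms:DisableKey",
            "kms:PutKeyPolicy", "kms:ScheduleKeyDeletion",
        ],
        "Resource": key_arn,
    }],
}

# Hypothetical group containing only the senior security personnel.
iam.put_group_policy(
    GroupName="senior-security-admins",
    PolicyName="kms-key-administration",
    PolicyDocument=json.dumps(admin_policy),
)
```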

Question 7 of 20

A Solutions Architect needs to determine the ideal memory allocation for a new AWS Lambda function that processes short bursts of data with low to moderate processing requirements. What should the Architect consider when selecting the Lambda function's memory size?

  • Select the lowest memory size since the function only processes short bursts of data.

  • Allocate the highest memory option to ensure the fastest execution time.

  • Choose a lower memory size that still provides adequate CPU power for the function's requirements, balancing cost and performance.

  • Allocate memory size based solely on the duration of the function's execution, regardless of processing requirements.
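
For reference, Lambda allocates CPU in proportion to the configured memory, so tuning is an empirical exercise: benchmark a representative payload at several memory sizes and keep the smallest one that still meets the latency target (the open-source AWS Lambda Power Tuning project automates this). A minimal boto3 sketch of the manual loop (the function name is hypothetical):

```python
import boto3

lambda_client = boto3.client("lambda")

for memory_mb in (128, 256, 512):
    # Reconfigure the function at this size; invoking it with a
    # representative payload and recording duration and cost is
    # left out of this sketch.
    lambda_client.update_function_configuration(
        FunctionName="burst-data-processor",  # hypothetical function
        MemorySize=memory_mb,
    )
```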

Question 8 of 20

An enterprise needs to ensure the encryption of sensitive data stored in their Amazon S3 buckets. The company has mandated that its own encryption keys must be used, and those keys must be capable of being rotated on a company-defined schedule and disabled immediately in the event of a security breach. Which of the following configurations should be implemented to meet these specific requirements?

  • Use an AWS-managed CMK in AWS KMS without enabling key rotation.

  • Create a customer-managed CMK in AWS KMS, use it to encrypt the S3 buckets (SSE-KMS), and manage rotation/disablement according to the company policy.

  • Use Amazon S3-managed keys (SSE-S3) for encryption and handle rotation outside of AWS.

  • Use an AWS-managed KMS key and rely on its automatic annual rotation.
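
For reference, a minimal boto3 sketch of the customer-managed key setup described in the second option (the bucket name is hypothetical):

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# A customer-managed key: the company owns its policy and lifecycle.
key = kms.create_key(Description="Company-managed key for sensitive S3 data")
key_arn = key["KeyMetadata"]["Arn"]

# Automatic rotation can be enabled, and the key can also be rotated or
# disabled on demand to satisfy a company-defined schedule.
kms.enable_key_rotation(KeyId=key_arn)

# Default-encrypt all new objects in the bucket with this key (SSE-KMS).
s3.put_bucket_encryption(
    Bucket="sensitive-data-bucket",  # hypothetical bucket
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": key_arn,
            }
        }]
    },
)

# In the event of a breach, the key can be cut off immediately:
# kms.disable_key(KeyId=key_arn)
```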

Question 9 of 20

An application is experiencing significant load on its database tier, particularly with read-heavy query operations that are impacting performance. Which service would best alleviate this issue by caching query results to enhance the performance and scalability of the application?

  • Amazon RDS Read Replicas

  • Amazon Simple Storage Service (S3)

  • Amazon RDS Multi-AZ deployment

  • Amazon ElastiCache
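
For reference, caching query results in front of the database usually follows the cache-aside pattern. A minimal sketch assuming an ElastiCache for Redis endpoint and the redis-py client (the endpoint and query function are hypothetical):

```python
import json

import redis  # redis-py client

# Hypothetical ElastiCache for Redis endpoint.
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)


def query_database(report_id: str) -> dict:
    """Stand-in for the expensive read-heavy query against the database."""
    raise NotImplementedError


def get_report(report_id: str) -> dict:
    """Cache-aside: serve repeated reads from the cache, fall back to the DB."""
    cached = cache.get(f"report:{report_id}")
    if cached is not None:
        return json.loads(cached)       # cache hit: no database work
    result = query_database(report_id)  # cache miss: query the database
    cache.setex(f"report:{report_id}", 300, json.dumps(result))  # cache 5 min
    return result
```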

Question 10 of 20

A financial-services company transfers about 20 TB of sensitive transaction data every night from its on-premises data center in New York to Amazon S3 buckets in the us-east-1 Region. The security team requires that the traffic never traverse the public Internet. The network team also needs deterministic, low-latency performance during the transfer window and is willing to provision physical connectivity if necessary.

Which AWS networking option MOST cost-effectively meets these requirements?

  • Order a dedicated 10 Gbps AWS Direct Connect connection at a nearby AWS Direct Connect location and configure a private virtual interface

  • Use AWS DataSync over the Internet with TLS encryption to copy the data to Amazon S3

  • Enable AWS Global Accelerator and route the data traffic through accelerator endpoints

  • Create an AWS Site-to-Site VPN connection over existing Internet circuits between the data center and a VPC

Question 11 of 20

Using Amazon EBS snapshots for backups is less cost-effective for storage than copying the same amount of data to Amazon S3 Standard-Infrequent Access (S3 Standard-IA).

  • True

  • False

Question 12 of 20

A security engineer must protect sensitive data that is uploaded to an Amazon S3 bucket. The engineer's requirements are:

  • Encrypt data in transit by allowing only SSL/TLS connections to the bucket.
  • Encrypt data at rest with the customer-managed AWS KMS key arn:aws:kms:us-east-1:123456789012:key/abcd1234.

Which of the following statements best describes AWS best practice for meeting both requirements?

  • Using a bucket policy to require SSL/TLS is unnecessary because Amazon S3 automatically forces HTTPS; only default encryption needs to be enabled.

  • A bucket policy can enforce SSL/TLS, but it can require only the AWS-managed key (aws/s3); customer-managed keys cannot be specified in policy conditions.

  • Enforcing SSL/TLS and a specific customer-managed KMS key in the bucket policy aligns with AWS security best practices for protecting data in transit and at rest.

  • Enabling SSE-S3 encryption at rest makes enforcing SSL/TLS in transit redundant, so the bucket policy only needs to specify the AES256 header.
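
For reference, both requirements can be expressed as explicit Deny statements in one bucket policy. A minimal boto3 sketch using the key ARN from the scenario (the bucket name is hypothetical):

```python
import boto3
import json

s3 = boto3.client("s3")

KEY_ARN = "arn:aws:kms:us-east-1:123456789012:key/abcd1234"
BUCKET = "sensitive-uploads-bucket"  # hypothetical bucket

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Reject any request that is not made over SSL/TLS.
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
        {   # Reject uploads not encrypted with the required KMS key.
            # Note: with this Deny, uploads must set the SSE-KMS header explicitly.
            "Sid": "DenyWrongKmsKey",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption-aws-kms-key-id": KEY_ARN
                }
            },
        },
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```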

Question 13 of 20

A company operates under a multi-account strategy where one account is managed by the security engineers and another is operated by a separate team responsible for network administration. The security team needs to allow the network administration team's account access to a specific Amazon S3 bucket without broadening the access to other accounts. Which of the following is the MOST secure way to grant the required access?

  • Set up a bucket policy that limits access to the S3 bucket based on the source IP range of the network administration team's office location.

  • Attach a resource-based policy directly to the S3 bucket identifying the network administration team's account as the principal with the specified permissions.

  • Edit the S3 bucket's Access Control List (ACL) to include the user identifiers from the team handling network administration.

  • Implement a policy for individual users in the security engineers' account that grants permissions to the network administration team.
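
For reference, naming the other account as the principal in a resource-based bucket policy scopes access to that account alone; the receiving account's administrators then delegate to specific users or roles with their own IAM policies. A minimal boto3 sketch (the bucket name and account ID are hypothetical):

```python
import boto3
import json

s3 = boto3.client("s3")

BUCKET = "shared-network-bucket"  # hypothetical bucket

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # Hypothetical account ID for the network administration team.
        "Principal": {"AWS": "arn:aws:iam::222233334444:root"},
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
    }],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```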

Question 14 of 20

A company needs a scalable storage solution to store a vast and ever-increasing collection of raw image files. These files will be frequently accessed immediately after creation, but access will dramatically decrease after one month. Which Amazon storage service will BEST meet the business's need for scalability and cost-effectiveness over time?

  • Amazon S3 Intelligent-Tiering

  • Amazon Elastic File System (EFS)

  • Amazon Elastic Block Store (EBS)

  • Amazon S3 Standard

Question 15 of 20

Which of the following statements about designing Amazon EC2 Auto Scaling policies for different workload types is CORRECT?

  • AWS Auto Scaling cannot use custom CloudWatch metrics, so only predefined CPU-based metrics are supported.

  • CPU utilization is always the best, and usually the only metric you need for horizontal scaling across every workload.

  • Memory-bound applications may require Auto Scaling policies that trigger on memory usage or other custom metrics instead of CPU utilization.

  • Network-intensive workloads should scale based on CPU utilization because network metrics cannot be used in Auto Scaling policies.
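
For reference, custom CloudWatch metrics (such as memory published by the CloudWatch agent) can drive a target tracking policy. A minimal boto3 sketch (the group name is hypothetical, and the metric name assumes the agent's default mem_used_percent):

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="memory-bound-asg",  # hypothetical group
    PolicyName="memory-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "mem_used_percent",  # published by the CloudWatch agent
            "Namespace": "CWAgent",
            "Dimensions": [
                {"Name": "AutoScalingGroupName", "Value": "memory-bound-asg"}
            ],
            "Statistic": "Average",
        },
        "TargetValue": 70.0,  # scale out above ~70% average memory use
    },
)
```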

Question 16 of 20

A technology startup is building a social media analytics platform that experiences unpredictable bursts of traffic throughout the day. They require a compute solution that automatically adjusts to the varying load and optimizes costs. Which architecture should they implement?

  • Use Amazon Elastic Container Service (ECS) with Fargate Launch Type for serverless container orchestration.

  • Utilize Amazon EC2 instances with Auto Scaling groups to manage the varying load.

  • Deploy on Amazon Elastic Kubernetes Service (EKS) with horizontal pod autoscaling.

  • Deploy the application on AWS Lambda and utilize its automatic scaling and billing for only the compute time used.

Question 17 of 20

A company needs a disaster-recovery solution that can bring its Amazon RDS database back to the latest state after a full regional outage. The Recovery Point Objective (RPO) must be no greater than 5 minutes, and the business also wants the shortest possible Recovery Time Objective (RTO). Which approach best meets these requirements?

  • Deploy the database manually in multiple Regions and handle data replication yourself

  • Enable a cross-Region read replica for the Amazon RDS instance and promote it during a disaster

  • Configure automatic backups of the Amazon RDS database to run every 6 hours

  • Create Amazon RDS snapshots every 12 hours
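
For reference, a cross-Region read replica replicates asynchronously (typically seconds of lag, comfortably inside a 5-minute RPO) and can be promoted quickly for a short RTO. A minimal boto3 sketch run in the destination Region (all identifiers are hypothetical):

```python
import boto3

# Client in the disaster-recovery Region.
rds_west = boto3.client("rds", region_name="us-west-2")

# Create the replica from the source instance's ARN in us-east-1.
rds_west.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:111122223333:db:orders-db",
    DBInstanceClass="db.r6g.large",
    SourceRegion="us-east-1",
)

# During a regional outage, promote the replica to a standalone primary:
rds_west.promote_read_replica(DBInstanceIdentifier="orders-db-replica")
```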

Question 18 of 20

Your company has a collection of historical data that is rarely accessed but must be retained for legal and auditing purposes. Which Amazon S3 storage class is the most cost-effective choice for this use case?

  • Amazon S3 One Zone-Infrequent Access

  • Amazon S3 Glacier Deep Archive

  • Amazon S3 Standard

  • Amazon S3 Intelligent-Tiering

Question 19 of 20

A company with multiple organizational accounts needs to provide its data analytics team, which operates in a dedicated account, with read-only access to specific object prefixes within a storage service managed by another account. What is the most secure way to configure this access?

  • Deploy network access control lists to enable selective object prefix traffic from the analytics team's account to the storage service.

  • Utilize key management service policies to allow the analytics team's data processing applications to decrypt read-access data.

  • Attach managed policies to the analytics team's user accounts that specify read permissions on the object prefixes in the storage service.

  • Craft a resource-based policy on the storage buckets to grant read privileges on the specified object prefixes to the analytics team's account.
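
For reference, a resource-based bucket policy can scope the analytics account to read-only access on specific prefixes: GetObject is granted on the prefixed objects, and ListBucket is constrained with the s3:prefix condition key. A minimal boto3 sketch (the bucket, prefix, and account ID are hypothetical):

```python
import boto3
import json

s3 = boto3.client("s3")

BUCKET = "data-lake-bucket"  # hypothetical bucket
ACCOUNT = "555566667777"     # hypothetical analytics account ID

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Read objects under the agreed prefix only.
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT}:root"},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/analytics/*",
        },
        {   # Allow listing, restricted to the same prefix.
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT}:root"},
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{BUCKET}",
            "Condition": {"StringLike": {"s3:prefix": "analytics/*"}},
        },
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```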

Question 20 of 20

A company uploads monthly reports to an S3 bucket. These reports are frequently accessed for the first 30 days, occasionally accessed for the next 60 days, and rarely thereafter. The company needs a cost-effective solution to store and access these reports. Which lifecycle policy should be applied to the objects to minimize storage costs while keeping them available for occasional access when needed?

  • Transition to S3 One Zone-Infrequent Access after 30 days and S3 Glacier Flexible Retrieval after 90 days

  • Transition to S3 Standard-Infrequent Access after 30 days and then to S3 Glacier Flexible Retrieval after 90 days

  • Keep the reports in S3 Standard without transitioning to another storage class

  • Use S3 Intelligent-Tiering without any transitions