AWS Certified Solutions Architect Associate Practice Test (SAA-C03)
Use the form below to configure your AWS Certified Solutions Architect Associate Practice Test (SAA-C03). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 90 questions and set a time limit.

AWS Certified Solutions Architect Associate SAA-C03 Information
AWS Certified Solutions Architect - Associate showcases knowledge and skills in AWS technology across a wide range of AWS services. The focus of this certification is on the design of cost- and performance-optimized solutions, demonstrating a strong understanding of the AWS Well-Architected Framework. This certification can enhance your career profile and earnings and increase your credibility and confidence in stakeholder and customer interactions.
The AWS Certified Solutions Architect - Associate (SAA-C03) exam is intended for individuals who perform a solutions architect role. The exam validates a candidate’s ability to design solutions based on the AWS Well-Architected Framework.
The exam also validates a candidate’s ability to complete the following tasks:
- Design solutions that incorporate AWS services to meet current business requirements and future projected needs
- Design architectures that are secure, resilient, high-performing, and cost optimized
- Review existing solutions and determine improvements
Scroll down to see your responses and detailed results
Free AWS Certified Solutions Architect Associate SAA-C03 Practice Test
Press start when you are ready, or press Change to modify any settings for the practice test.
- Questions: 15
- Time: Unlimited
- Included Topics: Design Secure Architectures, Design Resilient Architectures, Design High-Performing Architectures, Design Cost-Optimized Architectures
A company is deploying an application that requires compliant data storage, redundancy across multiple data centers, and low-latency access for users in Europe. To accomplish this in a cost-optimized manner, they need to select an appropriate AWS Region. Which of the following options aligns best with these requirements?
Deploy the application in a single Availability Zone in the Frankfurt region to lower costs.
Deploy the application in two separate regions: the Ireland region and the London region to ensure redundancy.
Deploy the application across three Availability Zones in the Frankfurt region.
Deploy the application across Availability Zones in both the Frankfurt region and the Northern Virginia region to reduce latency for global access.
Answer Description
The Frankfurt region (eu-central-1) supports GDPR compliance and offers multiple Availability Zones for high availability and redundancy. Low-latency access for European users is also ensured given the region's central placement within Europe. Conversely, using US regions would conflict with the EU data residency requirements, and deploying across multiple regions could lead to a complex architecture that unnecessarily increases costs without significant benefits to the users in Europe.
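For illustration, a minimal boto3 sketch (assuming the library is installed and credentials are configured) that lists the Availability Zones the Frankfurt region exposes before committing to a multi-AZ deployment:

```python
import boto3

# Frankfurt (eu-central-1) exposes multiple Availability Zones, which is
# what makes a single-region, multi-AZ deployment redundant.
ec2 = boto3.client("ec2", region_name="eu-central-1")
response = ec2.describe_availability_zones(
    Filters=[{"Name": "state", "Values": ["available"]}]
)
for az in response["AvailabilityZones"]:
    print(az["ZoneName"], az["State"])
```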
Your organization needs a strategy that allows software engineers to manage AWS resources in an isolated development environment without creating individual user accounts in this environment. What is the most secure and maintainable way to achieve this, adhering to AWS best practices?
Distribute individual access credentials for the isolated development environment to each software engineer.
Create a role within the development environment with the required permissions that can be assumed by engineers from their originally authenticated accounts.
Prohibit all direct management of resources within the isolated development environment to maintain strict security.
Configure a shared service account that all software engineers use to manage resources in the development environment.
Answer Description
Implementing a cross-account access strategy by creating a role with the necessary permissions in the development environment that can be assumed by engineers from their existing accounts is in line with AWS best practices and securely facilitates the required access. This approach centralizes user management and allows for a streamlined permission model without creating additional users in multiple accounts. Assigning unique credentials for each development account would introduce unnecessary overhead and complicate credential management. Providing a shared service account is not a user-centric access solution and typically does not cater to individual permissions, risking over-privileged access. Lastly, restricting access entirely defeats the purpose of the requirement and does not provide a solution.
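As a rough sketch of how this works in practice, assuming boto3 and a hypothetical role named DevEnvironmentAccess in the development account, an engineer's existing credentials can request temporary credentials through AWS STS:

```python
import boto3

sts = boto3.client("sts")
assumed = sts.assume_role(
    # Hypothetical development-account role ARN
    RoleArn="arn:aws:iam::111122223333:role/DevEnvironmentAccess",
    RoleSessionName="engineer-session",
)
creds = assumed["Credentials"]

# Use the temporary credentials to act inside the development account.
dev_ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```

The role's trust policy controls which principals in the source account are allowed to assume it, which is how access stays centralized.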
A company requires minimal data loss and rapid recovery of its critical systems in the event of a disaster, but they operate with a limited budget for DR. Which of the following disaster recovery strategies would be most appropriate for this company's needs?
Pilot Light
Active-Active Failover
Backup and Restore
Warm Standby
Answer Description
The Pilot Light strategy is correct because it involves having a minimal version of an environment always running in the cloud. This strategy allows a company to rapidly scale up this environment to handle full production load in the event of a disaster, ensuring minimal data loss and rapid recovery. The Warm Standby strategy is more costly because it keeps a scaled-down but fully functional version of the environment running at all times. Neither Backup and Restore nor Active-Active fits the criteria as well as Pilot Light: Backup and Restore can involve more extensive data loss and longer recovery times, while Active-Active requires full production-scale environments in two locations, significantly increasing cost.
Which service should be utilized to manage user sign-up and sign-in functionalities, along with federated authentication, for a mobile application that requires integration with social login providers?
Amazon GuardDuty
AWS Control Tower
AWS Identity and Access Management (IAM)
Amazon Cognito
Answer Description
The correct answer is Amazon Cognito, which allows developers to add user sign-up, sign-in, and access control to their web and mobile applications quickly and easily. It also supports federated authentication with social identity providers, such as Facebook, Google, and Amazon, which is the functionality described in the question. The other services listed have different primary uses: AWS IAM is designed for secure AWS resource management, AWS Control Tower is for governance across multiple AWS accounts, and Amazon GuardDuty specializes in security threat detection and continuous monitoring.
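As a brief illustration, assuming a user pool with a hypothetical app client ID, the Cognito sign-up API can be called from a backend with boto3 like this:

```python
import boto3

cognito = boto3.client("cognito-idp", region_name="eu-west-1")
cognito.sign_up(
    ClientId="example-app-client-id",  # hypothetical user pool app client
    Username="jane@example.com",
    Password="CorrectHorse1!",
    UserAttributes=[{"Name": "email", "Value": "jane@example.com"}],
)
```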
Which Amazon S3 storage class is optimized for data that is rarely accessed but requires long-term retention for regulatory compliance purposes?
Amazon S3 Intelligent-Tiering
Amazon S3 Glacier
Amazon S3 Standard
Amazon S3 One Zone-IA
Answer Description
Amazon S3 Glacier is specifically optimized for data archiving, providing secure and durable storage for data that is rarely accessed. It is a cost-effective solution for long-term data retention, such as for regulatory compliance, because it offers lower storage costs compared to other storage classes intended for more frequent access. Amazon S3 Standard is designed for general-purpose storage of frequently accessed data. Amazon S3 Intelligent-Tiering is designed to optimize costs by automatically moving data between access tiers based on changing access patterns. Amazon S3 One Zone-IA provides low-cost storage for infrequently accessed data, but unlike Amazon S3 Glacier, it does not target long-term archival storage needs and retains data in only one Availability Zone.
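For example, an object can be written directly into the Glacier storage class at upload time; this boto3 sketch uses a hypothetical bucket and key:

```python
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-compliance-archive",  # hypothetical bucket
    Key="records/2023/audit-log.csv",
    Body=b"...archived record contents...",
    StorageClass="GLACIER",  # store in S3 Glacier for long-term retention
)
```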
Which service is commonly utilized to manage policies governing who can use cryptographic keys for securing data stored on the cloud?
Identity and Access Management
Secrets Manager
Certificate Manager
Key Management Service
Answer Description
The correct answer is AWS Key Management Service (KMS), which is specifically designed to create, control, and manage encryption keys and the policies that govern their use. It enables administrators to define user permissions and outline the scope of actions that can be performed with these keys. KMS is integral to managing the lifecycle of encryption keys and their accessibility, helping to uphold the principles of confidentiality and integrity in cloud data security.
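A minimal sketch of that policy model, assuming boto3 and hypothetical account and role ARNs: the key policy supplied at creation time determines who may administer the key and who may use it for cryptographic operations.

```python
import json
import boto3

kms = boto3.client("kms")

# Hypothetical key policy: the account root retains administration,
# and only the named role may use the key for crypto operations.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAccountAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            "Sid": "AllowUseOfTheKey",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/DataApp"},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey"],
            "Resource": "*",
        },
    ],
}

key = kms.create_key(Policy=json.dumps(policy), Description="App data key")
```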
A company's web application experiences unpredictable traffic, leading to underused instances during off-peak hours and performance degradation during traffic spikes. The application is hosted on EC2 instances and requires a solution that scales the compute resources effectively while minimizing cost. Which AWS service or feature should the solutions architect recommend to meet these requirements?
Purchase Reserved Instances to ensure capacity while reducing costs.
Utilize Spot Instances for cost savings during off-peak hours.
Implement EC2 Auto Scaling to automatically add or remove instances based on traffic demands.
Enable EC2 Hibernation for instances during off-peak traffic periods.
Answer Description
EC2 Auto Scaling is the correct choice because it automatically adjusts the number of EC2 instances in response to changing demand, scaling out during traffic peaks and scaling in during low traffic to save costs. Spot Instances, while cost-effective, are best for workloads with flexible start and end times and can be interrupted when EC2 reclaims the capacity, potentially leading to application unavailability during traffic spikes. EC2 Hibernation saves the instance's state to EBS, which is useful for preserving session state for long-running processes, but does not help with scaling. Reserved Instances provide cost savings for consistent workloads with predictable usage, not unpredictable traffic patterns.
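For instance, a target tracking policy (shown here as a boto3 sketch with a hypothetical Auto Scaling group name) keeps average CPU near a target by adding and removing instances automatically:

```python
import boto3

autoscaling = boto3.client("autoscaling")
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # hypothetical ASG name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # scale out/in to keep average CPU near 50%
    },
)
```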
Your client wishes to build a system where their web and mobile platforms can securely request information from a variety of upstream services. This system must support managing developer access, accommodate changes in the structure of requests, and offer mechanisms to limit the number of incoming requests per user. Which Amazon service should they implement to meet these requirements?
AWS Direct Connect
Amazon API Gateway
Amazon Simple Storage Service (S3)
Amazon Cognito
AWS Lambda
AWS Step Functions
Answer Description
The correct answer is Amazon API Gateway because it securely manages API requests, supports developer access control, handles request transformations, and enforces rate limiting. It integrates with AWS services like Lambda and Cognito, making it ideal for managing web and mobile API traffic. The incorrect options lack full API management capabilities: AWS Direct Connect is for private networking, S3 is for storage, and Cognito only handles authentication. Step Functions is for workflow automation, and Lambda executes backend logic but lacks API request management. While some of these services complement API Gateway, none provide a complete solution on their own.
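As one concrete example of the rate-limiting piece, a REST API usage plan can cap request rates and daily quotas; this boto3 sketch uses hypothetical limits and names:

```python
import boto3

apigw = boto3.client("apigateway")
plan = apigw.create_usage_plan(
    name="basic-tier",  # hypothetical plan name
    throttle={"rateLimit": 100.0, "burstLimit": 200},  # requests/second
    quota={"limit": 10000, "period": "DAY"},
)
# API keys issued to individual developers are then attached to the plan
# with create_usage_plan_key, enforcing per-user request limits.
```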
A company wants to deploy containerized applications using an open-source orchestration system they are familiar with. They require a managed cloud service that reduces operational overhead, ensures high availability across multiple Availability Zones (AZs), and is compatible with their existing workloads. Which managed service should they choose?
Deploy Kubernetes on EC2 instances.
AWS Fargate with Amazon ECS.
Amazon Elastic Container Service (ECS).
Amazon Elastic Kubernetes Service (EKS).
Answer Description
Amazon Elastic Kubernetes Service is a managed service that allows companies to run Kubernetes clusters on AWS without the need to manage the control plane components. It reduces operational overhead and provides high availability by distributing the control plane across multiple AZs. Deploying Kubernetes on EC2 instances would require the company to manage the control plane themselves, increasing operational effort. Amazon Elastic Container Service is a proprietary service that uses its own orchestration system, which may not be compatible with workloads designed for Kubernetes. AWS Fargate with Amazon ECS allows running containers without managing servers but uses Amazon ECS instead of an open-source orchestration system.
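A minimal sketch of cluster creation with boto3 (the role ARN and subnet IDs are placeholders); supplying subnets in at least two AZs is what lets EKS spread its managed control plane for high availability:

```python
import boto3

eks = boto3.client("eks")
eks.create_cluster(
    name="workloads",  # hypothetical cluster name
    roleArn="arn:aws:iam::111122223333:role/EksClusterRole",
    resourcesVpcConfig={
        # Subnets in at least two Availability Zones.
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
    },
)
```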
A company is required to keep financial records readily available for immediate access for two years, after which time the records may be archived. Which feature should be implemented to automate the transition of the data to a more cost-effective storage service once the two-year period concludes?
Tape backup procedures with Storage Gateway
Automated Backup retention with RDS
Snapshot policies in EBS
Lifecycle policies in S3
Answer Description
Lifecycle policies allow the automatic transition of objects between different storage classes. By setting up a policy, the financial records can remain in a standard storage class for immediate accessibility for two years, and then automatically transfer to a colder storage option such as Glacier or Glacier Deep Archive. This ensures compliance with access needs and cost optimization when the immediate access requirement expires.
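For example, a lifecycle rule that keeps objects in S3 Standard for two years (approximately 730 days) and then transitions them to Glacier Deep Archive could look like this boto3 sketch, with a hypothetical bucket and prefix:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-financial-records",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-two-years",
                "Status": "Enabled",
                "Filter": {"Prefix": "records/"},
                "Transitions": [
                    # Roughly two years, then move to deep archive.
                    {"Days": 730, "StorageClass": "DEEP_ARCHIVE"}
                ],
            }
        ]
    },
)
```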
Your application is hosted on a fleet of EC2 instances that serve HTTP/HTTPS traffic. The application experiences sudden and unpredictable spikes in traffic that can overwhelm a single server. Which type of load balancer should you use to ensure high availability and elasticity while also enabling advanced request routing capabilities?
AWS Global Accelerator
Network Load Balancer
Application Load Balancer
Classic Load Balancer
Answer Description
An Application Load Balancer (ALB) is specifically designed to handle HTTP/HTTPS applications and offers advanced request routing capabilities, distributing traffic based on the content of the request (such as the URL path or host header). This effectively manages sudden spikes in traffic by spreading requests across multiple EC2 instances, ensuring high availability and elasticity. A Network Load Balancer is typically used for TCP, UDP, and TLS traffic where extreme performance and static IP addresses are required. The Classic Load Balancer is a legacy option that provides basic load balancing across multiple EC2 instances but with less advanced routing capabilities than the ALB. AWS Global Accelerator primarily improves global application availability and performance by directing traffic to optimal endpoints over the AWS global network.
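As a small illustration of content-based routing, an ALB listener rule can forward requests whose path matches /api/* to a dedicated target group; the ARNs in this boto3 sketch are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")
elbv2.create_rule(
    # Placeholder listener and target group ARNs
    ListenerArn="arn:aws:elasticloadbalancing:eu-west-1:111122223333:listener/app/web/abc/def",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/api/123",
    }],
)
```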
A data analytics company processes daily logs that amount to 400 GB per day. The logs need to be stored for 30 days for compliance purposes and must be readily accessible for querying and analysis. The processing and analysis are performed on Amazon EC2 instances. The company seeks a cost-effective storage solution that provides quick access and minimal management overhead. As a Solutions Architect, which storage solution would you recommend for storing the logs?
Store the logs in Amazon S3 Standard storage class with Lifecycle policy to either adjust storage class or delete them after 30 days.
Use an Amazon EFS file system to store the logs.
Set up an EBS Provisioned IOPS SSD (io2) volume for each EC2 instance to store the logs.
Attach multiple EBS General Purpose SSD (gp3) volumes to the EC2 instances for log storage.
Answer Description
Storing the logs in Amazon S3 Standard is the most cost-effective and scalable solution. Amazon S3 offers durable, highly available storage with low-latency access, ideal for storing large amounts of data that need to be accessed quickly. It automatically scales to store the required amount (400 GB per day × 30 days = roughly 12 TB) without upfront capacity provisioning, reducing costs compared to block or file storage options. Amazon EFS is a managed file system ideal for shared access and persistent file storage, but it is more expensive than S3 for large-scale log storage and not optimized for high-volume data storage with lifecycle management. Amazon EBS is a block storage service primarily used for EC2 instance storage; it is not ideal for large-scale, long-term log storage due to its higher cost and manual management overhead compared to S3.
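The 30-day retention itself can be enforced with a lifecycle expiration rule rather than manual cleanup; a boto3 sketch with a hypothetical bucket and prefix:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-daily-logs",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-logs-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Expiration": {"Days": 30},  # delete once compliance window ends
            }
        ]
    },
)
```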
Which service is most appropriate for orchestrating and automating the transformation of data that is accumulated in a daily cycle?
Simple Storage Service (S3)
Athena
Kinesis
Glue
Answer Description
AWS Glue is the correct answer because it provides serverless data integration and excels at preparing, transforming, and loading data for analytics. It can orchestrate and automate these data preparation steps on a schedule, which fits the requirement for daily batch ingestion and processing. The other services are focused on object storage (S3), interactive querying (Athena), and real-time streaming (Kinesis), none of which inherently manage the data transformation process for daily batches.
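As a rough sketch, a Glue ETL job can be created and then run on a daily schedule with a scheduled trigger; the role, script location, and names below are hypothetical:

```python
import boto3

glue = boto3.client("glue")

# Hypothetical ETL job; role ARN and script location are placeholders.
glue.create_job(
    Name="daily-log-transform",
    Role="arn:aws:iam::111122223333:role/GlueJobRole",
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://example-scripts/transform.py",
    },
)

# Run the job once per day at 02:00 UTC.
glue.create_trigger(
    Name="daily-0200-utc",
    Type="SCHEDULED",
    Schedule="cron(0 2 * * ? *)",
    Actions=[{"JobName": "daily-log-transform"}],
    StartOnCreation=True,
)
```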
Which AWS component acts as a virtual firewall for controlling traffic at the instance level within an Amazon VPC?
Route Table
Subnet CIDR
Security Group
Network Access Control List (NACL)
Answer Description
A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. Security groups operate at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC can be assigned to a different set of security groups. Network Access Control Lists (NACLs) operate at the subnet level, so they are not the correct answer. Route tables direct network traffic, but do not control or filter traffic like a firewall. Subnet CIDRs are used for IP address allocation within a VPC and have no filtering capabilities.
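A short boto3 sketch (the VPC ID and names are hypothetical) showing both behaviors: a new security group starts with no inbound rules, and traffic is admitted only by explicitly authorizing ingress:

```python
import boto3

ec2 = boto3.client("ec2")

# A new group starts with no inbound rules: all inbound traffic is denied.
sg = ec2.create_security_group(
    GroupName="web-sg",  # hypothetical name
    Description="Allow HTTPS to web instances",
    VpcId="vpc-0123456789abcdef0",
)

# Explicitly open inbound HTTPS; everything else stays blocked.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }
    ],
)
```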
By default, all inbound traffic is allowed on a newly created security group in a Virtual Private Cloud (VPC).
This statement is true.
This statement is false.
Answer Description
By default, a newly created security group in a VPC denies all inbound traffic until you create inbound rules allowing it. This security measure ensures that no unintended services are exposed unless explicitly allowed by the architect or administrator. The 'deny all' default helps maintain a secure network posture, in line with the principle of least privilege.
Smashing!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.