AWS Certified Solutions Architect Associate Practice Test (SAA-C03)

AWS Certified Solutions Architect Associate SAA-C03 Information
AWS Certified Solutions Architect - Associate showcases knowledge and skills in AWS technology across a wide range of AWS services. The focus of this certification is on the design of cost- and performance-optimized solutions, demonstrating a strong understanding of the AWS Well-Architected Framework. This certification can enhance the career profile and earnings of certified individuals and increase their credibility and confidence in stakeholder and customer interactions.
The AWS Certified Solutions Architect - Associate (SAA-C03) exam is intended for individuals who perform a solutions architect role. The exam validates a candidate’s ability to design solutions based on the AWS Well-Architected Framework.
The exam also validates a candidate’s ability to complete the following tasks:
- Design solutions that incorporate AWS services to meet current business requirements and future projected needs
- Design architectures that are secure, resilient, high-performing, and cost optimized
- Review existing solutions and determine improvements
Free AWS Certified Solutions Architect Associate SAA-C03 Practice Test
- Questions: 15
- Time: Unlimited
- Included Topics: Design Secure Architectures, Design Resilient Architectures, Design High-Performing Architectures, Design Cost-Optimized Architectures
A company needs to transfer 100 TB of data from its on-premises data center to Amazon S3 quickly and cost-effectively. Which service should the company use to migrate the data?
AWS Snowball Edge Storage Optimized
AWS Storage Gateway
AWS Direct Connect
AWS DataSync
Answer Description
AWS Snowball Edge Storage Optimized is designed for transferring large amounts of data (up to petabytes) to AWS efficiently and securely. By physically shipping the Snowball Edge device to AWS, the company avoids the limitations and costs associated with network bandwidth. AWS DataSync can transfer data over the internet or a dedicated network connection, but moving 100 TB this way may be slower and incur higher network costs. AWS Storage Gateway enables hybrid storage integration but is not intended for one-time, bulk data migrations. AWS Direct Connect provides a dedicated network connection, but setting it up can be time-consuming and costly for a one-time transfer.
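For reference, a Snowball Edge Storage Optimized import job can be ordered through the AWS Snowball API. The sketch below uses boto3; the bucket, address ID, and role ARN are hypothetical placeholders.

```python
import boto3

# Minimal sketch of ordering a Snowball Edge Storage Optimized device
# for a one-time import into S3. All IDs/ARNs below are placeholders.
snowball = boto3.client("snowball")

response = snowball.create_job(
    JobType="IMPORT",                  # data flows from the device into AWS
    SnowballType="EDGE_S",             # Snowball Edge Storage Optimized
    SnowballCapacityPreference="T80",
    Resources={
        "S3Resources": [{"BucketArn": "arn:aws:s3:::example-migration-bucket"}]
    },
    AddressId="ADID-example",          # shipping address created beforehand
    RoleARN="arn:aws:iam::123456789012:role/SnowballImportRole",
    ShippingOption="SECOND_DAY",
    Description="100 TB on-premises to S3 migration",
)
print(response["JobId"])
```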
A company is designing a new application that requires a cost-effective storage solution for system backups and archival data. The backups will be retrieved infrequently and are not subject to strict performance requirements. Which AWS storage solution aligns MOST closely with these needs?
Amazon Simple Storage Service (S3) Standard
Amazon Elastic Block Store (EBS) Snapshot
Amazon Elastic File System (EFS) with lifecycle management
Amazon S3 Glacier
Answer Description
Amazon S3 Glacier is designed for long-term backup and archival storage with infrequent access patterns. It provides cost-efficient storage pricing and is most closely aligned with the requirement for storing backups and archival data that are infrequently accessed. Amazon EBS volumes are suited for block-level storage and are not optimized for infrequent access or archival purposes. Amazon EFS provides file storage solutions that can scale automatically, but it is more expensive than Amazon S3 Glacier for archival use cases. Amazon S3 Standard offers high durability, availability, and performance object storage for frequently accessed data, making it less cost-effective for archival storage when compared to Glacier.
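In practice, backups are often written to S3 and then transitioned to a Glacier storage class with a lifecycle rule. A minimal boto3 sketch, with a hypothetical bucket name and retention policy:

```python
import boto3

s3 = boto3.client("s3")

# Transition objects under backups/ to Glacier Flexible Retrieval after
# 30 days, then expire them after ~7 years (hypothetical retention policy).
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 2555},
            }
        ]
    },
)
```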
A multinational corporation seeks to fortify the security of the top-level user credentials across its numerous cloud accounts, where each account functions under its own operational domain. They intend to put into effect a two-step verification process for all top-level user logins and establish an automatic mechanism for monitoring any top-level credential usage in API calls. Which service should they utilize to automate the monitoring of such activities throughout all operational domains?
Amazon GuardDuty
AWS Config
AWS CloudTrail
AWS Identity and Access Management (IAM)
Answer Description
AWS CloudTrail is the correct answer because it enables logging of account actions and automated detection of top-level user API activity. It records events made within an account and can be set up to generate alerts when specific activities, including those by the top-level account user, are detected. The service known for configuration tracking is not suitable for monitoring account activities directly. The service responsible for identity management does not offer automated detection or alerting for specific user actions. The service focused on threat detection primarily monitors for unusual activity but is not specifically designed for tracking the use of top-level user credentials.
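A common implementation is a multi-Region trail plus an EventBridge rule that matches root-user API calls. A rough boto3 sketch, with placeholder names and ARNs:

```python
import json
import boto3

cloudtrail = boto3.client("cloudtrail")
events = boto3.client("events")

# Record API activity across all Regions of the account.
cloudtrail.create_trail(
    Name="org-activity-trail",
    S3BucketName="example-trail-logs-bucket",  # bucket policy must allow CloudTrail
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="org-activity-trail")

# Alert whenever the account's root credentials make an API call.
events.put_rule(
    Name="root-api-usage",
    EventPattern=json.dumps({
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {"userIdentity": {"type": ["Root"]}},
    }),
)
events.put_targets(
    Rule="root-api-usage",
    Targets=[{"Id": "alert", "Arn": "arn:aws:sns:us-east-1:123456789012:security-alerts"}],
)
```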
A company's application is deployed across multiple EC2 instances in an Auto Scaling group behind an Application Load Balancer. The company requires minimal downtime, even if an entire Availability Zone becomes unavailable. Which disaster-recovery approach BEST meets this requirement?
Implement a Backup and Restore strategy with daily backups.
Use an Active-Active deployment across two different cloud service providers.
Extend the Auto Scaling group to multiple Availability Zones within the same region.
Configure a Pilot Light environment in another region.
Answer Description
Extending the Auto Scaling group to span two or more Availability Zones within the same AWS Region keeps at least one healthy instance available if a zone fails. The ALB automatically routes traffic only to healthy AZs, and the Auto Scaling group launches replacement instances in the remaining zones, providing seamless failover with no manual intervention.
Backup and Restore and Pilot Light are cross-Region strategies; both need time to restore or scale infrastructure, resulting in higher recovery-time objectives for an AZ-level event.
An Active-Active deployment across two cloud providers can also achieve high availability, but it is significantly more complex and costly than required for protection against a single-AZ outage. AWS's own guidance recommends starting with Multi-AZ high availability before adopting multi-provider or multi-Region Active-Active designs.
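As a sketch of this setup, the Auto Scaling group is simply given subnets in more than one Availability Zone; a minimal boto3 example with placeholder subnet IDs, names, and ARNs:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Span the group across subnets in three different AZs so the ALB always
# has healthy targets if one zone fails. IDs/ARNs are placeholders.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=3,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222,subnet-ccc333",
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123"],
    HealthCheckType="ELB",             # replace instances the ALB marks unhealthy
    HealthCheckGracePeriod=300,
)
```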
A company is deploying a web application that must route client requests to different Amazon EC2 instances based on the URL path specified in the client's request. The application also needs to distribute traffic efficiently across multiple Availability Zones. Which AWS load balancing solution should the company use to meet these requirements?
Use a Gateway Load Balancer (GWLB) to manage traffic at the transport layer.
Use an Application Load Balancer (ALB) to direct traffic based on request metadata.
Use a Network Load Balancer (NLB) to distribute traffic based on network protocol data.
Use a Classic Load Balancer to distribute traffic evenly across instances.
Answer Description
An Application Load Balancer operates at the application layer (Layer 7 of the OSI model) and can direct traffic based on the content of the HTTP/HTTPS request, such as the URL path. This makes it suitable for routing client requests to different targets based on URL paths. It also supports distribution of traffic across multiple Availability Zones, ensuring high availability and scalability. Network Load Balancers make routing decisions based on network protocol data, which does not allow for routing based on URL paths. Gateway Load Balancers operate at the network layer (Layer 3 of the OSI model) and are designed for deploying, scaling, and managing virtual appliances such as firewalls, not for content-based routing. Classic Load Balancers provide basic load balancing across instances but lack advanced request routing capabilities based on URL paths that the Application Load Balancer offers.
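A minimal boto3 sketch of a path-based listener rule; the ARNs and path pattern are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Forward /api/* requests to a dedicated target group; everything else
# falls through to the listener's default action. ARNs are placeholders.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/web/abc/def",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/api/xyz",
    }],
)
```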
A retail company is decomposing a monolithic application into microservices. When a new order is placed, the OrderService must publish events so that the InventoryService, BillingService, and NotificationService can each process the event independently without the OrderService needing to know about the other services. The solution must minimize operational overhead and follow AWS best practices for a scalable, event-driven architecture. Which design meets these requirements?
Expose a REST endpoint in each downstream microservice through Amazon API Gateway. Configure the OrderService to make HTTP POST requests to every endpoint.
Publish order events to an Amazon SNS topic. Create a separate AWS Lambda function for each downstream microservice and subscribe each function to the SNS topic.
Store order events in an Amazon DynamoDB table. Use an Amazon EventBridge scheduled rule to query the table every minute and start AWS Step Functions workflows for processing.
Write events directly to a single Amazon SQS queue. Configure one AWS Lambda function to poll the queue and invoke the downstream services synchronously.
Answer Description
Publishing order events to an Amazon SNS topic and subscribing a separate AWS Lambda function for each downstream microservice provides high-throughput fan-out, loose coupling, and serverless compute with almost no infrastructure to manage. The REST-endpoint option is tightly coupled and synchronous; the single SQS queue with one Lambda function introduces a bottleneck and additional logic; and the scheduled EventBridge query of a DynamoDB table is inefficient and not event-driven.
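A rough boto3 sketch of the fan-out wiring, assuming the three Lambda functions already exist; the account ID and ARNs are placeholders:

```python
import json
import boto3

sns = boto3.client("sns")
awslambda = boto3.client("lambda")

topic_arn = sns.create_topic(Name="order-events")["TopicArn"]

# Fan out: each downstream service gets its own Lambda subscription.
for fn in ["InventoryService", "BillingService", "NotificationService"]:
    fn_arn = f"arn:aws:lambda:us-east-1:123456789012:function:{fn}"
    sns.subscribe(TopicArn=topic_arn, Protocol="lambda", Endpoint=fn_arn)
    awslambda.add_permission(          # let SNS invoke the function
        FunctionName=fn,
        StatementId="sns-invoke",
        Action="lambda:InvokeFunction",
        Principal="sns.amazonaws.com",
        SourceArn=topic_arn,
    )

# The OrderService only ever talks to the topic.
sns.publish(TopicArn=topic_arn, Message=json.dumps({"orderId": "12345"}))
```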
A healthcare organization operating in Country X must ensure that all patient data and backups physically remain inside the country to meet national data-residency laws. Country X already has an AWS Region. When deciding where to deploy its compute and storage workloads, which approach best satisfies the legal requirement with the least additional complexity?
Use AWS Global Accelerator so traffic always reaches the nearest Region automatically.
Store data in CloudFront edge locations that are inside Country X.
Choose an AWS Region in a neighboring country but restrict access with security groups and network ACLs.
Deploy workloads in the AWS Region that is located inside Country X and disable any optional cross-Region features.
Answer Description
Deploying resources in the AWS Region that is physically located inside Country X keeps customer data within national borders by default. All Availability Zones in a Region reside in the same country, and AWS systems are designed so that customer data does not leave the Region unless the customer enables cross-Region features (for example, S3 cross-Region replication). Choosing this Region therefore meets the residency mandate with minimal extra controls.
Using CloudFront edge locations does not control where the origin data is stored. Selecting a Region in a neighboring country clearly violates the residency requirement. AWS Global Accelerator optimizes traffic routing but cannot guarantee that data is stored only inside Country X.
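One common way to enforce residency across many accounts is an AWS Organizations service control policy keyed on the aws:RequestedRegion condition. A hypothetical sketch (the Region name is invented for Country X, and real policies usually exempt global services such as IAM):

```python
import json

# Hypothetical SCP: deny API calls outside the in-country Region.
# Real policies typically exempt global services (IAM, Organizations, etc.).
region_lock_scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideHomeRegion",
        "Effect": "Deny",
        "NotAction": ["iam:*", "organizations:*", "support:*"],
        "Resource": "*",
        "Condition": {"StringNotEquals": {"aws:RequestedRegion": "country-x-region-1"}},
    }],
}
print(json.dumps(region_lock_scp, indent=2))
```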
A financial services company runs periodic risk modeling simulations that are highly parallelizable and require a significant amount of compute power for a brief duration at the end of each month. Which of the following compute options would align BEST with the company's performance and cost-optimization needs?
Amazon EC2 Reserved Instances
Amazon EC2 T3 instances
Amazon EC2 Spot Instances
Amazon EC2 Dedicated Hosts
Answer Description
Amazon EC2 Spot Instances offer the most cost-effective approach to utilizing a significant amount of compute power for tasks that can be interrupted and have flexible start and end times, such as batch processing jobs or background tasks. Given that the company's workload is periodic, occurs at well-defined times, and can handle interruptions (by resuming simulations), Spot Instances provide the required compute capacity at lower costs than On-Demand or Reserved Instances. EC2 Dedicated Hosts are targeted towards licensing requirements and consistent performance. T3 instances, while providing burstable performance, may not offer consistent high performance throughout the simulation period. Both options are therefore less aligned with the company's combination of high-compute, cost-effective, and periodic processing needs.
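A minimal boto3 sketch of launching Spot capacity for the month-end batch run; the AMI ID, instance type, and counts are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Launch interruptible capacity for the month-end simulation batch.
# AMI, instance type, and counts are placeholders.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.4xlarge",
    MinCount=1,
    MaxCount=50,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
```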
Which service is designed to establish a private connection between the cloud environment and specific applications, helping minimize data transfer costs by avoiding the public internet?
VPC Peering
Internet Gateway
AWS PrivateLink
NAT Gateway
AWS Direct Connect
Answer Description
AWS PrivateLink establishes private connections between the internal network of the cloud environment and specific services or applications. This is cost-effective because it keeps the data within the Amazon network, avoiding the public internet, which can incur additional costs. Although AWS Direct Connect helps reduce costs for on-premises-to-cloud data transfer, it is not the primary service for connecting to individual applications within the cloud environment. VPC Peering is used for interconnecting networks but does not address application-specific routing needs. Internet Gateways and NAT Gateways facilitate public internet access, which runs counter to the goal of minimizing costs by avoiding public routes.
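A minimal boto3 sketch of creating an interface endpoint (PrivateLink); the endpoint-service name and all IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Interface endpoint (PrivateLink) to a provider's endpoint service.
# Service name, VPC, subnet, and security group IDs are placeholders.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    SubnetIds=["subnet-aaa111", "subnet-bbb222"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
```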
A company requires block storage for its low-latency interactive workloads that involve frequent random reads and small to medium-sized I/O operations. However, the budget is limited, and they must choose a volume type that aligns with their cost constraints. Which Amazon EBS volume type BEST meets the company's requirements for both performance and cost optimization?
Provisioned IOPS SSD (io1/io2) volumes
General Purpose SSD (gp3) volumes
Throughput Optimized HDD (st1) volumes
Cold HDD (sc1) volumes
Answer Description
General Purpose SSD (gp3) volumes strike the right balance of price and performance for most transactional or interactive workloads. They deliver single-digit-millisecond latency, include a baseline of 3,000 IOPS at no extra charge, and let you scale IOPS and throughput independently, making them cost-efficient for workloads with frequent random reads. Provisioned IOPS SSD (io1/io2) volumes can achieve higher performance but at a higher cost per GiB and per provisioned IOPS. Throughput Optimized HDD (st1) and Cold HDD (sc1) volumes are HDD-backed; they emphasize sequential throughput, not random I/O, and have higher latency, so they do not meet the workload's low-latency requirement.
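For illustration, a gp3 volume with its included baseline performance can be created as follows (boto3 sketch; the AZ and size are example values):

```python
import boto3

ec2 = boto3.client("ec2")

# gp3 includes 3,000 IOPS / 125 MiB/s at the base price; both can be
# raised independently of volume size if the workload later needs it.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=200,                # GiB
    VolumeType="gp3",
    Iops=3000,
    Throughput=125,          # MiB/s
)
```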
A company is migrating an on-premises application to AWS. The application requires shared storage that provides low-latency access to data and supports standard file system features like file locking and hierarchical directories. The data is frequently updated, and the solution should be scalable and cost-effective. Which AWS storage service is the MOST appropriate to meet these requirements?
Amazon Elastic File System (Amazon EFS)
Amazon Elastic Block Store (Amazon EBS)
Amazon Simple Storage Service (Amazon S3)
Amazon S3 Glacier
Answer Description
Amazon Elastic File System (Amazon EFS) is the most appropriate storage service for this scenario. EFS provides a scalable, fully managed Network File System (NFS) for use with AWS Cloud services and on-premises resources. It supports standard file system semantics such as file locking and hierarchical directories, which are essential for applications that require shared file storage. EFS is designed for low-latency access to data and scales automatically as files are added and removed, making it both scalable and cost-effective. Amazon Simple Storage Service (Amazon S3) is object storage and does not support file system semantics like file locking or hierarchical directories. Amazon Elastic Block Store (Amazon EBS) provides block storage for EC2 instances and does not offer shared storage across multiple instances unless using EBS Multi-Attach, which has limitations and may not suit shared file system needs. Amazon S3 Glacier is intended for archival storage and is not suitable for frequently accessed data requiring low-latency access.
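A minimal boto3 sketch of creating the file system plus per-AZ mount targets; subnet and security group IDs are placeholders:

```python
import boto3

efs = boto3.client("efs")

# Create the shared file system, then one mount target per AZ so
# instances in each zone can reach it over NFS. IDs are placeholders.
fs = efs.create_file_system(
    CreationToken="shared-app-storage",
    PerformanceMode="generalPurpose",
)
for subnet in ["subnet-aaa111", "subnet-bbb222"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet,
        SecurityGroups=["sg-0123456789abcdef0"],  # must allow NFS (TCP 2049)
    )

# On each EC2 instance the file system is then mounted like any NFS share:
#   sudo mount -t nfs4 -o nfsvers=4.1 fs-xxxx.efs.us-east-1.amazonaws.com:/ /mnt/shared
```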
A company is building an e-commerce application that must scale to handle sudden spikes in traffic during sales events. The application consists of a web front end that interacts with a back-end order processing system. The company wants to design an architecture that is scalable and minimizes the impact on other components if one component fails. What is the best architectural pattern to achieve this?
Deploy the application on a single large EC2 instance to handle peak loads.
Use a microservices architecture with services communicating via message queuing service such as Amazon SQS.
Implement a monolithic architecture on Amazon EC2 Auto Scaling groups.
Use a tightly coupled multi-tier architecture with direct service-to-service communication.
Answer Description
Using a microservices architecture where services communicate via Amazon Simple Queue Service (Amazon SQS) allows each component to function independently. This approach enhances scalability, since each service can scale separately to handle load, and ensures that if one component fails, it doesn't directly affect the rest of the architecture. Monolithic architectures, even when using Amazon EC2 Auto Scaling, are less flexible and can become bottlenecks. Deploying on a single large EC2 instance introduces a single point of failure and therefore lacks both scalability and fault tolerance. A tightly coupled multi-tier architecture increases the risk that a failure in one service will impact others due to direct dependencies.
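A minimal boto3 sketch of the queue-based decoupling, with a hypothetical queue name:

```python
import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="order-processing")["QueueUrl"]

# Front end enqueues the order and returns immediately.
sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps({"orderId": "12345"}))

# Back-end workers poll at their own pace; a failed consumer does not
# block the producer, and messages wait in the queue during spikes.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in messages.get("Messages", []):
    order = json.loads(msg["Body"])    # hand off to application logic here
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```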
Which AWS service is used primarily to control inbound and outbound traffic at the instance level within an Amazon VPC?
Network ACLs
Amazon GuardDuty
Route Tables
Security Groups
Answer Description
Security Groups in AWS are used to control inbound and outbound traffic at the instance level within an Amazon VPC. They act as a virtual firewall for EC2 instances to regulate traffic. Network ACLs, on the other hand, are used at the subnet level and not primarily for individual instances. Route tables are used to control the routing of traffic between subnets and the Internet, but do not directly control traffic at the instance level. Amazon GuardDuty is a threat detection service, not a traffic control mechanism.
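A minimal boto3 sketch of adding an inbound rule, with a placeholder group ID:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound HTTPS to the instance; security groups are stateful, so
# the response traffic is allowed automatically. Group ID is a placeholder.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "public HTTPS"}],
    }],
)
```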
A company has set a monthly AWS spending limit of USD 5,000. They want a service that lets them define this budget, continuously monitor actual and forecasted spend during the month, and automatically send email or SNS alerts if the forecast or actual cost is expected to exceed the limit. Which AWS service should they use?
AWS Cost Explorer
AWS Cost and Usage Report
AWS Budgets
AWS Compute Optimizer
Answer Description
AWS Budgets lets you create custom monthly (or other period) budgets, tracks both actual and forecasted costs, and sends notifications via email or Amazon SNS when a threshold is reached or is forecast to be exceeded. Cost Explorer is primarily for interactive visualization, the Cost and Usage Report provides raw data files, and AWS Compute Optimizer gives resource-rightsizing recommendations; none of these services lets you set automatic budget alerts.
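A minimal boto3 sketch of such a budget; the account ID and email address are placeholders:

```python
import boto3

budgets = boto3.client("budgets")

# USD 5,000 monthly cost budget that emails when the *forecast* crosses
# 100% of the limit. Account ID and address are placeholders.
budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "monthly-5000-usd",
        "BudgetLimit": {"Amount": "5000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "FORECASTED",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 100.0,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "finops@example.com"}],
    }],
)
```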
An e-commerce company needs a file storage solution accessible by multiple Amazon EC2 instances for processing transaction logs and analytics data in real time. The company expects the data volume to grow unpredictably and wants to avoid manual adjustments to storage capacity while minimizing costs. Which storage solution should the company implement to meet these requirements?
Implement Amazon Elastic File System (Amazon EFS) with storage auto scaling enabled.
Utilize instance store volumes and manage storage capacity manually.
Use Amazon Elastic Block Store (Amazon EBS) volumes and manually increase capacity as needed.
Deploy AWS Storage Gateway appliances in EC2 machines.
Answer Description
Amazon Elastic File System (Amazon EFS) automatically adjusts its capacity as data grows or shrinks. This provides seamless scalability without manual intervention and is cost-effective, since the company pays only for the storage used. Manually increasing Amazon Elastic Block Store (Amazon EBS) volumes requires ongoing management, and EBS does not support concurrent access from multiple EC2 instances as effectively as Amazon EFS. AWS Storage Gateway is intended for hybrid storage scenarios where on-premises applications seamlessly use cloud storage. While it is possible to deploy a Storage Gateway appliance on an EC2 instance to build a shared file system on top of Amazon storage, this architecture is complex and more costly, and offers no additional benefit over Amazon EFS for this use case.
Instance store volumes are ephemeral and not suitable for persistent storage needs.
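Because cost is a stated concern, an EFS lifecycle policy can additionally move cold files to the Infrequent Access storage class; a minimal boto3 sketch with a placeholder file system ID:

```python
import boto3

efs = boto3.client("efs")

# Move files not accessed for 30 days to EFS Infrequent Access, and
# back to Standard on first access. File system ID is a placeholder.
efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",
    LifecyclePolicies=[
        {"TransitionToIA": "AFTER_30_DAYS"},
        {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
    ],
)
```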