AWS Certified Solutions Architect Associate Practice Test (SAA-C03)
Use the form below to configure your AWS Certified Solutions Architect Associate Practice Test (SAA-C03). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

AWS Certified Solutions Architect Associate SAA-C03 Information
AWS Certified Solutions Architect - Associate showcases knowledge and skills in AWS technology across a wide range of AWS services. The focus of this certification is on the design of cost- and performance-optimized solutions, demonstrating a strong understanding of the AWS Well-Architected Framework. This certification can enhance the career profile and earnings of certified individuals and increase their credibility and confidence in stakeholder and customer interactions.
The AWS Certified Solutions Architect - Associate (SAA-C03) exam is intended for individuals who perform a solutions architect role. The exam validates a candidate’s ability to design solutions based on the AWS Well-Architected Framework.
The exam also validates a candidate’s ability to complete the following tasks:
- Design solutions that incorporate AWS services to meet current business requirements and future projected needs
- Design architectures that are secure, resilient, high-performing, and cost optimized
- Review existing solutions and determine improvements
Free AWS Certified Solutions Architect Associate SAA-C03 Practice Test
Press start when you are ready, or press Change to modify any settings for the practice test.
- Questions: 15
- Time: Unlimited
- Included Topics: Design Secure Architectures, Design Resilient Architectures, Design High-Performing Architectures, Design Cost-Optimized Architectures
A business needs to serve its predominantly West Coast customer base with a newly developed web platform while ensuring content is delivered with the least possible delay. Which solution would be most effective for reducing latency when accessing the platform's content?
Elastic Compute Instances scaled across multiple regions
A dedicated network connection between on-premises and cloud infrastructure
A message queuing service to decouple application components
Content Delivery Network (CDN)
Answer Description
A content delivery network (CDN) specializes in reducing latency by caching content at edge locations closer to the end users. This service dramatically decreases the time taken to deliver content, regardless of the user's geographic location.
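For illustration, here is a minimal boto3 sketch that puts Amazon CloudFront (AWS's CDN) in front of a web origin. The origin domain name is a hypothetical placeholder, and the cache policy ID is the AWS-managed CachingOptimized policy:

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Create a CloudFront distribution in front of a (hypothetical) web origin.
# Edge locations cache responses close to end users, reducing latency.
response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # unique idempotency token
        "Comment": "CDN for the web platform",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "web-origin",
                "DomainName": "origin.example.com",  # hypothetical origin host
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "https-only",
                },
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "web-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # AWS-managed "CachingOptimized" cache policy
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        },
    }
)
print(response["Distribution"]["DomainName"])  # the *.cloudfront.net endpoint
```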
What is the impact on cost optimization when applying a throttling strategy to API requests within an AWS-hosted application?
It enhances application monitoring capabilities, resulting in preventative cost savings.
Throttling discourages use of the application, thereby reducing the number of compute resources required.
By slowing down request processing, throttling increases the execution duration and thereby reduces costs.
Throttling substantially decreases latency, thereby directly reducing compute resource usage and associated costs.
Throttling ensures that the infrastructure is not scaled up due to temporary spikes, avoiding increased costs associated with scaling.
Applying throttling converts the pricing model to a flat-rate billing, simplifying cost management.
Answer Description
A well-implemented throttling strategy prevents server overload by limiting the number of API requests a user can make over a given period. This prevents unexpected bursts of traffic from triggering autoscaling events or the provisioning of additional, potentially unnecessary resources, which would incur higher costs. Throttling ensures a balanced distribution of system load, maintaining steady performance, which can reduce compute costs and help keep spending within budget. The answer related to decreasing latency is incorrect because throttling can actually increase response times. The suggestion about flat-rate billing misunderstands AWS pricing models, which can include per-request costs or scale with utilization. Enhanced monitoring on its own does not directly result in cost savings, only increased visibility.
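As a concrete example, throttling limits can be attached to an Amazon API Gateway stage through a usage plan with boto3; the API ID and stage name below are hypothetical placeholders:

```python
import boto3

apigateway = boto3.client("apigateway")

# A usage plan caps the steady-state request rate and burst size for a stage,
# so traffic spikes are rejected with HTTP 429 instead of driving scale-out.
plan = apigateway.create_usage_plan(
    name="standard-tier",
    throttle={
        "rateLimit": 100.0,  # sustained requests per second
        "burstLimit": 200,   # maximum size of a short burst
    },
    apiStages=[{"apiId": "abc123", "stage": "prod"}],  # hypothetical values
)
print(plan["id"])
```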
An application running on Amazon EC2 instances requires low-latency access to storage for high-performance transactional workloads. Which AWS storage service and configuration should you use to meet these requirements?
Use Amazon EBS Throughput Optimized HDD (st1) volumes.
Use Amazon EFS with General Purpose performance mode.
Use Amazon S3 Standard storage.
Use Amazon EBS Provisioned IOPS SSD (io1 or io2) volumes.
Answer Description
Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD volumes (io1 or io2) provide consistent low-latency performance and high input/output operations per second (IOPS), making them ideal for high-performance transactional workloads that require rapid access to data. They are designed to handle intensive workloads that demand the highest performance. Amazon S3 is an object storage service with higher latency, unsuitable for transactional workloads requiring block storage. Amazon EFS offers shared file storage but may not provide the necessary low-latency performance for high IOPS workloads. Amazon EBS Throughput Optimized HDD (st1) volumes are designed for throughput-intensive workloads with large, sequential I/O rather than low-latency transactional workloads.
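As a sketch, a Provisioned IOPS volume can be created with boto3; the Availability Zone, size, and IOPS figures below are illustrative assumptions:

```python
import boto3

ec2 = boto3.client("ec2")

# Provisioned IOPS volumes let you set an IOPS floor independent of size,
# which is what latency-sensitive transactional workloads need.
volume = ec2.create_volume(
    AvailabilityZone="us-west-2a",  # must match the EC2 instance's AZ
    VolumeType="io2",
    Size=500,     # GiB
    Iops=16000,   # provisioned IOPS
)
print(volume["VolumeId"])
```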
Your company has a legacy application that needs to store and retrieve files using common storage protocols. Which service should be implemented to integrate these capabilities with cloud-based storage?
Elastic Block Store
Storage Gateway
DataSync
FSx
Answer Description
The correct service is Storage Gateway because it integrates on-premises applications with cloud storage by supporting common storage protocols such as NFS, SMB, and iSCSI. This enables a smooth extension of on-premises storage to the cloud, which is beneficial for cost-optimized storage strategies. DataSync is mainly focused on the efficient transfer of data between on-premises storage and the cloud and is not primarily used for real-time access. FSx provides fully managed third-party file systems (for example, Windows File Server and Lustre) but is not itself a hybrid storage solution. EBS provides block storage for EC2 instances and does not fit the hybrid storage integration scenario described.
A company needs to transfer 50 TB of data from its on-premises data center to cloud storage for archival purposes within one week. They have a high-speed internet connection with 500 Mbps upload bandwidth but want to minimize transfer costs. Which method should they choose to transfer the data to the cloud storage?
Use a data transfer service to transfer the data over the Internet.
Upload the data over the Internet using command-line tools.
Establish a dedicated network connection and transfer the data directly.
Ship the data using a physical data transport appliance like Snowball Edge.
Answer Description
Shipping the data using a physical data transport appliance like Snowball Edge provides the most cost-effective and time-efficient method for transferring large amounts of data to the cloud. It avoids the limitations of network bandwidth and reduces costs associated with prolonged Internet transfers. Transferring 50 TB over a 500 Mbps connection would take over 9 days, exceeding the one-week requirement, and could incur higher costs due to extended network usage. Using data transfer services over the Internet or uploading via command-line tools would face similar time constraints and potentially higher costs. Establishing a dedicated network connection is not feasible within a week and involves substantial setup costs.
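The 9-day figure follows from simple arithmetic, shown here as a quick Python check (assuming the full 500 Mbps is sustained, which real transfers rarely achieve):

```python
# Back-of-the-envelope transfer time for 50 TB over a 500 Mbps uplink.
data_bits = 50 * 10**12 * 8   # 50 TB (decimal) expressed in bits
rate_bps = 500 * 10**6        # 500 Mbps in bits per second
days = data_bits / rate_bps / 86400
print(f"{days:.2f} days")     # ~9.26 days -- past the one-week deadline
```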
What is the most cost-effective storage class for infrequently accessed objects which you intend to store for at least a year in Amazon S3?
Amazon EBS General Purpose SSD (gp2)
Amazon S3 Intelligent-Tiering
Amazon S3 Standard
Amazon S3 Glacier
Answer Description
Amazon S3 Glacier is the most cost-effective storage class for objects that are infrequently accessed and can tolerate retrieval times of several hours. It is designed for long-term storage, providing lower storage costs but higher retrieval costs compared with other S3 storage classes, making it an ideal choice for archiving data that is not accessed frequently. S3 Standard and S3 Intelligent-Tiering carry higher storage costs for data that is rarely touched, and Amazon EBS General Purpose SSD (gp2) is a block-storage volume type for EC2, not an S3 storage class.
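For example, an object can be written straight into the Glacier storage class with boto3 (the bucket and key names are hypothetical); in practice, a lifecycle rule that transitions aging objects is the more common pattern:

```python
import boto3

s3 = boto3.client("s3")

# Store an archival object directly in the Glacier storage class:
# low storage cost, but retrieval takes minutes to hours.
s3.put_object(
    Bucket="example-archive-bucket",   # hypothetical bucket
    Key="reports/2023-annual.pdf",
    Body=b"...archival payload...",
    StorageClass="GLACIER",
)
```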
A company is deploying a web application that will experience unpredictable, spiky traffic, with sudden surges during marketing events. The application must automatically adjust its compute capacity to maintain performance. Which of the following solutions will BEST meet these requirements?
Deploy the application on a fixed-size group of Amazon EC2 instances sized for peak load
Implement an Amazon EC2 Auto Scaling group with a target tracking scaling policy
Provision a single, large EC2 instance optimized for high compute power to handle the unexpected traffic
Use an Application Load Balancer without Auto Scaling to distribute traffic evenly to EC2 instances
Answer Description
An Amazon EC2 Auto Scaling group with a target tracking scaling policy is the best solution for this use case because it automatically adjusts the number of EC2 instances in response to the load on the application. Target tracking scaling policies adjust capacity to maintain a specific target for a selected metric, such as average CPU utilization or the number of requests per target. This suits unpredictable, spiky traffic because it provides elasticity, automatically increasing or decreasing the number of EC2 instances as required to meet the target. The other options, such as a fixed-size EC2 group or an Application Load Balancer without Auto Scaling, do not offer the same elasticity and cannot automatically adjust compute capacity in response to varying loads.
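A minimal boto3 sketch of such a policy, assuming an existing Auto Scaling group named web-asg (a hypothetical name):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking adds or removes instances to hold a metric near a set
# point -- here, average CPU utilization across the group at 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # hypothetical group name
    PolicyName="keep-cpu-near-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,
    },
)
```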
A financial technology company uses a NoSQL database service for storing real-time transactional data. They require a recovery mechanism that would allow them to retrieve their data to any given point within the last five weeks, in case of inadvertent deletions or corruptions. Which feature should be implemented to fulfill this operational demand?
Enable versioning on the database to allow for restoring previous versions of the data.
Implement the NoSQL database service's automated Point-In-Time Recovery feature.
Establish a routine for conducting continuous backups using a cloud-managed backup service.
Create a chronological sequence of data snapshots using the database's streaming feature.
Answer Description
The service's built-in point-in-time recovery (PITR) feature provides the ability to restore data to any second within the last 35 days, which meets the company's five-week requirement. This feature continuously captures changes to the data up to the last second before a failure, making it suitable for the scenario presented. The streaming feature, while useful for capturing data changes in real time, does not allow restoring data to an arbitrary point in time. Versioning typically pertains to object storage services and does not apply directly to NoSQL databases. A continuous backup solution is good practice, but it is distinct from the native point-in-time recovery capability designed specifically for the NoSQL service in question.
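On Amazon DynamoDB (the service the question alludes to), enabling PITR is a one-call change; the table names below are hypothetical:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Enable continuous backups with point-in-time recovery on an existing table.
dynamodb.update_continuous_backups(
    TableName="transactions",  # hypothetical table name
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Later, once PITR has been active, restore to any second within the
# recovery window (up to 35 days), for example:
# from datetime import datetime
# dynamodb.restore_table_to_point_in_time(
#     SourceTableName="transactions",
#     TargetTableName="transactions-restored",
#     RestoreDateTime=datetime(2024, 1, 15, 12, 0, 0),
# )
```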
An organization is deploying a web application that relies primarily on delivering static content with minimal dynamic data processing. To optimize for cost while still providing the required compute resources, which Amazon EC2 instance type would be BEST suited for this application?
r5.2xlarge
t3.medium
m5.xlarge
c5.large
Answer Description
When the primary task is to deliver static content, a balance of compute, memory, and networking resources is usually sufficient. T3 instances are burstable general-purpose instances that provide a baseline level of CPU performance with the ability to burst above the baseline when needed, making them cost-efficient for workloads with moderate CPU usage that spikes occasionally. For the web application described, where demand on CPU resources is not constantly or predictably high but may see occasional spikes, a t3.medium instance is likely to offer the best balance between cost and capability.
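A launch sketch with boto3; the AMI ID is a hypothetical placeholder (use a current Amazon Linux AMI for your Region):

```python
import boto3

ec2 = boto3.client("ec2")

# Launch a burstable general-purpose instance for the static-content site.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
)
```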
Your company needs to securely receive files via SFTP from external clients. They want to avoid managing any servers and minimize operational overhead. Which managed service should you use to meet these requirements?
AWS Storage Gateway.
AWS Transfer Family.
Amazon S3 Transfer Acceleration.
AWS DataSync.
Answer Description
AWS Transfer Family provides a fully managed service for file transfers directly into and out of Amazon S3 (or Amazon EFS) using the SSH File Transfer Protocol (SFTP), FTPS, and FTP. With AWS Transfer Family, you can set up an SFTP endpoint without managing any servers, minimizing operational overhead and eliminating the need to maintain infrastructure.
AWS DataSync is used for transferring large amounts of data online between on-premises storage and AWS storage services but does not provide SFTP endpoints.
AWS Storage Gateway helps integrate on-premises environments with cloud storage for hybrid deployments but doesn't offer SFTP functionality.
Amazon S3 Transfer Acceleration speeds up file uploads and downloads to and from Amazon S3 by leveraging Amazon CloudFront's globally distributed edge locations. It does not provide SFTP access or serverless file transfer capabilities.
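A minimal boto3 sketch of a service-managed SFTP endpoint backed by S3; the user name, role ARN, and bucket path in the commented step are hypothetical:

```python
import boto3

transfer = boto3.client("transfer")

# Create a fully managed SFTP endpoint; no servers to patch or scale.
server = transfer.create_server(
    Protocols=["SFTP"],
    IdentityProviderType="SERVICE_MANAGED",
    Domain="S3",  # transferred files land in Amazon S3
)
print(server["ServerId"])

# Each external client then gets a user mapped to an S3 path, e.g.:
# transfer.create_user(
#     ServerId=server["ServerId"],
#     UserName="client-a",                                       # hypothetical
#     Role="arn:aws:iam::123456789012:role/transfer-s3-access",  # hypothetical
#     HomeDirectory="/example-bucket/client-a",
#     SshPublicKeyBody="ssh-rsa AAAA...",
# )
```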
An enterprise requires a solution for their on-premises application to cache and efficiently access a frequently updated dataset that is primarily stored in Amazon S3. Which service should be implemented to reduce network latency when accessing these datasets?
AWS DataSync with a scheduled synchronization from Amazon S3 to the on-premises environment
Amazon Elastic File System (Amazon EFS) with an on-premises server caching layer
Amazon S3 Transfer Acceleration with a dedicated cache infrastructure
AWS Storage Gateway with a File Gateway configuration
Answer Description
AWS Storage Gateway with a File Gateway configuration is ideal for this scenario as it provides a hybrid storage service that enables your on-premises applications to seamlessly use AWS Cloud storage. File Gateway offers a local cache for low-latency access to often-accessed data, while less frequently accessed data is read directly from S3; thus, it satisfies the need for efficient caching and low-latency data access.
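As a sketch, once a File Gateway appliance is activated, an NFS share backed by an S3 bucket can be created with boto3; every ARN below is a hypothetical placeholder:

```python
import uuid
import boto3

sgw = boto3.client("storagegateway")

# Expose an S3 bucket to on-premises clients as an NFS share; the gateway
# appliance keeps a local cache of recently accessed data for fast reads.
share = sgw.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),  # idempotency token
    GatewayARN="arn:aws:storagegateway:us-west-2:123456789012:gateway/sgw-12345678",
    Role="arn:aws:iam::123456789012:role/file-gateway-s3-access",
    LocationARN="arn:aws:s3:::example-dataset-bucket",
)
print(share["FileShareARN"])
```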
A company uses multiple cloud accounts for different teams to isolate their resources. The company wants to enable team members from one account to access specific resources in another account while following security best practices and reducing management complexity. What is the MOST secure and efficient method to achieve this?
Set up an IAM role in the target account and allow users from the other account to assume the role.
Use the token service to generate temporary credentials and distribute them to the team needing access.
Establish a VPN connection between the accounts to share resources.
Create IAM users in the target account and share their credentials with the other team.
Answer Description
Setting up an IAM role in the target account with the necessary permissions and allowing users from the source account to assume this role is the most secure and efficient solution. This method leverages AWS Security Token Service (STS) to provide temporary credentials when users assume the role, adhering to AWS security best practices and reducing the need to manage multiple user accounts. Creating IAM users in the target account and sharing credentials is not secure and increases administrative overhead. Manually generating and distributing temporary credentials is not efficient and poses security risks compared to using roles. Establishing a VPN connection addresses network connectivity but does not provide the necessary access control.
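A sketch of the pattern with boto3; the role ARN belongs to a hypothetical target account:

```python
import boto3

sts = boto3.client("sts")

# From the source account, assume a role defined in the target account.
# The role's trust policy must list the source account as a principal.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/cross-account-access",  # hypothetical
    RoleSessionName="team-a-session",
)["Credentials"]

# Use the short-lived credentials to act in the target account.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```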
Your enterprise is scaling and plans to create separate environments for various departments. To ensure centralized management, consistent application of compliance requirements, and an automated setup process for these environments, which service should you leverage?
AWS Organizations
Amazon Inspector
AWS Config
AWS Control Tower
Answer Description
AWS Control Tower is the correct choice. It lets enterprises manage multiple environments by setting up a well-architected baseline (a landing zone), automating the provisioning of new accounts, and uniformly applying guardrails across all environments for security and compliance. While the other options provide account grouping (AWS Organizations), resource configuration auditing (AWS Config), or security assessment (Amazon Inspector), they do not offer the comprehensive solution needed for centralized governance and automated environment setup.
A healthcare company leverages a leading cloud service provider to host patient health data. Compliance standards require that the organization must have the ability to track who is accessing this data and any modifications made to it in real-time. Which tool would you choose to meet these stringent logging requirements?
Config
CloudTrail
Inspector
CloudWatch
Answer Description
CloudTrail is the appropriate choice for monitoring and logging user actions and resource changes over time, vital for handling sensitive information like PHI. It ensures that actions taken by users, roles, or AWS services are recorded, which is fundamental for maintaining stringent access logs required by compliance standards, such as those related to healthcare information.
The use of Inspector would not fulfill the requirement as its use case is security assessment for applications, not detailed logging of data access. Config helps monitor resource states and changes but does not provide the granular action-level logging required for compliance audits associated with healthcare data. CloudWatch primarily focuses on performance monitoring rather than user access and interaction logs.
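Note that object-level access (as opposed to control-plane calls) is captured by CloudTrail data events, which must be switched on explicitly. A boto3 sketch, with a hypothetical trail and bucket:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Management events cover control-plane activity; recording who reads or
# modifies individual objects requires data events on the trail.
cloudtrail.put_event_selectors(
    TrailName="phi-audit-trail",  # hypothetical trail name
    EventSelectors=[{
        "ReadWriteType": "All",   # log both reads and modifications
        "IncludeManagementEvents": True,
        "DataResources": [{
            "Type": "AWS::S3::Object",
            "Values": ["arn:aws:s3:::example-phi-bucket/"],  # hypothetical
        }],
    }],
)
```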
A financial analytics company expects its new web application to encounter unpredictable traffic, with occasional surges linked to market events. The engineering department must implement a solution that scales computational resources up or down in response to changing demand, optimizing for cost-efficiency. Which service should they leverage for this scenario?
Lambda
Auto Scaling
Fargate with manual scaling setup
Elastic MapReduce (EMR) with scaling policies
Answer Description
Auto Scaling is the correct answer because it dynamically adjusts the number of instances to cope with the load, maintaining performance while optimizing cost. Lambda, while it does scale automatically, is event-driven and designed for short-duration workloads, making it less suited to traditional web application hosting. Elastic MapReduce (EMR) focuses on data processing and big data use cases rather than fluctuating web application traffic. Fargate can scale containers, but the option specifies a manual scaling setup, which fails the requirement to adjust capacity automatically with demand.