AWS Certified Solutions Architect Associate Practice Test (SAA-C03)
Use the form below to configure your AWS Certified Solutions Architect Associate Practice Test (SAA-C03). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

AWS Certified Solutions Architect Associate SAA-C03 Information
AWS Certified Solutions Architect - Associate showcases knowledge and skills in AWS technology across a wide range of AWS services. The focus of this certification is on the design of cost- and performance-optimized solutions, demonstrating a strong understanding of the AWS Well-Architected Framework. This certification can enhance certified individuals' career profiles and earnings and increase their credibility and confidence in stakeholder and customer interactions.
The AWS Certified Solutions Architect - Associate (SAA-C03) exam is intended for individuals who perform a solutions architect role. The exam validates a candidate’s ability to design solutions based on the AWS Well-Architected Framework.
The exam also validates a candidate’s ability to complete the following tasks:
- Design solutions that incorporate AWS services to meet current business requirements and future projected needs
- Design architectures that are secure, resilient, high-performing, and cost optimized
- Review existing solutions and determine improvements
Scroll down to see your responses and detailed results
Free AWS Certified Solutions Architect Associate SAA-C03 Practice Test
Press start when you are ready, or press Change to modify any settings for the practice test.
- Questions: 15
- Time: Unlimited
- Included Topics: Design Secure Architectures, Design Resilient Architectures, Design High-Performing Architectures, Design Cost-Optimized Architectures
An e-commerce company runs a web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The application experiences unpredictable spikes in traffic during promotional events. The company wants to ensure that the application can handle these spikes automatically without manual intervention and minimize costs during periods of low traffic. Which scaling strategy should the solutions architect recommend?
Configure an Auto Scaling group with dynamic scaling policies based on target tracking.
Maintain a fixed number of EC2 instances at maximum capacity to handle peak loads.
Use larger instance types for the existing EC2 instances to accommodate more traffic.
Manually adjust the number of EC2 instances before each promotional event to handle increased traffic.
Answer Description
Configuring an Auto Scaling group with dynamic scaling policies based on target tracking allows the application to automatically adjust the number of EC2 instances in response to real-time demand. This strategy ensures sufficient capacity during traffic spikes and reduces unnecessary costs during low-traffic periods by terminating unneeded instances. Manually adjusting the instance count is not practical for unpredictable traffic and requires ongoing management. Using larger instance types (vertical scaling) increases capacity but may not handle extreme, unpredictable spikes. Maintaining maximum capacity at all times wastes resources during periods of low demand.
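For illustration, here is a minimal boto3 sketch of the recommended setup; the group name, launch template, subnets, and target group ARN are placeholders for resources assumed to already exist.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Create the Auto Scaling group that sits behind the existing ALB target group.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=20,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc123"],
)

# Target tracking policy: keep average CPU near 50%. The group adds instances
# during promotional spikes and removes them when traffic drops, without manual action.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)
```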
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI-generated content may display inaccurate information; always double-check anything important.
What is an Auto Scaling group in AWS?
What are dynamic scaling policies and how do they work?
How does target tracking differ from step scaling in AWS Auto Scaling?
A startup is building a mobile backend that needs to process asynchronous, unpredictable bursts of data. The data processing jobs are expected to complete within seconds and the workload does not require any server to be continuously running. Which compute solution should be recommended for optimal cost efficiency?
A service providing virtual servers to secure lower rates for continuous usage through an upfront payment.
A serverless platform that scales automatically with the invocation of code and only charges for the compute time used.
Utilizing spare computing capacity in the cloud at discounted rates for fault-tolerant and flexible applications.
A container management service that abstracts the server layer, charging for container task execution time.
Answer Description
Given the requirements of handling asynchronous and unpredictable bursts of data without a need for a constantly running server, a serverless approach is most cost-effective. The recommended service, AWS Lambda, allows the startup to run code in response to triggers and automatically manages the scaling. There is no charge when the code isn't running, which matches the sporadic workload and reduces operational costs. Other services like EC2, even with Spot Instances, wouldn't be cost-effective because of the potential for paying for idle time. Similarly, EC2 Reserved Instances would not fit the unpredictable pattern and would involve a long-term commitment. AWS Fargate offers task-based pricing but still incurs costs for the duration a task is running, which might not be as cost-effective for very short-lived tasks.
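As a sketch of the pay-per-use model, a hypothetical Lambda handler for these short-lived jobs might look like the following; the event shape and transform logic are assumptions for illustration, not part of the question.

```python
import json

def transform(record):
    # Placeholder for a processing job that completes within seconds.
    return {**record, "status": "done"}

def lambda_handler(event, context):
    # Lambda scales out automatically: concurrent invocations handle each burst,
    # and there is no charge between invocations.
    records = event.get("records", [])
    processed = [transform(r) for r in records]
    return {"statusCode": 200, "body": json.dumps({"processed": len(processed)})}
```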
Ask Bash
What is AWS Lambda and how does it work?
What are the advantages of using a serverless model?
What is the difference between AWS Lambda and EC2?
Your enterprise is scaling and plans to create separate environments for various departments. To ensure centralized management, consistent application of compliance requirements, and an automated setup process for these environments, which service should you leverage?
AWS Organizations
Amazon Inspector
AWS Control Tower
AWS Config
Answer Description
With AWS Control Tower, enterprises can manage multiple environments by setting up a well-architected landing zone, automating the provisioning of new accounts and environments, and uniformly applying guardrails (policy controls) across all environments for security and compliance. The other options do not offer this comprehensive, centralized governance with automated environment setup: AWS Organizations groups accounts and applies policies but does not automate environment provisioning on its own, Amazon Inspector focuses on vulnerability assessment, and AWS Config tracks resource configuration and compliance.
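As a hedged sketch, you can inspect which guardrails Control Tower has applied to an organizational unit with the boto3 controltower client; the OU ARN below is a placeholder, and the exact response shape may vary by SDK version.

```python
import boto3

controltower = boto3.client("controltower")

# Placeholder ARN of an organizational unit governed by Control Tower.
ou_arn = "arn:aws:organizations::111122223333:ou/o-example/ou-examplerootid"

# List the controls (guardrails) currently enabled on that OU.
response = controltower.list_enabled_controls(targetIdentifier=ou_arn)
for control in response.get("enabledControls", []):
    print(control.get("controlIdentifier"))
```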
Ask Bash
What is AWS Control Tower?
What are guardrails in AWS Control Tower?
How does AWS Control Tower differ from AWS Organizations?
A company is analyzing their requirements for processing extensive datasets that involve batch jobs and interactive analytics. They are looking for a managed environment that provides support for a myriad of big data and analytics frameworks without the need to maintain the infrastructure. Which service should they leverage for efficient data processing?
The managed platform for big data frameworks such as Hadoop and Spark
A service providing virtual servers for general computing needs
A managed database service typically used for structured data
A serverless event-driven compute service
Answer Description
Amazon EMR, a service that enables customers to run big data frameworks such as Hadoop, Spark, and HBase in a managed cluster environment, is the correct choice for processing extensive datasets that include batch jobs and interactive analytics. This service allows customers to focus on analytics without the operational burden of managing infrastructure. The other options may be used for computing, but they do not offer a managed environment for big data and analytics frameworks, thereby increasing the operational complexity for the given use case.
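A minimal boto3 sketch of launching a transient EMR cluster that runs a Spark batch job and then terminates; the release label, instance types, default roles, and S3 script path are placeholder values assumed to exist.

```python
import boto3

emr = boto3.client("emr")

# Launch a transient cluster: it runs one Spark step and terminates when done.
response = emr.run_job_flow(
    Name="analytics-batch",
    ReleaseLabel="emr-6.15.0",
    Applications=[{"Name": "Spark"}, {"Name": "Hadoop"}],
    Instances={
        "InstanceGroups": [
            {"Name": "primary", "InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    Steps=[{
        "Name": "spark-batch",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://example-bucket/jobs/etl.py"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",   # default roles assumed to be pre-created
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])
```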
Ask Bash
What are big data frameworks like Hadoop and Spark?
What does it mean to have a managed environment for big data?
Why is focusing on analytics important for businesses dealing with large datasets?
A software development company is deploying an application in AWS and has a development team with varying access requirements. Some developers need read-only access to certain resources, while others require full administrative access to different services. The company anticipates rapid team expansion and wants to manage permissions in a scalable and organized manner without over-provisioning access. What is the most effective way to implement a flexible authorization model to satisfy these needs?
Create roles for each permission set and have users assume these roles when needed.
Attach the necessary permissions directly to each individual user account.
Grant all users full access to resources to simplify management.
Organize users into groups based on permissions and assign policies to the groups.
Answer Description
Organizing users into groups based on their access requirements and attaching appropriate policies to these groups is the most effective approach. This method implements role-based access control (RBAC), which allows for centralized permission management, making it easier to assign or revoke access as team members join or leave. It also reduces the complexity of managing individual permissions for each user and minimizes the risk of granting unnecessary privileges. Attaching permissions directly to individual user accounts is not scalable as the team grows. Using roles for each permission set is more suitable for temporary or cross-account access than for managing team permissions. Granting all users full access goes against the principle of least privilege and poses security risks.
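A short boto3 sketch of the group-based model; the group names, user name, and chosen AWS managed policies are illustrative.

```python
import boto3

iam = boto3.client("iam")

# Group for developers who only need read-only access.
iam.create_group(GroupName="developers-readonly")
iam.attach_group_policy(
    GroupName="developers-readonly",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)

# Group for developers who administer a specific service (S3 in this example).
iam.create_group(GroupName="developers-s3-admin")
iam.attach_group_policy(
    GroupName="developers-s3-admin",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)

# New team members inherit the right permissions simply by joining a group.
iam.add_user_to_group(GroupName="developers-readonly", UserName="new.developer")
```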
Ask Bash
What is role-based access control (RBAC)?
Why is it not scalable to attach permissions directly to individual user accounts?
What are the security risks of granting all users full access to resources?
To enable a serverless code execution service in Account A to interact with object storage in Account B, which approach should be used to most securely grant the required permissions in line with best practices?
Configure the object storage in Account B to be publicly accessible and regulate access using resource-based policies that check the request origin.
Generate access keys for a user in Account B, store them as environmental variables for the serverless function in Account A, and use these keys within the function to access the object storage.
Create an IAM role in Account B with the proper permissions for object storage and establish a trust relationship allowing the serverless function's role in Account A to assume this role.
Set up a role in Account B granting full access to the object storage and define a broad trust policy that permits the assumption of this role by other identities, relying on additional service-specific policies in Account A to enforce restrictions.
Answer Description
The most secure way to provide the necessary permissions is by creating a role in Account B with the appropriate permissions to the object storage and setting up a trust relationship that allows the execution role of the function in Account A to assume this role when needed. This method upholds the practice of granting least privilege access. The other methods either do not provide the direct cross-account functionality required, may lead to unnecessary exposure of credentials, or do not properly adhere to the principle of least privilege.
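A minimal sketch of the assume-role pattern from the Lambda function in Account A; the role ARN, bucket name, and event key are placeholders, and the role in Account B is assumed to trust the function's execution role.

```python
import boto3

def lambda_handler(event, context):
    # Assume the role in Account B that grants scoped access to the bucket.
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::222233334444:role/s3-access-for-account-a",  # placeholder ARN
        RoleSessionName="lambda-cross-account",
    )["Credentials"]

    # Use the temporary credentials to read the object in Account B.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    obj = s3.get_object(Bucket="account-b-bucket", Key=event["key"])
    return obj["ContentLength"]
```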
Ask Bash
What is an IAM role and how does it work in AWS?
What does a trust relationship in IAM mean?
What is the principle of least privilege in the context of AWS?
An architect must devise a solution that allows a fleet of ephemeral compute instances to efficiently open and maintain connections to a relational database during unpredictable traffic spikes. Which service should the architect employ to ensure scalable and resilient database connectivity?
Create additional read replicas to distribute the load across multiple instances.
Initiate a Multi-AZ deployment strategy for the database to ensure connectivity.
Deploy a fully managed database proxy service for connection pooling.
Increase the compute capacity of the database instance to handle more connections.
Answer Description
Using a fully managed database proxy service allows ephemeral compute instances, such as AWS Lambda functions, to efficiently manage database connections. This service can absorb the variability in concurrent connections, smoothing out spikes in traffic and preserving backend database stability. It achieves this through connection pooling and multiplexing, which enhances application scalability and resilience without exhausting connections. While the other options may improve availability (Multi-AZ) or read performance (read replicas), they do not offer the connection-pooling benefits of a managed proxy service. Scaling the database instance vertically would not directly solve the problem of efficiently managing a large number of connections and could still lead to connection exhaustion under high traffic.
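A sketch of a Lambda function connecting through an RDS Proxy endpoint with IAM authentication; the proxy endpoint, database name, user, CA bundle path, and the pymysql dependency are assumptions for illustration.

```python
import boto3
import pymysql  # assumed MySQL client library packaged with the function

PROXY_ENDPOINT = "my-proxy.proxy-abc123.us-east-1.rds.amazonaws.com"  # placeholder

def lambda_handler(event, context):
    # Generate a short-lived IAM auth token instead of storing a password.
    token = boto3.client("rds").generate_db_auth_token(
        DBHostname=PROXY_ENDPOINT, Port=3306, DBUsername="app_user"
    )

    # The proxy pools and multiplexes connections, so many concurrent
    # invocations do not exhaust the database's connection limit.
    conn = pymysql.connect(
        host=PROXY_ENDPOINT, user="app_user", password=token,
        database="orders", ssl={"ca": "/opt/rds-ca-bundle.pem"},  # placeholder CA path
        connect_timeout=5,
    )
    with conn.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM orders")
        return cur.fetchone()[0]
```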
Ask Bash
What is a database proxy service and how does it work?
What is connection pooling and why is it important?
What are the benefits of using a fully managed service compared to self-hosting a proxy?
A startup is building a lightweight IoT backend that receives intermittent and unpredictable messages from devices in the field. The backend needs to process messages concurrently and scale precisely with incoming traffic to optimize costs. Management prefers a solution that minimizes operational overhead. Which of the following would provide the optimal balance of cost-efficiency and scalability?
Implement the backend with AWS Lambda to process messages with automatic scaling and no infrastructure management.
Provision Amazon EC2 Spot Instances to handle the message load and reduce costs by bidding on spare computing capacity.
Deploy the backend using Amazon ECS on AWS Fargate to smoothly scale with workload changes, minimizing server management.
Use AWS Batch to manage message processing jobs, relying on its queuing mechanism to handle bursts in traffic.
Answer Description
AWS Lambda is most suitable for the described scenario because it provides a serverless environment that can handle sporadic, concurrent requests without the need for server management. Lambda scales automatically and precisely with the incoming workload, ensuring that costs directly correlate to usage. While AWS Batch efficiently processes batch jobs, its job-queue scheduling may not align with unpredictable message arrival times, leading to possible delays or over-provisioning. Amazon EC2 Spot Instances are cost-effective but require ongoing capacity and scaling management, which does not match management's preference for low operational overhead. Amazon ECS on AWS Fargate could meet the needs but would typically cost more than Lambda for this workload, because Fargate charges for the vCPU and memory provisioned to tasks for as long as they run, even while waiting for messages.
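To see how costs track usage, here is an illustrative back-of-the-envelope estimate; the rates are approximate public Lambda prices and the traffic figures are hypothetical, so treat the numbers as a sketch rather than a quote.

```python
# Illustrative only: approximate Lambda rates; check current Regional pricing.
PRICE_PER_MILLION_REQUESTS = 0.20      # USD
PRICE_PER_GB_SECOND = 0.0000166667     # USD

invocations_per_month = 3_000_000      # bursty device traffic, hypothetical
avg_duration_seconds = 0.5             # jobs complete within seconds
memory_gb = 0.25                       # 256 MB function

request_cost = invocations_per_month / 1_000_000 * PRICE_PER_MILLION_REQUESTS
compute_cost = invocations_per_month * avg_duration_seconds * memory_gb * PRICE_PER_GB_SECOND

# Costs scale linearly with invocations: no traffic, no charge.
print(f"Requests: ${request_cost:.2f}  Compute: ${compute_cost:.2f}  "
      f"Total: ${request_cost + compute_cost:.2f} per month")
```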
Ask Bash
What is AWS Lambda and how does it work?
How does AWS Lambda ensure automatic scaling?
What are the advantages of using AWS Lambda for IoT applications?
An online retail application is being developed to handle erratic and potentially high numbers of incoming orders. Each order must be processed, but immediate processing is not required. Which approach would ensure the application remains scalable while maintaining loose coupling between the order reception and processing components?
Broadcasting each order to various services using Amazon Simple Notification Service (SNS) for simultaneous processing.
Insertion of order details into an Amazon DynamoDB table, triggering an AWS Lambda function to process the order.
Placement of orders into an Amazon Simple Queue Service (Amazon SQS) queue that allows deferred processing by a dedicated order management component.
Logging of each transaction to an Amazon Kinesis stream, with continuous processing by an auto-scaling group of virtual servers.
Immediate processing of orders through synchronous execution within the same service that captures the customer transactions.
Distribution of incoming orders via an Elastic Load Balancer (ELB) to an auto-scaled group of processing microservices.
Answer Description
By leveraging Amazon Simple Queue Service (Amazon SQS), the application can enqueue incoming orders, allowing the processing component to consume messages at a regulated pace. This introduces a layer that decouples the order intake from processing, thereby enhancing scalability as the message queue can absorb sudden bursts of traffic. Immediate execution through synchronous invocation leads to tight coupling and possible system overload during peak times. Other solutions, like streams or database triggers, do not offer the same level of decoupling and might introduce complexity that is unnecessary for this use case.
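A minimal boto3 sketch of the decoupled flow: the intake side enqueues orders and returns immediately, while a separate processor drains the queue at its own pace. The queue name and payload are illustrative.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="orders")["QueueUrl"]

def process_order(body):
    # Placeholder for the dedicated order management component's work.
    print("processing", body)

# Order intake: enqueue and return immediately, even during bursts of traffic.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": "1001", "total": 42.50}')

# Order processor: long-poll, handle, then delete each message once it is done.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    process_order(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```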
Ask Bash
What is Amazon Simple Queue Service (Amazon SQS)?
What does it mean for components to be loosely coupled?
Why is immediate processing through synchronous execution not suitable for this application?
A company is preparing to migrate its mission-critical relational database to AWS. The database is expected to have a high read and write throughput with predictable performance needs throughout the year. Which of the following demonstrates the BEST approach for database capacity planning to optimize costs?
Selecting Amazon RDS with Reserved Instances to ensure reserved capacity and cost savings.
Employing Amazon DynamoDB with On-Demand Capacity mode for flexible throughput without planning.
Leveraging EC2 Spot Instances to host the database, taking advantage of lower pricing for unused capacity.
Using Amazon Aurora with On-Demand Instances optimized for high performance.
Answer Description
Selecting Amazon RDS with Reserved Instances provides cost savings over on-demand instance pricing due to the commitment to a specific usage term. This approach is beneficial for databases with predictable performance needs, such as the one described, as it allows the company to plan its capacity and manage costs effectively. Using the on-demand pricing model would be less cost-efficient given the predictable, consistent demand. Although Aurora and DynamoDB provide performance benefits, they might not always translate to cost savings in cases where the workload is stable and the capacity is predictable, making them less optimal for this scenario. Spot instances are not suitable for mission-critical databases due to the possibility of interruption.
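A hedged boto3 sketch of reserving the steady baseline capacity; the instance class, engine, term, and Multi-AZ setting are placeholders for whatever the capacity plan calls for.

```python
import boto3

rds = boto3.client("rds")

# Find a 1-year Reserved Instance offering matching the planned DB class and engine.
offerings = rds.describe_reserved_db_instances_offerings(
    DBInstanceClass="db.r6g.xlarge",   # placeholder instance class
    ProductDescription="postgresql",   # placeholder engine
    Duration="31536000",               # one year, in seconds
    MultiAZ=True,
)["ReservedDBInstancesOfferings"]

# Purchase the first matching offering to cover the known, steady workload.
if offerings:
    rds.purchase_reserved_db_instances_offering(
        ReservedDBInstancesOfferingId=offerings[0]["ReservedDBInstancesOfferingId"],
        DBInstanceCount=1,
    )
```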
Ask Bash
What are Reserved Instances in Amazon RDS?
What is Amazon RDS and how does it differ from Amazon Aurora?
Why are Spot Instances not suitable for mission-critical databases?
A web application has a baseline usage pattern with predictable spikes during weekends. The spikes can be 5 times the normal load, and the application must scale accordingly. The company wishes to optimize costs while ensuring availability during peak times. Which combination of purchasing options should a Solutions Architect recommend for the most cost-effective architecture?
A full Savings Plan to cover all instance usage
Reserved Instances for baseline traffic and On-Demand Instances for spikes
Using On-Demand Instances for both baseline and peak loads
A combination of Spot Instances for baseline traffic and On-Demand Instances for spikes
Answer Description
For predictable baseline usage, Reserved Instances provide a significant discount compared to On-Demand pricing in exchange for a commitment to a consistent amount of compute capacity. To handle the variable peak loads during weekends, On-Demand Instances or Spot Instances can be used. Given the need for availability during peak times, relying solely on Spot Instances which can be interrupted is risky; therefore, a mix of Reserved Instances for the baseline and On-Demand Instances to handle peaks can be the most cost-effective and reliable approach as it provides cost savings for the predictable demand and flexibility to scale with reliable instances during spikes.
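An illustrative cost comparison with hypothetical hourly rates; real numbers depend on instance type, Region, and the Reserved Instance term, but the shape of the saving is the point.

```python
# Hypothetical rates for illustration only.
ON_DEMAND_RATE = 0.10          # USD per instance-hour
RESERVED_RATE = 0.06           # effective USD per instance-hour with a 1-year RI

baseline_instances = 4
peak_instances = 20            # 5x baseline during weekend spikes
weekday_hours = 5 * 24 * 52    # hours per year outside weekends
weekend_hours = 2 * 24 * 52    # weekend hours per year

# Reserved Instances cover the always-on baseline; On-Demand covers weekend extras.
mixed = (baseline_instances * (weekday_hours + weekend_hours) * RESERVED_RATE
         + (peak_instances - baseline_instances) * weekend_hours * ON_DEMAND_RATE)

# Running everything On-Demand, for comparison.
all_on_demand = (baseline_instances * weekday_hours
                 + peak_instances * weekend_hours) * ON_DEMAND_RATE

print(f"Mixed RI + On-Demand: ${mixed:,.0f}/yr  vs  all On-Demand: ${all_on_demand:,.0f}/yr")
```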
Ask Bash
What are Reserved Instances and how do they work?
What are On-Demand Instances and when should I use them?
Can you explain what Spot Instances are?
Your organization is using a leading cloud provider's services for application development and hosting. You are tasked with ensuring the adherence to the shared responsibility model for security. Which of the following tasks falls within your organization's scope rather than the cloud provider's?
Ensuring the underlying software that manages virtualization is up-to-date with security patches
Updating the physical network devices that are part of the dedicated cloud infrastructure
Maintaining the physical hardware on which cloud services operate
Implementing encryption for client-side data before storage in object storage services
Answer Description
According to the shared responsibility model for security in the cloud, while the provider is responsible for the security of the cloud infrastructure (computing hardware, storage, and networking), the customer is responsible for securing the data processed and stored within the cloud environment. This extends to client-side data encryption, which is the customer's duty, and not the provider's.
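A small sketch of that customer-side duty: encrypting data before it is uploaded, using an assumed third-party library (cryptography) and a placeholder bucket name.

```python
import boto3
from cryptography.fernet import Fernet  # assumed dependency: pip install cryptography

# Customer responsibility: encrypt data client-side before it leaves your environment.
key = Fernet.generate_key()              # manage this key through your own key-handling process
ciphertext = Fernet(key).encrypt(b"account=1234, balance=99.10")

# Only ciphertext is sent to the object store; the provider never sees plaintext.
boto3.client("s3").put_object(
    Bucket="example-bucket",             # placeholder bucket name
    Key="records/statement-2024.bin",
    Body=ciphertext,
)
```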
Ask Bash
What is the shared responsibility model in cloud security?
Why is client-side data encryption important?
What are some examples of customer responsibilities in cloud security?
An emerging fintech startup requires a database solution for processing and storing large volumes of financial transaction records. Transactions must be quickly retrievable based on the transaction ID, and new records are ingested at a high velocity throughout the day. Consistency is important immediately after transaction write. The startup is looking to minimize costs while ensuring the database can scale to meet growing demand. Which AWS database service should the startup utilize?
Amazon RDS with Provisioned IOPS
Amazon DynamoDB with on-demand capacity
Amazon DocumentDB
Amazon Neptune
Answer Description
Amazon DynamoDB is the optimal solution for this use case because it provides a NoSQL database that scales automatically to accommodate high ingest rates of transaction records. It is designed for applications that require consistent, single-digit-millisecond latency at any scale. Additionally, DynamoDB offers strongly consistent reads as an option, ensuring that a read issued after a write reflects that write. In contrast, RDS is better suited for structured data requiring relational capabilities, Neptune is tailored for graph database use cases, and DocumentDB is optimized for JSON document storage, which, while capable of handling key-value pairs, is not as cost-effective or performant for this specific scenario as DynamoDB.
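A minimal boto3 sketch of the pattern: an on-demand table keyed on the transaction ID, a write, and a strongly consistent read of that write. Table and attribute names are illustrative.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# On-demand capacity: no throughput planning, pay per request.
dynamodb.create_table(
    TableName="transactions",
    AttributeDefinitions=[{"AttributeName": "transaction_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "transaction_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
dynamodb.get_waiter("table_exists").wait(TableName="transactions")

# High-velocity ingest keyed on the transaction ID.
dynamodb.put_item(
    TableName="transactions",
    Item={"transaction_id": {"S": "txn-0001"}, "amount": {"N": "250.00"}},
)

# Strongly consistent read: reflects the write above immediately.
item = dynamodb.get_item(
    TableName="transactions",
    Key={"transaction_id": {"S": "txn-0001"}},
    ConsistentRead=True,
)["Item"]
print(item)
```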
Ask Bash
What is DynamoDB and how does it differ from RDS?
What are the benefits of using on-demand capacity in DynamoDB?
What is strong consistency in DynamoDB and why is it important?
The development team at an e-commerce company is designing a checkout microservice which would manage shopping cart sessions for users. The microservice must be capable of maintaining a user's shopping cart contents between sessions without compromising on the system's overall resilience and scalability. What architectural design will best meet these requirements while adhering to microservices best practices?
Utilize an AWS RDS instance to store user session states within a relational database schema.
Implement an external session store to manage the shopping cart session state, allowing the microservice to remain stateless.
Store the session states in in-memory storage within the microservice to facilitate quick access.
Design the microservice to be stateful, so it directly manages user shopping cart sessions.
Answer Description
Leveraging an external session store, such as DynamoDB or ElastiCache, to manage the session state outside the microservice allows for statelessness within the microservice itself. This approach enables the microservice to remain stateless while persisting user session data, thereby maintaining scalability and resilient failover capabilities. Implementing a stateful microservice that retains user sessions would hinder scalability and complicate failover processes. Storing session states in a relational database like RDS could introduce unnecessary complexity and potential single points of failure, and relying solely on in-memory storage would risk data loss in the event of a service disruption or restart, thus failing to persist the state between sessions.
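A short sketch of the stateless pattern using DynamoDB as the external session store; the table (with TTL enabled on the expires_at attribute) is assumed to exist, and names are placeholders.

```python
import time
import boto3

# Assumed existing table keyed on session_id with TTL enabled on expires_at.
table = boto3.resource("dynamodb").Table("cart-sessions")

def put_cart(session_id, items):
    # Session state lives outside the service, so any instance can serve any user.
    table.put_item(Item={
        "session_id": session_id,
        "items": items,
        "expires_at": int(time.time()) + 7 * 24 * 3600,  # auto-expire stale carts
    })

def get_cart(session_id):
    return table.get_item(Key={"session_id": session_id}).get("Item", {}).get("items", [])

put_cart("sess-42", [{"sku": "book-123", "qty": 1}])
print(get_cart("sess-42"))
```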
Ask Bash
What is meant by a 'stateless' microservice?
How do external session stores like DynamoDB or ElastiCache work?
Why is it important for microservices to maintain statelessness for scalability and resilience?
A company needs to transfer 50 TB of data from its on-premises data center to cloud storage for archival purposes within one week. They have a high-speed internet connection with 500 Mbps upload bandwidth but want to minimize transfer costs. Which method should they choose to transfer the data to the cloud storage?
Upload the data over the Internet using command-line tools.
Use a data transfer service to transfer the data over the Internet.
Ship the data using a physical data transport appliance like Snowball Edge.
Establish a dedicated network connection and transfer the data directly.
Answer Description
Shipping the data using a physical data transport appliance like Snowball Edge provides the most cost-effective and time-efficient method for transferring large amounts of data to the cloud. It avoids the limitations of network bandwidth and reduces costs associated with prolonged Internet transfers. Transferring 50 TB over a 500 Mbps connection would take over 9 days, exceeding the one-week requirement, and could incur higher costs due to extended network usage. Using data transfer services over the Internet or uploading via command-line tools would face similar time constraints and potentially higher costs. Establishing a dedicated network connection is not feasible within a week and involves substantial setup costs.
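A quick back-of-the-envelope check of the timing claim, using decimal terabytes and assuming the full 500 Mbps upload link is available around the clock:

```python
# 50 TB over a 500 Mbps link, best case.
data_bits = 50 * 10**12 * 8          # 50 TB expressed in bits
bandwidth_bps = 500 * 10**6          # 500 Mbps upload bandwidth

seconds = data_bits / bandwidth_bps
print(f"{seconds / 86400:.1f} days")  # ~9.3 days, beyond the one-week window
```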
Ask Bash
What is Snowball Edge?
Why is using a physical appliance like Snowball Edge more cost-effective than transferring over the Internet?
What are the limitations of transferring large data directly over the Internet?
Cool beans!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.