AWS Certified Solutions Architect Associate Practice Test (SAA-C03)
Use the form below to configure your AWS Certified Solutions Architect Associate Practice Test (SAA-C03). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 90 questions and set a time limit.

AWS Certified Solutions Architect Associate SAA-C03 Information
AWS Certified Solutions Architect - Associate showcases knowledge and skills in AWS technology across a wide range of AWS services. The certification focuses on designing cost- and performance-optimized solutions and demonstrates a strong understanding of the AWS Well-Architected Framework. It can enhance the career profile and earnings of certified individuals and increase their credibility and confidence in stakeholder and customer interactions.
The AWS Certified Solutions Architect - Associate (SAA-C03) exam is intended for individuals who perform a solutions architect role. The exam validates a candidate’s ability to design solutions based on the AWS Well-Architected Framework.
The exam also validates a candidate’s ability to complete the following tasks:
- Design solutions that incorporate AWS services to meet current business requirements and future projected needs
- Design architectures that are secure, resilient, high-performing, and cost optimized
- Review existing solutions and determine improvements
Free AWS Certified Solutions Architect Associate SAA-C03 Practice Test
Press start when you are ready, or press Change to modify any settings for the practice test.
- Questions: 15
- Time: Unlimited
- Included Topics: Design Secure Architectures, Design Resilient Architectures, Design High-Performing Architectures, Design Cost-Optimized Architectures
A financial services company runs periodic risk modeling simulations that are highly parallelizable and require a significant amount of compute power for a brief duration at the end of each month. Which of the following compute options would align BEST with the company's performance and cost-optimization needs?
Amazon EC2 Spot Instances
Amazon EC2 T3 instances
Amazon EC2 Reserved Instances
Amazon EC2 Dedicated Hosts
Answer Description
Amazon EC2 Spot Instances are the most cost-effective way to obtain a large amount of compute power for workloads that can tolerate interruptions and have flexible start and end times, such as batch processing jobs or background tasks. Because the company's simulations are highly parallelizable, run only at well-defined times each month, and can be resumed after an interruption, Spot Instances deliver the required capacity at a lower cost than On-Demand or Reserved Instances. EC2 Dedicated Hosts are targeted at licensing requirements and host-level isolation, and T3 instances, while burstable, may not sustain consistently high performance throughout the simulation period, so neither option fits the company's combination of high-compute, cost-sensitive, periodic processing.
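For illustration only, here is a minimal boto3 sketch of launching interruptible Spot capacity for a batch run; the AMI ID, instance type, fleet size, and region are hypothetical placeholders rather than values from the question.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request interruptible Spot capacity for the monthly simulation run.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5.4xlarge",          # compute-optimized for parallel workloads
    MinCount=1,
    MaxCount=10,                        # fleet of interruptible workers
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
print([i["InstanceId"] for i in response["Instances"]])
```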
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI-generated content may display inaccurate information; always double-check anything important.
What are Amazon EC2 Spot Instances?
How do Spot Instances compare to On-Demand and Reserved Instances?
What workloads are best suited for Spot Instances?
Using a database proxy can enable more efficient connection pool management and reduce the database load caused by idle connections.
False
True
Answer Description
The statement is accurate because a database proxy sits between the application and the database, managing connections by multiplexing and pooling them. This reduces the number of direct connections to the database, which in turn minimizes the overhead caused by a high number of idle connections. Proxies can handle the creation, reuse, and termination of connections more efficiently than if each application instance handled connections itself, thus improving overall performance and scaling of the database service.
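As a rough, hedged sketch of the idea, the snippet below creates an Amazon RDS Proxy with boto3, assuming a MySQL engine family; the proxy name, Secrets Manager ARN, IAM role, and subnet IDs are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# All ARNs and subnet IDs below are placeholders.
rds.create_db_proxy(
    DBProxyName="app-db-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::123456789012:role/rds-proxy-role",
    VpcSubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
    IdleClientTimeout=1800,   # drop idle client connections after 30 minutes
    RequireTLS=True,
)
```

The application then connects to the proxy endpoint instead of the database directly, so the proxy can pool and multiplex connections on its behalf.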
Ask Bash
What exactly is a database proxy and how does it work?
What is connection pooling and why is it important?
What are idle connections and how do they affect database performance?
Your application requires a managed storage solution to serve large media files to users globally with low latency. The files are accessed frequently and follow a write-once-read-many access model. Which storage service is BEST suited to meet these requirements?
Amazon Elastic Block Store (EBS), due to its block storage capabilities suitable for high-performance workloads
Amazon S3, due to its global reach, durability, and integration with content delivery networks
Amazon Elastic File System (EFS), because of its file-based storage that can scale automatically to petabytes of data
Amazon Glacier, for long-term archival of data accessed infrequently
Answer Description
Amazon Simple Storage Service (Amazon S3) is the most appropriate storage service for serving large media files globally with low latency. As an object storage service, it is designed to store and retrieve any amount of data from anywhere. S3 is well-suited for the write-once-read-many access model, which matches the requirement of updating files infrequently and serving them frequently. Moreover, when combined with Amazon CloudFront for content distribution, it can deliver media files with low latency to users worldwide due to CloudFront's edge locations.
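A minimal sketch, assuming a hypothetical bucket and file name, of uploading a media object to S3 that a CloudFront distribution (configured separately, with the bucket as its origin) would then serve from edge locations:

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket and key; in practice the bucket would be the origin
# of a CloudFront distribution that serves the files from edge locations.
bucket = "media-assets-example"
key = "videos/intro.mp4"

s3.upload_file("intro.mp4", bucket, key, ExtraArgs={"ContentType": "video/mp4"})

# The object would then be requested through the distribution's domain, e.g.:
# https://d1234abcd.cloudfront.net/videos/intro.mp4  (hypothetical domain)
```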
Ask Bash
What is Amazon S3 and how does it work?
What is the write-once-read-many (WORM) model, and why is it suitable for S3?
What role does Amazon CloudFront play in improving media file delivery?
A company has deployed a critical database-backed web application in the Northern Virginia region. The operational requirements dictate that the application must be able to recover from a region-wide service disruption and be operational within 4 hours, with data loss not exceeding 15 minutes. What disaster recovery approach would effectively meet these operational objectives?
Create a scaled-down version of the environment in a secondary region, backed up by snapshots on a daily schedule.
Regularly schedule snapshots of the database to a durable storage service with cross-region copying enabled.
Maintain a synchronous standby replica within the same geographical area to ensure instant failover without data loss.
Configure multi-regional deployment with data replication to a secondary region and establish a system for automated failover.
Answer Description
Deploying database instances across multiple Regions with a mechanism for data replication satisfies both the RTO and RPO objectives. This configuration keeps replication lag minimal and allows quick failover when needed, which is essential for meeting the 4-hour recovery time and 15-minute data-loss limits. A strategy restricted to the same Region does not protect against regional disruptions. Periodic backups without continuous replication are likely to exceed the 15-minute data-loss limit, especially if backups are infrequent. A basic disaster recovery setup such as a pilot light does not provide the continuous replication or fast enough failover needed to meet these objectives.
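One hedged way to obtain continuous cross-Region replication for a managed relational database is an RDS cross-Region read replica; the sketch below is illustrative only, assumes the primary instance lives in us-east-1, and uses placeholder identifiers throughout.

```python
import boto3

# Call the API in the destination (DR) region and point at the source DB ARN.
rds_dr = boto3.client("rds", region_name="us-west-2")

rds_dr.create_db_instance_read_replica(
    DBInstanceIdentifier="webapp-db-replica",
    # Placeholder ARN of the primary instance in the primary region.
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:webapp-db",
    DBInstanceClass="db.r6g.large",
    SourceRegion="us-east-1",  # boto3 uses this to pre-sign the cross-region request
)
# An encrypted source would also need a KmsKeyId from the destination region.
```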
Ask Bash
What is multi-regional deployment?
What are RTO and RPO in disaster recovery?
How does data replication work in disaster recovery?
Who is responsible for managing the encryption of sensitive data and configuring identity access policies when utilizing cloud computing services?
The cloud service provider
The Internet service provider of the customer
The cloud service customer
A third-party security services firm
Answer Description
Within the shared responsibility model for cloud services, the provider is in charge of securing the infrastructure that runs all of the services offered in the cloud. On the other hand, the customer is responsible for securing the information they put into cloud services. This includes encryption of data, configuration of access controls, and identity management practices such as ensuring credentials are securely managed and implementing appropriate access policies.
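As a hedged illustration of these customer-side duties, the sketch below enables default KMS encryption on a bucket and attaches a least-privilege identity policy to a role; every name and ARN is a placeholder.

```python
import json
import boto3

s3 = boto3.client("s3")
iam = boto3.client("iam")

# Enforce default encryption on a bucket (bucket name and key ARN are placeholders).
s3.put_bucket_encryption(
    Bucket="sensitive-data-example",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/placeholder",
            }
        }]
    },
)

# Attach a least-privilege identity policy to a role (names are placeholders).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::sensitive-data-example/*",
    }],
}
iam.put_role_policy(
    RoleName="analytics-app-role",
    PolicyName="read-sensitive-bucket",
    PolicyDocument=json.dumps(policy),
)
```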
Ask Bash
What is the shared responsibility model in cloud computing?
What does it mean to encrypt sensitive data in the cloud?
What are identity access policies and why are they important?
Which service should a Solutions Architect recommend for a developer who needs to troubleshoot bottlenecks in a distributed application with a series of microservices?
AWS Step Functions
AWS X-Ray
Amazon Inspector
Amazon CloudWatch
Answer Description
AWS X-Ray is the correct service for analyzing and debugging distributed applications in production. It gives developers the data needed to identify bottlenecks, latency issues, and errors in microservices architectures. Amazon CloudWatch focuses primarily on monitoring and observability rather than in-depth request tracing. Amazon Inspector is a security assessment service that helps improve the security and compliance of applications. AWS Step Functions is a serverless workflow orchestrator that makes it easy to sequence Lambda functions and multiple AWS services.
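A minimal sketch of instrumenting Python code with the AWS X-Ray SDK (aws-xray-sdk); the service and segment names are made up, and a running X-Ray daemon (or equivalent) is assumed for traces to actually be delivered.

```python
# Requires: pip install aws-xray-sdk
from aws_xray_sdk.core import xray_recorder, patch_all

# Patch supported libraries (boto3, requests, etc.) so downstream calls
# appear as subsegments in the service map.
patch_all()

xray_recorder.configure(service="order-service")  # hypothetical service name


@xray_recorder.capture("process_order")  # records a subsegment with timing data
def process_order(order_id: str) -> None:
    ...  # calls to other microservices would be traced automatically


# Outside Lambda or an instrumented web framework, open a segment manually.
xray_recorder.begin_segment("order-batch")
process_order("1234")
xray_recorder.end_segment()
```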
Ask Bash
What exactly does AWS X-Ray do?
How does AWS X-Ray differ from Amazon CloudWatch?
Can AWS X-Ray be used with non-AWS services?
A company requires a durable storage solution that can automatically scale storage capacity with increasing numbers of objects without manual intervention. Which AWS service should they use to meet these requirements?
Amazon Elastic Compute Cloud (EC2)
Amazon Elastic Block Store (EBS)
Amazon Simple Storage Service (S3)
Amazon DynamoDB
Answer Description
Amazon S3 is designed to provide scalability, high availability, and low latency at commodity costs. S3 can store an unlimited number of objects and scale automatically as you add more objects to the bucket, making it the perfect choice for a storage solution that needs to handle growing data without manual scaling actions. Amazon EBS provides block storage for use with EC2 instances and does not automatically scale in the same way as S3. Amazon DynamoDB is a NoSQL database service optimized for high-performance, scalable applications but is not primarily a storage service for objects. Amazon EC2 provides computing capacity in the cloud but is not primarily used for object storage.
Ask Bash
What is Amazon S3 and how does it work?
What are the main advantages of using S3 over EBS or DynamoDB?
What types of data can be stored in Amazon S3?
A company is storing large volumes of financial records that are frequently accessed for the first month after creation and are then rarely accessed. Compliance requirements mandate that these records must be preserved for seven years before they can be deleted. Which storage solution would be the most cost-effective for these requirements?
Store all financial records in Amazon S3 Standard to ensure availability and quick access at all times.
Use Amazon S3 Standard for immediate storage, and transition to Amazon S3 Glacier Deep Archive for long-term archival after one month.
Utilize Amazon S3 One Zone-Infrequent Access for both immediate and long-term storage.
Maintain the financial records on Amazon EFS for quick access and traditional file system interfaces.
Answer Description
Amazon S3 Glacier Deep Archive is designed for data that is accessed very rarely but must be retained for a long period, making it the lowest-cost option for the seven-year archival requirement. Keeping the records in S3 Standard for the first month covers the initial period of frequent access, and a lifecycle rule can then transition them to Deep Archive. Keeping all records in S3 Standard or in S3 One Zone-Infrequent Access for the full retention period would cost considerably more, because those classes have much higher per-GB storage prices than archival storage; One Zone-IA also stores data in a single Availability Zone, which is a weaker fit for compliance records.
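A minimal boto3 sketch of such a lifecycle rule, assuming a placeholder bucket and prefix: objects stay in S3 Standard for 30 days, transition to Glacier Deep Archive, and expire after roughly seven years.

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket/prefix; 30 days in S3 Standard, then Deep Archive,
# then expire after roughly seven years (7 x 365 = 2,555 days).
s3.put_bucket_lifecycle_configuration(
    Bucket="financial-records-example",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Filter": {"Prefix": "records/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
            "Expiration": {"Days": 2555},
        }]
    },
)
```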
Ask Bash
What is Amazon S3 and how does it work?
What is S3 Glacier Deep Archive and when should I use it?
What is the difference between S3 Standard and S3 One Zone-Infrequent Access?
By default, all inbound traffic is allowed on a newly created security group in a Virtual Private Cloud (VPC).
This statement is true.
This statement is false.
Answer Description
By default, a newly created security group in a VPC denies all inbound traffic until you create inbound traffic rules allowing it. This security measure ensures that no unintended services are exposed unless explicitly allowed by the architect or administrator. The 'deny all' default helps in maintaining a secure network posture aligning with the principle of least privilege.
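To illustrate, the hedged boto3 sketch below creates a security group (which starts with no inbound rules) and then explicitly allows only HTTPS; the group name and VPC ID are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A new security group denies all inbound traffic by default.
sg = ec2.create_security_group(
    GroupName="web-sg",
    Description="Allow HTTPS only",
    VpcId="vpc-0123456789abcdef0",   # placeholder VPC
)

# Explicitly open only what the workload needs (HTTPS from anywhere here).
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```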
Ask Bash
What are security groups in AWS VPC?
What is the principle of least privilege?
How do I create inbound traffic rules in a security group?
Data must be moved between Amazon S3 storage classes manually in order to implement storage tiering and reduce costs.
False.
True.
Answer Description
Amazon S3 supports automated storage tiering through features such as lifecycle policies and Intelligent-Tiering. These features allow data to be automatically transitioned between storage classes based on specified rules or access patterns. This automation eliminates the need for manual intervention and helps reduce storage costs effectively.
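As a small illustration of the Intelligent-Tiering side of this, the sketch below uploads an object directly into that storage class using boto3; the bucket, key, and body are placeholders. (A lifecycle-rule example appears earlier in this test.)

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket/key: the object is placed directly into the
# Intelligent-Tiering storage class, and S3 then moves it between access
# tiers automatically based on observed access patterns.
s3.put_object(
    Bucket="analytics-data-example",
    Key="exports/report.csv",
    Body=b"col1,col2\n1,2\n",
    StorageClass="INTELLIGENT_TIERING",
)
```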
Ask Bash
What are lifecycle policies in Amazon S3?
What is Intelligent-Tiering in Amazon S3?
How does storage tiering help reduce costs in Amazon S3?
An Application Load Balancer has the capability to route incoming traffic to multiple Amazon EC2 instances across multiple Availability Zones within the same region based on the domain name specified in the Host header of the HTTP request.
False
True
Answer Description
The correct answer is true. An Application Load Balancer can route traffic based on the domain name specified in the Host header, a feature known as host-based routing. This allows a multi-tenant architecture in which different domains are served by different backend systems, all through a single Application Load Balancer. The incorrect answers confuse the capabilities of an Application Load Balancer with those of a Classic or Network Load Balancer, or underestimate the advanced routing features an ALB provides.
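A hedged boto3 sketch of a host-based routing rule on an existing ALB listener; the listener and target group ARNs and the domain name are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Requests whose Host header matches shop.example.com are forwarded
# to the "shop" target group; all ARNs below are placeholders.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/placeholder",
    Priority=10,
    Conditions=[{
        "Field": "host-header",
        "HostHeaderConfig": {"Values": ["shop.example.com"]},
    }],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/shop/placeholder",
    }],
)
```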
Ask Bash
What is host-based routing in an Application Load Balancer?
How does the Application Load Balancer differ from other types of load balancers in AWS?
What are Availability Zones and why are they important for load balancing?
Your company is designing a new mobile gaming application that requires a database to maintain user profiles, game state data, and high score rankings with millisecond latency for read and write access. Data size is expected to be in the range of a few terabytes. The database must be capable of handling burstable read and write workloads that can scale automatically with the number of game users. Which database service should you use to optimize for performance and automatic scalability?
Amazon Redshift
Amazon DynamoDB
Amazon Aurora
Amazon RDS with MySQL
Answer Description
Given the requirements for millisecond latency, the ability to handle burstable read and write workloads, and automatic scaling, Amazon DynamoDB is the appropriate choice: it is a NoSQL database service that provides fast, predictable performance with seamless scalability and is designed for the large-scale, high-velocity data a mobile gaming application generates. Unlike Amazon RDS or Amazon Aurora, which are relational databases better suited to structured data and steadier workloads, DynamoDB scales automatically to absorb the variable workloads typical of mobile games. It also handles large, distributed datasets natively, which suits the expected data size and the requirement for low-latency access.
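For illustration, a minimal boto3 sketch of a DynamoDB table using on-demand (PAY_PER_REQUEST) capacity, which absorbs bursty traffic without capacity planning; the table and attribute names are invented for this example.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Placeholder table and key schema; on-demand billing scales read/write
# throughput automatically with bursty game traffic.
dynamodb.create_table(
    TableName="GameState",
    AttributeDefinitions=[
        {"AttributeName": "PlayerId", "AttributeType": "S"},
        {"AttributeName": "ItemType", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "PlayerId", "KeyType": "HASH"},
        {"AttributeName": "ItemType", "KeyType": "RANGE"},
    ],
    BillingMode="PAY_PER_REQUEST",
)
```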
Ask Bash
What are the key features of Amazon DynamoDB?
How does DynamoDB handle burstable workloads?
What are the differences between DynamoDB and relational databases like Amazon RDS?
Which service is designed to optimize resource allocation for applications by adjusting the size of the compute fleet based on demand?
Amazon EC2 Auto Scaling
Amazon CloudWatch
Amazon Simple Storage Service (S3)
Amazon Relational Database Service (RDS)
Answer Description
Amazon EC2 Auto Scaling is the service that allows the automated adjustment of the number of compute instances in the application's fleet based on demand, ensuring that the correct number of instances is available to handle the application's load. This leads to optimized resource allocation, which aids in maintaining consistent application performance. Amazon RDS is primarily a database service and does not deal with the scaling of compute resources. Amazon CloudWatch is a monitoring service that can initiate scaling actions but does not itself scale the number of compute instances. Amazon S3 is an object storage service and does not deal directly with scaling compute resources.
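A minimal sketch of one common way to express this with boto3: a target-tracking scaling policy on an existing Auto Scaling group (the group name and target value are placeholders).

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# The Auto Scaling group name is a placeholder; this target-tracking policy
# adds or removes instances to keep average CPU utilization around 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```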
Ask Bash
What is Amazon EC2 Auto Scaling and how does it work?
What are some benefits of using Auto Scaling in AWS?
How does Amazon CloudWatch relate to Auto Scaling?
A Solutions Architect has been tasked with dissecting cloud expenditure to allocate charges to the correct departments within an organization, each of which has its own resources. The account is shared among various teams. Which service would BEST facilitate this requirement for detailed financial governance?
NAT Gateway configured with detailed billing reports
Transfer for SFTP to track and charge back file transfer costs
Budgets with alerts for forecasted spend anomalies
Cost Explorer with implementation of categorization tags
Answer Description
Cost Explorer, combined with categorization (cost allocation) tags, provides the capability to analyze and attribute expenses to specific resources and operational groups. By assigning the relevant tags to the resources used by each department, the Architect can filter and break down the expense data along the organizational structure. The tags must be activated for cost allocation in the billing console before they can provide granular visibility into the expenditure of designated departments or teams.
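As a hedged example of querying this breakdown programmatically, the sketch below calls the Cost Explorer API grouped by a tag key; the tag key "Department" and the dates are placeholders, and the tag must already be activated for cost allocation.

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# The tag key "Department" is a placeholder and must be activated as a
# cost allocation tag in the billing console before it returns data.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "Department"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])
```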
Ask Bash
What are categorization tags in AWS?
How does Cost Explorer help in analyzing cloud expenditure?
What are the main differences between AWS Budgets and Cost Explorer?
A multimedia company needs to host a popular online game that generates sporadic traffic spikes. The game’s data is accessed frequently with a requirement for low latency and fast read and write performance. Which storage solution should they use to optimize for high-performance demands?
Amazon EBS Throughput Optimized HDD (st1)
Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1/io2)
Amazon Simple Storage Service (Amazon S3)
Amazon Elastic File System (Amazon EFS)
Answer Description
Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1/io2) volumes are designed to meet the needs of I/O-intensive workloads that are sensitive to storage performance and consistency. They offer high throughput and low latency, which are critical for applications that require fast and reliable access to data. Amazon S3 is not suitable as it is object storage and does not offer the low latency or fine-tuned IOPS required by online games. Amazon EFS would offer shared file access which is not required in this scenario, and Amazon EBS Throughput Optimized HDD (st1) is optimized for large block, throughput-intensive workloads and not low latency.
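A minimal boto3 sketch of provisioning such a volume; the size, IOPS value, and Availability Zone are placeholders chosen for illustration.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder size, IOPS, and AZ; io2 lets you provision IOPS independently
# of volume size for consistent low-latency performance.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,            # GiB
    VolumeType="io2",
    Iops=16000,          # provisioned IOPS for the game's hot data
)
print(volume["VolumeId"])
```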
Ask Bash
What is Amazon Elastic Block Store (EBS)?
What are Provisioned IOPS and why are they important?
Why isn't Amazon S3 suitable for low-latency performance needs?
That's It!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.