AWS Certified Solutions Architect Associate Practice Test (SAA-C03)
Use the form below to configure your AWS Certified Solutions Architect Associate Practice Test (SAA-C03). The practice test can be configured to only include certain exam objectives and domains. You can choose between 5 and 90 questions and set a time limit.

AWS Certified Solutions Architect Associate SAA-C03 Information
AWS Certified Solutions Architect - Associate showcases knowledge and skills in AWS technology across a wide range of AWS services. The focus of this certification is on the design of cost- and performance-optimized solutions, demonstrating a strong understanding of the AWS Well-Architected Framework. This certification can enhance the career profile and earnings of certified individuals and increase their credibility and confidence in stakeholder and customer interactions.
The AWS Certified Solutions Architect - Associate (SAA-C03) exam is intended for individuals who perform a solutions architect role. The exam validates a candidate’s ability to design solutions based on the AWS Well-Architected Framework.
The exam also validates a candidate’s ability to complete the following tasks:
- Design solutions that incorporate AWS services to meet current business requirements and future projected needs
- Design architectures that are secure, resilient, high-performing, and cost optimized
- Review existing solutions and determine improvements
Scroll down to see your responses and detailed results
Free AWS Certified Solutions Architect Associate SAA-C03 Practice Test
Press start when you are ready, or press Change to modify any settings for the practice test.
- Questions: 15
- Time: Unlimited
- Included Topics: Design Secure Architectures, Design Resilient Architectures, Design High-Performing Architectures, Design Cost-Optimized Architectures
A data analytics company processes daily logs that amount to 400 GB per day. The logs need to be stored for 30 days for compliance purposes and must be readily accessible for querying and analysis. The processing and analysis are performed on Amazon EC2 instances. The company seeks a cost-effective storage solution that provides quick access and minimal management overhead. As a Solutions Architect, which storage solution would you recommend for storing the logs?
Use an Amazon EFS file system to store the logs.
Attach multiple EBS General Purpose SSD (gp3) volumes to the EC2 instances for log storage.
Set up an EBS Provisioned IOPS SSD (io2) volume for each EC2 instance to store the logs.
Store the logs in Amazon S3 Standard storage class with Lifecycle policy to either adjust storage class or delete them after 30 days.
Answer Description
Storing the logs in Amazon S3 Standard is the most cost-effective and scalable solution. Amazon S3 offers durable, highly available storage with low latency access, ideal for storing large amounts of data that need to be accessed quickly. It can automatically scale to store the required amount (400 GB per day × 30 days) without upfront capacity provisioning, reducing costs compared to block or file storage options. Amazon EFS is a managed file system ideal for shared access and persistent file storage, but it is more expensive than S3 for large-scale log storage and not optimized for high-volume data storage with lifecycle management. Amazon EBS is a block storage service primarily used for EC2 instance storage. It's not ideal for large-scale, long-term log storage due to its higher cost and manual management overhead compared to S3.
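As a rough illustration of the lifecycle piece, the boto3 sketch below applies a rule that deletes log objects 30 days after creation; the bucket name and prefix are placeholders, and a transition to a cheaper storage class could be configured the same way.

```python
import boto3

s3 = boto3.client("s3")

# Expire objects under the logs/ prefix 30 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-log-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-logs-after-30-days",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            }
        ]
    },
)
```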
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Amazon S3 and why is it suitable for storing logs?
What is a lifecycle policy in Amazon S3?
What are the differences between Amazon S3, EFS, and EBS for storage?
A company is expanding its online retail application to accommodate an increasing number of users from different geographical locations. The application is hosted on an Amazon EC2 instance and utilizes an Application Load Balancer (ALB) for distributing incoming traffic. Which of the following network configurations should be undertaken to best scale the network architecture for anticipated global growth?
Increase the Amazon EC2 instance size to improve network and application performance for global users.
Configure Amazon Route 53 geo-proximity routing to direct traffic based on the geographic location of the users.
Set up an Amazon CloudFront distribution to cache content at edge locations closer to the users.
Establish an AWS Direct Connect connection to improve the application's global network performance.
Answer Description
Setting up an Amazon CloudFront distribution in front of the Amazon EC2 instances caches content at multiple edge locations across the globe, significantly reducing latency for international users by serving requests from the nearest edge location. This approach effectively scales the network architecture to accommodate a global audience. AWS Direct Connect establishes private connectivity between an on-premises network and AWS, which does not inherently address global user latency. Amazon Route 53 geo-proximity routing directs traffic based on geographic location rather than scaling content delivery, and increasing the instance size addresses compute scaling but does not directly improve network performance for global users.
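As a hedged sketch of this approach, the boto3 call below creates a CloudFront distribution that uses the ALB as a custom origin. The ALB DNS name is a placeholder, and the legacy cache settings shown could be replaced with a managed cache policy.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # any unique string per request
        "Comment": "Global caching in front of the retail ALB",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "retail-alb-origin",
                    # Placeholder ALB DNS name
                    "DomainName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
                    "CustomOriginConfig": {
                        "HTTPPort": 80,
                        "HTTPSPort": 443,
                        "OriginProtocolPolicy": "https-only",
                    },
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "retail-alb-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # Legacy cache settings; a managed cache policy could be used instead.
            "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
            "MinTTL": 0,
        },
    }
)
print(response["Distribution"]["DomainName"])  # the *.cloudfront.net endpoint to serve users
```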
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Amazon CloudFront and how does it work?
What are edge locations in the context of AWS CloudFront?
What does latency mean and why is it important for an online application?
A company needs to migrate a substantial volume of data to the cloud, but faces bandwidth limitations that prohibit efficient online transfer. Which service would best facilitate this large-scale, offline data migration while maintaining high security standards?
Kinesis Firehose
Snowball
DataSync
Direct Connect
Answer Description
The best option for migrating large volumes of data offline when bandwidth is constrained is AWS Snowball, a physical data transport service that securely moves data into and out of AWS on encrypted, ruggedized storage devices. The other listed services are intended for online data transfer and streaming (Kinesis Data Firehose, DataSync) or for establishing a dedicated network connection (Direct Connect), and are not designed for scenarios where an offline, physical transfer is required.
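For illustration only, a Snowball import job might be requested with boto3 roughly as follows; the address ID, role ARN, bucket ARN, and device options are placeholders that depend on the account and the volume of data being migrated.

```python
import boto3

snowball = boto3.client("snowball")

# Request an import job; all identifiers below are placeholders.
job = snowball.create_job(
    JobType="IMPORT",
    SnowballType="EDGE",
    SnowballCapacityPreference="T80",
    ShippingOption="STANDARD",
    Description="Offline migration of on-premises archive data",
    AddressId="ADID00000000-0000-0000-0000-000000000000",  # returned by create_address()
    RoleARN="arn:aws:iam::123456789012:role/SnowballImportRole",
    Resources={
        "S3Resources": [
            {"BucketArn": "arn:aws:s3:::example-migration-bucket"}
        ]
    },
)
print(job["JobId"])
```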
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is AWS Snowball and how does it work?
What security measures are in place for AWS Snowball?
How does AWS DataSync differ from Snowball?
An organization aims to maintain operational continuity of its critical workload even if an entire data center servicing their region encounters an outage. Their solution includes computing resources distributed across diverse physical locations within the same geographical area. To enhance the system's robustness, which strategy should be implemented for the data layer?
Introduce a Load Balancer to distribute traffic among database instances to minimize the impact of a location outage.
Configure an active-passive setup using a secondary region and enact health checks to direct traffic upon failure.
Implement a Multi-AZ configuration for the relational database to promote automatic failover and data redundancy.
Install a globally distributed database with read replicas in various regions for geographical data distribution.
Answer Description
The question asks what you can do to maintain operational continuity if one data center in a region has an outage. Keep in mind that an AWS Region is made up of multiple data centers grouped into Availability Zones. Therefore, a Multi-AZ setup helps mitigate outages caused by the failure of a single data center.
Choosing a Multi-AZ deployment for an RDS instance provides high availability by automatically maintaining a synchronous standby replica in a different data center, or Availability Zone. In case of an infrastructure failure, the database will fail over to the standby so that database operations can resume quickly without manual intervention. This choice is the most aligned with the requirement for operational continuity within a single region in the face of a data center outage. The other answers either describe strategies that introduce geographical redundancy, which goes beyond the scope of the question, or load balancing, which does not address the need for automatic failover at the data layer.
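A minimal boto3 sketch of a Multi-AZ RDS deployment is shown below; the identifier, engine, sizing, and credentials are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Create a MySQL instance with a synchronous standby in a second Availability Zone.
rds.create_db_instance(
    DBInstanceIdentifier="critical-workload-db",
    Engine="mysql",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_SECRET",
    MultiAZ=True,  # enables automatic failover to the standby replica
)
```

An existing instance could likewise be converted with modify_db_instance and MultiAZ=True.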
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Multi-AZ deployment in AWS RDS?
What are Availability Zones (AZs) in AWS?
How does automatic failover work in AWS RDS?
A company generates large amounts of data from their IoT devices and stores this data on Amazon S3 for real-time analysis. After 30 days, the data is rarely accessed but must be retained for one year for compliance reasons. What is the most cost-effective strategy to manage the lifecycle of this data?
Keep the data in S3 Standard for the entire year as it may be needed for unplanned analysis.
Transfer the data to EBS Cold HDD volumes after 30 days and delete it after one year.
Move the data to S3 Standard-Infrequent Access after 30 days and delete it after one year.
Move the data to S3 Glacier after 30 days and delete it after one year.
Answer Description
The correct answer is to move the data to S3 Glacier after 30 days and delete it after one year. This strategy is cost-effective because S3 Glacier provides low-cost storage for long-term archiving of data that is rarely accessed. Since the data is not needed for real-time analysis after the initial 30 days, transitioning to a less expensive storage class until the compliance retention period expires will result in cost savings. Other options like moving the data to S3 Standard-Infrequent Access or keeping it on S3 Standard are not as cost-efficient for the given access patterns.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Amazon S3 Glacier?
Why is S3 Standard not suitable for data that is rarely accessed?
What are lifecycle policies in Amazon S3?
A company requires a method to routinely create backups for their virtual servers hosted on the cloud, including the storage volumes attached to these servers. They seek an automated solution that is capable of not only protecting their resources but also managing the backup lifecycle with scalability and security in mind. Which option should they select to best fulfill their needs?
Deploy an on-premises Storage Gateway to synchronize and back up the server data.
Activate versioning on an object storage service for the servers' data archives.
Schedule a routine of manual snapshots for the server storage volumes.
Employ AWS Backup for centralized and automated backup across different services.
Answer Description
The best choice in this scenario is AWS Backup, as it provides a managed service specifically designed for automated backup across various AWS resources. It ensures high durability by storing backups across multiple facilities and enables users to define policies for backup schedules, retention management, and lifecycle rules. This comprehensive approach helps companies comply with regulatory requirements and scales to accommodate growing data needs. While creating snapshots or using the Storage Gateway can be part of a backup strategy, they do not provide the full suite of management features and automation that AWS Backup offers. Furthermore, enabling versioning on Amazon S3 is primarily for object-level storage and is not suitable for backing up entire virtual server environments.
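As an illustrative sketch, a backup plan and a tag-based resource assignment might be created with boto3 as follows; the plan name, vault, schedule, retention, tag, and role ARN are all placeholders.

```python
import boto3

backup = boto3.client("backup")

# Daily backup rule with a 35-day retention (illustrative values).
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-ec2-and-ebs",
        "Rules": [
            {
                "RuleName": "daily-at-05-utc",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",
                "Lifecycle": {"DeleteAfterDays": 35},
            }
        ],
    }
)

# Assign resources to the plan by tag, e.g. everything tagged backup=daily.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-resources",
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [
            {
                "ConditionType": "STRINGEQUALS",
                "ConditionKey": "backup",
                "ConditionValue": "daily",
            }
        ],
    },
)
```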
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is AWS Backup, and how does it work?
What are backup lifecycle policies, and why are they important?
What are the benefits of using automated backups versus manual snapshots?
A company utilizes a centralized system for user credentials and seeks to grant employees the ability to utilize these same credentials to perform job-specific tasks within their cloud environment. What is the recommended solution to link the company's current system with the cloud services, allowing role assignment based on existing job functions?
Deploy a connector that interfaces with the existing credentials directory and assign cloud user profiles to authenticate against it.
Amend the trust configurations in the centralized directory to directly accept authentication requests from the cloud directory service.
Enable a connectivity channel such as a VPN between the on-premises network and cloud network, controlling access through network routing and policies.
Implement a service like AWS IAM Identity Center to establish a trust relationship between the centralized credentials system and the cloud provider, permitting role mapping accordingly.
Construct individual user profiles in the cloud directory service and execute a periodic sync for credentials from the existing on-premises system.
Answer Description
The recommended approach for integrating a centralized directory service with the cloud provider for access management is to use a service designed for identity federation, such as AWS IAM Identity Center, which allows the assignment of cloud roles to on-premises identities. Other suggested methods either do not offer direct federation with directory services or do not follow the best practices for integrating existing user credentials with cloud resources.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is AWS IAM Identity Center and how does it work?
What is identity federation and why is it important?
What are the risks of constructing individual user profiles in the cloud directory service?
A company needs to store data that is infrequently accessed but requires millisecond retrieval when needed. The data must be stored cost-effectively. Which Amazon S3 storage class should the company use?
Amazon S3 Standard-Infrequent Access.
Amazon S3 Glacier Deep Archive.
Amazon S3 Standard.
Amazon S3 Glacier Instant Retrieval.
Answer Description
Amazon S3 Glacier Instant Retrieval is designed for data that is infrequently accessed but requires millisecond retrieval. It offers the lowest storage cost for such data while providing immediate access when needed. Although Amazon S3 Standard-Infrequent Access also provides millisecond retrieval, it has a higher storage cost compared to S3 Glacier Instant Retrieval. Amazon S3 Glacier Deep Archive is more cost-effective in terms of storage cost but does not support millisecond retrieval, as retrieval times can take up to 12 hours. Amazon S3 Standard is intended for frequently accessed data and is more expensive for infrequent access patterns.
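For example, an object can be written directly to this storage class by setting StorageClass on the upload; the bucket, key, and local file below are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# GLACIER_IR stores the object in S3 Glacier Instant Retrieval
# while keeping millisecond access on retrieval.
with open("q4-summary.csv", "rb") as data:
    s3.put_object(
        Bucket="example-compliance-bucket",
        Key="reports/2023/q4-summary.csv",
        Body=data,
        StorageClass="GLACIER_IR",
    )
```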
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Amazon S3 Glacier Instant Retrieval?
How does Amazon S3 Glacier Instant Retrieval compare to other S3 storage classes?
What types of use cases are suitable for Amazon S3 Glacier Instant Retrieval?
A development team is seeking a solution to deploy a fleet of containers that will allow them to automatically adjust to traffic fluctuations without manually scaling or managing the host infrastructure. This solution should also facilitate the highest level of resource abstraction while ensuring the containers are orchestrated effectively. Which service should the team implement for optimal elasticity and ease of management?
Serverless architecture with provisioned concurrency for functions
Elastic Compute with Elastic Load Balancing
Managed service for big data processing on virtual server clusters
Container orchestration service with cluster management on virtual servers
Container service with on-demand, serverless compute engine
Job scheduling and execution service for batch processing
Answer Description
The most fitting service for hosting containers where management of the underlying infrastructure is abstracted away is Fargate, which provides on-demand, right-sized compute capacity for containers. It allows developers to focus on building applications without the hassle of managing the servers or clusters running the containers. This service offers automatic scaling, allowing it to handle variable traffic loads effectively. Other compute services like EC2 or ECS with EC2 require manual management of servers or clusters. Lambda is designed for serverless functions, not container management, and is not optimized for such a scenario. EMR focuses on big data and isn't suited for general web application hosting, while Batch processes batch jobs and isn't geared toward real-time container orchestration.
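A rough boto3 sketch of this pattern, assuming an existing ECS cluster, subnets, execution role, and container image (all placeholders), registers a Fargate task definition and runs it behind a service:

```python
import boto3

ecs = boto3.client("ecs")

# Register a small Fargate task definition (CPU/memory are strings for Fargate).
ecs.register_task_definition(
    family="web-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",  # placeholder
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,
        }
    ],
)

# Run the task as a service on the serverless Fargate launch type.
ecs.create_service(
    cluster="app-cluster",
    serviceName="web-app-svc",
    taskDefinition="web-app",
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet
            "assignPublicIp": "ENABLED",
        }
    },
)
```

In practice, the service's desired count would then be scaled automatically with Application Auto Scaling to absorb traffic fluctuations.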
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is AWS Fargate and how does it work?
What is the difference between EC2 and Fargate for container deployment?
What is container orchestration and why is it important?
An e-commerce company is expecting a significant spike in users accessing product images during an upcoming promotional event. They need a storage service that can serve these images with low latency at scale to enhance customer experience. Which of the following AWS services is the BEST choice to meet these requirements?
Amazon EFS with provisioned throughput configured to serve files directly to users
Amazon Elastic File System (EFS) mounted on high-memory EC2 instances
Amazon Elastic Block Store (EBS) with Provisioned IOPS SSD (io1) volumes attached to EC2 instances serving the images
Amazon Simple Storage Service (S3) with an Amazon CloudFront distribution
Answer Description
Amazon S3 with Amazon CloudFront is the best choice for serving content at scale with low latency. S3 provides durable, easily scalable storage for the product images, while CloudFront, a content delivery network (CDN), caches the images at edge locations close to users, reducing latency during high-traffic events. Amazon EBS is block storage attached to EC2 instances and is not designed for serving static content directly to users. Amazon EFS is tailored for shared file system use cases; even when mounted on one or more EC2 instances it is a less efficient way to serve static content to many users than Amazon S3 fronted by CloudFront.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Amazon S3 and how does it work?
What is Amazon CloudFront and why is it beneficial?
What are the limitations of using Amazon EBS and EFS for serving images?
An application deployed on a cloud virtual server requires interaction with object storage and a NoSQL database service. What is the recommended method to manage the application's service-specific permissions in accordance with best security practices that enforce minimal access rights?
Embed long-term security credentials in the source code of the application to authorize service interactions.
Create a role with the exact permissions required by the application and attach it to the virtual server.
Configure an account with administrative privileges and programmatically distribute its access keys across all server instances.
Utilize the cloud platform's root account to ensure uninterrupted access to necessary services.
Answer Description
The best security practice for managing an application's permissions is to create a role with the specific privileges needed and then attach it to the virtual server. This method supports the principle of least privilege by only granting access that is necessary for the application's function, without the need to manage or expose static credentials. Using static credentials or high-level access like the root account's credentials can lead to significant security risks, including potential unauthorized access if the credentials are compromised.
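The sketch below illustrates the idea with boto3; the role name, bucket and table ARNs, and the list of actions are placeholders trimmed to what a hypothetical application might need.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets EC2 instances assume the role.
assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="app-server-role",
    AssumeRolePolicyDocument=json.dumps(assume_role_policy),
)

# Least-privilege inline policy scoped to one bucket and one table (placeholders).
permissions = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-app-bucket/*",
        },
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/example-table",
        },
    ],
}

iam.put_role_policy(
    RoleName="app-server-role",
    PolicyName="least-privilege-app-access",
    PolicyDocument=json.dumps(permissions),
)

# An instance profile is what actually gets attached to the EC2 instance.
iam.create_instance_profile(InstanceProfileName="app-server-profile")
iam.add_role_to_instance_profile(
    InstanceProfileName="app-server-profile", RoleName="app-server-role"
)
```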
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are IAM roles in AWS?
What is the principle of least privilege?
Why should I avoid embedding long-term credentials in my application code?
A financial institution requires an archiving solution for critical data stored on local file servers. The data must be accessible with minimal delay when requested by on-premises users, yet older, less frequently accessed files should be economically archived in the cloud. However, after a specific period of inactivity, these older files should be transitioned to a less expensive storage class. Which solution should the architect recommend to meet these needs in a cost-efficient manner?
An online data transfer service
A fully managed file storage service for Windows files
A managed file transfer service
File gateway mode of a certain hybrid storage service
Answer Description
The file gateway mode of AWS Storage Gateway (Amazon S3 File Gateway) seamlessly integrates on-premises file systems with cloud storage such as Amazon S3, providing low-latency access through a local cache. Because files are stored as objects in S3, lifecycle policies can automatically transition them to lower-cost storage classes after set periods of inactivity. This makes it a suitable solution for the financial institution's requirements. The alternatives do not offer the same combination of local caching, seamless S3 integration, and policy-based tiering, so they would not be the most efficient solution for this scenario.
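As a hedged sketch, once a File Gateway appliance is activated, an NFS file share backed by an S3 bucket might be created as shown below (the gateway ARN, role, bucket, and token are placeholders); tiering after inactivity would then be handled by a lifecycle rule on that bucket.

```python
import boto3

storagegateway = boto3.client("storagegateway")

# The gateway appliance itself must already be deployed and activated on premises.
storagegateway.create_nfs_file_share(
    ClientToken="unique-request-token-123",
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678",
    Role="arn:aws:iam::123456789012:role/StorageGatewayS3Access",
    LocationARN="arn:aws:s3:::example-archive-bucket",
    DefaultStorageClass="S3_STANDARD",
)
```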
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a File Gateway in AWS?
What is automatic tiering in cloud storage?
Why is low-latency access important for on-premises users?
An online media platform experiences slow content delivery when accessed by users located on a different continent from where the platform's servers are hosted. How can a Solutions Architect optimize content delivery for these international users?
Use Amazon ElastiCache to cache data within the application's current region to enhance content retrieval speeds.
Upscale the compute capacity of the origin servers to improve response times for global requests.
Implement Amazon CloudFront to cache and deliver content from edge locations closest to the international audience.
Deploy additional load balancers in strategic locations to better handle incoming traffic from overseas users.
Answer Description
Amazon CloudFront is the service designed to reduce latency by caching content in edge locations around the world. When users request content, it is served from the nearest edge location, speeding up the delivery. Upgrading server capacity does not address the core issue of geographical distance. While load balancers improve the distribution of traffic across resources, they are typically used within a particular region rather than globally. ElastiCache improves the performance of data retrieval within the system but won't be as effective for global content delivery as a CDN, which also provides caching but at the edge, closer to international users.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Amazon CloudFront?
What are edge locations in the context of Amazon CloudFront?
How does caching improve content delivery performance?
A company has an application that generates 50 GB of log files each month, which are analyzed quarterly. The current policy is to keep the logs available in hot storage for the first month after generation for immediate analysis if needed, and then move them to a cheaper storage class for the remainder of the quarter. After the analysis, the logs are archived. Which storage strategy is most cost-effective while fulfilling the company's access pattern requirements?
Store the logs on Amazon Elastic Block Store (EBS) volumes for the first month, then shift the data to Amazon EFS until the quarterly analysis is done, later archiving to Amazon S3 Standard.
Use Amazon S3 to initially store the logs, transition to S3 Standard-Infrequent Access after one month, and move to S3 Glacier or S3 Glacier Deep Archive for archival after quarterly analysis.
Keep the logs in Amazon S3 Glacier during the initial month for cost savings, and then move to Amazon S3 for quick analysis, archiving them back to S3 Glacier after analysis.
Use Amazon S3 One Zone-Infrequent Access for the entire duration until quarterly analysis, then move the logs directly to S3 Glacier Deep Archive.
Answer Description
The most cost-effective strategy is to use Amazon S3 to store the logs in the S3 Standard tier for the first month, when immediate access may be needed for analysis. After the initial month, transitioning the logs to S3 Standard-Infrequent Access (S3 Standard-IA) provides cheaper storage while still offering quick access for the quarterly analysis. Finally, after the quarterly analysis, moving the logs to Amazon S3 Glacier or S3 Glacier Deep Archive provides the most cost-effective long-term archival for data that is rarely accessed.
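A minimal boto3 sketch of that lifecycle is shown below; the bucket name, prefix, and day counts are placeholders, and GLACIER could be used in place of DEEP_ARCHIVE for the final tier.

```python
import boto3

s3 = boto3.client("s3")

# Tier logs to Standard-IA after 30 days, then archive after 90 days (end of quarter).
s3.put_bucket_lifecycle_configuration(
    Bucket="example-app-log-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-then-archive-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```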
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Amazon S3 and its different storage classes?
What are the benefits of transitioning to S3 Standard-IA after one month?
What is the difference between S3 Glacier and S3 Glacier Deep Archive?
Which service feature should you use to manage a large number of concurrent database connections that often experience unpredictable spikes in connection requests, while ensuring minimal changes to the existing applications?
Amazon ElastiCache
AWS Direct Connect
Elastic Load Balancing
Amazon RDS Proxy
Answer Description
Amazon RDS Proxy is designed to handle a large volume of concurrent database connections and smooth out spikes in connection requests to RDS databases. It mitigates database overload by pooling and sharing connections, and it reduces the impact of failovers by preserving application connections and routing them to the new primary instance. Because applications typically only need to change their connection endpoint to point at the proxy, RDS Proxy avoids refactoring applications that were not designed to manage such spikes, which distinguishes it from the other answer options.
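As an illustrative sketch with boto3 (the proxy name, secret ARN, role, subnets, and database identifier are placeholders), a proxy can be created and pointed at an existing RDS instance; applications then connect to the proxy's endpoint instead of the database directly.

```python
import boto3

rds = boto3.client("rds")

# Create the proxy; credentials are read from Secrets Manager via the given role.
rds.create_db_proxy(
    DBProxyName="app-db-proxy",
    EngineFamily="MYSQL",
    Auth=[
        {
            "AuthScheme": "SECRETS",
            "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:app-db-creds",
            "IAMAuth": "DISABLED",
        }
    ],
    RoleArn="arn:aws:iam::123456789012:role/rds-proxy-secrets-access",
    VpcSubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    RequireTLS=True,
)

# Register the existing RDS instance as the proxy's target.
rds.register_db_proxy_targets(
    DBProxyName="app-db-proxy",
    DBInstanceIdentifiers=["app-db-instance"],
)
```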
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Amazon RDS Proxy and how does it work?
What are the benefits of using a connection pooling service like RDS Proxy?
Why might other services like Amazon ElastiCache or Elastic Load Balancing not be suitable for managing database connection spikes?
Wow!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.