AWS Certified Solutions Architect Associate Practice Test (SAA-C03)
Use the form below to configure your AWS Certified Solutions Architect Associate Practice Test (SAA-C03). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

AWS Certified Solutions Architect Associate SAA-C03 Information
AWS Certified Solutions Architect - Associate showcases knowledge and skills in AWS technology across a wide range of AWS services. The focus of this certification is on the design of cost- and performance-optimized solutions, demonstrating a strong understanding of the AWS Well-Architected Framework. This certification can enhance the career profile and earnings of certified individuals and increase their credibility and confidence in stakeholder and customer interactions.
The AWS Certified Solutions Architect - Associate (SAA-C03) exam is intended for individuals who perform a solutions architect role. The exam validates a candidate’s ability to design solutions based on the AWS Well-Architected Framework.
The exam also validates a candidate’s ability to complete the following tasks:
- Design solutions that incorporate AWS services to meet current business requirements and future projected needs
- Design architectures that are secure, resilient, high-performing, and cost optimized
- Review existing solutions and determine improvements
Free AWS Certified Solutions Architect Associate SAA-C03 Practice Test
Press start when you are ready, or press Change to modify any settings for the practice test.
- Questions: 15
- Time: Unlimited
- Included Topics: Design Secure Architectures, Design Resilient Architectures, Design High-Performing Architectures, Design Cost-Optimized Architectures
Your company needs to centrally manage access policies across multiple organizational units within their cloud environment. Which service offered by Amazon Web Services should they use to achieve this?
- AWS CloudTrail
- AWS Identity and Access Management (IAM)
- AWS Organizations
- Amazon Cognito
Answer Description
AWS Organizations enables you to centrally govern and manage your environment as you grow and scale your workloads on AWS. It allows you to group accounts into Organizational Units (OUs) and apply Service Control Policies (SCPs) to manage services and permissions for those OUs. AWS Identity and Access Management (IAM) helps you securely control access to services and resources within a single account, not across multiple accounts. Amazon Cognito provides user authentication, authorization, and management for web and mobile apps, so it is not relevant for managing multiple AWS accounts. AWS CloudTrail tracks user activity and API usage for auditing purposes but does not manage access policies across organizational units.
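For illustration, a minimal boto3 sketch of creating an SCP and attaching it to an OU might look like the following; the policy content, policy name, and OU ID are hypothetical placeholders rather than values from the question.

```python
import json
import boto3

org = boto3.client("organizations")

# Hypothetical SCP that denies all actions outside two approved regions.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {"StringNotEquals": {"aws:RequestedRegion": ["us-east-1", "eu-west-1"]}},
    }],
}

policy = org.create_policy(
    Name="RestrictRegions",
    Description="Example SCP limiting usage to approved regions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP to an Organizational Unit (placeholder OU ID).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-ab12-34567890",
)
```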
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are Service Control Policies (SCPs)?
How does AWS Organizations help in managing multiple accounts?
What is the difference between AWS Organizations and IAM?
A company has an online transaction processing (OLTP) workload on an Amazon RDS instance that experiences high read traffic during business hours. The database performance degrades due to the high number of read requests, impacting the user experience. Which of the following solutions should you implement to reduce the read load on the primary database instance and provide high-performing database responses?
- Create multiple read replicas of the primary database instance and distribute the read traffic among them.
- Convert the database deployment to a Multi-AZ deployment for improved performance.
- Implement Amazon ElastiCache to cache common queries and reduce the read load.
- Increase the size of the existing database instance to a larger instance type.
Answer Description
Creating read replicas allows the primary database to offload read requests to the replicas, thereby improving the performance of the primary instance for write operations and providing faster read responses from the replicas. Deploying read replicas in different Availability Zones (AZs) also increases high availability. Replication to read replicas is asynchronous, which allows read operations to scale out without impacting the write latency of the primary database. Increasing the instance size could improve performance but would not address scalability or high availability. Multi-AZ deployment is for high availability and failover support, not for scaling read operations. Using a caching layer such as Amazon ElastiCache can indeed reduce database load, but it does not replace the need for read replicas when the goal is to scale read operations.
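As a minimal sketch of the recommended approach, the following boto3 call creates one read replica in a different Availability Zone; the instance identifiers and AZ are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of the primary OLTP instance in another AZ
# (instance identifiers and Availability Zone are placeholders).
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica-1",
    SourceDBInstanceIdentifier="orders-db-primary",
    AvailabilityZone="us-east-1b",
)
```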
Ask Bash
What are read replicas in Amazon RDS?
How does asynchronous replication affect read replicas?
What is the difference between Multi-AZ and read replicas?
Which AWS component acts as a virtual firewall for controlling traffic at the instance level within an Amazon VPC?
- Network Access Control List (NACL)
- Route Table
- Subnet CIDR
- Security Group
Answer Description
A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. Security groups operate at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC can be assigned to a different set of security groups. Network Access Control Lists (NACLs) operate at the subnet level, so they are not the correct answer. Route tables direct network traffic, but do not control or filter traffic like a firewall. Subnet CIDRs are used for IP address allocation within a VPC and have no filtering capabilities.
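A minimal boto3 sketch of creating a security group and opening one inbound port is shown below; the VPC ID and the rule itself are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a security group in an existing VPC (placeholder VPC ID).
sg = ec2.create_security_group(
    GroupName="web-tier-sg",
    Description="Allow HTTPS to the web tier",
    VpcId="vpc-0123456789abcdef0",
)

# Security groups are stateful: allowing inbound HTTPS automatically
# permits the corresponding return traffic.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from the internet"}],
    }],
)
```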
Ask Bash
What are the main differences between a Security Group and a Network Access Control List (NACL)?
How do I configure a Security Group in AWS?
What are the potential security risks if Security Groups are not configured properly?
A Solutions Architect needs to implement an authorization strategy that allows efficient permission updates for users based on their job functions in a cloud environment. Given the need for individualized access rights and scalable updates to access patterns, which method should be chosen?
- Implement individual policies for each user, customizing access permissions according to the specific needs and job functions.
- Create a single role that encompasses all permissions for different job functions and grant users the ability to assume this role based on their needs.
- Utilize groups to represent different job functions and attach policies defining the access permissions to these groups. All users are then assigned to the appropriate groups based on their job function.
- Apply service control policies directly to user accounts to grant necessary permissions based on their job roles.
Answer Description
The most efficient way to manage permissions for multiple users who share common job functions is by using groups. When a permissions update for a job function is needed, making a single change to the group policy will automatically propagate to all users in that group. Attaching permissions directly to each user is not scalable and makes it difficult to manage when there are changes in common access patterns. Utilizing a single role for all users would go against individualized access rights and does not support scalable permission updates. Service control policies apply to accounts within an AWS Organization and not to individual user accounts; hence, they are not suitable for managing individualized permissions within a single account.
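A minimal boto3 sketch of the group-based pattern follows; the group name, managed policy, and user name are hypothetical.

```python
import boto3

iam = boto3.client("iam")

# Create a group for one job function and attach a policy to it
# (group name, policy ARN, and user name are placeholders).
iam.create_group(GroupName="DataAnalysts")
iam.attach_group_policy(
    GroupName="DataAnalysts",
    PolicyArn="arn:aws:iam::aws:policy/AmazonAthenaFullAccess",
)

# Membership grants the job-function permissions; changing the group's
# policy later propagates to every member automatically.
iam.add_user_to_group(GroupName="DataAnalysts", UserName="jane.doe")
```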
Ask Bash
What are IAM groups in AWS and how do they function?
What is the significance of policy updates in IAM groups?
What are individual policies and why are they less efficient than group policies?
A company utilizing a managed relational database service is facing performance bottlenecks during sporadic peaks in read traffic. They need to improve query performance during these periods without incurring excessive costs. What should they implement to handle the varying read loads effectively?
- Add an additional standby replica to distribute the load.
- Create read replicas to distribute the incoming read requests.
- Scale up the primary instance to handle the increased load.
- Introduce a caching service to handle the read traffic.
Answer Description
Creating read replicas of the primary database instance allows the read query load to be distributed, helping to maintain query performance during traffic peaks without a costly scaling of the primary instance. This improves read throughput at a lower cost than scaling up the primary instance, and it works even for queries whose results cannot be effectively cached. Implementing read replicas is more appropriate for balancing read traffic than using a caching service, which is mainly beneficial for frequently accessed, rarely changing data, or than adding a standby replica, which is focused on high availability rather than read distribution.
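At the application level, spreading reads across replicas can be as simple as rotating through their endpoints. The sketch below is purely illustrative, with made-up endpoint names; writes would still go to the primary endpoint.

```python
import random

# Hypothetical read-replica endpoints (taken from the RDS console or API).
READ_REPLICA_ENDPOINTS = [
    "orders-db-replica-1.abc123.us-east-1.rds.amazonaws.com",
    "orders-db-replica-2.abc123.us-east-1.rds.amazonaws.com",
]

def pick_read_endpoint() -> str:
    """Choose a replica endpoint for a read-only query."""
    return random.choice(READ_REPLICA_ENDPOINTS)
```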
Ask Bash
What are read replicas in the context of a managed relational database?
How does creating read replicas help in cost management for database performance?
What are the primary differences between read replicas and caching services?
As part of its disaster recovery plan, an eCommerce company must ensure uninterrupted service of their workload in case of an EC2 instance failure. The workload is sensitive to client session state. To improve the current architecture's resiliency, what should be implemented?
- Assign primary and secondary Elastic IP addresses to EC2 instances and script failover logic to re-associate the EIP to a standby instance upon failure.
- Implement an Application Load Balancer with an EC2 Auto Scaling group, distributing instances across multiple Availability Zones and leveraging ALB's stickiness feature.
- Deploy the application on Amazon S3 and serve static resources through Amazon CloudFront, using Amazon RDS with a Multi-AZ deployment for the database layer.
- Use Amazon RDS Multi-AZ with synchronous replication and automatic failover alongside AWS Elastic Beanstalk for application deployment.
Answer Description
Using an Application Load Balancer in conjunction with EC2 Auto Scaling across multiple Availability Zones is the most resilient option for maintaining service continuity and client session state. The ALB offers session stickiness, which is crucial for stateful applications to maintain user session information across requests, hence its suitability for an eCommerce workload. Auto Scaling ensures that upon an EC2 instance failure, a new instance is provisioned automatically to replace the failed one, preserving the desired capacity and performance levels. Directly deploying the application on S3 with Amazon CloudFront for static resources does not cater to the stateful nature of the application. While RDS Multi-AZ does serve as a resilient database solution, it does not facilitate high availability for the application layer itself. Utilizing Elastic IP addresses does not inherently provide failover for instance failures and lacks the capability to automatically reroute traffic to healthy instances.
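For reference, enabling session stickiness on an existing ALB target group might look like the boto3 sketch below; the target group ARN and cookie duration are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Enable load-balancer-generated cookie stickiness on the target group
# (target group ARN and cookie duration are placeholders).
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123",
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "3600"},
    ],
)
```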
Ask Bash
What is an Application Load Balancer (ALB)?
How does EC2 Auto Scaling work?
What does session stickiness mean in the context of ALB?
Your client is seeking a storage solution that allows for simultaneous file access from multiple compute instances. The need is for a managed service that supports NFS protocols. Which option should you recommend?
- Amazon Elastic File System (EFS)
- Elastic Block Store
- Simple Storage Service
Answer Description
Amazon Elastic File System (EFS) is optimized for use cases where a shared file system is required. It supports the NFS protocol, and multiple compute instances can simultaneously access the same file system, making it the correct choice for the described scenario. Alternative solutions like Amazon S3 and Amazon EBS do not offer file-level storage with concurrent access from multiple instances. S3 provides object storage accessed through an API rather than a mountable file system, so it does not support standard file system semantics, while EBS is block-level storage that is attached to one instance at a time, ruling out simultaneous access.
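A rough boto3 sketch of provisioning an EFS file system and exposing it to one subnet is shown below; the subnet and security group IDs are placeholders, and in practice you would wait for the file system to become available before adding mount targets.

```python
import boto3

efs = boto3.client("efs")

# Create a file system, then expose it in a subnet via a mount target
# (subnet and security group IDs are placeholders). Instances in that
# subnet can then mount it concurrently over NFS.
fs = efs.create_file_system(PerformanceMode="generalPurpose", Encrypted=True)

efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroups=["sg-0123456789abcdef0"],
)
```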
Ask Bash
What are NFS protocols?
What are the main differences between Amazon EFS and Amazon S3?
Can you explain why Amazon EBS is not suitable for this scenario?
A global enterprise maintains three separate environments for their application lifecycle: development, staging, and production. Each environment is managed through distinct cloud accounts to enhance compartmentalization. With a centralized directory service managing user credentials on-premises, which approach should be chosen to provide developers with environment-specific access, while adhering to the security principle of least privilege?
- Integrate the centralized directory service with cloud-based trust relationships and environment-specific roles for secure, limited access.
- Assign credentials from the directory service to individual user accounts in each cloud environment, applying direct permissions.
- Distribute multiple access credentials tied to a singular identity within the primary cloud account for access management.
- Use a user identity and authentication management service suited for applications to control access to different cloud accounts.
Answer Description
Setting up a trust relationship between the centralized directory service and cloud-based resources, with specific roles dedicated to each environment, is the most secure and scalable solution. This maintains a single identity for users while leveraging existing authentication mechanisms and enables appropriate, limited access to development, staging, or production resources. Creating individual user accounts in each environment and attaching permissions directly to them is harder to manage and scale. Amazon Cognito is better suited for consumer application user pools and identity management. Using one cloud identity with multiple credentials is not a best practice due to security and management complexity.
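A minimal sketch of the role-assumption step: a federated developer obtains temporary credentials for an environment-specific role and uses them for subsequent API calls. The role ARN, account ID, and session name are hypothetical.

```python
import boto3

sts = boto3.client("sts")

# Assume an environment-specific role in the development account
# (role ARN and session name are placeholders).
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111111111111:role/DevEnvironmentAccess",
    RoleSessionName="jane.doe-dev-session",
)["Credentials"]

# The temporary credentials are scoped to that role's permissions.
dev_s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```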
Ask Bash
What are cloud-based trust relationships?
What is the principle of least privilege?
How do environment-specific roles enhance security?
A company is migrating an on-premises application to AWS. The application requires shared storage that provides low-latency access to data and supports standard file system features like file locking and hierarchical directories. The data is frequently updated, and the solution should be scalable and cost-effective. Which AWS storage service is the MOST appropriate to meet these requirements?
- Amazon S3 Glacier.
- Amazon Elastic File System (Amazon EFS).
- Amazon Simple Storage Service (Amazon S3).
- Amazon Elastic Block Store (Amazon EBS).
Answer Description
Amazon Elastic File System (Amazon EFS) is the most appropriate storage service for this scenario. EFS provides a scalable, fully managed Network File System (NFS) for use with AWS Cloud services and on-premises resources. It supports standard file system semantics such as file locking and hierarchical directories, which are essential for applications that require shared file storage. EFS is designed for low-latency access to data and scales automatically as files are added and removed, making it both scalable and cost-effective. Amazon Simple Storage Service (Amazon S3) is object storage and does not support file system semantics like file locking or hierarchical directories. Amazon Elastic Block Store (Amazon EBS) provides block storage for EC2 instances and does not offer shared storage across multiple instances unless using EBS Multi-Attach, which has limitations and may not suit shared file system needs. Amazon S3 Glacier is intended for archival storage and is not suitable for frequently accessed data requiring low-latency access.
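To keep the shared file system cost-effective as data ages, an EFS lifecycle policy can move idle files to the Infrequent Access storage class. A minimal sketch, with a placeholder file system ID:

```python
import boto3

efs = boto3.client("efs")

# Move files that have not been accessed for 30 days to EFS Infrequent
# Access to reduce storage cost (placeholder file system ID).
efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",
    LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
)
```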
Ask Bash
What are the key features of Amazon EFS?
How does Amazon EFS differ from Amazon S3?
What are the limitations of using Amazon EBS for shared storage?
A multinational organization is deploying numerous environmental sensors across various locations to monitor and analyze ecological data in real-time. The data volume is substantial, and continuously streaming it back to a central processing location is becoming exceedingly costly because of the associated bandwidth usage. Which service should be used to preprocess and minimize the datasets locally before sending the refined data to the central system, thereby reducing transmission costs?
- Implement an IoT edge computing platform for local data processing
- Leverage local data centers to bring cloud services closer to metropolitan areas
- Utilize mobile edge computing infrastructure designed for telecom networks
- Employ portable edge computing and storage devices for large-scale data transfers
Answer Description
The option involving Wavelength is aimed primarily at applications that necessitate ultra-low latency for mobile devices and is not particularly suited for IoT sensor data. The concept of Local Zones is more about improving end user experience by reducing latency for interactive applications and is not primarily intended for local data processing scenarios. Snowball Edge is typically used for edge storage and batch data transfer rather than continuous, real-time edge processing. The correct choice, IoT Greengrass, is designed to allow connected devices to run local compute, message, data caching, and synchronization tasks. This includes executing functions, processing data streams, and only transmitting essential or processed data to the cloud, which aligns perfectly with the need to preprocess and reduce datasets at the source to save on transmission costs.
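Conceptually, a component running at the edge aggregates raw readings and forwards only a summary. The sketch below is illustrative only: the publish step is a placeholder stub, since a real Greengrass component would publish through the Greengrass IPC or MQTT client.

```python
import json
import statistics

def summarize(readings):
    """Reduce a window of raw sensor readings to a compact summary."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "min": min(readings),
        "max": max(readings),
    }

def publish_to_cloud(topic, payload):
    """Placeholder: a real component would use the Greengrass IPC/MQTT client."""
    print(f"would publish to {topic}: {payload}")

window = [21.4, 21.6, 22.0, 21.9, 35.2]  # hypothetical raw readings
publish_to_cloud("sensors/site-42/summary", json.dumps(summarize(window)))
```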
Ask Bash
What is IoT Greengrass?
What are the benefits of edge computing for IoT applications?
How does data caching work in IoT Greengrass?
For an Auto Scaling group managing a fleet of EC2 instances in an e-commerce application, a single Average CPU Utilization metric is sufficient to guarantee high availability during peak times.
- True
- False
Answer Description
Relying solely on Average CPU Utilization might not be sufficient to guarantee the high availability of an application during peak times. Auto-scaling decisions should be based on a broader set of metrics that reflect different aspects of system performance and user experience to avoid potential bottlenecks or resource constraints. These can include metrics like network I/O, disk I/O, memory usage, request latency, error rates, and throughput. A combination of these metrics can more accurately trigger scaling actions that maintain the application's high availability during demand surges.
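As one example of broadening the scaling signals, a target tracking policy can scale on ALB request count per target alongside a CPU-based policy. The group name, resource label, and target value below are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking policy on ALB request count per target
# (group name, resource label, and target value are placeholders).
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="requests-per-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ALBRequestCountPerTarget",
            "ResourceLabel": "app/web-alb/abc123/targetgroup/web/def456",
        },
        "TargetValue": 500.0,
    },
)
```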
Ask Bash
What are some other metrics that can be used for auto-scaling besides CPU utilization?
What is an auto-scaling group and how does it work in AWS?
Why is high availability important in an e-commerce application?
A company wants to ensure high availability and disaster recovery for its critical web application. The web application currently resides in a single region within two Availability Zones. Due to regulatory compliance, the data must reside within the country, and the application must continue to operate even if one full region were to go offline. Which of the following approaches will BEST meet these requirements?
- Create a secondary site in another region within the same country and employ Amazon Route 53 health checks and traffic management.
- Expand the application to include additional Availability Zones in the same region.
- Add read replicas of the database in different Availability Zones across two regions, without configuring traffic management.
Answer Description
Creating a secondary site in another region within the same country, with Amazon Route 53 managing traffic between regions, ensures that the application stays online even if the primary region fails. Amazon Route 53 can detect if a region is down and route traffic to another available region that also resides within the country, thus maintaining compliance and high availability. This approach contributes to a fault-tolerant architecture by preventing a single region's failure from causing application downtime. Using multiple Availability Zones alone is not sufficient because it doesn't protect against a whole-region failure. Adding read replicas across regions without traffic management covers only the database and provides no way to redirect application traffic during a regional outage. Expanding to additional Availability Zones in the same region likewise cannot satisfy the requirement to survive a full-region outage.
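A sketch of the primary half of a Route 53 failover record set, guarded by a health check; the hosted zone ID, record name, IP address, and health check ID are placeholders, and a matching SECONDARY record would point at the standby region.

```python
import boto3

route53 = boto3.client("route53")

# Primary failover record for the application, guarded by a health check
# (hosted zone ID, record values, and health check ID are placeholders).
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "shop.example.com",
                "Type": "A",
                "SetIdentifier": "primary-region",
                "Failover": "PRIMARY",
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
                "HealthCheckId": "11111111-2222-3333-4444-555555555555",
            },
        }],
    },
)
```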
Ask Bash
What is Amazon Route 53 and how does it work?
What are Availability Zones and why are they important for disaster recovery?
What does it mean for data to be compliant with regulatory requirements and why is it important?
A financial institution utilizes a key management service to enhance the security of its data-at-rest within cloud storage services. They aim to adhere to a stringent security protocol that requires the automatic renewal of encryption materials. Which approach can the institution implement to fulfill this requirement without altering the existing key identifiers or metadata?
- Deferring the renewal process until the key reaches its designated expiration period.
- Enabling automatic renewal for the encryption keys through the service's management console or API.
- Establishing a manual process where the keys are only updated in response to a security incident.
- Creating a new key manually every five years while disabling the old one.
Answer Description
The service that manages customer encryption keys offers the capability to automate the rotation of the underlying encryption material used by a managed key, typically on an annual basis. This automation ensures that the material is updated regularly without changing the key identifier or associated metadata, thus adhering to strict security protocols. The operation does not demand manual intervention, nor does it rely on a reactive approach to potential key compromises. Manually creating a new key every five years and disabling the old one would change the key identifier, which the requirement explicitly rules out.
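For AWS KMS specifically, automatic rotation is a single API call; the key ID below is a placeholder.

```python
import boto3

kms = boto3.client("kms")

KEY_ID = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder key ID

# Enable automatic rotation of the key material; the key ID, ARN,
# aliases, and metadata remain unchanged after each rotation.
kms.enable_key_rotation(KeyId=KEY_ID)

# Confirm that rotation is now enabled.
print(kms.get_key_rotation_status(KeyId=KEY_ID)["KeyRotationEnabled"])
```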
Ask Bash
What is a key management service (KMS)?
What does data-at-rest mean?
Why is automatic key rotation important?
A Solutions Architect has been tasked with dissecting cloud expenditure to allocate charges to the correct departments within an organization, each having its resources. The account used is shared among various teams. Which service would BEST facilitate this requirement for detailed financial governance?
- Transfer for SFTP to track and charge back file transfer costs
- NAT Gateway configured with detailed billing reports
- Budgets with alerts for forecasted spend anomalies
- Cost Explorer with implementation of categorization tags
Answer Description
Cost Explorer with the implementation of categorization tags provides the capability to analyze and attribute expenses to specific resources and operational groups. By assigning relevant categorization tags to the resources used by each department, the Architect can filter and break down the expense data based on the organizational structure. These tags must be activated for cost allocation in the billing console before they can be used for granular visibility into expenditure associated with designated departments or teams.
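Once a cost-allocation tag is active, Cost Explorer data can be grouped by it programmatically as well as in the console. In this sketch the "Department" tag key and the date range are placeholders.

```python
import boto3

ce = boto3.client("ce")

# Break down last month's unblended cost by a cost-allocation tag
# (the "Department" tag key and date range are placeholders; the tag
# must be activated for cost allocation before data appears under it).
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "Department"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])
```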
Ask Bash
What are categorization tags in AWS?
How does Cost Explorer help in analyzing cloud expenditure?
What are the main differences between AWS Budgets and Cost Explorer?
A company is storing large volumes of financial records that are frequently accessed for the first month after creation and are then rarely accessed. Compliance requirements mandate that these records must be preserved for seven years before they can be deleted. Which storage solution would be the most cost-effective for these requirements?
- Utilize Amazon S3 One Zone-Infrequent Access for both immediate and long-term storage.
- Use Amazon S3 Standard for immediate storage, and transition to Amazon S3 Glacier Deep Archive for long-term archival after one month.
- Store all financial records in Amazon S3 Standard to ensure availability and quick access at all times.
- Maintain the financial records on Amazon EFS for quick access and traditional file system interfaces.
Answer Description
Amazon S3 Glacier Deep Archive is designed for data that is accessed very infrequently but needs to be retained for a long period. It is the most cost-effective solution for archival storage, meeting compliance requirements and ensuring that records are preserved at minimum cost. Keeping the records in Amazon S3 Standard or Amazon S3 One Zone-Infrequent Access for the full seven years is not optimized for long-term archival: their per-GB storage prices are much higher than archival tiers, and One Zone-IA stores data in a single Availability Zone, which is a poor fit for compliance records. Amazon S3 Standard is, however, appropriate for the first month of frequent access before the lifecycle transition.
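A lifecycle rule expressing this policy might look like the following boto3 sketch; the bucket name and prefix are placeholders, and 2,555 days approximates seven years.

```python
import boto3

s3 = boto3.client("s3")

# Transition records to Glacier Deep Archive after 30 days and expire
# them after roughly seven years (bucket name and prefix are placeholders).
s3.put_bucket_lifecycle_configuration(
    Bucket="financial-records-example",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "records/"},
            "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
            "Expiration": {"Days": 2555},
        }],
    },
)
```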
Ask Bash
What is Amazon S3 and how does it work?
What is S3 Glacier Deep Archive and when should I use it?
What is the difference between S3 Standard and S3 One Zone-Infrequent Access?