AWS Certified Solutions Architect Associate Practice Test (SAA-C03)
Use the form below to configure your AWS Certified Solutions Architect Associate Practice Test (SAA-C03). The practice test can be configured to include only certain exam objectives and domains. You can choose from 5 to 100 questions and set a time limit.

AWS Certified Solutions Architect Associate SAA-C03 Information
AWS Certified Solutions Architect - Associate showcases knowledge and skills in AWS technology across a wide range of AWS services. The focus of this certification is on the design of cost- and performance-optimized solutions, demonstrating a strong understanding of the AWS Well-Architected Framework. This certification can enhance the career profile and earnings of certified individuals and increase their credibility and confidence in stakeholder and customer interactions.
The AWS Certified Solutions Architect - Associate (SAA-C03) exam is intended for individuals who perform a solutions architect role. The exam validates a candidate’s ability to design solutions based on the AWS Well-Architected Framework.
The exam also validates a candidate’s ability to complete the following tasks:
- Design solutions that incorporate AWS services to meet current business requirements and future projected needs
- Design architectures that are secure, resilient, high-performing, and cost optimized
- Review existing solutions and determine improvements
Free AWS Certified Solutions Architect Associate SAA-C03 Practice Test
Press start when you are ready, or press Change to modify any settings for the practice test.
- Questions: 15
- Time: Unlimited
- Included Topics: Design Secure Architectures, Design Resilient Architectures, Design High-Performing Architectures, Design Cost-Optimized Architectures
Free Preview
This test is a free preview, no account required.
A multinational corporation seeks to fortify the security of the top-level user credentials across its numerous cloud accounts, where each account functions under its own operational domain. They intend to put into effect a two-step verification process for all top-level user logins and establish an automatic mechanism for monitoring any top-level credential usage in API calls. Which service should they utilize to automate the monitoring of such activities throughout all operational domains?
Amazon CloudTrail
AWS Config
Amazon GuardDuty
AWS Identity and Access Management (IAM)
Answer Description
Amazon CloudTrail is the correct answer because it logs account actions and enables automatic detection of top-level (root) user API activity. It records API events made within an account and can be set up to generate alerts when specific activities, including those performed by the top-level account user, are detected. AWS Config tracks resource configurations and is not suited to monitoring account activity directly. AWS Identity and Access Management (IAM) manages identities and permissions but does not offer automated detection or alerting for specific user actions. Amazon GuardDuty focuses on threat detection and monitors for unusual activity, but it is not specifically designed for tracking the usage of top-level user credentials.
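For illustration only (this goes beyond what the question requires), one common pattern is to pair CloudTrail with an Amazon EventBridge rule that matches root-user API calls and forwards them to an alerting target. The rule name, account ID, and SNS topic in this sketch are placeholders, not values from the scenario:

```python
import json
import boto3

# Sketch: route root-user API calls (recorded by CloudTrail) to an SNS topic
# via an EventBridge rule. The rule name and topic ARN are placeholders.
events = boto3.client("events")

root_activity_pattern = {
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {"userIdentity": {"type": ["Root"]}},
}

events.put_rule(
    Name="detect-root-api-activity",
    EventPattern=json.dumps(root_activity_pattern),
    State="ENABLED",
)

# Send matching events to an (assumed, pre-existing) SNS topic for alerting.
events.put_targets(
    Rule="detect-root-api-activity",
    Targets=[{"Id": "alert-topic", "Arn": "arn:aws:sns:us-east-1:111122223333:root-activity-alerts"}],
)
```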
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Amazon CloudTrail and how does it work?
What are the differences between AWS Config and CloudTrail?
How does automated monitoring of API call activities enhance security?
A company operates under a multi-account strategy where one account is managed by the security engineers and another is operated by a separate team responsible for network administration. The security team needs to allow the network administration team's account access to a specific Amazon S3 bucket without broadening the access to other accounts. Which of the following is the MOST secure way to grant the required access?
Edit the S3 bucket's Access Control List (ACL) to include the user identifiers from the team handling network administration.
Implement a policy for individual users in the security engineers' account that grants permissions to the network administration team.
Attach a resource-based policy directly to the S3 bucket identifying the network administration team's account as the principal with the specified permissions.
Set up a bucket policy that limits access to the S3 bucket based on the source IP range of the network administration team's office location.
Answer Description
Attach a resource-based policy (bucket policy) to the S3 bucket that identifies the network administration team's AWS account as the principal and grants only the required permissions. A bucket policy is evaluated in the account that owns the resource and explicitly supports specifying an entire account in the Principal element, which cleanly limits access to that account.
IAM identity-based policies in the security engineers' account cannot by themselves grant principals from another account access to the bucket; a resource-based policy in the bucket owner's account is still required for cross-account access. Although legacy S3 ACLs can grant permissions to another AWS account via that account's canonical user ID, AWS now recommends disabling ACLs and using bucket policies for simpler management and finer-grained control. Restricting access by source IP address does not satisfy the requirement because any principal from any account could still reach the bucket if it originates from the allowed network range.
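As a rough sketch of what such a bucket policy might look like when applied with boto3 (the bucket name, account ID, and chosen actions below are placeholders, not values from the scenario):

```python
import json
import boto3

s3 = boto3.client("s3")

# Sketch: grant the network administration account (placeholder ID 222233334444)
# read access to objects in the bucket owned by the security engineers' account.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowNetworkAdminAccountRead",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::222233334444:root"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-shared-bucket",
                "arn:aws:s3:::example-shared-bucket/*",
            ],
        }
    ],
}

s3.put_bucket_policy(
    Bucket="example-shared-bucket",
    Policy=json.dumps(bucket_policy),
)
```

Because the Principal names an entire account, administrators in that account still have to grant their own identities matching IAM permissions before they can use the bucket.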
Ask Bash
What is a resource-based policy in AWS?
What are the differences between resource-based policies and IAM policies?
Why is using IAM user policies or modifying ACLs less secure than using a resource-based policy?
An enterprise with distinct departments needs to ensure managed, independent access to their cloud resources within a shared environment. The configuration should enable department-specific resource management and enforce the least privilege access principle. As a solutions architect, which option would you recommend to achieve this goal?
Set up groups corresponding to the enterprise's internal structure with attached permissions, ensuring each group's access is limited to resources necessary for their operations.
Utilize a central governance mechanism to broadly restrict services accessible by each department without individualized access controls.
Create separate user accounts with individualized permissions tailored to each member's role in the enterprise to manage resource access manually.
Implement role-switching for different teams to grant them temporary access to other departments' resources when required.
Answer Description
Establishing groups for each department and attaching the policies necessary for their access needs offers the best solution. This strategy enables clear role definition and privilege segregation. Users are allocated to their respective groups, simplifying the administration and modification of permissions in line with their roles. The incorrect choices involve strategies that increase management complexity without adding value, do not match the scenario's requirements, or fail to offer the precise control needed for departmental resource access within a shared environment.
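A minimal sketch of the group-based approach using boto3, assuming hypothetical department, user, and policy names (none of these identifiers come from the question):

```python
import boto3

iam = boto3.client("iam")

# Sketch: one group per department with a scoped managed policy attached.
iam.create_group(GroupName="analytics-team")

# Attach a customer managed policy (assumed to already exist) that limits the
# group to the resources the analytics department actually needs.
iam.attach_group_policy(
    GroupName="analytics-team",
    PolicyArn="arn:aws:iam::111122223333:policy/analytics-least-privilege",
)

# Adding a user to the group grants that user exactly the group's permissions.
iam.add_user_to_group(GroupName="analytics-team", UserName="jdoe")
```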
Ask Bash
What does 'least privilege access principle' mean?
How do AWS Identity and Access Management (IAM) groups work?
What are the benefits of using IAM roles instead of individual user permissions?
What is the purpose of using multiple Availability Zones for deploying applications on AWS?
To serve as a single point of contact and management for global resources in multiple regions.
To increase the overall performance of compute instances by equally distributing the workload.
To cache static content closer to users and reduce latency.
To provide high availability and fault tolerance for applications by distributing resources within a region across physically separated data centers.
Answer Description
Using multiple Availability Zones provides high availability and fault tolerance for applications by distributing resources within a region across physically separated data centers. This ensures that if one Availability Zone becomes unavailable or compromised, others can continue to operate, reducing the potential impact on the application's users. The incorrect answers mention increasing performance and reducing latency, which are benefits related to edge locations and caching respectively, rather than the primary purpose of Availability Zones.
Ask Bash
What are Availability Zones in AWS?
How does using multiple AZs affect application deployment?
What happens if an entire Availability Zone goes down?
Your client wishes to build a system where their web and mobile platforms can securely request information from a variety of upstream services. This system must support managing developer access, accommodate changes in the structure of requests, and offer mechanisms to limit the number of incoming requests per user. Which Amazon service should they implement to meet these requirements?
Amazon Cognito
AWS Step Functions
Amazon API Gateway
AWS Direct Connect
Amazon Simple Storage Service (S3)
AWS Lambda
Answer Description
The correct answer is Amazon API Gateway because it securely manages API requests, supports developer access control, handles request transformations, and enforces rate limiting. It integrates with AWS services like Lambda and Cognito, making it ideal for managing web and mobile API traffic. The incorrect options lack full API management capabilities - AWS Direct Connect is for private networking, S3 is for storage, and Cognito only handles authentication. Step Functions is for workflow automation, and Lambda executes backend logic but lacks API request management. While some of these services complement API Gateway, none provide a complete solution on their own.
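A hedged sketch of how per-client throttling could be configured with a usage plan and API key in boto3; the REST API ID, stage name, and limits are illustrative assumptions, not values from the scenario:

```python
import boto3

apigw = boto3.client("apigateway")

# Sketch: per-client throttling via a usage plan tied to an API key.
plan = apigw.create_usage_plan(
    name="standard-tier",
    throttle={"rateLimit": 50.0, "burstLimit": 100},  # requests/second and burst
    quota={"limit": 10000, "period": "DAY"},          # hard daily cap per key
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
)

key = apigw.create_api_key(name="customer-a", enabled=True)

# Associate the key with the plan so this client's calls are rate limited.
apigw.create_usage_plan_key(
    usagePlanId=plan["id"],
    keyId=key["id"],
    keyType="API_KEY",
)
```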
Ask Bash
What functionalities does Amazon API Gateway offer for managing access?
How does API Gateway accommodate changes in the structure of requests?
What mechanisms does API Gateway provide to limit the number of incoming requests per user?
Your enterprise is scaling and plans to create separate environments for various departments. To ensure centralized management, consistent application of compliance requirements, and an automated setup process for these environments, which service should you leverage?
AWS Config
AWS Control Tower
AWS Organizations
Amazon Inspector
Answer Description
Using AWS Control Tower, enterprises can manage multiple environments by setting up a well-architected baseline, automating the provisioning of new environments, and uniformly applying policy controls (guardrails) across all environments for security and compliance. While the other options provide configuration tracking, account grouping, or security assessment capabilities, they do not offer the comprehensive solution needed for centralized governance and automated environment setup.
Ask Bash
What is AWS Control Tower?
What are guardrails in AWS Control Tower?
How does AWS Control Tower differ from AWS Organizations?
A company is deploying a three-tier web application consisting of a web server tier, application server tier, and a database tier. How should the organization restrict each tier to only the permissions necessary for their specific operations?
Employ root user credentials for all instances to maintain simplicity in permissions management and ensure full access to resources.
Remove all permissions from instances in each tier to maximize security and prevent potential security incidents.
Distribute administrative credentials to instances in all tiers, ensuring they have sufficient permissions for any action they might need to perform.
Assign tailored IAM roles to each EC2 instance in the respective tiers with only the permissions necessary for their functions.
Answer Description
Implementing fine-grained access controls by assigning tailored IAM roles to each tier's respective EC2 instances ensures that each tier operates with only the permissions necessary for its duties. This strict adherence to the principle of least privilege prevents excessive permissions that could be exploited in case of a security breach. Providing overarching administrative credentials to all tiers, using root account access, or stripping all permissions contradict the security best practice of granting least privilege to perform required functions and are, therefore, incorrect.
Ask Bash
What are IAM roles and how do they work in AWS?
What is the principle of least privilege?
What does it mean to employ fine-grained access controls?
A company is decomposing a monolithic web application into microservices on AWS. The engineering team wants each new microservice to scale out easily when traffic spikes, without requiring complex session-handling logic. Which design approach BEST satisfies this requirement?
Write the microservice to read and write all application data to the local file system.
Enable sticky sessions on the Application Load Balancer so each user is routed to the same instance.
Design each microservice to be stateless and persist required data in a shared store such as Amazon DynamoDB.
Store user session data in the microservice's in-memory cache for fast access.
Answer Description
Designing a microservice to be stateless allows any instance of the service to process any incoming request. All state information (for example, user sessions or shopping-cart data) is stored in an external, shared data store such as Amazon DynamoDB or Amazon ElastiCache. Because instances do not rely on locally held state, Auto Scaling groups or container orchestrators can freely add or remove instances, and an Application Load Balancer can route requests to any healthy instance. Approaches that keep state on the instance (sticky sessions, local file storage, or in-memory caches within the service) couple requests to specific instances and hinder horizontal scaling.
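A minimal sketch of the stateless pattern, assuming a hypothetical DynamoDB table named user-sessions keyed by session_id (the table and function names are illustrative, not part of the question):

```python
import boto3

# Sketch: a stateless service keeps session data in DynamoDB instead of in
# process memory, so any instance can serve any request and instances can be
# added or removed freely by Auto Scaling.
dynamodb = boto3.resource("dynamodb")
sessions = dynamodb.Table("user-sessions")

def save_session(session_id: str, cart_items: list) -> None:
    """Persist session state externally so instances stay disposable."""
    sessions.put_item(Item={"session_id": session_id, "cart_items": cart_items})

def load_session(session_id: str) -> dict:
    """Any scaled-out instance can rebuild the session from the shared store."""
    response = sessions.get_item(Key={"session_id": session_id})
    return response.get("Item", {})
```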
Ask Bash
What is a microservices architecture?
What does it mean for a workload to be stateless?
How does horizontal scaling work in a microservices architecture?
You have been tasked with designing a solution for your company that allows existing corporate network users to obtain temporary credentials to interact with console and programmatic interfaces, streamlining the sign-on process and avoiding separate user management. Which method would you employ to facilitate this?
Implement a proprietary authentication solution specific to the company's internal systems for granting access.
Create individual IAM users corresponding to each member of the workforce and manage permissions directly.
Integrate the corporate directory with identity federation to assign permissions through temporary security credentials.
Distribute long-term security credentials to users for manual configuration of access to the necessary interfaces.
Answer Description
The process of federation involves integrating an external directory service with IAM roles. By setting up federation, users authenticate with their existing directory credentials and then receive temporary security credentials to operate in the console or interact with services via APIs or the CLI. This method honors the principle of least privilege and simplifies credential management while providing secure access. Manually creating IAM users is redundant and insecure for large enterprise environments. Distributing long-term credentials goes against security best practices, making them a poor choice. Lastly, developing a proprietary authentication solution specific to the corporation's internal systems does not use AWS's built-in mechanisms for secure access management and is less efficient and potentially less secure.
Ask Bash
What is identity federation, and how does it work with AWS?
What are temporary security credentials, and why are they important?
What is the principle of least privilege, and how does it apply to AWS?
An organization aims to maintain operational continuity of its critical workload even if an entire data center servicing their region encounters an outage. Their solution includes computing resources distributed across diverse physical locations within the same geographical area. To enhance the system's robustness, which strategy should be implemented for the data layer?
Implement a Multi-AZ configuration for the relational database to promote automatic failover and data redundancy.
Install a globally distributed database with read replicas in various regions for geographical data distribution.
Configure an active-passive setup using a secondary region and enact health checks to direct traffic upon failure.
Introduce a Load Balancer to distribute traffic among database instances to minimize the impact of a location outage.
Answer Description
The question asks what you can do to maintain operational continuity if one data center in a region has an outage. Keep in mind that with AWS, one Region is made up of multiple data centers grouped into Availability Zones. Therefore, a Multi-AZ setup helps mitigate and prevent outages during a data center failure.
Choosing a Multi-AZ deployment for an RDS instance provides high availability by automatically maintaining a synchronous standby replica in a different data center, or Availability Zone. In case of an infrastructure failure, the database will fail over to the standby so that database operations can resume quickly without manual intervention. This choice is the most aligned with the requirement for operational continuity within a single region in the face of a data center outage. The other answers either describe strategies that introduce geographical redundancy, which goes beyond the scope of the question, or load balancing, which does not address the need for automatic failover at the data layer.
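A brief sketch of enabling Multi-AZ when creating an RDS instance with boto3; every identifier and credential below is a placeholder for illustration only:

```python
import boto3

rds = boto3.client("rds")

# Sketch: MultiAZ=True tells RDS to maintain a synchronous standby in another
# Availability Zone and fail over to it automatically on infrastructure failure.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",
    Engine="mysql",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="example-password-change-me",
    MultiAZ=True,
)
```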
Ask Bash
What is Multi-AZ deployment in AWS RDS?
What are Availability Zones (AZs) in AWS?
How does automatic failover work in AWS RDS?
An application running on Amazon EC2 instances needs to read log files that are stored only in the S3 bucket named app-logs. No other S3 actions or buckets are required.
Which IAM policy best implements the principle of least privilege for the application's IAM role?
Attach the AWS managed policy AmazonS3ReadOnlyAccess to the role.
Allow s3:GetObject and s3:PutObject on all S3 buckets in the account.
Allow the action s3:GetObject on the resource arn:aws:s3:::app-logs/*.
Allow s3:* on the resource arn:aws:s3:::app-logs/*.
Answer Description
Granting s3:GetObject on the specific bucket path arn:aws:s3:::app-logs/* limits both the actions and the resource scope to exactly what the application needs, satisfying the least-privilege principle. The other options either grant broader actions (write or list), apply to every bucket in the account, or use a managed policy that provides read access to all buckets, all of which exceed the stated requirement.
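The policy described above might look like the following when attached inline to the application's role with boto3; the role and policy names are placeholders, only the action and resource come from the question:

```python
import json
import boto3

iam = boto3.client("iam")

# Sketch: least-privilege read access to objects in the app-logs bucket only.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::app-logs/*",
        }
    ],
}

iam.put_role_policy(
    RoleName="app-logs-reader-role",
    PolicyName="read-app-logs-only",
    PolicyDocument=json.dumps(policy),
)
```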
Ask Bash
What are IAM users in AWS?
Why is the principle of least privilege important for security?
What are some examples of implementing least privilege in AWS?
A SaaS provider currently runs its entire stack in the us-east-1 Region. Customers are located in North America, Europe, and Asia-Pacific. The product team adds two new requirements:
- Decrease round-trip latency for all users and maintain service availability if an AWS Region becomes unavailable.
- Comply with regional regulations that require all customer data created in the European Union (EU) to remain in EU infrastructure.
As the solutions architect, which approach best meets both requirements while minimizing ongoing operational overhead?
Keep the workload in us-east-1 and place Amazon CloudFront in front of the application to cache static and dynamic content at global edge locations.
Retain a single-Region deployment in us-east-1 but add AWS Global Accelerator to improve network paths for TCP and UDP traffic worldwide.
Deploy the application stack in eu-central-1 and ap-southeast-2 in addition to us-east-1. Use Amazon Route 53 latency-based routing with health checks to direct users to the nearest healthy Region. Store EU customer data only in eu-central-1 and disable cross-Region replication for those buckets and databases.
Move all compute instances into a cluster placement group in us-east-1 and purchase a 100 Gbps AWS Direct Connect to enhance throughput and latency for every user.
Answer Description
Deploying identical stacks in multiple AWS Regions and using Amazon Route 53 latency-based (or geolocation) routing with health checks fulfills both goals. Each Region serves traffic from the closest users, reducing latency, and EU data can be stored exclusively in the EU Region (for example, eu-central-1) with cross-Region replication disabled, satisfying residency rules. CloudFront or Global Accelerator alone improve network performance but still route dynamic writes to the original Region, violating data-residency requirements. Cluster placement groups and dedicated network links likewise leave the workload in a single Region and do not address compliance.
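A sketch of one latency-based record for the EU Region using boto3; the hosted zone ID, domain, IP address, and health check ID are placeholders, and similar records would be created for the other Regions:

```python
import boto3

route53 = boto3.client("route53")

# Sketch: one latency record per Region, each tied to a health check, so users
# are answered with the nearest healthy endpoint.
route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "eu-central-1",
                    "Region": "eu-central-1",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                    "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                },
            }
        ]
    },
)
```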
Ask Bash
What is DNS and how does it help with traffic routing?
What are regional data protection regulations?
What are the benefits of a geographically distributed cloud deployment?
Your company is deploying a web application on AWS using Amazon RDS for database storage, and the Security Officer is drafting a security strategy. What responsibility does AWS directly take care of as part of the shared responsibility model for Amazon RDS?
Configuring database encryption at rest
Patching the underlying database software
Managing user permissions within the database
Designing secure logical database schemas
Answer Description
AWS is responsible for protecting the infrastructure that runs AWS services, which includes the physical security of data centers, the network infrastructure, and patching of managed services such as Amazon RDS. Customers are responsible for security inside the database: the data itself, controlling who can access it, configuring encryption, and designing the logical schema. Because AWS patches the database engine for managed RDS instances, the correct answer is 'Patching the underlying database software'.
Ask Bash
What is the shared responsibility model in AWS?
What does managing user permissions within a database involve?
Why is patching the underlying database software important?
A multinational enterprise has separate accounts for development and production environments to enhance security and operational efficiency. Developers need to access cloud resources in the production environment sporadically to perform troubleshooting. As a solutions architect, what approach would you suggest to facilitate these occasional access requirements while maintaining stringent security controls?
Implement trust relationships between the organization's accounts using roles with permissions to access necessary services, allowing for temporary credential assumption through a trusted federation.
Create identically named roles with necessary permissions in both the development and separate environment accounts.
Adjust the policies attached to resources in the separate environment to directly authorize access for identities from the development environment.
Provide distinct user credentials for each developer that grant access to the necessary services in the separate environment, with a scheduled monthly rotation policy.
Answer Description
The correct approach involves setting up trust relationships between accounts by creating roles that can be assumed as needed. The idea is to provide temporary credentials that can be used within security parameters defined by the role's permission policy. This enables the enterprise to control access precisely without having to manage permanent credentials for each developer in each environment, adhering to the principle of least privilege. Generating dedicated user credentials or creating identically named roles in both accounts doesn't follow the best practice of using temporary credentials for cross-account access and may not meet security and audit requirements. Enabling direct access by modifying resource policies could compromise security by making the resources too permissive and is not aligned with recommended security practices.
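A rough sketch of the cross-account pattern with boto3, using placeholder account IDs and role names: the production account creates a role that trusts the development account, and a developer then obtains temporary credentials by assuming it.

```python
import json
import boto3

iam = boto3.client("iam")

# Sketch: in the production account, create a role whose trust policy names
# the development account (placeholder ID 111122223333) as the principal.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="prod-troubleshooting-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# A developer in the development account obtains short-lived credentials:
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::444455556666:role/prod-troubleshooting-role",
    RoleSessionName="jdoe-troubleshooting",
)["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration
```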
Ask Bash
What are trust relationships in AWS?
What is the principle of least privilege?
What are temporary credentials in AWS and how do they work?
Which of the following statements about enabling multi-factor authentication (MFA) in AWS is correct?
Select ONE answer.
MFA can only be enabled through the AWS CLI; the AWS Management Console does not support enabling MFA devices.
An IAM user can enable an MFA device for their own user if their IAM policy permits the required actions.
Only the AWS account root user can enable MFA, and IAM users cannot enable it even if granted permissions.
MFA can be enabled for an IAM role in the same way that it is enabled for a user.
Answer Description
An IAM user can enable an MFA device for their own user when their IAM policy grants the required permissions (such as iam:CreateVirtualMFADevice and iam:EnableMFADevice). The root user can enable MFA for the root user credentials, but it is not the only principal that can enable MFA. IAM roles cannot have MFA devices attached, and MFA can be configured through both the AWS Management Console and the AWS CLI.
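A hedged example of such a self-service MFA policy created with boto3, built around the actions named above; the policy name is a placeholder and the statement is simplified (a production policy might also allow listing users and deactivating devices):

```python
import json
import boto3

iam = boto3.client("iam")

# Sketch: allow an IAM user to manage an MFA device for their own user only.
# ${aws:username} scopes each caller to their own identity.
mfa_self_service_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:CreateVirtualMFADevice",
                "iam:EnableMFADevice",
                "iam:ListMFADevices",
                "iam:ResyncMFADevice",
            ],
            "Resource": [
                "arn:aws:iam::*:mfa/${aws:username}",
                "arn:aws:iam::*:user/${aws:username}",
            ],
        }
    ],
}

iam.create_policy(
    PolicyName="self-manage-mfa",
    PolicyDocument=json.dumps(mfa_self_service_policy),
)
```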
Ask Bash
What is IAM in AWS?
What is MFA and why is it important?
How do IAM policies work?
Wow!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.