AWS Certified Solutions Architect Associate Practice Test (SAA-C03)
Use the form below to configure your AWS Certified Solutions Architect Associate Practice Test (SAA-C03). The practice test can be configured to only include certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

AWS Certified Solutions Architect Associate SAA-C03 Information
AWS Certified Solutions Architect - Associate showcases knowledge and skills in AWS technology across a wide range of AWS services. The certification focuses on the design of cost- and performance-optimized solutions and demonstrates a strong understanding of the AWS Well-Architected Framework. It can enhance a certified individual's career profile and earnings and increase credibility and confidence in stakeholder and customer interactions.
The AWS Certified Solutions Architect - Associate (SAA-C03) exam is intended for individuals who perform a solutions architect role. The exam validates a candidate’s ability to design solutions based on the AWS Well-Architected Framework.
The exam also validates a candidate’s ability to complete the following tasks:
- Design solutions that incorporate AWS services to meet current business requirements and future projected needs
- Design architectures that are secure, resilient, high-performing, and cost optimized
- Review existing solutions and determine improvements

- Free AWS Certified Solutions Architect Associate SAA-C03 Practice Test
- 20 Questions
- Unlimited
- Design Secure Architectures
- Design Resilient Architectures
- Design High-Performing Architectures
- Design Cost-Optimized Architectures
An application running on Amazon EC2 instances needs to read log files that are stored only in the S3 bucket named app-logs. No other S3 actions or buckets are required.
Which IAM policy best implements the principle of least privilege for the application's IAM role?
- Allow the action s3:GetObject on the resource arn:aws:s3:::app-logs/*. 
- Allow s3:GetObject and s3:PutObject on all S3 buckets in the account. 
- Attach the AWS managed policy AmazonS3ReadOnlyAccess to the role. 
- Allow s3:* on the resource arn:aws:s3:::app-logs/*. 
Answer Description
Granting s3:GetObject on the specific bucket path arn:aws:s3:::app-logs/* limits both the actions and the resource scope to exactly what the application needs, satisfying the least-privilege principle. The other options either grant broader actions (write or list), apply to every bucket in the account, or use a managed policy that provides read access to all buckets, all of which exceed the stated requirement.
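As an illustration, the least-privilege policy described above can be written as the following JSON document. This is a sketch: the Sid is an arbitrary placeholder, not a value from the question.

```python
import json

# Minimal identity-based policy granting read-only access to objects
# in the app-logs bucket. The Sid is an arbitrary placeholder.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadAppLogsOnly",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::app-logs/*",
        }
    ],
}

print(json.dumps(least_privilege_policy, indent=2))
```

Note that both the action (`s3:GetObject` only) and the resource (one bucket's objects only) are narrowed, which is what distinguishes this policy from the broader distractors.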
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the principle of least privilege in IAM policies?
How does `s3:GetObject` differ from other S3 actions like `s3:PutObject`?
Why is using an AWS managed policy like `AmazonS3ReadOnlyAccess` not ideal here?
Which service is commonly utilized to manage policies governing who can use cryptographic keys for securing data stored in the cloud?
- Key Management Service 
- Secrets Manager 
- Identity and Access Management 
- Certificate Manager 
Answer Description
The correct service is the one specifically designed to create, control, and manage encryption keys and policies. It enables administrators to define user permissions and outline the scope of actions that can be performed with these keys. This service is integral to managing the lifecycle of encryption keys and their accessibility, helping to uphold the principles of confidentiality and integrity in cloud data security.
Ask Bash
What is AWS Key Management Service (KMS)?
How does KMS differ from AWS Secrets Manager?
What are resource-based policies in AWS KMS?
Which capability enables users to append labels to cloud resources to facilitate detailed tracking and categorization of spending?
- Cost allocation tags 
- Budget alerts 
- Account outlines 
- Resource packs 
Answer Description
The capability that allows users to append labels for detailed tracking and categorization of spending is cost allocation tags. These tags let users break down cloud expenditures by project, department, environment, or other organizational dimensions, enabling detailed cost analysis and improving spending visibility. Budget alerts can notify you when your costs exceed a certain threshold but do not categorize or track costs the way cost allocation tags do. "Account outlines" and "resource packs" are not AWS cost-management capabilities.
Ask Bash
What are cost allocation tags in AWS?
How do you enable cost allocation tags in AWS?
How do cost allocation tags differ from resource groups in AWS?
Which type of AWS database is optimized for scenarios with heavy read traffic and requires high availability of data serving?
- Amazon DynamoDB without DAX 
- Amazon S3 standard storage class 
- Amazon RDS with Read Replicas 
- Amazon Elastic File System (EFS) 
Answer Description
An Amazon RDS instance with multiple read replicas is ideal for read-intensive workloads. It lets the database handle a large number of read requests by distributing them across several read replicas, improving read throughput and offering high availability. Amazon RDS supports the creation and management of read replicas. A solution such as Amazon EFS would not offer required database features such as ACID properties and query support. While DynamoDB can also sustain high read rates, the scenario points to read replicas, a feature best associated with RDS in a high-read context. S3 is primarily an object storage service and, although it supports high read rates, it is not optimized for database-style read-intensive workloads the way RDS with read replicas is.
Ask Bash
What is an Amazon RDS Read Replica?
How does Amazon RDS with Read Replicas differ from DynamoDB with DAX?
What happens if the primary database fails in an Amazon RDS with Read Replicas setup?
A company needs to store thousands of data files generated daily from hundreds of sensors. Each sensor sends a small file roughly every minute. To minimize storage costs on Amazon S3, what is the BEST strategy the company should follow?
- Batch multiple files together before uploading to reduce the number of PUT requests to S3. 
- Upload each file individually to ensure immediate availability and processing in S3. 
- Configure S3 Intelligent-Tiering to automatically move the files to the most cost-efficient tier. 
- Enable S3 Transfer Acceleration on the bucket to optimize the upload speed of the files. 
Answer Description
Batching files together before uploading to Amazon S3 is a cost-effective strategy when dealing with a large number of small files. It reduces the number of PUT requests, which can lower costs, especially because S3 charges for each request. Uploading files individually would generate a significant number of requests, increasing the cost disproportionately in comparison to the actual storage used. Although using S3 Transfer Acceleration might speed up the transfer process and S3 Intelligent-Tiering could help with cost savings over time, they do not address the main cost issue related to the sheer number of PUT requests.
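To make the request-cost argument concrete, here is a rough back-of-the-envelope sketch. The per-request price, sensor count, and batching interval are illustrative assumptions, not figures from the question or current AWS pricing.

```python
# Rough illustration of how batching cuts PUT request counts.
# Assumed price: $0.005 per 1,000 PUT requests (illustrative only).
PUT_PRICE_PER_1000 = 0.005

sensors = 300
files_per_sensor_per_day = 60 * 24   # one small file per minute

# Uploading each file individually: one PUT per file.
individual_puts = sensors * files_per_sensor_per_day
# Batching each sensor's files into one object per hour instead.
batched_puts = sensors * 24

def daily_put_cost(requests: int) -> float:
    return requests / 1000 * PUT_PRICE_PER_1000

print(f"individual: {individual_puts} PUTs, ${daily_put_cost(individual_puts):.2f}/day")
print(f"batched:    {batched_puts} PUTs, ${daily_put_cost(batched_puts):.2f}/day")
```

The stored bytes are identical in both cases; only the request count, and therefore the request charge, changes.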
Ask Bash
What are PUT requests in Amazon S3?
How does batching files reduce costs in Amazon S3?
What is S3 Intelligent-Tiering and why is it not the best option here?
A company stores large volumes of customer information and is looking for a service to automatically classify and secure sensitive data within their object storage. Which service should be utilized for continuous protection against data loss and to ensure compliance with data privacy regulations?
- Amazon Macie 
- Amazon GuardDuty 
- AWS Shield 
- Amazon Cognito 
Answer Description
The correct service for this use case is Amazon Macie, which leverages machine learning to automatically discover, classify, and protect sensitive data stored in object storage. It is designed to provide continuous protection, enabling compliance with privacy regulations. This contrasts with other services mentioned: Amazon GuardDuty does not classify or provide protection for data in storage but instead focuses on threat detection and monitoring. Amazon Cognito is not designed for data monitoring but for managing identities and access. AWS Shield protects against DDoS attacks and is not tailored for sensitive data classification or protection in storage services.
Ask Bash
How does Amazon Macie classify and protect sensitive data?
What is the difference between Amazon Macie and Amazon GuardDuty?
How does Amazon Macie help with regulatory compliance?
A company operates under a multi-account strategy where one account is managed by the security engineers and another is operated by a separate team responsible for network administration. The security team needs to allow the network administration team's account access to a specific Amazon S3 bucket without broadening the access to other accounts. Which of the following is the MOST secure way to grant the required access?
- Set up a bucket policy that limits access to the S3 bucket based on the source IP range of the network administration team's office location. 
- Edit the S3 bucket's Access Control List (ACL) to include the user identifiers from the team handling network administration. 
- Implement a policy for individual users in the security engineers' account that grants permissions to the network administration team. 
- Attach a resource-based policy directly to the S3 bucket identifying the network administration team's account as the principal with the specified permissions. 
Answer Description
Attach a resource-based policy (bucket policy) to the S3 bucket that identifies the network administration team's AWS account as the principal and grants only the required permissions. A bucket policy is evaluated in the account that owns the resource and explicitly supports specifying an entire account in the Principal element, which cleanly limits access to that account.
IAM identity-based policies in the security engineers' account cannot by themselves grant principals from another account access to the bucket; a resource-based policy in the bucket owner's account is still required for cross-account access. Although legacy S3 ACLs can grant permissions to another AWS account via that account's canonical user ID, AWS now recommends disabling ACLs and using bucket policies for simpler management and finer-grained control. Restricting access by source IP address does not satisfy the requirement because any principal from any account could still reach the bucket if it originates from the allowed network range.
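A bucket policy of the kind described might look like the following sketch. The account ID and bucket name are placeholders, not values from the scenario.

```python
import json

# Sketch of a cross-account S3 bucket policy. The account ID
# (111122223333) and bucket name are placeholders.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowNetworkAdminAccount",
            "Effect": "Allow",
            # Granting to the whole account; that account's admins then
            # delegate access to their own users and roles via IAM.
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-shared-bucket/*",
        }
    ],
}

print(json.dumps(bucket_policy, indent=2))
```

Specifying the account root ARN as the principal scopes access to that one account; principals in the network administration account still need matching identity-based permissions before they can use the bucket.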
Ask Bash
What is a resource-based policy in AWS?
Why are bucket policies preferred over S3 ACLs (Access Control Lists)?
How does the Principal element work in an S3 bucket policy?
To boost the reliability of an aging, traditional software system, a managed solution is needed that pools and streamlines database access, particularly during periods of high utilization. Which service would be the most appropriate to implement?
- Amazon Simple Notification Service (SNS) 
- Amazon RDS Proxy 
- AWS Lambda 
- AWS Global Accelerator 
Answer Description
The correct service for the described scenario is Amazon RDS Proxy. It allows legacy applications to pool and share database connections, reducing the stress on the database's compute and memory resources when handling many simultaneous connections, which is particularly beneficial during traffic surges. Amazon RDS Proxy functions as a proxy layer between the application and the database layer, enhancing scalability and resilience. The other listed services do not offer proxy capabilities for relational databases, nor are they designed to handle database connections and related scalability in this context.
Ask Bash
What is Amazon RDS Proxy used for?
How does Amazon RDS Proxy improve scalability during traffic surges?
What types of applications benefit from Amazon RDS Proxy?
An application publishes events to an Amazon SNS topic. One subscription delivers notifications to an Amazon SQS queue and has a JSON filter policy configured. What happens when a published message does not match the subscription's filter policy?
- The message is delivered to the SQS queue regardless of its attributes. 
- The message is delivered, but SNS flags it as unmatched and deletes it after 24 hours. 
- The message is filtered out and not delivered to that subscription. 
- The SNS Publish API call fails with an error indicating an attribute mismatch. 
Answer Description
SNS evaluates each published message against the filter policy that is attached to the subscription. If the attributes (or payload fields) of the message satisfy the policy, the message is delivered to the subscribed SQS queue. If they do not, the message is filtered out and never delivered to that subscription. The publish call itself still succeeds, and other subscriptions without conflicting policies continue to receive the message.
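The matching behavior can be simulated with a tiny helper that mimics the simplest case of SNS attribute filtering: exact string matches against a list of allowed values. Real SNS filter policies also support prefix, numeric-range, and anything-but operators, which this sketch omits.

```python
def matches_filter(message_attributes: dict, filter_policy: dict) -> bool:
    """Return True if every policy key has an attribute value in its
    allowed list. Mimics only exact-match string filtering."""
    for key, allowed_values in filter_policy.items():
        if message_attributes.get(key) not in allowed_values:
            return False
    return True

policy = {"event_type": ["order_placed", "order_shipped"]}

# Matching message: delivered to the SQS subscription.
print(matches_filter({"event_type": "order_placed"}, policy))    # True
# Non-matching message: silently filtered out; Publish still succeeds.
print(matches_filter({"event_type": "order_cancelled"}, policy)) # False
```

The key point the sketch illustrates is that filtering is per subscription and silent: a non-match drops the message for that subscriber without any error to the publisher.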
Ask Bash
What is Amazon SNS?
What is a JSON Filter Policy in SNS?
What happens to messages that do not pass the filter policy?
Which Amazon S3 storage class offers the lowest cost for data that is accessed infrequently but must still be retrieved with millisecond latency when needed?
- Amazon S3 Standard-Infrequent Access (S3 Standard-IA) 
- Amazon S3 Glacier Deep Archive 
- Amazon S3 Standard 
- Amazon S3 Glacier Flexible Retrieval 
Answer Description
Amazon S3 Standard-Infrequent Access (S3 Standard-IA) is designed for long-lived data that is not accessed often yet still needs rapid (millisecond) retrieval. It delivers the same low-latency performance as S3 Standard, with lower per-GB storage pricing and a retrieval fee. Glacier Flexible Retrieval and Glacier Deep Archive are lower-cost archival classes but require minutes to hours for retrieval, while S3 Standard targets frequently accessed data and has higher storage cost than Standard-IA.
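A quick back-of-the-envelope comparison shows the trade-off between the two millisecond-latency classes. The per-GB prices below are illustrative placeholders, not current AWS pricing.

```python
# Illustrative monthly prices (placeholders, not real AWS quotes).
STANDARD_STORAGE_PER_GB = 0.023
IA_STORAGE_PER_GB = 0.0125
IA_RETRIEVAL_PER_GB = 0.01     # Standard-IA charges per GB retrieved

def monthly_cost_standard(gb_stored: float, gb_retrieved: float) -> float:
    return gb_stored * STANDARD_STORAGE_PER_GB  # no retrieval fee

def monthly_cost_ia(gb_stored: float, gb_retrieved: float) -> float:
    return gb_stored * IA_STORAGE_PER_GB + gb_retrieved * IA_RETRIEVAL_PER_GB

# 1 TB stored, only 5% read back each month: Standard-IA comes out cheaper.
print(round(monthly_cost_standard(1000, 50), 2))
print(round(monthly_cost_ia(1000, 50), 2))
```

With infrequent access the retrieval fee stays small, so Standard-IA's lower per-GB storage price dominates; with heavy access the comparison would flip in favor of S3 Standard.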
Ask Bash
What is Amazon S3 Standard-Infrequent Access (S3 Standard-IA)?
How does S3 Standard-IA differ from Amazon S3 Glacier?
Why does S3 Standard-IA have retrieval fees?
An architect must devise a solution that allows a fleet of ephemeral compute instances to efficiently open and maintain connections to a relational database during unpredictable traffic spikes. Which service should the architect employ to ensure scalable and resilient database connectivity?
- Deploy a fully managed database proxy service for connection pooling. 
- Increase the compute capacity of the database instance to handle more connections. 
- Create additional read replicas to distribute the load across multiple instances. 
- Initiate a Multi-AZ deployment strategy for the database to ensure connectivity. 
Answer Description
Using a fully managed database proxy service allows ephemeral compute instances, like AWS Lambda functions, to efficiently manage database connections. This service can absorb the variability in concurrent connections, smoothing out spikes in traffic and preserving backend database stability. It achieves this through connection pooling and multiplexing, which enhances application scalability and resilience without exhausting connections. While other options may improve availability (Multi-AZ) or read performance (Read Replica), they do not offer the specific benefits of a managed proxy service when it comes to connection pooling. Scaling the compute engine vertically would not directly solve the problem of efficiently managing a large number of connections and could lead to resource exhaustion under high traffic conditions.
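The pooling idea itself can be sketched generically: a bounded pool hands out reusable connections instead of opening a new one per caller. This is an illustrative toy, not RDS Proxy's actual implementation.

```python
import queue

class FakeConnection:
    """Stand-in for a real database connection."""
    opened = 0  # counts how many connections were ever created

    def __init__(self):
        FakeConnection.opened += 1

class ConnectionPool:
    def __init__(self, size: int):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(FakeConnection())

    def acquire(self) -> FakeConnection:
        return self._pool.get()   # blocks if the pool is exhausted

    def release(self, conn: FakeConnection) -> None:
        self._pool.put(conn)

pool = ConnectionPool(size=3)

# 100 "requests" share 3 connections instead of opening 100.
for _ in range(100):
    conn = pool.acquire()
    pool.release(conn)

print(FakeConnection.opened)  # 3
```

The backend only ever sees three connections no matter how many ephemeral callers arrive, which is exactly the pressure-smoothing effect the answer describes.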
Ask Bash
What is connection pooling in a managed database proxy?
How does a database proxy improve resilience during traffic spikes?
Why are solutions like Multi-AZ or read replicas insufficient for managing high connection counts?
An application running in an Amazon EC2 instance requires access to an Amazon S3 bucket to read and write data. What is the most secure method to grant the application the required permissions?
- Assign an IAM role to the EC2 instance with the required permissions. 
- Embed AWS access keys in the application source code. 
- Use environment variables to store AWS access keys on the instance. 
- Store AWS credentials in the instance in an encrypted file. 
Answer Description
Assigning an IAM role to the EC2 instance with the required permissions is the most secure method. IAM roles provide temporary security credentials that applications can use to access AWS services without the need to store credentials in the instance. The instance profile associated with the IAM role supplies these credentials securely through the instance metadata service.
Hardcoding AWS access keys or storing them in the instance, even in encrypted files or environment variables, poses a risk of credential exposure if the instance or the source code is compromised. IAM roles eliminate the need to manage long-term credentials, enhancing the overall security of the application.
Ask Bash
What is an IAM role in AWS?
What is the instance metadata service, and how does it work?
Why are hardcoded credentials considered insecure in AWS?
Considering a scenario where an e-commerce company wants to perform daily ETL (Extract, Transform, Load) jobs on their transactional data stored in Amazon RDS, which service should be used for a fully managed, scalable, and serverless data transformation solution?
- AWS Lambda 
- AWS Glue 
- AWS Batch 
- Amazon Kinesis Data Firehose 
Answer Description
AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. For applications requiring daily ETL jobs to process transactional data, especially from Amazon RDS, AWS Glue offers a serverless environment which scales automatically to meet the job’s processing demands. Unlike other services mentioned, AWS Glue is purpose-built for ETL operations, providing an array of tools and integration options for ETL purposes which make it the best choice among the options presented.
Ask Bash
What is AWS Glue used for?
How does AWS Glue compare to AWS Lambda for data processing?
What is the benefit of AWS Glue's serverless architecture?
Which AWS component acts as a virtual firewall for controlling traffic at the instance level within an Amazon VPC?
- Route Table 
- Network Access Control List (NACL) 
- Subnet CIDR 
- Security Group 
Answer Description
A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. Security groups operate at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC can be assigned to a different set of security groups. Network Access Control Lists (NACLs) operate at the subnet level, so they are not the correct answer. Route tables direct network traffic, but do not control or filter traffic like a firewall. Subnet CIDRs are used for IP address allocation within a VPC and have no filtering capabilities.
Ask Bash
What is the difference between a Security Group and a Network Access Control List (NACL)?
How does a Security Group's stateful nature affect traffic rules?
Can one instance in a VPC have multiple Security Groups?
Which Amazon S3 storage class is specifically optimized for data that is very rarely accessed, must be retained for many years to satisfy regulatory compliance requirements, and offers the lowest storage cost?
- Amazon S3 One Zone-IA 
- Amazon S3 Glacier Flexible Retrieval 
- Amazon S3 Intelligent-Tiering 
- Amazon S3 Glacier Deep Archive 
Answer Description
Amazon S3 Glacier Deep Archive is purpose-built for long-term archival and digital preservation. AWS positions it for customers in highly regulated industries that must keep data 7-10 years or longer for compliance. It provides the lowest cost of all S3 classes, with retrieval in hours, making it ideal when immediate access is not required.
- Amazon S3 Glacier Flexible Retrieval (formerly S3 Glacier) is also archival but costs more and targets use cases that may need retrieval in minutes.
- S3 Intelligent-Tiering automatically moves data among tiers but is not optimized for multi-year compliance archives.
- S3 One Zone-IA stores data in a single AZ and is unsuitable for long-term compliance retention because of its lower resilience.
Ask Bash
What are the key differences between Amazon S3 Glacier Deep Archive and S3 Glacier Flexible Retrieval?
What does 'long-term compliance retention' mean in the context of S3 Glacier Deep Archive?
Why is S3 One Zone-IA unsuitable for long-term compliance retention?
An e-commerce company needs to store a large number of product images and videos that customers will access frequently via their website and mobile app. The storage solution should be highly scalable, cost-effective, and accessible over HTTP(S). Which AWS storage service should the company use to meet these requirements?
- Store the files using Amazon EFS. 
- Store the data in Amazon S3. 
- Use Amazon EBS volumes for storage. 
- Use Amazon ElastiCache for storage. 
Answer Description
Amazon S3 (Simple Storage Service) is ideal for storing large amounts of static content like images and videos. It is highly scalable, cost-effective, and designed for web-scale computing, allowing direct access over HTTP(S). This makes it perfect for delivering content to customers via websites and mobile apps.
Amazon EFS (Elastic File System) is meant for file storage accessible by EC2 instances using NFS, not directly over the Internet. Amazon EBS (Elastic Block Store) provides block-level storage for EC2 instances and isn't suitable for serving files directly to users. Amazon ElastiCache is an in-memory data store used for caching, not for persistent storage of files accessible over HTTP(S).
Ask Bash
Why is Amazon S3 better suited than Amazon EFS for storing product images and videos?
What does it mean that Amazon S3 is 'highly scalable'?
How does AWS optimize costs in Amazon S3 for storing a large number of files?
An e-commerce company is expecting a significant spike in users accessing product images during an upcoming promotional event. They need a storage service that can serve these images with low latency at scale to enhance customer experience. Which of the following AWS services is the BEST choice to meet these requirements?
- Amazon EFS with provisioned throughput configured to serve files directly to users 
- Amazon Elastic File System (EFS) mounted on high-memory EC2 instances 
- Amazon Elastic Block Store (EBS) with Provisioned IOPS SSD (io1) volumes attached to EC2 instances serving the images 
- Amazon Simple Storage Service (S3) with an Amazon CloudFront distribution 
Answer Description
Amazon S3 with Amazon CloudFront is the best choice for serving content at scale with low latency. S3 provides durable storage and easy scalability for storing product images, while CloudFront, a content delivery network (CDN), caches the images close to the users at edge locations, thus reducing latency when accessing these assets during high traffic events. Amazon EBS is not designed for serving static content directly to users and does not integrate with a CDN. Amazon EFS is tailored for file system use cases and not optimized for delivering static content over the internet. Amazon EFS when connected to one or more EC2 instances could potentially handle large workloads, but it is not the most efficient choice when directly serving content at scale to many users compared to using Amazon S3 with CloudFront.
Ask Bash
What is Amazon S3 and how does it store data?
How does Amazon CloudFront reduce latency for end users?
Why is EBS or EFS not ideal for serving static content at scale compared to S3 and CloudFront?
Your client is managing a global application that serves various independent stakeholders, requiring strict data segregation. To maintain a robust security posture, what method should they implement to ensure each stakeholder can access only their designated information while adhering to best practices for secure architecture design?
- Create dedicated roles for each stakeholder with tailored policies enforcing exclusive access to their own sets of resources. 
- Federate an on-premises directory with roles to manage stakeholder access within the platform's environment. 
- Organize stakeholders into groups and manage their permissions collectively based on established group roles. 
- Implement broad policies that manage the access rights of stakeholders at the organizational level. 
Answer Description
Creating individual roles with precise permissions tailored to each stakeholder gives each one exclusive access to its own information, upholding the principle of least privilege. This approach prevents inter-stakeholder access and aligns with established guidelines for safeguarding cloud resources. Grouping stakeholders does not offer the granularity required for segregation in a multi-tenant model, and broad organizational policies are better suited to coarse constraints than to fine-grained resource distinctions. While federating a directory with role-based access is a viable security measure, it does not by itself address segregation of stakeholder data at the architecture level.
Ask Bash
What is the principle of least privilege?
Why are dedicated roles better for stakeholder segregation compared to groups?
How does AWS enforce role-based access control (RBAC)?
Which service facilitates the setup and governance of a multi-account environment by offering automated deployment of baseline environments and compliance auditing against standard best practices?
- Control Tower 
- Organizations 
- Security Hub 
- Identity and Access Management 
Answer Description
Control Tower is designed to manage multi-account environments by providing an automated way to set up a baseline environment with governance and compliance checks against best practices. While it incorporates elements of account management and identity services, it is distinct in providing automated orchestration for a secure and compliant multi-account setup. Organizations is primarily for account management and billing aggregation and does not automate the setup of a secure environment. Identity and Access Management is critical for defining users and permissions but does not govern multi-account structures. Security Hub focuses on compiling security findings and does not manage account setup or enforce compliance policies.
Ask Bash
What is AWS Control Tower and how does it facilitate multi-account setup?
How does Control Tower differ from AWS Organizations?
What are guardrails in AWS Control Tower, and why are they important?
A company is designing a web application that must route requests to different microservices based on values found in HTTP headers and URL paths. Which Elastic Load Balancing option provides this advanced request routing in the most cost-effective way without adding extra proxy instances?
- Application Load Balancer (ALB) 
- Network Load Balancer (NLB) 
- Classic Load Balancer (CLB) 
- Gateway Load Balancer (GWLB) 
Answer Description
An Application Load Balancer (ALB) operates at OSI Layer 7 and supports content-based routing rules that inspect host headers, paths, HTTP headers, query strings, and source IP addresses. A Network Load Balancer (NLB) operates at Layer 4 and a Gateway Load Balancer (GWLB) at Layer 3, so neither can inspect HTTP request content. A Classic Load Balancer offers limited Layer 7 features but lacks modern rule-based routing. Using an ALB therefore avoids the need to deploy and manage a custom proxy tier, making it the most cost-effective solution when advanced request routing is required.
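The kind of rule evaluation an ALB listener performs can be sketched as a simple first-match router over path and header conditions. This is greatly simplified relative to real ALB rules, which also support host conditions, query strings, priorities, and weighted target groups; the target names here are made up.

```python
# Toy first-match router over path prefixes and header values,
# loosely mirroring ALB listener rules. Target names are hypothetical.
rules = [
    {"path_prefix": "/api/orders", "target": "orders-service"},
    {"path_prefix": "/api/users", "target": "users-service"},
    {"header": ("X-Canary", "true"), "target": "canary-service"},
]
DEFAULT_TARGET = "web-frontend"  # like an ALB listener's default action

def route(path: str, headers: dict) -> str:
    for rule in rules:
        if "path_prefix" in rule and path.startswith(rule["path_prefix"]):
            return rule["target"]
        if "header" in rule:
            name, value = rule["header"]
            if headers.get(name) == value:
                return rule["target"]
    return DEFAULT_TARGET

print(route("/api/orders/42", {}))       # orders-service
print(route("/", {"X-Canary": "true"}))  # canary-service
print(route("/", {}))                    # web-frontend
```

Implementing even this trivial logic yourself would mean running and scaling a proxy fleet, which is the operational cost an ALB absorbs.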
Ask Bash
What are the OSI Layers and how does Layer 7 differ from other layers?
Why is an Application Load Balancer cost-effective compared to implementing custom proxies?
What are the main use cases for Network Load Balancers and Gateway Load Balancers?