AWS Certified Solutions Architect Associate Practice Test (SAA-C03)
Use the form below to configure your AWS Certified Solutions Architect Associate Practice Test (SAA-C03). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

AWS Certified Solutions Architect Associate SAA-C03 Information
AWS Certified Solutions Architect - Associate showcases knowledge and skills in AWS technology across a wide range of AWS services. The certification focuses on designing cost- and performance-optimized solutions and demonstrating a strong understanding of the AWS Well-Architected Framework. It can enhance the career profile and earnings of certified individuals and increase their credibility and confidence in stakeholder and customer interactions.
The AWS Certified Solutions Architect - Associate (SAA-C03) exam is intended for individuals who perform a solutions architect role. The exam validates a candidate’s ability to design solutions based on the AWS Well-Architected Framework.
The exam also validates a candidate’s ability to complete the following tasks:
- Design solutions that incorporate AWS services to meet current business requirements and future projected needs
- Design architectures that are secure, resilient, high-performing, and cost optimized
- Review existing solutions and determine improvements
Free AWS Certified Solutions Architect Associate SAA-C03 Practice Test
Press start when you are ready, or press Change to modify any settings for the practice test.
- Questions: 15
- Time: Unlimited
- Included Topics: Design Secure Architectures, Design Resilient Architectures, Design High-Performing Architectures, Design Cost-Optimized Architectures
Which service is designed to establish a private connection between the cloud environment and specific applications, helping minimize data transfer costs by avoiding the public internet?
AWS Direct Connect
Internet Gateway
VPC Peering
NAT Gateway
AWS PrivateLink
Answer Description
AWS PrivateLink establishes private connections between the cloud environment's internal network and specific services or applications. It is cost-effective because traffic stays on the Amazon network, avoiding the public internet and the data transfer charges that can come with it. Although AWS Direct Connect helps reduce costs for on-premises-to-cloud data transfer, it is not the primary service for connecting to individual applications within the cloud environment. VPC Peering interconnects entire networks but does not address application-specific connectivity. Internet Gateways and NAT Gateways facilitate public internet access, which runs counter to the goal of minimizing costs by avoiding public routes.
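For illustration, here is a minimal boto3 sketch of creating an interface VPC endpoint, the mechanism AWS PrivateLink uses to reach a specific service privately. The VPC, subnet, security group, and service name shown are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an interface endpoint (AWS PrivateLink) to a specific service.
# The VPC, subnet, security group, and service name below are placeholders.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.execute-api",  # could also be a partner or self-hosted endpoint service
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,  # resolve the service's DNS name to private IPs inside the VPC
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```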
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is AWS PrivateLink and how does it work?
How does AWS Direct Connect differ from AWS PrivateLink?
What are the benefits of using AWS PrivateLink compared to traditional internet routing?
An e-commerce platform built with microservices experiences sudden traffic spikes during flash-sale campaigns. The order-ingestion service must hand off each order message for downstream processing with these requirements:
- Every order message must be processed at least once; duplicate processing is acceptable.
- Producers and consumers must scale independently to handle unpredictable surges without message loss.
- The solution should minimize operational overhead and keep services loosely coupled.
Which AWS service best meets these requirements?
Amazon EventBridge event bus
Amazon Kinesis Data Streams
Amazon Simple Queue Service (SQS)
AWS Step Functions
Answer Description
Amazon Simple Queue Service (SQS) is designed for decoupling producers and consumers with a fully managed message queue. Standard queues provide at-least-once delivery and automatically scale to virtually any throughput, allowing the microservices to scale independently.
Amazon Kinesis Data Streams is optimized for real-time analytics of large, ordered data streams and requires shard management; it is more complex than needed for simple message hand-off and may lose data if consumers fall behind the stream's retention period.
Amazon EventBridge offers at-least-once event delivery but is optimized for routing events to multiple targets and has soft throughput quotas that can throttle extreme burst traffic.
AWS Step Functions orchestrates stateful workflows rather than providing a high-throughput message buffer between microservices.
Therefore, SQS is the most appropriate choice.
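As a sketch of the hand-off pattern described above, the following boto3 snippet shows a producer sending an order message to a standard queue and a consumer polling, processing, and deleting it; the queue name and message body are hypothetical.

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# Producer side: create (or look up) a standard queue and send an order message.
queue_url = sqs.create_queue(QueueName="order-ingestion")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody='{"orderId": "1234", "total": 59.90}')

# Consumer side: poll for messages, process them, then delete them.
# Standard queues deliver at least once, so processing must be idempotent.
messages = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
).get("Messages", [])
for msg in messages:
    print("processing", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```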
Ask Bash
What is Amazon Simple Queue Service (SQS) and how does it work?
What does 'at least once' processing mean in the context of message queuing?
What are the benefits of decoupling components in microservices architecture?
An international financial organization must ensure that its highly transactional application can withstand the outage of a data center without any service interruption. Furthermore, the application should incur minimal latency for users in Europe, North America, and Asia. Considering cost-effectiveness and operational complexity, which deployment approach BEST meets these requirements?
Establish the application in multiple AWS Regions each located near Europe, North America, and Asia, with an Amazon Route 53 latency-based routing policy.
Implement a global database cluster with cross-region read replicas to ensure the application's relational data remains available and experiences low latency accesses.
Utilize one AWS Region to host the primary instance and establish cross-region read replicas in regions closest to Europe, North America, and Asia.
Deploy the application into a single AWS Region and distribute it across multiple Availability Zones, leveraging Amazon Route 53 health checks for failover.
Answer Description
The correct approach is to establish the application in multiple AWS Regions near the primary user bases, using an Amazon Route 53 latency-based routing policy. This architecture directly addresses the two main requirements:
- Global Low Latency: By deploying the application's resources (compute and database) in regions close to users in Europe, North America, and Asia, and using latency-based routing, user requests are sent to the region with the lowest network latency.
- High Availability: A multi-region deployment ensures that the application can withstand a complete regional outage, which far exceeds the requirement of surviving a single data center (Availability Zone) failure.
The other options are incorrect for the following reasons:
- Deploying in a single region across multiple Availability Zones provides high availability for failures within that region but fails to provide low latency for a global user base.
- Implementing only a global database cluster or using cross-region read replicas is an incomplete solution. These options address only the data layer; a complete solution requires deploying the application and compute resources in multiple regions as well. Furthermore, for a 'highly transactional' application, read replicas in other regions do not solve for write latency, because writes must still travel to the primary database instance.
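For illustration, a minimal boto3 sketch of the latency-based routing records behind the correct option: one record per regional deployment, and Route 53 answers each query with the region that has the lowest measured latency. The hosted zone ID, domain name, and regional load-balancer targets are hypothetical placeholders.

```python
import boto3

route53 = boto3.client("route53")

# One latency record per regional deployment; Route 53 returns the record
# whose Region has the lowest measured latency to the requester.
# Hosted zone ID, domain name, and targets below are placeholders.
records = [
    ("eu-west-1", "eu-alb.example.com"),
    ("us-east-1", "us-alb.example.com"),
    ("ap-northeast-1", "ap-alb.example.com"),
]
changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "CNAME",
            "SetIdentifier": f"app-{region}",
            "Region": region,  # enables latency-based routing
            "TTL": 60,
            "ResourceRecords": [{"Value": target}],
        },
    }
    for region, target in records
]
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",
    ChangeBatch={"Changes": changes},
)
```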
Ask Bash
What is Amazon Route 53 and how does it work?
What are the benefits of deploying applications across multiple AWS Regions?
What is the difference between Availability Zones and Regions in AWS?
A financial-services company stores sensitive transaction records in Amazon S3 using server-side encryption. The records must remain encrypted at rest, and the security team needs centralized control over the encryption keys, including the ability to enforce periodic, automated key rotation. Which configuration best meets these requirements?
Create AWS customer-managed keys and enable automatic key rotation (default 365 days).
Use AWS managed keys provided by the service, which rotate automatically every year.
Allow application developers to generate and replace encryption keys manually on a regular basis.
Create AWS customer-managed keys and run a scheduled script (for example, a Lambda function) to import new key material periodically.
Answer Description
AWS customer-managed KMS keys give the security team full control over key policies, aliases, and rotation settings. Automatic rotation can be enabled (every 365 days by default, or a custom period from 90 to 2,560 days), so the keys are refreshed on a schedule without manual effort. AWS-managed keys also rotate automatically every year, but the organization cannot change their key policies, attach custom policies, or adjust the rotation schedule; therefore they do not satisfy the requirement for centralized key management. Manually rotating keys through scripts or developer processes introduces operational overhead and potential error, and does not leverage KMS's built-in automation.
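A minimal boto3 sketch of the chosen configuration is shown below: create a customer-managed key and turn on automatic rotation. The custom RotationPeriodInDays argument assumes a recent SDK/API version; omitting it accepts the 365-day default.

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Create a customer-managed symmetric key for S3 server-side encryption (SSE-KMS).
key = kms.create_key(
    Description="Transaction-records encryption key",
    KeySpec="SYMMETRIC_DEFAULT",
    KeyUsage="ENCRYPT_DECRYPT",
)
key_id = key["KeyMetadata"]["KeyId"]

# Enable automatic rotation. RotationPeriodInDays (90-2560) requires a recent
# SDK/API version; leave it out to use the 365-day default.
kms.enable_key_rotation(KeyId=key_id, RotationPeriodInDays=365)
print(kms.get_key_rotation_status(KeyId=key_id))
```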
Ask Bash
What are customer managed keys and how do they differ from managed service keys?
What does automated key rotation involve and why is it important?
How can organizations monitor and manage access to customer managed keys?
An organization needs to ensure that its compute instances, which handle sensitive data in an isolated environment, have the ability to securely access object storage without the data traveling over the internet. Which configuration aligns with these stringent security requirements?
Install a NAT device in the isolated environment to route traffic to the object storage.
Allocate public IP addresses to the compute instances for internet access to the object storage.
Set up a VPN connection from the compute instances to the object storage service.
Provision a service-specific gateway within the isolated environment for direct object storage access.
Answer Description
Implementing a gateway that directly connects the isolated environment to the object storage service over private networking ensures that the data does not traverse the public internet, thereby maintaining a high security posture. This gateway, referred to as a service-specific (gateway) endpoint, allows private communication between the network where the instances are hosted and the object storage service. Public IP addresses, NAT devices, and VPN connections would not satisfy the condition of keeping all traffic within the private network infrastructure.
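For illustration, a boto3 sketch of creating a gateway VPC endpoint for Amazon S3; the VPC and route table IDs are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoints for S3 attach to route tables, so instance traffic to S3
# stays on the AWS network instead of crossing the public internet.
# The VPC and route table IDs below are placeholders.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```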
Ask Bash
What is a service-specific gateway in AWS?
Why is it important to keep data from traveling over the internet?
What are the limitations of using public IP addresses for sensitive data access?
A security engineer is designing permissions for a mission-critical Amazon S3 bucket that resides in the production AWS account (111111111111). The engineer must guarantee that no IAM principals (users or roles) from any other AWS account, including the company's dev account (222222222222), can delete objects from this bucket. The solution must continue to allow valid delete operations that originate from principals in the production account. Which approach meets these requirements MOST effectively?
Attach a bucket policy to the S3 bucket that includes an explicit Deny for the actions s3:DeleteObject and s3:DeleteObjectVersion with Principal "*" and a Condition that aws:PrincipalAccount is not equal to "111111111111".
Enable S3 Block Public Access on the bucket.
Apply an IAM identity-based policy in the dev account that denies s3:DeleteObject against the production bucket.
Use an S3 access control list (ACL) that grants FULL_CONTROL permission to the bucket owner.
Answer Description
A bucket policy is a resource-based policy evaluated in the bucket's account. Attaching a bucket policy with an explicit Deny for the delete APIs when the request comes from any account other than 111111111111 blocks every cross-account principal while allowing in-account operations. Identity-based policies in other accounts cannot override permissions in the production account. S3 Block Public Access affects public (anonymous) access, not authenticated cross-account principals. ACLs can only grant permissions; they cannot explicitly deny or scope by account, so they do not meet the requirement.
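A sketch of the deny bucket policy described above, applied with boto3; the bucket name is a hypothetical placeholder.

```python
import json

import boto3

s3 = boto3.client("s3")

# Explicitly deny object deletion for any principal whose account is not the
# production account (111111111111). The bucket name is a placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyCrossAccountDeletes",
            "Effect": "Deny",
            "Principal": "*",
            "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
            "Resource": "arn:aws:s3:::prod-transactions-bucket/*",
            "Condition": {
                "StringNotEquals": {"aws:PrincipalAccount": "111111111111"}
            },
        }
    ],
}
s3.put_bucket_policy(Bucket="prod-transactions-bucket", Policy=json.dumps(policy))
```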
Ask Bash
What is a resource-based policy?
How do S3 bucket policies interact with IAM roles and users?
What are the benefits of using resource-based policies for cross-account access?
Your client operates a multi-department organization and requires precise tracking of cloud infrastructure expenditure to appropriately charge each internal group. What feature should they apply to ensure expenses are attributed correctly for each department's usage?
Merge all departmental accounts into a single payment entity for streamlined billing
Apply resource labeling with key-value pairs customized to each department
Negotiate reduced pricing for extended commitment from each department
Configure spend monitoring tools to send alerts when each department's budget threshold is met
Answer Description
Cost allocation tags enable detailed tracking of cloud resource costs by tagging resources with key-value pairs, such as the owning department's name. This allows costs to be attributed to each department based on its actual resource use, supporting accurate internal chargebacks and financial governance. Tags also make it easier to organize and visualize spending in cost management dashboards and reports. AWS Budgets is for setting spending limits and forecasting rather than attributing costs to specific entities. Consolidated (multi-account) billing combines payment across accounts but does not attribute costs to departments within an account. Savings Plans offer discounted prices for committed usage and do not attribute costs to individual users or departments.
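As a sketch, the snippet below tags a resource with a department key-value pair and then activates that tag key for cost allocation via the Cost Explorer API (activation can also be done in the Billing console). The instance ID and tag values are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ce = boto3.client("ce")

# Tag a resource with its owning department (instance ID is a placeholder).
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[{"Key": "Department", "Value": "Marketing"}],
)

# Activate the tag key as a cost allocation tag so it appears in Cost Explorer
# and the Cost and Usage Report.
ce.update_cost_allocation_tags_status(
    CostAllocationTagsStatus=[{"TagKey": "Department", "Status": "Active"}]
)
```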
Ask Bash
What are cost allocation tags in AWS?
How do I implement resource labeling in AWS?
What are the benefits of using cost allocation tags?
A digital education provider wants to enhance its platform by converting lecture transcriptions into structured data so the platform can automatically categorize content and generate sentiment metrics that reflect the lecturers' tone. The solution must automatically discover relevant topics discussed in each lecture and analyze the overall sentiment. Which AWS managed service should be used to process and analyze the transcribed text?
AWS Elemental MediaConvert
Amazon Translate
Amazon Polly
Amazon Comprehend
Answer Description
Amazon Comprehend is a fully managed natural language processing (NLP) service that can discover topics in a collection of documents (topic modeling) and perform real-time sentiment analysis, key-phrase extraction, and other NLP tasks without requiring any machine-learning expertise. It converts unstructured text such as lecture transcripts into structured data for downstream analytics. Amazon Polly (text-to-speech), AWS Elemental MediaConvert (media transcoding), and Amazon Translate (machine translation) do not provide topic discovery or sentiment analysis capabilities on text.
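For illustration, a boto3 sketch of the two capabilities the answer relies on: real-time sentiment analysis on a transcript excerpt and an asynchronous topic-modeling job over transcripts stored in S3. The S3 paths, role ARN, and job name are hypothetical placeholders.

```python
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

# Real-time sentiment for a single transcript excerpt.
sentiment = comprehend.detect_sentiment(
    Text="Today we cover eigenvalues, and honestly this is my favorite lecture.",
    LanguageCode="en",
)
print(sentiment["Sentiment"], sentiment["SentimentScore"])

# Asynchronous topic modeling across a collection of transcripts in S3.
# The bucket paths and IAM role ARN below are placeholders.
comprehend.start_topics_detection_job(
    InputDataConfig={"S3Uri": "s3://lecture-transcripts/input/", "InputFormat": "ONE_DOC_PER_FILE"},
    OutputDataConfig={"S3Uri": "s3://lecture-transcripts/topics-output/"},
    DataAccessRoleArn="arn:aws:iam::123456789012:role/ComprehendDataAccessRole",
    NumberOfTopics=10,
    JobName="lecture-topics",
)
```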
Ask Bash
What is Natural Language Processing (NLP)?
What is sentiment analysis?
How does Amazon Comprehend handle unstructured data?
A company is containerizing an internal API that normally runs at low traffic but occasionally experiences unpredictable, short-lived traffic spikes. The operations team does not want to manage or patch any Amazon EC2 instances and seeks the most cost-efficient compute option.
Which solution meets these requirements?
Run the containers on an Amazon EC2 Auto Scaling group using On-Demand Instances.
Deploy the application on Amazon Elastic Container Service (ECS) using EC2 Spot Instances.
Deploy the application on Amazon Elastic Kubernetes Service (EKS) with self-managed EC2 worker nodes purchased as Reserved Instances.
Deploy the application on Amazon ECS using the AWS Fargate launch type.
Answer Description
With AWS Fargate you specify only the vCPU and memory each task needs and are billed per second for those resources. There are no EC2 instances to manage, so you avoid paying for idle cluster capacity. This makes AWS Fargate the most cost-effective choice for highly variable, bursty container workloads. EC2-based options (whether On-Demand, Spot, or Reserved) require running EC2 instances that may sit idle between spikes, and EKS on self-managed nodes carries similar instance costs and management overhead.
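For illustration, a boto3 sketch of a Fargate task definition that declares only CPU and memory; the family name, container image, and execution role ARN are hypothetical placeholders.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Fargate tasks declare only CPU and memory; there are no EC2 instances to manage.
# The family name, container image, and execution role ARN are placeholders.
ecs.register_task_definition(
    family="internal-api",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",   # required for Fargate
    cpu="256",              # 0.25 vCPU
    memory="512",           # 512 MiB
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "api",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/internal-api:latest",
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
            "essential": True,
        }
    ],
)
```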
Ask Bash
What is AWS Fargate?
How does Fargate pricing work?
What are the differences between AWS Fargate and Amazon EC2?
A company stores large amounts of data in S3 buckets. They are concerned about protecting sensitive information and want to automatically discover and classify personally identifiable information (PII) within their data. Which service should they use to accomplish this?
GuardDuty.
Shield.
WAF.
Macie.
Answer Description
Macie is designed to automatically discover, classify, and protect sensitive data stored in AWS, particularly in S3 buckets. It uses machine learning and pattern matching to recognize PII such as names, addresses, and credit card numbers, making it the best choice for identifying and protecting PII in the company's data. GuardDuty is a threat detection service that monitors for malicious activity and unauthorized behavior but does not provide data classification capabilities. WAF (Web Application Firewall) helps protect web applications from common web exploits but is not used for data discovery or classification. Shield provides protection against Distributed Denial of Service (DDoS) attacks but does not offer data classification or PII detection.
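A boto3 sketch of enabling Macie and starting a one-time classification (sensitive-data discovery) job over a bucket; the account ID, bucket name, and job name are hypothetical placeholders.

```python
import boto3

macie = boto3.client("macie2", region_name="us-east-1")

# Enable Macie in the account (raises an error if it is already enabled).
macie.enable_macie()

# One-time sensitive-data discovery job over a bucket; Macie's managed data
# identifiers detect common PII such as names and credit card numbers.
# The account ID and bucket name below are placeholders.
macie.create_classification_job(
    name="pii-scan-transactions",
    jobType="ONE_TIME",
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "123456789012", "buckets": ["transaction-data-bucket"]}
        ]
    },
)
```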
Ask Bash
What is Macie and how does it work?
What types of personally identifiable information (PII) can Macie identify?
How does Macie differ from other AWS security services like GuardDuty?
A startup company wants to receive alerts when their monthly cloud expenses approach a predefined limit to prevent unexpected charges. Which service should they use to achieve this?
AWS Budgets.
AWS Cost Explorer.
AWS Cost and Usage Report.
AWS Cost Anomaly Detection.
Answer Description
AWS Budgets allows users to set custom cost and usage budgets and receive alerts when costs or usage exceed (or are forecasted to exceed) those budgets. This helps in proactive cost management by notifying users before unexpected charges occur. AWS Cost Explorer helps visualize and analyze costs over time but does not provide alerting on budget thresholds. AWS Cost Anomaly Detection monitors spending patterns and alerts on unusual cost spikes but does not allow setting predefined budget limits. AWS Cost and Usage Report provides detailed reports on usage and costs but lacks budgeting alerts.
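For illustration, a boto3 sketch of a monthly cost budget with an email alert when actual spend crosses 80% of the limit; the account ID, budget amount, and email address are hypothetical placeholders.

```python
import boto3

budgets = boto3.client("budgets")

# Monthly cost budget with an alert at 80% of the $500 limit.
# The account ID and notification email are placeholders.
budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "monthly-cloud-spend",
        "BudgetLimit": {"Amount": "500", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "finops@example.com"}
            ],
        }
    ],
)
```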
Ask Bash
What types of budgets can I set with AWS Budgets?
How do I set up alerts in AWS Budgets?
What is the difference between AWS Budgets and AWS Cost Explorer?
An e-commerce company wants to process images uploaded by users by performing tasks such as resizing and format conversion. They wish to minimize infrastructure management and pay for compute time when their code is executing. Which computing approach best meets their requirements?
Deploying the application in virtual servers with auto-scaling.
Running the application in containers managed by a container orchestration service.
Using a serverless compute service that runs code in response to events.
Managing the application in dedicated virtual machines.
Answer Description
Using a serverless compute service such as AWS Lambda that runs code in response to events is the best fit for these requirements. Serverless computing allows developers to focus on writing code without managing servers, and charges are incurred when the code is running. This is ideal for intermittent tasks like image processing triggered by user uploads. Virtual servers with auto-scaling and container orchestration platforms still require infrastructure management and incur costs even when idle. Managing applications in dedicated virtual machines involves the most overhead and does not align with minimizing management and paying for execution time.
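As a sketch of this pattern, the Lambda handler below reacts to S3 upload events and resizes images with Pillow; Pillow would need to be packaged with the function (for example, as a layer), and the output bucket name is a hypothetical placeholder.

```python
import io

import boto3
from PIL import Image  # Pillow must be packaged with the function or provided via a layer

s3 = boto3.client("s3")
OUTPUT_BUCKET = "resized-images-bucket"  # placeholder


def handler(event, context):
    # Triggered by an S3 "ObjectCreated" event for each uploaded image.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        image = Image.open(io.BytesIO(original))
        image.thumbnail((512, 512))  # resize in place, preserving aspect ratio

        buffer = io.BytesIO()
        image.convert("RGB").save(buffer, format="JPEG")  # format conversion
        buffer.seek(0)
        s3.put_object(Bucket=OUTPUT_BUCKET, Key=f"resized/{key}.jpg", Body=buffer)
```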
Ask Bash
What is serverless computing?
How does AWS Lambda work?
What are the advantages of using serverless architectures for image processing?
Which statement best describes Amazon S3 when compared with relational database storage?
It is a fully managed relational database service that supports engines such as MySQL and PostgreSQL out of the box.
It stores data as objects in buckets without requiring a predefined schema, so it is not suitable as the primary storage for relational database workloads.
It provides block-level storage volumes that must be attached to EC2 instances and formatted with a file system.
It stores data in a structured table format with a rigid schema defined in advance, making it ideal for relational queries.
Answer Description
Amazon S3 is an object storage service. Objects are stored in buckets with a key and optional metadata, and there is no requirement to define tables, columns, or other rigid relational structures before storing data. Because of this schemaless design, S3 is not used as the primary storage engine for traditional relational database workloads that rely on fixed schemas and SQL relationships. Relational data is better hosted on services such as Amazon RDS or Amazon Aurora, while S3 remains ideal for unstructured or semistructured data, backups, and data-lake use cases.
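For illustration, a small boto3 sketch of the object model: data is written as a keyed object with optional metadata, with no table or schema defined in advance. The bucket name and keys are hypothetical placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Objects are addressed by bucket + key and can carry arbitrary metadata;
# no table, column, or schema definition is needed beforehand.
# The bucket name is a placeholder.
s3.put_object(
    Bucket="example-data-lake",
    Key="reports/2024/q1-summary.json",
    Body=b'{"revenue": 125000, "region": "EMEA"}',
    Metadata={"department": "finance"},
)

obj = s3.get_object(Bucket="example-data-lake", Key="reports/2024/q1-summary.json")
print(obj["Metadata"], obj["Body"].read())
```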
Ask Bash
What is Amazon S3 and how does it work?
What are the advantages of using Amazon S3 over traditional databases?
What is the difference between Amazon S3 and relational databases?
Your client hosts their multimedia files on Amazon S3 and observes that these files are frequently accessed for up to 60 days after uploading. After 60 days, the access patterns decline sharply, but the client requires the files to be available for occasional access for at least one year. Which lifecycle policy should be applied to meet the client's need for cost optimization while maintaining file availability?
Transition objects to S3 Standard-Infrequent Access after 60 days and to S3 Glacier Flexible Retrieval after one year
Transition objects directly to S3 Glacier Flexible Retrieval after 60 days
Transition objects to S3 One Zone-Infrequent Access after 60 days
Keep the objects stored in S3 Standard without transitioning them to other storage classes
Answer Description
Configure an S3 Lifecycle rule that transitions the objects to S3 Standard-Infrequent Access (Standard-IA) 60 days after upload. Standard-IA offers the same millisecond latency and 11-nines durability as S3 Standard at a lower storage cost, with a 30-day minimum storage duration that this workload already satisfies. Add a second transition to move the objects to S3 Glacier Flexible Retrieval (formerly "S3 Glacier") once they are 365 days old. Glacier Flexible Retrieval provides a low-cost archive tier that still allows retrieval in minutes to hours, which is adequate for the client's occasional access needs. S3 One Zone-IA is not chosen because, although it matches Standard-IA's durability, it stores data in a single Availability Zone and offers lower availability (99.5%), making it less suitable for a primary copy that must remain available. Leaving the data in S3 Standard would cost more than necessary, and moving the objects directly to Glacier at 60 days would slow the occasional access the client still requires.
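A boto3 sketch of the two-step lifecycle rule described above; the bucket name is a hypothetical placeholder, and the API still uses the storage-class value GLACIER for Glacier Flexible Retrieval.

```python
import boto3

s3 = boto3.client("s3")

# Transition to Standard-IA at 60 days, then to Glacier Flexible Retrieval
# (storage class value "GLACIER") at 365 days. The bucket name is a placeholder.
s3.put_bucket_lifecycle_configuration(
    Bucket="multimedia-assets-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-multimedia",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [
                    {"Days": 60, "StorageClass": "STANDARD_IA"},
                    {"Days": 365, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```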
Ask Bash
What are the characteristics of S3 Standard-IA and Glacier classes?
How do lifecycle policies work in Amazon S3?
What is the difference between S3 Standard-IA and S3 One Zone-IA?
An online retailer is experiencing significant slowdowns during flash sales due to an increased number of customer queries to their product catalog database. To maintain customer experience, the retailer needs to implement a solution that improves query performance and can continue operations if an outage occurs in one of the data center facilities used by the retailer. Which strategy would provide the desired outcome?
Optimize the database by adding more indexes on frequently queried columns.
Upgrade to a larger database instance to provide more compute resources.
Enable cross-region replication for the existing database instance.
Implement read replicas across distinct data center facilities.
Answer Description
Implementing read replicas in distinct data center facilities (Availability Zones) spreads the load of customer queries across multiple sources and allows the application to fail over to another replica if an outage occurs. This is the most effective solution to address both read performance and availability concerns without changing the primary data source configuration. Adding indexes may improve query performance but will not address the potential for an outage. Upgrading to a larger database instance increases overall capacity but is a less targeted solution for read scalability and does not provide multi-facility resilience. Enabling cross-region replication adds geographical distribution but could introduce additional latency for reads and doesn't address the immediate need to spread the read load.
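For illustration, a boto3 sketch of adding a read replica in a different Availability Zone; the instance identifiers, instance class, and AZ are hypothetical placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a read replica of the catalog database in a different Availability Zone.
# Instance identifiers, instance class, and AZ below are placeholders.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="catalog-db-replica-1",
    SourceDBInstanceIdentifier="catalog-db-primary",
    AvailabilityZone="us-east-1b",
    DBInstanceClass="db.r6g.large",
)
# Point read-heavy catalog queries at the replica's endpoint; if the primary's
# AZ fails, a replica can be promoted with promote_read_replica().
```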
Ask Bash
What are read replicas and how do they work?
What are Availability Zones and why are they important for reliability?
What are the limitations of simply adding more indexes to a database?
Cool beans!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.