AWS Certified Solutions Architect Associate Practice Test (SAA-C03)
Use the form below to configure your AWS Certified Solutions Architect Associate Practice Test (SAA-C03). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

AWS Certified Solutions Architect Associate SAA-C03 Information
AWS Certified Solutions Architect - Associate showcases knowledge and skills in AWS technology across a wide range of AWS services. The focus of this certification is on the design of cost- and performance-optimized solutions, demonstrating a strong understanding of the AWS Well-Architected Framework. This certification can enhance the career profile and earnings of certified individuals and increase their credibility and confidence in stakeholder and customer interactions.
The AWS Certified Solutions Architect - Associate (SAA-C03) exam is intended for individuals who perform a solutions architect role. The exam validates a candidate’s ability to design solutions based on the AWS Well-Architected Framework.
The exam also validates a candidate’s ability to complete the following tasks:
- Design solutions that incorporate AWS services to meet current business requirements and future projected needs
- Design architectures that are secure, resilient, high-performing, and cost optimized
- Review existing solutions and determine improvements

Free AWS Certified Solutions Architect Associate SAA-C03 Practice Test
- 20 Questions
- Unlimited time
- Design Secure Architectures
- Design Resilient Architectures
- Design High-Performing Architectures
- Design Cost-Optimized Architectures
A company has a legacy application that generates large log files which are periodically analyzed for troubleshooting and performance tuning. The application is running on an EC2 instance and the analysis tool can only access files over NFS. The company wants a scalable and durable storage solution that can be accessed concurrently from multiple EC2 instances in the same Availability Zone. Which storage solution should the company implement?
Amazon FSx for Windows File Server
Amazon Elastic Block Store (Amazon EBS)
Amazon Simple Storage Service (Amazon S3)
Amazon Elastic File System (Amazon EFS)
Answer Description
Amazon EFS is the correct choice because it is a managed file storage service that can be shared between multiple EC2 instances and supports the NFS protocol, which is required by the application. This makes it ideal for concurrent access to shared file systems. Amazon S3, while highly durable and suited for object storage, does not support the NFS protocol natively and would require additional steps to mount as a file system, making it less appropriate for this use case. Amazon EBS is block storage which does not support file-sharing capabilities and typically can be mounted to a single instance. Amazon FSx for Windows File Server provides fully managed file storage but uses the SMB protocol, which is not compatible with the requirement for NFS.
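As a rough illustration of how this looks in practice, the following boto3 sketch creates an EFS file system and a mount target in the instances' Availability Zone (the token, subnet ID, and security group ID are placeholders, not values from the scenario):

```python
import boto3

efs = boto3.client("efs")

# Create the shared file system.
fs = efs.create_file_system(
    CreationToken="legacy-app-logs",      # idempotency token (hypothetical)
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# Expose it in the target Availability Zone so EC2 instances there can mount it.
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",       # placeholder subnet in the AZ
    SecurityGroups=["sg-0123456789abcdef0"],   # must allow NFS (TCP 2049)
)

# On each instance (shell, not Python), the file system is then mounted with
# the standard NFS client, e.g.:
#   sudo mount -t nfs4 <file-system-id>.efs.<region>.amazonaws.com:/ /mnt/logs
```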
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Amazon Elastic File System (Amazon EFS)?
How does Amazon EFS differ from Amazon Elastic Block Store (Amazon EBS)?
Why is Amazon EFS preferred over Amazon S3 for this use case?
A company has deployed an application across multiple Availability Zones that relies on Amazon DynamoDB. Recently, the application experienced a sudden, unpredictable spike in traffic and began to receive ProvisionedThroughputExceededException errors. The company wants to ensure that the application can automatically handle similar unexpected traffic spikes in the future without manual intervention or throttling.
What should the solutions architect do to meet these requirements?
Change the DynamoDB table to on-demand capacity mode so that it automatically scales to accommodate traffic spikes.
Request an AWS service quota increase for overall read and write capacity units for the account in the Region.
Configure Amazon CloudWatch alarms to notify operators when ConsumedReadCapacityUnits approaches the provisioned limit.
Manually increase the table's provisioned read and write capacity units to a higher value.
Answer Description
Switching the table to on-demand capacity mode is the recommended way to accommodate sudden, unpredictable spikes. In on-demand mode, DynamoDB automatically and instantly scales to any previously reached traffic level and doubles new peaks, eliminating the need for capacity planning and preventing throttling for most workloads. Manually increasing provisioned capacity or requesting a service-quota increase still requires forecasting and can be quickly outpaced by another spike. CloudWatch alarms provide visibility but do not solve the throttling problem themselves.
AWS documentation calls on-demand the default and recommended throughput option that "automatically scales to accommodate the most demanding workloads" and explicitly notes that auto scaling/manual scaling is "not recommended as a solution for dealing with spikey workloads."
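A minimal sketch of the change, assuming an existing table (the table name is a placeholder):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Switch the table from provisioned capacity to on-demand billing;
# PAY_PER_REQUEST is the API name for on-demand capacity mode.
dynamodb.update_table(
    TableName="Orders",             # placeholder table name
    BillingMode="PAY_PER_REQUEST",
)
```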
Ask Bash
What is DynamoDB on-demand capacity mode?
How does on-demand capacity mode handle sudden traffic spikes compared to provisioned mode?
When should I consider using provisioned mode instead of on-demand mode?
An enterprise needs to capture live event data that surges intermittently, leading to volume spikes at irregular intervals. Which service would provide the optimal solution to accommodate such unpredictable high-velocity data inflows?
Amazon Kinesis Data Streams
AWS Direct Connect
AWS DataSync
Amazon Simple Queue Service
Answer Description
Amazon Kinesis Data Streams is the optimal service for scenarios where data arrives at variable rates, particularly intermittent surges of high-velocity data. It is designed for real-time streaming of large volumes of data and, in on-demand capacity mode, scales automatically to match throughput requirements during peak times. AWS DataSync is suited to data transfer tasks, Amazon SQS is useful for message queuing but not for real-time streaming, and AWS Direct Connect is a connectivity solution that does not address the handling of variable-rate data inflows.
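A hedged sketch of a stream created in on-demand mode with a producer pushing records (the stream name and record contents are placeholders):

```python
import boto3
import json

kinesis = boto3.client("kinesis")

# Create a stream in on-demand capacity mode so shard capacity scales
# with the workload rather than being provisioned up front.
kinesis.create_stream(
    StreamName="live-events",
    StreamModeDetails={"StreamMode": "ON_DEMAND"},
)

# Producers push records; the partition key determines shard placement.
kinesis.put_record(
    StreamName="live-events",
    Data=json.dumps({"event": "page_view", "user": "u-123"}).encode(),
    PartitionKey="u-123",
)
```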
Ask Bash
What is Amazon Kinesis Data Streams used for?
How does Kinesis Data Streams scale to handle surges in data inflow?
How is Kinesis Data Streams different from Amazon SQS?
An e-commerce platform built with microservices experiences sudden traffic spikes during flash-sale campaigns. The order-ingestion service must hand off each order message for downstream processing with these requirements:
- Every order message must be processed at least once; duplicate processing is acceptable.
- Producers and consumers must scale independently to handle unpredictable surges without message loss.
- The solution should minimize operational overhead and keep services loosely coupled.
Which AWS service best meets these requirements?
Amazon Kinesis Data Streams
Amazon EventBridge event bus
Amazon Simple Queue Service (SQS)
AWS Step Functions
Answer Description
Amazon Simple Queue Service (SQS) is designed for decoupling producers and consumers with a fully managed message queue. Standard queues provide at-least-once delivery and automatically scale to virtually any throughput, allowing microservices to scale independently.
Amazon Kinesis Data Streams is optimized for real-time analytics of large, ordered data streams and requires shard management; it is more complex than needed for simple message hand-off and may lose data if consumers fall behind shard retention.
Amazon EventBridge offers at-least-once event delivery but is optimized for routing events to multiple targets and has soft throughput quotas that can throttle extreme burst traffic.
AWS Step Functions orchestrates stateful workflows rather than providing a high-throughput message buffer between microservices.
Therefore, SQS is the most appropriate choice.
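A minimal producer/consumer sketch against a standard queue (queue name and message body are placeholders):

```python
import boto3

sqs = boto3.client("sqs")

# Standard queue: at-least-once delivery, near-unlimited throughput.
queue_url = sqs.create_queue(QueueName="order-ingestion")["QueueUrl"]

# Producer side: hand off an order message for downstream processing.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"orderId": "o-42"}')

# Consumer side: long-poll, process, then delete. Because delivery is
# at-least-once, processing must be idempotent (duplicates are acceptable
# per the requirements).
resp = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
)
for msg in resp.get("Messages", []):
    # ... process the order here ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```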
Ask Bash
What is Amazon SQS, and why is it suited for decoupling microservices?
What makes Amazon Kinesis Data Streams unsuitable for this scenario?
How does SQS achieve at-least-once delivery, and why is it important?
In a scenario where a single cloud account is shared across several project teams within an organization, the financial division is tasked with distributing costs to each respective project with high granularity. What should they implement to segregate spending effectively and attribute expenditures to the correct teams?
Setting up individual user accounts for each project unit and attributing resource spending based on the user account creating the resources.
Deploying a distinct virtual server for every project team, isolating each team's resources and observing the individual server cost metrics.
Activating and assigning descriptive labels to resources, and then using these labels to filter cost details in the cost management portal.
Answer Description
The implementation and application of cost allocation tags to resources is a sound approach because they empower the company to assign descriptive labels to resources, which can be leveraged to segment and categorize spending. When generating cost analysis reports, these tags can be used to parse out expenses, yielding a granular view of resource consumption and financial overhead attributed to each project team, thus facilitating precise chargeback and financial governance.
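For illustration, a hedged sketch of tagging a resource (the instance ID and tag values are placeholders); after the tag keys are activated as cost allocation tags in the Billing console, they become filterable dimensions in the cost-management tooling:

```python
import boto3

ec2 = boto3.client("ec2")

# Apply Department/CostCenter tags to an EC2 instance so its spend can
# later be segmented per project team.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],   # placeholder instance ID
    Tags=[
        {"Key": "Department", "Value": "analytics"},
        {"Key": "CostCenter", "Value": "CC-1042"},
    ],
)
```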
Ask Bash
What are cost allocation tags in AWS?
How do cost allocation tags improve cost management in AWS?
What is the process to enable and apply cost allocation tags in AWS?
An emerging fintech startup requires a database solution for processing and storing large volumes of financial transaction records. Transactions must be quickly retrievable based on the transaction ID, and new records are ingested at a high velocity throughout the day. Consistency is important immediately after transaction write. The startup is looking to minimize costs while ensuring the database can scale to meet growing demand. Which AWS database service should the startup utilize?
Amazon DynamoDB with on-demand capacity
Amazon Neptune
Amazon DocumentDB
Amazon RDS with Provisioned IOPS
Answer Description
Amazon DynamoDB is the optimal solution for this use case as it provides a NoSQL database with the ability to scale automatically to accommodate high ingest rates of transaction records. It is designed for applications that require consistent, single-digit millisecond latency at any scale. Additionally, DynamoDB offers strongly consistent reads, ensuring that after a write, any subsequent read will reflect the change. In contrast, RDS is better suited for structured data requiring relational capabilities, Neptune is tailored for graph database use cases, and DocumentDB is optimized for JSON document storage, which, while capable of handling key-value pairs, is not as cost-effective or performant for this specific scenario as DynamoDB.
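A minimal sketch of the write-then-strongly-consistent-read pattern (table and attribute names are placeholders):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Write a transaction record keyed by transaction ID.
dynamodb.put_item(
    TableName="Transactions",
    Item={"TransactionId": {"S": "txn-001"}, "Amount": {"N": "99.95"}},
)

# ConsistentRead=True guarantees this read reflects the preceding write,
# at roughly double the read-capacity cost of an eventually consistent read.
item = dynamodb.get_item(
    TableName="Transactions",
    Key={"TransactionId": {"S": "txn-001"}},
    ConsistentRead=True,
)["Item"]
```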
Ask Bash
What does strong consistency mean in DynamoDB?
How does DynamoDB scale automatically to handle high ingest rates?
Why is DynamoDB more cost-effective for this use case compared to Amazon RDS?
An online retail company experiences variable traffic in their e-commerce platform, with significant spikes during holiday seasons. They need to ensure that their application can handle increased loads during peak times without over-provisioning resources during off-peak periods. Which compute option should they use to meet these requirements?
Use Amazon Lightsail to host the application for simplified management.
Deploy the application in a single large Amazon EC2 instance to handle peak traffic.
Run the application in AWS Lambda functions with provisioned concurrency.
Use Amazon EC2 instances with Auto Scaling groups to adjust capacity based on demand.
Answer Description
Using Amazon EC2 instances with Auto Scaling groups allows the company to automatically adjust the number of instances based on real-time demand. This approach ensures that sufficient compute resources are available during traffic spikes while scaling down during low-demand periods to optimize costs. Deploying the application in a single large EC2 instance lacks the flexibility to automatically scale up and down and could lead to performance bottlenecks during peak times. Running the application in AWS Lambda with provisioned concurrency is suitable for event-driven, stateless applications but may not be ideal for a stateful e-commerce platform. Utilizing Amazon Lightsail offers simplified management but does not provide the advanced scaling capabilities required for handling significant traffic variations.
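A hedged sketch of a dynamic scaling policy on an existing Auto Scaling group (the group name is a placeholder): target tracking adds or removes instances to hold average CPU utilization near the target.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target-tracking policy: scale out during traffic spikes and scale in
# during off-peak periods, with no manual intervention.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="ecommerce-web-asg",   # placeholder group name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,   # keep average CPU around 50%
    },
)
```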
Ask Bash
What is an Auto Scaling group in AWS?
How do Auto Scaling policies work in AWS?
Why might AWS Lambda with provisioned concurrency not be suitable for stateful applications?
A corporation is required to automate the identification and categorization of stored content to enforce varying preservation requirements. Which service should be utilized to facilitate the discovery and categorization process, enabling the enforcement of corresponding preservation policies?
A managed service for cryptographic keys
A cloud storage service's lifecycle management feature
A service for managing identities and permissions
Amazon Macie
Answer Description
The service best suited for automating the discovery and classification of sensitive content, allowing the enforcement of retention policies, is Amazon Macie. It leverages machine learning and pattern matching to assist in accurately and efficiently identifying and classifying different types of content, making it ideal for this requirement. While other services mentioned may be related to data management or protection, none offer the same specialization in automated data discovery and classification as Macie.
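A hedged sketch of enabling Macie and starting a one-time classification job over a bucket (the account ID, bucket name, and job name are placeholders):

```python
import boto3
import uuid

macie = boto3.client("macie2")

# Enable Macie in the account (no-op if already enabled).
macie.enable_macie(status="ENABLED")

# Run a one-time sensitive-data discovery job over the target bucket.
macie.create_classification_job(
    clientToken=str(uuid.uuid4()),   # idempotency token
    jobType="ONE_TIME",
    name="discover-sensitive-content",
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "111122223333", "buckets": ["corp-content-bucket"]}
        ]
    },
)
```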
Ask Bash
How does Amazon Macie use machine learning for content classification?
What types of sensitive data can Amazon Macie identify?
Can Amazon Macie integrate with other AWS services for enforcement policies?
You have been tasked with designing a cost-optimized architecture for a read-heavy application that relies on a relational database. The application requires increased read capacity during business hours, with minimal impact on the primary database's performance. Which AWS service feature should be implemented to best meet these requirements?
Implement read replicas in Amazon Relational Database Service (RDS)
Configure an Amazon RDS Multi-AZ deployment
Use Amazon ElastiCache to cache read queries
Apply a Data Lifecycle Policy on Amazon RDS
Answer Description
Implementing read replicas using Amazon RDS allows you to handle read-heavy database workloads by creating one or more read-only copies of your database. This approach offloads read queries from the primary database, thus enhancing performance and availability without impacting the primary database's performance. Although the other options might appear feasible, they either do not address the requirement (such as Data Lifecycle Policy, which is related to data retention rather than read capacity) or are less efficient for the needs stated (like Multi-AZ deployment, which primarily addresses high availability and failover rather than read scalability). Caching with Amazon ElastiCache could improve read performance, but for read-heavy relational database applications, read replicas are purpose-built to serve this requirement and are the most cost-effective solution.
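A minimal sketch of creating a read replica (instance identifiers and class are placeholders); the application then directs read-only queries at the replica's endpoint:

```python
import boto3

rds = boto3.client("rds")

# Create a read-only copy of the primary; replication is asynchronous,
# so reads are offloaded without impacting primary write performance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="appdb-read-1",        # placeholder replica name
    SourceDBInstanceIdentifier="appdb-primary", # placeholder primary name
    DBInstanceClass="db.r6g.large",             # can differ from the primary
)
```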
Ask Bash
What are Amazon RDS read replicas and how do they work?
How does Amazon RDS Multi-AZ differ from read replicas?
When should you use Amazon ElastiCache instead of RDS read replicas?
A Solutions Architect is tasked with securing a web application environment hosted in a private subnet on Amazon EC2 instances. These instances serve sensitive content and are placed behind a load distribution service offered by AWS. How can the Architect regulate the incoming network flow to guarantee that the web servers exclusively accept browser traffic that is encrypted?
Configure the load distribution service to listen only on port 443 but do not change the rules for the security group attached to the EC2 instances.
Install a software-based firewall on each EC2 instance to reject requests arriving on any port other than 443.
Apply stringent rules on the Network ACLs for the associated subnets allowing traffic only on port 443, while keeping the existing security group rules unchanged.
Modify the security group rules associated with the EC2 instances to allow ingress only on port 443, ensuring that the load distribution service listens for and passes through traffic on the same port.
Answer Description
By adjusting the security group attached to the EC2 instances to allow ingress only on port 443, the Architect ensures that only HTTPS traffic, which is encrypted, can reach the web servers. This does not disrupt high availability, since it places no constraints on how the load balancer distributes incoming requests, and it aligns with least-privilege security principles. Alternatives such as network ACL rules or instance-based packet filtering would not provide the same targeted control, or would add complexity without directly addressing the requirement that only encrypted traffic reach the servers.
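A hedged sketch of the rule change (both security group IDs are placeholders); restricting the source to the load balancer's security group, rather than 0.0.0.0/0, tightens this further:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow ingress to the web servers only on TCP 443, and only from the
# load balancer's security group rather than from the whole internet.
ec2.authorize_security_group_ingress(
    GroupId="sg-websrv0123456789",   # SG attached to the EC2 instances
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "UserIdGroupPairs": [{"GroupId": "sg-alb0123456789ab"}],  # the ALB's SG
    }],
)
```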
Ask Bash
Why is port 443 used for encrypted browser traffic?
What is the difference between Security Groups and Network ACLs in AWS?
How does a load balancer interact with EC2 instances in this setup?
Your organization is using a leading cloud provider's services for application development and hosting. You are tasked with ensuring the adherence to the shared responsibility model for security. Which of the following tasks falls within your organization's scope rather than the cloud provider's?
Updating the physical network devices that are part of the dedicated cloud infrastructure
Maintaining the physical hardware on which cloud services operate
Implementing encryption for client-side data before storage in object storage services
Ensuring the underlying software that manages virtualization is up-to-date with security patches
Answer Description
According to the shared responsibility model for security in the cloud, while the provider is responsible for the security of the cloud infrastructure (computing hardware, storage, and networking), the customer is responsible for securing the data processed and stored within the cloud environment. This extends to client-side data encryption, which is the customer's duty, and not the provider's.
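To make the customer-side duty concrete, here is a hedged sketch of client-side encryption before upload, using a KMS data key and the third-party `cryptography` package (key alias, bucket, and object key are placeholders):

```python
import os

import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # third-party package

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Generate a data key under a customer-managed KMS key.
key = kms.generate_data_key(KeyId="alias/app-data", KeySpec="AES_256")

# Encrypt locally (client side) before the data ever leaves the application.
nonce = os.urandom(12)
ciphertext = AESGCM(key["Plaintext"]).encrypt(nonce, b"sensitive payload", None)

# Upload only ciphertext, plus the KMS-wrapped data key for later decryption.
s3.put_object(
    Bucket="example-bucket",
    Key="records/record-001.bin",
    Body=nonce + ciphertext,
    Metadata={"x-enc-key": key["CiphertextBlob"].hex()},
)
```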
Ask Bash
What is the shared responsibility model in cloud computing?
Why is client-side data encryption important in the shared responsibility model?
What aspects of virtualization and network management does the cloud provider handle in the shared responsibility model?
A retail company wishes to perform exploratory analysis on user interaction logs that are aggregated consistently throughout the day. The data, while structured and collected in real-time, demands the flexibility of immediate query capabilities with a syntax similar to SQL. Considering the need for effortless scalability and zero administration, which service would be the optimal choice?
Managed data lake solution
Business intelligence service
Managed Hadoop framework
Amazon Athena
Answer Description
The optimal choice for immediate, ad-hoc exploratory analysis of structured log data is a serverless interactive query service that analyzes data directly in storage using familiar SQL syntax, requires no infrastructure to manage, and scales automatically. Amazon Athena meets all of these criteria. Alternatives such as a managed data lake solution or a business intelligence service are less suitable because they serve different stages of the data-analysis pipeline: the former handles data storage and security configuration, and the latter produces visualizations and business insights after analysis rather than querying raw data.
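A hedged sketch of an ad-hoc Athena query (database, table, and results bucket are placeholders):

```python
import boto3

athena = boto3.client("athena")

# Run standard SQL directly against log data in S3; Athena is serverless,
# so there is no cluster to size or manage.
resp = athena.start_query_execution(
    QueryString="""
        SELECT page, COUNT(*) AS views
        FROM interaction_logs
        WHERE event_date = DATE '2024-01-15'
        GROUP BY page
        ORDER BY views DESC
        LIMIT 10
    """,
    QueryExecutionContext={"Database": "weblogs"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

# Poll get_query_execution with this ID, then fetch rows via get_query_results.
query_id = resp["QueryExecutionId"]
```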
Ask Bash
How does Amazon Athena differ from a managed data lake solution?
Why is Amazon Athena a zero-administration service?
What type of data can Amazon Athena process and analyze?
A healthcare company stores patient information that includes sensitive records in Amazon S3. They are subject to strict compliance regulations and need an automated way to classify their data at scale and be alerted of any potential exposure risks. Which service should they implement for continuous analysis of their stored content and to receive automated security alerts in case of unsecured sensitive data?
Use Amazon Cognito to manage patient identity verification and to secure sensitive records.
Implement Amazon GuardDuty for continuous threat detection and data classification in S3.
Configure AWS Secrets Manager for rotating credentials and alerting on data exposure.
Adopt Amazon Macie for content analysis and automated alerts on insecure data storage.
Answer Description
Amazon Macie is the AWS service specifically crafted for the purpose of analyzing and securing content that resides within Amazon S3. It uses machine learning and pattern matching to automatically recognize sensitive information such as healthcare records. When it detects unsecured data or abnormal data access patterns, it triggers alerts. This fits the requirement of the healthcare company to keep its patient records secure according to compliance regulations. Amazon GuardDuty is a threat detection service that monitors malicious activities rather than classifying content. While AWS Secrets Manager secures and rotates secrets such as database credentials and API keys, it does not classify or monitor object content within S3. Lastly, Amazon Cognito focuses on user identity management and would not assist with the data classification or monitoring needs of the healthcare company.
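For the alerting half of the requirement, one common approach (sketched here under the assumption that Macie findings are published to EventBridge with the "Macie Finding" detail type; the topic ARN is a placeholder) is to route findings to an SNS topic:

```python
import json

import boto3

events = boto3.client("events")

# Match Macie findings on the default event bus.
events.put_rule(
    Name="macie-finding-alerts",
    EventPattern=json.dumps({
        "source": ["aws.macie"],
        "detail-type": ["Macie Finding"],
    }),
)

# Deliver matched findings to an SNS topic the security team subscribes to.
events.put_targets(
    Rule="macie-finding-alerts",
    Targets=[{
        "Id": "sns-alert",
        "Arn": "arn:aws:sns:us-east-1:111122223333:security-alerts",
    }],
)
```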
Ask Bash
How does Amazon Macie classify sensitive data in S3?
How is Amazon Macie different from Amazon GuardDuty?
What kind of alerts does Amazon Macie provide if it detects unsecured sensitive data?
A company is hosting a static website which experiences predictable traffic patterns, with slight increases in users during weekend hours. The website content is occasionally updated with new articles and images. The Solution Architect needs to determine the most cost-effective compute service to host this static website. Which of the following services should the Architect recommend?
Host the website on Amazon Simple Storage Service (Amazon S3) and enable website hosting.
Provision a t3.micro Amazon EC2 instance to serve the static website and use Auto Scaling to handle increases during weekends.
Use AWS Elastic Beanstalk to deploy and manage the static website on a single Amazon EC2 instance.
Deploy the static website using AWS Lambda and Amazon API Gateway to serve the content.
Answer Description
Amazon S3 is the most cost-effective service for hosting static websites. It provides scalability, high availability, and is more cost-efficient compared to using compute instances or containers for serving static content. There is no need for a traditional server setup, and it can handle the predictable traffic easily. Elastic Beanstalk, while capable of running static websites, includes additional infrastructure management that is not needed for static content, thereby increasing costs unnecessarily. AWS Lambda is meant for running code in response to events and is not a typical choice for hosting a full static website. Amazon EC2 instances would provide more capacity than required for static content, leading to higher costs.
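A minimal sketch of enabling website hosting on an existing bucket (the bucket name is a placeholder; the bucket must also permit public reads via a bucket policy, or sit behind CloudFront, for the endpoint to serve content):

```python
import boto3

s3 = boto3.client("s3")

# Turn on static website hosting with index and error documents.
s3.put_bucket_website(
    Bucket="example-static-site",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Publish a page; updates are just new object uploads.
s3.put_object(
    Bucket="example-static-site",
    Key="index.html",
    Body=b"<html><body><h1>Hello</h1></body></html>",
    ContentType="text/html",
)
```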
Ask Bash
Why is Amazon S3 the most cost-effective service for hosting static websites?
What happens when website traffic increases with an S3-hosted static website?
How does enabling S3 website hosting work for serving static websites?
Your client's online retail system is being redesigned to enhance scalability and ensure that the inventory-tracking component can process transactions sequentially as they occur. To avoid any loss or misordering of transaction data, which AWS service should be implemented?
Use an Amazon SQS FIFO queue (managed message queue with FIFO capabilities)
Utilize a workflow-orchestration service to manage the application's tasks
Deploy a serverless function with an event-processing trigger
Implement a publish/subscribe service for event notifications
Answer Description
The most appropriate service is an Amazon SQS FIFO queue (a managed message queue with FIFO capabilities). A FIFO queue provides exactly-once processing and preserves strict ordering within a message group, which is critical for maintaining accurate inventory counts.
A standard publish/subscribe service such as an Amazon SNS standard topic offers at-least-once delivery and only best-effort ordering, so it could deliver transactions out of order. Workflow-orchestration services coordinate tasks but do not buffer or order messages. A serverless function alone is compute, not a durable message queue, and offers no ordering guarantees.
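A hedged sketch of a FIFO queue (the queue name, which must end in .fifo, and the message body are placeholders):

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queue: strict per-group ordering plus content-based deduplication,
# which provides exactly-once processing within the deduplication window.
queue_url = sqs.create_queue(
    QueueName="inventory-transactions.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"sku": "ABC-1", "delta": -2}',
    MessageGroupId="sku-ABC-1",   # ordering is preserved per message group
)
```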
Ask Bash
What is Amazon SQS FIFO queue?
How does Amazon SQS FIFO ensure message ordering?
Why is FIFO important for inventory-tracking systems?
A single AWS account is shared by several internal departments. Every AWS resource is tagged with the keys Department and CostCenter. The finance team wants a simple, interactive way to break down and analyze monthly AWS charges by each department so that showback and chargeback reports can be generated. Which AWS cost-management service will BEST meet this requirement?
AWS Cost and Usage Report (CUR) delivered to Amazon S3
AWS Budgets configured for each department
Amazon CloudWatch dashboards that display Billing metrics
AWS Cost Explorer with cost allocation tags enabled
Answer Description
AWS Cost Explorer integrates with cost-allocation tags that you activate in the Billing console. After tags such as Department or CostCenter are activated, you can group and filter spend in Cost Explorer to view the exact costs incurred by each department for any time range. This satisfies the need for interactive, tag-based chargeback analysis without requiring external tooling.
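The same breakdown is available programmatically through the Cost Explorer API; a hedged sketch grouping one month's spend by the activated Department tag (dates are placeholders):

```python
import boto3

ce = boto3.client("ce")

# Monthly unblended cost, grouped by the Department cost allocation tag.
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "Department"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])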
Ask Bash
How do cost allocation tags work in AWS?
What is AWS Cost Explorer, and how is it used?
What is the difference between AWS Cost Explorer and the AWS Cost and Usage Report (CUR)?
A company has a workload on Amazon EC2 that exhibits variable usage patterns due to occasional marketing campaigns which lead to unpredictable bursts of traffic. Their current setup uses a fixed number of instances which often results in either over-provisioning or inability to serve peak traffic. What strategy should the Solutions Architect adopt to optimize for cost without sacrificing performance?
Increase the use of Spot Instances to benefit from cost savings during off-peak times.
Predominantly utilize Reserved Instances to ensure capacity and reduce costs.
Implement EC2 Auto Scaling with dynamic scaling policies to automatically adjust the number of instances in response to traffic demands.
Use a fixed number of On-demand Instances to simplify management.
Answer Description
Implementing EC2 Auto Scaling with dynamic scaling policies is the correct strategy in this scenario. Dynamic scaling adjusts the number of EC2 instances automatically in response to real-time demand, such as the unpredictable bursts of traffic caused by sporadic marketing campaigns. This action prevents over-provisioning (and thus overspending) during normal operation and also ensures performance isn't sacrificed during unexpected surges in demand.
Reserved Instances would not be cost-effective due to the unpredictable nature of the traffic, and Spot Instances might be interrupted during peak loads, leading to potential performance issues. On-demand instances alone would maintain performance but would not be cost-optimized.
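A hedged sketch of the group itself (names, IDs, and sizes are placeholders): baseline capacity stays small, while the maximum leaves headroom for campaign bursts, and a dynamic policy such as the target-tracking example sketched earlier drives the actual instance count.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Auto Scaling group sized for the normal baseline, with room to grow
# during unpredictable marketing-campaign traffic bursts.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="campaign-workload-asg",
    LaunchTemplate={"LaunchTemplateName": "campaign-workload", "Version": "$Latest"},
    MinSize=2,            # baseline capacity
    MaxSize=20,           # headroom for bursts
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0123456789abcdef0,subnet-0fedcba9876543210",
)
```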
Ask Bash
What is EC2 Auto Scaling, and how does it work?
Why are Reserved Instances not ideal for variable workloads?
What are the trade-offs of using Spot Instances for variable traffic patterns?
Which service is ideal for improving the response times of a dynamic web application by storing frequently accessed information in-memory?
Amazon Elastic Block Store (EBS)
AWS Snowball
Amazon Simple Storage Service (S3)
Amazon ElastiCache
Answer Description
Amazon ElastiCache is designed to store data in-memory to provide low-latency access to hot data. This helps improve the performance of applications by allowing quick retrieval of information which is accessed frequently, thus reducing the need to access slower disk-based storage systems. It is commonly used to enhance response times for dynamic web applications that require rapid data access.
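The usual integration is the cache-aside pattern, sketched here with the third-party redis client against a Redis-compatible ElastiCache endpoint (the endpoint and the `load_product_from_db` helper are hypothetical):

```python
import json

import redis  # third-party client; assumes a Redis-compatible ElastiCache endpoint

cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def get_product(product_id: str) -> dict:
    # Cache hit: serve from memory, the sub-millisecond path.
    cached = cache.get(f"product:{product_id}")
    if cached is not None:
        return json.loads(cached)
    # Cache miss: fall back to the database, then populate the cache.
    product = load_product_from_db(product_id)  # hypothetical DB loader
    cache.setex(f"product:{product_id}", 300, json.dumps(product))  # 5-minute TTL
    return product
```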
Ask Bash
What is in-memory data storage, and why is it faster than disk-based storage?
How does Amazon ElastiCache improve the performance of dynamic web applications?
What makes Amazon ElastiCache different from Amazon S3 for data storage?
An organization requires a mechanism to distribute web traffic across multiple EC2 instances to ensure high availability and fault tolerance for its online shopping platform. The web application needs to support the WebSocket protocol and maintain HTTP-cookie-based sticky sessions so that users always reach the same backend instance during a session. Which AWS service should be used to meet these specific requirements?
Application Load Balancer
Network Load Balancer
Classic Load Balancer
Amazon Route 53
Answer Description
The Application Load Balancer (ALB) is the only Elastic Load Balancing option that offers all of the needed features in one service:
- Layer-7 (HTTP/HTTPS) support that allows an HTTP/1.1 Upgrade to WebSocket (ws/wss), natively supported by ALB.
- HTTP-cookie-based stickiness that can be enabled at the target-group level.
Classic Load Balancer does not support WebSocket at all, and while a Network Load Balancer can carry WebSocket traffic over TCP/TLS listeners, it only offers source-IP affinity (and none for TLS listeners), not application-cookie stickiness. Route 53 is a DNS service and does not provide session stickiness.
Therefore, an Application Load Balancer best satisfies both the WebSocket and sticky-session requirements.
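A hedged sketch of enabling duration-based cookie stickiness on an ALB target group (the target group ARN is a placeholder); WebSocket upgrades work on ALB HTTP/HTTPS listeners without extra configuration:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Enable load-balancer-generated cookie stickiness so a user's requests
# keep reaching the same backend instance for the session duration.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc123",
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "3600"},
    ],
)
```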
Ask Bash
What is an Application Load Balancer (ALB) and how does it work?
How does HTTP-cookie-based stickiness work in an Application Load Balancer?
How does ALB support WebSocket communication?
Your company requires a solution to enhance user experience by minimizing content load times for a globally dispersed audience. Which service should be utilized to efficiently cache and deliver web content at scale?
Amazon CloudFront
Amazon Elastic File System (EFS)
AWS Global Accelerator
AWS Direct Connect
Answer Description
Amazon CloudFront is the appropriate service for efficiently caching and delivering web content to users with minimized load times because it is a content delivery network (CDN) that stores copies of content at edge locations closer to users worldwide. This proximity reduces latency and improves data transfer speeds, enhancing the user experience. The incorrect options provided do serve other specific purposes, such as improving application performance across regions (Global Accelerator), establishing dedicated network connections (Direct Connect), and offering managed file storage (Elastic File System), but they do not focus on caching and delivering web content globally like CloudFront does.
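A minimal sketch of a distribution in front of a custom origin (the origin domain is a placeholder, and the cache-policy ID is assumed to be the managed CachingOptimized policy; verify the ID in your account):

```python
import time

import boto3

cloudfront = boto3.client("cloudfront")

# Minimal distribution: one custom origin, HTTPS enforced for viewers,
# content cached at edge locations close to users worldwide.
cloudfront.create_distribution(DistributionConfig={
    "CallerReference": str(time.time()),  # idempotency token
    "Comment": "global content delivery",
    "Enabled": True,
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "primary-origin",
        "DomainName": "origin.example.com",   # placeholder origin
        "CustomOriginConfig": {
            "HTTPPort": 80,
            "HTTPSPort": 443,
            "OriginProtocolPolicy": "https-only",
        },
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "primary-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",  # assumed managed CachingOptimized ID
    },
})
```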
Ask Bash
What is a CDN and how does it work?
What are CloudFront edge locations?
How is CloudFront different from AWS Global Accelerator?
Smashing!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.