AWS Certified Solutions Architect Professional Practice Test (SAP-C02)
Use the form below to configure your AWS Certified Solutions Architect Professional Practice Test (SAP-C02). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

AWS Certified Solutions Architect Professional SAP-C02 Information
The AWS Certified Solutions Architect – Professional (SAP-C02) exam is a test for people who want to show advanced skills in cloud design using Amazon Web Services. It proves that you can handle large, complex systems and design solutions that are secure, reliable, and meet business needs. Passing this exam shows a higher level of knowledge than the associate-level test and is often needed for senior cloud roles.
This exam includes multiple-choice and multiple-response questions. It covers areas like designing for high availability, choosing the right storage and compute services, planning for cost, and managing security at scale. You will also need to understand how to migrate big applications to the cloud, design hybrid systems, and use automation tools to keep environments efficient and safe.
AWS suggests having at least two years of real-world experience before taking this test. The SAP-C02 exam takes 180 minutes, includes about 75 questions, and requires a scaled score of 750 out of 1000 to pass. Preparing usually means lots of practice with AWS services, using study guides, and trying practice exams. For many professionals, this certification is an important milestone toward becoming a cloud architect or senior cloud engineer.
Free AWS Certified Solutions Architect Professional SAP-C02 Practice Test
Press start when you are ready, or press Change to modify any settings for the practice test.
- Questions: 15
- Time: Unlimited
- Included Topics: Design Solutions for Organizational Complexity, Design for New Solutions, Continuous Improvement for Existing Solutions, Accelerate Workload Migration and Modernization
A large enterprise is managing a hybrid environment with thousands of Amazon EC2 instances and on-premises servers. They need to enforce a standard baseline configuration across the entire fleet, which includes specific software versions and security settings. A key requirement is to automatically apply this baseline to newly launched instances and periodically scan the entire fleet for any configuration drift. If an instance is found to be non-compliant, it should be reported to a central dashboard. The solution must minimize manual intervention and use AWS native services for configuration management.
Which AWS Systems Manager capability should a solutions architect recommend to meet these requirements most effectively?
AWS Systems Manager State Manager
AWS Systems Manager Distributor
AWS Systems Manager Patch Manager
AWS Systems Manager Run Command
Answer Description
The correct answer is AWS Systems Manager State Manager. State Manager is a scalable configuration management capability that automates the process of keeping managed nodes in a defined state. It uses documents to define the desired configuration and 'associations' to apply that configuration to a target set of instances on a schedule. On each scheduled run, State Manager checks the fleet for configuration drift and reports compliance status to a central dashboard, which directly addresses the key requirements of the scenario.
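For illustration, a minimal boto3 sketch of such an association is shown below. The custom document name, tag target, and schedule are assumptions for the sketch, not values from the scenario.

```python
import boto3

ssm = boto3.client("ssm")

# Hypothetical association: apply a custom baseline document ("Corp-BaselineConfig")
# to every managed node tagged Environment=Production, re-checking every 30 minutes.
response = ssm.create_association(
    Name="Corp-BaselineConfig",                      # custom SSM document (assumed to exist)
    AssociationName="fleet-baseline",
    Targets=[{"Key": "tag:Environment", "Values": ["Production"]}],
    ScheduleExpression="rate(30 minutes)",           # periodic drift scan
    ComplianceSeverity="HIGH",                       # non-compliant nodes surface on the compliance dashboard
)
print(response["AssociationDescription"]["AssociationId"])
```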
- AWS Systems Manager Run Command is used for executing ad-hoc or on-demand commands on managed instances. While it can be used to apply a configuration once, it does not inherently maintain a persistent state or automatically scan for and remediate configuration drift over time, making it less effective for this use case than State Manager.
- AWS Systems Manager Distributor is used to securely store and distribute software packages to managed instances. While it can be used as part of a configuration management solution (for example, a State Manager association could use Distributor to install a package), it does not by itself enforce the configuration or manage drift. It is a tool for package distribution, not state management.
- AWS Systems Manager Patch Manager is a specialized service for automating the process of patching operating systems and applications with security and other updates. Its scope is limited to patching and does not cover general-purpose configuration management, such as installing specific software or enforcing custom security settings as required by the scenario.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is AWS Systems Manager State Manager, and how does it work?
How does State Manager differ from AWS Systems Manager Patch Manager?
What is configuration drift, and how does AWS State Manager handle it?
A financial-services company must migrate a 50-TB on-premises Oracle database to an Amazon RDS for Oracle instance in its primary AWS Region. The company already has one dedicated 10 Gbps AWS Direct Connect connection that terminates at a single colocation facility and uses a private virtual interface (VIF) to the company's VPC.
To meet a strict migration deadline and ensure long-term operational resilience, the solutions architect must design connectivity that
- increases total bandwidth,
- survives the loss of an entire Direct Connect location, and
- provides link-level encryption without adding significant throughput overhead.
Which networking solution best meets these requirements?
Order a second 10 Gbps cross-connect at the existing facility and create a Link Aggregation Group (LAG) with the two ports. Configure a Site-to-Site VPN over the internet as a backup path.
Provision a second 10 Gbps dedicated Direct Connect connection at a different colocation facility. Configure a private VIF on each connection and attach them directly to the VPC's virtual private gateway. Enable MACsec on both connections for Layer-2 encryption.
Upgrade the current Direct Connect port to 100 Gbps at the same location and create both public and private VIFs. Rely on application-layer encryption for data in transit.
Add a second 10 Gbps Direct Connect connection in a new facility. Create an AWS Transit Gateway, attach the VPC, and build IPsec Site-to-Site VPN tunnels over each Direct Connect to encrypt traffic.
Answer Description
Provisioning a second dedicated 10 Gbps Direct Connect connection in a different colocation facility and attaching private VIFs from both connections directly to the VPC's virtual private gateway meets all three requirements:
- Two geographically diverse connections protect against a complete location failure and allow active/active traffic flow for 20 Gbps aggregate bandwidth.
- MACsec can be enabled on each dedicated link to deliver Layer-2, near line-rate encryption between the customer router and the AWS Direct Connect device, eliminating the performance penalties of overlay IPsec tunnels.
- Direct Connect traffic is not encrypted by default, so MACsec satisfies the company's link-level encryption requirement.
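As a hedged illustration of the ordering and encryption steps, the boto3 sketch below requests MACsec support on a second dedicated connection, associates a placeholder CKN/CAK pair, and attaches a private VIF to an existing virtual private gateway. The location code, key values, VLAN, ASN, and gateway ID are all illustrative assumptions.

```python
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# Request a second 10 Gbps dedicated connection at a different DX location,
# with MACsec capability requested at order time (location code is a placeholder).
conn = dx.create_connection(
    location="EqDC2",
    bandwidth="10Gbps",
    connectionName="dx-secondary",
    requestMACSec=True,
)

# Once the cross-connect is up, associate a MACsec key pair (placeholder values).
dx.associate_mac_sec_key(
    connectionId=conn["connectionId"],
    ckn="<64-hex-char connectivity key name>",
    cak="<64-hex-char connectivity association key>",
)

# Attach a private VIF from the new connection to the VPC's virtual private gateway.
dx.create_private_virtual_interface(
    connectionId=conn["connectionId"],
    newPrivateVirtualInterface={
        "virtualInterfaceName": "vif-secondary",
        "vlan": 101,
        "asn": 65000,                                  # customer-side BGP ASN
        "virtualGatewayId": "vgw-0123456789abcdef0",   # existing VGW (placeholder ID)
    },
)
```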
Other choices are less suitable:
- Creating a Link Aggregation Group (LAG) in the same facility doubles bandwidth but still fails if the location goes offline.
- Adding IPsec VPN over Direct Connect provides encryption but lowers effective throughput and adds operational complexity.
- Upgrading to a single 100 Gbps port at the existing site increases bandwidth but does not address location-level availability or encryption.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is MACsec and why is it important in this context?
What is a VPC's virtual private gateway and how does it interact with Direct Connect?
Why is having two diverse Direct Connect locations important for resilience?
A financial services company operates a multi-account AWS environment with a dedicated 'Developer Tools' account (ID: 111122223333) and a 'Production' account (ID: 999988887777). A CI/CD pipeline, running on an EC2 instance in the Developer Tools account, needs to deploy updates to a specific Lambda function named 'TradeProcessor' (ARN: arn:aws:lambda:us-east-1:999988887777:function:TradeProcessor). A solutions architect has been tasked with designing an IAM configuration that provides the necessary cross-account access while adhering strictly to the principle of least privilege. Which of the following configurations is the most secure and meets the requirements?
In the Production account, create an IAM role named LambdaUpdateRole with an IAM policy that allows the lambda:UpdateFunctionCode action on the resource arn:aws:lambda:us-east-1:999988887777:function:TradeProcessor. Configure the role's trust policy to allow sts:AssumeRole actions from the specific IAM role ARN associated with the EC2 instance in the Developer Tools account.
In the Production account, create an IAM user with programmatic access. Attach a policy to the user that allows the lambda:UpdateFunctionCode action on the TradeProcessor function ARN. Store the user's access key and secret key in AWS Secrets Manager in the Developer Tools account and grant the EC2 instance's IAM role permission to retrieve them.
In the Production account, create an IAM role named LambdaUpdateRole with an IAM policy that allows the lambda:* action on all resources ("Resource": "*"). Configure the role's trust policy to allow sts:AssumeRole actions from the IAM role associated with the EC2 instance in the Developer Tools account.
In the Production account, create an IAM role named LambdaUpdateRole with a policy allowing lambda:UpdateFunctionCode on the TradeProcessor function ARN. Configure the role's trust policy to allow sts:AssumeRole actions from the root of the Developer Tools account ("Principal": {"AWS": "arn:aws:iam::111122223333:root"}).
Answer Description
The correct answer provides the most secure and least-privilege access by using a specific role-to-role trust relationship and resource-level permissions. An IAM role in the Production account (LambdaUpdateRole) is granted a narrow permission (lambda:UpdateFunctionCode) limited only to the specific Lambda function's ARN. Its trust policy explicitly allows only the EC2 instance's role (ToolsEC2Role) from the Developer Tools account to assume it. This ensures that only the designated EC2 instance can assume the role, and once assumed, the credentials can only be used to update that single Lambda function's code.
Using wildcards for the resource or action is overly permissive and violates the principle of least privilege, as it would allow the role to update any Lambda function, or perform any Lambda action, respectively.
Using an IAM user with long-lived access keys is not a best practice for programmatic access from an EC2 instance. IAM roles provide temporary, automatically rotated credentials, which is a more secure mechanism than managing static keys.
Configuring the trust policy in the Production account's role to trust the entire Developer Tools account root is less secure than trusting the specific role ARN. This configuration would allow any principal in the Developer Tools account with sts:AssumeRole permissions to assume the LambdaUpdateRole, not just the intended EC2 instance.
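A minimal boto3 sketch of this configuration is shown below, using the account IDs and function ARN from the scenario. The pipeline-role name ToolsEC2Role follows the explanation above and is an assumed name.

```python
import json
import boto3

iam = boto3.client("iam")  # run with credentials in the Production account (999988887777)

# Trust policy: only the specific pipeline role in the Developer Tools account may assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:role/ToolsEC2Role"},
        "Action": "sts:AssumeRole",
    }],
}

# Permission policy: a single action on a single function ARN (least privilege).
permission_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "lambda:UpdateFunctionCode",
        "Resource": "arn:aws:lambda:us-east-1:999988887777:function:TradeProcessor",
    }],
}

iam.create_role(RoleName="LambdaUpdateRole", AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.put_role_policy(
    RoleName="LambdaUpdateRole",
    PolicyName="UpdateTradeProcessorCode",
    PolicyDocument=json.dumps(permission_policy),
)
```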
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the purpose of 'sts:AssumeRole' in the trust policy?
Why is using specific role-to-role trust relationships more secure than trusting the account root?
Why is using IAM roles preferred over IAM users with long-lived credentials?
A financial services company is migrating its on-premises data center to AWS. The migration includes a 500 TB on-premises NAS that stores critical financial analytics data. The company has an existing 1 Gbps AWS Direct Connect connection, which is currently utilized at 40% capacity for other business operations. The project timeline requires the initial 500 TB data transfer to be completed within 30 days. After the initial transfer, a subset of the data, approximately 50 TB, will continue to be updated on-premises and requires ongoing synchronization with the target Amazon S3 bucket until the final application cutover in three months.
The company's security policy mandates end-to-end encryption for all data in transit. A solutions architect needs to design the most efficient and cost-effective migration strategy that meets these requirements.
Which approach should the architect recommend?
Use AWS DataSync to transfer the entire 500 TB dataset over the Direct Connect connection. Schedule the DataSync task to run continuously until the migration is complete.
Use AWS Snowball Edge Storage Optimized devices for the initial bulk transfer. For ongoing synchronization, configure an AWS Transfer Family SFTP endpoint and use a scheduled script to sync changes.
Deploy an AWS Storage Gateway in File Gateway mode on-premises. Use AWS DataSync to migrate the entire 500 TB of data from the NAS to the File Gateway to be uploaded to Amazon S3.
Use AWS Snowball Edge Storage Optimized devices for the initial 500 TB transfer. Then, use an AWS DataSync agent on-premises to perform ongoing synchronization over the Direct Connect link.
Answer Description
The correct answer is to use AWS Snowball Edge for the initial bulk transfer and AWS DataSync for the ongoing synchronization. A 1 Gbps connection has a theoretical maximum throughput of about 10.8 TB per day. With only 60% of the bandwidth available (600 Mbps), the effective throughput is approximately 6.5 TB per day. Transferring 500 TB would take roughly 77 days, which does not meet the 30-day requirement. Therefore, an offline transfer method like AWS Snowball Edge is necessary for the initial bulk migration. AWS Snowball Edge Storage Optimized devices are suitable for petabyte-scale transfers. For the ongoing synchronization of the 50 TB active dataset, AWS DataSync is the ideal managed service. It can operate over the existing Direct Connect link, fully automates the incremental data transfer, provides end-to-end encryption, and validates data integrity, making it more efficient and less operationally complex than custom-scripted solutions.
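The throughput estimate above can be reproduced with a few lines of arithmetic. This is a rough sketch that ignores protocol overhead:

```python
# Back-of-the-envelope check of the transfer-time math used above.
link_gbps = 1.0
available_fraction = 0.60                      # 40% of the link is already in use
effective_gbps = link_gbps * available_fraction

seconds_per_day = 86_400
tb_per_day = effective_gbps / 8 * seconds_per_day / 1000   # Gb/s -> GB/s -> GB/day -> TB/day
days_needed = 500 / tb_per_day

print(f"~{tb_per_day:.1f} TB/day, ~{days_needed:.0f} days for 500 TB")   # ~6.5 TB/day, ~77 days for 500 TB
```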
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is AWS Direct Connect and how does it support migration projects?
What is AWS Snowball Edge, and why is it used for bulk data transfers?
How does AWS DataSync ensure secure and efficient ongoing data synchronization?
Your company operates 30 AWS accounts that are organized with AWS Organizations. Finance wants an interactive Amazon QuickSight dashboard that shows charge-back information by linked account, cost category, and cost-allocation tag. The reporting solution must do the following:
- Include resource-ID-level cost details.
- Refresh automatically at least once every 24 hours.
- Retain all historical cost data for multi-year trend analysis.
- Minimize operational overhead and avoid third-party tools.
Which approach will meet these requirements?
Export cost-optimization check results from AWS Trusted Advisor for every account to Amazon S3 each day and use AWS Glue and QuickSight to create cost dashboards from the exported reports.
Create individual AWS Budgets for each linked account, have the budgets send daily Amazon SNS notifications, store the notifications in Amazon S3 through an AWS Lambda subscriber, and build QuickSight visuals from the stored messages.
Enable an AWS Cost and Usage Report with resource-ID and hourly granularity in the management account. Deliver the report to an Amazon S3 bucket and turn on the Cost and usage dashboard powered by QuickSight to visualize the data.
Enable hourly granularity in AWS Cost Explorer and schedule a daily AWS Lambda function to call the Cost Explorer API, store the CSV output in Amazon S3, and query it with Amazon Athena for QuickSight dashboards.
Answer Description
An AWS Cost and Usage Report (CUR) satisfies every requirement. CUR delivers the most granular cost and usage data, including resource IDs, tags, and cost categories, directly to an Amazon S3 bucket and updates it up to three times per day. The CUR data can be visualized without custom ETL by enabling the built-in Cost and usage dashboard powered by QuickSight, and the files stay in S3 for as long as the company's retention policy allows.
Cost Explorer exports do not meet the retention or granularity needs: Cost Explorer keeps only about 13 months of history and its resource-level API is limited to the last 14 days. AWS Budgets generates threshold notifications rather than full cost datasets and updates only a few times per day. AWS Trusted Advisor provides optimization recommendations, not detailed, tag-based cost records. Therefore, enabling CUR with QuickSight integration is the only option that fulfills all the stated requirements.
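As a rough sketch, the report definition described above could be created with the Cost and Usage Reports API. The report name, bucket, and prefix below are placeholders; the CUR API endpoint is available only in us-east-1.

```python
import boto3

cur = boto3.client("cur", region_name="us-east-1")

# Resource-level, hourly CUR delivered to S3 from the management account.
cur.put_report_definition(
    ReportDefinition={
        "ReportName": "org-cost-and-usage",
        "TimeUnit": "HOURLY",
        "Format": "Parquet",
        "Compression": "Parquet",
        "AdditionalSchemaElements": ["RESOURCES"],     # include resource IDs
        "S3Bucket": "example-cur-bucket",
        "S3Prefix": "cur/",
        "S3Region": "us-east-1",
        "RefreshClosedReports": True,
        "ReportVersioning": "OVERWRITE_REPORT",
    }
)
```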
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is AWS Cost and Usage Report (CUR)?
How does the Cost and Usage dashboard powered by QuickSight work?
Why is CUR preferred over Cost Explorer for long-term cost analysis?
A global enterprise is architecting a multi-account AWS environment. A central 'Shared Services' VPC hosts centralized tools. Numerous 'Application' VPCs, each in a separate AWS account, host business applications. The EC2 instances in these Application VPCs require frequent access to Amazon S3 and Amazon DynamoDB. The networking team has raised concerns about IP address exhaustion in the Application VPCs. Security requirements mandate that all traffic to S3 and DynamoDB must remain within the AWS network and be restricted to a specific list of approved resources. Which network design should a solutions architect recommend to meet these requirements in the most scalable and resource-efficient manner?
In each Application VPC, configure a NAT Gateway in a public subnet and update the route tables for the private subnets to direct S3 and DynamoDB traffic through the NAT Gateway.
Create VPC Interface Endpoints for S3 and DynamoDB in the central Shared Services VPC. Use AWS Transit Gateway to connect all Application VPCs to the Shared Services VPC and route all AWS service traffic through the centralized endpoints.
In each Application VPC, create VPC Interface Endpoints for both Amazon S3 and Amazon DynamoDB. Attach an endpoint policy to each endpoint to restrict access to the approved resources.
In each Application VPC, create VPC Gateway Endpoints for both Amazon S3 and Amazon DynamoDB. Attach an endpoint policy to each endpoint that explicitly allows access only to the approved S3 buckets and DynamoDB tables.
Answer Description
The correct answer is to create VPC Gateway Endpoints in each Application VPC. Gateway Endpoints are the ideal solution for connecting to Amazon S3 and DynamoDB privately from within a VPC. They provide several key benefits that directly address the requirements. First, they do not consume any IP addresses from your VPC's CIDR block, which is a critical advantage given the concern about IP address exhaustion. Second, they ensure traffic does not traverse the public internet by creating a private route between the VPC and the AWS services. Finally, you can attach an endpoint policy to a Gateway Endpoint to enforce fine-grained access control, restricting access to only the specified S3 buckets and DynamoDB tables, which satisfies the security mandate.
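A hedged boto3 sketch of one such Gateway Endpoint with a restrictive endpoint policy is shown below. The VPC ID, route table ID, and bucket name are placeholders.

```python
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Endpoint policy restricting the S3 Gateway Endpoint to one approved bucket.
s3_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::approved-bucket",
            "arn:aws:s3:::approved-bucket/*",
        ],
    }],
}

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
    PolicyDocument=json.dumps(s3_policy),
)

# A second Gateway Endpoint for DynamoDB follows the same pattern, using
# ServiceName "com.amazonaws.us-east-1.dynamodb" and a table-scoped policy.
```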
Incorrect options explained:
- Creating Interface Endpoints in each Application VPC is incorrect because Interface Endpoints, which use Elastic Network Interfaces (ENIs), consume private IP addresses from the subnets they are placed in. This would worsen the IP address exhaustion problem, making this solution less resource-efficient than using Gateway Endpoints.
- Centralizing Interface Endpoints in a Shared Services VPC is incorrect for two main reasons. While Interface Endpoints can be accessed over a Transit Gateway, Gateway Endpoints cannot; this architecture forces the use of the more expensive and IP-consuming Interface Endpoints. This design also introduces unnecessary complexity and potential data transfer costs associated with routing traffic through the Transit Gateway and the central VPC.
- Using a NAT Gateway is incorrect because it is designed to allow instances in private subnets to connect to the internet. Traffic routed through a NAT Gateway to access AWS services would traverse the public internet to reach the public service endpoints, which explicitly violates the security requirement that traffic must remain within the AWS network.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a VPC Gateway Endpoint?
Why are VPC Gateway Endpoints preferred over NAT Gateways for private resource access?
What is the difference between VPC Gateway Endpoints and Interface Endpoints?
A solutions architect is designing a large multi-tenant SaaS application on AWS. The application uses a fleet of EC2 instances in an Auto Scaling group to process asynchronous jobs from an Amazon SQS queue. A single job from one tenant, known as a 'poison pill', could potentially cause a worker instance to crash repeatedly. This could lead to a rapid succession of instance terminations and launches, consuming resources and impacting the job processing capability for all tenants sharing the fleet. The architect needs to design a solution that minimizes the blast radius of such a failure, ensuring a problem caused by a single tenant affects the fewest other tenants possible. Which approach provides the most effective failure isolation for this scenario?
Configure the Auto Scaling group to span multiple Availability Zones and place an Application Load Balancer in front of the EC2 instances to distribute jobs.
Implement shuffle sharding by creating multiple target groups (virtual shards) from the total worker fleet and mapping each tenant to a unique combination of target groups.
Configure a dead-letter queue (DLQ) on the main SQS queue to automatically isolate messages that fail processing multiple times.
Implement a strict bulkhead pattern by provisioning a dedicated Auto Scaling group and SQS queue for each tenant.
Answer Description
The correct answer is to implement shuffle sharding. Shuffle sharding is an advanced architectural pattern that provides a high degree of workload isolation and blast radius reduction. It works by creating many virtual shards from a smaller pool of resources (the worker fleet) and assigning each tenant to a unique combination of these resources. In the event of a poison pill taking down the resources in one virtual shard, only the very small number of tenants assigned to that specific combination are affected. This massively reduces the blast radius compared to traditional sharding.
Implementing a separate Auto Scaling group for each tenant is an example of a bulkhead pattern, but it is not practical or cost-effective for a large-scale, multi-tenant application with thousands of tenants due to the high operational overhead and resource underutilization.
Using a standard Multi-AZ Auto Scaling group with an Application Load Balancer is a fundamental high-availability pattern that protects against Availability Zone failures, not application-level correlated failures like a poison pill. A poison pill would cause instances in all Availability Zones to fail, eventually affecting the entire fleet.
Configuring a dead-letter queue (DLQ) on the SQS queue is an essential practice for handling poison pill messages, but it does not solve the architectural problem of blast radius for the compute fleet. The DLQ isolates the problematic message after it has failed processing multiple times, but during those failures, it would have already impacted the shared compute fleet, affecting all tenants. Shuffle sharding proactively contains the impact of the compute failure itself.
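The idea can be illustrated with a small, self-contained Python sketch. The worker pool size, shard size, and tenant IDs are arbitrary choices for the illustration.

```python
import hashlib
from itertools import combinations

# Toy shuffle sharding: 8 workers and shards of size 2 give C(8, 2) = 28 distinct
# virtual shards, so two tenants rarely share the same full set of workers.
WORKERS = [f"worker-{i}" for i in range(8)]
SHARD_SIZE = 2
VIRTUAL_SHARDS = list(combinations(WORKERS, SHARD_SIZE))

def shard_for_tenant(tenant_id: str) -> tuple:
    """Deterministically map a tenant to one virtual shard (a unique worker combination)."""
    digest = int(hashlib.sha256(tenant_id.encode()).hexdigest(), 16)
    return VIRTUAL_SHARDS[digest % len(VIRTUAL_SHARDS)]

print(shard_for_tenant("tenant-a"))   # a deterministic pair such as ('worker-1', 'worker-5')
print(shard_for_tenant("tenant-b"))   # usually overlaps with tenant-a in at most one worker
```

A poison pill from one tenant can only exhaust the two workers in that tenant's shard, so most other tenants keep at least one healthy worker.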
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is shuffle sharding in AWS?
How is shuffle sharding different from traditional sharding?
Why is the bulkhead pattern not suitable for a large multi-tenant SaaS application?
Your organization is migrating several on-premises Kubernetes microservices to AWS. Each microservice team will receive its own AWS account. A central networking account already owns a VPC and an Amazon EFS file system that must remain in place. Security and platform teams have issued these requirements:
- Cluster capacity (patching, scaling, operating system updates) must incur the least possible manual effort.
- Pods must never use node instance credentials; each microservice must receive only the AWS permissions it needs.
- The existing Amazon EFS file system must be available to the microservices as persistent, POSIX-compatible shared storage.
- Network administrators must retain ownership of subnets and route tables, but application teams must be able to deploy workloads from their own accounts.
Which architecture best meets all of these requirements?
Create a centralized Amazon EKS cluster in the networking account. Configure an AWS Fargate profile for each microservice namespace, share the VPC subnets with workload accounts by using AWS Resource Access Manager, mount the existing Amazon EFS file system with the Amazon EFS CSI driver, and map every Kubernetes service account to its own IAM role by using IAM roles for service accounts.
Implement a self-managed Kubernetes cluster on EC2 instances launched in a shared subnet of the networking account. Configure cross-account SSH access for each team, mount the EFS file system directly on the hosts, and use security groups on the nodes to isolate traffic.
Deploy an Amazon EKS cluster in every workload account with self-managed EC2 nodes, peer each cluster's VPC to the networking account, mount the EFS file system by exporting it over NFS, and store static AWS access keys in Kubernetes secrets for applications that call AWS services.
Provision a single Amazon EKS cluster in the networking account with managed EC2 node groups. Disable IAM roles for service accounts so that pods use the node instance profile, and attach the EFS file system by installing the NFS client on every node.
Answer Description
A centralized Amazon EKS cluster that runs all workloads on AWS Fargate removes the need to provision, patch, or scale EC2 worker nodes, satisfying the low-maintenance requirement. Subnets from the networking account can be shared with workload accounts by using AWS Resource Access Manager, so the network team retains control of VPC constructs while application teams can place ENIs for their pods in the shared subnets. The Amazon EFS CSI driver allows pods running on Fargate to mount the existing file system with static provisioning. Finally, mapping each Kubernetes service account to an IAM role by using IAM roles for service accounts provides temporary, least-privilege credentials inside the pod and avoids relying on the node's instance profile. The other options either require ongoing EC2 node management, store long-lived credentials inside pods, or fail to give the networking team continued ownership of the VPC.
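As one hedged example of the IAM-roles-for-service-accounts piece, the sketch below builds the OIDC trust policy for a single microservice. The cluster's OIDC provider ID, account ID, namespace, service-account name, and role name are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

oidc_provider = "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
account_id = "123456789012"

# Trust policy that lets only one Kubernetes service account assume the role
# via the cluster's OIDC identity provider.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": f"arn:aws:iam::{account_id}:oidc-provider/{oidc_provider}"},
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
            "StringEquals": {
                f"{oidc_provider}:sub": "system:serviceaccount:orders:orders-sa",
                f"{oidc_provider}:aud": "sts.amazonaws.com",
            }
        },
    }],
}

iam.create_role(RoleName="orders-irsa-role", AssumeRolePolicyDocument=json.dumps(trust_policy))
# The role ARN is then annotated on the Kubernetes service account
# (eks.amazonaws.com/role-arn) so pods receive scoped, temporary credentials.
```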
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is AWS Fargate and why is it beneficial for EKS clusters?
What is the Amazon EFS CSI Driver and how does it work?
How do IAM roles for service accounts improve security in EKS?
A financial services company runs a critical monolithic application on a fleet of Amazon EC2 instances behind an Application Load Balancer. The current deployment process involves manually stopping the application, deploying the new version on all instances simultaneously, and then restarting the application. This 'all-at-once' method results in significant downtime during each release and makes rollbacks a complex, time-consuming manual effort. The company wants to improve its operational excellence by adopting a deployment strategy that eliminates downtime and minimizes risk. As a solutions architect, which strategy should you recommend to meet these requirements?
Implement an in-place rolling update by configuring the Auto Scaling group to replace instances one by one with a new launch template version.
Implement a blue/green deployment strategy using AWS CodeDeploy, configuring it to shift traffic between two environments via the Application Load Balancer.
Automate the existing all-at-once deployment process using AWS Systems Manager Run Command to execute the deployment scripts simultaneously across all instances.
Re-platform the application onto AWS Elastic Beanstalk and configure its environment to use a managed rolling update deployment policy.
Answer Description
The correct answer is to implement a blue/green deployment strategy using AWS CodeDeploy. This strategy involves creating a new, separate 'green' environment with the new application version that runs alongside the existing 'blue' production environment. Once the green environment is fully tested and ready, traffic is shifted from the blue environment to the green environment via the Application Load Balancer. This cutover is nearly instantaneous, eliminating downtime. If any issues are detected, a rollback is just as fast, as traffic can be immediately rerouted back to the old blue environment, which remains on standby until the green environment is deemed stable.
An in-place rolling update is incorrect because, while it avoids complete downtime, it introduces risk by having a mix of old and new application versions running simultaneously. This can be problematic for monolithic applications and makes rollbacks more complex than a simple traffic switch.
Using AWS Elastic Beanstalk with a managed rolling update policy is a plausible but less optimal choice. While Elastic Beanstalk simplifies deployments, it abstracts a significant amount of control. For a critical financial application, a more granular and customizable solution using a dedicated CI/CD pipeline with AWS CodeDeploy is generally preferred to maintain fine-grained control over the deployment process.
Automating the current all-at-once deployment with AWS Systems Manager Run Command is incorrect because it only automates a fundamentally flawed process. It would make the deployment faster but would not solve the core problems of application downtime and the high risk associated with an all-at-once cutover.
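A hedged boto3 sketch of such a deployment group is shown below. The application, Auto Scaling group, target group, and service-role names are placeholders rather than values from the scenario.

```python
import boto3

codedeploy = boto3.client("codedeploy", region_name="us-east-1")

# Blue/green deployment group that shifts traffic through the existing ALB target group.
codedeploy.create_deployment_group(
    applicationName="trading-app",
    deploymentGroupName="trading-app-bluegreen",
    serviceRoleArn="arn:aws:iam::123456789012:role/CodeDeployServiceRole",
    autoScalingGroups=["trading-app-asg"],
    deploymentStyle={
        "deploymentType": "BLUE_GREEN",
        "deploymentOption": "WITH_TRAFFIC_CONTROL",   # shift traffic via the ALB
    },
    blueGreenDeploymentConfiguration={
        "greenFleetProvisioningOption": {"action": "COPY_AUTO_SCALING_GROUP"},
        "deploymentReadyOption": {"actionOnTimeout": "CONTINUE_DEPLOYMENT"},
        "terminateBlueInstancesOnDeploymentSuccess": {
            "action": "KEEP_ALIVE"                    # keep the blue fleet on standby for fast rollback
        },
    },
    loadBalancerInfo={"targetGroupInfoList": [{"name": "trading-app-tg"}]},
)
```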
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a blue/green deployment strategy?
How does AWS CodeDeploy enable blue/green deployments?
What are the key advantages of blue/green deployments over rolling updates?
An e-commerce company is refactoring a legacy order-processing application into several microservices that run in separate AWS accounts. The monolith currently writes every order event to an Amazon SQS queue. A Lambda function examines each message's JSON payload and forwards it to three downstream SQS queues, one per microservice, based on the value of the eventType field (ORDER_CREATED, PAYMENT_CAPTURED, or ORDER_CANCELLED).
The development team wants to retire the Lambda router to reduce operational overhead, keep costs low, and continue using SQS for downstream processing. Exactly-once delivery and strict ordering are not required.
Which solution will meet these requirements with the least custom code?
Publish every order event to a single Amazon SNS standard topic. Create a dedicated Amazon SQS queue for each microservice and subscribe each queue to the topic. Attach a payload-based filter policy that matches only the required eventType values for that microservice.
Configure an Amazon EventBridge custom event bus. Publish each order event to the bus and create one rule per eventType that routes matching events to the appropriate SQS queue.
Replace the Lambda router with an Amazon SNS FIFO topic. Set the eventType value as the message-group ID and subscribe each microservice's SQS queue to the topic so that only matching messages are delivered.
Create three separate Amazon SNS topics, one for each eventType. Modify the order-processing service so that it publishes every event to all three topics, and have each microservice subscribe to its dedicated topic.
Answer Description
Publishing to a single Amazon SNS standard topic and attaching a filter policy to each subscription offloads all routing logic to the managed service. Each microservice still consumes from its own SQS queue, but it now receives only the event types that match its payload-based filter policy. This removes the custom Lambda router and scales automatically with no additional code or infrastructure.
EventBridge rules (second choice) could also filter messages, but it introduces another managed service and additional cost when SNS alone is sufficient. Creating three separate SNS topics (third choice) forces application changes and duplicates publishes, increasing complexity. Using an SNS FIFO topic with the eventType as the message-group ID (fourth choice) does not restrict delivery to particular subscribers-every subscribed queue still receives all messages unless a filter policy is added, so the router logic would remain necessary.
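A minimal boto3 sketch of one such subscription is shown below. The topic and queue ARNs are placeholders; the eventType value matches the scenario.

```python
import json
import boto3

sns = boto3.client("sns", region_name="us-east-1")

# Subscribe the payments microservice's queue to the shared order-events topic with a
# payload-based filter, so it only receives PAYMENT_CAPTURED events.
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:order-events",
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:123456789012:payments-queue",
    Attributes={
        "FilterPolicyScope": "MessageBody",                       # filter on the JSON payload
        "FilterPolicy": json.dumps({"eventType": ["PAYMENT_CAPTURED"]}),
        "RawMessageDelivery": "true",
    },
)
```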
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a payload-based filter policy in Amazon SNS?
Why is using Amazon SNS standard topics with SQS queues preferable for this use case?
How does Amazon EventBridge differ from Amazon SNS in this messaging use case?
A financial services company is deploying a new, computationally intensive workload on AWS for market simulation. The application is tightly-coupled and requires the lowest possible inter-node latency for optimal performance. The workload runs for several hours at a time, is fault-tolerant and can be interrupted, making it highly cost-sensitive. The company also wants to maximize the availability of compute capacity by allowing for flexibility in the specific EC2 instance types used, mitigating the risk of capacity unavailability for any single instance type.
Which approach meets all of these requirements MOST effectively?
Configure an EC2 Fleet with a Spot allocation strategy. Specify multiple instance types that meet the performance requirements and launch them into a single Cluster Placement Group within a single Availability Zone.
Create an Auto Scaling group with On-Demand Instances launched into a Spread Placement Group across multiple Availability Zones. Use multiple instance types in the launch template's overrides.
Use an EC2 Auto Scaling group with a mixed instances policy to launch instances into a Cluster Placement Group that spans multiple Availability Zones.
Launch EC2 Spot Instances using an Auto Scaling group configured with a launch template. Configure the Auto Scaling group to launch instances into a Partition Placement Group spread across multiple Availability Zones.
Answer Description
The correct answer is to use an EC2 Fleet with a Spot allocation strategy and launch instances into a Cluster Placement Group.
- Cluster Placement Group: This is the only placement strategy that groups instances into a low-latency, high-throughput network within a single Availability Zone. This directly addresses the requirement for the lowest possible inter-node latency for a tightly-coupled High-Performance Computing (HPC) workload.
- Spot Allocation Strategy: Since the workload is interruptible and cost-sensitive, Spot Instances are the most cost-effective compute pricing option.
- EC2 Fleet with Multiple Instance Types: Using an EC2 Fleet (or an Auto Scaling group with a mixed instances policy) with multiple suitable instance types specified (e.g., using attribute-based instance selection) addresses the need for compute capacity resilience. If one instance type has limited Spot capacity, the fleet can provision other specified instance types, ensuring the workload can run.
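A hedged sketch of this combination is shown below: a cluster placement group plus an EC2 Fleet Spot request that spans several instance types. The launch template name, instance types, and capacity figures are illustrative assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Cluster placement group for low-latency node-to-node networking (single AZ).
ec2.create_placement_group(GroupName="market-sim-cpg", Strategy="cluster")

# EC2 Fleet requesting Spot capacity across several suitable instance types,
# all launched into the placement group.
ec2.create_fleet(
    Type="maintain",
    SpotOptions={"AllocationStrategy": "capacity-optimized"},
    TargetCapacitySpecification={
        "TotalTargetCapacity": 64,
        "DefaultTargetCapacityType": "spot",
    },
    LaunchTemplateConfigs=[{
        "LaunchTemplateSpecification": {
            "LaunchTemplateName": "market-sim-lt",
            "Version": "$Latest",
        },
        "Overrides": [
            {"InstanceType": t, "Placement": {"GroupName": "market-sim-cpg"}}
            for t in ("c6i.16xlarge", "c5.18xlarge", "c5n.18xlarge")
        ],
    }],
)
```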
Incorrect answers explained:
- Partition Placement Group: This strategy is designed for large, distributed workloads like HDFS or Cassandra, where instances are spread across logical partitions on distinct hardware racks. It does not provide the lowest possible latency required for tightly-coupled workloads.
- Spread Placement Group: This strategy places each instance on distinct underlying hardware to reduce correlated failures. This maximizes the availability of individual critical instances but increases inter-node latency, making it unsuitable for this use case.
- Cluster Placement Group spanning multiple Availability Zones: This option is incorrect because a Cluster Placement Group, by design, cannot span multiple Availability Zones. This is a fundamental limitation that an architect must know when designing for low-latency networking.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a Cluster Placement Group in EC2?
How does the Spot allocation strategy in EC2 Fleet work?
Why can't a Cluster Placement Group span multiple Availability Zones?
Acme Group has merged its healthcare business (subject to HIPAA) and its payment-processing subsidiary (subject to PCI-DSS). The company already uses AWS Organizations with all features enabled and operates centralized log-archive and security-tooling accounts in a dedicated Security OU. Leadership wants to 1) apply and audit guardrails for HIPAA and PCI workloads independently, 2) continue sharing the existing security services, 3) receive a single consolidated bill for the entire conglomerate, and 4) avoid additional operational overhead. Which multi-account and OU strategy best satisfies these requirements?
Create a separate AWS Organization for the payment subsidiary, enable consolidated billing in each organization, and share the log-archive account between the two organizations by using AWS Resource Access Manager.
Place all healthcare and payment workloads in separate VPCs inside a single shared AWS account, enable AWS Control Tower detective guardrails, and use an AWS Cost Category to allocate each subsidiary's spend.
Keep all workload accounts in the current Workloads OU, attach both HIPAA and PCI-DSS SCP sets to that OU, and rely on cost-allocation tags to distinguish the two subsidiaries.
Expand the current organization by creating two top-level workload OUs (Healthcare and Payments), move the respective workload accounts into each OU, retain the Security OU with the shared log-archive and security-tooling accounts, and attach HIPAA-specific SCPs to the Healthcare OU and PCI-DSS SCPs to the Payments OU while using the existing management account for consolidated billing.
Answer Description
A single AWS Organization keeps billing and top-level governance centralized, so the management account can continue to generate one consolidated invoice. Creating separate top-level workload OUs-one for HIPAA-regulated healthcare workloads and another for PCI-regulated payment workloads-provides clear isolation and lets the platform team attach distinct SCP sets to each OU for the relevant compliance framework. The existing Security OU can remain unchanged, allowing both subsidiaries to consume the shared log-archive and security-tooling accounts without duplication. Spinning up a second AWS Organization would eliminate the single bill and double the effort required to maintain guardrails, while keeping all workloads in one flat OU or (worse) in a single shared account would fail to provide the necessary regulatory isolation and granular policy control.
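For reference, a minimal boto3 sketch of the OU and SCP changes is shown below. The root ID and policy IDs are placeholders for values that already exist in the management account.

```python
import boto3

org = boto3.client("organizations")

root_id = "r-exampleroot"

# Create the two top-level workload OUs under the organization root.
healthcare_ou = org.create_organizational_unit(ParentId=root_id, Name="Healthcare")
payments_ou = org.create_organizational_unit(ParentId=root_id, Name="Payments")

# Attach the compliance-specific SCP sets (policy IDs are placeholders).
org.attach_policy(PolicyId="p-hipaaguardrails",
                  TargetId=healthcare_ou["OrganizationalUnit"]["Id"])
org.attach_policy(PolicyId="p-pciguardrails",
                  TargetId=payments_ou["OrganizationalUnit"]["Id"])

# Workload accounts are then moved into the matching OU with organizations.move_account().
```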
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are SCPs and how do they enforce compliance in AWS Organizations?
Why is it necessary to create separate OUs for HIPAA and PCI-DSS workloads?
How does AWS Organizations enable consolidation of billing while maintaining compliance guardrails?
An investment-banking firm is re-architecting its proprietary trade-execution platform from on-premises VMs to AWS.
The Java microservice is stateless and scales horizontally from 10 to more than 500 vCPUs during U.S. trading hours.
Technical requirements for the new compute layer are:
- Sub-millisecond node-to-node network latency inside the Availability Zone.
- Isolation of the service that handles client-side TLS private keys so that even root on the EC2 host cannot read the keys.
- A phased migration to AWS Graviton-based instances to reduce cost while still supporting the current x86_64 build.
- Automatic horizontal scaling and zero-downtime rolling updates.
Which architecture meets all of the requirements with the LEAST operational overhead?
Launch the microservice on EC2 Dedicated Hosts running only M6i instances across two Availability Zones. Use an AWS CloudHSM cluster for key storage and distribute traffic with an Application Load Balancer.
Create an Amazon EC2 Auto Scaling group that uses a mixed-instances policy with separate launch templates for M6i (x86_64) and M6g (Arm64) instances. Enable Nitro Enclaves in each template, place the group in a cluster placement group, configure weighted capacity and a capacity-optimized allocation strategy, and use Instance Refresh for rolling updates.
Rewrite the application as AWS Lambda functions invoked through Amazon API Gateway. Use AWS KMS customer-managed keys for signing and configure Provisioned Concurrency to meet peak load.
Containerize the service and deploy it on AWS Fargate with Amazon ECS. Store the TLS private keys in AWS Secrets Manager and use Service Auto Scaling to add or remove tasks during trading hours.
Answer Description
An Amazon EC2 Auto Scaling group that uses a mixed-instances policy meets every stated need:
- Low-latency networking - Launching the ASG in a cluster placement group packs instances onto closely connected hardware within a single Availability Zone, achieving the high-bandwidth, sub-millisecond latency required.
- Key isolation - Enabling Nitro Enclaves in the launch template creates an isolated execution environment that even root on the parent instance cannot access and that integrates natively with AWS KMS. Nitro Enclaves is supported on Intel, AMD, and Graviton instance families.
- Graviton adoption with x86 compatibility - A mixed-instances ASG can reference multiple launch templates (one Arm64 AMI for M6g, one x86_64 AMI for M6i) and use instance weights so that capacity can be satisfied by either architecture, allowing a gradual, risk-free migration.
- Horizontal scaling & rolling updates - Auto Scaling handles scale-out/scale-in based on metrics, and Instance Refresh (or rolling updates in CloudFormation) provides zero-downtime deployments with minimal management.
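A hedged boto3 sketch of the mixed-instances Auto Scaling group is shown below. Names, IDs, and sizes are placeholders, and each launch template is assumed to enable Nitro Enclaves and reference the matching x86_64 or Arm64 AMI.

```python
import boto3

asg = boto3.client("autoscaling", region_name="us-east-1")

asg.create_auto_scaling_group(
    AutoScalingGroupName="trade-exec-asg",
    MinSize=2,
    MaxSize=64,
    VPCZoneIdentifier="subnet-0123456789abcdef0",
    PlacementGroup="trade-exec-cpg",                      # cluster placement group (single AZ)
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "trade-exec-x86",   # x86_64 AMI, enclaves enabled
                "Version": "$Latest",
            },
            "Overrides": [
                {"InstanceType": "m6i.4xlarge", "WeightedCapacity": "16"},
                {
                    "InstanceType": "m6g.4xlarge",
                    "WeightedCapacity": "16",
                    # Graviton instances need their own template with an Arm64 AMI.
                    "LaunchTemplateSpecification": {
                        "LaunchTemplateName": "trade-exec-arm64",
                        "Version": "$Latest",
                    },
                },
            ],
        },
    },
)
```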
The other options fail at least one requirement:
- ECS Fargate cannot run Nitro Enclaves and cannot guarantee sub-millisecond latency between tasks, so it does not satisfy the key-isolation or latency needs.
- Dedicated Hosts with CloudHSM isolate keys but add significant cost and operational overhead, and this option fails to address the requirement for a phased migration to Graviton.
- Lambda-based rewrite removes control over intra-function network latency, is not an ideal architectural fit for this type of sustained compute workload, and offers no enclave-like isolation for the key-handling service.
Therefore, the mixed-instances EC2 Auto Scaling solution is the only one that covers every constraint with the lowest operational burden.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a cluster placement group in AWS?
What are Nitro Enclaves, and how do they ensure key isolation?
How does a mixed-instances policy improve migration to Graviton in Auto Scaling groups?
A global e-commerce company hosts its single-page application on EC2 instances behind an Application Load Balancer (ALB) in the us-east-1 Region. The application serves static assets from the path /static and makes personalized API calls at /api. Customers outside North America report first-page load times above 3 seconds, and analysis shows that 70 percent of the requests for /static originate outside the United States, accounting for most of the ALB's peak throughput. The architecture team must reduce end-to-end latency for worldwide users, decrease the load on the origin, keep TLS termination as close to viewers as possible, and ensure that user-specific API responses are never cached. No code or DNS changes to existing URLs are allowed. Which strategy best meets these requirements?
Create an Amazon CloudFront distribution in front of the ALB, add a cache behavior for /static/* that uses an optimized cache policy with compression, add a cache behavior for /api/* that uses the CachingDisabled managed policy and forwards all headers, and enable Origin Shield for the ALB origin.
Provision AWS Global Accelerator with the ALB as the only endpoint and enable HTTP/2 to improve global TCP performance.
Deploy identical EC2 application stacks behind ALBs in multiple Regions and use Amazon Route 53 latency-based routing to direct users to the nearest Region.
Enable S3 Transfer Acceleration on a new S3 bucket, migrate all static assets to the bucket, and update the application to reference the new bucket while continuing to access /api through the ALB.
Answer Description
Placing Amazon CloudFront in front of the ALB provides edge termination of TLS and a global network of edge locations that shorten round-trip times for viewers. Creating a cache behavior for /static/* with an optimized cache policy and compression lets CloudFront cache and compress static files, greatly reducing origin traffic. Configuring a second behavior for /api/* that uses the CachingDisabled managed policy (TTL 0) and forwards all headers prevents CloudFront from caching personalized API responses. Enabling Origin Shield adds an extra regional cache layer that further consolidates requests, improving cache hit ratio and offloading the ALB. Global Accelerator accelerates TCP handshakes but cannot cache content, S3 Transfer Acceleration requires rewriting URLs, and multi-region replication with Route 53 adds significant complexity and does not offload static traffic from a single origin.
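To make the two cache behaviors concrete, the Python fragment below sketches the relevant part of a CloudFront DistributionConfig. The managed policy IDs are placeholders to be looked up (for example with cloudfront.list_cache_policies), and the origin ID is illustrative.

```python
# Fragment of a CloudFront DistributionConfig (used with boto3 create_distribution)
# showing the two cache behaviors described above.
CACHING_OPTIMIZED_ID = "<managed CachingOptimized cache policy ID>"
CACHING_DISABLED_ID = "<managed CachingDisabled cache policy ID>"
ALL_VIEWER_ORIGIN_REQUEST_ID = "<managed AllViewer origin-request policy ID>"

cache_behaviors = {
    "Quantity": 2,
    "Items": [
        {   # static assets: cached and compressed at the edge
            "PathPattern": "/static/*",
            "TargetOriginId": "alb-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "CachePolicyId": CACHING_OPTIMIZED_ID,
            "Compress": True,
        },
        {   # personalized API calls: never cached, all viewer headers forwarded to the ALB
            "PathPattern": "/api/*",
            "TargetOriginId": "alb-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "CachePolicyId": CACHING_DISABLED_ID,
            "OriginRequestPolicyId": ALL_VIEWER_ORIGIN_REQUEST_ID,
            "Compress": False,
        },
    ],
}
# These behaviors are embedded in the full DistributionConfig alongside the ALB origin
# (with Origin Shield enabled) and a DefaultCacheBehavior.
```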
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Amazon CloudFront, and why is it suitable for this scenario?
What is Origin Shield in CloudFront, and how does it enhance caching efficiency?
Why is the 'CachingDisabled' managed policy used for /api/*, and what headers need to be forwarded?
A company operates hundreds of Amazon EC2 instances in private subnets across three production VPCs in the us-east-1 Region. The instances must receive Run Command instructions and software patches by using AWS Systems Manager and must also upload command output logs to an Amazon S3 bucket in the same Region. A new security policy forbids any traffic from these subnets from traversing a NAT gateway, internet gateway, or public IP address. The networking team also wants every AWS SDK call that the instances make to resolve to private IP addresses inside the VPCs and to minimize ongoing data-processing charges.
Which solution meets these requirements while providing the lowest operational cost?
In each VPC create gateway VPC endpoints for Amazon S3, AWS Systems Manager, and Amazon EC2. Update the private subnet route tables to point traffic for these services to the gateway endpoints and delete the NAT gateways.
In each VPC create interface VPC endpoints for SSM, SSMMessages, and EC2Messages, enable private DNS for the endpoints, and attach an endpoint policy that allows only the required Systems Manager actions. Create a gateway VPC endpoint for Amazon S3 and add it to the route tables used by the private subnets. Remove the NAT gateway routes.
Create an endpoint service (AWS PrivateLink) for Systems Manager and S3 in a shared-services VPC, share the service with the other VPCs by using AWS RAM, and create Route 53 private hosted zone records that map the public service domains to the endpoint's private IP addresses. Remove the NAT gateways.
Keep the NAT gateways in place but attach an S3 gateway endpoint to each route table. Add an IAM policy to every instance profile that denies access to public IP addresses.
Answer Description
Systems Manager is accessible from a VPC only through interface VPC endpoints (SSM, SSMMessages, and EC2Messages). Enabling private DNS on these endpoints ensures that the standard public service names resolve to the endpoints' private IP addresses, so application code does not need to change.
Amazon S3 can be reached over a gateway VPC endpoint, which is free of hourly and data-processing charges and keeps traffic on the AWS backbone without requiring NAT. Using the gateway endpoint for S3 and interface endpoints for Systems Manager removes the need for NAT gateways, satisfies the no-internet requirement, and keeps recurring costs lower than an all-interface-endpoint or NAT-based design.
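A hedged boto3 sketch for a single VPC is shown below; all resource IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vpc_id = "vpc-0123456789abcdef0"
subnet_ids = ["subnet-0123456789abcdef0", "subnet-0123456789abcdef1"]
endpoint_sg = "sg-0123456789abcdef0"

# Interface endpoints for the three Systems Manager services, with private DNS so the
# standard public service names resolve to private IPs inside the VPC.
for service in ("ssm", "ssmmessages", "ec2messages"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId=vpc_id,
        ServiceName=f"com.amazonaws.us-east-1.{service}",
        SubnetIds=subnet_ids,
        SecurityGroupIds=[endpoint_sg],          # must allow HTTPS (443) from the instances
        PrivateDnsEnabled=True,
    )

# Free gateway endpoint for S3, added to the private subnets' route tables.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=vpc_id,
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```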
The other choices fail because:
- Gateway endpoints exist only for S3 and DynamoDB, so they cannot be used for Systems Manager.
- Creating a user-managed endpoint service for S3 or Systems Manager is not supported and adds needless complexity.
- Retaining NAT gateways still violates the security mandate and continues to incur hourly and data-processing fees.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the difference between an interface VPC endpoint and a gateway VPC endpoint?
Why is enabling private DNS important for interface VPC endpoints?
Why is a gateway VPC endpoint for S3 used instead of an interface VPC endpoint?
Cool beans!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.