
AWS Certified Solutions Architect Professional Practice Test (SAP-C02)

Use the form below to configure your AWS Certified Solutions Architect Professional Practice Test (SAP-C02). The practice test can be configured to only include certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Questions
Number of questions in the practice test
Free users are limited to 20 questions; upgrade for unlimited questions
Seconds Per Question
Determines how long you have to finish the practice test
Exam Objectives
Which exam objectives should be included in the practice test

AWS Certified Solutions Architect Professional SAP-C02 Information

The AWS Certified Solutions Architect – Professional (SAP-C02) exam is a test for people who want to show advanced skills in cloud design using Amazon Web Services. It proves that you can handle large, complex systems and design solutions that are secure, reliable, and meet business needs. Passing this exam shows a higher level of knowledge than the associate-level test and is often needed for senior cloud roles.

This exam includes multiple-choice and multiple-response questions. It covers areas like designing for high availability, choosing the right storage and compute services, planning for cost, and managing security at scale. You will also need to understand how to migrate big applications to the cloud, design hybrid systems, and use automation tools to keep environments efficient and safe.

AWS suggests having at least two years of real-world experience before taking this test. The SAP-C02 exam takes 180 minutes, includes about 75 questions, and requires a scaled score of 750 out of 1000 to pass. Preparing usually means lots of practice with AWS services, using study guides, and trying practice exams. For many professionals, this certification is an important milestone toward becoming a cloud architect or senior cloud engineer.

  • Free AWS Certified Solutions Architect Professional SAP-C02 Practice Test

  • 20 Questions
  • Unlimited
  • Design Solutions for Organizational Complexity
  • Design for New Solutions
  • Continuous Improvement for Existing Solutions
  • Accelerate Workload Migration and Modernization
Question 1 of 20

An organization operates a stateless multi-tenant REST API on Amazon ECS (AWS Fargate). The service is fronted by Application Load Balancers that run in two AWS Regions (us-east-1 and eu-west-1). Security teams must allow customers to allowlist a small, unchanging set of public IP addresses. New reliability objectives specify that the application must keep working if an entire AWS Region fails, client traffic must shift to the healthy Region within 30 seconds, and failover must happen without any DNS cache flushes or other client-side changes. Operations also want a fully managed AWS solution with minimal maintenance. Which approach best meets these requirements?

  • Deploy AWS Global Accelerator with an endpoint group in each Region that targets the existing Application Load Balancers and rely on Global Accelerator health checks for automatic routing.

  • Establish dedicated AWS Direct Connect connections into each Region and advertise more-specific BGP prefixes to move traffic to the standby Region when a failure is detected.

  • Place the ALBs behind a single Amazon CloudFront distribution and configure an origin group for automatic origin failover between Regions.

  • Create active-passive Amazon Route 53 failover records that point to the ALBs, configure health checks, and reduce the record TTL to 30 seconds.

Question 2 of 20

A production account (1111) hosts a business-critical Amazon RDS for PostgreSQL Multi-AZ DB instance in the us-east-1 Region. A new compliance mandate requires that:

  • Encrypted backups must be retained for at least 35 days in a dedicated disaster-recovery (DR) account (2222) that belongs to the same AWS Organization.
  • The DR backups must reside in the us-west-2 Region so they are isolated from the primary Region.
  • Administrators in the DR account must be able to restore the database without assistance from the production account.
  • The solution must rely on managed AWS capabilities and minimize ongoing manual work.
  • A recovery point objective (RPO) of 24 hours is acceptable.

Which approach meets these requirements with the LEAST operational overhead?

  • In the production account, create an AWS Backup plan that performs daily snapshot backups of the RDS instance and copies them to a backup vault in the DR account in us-east-1. In the DR account, configure a second AWS Backup plan that automatically copies those snapshots to a backup vault in us-west-2 with a 35-day retention policy. Use customer managed KMS keys shared between the accounts for encryption.

  • Create a cross-Region read replica of the RDS instance in the DR account in us-west-2 and rely on the replica's automated backups, configured for 35-day retention, to satisfy the compliance mandate.

  • Enable cross-Region automated backups replication on the RDS instance from us-east-1 to us-west-2 and manually share each replicated backup with the DR account. Set the backup retention period to 35 days in us-west-2.

  • Use AWS Backup in the production account to define a single copy rule that sends daily RDS backups directly to a backup vault in the DR account in us-west-2 with a 35-day retention period.
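The AWS Backup options above hinge on how a copy rule is expressed in the backup-plan document. As a rough sketch (account ID, vault names, and schedule below are hypothetical placeholders, not values from the question), a daily rule with a `CopyActions` entry pointing at a vault in another account and Region looks like this:

```python
# Sketch of an AWS Backup plan document with a cross-account/cross-Region
# copy rule. All identifiers are hypothetical placeholders.
import json

DR_ACCOUNT = "222222222222"  # hypothetical DR account ID
DR_VAULT_ARN = f"arn:aws:backup:us-west-2:{DR_ACCOUNT}:backup-vault:dr-vault"

# Structure accepted by boto3's backup.create_backup_plan(BackupPlan=...)
backup_plan = {
    "BackupPlanName": "rds-daily-dr-copy",
    "Rules": [
        {
            "RuleName": "daily-rds-backup",
            "TargetBackupVaultName": "prod-vault",
            "ScheduleExpression": "cron(0 5 * * ? *)",  # daily run satisfies a 24-hour RPO
            "Lifecycle": {"DeleteAfterDays": 35},
            "CopyActions": [
                {
                    # Copy each recovery point to the DR vault in us-west-2
                    "DestinationBackupVaultArn": DR_VAULT_ARN,
                    "Lifecycle": {"DeleteAfterDays": 35},  # 35-day retention mandate
                }
            ],
        }
    ],
}

print(json.dumps(backup_plan, indent=2))
```

Whether that copy is done in one hop or staged through an intermediate vault is exactly what distinguishes the first and last options.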

Question 3 of 20

A solutions architect is troubleshooting a connectivity issue in a hybrid environment. An application running on an EC2 instance in a spoke VPC (10.20.0.0/16) cannot connect to an on-premises database server (192.168.10.50) on port 1433. The spoke VPC is connected to a central inspection VPC via an AWS Transit Gateway. The inspection VPC is connected to the on-premises data center via an AWS Direct Connect connection. All traffic from the spoke VPC to on-premises is routed through firewall appliances in the inspection VPC. On-premises network engineers have confirmed that their firewalls are not blocking the traffic. The architect needs to identify the component in the AWS network path that is blocking the connection. What is the MOST efficient first step to diagnose this issue?

  • Configure Route 53 Resolver Query Logging for the spoke VPC. Analyze the logs to ensure the on-premises database's hostname is correctly resolving to the IP address 192.168.10.50.

  • Enable VPC Flow Logs on the network interfaces for the application instance, the Transit Gateway attachment, and the inspection VPC firewall instances. Query the logs using Amazon Athena to find REJECT entries for traffic destined for 192.168.10.50 on port 1433.

  • Use the Route Analyzer feature in Transit Gateway Network Manager to analyze the path from the spoke VPC attachment to the Direct Connect gateway attachment, verifying that routes are correctly propagated.

  • Use VPC Reachability Analyzer to create and run an analysis with the application's EC2 instance network interface as the source and the on-premises database IP address (192.168.10.50) as the destination, specifying port 1433.

Question 4 of 20

A financial services company runs a latency-sensitive payment-processing workload in the us-east-1 Region. The workload uses an Amazon ECS cluster (EC2 launch type) with stateless microservices behind an Application Load Balancer, an Amazon Aurora MySQL DB cluster, and an Amazon ElastiCache for Redis cluster that stores session data.

Compliance rules require a recovery point objective (RPO) of 1 minute and a recovery time objective (RTO) of 5 minutes for a complete Regional failure. Management insists on a solution that is less costly than an active-active multi-Region deployment but still meets the objectives.

Which solution meets these requirements?

  • Implement a pilot-light strategy that replicates only the Aurora database to another Region, stores container images in Amazon ECR with cross-Region replication, and creates the ECS cluster, Redis nodes, and load balancer with AWS CloudFormation when a disaster is declared.

  • Deploy a warm-standby environment in a second Region: add an Aurora global database secondary cluster and a Redis Global Datastore replica, run a scaled-down copy of the ECS services (one task per service) behind an Application Load Balancer, enable cross-Region image and data replication, and use Amazon Route 53 failover routing to switch traffic when health checks fail.

  • Deploy a fully active-active architecture in two Regions with separate Aurora writer clusters, application-level replication for the Redis data, full-sized ECS services, and weighted Amazon Route 53 routing between the Regions.

  • Use AWS Elastic Disaster Recovery (AWS DRS) to continuously replicate the ECS instances and Redis nodes to a second Region, convert the Aurora cluster to an Aurora global database, and rely on AWS DRS orchestration to launch all recovered resources after a disaster.

Question 5 of 20

A company operates a Kubernetes-based product-catalog microservice that runs in Amazon EKS clusters deployed in us-east-1, eu-west-1, and ap-southeast-2. The service performs read-only SQL queries against a single Amazon Aurora MySQL DB cluster located in us-east-1. During global marketing events, 90% of traffic originates outside the primary Region and p99 end-to-end latency in Europe spikes above 400 ms. New performance objectives state:

  • End-user latency must remain below 100 ms at the 99th percentile worldwide.
  • Read traffic is expected to grow 5× within 12 months.
  • Catalog updates occur only in us-east-1 and may be eventually consistent everywhere else within 2 minutes.
  • The solution must keep code changes and ongoing cost to a minimum.

Which combination of actions best meets these objectives?

  • Create an Amazon ElastiCache for Redis Global Datastore: deploy a primary cluster in us-east-1 and read-only replica clusters in eu-west-1 and ap-southeast-2, and implement a write-through pattern so the service first reads from the Region-local Redis endpoint and updates the primary cluster after each catalog change.

  • Migrate the catalog to Amazon DynamoDB and place a DynamoDB Accelerator (DAX) cluster in each Region, routing reads to the local DAX endpoint and writes to DynamoDB in us-east-1.

  • Implement Amazon Aurora Global Database by adding read-only secondary clusters in eu-west-1 and ap-southeast-2, and modify the service so that read queries are routed to the Region-local reader endpoint.

  • Put the API behind an Amazon CloudFront distribution and use Lambda@Edge to cache GET responses for 120 seconds at edge locations worldwide while leaving the Aurora database unchanged.

Question 6 of 20

A financial services company utilizes a multi-account AWS environment with a hub-and-spoke network architecture centered around an AWS Transit Gateway. The security team is mandated to perform deep packet inspection (DPI) on all east-west traffic between spoke VPCs. The inspection must be conducted by a fleet of third-party intrusion detection system (IDS) appliances deployed on EC2 instances within a dedicated 'inspection' VPC. The solution must be highly scalable, have minimal performance impact on application workloads, and centralize the inspection tooling. Which approach should a solutions architect recommend to meet these requirements?

  • Configure VPC Flow Logs for all traffic in the spoke VPCs. Stream the logs to a central Amazon S3 bucket and use Amazon Athena for analysis.

  • Deploy AWS Network Firewall in the inspection VPC. Configure the Transit Gateway to route all inter-VPC traffic through the Network Firewall endpoints for inspection.

  • In the inspection VPC, configure a Gateway Load Balancer (GWLB) with the IDS appliance fleet as a target group. Create GWLB Endpoints in each spoke VPC and modify route tables to direct all traffic through the GWLB.

  • Configure VPC Traffic Mirroring on the source Elastic Network Interfaces (ENIs) in the spoke VPCs. Set the mirror target to a Network Load Balancer (NLB) in the inspection VPC that fronts the IDS appliance fleet.

Question 7 of 20

A financial services company is modernizing a monolithic on-premises application by refactoring it into containerized microservices to be deployed on Amazon ECS. A key security requirement is that all east-west traffic (service-to-service communication) between the microservices must be routed through a fleet of third-party network security appliances for deep packet inspection. The company wants to use AWS Fargate to minimize infrastructure management overhead. Which architectural challenge must a solutions architect address to meet these requirements when using the Fargate launch type?

  • AWS Fargate tasks cannot be assigned security groups, which prevents the implementation of the network traffic filtering rules required by the security appliances.

  • The use of an Application Load Balancer (ALB) for Fargate services encrypts all east-west traffic, which prevents network security appliances from performing deep packet inspection.

  • Fargate tasks use the awsvpc network mode, giving each task a dedicated ENI within a subnet, which complicates routing intra-VPC traffic to a centralized inspection appliance.

  • Fargate does not support the host network mode, which is required to bind the security appliances directly to the same underlying instance as the application containers.

Question 8 of 20

A large enterprise uses AWS Organizations to manage dozens of member accounts. The finance team has reported a significant, unexpected increase in costs, but the high-level views in AWS Cost Explorer are insufficient for identifying the root cause. The company has configured AWS Cost and Usage Reports (CUR) to be delivered hourly in Apache Parquet format to an Amazon S3 bucket in the management account.

A solutions architect needs to implement a scalable and cost-effective solution to perform complex, ad-hoc SQL queries on this CUR data. The goal is to identify specific resources and API operations contributing to the cost increase across the entire organization.

Which approach will achieve this with the LEAST operational overhead?

  • Set up an AWS Glue crawler to run on the S3 bucket containing the CUR data. Configure the crawler to populate the AWS Glue Data Catalog. Use Amazon Athena to run standard SQL queries against the table created by the crawler.

  • Create an Amazon EMR cluster configured with Apache Spark. Develop Spark SQL jobs to load the Parquet files from Amazon S3 into data frames and run queries from a Zeppelin notebook attached to the cluster.

  • Develop an AWS Lambda function triggered by Amazon S3 events when new CUR files are delivered. The function will parse the Parquet files and load the data into a provisioned Amazon RDS for PostgreSQL database for querying.

  • Use Amazon S3 Select to query individual CUR Parquet files directly in the S3 bucket. Develop a script that iterates through all CUR files for the desired time range, executes S3 Select queries on each, and aggregates the results in the client application.

Question 9 of 20

An enterprise uses AWS Organizations to manage more than 500 AWS accounts. The security team has created a dedicated security-tooling account in the us-east-1 Region and must meet the following requirements:

  1. AWS Security Hub must be enabled in every current and future account in all Regions.
  2. All findings must be visible only in the security-tooling account.
  3. No other account may designate itself as the Security Hub delegated administrator.

The solution must follow the principle of least privilege and require minimal ongoing maintenance. Which approach BEST meets these requirements?

  • Use a CloudFormation StackSet to deploy a template that enables Security Hub and its default standards in every current account and Region; configure the StackSet for automatic deployment to new accounts.

  • From the organization management account, run securityhub enable-organization-admin-account in each enabled Region to set the security-tooling account as delegated administrator. In the delegated administrator account, run securityhub update-organization-configuration with AutoEnable=true and enable the default standards for all Regions. Attach an SCP at the organization root that denies securityhub:EnableOrganizationAdminAccount to every account except the management account.

  • Enable Security Hub only in the security-tooling account and create a cross-Region finding aggregator. In each member account, add an EventBridge rule that forwards Security Hub findings to the aggregator.

  • Enable Security Hub through AWS Control Tower guardrails when the landing zone is set up. Rely on the guardrails to enable Security Hub in new accounts and prevent changes to the delegated administrator.
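The SCP mentioned in the second option can be sketched as follows. Note that SCPs never apply to the organization management account, so a blanket Deny still leaves that account able to designate the delegated administrator; the policy document below is an illustrative sketch, not a tested production policy:

```python
# Sketch of an SCP denying delegated-administrator designation for
# Security Hub. SCPs do not constrain the management account, which is
# why a plain Deny is sufficient here.
import json

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenySecurityHubAdminDelegation",
            "Effect": "Deny",
            "Action": "securityhub:EnableOrganizationAdminAccount",
            "Resource": "*",
        }
    ],
}

print(json.dumps(scp, indent=2))
```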

Question 10 of 20

A financial services company is modernizing a component of its legacy market data processing application. This component, currently hosted on a fleet of over-provisioned Amazon EC2 instances, handles unpredictable, high-throughput transaction bursts. A critical requirement is to maintain processing latency under 100ms per invocation to meet SLAs. The primary goals are to reduce costs associated with idle capacity and improve scalability. The development team has refactored the component into an AWS Lambda function. Which configuration should a Solutions Architect recommend to meet these requirements MOST effectively?

  • Configure Provisioned Concurrency for the function, setting the number of concurrent executions based on anticipated peak load.

  • Increase the function's memory allocation to the maximum and rely on on-demand scaling.

  • Create a scheduled Amazon EventBridge rule that invokes the function every minute to keep it warm.

  • Place the function in a target group behind an Application Load Balancer and configure aggressive health checks to trigger invocations.

Question 11 of 20

A financial analytics company runs a platform on EC2 instances within private subnets, distributed across multiple Availability Zones in the us-east-1 region. The application frequently downloads terabytes of data from a critical third-party data provider's API, which is hosted outside of AWS. To facilitate this, a NAT Gateway is deployed in each Availability Zone. A cost analysis reveals that the "NAT Gateway - Data Processed" fees are a major operational expense. The company wants to drastically reduce these data transfer costs while preserving the high-availability, multi-AZ posture of the application. The third-party provider has recently announced that they offer an endpoint service powered by AWS PrivateLink in the us-east-1 region.

What is the MOST cost-effective solution to reduce these charges?

  • Create a VPC interface endpoint for the third-party's endpoint service within the company's VPC. Reconfigure the application to use the endpoint's DNS name to access the API.

  • Consolidate to a single NAT Gateway in one Availability Zone and update the VPC route tables to direct all outbound traffic through it.

  • Use AWS Direct Connect to establish a dedicated connection to the us-east-1 region and route the API requests through this connection.

  • Set up a fleet of caching proxy servers on EC2 instances in public subnets. Direct the application's data requests through this caching layer.

Question 12 of 20

A global media company operates a real-time video-streaming service with backend infrastructure deployed on EC2 instances behind Network Load Balancers (NLBs) in the us-east-1, eu-central-1, and ap-southeast-2 Regions. The service uses a custom TCP-based protocol for streaming. Users connect to the geographically closest regional endpoint by means of DNS-based routing.

The company is experiencing two major issues:

  1. Some corporate clients have strict firewall egress rules and struggle to whitelist the multiple static public IP addresses that each regional NLB exposes (one per Availability Zone in each Region).
  2. During a recent service impairment in one Region, users were not automatically routed to a healthy Region, resulting in a significant outage for a large user segment.

The company wants to implement a solution that provides static entry points for the application and improves availability with fast, automatic cross-Region failover. Which solution best meets these requirements?

  • Establish AWS Direct Connect connections to each AWS Region and use a Direct Connect gateway for inter-Region failover.

  • Deploy an AWS Global Accelerator and configure each regional NLB as an endpoint in its respective endpoint group.

  • Configure an Amazon CloudFront distribution with the regional NLBs as custom origins and use a CloudFront Function to manage failover between origins.

  • Use Amazon Route 53 with a combination of latency-based routing and failover routing policies. Configure health checks for each regional NLB.

Question 13 of 20

Your company operates several AWS accounts managed with AWS Organizations. In the shared dev account, application teams need to create and maintain IAM roles for their Lambda functions and ECS tasks. The security team has produced a guardrail policy that grants only the following permissions:

  • Read access to two designated S3 buckets
  • Write access to one DynamoDB table

Developers must be allowed to self-service creation and updates of IAM roles only if the resulting roles never exceed the permissions in the guardrail policy, and the security team does not want to manually review each policy or role that is created.

Which solution BEST enforces the principle of least privilege while meeting these requirements?

  • Require developers to tag their roles with Environment=Dev and apply an ABAC policy that allows actions only when the principal and resource share the same tag.

  • Enable the AWS Config managed rule IAM_POLICY_NO_STATEMENTS_WITH_ADMIN_ACCESS and use an EventBridge rule to invoke a Lambda function that automatically deletes any policy marked NON_COMPLIANT.

  • Create a customer-managed policy that contains the approved S3 and DynamoDB permissions and designate it as a permissions boundary. Grant the developer group iam:CreateRole, iam:PutRolePolicy, and iam:AttachRolePolicy permissions only when the iam:PermissionsBoundary condition key equals the boundary policy's ARN.

  • Attach a service control policy to the dev account that denies iam:CreateRole for all principals except the security team, and have the security team create roles for developers on request.
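To make the permissions-boundary mechanism in the third option concrete, here is a minimal sketch. The bucket names, table name, account ID, and role-name prefix are all hypothetical placeholders; the key idea is the `iam:PermissionsBoundary` condition key, which makes role creation fail unless the approved boundary is attached:

```python
# Sketch of a permissions-boundary guardrail. All ARNs are hypothetical.
import json

BOUNDARY_ARN = "arn:aws:iam::111111111111:policy/dev-guardrail-boundary"

# The guardrail: the maximum permissions any developer-created role can
# ever exercise, regardless of what policies are attached to it.
boundary_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::app-config-bucket",
                "arn:aws:s3:::app-config-bucket/*",
                "arn:aws:s3:::app-data-bucket",
                "arn:aws:s3:::app-data-bucket/*",
            ],
        },
        {
            "Effect": "Allow",
            "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:111111111111:table/orders",
        },
    ],
}

# Developer policy: role creation and policy attachment succeed only when
# the role carries the boundary, via the iam:PermissionsBoundary key.
developer_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["iam:CreateRole", "iam:PutRolePolicy", "iam:AttachRolePolicy"],
            "Resource": "arn:aws:iam::111111111111:role/app-*",
            "Condition": {
                "StringEquals": {"iam:PermissionsBoundary": BOUNDARY_ARN}
            },
        }
    ],
}

print(json.dumps(developer_policy, indent=2))
```

Effective permissions are the intersection of the boundary and whatever the developers attach, so no review of individual roles is needed.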

Question 14 of 20

A fintech company runs its order-management microservices as stateless Amazon ECS services fronted by an Application Load Balancer in us-east-1. Transactional data is stored in an Amazon Aurora MySQL cluster in the same Region. The company's business-continuity policy requires that the application must continue to operate if an entire AWS Region becomes unavailable, with a recovery time objective (RTO) under 60 seconds and a recovery point objective (RPO) under 30 seconds. Additionally, failover must be fully automated with no human intervention, and the solution should avoid the cost of running a fully active/active stack in every Region.

Which architecture meets these requirements MOST cost-effectively?

  • Enable Aurora Multi-AZ with two readable standbys in us-east-1, replicate automated snapshots to us-west-2, and restore the snapshots into a new Aurora cluster before redeploying the ECS services after an outage.

  • Create a cross-Region read replica of the Aurora cluster in us-west-2 and configure an Amazon CloudWatch alarm that invokes an AWS Lambda function to promote the replica and update Route 53 DNS records when health checks fail.

  • Migrate the relational schema to Amazon DynamoDB global tables in both Regions and use AWS Database Migration Service continuous replication to keep the tables synchronized; direct the application to the secondary Region during a failure.

  • Convert the Aurora cluster to an Aurora Global Database and add a secondary Aurora cluster in us-west-2. Use Amazon Route 53 Application Recovery Controller routing controls to automatically promote the secondary cluster and shift traffic when a regional failure is detected.

Question 15 of 20

During a modernization effort, an on-premises e-commerce platform is being refactored into microservices on AWS. The new Order service must publish an event that triggers the Payment service whenever a customer completes checkout. Requirements for the messaging layer are as follows: the Payment service must never process the same order more than once, messages that relate to the same order must be delivered in the exact sequence in which they were generated, holiday sales can create burst traffic of tens of thousands of orders per second so the solution must scale automatically without manual sharding, and operations staff want to keep queue-management overhead to a minimum.

Which solution meets these requirements?

  • Publish events to an Amazon SQS FIFO queue with high-throughput mode enabled. Set the MessageGroupId to the order ID and turn on content-based deduplication.

  • Publish events to an Amazon SNS FIFO topic. Configure an Amazon SQS FIFO queue subscription for the Payment service.

  • Publish events to an Amazon SQS FIFO queue that uses a single MessageGroupId value such as "payment" for every message. Configure the consumer to process messages sequentially.

  • Publish events to an Amazon SQS standard queue. Include the order ID in a message attribute so the consumer can ignore duplicate messages that occasionally appear.
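The FIFO publishing pattern in the first option can be sketched as the parameters an order producer would pass to `sqs.send_message`. The queue URL below is a hypothetical placeholder; the essentials are using the order ID as `MessageGroupId` (per-order ordering with parallel groups for throughput) and omitting a deduplication ID because the queue has content-based deduplication enabled:

```python
# Sketch of publishing a checkout event to an SQS FIFO queue with
# high-throughput mode and content-based deduplication. Queue URL is a
# hypothetical placeholder.
import json

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111111111111/orders.fifo"

def build_send_params(order_id: str, payload: dict) -> dict:
    """Parameters for boto3's sqs.send_message for one checkout event."""
    return {
        "QueueUrl": QUEUE_URL,
        "MessageBody": json.dumps(payload, sort_keys=True),
        # Same order ID -> same message group -> strict per-order ordering,
        # while messages for different orders are processed in parallel.
        "MessageGroupId": order_id,
        # No MessageDeduplicationId: content-based deduplication hashes the
        # body, so an identical retry within the dedup window is dropped.
    }

params = build_send_params("order-1042", {"event": "checkout", "total": 99.5})
```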

Question 16 of 20

A financial-services company runs Monte Carlo risk simulations overnight in a single AWS Region. The job launches up to 1,000 Linux instances for 6 hours. At least 100 instances must always be running to meet the service-level agreement, but the rest of the workload can tolerate Spot interruptions. The chief architect wants to minimize compute cost, obtain Spot capacity from pools that have the lowest interruption risk, and automatically replace any Spot Instances that receive a rebalance recommendation.

Which solution meets all of these requirements?

  • Use AWS Batch with a single Spot compute environment that specifies the SPOT_CAPACITY_OPTIMIZED allocation strategy and a desired vCPU count of 1,000. Batch will automatically provision the required capacity and handle any Spot interruptions.

  • Request an EC2 Spot Fleet for 1,000 instances across all Availability Zones using the lowest-price allocation strategy. Set OnDemandTargetCapacity to 100 and rely on a Lambda function to relaunch tasks if Spot Instances are interrupted.

  • Deploy the workload in an EC2 Auto Scaling group that launches only Spot Instances with the lowest-price allocation strategy. Configure Capacity Rebalancing so the group starts new Spot Instances when interruptions occur.

  • Create an EC2 Auto Scaling group that uses a launch template for the simulation instances. Configure a mixed instances policy with OnDemandBaseCapacity set to 100, OnDemandPercentageAboveBaseCapacity set to 0, and SpotAllocationStrategy set to capacity-optimized. Enable Capacity Rebalancing for the group.
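The mixed instances policy named in the last option maps directly onto Auto Scaling API parameters. As a sketch (launch template name and instance-type overrides below are hypothetical), the base/percentage split and Capacity Rebalancing would look like this:

```python
# Sketch of an Auto Scaling group with a mixed instances policy:
# 100 On-Demand baseline, Spot for the rest, capacity-optimized pools,
# and Capacity Rebalancing. Names and instance types are hypothetical.
mixed_instances_policy = {
    "LaunchTemplate": {
        "LaunchTemplateSpecification": {
            "LaunchTemplateName": "monte-carlo-sim",
            "Version": "$Latest",
        },
        "Overrides": [
            {"InstanceType": "c5.4xlarge"},
            {"InstanceType": "c5a.4xlarge"},
            {"InstanceType": "c6i.4xlarge"},
        ],
    },
    "InstancesDistribution": {
        # 100 On-Demand instances always running for the SLA baseline ...
        "OnDemandBaseCapacity": 100,
        # ... and everything above the base is 100% Spot.
        "OnDemandPercentageAboveBaseCapacity": 0,
        # Draw Spot capacity from pools least likely to be interrupted.
        "SpotAllocationStrategy": "capacity-optimized",
    },
}

# Passed to autoscaling.create_auto_scaling_group; CapacityRebalance=True
# makes the group proactively replace Spot Instances that receive a
# rebalance recommendation.
asg_params = {
    "AutoScalingGroupName": "monte-carlo",
    "MixedInstancesPolicy": mixed_instances_policy,
    "MinSize": 100,
    "MaxSize": 1000,
    "DesiredCapacity": 1000,
    "CapacityRebalance": True,
}
```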

Question 17 of 20

A solutions architect is tasked with optimizing a large fleet of m5.4xlarge EC2 instances running a legacy, monolithic, stateful .NET Framework application on Windows Server. The application serves critical business functions and experiences unpredictable, spiky traffic patterns. An analysis using Amazon CloudWatch shows that the average CPU utilization is consistently below 20%, but the memory utilization is consistently high, between 80-90%. The company wants to significantly reduce costs without compromising performance or availability during peak loads. Any proposed solution must provide a systematic way to apply recommendations across a multi-account organization.

Which strategy should the solutions architect recommend to meet these requirements?

  • Refactor the monolithic application into containerized microservices on .NET Core and deploy it to an Amazon EKS cluster with Cluster Autoscaler.

  • Use AWS Compute Optimizer to get data-driven recommendations and begin migrating the fleet to an appropriate memory-optimized (R-series) instance type.

  • Use AWS Cost Explorer Rightsizing recommendations to identify underutilized instances and manually downsize them to m5.2xlarge instances.

  • Implement an Amazon EC2 Auto Scaling group with a step scaling policy based on CPU utilization, setting the minimum size to a smaller instance like m5.large.

Question 18 of 20

A financial services company operates a critical, multi-tier application in the us-east-1 Region. The application consists of a fleet of EC2 instances in an Auto Scaling group behind an Application Load Balancer, an Amazon RDS for PostgreSQL Multi-AZ database, and an Amazon ElastiCache for Redis cluster.

To meet business continuity requirements, the company must implement a disaster recovery (DR) strategy in the us-west-2 Region with a Recovery Time Objective (RTO) of 15 minutes and a Recovery Point Objective (RPO) of 1 minute. The company has chosen a warm standby approach to balance recovery time with cost.

Which of the following designs BEST implements a warm standby strategy that meets these requirements?

  • In us-west-2, deploy a fully scaled-out duplicate of the production environment, including the Auto Scaling group, RDS database, and ElastiCache cluster. Use an Amazon Route 53 latency-based routing policy to distribute traffic between us-east-1 and us-west-2.

  • In us-west-2, create a cross-region read replica for the Amazon RDS for PostgreSQL database. Configure an Auto Scaling group with a minimum and desired capacity of one small EC2 instance. Provision a small, single-node ElastiCache for Redis cluster. For failover, promote the RDS read replica, scale up the Auto Scaling group, and update an Amazon Route 53 failover record.

  • In us-west-2, create a cross-region read replica for the RDS for PostgreSQL database. Configure an Auto Scaling group with a minimum and desired capacity of zero. Configure Amazon ElastiCache for Redis with Global Datastore to replicate the cache. Use Amazon Route 53 to fail over traffic.

  • In us-west-2, replicate application AMIs and Amazon RDS snapshots. In a disaster, deploy a new AWS CloudFormation stack using the replicated AMI and restore the database from the latest snapshot. Use Amazon Route 53 to redirect traffic to the new stack.

Question 19 of 20

A company has 20 AWS member accounts that are linked to a management account by AWS Organizations. Each account hosts multiple workloads that are owned by different internal teams. Finance must:

  • Generate a monthly chargeback report that shows the AWS cost for every team.
  • Roll the cost of any untagged resources into a bucket named "NoOwner" so that Finance can spot tagging gaps.
  • View a cost forecast for the next three months.
  • Receive an automated email when a team's monthly spend is forecasted to exceed its budget by more than 10 percent.

The cloud architect wants to meet these requirements with the least operational overhead while using only native AWS services.

Which combination of actions meets the requirements?

  • Activate a "Team" cost allocation tag in the management account. Create an AWS Cost Category that inherits the tag and assigns a default value such as "NoOwner" for unmatched resources. In AWS Cost Explorer, build a monthly report grouped by the cost category and enable a three-month forecast. Create an AWS cost budget for each cost-category value with a forecasted threshold of 110% and configure an email alert.

  • Enable AWS Cost Anomaly Detection and create a monitor for every account. Use AWS Compute Optimizer to project future costs and configure a single AWS Budget that sends alerts when the organization's unblended cost exceeds 10 % of the previous month.

  • Enable the AWS Cost and Usage Report (CUR) for each member account. Deliver CUR files to Amazon S3, run scheduled Amazon Athena queries that group costs by the "Team" tag, and visualize the results in Amazon QuickSight. Use Amazon CloudWatch alarms on Athena query results to send email alerts when spend exceeds 110% of budget.

  • Create a billing group for each team in AWS Billing Conductor. Use AWS Pricing Calculator to estimate quarterly spend for each billing group and configure Amazon EventBridge rules that send SNS email notifications when actual account-level spend in AWS Cost Explorer exceeds the estimate by 10 %.
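The forecast-based alert in the first option is expressed in the AWS Budgets API as a notification with type `FORECASTED` and a percentage threshold. A rough sketch, with budget name, amount, and email address as hypothetical placeholders (the cost-category filter that scopes the budget to one team is omitted):

```python
# Sketch of a per-team budget with a forecast alert at 110% of the
# budgeted amount. Name, amount, and address are hypothetical.
notification = {
    "NotificationType": "FORECASTED",   # compare the forecast, not actuals
    "ComparisonOperator": "GREATER_THAN",
    "Threshold": 110.0,                 # 110% of the budgeted amount
    "ThresholdType": "PERCENTAGE",
}

subscribers = [{"SubscriptionType": "EMAIL", "Address": "finance@example.com"}]

budget = {
    "BudgetName": "team-alpha-monthly",
    "BudgetLimit": {"Amount": "5000", "Unit": "USD"},
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST",
    # A cost filter scoping this budget to one cost-category value would
    # be added here (structure omitted in this sketch).
}
```

The three objects above map onto the `Budget`, `Notification`, and `Subscribers` parameters of the Budgets `create_budget` call.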

Question 20 of 20

A startup is refactoring its stateless microservice API that currently runs on EC2 instances. Baseline traffic is about 100 requests per minute, but social-media campaigns can cause spikes of up to 10,000 requests per minute that last 15 to 30 minutes. The 95th-percentile backend response time must stay below 200 ms during spikes, compute cost must be as low as possible when demand is minimal, and the five-person operations team must not manage server patching or cluster capacity. Which architecture best satisfies these business objectives?

  • Run the containers in an Amazon ECS service using AWS Fargate. Define a capacity-provider strategy that assigns a higher weight to FARGATE_SPOT than FARGATE, set the desired task count between 1 and 300, and attach a target-tracking scaling policy on the ALBRequestCountPerTarget metric to keep the average load at 50 requests per task.

  • Run the containers on an Amazon EKS cluster that uses only Spot Instances managed by Cluster Autoscaler and Karpenter, keeping one node permanently in the cluster.

  • Repackage the workload as AWS Lambda functions behind Amazon API Gateway and configure 1 000 units of Provisioned Concurrency for each function.

  • Deploy the containers to an Amazon ECS cluster backed by an EC2 Auto Scaling group of c6i.large On-Demand instances and use a target-tracking scaling policy that keeps average CPU utilization at 60 percent with predictive scaling enabled.
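The target-tracking policy described in the first option is registered through Application Auto Scaling. As a sketch (cluster name, service name, and the ALB resource label below are hypothetical placeholders), the policy that holds roughly 50 requests per task looks like this:

```python
# Sketch of a target-tracking scaling policy on ALBRequestCountPerTarget
# for an ECS service. Cluster/service names and the resource label are
# hypothetical placeholders.
scaling_policy_params = {
    "PolicyName": "keep-50-requests-per-task",
    "ServiceNamespace": "ecs",
    "ResourceId": "service/api-cluster/catalog-api",
    "ScalableDimension": "ecs:service:DesiredCount",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        # Scale out/in to hold ~50 in-flight requests per Fargate task.
        "TargetValue": 50.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ALBRequestCountPerTarget",
            # Format: app/<lb-name>/<lb-id>/targetgroup/<tg-name>/<tg-id>
            "ResourceLabel": "app/api-alb/0123456789abcdef/targetgroup/api-tg/0123456789abcdef",
        },
    },
}

# Passed to application-autoscaling's put_scaling_policy after the
# service's DesiredCount is registered as a scalable target (e.g. min 1,
# max 300 tasks, matching the option above).
```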