
AWS Certified Solutions Architect Professional Practice Test (SAP-C02)

Use the form below to configure your AWS Certified Solutions Architect Professional Practice Test (SAP-C02). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Questions
Number of questions in the practice test
Free users are limited to 20 questions; upgrade for unlimited access
Seconds Per Question
Determines how long you have to finish the practice test
Exam Objectives
Which exam objectives should be included in the practice test

AWS Certified Solutions Architect Professional SAP-C02 Information

The AWS Certified Solutions Architect – Professional (SAP-C02) exam is a test for people who want to show advanced skills in cloud design using Amazon Web Services. It proves that you can handle large, complex systems and design solutions that are secure, reliable, and meet business needs. Passing this exam shows a higher level of knowledge than the associate-level test and is often needed for senior cloud roles.

This exam includes multiple-choice and multiple-response questions. It covers areas like designing for high availability, choosing the right storage and compute services, planning for cost, and managing security at scale. You will also need to understand how to migrate big applications to the cloud, design hybrid systems, and use automation tools to keep environments efficient and safe.

AWS suggests having at least two years of real-world experience before taking this test. The SAP-C02 exam takes 180 minutes, includes about 75 questions, and requires a scaled score of 750 out of 1000 to pass. Preparing usually means lots of practice with AWS services, using study guides, and trying practice exams. For many professionals, this certification is an important milestone toward becoming a cloud architect or senior cloud engineer.

The practice test draws from the four SAP-C02 exam domains:

  • Design Solutions for Organizational Complexity
  • Design for New Solutions
  • Continuous Improvement for Existing Solutions
  • Accelerate Workload Migration and Modernization
Question 1 of 20

A global corporation is adopting a multi-VPC architecture on AWS, with numerous VPCs spread across several AWS Regions. They also maintain a significant on-premises data center connected to AWS via AWS Direct Connect. The key requirements are to enable seamless, transitive communication between all VPCs (inter-VPC) and between the on-premises network and all VPCs. The solution must be highly scalable, centrally managed, and minimize operational overhead. A solutions architect needs to design the optimal network topology. Which approach best meets these requirements?

  • Use an AWS Direct Connect Gateway and associate it with a Virtual Private Gateway (VGW) in each VPC. This will provide connectivity from on-premises to all VPCs and enable inter-VPC communication through the Direct Connect Gateway.

  • Create a full mesh of VPC peering connections between all VPCs. Establish a separate AWS Direct Connect private virtual interface (VIF) from the on-premises network to each individual VPC.

  • Designate one VPC as a 'transit hub'. Use VPC peering to connect all other 'spoke' VPCs to this hub VPC. Establish a Direct Connect connection to the hub VPC and configure routing instances within it to forward traffic.

  • Deploy an AWS Transit Gateway in each region. Peer the Transit Gateways across regions and create attachments for each VPC. Connect the on-premises data center to a Transit Gateway via a Direct Connect Gateway attachment.
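
For reference, the Transit Gateway pattern can be sketched with a few boto3 calls; the VPC, subnet, peer Transit Gateway, account ID, and Regions below are hypothetical placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a Transit Gateway to act as the regional hub.
tgw = ec2.create_transit_gateway(Description="Hub for us-east-1 VPCs")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach each VPC in the Region to the Transit Gateway.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
)

# Peer with the Transit Gateway in another Region for inter-Region transit.
ec2.create_transit_gateway_peering_attachment(
    TransitGatewayId=tgw_id,
    PeerTransitGatewayId="tgw-0fedcba9876543210",
    PeerAccountId="111122223333",
    PeerRegion="eu-west-1",
)
```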

Question 2 of 20

A company operates its production workload in the us-east-1 Region. The stack consists of an Auto Scaling group of Amazon EC2 instances behind an Application Load Balancer and an Amazon RDS for MySQL DB instance that is already configured for Multi-AZ high availability.

Management has mandated a cross-Region disaster-recovery (DR) strategy so the workload can continue running from the us-west-2 Region if a full regional outage occurs. Business continuity requirements are:

  • Recovery point objective (RPO) must be no greater than 5 minutes.
  • Recovery time objective (RTO) must be no greater than 15 minutes.
  • Ongoing infrastructure cost in the DR Region must be kept to a minimum.
  • Wherever possible, managed AWS services should be used to reduce operational overhead.

Which approach meets these requirements MOST cost-effectively?

  • Deploy a pilot-light environment in us-west-2 with an identical Auto Scaling group set to a desired capacity of 1 and a Multi-AZ RDS instance. Use AWS Database Migration Service for ongoing replication. Place both Application Load Balancers behind Route 53 latency-based routing to direct users automatically.

  • Configure AWS Backup to copy daily Amazon EBS and RDS snapshots to us-west-2. Store a CloudFormation template for the entire stack in an S3 bucket in us-west-2. During an outage, deploy the template, restore the latest snapshots, and update Route 53 to point to the new Application Load Balancer.

  • Use CloudEndure Migration to replicate EC2 instances and their EBS volumes to us-west-2. Schedule an AWS Lambda function to take encrypted RDS snapshots every 5 minutes and copy them to us-west-2. Configure Route 53 geolocation routing to send traffic to us-west-2 if health checks fail.

  • Use AWS Elastic Disaster Recovery (AWS DRS) to continuously replicate the EC2 instances to a staging area in us-west-2. Create a cross-Region read replica of the RDS DB instance in us-west-2. During a failover, launch recovery EC2 instances from DRS, promote the RDS read replica, and update an Amazon Route 53 failover record to direct traffic to the Application Load Balancer in us-west-2.
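
As an illustration, the database and DNS failover steps described in the last option could be scripted with boto3 along these lines; all identifiers and DNS names are hypothetical:

```python
import boto3

rds = boto3.client("rds", region_name="us-west-2")
route53 = boto3.client("route53")

# Promote the cross-Region read replica to a standalone writer.
rds.promote_read_replica(DBInstanceIdentifier="app-db-replica-usw2")

# Point the failover DNS record at the DR Application Load Balancer.
route53.change_resource_record_sets(
    HostedZoneId="Z0HYPOTHETICAL",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "SetIdentifier": "dr",
                "Failover": "SECONDARY",
                "TTL": 60,
                "ResourceRecords": [
                    {"Value": "dr-alb-123.us-west-2.elb.amazonaws.com"}
                ],
            },
        }]
    },
)
```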

Question 3 of 20

A company runs a latency-sensitive SaaS application that streams real-time market data to customers over WebSocket connections. All traffic is routed through an internet-facing Application Load Balancer (ALB) in us-east-1. Performance reports show a 95th-percentile round-trip latency of about 400 ms for users in Singapore and Sydney. The operations team must reduce latency for those users as quickly as possible, keep the workload in a single AWS Region, and avoid any application code changes. Which solution will most effectively meet these requirements?

  • Provision a second ALB in ap-southeast-1 and use Amazon Route 53 latency-based DNS records to direct users.

  • Place an Amazon CloudFront distribution in front of the ALB, forward all viewer headers, and disable caching.

  • Configure AWS Global Accelerator and add the existing ALB as a standard accelerator endpoint.

  • Deploy reverse-proxy EC2 instances in Regions closest to users that tunnel traffic back to the ALB.
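
For reference, fronting an existing ALB with AWS Global Accelerator takes only a few API calls. A minimal boto3 sketch follows; the ALB ARN is hypothetical, and the Global Accelerator API is served from us-west-2:

```python
import boto3

# Global Accelerator is a global service; its API endpoint is in us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc = ga.create_accelerator(Name="market-data", IpAddressType="IPV4", Enabled=True)
arn = acc["Accelerator"]["AcceleratorArn"]

listener = ga.create_listener(
    AcceleratorArn=arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# Register the existing ALB as the endpoint behind the accelerator.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web/abc123",
        "ClientIPPreservationEnabled": True,
    }],
)
```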

Question 4 of 20

Your organization must migrate approximately 120 TB of on-premises scientific data to Amazon S3 within the next 10 calendar days. The research facility is located in a rural area with a maximum outbound WAN bandwidth of 50 Mbps, and the connection is shared with production workloads. All data must remain protected by FIPS 140-2 validated encryption modules during transit and while at rest. Which AWS service or combination of services will meet the schedule with the LEAST disruption to the existing network?

  • Request an AWS Snowball Edge Compute Optimized device, install the AWS DataSync agent on it, and copy the data over the WAN link into S3.

  • Request two AWS Snowball Edge Storage Optimized (80 TB) devices, copy the data across both appliances, and ship them back to AWS for import into S3.

  • Order a single AWS Snowball Edge Storage Optimized (210 TB) device, copy the data to its NFS or S3-compatible endpoint, and return the appliance for automatic import into the target S3 bucket.

  • Deploy AWS DataSync on-premises and throttle the transfer to 50 Mbps over the existing VPN to upload the data directly to Amazon S3.

Question 5 of 20

A global enterprise has 250 AWS accounts that are organized into multiple organizational units (OUs) in AWS Organizations. Security policy mandates that every Amazon EC2 instance must automatically install any Critical or Important operating-system security patch within 24 hours of its release. The solution must provide a single place to configure and report patch compliance for all accounts and Regions, use only the existing SSM Agent, remediate non-compliant instances automatically, and impose the least possible operational overhead on the central cloud-operations team.

Which approach best meets these requirements?

  • Enable Amazon Inspector across the organization by delegating administration to a central account, then configure Amazon EventBridge rules that match Inspector EC2 vulnerability findings with a CVSS score of 7.0 or higher and start an SSM Automation runbook that executes AWS-RunPatchBaseline on the affected instances. Use the Inspector console for compliance visibility.

  • Create an AWS Config conformance pack that contains the managed rule EC2_MANAGEDINSTANCE_PATCH_COMPLIANCE_STATUS_CHECK and attach an auto-remediation action that invokes the AWS-RunPatchBaseline Automation runbook on every NON_COMPLIANT instance. Run the rule once every 24 hours and aggregate the results in the management account.

  • From the management account, deploy an AWS Systems Manager Quick Setup Patch Manager policy to the entire organization. Configure a custom patch baseline with a 0-day auto-approval rule for Critical and Important patches, select the Scan and install operation, and schedule the State Manager association to run daily. Quick Setup propagates the baseline, schedule, and compliance reporting across all member accounts and Regions by using the existing SSM Agent.

  • Use AWS CloudFormation StackSets to deploy identical custom patch baselines, nightly maintenance windows, and AWS-RunPatchBaseline Run Command tasks in every account and Region. Tag each instance with its patch group and build a cross-account CloudWatch dashboard to display patch compliance.
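
As a reference point, a zero-day auto-approval patch baseline like the one described in the Quick Setup option can be sketched with boto3; the baseline name is hypothetical:

```python
import boto3

ssm = boto3.client("ssm")

# Auto-approve Critical and Important security patches with no delay.
ssm.create_patch_baseline(
    Name="zero-day-critical-important",
    OperatingSystem="AMAZON_LINUX_2",
    ApprovalRules={
        "PatchRules": [{
            "PatchFilterGroup": {
                "PatchFilters": [
                    {"Key": "CLASSIFICATION", "Values": ["Security"]},
                    {"Key": "SEVERITY", "Values": ["Critical", "Important"]},
                ]
            },
            "ApproveAfterDays": 0,
        }]
    },
)
```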

Question 6 of 20

Your company replicates its on-premises application servers to AWS by using AWS Elastic Disaster Recovery (AWS DRS). Continuous block-level replication is already configured and the servers appear as Ready for recovery in the AWS DRS console for the us-west-2 Region. Management now mandates a warm standby disaster-recovery strategy so the DR Region can immediately process a small amount of user traffic while keeping monthly operating costs low. You must meet an RTO of 15 minutes and an RPO of seconds.

Which approach will satisfy these requirements?

  • Keep the default AWS DRS configuration, which maintains switched-off resources in the staging area and launches production-sized instances only when a recovery job starts.

  • Replace AWS DRS with cross-Region backups managed by AWS Backup and restore the servers with AWS CloudFormation during a disaster drill; route traffic to the DR Region by changing Route 53 weights.

  • Enable instance-type right-sizing in the AWS DRS launch template so that matching C5 instances are chosen automatically; do not keep any recovery instances running before a disaster.

  • Launch the recovery instances once, keep them running on smaller EC2 instance types behind an Application Load Balancer, and use Auto Scaling policies to resize the fleet to production sizes only after a failover is declared.

Question 7 of 20

A financial services company is building a high-frequency trading (HFT) platform on AWS. The core trading algorithms require the lowest possible latency (ideally single-digit milliseconds) to process real-time market data feeds from a major stock exchange located in the New York City (NYC) metropolitan area. The goal is to minimize the round-trip time between the AWS-hosted application and the exchange's matching engine. Which networking and infrastructure strategy should a solutions architect propose to achieve this objective?

  • Deploy the application in the us-east-1 (N. Virginia) Region and configure Amazon Route 53 with Geoproximity routing to the exchange.

  • Deploy the application in the us-east-1 (N. Virginia) Region and place an AWS Global Accelerator in front of the application endpoints.

  • Deploy the application within an AWS Local Zone located in the NYC metropolitan area (e.g., us-east-1-nyc-1a).

  • Deploy the application to multiple Availability Zones in the us-east-1 (N. Virginia) Region and establish an AWS Direct Connect connection to the exchange.
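
For context, deploying into a Local Zone starts with opting the account in to the zone group and creating a subnet there. A minimal boto3 sketch, with a hypothetical VPC ID and CIDR block:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Opt the account in to the NYC Local Zone group.
ec2.modify_availability_zone_group(
    GroupName="us-east-1-nyc-1",
    OptInStatus="opted-in",
)

# Create a subnet in the Local Zone so instances can launch close to NYC.
ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.0.128.0/24",
    AvailabilityZone="us-east-1-nyc-1a",
)
```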

Question 8 of 20

A solutions architect needs to improve the resilience of a stateless microservices API that is fronted by an Application Load Balancer (ALB) in the us-east-1 Region. The ALB currently distributes traffic across targets in three Availability Zones. The team must meet the following requirements:

  1. If AWS detects an infrastructure problem in any Availability Zone (AZ), traffic to that AZ must shift automatically to the remaining healthy AZs with no operator action.
  2. The mechanism must exercise itself automatically each week to confirm that the workload continues to operate when one AZ is unavailable.
  3. The solution must minimize custom automation and ongoing operational effort.

Which approach will satisfy these requirements?

  • Enable zonal autoshift for the ALB in Amazon Route 53 Application Recovery Controller (ARC) and configure the required practice run schedule.

  • Replace the ALB with a Network Load Balancer, configure Route 53 failover records with health checks for each AZ, and run a scripted job to toggle the primary record every week.

  • Place the ALB behind an Amazon CloudFront distribution with two origins mapped to different AZs, enable origin failover, and schedule weekly cache invalidations to force failover tests.

  • Create an AWS Lambda function that is triggered by AWS Health events to remove the impaired AZ from the ALB, and schedule an AWS Fault Injection Simulator experiment to disable an AZ every week.
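
For reference, enabling zonal autoshift with a practice-run configuration can be sketched with boto3. The ALB and alarm ARNs below are hypothetical, and the parameter shapes should be checked against the current ARC zonal shift API:

```python
import boto3

# ARC zonal shift has its own service client.
arc = boto3.client("arc-zonal-shift", region_name="us-east-1")

alb_arn = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/api/abc123"

# Practice runs require at least one outcome alarm to judge success.
arc.create_practice_run_configuration(
    resourceIdentifier=alb_arn,
    outcomeAlarms=[{
        "type": "CLOUDWATCH",
        "alarmIdentifier": "arn:aws:cloudwatch:us-east-1:111122223333:alarm:api-5xx",
    }],
)

# Let AWS shift traffic away from an impaired AZ automatically.
arc.update_zonal_autoshift_configuration(
    resourceIdentifier=alb_arn,
    zonalAutoshiftStatus="ENABLED",
)
```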

Question 9 of 20

Your company operates more than 400 AWS member accounts that are centrally managed with AWS Organizations. The security team needs to be alerted whenever any Amazon S3 bucket in a member account receives a resource-based policy that makes the bucket publicly readable or grants read access to principals outside the organization. Notifications must arrive within 1 hour of the policy change and be delivered to an existing Amazon SNS topic in the security-tooling account. The team also wants a single console where they can review all historical findings. The solution must introduce the least ongoing operational overhead.

Which combination of actions will meet these requirements?

  • Enable Amazon GuardDuty S3 protection for the organization and configure GuardDuty findings to be forwarded through AWS Security Hub to the SNS topic.

  • Enable Amazon Macie organization-wide from the management account and create EventBridge rules in the security-tooling account that forward Macie Policy:IAMUser/S3BucketPublic findings to the SNS topic.

  • Register the security-tooling account as the delegated administrator for IAM Access Analyzer, create an organization-level external-access analyzer there, and add an Amazon EventBridge rule that sends new aws.access-analyzer finding events to the existing SNS topic.

  • In every member account, enable the AWS Config managed rule s3-bucket-public-read-prohibited, aggregate the rule results to a central aggregator in the security-tooling account, and configure an EventBridge rule that forwards NON_COMPLIANT events to the SNS topic.
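
As an illustration, the EventBridge rule in the Access Analyzer option might look like the following boto3 sketch; the rule name and SNS topic ARN are hypothetical:

```python
import json
import boto3

events = boto3.client("events", region_name="us-east-1")

# Match new findings emitted by IAM Access Analyzer.
events.put_rule(
    Name="access-analyzer-findings",
    EventPattern=json.dumps({
        "source": ["aws.access-analyzer"],
        "detail-type": ["Access Analyzer Finding"],
    }),
)

# Forward matching events to the security team's existing SNS topic.
events.put_targets(
    Rule="access-analyzer-findings",
    Targets=[{
        "Id": "security-sns",
        "Arn": "arn:aws:sns:us-east-1:111122223333:security-alerts",
    }],
)
```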

Question 10 of 20

A solutions architect is designing a multi-tier web application in a VPC. The architecture consists of a fleet of web servers in a public subnet and a fleet of application servers in a private subnet. The web servers must accept HTTPS traffic (TCP port 443) from clients on the internet. The security group for the web servers correctly allows inbound traffic on TCP port 443 from 0.0.0.0/0. Despite this, users report intermittent connection timeouts when accessing the application. A review of VPC Flow Logs shows that SYN packets from clients are reaching the web servers, but the corresponding SYN-ACK responses from the servers are being dropped. What is the MOST likely cause of this issue and the correct way to resolve it?

  • The security group for the web servers is missing an outbound rule. Add an outbound rule to the security group to allow traffic on TCP ports 1024-65535 to 0.0.0.0/0.

  • The network ACL for the public subnet is blocking outbound return traffic. Add an outbound rule to the public subnet's NACL to allow traffic on TCP ports 1024-65535 to destination 0.0.0.0/0.

  • The network ACL for the public subnet is blocking inbound traffic. Add an inbound rule with a lower number than the default deny rule to allow TCP port 443 from source 0.0.0.0/0.

  • The network ACL for the private subnet is blocking return traffic. Add an outbound rule to the private subnet's NACL to allow traffic on TCP ports 1024-65535 to the public subnet's CIDR range.
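
For reference, adding an ephemeral-port egress rule to a network ACL is a single boto3 call; the NACL ID and rule number are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow outbound return traffic on ephemeral ports from the public subnet.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=120,
    Protocol="6",          # TCP
    RuleAction="allow",
    Egress=True,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},
)
```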

Question 11 of 20

You are modernizing a claims-processing workflow that currently runs as a monolithic cron job on an on-premises server. For every new claim, the job performs three sequential actions:

  1. Validate the claim data.
  2. Call an external fraud-scoring API. The API responds asynchronously by sending an HTTPS callback up to 3 hours later. Each duplicate call to the API incurs an additional cost.
  3. Persist the fraud score in Amazon DynamoDB and notify the claimant.

The modernization design must meet these requirements:

  • Replace the cron job with a fully managed, serverless orchestration service that minimizes custom code.
  • Guarantee exactly-once execution of each fraud-scoring request.
  • Pause the workflow until the external system returns the fraud score, without polling.
  • Handle thousands of concurrent claims with minimal operational overhead and provide built-in execution history for auditing.

Which solution meets these requirements MOST cost-effectively?

  • Create an AWS Step Functions Standard workflow. Use a Task state that invokes a Lambda function to send the request to the external API and passes a task token. Configure the Task state with the Wait for Callback (.waitForTaskToken) pattern so the workflow pauses until the external system returns the token through an API Gateway endpoint.

  • Create an AWS Step Functions Express workflow that invokes the external API synchronously and then uses a Wait state of 3 hours before persisting the result.

  • Create a Standard Step Functions workflow that uses the Run a Job (.sync) integration pattern to invoke the external API with a 3-hour timeout.

  • Use Amazon EventBridge Scheduler to trigger an AWS Lambda function for each claim; store workflow state and callback information in DynamoDB, and resume processing when the Lambda function is reinvoked by the external system.
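
As an illustration of the task-token callback pattern, the Lambda handler that receives the fraud-scoring callback could look roughly like this; the event shape and field names are hypothetical:

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

def callback_handler(event, context):
    """Invoked (e.g., via API Gateway) when the fraud-scoring API calls back."""
    body = json.loads(event["body"])
    # The task token was passed to the external API with the original request;
    # returning it resumes the paused Step Functions execution.
    sfn.send_task_success(
        taskToken=body["taskToken"],
        output=json.dumps({"fraudScore": body["fraudScore"]}),
    )
    return {"statusCode": 200}
```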

Question 12 of 20

A global enterprise operates a large multi-account environment using AWS Organizations. The finance department needs to implement a detailed chargeback model for various business units and projects. They require the ability to perform complex, ad-hoc queries on granular cost and usage data going back several years. The current method of using the AWS Billing console and basic monthly reports is insufficient for their needs. The company has a strong preference for serverless, managed AWS services to minimize operational overhead. Which strategy should a solutions architect recommend to meet these requirements most effectively?

  • Use AWS Config with custom rules to track resource creation across all accounts. Create a Lambda function to query the AWS Config history to correlate resources with business units and store the results in Amazon DynamoDB for reporting.

  • Create multiple AWS Budgets for each business unit. Configure budget actions to send alerts via Amazon SNS when costs exceed thresholds. Use AWS Cost Explorer to create and share custom reports for cost trends.

  • Configure AWS Cost and Usage Reports (CUR) to be delivered to an Amazon S3 bucket. Use AWS Glue to catalog the data, and then query it using Amazon Athena. Create dashboards for the finance team using Amazon QuickSight.

  • Enable Detailed Billing Reports (DBR) and save them to an S3 bucket. Develop a scheduled AWS Lambda function to parse the reports and load the processed data into an Amazon RDS database for querying.
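
For reference, once CUR data is cataloged by AWS Glue, an analyst's ad-hoc query can be submitted through Athena with boto3. The database, table, partition, and output bucket names below are hypothetical, and CUR column names vary with report configuration:

```python
import boto3

athena = boto3.client("athena")

# Ad-hoc chargeback query against the Glue-cataloged CUR data.
athena.start_query_execution(
    QueryString="""
        SELECT line_item_usage_account_id,
               SUM(line_item_unblended_cost) AS cost
        FROM cur_db.cur_table
        WHERE month = '2024-06'
        GROUP BY line_item_usage_account_id
        ORDER BY cost DESC
    """,
    QueryExecutionContext={"Database": "cur_db"},
    ResultConfiguration={"OutputLocation": "s3://finance-athena-results/"},
)
```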

Question 13 of 20

A financial analytics company runs a critical overnight ETL job using a self-managed Apache Spark cluster on a fleet of r5.4xlarge EC2 instances. The job runs for approximately 4 hours each night, but the cluster remains active 24/7 to be ready for the next run, leading to high costs from idle resources. The data processing volume can fluctuate by up to 50% day-to-day. The operations team spends considerable time on cluster maintenance, security patching, and managing Spark versions. A solutions architect has been tasked with proposing a new architecture that most significantly reduces the Total Cost of Ownership (TCO) while maintaining the processing capabilities. Which AWS managed service offering should the architect recommend?

  • Keep the existing cluster architecture but purchase an EC2 Instance Savings Plan for the r5 instance family to cover the EC2 usage.

  • Re-platform the job to run on a transient Amazon EMR cluster that uses Spot Instances for task nodes.

  • Migrate the ETL workload to AWS Glue jobs.

  • Containerize the Spark application and orchestrate it using AWS Batch with AWS Fargate compute environments.
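
As a reference point, defining a serverless Spark job in AWS Glue is a single API call, with no cluster to keep warm or patch. A minimal boto3 sketch, with hypothetical script location, role, and sizing:

```python
import boto3

glue = boto3.client("glue")

# Serverless Spark: pay only while the job runs.
glue.create_job(
    Name="nightly-etl",
    Role="arn:aws:iam::111122223333:role/GlueETLRole",
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://etl-scripts/nightly_job.py",
        "PythonVersion": "3",
    },
    GlueVersion="4.0",
    WorkerType="G.2X",
    NumberOfWorkers=20,
)
```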

Question 14 of 20

A company runs a mission-critical mobile application that currently stores all user-generated data in an Amazon RDS for PostgreSQL instance located in us-east-1. Traffic has grown rapidly, and the database now experiences write saturation during peak hours, causing latency spikes for users in Europe and Asia-Pacific. New business requirements specify that the re-architected data layer must:

  • Provide single-digit-millisecond read and write latency for users in us-east-1, eu-west-1, and ap-southeast-1.
  • Allow each Region to continue accepting reads and writes if another Region becomes unavailable.
  • Scale automatically to absorb unpredictable surges up to 10× the previous peak traffic without manual capacity changes.
  • Minimize day-to-day operational overhead and database administration effort.

Which approach best satisfies all of these requirements?

  • Keep the existing RDS instance as the primary writer and configure AWS Database Migration Service (AWS DMS) to perform ongoing replication to read-only PostgreSQL instances in the other two Regions.

  • Migrate the data to Amazon DynamoDB, configure a global table spanning us-east-1, eu-west-1, and ap-southeast-1, and use on-demand capacity mode (with adaptive capacity).

  • Deploy an Amazon ElastiCache for Redis cluster with Global Datastore across the three Regions and direct all writes to the primary Redis cluster.

  • Create an Aurora PostgreSQL global database with secondary clusters in eu-west-1 and ap-southeast-1 and enable write forwarding.
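
For reference, a DynamoDB global table with on-demand capacity can be sketched with boto3. The table schema is hypothetical, and streams must be enabled before replicas can be added:

```python
import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

# On-demand table with streams enabled (required for global table replicas).
ddb.create_table(
    TableName="user-data",
    AttributeDefinitions=[{"AttributeName": "userId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "userId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)
ddb.get_waiter("table_exists").wait(TableName="user-data")

# Add replica Regions; production code should wait for each replica to
# become ACTIVE before adding the next.
for region in ("eu-west-1", "ap-southeast-1"):
    ddb.update_table(
        TableName="user-data",
        ReplicaUpdates=[{"Create": {"RegionName": region}}],
    )
```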

Question 15 of 20

The FinOps team for a multi-account AWS environment needs an automated billing alert for the development account. The solution must, at the beginning of each calendar month, automatically derive the alert threshold from the average monthly spend of the preceding six months and then notify a dedicated Slack channel when the spend for the current month exceeds 120 percent of that threshold. The implementation must introduce the least possible ongoing operational work. Which approach will meet these requirements?

  • Export daily cost and usage data to Amazon S3, run an hourly Step Functions workflow that uses the Cost Explorer API to compute the six-month average, store the value in Parameter Store, and send a Slack notification via SNS when current cost exceeds 120 percent of that parameter.

  • Enable AWS Cost Anomaly Detection for the account, configure a daily summary subscription with a USD 800 cost-impact threshold, and send the alerts to the Slack channel through AWS Chatbot.

  • Create an auto-adjusting cost budget for the development account that uses the Last 6 Months baseline and sets an alert at 120 percent of the budgeted amount. Attach an Amazon SNS notification and map the SNS topic to the Slack channel with AWS Chatbot.

  • Create a CloudWatch alarm on the AWS/Billing EstimatedCharges metric with a static threshold equal to 120 percent of the previous six-month average. Trigger a monthly Lambda function from EventBridge to recalculate and update the threshold, and use AWS Chatbot to forward alarm notifications to Slack.
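
For reference, the auto-adjusting budget option maps to a single Budgets API call. A boto3 sketch with hypothetical account ID, budget name, and SNS topic:

```python
import boto3

budgets = boto3.client("budgets")

# Budget baselined on the trailing six months of spend, alerting at 120%.
budgets.create_budget(
    AccountId="111122223333",
    Budget={
        "BudgetName": "dev-account-monthly",
        "BudgetType": "COST",
        "TimeUnit": "MONTHLY",
        "AutoAdjustData": {
            "AutoAdjustType": "HISTORICAL",
            "HistoricalOptions": {"BudgetAdjustmentPeriod": 6},
        },
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 120.0,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{
            "SubscriptionType": "SNS",
            "Address": "arn:aws:sns:us-east-1:111122223333:budget-alerts",
        }],
    }],
)
```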

Question 16 of 20

A global logistics company is migrating its on-premises data center to AWS. The portfolio includes hundreds of business-critical applications running on a mix of VMware vSphere virtual machines and physical Linux servers. A primary business requirement is to minimize cutover downtime to under 10 minutes per application. The migration team must also be able to conduct multiple, non-disruptive test cutovers in AWS for each application over several weeks before the final production cutover. The source applications must remain fully operational during the entire replication and testing period.

Which AWS service should a solutions architect recommend to meet these requirements?

  • AWS Server Migration Service (SMS)

  • AWS Application Migration Service (AWS MGN)

  • AWS DataSync

  • VM Import/Export

Question 17 of 20

A global retail company operates a large, hybrid environment with thousands of Amazon EC2 instances across multiple AWS accounts and a significant number of on-premises servers in their data centers. The operations team is struggling with configuration drift across this fleet, leading to inconsistent application behavior and compliance violations. They need a scalable, centralized solution to enforce a desired configuration state, including specific software versions and security settings, on all servers. The solution must minimize operational overhead by avoiding the need to manage dedicated configuration management servers and should automatically remediate any detected drift.

Which AWS Systems Manager capability should a solutions architect recommend to meet these requirements most effectively?

  • Develop a complex SSM Automation runbook that checks for drift and orchestrates remediation steps.

  • Configure SSM Inventory to collect metadata and use AWS Config rules to detect non-compliant resources.

  • Create SSM State Manager associations that apply a desired configuration document on a schedule.

  • Use SSM Run Command to periodically execute scripts that check and apply the required configuration.
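
As an illustration, a State Manager association that reapplies a desired-state document on a schedule can be created with one boto3 call; the document name and tag target are hypothetical:

```python
import boto3

ssm = boto3.client("ssm")

# Apply the desired-state document on a schedule to every managed node
# (EC2 and on-premises) carrying the given tag.
ssm.create_association(
    Name="Custom-DesiredStateBaseline",
    Targets=[{"Key": "tag:Environment", "Values": ["production"]}],
    ScheduleExpression="rate(30 minutes)",
    AssociationName="enforce-baseline",
    ComplianceSeverity="CRITICAL",
)
```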

Question 18 of 20

A manufacturing company is migrating an MPI-based high-performance computing (HPC) simulation that will run on 128 c7n.16xlarge Amazon EC2 instances. The application needs sub-millisecond internode latency and at least 25 Gbps of sustained throughput between all nodes. The team is willing to place every instance in the same Availability Zone and wants the simplest way to achieve the required network performance. Which deployment strategy should the solutions architect recommend?

  • Launch all 128 c7n.16xlarge instances into a cluster placement group in a single Availability Zone.

  • Launch the instances without a placement group and attach Elastic Fabric Adapter (EFA) to each node.

  • Use a rack-level spread placement group across three Availability Zones to keep nodes on separate racks.

  • Launch the instances into a partition placement group that spans two Availability Zones with four partitions.
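
For reference, creating a cluster placement group and launching a fleet into it is a two-call operation in boto3; the AMI and subnet IDs are hypothetical, and the instance type is taken from the scenario:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Cluster placement groups pack instances close together in one AZ
# for low latency and high inter-node throughput.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c7n.16xlarge",  # type from the scenario
    MinCount=128,
    MaxCount=128,
    SubnetId="subnet-0123456789abcdef0",
    Placement={"GroupName": "hpc-cluster"},
)
```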

Question 19 of 20

A global e-commerce company is redesigning its product-catalog service. The new architecture must:

  1. Sustain up to 500,000 writes per second during vendor bulk uploads.
  2. Return catalog items with single-digit-millisecond latency to web and mobile clients.
  3. Let analysts run complex ad-hoc SQL queries across the entire catalog once per hour without affecting the transactional workload.
  4. Minimize operational overhead and ongoing cost.

Which approach will meet these requirements MOST effectively?

  • Store the catalog in sharded Amazon DocumentDB clusters. Use AWS Glue jobs to copy the data hourly to Amazon S3 and query it with Amazon Redshift Spectrum.

  • Deploy Amazon Aurora MySQL with provisioned writer and multiple read replicas. Take hourly automated snapshots, load them into Amazon Redshift with AWS DMS, and let analysts query the Redshift cluster.

  • Store the catalog in Amazon DynamoDB using on-demand capacity. Enable point-in-time recovery and schedule hourly DynamoDB table exports to Amazon S3. Analysts query the exported data in S3 with Amazon Athena.

  • Persist catalog documents in Amazon OpenSearch Service with UltraWarm storage. Use the OpenSearch SQL plugin for both transactional reads and analytic queries.
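
As an illustration, the hourly export in the DynamoDB option is a single API call that reads from the point-in-time-recovery backup rather than the live table; the ARNs and bucket are hypothetical:

```python
import boto3

ddb = boto3.client("dynamodb")

# Export from the PITR backup, so the transactional workload is untouched.
ddb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:us-east-1:111122223333:table/catalog",
    S3Bucket="catalog-exports",
    S3Prefix="hourly/",
    ExportFormat="DYNAMODB_JSON",
)
```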

Question 20 of 20

An e-commerce company is refactoring a legacy order-processing application into several microservices that run in separate AWS accounts. The monolith currently writes every order event to an Amazon SQS queue. A Lambda function examines each message's JSON payload and forwards it to three downstream SQS queues (one per microservice) based on the value of the eventType field (ORDER_CREATED, PAYMENT_CAPTURED, or ORDER_CANCELLED).

The development team wants to retire the Lambda router to reduce operational overhead, keep costs low, and continue using SQS for downstream processing. Exactly-once delivery and strict ordering are not required.

Which solution will meet these requirements with the least custom code?

  • Configure an Amazon EventBridge custom event bus. Publish each order event to the bus and create one rule per eventType that routes matching events to the appropriate SQS queue.

  • Publish every order event to a single Amazon SNS standard topic. Create a dedicated Amazon SQS queue for each microservice and subscribe each queue to the topic. Attach a payload-based filter policy that matches only the required eventType values for that microservice.

  • Replace the Lambda router with an Amazon SNS FIFO topic. Set the eventType value as the message-group ID and subscribe each microservice's SQS queue to the topic so that only matching messages are delivered.

  • Create three separate Amazon SNS topics, one for each eventType. Modify the order-processing service so that it publishes every event to all three topics, and have each microservice subscribe to its dedicated topic.
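
For reference, a payload-based SNS filter policy is set at subscription time. A boto3 sketch for one microservice queue, with hypothetical topic and queue ARNs:

```python
import json
import boto3

sns = boto3.client("sns")

# Subscribe one microservice queue and filter on the message body itself,
# so no fields need to be copied into message attributes.
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:111122223333:order-events",
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:111122223333:order-created-queue",
    Attributes={
        "FilterPolicyScope": "MessageBody",
        "FilterPolicy": json.dumps({"eventType": ["ORDER_CREATED"]}),
    },
)
```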