AWS Certified Solutions Architect Professional SAP-C02 Practice Question
A financial-services company ingests tens of thousands of trade events per second from multiple producer microservices in a single AWS Region. The downstream trade-processing engine can scale horizontally but occasionally crashes on rare edge cases. The company has these reliability requirements:
An Availability Zone outage or consumer crash must not cause event loss.
Duplicate event delivery is acceptable, provided the application can process events idempotently.
Messages that fail processing more than three times must be isolated automatically so that valid messages continue to be processed.
Which architecture meets these requirements with the least amount of application refactoring?
Publish events to an Amazon SQS standard queue. Configure a redrive policy that moves a message to a dead-letter queue after three processing attempts. Deploy producers and an Auto Scaling group of consumer instances across at least two AZs, and make the consumer logic idempotent.
Ingest events into Amazon Kinesis Data Streams with enhanced fan-out enabled. Extend the stream's retention period beyond the default 24 hours and deploy the consumer application across multiple AZs.
Replace SQS with a pair of Active/Standby Amazon MQ brokers in different AZs. Enable broker mirroring and rely on the broker's retry settings to discard unprocessable messages.
Publish events to a single Amazon SQS FIFO queue with one message group ID and enable high-throughput mode. Rely on the queue's exactly-once semantics and omit a dead-letter queue.
Publishing events to an Amazon SQS standard queue meets all three requirements. SQS stores every message redundantly across multiple AZs, so an AZ outage or consumer crash does not cause data loss. Standard queues deliver messages at least once, so the consumer must process events idempotently to tolerate duplicates, which the requirements explicitly allow. Attaching a redrive policy that moves a message to a dead-letter queue after three failed receives isolates poison messages automatically, with no additional application code.
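For illustration only (the exam does not require code), a minimal boto3 sketch of this design might look like the following. The queue names are hypothetical; the key detail is the RedrivePolicy with maxReceiveCount set to 3:

```python
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# Dead-letter queue that will isolate poison messages.
dlq = sqs.create_queue(QueueName="trade-events-dlq")
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq["QueueUrl"], AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Main standard queue. After a message has been received and not
# deleted three times, SQS moves it to the dead-letter queue.
sqs.create_queue(
    QueueName="trade-events",  # hypothetical name
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "3"}
        ),
        "VisibilityTimeout": "60",
    },
)
```

No consumer-side retry or quarantine logic is needed; the redrive policy enforces the three-attempt rule on the queue itself.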
A FIFO queue with a single message-group ID restricts throughput and still needs a dead-letter queue; it offers features that are unnecessary for this use case. Kinesis Data Streams handles high volume but lacks built-in DLQ support and would require additional consumer logic. Amazon MQ Active/Standby provides durability but does not scale as easily and still requires custom handling for poison messages. Therefore, the SQS standard queue with a redrive policy and multi-AZ consumers is the simplest, most reliable solution.
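To see why consumer idempotency is the only application-side change required, consider this rough consumer sketch. The queue URL, DynamoDB table, and process_trade function are hypothetical placeholders; any durable key-value store would work for the deduplication record:

```python
import boto3

sqs = boto3.client("sqs")
ddb = boto3.client("dynamodb")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/trade-events"  # hypothetical
TABLE = "processed-trade-events"  # hypothetical table, partition key "event_id"


def process_trade(body: str) -> None:
    """Placeholder for the real trade-processing engine."""
    print(f"processing trade event: {body}")


def already_processed(event_id: str) -> bool:
    resp = ddb.get_item(TableName=TABLE, Key={"event_id": {"S": event_id}})
    return "Item" in resp


def mark_processed(event_id: str) -> None:
    ddb.put_item(TableName=TABLE, Item={"event_id": {"S": event_id}})


while True:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        # Redeliveries of the same message keep the same MessageId;
        # a business key from the payload would also work.
        event_id = msg["MessageId"]
        if not already_processed(event_id):
            process_trade(msg["Body"])
            mark_processed(event_id)
        # Delete only after success. If the consumer crashes first, SQS
        # redelivers the message, and after three failed receives the
        # redrive policy moves it to the DLQ.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```

If the process crashes anywhere before delete_message, the message becomes visible again and is redelivered, which is exactly the at-least-once behavior the idempotency check absorbs.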