AWS Certified Developer Associate Practice Test (DVA-C02)
Use the form below to configure your AWS Certified Developer Associate Practice Test (DVA-C02). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

AWS Certified Developer Associate DVA-C02 Information
AWS Certified Developer - Associate showcases knowledge and understanding of core AWS services, their uses, and basic AWS architecture best practices, as well as proficiency in developing, deploying, and debugging cloud-based applications by using AWS. Preparing for and attaining this certification gives certified individuals more confidence and credibility. Organizations with AWS Certified developers have the assurance of having the right talent to give them a competitive advantage and to ensure stakeholder and customer satisfaction.
The AWS Certified Developer - Associate (DVA-C02) exam is intended for individuals who perform a developer role. The exam validates a candidate’s ability to demonstrate proficiency in developing, testing, deploying, and debugging AWS Cloud-based applications. The exam also validates a candidate’s ability to complete the following tasks:
- Develop and optimize applications on AWS.
- Package and deploy by using continuous integration and continuous delivery (CI/CD) workflows.
- Secure application code and data.
- Identify and resolve application issues.

- Free AWS Certified Developer Associate DVA-C02 Practice Test
- 20 Questions
- Unlimited
- Development with AWS Services, Security, Deployment, Troubleshooting and Optimization
Free Preview
This test is a free preview; no account is required.
Subscribe to unlock all content, keep track of your scores, and access AI features!
A developer needs to process incoming files uploaded to an Amazon S3 bucket and transform the data using an AWS Lambda function. The developer wants the function to be triggered as soon as a new file is stored in the bucket. What is the MOST suitable way to achieve this requirement?
- Route the S3 event notifications to Amazon CloudWatch Events/EventBridge and associate the Lambda function as a target. 
- Send the S3 event notifications to an Amazon SQS queue and set up the queue as an event source for the Lambda function. 
- Configure the Lambda function to be triggered by Amazon S3 event notifications when a new object is created. 
- Publish the S3 event notifications to an Amazon SNS topic and subscribe the Lambda function to that topic. 
Answer Description
Using Amazon S3 event notifications to trigger a Lambda function directly when a new object is created provides the immediate response required for the developer's use case. It is the most direct method for event-driven integration between Amazon S3 and AWS Lambda. Configuring a Lambda function with an Amazon S3 event as the trigger allows the function to execute/respond as soon as the specified S3 event, such as a PUT, occurs.
Using Amazon SNS or Amazon SQS would introduce additional steps and potential delays, because the S3 event would first have to publish to a topic or send a message to a queue, which would then trigger the Lambda function. That indirection is less efficient here and is better suited to cases where indirect coupling, fan-out, or message filtering is required. Routing S3 event notifications through Amazon CloudWatch Events/EventBridge would likewise introduce unnecessary complexity for this direct integration use case.
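As a sketch, this is roughly what the direct S3-to-Lambda trigger looks like when configured with boto3; the bucket name, function ARN, and `incoming/` prefix below are hypothetical placeholders.

```python
def build_notification_config(lambda_arn, prefix="incoming/"):
    """Build the S3 notification configuration that invokes a Lambda
    function whenever a new object is created under the given prefix."""
    return {
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": lambda_arn,
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {"FilterRules": [{"Name": "prefix", "Value": prefix}]}
                },
            }
        ]
    }

# Applying it (requires AWS credentials; names are placeholders):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_notification_configuration(
#     Bucket="my-upload-bucket",
#     NotificationConfiguration=build_notification_config(
#         "arn:aws:lambda:us-east-1:123456789012:function:transform-files"
#     ),
# )
```

Note that S3 also needs permission to invoke the function (granted with Lambda's `AddPermission` API, or automatically when the trigger is created in the console); that step is omitted here.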
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
How do you configure S3 event notifications to trigger a Lambda function?
What permissions are required for S3 to invoke a Lambda function?
What are the limitations or considerations when using S3 event notifications with Lambda?
A developer needs to ensure that a serverless function, which is responsible for processing incoming messages, will redirect failed executions to a separate storage service for later analysis. Which feature should the developer configure to BEST adhere to this requirement?
- Boost the serverless function's available compute resources to prevent message processing failures. 
- Set up an automated re-publishing mechanism through another notification service to keep submitting the message until it is processed without errors. 
- Configure a dead-letter queue to save messages that the function fails to process for subsequent analysis. 
- Alter the function's retry settings to indefinitely attempt reprocessing the problematic message until successful. 
Answer Description
Enabling a dead-letter queue (DLQ) for the Lambda function is the best proactive measure for handling failed executions. The DLQ pattern is a well-established way to isolate messages that a Lambda function fails to process: they are moved to a separate queue where they do not interfere with the continuous flow of new messages, which aids analysis and debugging without losing the context of the failure. This is preferable to retrying indefinitely within the function, which can cause repeated failures without segregating the problematic messages, or to increasing compute resources, which may not address the root cause of the failures.
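A minimal sketch of attaching a DLQ to a Lambda function with boto3; the function name and queue ARN are hypothetical.

```python
def dead_letter_config(queue_arn):
    """DeadLetterConfig block for Lambda's UpdateFunctionConfiguration:
    events that still fail after Lambda's built-in retries are routed
    to this SQS queue (or SNS topic) instead of being dropped."""
    return {"TargetArn": queue_arn}

# Applying it (requires AWS credentials; names are placeholders):
# import boto3
# boto3.client("lambda").update_function_configuration(
#     FunctionName="process-messages",
#     DeadLetterConfig=dead_letter_config(
#         "arn:aws:sqs:us-east-1:123456789012:failed-events"
#     ),
# )
```

A Lambda DLQ applies to asynchronous invocations; for functions triggered by an SQS event source, a redrive policy on the source queue serves the same purpose.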
Ask Bash
What is a dead-letter queue (DLQ) in AWS Lambda?
How does enabling a DLQ improve reliability in message processing?
Can a dead-letter queue automatically retry failed events?
A developer is implementing an application that requires frequent retrieval of items from an Amazon DynamoDB table. To optimize performance, the application needs to minimize latency and reduce the number of network calls. Given the need for efficient data access patterns, which method should the developer use when implementing code that interacts with the DynamoDB table using the AWS SDK?
- Perform individual `GetItem` operations for each item. 
- Utilize `BatchGetItem` for batch retrieval of items. 
- Employ a `Scan` operation to fetch all table items and filter as needed. 
- Use `PutItem` calls with a filter to only insert the desired items. 
Answer Description
Using the batch operations API of AWS SDKs, such as BatchGetItem in the case of Amazon DynamoDB, allows the application to retrieve up to 100 items from one or more DynamoDB tables in a single operation. This reduces the number of network calls compared to individual GetItem requests for each item and results in less network latency. The use of Query or Scan would not be as efficient because they are designed for different purposes. Query is used to retrieve items using a key condition expression that requires a specific partition key, and Scan reads every item in a table, which can be less efficient for frequently accessed individual items. PutItem is used for data insertion, not retrieval, and therefore is not suitable for the scenario described.
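Because `BatchGetItem` accepts at most 100 keys per request, callers typically chunk larger key lists. A sketch, with the `Orders` table name and key shape as hypothetical placeholders:

```python
def chunk_keys(keys, size=100):
    """Split a key list into chunks that respect BatchGetItem's
    100-keys-per-call limit."""
    return [keys[i:i + size] for i in range(0, len(keys), size)]

# Fetching in batches (requires AWS credentials):
# import boto3
# dynamodb = boto3.resource("dynamodb")
# items = []
# for chunk in chunk_keys([{"order_id": oid} for oid in order_ids]):
#     resp = dynamodb.batch_get_item(
#         RequestItems={"Orders": {"Keys": chunk}}
#     )
#     items.extend(resp["Responses"]["Orders"])
```

A production version should also handle the `UnprocessedKeys` field that `BatchGetItem` may return when throttled.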
Ask Bash
What is `BatchGetItem` in DynamoDB?
How does `BatchGetItem` differ from `GetItem` in DynamoDB?
Why is `Scan` less efficient for frequent data retrieval in DynamoDB?
A developer is implementing a cloud-based messaging system where it's critical that messages are processed at least once, but the processing service is occasionally prone to temporary outages. How should the developer ensure message processing can be retried reliably without overwhelming the service when it comes back online?
- Implement retries with exponential backoff and jitter 
- Increase the message visibility timeout to its maximum limit 
- Set up unlimited immediate retries for all failed messages 
- Create a dedicated error queue for failed message processing attempts 
Answer Description
By implementing retries with exponential backoff and jitter, the developer can ensure that the message processing is retried reliably without overwhelming the service after an outage. Exponential backoff increases the delay between retry attempts, reducing the load on the service when it recovers, and jitter adds randomness to these delays to prevent synchronized retries. A dedicated error queue is not a retry mechanism but rather a way to separate unprocessable messages. Unlimited immediate retries could easily overwhelm the service, and increasing the message visibility timeout alone does not provide a mechanism for retrying message processing.
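A minimal sketch of retries with exponential backoff and "full jitter": each delay is a random value between zero and an exponentially growing ceiling, capped at a maximum.

```python
import random
import time

def backoff_delays(attempts, base=0.5, cap=30.0):
    """Yield 'full jitter' delays: random in [0, min(cap, base * 2^n)]."""
    for n in range(attempts):
        yield random.uniform(0, min(cap, base * 2 ** n))

def call_with_retries(operation, max_attempts=5, base=0.5, cap=30.0):
    """Retry `operation` on any exception, sleeping a jittered,
    exponentially growing delay between attempts."""
    for delay in backoff_delays(max_attempts - 1, base=base, cap=cap):
        try:
            return operation()
        except Exception:
            time.sleep(delay)
    return operation()  # final attempt: exceptions propagate to the caller
```

The randomness is what prevents many clients from retrying in lockstep and re-overwhelming the service the moment it recovers.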
Ask Bash
What is exponential backoff, and how does it work in retries?
Why is jitter important when using exponential backoff?
What is a message visibility timeout, and why doesn’t increasing it solve the issue?
You are implementing a notification system for an online shopping platform hosted on AWS. The system must send emails to customers after their order has been processed. Given that the order-processing system can become backlogged during peak times, which pattern should you employ to ensure that the email notifications do not block or slow down the order-processing workflow?
- Employ an asynchronous pattern with a message queue that collects notifications to be processed by a separate email-sending service independently of order processing. 
- Adopt a synchronous pattern for small batches of orders, switching to an asynchronous pattern only when detecting a processing backlog. 
- Use a synchronous pattern where the order-processing service sends the email directly before confirming the order as complete within the same workflow. 
- Implement a hybrid pattern, sending the email synchronously for premium customers, while using an asynchronous approach for regular customers. 
Answer Description
An asynchronous communication pattern is suitable for this scenario because it allows the order-processing logic to finish without waiting for the email to be delivered. The order service can publish a message to Amazon SQS (or another queue), which then triggers a separate component, such as an AWS Lambda function, to send the email. This decoupling lets the order system remain responsive under load. A synchronous approach would force each order request to wait for the email operation, creating latency and backlogs.
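A sketch of the hand-off: the order service only serializes and enqueues the notification, then returns immediately. The queue URL and message fields are hypothetical.

```python
import json

def order_confirmation_message(order_id, recipient):
    """Serialize the payload the downstream email worker will consume."""
    return json.dumps({"order_id": order_id, "recipient": recipient})

# Enqueueing (requires AWS credentials; the queue name is a placeholder):
# import boto3
# sqs = boto3.client("sqs")
# queue_url = sqs.get_queue_url(QueueName="email-notifications")["QueueUrl"]
# sqs.send_message(
#     QueueUrl=queue_url,
#     MessageBody=order_confirmation_message("ord-1001", "customer@example.com"),
# )
```

The email-sending consumer (for example, a Lambda function with this queue as its event source) processes messages at its own pace, independent of order throughput.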
Ask Bash
What is Amazon SQS and why is it used in asynchronous workflows?
How does AWS Lambda work with Amazon SQS to process messages asynchronously?
Why is a synchronous pattern not suitable for high-load order processing systems?
Your company's cloud architecture leans heavily on AWS Lambda to process e-commerce transactions. The functions must scale automatically and remain unaffected by the state of previous invocations. A payment-processing function calls an external gateway and must reliably handle transactions during peak hours. To ensure the function adheres to a stateless design, what should you do?
- Store transaction state data in an external database or caching service such as Amazon DynamoDB or Amazon ElastiCache. 
- Configure the Lambda function to process transactions asynchronously using an internal queue. 
- Enable automatic scaling on the Lambda function to handle the increased number of transactions. 
- Use environment variables to store transaction state between invocations. 
Answer Description
Storing transaction state externally (for example, in Amazon DynamoDB or Amazon ElastiCache) separates state from compute, allowing each Lambda invocation to run independently. Environment variables are suitable for static configuration, not dynamic state. Automatic scaling or asynchronous queues improve throughput but do not, by themselves, remove shared state between invocations.
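The stateless shape can be sketched as a handler whose every piece of state lives in an injected store rather than in the function. The table and gateway call here are injected stand-ins; in production they would be a boto3 DynamoDB `Table` and the real payment client.

```python
def handler(event, table, charge):
    """Stateless payment handler: all transaction state is written to the
    injected table, never kept in the function, so any invocation can run
    on any container with no memory of previous calls."""
    txn_id = event["transaction_id"]
    table.put_item(Item={"txn_id": txn_id, "status": "PENDING"})
    status = charge(event)  # call out to the external payment gateway
    table.put_item(Item={"txn_id": txn_id, "status": status})
    return {"transaction_id": txn_id, "status": status}

# Production wiring (names are placeholders):
# import boto3
# table = boto3.resource("dynamodb").Table("TransactionState")
```

Injecting the dependencies also makes the handler trivially unit-testable, which dovetails with stateless design.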
Ask Bash
Why is it important for AWS Lambda functions to be stateless?
What are the differences between Amazon DynamoDB and Amazon ElastiCache for storing state?
How does automatic scaling relate to AWS Lambda's stateless architecture?
Which of the following is the BEST description of a stateless application component?
- It retains session state information between requests, necessitating a mechanism to store user data on the server for ongoing interactions. 
- It treats each request as a separate, independent transaction, without the need to maintain client session state on the server. 
- It requires sticky sessions to ensure that a user's session data is maintained across multiple interactions with the application. 
- It relies on server-side storage to keep track of user states and preferences, reducing the client's overhead in subsequent requests. 
Answer Description
A stateless application component does not save client state on the server between requests; each request is independent of the others, and the server does not retain session information. This is the hallmark of stateless design, making it easy to scale horizontally by adding more servers. A stateful component would retain session state information between requests, which can complicate scaling due to the need to either persist session state or maintain session affinity. Stateless components generally do not require sticky sessions since each request is self-contained and can be processed by any available instance.
Ask Bash
Why is stateless design beneficial for scalability?
What are sticky sessions and why aren’t they needed in stateless applications?
How does a stateless application manage user data between requests?
A developer is building a new AWS Lambda function using Python. The function is triggered by Amazon S3 object-creation events and contains complex business logic that needs to be validated. The developer needs to write repeatable tests that verify only the function's core logic without making actual calls to other AWS services. What is the MOST effective approach for unit testing this function's logic?
- Use the AWS SAM CLI command `sam local invoke` with a sample S3 event JSON file to run the function in a local Docker container.
- Mock the S3 event and AWS SDK clients, and then invoke the Lambda handler function directly with the mock event. 
- Write an AWS CLI script to invoke the deployed Lambda function, passing a base64-encoded S3 event as the payload. 
- Deploy the function to a development environment and upload a test object to the S3 bucket to trigger an invocation. 
Answer Description
The most effective approach for unit testing the Lambda function's core logic is to mock the S3 event and any AWS SDK clients it uses. This allows the developer to invoke the handler function directly with a simulated event and control the behavior of external dependencies, ensuring the test is isolated and focuses exclusively on the business logic. This method is fast, repeatable, and does not require an active AWS environment or incur costs from service calls. Deploying the function to a development environment, or running it with `sam local invoke`, is a form of integration testing, because both involve more components than just the isolated unit of code.
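An illustration of the mocking approach using Python's standard `unittest.mock`. The handler and event shape are simplified, hypothetical stand-ins; real S3 events carry more fields.

```python
import json
from unittest import mock

def handler(event, context, s3_client):
    """Hypothetical handler under test: reads the uploaded object and
    returns the number of JSON records it contains."""
    record = event["Records"][0]["s3"]
    obj = s3_client.get_object(
        Bucket=record["bucket"]["name"], Key=record["object"]["key"]
    )
    return {"record_count": len(json.loads(obj["Body"].read()))}

def test_handler_counts_records():
    # Mock the S3 client so no real AWS call is made.
    fake_s3 = mock.Mock()
    fake_s3.get_object.return_value = {
        "Body": mock.Mock(read=lambda: b'[{"id": 1}, {"id": 2}]')
    }
    event = {"Records": [{"s3": {"bucket": {"name": "b"}, "object": {"key": "k"}}}]}
    result = handler(event, None, fake_s3)
    assert result == {"record_count": 2}
    fake_s3.get_object.assert_called_once_with(Bucket="b", Key="k")
```

Libraries such as `moto` or `botocore.stub.Stubber` offer the same isolation with more faithful AWS response shapes.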
Ask Bash
What is the difference between a unit test and an integration test?
Why is isolation important for unit tests?
What tools are commonly used to mock dependencies in unit testing?
Which description best exemplifies a loosely coupled component within a system architecture?
- Hardcoding the endpoint of a specific Lambda function for invocation from an EC2 instance 
- Calling a specific instance of an application server directly to process a task 
- Using a shared database table for direct communication between two microservices 
- Publishing messages to an SQS queue instead of making direct API calls to another service 
Answer Description
A loosely coupled component is designed to interact with other components with minimal dependencies or knowledge about them, often communicating through well-defined interfaces or messaging systems. Option 'Publishing messages to an SQS queue instead of making direct API calls to another service' is an example of loose coupling because it uses a message queue as an intermediary, which reduces the dependencies between the sender and the receiver. The other options describe situations that create direct dependencies, characterizing tight coupling, which is less desirable for scalability and resilience.
Ask Bash
What is SQS and why is it used in a system architecture?
How does loose coupling improve scalability and resilience in system architecture?
What are the key differences between tightly coupled and loosely coupled systems?
When designing a web service for processing financial transactions, what mechanism can a developer implement to prevent duplicate submissions from charging a customer multiple times?
- Generate a unique identifier for each operation, allowing the service to detect and ignore retries of transactions that have already been executed. 
- Integrate a distributed tracing service to handle de-duplication of transaction requests. 
- Track the status codes from previous submissions and use them to determine if the operation should be retried. 
- Record the timestamp for each operation and only process requests if subsequent submissions occur after a specific time interval. 
Answer Description
The correct approach to ensure idempotent operations is to issue a unique transaction identifier for each financial operation. This identifier enables the service to recognize repeat submissions and disregard them. Timestamps do not guarantee idempotency because requests may be retried automatically by clients without alteration. Relying on response codes as a method of idempotency is also unreliable because response codes can change with network conditions or server states and are typically not a part of the client's request. AWS X-Ray is a service for analyzing and debugging distributed applications and does not provide a direct mechanism for implementing idempotency.
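One common way to implement this with DynamoDB is a conditional write keyed on the idempotency identifier: the condition fails if the key was already recorded, so a retry of the same transaction is detected and skipped. The table name, key attribute, and surrounding flow are hypothetical.

```python
def idempotent_put_params(table_name, idempotency_key, payload):
    """Parameters for a conditional PutItem that succeeds only the first
    time a given idempotency key is seen."""
    return {
        "TableName": table_name,
        "Item": {"pk": {"S": idempotency_key}, "payload": {"S": payload}},
        "ConditionExpression": "attribute_not_exists(pk)",
    }

# Usage sketch (requires AWS credentials):
# import boto3
# from botocore.exceptions import ClientError
# dynamodb = boto3.client("dynamodb")
# try:
#     dynamodb.put_item(**idempotent_put_params("Charges", txn_id, body))
#     charge_customer(body)  # first submission: process the payment
# except ClientError as e:
#     if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
#         pass  # duplicate submission: already charged, ignore
#     else:
#         raise
```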
Ask Bash
How does a unique transaction identifier ensure idempotency in financial operations?
Why are timestamps not sufficient for preventing duplicate transactions?
What is idempotency, and why is it important in financial systems?
Your team is designing an AWS-based application where one component processes customer orders, and another component handles inventory management. Considering that you need to minimize the interdependence between these two components in case one of them fails, which approach would contribute to a more loosely coupled architecture?
- Using an Amazon Simple Queue Service (SQS) to facilitate message passing between the order-processing and inventory-management components 
- Creating database triggers within your order database to automatically update the inventory-management system in real time 
- Implementing synchronous REST API interactions between the two services, requiring an immediate response after an order is placed 
- Setting up a REST service within the inventory-management component that is called directly by the order-processing service for every new order 
Answer Description
Using an Amazon Simple Queue Service (SQS) queue to pass messages between the order-processing and inventory-management components supports a loosely coupled architecture. With SQS, if the inventory component is slow to process messages or temporarily unavailable, order processing is not directly affected because messages remain durable in the queue. In contrast, synchronous REST API interactions or database triggers introduce tight coupling by requiring immediate responses or shared data states, which can cascade failures when one component experiences issues.
Ask Bash
What is a loosely coupled architecture?
How does Amazon SQS work to ensure durability and fault tolerance?
What is the difference between synchronous API calls and asynchronous messaging?
Your company's new web application is required to authenticate users leveraging their existing social network accounts to streamline the sign-in process. Which service would you utilize to enable this feature while maintaining a seamless user experience and secure authentication process?
- AWS Security Token Service 
- AWS Identity and Access Management 
- AWS Directory Service 
- Amazon Cognito 
Answer Description
The correct service to use in this scenario is Amazon Cognito, as it provides federated identity management that allows for integration with social identity providers like Facebook, Google, and Amazon, along with SAML and OIDC identity providers. This service simplifies the integration of user authentication through existing social networks within your web application. AWS IAM is a service for managing permissions and is not designed for federated social identity providers. AWS STS provides temporary security credentials for short-term access but not social identity provider integration. AWS Directory Service is used for integrating AWS resources with an existing on-premises Active Directory but is not designed for direct integration with social media identity providers.
Ask Bash
How does Amazon Cognito integrate with social identity providers?
What is the difference between Amazon Cognito and AWS IAM?
What is federated identity and why is it important for user authentication?
An application processes a very high volume of messages. The architecture must guarantee that no messages are lost and must allow operators to inspect or re-process any message that fails after several processing attempts. Which design best meets these requirements?
- Deploy an Amazon Kinesis Data Stream with a Lambda consumer and rely on Lambda's on-failure destination to handle failed records. 
- Create an SQS delay queue to postpone the visibility of new messages and give the system time to recover from processing delays. 
- Apply an exponential backoff retry strategy in the Lambda function that processes the messages, without using a DLQ. 
- Set up an Amazon SQS queue and configure a dead-letter queue (DLQ) to capture messages that exceed a maximum receive count. 
Answer Description
Using an Amazon SQS standard (or FIFO) queue with an associated dead-letter queue (DLQ) is the most effective pattern. SQS automatically tracks the ReceiveCount for each message and, when that count exceeds the configured maxReceiveCount, moves the full message body to the DLQ. The message is durably stored for up to the DLQ's retention period and can be redriven to the source queue after debugging, ensuring that no data is lost.
A Kinesis Data Stream with a Lambda consumer supports on-failure destinations, but only metadata about the failed record is forwarded; the payload itself remains in the stream and eventually expires (24 hours by default, up to 365 days with extended retention). Additional logic is required to replay or persist the failed data. Lambda-level backoff retries or an SQS delay queue provide temporary resiliency but do not offer durable storage for repeatedly failing messages. Therefore, an SQS queue with a properly configured DLQ best satisfies the guarantee of no message loss and straightforward recovery.
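A sketch of wiring a source queue to a DLQ via the `RedrivePolicy` attribute; the queue names and `maxReceiveCount` of 5 are hypothetical choices.

```python
import json

def redrive_policy(dlq_arn, max_receive_count=5):
    """RedrivePolicy attribute value: once a message's receive count
    exceeds max_receive_count, SQS moves it to the dead-letter queue."""
    return json.dumps(
        {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": str(max_receive_count)}
    )

# Creating the pair (requires AWS credentials; names are placeholders):
# import boto3
# sqs = boto3.client("sqs")
# dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
# dlq_arn = sqs.get_queue_attributes(
#     QueueUrl=dlq_url, AttributeNames=["QueueArn"]
# )["Attributes"]["QueueArn"]
# sqs.create_queue(
#     QueueName="orders",
#     Attributes={"RedrivePolicy": redrive_policy(dlq_arn)},
# )
```

After debugging, messages can be moved back to the source queue with the DLQ redrive feature, so nothing is lost.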
Ask Bash
What is an Amazon SQS dead-letter queue (DLQ)?
How is message retention managed in an SQS queue?
What is the difference between an SQS standard queue and a delay queue?
Which architectural pattern is best described by a design that breaks down an application into smaller, interconnected services, each responsible for a specific business function?
- Choreography 
- Fanout 
- Microservices 
- Monolithic 
Answer Description
The microservices pattern is characterized by creating a suite of small, independently deployable services. Each service in a microservices architecture can be deployed, upgraded, scaled, and restarted independently of the other services in the application. This differs from a monolithic architecture, where all the functionality is handled within one large codebase, and from patterns such as event-driven architectures, where services interact primarily through events rather than direct communication.
Ask Bash
What are the key benefits of using a microservices architecture over a monolithic architecture?
How do microservices typically communicate with each other?
What are some common challenges when working with a microservices architecture?
Which tool offered by Amazon Web Services can developers use to invoke and debug their serverless functions locally, simulating the cloud environment on their own machine?
- AWS SAM CLI 
- AWS SDKs 
- Amazon CodePipeline 
- AWS CodeDeploy 
Answer Description
The correct tool for invoking and debugging serverless functions locally is AWS SAM CLI. It allows developers to test their Lambda functions in an environment similar to the actual AWS runtime. AWS CodePipeline and AWS CodeDeploy are used in deployment cycles and do not have the capability to run or test functions locally. The AWS SDKs are for interacting with AWS services in your application code, not for local invocation of functions.
Ask Bash
What is AWS SAM CLI and how does it simulate the cloud environment locally?
Why can't AWS CodePipeline or CodeDeploy invoke serverless functions locally?
How are AWS SDKs different from AWS SAM CLI when developing with Lambda?
An application running on Amazon ECS makes HTTP API calls to an AWS service. Occasionally the service returns 5xx errors caused by transient throttling. The development team needs to implement a client-side technique that improves resilience without creating traffic spikes that could overwhelm the service. Which approach meets these requirements?
- Retry failed requests after a fixed delay between attempts 
- Implement retries with exponential backoff and jitter 
- Increase the request payload size with every retry 
- Send multiple parallel requests each time a call fails 
Answer Description
Retries with exponential backoff and jitter spread retry attempts over progressively longer, randomized intervals. This reduces the chance of synchronized retry storms and gives the downstream AWS service time to recover. Fixed-delay retries, parallel requests, or increasing the request payload do not mitigate overload and can amplify failures.
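Rather than hand-rolling the retry loop, AWS SDK clients can be configured to do this themselves; boto3's `standard` retry mode applies exponential backoff with jitter automatically. A sketch (the client choice is illustrative):

```python
def sdk_retry_options(max_attempts=5):
    """Retry options for botocore's 'standard' mode, which applies
    exponential backoff with jitter to eligible failures automatically."""
    return {"max_attempts": max_attempts, "mode": "standard"}

# Wiring it into a client (requires boto3 and AWS credentials):
# import boto3
# from botocore.config import Config
# client = boto3.client("dynamodb", config=Config(retries=sdk_retry_options()))
```

The `adaptive` mode goes further by adding client-side rate limiting on top of the backoff behavior.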
Ask Bash
What is exponential backoff in the context of retries?
Why is jitter important in combination with exponential backoff?
What happens if retries aren't implemented correctly in fault-tolerant applications?
You are working on a serverless application where each component is designed to execute a distinct operation within an e-commerce checkout process. During the development cycle, you want to confirm that each component functions independently and as expected without making actual calls to cloud services. What technique should you employ within your unit tests to simulate the behavior of the external dependencies?
- Create instances of client objects specific to cloud resources within your unit tests 
- Reference recorded responses from an object storage service during test execution 
- Configure an API management service to handle dependencies during test runs 
- Utilize SDK mocking utilities to emulate the behavior of external service calls 
Answer Description
To simulate the behavior of external dependencies, you should utilize mocking utilities provided with the SDK for your chosen programming language. This ensures that tests can run without the need for actual service calls, allowing for true unit testing by isolating the code from external interactions. Creating real client instances during testing would result in calls to the actual services, which contradicts the principles of unit testing. Similarly, configuring API Gateway to manage test dependencies creates actual service interactions. Using sample responses from an object storage service introduces external dependency to tests, which is contrary to the concept of isolation in unit testing.
Ask Bash
What is SDK mocking?
Why should unit tests avoid calling real services?
How does SDK mocking differ from API management during testing?
A developer is designing a new application that processes sensitive financial data. The application will store processed data in Amazon S3. For compliance reasons, the data must be encrypted at all times. Which type of encryption should the developer use to ensure that the data is encrypted before it leaves the application's host and remains encrypted in transit and at rest within Amazon S3?
- Use server-side encryption with Amazon S3 managed keys (SSE-S3) when uploading the data. 
- Activate default S3 bucket encryption with an AWS Key Management Service (KMS) managed key. 
- Implement client-side encryption using a customer-managed key prior to uploading the data to Amazon S3. 
- Enable Secure Socket Layer (SSL) on the application's server and rely on S3 bucket policies to handle encryption. 
Answer Description
Client-side encryption is the correct approach because it meets the requirement to encrypt data before it leaves the application's host. By encrypting the data on the client side, it is protected prior to transmission, during transit to Amazon S3, and while at rest in the S3 bucket.
- Server-side encryption options, such as Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) or with AWS KMS keys (SSE-KMS), are incorrect because the encryption occurs on the AWS side after the data is received by Amazon S3. This does not fulfill the requirement to have the data encrypted before it leaves the application environment. 
- Using only Secure Socket Layer (SSL)/Transport Layer Security (TLS) is insufficient. While SSL/TLS encrypts data in transit, it does not encrypt the data on the host before it is sent or keep it encrypted at rest within the S3 bucket; server-side encryption would still be required for encryption at rest. 
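The flow above can be sketched as a small function that enforces the ordering: encrypt on the application host first, then hand only ciphertext to the upload call. The `encrypt` and `upload` callables are injected stand-ins; in production they might be the third-party `cryptography` package's `Fernet.encrypt` and boto3's `put_object`, as sketched in the comments.

```python
def encrypt_then_upload(plaintext: bytes, encrypt, upload):
    """Client-side encryption pattern: the plaintext never leaves the
    host; only ciphertext is passed to the storage upload."""
    ciphertext = encrypt(plaintext)
    assert ciphertext != plaintext  # sanity check: data is transformed
    upload(ciphertext)
    return ciphertext

# Hypothetical production wiring (assumes the `cryptography` package):
# from cryptography.fernet import Fernet
# import boto3
# f = Fernet(customer_managed_key)
# s3 = boto3.client("s3")
# encrypt_then_upload(
#     b"account=42;amount=10.00",
#     f.encrypt,
#     lambda body: s3.put_object(Bucket="finance-data", Key="txn.enc", Body=body),
# )
```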
Ask Bash
What is client-side encryption, and how does it differ from server-side encryption?
How does enabling Secure Socket Layer (SSL) compare to client-side encryption for securing data in this case?
What are the benefits of using a customer-managed key for client-side encryption?
You are developing a RESTful API. When a client sends a POST request to create a new resource, but the resource already exists, which HTTP status code should your API return to best adhere to standard practices?
- 409 Conflict 
- 500 Internal Server Error 
- 400 Bad Request 
- 202 Accepted 
Answer Description
According to standard HTTP status code definitions, 409 Conflict is the appropriate response when a request conflicts with the current state of the server, such as trying to create a resource that already exists. A 400 Bad Request implies general client errors, 202 Accepted is used when a request has been accepted for processing but the process has not been completed, and 500 Internal Server Error indicates a server error, not a conflict of resource state.
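A minimal handler sketch showing the status-code choice (the in-memory store and resource shape are hypothetical; a real API would sit behind a web framework):

```python
from http import HTTPStatus

def create_resource(store, resource_id, body):
    """POST handler sketch: 201 Created on success, 409 Conflict when
    the resource already exists."""
    if resource_id in store:
        return HTTPStatus.CONFLICT, {"error": f"resource {resource_id} already exists"}
    store[resource_id] = body
    return HTTPStatus.CREATED, body
```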
Ask Bash
What is the meaning of the 409 Conflict HTTP status code?
How does a 400 Bad Request differ from a 409 Conflict?
When would you use the 202 Accepted status code in a RESTful API?
Which design pattern can improve the resiliency of a microservices architecture by using a publish/subscribe service that distributes each event concurrently to multiple independent message queues so that every consumer processes the event in isolation?
- Expose every microservice through synchronous REST API calls and invoke them sequentially from the producer service. 
- Store all events in a shared relational database table that each microservice queries for new records. 
- Send every message to a single Amazon SQS queue that all consumer services poll in turn. 
- Configure an Amazon SNS topic that fans out each published message to a dedicated Amazon SQS queue for every consumer microservice. 
Answer Description
Using an Amazon SNS topic to fan out events to separate Amazon SQS queues is a proven pattern for decoupling producers and consumers. Each microservice reads from its own queue, so a failure or slowdown in one consumer does not block others. The queues buffer messages, add durability, and let each service scale at its own pace, increasing overall fault tolerance of the system.
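A sketch of the fan-out wiring: one subscription per consumer queue, so every published message is delivered to all of them independently. The topic and queue names are hypothetical.

```python
def fanout_subscriptions(topic_arn, queue_arns):
    """One SNS subscription per consumer SQS queue."""
    return [
        {"TopicArn": topic_arn, "Protocol": "sqs", "Endpoint": arn}
        for arn in queue_arns
    ]

# Creating the topology (requires AWS credentials; names are placeholders):
# import boto3
# sns, sqs = boto3.client("sns"), boto3.client("sqs")
# topic_arn = sns.create_topic(Name="order-events")["TopicArn"]
# queue_arns = []
# for name in ("billing-queue", "shipping-queue", "analytics-queue"):
#     url = sqs.create_queue(QueueName=name)["QueueUrl"]
#     queue_arns.append(sqs.get_queue_attributes(
#         QueueUrl=url, AttributeNames=["QueueArn"]
#     )["Attributes"]["QueueArn"])
# for sub in fanout_subscriptions(topic_arn, queue_arns):
#     sns.subscribe(**sub)
```

Each queue also needs an access policy allowing the SNS topic to send messages to it; that policy is omitted from the sketch.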
Ask Bash
What is Amazon SNS and how does it work with Amazon SQS?
Why is fan-out architecture better for microservices compared to a shared relational database?
What are the advantages of using Amazon SNS with Amazon SQS over a single shared SQS queue?
Smashing!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.