AWS Certified Developer Associate Practice Test (DVA-C02)
Use the form below to configure your AWS Certified Developer Associate Practice Test (DVA-C02). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

AWS Certified Developer Associate DVA-C02 Information
AWS Certified Developer - Associate showcases knowledge and understanding of core AWS services, uses, and basic AWS architecture best practices, and proficiency in developing, deploying, and debugging cloud-based applications by using AWS. Preparing for and attaining this certification gives certified individuals more confidence and credibility. Organizations with AWS Certified developers have the assurance of having the right talent to give them a competitive advantage and ensure stakeholder and customer satisfaction.
The AWS Certified Developer - Associate (DVA-C02) exam is intended for individuals who perform a developer role. The exam validates a candidate’s ability to demonstrate proficiency in developing, testing, deploying, and debugging AWS Cloud-based applications. The exam also validates a candidate’s ability to complete the following tasks:
- Develop and optimize applications on AWS.
- Package and deploy by using continuous integration and continuous delivery (CI/CD) workflows.
- Secure application code and data.
- Identify and resolve application issues.

Free AWS Certified Developer Associate DVA-C02 Practice Test
- 20 Questions
- Unlimited time
- Domains: Development with AWS Services, Security, Deployment, Troubleshooting and Optimization
What process involves converting an object into a data format that can be stored and reconstructed later?
Normalizing
Decoding
Serialization
Encoding
Answer Description
Serialization is the process of converting an object into a data format that can be easily stored or transmitted and later deserialized to recreate the original object. This concept is key in application development for storing data in a format that the data store can understand and interact with. Deserialization is the reverse process, taking data structured in a specific format and building it back into an object. Encoding and Decoding refer to the process of converting data from one form to another, particularly when discussing character sets (such as UTF-8) or binary data. They can be part of the serialization process, but in themselves are not equivalent to serializing or deserializing objects.
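As a quick illustration, here is a minimal Python sketch of the round trip: an in-memory object is serialized to JSON text for storage or transmission, then deserialized back into an equivalent object. The object and field names are invented for the example.

```python
import json

# An in-memory object (a dict standing in for an application object)
order = {"order_id": "1001", "items": ["keyboard", "mouse"], "total": 54.99}

# Serialization: convert the object into a format that can be stored or transmitted
serialized = json.dumps(order)

# Deserialization: rebuild an equivalent object from the stored representation
restored = json.loads(serialized)
assert restored == order
```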
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why is serialization important in application development?
What are commonly used serialization formats?
How does deserialization impact security?
A developer needs to securely manage the storage and retrieval of database login credentials for an application hosted on Amazon EC2 instances. The application code requires these credentials to establish database connections at runtime. Which method is the recommended best practice for handling these credentials securely?
Use a secret management service like AWS Secrets Manager to store the database credentials, allowing the application to retrieve them securely at runtime.
Create an IAM policy that grants the EC2 instance profile the necessary permissions to connect to the database without needing to store explicit credentials.
Embed the database credentials directly in the source code of the application after encrypting them with a basic encryption algorithm.
Configure environment variables on the EC2 instances to hold the login details, and read them directly within the application when needed.
Answer Description
The correct method for handling sensitive login information is to utilize a dedicated secret management service, such as AWS Secrets Manager or AWS Systems Manager Parameter Store. These services provide a centralized and secure repository to store, manage, retrieve, and rotate credentials. Storing sensitive information in plaintext within application code or in environment variables on the server is an insecure practice that can lead to inadvertent exposure in logs, version control, or to anyone with access to the server environment. While IAM database authentication is a secure method that uses an access policy to allow EC2 instances to connect to supported RDS databases without a password, a secret management service is the more general and recommended solution for storing and managing explicit credentials for any type of database.
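A minimal boto3 sketch of the recommended approach follows; the secret name and its JSON fields are hypothetical, and the EC2 instance profile is assumed to grant secretsmanager:GetSecretValue on that secret.

```python
import json
import boto3

SECRET_NAME = "prod/app/db-credentials"  # hypothetical secret name

client = boto3.client("secretsmanager")

# Retrieve the credentials at runtime instead of embedding them in code or config
response = client.get_secret_value(SecretId=SECRET_NAME)
credentials = json.loads(response["SecretString"])

db_user = credentials["username"]      # field names depend on how the secret was stored
db_password = credentials["password"]
```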
Ask Bash
Why is AWS Secrets Manager considered a best practice for managing credentials?
How does AWS Secrets Manager help with credential rotation?
What is the difference between Secrets Manager and AWS Systems Manager Parameter Store?
An application includes an AWS Lambda function that is triggered when an object is uploaded to an Amazon S3 bucket. The function extracts metadata from the file and stores it in an Amazon DynamoDB table. The development team is adding a CI workflow that will run unit tests on every commit. They want the tests to execute quickly on build servers without provisioning AWS resources or incurring charges, while still confirming that the function calls S3 and DynamoDB with the expected parameters. Which approach should they use?
Create a dedicated developer AWS account and provision the S3 bucket and DynamoDB table as part of test setup.
Enable AWS X-Ray tracing and analyze the service calls recorded during test execution.
Configure a Lambda alias named test and invoke that alias from the unit tests.
Use a mocking framework or AWS SDK stubber to simulate Amazon S3 and Amazon DynamoDB in the unit tests.
Answer Description
A mocking framework (or an AWS SDK stubber such as boto3's Stubber or the AWS SDK for JavaScript's mock-client) can replace the S3 and DynamoDB clients inside the unit tests. The mock objects intercept calls, return predefined responses, and allow assertions on the parameters the function sends, all without creating AWS resources or generating cost. Spinning up real S3 buckets and DynamoDB tables, invoking a separate Lambda alias, or enabling AWS X-Ray would add cost and provisioning time, or would fail to isolate the code under test, so those approaches are less suitable for fast, repeatable unit tests.
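For example, a unit test might use botocore's Stubber to intercept an S3 call made by the function under test; the bucket, key, and response values below are placeholders.

```python
import boto3
from botocore.stub import Stubber

s3 = boto3.client("s3", region_name="us-east-1")
stubber = Stubber(s3)

# Queue a canned response and the exact parameters the code under test should send
stubber.add_response(
    "head_object",
    {"ContentLength": 1024, "ContentType": "image/png"},
    {"Bucket": "example-bucket", "Key": "uploads/photo.png"},
)

with stubber:
    # The code under test would make this call; the stub intercepts it,
    # so no real bucket is needed and no charges are incurred.
    metadata = s3.head_object(Bucket="example-bucket", Key="uploads/photo.png")
    assert metadata["ContentLength"] == 1024

# Fails the test if an expected call was never made
stubber.assert_no_pending_responses()
```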
Ask Bash
How does a mocking framework work in unit tests?
What is the AWS SDK Stubber?
Why is using real AWS resources not suitable for unit tests?
Your team has developed a new feature that makes asynchronous API calls to a backend service, and you need to verify its behavior in a production-like environment. Which service would provide the BEST approach to test this feature before it is released to production, considering you want to detect any integration issues as early as possible?
Use AWS X-Ray to trace the requests made by the new feature and analyze the service response times.
Deploy the new feature using a canary deployment with AWS CodeDeploy to test its behavior with a small percentage of traffic.
Run the feature on a selection of real devices using AWS Device Farm to test different network conditions and device interaction.
Initiate a build using AWS CodeBuild to integrate the new feature and run unit tests against the updated backend service.
Answer Description
AWS CodeDeploy allows you to deploy the application incrementally across instances and track the health of the deployment. By deploying to a subset of instances first (a canary deployment), it can emulate production behavior, allowing you to monitor and test the new feature while limiting the impact of potential issues before deploying to all instances. AWS X-Ray is primarily used for analyzing and debugging production applications, not for pre-deployment testing. AWS Device Farm focuses on testing apps on real mobile devices, which is not relevant to testing backend service API calls, and AWS CodeBuild compiles source code and runs unit tests, which does not necessarily capture integration behavior such as asynchronous API calls in a production-like environment.
Ask Bash
What is a canary deployment in AWS CodeDeploy?
How does AWS X-Ray differ from AWS CodeDeploy in testing new features?
When should you use AWS Device Farm versus AWS CodeDeploy?
An application utilizes a serverless function to handle web requests and needs to retrieve data from a relational database safely contained within a subnet that is not exposed to the public internet. Which step is vital for the serverless function to communicate effectively with the database?
Establish a VPC Peering connection to enable the serverless function hosted on one VPC to access the database in another VPC.
Enhance the computational resources of the function by increasing its allocated memory capacity.
Alter the database settings to make it publicly accessible, thereby simplifying the function's access to it.
Configure the serverless function with specific network settings that include the VPC ID, subnet identifiers within the VPC, and security group identifiers.
Answer Description
To enable secure communication between a serverless function (such as AWS Lambda) and a relational database instance located within a private subnet of a Virtual Private Cloud (VPC), the developer must configure the serverless function with the specific network settings corresponding to that VPC. This includes the ID of the VPC, the subnets within the VPC where the database is situated, and the security groups that govern the allowed network traffic to and from the serverless function. This will provide the network path necessary for the function to access the database without exposing it to the public internet. Increasing the function's memory does not directly enable network connectivity to the VPC resources. A VPC Peering connection is unnecessary and inappropriate when both resources are in the same VPC. Opening public access to the database contradicts the requirement for private access and raises significant security risks.
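A rough sketch of that configuration using boto3 is shown below; the function name, subnet ID, and security group ID are placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

# Attach the function to the VPC by specifying the private subnets and the
# security group that is allowed to reach the database port.
lambda_client.update_function_configuration(
    FunctionName="order-api-handler",                    # hypothetical function
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234def567890"],       # private subnet(s) hosting the DB
        "SecurityGroupIds": ["sg-0123456789abcdef0"],    # SG permitted by the DB's security group
    },
)
```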
Ask Bash
What is a VPC in AWS?
What is a security group and how does it work?
Why use private subnets for databases in a VPC?
A company is looking to encrypt data at rest for their Amazon DynamoDB table, which contains sensitive information. They want to guarantee that the encryption does not affect the performance of their application. Which service should they use to accomplish this without managing server-side encryption themselves?
Force all connections to the DynamoDB table to use SSL/TLS
Enable Amazon DynamoDB's default encryption at rest using AWS managed keys
Create an IAM role with a policy that enforces encryption at rest
Implement client-side encryption before storing the data in the DynamoDB table
Answer Description
Using AWS managed encryption with Amazon DynamoDB provides transparent data encryption at rest without affecting the performance of the application. It uses AWS Key Management Service (AWS KMS) to manage the encryption keys, which eliminates the overhead of managing server-side encryption directly. While client-side encryption could also protect data at rest, it would add complexity to the application and could impact performance. Additionally, SSL/TLS ensures encryption in transit but does not encrypt data at rest, and IAM roles are used for access control and do not address encryption needs.
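As a sketch, KMS-based server-side encryption can be requested when the table is created; the table and attribute names here are hypothetical.

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="CustomerRecords",
    AttributeDefinitions=[{"AttributeName": "customer_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "customer_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    # Request server-side encryption with the AWS managed KMS key for DynamoDB;
    # encryption is transparent to the application and its read/write calls.
    SSESpecification={"Enabled": True, "SSEType": "KMS"},
)
```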
Ask Bash
What is AWS KMS and how does it manage encryption keys?
Why is default encryption at rest better for performance compared to client-side encryption?
What is the difference between encryption in transit and encryption at rest?
Which service offers the capability to obtain temporary, privileged credentials for making authenticated requests to various resources, particularly in scenarios involving identity federation or cross-account access?
Security Token Service
Cognito User Identity Pools
Simple Storage Service
Key Management Service
Answer Description
The correct service for retrieving short-term, privileged credentials is the Security Token Service, which is often used in conjunction with federated user authentication or for assuming roles. These credentials are constrained by time and the permissions defined in their associated policies, fostering a secure environment that adheres to the principle of least privilege. The Security Token Service is a component of the overarching identity management system provided by AWS but specializes in this temporary credential issuance. In comparison, Cognito is mostly oriented towards user identity and access management in apps, whereas the Key Management Service is involved in creating and controlling encryption keys, not in issuing temporary security credentials based on permission policies.
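A short boto3 sketch of assuming a role through STS follows; the role ARN and session name are hypothetical.

```python
import boto3

sts = boto3.client("sts")

# Exchange the caller's identity for temporary credentials scoped to the role's policy
response = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/cross-account-read",  # hypothetical role
    RoleSessionName="reporting-job",
    DurationSeconds=900,  # credentials expire after 15 minutes
)

creds = response["Credentials"]

# Use the temporary credentials to call another service in the target account
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```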
Ask Bash
What is the purpose of AWS Security Token Service (STS)?
What is identity federation in the context of AWS STS?
How does AWS STS differ from Amazon Cognito?
An application is experiencing intermittent failures due to throttling when making requests to an AWS service. Which implementation would BEST enhance the application's resiliency by addressing these specific failures?
Implementing retries with exponential backoff and jitter
Increasing the timeout settings for the API requests
Migrating the application to run on larger EC2 instances
Creating additional replicas of the application's database
Answer Description
Implementing retries with exponential backoff and jitter is considered a best practice when dealing with intermittent failures, such as throttling by an AWS service. This approach gradually increases the wait time between retries, reducing the potential for recurring conflicts and mitigating the risk of overwhelming the service with repeated requests. Jitter further randomizes the wait times, helping to prevent synchronized retries from multiple sources, which could lead to additional throttling. The other options may provide benefits in different contexts but do not directly address the issue of intermittent failures due to throttling as effectively as exponential backoff with jitter.
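The pattern can be sketched in a few lines of Python; note that the AWS SDKs also ship configurable retry modes, so a hand-rolled loop like this is usually only needed around custom logic.

```python
import random
import time

def call_with_backoff(operation, max_attempts=5, base_delay=0.2, max_delay=5.0):
    """Retry a throttled call with exponential backoff and full jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:  # in practice, catch the SDK's throttling/limit-exceeded error
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff capped at max_delay, randomized with full jitter
            delay = random.uniform(0, min(max_delay, base_delay * (2 ** attempt)))
            time.sleep(delay)
```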
Ask Bash
What is exponential backoff in AWS?
What is jitter and how does it improve exponential backoff?
What AWS services commonly use retries with exponential backoff and jitter?
When designing a system that requires the capability to process a high number of messages per second, which service feature should be used to ensure the most efficient processing while maintaining the ordering of messages?
Implementing Dead Letter Queues in Amazon SQS
Using Amazon SQS FIFO Queues with Message Group IDs
Utilizing Amazon SNS with Content-Based Filtering
Enabling Long Polling on Amazon SQS Standard Queues
Answer Description
Amazon SQS FIFO (First-In-First-Out) queues offer exactly-once processing and strictly preserve the order in which messages are sent and received. Standard queues provide higher throughput but do not guarantee exact ordering, while FIFO queues have lower default throughput. In scenarios where both high throughput and message ordering are critical, combining FIFO queues with message group IDs effectively partitions the message space into multiple ordered streams within the same queue, maintaining high throughput while ensuring the order of messages within each group ID. The other options presented do not offer this combination of high throughput and message ordering.
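A minimal boto3 sketch of sending to a FIFO queue follows; the queue URL, group ID, and deduplication ID are placeholders.

```python
import boto3

sqs = boto3.client("sqs")

# Hypothetical FIFO queue (FIFO queue names must end in .fifo)
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/orders.fifo"

sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody='{"order_id": "1001", "status": "CREATED"}',
    MessageGroupId="customer-42",            # ordering is preserved within this group
    MessageDeduplicationId="order-1001-v1",  # or enable content-based deduplication instead
)
```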
Ask Bash
What is the role of Message Group IDs in Amazon SQS FIFO queues?
How do FIFO queues differ from Standard queues in Amazon SQS?
When should I use Dead Letter Queues in Amazon SQS?
A development team must secure sensitive customer files in cloud-based object storage. The requirements stipulate that the encryption keys used should be under the company's direct control, with an automated process for changing these keys periodically. Which service and configuration would best fulfill these criteria?
Enable a managed key rotation service within the platform's cloud object storage
Engage a private certificate authority to apply server-side encryption policies to the cloud storage
Use self-managed keys in Key Management Service set to automatically rotate for object storage server-side encryption
Adopt a hardware security module service for key storage and institute a manual rotation process
Answer Description
The appropriate service for the creation and administration of encryption keys in the cloud is Key Management Service (KMS), which provides options to both create your own encryption keys and configure them for automatic rotation. Utilizing KMS keys fulfills the need for encryption at rest within S3 while also adhering to the company's guidelines for regular key rotation. Relying on the platform's own managed keys would preclude direct control and rotation management. Certificate Manager's primary use case is issuing certificates for TLS, not storage encryption, while CloudHSM is tailored for specific compliance requirements and does not natively provide automatic key rotation.
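A rough boto3 sketch of that setup follows; the key description, bucket name, and object key are placeholders.

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Create a customer managed key and turn on automatic rotation
key = kms.create_key(Description="Key for customer file encryption")
key_id = key["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Use the key for server-side encryption when writing an object to S3
s3.put_object(
    Bucket="customer-files-example",          # hypothetical bucket
    Key="customers/12345/contract.pdf",
    Body=b"file contents",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=key_id,
)
```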
Ask Bash
What is the Key Management Service (KMS)?
How does automatic key rotation work in KMS?
Why is AWS Certificate Manager (ACM) not suitable for this use case?
A company needs to update its web application running on AWS without disrupting the user experience. Which deployment strategy should they employ to ensure that only a portion of users are directed to the new version initially, thereby allowing for performance and stability monitoring before the new version is fully deployed?
Canary deployment
Rolling deployment
Blue/green deployment
All-at-once deployment
Answer Description
The correct answer is a canary deployment strategy. A canary deployment involves deploying the new version of an application to a small subset of users before fully rolling it out. This allows developers to monitor the performance and stability of the application with live traffic and reduces the risk of introducing issues that could affect all users. In contrast, a blue/green deployment swaps the entire environment at once, which does not meet the requirement for gradual exposure. A rolling deployment updates instances in batches but, again, does not route a selected portion of traffic to the new version for monitoring. An all-at-once deployment updates all instances simultaneously, so it does not allow for gradual exposure either.
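For instance, with AWS CodeDeploy a canary deployment can be requested by choosing a predefined canary deployment configuration; the application, deployment group, and AppSpec content below are hypothetical.

```python
import boto3

codedeploy = boto3.client("codedeploy")

# The predefined Lambda canary configuration shifts 10% of traffic to the new
# version, waits 5 minutes for monitoring, then shifts the remaining traffic.
codedeploy.create_deployment(
    applicationName="web-app",                 # hypothetical application
    deploymentGroupName="web-app-prod",        # hypothetical deployment group
    deploymentConfigName="CodeDeployDefault.LambdaCanary10Percent5Minutes",
    revision={
        "revisionType": "AppSpecContent",
        "appSpecContent": {"content": "..."},  # AppSpec pointing at the new version
    },
)
```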
Ask Bash
What is a canary deployment in software development?
How does canary deployment differ from blue/green deployment?
What monitoring tools are commonly used during a canary deployment on AWS?
A development team manages a critical serverless application using Amazon API Gateway and AWS Lambda. They have a single REST API definition that needs to be deployed across three distinct environments: dev, test, and prod. Each environment requires the API to integrate with a different backend Lambda function (e.g., my-function-dev, my-function-test, and my-function-prod). The team wants to avoid duplicating the entire API definition for each environment and needs a dynamic way to specify the correct Lambda function based on the deployment stage. Which API Gateway feature should the developer use to meet these requirements most efficiently?
Comprehensive resource duplication for each environment
Exclusive deployment of separate Lambda functions per environment
Isolation of environments using network ACLs and security groups
Stage variables
Answer Description
Stage variables are the correct feature for this scenario. They are name-value pairs that can be defined for each deployment stage of a REST API, acting like environment variables. By defining a variable in each stage (e.g., lambdaName) and setting its value to the corresponding function name (my-function-dev, my-function-test), the API integration can dynamically reference this variable (${stageVariables.lambdaName}) to invoke the correct backend Lambda function without changing the core API definition. Comprehensive resource duplication is inefficient and costly. Network ACLs and security groups provide network-level isolation, but do not help configure the API's backend integration. Exclusively deploying functions per environment is part of the solution, but it is the stage variables that allow API Gateway to dynamically route to them.
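A sketch of the integration setup with boto3 follows; the API ID, resource ID, region, and account number are placeholders.

```python
import boto3

apigateway = boto3.client("apigateway")

# The ${stageVariables.lambdaName} token is resolved per stage at request time,
# so dev/test/prod stages can invoke different backend functions.
uri = (
    "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/"
    "arn:aws:lambda:us-east-1:111122223333:function:${stageVariables.lambdaName}/invocations"
)

apigateway.put_integration(
    restApiId="abc123defg",      # hypothetical REST API ID
    resourceId="xyz789",         # hypothetical resource ID
    httpMethod="GET",
    type="AWS_PROXY",
    integrationHttpMethod="POST",
    uri=uri,
)

# Each stage supplies its own value for the variable, e.g. when deploying to dev
apigateway.create_deployment(
    restApiId="abc123defg",
    stageName="dev",
    variables={"lambdaName": "my-function-dev"},
)
```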
Ask Bash
What are stage variables in AWS API Gateway?
How do stage variables differ from duplicating API resources for each environment?
When should you use stage variables instead of services like AWS AppConfig?
A development team is working on a web application hosted on AWS. The team uses AWS CodeCommit as their version control system and follows a standard naming convention for branches, where 'main' is used for production, 'develop' for staging, and feature branches are created off 'develop' for new features. They decide to work on a new feature and want to ensure it goes through proper testing in staging before being merged into 'main' for production. Which of the following branching strategies should the development team follow to best adhere to their workflow and ensure that the new feature is properly tested and reviewed before deploying to production?
Create a feature branch off another existing feature branch to work on the new functionality.
Merge the new feature code directly into 'main' from the developer's local machine.
Create a new feature branch from 'develop', commit changes to this branch, and upon completion, merge back into 'develop' for staging.
Branch off from 'main', work on the new feature, and then merge back into 'main' when testing is concluded.
Answer Description
The development team should create a new branch from 'develop', work on the feature, and then merge back into 'develop' once it passes all checks and tests. This strategy aligns with their existing workflow, where 'develop' is used as the staging branch for pre-production readiness. Merging the feature directly into 'main' bypasses the staging environment, and creating a branch from 'main' is not appropriate because 'main' represents the production-ready state. Branching off another feature branch deviates from the typical use of feature branches, which are meant to be short-lived and to branch off from a common development or staging branch.
Ask Bash
Why is 'develop' used as the staging branch in the workflow?
What are the benefits of feature branches in version control systems like AWS CodeCommit?
What happens if feature code is merged directly into 'main' without using staging?
Which tool offered by Amazon Web Services can developers use to invoke and debug their serverless functions locally, simulating the cloud environment on their own machine?
AWS CodeDeploy
AWS CodePipeline
AWS SAM CLI
AWS SDKs
Answer Description
The correct tool for invoking and debugging serverless functions locally is AWS SAM CLI. It allows developers to test their Lambda functions in an environment similar to the actual AWS runtime. AWS CodePipeline and AWS CodeDeploy are used in deployment cycles and do not have the capability to run or test functions locally. The AWS SDKs are for interacting with AWS services in your application code, not for local invocation of functions.
Ask Bash
What is AWS SAM CLI and how does it simulate the cloud environment locally?
Why can't AWS CodePipeline or CodeDeploy invoke serverless functions locally?
How are AWS SDKs different from AWS SAM CLI when developing with Lambda?
A development team needs to store access credentials for a production database securely. These credentials must be retrievable programmatically by their web application without embedding them directly into the source code or configuration files. Which service should the team use to achieve this with the ability to rotate secrets periodically and audit access?
Web Application Firewall
Certificate Manager
Secrets Manager
Parameter Store
Answer Description
The service specifically suited for storing and managing secrets, such as database credentials, is Secrets Manager. It not only allows secure storage and retrieval but also offers features for automatic rotation of secrets and access auditing, matching the team's requirements. Parameter Store does provide secure storage for configuration data and secrets, but it lacks some of the advanced secret management functionality, such as built-in secret rotation, that Secrets Manager provides. WAF and Certificate Manager serve different purposes entirely: WAF protects web applications from common web exploits, and Certificate Manager deals with SSL/TLS certificates, so neither is an appropriate choice for managing database credentials.
Ask Bash
What is AWS Secrets Manager?
How does automatic secret rotation work in Secrets Manager?
How is Secrets Manager different from Parameter Store?
Which technique should be implemented to develop a resilient and fault-tolerant application that can handle intermittent failures when making HTTP requests to an AWS service?
Decreasing the timeout with each retry attempt
Increasing the payload size with each retry attempt
Retries with exponential backoff and jitter
Using static waits between retry attempts
Retries with constant backoff
Sending multiple parallel requests to expedite processing
Answer Description
Implementing retries with exponential backoff and jitter is a fault-tolerant design pattern that helps an application handle intermittent failures gracefully by waiting longer between each failed attempt, reducing the likelihood of overwhelming the service with high volumes of requests. Retrying with constant backoff or static waits does not spread out retry traffic and can continue to overload the service, potentially exacerbating failure conditions. Decreasing the timeout, increasing the payload size, or sending multiple parallel requests does not address the underlying intermittent failures and can actually worsen the situation.
Ask Bash
What is exponential backoff in the context of retries?
Why is jitter important in combination with exponential backoff?
What happens if retries aren't implemented correctly in fault-tolerant applications?
When retrieving data from a DynamoDB table, which operation allows you to use a partition key to narrow down results and retrieve only the items that match your specific criteria?
Query operation
Fetch operation
Scan operation
Lookup operation
Answer Description
A Query operation in Amazon DynamoDB is the most efficient way to retrieve data by using a partition key (and an optional sort key) to narrow the results to the items that match your criteria. The operation finds items in a table or a secondary index using the primary key attribute values you specify. This leads to faster and more efficient data retrieval than a Scan operation, which reads every item in the table or index and can consume more read throughput, resulting in higher latency and cost.
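A short boto3 sketch of a Query follows; the table, key names, and values are hypothetical.

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")  # assumes partition key customer_id and sort key order_date

# Only items under the given partition key are read, optionally narrowed by the sort key
response = table.query(
    KeyConditionExpression=Key("customer_id").eq("42")
    & Key("order_date").begins_with("2024-")
)
items = response["Items"]
```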
Ask Bash
What is the difference between a Query operation and a Scan operation in DynamoDB?
What are primary key attributes in DynamoDB?
When should you use a secondary index in DynamoDB for querying data?
A developer is integrating a serverless function with a third-party web service. The developer needs to confirm that the function reacts appropriately under different response conditions, including successful data retrieval, errors, and timeouts, without interacting with the actual service. What is the most effective method to mimic the third-party service and assess the function's various operational responses?
Embed static response conditions within the serverless function code to facilitate response scenario assessment.
Establish an Amazon API Gateway and configure mock integrations for reproducing varied operational scenarios.
Activate the serverless function in a live setting with enhanced logging to track its handling of different operational conditions.
Implement an auxiliary serverless function to reproduce the behavior of the third-party service for testing purposes.
Answer Description
Setting up an Amazon API Gateway with mock integrations is the most effective method to mimic third-party web service responses. It enables the developer to configure the expected responses (such as those indicating a successful operation, an error, or a timeout) without relying on the real web service. This provides a controlled environment for thorough assessment of the serverless function's response to diverse scenarios. Other methods, such as hard-coding responses or utilizing additional serverless functions for emulation, do not offer the same level of flexibility or convenience as API Gateway's mock integrations for simulating a wide spectrum of behaviors.
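As a rough sketch, a mock integration and its canned response could be configured like this; the API ID, resource ID, and response body are placeholders, and a matching method response for status 200 is assumed to exist.

```python
import boto3

apigateway = boto3.client("apigateway")

# A MOCK integration returns responses from API Gateway itself, so the real
# third-party service is never called during the test.
apigateway.put_integration(
    restApiId="abc123defg",
    resourceId="xyz789",
    httpMethod="GET",
    type="MOCK",
    requestTemplates={"application/json": '{"statusCode": 200}'},
)

# Map the mocked 200 result to a canned body the function under test will receive
apigateway.put_integration_response(
    restApiId="abc123defg",
    resourceId="xyz789",
    httpMethod="GET",
    statusCode="200",
    responseTemplates={"application/json": '{"message": "simulated success"}'},
)
```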
Ask Bash
What are mock integrations in Amazon API Gateway?
How do mock integrations differ from deploying a live test setup?
Why is using mock integrations more efficient than hardcoding responses in the function code?
A development team needs an automated workflow that, whenever source code is committed, compiles the code, runs tests, and deploys the resulting artifact to a test environment. If the tests pass, the workflow must automatically promote the application to a staging (pre-production) environment. Which AWS service should the team use to orchestrate these multiple deployment stages?
CodeCommit
CloudFormation
CodeBuild
CodePipeline
Answer Description
AWS CodePipeline is a continuous delivery service that lets you model, visualize, and automate your application's release process. You can define multiple stages (such as source, build, test, and deploy) and integrate actions like CodeBuild builds or CodeDeploy deployments. Each time code is committed, CodePipeline automatically runs the pipeline, promoting the artifact through successive environments until it reaches the specified stage. CodeBuild alone can compile and test code but does not coordinate multi-stage workflows. CodeCommit is only a Git repository, and CloudFormation provisions infrastructure but does not orchestrate CI/CD stages, making CodePipeline the correct choice.
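A trimmed boto3 sketch of defining such a pipeline is shown below; the role ARN, bucket, repository, and project names are hypothetical, and a real pipeline would append deploy stages for the test and staging environments in the same way.

```python
import boto3

codepipeline = boto3.client("codepipeline")

codepipeline.create_pipeline(
    pipeline={
        "name": "web-app-pipeline",
        "roleArn": "arn:aws:iam::111122223333:role/codepipeline-service-role",
        "artifactStore": {"type": "S3", "location": "web-app-pipeline-artifacts"},
        "stages": [
            {
                "name": "Source",  # runs automatically on every commit
                "actions": [{
                    "name": "CheckoutSource",
                    "actionTypeId": {"category": "Source", "owner": "AWS",
                                     "provider": "CodeCommit", "version": "1"},
                    "configuration": {"RepositoryName": "web-app", "BranchName": "main"},
                    "outputArtifacts": [{"name": "SourceOutput"}],
                }],
            },
            {
                "name": "BuildAndTest",  # CodeBuild compiles the code and runs the tests
                "actions": [{
                    "name": "RunCodeBuild",
                    "actionTypeId": {"category": "Build", "owner": "AWS",
                                     "provider": "CodeBuild", "version": "1"},
                    "configuration": {"ProjectName": "web-app-build"},
                    "inputArtifacts": [{"name": "SourceOutput"}],
                    "outputArtifacts": [{"name": "BuildOutput"}],
                }],
            },
        ],
    }
)
```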
Ask Bash
What is AWS CodePipeline and how does it work?
What is the difference between CodePipeline and CodeBuild?
Why can't AWS CodeCommit or CloudFormation be used for this scenario?
You are implementing a notification system for an online shopping platform hosted on AWS. The system must send emails to customers after their order has been processed. Given that the order-processing system can become backlogged during peak times, which pattern should you employ to ensure that the email notifications do not block or slow down the order-processing workflow?
Adopt a synchronous pattern for small batches of orders, switching to an asynchronous pattern only when detecting a processing backlog.
Employ an asynchronous pattern with a message queue that collects notifications to be processed by a separate email-sending service independently of order processing.
Use a synchronous pattern where the order-processing service sends the email directly before confirming the order as complete within the same workflow.
Implement a hybrid pattern, sending the email synchronously for premium customers, while using an asynchronous approach for regular customers.
Answer Description
An asynchronous communication pattern is suitable for this scenario because it allows the order-processing logic to finish without waiting for the email to be delivered. The order service can publish a message to Amazon SQS (or another queue), which then triggers a separate component, such as an AWS Lambda function, to send the email. This decoupling lets the order system remain responsive under load. A synchronous approach would force each order request to wait for the email operation, creating latency and backlogs.
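A minimal sketch of the decoupled producer side is shown below; the queue URL and order fields are hypothetical, and a separate consumer (for example, a Lambda function subscribed to the queue) would send the email.

```python
import json
import boto3

sqs = boto3.client("sqs")

# Queue that decouples email notification from order processing (hypothetical URL)
NOTIFICATION_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/email-notifications"

def complete_order(order):
    # ... order-processing logic ...
    # Publish the notification and return immediately; the email is sent later
    # by an independent consumer, so peak-time backlogs do not block this path.
    sqs.send_message(
        QueueUrl=NOTIFICATION_QUEUE_URL,
        MessageBody=json.dumps({"order_id": order["id"], "email": order["email"]}),
    )
    return {"status": "confirmed"}
```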
Ask Bash
What is Amazon SQS and why is it used in asynchronous workflows?
How does AWS Lambda work with Amazon SQS to process messages asynchronously?
Why is a synchronous pattern not suitable for high-load order processing systems?