AWS Certified Developer Associate Practice Test (DVA-C02)
Use the form below to configure your AWS Certified Developer Associate Practice Test (DVA-C02). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

AWS Certified Developer Associate DVA-C02 Information
AWS Certified Developer - Associate showcases knowledge and understanding of core AWS services, their uses, and basic AWS architecture best practices, as well as proficiency in developing, deploying, and debugging cloud-based applications by using AWS. Preparing for and attaining this certification gives certified individuals more confidence and credibility. Organizations with AWS Certified Developers can be assured they have the talent to gain a competitive advantage and ensure stakeholder and customer satisfaction.
The AWS Certified Developer - Associate (DVA-C02) exam is intended for individuals who perform a developer role. The exam validates a candidate’s ability to demonstrate proficiency in developing, testing, deploying, and debugging AWS Cloud-based applications. The exam also validates a candidate’s ability to complete the following tasks:
- Develop and optimize applications on AWS.
- Package and deploy by using continuous integration and continuous delivery (CI/CD) workflows.
- Secure application code and data.
- Identify and resolve application issues.
Free AWS Certified Developer Associate DVA-C02 Practice Test
Press start when you are ready, or press Change to modify any settings for the practice test.
- Questions: 15
- Time: Unlimited
- Included Topics: Development with AWS Services, Security, Deployment, Troubleshooting and Optimization
Free Preview
This test is a free preview, no account required.
Given an application that processes a high volume of messages and requires a resilient architecture to manage potential processing bottlenecks, which approach would most effectively guarantee no loss of messages and facilitate recovery for messages that could not be processed after multiple attempts?
Apply a backoff retry strategy on a Lambda function triggered by message arrival, ensuring that temporary processing issues are mitigated.
Deploy an Amazon Kinesis stream and attach a Lambda function to process the streamed records, ensuring the function has robust exception handling.
Create an SQS Delay Queue to postpone the delivery of new messages and give the system time to recover from any current processing delay.
Set up an Amazon SQS queue and configure a dead-letter queue (DLQ) to catch messages that fail to be processed after a certain number of attempts.
Answer Description
Implementing an Amazon Kinesis stream with a corresponding Lambda function as a processor is a good starting point; however, it doesn't provide a built-in solution for handling messages that fail processing after several attempts. Using an Amazon SQS queue with a properly configured dead-letter queue (DLQ) offers the best approach, as it explicitly handles failed message processing by moving them to the DLQ after a predefined number of attempts, thereby preventing message loss and allowing for future inspection or reprocessing. Enabling backoff retry strategies on a Lambda function without a DLQ does offer some temporary resilience but eventually may lead to message loss if the issue persists. Similarly, implementing an SQS Delay Queue does not inherently handle failed processing attempts, but only delays message visibility to consumers.
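As an illustration, here is a minimal Python (boto3) sketch of wiring a standard SQS queue to a dead-letter queue through a redrive policy. The queue names, region, and `maxReceiveCount` of 5 are assumptions chosen for the example.

```python
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")  # region assumed for the example

# Create the dead-letter queue first and look up its ARN.
dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Create the main queue with a redrive policy: after 5 failed receive
# attempts, SQS moves the message to the DLQ instead of discarding it,
# so it can be inspected or reprocessed later.
sqs.create_queue(
    QueueName="orders-queue",
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        )
    },
)
```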
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a Dead-Letter Queue (DLQ)?
How does Amazon SQS help avoid message loss?
What is a backoff retry strategy?
When designing a web service for processing financial transactions, what mechanism can a developer implement to prevent duplicate submissions from charging a customer multiple times?
Track the status codes from previous submissions and use them to determine if the operation should be retried.
Integrate a distributed tracing service to handle de-duplication of transaction requests.
Record the timestamp for each operation and only process requests if subsequent submissions occur after a specific time interval.
Generate a unique identifier for each operation, allowing the service to detect and ignore retries of transactions that have already been executed.
Answer Description
The correct approach to ensure idempotent operations is to issue a unique transaction identifier for each financial operation. This identifier enables the service to recognize repeat submissions and disregard them. Timestamps do not guarantee idempotency because requests may be retried automatically by clients without alteration. Relying on response codes as a method of idempotency is also unreliable because response codes can change with network conditions or server states and are typically not a part of the client's request. AWS X-Ray is a service for analyzing and debugging distributed applications and does not provide a direct mechanism for implementing idempotency.
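One common way to apply this is a conditional write keyed on the idempotency token. The sketch below uses DynamoDB via boto3; the table name, attribute names, and helper function are hypothetical.

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("transactions")  # hypothetical table

def charge_customer(idempotency_key: str, amount: str) -> str:
    """Record and process the charge only if this idempotency key is new."""
    try:
        table.put_item(
            Item={"transaction_id": idempotency_key, "amount": amount},
            # The write is rejected if an item with this key already exists.
            ConditionExpression="attribute_not_exists(transaction_id)",
        )
        return "charged"
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return "duplicate ignored"  # retry of an already-executed transaction
        raise
```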
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is idempotency in the context of web services?
How is a unique transaction identifier generated?
What are potential drawbacks of using timestamps for transaction processing?
Your team is designing an AWS-based application where one component processes customer orders, and another component handles inventory management. Considering that you need to minimize the interdependence between these two components in case one of them fails, which approach would contribute to a more loosely coupled architecture?
Creating database triggers within your order database to automatically update the inventory management system in real-time
Setting up a REST service within the inventory management component that is called directly by the order processing service for every new order
Using an Amazon Simple Queue Service (SQS) to facilitate message passing between the order processing and inventory management components
Implementing synchronous REST API interactions between the two services, requiring an immediate response after an order is placed
Answer Description
Using an Amazon Simple Queue Service (SQS) queue to pass messages between the order processing and inventory management components supports a loosely coupled architecture. With SQS, if the inventory management component is slow to process messages or temporarily unavailable, order processing isn't directly affected since messages are queued. On the other hand, synchronous REST API interactions and database triggers are indicative of tight coupling because they rely on immediate responses and can lead to cascading failures if one component has issues. A REST service calling another REST service synchronously is tightly coupled, which can also introduce points of failure if the receiving service is down.
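For a sense of what this looks like in code, the sketch below shows the two sides of the queue in Python (boto3); the queue URL and message shape are assumptions for the example.

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/order-events"  # hypothetical

# Order processing side: publish the event and move on. It does not depend
# on the inventory component being available at this moment.
def publish_order(order: dict) -> None:
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(order))

# Inventory side: poll at its own pace and delete a message only after it
# has been handled successfully, so nothing is lost if processing fails.
def process_inventory_updates() -> None:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        order = json.loads(msg["Body"])
        # ... adjust stock levels for order["items"] ...
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```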
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Amazon SQS and how does it work?
What does 'loosely coupled architecture' mean?
Why are synchronous REST API interactions considered tightly coupled?
You are developing a RESTful API. When a client sends a POST request to create a new resource, but the resource already exists, which HTTP status code should your API return to best adhere to standard practices?
409 Conflict
500 Internal Server Error
202 Accepted
400 Bad Request
Answer Description
According to standard HTTP status code definitions, 409 Conflict is the appropriate response when a request conflicts with the current state of the server, such as trying to create a resource that already exists. A 400 Bad Request implies general client errors, 202 Accepted is used when a request has been accepted for processing but the process has not been completed, and 500 Internal Server Error indicates a server error, not a conflict of resource state.
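As a rough illustration, a Lambda proxy-style handler might return the 409 like this; the in-memory set and field names are placeholders for a real data store.

```python
import json

existing_resources = {"user-123"}  # stand-in for a real database lookup

def create_resource(event, context):
    """Handle POST /resources in Lambda proxy integration format."""
    body = json.loads(event.get("body") or "{}")
    resource_id = body.get("id")

    if resource_id in existing_resources:
        # The request conflicts with the current state of the server.
        return {"statusCode": 409,
                "body": json.dumps({"message": "Resource already exists"})}

    existing_resources.add(resource_id)
    return {"statusCode": 201,
            "body": json.dumps({"message": "Created", "id": resource_id})}
```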
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a RESTful API?
What are HTTP status codes and why are they important?
What does '409 Conflict' mean in HTTP status codes?
Which of the following is the BEST description of a stateless application component?
It retains session state information between requests, necessitating a mechanism to store user data on the server for ongoing interactions.
It requires sticky sessions to ensure that a user's session data is maintained across multiple interactions with the application.
It relies on server-side storage to keep track of user states and preferences, reducing the client's overhead in subsequent requests.
It treats each request as a separate, independent transaction, without the need to maintain client session state on the server.
Answer Description
A stateless application component does not save client state on the server between requests; each request is independent of the others, and the server does not retain session information. This is the hallmark of stateless design, making it easy to scale horizontally by adding more servers. A stateful component would retain session state information between requests, which can complicate scaling due to the need to either persist session state or maintain session affinity. Stateless components generally do not require sticky sessions since each request is self-contained and can be processed by any available instance.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does it mean for an application to be stateless?
How does stateless design improve application scalability?
What is a sticky session, and why is it not needed in stateless applications?
Is it possible to enhance a microservices architecture's resiliency by utilizing a publish/subscribe service to distribute messages concurrently to multiple message queueing services?
False
True
Answer Description
Yes, employing a publish/subscribe service to distribute messages simultaneously to several message queueing services is a well-established method for decoupling components within a microservices architecture, enhancing fault tolerance and resilience. When a publisher sends a message, the service fans it out to all subscribed queues, each corresponding to a different consumer service. Should one consumer service experience an issue, it doesn't affect the ability of other services to continue processing their respective messages. This design pattern prevents any single point of failure from bringing down the entire system, thereby improving overall fault tolerance.
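On AWS this fanout is typically built with SNS in front of several SQS queues. A minimal boto3 sketch follows; the topic and queue names are invented, and the SQS access policy that allows SNS to deliver to each queue is omitted for brevity.

```python
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="order-events")["TopicArn"]  # hypothetical topic

for name in ["billing-queue", "shipping-queue", "analytics-queue"]:
    queue_url = sqs.create_queue(QueueName=name)["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Each subscribed queue receives its own copy of every published message,
    # so a failure in one consumer does not block the others.
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# Publishing once fans the message out to all subscribed queues.
sns.publish(TopicArn=topic_arn, Message='{"orderId": "123"}')
```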
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are microservices in the context of software architecture?
What is a publish/subscribe (pub/sub) service?
How do message queueing services contribute to system resiliency?
Which tool offered by Amazon Web Services can developers use to invoke and debug their serverless functions locally, simulating the cloud environment on their own machine?
AWS SAM CLI
AWS CodeDeploy
AWS CodePipeline
AWS SDKs
Answer Description
The correct tool for invoking and debugging serverless functions locally is AWS SAM CLI. It allows developers to test their Lambda functions in an environment similar to the actual AWS runtime. AWS CodePipeline and AWS CodeDeploy are used in deployment cycles and do not have the capability to run or test functions locally. The AWS SDKs are for interacting with AWS services in your application code, not for local invocation of functions.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does SAM stand for in AWS SAM CLI?
How does AWS SAM CLI simulate the Lambda environment?
What are the main differences between AWS SAM CLI and AWS SDKs?
A developer is implementing an application that requires frequent retrieval of items from an Amazon DynamoDB table. To optimize performance, the application needs to minimize latency and reduce the number of network calls. Given the need for efficient data access patterns, which method should the developer use when implementing code that interacts with the DynamoDB table using the AWS SDK?
Perform individual `GetItem` operations for each item.
Utilize `BatchGetItem` for batch retrieval of items.
Use `PutItem` calls with a filter to only insert the desired items.
Employ a `Scan` operation to fetch all table items and filter as needed.
Answer Description
Using the batch operations of the AWS SDKs, such as `BatchGetItem` in the case of Amazon DynamoDB, allows the application to retrieve up to 100 items from one or more DynamoDB tables in a single operation. This reduces the number of network calls compared with issuing an individual `GetItem` request for each item and therefore reduces network latency. `Query` and `Scan` are designed for different purposes: `Query` retrieves items using a key condition expression, and `Scan` reads every item in a table, which is inefficient for frequently accessed individual items. `PutItem` is used for data insertion, not retrieval, so it is not suitable for the scenario given.
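A hedged example of the batch call with boto3 is below; the table name, key schema, and projection are assumptions for illustration.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Up to 100 items (or 16 MB of data) can be requested in one call.
response = dynamodb.batch_get_item(
    RequestItems={
        "Products": {  # hypothetical table
            "Keys": [
                {"ProductId": {"S": "p-001"}},
                {"ProductId": {"S": "p-002"}},
                {"ProductId": {"S": "p-003"}},
            ],
            "ProjectionExpression": "ProductId, Price, Stock",
        }
    }
)

items = response["Responses"]["Products"]

# Keys the service could not process (for example, due to throttling) come
# back in UnprocessedKeys and should be retried, typically with backoff.
unprocessed = response.get("UnprocessedKeys", {})
```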
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is `BatchGetItem` and how does it work in DynamoDB?
What are the differences between `GetItem`, `Query`, and `Scan` in DynamoDB?
Why is `Scan` not recommended for frequently accessed individual items?
You are working on a serverless application where each component is designed to execute a distinct operation within an e-commerce checkout process. During the development cycle, you want to confirm that each component functions independently and as expected without making actual calls to cloud services. What technique should you employ within your unit tests to simulate the behavior of the external dependencies?
Configure an API management service to handle dependencies during test runs
Create instances of client objects specific to cloud resources within your unit tests
Utilize SDK mocking utilities to emulate the behavior of external service calls
Reference recorded responses from an object storage service during test execution
Answer Description
To simulate the behavior of external dependencies, you should utilize mocking utilities provided with the SDK for your chosen programming language. This ensures that tests can run without the need for actual service calls, allowing for true unit testing by isolating the code from external interactions. Creating real client instances during testing would result in calls to the actual services, which contradicts the principles of unit testing. Similarly, configuring API Gateway to manage test dependencies creates actual service interactions. Using sample responses from an object storage service introduces external dependency to tests, which is contrary to the concept of isolation in unit testing.
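As one concrete example of an SDK-provided mocking utility, the Python SDK ships `botocore.stub.Stubber`, which queues canned responses on a real client so no network call is made. The function under test and the table name below are hypothetical.

```python
import boto3
from botocore.stub import Stubber

def get_order(client, order_id):
    """Code under test: fetch an order record from DynamoDB."""
    resp = client.get_item(TableName="orders", Key={"id": {"S": order_id}})
    return resp["Item"]

def test_get_order():
    client = boto3.client("dynamodb", region_name="us-east-1")
    stubber = Stubber(client)

    # Queue the response the real service would have returned.
    stubber.add_response(
        "get_item",
        {"Item": {"id": {"S": "o-1"}, "total": {"N": "42"}}},
        expected_params={"TableName": "orders", "Key": {"id": {"S": "o-1"}}},
    )

    with stubber:  # no network call is made while the stubber is active
        item = get_order(client, "o-1")

    assert item["total"]["N"] == "42"
```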
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are SDK mocking utilities and how do they work?
Why is it important to isolate external dependencies in unit tests?
What are the drawbacks of creating real client instances for testing?
You are implementing a notification system for an online shopping platform hosted on AWS. The system must send emails to customers after their order has been processed. Given that the order processing system can become backlogged during peak times, which pattern should you employ to ensure that the email notifications do not block or slow down the order processing workflow?
Adopt a synchronous pattern for small batches of orders, switching to an asynchronous pattern only when detecting a processing backlog.
Implement a hybrid pattern, sending the email synchronously for premium customers, while using an asynchronous approach for regular customers.
Use a synchronous pattern where the order processing service sends the email directly before confirming the order as complete within the same workflow.
Employ an asynchronous pattern with a message queue that collects notifications to be processed by a separate email-sending service independently of order processing.
Answer Description
An asynchronous communication pattern is suitable for this scenario because it allows the order processing to complete without waiting for the email notification to be sent. The email sending process can be offloaded to a separate service, such as a message queue that triggers a Lambda function to send emails. This decoupling ensures that the order processing system remains responsive even during times of high load. A synchronous pattern would require each order to wait until the email is sent before marking the order as complete, potentially causing delays and backlogs in the order processing system.
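A sketch of the consumer side, assuming the queue is configured as a Lambda event source and the sender address is a verified SES identity, might look like this; names and message fields are illustrative.

```python
import json
import boto3

ses = boto3.client("ses")

def handler(event, context):
    """Lambda handler triggered by the notification queue; each record is one
    queued order confirmation, processed independently of order processing."""
    for record in event["Records"]:
        order = json.loads(record["body"])
        ses.send_email(
            Source="orders@example.com",  # assumed verified SES identity
            Destination={"ToAddresses": [order["customerEmail"]]},
            Message={
                "Subject": {"Data": f"Order {order['orderId']} confirmed"},
                "Body": {"Text": {"Data": "Thanks for your purchase!"}},
            },
        )
```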
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is an asynchronous pattern in the context of application development?
What is a message queue and how does it help in asynchronous processing?
How does using AWS Lambda for sending emails fit into the asynchronous model?
A developer is designing a new application that processes sensitive financial data. The application will store processed data in Amazon S3. For compliance reasons, the data must be encrypted at all times. Which type of encryption should the developer use to ensure that the data is encrypted before it leaves the application's host and remains encrypted in transit and at rest within Amazon S3?
Enable Secure Socket Layer (SSL) on the application's server and rely on S3 bucket policies to handle encryption.
Implement client-side encryption using a customer-managed key prior to uploading the data to Amazon S3.
Activate default S3 bucket encryption with an AWS Key Management Service (KMS) managed key.
Use server-side encryption with Amazon S3 managed keys (SSE-S3) when uploading the data.
Answer Description
Server-side encryption with Amazon S3 managed keys (SSE-S3) encrypts the data only after it arrives in S3, so it does not guarantee that the data is encrypted before it leaves the application's host. With client-side encryption, the data is encrypted by the client (in this case, the application), which ensures that it is encrypted before it is sent over the network to Amazon S3 and remains encrypted in transit and at rest in S3. The other options are incorrect because they rely on encryption that S3 applies only after the data has been uploaded, or, in the case of SSL with bucket policies, protect the data only while it is in transit without encrypting it before it leaves the host.
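To make the flow concrete, here is a simplified envelope-encryption sketch in Python: a data key from a customer-managed KMS key encrypts the data locally before upload. The key alias, bucket, object key, and metadata field are assumptions, and production code would more typically use the AWS Encryption SDK or the S3 Encryption Client.

```python
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Ask KMS for a data key under the customer-managed key (alias assumed).
data_key = kms.generate_data_key(KeyId="alias/finance-cmk", KeySpec="AES_256")
plaintext_key = data_key["Plaintext"]        # used locally, never stored
encrypted_key = data_key["CiphertextBlob"]   # stored alongside the object

# Encrypt on the application host before anything leaves it.
nonce = os.urandom(12)
ciphertext = AESGCM(plaintext_key).encrypt(nonce, b"sensitive financial record", None)

# Only ciphertext is uploaded; the data key can later be decrypted via KMS.
s3.put_object(
    Bucket="finance-data-bucket",            # hypothetical bucket
    Key="records/txn-001.enc",
    Body=nonce + ciphertext,
    Metadata={"x-encrypted-data-key": encrypted_key.hex()},
)
```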
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is client-side encryption and how does it work?
What are the differences between client-side encryption and server-side encryption?
What are AWS Key Management Service (KMS) managed keys and how are they used?
A developer is implementing a cloud-based messaging system where it's critical that messages are processed at least once, but the processing service is occasionally prone to temporary outages. How should the developer ensure message processing can be retried reliably without overwhelming the service when it comes back online?
Implement retries with exponential backoff and jitter
Create a dedicated error queue for failed message processing attempts
Increase the message visibility timeout to its maximum limit
Set up unlimited immediate retries for all failed messages
Answer Description
By implementing retries with exponential backoff and jitter, the developer can ensure that the message processing is retried reliably without overwhelming the service after an outage. Exponential backoff increases the delay between retry attempts, reducing the load on the service when it recovers, and jitter adds randomness to these delays to prevent synchronized retries. A dedicated error queue is not a retry mechanism but rather a way to separate unprocessable messages. Unlimited immediate retries could easily overwhelm the service, and increasing the message visibility timeout alone does not provide a mechanism for retrying message processing.
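A minimal sketch of such a retry loop in Python is shown below; the function name, attempt count, and delay values are arbitrary choices for the example.

```python
import random
import time

def process_with_retries(process, message, max_attempts=5,
                         base_delay=0.5, max_delay=30.0):
    """Retry a flaky operation with exponential backoff and full jitter."""
    for attempt in range(max_attempts):
        try:
            return process(message)
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up; with SQS the message would eventually reach a DLQ
            # Exponential backoff: 0.5s, 1s, 2s, 4s ... capped at max_delay.
            backoff = min(max_delay, base_delay * (2 ** attempt))
            # Full jitter: a random wait so many recovering consumers do not
            # retry at the same instant and overwhelm the service again.
            time.sleep(random.uniform(0, backoff))
```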
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is exponential backoff and how does it work?
What is jitter and why is it important in retry mechanisms?
How does a dedicated error queue differ from a retry mechanism?
Which architectural pattern is best described by a design that breaks down an application into smaller, interconnected services, each responsible for a specific business function?
Microservices
Choreography
Fanout
Monolithic
Answer Description
The microservices pattern is characterized by creating a suite of small, independently deployable services. Each service in a microservices architecture can be deployed, upgraded, scaled, and restarted independently of the other services in the application. This differs from a monolithic architecture, where all functionality is handled within one large codebase, and from patterns such as choreography and fanout, which describe how services coordinate or distribute messages rather than how an application is decomposed.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are the key benefits of using microservices?
How do microservices communicate with each other?
What challenges might arise when adopting a microservices architecture?
Your company's new web application is required to authenticate users leveraging their existing social network accounts to streamline the sign-in process. Which service would you utilize to enable this feature while maintaining a seamless user experience and secure authentication process?
AWS Identity and Access Management
Amazon Cognito
AWS Security Token Service
AWS Directory Service
Answer Description
The correct service to use in this scenario is Amazon Cognito, as it provides federated identity management that allows integration with social identity providers such as Facebook, Google, and Amazon, as well as SAML and OIDC identity providers. This simplifies adding user authentication through existing social networks to your web application. AWS IAM is a service for managing permissions and is not designed for federation with social identity providers. AWS STS provides temporary security credentials for short-term access but not social identity provider integration. AWS Directory Service is used for integrating AWS resources with an existing on-premises Active Directory and is not designed for direct integration with social media identity providers.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Amazon Cognito and how does it work?
What are federated identity providers and why are they important?
What is the difference between AWS IAM and Amazon Cognito?
Which technique should be implemented to develop a resilient and fault-tolerant application that can handle intermittent failures when making HTTP requests to an AWS service?
Retries with exponential backoff and jitter
Decreasing the timeout with each retry attempt
Retries with constant backoff
Using static waits between retry attempts
Increasing the payload size with each retry attempt
Sending multiple parallel requests to expedite processing
Answer Description
Implementing retries with exponential backoff and jitter is a fault-tolerant design pattern that helps an application handle intermittent failures gracefully by waiting longer between each failed attempt, reducing the likelihood of overwhelming the service with high volumes of requests. Retries with constant backoff or static waits do not spread out the retry load in the same way. Decreasing the timeout with each retry makes transient slowness more likely to be treated as failure, and increasing the payload size or sending multiple parallel requests only adds load to a service that may already be struggling.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is exponential backoff and how does it work?
What is jitter, and why is it important in retry logic?
Why is it ineffective to increase payload size with each retry?
Smashing!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.