AWS Certified Developer Associate Practice Test (DVA-C02)
Use the form below to configure your AWS Certified Developer Associate Practice Test (DVA-C02). The practice test can be configured to only include certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

AWS Certified Developer Associate DVA-C02 Information
AWS Certified Developer - Associate showcases knowledge and understanding of core AWS services, uses, and basic AWS architecture best practices, and proficiency in developing, deploying, and debugging cloud-based applications by using AWS. Preparing for and attaining this certification gives certified individuals more confidence and credibility. Organizations with AWS Certified developers have the assurance of having the right talent to give them a competitive advantage and ensure stakeholder and customer satisfaction.
The AWS Certified Developer - Associate (DVA-C02) exam is intended for individuals who perform a developer role. The exam validates a candidate’s ability to demonstrate proficiency in developing, testing, deploying, and debugging AWS Cloud-based applications. The exam also validates a candidate’s ability to complete the following tasks:
- Develop and optimize applications on AWS.
- Package and deploy by using continuous integration and continuous delivery (CI/CD) workflows.
- Secure application code and data.
- Identify and resolve application issues.
Free AWS Certified Developer Associate DVA-C02 Practice Test
Press start when you are ready, or press Change to modify any settings for the practice test.
- Questions: 15
- Time: Unlimited
- Included Topics: Development with AWS Services, Security, Deployment, Troubleshooting and Optimization
Free Preview
This test is a free preview; no account is required.
Which technique should be implemented to develop a resilient and fault-tolerant application that can handle intermittent failures when making HTTP requests to an AWS service?
Increasing the payload size with each retry attempt
Decreasing the timeout with each retry attempt
Retries with constant backoff
Sending multiple parallel requests to expedite processing
Using static waits between retry attempts
Retries with exponential backoff and jitter
Answer Description
Implementing retries with exponential backoff and jitter is a fault-tolerant design pattern that helps an application handle intermittent failures gracefully: the client waits longer between each failed attempt, which reduces the likelihood of overwhelming the service, and jitter adds randomness to those waits so many clients do not retry in lockstep. Retries with constant backoff or static waits keep retrying at a fixed rate, which can continue to overload the service and exacerbate failure conditions. Increasing the payload size, decreasing the timeout, or sending parallel requests do not address the fundamental issue of handling intermittent failures and can actually worsen the situation.
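For illustration, a minimal Python sketch of this retry pattern is shown below. The `call_with_backoff` helper, its limits, and the use of the `requests` library are assumptions for the sketch; in practice the AWS SDKs already implement a similar backoff-with-jitter strategy for their own API calls.

```python
import random
import time

import requests  # assumed HTTP client for the sketch


def call_with_backoff(url, max_attempts=5, base_delay=0.5, max_delay=20.0):
    """Retry an HTTP request with exponential backoff and full jitter."""
    for attempt in range(max_attempts):
        try:
            response = requests.get(url, timeout=5)
            response.raise_for_status()
            return response
        except requests.RequestException:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # Exponential backoff: the delay doubles each attempt, capped at max_delay.
            delay = min(max_delay, base_delay * (2 ** attempt))
            # Full jitter: sleep a random amount up to that delay so many
            # clients do not retry in lockstep.
            time.sleep(random.uniform(0, delay))
```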
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is exponential backoff and how does it work?
What is jitter, and why is it important in retry logic?
Why is it ineffective to increase payload size with each retry?
Which tool offered by Amazon Web Services can developers use to invoke and debug their serverless functions locally, simulating the cloud environment on their own machine?
AWS SAM CLI
AWS SDKs
AWS CodeDeploy
AWS CodePipeline
Answer Description
The correct tool for invoking and debugging serverless functions locally is AWS SAM CLI. It allows developers to test their Lambda functions in an environment similar to the actual AWS runtime. AWS CodePipeline and AWS CodeDeploy are used in deployment cycles and do not have the capability to run or test functions locally. The AWS SDKs are for interacting with AWS services in your application code, not for local invocation of functions.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does SAM stand for in AWS SAM CLI?
How does AWS SAM CLI simulate the Lambda environment?
What are the main differences between AWS SAM CLI and AWS SDKs?
Which of the following is the BEST description of a stateless application component?
It treats each request as a separate, independent transaction, without the need to maintain client session state on the server.
It requires sticky sessions to ensure that a user's session data is maintained across multiple interactions with the application.
It relies on server-side storage to keep track of user states and preferences, reducing the client's overhead in subsequent requests.
It retains session state information between requests, necessitating a mechanism to store user data on the server for ongoing interactions.
Answer Description
A stateless application component does not save client state on the server between requests; each request is independent of the others, and the server does not retain session information. This is the hallmark of stateless design, making it easy to scale horizontally by adding more servers. A stateful component would retain session state information between requests, which can complicate scaling due to the need to either persist session state or maintain session affinity. Stateless components generally do not require sticky sessions since each request is self-contained and can be processed by any available instance.
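As a rough illustration, a stateless component might look like the hypothetical Lambda-style handler sketched below: every request carries its own identifiers, and all persistent state lives in a shared data store (a DynamoDB table named `ShoppingCarts` is assumed here), so any instance can serve any request.

```python
import json

import boto3

# Persistent state lives in a shared store (DynamoDB here), not in the
# process handling the request, so any instance can handle any request.
dynamodb = boto3.resource("dynamodb")
cart_table = dynamodb.Table("ShoppingCarts")  # hypothetical table name


def lambda_handler(event, context):
    """Stateless handler: the request carries its own identity and data."""
    body = json.loads(event["body"])
    cart_id = body["cart_id"]  # the client supplies the identifier
    item = body["item"]

    # Persist the change in the shared store instead of in-memory session state.
    cart_table.update_item(
        Key={"cart_id": cart_id},
        UpdateExpression="SET #items = list_append(if_not_exists(#items, :empty), :new)",
        ExpressionAttributeNames={"#items": "items"},
        ExpressionAttributeValues={":empty": [], ":new": [item]},
    )
    return {"statusCode": 200, "body": json.dumps({"cart_id": cart_id})}
```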
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does it mean for an application to be stateless?
How does stateless design improve application scalability?
What is a sticky session, and why is it not needed in stateless applications?
A developer is implementing an application that requires frequent retrieval of items from an Amazon DynamoDB table. To optimize performance, the application needs to minimize latency and reduce the number of network calls. Given the need for efficient data access patterns, which method should the developer use when implementing code that interacts with the DynamoDB table using the AWS SDK?
Use `PutItem` calls with a filter to only insert the desired items.
Perform individual `GetItem` operations for each item.
Utilize `BatchGetItem` for batch retrieval of items.
Employ a `Scan` operation to fetch all table items and filter as needed.
Answer Description
Using the batch operations API of the AWS SDKs, such as `BatchGetItem` in the case of Amazon DynamoDB, allows the application to retrieve up to 100 items from one or more DynamoDB tables in a single operation. This reduces the number of network calls compared to individual `GetItem` requests for each item and results in less network latency. The use of `Query` or `Scan` would not be as efficient because they are designed for different purposes: `Query` retrieves items using a key condition expression that requires a specific partition key, and `Scan` reads every item in a table, which can be less efficient for frequently accessed individual items. `PutItem` is used for data insertion, not retrieval, and is therefore not suitable for the scenario described.
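A hedged boto3 sketch of the batch approach follows; the `Orders` table and `order_id` key are hypothetical, and real code should retry any unprocessed keys, ideally with backoff.

```python
import boto3

dynamodb = boto3.resource("dynamodb")

# Hypothetical table and key names; BatchGetItem accepts up to 100 keys per call.
keys = [{"order_id": str(n)} for n in range(1, 26)]

response = dynamodb.batch_get_item(RequestItems={"Orders": {"Keys": keys}})

items = response["Responses"]["Orders"]

# DynamoDB may return some keys unprocessed under heavy load; retry them
# (with backoff) until the map comes back empty.
unprocessed = response.get("UnprocessedKeys", {})
print(f"Retrieved {len(items)} items, {len(unprocessed)} table(s) with unprocessed keys")
```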
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is `BatchGetItem` and how does it work in DynamoDB?
What are the differences between `GetItem`, `Query`, and `Scan` in DynamoDB?
Why is `Scan` not recommended for frequently accessed individual items?
Which design pattern can improve the resiliency of a microservices architecture by using a publish/subscribe service that distributes each event concurrently to multiple independent message queues so that every consumer processes the event in isolation?
Send every message to a single Amazon SQS queue that all consumer services poll in turn.
Store all events in a shared relational database table that each microservice queries for new records.
Expose every microservice through synchronous REST API calls and invoke them sequentially from the producer service.
Configure an Amazon SNS topic that fans out each published message to a dedicated Amazon SQS queue for every consumer microservice.
Answer Description
Using an Amazon SNS topic to fan out events to separate Amazon SQS queues is a proven pattern for decoupling producers and consumers. Each microservice reads from its own queue, so a failure or slowdown in one consumer does not block others. The queues buffer messages, add durability, and let each service scale at its own pace, increasing overall fault tolerance of the system.
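A minimal boto3 sketch of this fanout setup is shown below, assuming hypothetical topic and queue names; the SQS queue policy that allows SNS to deliver messages is omitted for brevity.

```python
import json

import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# Hypothetical topic and queue names for an order-events fanout.
topic_arn = sns.create_topic(Name="order-events")["TopicArn"]

for service in ["inventory-service", "billing-service", "shipping-service"]:
    queue_url = sqs.create_queue(QueueName=f"{service}-queue")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Each consumer gets its own subscription, so every queue receives a copy
    # of every published event. (A queue policy allowing SNS to send messages
    # is also required; omitted here for brevity.)
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# The producer publishes once; SNS fans the message out to all queues.
sns.publish(TopicArn=topic_arn, Message=json.dumps({"order_id": "123", "status": "PLACED"}))
```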
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are microservices in the context of software architecture?
What is a publish/subscribe (pub/sub) service?
How do message queueing services contribute to system resiliency?
When designing a web service for processing financial transactions, what mechanism can a developer implement to prevent duplicate submissions from charging a customer multiple times?
Record the timestamp for each operation and only process requests if subsequent submissions occur after a specific time interval.
Generate a unique identifier for each operation, allowing the service to detect and ignore retries of transactions that have already been executed.
Integrate a distributed tracing service to handle de-duplication of transaction requests.
Track the status codes from previous submissions and use them to determine if the operation should be retried.
Answer Description
The correct approach to ensure idempotent operations is to issue a unique transaction identifier for each financial operation. This identifier enables the service to recognize repeat submissions and disregard them. Timestamps do not guarantee idempotency because requests may be retried automatically by clients without alteration. Relying on response codes as a method of idempotency is also unreliable because response codes can change with network conditions or server states and are typically not a part of the client's request. AWS X-Ray is a service for analyzing and debugging distributed applications and does not provide a direct mechanism for implementing idempotency.
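One common way to implement this is a conditional write against a DynamoDB idempotency table, sketched below with boto3; the `ProcessedTransactions` table and attribute names are hypothetical.

```python
import boto3

dynamodb = boto3.client("dynamodb")
TABLE = "ProcessedTransactions"  # hypothetical idempotency table


def charge_customer(transaction_id, amount):
    """Process a charge at most once per client-supplied transaction_id."""
    try:
        # The conditional write succeeds only the first time this
        # transaction_id is seen; replays fail the condition check.
        dynamodb.put_item(
            TableName=TABLE,
            Item={"transaction_id": {"S": transaction_id}, "amount": {"N": str(amount)}},
            ConditionExpression="attribute_not_exists(transaction_id)",
        )
    except dynamodb.exceptions.ConditionalCheckFailedException:
        # Duplicate submission: return the prior outcome instead of charging again.
        return {"status": "duplicate", "transaction_id": transaction_id}

    # Safe to perform the actual charge exactly once here.
    return {"status": "charged", "transaction_id": transaction_id}
```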
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is idempotency in the context of web services?
How is a unique transaction identifier generated?
What are potential drawbacks of using timestamps for transaction processing?
Which architectural pattern is best described by a design that breaks down an application into smaller, interconnected services, each responsible for a specific business function?
Microservices
Fanout
Monolithic
Choreography
Answer Description
The microservices pattern is characterized by building an application as a suite of small, independently deployable services, each responsible for a specific business function. Each service in a microservices architecture can be deployed, upgraded, scaled, and restarted independently of the other services in the application. This differs from a monolithic architecture, where all functionality lives in one large codebase, and from patterns such as fanout and choreography, which describe how components exchange messages or coordinate through events rather than how the application itself is decomposed.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are the key benefits of using microservices?
How do microservices communicate with each other?
What challenges might arise when adopting a microservices architecture?
Given an application that processes a high volume of messages and requires a resilient architecture to manage potential processing bottlenecks, which approach would most effectively guarantee no loss of messages and facilitate recovery for messages that could not be processed after multiple attempts?
Deploy an Amazon Kinesis stream and attach a Lambda function to process the streamed records, ensuring the function has robust exception handling.
Create an SQS Delay Queue to postpone the delivery of new messages and give the system time to recover from any current processing delay.
Set up an Amazon SQS queue and configure a dead-letter queue (DLQ) to catch messages that fail to be processed after a certain number of attempts.
Apply a backoff retry strategy on a Lambda function triggered by message arrival, ensuring that temporary processing issues are mitigated.
Answer Description
Implementing an Amazon Kinesis stream with a corresponding Lambda function as a processor is a good starting point; however, it doesn't provide a built-in solution for handling messages that fail processing after several attempts. Using an Amazon SQS queue with a properly configured dead-letter queue (DLQ) offers the best approach, as it explicitly handles failed message processing by moving them to the DLQ after a predefined number of attempts, thereby preventing message loss and allowing for future inspection or reprocessing. Enabling backoff retry strategies on a Lambda function without a DLQ does offer some temporary resilience but eventually may lead to message loss if the issue persists. Similarly, implementing an SQS Delay Queue does not inherently handle failed processing attempts, but only delays message visibility to consumers.
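A short boto3 sketch of wiring a DLQ to a source queue through a redrive policy follows; the queue names and `maxReceiveCount` value are illustrative only.

```python
import json

import boto3

sqs = boto3.client("sqs")

# Hypothetical queue names. Create the dead-letter queue first.
dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# The redrive policy moves a message to the DLQ after it has been received
# (and not deleted) maxReceiveCount times, so failed messages are kept for
# inspection or reprocessing instead of being lost.
sqs.create_queue(
    QueueName="orders-queue",
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        )
    },
)
```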
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a Dead-Letter Queue (DLQ)?
How does Amazon SQS help avoid message loss?
What is a backoff retry strategy?
A developer is designing a new application that processes sensitive financial data. The application will store processed data in Amazon S3. For compliance reasons, the data must be encrypted at all times. Which type of encryption should the developer use to ensure that the data is encrypted before it leaves the application's host and remains encrypted in transit and at rest within Amazon S3?
Implement client-side encryption using a customer-managed key prior to uploading the data to Amazon S3.
Use server-side encryption with Amazon S3 managed keys (SSE-S3) when uploading the data.
Enable Secure Socket Layer (SSL) on the application's server and rely on S3 bucket policies to handle encryption.
Activate default S3 bucket encryption with an AWS Key Management Service (KMS) managed key.
Answer Description
Client-side encryption is the correct approach because it meets the requirement to encrypt data before it leaves the application's host. By encrypting the data on the client side, it is protected prior to transmission, during transit to Amazon S3, and while at rest in the S3 bucket.
Server-side encryption options, such as Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) or with AWS KMS keys (SSE-KMS), are incorrect because the encryption occurs on the AWS side after the data is received by Amazon S3. This does not fulfill the requirement to have the data encrypted before it leaves the application environment.
Using only Secure Socket Layer (SSL)/Transport Layer Security (TLS) is insufficient. While SSL/TLS encrypts data in transit, it does not encrypt the data on the host before it is sent or keep it encrypted at rest within the S3 bucket; server-side encryption would still be required for encryption at rest.
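The sketch below illustrates the client-side idea with the third-party `cryptography` package standing in for whatever encryption library the application actually uses (for example, the AWS Encryption SDK with a KMS-generated data key); the bucket and object key are placeholders.

```python
import boto3
from cryptography.fernet import Fernet  # stand-in for the app's encryption library

s3 = boto3.client("s3")

# In practice the key would come from a secure source such as AWS KMS
# (e.g. a generated data key) rather than being created ad hoc like this.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b'{"account": "12345", "amount": 100.00}'

# Encrypt on the application host, then upload only ciphertext; the data is
# therefore protected before it leaves the host, in transit, and at rest.
ciphertext = cipher.encrypt(plaintext)
s3.put_object(Bucket="example-financial-data", Key="txn/0001.enc", Body=ciphertext)
```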
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is client-side encryption and how does it work?
What are the differences between client-side encryption and server-side encryption?
What are AWS Key Management Service (KMS) managed keys and how are they used?
You are implementing a notification system for an online shopping platform hosted on AWS. The system must send emails to customers after their order has been processed. Given that the order-processing system can become backlogged during peak times, which pattern should you employ to ensure that the email notifications do not block or slow down the order-processing workflow?
Implement a hybrid pattern, sending the email synchronously for premium customers, while using an asynchronous approach for regular customers.
Adopt a synchronous pattern for small batches of orders, switching to an asynchronous pattern only when detecting a processing backlog.
Employ an asynchronous pattern with a message queue that collects notifications to be processed by a separate email-sending service independently of order processing.
Use a synchronous pattern where the order-processing service sends the email directly before confirming the order as complete within the same workflow.
Answer Description
An asynchronous communication pattern is suitable for this scenario because it allows the order-processing logic to finish without waiting for the email to be delivered. The order service can publish a message to Amazon SQS (or another queue), which then triggers a separate component, such as an AWS Lambda function, to send the email. This decoupling lets the order system remain responsive under load. A synchronous approach would force each order request to wait for the email operation, creating latency and backlogs.
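A hedged sketch of the two halves of this pattern is shown below: the order service enqueues a notification and returns, while a separate consumer (here an assumed Lambda function using Amazon SES) sends the email. The queue URL and email addresses are placeholders.

```python
import json

import boto3

sqs = boto3.client("sqs")
ses = boto3.client("ses")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/order-notifications"  # placeholder


def complete_order(order):
    """Order service: finish the order, enqueue the notification, and return."""
    # ... persist the order here ...
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(order))
    return {"order_id": order["order_id"], "status": "COMPLETE"}


def email_sender_handler(event, context):
    """Separate Lambda consumer: sends the email outside the order workflow."""
    for record in event["Records"]:  # the SQS event source delivers batches
        order = json.loads(record["body"])
        ses.send_email(
            Source="orders@example.com",
            Destination={"ToAddresses": [order["customer_email"]]},
            Message={
                "Subject": {"Data": f"Order {order['order_id']} confirmed"},
                "Body": {"Text": {"Data": "Thanks for your purchase!"}},
            },
        )
```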
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is an asynchronous pattern in the context of application development?
What is a message queue and how does it help in asynchronous processing?
How does using AWS Lambda for sending emails fit into the asynchronous model?
Your team is designing an AWS-based application where one component processes customer orders, and another component handles inventory management. Considering that you need to minimize the interdependence between these two components in case one of them fails, which approach would contribute to a more loosely coupled architecture?
Using an Amazon Simple Queue Service (SQS) to facilitate message passing between the order processing and inventory management components
Creating database triggers within your order database to automatically update the inventory management system in real-time
Implementing synchronous REST API interactions between the two services, requiring an immediate response after an order is placed
Setting up a REST service within the inventory management component that is called directly by the order processing service for every new order
Answer Description
Using an Amazon Simple Queue Service (SQS) queue to pass messages between the order processing and inventory management components supports a loosely coupled architecture. With SQS, if the inventory management component is slow to process messages or temporarily unavailable, order processing isn't directly affected since messages are queued. On the other hand, synchronous REST API interactions and database triggers are indicative of tight coupling because they rely on immediate responses and can lead to cascading failures if one component has issues. A REST service calling another REST service synchronously is tightly coupled, which can also introduce points of failure if the receiving service is down.
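For illustration, the consumer side of such a queue might look like the boto3 sketch below, where the inventory service polls its queue at its own pace; the queue URL is a placeholder.

```python
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/inventory-updates"  # placeholder


def poll_inventory_updates():
    """Inventory service: consume order messages at its own pace."""
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling reduces empty responses
    )
    for message in response.get("Messages", []):
        order = json.loads(message["Body"])
        # ... adjust stock levels for order["items"] here ...
        # Delete only after successful processing; otherwise the message
        # becomes visible again and is retried.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```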
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Amazon SQS and how does it work?
What does 'loosely coupled architecture' mean?
Why are synchronous REST API interactions considered tightly coupled?
You are developing a RESTful API. When a client sends a POST request to create a new resource, but the resource already exists, which HTTP status code should your API return to best adhere to standard practices?
500 Internal Server Error
202 Accepted
400 Bad Request
409 Conflict
Answer Description
According to standard HTTP status code definitions, 409 Conflict is the appropriate response when a request conflicts with the current state of the server, such as trying to create a resource that already exists. A 400 Bad Request implies general client errors, 202 Accepted is used when a request has been accepted for processing but the process has not been completed, and 500 Internal Server Error indicates a server error, not a conflict of resource state.
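As a rough illustration, an API Gateway Lambda proxy handler could signal the conflict as sketched below, reusing a DynamoDB conditional write to detect the existing resource; the table and attribute names are hypothetical.

```python
import json

import boto3

dynamodb = boto3.client("dynamodb")
TABLE = "Resources"  # hypothetical table backing the API


def create_resource_handler(event, context):
    """POST handler: 201 on creation, 409 if the resource already exists."""
    body = json.loads(event["body"])
    try:
        dynamodb.put_item(
            TableName=TABLE,
            Item={"resource_id": {"S": body["resource_id"]}},
            ConditionExpression="attribute_not_exists(resource_id)",
        )
    except dynamodb.exceptions.ConditionalCheckFailedException:
        return {
            "statusCode": 409,  # Conflict: the resource already exists
            "body": json.dumps({"message": "Resource already exists"}),
        }
    return {"statusCode": 201, "body": json.dumps({"resource_id": body["resource_id"]})}
```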
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a RESTful API?
What are HTTP status codes and why are they important?
What does '409 Conflict' mean in HTTP status codes?
Your company's new web application is required to authenticate users leveraging their existing social network accounts to streamline the sign-in process. Which service would you utilize to enable this feature while maintaining a seamless user experience and secure authentication process?
Amazon Cognito
AWS Identity and Access Management
AWS Security Token Service
AWS Directory Service
Answer Description
The correct service to use in this scenario is Amazon Cognito, as it provides federated identity management that allows for integration with social identity providers like Facebook, Google, and Amazon, along with SAML and OIDC identity providers. This service simplifies the integration of user authentication through existing social networks within your web application. AWS IAM is a service for managing permissions and is not designed for federated social identity providers. AWS STS provides temporary security credentials for short-term access but not social identity provider integration. AWS Directory Service is used for integrating AWS resources with an existing on-premises Active Directory but is not designed for direct integration with social media identity providers.
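One hedged sketch of the federation flow is shown below, using a Cognito identity pool to exchange a social provider token for temporary AWS credentials; the identity pool ID and token are placeholders, and many applications instead sign users in through a Cognito user pool with the hosted UI.

```python
import boto3

cognito = boto3.client("cognito-identity")

IDENTITY_POOL_ID = "us-east-1:example-identity-pool-id"  # placeholder
facebook_token = "<token returned by the Facebook login SDK>"  # placeholder

# Exchange the social provider's token for a Cognito identity...
identity = cognito.get_id(
    IdentityPoolId=IDENTITY_POOL_ID,
    Logins={"graph.facebook.com": facebook_token},
)

# ...and then for temporary AWS credentials scoped by the pool's IAM roles.
creds = cognito.get_credentials_for_identity(
    IdentityId=identity["IdentityId"],
    Logins={"graph.facebook.com": facebook_token},
)["Credentials"]

print(creds["AccessKeyId"], creds["Expiration"])
```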
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Amazon Cognito and how does it work?
What are federated identity providers and why are they important?
What is the difference between AWS IAM and Amazon Cognito?
You are working on a serverless application where each component is designed to execute a distinct operation within an e-commerce checkout process. During the development cycle, you want to confirm that each component functions independently and as expected without making actual calls to cloud services. What technique should you employ within your unit tests to simulate the behavior of the external dependencies?
Configure an API management service to handle dependencies during test runs
Reference recorded responses from an object storage service during test execution
Utilize SDK mocking utilities to emulate the behavior of external service calls
Create instances of client objects specific to cloud resources within your unit tests
Answer Description
To simulate the behavior of external dependencies, you should utilize mocking utilities provided with the SDK for your chosen programming language. This ensures that tests can run without the need for actual service calls, allowing for true unit testing by isolating the code from external interactions. Creating real client instances during testing would result in calls to the actual services, which contradicts the principles of unit testing. Similarly, configuring API Gateway to manage test dependencies creates actual service interactions. Using sample responses from an object storage service introduces external dependency to tests, which is contrary to the concept of isolation in unit testing.
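For example, a unit test might stub DynamoDB calls with botocore's `Stubber`, as sketched below; the table, key, and function names are hypothetical, and other mocking approaches (such as `unittest.mock` or moto) work similarly.

```python
import boto3
from botocore.stub import Stubber


def get_order_status(dynamodb_client, order_id):
    """Code under test: reads an order record from DynamoDB."""
    response = dynamodb_client.get_item(
        TableName="Orders", Key={"order_id": {"S": order_id}}
    )
    return response["Item"]["status"]["S"]


def test_get_order_status():
    client = boto3.client(
        "dynamodb",
        region_name="us-east-1",
        aws_access_key_id="testing",       # dummy credentials; no real call is made
        aws_secret_access_key="testing",
    )
    stubber = Stubber(client)
    # The stub returns a canned response instead of calling AWS.
    stubber.add_response(
        "get_item",
        {"Item": {"order_id": {"S": "123"}, "status": {"S": "SHIPPED"}}},
        {"TableName": "Orders", "Key": {"order_id": {"S": "123"}}},
    )
    with stubber:
        assert get_order_status(client, "123") == "SHIPPED"
```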
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are SDK mocking utilities and how do they work?
Why is it important to isolate external dependencies in unit tests?
What are the drawbacks of creating real client instances for testing?
A developer is implementing a cloud-based messaging system where it's critical that messages are processed at least once, but the processing service is occasionally prone to temporary outages. How should the developer ensure message processing can be retried reliably without overwhelming the service when it comes back online?
Set up unlimited immediate retries for all failed messages
Implement retries with exponential backoff and jitter
Create a dedicated error queue for failed message processing attempts
Increase the message visibility timeout to its maximum limit
Answer Description
By implementing retries with exponential backoff and jitter, the developer can ensure that the message processing is retried reliably without overwhelming the service after an outage. Exponential backoff increases the delay between retry attempts, reducing the load on the service when it recovers, and jitter adds randomness to these delays to prevent synchronized retries. A dedicated error queue is not a retry mechanism but rather a way to separate unprocessable messages. Unlimited immediate retries could easily overwhelm the service, and increasing the message visibility timeout alone does not provide a mechanism for retrying message processing.
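As a related illustration, the AWS SDKs build this behavior in for their own API calls; the boto3 sketch below enables the adaptive retry mode, which applies exponential backoff with jitter to retryable errors. The queue URL is a placeholder, and application-level retries of message processing would still sit on top of this.

```python
import boto3
from botocore.config import Config

# The "standard" and "adaptive" retry modes implement exponential backoff
# with jitter for retryable errors such as throttling.
retry_config = Config(retries={"max_attempts": 10, "mode": "adaptive"})

sqs = boto3.client("sqs", config=retry_config)

# Calls made with this client are retried automatically with backoff and
# jitter, so a recovering service is not hit with a synchronized burst.
response = sqs.receive_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/example-queue",  # placeholder
    WaitTimeSeconds=20,
)
```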
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is exponential backoff and how does it work?
What is jitter and why is it important in retry mechanisms?
How does a dedicated error queue differ from a retry mechanism?
Smashing!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.