AWS Certified Developer Associate Practice Test (DVA-C02)
Use the form below to configure your AWS Certified Developer Associate Practice Test (DVA-C02). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 90 questions and set a time limit.

AWS Certified Developer Associate DVA-C02 Information
AWS Certified Developer - Associate showcases knowledge of core AWS services, their uses, and basic AWS architecture best practices, along with proficiency in developing, deploying, and debugging cloud-based applications on AWS. Preparing for and attaining this certification gives certified individuals more confidence and credibility. Organizations with AWS Certified developers have the assurance of having the right talent to give them a competitive advantage and ensure stakeholder and customer satisfaction.
The AWS Certified Developer - Associate (DVA-C02) exam is intended for individuals who perform a developer role. The exam validates a candidate’s ability to demonstrate proficiency in developing, testing, deploying, and debugging AWS Cloud-based applications. The exam also validates a candidate’s ability to complete the following tasks:
- Develop and optimize applications on AWS.
- Package and deploy by using continuous integration and continuous delivery (CI/CD) workflows.
- Secure application code and data.
- Identify and resolve application issues.
Free AWS Certified Developer Associate DVA-C02 Practice Test
Press start when you are ready, or press Change to modify any settings for the practice test.
- Questions: 15
- Time: Unlimited
- Included Topics: Development with AWS Services, Security, Deployment, Troubleshooting and Optimization
A development team is looking to manage environment-specific settings for their e-commerce web service interface, which routes incoming requests to the same backend AWS Lambda functions. They want to ensure that while the Lambda codebase remains identical, the behavior can be customized based on the environment it's deployed in. How should the team achieve this distinction within the web service interface provided by AWS?
Utilize environment configuration parameters, unique to each deployment stage of the web service interface, to pass specific values to the backend without changing the functions.
Alter the backend function code to incorporate environment-specific logic, and bundle this variance within the function deployment package.
Establish distinct backend functions for each deployment stage to manage the configuration differences, and allocate them to the web service interface accordingly.
Introduce separate interface methods distinguished by the intended environment to control the routing of requests.
Answer Description
Stage variables in Amazon API Gateway allow for the configuration of environment-specific settings without altering the backend Lambda functions. By defining different stage variables for development and production stages, developers can alter the behavior of the interface, such as resource paths or logging levels, without duplicating Lambda functions or creating multiple APIs. This approach is cost-effective and maintains a single codebase for backend processing. The incorrect answers suggest inefficient practices, such as deploying separate sets of Lambda functions or modifying backend code for each environment, which would increase cost and complexity.
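As a minimal sketch of the idea: with a Lambda proxy integration, API Gateway passes the current stage's variables to the function under the event's "stageVariables" key, so the same code behaves differently per stage. The variable name "logLevel" below is a hypothetical example, not something from the question.

```python
# Hedged sketch: one Lambda handler, different behavior per API Gateway
# stage, driven entirely by stage variables (no code changes per env).
def handler(event, context):
    stage_vars = event.get("stageVariables") or {}
    # "logLevel" is a hypothetical stage variable for illustration.
    log_level = stage_vars.get("logLevel", "INFO")
    return {"statusCode": 200, "body": f"log level: {log_level}"}

# Simulated proxy events for a dev stage and a prod stage
dev_event = {"stageVariables": {"logLevel": "DEBUG"}}
prod_event = {"stageVariables": {"logLevel": "WARN"}}
```

The same deployment package serves both stages; only the stage configuration differs.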
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI-generated content may display inaccurate information; always double-check anything important.
What are stage variables in AWS, and how do they work?
Why is using environment configuration parameters preferable over creating separate Lambda functions?
What are some common use cases for stage variables?
Is it possible to enhance a microservices architecture's resiliency by utilizing a publish/subscribe service to distribute messages concurrently to multiple message queueing services?
False
True
Answer Description
Yes, employing a publish/subscribe service to distribute messages simultaneously to several message queueing services is a well-established method for decoupling components within a microservices architecture, enhancing fault tolerance and resilience. When a publisher sends a message, the service fans it out to all subscribed queues, each corresponding to a different consumer service. Should one consumer service experience an issue, it doesn't affect the ability of other services to continue processing their respective messages. This design pattern prevents any single point of failure from bringing down the entire system, thereby improving overall fault tolerance.
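The fan-out pattern described above can be sketched with plain data structures standing in for the pub/sub topic and its subscribed queues (an illustrative stand-in for, say, an SNS topic fanning out to SQS queues; the class and queue names are invented for this example):

```python
from collections import deque

# Toy fan-out: one publish delivers an independent copy of the message
# to every subscribed queue, so a stalled consumer on one queue does
# not block the others from processing theirs.
class Topic:
    def __init__(self):
        self.queues = []

    def subscribe(self, queue):
        self.queues.append(queue)

    def publish(self, message):
        for q in self.queues:
            q.append(message)  # each consumer's queue gets its own copy

orders_queue, billing_queue = deque(), deque()
topic = Topic()
topic.subscribe(orders_queue)
topic.subscribe(billing_queue)
topic.publish({"order_id": 42})
```

If the billing consumer is down, its messages simply wait in its queue while the orders consumer keeps working — no single point of failure.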
Ask Bash
What are microservices in the context of software architecture?
What is a publish/subscribe (pub/sub) service?
How do message queueing services contribute to system resiliency?
A development team is working on a multi-tier application that involves serverless functions and virtual server instances. To diagnose performance issues and better understand the interaction between components, the team decides they need a service to track the requests as they travel through the different parts of the application. Which service should they use to achieve this with minimal changes to the existing code?
X-Ray
Inspector
CloudWatch
CloudTrail
Answer Description
The service designed for end-to-end tracing of requests across a distributed system, including both serverless and server-based components, is AWS X-Ray. AWS X-Ray collects data about requests and enables developers to analyze and debug production, distributed applications, such as those built using a microservices architecture. The alternatives focus on API call auditing (CloudTrail), application and infrastructure monitoring (CloudWatch), or security assessments (Inspector), but do not provide the distributed request tracing capabilities the team requires.
Ask Bash
What exactly does AWS X-Ray do?
What are the key features of AWS X-Ray?
How does AWS X-Ray differ from CloudWatch and CloudTrail?
Which service is primarily used to gather insights from system-wide metrics, set up alarms based on specific conditions, and aggregate log data from various sources within a central platform?
Amazon CloudWatch
Amazon S3
AWS CloudTrail
Amazon Kinesis
Answer Description
Amazon CloudWatch is the correct answer because it is explicitly designed to monitor applications and infrastructure, allowing you to collect metrics, create alarms, and access logs in a unified way. This aids in observing system performance and operational health. Amazon Kinesis focuses on real-time data streaming and analysis, not on the comprehensive monitoring capabilities provided by CloudWatch. AWS CloudTrail is a governance, compliance, and auditing service that records API calls, including actions taken by AWS services, but does not offer the extensive metric and alarm features of CloudWatch. While Amazon S3 can be used for storing log data, it is mainly a storage service and lacks the active monitoring and alerting facilities that CloudWatch offers.
Ask Bash
What types of metrics can Amazon CloudWatch monitor?
How do I set up alarms in Amazon CloudWatch?
What is the difference between Amazon CloudWatch and AWS CloudTrail?
A development team is working on a new feature in a project hosted in AWS CodeCommit. They want to ensure that any changes pushed to the master branch have been reviewed and approved by at least two team members before being merged. Which feature in CodeCommit can they utilize to enforce this requirement?
Implementing stage locking on the master branch
Requiring a pre-signed commit policy on the master branch
Configuring a pull request approval rule in CodeCommit for the master branch
Enabling branch protections for the master branch
Answer Description
Pull request approvals are a feature in AWS CodeCommit that teams can use to enforce code review policies. By requiring a certain number of approvals before changes can be merged into a particular branch, like the master branch, teams can ensure that multiple team members have reviewed the changes. Branch protections in other services, such as GitHub, may offer similar functionality but are not the correct answer here because the scenario specifically involves AWS CodeCommit. The pre-signed commit policy is not applicable here, as it is not a feature of AWS CodeCommit. Lastly, the concept of 'stage locking' does not directly apply to Git-based version control systems and thus is incorrect in this context.
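For reference, an approval rule's content is a small JSON document following CodeCommit's approval rule template format. The sketch below builds one requiring two approvals on the master branch; the account ID and approval-pool role name are placeholders, not values from the question.

```python
import json

# Hedged sketch of approval rule content for CodeCommit, requiring two
# approvals before a pull request into master can be merged.
approval_rule = {
    "Version": "2018-11-08",
    "DestinationReferences": ["refs/heads/master"],
    "Statements": [
        {
            "Type": "Approvers",
            "NumberOfApprovalsNeeded": 2,
            # Placeholder account ID and role name for the approval pool.
            "ApprovalPoolMembers": [
                "arn:aws:sts::123456789012:assumed-role/CodeCommitReview/*"
            ],
        }
    ],
}
rule_content = json.dumps(approval_rule)
```

This content would typically be supplied when creating an approval rule or an approval rule template in CodeCommit.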
Ask Bash
What are pull request approval rules in CodeCommit?
How do I configure pull request approval rules in CodeCommit?
What is branch protection, and how does it differ from pull request approval rules?
Your team needs to update a production application running on AWS Elastic Beanstalk without downtime and with the ability to revert back immediately if issues arise. Which deployment policy should you employ to BEST meet these requirements?
Rolling
Blue/Green
Rolling with additional batch
Immutable
Answer Description
The Blue/Green deployment strategy deploys the new version of the application alongside the old version before a full-scale switch of traffic. This minimizes downtime and provides an immediate rollback mechanism if the new version fails: traffic is simply routed back to the old environment. Incremental deployments, such as Rolling and Rolling with additional batch, can still affect availability while instances are updated in batches, and they do not provide an immediate rollback capability. Immutable deployments, while minimizing downtime by provisioning new instances, also lack the immediate rollback that Blue/Green deployments offer.
Ask Bash
What is a Blue/Green deployment?
How does Blue/Green deployment support rollback?
What are the advantages of using Blue/Green deployments over other strategies?
Your application uses an Amazon DynamoDB table to serve user profile information. You notice an increase in the read load causing throttling issues due to repeated accesses to the same items. Which caching strategy should you implement to minimize read latency and reduce the load on the DynamoDB table while ensuring data consistency for frequently accessed items?
Apply lazy loading to load user profiles into the cache only when updates occur
Set a time-to-live (TTL) for user profiles to invalidate the cache periodically
Store user profiles in Amazon S3 and synchronize them with DynamoDB
Use a write-through cache to preload user profiles into Amazon ElastiCache
Increase the read capacity units (RCUs) for the DynamoDB table to handle the higher load
Implement a read-through cache using Amazon ElastiCache
Answer Description
Implementing a 'read-through' caching strategy will minimize read latency and reduce the load on the DynamoDB table by caching the data after the first read from the database. Subsequent reads are served directly from the cache, which decreases the burden on the table. As the cache is populated on-demand during a database read, it ensures that the data in the cache is consistent with the database. Other caching strategies might not be as effective for this scenario. For example, write-through caching is more suited for write-heavy applications, lazy loading can lead to stale data if not managed properly, and TTL doesn't directly address the problem of high read load and throttling.
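The read-through behavior described above can be sketched with dictionaries standing in for ElastiCache and the DynamoDB table (the class and keys are invented for illustration):

```python
# Toy read-through cache: the cache sits in front of the table and
# populates itself on a miss, so repeated reads of hot items never
# reach the backing store again.
class ReadThroughCache:
    def __init__(self, backing_store):
        self.store = backing_store   # stand-in for the DynamoDB table
        self.cache = {}              # stand-in for ElastiCache
        self.store_reads = 0         # counts reads that hit the table

    def get(self, key):
        if key not in self.cache:                # cache miss
            self.store_reads += 1
            self.cache[key] = self.store[key]    # load from the table once
        return self.cache[key]                   # cache hit from here on

profiles = {"user-1": {"name": "Ada"}}
cache = ReadThroughCache(profiles)
cache.get("user-1")
cache.get("user-1")  # served from cache; no second table read
```

After the first read, every subsequent access to "user-1" is served from the cache, which is exactly what relieves the throttled table.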
Ask Bash
What is a read-through cache and how does it work?
What is Amazon ElastiCache and why is it used with DynamoDB?
What are the potential drawbacks of using a read-through cache?
A developer needs to process a stream of incoming data and update a DynamoDB table accordingly. The developer has decided to use a Lambda function to process the batches of records as they arrive and update DynamoDB. Which AWS service should be used to trigger the Lambda function in response to the incoming data stream?
Use Amazon Kinesis Data Streams to trigger the Lambda function.
Configure Amazon S3 event notifications to trigger the Lambda function.
Employ Amazon SNS to publish messages that trigger the Lambda function.
Utilize Amazon SQS to queue the data and trigger the Lambda function.
Answer Description
Amazon Kinesis Data Streams is a service for real-time processing of large volumes of streaming data, and Lambda functions can be triggered directly by a Kinesis data stream through an event source mapping. Using Amazon Kinesis Data Streams to trigger the Lambda function is therefore the correct answer, because it aligns with the requirement of processing an incoming data stream. AWS Lambda can also be triggered by Amazon DynamoDB Streams when data is modified in a DynamoDB table, but the scenario involves processing a stream of incoming data, not reacting to changes in the table.
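As a hedged sketch of the consumer side: with a Kinesis event source mapping, Lambda receives batches of records whose payloads are base64-encoded under each record's "kinesis" → "data" field. The DynamoDB write is left as a comment, and the record contents are invented for this example.

```python
import base64
import json

# Sketch of a Lambda handler behind a Kinesis event source mapping:
# decode each record in the batch, then (in a real function) write
# the resulting item to DynamoDB.
def handler(event, context):
    items = []
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        items.append(payload)  # here you would put_item into DynamoDB
    return items

# Simulated one-record batch in the Kinesis event shape
fake_event = {
    "Records": [
        {"kinesis": {"data": base64.b64encode(
            json.dumps({"id": 1}).encode()).decode()}}
    ]
}
```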
Ask Bash
What is Amazon Kinesis Data Streams and how does it work?
What are the advantages of using AWS Lambda with Kinesis Data Streams?
What other services can trigger AWS Lambda, and when should I use them?
Your company is developing a serverless application with an AWS Lambda function that requires multiple development and testing environments. Which AWS feature allows you to point to different versions of a Lambda function for different integration testing environments without modifying the function's ARN?
Lambda Versions
Lambda Aliases
Stage Variables in API Gateway
Environment Variables in Lambda
Answer Description
AWS Lambda aliases enable you to route traffic to different versions of a Lambda function. An alias acts like a pointer and can be updated to reference a different function version as needed. This is useful for integration testing because you can maintain separate testing environments without changing the function's ARN, ensuring the stability and consistency of your application during the testing phase. Lambda versions alone and API Gateway stage variables are incorrect choices: while they can point to different configuration settings or function versions, they do not offer the same aliasing capability for routing across environments without altering the ARN.
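A toy model of the pointer behavior (the class below is an illustration, not the Lambda API): an alias is a stable name that resolves to a version, optionally splitting a fraction of traffic to a second version, mirroring an alias's weighted routing configuration.

```python
import random

# Toy model of a Lambda alias: callers always invoke the alias name,
# so the ARN they use never changes even as the target version moves.
class Alias:
    def __init__(self, version, extra_version=None, extra_weight=0.0):
        self.version = version            # primary target version
        self.extra_version = extra_version
        self.extra_weight = extra_weight  # fraction routed to the extra version

    def resolve(self):
        if self.extra_version and random.random() < self.extra_weight:
            return self.extra_version
        return self.version

test_alias = Alias(version="1")
test_alias.version = "2"  # repoint the alias; callers are unaffected
```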
Ask Bash
What are AWS Lambda Aliases?
How do Lambda Versions differ from Aliases?
What are some use cases for using Lambda Aliases in a serverless architecture?
While debugging an AWS Lambda function triggered by API Gateway, a developer finds that the HTTP client receives an error code indicating that the server cannot process the request due to a client error. Which HTTP status code is typically returned by API Gateway when the request payload is malformed or contains invalid values?
200 OK
502 Bad Gateway
500 Internal Server Error
400 Bad Request
Answer Description
The HTTP 400 Bad Request status code indicates that the server could not process the request due to a client error, such as invalid syntax or values. When API Gateway receives a request whose payload does not meet the expected criteria, it responds with this error code, signalling that the client should not repeat the request without modification. The 200 OK status code indicates that the request has succeeded, which is not appropriate in this context. A 500 Internal Server Error suggests a server-side issue, which would not be the correct response for a client error. Finally, 502 Bad Gateway implies that the server, while acting as a gateway or proxy, received an invalid response from the upstream server, which does not fit the scenario described.
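A minimal sketch of this validation behavior behind a Lambda proxy integration (the "userId" field is a hypothetical requirement invented for the example): both malformed JSON and a well-formed payload missing a required field come back as 400, while a valid payload gets 200.

```python
import json

# Hedged sketch: client errors in the payload map to 400 Bad Request,
# telling the caller to fix the request before retrying.
def handler(event, context):
    try:
        body = json.loads(event.get("body") or "")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": "malformed JSON"}
    if "userId" not in body:  # hypothetical required field
        return {"statusCode": 400, "body": "missing userId"}
    return {"statusCode": 200, "body": "ok"}
```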
Ask Bash
What are some common causes of a 400 Bad Request error in API Gateway?
How can a developer debug a 400 Bad Request error in AWS Lambda?
What are the differences between client error and server error HTTP status codes?
Which configuration under Amazon API Gateway allows a developer to associate a REST API with a custom domain name?
Base Path Mapping
Custom Domain Names
Resource Policies
Stage Variables
Answer Description
API Gateway Custom Domain Names are used to define a custom domain for your API Gateway APIs and map individual paths to different stages in your API. This functionality enables serverless applications to be accessible via human-friendly URLs. Custom Domain Names are distinctly different from Stages, which are essentially different snapshots of your API and do not inherently provide the means to use custom URLs.
Ask Bash
What are Custom Domain Names in API Gateway?
What is Base Path Mapping and how does it relate to Custom Domain Names?
What are Stages in API Gateway?
An application running on an EC2 instance needs to securely interact with various cloud resources. To follow best security practices, you've assigned a particular role to the instance to facilitate this. Which method should your application use to authenticate service requests seamlessly?
Generate and use a dedicated set of long-term security credentials, storing them in the instance storage for service requests.
Rely on the cloud SDK's default behavior to retrieve temporary security credentials provided through the instance's metadata.
Embed a fixed set of security credentials within the application's source code to authenticate service requests.
Implement a custom script to fetch temporary security tokens using GetSessionToken for service request authorization.
Answer Description
When an EC2 instance is launched with an assigned role, temporary security credentials are provided automatically through the Instance Metadata Service (IMDS). The application can then make secure requests to other services without explicit credential management. Manually invoking GetSessionToken is not necessary, since automatic credential fetching is already available. Storing long-term access keys on the instance is insecure, and embedding credentials in application code poses significant security risks.
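For illustration, the sketch below shows roughly what the SDK's default credential chain does under the hood with IMDSv2: obtain a short-lived session token, then request the role's temporary credentials with it. The role name is a placeholder; the network calls only work from inside an EC2 instance, and in practice you would simply let the SDK (e.g. boto3) handle this and the credential refresh for you.

```python
import urllib.request

IMDS_BASE = "http://169.254.169.254/latest"

def credentials_url(role_name):
    # Metadata path that serves the temporary credentials for a role.
    return f"{IMDS_BASE}/meta-data/iam/security-credentials/{role_name}"

def fetch_credentials(role_name):
    # IMDSv2: a session token must be obtained first (PUT), then used
    # as a header on the credentials request. Only reachable on EC2.
    token_req = urllib.request.Request(
        f"{IMDS_BASE}/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req).read().decode()
    cred_req = urllib.request.Request(
        credentials_url(role_name),  # role name is a placeholder
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(cred_req).read().decode()
```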
Ask Bash
What is the Instance Metadata Service (IMDS)?
What are temporary security credentials and why are they important?
What are the risks associated with embedding security credentials in application code?
Using exponential backoff alone is more effective in handling retries in a distributed system than combining exponential backoff with jitter for managing transient failures.
True
False
Answer Description
The statement is false. While exponential backoff is a strategy that progressively increases the wait time between consecutive retry attempts, thus reducing the load on the server and giving it time to recover, it might still lead to the so-called 'retry-storm' if many clients are trying to reconnect at the same intervals. By adding jitter, or randomization to the wait times, the retries are more evenly spread out, which further decreases the likelihood of overwhelming the system. This combined approach reduces the chance of synchronized retries across multiple instances and is generally acknowledged as a best practice for fault-tolerant design within distributed systems.
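The combined strategy is often implemented as "full jitter": each retry waits a random amount between zero and an exponentially growing cap, so many clients retrying at once do not hammer the server in synchronized waves. A minimal sketch (the base and cap values are arbitrary examples):

```python
import random

# Full-jitter backoff: exponential growth sets the ceiling, the random
# draw spreads retries out so clients do not retry in lockstep.
def backoff_delay(attempt, base=0.1, cap=20.0):
    exp = min(cap, base * (2 ** attempt))  # plain exponential backoff
    return random.uniform(0, exp)          # jitter de-synchronizes clients

# Delays for the first five retry attempts of one client
delays = [backoff_delay(n) for n in range(5)]
```

Without the `random.uniform` step, every client on attempt n would wait exactly the same time, recreating the retry storm the answer describes.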
Ask Bash
What is exponential backoff and how does it work?
What is jitter and why is it important in distributed systems?
What are transient failures and how do they affect distributed systems?
An application development team is leveraging a serverless framework provided by AWS to manage and deploy their serverless infrastructure. When the team needs to introduce new updates for testing that will not impact the live environment accessed by users, which action aligns best with this requirement?
Update the live environment during off-peak hours to minimize the impact on end-users.
Deploy the updates to a separate staging environment that replicates the live settings.
Conduct IAM role permission testing to ensure that the updates will not affect user access control.
Apply the updates in a different, unconnected environment to protect the live environment.
Answer Description
The best practice for deploying updates without impacting the live user environment is to deploy the changes to a separate staging environment. This approach provides a testing ground that simulates the live environment but prevents any disruptions or unintended consequences from reaching actual users. Directly updating the live environment can lead to untested or faulty code impacting users, while choosing a completely unrelated environment may not provide accurate testing feedback. Testing IAM role permissions is part of securing the deployments, but it does not replace the need for environment-specific deployment for testing changes.
Ask Bash
What is a staging environment, and why is it important?
How does deploying to a separate environment improve the testing process?
What are the risks of updating the live environment directly?
A development team needs to allow an external consultant's account to access a specific Amazon S3 bucket to store and retrieve files essential for a joint project. The external consultant should not be given user credentials within the team's AWS account. What type of policy should the development team attach to the S3 bucket to allow access directly to the bucket itself?
Identity-based policy attached to a user
IAM group policy
Resource-based policy (e.g., S3 bucket policy)
Service control policy (SCP)
Answer Description
A resource-based policy, specifically an S3 bucket policy, is the correct way to grant permissions on the S3 bucket directly to principals in another AWS account without sharing any user credentials. The policy grants the external account the necessary permissions on the bucket itself. A service control policy (SCP) applies to the IAM entities within an AWS Organizations account and does not grant permissions to outside accounts, while identity-based and IAM group policies are attached to users and groups within your own account, not to resources.
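A sketch of such a bucket policy, expressed as a Python dict ready to serialize; the account ID (111122223333) and bucket name are placeholders, not values from the question:

```python
import json

# Hedged example of an S3 bucket policy granting another account
# read/write access to the bucket's objects. Account ID and bucket
# name below are placeholders.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::shared-project-bucket/*",
        }
    ],
}
policy_json = json.dumps(bucket_policy)
```

The serialized JSON is what gets attached to the bucket; no IAM user or credentials are created in the owning account.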
Ask Bash
What is a resource-based policy in AWS?
How does an S3 bucket policy work?
What are the key differences between resource-based policies and identity-based policies in AWS?
That's It!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.