AWS Certified Developer Associate Practice Test (DVA-C02)

AWS Certified Developer Associate DVA-C02 Information
AWS Certified Developer - Associate showcases knowledge and understanding of core AWS services, uses, and basic AWS architecture best practices, and proficiency in developing, deploying, and debugging cloud-based applications by using AWS. Preparing for and attaining this certification gives certified individuals more confidence and credibility. Organizations with AWS Certified developers have the assurance of having the right talent to give them a competitive advantage and ensure stakeholder and customer satisfaction.
The AWS Certified Developer - Associate (DVA-C02) exam is intended for individuals who perform a developer role. The exam validates a candidate’s ability to demonstrate proficiency in developing, testing, deploying, and debugging AWS Cloud-based applications. The exam also validates a candidate’s ability to complete the following tasks:
- Develop and optimize applications on AWS.
- Package and deploy by using continuous integration and continuous delivery (CI/CD) workflows.
- Secure application code and data.
- Identify and resolve application issues.
Free AWS Certified Developer Associate DVA-C02 Practice Test
- Questions: 15
- Time: Unlimited
- Included Topics: Development with AWS Services, Security, Deployment, Troubleshooting and Optimization
A company's application is hosted on AWS and leverages Elastic Beanstalk for its production environment. They aim to deploy a new version of the application with minimal impact on their live environment. They want to ensure that only a small percentage of users will be directed to the new version initially, and if issues arise, they can quickly revert to the old version. Which deployment strategy should they implement to meet this requirement?
Rolling deployments
All-at-once deployments
Blue/green deployments
Canary deployments
Answer Description
The correct answer is 'Canary deployments,' because this strategy allows for slowly rolling out the change to a subset of users before making it available to everybody. This approach is particularly useful for minimizing the impact on the production environment if a rollback is necessary due to unforeseen issues with the new version. A 'Blue/green deployment' is incorrect because it involves two separate but identical environments where traffic is switched all at once, not gradually. 'Rolling deployments' is also incorrect as it gradually replaces instances of the old version with the new one on a rolling basis without specifically targeting a population percentage. Lastly, 'All-at-once deployments' is incorrect since it deploys the new version to all instances simultaneously, which would not allow for the controlled exposure or easy rollback that the scenario requires.
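As a rough illustration, Elastic Beanstalk's traffic-splitting deployment policy provides this canary-style behavior. The sketch below is an assumption-laden example (the environment name, percentage, and evaluation time are hypothetical) of how the option settings might be applied with boto3:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

eb.update_environment(
    EnvironmentName="my-app-prod",  # hypothetical environment name
    OptionSettings=[
        # Use traffic-splitting (canary-style) deployments.
        {"Namespace": "aws:elasticbeanstalk:command",
         "OptionName": "DeploymentPolicy", "Value": "TrafficSplitting"},
        # Send 10% of client traffic to the new application version first.
        {"Namespace": "aws:elasticbeanstalk:trafficsplitting",
         "OptionName": "NewVersionPercent", "Value": "10"},
        # Evaluate the new version for 15 minutes before shifting the remaining traffic.
        {"Namespace": "aws:elasticbeanstalk:trafficsplitting",
         "OptionName": "EvaluationTime", "Value": "15"},
    ],
)
```

If health checks fail during the evaluation window, traffic can be shifted back to the old version, which is the quick-rollback property the scenario asks for.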
A developer has built a web application that interfaces with a product catalog stored in Amazon DynamoDB. The application experiences intermittent periods of high read traffic which could potentially throttle DynamoDB read capacity. To maintain application responsiveness, which service should the developer integrate to provide an effective caching layer?
AWS DataSync
Amazon S3 Transfer Acceleration
Amazon ElastiCache
Amazon RDS Read Replica
Answer Description
Amazon ElastiCache is the appropriate service to be used for adding a caching layer because it supports caching strategies like write-through, read-through, and lazy loading which can handle high traffic and prevent throttling issues with DynamoDB. It supports both Memcached and Redis in-memory data stores which are suitable for various caching use cases.
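For a concrete sense of the lazy-loading (cache-aside) pattern mentioned above, here is a minimal sketch assuming the redis-py client, an ElastiCache for Redis endpoint, and a DynamoDB table named ProductCatalog (all names are illustrative):

```python
import json
import boto3
import redis  # assumes the redis-py client and a reachable ElastiCache Redis endpoint

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ProductCatalog")  # hypothetical table name
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def get_product(product_id: str) -> dict:
    """Lazy loading (cache-aside): check the cache first, fall back to DynamoDB on a miss."""
    cached = cache.get(f"product:{product_id}")
    if cached:
        return json.loads(cached)

    item = table.get_item(Key={"ProductId": product_id}).get("Item", {})
    # Populate the cache with a TTL so repeated hot reads stop consuming DynamoDB read capacity.
    cache.setex(f"product:{product_id}", 300, json.dumps(item, default=str))
    return item
```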
An application development team has implemented a system of microservices and requires a method to transparently diagnose issues that may arise during the traversal of requests across multiple service boundaries. Which service would provide them with the ability to track these requests, analyze performance bottlenecks, and visualize the service interaction?
Amazon VPC Flow Logs
Amazon CloudWatch
Amazon CloudTrail
AWS X-Ray
Answer Description
The correct service for the described needs is AWS X-Ray. This service is specifically designed to help developers analyze and debug both production and distributed applications, such as those built with a microservices architecture. It provides an end-to-end view of requests as they journey through your application, and it constructs a detailed service map that visualizes the application’s architecture. While services like CloudTrail and VPC Flow Logs are useful for their own specific use cases (API activity monitoring and network traffic logging, respectively), they do not offer the dedicated tracing or application topology visualization capabilities that AWS X-Ray provides. Similarly, Amazon CloudWatch focuses primarily on monitoring operational metrics and logs, and it doesn't offer the same comprehensive tracing or service mapping that is native to AWS X-Ray.
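As an illustrative sketch, instrumenting a Python Lambda handler with the AWS X-Ray SDK might look like the following. It assumes active tracing is enabled on the function (so a segment already exists when the handler runs) and the table name is hypothetical:

```python
# Requires the AWS X-Ray SDK for Python (aws-xray-sdk).
import boto3
from aws_xray_sdk.core import xray_recorder, patch_all

patch_all()  # automatically instrument supported libraries such as boto3 and requests

@xray_recorder.capture("lookup_order")  # record this call as its own subsegment in the trace
def lookup_order(order_id: str):
    table = boto3.resource("dynamodb").Table("Orders")  # hypothetical table name
    return table.get_item(Key={"OrderId": order_id}).get("Item")

def handler(event, context):
    # The downstream DynamoDB call appears on the X-Ray service map for this request.
    return lookup_order(event["orderId"])
```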
What deployment strategy allows for the simultaneous release of two versions of an application, directing a portion of the traffic to the new version and the rest to the current version to test performance and stability before full adoption?
A/B testing
Blue/green deployment
Rolling deployment
Canary deployment
Answer Description
The correct answer is 'Canary deployment'. Canary deployments are used to test the performance and stability of a new software version with a subset of users before rolling it out to all users. The term is derived from the 'canary in a coal mine' concept, where small, safe exposure to potential risk is monitored before proceeding further. Blue/green and rolling deployments, while viable strategies, do not use partial traffic routing for testing purposes. Instead, they provide full version replacement either instantly or incrementally across infrastructure. A/B testing, while involving traffic splitting, refers to a method primarily used to compare variations in user experience rather than software version stability.
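Outside Elastic Beanstalk, the same partial-traffic idea appears in Lambda alias routing. A minimal sketch, assuming a hypothetical function with two published versions, could shift 10% of traffic to the newer version like this:

```python
import boto3

lam = boto3.client("lambda")

# Route 10% of invocations of the "live" alias to version 8 while 90% stay on version 7.
lam.update_alias(
    FunctionName="order-service",  # hypothetical function name
    Name="live",
    FunctionVersion="7",
    RoutingConfig={"AdditionalVersionWeights": {"8": 0.10}},
)
```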
A development team is utilizing AWS CodeCommit for their source control and they wish to implement a workflow strategy that allows parallel development while ensuring that new features can be developed independently from the hotfixes to the production code. Which of the following branching strategies would best facilitate this requirement?
Using a single 'development' branch for both new features and hotfixes and merging to main for releases
Using a feature branching strategy and creating a separate branch for hotfixes
Creating different repositories for features and hotfixes
Applying all changes directly to the main branch and using tags to differentiate feature changes from hotfixes
Answer Description
A feature branching strategy allows new features to be developed in isolation from the main branch (often called master or main). Different feature branches can be used to develop new features without affecting the stability of the main branch, which typically represents production or release code. When a feature is complete, it is merged back into the main branch. A hotfix branch is created specifically to address critical issues in the production environment and can be merged back into both the main and development branches without pulling in unfinished feature work. This separation of concerns ensures that neither new development nor emergency fixes interrupts the flow of the other. A small sketch of this workflow against CodeCommit follows below.
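The sketch below uses boto3 against CodeCommit; the repository and branch names are hypothetical, and in practice most of this would be done with git on a local clone:

```python
import boto3

cc = boto3.client("codecommit")
repo = "my-repo"  # hypothetical repository name

# Cut a feature branch and a hotfix branch from the latest commit on main.
main_tip = cc.get_branch(repositoryName=repo, branchName="main")["branch"]["commitId"]
cc.create_branch(repositoryName=repo, branchName="feature/checkout-redesign", commitId=main_tip)
cc.create_branch(repositoryName=repo, branchName="hotfix/payment-timeout", commitId=main_tip)

# Once the hotfix is verified, merge it back into main independently of feature work.
cc.merge_branches_by_fast_forward(
    repositoryName=repo,
    sourceCommitSpecifier="hotfix/payment-timeout",
    destinationCommitSpecifier="main",
)
```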
You are developing an application that uses Amazon DynamoDB to store user activity data. The nature of the application requires that when a user performs an action, the most recent data must be shown immediately in their activity feed. Which type of read operation should you use to ensure the most up-to-date data is always retrieved?
Write-through caching read
Eventually consistent read
Strongly consistent read
Read-through caching read
Answer Description
To ensure the most up-to-date data is retrieved, a strongly consistent read should be used. Strongly consistent reads return a response with the most recent data, reflecting all writes that received a successful response before the read. An eventually consistent read might return stale data, because the read can occur before recent write operations have fully propagated to all copies of the data. Write-through and read-through are caching strategies, not consistency models, and so do not directly determine the consistency of database read operations.
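A minimal boto3 sketch of requesting a strongly consistent read (the table and key names are hypothetical):

```python
import boto3

table = boto3.resource("dynamodb").Table("UserActivity")  # hypothetical table name

# ConsistentRead=True forces a strongly consistent read; the default is eventually consistent.
response = table.get_item(
    Key={"UserId": "user-123"},
    ConsistentRead=True,
)
item = response.get("Item")
```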
A developer needs to ensure that immediate alerts are sent out when a serverless function's usage nears the upper bounds of its allowed parallel executions, potentially impacting the application's availability. Which approach should the developer employ for real-time notification?
Schedule regular audits of usage patterns through log analysis and manually trigger notifications when limits are approached.
Incorporate error handling in the relevant code to catch execution limit errors and send emails via a mail delivery service.
Set up a configuration recorder to monitor changes to the function's configuration and alert on significant modifications.
Create an alarm based on the 'ConcurrentExecutions' metric that triggers an alert to a messaging topic when nearing the execution limit.
Answer Description
The correct approach is to create an alarm that monitors the service's 'ConcurrentExecutions' metric, signaling an impending breach of the allowable parallel execution threshold. The creation of such an alarm can be configured to send a notification to a designated topic within a notification service, which dispatches the alert to the developer. This is a proactive and automated way to monitor for critical thresholds and ensure the developer is promptly notified. The other approaches either do not provide real-time alerts, are not designed for this specific monitoring purpose, or involve manual processes that would not be immediate.
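A possible implementation of this alarm with boto3 might look like the sketch below; the threshold, the assumed 1,000-execution limit, and the SNS topic ARN are all hypothetical:

```python
import boto3

cw = boto3.client("cloudwatch")

# Alert when concurrent Lambda executions approach an assumed limit of 1,000.
cw.put_metric_alarm(
    AlarmName="lambda-concurrency-near-limit",
    Namespace="AWS/Lambda",
    MetricName="ConcurrentExecutions",
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=900,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    # The SNS topic fans the alert out to email, SMS, or chat integrations.
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:lambda-alerts"],  # hypothetical topic ARN
)
```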
Which service is designed to automate the phases of your project's release process, such as build, test, and deployment, whenever there is a change in code?
CodePipeline
EC2
Lambda
CodeBuild
Answer Description
The correct answer is CodePipeline. It automates the release process, orchestrating the workflow from source code through build, test, and deployment, and it lets you model complex release workflows that integrate with other services for building and deploying. CodeBuild focuses on compiling source code and running tests and does not provide a full pipeline model. Lambda lets you run code without managing servers but is not a CI/CD orchestration service on its own. EC2 primarily provides scalable compute capacity and is not tailored for CI/CD.
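A brief sketch of driving an existing pipeline from boto3 (the pipeline name is hypothetical):

```python
import boto3

cp = boto3.client("codepipeline")

# Kick off a run of an existing pipeline...
cp.start_pipeline_execution(name="web-app-release")

# ...and inspect the state of each stage (Source, Build, Deploy, etc.).
for stage in cp.get_pipeline_state(name="web-app-release")["stageStates"]:
    print(stage["stageName"], stage.get("latestExecution", {}).get("status"))
```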
Your team is enhancing the security of your company's application by moving sensitive environment configurations out of the codebase. They need a solution that allows easy updates to these settings as business requirements evolve, without necessitating a new deployment of the application stack. Which service should they implement for storing and retrieving these settings while ensuring sensitive information is encrypted and access is controlled?
Simple Storage Service (S3) with custom encryption
Function-as-a-Service environment variables
Elastic Compute Cloud (EC2) instance metadata
Systems Manager Parameter Store
Answer Description
Parameter Store, part of Systems Manager, is the appropriate service for managing configuration data and secrets. It stores parameters as plain text or as encrypted SecureString values, making it suitable for handling sensitive information. It supports version tracking and notifications on parameter changes, and parameters can be retrieved dynamically at runtime. The service also provides granular, IAM-based permissions for controlling access and integrates with KMS for encryption, ensuring a high security standard without the need to redeploy applications when configurations change.
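A minimal sketch of storing and reading an encrypted parameter with boto3 (the parameter name and value are placeholders):

```python
import boto3

ssm = boto3.client("ssm")

# Store a secret as an encrypted SecureString (uses the account's default KMS key here).
ssm.put_parameter(
    Name="/myapp/prod/db_password",  # hypothetical parameter name
    Value="s3cr3t-value",
    Type="SecureString",
    Overwrite=True,
)

# Retrieve and decrypt it at runtime -- no redeployment needed when the value changes.
password = ssm.get_parameter(
    Name="/myapp/prod/db_password",
    WithDecryption=True,
)["Parameter"]["Value"]
```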
In the context of AWS, what is an accepted method for securing applications by confirming the identity of a client making requests to a server?
Using one-time passwords at each login attempt
Implementing SSL/TLS for the transmission of sensitive data
Assigning specific Amazon Resource Names (ARNs) to clients
Using JSON Web Tokens (JWT) in the authorization header of HTTP requests
Answer Description
Bearer tokens, such as JSON Web Tokens (JWT), are a means to ensure that the party making a request to a server is the one it claims to be. When a user authenticates, they receive a token that they include in the header of their HTTP requests. The server validates this token before proceeding with the request, thus securing the application by authenticating the client. Other options listed do not perform the role of bearer tokens because SSL/TLS is used for secure transmission, one-time passwords are typically used for multi-factor authentication, and Amazon Resource Names (ARNs) are used to identify AWS resources, not for securing application transactions between clients and servers.
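As an illustrative sketch only, the flow might look like this in Python, assuming the PyJWT and requests libraries; the signing key, claims, and URL are placeholders:

```python
import time
import jwt        # PyJWT
import requests

SECRET = "shared-signing-key"  # placeholder; real systems use a managed key or asymmetric keys

# Issued by the authentication server after the user logs in.
token = jwt.encode(
    {"sub": "user-123", "exp": int(time.time()) + 3600},
    SECRET,
    algorithm="HS256",
)

# The client presents the token as a bearer credential on every request.
resp = requests.get(
    "https://api.example.com/orders",  # placeholder URL
    headers={"Authorization": f"Bearer {token}"},
)

# The server validates the signature and expiry before trusting the caller's identity.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
```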
Using exponential backoff alone is more effective in handling retries in a distributed system than combining exponential backoff with jitter for managing transient failures.
False
True
Answer Description
The statement is false. While exponential backoff is a strategy that progressively increases the wait time between consecutive retry attempts, thus reducing the load on the server and giving it time to recover, it might still lead to the so-called 'retry-storm' if many clients are trying to reconnect at the same intervals. By adding jitter, or randomization to the wait times, the retries are more evenly spread out, which further decreases the likelihood of overwhelming the system. This combined approach reduces the chance of synchronized retries across multiple instances and is generally acknowledged as a best practice for fault-tolerant design within distributed systems.
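A small Python sketch of exponential backoff with full jitter (the retry parameters are arbitrary defaults, not AWS-prescribed values):

```python
import random
import time

def call_with_backoff(operation, max_attempts=5, base=0.1, cap=5.0):
    """Retry a call that may fail transiently, using exponential backoff with full jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Full jitter: sleep a random amount between 0 and the capped exponential delay,
            # so many clients retrying at once do not synchronize into a retry storm.
            time.sleep(random.uniform(0, min(cap, base * (2 ** attempt))))
```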
Your company has an application that uploads processed data to an S3 bucket. You need to design a system where an AWS Lambda function is invoked for each new object created in the bucket to perform additional processing. What combination of steps should be taken to ensure the proper integration between S3 and the Lambda function?
Apply a bucket policy to the S3 bucket to allow the Lambda function to be invoked when a new object is created.
Create an S3 lifecycle policy that invokes the Lambda function upon new object upload.
Configure an event notification on the S3 bucket to invoke the Lambda function when an object is created.
Enable Cross-Origin Resource Sharing (CORS) on the S3 bucket to trigger the Lambda function.
Answer Description
To trigger an AWS Lambda function on S3 object creation, an event notification should be configured on the specific S3 bucket. Each event notification specifies an event type (such as 's3:ObjectCreated:*') and a destination for the event, which in this case is the Lambda function's ARN (Amazon Resource Name). Thus, the correct answer is 'Configure an event notification on the S3 bucket to invoke the Lambda function when an object is created'. Bucket policies and CORS configurations on the S3 bucket do not control the invocation of Lambda functions based on object-creation events, and an S3 lifecycle policy is used for object management (such as expiration or transition), not for invoking Lambda functions in response to events.
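A possible boto3 sketch of wiring this up; the bucket name and function ARN are hypothetical, and note that S3 also needs resource-based permission to invoke the function:

```python
import boto3

s3 = boto3.client("s3")
lam = boto3.client("lambda")

bucket = "processed-data-bucket"  # hypothetical bucket name
function_arn = "arn:aws:lambda:us-east-1:123456789012:function:process-object"  # hypothetical ARN

# Allow S3 to invoke the function (resource-based policy on the Lambda side).
lam.add_permission(
    FunctionName=function_arn,
    StatementId="allow-s3-invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn=f"arn:aws:s3:::{bucket}",
)

# Register the event notification for object-created events.
s3.put_bucket_notification_configuration(
    Bucket=bucket,
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {"LambdaFunctionArn": function_arn, "Events": ["s3:ObjectCreated:*"]}
        ]
    },
)
```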
A front-end development team requires data from a service that is yet to be implemented. As the lead on inter-service communications, you need to configure your gateway to provide static responses to continue front-end development. Which technique would allow you to achieve this without relying on the service logic itself?
Setting up a mock integration directly on the gateway to mimic expected responses.
Applying request validation on the incoming requests to the interface.
Enabling cross-origin resource sharing (CORS) at the interface.
Implementing a temporary version of the service logic that produces static data.
Answer Description
A mock integration directly on the gateway allows you to simulate the business logic response by returning preconfigured static data for a given endpoint. This bypasses the need for any actual backend service to be operational or implemented, enabling front-end teams to work against a predictable interface. On the other hand, deploying premature service logic just to produce static data introduces unnecessary overhead and complexity. CORS is related to cross-origin resource sharing, which isn't relevant to simulating service logic. Request validation is concerned with the structure of incoming requests, not the service responses.
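A rough boto3 sketch of configuring a mock integration on an API Gateway REST API method; the API and resource IDs are placeholders, and a matching 200 method response is assumed to already exist on the method:

```python
import boto3

apigw = boto3.client("apigateway")

rest_api_id, resource_id = "a1b2c3d4e5", "xyz123"  # placeholders for an existing REST API resource

# A MOCK integration returns a canned response without calling any backend service.
apigw.put_integration(
    restApiId=rest_api_id,
    resourceId=resource_id,
    httpMethod="GET",
    type="MOCK",
    requestTemplates={"application/json": '{"statusCode": 200}'},
)

# Define the static payload the front-end team will receive.
apigw.put_integration_response(
    restApiId=rest_api_id,
    resourceId=resource_id,
    httpMethod="GET",
    statusCode="200",
    responseTemplates={"application/json": '{"items": [{"id": 1, "name": "sample"}]}'},
)
```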
An organization's development team is preparing to roll out a serverless application that utilizes multiple cloud resources, including object storage, a NoSQL database, and serverless compute functions. The application must be able to read and write data to specific storage buckets and database tables. To comply with best security practices, how should you provision access for this application?
Construct a custom security profile for the application, restricting permissions exclusively to the operations required on designated storage buckets and database tables.
Deactivate explicit permission policies and deploy network-based controls to govern access to the necessary service resources.
Employ the root user's credentials for the application to ensure uninterrupted service access without having to manage multiple permission sets.
Generate an access key and secret key combination for the application, granting full management capabilities for all services to avoid potential disruptions.
Answer Description
Based on the principle of least privilege, the correct approach is to create a custom security profile (implemented in AWS as an IAM role) for the application with the minimal set of permissions needed: those that allow reading from and writing to the specified storage buckets and database tables only. Granting full management capabilities across all services, or using the root user's credentials, would violate this principle by providing unnecessarily broad permissions and would introduce significant security risks. Relying solely on network-based access controls is also incorrect, because network controls do not manage resource-level permissions the way IAM roles and policies do.
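As an illustration of scoping permissions this narrowly, a least-privilege policy might look like the sketch below (the ARNs, names, and chosen actions are hypothetical); the resulting policy would then be attached to the IAM role the application assumes:

```python
import json
import boto3

iam = boto3.client("iam")

# Permissions are restricted to one bucket and one table, and only to the required actions.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::app-data-bucket/*",
        },
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/AppTable",
        },
    ],
}

iam.create_policy(
    PolicyName="app-least-privilege",
    PolicyDocument=json.dumps(policy_document),
)
```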
An application developer is tasked with enabling authentication through social media logins for a new mobile app and subsequently permitting access to specific cloud storage and database services on behalf of the authenticated user. Which Amazon Cognito feature should the developer leverage to achieve these requirements?
External identity provider integration with a token broker service
Directory services with custom application-level federation logic
Amazon Cognito User Pools
Synchronization service for user data and preferences
Amazon Cognito Identity Pools
IAM roles with trust relationships
Answer Description
The developer should leverage Amazon Cognito Identity Pools to meet the requirements. Identity Pools provide federated authentication, enabling users to authenticate with social identity providers, and then obtain temporary permissions to access cloud services directly. This is different from User Pools, which offer user directory functionality and authentication without direct federation into service-level access.
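A minimal sketch of the identity-pool flow with boto3; the identity pool ID and the social provider token are placeholders:

```python
import boto3

cognito = boto3.client("cognito-identity")

POOL_ID = "us-east-1:11111111-2222-3333-4444-555555555555"  # placeholder identity pool ID
LOGINS = {"graph.facebook.com": "<facebook-access-token>"}   # token from the social login

# Exchange the social provider token for an identity in the pool.
identity = cognito.get_id(IdentityPoolId=POOL_ID, Logins=LOGINS)

# Obtain temporary AWS credentials scoped by the pool's authenticated IAM role.
creds = cognito.get_credentials_for_identity(
    IdentityId=identity["IdentityId"],
    Logins=LOGINS,
)["Credentials"]

# These temporary credentials can now sign requests to S3, DynamoDB, and other services.
```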