AWS Certified Developer Associate Practice Test (DVA-C02)
Use the form below to configure your AWS Certified Developer Associate Practice Test (DVA-C02). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

AWS Certified Developer Associate DVA-C02 Information
AWS Certified Developer - Associate showcases knowledge and understanding of core AWS services, uses, and basic AWS architecture best practices, and proficiency in developing, deploying, and debugging cloud-based applications by using AWS. Preparing for and attaining this certification gives certified individuals more confidence and credibility. Organizations with AWS Certified developers have the assurance of having the right talent to give them a competitive advantage and ensure stakeholder and customer satisfaction.
The AWS Certified Developer - Associate (DVA-C02) exam is intended for individuals who perform a developer role. The exam validates a candidate’s ability to demonstrate proficiency in developing, testing, deploying, and debugging AWS Cloud-based applications. The exam also validates a candidate’s ability to complete the following tasks:
- Develop and optimize applications on AWS.
- Package and deploy by using continuous integration and continuous delivery (CI/CD) workflows.
- Secure application code and data.
- Identify and resolve application issues.
Free AWS Certified Developer Associate DVA-C02 Practice Test
- Questions: 15
- Time: Unlimited
- Included Topics: Development with AWS Services, Security, Deployment, Troubleshooting and Optimization
An enterprise has mandated that their cloud-hosted applications authenticate users from the on-premises directory service without duplicating sensitive credentials. Which approach should be employed to meet this requirement while leveraging the organization's existing user directory?
- Generate temporary access credentials for users via a token service to authenticate against the on-premises directory service.
- Integrate the application through federation using SAML 2.0 with the organization's existing identity management system.
- Migrate the on-premises directory service users to a cloud directory service with User Pools.
- Implement application-side user authentication controls using the Access Control List (ACL) feature of a cloud directory service.
Answer Description
The correct approach is to integrate the cloud application with the on-premises directory service using a federation protocol such as SAML 2.0. IAM supports SAML 2.0 federation, which allows users to authenticate with their existing corporate credentials without those credentials ever being stored in the cloud. Amazon Cognito also supports federation, but IAM federation with SAML is designed to work directly with corporate identity providers such as Active Directory Federation Services, making it the better-suited choice for this use case. The other options do not address the requirement of federating with an existing on-premises directory service.
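For illustration, here is a minimal sketch (using the AWS SDK for Python) of how an application could exchange a SAML assertion from the corporate identity provider for temporary AWS credentials; the role ARN, provider ARN, and assertion value are placeholders, not values from the scenario.

```python
# Hypothetical SAML federation flow: the IdP returns a SAML assertion after the
# user signs in with corporate credentials, and STS exchanges it for temporary keys.
import boto3

saml_assertion_b64 = "<base64-encoded SAML response returned by the IdP after sign-in>"  # placeholder

sts = boto3.client("sts")
response = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::111122223333:role/CorporateAppRole",       # IAM role mapped to federated users (example)
    PrincipalArn="arn:aws:iam::111122223333:saml-provider/CorpIdP",  # SAML provider registered in IAM (example)
    SAMLAssertion=saml_assertion_b64,
)

# Use the temporary credentials; nothing about the corporate password is stored in AWS.
creds = response["Credentials"]
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```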
In a CI/CD workflow, which of the following best describes the purpose of using branches?
- To duplicate an existing environment, but locking it for read-only operations.
- To enable parallel development on different features or components within the same project.
- To queue up deployment tasks so they run one after another.
- To combine code changes directly to the production environment.
Answer Description
Branches are used to create separate lines of development in a project, enabling multiple developers to work on different features or components simultaneously without interfering with each other's work. The use of branches allows for a controlled way to integrate changes from different sources into the main codebase, typically through mechanisms such as pull requests or merge requests that facilitate code reviews and automated testing before the changes are incorporated into the main branch (often called 'master' or 'main').
A software development team is utilizing AWS CodeCommit for their project's source control and wants to update the remote repository with the latest changes committed locally. Which command should the developers execute to synchronize their local repository updates with the remote AWS CodeCommit repository?
- git update
- git commit
- git clone
- git push
Answer Description
The correct answer is git push. This command is used to upload local repository content to a remote repository and is the standard way to share changes with other team members and keep repositories in sync. git commit only commits changes to the local repository and does not interact with remote repositories. git update is not a standard Git command. git clone is used to copy a repository from a remote source, not to push changes to it.
A developer is configuring a Lambda function to access resources in a separate AWS account. To follow best security practices, the developer needs to grant the Lambda function the necessary permissions. What should the developer use to accomplish this?
- Store the target account user's credentials in Lambda environment variables and use them to access resources.
- Attach an inline policy to the Lambda function's execution role granting access to the target account resources.
- Use the Lambda function's own execution role directly to access resources in the target account without assuming any roles.
- Create an IAM role in the target account that the Lambda function can assume, with the necessary permissions attached.
Answer Description
Cross-account access can be achieved by assuming an IAM role. The IAM role should be created in the target AWS account with the necessary permissions and a trust policy that allows the Lambda function's account to assume the role. Using a user's credentials directly is discouraged and goes against the principle of least privilege, as it can provide unnecessary and broad access. Inline policies are not the correct approach for cross-account access and Lambda environment variables are used for configuration and not for setting up cross-account permissions.
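As a rough sketch of this pattern, the handler below assumes a role in the target account and uses the returned temporary credentials; the account ID, role name, and the DynamoDB call are illustrative assumptions. The function's execution role would also need permission to call sts:AssumeRole on that role.

```python
# Minimal cross-account sketch: assume a role in the target account, then use the
# temporary credentials to call a service there. ARNs and names are placeholders.
import boto3

def lambda_handler(event, context):
    sts = boto3.client("sts")
    assumed = sts.assume_role(
        RoleArn="arn:aws:iam::999988887777:role/TargetAccountAccessRole",  # role in the target account (example)
        RoleSessionName="cross-account-session",
    )
    creds = assumed["Credentials"]

    # Client scoped to the target account via the temporary credentials.
    dynamodb = boto3.client(
        "dynamodb",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    return {"tables": dynamodb.list_tables().get("TableNames", [])}
```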
Your application is hosted on AWS and experiences intermittent failures. You need to implement a logging strategy to capture and store relevant application logs that will assist you in troubleshooting the issue. Given the requirement for minimal performance impact on your application and the ability to analyze the logs within minutes of the logged events, which logging approach should you employ?
- Use Amazon DynamoDB to store log data as soon as it is generated by the application.
- Implement log streaming directly to Amazon Kinesis Data Firehose for immediate storage and analysis.
- Configure the application to use the Amazon CloudWatch Logs agent to send logs to CloudWatch for storage and analysis.
- Manually push application logs to an Amazon S3 bucket for storage and eventual analysis.
Answer Description
Amazon CloudWatch Logs provides near real-time log collection and monitoring. The CloudWatch Logs agent batches and transmits logs efficiently, so it can ship application logs to CloudWatch with minimal performance impact on the application. CloudWatch also lets you query log data within minutes of ingestion using features such as CloudWatch Logs Insights, which is essential for troubleshooting issues that require a quick response. Manually pushing logs to Amazon S3 would not provide timely analysis and would require custom tooling for efficient log ingestion, which could itself affect application performance. Amazon DynamoDB is not designed as a logging service; streaming log records into a database would incur high write-throughput costs and is not optimized for log data. Pushing logs directly to Amazon Kinesis Data Firehose adds cost and complexity for this use case compared with CloudWatch Logs.
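As a minimal sketch of the analysis side, assuming the agent is already shipping logs into a CloudWatch Logs log group, the snippet below runs a CloudWatch Logs Insights query through the AWS SDK for Python; the log group name and query string are examples.

```python
# Query recently ingested application logs for errors with CloudWatch Logs Insights.
import time
import boto3

logs = boto3.client("logs")

query_id = logs.start_query(
    logGroupName="/my-app/application",   # example log group written by the CloudWatch agent
    startTime=int(time.time()) - 900,     # last 15 minutes
    endTime=int(time.time()),
    queryString="fields @timestamp, @message | filter @message like /ERROR/ | sort @timestamp desc | limit 20",
)["queryId"]

# Poll until the query finishes, then print the matching log events.
while True:
    result = logs.get_query_results(queryId=query_id)
    if result["status"] in ("Complete", "Failed", "Cancelled", "Timeout"):
        break
    time.sleep(1)

for row in result.get("results", []):
    print({field["field"]: field["value"] for field in row})
```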
AWS CodeCommit allows developers to restrict who can commit directly to the master branch to enforce code reviews and maintain code quality.
- True
- False
Answer Description
AWS CodeCommit lets repository administrators restrict who can push directly to specific branches, such as the master (or main) branch, by attaching IAM policies that use the codecommit:References condition key, and it can enforce reviews through approval rule templates on pull requests. With these controls in place, organizations can ensure that changes reach protected branches only through pull requests that have received the required level of scrutiny. Limiting direct commits to certain branches is a common practice to protect the stability of production code and enforce a workflow that includes pull requests and code reviews.
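As a sketch of one documented way to enforce this, the policy below (modeled on the example in the CodeCommit documentation) denies direct pushes and merges to the main and master branches; the repository ARN, Region, account ID, and group name are placeholders.

```python
# Deny direct pushes/merges to protected branches for members of a developer group.
import json
import boto3

deny_direct_push = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "codecommit:GitPush",
                "codecommit:DeleteBranch",
                "codecommit:PutFile",
                "codecommit:MergeBranchesByFastForward",
                "codecommit:MergeBranchesBySquash",
                "codecommit:MergeBranchesByThreeWay",
            ],
            "Resource": "arn:aws:codecommit:us-east-1:111122223333:MyDemoRepo",  # example repository ARN
            "Condition": {
                # Applies only when the pushed reference is one of the protected branches.
                "StringEqualsIfExists": {
                    "codecommit:References": ["refs/heads/main", "refs/heads/master"]
                },
                # Ensures the deny still applies when a branch reference is present in the request.
                "Null": {"codecommit:References": "false"},
            },
        }
    ],
}

# Attach the policy to the developer group so its members cannot push directly to those branches.
boto3.client("iam").put_group_policy(
    GroupName="Developers",  # example group
    PolicyName="DenyDirectPushToProtectedBranch",
    PolicyDocument=json.dumps(deny_direct_push),
)
```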
A development team is monitoring a serverless function that handles data processing tasks triggered by a NoSQL database stream. They are encountering issues with failed executions and need a way to quantify these incidents due to client-related mistakes. Which approach would BEST assist them in identifying and quantifying these specific issues over time for the purpose of application fine-tuning?
- Implement a custom metric recorded directly from the function whenever a client error is encountered during the data processing
- Incorporate a distributed tracing system to tag and review instances of client errors in function executions
- Expand log entries to include detailed error messages for each execution and periodically review for client errors
- Increase the function's monitoring level to capture all execution paths and errors
Answer Description
Creating and emitting a custom metric when the serverless function encounters client-related errors allows the team to track the exact number of such incidents. This is because custom metrics can be tailored to the application's unique operational requirements, thus offering the most insightful data for this particular situation. Options like enabling detailed monitoring or reviewing logs would provide broader information that might not be directly related to the client-related errors. Similarly, while distributed tracing systems could give further insights into the execution path, they are generally not as suitable for aggregating and analyzing event counts over time specifically for optimization purposes.
What AWS service allows you to decouple microservices, distributed systems, and serverless applications by using a managed message queue?
- Amazon MQ
- Amazon Simple Queue Service (Amazon SQS)
- Amazon Kinesis
- Amazon Simple Notification Service (Amazon SNS)
Answer Description
Amazon Simple Queue Service (Amazon SQS) is a managed message queue service that enables you to decouple and scale microservices, distributed systems, and serverless applications. By providing a messaging queue, it allows components in a system to communicate without being directly connected, thereby enhancing the fault tolerance and scalability of the application. Amazon SNS, on the other hand, is a publish/subscribe service, not a queue service. Amazon Kinesis is used primarily for real-time data streaming, not message queuing. Amazon MQ is a managed message broker service but not a native AWS messaging queue service.
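A minimal producer/consumer sketch with the AWS SDK for Python, assuming an existing queue; the queue URL and message body are placeholders.

```python
# Decoupled communication through SQS: the producer and consumer never talk directly.
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/orders-queue"  # example queue URL

# Producer: drop a message on the queue and move on.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"orderId": "1234", "status": "created"}')

# Consumer: a separate component polls the queue at its own pace.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=5, WaitTimeSeconds=10)
for message in messages.get("Messages", []):
    print("processing", message["Body"])
    # Delete only after successful processing so failed messages are retried.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```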
A developer has stored several configuration parameters required by an application in AWS Systems Manager Parameter Store. The application will retrieve these parameters at runtime without hardcoding them into the application code. When deploying an AWS Lambda function that needs access to these parameters, which IAM policy action must be included to allow the function to retrieve the parameters?
- ssm:PutParameter
- ssm:GetParameters
- ec2:DescribeInstances
- ssm:DescribeParameters
Answer Description
For an AWS Lambda function to retrieve parameters from AWS Systems Manager Parameter Store, it requires the ssm:GetParameters IAM policy action. This action grants the function the necessary permissions to read the parameters at runtime. Other actions, such as ssm:PutParameter or ssm:DescribeParameters, are not required for retrieval and are used for different operations. ec2:DescribeInstances is unrelated to accessing parameters and would not grant access to AWS Systems Manager Parameter Store.
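For illustration, a short sketch of a Lambda handler reading parameters at runtime with the AWS SDK for Python; the parameter names are assumptions, and the execution role would need ssm:GetParameters on them (plus kms:Decrypt for SecureString values).

```python
# Load configuration from Parameter Store at runtime instead of hardcoding it.
import boto3

ssm = boto3.client("ssm")

def lambda_handler(event, context):
    response = ssm.get_parameters(
        Names=["/myapp/db_host", "/myapp/db_password"],  # example parameter names
        WithDecryption=True,                             # decrypt SecureString parameters
    )
    config = {p["Name"]: p["Value"] for p in response["Parameters"]}
    # ... use config["/myapp/db_host"] etc. to initialize the application ...
    return {"loaded": sorted(config)}
```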
A development team is utilizing a CI/CD service provided by AWS to streamline their application deployment. They have automated their release process but require a senior developer's sign-off before the application is pushed to the live environment. Which step should be taken to guarantee that the senior developer is notified to review and approve the release after the automated tests pass?
- Implement an approval action linked to a notification mechanism using an Amazon SNS topic which the senior developer is subscribed to for email alerts.
- Configure a monitoring service to send an alert to the senior developer when the automated testing phase succeeds.
- Develop a function within a serverless compute service to dispatch emails after validation checks have been passed.
- Arrange for a service hook to issue an email notice to the senior developer upon the conclusion of the verification procedures.
Answer Description
The correct configuration is to add a manual approval action to the pipeline and associate it with an Amazon SNS topic; the senior developer subscribes to that topic and receives an email notification when the approval action is reached after the automated tests pass. The incorrect answers rely on mechanisms that do not fit the requirement: Amazon CloudWatch is primarily for monitoring and alarms, AWS Lambda can run custom automation but is not the mechanism CodePipeline provides for approval notifications, and webhooks are generally used to trigger external systems rather than to notify a reviewer that an approval is pending.
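As a rough sketch under these assumptions, the snippet below subscribes the reviewer's email address to an SNS topic and shows what a manual approval action tied to that topic could look like inside a pipeline stage definition; the topic ARN, names, and email address are placeholders.

```python
# Subscribe the senior developer to the approvals topic so the request arrives by email.
import boto3

boto3.client("sns").subscribe(
    TopicArn="arn:aws:sns:us-east-1:111122223333:release-approvals",  # example topic ARN
    Protocol="email",
    Endpoint="senior.dev@example.com",                                # placeholder address
)

# Illustrative fragment of a pipeline stage containing a manual approval action
# that publishes to that topic when the pipeline reaches this stage.
approval_stage = {
    "name": "ProductionApproval",
    "actions": [
        {
            "name": "SeniorDevSignOff",
            "actionTypeId": {"category": "Approval", "owner": "AWS", "provider": "Manual", "version": "1"},
            "configuration": {
                "NotificationArn": "arn:aws:sns:us-east-1:111122223333:release-approvals",
                "CustomData": "Automated tests passed; please review and approve the release.",
            },
            "runOrder": 1,
        }
    ],
}
```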
A software development team wants to establish a seamless procedure that automatically compiles, verifies, and deploys their application to a testing arena upon every source code submission. Following successful validation, the system should then proceed to position the code in a pre-production zone. Which service should the team choose to facilitate this process of automating the progression between multiple deployment stages?
- CodeBuild
- CodeCommit
- CodePipeline
- CloudFormation
Answer Description
The development team should use AWS CodePipeline, the service designed to model, visualize, and automate the release process. A pipeline defines stages such as source, build, validation, and deployment into different environments, and it integrates with other services for building and deployment tasks (for example, CodeCommit for source control and CodeBuild for compiling and testing). CodeCommit and CodeBuild each handle only part of the workflow, and CloudFormation provisions infrastructure; only CodePipeline orchestrates the flow between environments, ensuring a seamless transition of application code from commit to production readiness.
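As a rough outline only (a complete pipeline definition would also declare the actions inside each stage), the progression such a pipeline models might look like the following; all names are placeholders.

```python
# Illustrative outline of the stage progression a release pipeline could model.
# Not a deployable pipeline definition: each stage would also need its actions
# (for example, a CodeBuild action in the build stage).
pipeline_outline = {
    "name": "app-release-pipeline",
    "stages": [
        {"name": "Source"},           # pick up every new source code submission
        {"name": "BuildAndVerify"},   # compile the application and run automated checks
        {"name": "DeployToTest"},     # release the build into the testing environment
        {"name": "DeployToPreProd"},  # promote validated builds to the pre-production zone
    ],
}
```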
Your company is developing a serverless application with an AWS Lambda function that requires multiple development and testing environments. Which AWS feature allows you to point to different versions of a Lambda function for different integration testing environments without modifying the function's ARN?
- Environment Variables in Lambda
- Stage Variables in API Gateway
- Lambda Versions
- Lambda Aliases
Answer Description
AWS Lambda aliases let you route traffic to different versions of a Lambda function. An alias is a pointer that can be updated to reference a different function version as needed. This is useful for integration testing because you can maintain separate testing environments while callers keep using the same, unchanged ARN, ensuring the stability and consistency of your application during the testing phase. Lambda versions and API Gateway stage variables are incorrect choices because, while they can reference different configuration settings or function versions, they do not provide a stable pointer that can be redirected across environments without altering the ARN.
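A brief sketch of this pattern with the AWS SDK for Python: two aliases on the same function point at different published versions, and callers invoke the alias so the ARN they use never changes. The function name, alias names, and version numbers are examples, and the aliases are assumed to already exist.

```python
# Point per-environment aliases at specific published versions of one function.
import boto3

lam = boto3.client("lambda")

# The "test" alias tracks the version under integration testing ...
lam.update_alias(FunctionName="image-processor", Name="test", FunctionVersion="7")
# ... while the "prod" alias keeps serving the stable version.
lam.update_alias(FunctionName="image-processor", Name="prod", FunctionVersion="5")

# Callers invoke the alias, so their ARN stays the same when versions are promoted.
lam.invoke(FunctionName="image-processor", Qualifier="test", Payload=b"{}")
```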
A development team is working on an AWS Lambda function designed to process incoming image files from an Amazon S3 bucket. The team wants to track the number of images processed per minute as a custom metric. Which code implementation will effectively allow the team to send this custom metric to Amazon CloudWatch?
- The Lambda function should call the putMetricData operation of the CloudWatch client with a namespace and metric data for images processed each minute.
- The Lambda function should store the count of images processed in an Amazon DynamoDB table, which acts as a custom metric data source for CloudWatch.
- The Lambda function should use an Amazon Kinesis stream to capture the count of processed images, and then CloudWatch collects the metric from the stream.
- The Lambda function should write the count of processed images to the application logs, and CloudWatch parses the logs to extract the metric.
Answer Description
To emit a custom metric to Amazon CloudWatch from an AWS Lambda function, developers use the PutMetricData API action provided by the Amazon CloudWatch service. With the AWS SDK, they build a metric data structure that includes the namespace, metric name, value, and a timestamp, and the putMetricData operation then publishes the custom metric to CloudWatch. The other answers describe methods that do not align with how CloudWatch custom metrics are emitted from a Lambda function.
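A minimal sketch of such a handler using the AWS SDK for Python; the namespace, metric name, and event shape are assumptions rather than values given in the scenario.

```python
# Publish a custom metric from the image-processing handler after each batch.
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")

def lambda_handler(event, context):
    images_processed = len(event.get("Records", []))  # e.g. one record per S3 object notification
    # ... process each image here ...

    cloudwatch.put_metric_data(
        Namespace="ImagePipeline",  # custom namespace (example)
        MetricData=[
            {
                "MetricName": "ImagesProcessed",
                "Timestamp": datetime.datetime.now(datetime.timezone.utc),
                "Value": images_processed,
                "Unit": "Count",
            }
        ],
    )
    return {"processed": images_processed}
```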
A software company utilizes AWS Lambda for deploying a mission-critical application. In their upcoming release, they plan to incorporate a canary release strategy to introduce a new feature incrementally while mitigating risks. Assuming they already have a Lambda alias for their production environment, how should the company configure the alias to slowly route a small percentage of user traffic to the new feature while the majority still accesses the stable version?
- Deploy the new version as a separate Lambda function without an alias and manually invoke the new function to represent a percentage of total traffic.
- Configure the Lambda alias to immediately redirect 100% of traffic to the new version to test the new feature in live conditions.
- Deploy the new feature as a new Lambda function and update the production alias configuration to point solely to the new function, relying on Lambda's inherent traffic shifting capabilities.
- Adjust the production alias to serve both the old and the new Lambda versions, and configure the alias routing with a small weight towards the new version, gradually increasing it based on the monitoring results.
Answer Description
The team should update their production alias to point to both the old and new versions of the Lambda function and then use version weights within the alias configuration to specify the percentage of traffic each version receives. This setup lets them control the traffic flow and incrementally increase the weight toward the new version as confidence in its stability grows. Assigning 100% of traffic to one version, updating the function code directly, or deploying without aliases does not provide the gradual traffic shifting required for a canary release strategy, so careful allocation of weights on the alias is the correct approach.
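For illustration, a short sketch of weighted alias routing with the AWS SDK for Python; the function name, alias name, and version numbers are placeholders.

```python
# Canary routing on an existing production alias: keep ~90% of invocations on
# version 5 and send ~10% to the new version 6.
import boto3

lam = boto3.client("lambda")

lam.update_alias(
    FunctionName="orders-api",
    Name="prod",                  # existing production alias
    FunctionVersion="5",          # stable version keeps the bulk of the traffic
    RoutingConfig={"AdditionalVersionWeights": {"6": 0.10}},  # 10% canary to version 6
)

# As monitoring stays healthy, raise the weight, and eventually promote version 6 fully
# by setting FunctionVersion="6" and clearing the additional version weights.
```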
Which service offers automated code reviews and actionable recommendations for improving code quality, along with identifying sections of code that might expose security vulnerabilities?
- CloudFormation
- CloudTrail
- Config
- CodeGuru
Answer Description
Amazon CodeGuru is the service designed to automate code reviews and provide actionable recommendations for improving code quality, including identifying sections of code that may expose security vulnerabilities. The other options serve different purposes: CloudFormation orchestrates resource creation, CloudTrail logs API activity across an account, and AWS Config tracks resource configurations and changes.