AWS Certified Developer Associate Practice Test (DVA-C02)
Use the form below to configure your AWS Certified Developer Associate Practice Test (DVA-C02). The practice test can be configured to only include certain exam objectives and domains. You can choose between 5-100 questions and set a time limit.

AWS Certified Developer Associate DVA-C02 Information
AWS Certified Developer - Associate showcases knowledge and understanding of core AWS services, uses, and basic AWS architecture best practices, and proficiency in developing, deploying, and debugging cloud-based applications by using AWS. Preparing for and attaining this certification gives certified individuals more confidence and credibility. Organizations with AWS Certified developers have the assurance of having the right talent to give them a competitive advantage and ensure stakeholder and customer satisfaction.
The AWS Certified Developer - Associate (DVA-C02) exam is intended for individuals who perform a developer role. The exam validates a candidate’s ability to demonstrate proficiency in developing, testing, deploying, and debugging AWS Cloud-based applications. The exam also validates a candidate’s ability to complete the following tasks:
- Develop and optimize applications on AWS.
- Package and deploy by using continuous integration and continuous delivery (CI/CD) workflows.
- Secure application code and data.
- Identify and resolve application issues.
Free AWS Certified Developer Associate DVA-C02 Practice Test
- Questions: 15
- Time: Unlimited
- Included Topics: Development with AWS Services, Security, Deployment, Troubleshooting and Optimization
Which service should a developer use to track the requests that travel through multiple components of a distributed system, helping to understand the application's behavior and identify bottlenecks?
- Inspector
- X-Ray
- CloudTrail
- CloudFormation
Answer Description
The correct answer is AWS X-Ray. X-Ray tracks requests through a distributed system, providing an end-to-end view of each request as it travels through the application, and is intended to help developers analyze and debug their systems. It builds a service map of the components an application uses, with detailed latency information for each one. The other options are not primarily used for application tracing: CloudTrail audits API use, Inspector performs security assessments, and CloudFormation manages infrastructure.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is AWS X-Ray and how does it work?
How does X-Ray differ from CloudTrail?
What are the benefits of using X-Ray for developers?
Which service is primarily used for the secure storage, management, and retrieval of credentials and other sensitive configuration details, while also supporting automatic rotation of these details?
- Key Management Service
- Systems Manager
- Secure Storage Service
- Secrets Manager
Answer Description
Secrets Manager is the service specifically created for securely managing secrets, such as database credentials and API keys, which includes capabilities like secrets rotation. Systems Manager provides broader configuration management but lacks the direct focus on secrets lifecycle management. The other options are either incorrect because they do not exist (Amazon Secure Storage Service) or are used for key management rather than secret rotation (Key Management Service).
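As an illustrative sketch, the AWS CLI exposes these Secrets Manager operations directly; the secret name below is hypothetical, and the rotation command assumes a rotation Lambda function has already been associated with the secret:

```shell
# Fetch the current version of a secret (the response includes SecretString)
aws secretsmanager get-secret-value --secret-id prod/app/db-credentials

# Schedule automatic rotation every 30 days
# (requires a rotation function already configured for this secret)
aws secretsmanager rotate-secret \
    --secret-id prod/app/db-credentials \
    --rotation-rules '{"AutomaticallyAfterDays": 30}'
```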
Ask Bash
What kind of secrets can be managed with Secrets Manager?
How does Secrets Manager automatically rotate credentials?
What is the difference between AWS Secrets Manager and Systems Manager?
A developer needs to ensure that immediate alerts are sent out when a serverless function's usage nears the upper bounds of its allowed parallel executions, potentially impacting the application's availability. Which approach should the developer employ for real-time notification?
- Set up a configuration recorder to monitor changes to the function's configuration and alert on significant modifications.
- Incorporate error handling in the relevant code to catch execution limit errors and send emails via a mail delivery service.
- Create an alarm based on the 'ConcurrentExecutions' metric that triggers an alert to a messaging topic when nearing the execution limit.
- Schedule regular audits of usage patterns through log analysis and manually trigger notifications when limits are approached.
Answer Description
The correct approach is to create an Amazon CloudWatch alarm on the Lambda 'ConcurrentExecutions' metric, signaling an impending breach of the allowed parallel-execution threshold. The alarm can be configured to publish to an Amazon SNS topic, which dispatches the alert to the developer. This is a proactive, automated way to monitor critical thresholds and ensure the developer is promptly notified. The other approaches either do not provide real-time alerts, are not designed for this specific monitoring purpose, or involve manual processes that would not be immediate.
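As a sketch of this setup in CloudFormation, with assumed values (the 900 threshold and the `AlertTopic` SNS topic are illustrative, not taken from the question):

```yaml
ConcurrencyAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Lambda concurrency is nearing the account limit
    Namespace: AWS/Lambda
    MetricName: ConcurrentExecutions
    Statistic: Maximum
    Period: 60
    EvaluationPeriods: 1
    Threshold: 900              # e.g. 90% of a 1,000 concurrent-execution limit
    ComparisonOperator: GreaterThanThreshold
    AlarmActions:
      - !Ref AlertTopic         # an AWS::SNS::Topic defined elsewhere in the template
```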
Ask Bash
What is the 'ConcurrentExecutions' metric in AWS Lambda?
How do I create an alarm in AWS for monitoring metrics?
What is AWS SNS and how does it work with alarms?
Which tool offered by Amazon Web Services can developers use to invoke and debug their serverless functions locally, simulating the cloud environment on their own machine?
- AWS CodePipeline
- AWS CodeDeploy
- AWS SAM CLI
- AWS SDKs
Answer Description
The correct tool for invoking and debugging serverless functions locally is AWS SAM CLI. It allows developers to test their Lambda functions in an environment similar to the actual AWS runtime. AWS CodePipeline and AWS CodeDeploy are used in deployment cycles and do not have the capability to run or test functions locally. The AWS SDKs are for interacting with AWS services in your application code, not for local invocation of functions.
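A typical local workflow with the SAM CLI looks like the following; the function's logical ID and event file are hypothetical:

```shell
# Build the application described by template.yaml
sam build

# Invoke one function locally in a Lambda-like container, with a sample event
sam local invoke ProcessOrders --event events/order.json

# Emulate API Gateway locally for functions with HTTP event sources
sam local start-api
```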
Ask Bash
What does SAM stand for in AWS SAM CLI?
How does AWS SAM CLI simulate the Lambda environment?
What are the main differences between AWS SAM CLI and AWS SDKs?
To comply with an inter-company collaboration, your team is required to configure a cloud storage resource enabling another organization to have read-only access to specific files. Your task is to determine how to accomplish this without granting unnecessary privileges or altering user management in the other organization. What is the most effective method to establish this level of access control?
- Update the storage resource's ACL to give ownership permissions to the external entity’s account identifier.
- Provision individual user accounts for the external entity within your identity management system, assigning full privileges over the storage resource.
- Deploy a series of temporary URLs for each file, allowing indefinite access to the resources without restriction.
- Create a bucket policy that allows read-only access to the specified files for the external entity's account identifier.
Answer Description
A bucket policy is the most suitable option for this scenario because it allows you to grant precise permissions, such as read-only access, to resources in your bucket to another account without creating individual user accounts or sharing security credentials. This method adheres to the principle of least privilege, ensuring you grant no more permissions than necessary. The alternatives either grant overly broad permissions, do not cater to cross-account access efficiently, or would require managing credentials directly, which is not advised.
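A minimal sketch of such a bucket policy follows; the bucket name, key prefix, and external account ID are all hypothetical:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPartnerReadOnly",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-shared-bucket/shared/*"
    }
  ]
}
```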
Ask Bash
What is a bucket policy in AWS?
What does 'read-only access' mean in the context of AWS S3?
What is the principle of least privilege and why is it important?
Your team is enhancing the security of your company's application by moving sensitive environment configurations out of the codebase. They need a solution that allows easy updates to these settings as business requirements evolve, without necessitating a new deployment of the application stack. Which service should they implement for storing and retrieving these settings while ensuring sensitive information is encrypted and access is controlled?
- Systems Manager Parameter Store
- Function-as-a-Service environment variables
- Elastic Compute Cloud (EC2) instance metadata
- Simple Storage Service (S3) with custom encryption
Answer Description
The Parameter Store, part of Systems Manager, is the appropriate service for managing configuration data and secrets. It allows storing of parameters as plain text or encrypted data, making it suitable for handling sensitive information. It supports version tracking and notifications on parameter changes, and can be accessed dynamically at runtime. This service also provides granular permissions via IAM for controlling access, and integrates with KMS for encryption, ensuring a high security standard without the need to redeploy applications when configurations change.
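As a sketch, storing and reading an encrypted parameter with the AWS CLI might look like this; the parameter path and value are hypothetical:

```shell
# Store a sensitive setting as a KMS-encrypted SecureString
aws ssm put-parameter \
    --name /myapp/prod/db-password \
    --type SecureString \
    --value 'example-password'

# Read it back, decrypted, at runtime or deploy time
aws ssm get-parameter --name /myapp/prod/db-password --with-decryption
```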
Ask Bash
What is Systems Manager Parameter Store?
How does Parameter Store ensure data security?
What are the benefits of using Parameter Store compared to other services?
An application you are developing is responsible for processing messages from a queue and executing an operation that relies on a third-party service, which is known for its occasional unpredictability. To improve your application's robustness against these intermittent service failures, which pattern should you implement in your operation logic?
- Implement a circuit breaker to immediately halt operations upon an error
- Retries with exponential backoff and jitter
- Attempt immediate recursive retries after a failed operation
- Increase read and write capacity to the queue during peak times
Answer Description
In the scenario given, a third-party service that is intermittently unavailable would benefit from implementing retries with exponential backoff and jitter. The exponential backoff method systematically increases the wait time between retries after a failed attempt to call the third-party service. Adding jitter reduces the probability that multiple instances of your application will attempt to retry at the same moment, avoiding simultaneous retries that can lead to the 'thundering herd' problem when the service recovers. This can be particularly useful for distributed systems that rely on external services to ensure they can continue to operate effectively during transient outages. Using a circuit breaker does address an unreliable service, but the question specifically asks for a pattern that involves retry logic. Increasing read and write capacity or immediate recursive retries do not address the underlying issue of the service's unpredictability and may worsen it by potentially overloading the service when it comes back online.
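The pattern can be sketched in a few lines of Python. This is a generic illustration of retries with exponential backoff and full jitter, not an AWS SDK API; `call_with_backoff` and its parameters are hypothetical names:

```python
import random
import time

def call_with_backoff(operation, max_attempts=5, base_delay=0.1, max_delay=5.0):
    """Retry a zero-argument callable with exponential backoff and full jitter.

    The callable is retried whenever it raises; the last failure is re-raised.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # Exponential backoff: the delay ceiling doubles each attempt,
            # capped at max_delay.
            ceiling = min(max_delay, base_delay * (2 ** attempt))
            # Full jitter: sleep a random duration in [0, ceiling) so many
            # clients do not retry in lockstep (the "thundering herd").
            time.sleep(random.uniform(0, ceiling))
```

The AWS SDKs apply a similar strategy internally for retryable service errors; this sketch shows the shape of the logic for calls to an arbitrary third-party service.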
Ask Bash
What is exponential backoff and how does it work?
What is jitter and why is it important in retry logic?
What is the difference between a circuit breaker and retry logic?
When refining the debugging process for a serverless function, you decide to enhance its trace data by adding relevant business-specific metadata. Which service would you use to achieve this by annotating the function's execution flow?
- Step Functions
- X-Ray
- CloudTrail
- CloudWatch
Answer Description
The service that enables developers to add detailed annotations to the execution flow of an application, including serverless functions, is X-Ray. By using X-Ray, you can insert metadata that provides additional context within trace segments, offering insights into the application's behavior and state changes. This level of observability is crucial for effective troubleshooting. The other services mentioned do not offer this annotation capability: CloudTrail focuses on recording AWS API calls, CloudWatch primarily deals with monitoring and metrics, and Step Functions orchestrate different serverless components rather than provide annotations for tracing.
Ask Bash
What is X-Ray and how does it work?
What types of metadata can I annotate with X-Ray?
How do X-Ray and CloudWatch differ in terms of monitoring?
A team of developers manages a web service endpoint interfacing with a mobile application. They need to roll out a new feature which adjusts the data format returned to clients. To prevent potential disruptions to the live mobile application, what strategy should they employ for introducing this new feature to end-users?
- Create a new, identically named environment and quickly switch all user traffic to this updated version to minimize downtime.
- Initiate a staged rollout by gradually increasing the new feature exposure to a fraction of live traffic before full deployment.
- Conduct thorough testing in an isolated clone of the production environment before updating the live service.
- Merge the changes directly into the production environment during low-traffic periods to evaluate real user interactions without notice.
Answer Description
A canary release approach involves directing a small amount of live traffic to the new feature to monitor its impact and performance before making it generally available to all users. This method is preferred for services like web and mobile application backends since it reduces the risk of disruptions if unexpected issues arise. Creating a new stage with the same name or applying changes directly to the existing production environment could potentially disrupt all users if there are issues with the update. While functional testing in an isolated environment is an important step, it does not provide a mechanism for gradual exposure to the live user base.
Ask Bash
What is a canary release and how does it work?
What are the advantages of staged rollouts over complete replacements?
What testing strategies should be used before a canary release?
An engineer needs to automate resource management on a virtual server hosted on a cloud platform. The application must abide by security best practices and not use long-term static credentials. What is the most secure approach for the engineer to facilitate necessary permissions for the application?
- Place access credentials within the application's source code and configure it to reference these for making service requests.
- Embed static access credentials with extensive permissions in the environment variables of the virtual server for application use.
- Attach an IAM role to the virtual server which has policies corresponding to the required level of access for the application.
- Distribute a static user's access and secret keys to the application through the virtual server's launch configuration.
- Configure the virtual server to use the cloud platform's root user credentials for all management tasks.
Answer Description
Attaching an IAM role to the virtual server instance is considered a security best practice as it provides temporary credentials that are automatically rotated and do not require management or embedding in the application's code. Directly embedding static long-term credentials or using root account credentials is strongly discouraged due to the heightened risk of compromise and the likelihood of violating the principle of least privilege.
Ask Bash
What is an IAM role in cloud computing?
What are security best practices for using cloud resources?
Why is embedding static access credentials considered risky?
Your application, hosted on multiple Amazon EC2 instances, needs to perform periodic data processing tasks on an Amazon S3 bucket. The tasks require the application to have read, write, and list permissions on the bucket. To align with security best practices, which action should you take to grant these S3 permissions to the application?
- Attach an IAM managed policy with the required S3 permissions directly to the EC2 instances.
- Create an IAM role with the specified S3 permissions and attach it to the EC2 instances using an instance profile.
- Configure a resource-based policy on the S3 bucket to grant the EC2 instances the required permissions.
- Create an IAM user for each EC2 instance with permissions to access the S3 bucket and store the credentials in a configuration file on each instance.
Answer Description
Attaching an instance profile that contains an IAM role with the necessary S3 permissions to your EC2 instances is the recommended solution for this scenario. The application assumes the IAM role and obtains temporary credentials, which it uses to access the S3 bucket. An instance profile lets the EC2 instance securely make API calls to AWS services on the role's behalf, and the temporary credentials are automatically rotated and managed by AWS. Creating individual IAM users is not scalable across multiple instances, and storing credentials in a configuration file is a security risk that violates the AWS recommendation against embedding secrets in code. A resource-based policy on the S3 bucket is insufficient on its own: a bucket policy can grant permissions only to existing IAM principals, and the application still needs credentials, which the instance profile's role supplies automatically, to sign its requests.
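In CloudFormation, the role and instance profile for this pattern could be sketched as follows; the bucket name is hypothetical:

```yaml
AppRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:        # trust policy: EC2 may assume this role
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal: { Service: ec2.amazonaws.com }
          Action: sts:AssumeRole
    Policies:
      - PolicyName: BucketAccess
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action: [s3:GetObject, s3:PutObject]
              Resource: arn:aws:s3:::example-data-bucket/*
            - Effect: Allow
              Action: s3:ListBucket
              Resource: arn:aws:s3:::example-data-bucket

AppInstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Roles: [!Ref AppRole]
```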
Ask Bash
What is an IAM role and why is it used with EC2 instances?
What is the difference between an IAM user and an IAM role?
What is an instance profile in the context of EC2 and IAM roles?
Utilizing Amazon CloudWatch Logs Insights for querying application logs does not support time-based queries, making it unsuitable for isolating incidents that occurred during specific time frames.
- False
- True
Answer Description
The correct answer is false. Amazon CloudWatch Logs Insights does support time-based queries, which allow developers to isolate and examine log data from specific time frames. This is a critical troubleshooting feature for pinpointing when specific incidents or anomalies occurred. The 'true' option plays on the possibility that time-based querying might be missing, a common pitfall for someone unfamiliar with CloudWatch Logs Insights' capabilities.
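For example, a Logs Insights query runs over a caller-chosen time range (picked in the console, or passed as start and end times to the StartQuery API) and can filter further within it:

```
fields @timestamp, @message
| filter @message like /ERROR/
| sort @timestamp desc
| limit 20
```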
Ask Bash
How do time-based queries work in CloudWatch Logs Insights?
What query language does CloudWatch Logs Insights use?
What are some common use cases for Amazon CloudWatch Logs Insights?
A development team is refining their serverless application's logging approach to enable better insight during debug sessions. They need the capability to effortlessly filter and search through log entries for specific execution contexts like the runtime version and unique request identifiers. Which logging strategy should the team employ to ensure their log data is conducive to rapid and effective troubleshooting?
- Focus on capturing only exception stack traces in the log files, considering that this information suffices for pinpointing issues.
- Adopt structured logging with a consistent format like JSON that supports key-value pairs for all pertinent information, enhancing the interpretability and searchability of the logs.
- Use a rudimentary logging mechanism that captures standard output streams without specific structure to save on storage space.
- Depend solely on a distributed tracing service to monitor the application, presuming it alleviates the need for detailed logs.
Answer Description
Structured logging is an approach where developers log data in a consistent, predefined format, like JSON with key-value pairs. This format is beneficial because it makes it easier to search, filter, and analyze log data, allowing teams to isolate issues rapidly and efficiently. The addition of structured contextual information such as execution runtime version and unique identifiers facilitates targeted querying which is highly valuable during troubleshooting. Alternatives like unstructured text logs, only capturing error messages, or sole reliance on distributed tracing services without context-rich application-specific logs would not provide the same ease of filtering and detail required for efficient debugging.
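A minimal sketch of structured logging with Python's standard library follows; the field names `request_id` and `runtime_version` are illustrative choices, not a required schema:

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object (structured logging)."""

    def format(self, record):
        entry = {
            "level": record.levelname,
            "message": record.getMessage(),
        }
        # Copy over contextual fields attached via `extra=` so each one is
        # queryable as its own key-value pair.
        for key in ("request_id", "runtime_version"):
            if hasattr(record, key):
                entry[key] = getattr(record, key)
        return json.dumps(entry)

def make_logger():
    """Build a logger that writes one JSON object per line to stdout."""
    logger = logging.getLogger("app")
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(JsonFormatter())
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```

A call such as `make_logger().info("processing file", extra={"request_id": "abc-123"})` then emits one JSON object per line, which log tooling such as CloudWatch Logs Insights can filter by key.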
Ask Bash
What is structured logging and how does it differ from traditional logging methods?
Why is using JSON a preferred format for structured logging?
What are some tools or services that can help with structured logging?
A developer needs to process incoming files uploaded to an Amazon S3 bucket and transform the data using an AWS Lambda function. The developer wants the function to be triggered as soon as a new file is stored in the bucket. What is the MOST suitable way to achieve this requirement?
- Send the S3 event notifications to an Amazon SQS queue and set up the queue as an event source for the Lambda function.
- Publish the S3 event notifications to an Amazon SNS topic and subscribe the Lambda function to that topic.
- Configure the Lambda function to be triggered by Amazon S3 event notifications when a new object is created.
- Route the S3 event notifications to Amazon CloudWatch Events/EventBridge and associate the Lambda function as a target.
Answer Description
Using Amazon S3 event notifications to trigger a Lambda function directly when a new object is created provides the immediate response required for the developer's use case. It is the most direct method for event-driven integration between Amazon S3 and AWS Lambda. Configuring a Lambda function with an Amazon S3 event as the trigger allows the function to execute/respond as soon as the specified S3 event, such as a PUT, occurs.
Using Amazon SNS or Amazon SQS would introduce additional steps and potential delays, as the S3 event would first have to publish to a topic or send a message to a queue before the Lambda function is triggered. That indirection is less efficient here and is better suited to cases that require loose coupling or message filtering. Routing the S3 event notifications through Amazon CloudWatch Events/EventBridge would likewise introduce unnecessary complexity for this direct integration use case.
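In an AWS SAM template, this direct trigger can be sketched as follows; the function name, handler, and bucket resource are hypothetical, and SAM requires the bucket to be declared in the same template:

```yaml
ProcessUpload:
  Type: AWS::Serverless::Function
  Properties:
    Handler: app.handler
    Runtime: python3.12
    Events:
      NewObject:
        Type: S3
        Properties:
          Bucket: !Ref UploadBucket       # an AWS::S3::Bucket in this template
          Events: s3:ObjectCreated:*
```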
Ask Bash
What are Amazon S3 event notifications?
How does AWS Lambda integrate with Amazon S3?
What are the benefits of using Lambda over other services like SNS or SQS for this use case?
A software team has noticed that a serverless function, which processes incoming streaming data, is occasionally hitting its maximum execution time, resulting in incomplete tasks. The function's workload involves complex computations and varying sizes of data payloads. To enhance the processing efficiency and reduce the likelihood of timeouts, which of the following adjustments should be the team's initial step?
- Implement a distributed tracing service to examine the function's behavior in detail.
- Allocate more memory to the serverless function.
- Manually optimize the function by removing lines of code to decrease the execution time.
- Extend the maximum allowed execution time for the function.
Answer Description
Allocating more memory to the serverless function increases the compute capacity available to it: in AWS Lambda, CPU scales proportionally with the memory setting, so complex computations on large payloads finish sooner and timeouts become less likely. Extending the function's maximum execution time may mask the symptom but does not address the performance inefficiency, and may increase costs without solving the real issue. Removing lines of code without a profiling strategy is unlikely to solve the problem, and unlike memory allocation it is not a configuration setting that can be tuned. Distributed tracing can highlight where time is spent, but it does not by itself improve execution time.
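Raising the memory setting is a one-line configuration change; the function name and new size below are hypothetical:

```shell
# More memory also buys a proportionally larger CPU share in Lambda
aws lambda update-function-configuration \
    --function-name stream-processor \
    --memory-size 1024
```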
Ask Bash
What is the impact of allocating more memory to a serverless function?
Why shouldn't we just extend the maximum execution time for the function?
What are some effective ways to optimize a serverless function's performance aside from increasing memory?