Free AWS Certified Developer Associate DVA-C02 Practice Test
Prepare for the AWS Certified Developer Associate DVA-C02 exam with this free practice test. Randomly generated and customizable, this test allows you to choose the number of questions.
- Questions: 15
- Time: 15 minutes (60 seconds per question)
- Included Objectives: Development with AWS Services, Security, Deployment, Troubleshooting and Optimization
An enterprise has mandated that their cloud-hosted applications authenticate users from the on-premises directory service without duplicating sensitive credentials. Which approach should be employed to meet this requirement while leveraging the organization's existing user directory?
Migrate the on-premises directory service users to a cloud directory service with User Pools.
Generate temporary access credentials for users via a token service to authenticate against the on-premises directory service.
Implement application-side user authentication controls using the Access Control List (ACL) feature of a cloud directory service.
Integrate the application through federation using SAML 2.0 with the organization's existing identity management system.
Answer Description
The correct approach is to integrate the cloud application with the on-premises directory service using a federation protocol such as SAML 2.0. IAM supports SAML federation, which allows users to authenticate with their existing corporate credentials without those credentials being stored in the cloud. While Amazon Cognito also supports federation, IAM with SAML is designed to work directly with corporate directories such as Active Directory and is therefore the better-suited choice for this use case. The other options do not address the requirement of federating with an existing on-premises directory service.
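As a rough illustration of the federation flow, here is a minimal sketch (the ARNs and the assertion value are hypothetical placeholders, not from the exam material) of exchanging a SAML assertion from the corporate identity provider for temporary AWS credentials:

```python
# Minimal sketch: trading a SAML 2.0 assertion for temporary AWS
# credentials via STS. The role/provider ARNs and the assertion value
# are hypothetical placeholders.
import boto3

sts = boto3.client("sts")

# Base64-encoded SAML response returned by the corporate IdP after sign-on.
saml_assertion = "PHNhbWxwOlJlc3BvbnNlPi4uLg=="  # placeholder value

response = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::123456789012:role/CorpFederatedRole",       # hypothetical
    PrincipalArn="arn:aws:iam::123456789012:saml-provider/CorpADFS",  # hypothetical
    SAMLAssertion=saml_assertion,
)
creds = response["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken
```

Because the returned credentials are temporary, no long-lived corporate secrets ever need to be duplicated into the cloud.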
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is SAML 2.0 and how does it work?
What is the role of IAM in AWS when using SAML 2.0?
What is the benefit of using federated authentication over direct credential storage?
When a team of Developers requires a secure, scalable, managed repository for their collaborative coding efforts, which solution should they primarily consider?
Use Bitbucket as the primary remote repository, with scripts to synchronize code with S3 for backups.
Employ CodeCommit for its native integration and managed source control capabilities.
Implement a series of GitHub repositories configured to trigger CodePipeline for integration with platform services.
Rely on CodePipeline to host and manage the version control of the source code.
Answer Description
Utilizing CodeCommit provides a secure, highly scalable, managed source control service that hosts private Git repositories. The service is designed to work seamlessly with other services on the platform, ensuring a cohesive development experience. GitHub and Bitbucket, while popular for source control, are third-party services that do not offer the same level of native integration. CodePipeline is focused on CI/CD orchestration and consumes code from repositories rather than serving as a source control host.
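For reference, a minimal sketch (repository name and description assumed) of provisioning such a repository with the AWS SDK:

```python
# Minimal sketch: creating a managed private Git repository in CodeCommit.
# The repository name and description are hypothetical.
import boto3

codecommit = boto3.client("codecommit")

repo = codecommit.create_repository(
    repositoryName="team-app",
    repositoryDescription="Shared repository for the development team",
)

# Developers can then clone over HTTPS using their IAM Git credentials.
clone_url = repo["repositoryMetadata"]["cloneUrlHttp"]
print(clone_url)
```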
Ask Bash
What is CodeCommit and how does it integrate with AWS services?
What are the main advantages of using a managed source control solution like CodeCommit over third-party services?
Can you explain the differences between GitHub, Bitbucket, and CodeCommit in terms of their integration with CI/CD processes?
Your team is responsible for the upkeep of a distributed architecture spanning numerous services. The services generate copious amounts of operational data, and you need an effective way to track and visualize key performance indicators such as error rates, response times, and throughput. Select the method that would BEST enable your team to grasp the overall system health at a glance, considering the diverse datasets and the need for real-time analysis.
Configure individual alarms for each service to trigger notifications when performance metrics fall outside of expected norms.
Exclusively deploy a tracing tool to visualize service interactions and deduce error rates and latencies without additional metric analysis.
Regularly perform ad-hoc queries on operational logs using the built-in query language of the cloud provider's logging service.
Implement a comprehensive dashboard using the monitoring service provided by the cloud provider, including appropriate widgets for key metrics from different services.
Answer Description
Creating a dashboard in Amazon CloudWatch with customized widgets provides the optimal solution for monitoring system health. It allows the team to view multiple metrics concurrently, offering a comprehensive, real-time view of application performance across services. Although Amazon CloudWatch Alarms, Logs Insights, and AWS X-Ray offer valuable data examination and notifications, they do not provide the single, unified view needed for an immediate, holistic understanding of system health.
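A minimal sketch of building such a dashboard programmatically, assuming hypothetical service names and metric dimensions:

```python
# Minimal sketch: a CloudWatch dashboard with widgets for key metrics
# from two different services. All names and dimensions are hypothetical.
import json
import boto3

cloudwatch = boto3.client("cloudwatch")

dashboard_body = {
    "widgets": [
        {
            "type": "metric", "x": 0, "y": 0, "width": 12, "height": 6,
            "properties": {
                "title": "API error rate",
                "region": "us-east-1",
                "metrics": [["AWS/ApiGateway", "5XXError", "ApiName", "orders-api"]],
                "stat": "Sum", "period": 60,
            },
        },
        {
            "type": "metric", "x": 12, "y": 0, "width": 12, "height": 6,
            "properties": {
                "title": "Lambda response time (p95)",
                "region": "us-east-1",
                "metrics": [["AWS/Lambda", "Duration", "FunctionName", "process-order"]],
                "stat": "p95", "period": 60,
            },
        },
    ]
}

cloudwatch.put_dashboard(
    DashboardName="system-health",
    DashboardBody=json.dumps(dashboard_body),
)
```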
Ask Bash
What are key performance indicators (KPIs) and why are they important in a distributed architecture?
How does Amazon CloudWatch help in monitoring and visualizing system health?
What are the differences between using alarms, logs, and dashboards for monitoring?
As part of setting up the CI/CD pipeline for a newly developed serverless application, your team needs to ensure that code changes are automatically tested before they are merged into the main branch of the repository. Which service would you use to perform this action after every commit?
CodeCommit
CodeDeploy
CodeBuild
CodePipeline
Answer Description
The correct service for the scenario described is CodeBuild, which can automatically run unit tests and other commands each time the repository is updated, ensuring that new commits don't break the application. CodeDeploy is focused on deployment tasks and doesn't inherently run tests. CodePipeline orchestrates the flow of updates rather than executing the tests itself. CodeCommit is a source control service and does not directly handle testing; it would integrate with other tools to achieve this functionality.
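To make this concrete, here is a hedged sketch of registering a CodeBuild project whose inline buildspec runs a Python test suite; the project name, repository URL, and service role ARN are assumptions:

```python
# Minimal sketch: a CodeBuild project with an inline buildspec that runs
# the unit tests. All names, ARNs, and URLs are hypothetical.
import boto3

codebuild = boto3.client("codebuild")

BUILDSPEC = """
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.12
  build:
    commands:
      - pip install -r requirements.txt
      - python -m pytest tests/
"""

codebuild.create_project(
    name="orders-unit-tests",
    source={
        "type": "CODECOMMIT",
        "location": "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/orders",
        "buildspec": BUILDSPEC,
    },
    artifacts={"type": "NO_ARTIFACTS"},
    environment={
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/standard:7.0",
        "computeType": "BUILD_GENERAL1_SMALL",
    },
    serviceRole="arn:aws:iam::123456789012:role/codebuild-service-role",
)
```

In a pipeline, CodePipeline would typically invoke this project as the test stage after every commit.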
Ask Bash
What is AWS CodeBuild?
How does CodeBuild fit into a CI/CD pipeline?
What other services work with CodeBuild?
A developer has written a Lambda function to process incoming user profile updates and wants to track the number of failed update attempts due to validation errors. They need to create a custom metric for this purpose and report it using Amazon CloudWatch. Which approach should the developer use to implement this requirement?
Emit logs in the CloudWatch embedded metric format (EMF) from the Lambda function for every failed validation attempt.
Implement AWS X-Ray tracing in the Lambda function and filter the trace data to count the validation errors.
Use the AWS SDK in the Lambda function to put a custom metric by calling the CloudWatch PutMetricData API when a validation error occurs.
Create a new CloudWatch Logs log stream and write an error message with a specific format for every failed validation attempt.
Answer Description
To create a custom metric for failed profile-update attempts due to validation errors, the developer should use the AWS SDK to call the CloudWatch PutMetricData API. This API publishes custom metrics directly to Amazon CloudWatch, so each occurrence of the specific scenario, 'failed update attempts due to validation errors', is tracked accurately. Writing error messages to a new CloudWatch Logs log stream would persist the error data, but it would not by itself create a custom metric. Using AWS X-Ray would provide insight into request flow and latency, not a metric count. Emitting logs in the embedded metric format (EMF) is also a legitimate way to produce custom metrics from Lambda logs, but calling PutMetricData through the SDK is the direct, purpose-built mechanism for publishing a custom metric and is the best answer here.
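A minimal sketch of such a handler, with a toy validation rule and hypothetical namespace and metric names:

```python
# Minimal sketch: a Lambda handler that publishes a custom CloudWatch
# metric whenever a profile update fails validation. Namespace, metric,
# and field names are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch")

class ValidationError(Exception):
    """Raised when an incoming profile update fails validation."""

def lambda_handler(event, context):
    profile = event.get("profile", {})
    if "email" not in profile:  # toy validation rule for illustration
        cloudwatch.put_metric_data(
            Namespace="UserProfileService",
            MetricData=[{
                "MetricName": "FailedValidationAttempts",
                "Value": 1,
                "Unit": "Count",
            }],
        )
        raise ValidationError("profile update is missing an email address")
    return {"status": "updated"}
```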
Ask Bash
What is Amazon CloudWatch and how does it relate to monitoring metrics?
What is the AWS SDK and how is it used in Lambda functions?
Can you explain the CloudWatch PutMetricData API and its use case?
AWS Private Certificate Authority allows you to issue certificates that can be used to secure network communications and establish trust, even without the internet.
True
False
Answer Description
The statement is true. AWS Private Certificate Authority (AWS Private CA) is a managed private certificate authority service that enables you to issue and manage private SSL/TLS certificates used to establish secure network communications and to secure applications. Because the trust anchor is your own private CA, these certificates can authenticate and encrypt traffic within an organization even in environments without internet access, such as a private corporate intranet or an isolated virtual private cloud. It's important to understand that while AWS Private CA lets you manage these private certificates, it does not establish public trust the way a public certificate authority would.
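For illustration, a hedged sketch of asking a private CA to sign a certificate signing request (the CA ARN is hypothetical, and the CSR would be one you generated yourself, for example with OpenSSL):

```python
# Minimal sketch: issuing a private certificate with AWS Private CA.
# The CA ARN is hypothetical and the CSR content is a placeholder.
import boto3

pca = boto3.client("acm-pca")

csr_pem = (
    b"-----BEGIN CERTIFICATE REQUEST-----\n"
    b"...placeholder CSR body...\n"
    b"-----END CERTIFICATE REQUEST-----\n"
)

response = pca.issue_certificate(
    CertificateAuthorityArn=(
        "arn:aws:acm-pca:us-east-1:123456789012:"
        "certificate-authority/11111111-2222-3333-4444-555555555555"
    ),
    Csr=csr_pem,
    SigningAlgorithm="SHA256WITHRSA",
    Validity={"Value": 365, "Type": "DAYS"},
)
certificate_arn = response["CertificateArn"]  # fetch it later with GetCertificate
```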
Ask Bash
What is AWS Private Certificate Authority (PCA)?
What are SSL and TLS certificates used for?
How does a private certificate authority differ from a public certificate authority?
A developer is implementing an application that requires frequent retrieval of items from an Amazon DynamoDB table. To optimize performance, the application needs to minimize latency and reduce the number of network calls. Given the need for efficient data access patterns, which method should the developer use when implementing code that interacts with the DynamoDB table using the AWS SDK?
Use `PutItem` calls with a filter to only insert the desired items.
Employ a `Scan` operation to fetch all table items and filter as needed.
Perform individual `GetItem` operations for each item.
Utilize `BatchGetItem` for batch retrieval of items.
Answer Description
Using the batch operations of the AWS SDKs, such as `BatchGetItem` for Amazon DynamoDB, allows the application to retrieve up to 100 items from one or more DynamoDB tables in a single operation. This reduces the number of network calls, and therefore network latency, compared with issuing an individual `GetItem` request per item. `Query` and `Scan` are designed for different purposes: `Query` retrieves items using a key condition expression, and `Scan` reads every item in a table, which is inefficient for frequently accessed individual items. `PutItem` is used for data insertion, not retrieval, so it is not suitable for this scenario.
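A minimal sketch (table and key names assumed) of a batch retrieval that also retries any keys DynamoDB returns as unprocessed:

```python
# Minimal sketch: fetching several known items in one round trip with
# BatchGetItem and retrying unprocessed keys. Table and attribute names
# are hypothetical.
import boto3

dynamodb = boto3.resource("dynamodb")

request = {
    "Products": {
        "Keys": [{"product_id": pid} for pid in ("p-001", "p-002", "p-003")],
    }
}

items = []
while request:
    response = dynamodb.batch_get_item(RequestItems=request)
    items.extend(response["Responses"].get("Products", []))
    # A batch may come back partial; retry whatever was left over.
    request = response.get("UnprocessedKeys") or {}

print(items)
```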
Ask Bash
What is `BatchGetItem` and how does it work in DynamoDB?
What are the differences between `GetItem`, `Query`, and `Scan` in DynamoDB?
Why is `Scan` not recommended for frequently accessed individual items?
A development team is looking to manage environment-specific settings for their e-commerce web service interface, which routes incoming requests to the same backend AWS Lambda functions. They want to ensure that while the Lambda codebase remains identical, the behavior can be customized based on the environment it's deployed in. How should the team achieve this distinction within the web service interface provided by AWS?
Introduce separate interface methods distinguished by the intended environment to control the routing of requests.
Establish distinct backend functions for each deployment stage to manage the configuration differences, and allocate them to the web service interface accordingly.
Utilize environment configuration parameters, unique to each deployment stage of the web service interface, to pass specific values to the backend without changing the functions.
Alter the backend function code to incorporate environment-specific logic, and bundle this variance within the function deployment package.
Answer Description
Stage variables in Amazon API Gateway allow for the configuration of environment-specific settings without altering the backend Lambda functions. By defining different stage variables for development and production, developers can alter the behavior of the interface, such as resource paths or logging levels, without duplicating Lambda functions or creating multiple APIs. This approach is cost-effective and maintains a single codebase for backend processing. The incorrect answers suggest inefficient practices, such as deploying separate sets of Lambda functions or modifying backend code for each environment, which would increase costs and complexity.
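As an illustration, with a Lambda proxy integration API Gateway includes the stage's variables in the incoming event, so one function can adapt per stage. The variable names below are hypothetical:

```python
# Minimal sketch: one Lambda handler reading API Gateway stage variables
# so the same code behaves differently per deployment stage.
# The variable names ("tableName", "logLevel") are hypothetical.
import json

def lambda_handler(event, context):
    stage_vars = event.get("stageVariables") or {}
    table_name = stage_vars.get("tableName", "orders-dev")
    log_level = stage_vars.get("logLevel", "DEBUG")

    # ...use table_name / log_level to parameterize the real work...
    return {
        "statusCode": 200,
        "body": json.dumps({"table": table_name, "logLevel": log_level}),
    }
```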
Ask Bash
What are stage variables in AWS, and how do they work?
Why is using environment configuration parameters preferable over creating separate Lambda functions?
What are some common use cases for stage variables?
Which configuration under Amazon API Gateway allows a developer to associate a REST API with a custom domain name?
Stage Variables
Base Path Mapping
Resource Policies
Custom Domain Names
Answer Description
API Gateway custom domain names let you define a human-friendly domain for your API Gateway APIs; base path mappings created under a custom domain then map individual paths to specific APIs and stages. This functionality makes serverless applications accessible via friendly URLs instead of the default execute-api endpoint. Custom domain names are distinctly different from stages, which are essentially snapshots of your API and do not inherently provide the means to use custom URLs.
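A hedged sketch of the two calls involved, with a hypothetical domain, certificate ARN, and API ID:

```python
# Minimal sketch: registering a custom domain for API Gateway and mapping
# a base path to one API and stage. All identifiers are hypothetical.
import boto3

apigw = boto3.client("apigateway")

apigw.create_domain_name(
    domainName="api.example.com",
    regionalCertificateArn="arn:aws:acm:us-east-1:123456789012:certificate/abc-123",
    endpointConfiguration={"types": ["REGIONAL"]},
)

apigw.create_base_path_mapping(
    domainName="api.example.com",
    basePath="v1",            # requests to api.example.com/v1/... land here
    restApiId="a1b2c3d4e5",
    stage="prod",
)
```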
Ask Bash
What are Custom Domain Names in API Gateway?
What is Base Path Mapping and how does it relate to Custom Domain Names?
What are Stages in API Gateway?
Which type of encryption key provided by AWS allows a developer to explicitly handle rotation schedules and define the key's usage policy?
Customer-managed keys
Provider-managed keys
Platform-managed keys
Automatically-rotated keys
Answer Description
Customer-managed keys give the developer the authority to configure the key's policy and to control its rotation, deletion, and lifecycle. This autonomy is the primary differentiator from AWS managed keys, where control over these aspects is limited and handled mostly by the cloud service provider.
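A minimal sketch of creating such a key with an explicit key policy and opting in to automatic rotation (the account ID and policy are illustrative):

```python
# Minimal sketch: a customer-managed KMS key with an explicit key policy
# and automatic rotation enabled. Account ID and policy are hypothetical.
import json
import boto3

kms = boto3.client("kms")

key_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowAccountAdministration",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
        "Action": "kms:*",
        "Resource": "*",
    }],
}

key = kms.create_key(
    Description="customer-managed key for application data",
    Policy=json.dumps(key_policy),
)
kms.enable_key_rotation(KeyId=key["KeyMetadata"]["KeyId"])
```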
Ask Bash
What are Customer-managed keys in AWS?
What is key rotation and why is it important?
How do Customer-managed keys differ from Provider-managed keys?
A developer is tasked with enabling a mobile application to authenticate users via third-party social media platforms and subsequently authorize them to directly call cloud services. Which Amazon service should be utilized to fulfill this requirement?
Implement an Amazon Cognito Identity Pool to federate with external identity providers and acquire temporary credentials for service access.
Leverage the Security Token Service to build federated user sessions with external platforms.
Incorporate Amazon QuickSight for identity management and authorization of cloud service API calls.
Deploy an Amazon Cognito User Pool to directly manage external authentication and access.
Answer Description
Amazon Cognito identity pools (federated identities) give developers the ability to create unique identities for users and federate them with external identity providers, such as social media platforms (Facebook, Google, Login with Amazon, etc.). With these unique identities, the application can obtain temporary credentials that allow users to interact directly with cloud service APIs after authentication. User pools, in contrast, mainly create and manage a directory of app users and don't directly grant access to cloud service APIs. While STS is indeed used for temporary credentials, by itself it neither handles social identity provider federation nor provides directory services. Amazon QuickSight is a business analytics service and is not pertinent to identity management.
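A minimal sketch of the credential exchange, with a hypothetical identity pool ID and a placeholder Facebook token obtained from the social sign-in:

```python
# Minimal sketch: trading a social provider token for temporary AWS
# credentials via a Cognito identity pool. Pool ID and token are
# hypothetical placeholders.
import boto3

cognito = boto3.client("cognito-identity")

facebook_access_token = "EAAB...placeholder"  # returned by the Facebook SDK
logins = {"graph.facebook.com": facebook_access_token}

identity = cognito.get_id(
    IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555",
    Logins=logins,
)
creds = cognito.get_credentials_for_identity(
    IdentityId=identity["IdentityId"],
    Logins=logins,
)["Credentials"]  # AccessKeyId, SecretKey, SessionToken, Expiration
```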
Ask Bash
What is Amazon Cognito Identity Pool?
How do Identity Pools differ from User Pools in Amazon Cognito?
What are temporary credentials and why are they important?
To increase transparency into the performance of a critical web application, a developer needs to record event-specific information that can indicate the system's health. Without altering the existing infrastructure setup, which method should the developer choose to integrate this telemetry capture directly within the application's code base?
Incorporate the necessary API calls to the monitoring service to transmit the custom telemetry data as required.
Leverage the high-level functionalities of a software development kit to streamline the transmission process of telemetry data.
Adapt the system agent pre-installed on the host machine to gather and dispatch the new set of metrics.
Utilize a configuration management service to automate the creation of telemetry resources for the monitoring service.
Answer Description
The correct method for a developer to insert custom telemetry directly from within the application is to use the monitoring service's API to publish these metrics. This technique is immediate, requires minimal additional setup, and integrates straightforwardly into the code base. The other options, such as configuration management tools, host-level agents for metric collection, or SDK abstractions that simplify API interactions, are valid in certain contexts but are more indirect or less efficient for emitting custom metrics straight from application code.
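For instance, a hedged sketch (hypothetical namespace and dimension names) of publishing a latency measurement as a custom metric from the request path:

```python
# Minimal sketch: publishing a response-time measurement as a custom
# CloudWatch metric directly from application code. Names are hypothetical.
import time
import boto3

cloudwatch = boto3.client("cloudwatch")

def handle_request() -> None:
    start = time.perf_counter()
    # ...the real request handling would happen here...
    elapsed_ms = (time.perf_counter() - start) * 1000.0

    cloudwatch.put_metric_data(
        Namespace="WebApp/Frontend",
        MetricData=[{
            "MetricName": "RequestLatency",
            "Dimensions": [{"Name": "Endpoint", "Value": "/checkout"}],
            "Value": elapsed_ms,
            "Unit": "Milliseconds",
        }],
    )
```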
Ask Bash
What is telemetry in the context of web applications?
What are API calls and how do they work for telemetry capture?
What is the difference between telemetry data and traditional logging?
A cloud architect must design a system for handling image processing tasks whose load can peak unpredictably, while ensuring that end-users can continue submitting images without delay. Which design pattern should the architect choose to fulfill this requirement?
Implementing a design where image processing tasks are queued for execution without requiring the user to wait for each task completion
Creating a design based on a pattern that is driven by the occurrence of specific events, without addressing user submission responsiveness
Establishing a system where image processing operations are performed instantaneously upon submission and acknowledgment is made only after completion
Configuring a direct processing method where each request is processed in order and the user session is maintained until confirmation of image handling
Answer Description
The asynchronous interaction design pattern is the correct choice in this scenario, as it allows submitted tasks to be queued for processing independently from the submission operation. This ensures that the end-users are not delayed while the images are being processed, which is particularly beneficial during times of high load. A synchronous design pattern would require the users to wait for each processing task to complete before continuing, leading to possible delays and a poor user experience during peak times. Event-driven architectures can encompass both synchronous and asynchronous processing but do not in themselves distinguish the type of responsiveness to user submissions required here.
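A minimal sketch of the submission side of this pattern, using a queue (the queue URL and message fields are hypothetical):

```python
# Minimal sketch of the asynchronous pattern: the submission handler only
# enqueues the work and returns immediately; separate workers drain the
# queue. Queue URL and payload fields are hypothetical.
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/image-tasks"

def submit_image(image_key: str) -> dict:
    """Accept a submission without waiting for processing to finish."""
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"s3_key": image_key, "operation": "resize"}),
    )
    return {"status": "accepted"}  # the user gets a response right away
```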
Ask Bash
What is the asynchronous interaction design pattern?
Why is queuing important in an asynchronous system?
What are some examples of technologies used for implementing a queuing system?
Your company's new web application has just been deployed to production, and you're tasked with setting up its operational backbone. You need to prepare the application for incident response and ensure that you have adequate visibility into the system's performance and behavior. Which approach aligns best with your goal of full system observability?
Caching frequently accessed data to optimize read times.
Setting up a comprehensive system of structured logs, detailed metrics, and distributed tracing.
Creating a detailed metrics dashboard that displays the CPU and memory usage of your application environments.
Configuring alerts to notify the team upon encountering any HTTP 5XX errors.
Answer Description
Implementing a comprehensive strategy that incorporates structured logs, detailed metrics, and distributed tracing provides full observability into the application's health and performance. Structured logs allow capturing consistent and context-rich logging data, detailed metrics offer quantitative information about the system's operation, and distributed tracing enables tracking of individual requests as they flow through the system, linking the metrics and logs to provide a complete picture of the application's behavior. Caching data optimizes read times but doesn't contribute to observability. Similarly, setting up a single metrics dashboard provides a view but not in-depth tracking of individual requests. Configuring alerts for errors ensures an incident response but does not facilitate ongoing analysis of system performance and behavior.
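As one piece of that strategy, a minimal sketch of structured (JSON) logging, with illustrative field names:

```python
# Minimal sketch: one self-describing JSON object per log event, so the
# entries are consistent and queryable. Field names are illustrative.
import json
import logging
import time

logger = logging.getLogger("orders")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())

def log_event(level: int, message: str, **context) -> None:
    """Emit a single structured log entry."""
    logger.log(level, json.dumps({
        "timestamp": time.time(),
        "message": message,
        **context,
    }))

log_event(logging.INFO, "order placed",
          order_id="o-123", latency_ms=42, trace_id="1-abc-def")
```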
Ask Bash
What are structured logs and why are they important for observability?
What are detailed metrics, and how do they contribute to monitoring applications?
What is distributed tracing and how does it help in understanding application behavior?
You are implementing a notification system for an online shopping platform hosted on AWS. The system must send emails to customers after their order has been processed. Given that the order processing system can become backlogged during peak times, which pattern should you employ to ensure that the email notifications do not block or slow down the order processing workflow?
Employ an asynchronous pattern with a message queue that collects notifications to be processed by a separate email-sending service independently of order processing.
Use a synchronous pattern where the order processing service sends the email directly before confirming the order as complete within the same workflow.
Implement a hybrid pattern, sending the email synchronously for premium customers, while using an asynchronous approach for regular customers.
Adopt a synchronous pattern for small batches of orders, switching to an asynchronous pattern only when detecting a processing backlog.
Answer Description
An asynchronous communication pattern is suitable for this scenario because it allows the order processing to complete without waiting for the email notification to be sent. The email sending process can be offloaded to a separate service, such as a message queue that triggers a Lambda function to send emails. This decoupling ensures that the order processing system remains responsive even during times of high load. A synchronous pattern would require each order to wait until the email is sent before marking the order as complete, potentially causing delays and backlogs in the order processing system.
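A minimal sketch of the consuming side, assuming an SQS trigger and Amazon SES for delivery (addresses and message fields are hypothetical):

```python
# Minimal sketch: a Lambda function triggered by the notification queue
# that sends the emails with Amazon SES, decoupled from order processing.
# Addresses and payload fields are hypothetical.
import json
import boto3

ses = boto3.client("ses")

def lambda_handler(event, context):
    # An SQS trigger delivers a batch of queued notifications.
    for record in event["Records"]:
        order = json.loads(record["body"])
        ses.send_email(
            Source="orders@example.com",
            Destination={"ToAddresses": [order["customer_email"]]},
            Message={
                "Subject": {"Data": f"Order {order['order_id']} confirmed"},
                "Body": {"Text": {"Data": "Thanks for your purchase!"}},
            },
        )
```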
Ask Bash
What is an asynchronous pattern in the context of application development?
What is a message queue and how does it help in asynchronous processing?
How does using AWS Lambda for sending emails fit into the asynchronous model?