Prepare for the AWS Certified Developer Associate DVA-C02 exam with this free practice test. Randomly generated and customizable, this test allows you to choose the number of questions.
You have been given the task to test an AWS Lambda function locally, which is designed to process JSON data from an Amazon API Gateway proxy integration. The function expects to receive a payload with a nested JSON object in the body that contains project details. Which of the following test events correctly emulates the expected format of the incoming event from API Gateway?
{"httpMethod": "POST", "body": "{\"project\": {\"name\": \"New Project\", \"description\": \"An upcoming project.\"}}", "path": "/projects", "requestContext": {"httpMethod": "POST", "path": "/prod/projects"}, "isBase64Encoded": false}
{"httpMethod": "POST", "body": "{'project': {'name': 'New Project', 'description': 'An upcoming project.'}}", "path": "/projects", "requestContext": , "isBase64Encoded": false}
{"httpMethod": "POST", "body": "{"project": {"name": "New Project", "description": "An upcoming project"}}", "path": "/projects"}
{"method": "POST", "body": {"project": {"name": "New Project", "description": "An upcoming project."}}, "path": "/projects"}
The correct answer is the one that adheres to the standard event structure a Lambda function receives from API Gateway with proxy integration. This includes 'httpMethod', 'path', 'requestContext', and a 'body' that is a serialized JSON string with the inner double quotes escaped with backslashes, since the Lambda function will parse it with a JSON parser. The correct option presents the nested JSON as a properly serialized and escaped string. The other options either present the body as an object instead of a string, leave the inner double quotes unescaped, or omit important property fields.
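A minimal sketch of a handler for this proxy integration, showing why the body must be a serialized string; the test event below mirrors the correct option (serializing the dict with `json.dumps` produces the escaped inner quotes automatically):

```python
import json

def handler(event, context=None):
    """Parse the JSON string in 'body' from an API Gateway proxy event."""
    # With proxy integration, 'body' arrives as a string, so it must be
    # deserialized before the nested fields can be read.
    body = json.loads(event["body"])
    project = body["project"]
    return {
        "statusCode": 200,
        "body": json.dumps({"received": project["name"]}),
    }

# A local test event matching the correct option; json.dumps produces the
# serialized, escaped string that API Gateway would place in 'body'.
test_event = {
    "httpMethod": "POST",
    "body": json.dumps({"project": {"name": "New Project",
                                    "description": "An upcoming project."}}),
    "path": "/projects",
    "requestContext": {"httpMethod": "POST", "path": "/prod/projects"},
    "isBase64Encoded": False,
}
```

Calling `handler(test_event)` returns a 200 response whose body echoes the project name; if 'body' were a raw object instead of a string, `json.loads` would raise a TypeError.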
AI Generated Content may display inaccurate information, always double-check anything important.
Which aspect of cloud application management involves the collection and analysis of data to understand the internal state of the system that can facilitate proactive identification and resolution of issues?
Logging
Monitoring
Observability
Caching
Observability extends beyond basic logging and monitoring by offering insights into the system's internal state, derived from the output data. This helps in proactive and efficient problem-solving. Logging refers to the recording of events and data, while monitoring is the process of tracking metrics and logs to oversee the performance and health of the system. Observability uses this data to provide a deeper understanding of the system, allowing for proactive issue identification and resolution.
A developer is configuring a Lambda function to access resources in a separate AWS account. To follow best security practices, the developer needs to grant the Lambda function the necessary permissions. What should the developer use to accomplish this?
Create an IAM role in the target account that the Lambda function can assume, with the necessary permissions attached.
Use the Lambda function's own execution role directly to access resources in the target account without assuming any roles.
Store the target account user's credentials in Lambda environment variables and use them to access resources.
Attach an inline policy to the Lambda function's execution role granting access to the target account resources.
Cross-account access is achieved by assuming an IAM role. The role should be created in the target AWS account with the necessary permissions attached and a trust policy that allows the Lambda function's account to assume it. Storing a user's long-term credentials is discouraged and goes against the principle of least privilege, as it provides standing, overly broad access. Attaching an inline policy to the execution role is not the correct approach for cross-account access, and Lambda environment variables are meant for configuration, not for holding credentials or permissions.
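A sketch of the assume-role pattern from inside the Lambda function; the account ID and role name are placeholders, and the `sts.assume_role` call requires AWS credentials at runtime:

```python
def build_assume_role_request(account_id, role_name, session_name):
    """Build the parameters for an STS AssumeRole call targeting a role
    in another account (account ID and role name are placeholders)."""
    return {
        "RoleArn": f"arn:aws:iam::{account_id}:role/{role_name}",
        "RoleSessionName": session_name,
    }

def get_cross_account_credentials(account_id, role_name):
    """Assume the cross-account role. The Lambda execution role must be
    allowed sts:AssumeRole on the target role, and the target role's
    trust policy must name the function's account."""
    import boto3  # imported lazily; needs AWS credentials when called
    sts = boto3.client("sts")
    params = build_assume_role_request(account_id, role_name,
                                       "lambda-cross-account")
    return sts.assume_role(**params)["Credentials"]
```

The temporary credentials returned can then be passed to a new boto3 client to access the target account's resources.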
Your application, hosted on multiple Amazon EC2 instances, needs to perform periodic data processing tasks on an Amazon S3 bucket. The tasks require the application to have read, write, and list permissions on the bucket. To align with security best practices, which action should you take to grant these S3 permissions to the application?
Configure a resource-based policy on the S3 bucket to grant the EC2 instances the required permissions.
Attach an IAM managed policy with the required S3 permissions directly to the EC2 instances.
Create an IAM role with the specified S3 permissions and attach it to the EC2 instances using an instance profile.
Create an IAM user for each EC2 instance with permissions to access the S3 bucket and store the credentials in a configuration file on each instance.
Attaching an instance profile that contains an IAM role with the necessary S3 permissions is the recommended solution for this scenario. The application assumes the role and obtains temporary credentials that it uses to access the S3 bucket; unlike static credentials, these are automatically rotated and managed by AWS. The instance profile lets the EC2 instances securely make API calls to AWS services without any long-term secrets on the instances. Creating individual IAM users is not scalable for multiple instances, and storing credentials in a configuration file is a security risk that violates the AWS recommendation against embedding secrets on instances or in code. A resource-based policy on the S3 bucket cannot solve this on its own, because the EC2 instances still need an IAM identity with which to sign their requests.
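An illustrative identity-based policy for the role in the instance profile; the bucket name is a placeholder. Note that the list permission applies to the bucket ARN while the object permissions apply to the objects under it:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::example-data-bucket"
    },
    {
      "Sid": "ReadWriteObjects",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-data-bucket/*"
    }
  ]
}
```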
When developing serverless applications, it is possible to execute and debug your code on your local machine, simulating the cloud environment, without having to upload your functions to the actual cloud service each time during testing.
The statement is false; the framework requires all tests to be performed directly in the cloud and does not support local testing.
The statement is true; the framework allows for local testing and debugging which simulates the cloud environment.
The framework for building serverless applications provides a local environment that mimics the cloud execution environment. This feature allows developers to iterate quickly by executing and debugging their code on local machines, saving time and resources by not requiring code to be deployed to the cloud service for initial testing phases. This is particularly useful for developing, testing, and debugging Lambda functions.
A software engineer needs to automate the deployment of a server-side application that relies on functions triggered by HTTP requests. The engineer plans to manage this application across various environments, each requiring distinct settings. Which service should the engineer choose to define the function triggers and environment configurations in a template format?
Deployment Automation Service
Pipeline Automation Service
Serverless Application Model
Infrastructure as Code Service
Cloud Infrastructure Development Kit
Web Application Hosting Service
The correct answer is the Serverless Application Model, which provides a streamlined way for defining functions, APIs, and the event source mappings that trigger them. It also supports configuration variants for different stages of deployment, such as development and production. This simplifies the process and makes it efficient for developers to manage serverless applications in a multi-environment setup.
The other options, like the service for deploying web applications or the deployment automation service, do not offer the same level of simplicity and direct support for serverless architecture templating. The infrastructure as code service is also capable of defining resources but does not offer the simplified syntax and tools that are tailored for serverless applications like the Serverless Application Model does. The Cloud Infrastructure Development Kit is a more generic tool for defining cloud resources using familiar programming languages and could be used for these purposes, but it doesn't offer the same level of convenience for serverless deployments specifically.
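A minimal sketch of what such a template looks like in the AWS SAM format; the function name, handler, and stage parameter are illustrative. The `Events` section defines the HTTP trigger, and the parameter feeds environment-specific settings:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Parameters:
  Stage:
    Type: String
    Default: dev
Resources:
  ProcessFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      Environment:
        Variables:
          STAGE: !Ref Stage
      Events:
        HttpPost:
          Type: Api
          Properties:
            Path: /projects
            Method: post
```

Deploying the same template with different parameter values yields distinct per-environment configurations.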
Your company's cloud architecture leans heavily on Lambda functions to process e-commerce transactions. The functions must scale automatically and remain unaffected by the state of previous invocations. Currently, a function processes payments by using an external payment gateway and it must reliably handle transactions during peak hours. To ensure that transaction handling by Lambda functions adheres to a stateless design, what should be done?
Configure the Lambda functions to process transactions asynchronously using an internal queue.
Use environment variables to store transaction state between invocations.
Store transaction state data in an external database or caching service such as Amazon DynamoDB or Amazon ElastiCache.
Enable automatic scaling on the Lambda function to handle the increased number of transactions.
By storing state externally in a database or caching service, the Lambda functions do not need to retain anything between invocations. This aligns with the stateless design of AWS Lambda, where each invocation handles its request independently of any prior state. Environment variables are intended for static configuration and are unreliable for passing state between invocations, since execution environments can be created or recycled at any time. Enabling automatic scaling or invoking the function asynchronously improves scalability and throughput but does not, by itself, make the application stateless.
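A sketch of a stateless transaction handler, under the assumption that state is persisted to a DynamoDB table; the table object is passed in, so in the real handler it would be `boto3.resource("dynamodb").Table("Transactions")`, while the field names here are illustrative:

```python
import json

def handle_transaction(event, table):
    """Process a payment event without keeping any state in the function
    itself; the transaction record is persisted to an external table
    (e.g. a DynamoDB Table resource supplied by the real handler)."""
    txn = json.loads(event["body"])
    item = {
        "transaction_id": txn["id"],
        "status": "PROCESSED",
        "amount": txn["amount"],
    }
    table.put_item(Item=item)  # state lives in DynamoDB, not the function
    return {"statusCode": 200, "body": json.dumps({"id": txn["id"]})}
```

Because each invocation reads its input from the event and writes its result to the table, any concurrent execution environment can process any transaction.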
An application development team is leveraging a serverless framework provided by AWS to manage and deploy their serverless infrastructure. When the team needs to introduce new updates for testing that will not impact the live environment accessed by users, which action aligns best with this requirement?
Conduct IAM role permission testing to ensure that the updates will not affect user access control.
Deploy the updates to a separate staging environment that replicates the live settings.
Apply the updates in a different, unconnected environment to protect the live environment.
Update the live environment during off-peak hours to minimize the impact on end-users.
The best practice for deploying updates without impacting the live user environment is to deploy the changes to a separate staging environment. This approach provides a testing ground that simulates the live environment but prevents any disruptions or unintended consequences from reaching actual users. Directly updating the live environment can lead to untested or faulty code impacting users, while choosing a completely unrelated environment may not provide accurate testing feedback. Testing IAM role permissions is part of securing the deployments, but it does not replace the need for environment-specific deployment for testing changes.
A developer has deployed a web application that is exhibiting intermittent failures. To effectively monitor application behavior and quickly pinpoint these issues, which service could be used to analyze log data in real time and trigger appropriate responses based on specific log patterns?
Utilize Amazon RDS Performance Insights for log events assessment.
Set up AWS Lambda to poll logs periodically and act on them.
Implement Amazon S3 event notifications to monitor log object changes.
Use Amazon CloudWatch Logs to establish metric filters and alarms.
Amazon CloudWatch Logs is the suitable service for analyzing log data in near real time and acting on particular log patterns, for example by setting up metric filters and alarms. It provides continuous monitoring, storage of, and access to log files, and its alarms can trigger automated responses so that application issues are addressed rapidly. The other options are incorrect because those services are not designed for log analysis or do not support triggering actions based on log events.
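A sketch of creating such a metric filter; the log group, metric, and namespace names are placeholders, and the `put_metric_filter` call requires AWS credentials at runtime:

```python
def build_metric_filter(log_group, pattern, metric_name, namespace):
    """Parameters for CloudWatch Logs put_metric_filter; every occurrence
    of the filter pattern increments the metric by 1."""
    return {
        "logGroupName": log_group,
        "filterName": f"{metric_name}-filter",
        "filterPattern": pattern,
        "metricTransformations": [{
            "metricName": metric_name,
            "metricNamespace": namespace,
            "metricValue": "1",
        }],
    }

def create_error_filter():
    import boto3  # imported lazily; needs AWS credentials when called
    logs = boto3.client("logs")
    params = build_metric_filter("/app/web", '"ERROR"', "AppErrors", "MyApp")
    logs.put_metric_filter(**params)
```

A CloudWatch alarm on the resulting metric can then notify an SNS topic or trigger an automated response when the error count crosses a threshold.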
Which service is designed to provide developers with insights into the performance and operation of their distributed applications, offering capabilities to collect, analyze, and visualize tracing data?
CloudTrail
Inspector
CloudFormation
X-Ray
The service designed for providing insights into the performance and operation of distributed applications is X-Ray. It allows the collection, analysis, and visualization of tracing data, which is crucial for developers to understand and optimize their applications. CloudTrail, on the other hand, focuses on logging and auditing AWS account activity. Inspector assesses applications for vulnerabilities and deviations from best practices. CloudFormation automates infrastructure provisioning, which is unrelated to application performance tracing.
A developer is working on enhancing the security of a serverless infrastructure where user authentication is handled by an OIDC-compliant external identity provider. Upon a user's successful sign-in, the external service issues a token. The developer needs to ensure that this token is validated before allowing access to the serverless function endpoint. Which approach should the developer implement to enforce token validation?
Utilize a Lambda function programmed to evaluate and verify the token before proceeding with the request.
Apply a resource-based policy directly on the function to check for the presence of the token in the request.
Configure a role with specified permissions that authenticates users based on the provided token.
Deploy client-side certificates to secure the endpoint and validate the incoming tokens.
The developer should implement a Lambda authorizer, which is a way to handle custom authorization logic before granting access to the serverless function endpoint. The Lambda authorizer can verify the validity of the token and determine if the request should be allowed or denied. This approach is particularly useful in serverless architectures where application components are loosely coupled, and an external identity provider manages user authentication. This verification is performed within the AWS environment without making a round trip to the external identity provider. On the contrary, IAM roles are for access management within AWS services and resources, not for validating tokens directly. Resource-based policies define permissions for AWS resources, but they do not provide a method for validating bearer tokens. Client-side certificates are used for mutual TLS (mTLS) authentication but do not apply to the scenario involving verification of tokens provided by an external identity provider.
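A sketch of the shape of a token-based Lambda authorizer response; real token validation (signature, issuer, expiry) would check against the provider's published keys, so the prefix check below is only a placeholder:

```python
def build_auth_response(principal_id, effect, method_arn):
    """IAM policy document a Lambda authorizer returns to API Gateway
    after validating (or rejecting) the bearer token."""
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": method_arn,
            }],
        },
    }

def authorizer_handler(event, context=None):
    # Placeholder validation: a real authorizer would verify the JWT
    # signature and claims against the OIDC provider's JWKS.
    token = event.get("authorizationToken", "")
    effect = "Allow" if token.startswith("Bearer ") else "Deny"
    return build_auth_response("user", effect, event["methodArn"])
```

API Gateway caches the returned policy for a configurable TTL, so the authorizer does not run on every request for the same token.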
Your team requires a service to handle feature flags and setting values, which are expected to frequently change. This service must allow you to validate the new configurations against a defined schema and provide mechanisms to deploy them on a scheduled basis or with a gradual rollout. In addition, limiting the percentage of the environment that receives these configuration updates at one time should be possible. Which service should be utilized to achieve this functionality?
Secrets Manager
CodePipeline
AppConfig
Parameter Store
The service that fits the requirements described is designed to enable developers to manage and safely deploy application configurations separate from the application code. It has features for validating the changes against a schema, supporting scheduled updates, and allowing a controlled rollout, which includes the functionality to limit the percentage of an environment targeted by a new configuration. This advanced configuration deployment is not offered by the services primarily focused on secure storage of secrets or on providing basic configuration storage. Moreover, the described service aligns more closely with configuration and feature management rather than with the continuous integration and delivery process, which another service targets.
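A sketch of retrieving feature flags at runtime through the AppConfig Data API; the application, environment, and profile identifiers are placeholders, and the flag-document shape is an illustrative assumption:

```python
import json

def parse_feature_flags(config_document):
    """Parse a feature-flag configuration document into a {flag: enabled}
    map; the document shape here is an illustrative assumption."""
    flags = json.loads(config_document)
    return {name: bool(attrs.get("enabled", False))
            for name, attrs in flags.items()}

def fetch_flags(app, env, profile):
    """Retrieve the latest deployed configuration from AppConfig."""
    import boto3  # imported lazily; needs AWS credentials when called
    client = boto3.client("appconfigdata")
    session = client.start_configuration_session(
        ApplicationIdentifier=app,
        EnvironmentIdentifier=env,
        ConfigurationProfileIdentifier=profile,
    )
    resp = client.get_latest_configuration(
        ConfigurationToken=session["InitialConfigurationToken"])
    return parse_feature_flags(resp["Configuration"].read().decode())
```

The gradual-rollout and schema-validation behavior lives in the AppConfig deployment strategy and configuration profile, not in the client code: the client simply polls for the latest configuration it has been served.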
Which service is designed to support the management of settings at scale and enable safe deployment of configurations without necessitating code changes?
Secrets Manager
Parameter Store
AppConfig
CodeConfig
The correct service is designed to separate environment-specific configuration from code, allowing settings to be deployed independently of the application. This is a best practice: it promotes consistent configuration across environments and simplifies management. The incorrect services, while they can store parameters or secrets, are not primarily focused on comprehensive configuration management at scale or on safe deployment practices. The last option is not a real AWS service; it is listed to check that the test taker understands the products' purposes rather than merely recognizing AWS product names.
A development team must secure sensitive customer files in cloud-based object storage. The requirements stipulate that the encryption keys used should be under the company's direct control, with an automated process for changing these keys periodically. Which service and configuration would best fulfill these criteria?
Use self-managed keys in Key Management Service set to automatically rotate for object storage server-side encryption
Adopt a hardware security module service for key storage and institute a manual rotation process
Enable a managed key rotation service within the platform's cloud object storage
Engage a private certificate authority to apply server-side encryption policies to the cloud storage
The appropriate service for creating and administering encryption keys in the cloud is Key Management Service (KMS), which lets you create customer managed keys and configure them for automatic rotation. Using these keys for S3 server-side encryption (SSE-KMS) satisfies both encryption at rest and the company's requirement for regular, automated key rotation under its own control. Relying on the platform's fully managed keys would preclude direct control over the keys and their rotation. Certificate Manager's primary use case is issuing certificates for TLS, not storage encryption, while CloudHSM is tailored to specific compliance requirements and does not natively manage automatic key rotation.
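A sketch of the two pieces involved: enabling automatic rotation on a customer managed KMS key, and uploading an object encrypted with that key. The bucket, object key, and key description are placeholders, and the AWS calls require credentials at runtime:

```python
def build_put_object_params(bucket, key, data, kms_key_id):
    """Parameters for an S3 put_object using a customer managed KMS key
    (bucket name and key id here are placeholders)."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": data,
        "ServerSideEncryption": "aws:kms",
        "SSEKMSKeyId": kms_key_id,
    }

def setup_key_with_rotation():
    import boto3  # imported lazily; needs AWS credentials when called
    kms = boto3.client("kms")
    key_id = kms.create_key(
        Description="customer-files")["KeyMetadata"]["KeyId"]
    kms.enable_key_rotation(KeyId=key_id)  # automatic periodic rotation
    return key_id
```

With SSE-KMS, S3 asks KMS for data keys at write time, so rotating the KMS key requires no re-encryption of existing objects by the application.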
When architecting a table in Amazon DynamoDB to catalog customer interactions that occur at various points in time, with the necessity to retrieve entries based on individual customer and specific date ranges, what key schema should the developer implement to support this access pattern efficiently?
Utilize only the time parameter of the interactions as the table's partition key without considering customer-specific sorting.
Implement a single unique identifier for all interactions without additional keys.
Design the table with a partition element for individual customer reference and a sort element capturing the time aspect of interactions.
Rely solely on the customer reference for the table's partition key without incorporating a sort element.
The developer should employ a composite primary key with two elements: a partition element that represents the individual customer, and a sort element that aligns with the chronological aspect of the interactions. This approach enables the table to isolate data per customer while allowing sorted and range-based retrieval within the confines of each customer's data set. A design with a singular identifier disregards the query needs by customer and date. A primary key based solely on the customer element would fail to account for multiple interactions and lead to poor distribution of data, whereas focusing on chronological elements alone could lead to similarly skewed data distribution due to concurrent interactions across customers.
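A sketch of the query this composite key enables, assuming a table keyed on a `customer_id` partition key and an `interaction_time` sort key (attribute names are illustrative); the sort key supports range conditions such as BETWEEN:

```python
def build_interaction_query(customer_id, start_iso, end_iso):
    """DynamoDB Query parameters for a table keyed on
    (customer_id, interaction_time): one customer's partition is read,
    restricted to a date range on the sort key."""
    return {
        "KeyConditionExpression":
            "customer_id = :cid AND interaction_time BETWEEN :start AND :end",
        "ExpressionAttributeValues": {
            ":cid": customer_id,
            ":start": start_iso,
            ":end": end_iso,
        },
    }
```

Passing these parameters to a DynamoDB client's `query` call (for example with ISO 8601 timestamps, which sort lexicographically) returns only the matching customer's interactions within the range, without scanning the table.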