AWS Certified Developer Associate Practice Test (DVA-C02)
Use the form below to configure your AWS Certified Developer Associate Practice Test (DVA-C02). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

AWS Certified Developer Associate DVA-C02 Information
AWS Certified Developer - Associate showcases knowledge of core AWS services, their uses, and basic AWS architecture best practices, as well as proficiency in developing, deploying, and debugging cloud-based applications on AWS. Preparing for and attaining this certification gives certified individuals more confidence and credibility. Organizations with AWS Certified Developers can be assured they have the right talent to gain a competitive advantage and ensure stakeholder and customer satisfaction.
The AWS Certified Developer - Associate (DVA-C02) exam is intended for individuals who perform a developer role. The exam validates a candidate’s ability to demonstrate proficiency in developing, testing, deploying, and debugging AWS Cloud-based applications. The exam also validates a candidate’s ability to complete the following tasks:
- Develop and optimize applications on AWS.
- Package and deploy by using continuous integration and continuous delivery (CI/CD) workflows.
- Secure application code and data.
- Identify and resolve application issues.

Free AWS Certified Developer Associate DVA-C02 Practice Test
- 20 Questions
- Unlimited time
- Domains: Development with AWS Services, Security, Deployment, Troubleshooting and Optimization
Your company runs an online retail application built from multiple microservices. Customers now report check-out failures. You enable AWS X-Ray and inspect the service map, which shows a spike in faults on the edge between the payment-processing service and the inventory service. What does this spike most likely indicate, and what should be your first troubleshooting action?
It probably reflects a transient network glitch after deployment; monitor the system for a day or two before taking action.
It means the inventory service is under-provisioned; immediately add more compute and memory to that service.
Tracing overhead is slowing requests, so disabling X-Ray should help restore normal checkout performance.
The spike shows repeated errors during interactions between the two services; examine their traces and CloudWatch logs to find the root cause.
Answer Description
A spike in faults on an X-Ray service map indicates a high rate of errors when one service calls another. The best first step is to drill into that edge or node and review the related traces, CloudWatch logs, and performance metrics to pinpoint exceptions, timeouts, or misconfigurations causing the failures. The other responses either assume capacity problems, postpone investigation, or disable the tracing tool that revealed the issue.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI-generated content may display inaccurate information; always double-check anything important.
What is a service map in cloud application monitoring?
How do you analyze error logs and performance metrics to identify root causes?
What tools can you use to trace issues in microservices architectures?
Your application frequently reads user profile data that can experience sudden spikes in traffic, while profile updates occur only occasionally. To reduce database load and latency you plan to add a cache. You also want to minimize operating costs and ensure profiles are not served stale for long periods. Which caching approach best satisfies these requirements?
Employ a lazy-loading caching strategy with an appropriate time-to-live (TTL) to maintain freshness
Avoid caching and continue to serve all reads directly from the database
Implement write-through caching to keep the cache synchronized with every database update
Use a read-through caching layer so the cache automatically fetches data on a miss
Answer Description
Use a lazy-loading (cache-aside) strategy combined with an appropriate TTL. The cache is populated only when a profile is requested, so seldom-read items do not consume memory or add to cache costs. The TTL automatically expires entries after a set interval, ensuring the cache is periodically refreshed and limiting the window during which stale data could be served. Write-through would duplicate every infrequent update into the cache, adding latency and storing data that may never be read. Read-through offers similar lazy population but requires additional middleware without providing a cost or freshness advantage in this scenario. Not caching would leave the database exposed to the bursty read load you are trying to avoid.
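The cache-aside pattern with a TTL can be sketched in a few lines. Here an in-process dict stands in for a real cache such as ElastiCache, and all names are illustrative:

```python
import time

class TTLCache:
    """Toy cache-aside store: entries expire after ttl_seconds."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry_timestamp)

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[1] > time.time():
            return entry[0]           # cache hit, still fresh
        self.store.pop(key, None)     # expired or never cached
        return None

    def put(self, key, value):
        self.store[key] = (value, time.time() + self.ttl)

def get_profile(cache, db_fetch, user_id):
    """Lazy loading: the database is queried only on a cache miss."""
    profile = cache.get(user_id)
    if profile is None:
        profile = db_fetch(user_id)   # single database read
        cache.put(user_id, profile)
    return profile
```

On the first read the database is queried; subsequent reads within the TTL are served from memory, and after expiry the next read refreshes the entry automatically.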
What is lazy loading in caching?
What is TTL in caching, and why is it important?
How does lazy loading with TTL compare to write-through caching?
When you use the AWS SDK for Python (Boto3) to perform an operation on an Amazon S3 bucket that does not exist, which botocore exception class is actually raised by the SDK (before you inspect the error code in the response)?
BucketNotFound
ClientError
S3BucketMissing
NoSuchBucket
Answer Description
Boto3 wraps every error returned by an AWS service, including the Amazon S3 NoSuchBucket error code, in a single Python exception class: botocore.exceptions.ClientError. To determine whether the underlying service error was NoSuchBucket, developers catch ClientError and then check e.response['Error']['Code'] for the string "NoSuchBucket". Names such as BucketNotFound or S3BucketMissing are not defined in the SDK, and although 'NoSuchBucket' appears as the error code, it is not the name of a Python exception class.
What is botocore.exceptions.ClientError?
How do you check the error code when ClientError is raised?
What is the difference between error codes and exception classes in Boto3?
A developer is building a new mobile application and needs to implement a feature that allows users to sign up and sign in with their email address and a password. After a successful sign-in, the application must receive a JSON Web Token (JWT) to manage the user's session. Which AWS service should the developer use to meet these specific authentication requirements?
Amazon Cognito Identity Pools
Amazon Cognito User Pools
AWS Identity and Access Management (IAM) Users
AWS Security Token Service (STS)
Answer Description
Amazon Cognito User Pools provide a fully managed user directory that enables developers to add sign-up and sign-in functionality to web and mobile applications. Upon successful authentication, a user pool issues JSON Web Tokens (JWTs) that the application can use to manage user sessions and authorize API calls. In contrast, Amazon Cognito Identity Pools are used to grant users temporary access to AWS services, and they do not directly provide sign-up/sign-in user directory functionality. AWS IAM Users are meant for managing access to the AWS account for administrators and developers, not for application end-users. The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials but does not provide a user directory or sign-in UI.
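A user pool issues ID, access, and refresh tokens; the ID and access tokens are JWTs whose claims the application reads after verifying the signature. A sketch of decoding the payload segment, using an illustrative, unsigned token built in place:

```python
import base64
import json

def decode_jwt_payload(token):
    """Decode the middle (payload) segment of a JWT without verifying it.
    Real applications must verify the signature against the user pool's
    JSON Web Key Set (JWKS) before trusting any claim."""
    payload_b64 = token.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Build an illustrative token: header . payload . (fake) signature.
claims = {"sub": "user-123", "email": "ana@example.com", "token_use": "id"}
fake_token = ".".join([
    base64.urlsafe_b64encode(b'{"alg":"RS256"}').decode().rstrip("="),
    base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("="),
    "fake-signature",
])
```

In a Cognito-issued ID token, claims such as `sub`, `email`, and `token_use` carry the authenticated user's identity for session management.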
What are the differences between User Pools and Identity Pools in Amazon Cognito?
How does Amazon Cognito User Pools authenticate users?
What is the role of AWS IAM or Security Token Service in relation to Cognito Identity Pools?
A company is rolling out a new application on AWS that will handle sensitive customer information. The security team mandates that all customer data must be encrypted not only when stored (at rest) but also as it moves between services (in transit). Which of the following solutions should the development team implement to ensure compliance with the security team's mandate?
Use Amazon S3 with Server-Side Encryption (SSE) and leverage HTTPS for data in transit.
Encrypt sensitive database columns at rest and ensure IAM policies for database access are in place.
Apply strict IAM policies to control access to data but rely on the application to handle encryption.
Use built-in database encryption at rest and rely on network ACLs for data in transit.
Answer Description
Enforcing encryption at rest can be achieved by storing data using Amazon S3 with Server-Side Encryption (SSE) enabled, where Amazon handles the entire encryption process transparently. The data is decrypted for you when you access it. For encryption in transit, implementing HTTPS connections ensures data transferred between the client and servers is secured. While IAM policies do manage access, they do not directly encrypt data. Additionally, database column encryption only addresses data at rest and doesn't provide encryption in transit.
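The in-transit half of the mandate can also be enforced at the bucket level with a policy that denies any request not sent over HTTPS. A sketch, with an illustrative bucket name (note that S3 has applied SSE-S3 encryption at rest by default since January 2023):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-customer-data",
        "arn:aws:s3:::example-customer-data/*"
      ],
      "Condition": {"Bool": {"aws:SecureTransport": "false"}}
    }
  ]
}
```

With this policy attached, any plain-HTTP request to the bucket is rejected regardless of the caller's IAM permissions.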
What is Amazon S3 Server-Side Encryption (SSE)?
Why is HTTPS important for securing data in transit?
How do IAM policies differ from encryption methods for securing data?
Your team is working on a microservices architecture that necessitates the ability to rigorously validate the environment's configuration data before it is pushed to production to avoid potential disruptions caused by misconfiguration. Which service should they implement to facilitate the pre-deployment validation of configurations and ensure safe deployment?
AWS Systems Manager Parameter Store
AWS Secrets Manager
AWS AppConfig
AWS CodeDeploy
Answer Description
AWS AppConfig is purpose-built for this scenario: it lets developers create, manage, and quickly deploy application configurations, validate configuration data against a defined schema (or a Lambda validator) before deployment, and roll configurations out with controlled deployment strategies that limit the blast radius of any misconfiguration, all without redeploying the application itself. AWS Secrets Manager's primary use case is to securely store and manage access to secrets; it does not offer configuration validation. AWS Systems Manager Parameter Store provides hierarchical storage for configuration data and secrets but does not support the same validation and controlled-deployment options as AppConfig. AWS CodeDeploy automates application deployments but is not specialized in configuration management and offers no validation features focused on configuration data.
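An AppConfig JSON Schema validator (AppConfig supports JSON Schema version 4) rejects a deployment whose configuration fails the schema. A sketch with illustrative field names:

```json
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "object",
  "properties": {
    "featureFlagEnabled": {"type": "boolean"},
    "maxRetries": {"type": "integer", "minimum": 0, "maximum": 10}
  },
  "required": ["featureFlagEnabled", "maxRetries"],
  "additionalProperties": false
}
```

A deployment of a configuration missing `maxRetries`, or containing an unexpected key, fails validation before it ever reaches the application.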
How does AWS AppConfig validate configurations before deployment?
What is the difference between AWS AppConfig and AWS Systems Manager Parameter Store?
What are controlled deployment strategies in AWS AppConfig?
A company needs to update its web application running on AWS without disrupting the user experience. Which deployment strategy should they employ to ensure that only a portion of users are directed to the new version initially, thereby allowing for performance and stability monitoring before the new version is fully deployed?
All-at-once deployment
Blue/green deployment
Rolling deployment
Canary deployment
Answer Description
The correct answer is a canary deployment strategy. A canary deployment releases the new version of an application to a small subset of users before fully rolling it out, which allows developers to monitor performance and stability with live traffic and reduces the risk of an issue affecting all users. In contrast, a blue/green deployment swaps the entire environment at once, which does not meet the requirement for gradual exposure. A rolling deployment updates instances in batches but does not provide the selective traffic routing described. An all-at-once deployment updates every instance simultaneously and allows no gradual exposure at all.
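For a Lambda-backed application, AWS SAM can configure CodeDeploy to run the canary automatically. A sketch, with illustrative function and alarm names:

```yaml
Resources:
  CheckoutFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      CodeUri: ./src
      AutoPublishAlias: live            # publish a new version on each deploy
      DeploymentPreference:
        Type: Canary10Percent5Minutes   # 10% of traffic for 5 minutes, then 100%
        Alarms:
          - !Ref CheckoutErrorsAlarm    # roll back automatically if this alarm fires
```

If the CloudWatch alarm fires during the canary window, CodeDeploy shifts traffic back to the previous version without operator intervention.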
What is a canary deployment in software development?
How does canary deployment differ from blue/green deployment?
What monitoring tools are commonly used during a canary deployment on AWS?
Your development team is tasked with optimizing a backend AWS Lambda function for an online sports trivia game. The Lambda function will receive player responses through Amazon API Gateway and needs to process them efficiently during high-traffic events such as live sports matches. Considering the need to adhere to stateless design principles, which method should you employ to temporarily store session-specific data needed within a single game?
Store session-specific data in Amazon ElastiCache for Redis.
Persist session-specific data in Amazon DynamoDB with short expiry TTL.
Write session-specific data to Amazon Elastic File System (EFS) mounted to the Lambda function.
Maintain temporary game state using Lambda environment variables.
Answer Description
Using Amazon ElastiCache for Redis is the best option for temporary, session-specific data storage for a stateless Lambda function. Redis provides fast, in-memory data storage and retrieval, which suits the ephemeral nature of session data within a single game without persisting state in the Lambda function itself. Amazon DynamoDB, while durable and scalable, serves reads at single-digit-millisecond latency and is optimized for long-lived data, making it less suitable when microsecond-level performance is required. Lambda environment variables are intended for static configuration rather than per-invocation data, and mounting Amazon EFS would create cross-invocation persistence that violates stateless principles and could become a scalability bottleneck.
Why is ElastiCache for Redis suitable for temporary session-specific data storage?
Why is DynamoDB less suitable for storing session-specific data in this context?
What are the limitations of using Lambda environment variables for session-specific data?
A developer has configured an Amazon API Gateway to trigger an AWS Lambda function. After a recent deployment, API logs indicate that API Gateway is failing to invoke the function due to a misconfigured integration. The error suggests that while the API Gateway endpoint exists, it cannot get a valid response from the upstream Lambda function it's configured to proxy to. Which HTTP status code is the client most likely receiving?
400 Bad Request
404 Not Found
502 Bad Gateway
500 Internal Server Error
Answer Description
A 502 Bad Gateway error indicates that the server, while acting as a gateway or proxy, received an invalid response from the upstream server. In this scenario, API Gateway is the proxy, and the Lambda function is the upstream server. A misconfigured integration, such as incorrect permissions or if the function doesn't exist, prevents API Gateway from getting a valid response, resulting in a 502 error. A 404 Not Found error would typically mean the API Gateway endpoint itself could not be found, not that the backend integration failed. A 400 Bad Request would indicate a client-side error, like a malformed request. A 500 Internal Server Error is a generic server error, but 502 is more specific to gateway or proxy problems.
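With Lambda proxy integration, a 502 also occurs when the function runs but returns a malformed response. API Gateway expects the specific shape sketched below; returning anything else (a bare string, or a body that is not a string) produces 502 Bad Gateway:

```python
import json

def lambda_handler(event, context):
    """Return the response shape required by API Gateway's Lambda proxy
    integration: statusCode (int), optional headers (dict), and body as
    a JSON-serialized *string*, not a dict."""
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "ok"}),  # body must be a string
    }
```

Checking the function's CloudWatch logs for its raw return value is a quick way to confirm whether a 502 comes from a malformed response rather than a permissions problem.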
What are some common reasons a 502 Bad Gateway error occurs in API Gateway?
How does API Gateway act as a proxy for a Lambda function?
How can you troubleshoot a 502 error in Amazon API Gateway?
A team wants to automate the deployment of their web service across several stages. The source code, once integrated, must be automatically transferred to a testing stage. Following successful verification by the quality assurance team, an explicit sign-off is required before the changes are delivered to the end users. Which approach within their CI/CD process should the team implement to fulfill this requirement?
Incorporate a step for manual approvals following the automated testing within the pipeline
Establish a step for manual approvals before initiating any tests
Utilize infrastructure as code management services to integrate a manual approval requirement
Rely on automated testing to authorize the transition to the user-facing environment
Answer Description
To enforce a pause for human sign-off after automated tests, the team should add a manual approval action immediately after the test stage in their pipeline (for example, an Approval action in AWS CodePipeline). This allows the pipeline to progress automatically to testing, then waits for a named approver to review and approve before the production deployment proceeds. Adding the approval before tests would block automated verification, while relying solely on automated tests removes the required human gate. Infrastructure-as-code tools alone do not provide the approval gate without a dedicated manual approval step in the pipeline.
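The gate can be declared directly in the pipeline definition. A CloudFormation sketch of a CodePipeline stage placed between the test and production stages, with illustrative stage, action, and topic names:

```yaml
- Name: ApproveRelease
  Actions:
    - Name: QASignOff
      ActionTypeId:
        Category: Approval
        Owner: AWS
        Provider: Manual
        Version: "1"
      Configuration:
        NotificationArn: !Ref ReleaseApprovalTopic   # notify approvers via SNS
        CustomData: "QA verified; approve to deploy to production."
      RunOrder: 1
```

The pipeline halts at this action until a user with the required IAM permission approves or rejects it in the console or via the API.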
What is a manual approval step in a CI/CD pipeline?
How does automated testing differ from manual approval in CI/CD workflows?
What AWS services support manual approval steps in CI/CD pipelines?
A company is developing a serverless payment API that triggers an AWS Lambda function to process credit-card charges. Because of intermittent network failures, the frontend may automatically retry failed HTTP requests. Which mechanism should the developer add to each API request so that a customer is never charged more than once if a retry occurs?
Record the timestamp for each operation and only process requests if subsequent submissions occur after a specific time interval.
Generate a unique identifier for each operation, allowing the service to detect and ignore retries of transactions that have already been executed.
Track the status codes from previous submissions and use them to determine if the operation should be retried.
Integrate a distributed tracing service to handle de-duplication of transaction requests.
Answer Description
The correct approach to ensure idempotent operations is to issue a unique transaction identifier (often called an idempotency key) for each financial operation. The service stores the first result associated with that identifier and returns the same response for subsequent requests that present the same key, preventing duplicate charges. Timestamps do not guarantee idempotency because retries can reuse the same timestamp or suffer from clock skew. Checking previous HTTP status codes is unreliable because network issues can obscure responses, and status codes are not part of the retried request. AWS X-Ray provides distributed tracing and observability; it does not implement request de-duplication or idempotency.
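A minimal sketch of server-side idempotency-key handling, with a dict standing in for a durable store such as DynamoDB (all names are illustrative):

```python
class PaymentService:
    """Stores the first result per idempotency key and replays it on retries."""
    def __init__(self):
        self.results = {}   # idempotency_key -> stored response
        self.charges = []   # side effects actually performed

    def charge(self, idempotency_key, amount):
        if idempotency_key in self.results:
            # Retry detected: return the original response, charge nothing.
            return self.results[idempotency_key]
        self.charges.append(amount)          # perform the charge exactly once
        response = {"status": "charged", "amount": amount}
        self.results[idempotency_key] = response
        return response
```

A production implementation needs an atomic put-if-absent (for example, a DynamoDB conditional write) so concurrent retries cannot both pass the lookup, and should expire old keys with a TTL.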
How does a unique transaction identifier ensure idempotency in financial operations?
Why are timestamps not sufficient for preventing duplicate transactions?
What is idempotency, and why is it important in financial systems?
A developer has built a web application that retrieves product details from an Amazon DynamoDB table. During promotional events the application experiences spikes in read requests, which could exceed the table's provisioned read-capacity units and throttle requests. The developer wants to add a fully managed caching layer that is purpose-built for DynamoDB, delivers microsecond latency, and requires only minimal application changes. Which AWS service should the developer use?
Amazon DynamoDB Accelerator (DAX)
Amazon RDS Read Replica
AWS DataSync
Amazon ElastiCache
Answer Description
Amazon DynamoDB Accelerator (DAX) is a fully managed, in-memory cache that is API-compatible with DynamoDB. Because it sits transparently between the application and DynamoDB, the developer can redirect existing SDK calls to the DAX endpoint with minimal code changes. DAX automatically handles read-through and write-through caching, absorbs bursty read traffic, and can reduce response times from milliseconds to microseconds, preventing throttling of the underlying table. ElastiCache is a general-purpose in-memory store, but it requires additional application logic for cache population and invalidation. RDS Read Replicas and AWS DataSync do not provide a managed cache for DynamoDB reads.
What makes Amazon DynamoDB Accelerator (DAX) unique compared to ElastiCache?
How does DAX handle spikes in read requests during high-traffic events?
What are read-through and write-through caching in DAX?
Your company is preparing to deploy a web application using Amazon Elastic Container Service (ECS). The application is containerized, and you are responsible for deploying new image updates with minimal downtime. Which service should you use to store and manage your Docker container images to seamlessly integrate with ECS and track the image versions?
Amazon Elastic Block Store (EBS)
AWS Lambda
Amazon Elastic Container Registry (ECR)
Amazon Simple Storage Service (S3)
Answer Description
Amazon Elastic Container Registry (ECR) is a fully managed Docker container registry that makes it easy to store, manage, and deploy Docker container images. It's integrated with Amazon ECS, allowing for seamless deployment of images to containers in a secure, scalable, and reliable manner. ECR also provides image scanning to identify software vulnerabilities, and it supports image versioning through tags, making it the optimal choice for managing container images on AWS.
Amazon Elastic Block Store (EBS) is primarily for block storage, not for storing container images. AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers, and it does not store container images. Amazon Simple Storage Service (S3) can store any type of object, but it's not the most integrated solution for managing Docker container images for use with ECS.
What is Amazon Elastic Container Registry (ECR) and how does it integrate with ECS?
How does image versioning work in Amazon ECR?
What is the role of image scanning in Amazon ECR?
During the development stages of a serverless application, a developer is looking to test the request and response behavior as if the back-end was already implemented. What service feature should the developer use to create this simulated environment to validate the front-end integration?
Employ AWS Step Functions to replicate the back-end logic
Use AWS CloudFormation to provision mock integration resources
Leverage Amazon EventBridge to schedule mock events for the integration
Configure Amazon API Gateway mock integrations for simulating the necessary endpoints
Answer Description
Amazon API Gateway mock integrations enable developers to set up the entire API interface without the backend services being available. This feature allows the creation of integration responses and error cases to test how the front end handles these situations, which is vital during development for ensuring that application logic properly handles requests and responses. AWS Step Functions are designed for orchestrating microservices, applications, and workflows, not for mocking API interfaces. CloudFormation, while used for infrastructure as code, does not provide a direct way to mock service responses. Amazon EventBridge schedules and routes events but is not intended for simulating backend responses.
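In an OpenAPI definition imported into API Gateway, a mock integration is declared with the x-amazon-apigateway-integration extension; the request template supplies the statusCode that selects the integration response. A sketch with an illustrative path and payload:

```yaml
/products:
  get:
    responses:
      "200":
        description: Mocked product list
    x-amazon-apigateway-integration:
      type: mock
      requestTemplates:
        application/json: '{"statusCode": 200}'
      responses:
        default:
          statusCode: "200"
          responseTemplates:
            application/json: '[{"id": 1, "name": "sample product"}]'
```

The front end can be developed against this endpoint exactly as if the real backend existed, and the response template can later be swapped for a real integration without changing the client.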
How do mock integrations in Amazon API Gateway work?
Can Amazon API Gateway mock integrations simulate complex data responses?
What are the benefits of mock integrations during development?
A software engineer is tasked with enabling an application, hosted on a virtual server, to interact with a cloud object storage service for uploading and downloading data. The engineer needs to implement a secure method of authentication that obviates the need to hardcode or manually input long-term credentials. What is the most appropriate strategy to achieve this while adhering to security best practices?
Manually enter the user credentials for the service at the start of each application session on the virtual server.
Assign a role to the virtual server that grants appropriate permissions to interact with the object storage service.
Hardcode the service user's access credentials in the source code of the application on the virtual server.
Save the service user's access credentials in a text file on the root directory of the virtual server for the application to use.
Answer Description
Assigning a role to the virtual server that specifies the permissions for the object storage service is the most secure and maintainable approach, allowing the application to access resources without embedding or manually entering credentials. This method leverages the virtual server's ability to use temporary credentials that are automatically rotated, avoiding the risks associated with hardcoding or exposing long-term credentials.
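On AWS this is an IAM role attached to the EC2 instance through an instance profile: the trust policy lets EC2 assume the role, and a permissions policy grants the S3 access. A CloudFormation-style sketch of the role's key properties (bucket and policy names are illustrative):

```yaml
# Trust policy: allows the EC2 service to assume this role
AssumeRolePolicyDocument:
  Version: "2012-10-17"
  Statement:
    - Effect: Allow
      Principal: {Service: ec2.amazonaws.com}
      Action: sts:AssumeRole
# Permissions policy: scoped object access for the application
Policies:
  - PolicyName: AppObjectAccess
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action: [s3:GetObject, s3:PutObject]
          Resource: arn:aws:s3:::example-app-bucket/*
```

The SDK on the instance picks up the role's temporary credentials automatically from the instance metadata service, so no credentials appear in code or configuration files.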
What is an IAM role in AWS?
How does assigning a role to a virtual server improve security?
What is the difference between an IAM user and an IAM role?
A developer is building a serverless application using AWS Lambda functions and Amazon API Gateway. To streamline deployment, the developer needs a tool that uses a declarative template to define the application's resources, including functions, APIs, and event source mappings. The tool must also support managing separate configurations for different environments like 'dev' and 'prod'. Which AWS tool is the most appropriate choice?
AWS CodeDeploy
AWS Serverless Application Model (SAM)
AWS Cloud Development Kit (CDK)
AWS CloudFormation
Answer Description
The correct option is AWS Serverless Application Model (SAM). AWS SAM is an open-source framework specifically designed to streamline the development and deployment of serverless applications. It uses a simplified YAML or JSON template syntax, which is an extension of AWS CloudFormation, to define serverless resources like functions, APIs, and event source mappings. SAM also has built-in features for managing different deployment environments, directly meeting the user's requirements.
- AWS CloudFormation is the underlying Infrastructure as Code (IaC) service that SAM uses. While you can define the entire application using native CloudFormation, its syntax is more verbose for serverless resources. SAM provides a specialized, higher-level abstraction that simplifies the template, making it the more appropriate and efficient tool for this specific scenario.
- The AWS Cloud Development Kit (CDK) is an IaC framework that uses familiar programming languages (like Python or TypeScript) to define cloud resources. The question specifically asks for a tool that uses a declarative 'template format' (like YAML/JSON), making SAM the better fit over the programmatic approach of the CDK.
- AWS CodeDeploy is a service that automates application deployments to various compute services, including AWS Lambda. It is focused on the deployment process itself, such as managing traffic-shifting strategies like canary and blue/green, rather than defining the infrastructure resources in a template.
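A minimal SAM template illustrating the pattern, with an environment parameter for per-stage configuration (resource and parameter names are illustrative; per-environment deploy settings can also live in samconfig.toml and be selected with `sam deploy --config-env prod`):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31   # marks this as a SAM template

Parameters:
  Environment:
    Type: String
    AllowedValues: [dev, prod]
    Default: dev

Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      CodeUri: ./src
      Environment:
        Variables:
          STAGE: !Ref Environment
      Events:
        GetItems:                       # creates the API Gateway route and mapping
          Type: Api
          Properties:
            Path: /items
            Method: get
```

The single `AWS::Serverless::Function` resource expands, via the Transform, into the CloudFormation resources for the function, its IAM role, the API, and the event source mapping.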
What exactly is the AWS Serverless Application Model (SAM)?
How does SAM handle different environment-specific configurations?
How does SAM compare to Infrastructure as Code services like AWS CloudFormation?
A software development team is implementing a set of microservices that will be exposed to their clients through endpoints. They are looking to adopt best practices for testing newly developed features without affecting the live production environment accessed by their customers. What is the most effective method to achieve this within the service that provides the endpoints?
Create a new deployment stage with specific configurations for the microservices that are separate from production settings
Reuse the production endpoints for testing by applying request filtering based on a custom header to differentiate traffic
Generate a clone of the existing production environment and make code changes directly there to validate new features
Implement feature toggles in the production codebase to switch between production and development configurations
Answer Description
The most effective method involves setting up a new deployment stage within the service that provides the endpoints, which in this context is Amazon API Gateway. Stages enable you to create independent environments for development and production. Setting up stage variables, specific authentication methods, and throttling rules for the development environment ensures that testing does not interfere with the production environment. This approach allows the team to test new code changes under conditions that closely mimic the live setup without risking the integrity and stability of the production systems.
What are deployment stages in Amazon API Gateway?
How do stage variables work in Amazon API Gateway?
What are the advantages of using separate deployment stages for development and production?
Which service should be used if a developer needs to ensure that code changes are automatically compiled, tested, and deployed to a staging environment upon committing updates to the version control system?
CodeDeploy
Simple Storage Service (S3)
CodePipeline
CodeBuild
Answer Description
The correct answer is AWS CodePipeline because it is specifically designed to orchestrate and automate the steps required to release software changes frequently and reliably, including the scenarios in which code is automatically processed upon updates. While AWS CodeBuild and AWS CodeDeploy are part of the CI/CD process, they do not offer the end-to-end automation of code updates from version control systems to deployment as a standalone service. Amazon S3 is a storage service and does not handle automation of build, test, and deployment workflows.
Ask Bash
What is AWS CodePipeline?
How does CodePipeline integrate with version control systems?
How is CodePipeline different from CodeBuild and CodeDeploy?
A developer needs to update an important backend service used for processing data. This service is critical, and any downtime must be avoided. The developer wants to transition the traffic gradually to observe the new behavior without disrupting the current users. What deployment technique should be applied to achieve this zero-downtime update and facilitate observability and quick rollback if issues arise?
Introduce a new version of the service and leverage the progressive traffic shifting feature to incrementally route user requests, monitoring the update's stability before redirecting all traffic.
Directly overwrite the current service code with the new update, relying on the underlying system to manage the transition seamlessly.
Configure a brand-new endpoint in the managed API service, directing a copy of incoming requests to the updated backend version for testing.
Modify the service's environment properties to point to the new version, which will transparently divert the traffic.
Answer Description
When updating a critical backend service such as a serverless function, it is important to have a strategy that shifts traffic gradually, enables monitoring, and allows quick rollback without affecting end users. Publishing a new version of the function and using its alias routing configuration to control the percentage of traffic sent to the new version is an effective canary deployment strategy. It lets the developer observe the new version's behavior and revert to the old version if needed, so any issues with the new version never impact all users at once. Direct deployment without traffic control, updating configuration variables, or mirroring requests to a new endpoint would not provide the same level of control and safety for this operation.
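For Lambda specifically, the "pointer" the answer refers to is an alias with weighted routing. The sketch below assumes invented function, alias, and version names; the real `update_alias` calls are hedged in comments because they require an AWS environment.

```python
# Sketch (assumed names) of Lambda canary traffic shifting: an alias
# such as "live" keeps most traffic on the stable version while routing
# a small, growing fraction to the new one via AdditionalVersionWeights.

def routing_config(new_version: str, weight: float) -> dict:
    """Weight is the fraction of invocations sent to new_version; the
    remainder stays on the alias's primary (stable) version."""
    if not 0.0 <= weight < 1.0:
        raise ValueError("weight must be in [0.0, 1.0)")
    return {"AdditionalVersionWeights": {new_version: weight}}

# Gradually shift 10% then 50%, checking metrics between steps.
for w in (0.1, 0.5):
    cfg = routing_config("5", w)  # "5" = hypothetical new version number
    # boto3.client("lambda").update_alias(
    #     FunctionName="payments", Name="live",
    #     FunctionVersion="4",          # stable version stays primary
    #     RoutingConfig=cfg)
    # ...watch CloudWatch alarms here; roll back instantly by clearing
    # RoutingConfig so 100% of traffic returns to version "4".
    print(cfg)

# Final cutover: make the new version the alias's primary version.
#   update_alias(FunctionName="payments", Name="live",
#                FunctionVersion="5",
#                RoutingConfig={"AdditionalVersionWeights": {}})
```

Because callers invoke the alias ARN rather than a specific version, the shift and any rollback are invisible to clients, which is what makes this a zero-downtime technique.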
Ask Bash
What is a canary deployment?
How does traffic shifting work in cloud deployments?
What are the key benefits of using a canary deployment strategy?
A developer needs to ensure that a serverless function, which is responsible for processing incoming messages, will redirect failed executions to a separate storage service for later analysis. Which feature should the developer configure to BEST adhere to this requirement?
Configure a dead-letter queue to save messages that the function fails to process for subsequent analysis.
Alter the function's retry settings to indefinitely attempt reprocessing the problematic message until successful.
Boost the serverless function's available compute resources to prevent message processing failures.
Set up an automated re-publishing mechanism through another notification service to keep submitting the message until it is processed without errors.
Answer Description
Enabling a dead-letter queue (DLQ) for the Lambda function is the best proactive measure for handling failed executions. The DLQ pattern is a well-established way to isolate messages that a Lambda function fails to process: they are moved to a separate queue so they do not interfere with the continuous flow of new messages, and the context of each failure is preserved for later analysis and debugging. Retrying indefinitely within the function can cause repeated failures without segregating the problematic messages, and increasing compute resources may not address the root cause of the processing failures.
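Configuring the DLQ amounts to one setting on the function. The sketch below uses invented function and queue names; the `update_function_configuration` call is shown as a comment since it needs AWS credentials.

```python
# Hypothetical sketch: attaching an SQS dead-letter queue to a Lambda
# function so asynchronous invocations that still fail after retries
# are sent to the queue instead of being lost. ARNs are placeholders.

def dlq_update_kwargs(function_name: str, queue_arn: str) -> dict:
    """Build kwargs for lambda.update_function_configuration()."""
    return {
        "FunctionName": function_name,
        "DeadLetterConfig": {"TargetArn": queue_arn},
    }

kwargs = dlq_update_kwargs(
    "order-processor",
    "arn:aws:sqs:us-east-1:123456789012:order-processor-dlq",
)

# boto3.client("lambda").update_function_configuration(**kwargs)
# Note: the function's execution role also needs sqs:SendMessage
# permission on the target queue for the redirect to work.
print(kwargs["DeadLetterConfig"]["TargetArn"])
```

Messages landing in the queue can then be inspected or replayed at leisure, which is exactly the "redirect for later analysis" behavior the question asks for.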
Ask Bash
What is a dead-letter queue (DLQ) in AWS Lambda?
How does enabling a DLQ improve reliability in message processing?
Can a dead-letter queue automatically retry failed events?
Neat!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.