Microsoft Azure Developer Associate Practice Test (AZ-204)
Use the form below to configure your Microsoft Azure Developer Associate Practice Test (AZ-204). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft Azure Developer Associate AZ-204 Information
Navigating the AZ-204 Azure Developer Associate Exam
The Microsoft Azure Developer Associate (AZ-204) certification is a crucial credential for cloud developers specializing in the Microsoft Azure ecosystem. The exam is designed for professionals who are responsible for all phases of the development lifecycle, including requirements gathering, design, development, deployment, security, maintenance, performance tuning, and monitoring. Candidates should have 1-2 years of professional development experience, including hands-on experience with Microsoft Azure. The exam validates a developer's proficiency in leveraging Azure's tools, SDKs, and APIs to build and maintain cloud applications and services.
The AZ-204 exam assesses a broad set of skills across five primary domains: developing Azure compute solutions (25-30%); developing for Azure storage (15-20%); implementing Azure security (15-20%); monitoring, troubleshooting, and optimizing Azure solutions (5-10%); and connecting to and consuming Azure services and third-party services (20-25%). The exam itself consists of 40-60 questions and has a duration of about 100 minutes. Question formats vary and include multiple-choice questions, scenario-based items, and drag-and-drop tasks.
The Value of Practice Exams in Preparation
A critical component of a successful study plan for the AZ-204 exam is the use of practice tests. Taking practice exams offers several key benefits that go beyond simply memorizing facts. They help you become familiar with the style, wording, and difficulty of the questions you are likely to encounter on the actual exam. This familiarity can help reduce anxiety and improve time management skills during the test.
Furthermore, practice exams are an excellent tool for self-assessment. They allow you to gauge your readiness, identify areas of weakness in your knowledge, and focus your study efforts accordingly. By reviewing your answers, especially the incorrect ones, you can gain a deeper understanding of how different Azure services work together to solve real-world problems. Many candidates find that simulating exam conditions with timed practice tests helps build the confidence needed to think clearly and methodically under pressure. Microsoft itself provides a practice assessment to help candidates prepare and fill knowledge gaps, increasing the likelihood of passing the exam.

Free Microsoft Azure Developer Associate AZ-204 Practice Test
- 20 Questions
- Unlimited time
- Develop Azure compute solutions
- Develop for Azure storage
- Implement Azure security
- Monitor and troubleshoot Azure solutions
- Connect to and consume Azure services and third-party services
You are developing an ASP.NET Core Web API protected with the Microsoft Identity platform (v2 endpoint). Client apps will call the API either on behalf of a signed-in user (delegated flow) or as a daemon service (client-credentials flow). The API must programmatically verify the permission conveyed in the token. Which claim should the API evaluate in each scenario?
Delegated flow - check the aud claim; client-credentials flow - check the appid claim.
Delegated flow - check the groups claim; client-credentials flow - check the scope claim.
Delegated flow - check the scp (scope) claim; client-credentials flow - check the roles claim.
Delegated flow - check the roles claim; client-credentials flow - check the scp (scope) claim.
Answer Description
Access tokens obtained through delegated flows include the scp (scope) claim, which lists the delegated permissions that the user and client app have for the target API. Tokens obtained through the client-credentials flow never carry scp; instead they include the roles claim that lists the application roles (app-only permissions) granted to the calling service principal. Therefore, the API should inspect the scp claim when the call is on behalf of a user and the roles claim when the call is made by a daemon or background service. Inspecting roles for delegated tokens or scp for app-only tokens will always fail, because those claims are not issued in those contexts. Other claims such as aud, appid, or groups do not directly convey the permission being requested.
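To make the distinction concrete, here is a minimal C# sketch of the dual check; it assumes the JWT bearer middleware has already validated the token, and the scope and role values are placeholders:

using System.Linq;
using System.Security.Claims;

static bool IsAuthorizedToRead(ClaimsPrincipal user)
{
    // Delegated (user) tokens carry scp: a space-separated list of granted scopes.
    string scp = user.FindFirst("scp")?.Value;
    if (scp != null)
        return scp.Split(' ').Contains("Tasks.Read");   // placeholder scope name

    // App-only (client-credentials) tokens carry roles instead of scp.
    return user.Claims.Any(c => c.Type == "roles" && c.Value == "Tasks.Read.All");   // placeholder app role
}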
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the difference between delegated flow and client-credentials flow in Microsoft Identity Platform?
How does the scp (scope) claim function in the token during a delegated flow?
Why does the client-credentials flow use the roles claim instead of scp?
You are building a .NET 6 Web API that will be called by a single-page application (SPA) registered in Microsoft Entra ID. The SPA obtains tokens from the v2.0 endpoint by requesting a scope under the API's api:// application ID URI, but the API rejects the access tokens with an issuer validation error.
Which manifest property should you modify, and what value should it have?
Remove accessTokenAcceptedVersion (leave it null).
Set oauth2AllowImplicitFlow to true.
Add the SPA's client ID to the knownClientApplications array.
Set accessTokenAcceptedVersion to 2.
Answer Description
The v2.0 endpoint issues access tokens that include a version 2 issuer (…/v2.0). A Web API app registration that was created before the v2 endpoint was introduced is configured to accept only v1 tokens. To allow the API to validate v2 tokens you must set the manifest property accessTokenAcceptedVersion to 2. No other manifest setting affects issuer validation for v2 tokens, and properties such as oauth2AllowImplicitFlow or knownClientApplications relate to different scenarios (SPA implicit grant and pre-authorized clients, respectively). Setting accessTokenAcceptedVersion to null or 1 continues to limit the API to v1 tokens, so the issuer validation error would remain.
Ask Bash
What is the purpose of the 'accessTokenAcceptedVersion' property?
What happens if 'accessTokenAcceptedVersion' is not properly configured?
How does the v2.0 endpoint differ from the v1.0 endpoint in Microsoft Entra ID?
You manage an Azure Container Registry (ACR) named "contosoacr" in the resource group "dev-rg". You must publish the public Docker Hub image "nginx:1.25" to the registry as "web/nginx:stable" without first downloading the image to your workstation. Which Azure CLI command accomplishes this goal?
docker pull nginx:1.25 && docker tag nginx:1.25 contosoacr.azurecr.io/web/nginx:stable && docker push contosoacr.azurecr.io/web/nginx:stable
az acr repository copy --name contosoacr --source nginx:1.25 --image web/nginx:stable --resource-group dev-rg
az acr build --registry contosoacr --image web/nginx:stable docker.io/library/nginx:1.25 --resource-group dev-rg
az acr import --name contosoacr --source docker.io/library/nginx:1.25 --image web/nginx:stable --resource-group dev-rg
Answer Description
The az acr import command lets you copy images from an external registry directly into your Azure Container Registry, removing the need to pull them locally or provide Docker Hub credentials for public images. Using --source specifies the origin image, and --image defines the target repository and tag inside ACR. Other options shown either require a local pull and push (docker pull/tag), rely on a non-existent az acr repository copy command, or attempt to run az acr build, which is meant to build images from source code or Dockerfiles, not to copy existing images.
Ask Bash
What is Azure Container Registry (ACR)?
How does 'az acr import' work to copy Docker images?
What is the difference between 'az acr import' and 'docker pull/push workflows'?
You are developing an Azure Function that uses Application Insights. Inside a catch block you must send the caught exception together with a string property named CustomerId and a numeric metric named ElapsedMs. A TelemetryClient instance named telemetryClient is already available. Which method call should you use to meet the requirement?
telemetryClient.TrackException(ex, new Dictionary<string,string>{{"CustomerId", customerId}}, new Dictionary<string,double>{{"ElapsedMs", elapsedMs}});
telemetryClient.TrackTrace("Exception", SeverityLevel.Error, new Dictionary<string,string>{{"CustomerId", customerId}});
telemetryClient.TrackException(new ExceptionTelemetry(ex) { Properties = { {"CustomerId", customerId} } });
telemetryClient.TrackEvent("ExceptionCaught", new Dictionary<string,string>{{"CustomerId", customerId}}, new Dictionary<string,double>{{"ElapsedMs", elapsedMs}});
Answer Description
The overload of TelemetryClient.TrackException that accepts an Exception plus two dictionaries lets you attach both custom string properties and numeric metrics in the same call. Passing the exception object, a dictionary that contains the "CustomerId" key, and another dictionary that contains the "ElapsedMs" key ensures all requested data is included in the exception telemetry sent to Application Insights. The other options either omit the metric, use TrackTrace, or use TrackEvent, none of which satisfy the requirement to record both the exception and the custom metric in a single exception telemetry item.
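Expanded into context, the call might look like the following sketch (ProcessOrder, customerId, and elapsedMs stand in for work and values the surrounding code already provides; telemetryClient is available as stated in the question):

using System;
using System.Collections.Generic;
using Microsoft.ApplicationInsights;

try
{
    ProcessOrder();   // placeholder for the work being measured
}
catch (Exception ex)
{
    // A single telemetry item carries the exception, the custom property, and the custom metric.
    telemetryClient.TrackException(
        ex,
        new Dictionary<string, string> { { "CustomerId", customerId } },
        new Dictionary<string, double> { { "ElapsedMs", elapsedMs } });
    throw;
}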
Ask Bash
What is Application Insights?
What is a TelemetryClient in Azure?
Why use dictionaries with TrackException in Application Insights?
You are building a console application that runs as a background service and must call a custom web API secured by Microsoft Entra ID. The API exposes a scope named "Tasks.Read". The console app must obtain an access token without any user interaction. What should you configure to enable the console app to receive a valid token for the API?
Add the console app's managed identity to an Azure AD group that is assigned the "Tasks.Read" scope and rely on group claims in the token.
Register the console app as a public client, configure implicit grant, and request an access token directly from the authorization endpoint.
Create an application permission for "Tasks.Read", assign that permission to the console app, grant tenant-wide admin consent, and request the token by using the OAuth 2.0 client credentials flow.
Expose "Tasks.Read" only as a delegated permission, require users to grant consent individually, and obtain the token through the authorization code flow.
Answer Description
Because the console application runs without a signed-in user, it must authenticate as an application (daemon). The web API therefore needs an application permission that represents the Tasks.Read operation. After you expose this application permission, you assign it to the console app registration and grant tenant-wide admin consent. The console app can then use the OAuth 2.0 client credentials flow to request a token whose audience is the API. Delegated permissions, implicit grant, or public-client settings rely on a user context and are not suitable. Adding the managed identity to a group does not satisfy the permission/consent requirements for Azure AD tokens.
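A minimal MSAL.NET sketch of that token request might look like this (the IDs and secret are placeholders; the /.default scope requests all application permissions, such as Tasks.Read, that have been granted and consented):

using Microsoft.Identity.Client;

IConfidentialClientApplication app = ConfidentialClientApplicationBuilder
    .Create("<client-id>")
    .WithClientSecret("<client-secret>")
    .WithAuthority("https://login.microsoftonline.com/<tenant-id>")
    .Build();

// Client credentials flow: the token is issued to the app itself, with no user context.
AuthenticationResult result = await app
    .AcquireTokenForClient(new[] { "api://<api-client-id>/.default" })
    .ExecuteAsync();
// result.AccessToken now carries the roles claim with the granted app permission.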
Ask Bash
What is the OAuth 2.0 client credentials flow?
What are application permissions and how are they different from delegated permissions?
What is tenant-wide admin consent in Microsoft Entra ID?
You need to provision a new Azure App Service Web App named "contoso-api" for a .NET 7.0 application. The web app must run on Linux and be placed in the existing resource group "RG1" and the existing App Service plan "asp-linux" (located in the same resource group). Which Azure CLI command meets the requirements?
az webapp create --resource-group RG1 --plan asp-linux --name contoso-api --runtime "DOTNETCORE|7.0" --os-type Windows
az webapp create --resource-group RG1 --plan asp-linux --name contoso-api --runtime "DOTNETCORE|7.0" --os-type Linux
az webapp create --resource-group RG1 --name contoso-api --runtime "DOTNETCORE|7.0" --os-type Linux
az webapp create --resource-group RG1 --plan asp-linux --name contoso-api --runtime "DOTNET:7.0" --os-type Linux
Answer Description
The az webapp create command provisions the Web App. The --plan parameter places the app in an existing App Service plan, and --os-type Linux enforces the operating system. For a .NET 7 workload on App Service for Linux, the runtime identifier must be "DOTNETCORE|7.0". Omitting --plan is not valid because az webapp create requires an App Service plan, using a Windows OS type would violate the Linux requirement, and "DOTNET:7.0" is not a valid runtime string.
Ask Bash
What is an App Service Plan in Azure?
What is the significance of the runtime 'DOTNETCORE|7.0' in Azure App Service?
Why is '--os-type' important when provisioning an Azure Web App?
Your team needs to monitor the public HTTPS endpoint of an ASP.NET Core API using Application Insights. The test must validate the SSL certificate, allow you to specify a custom HTTP header, and run from multiple Azure locations without writing any code. Which type of availability test should you configure?
Standard test
URL ping test
Custom TrackAvailability test
Multi-step web test
Answer Description
The Standard test is the only built-in Application Insights availability test that lets you configure advanced options such as SSL certificate validation and custom request headers while still being codeless. A URL ping test is limited to a simple GET operation with no header customization or certificate checks. A custom TrackAvailability test does support any logic you write, but it requires adding code to the application and therefore does not meet the "without writing code" requirement. A multi-step web test is deprecated for new resources and is no longer the recommended choice.
Ask Bash
What are the key features of the Standard test in Application Insights?
How does a URL ping test differ from a Standard test?
Why is the Multi-step web test deprecated in Application Insights?
You have an Azure App Service for Linux that runs a custom container image stored in Azure Container Registry (ACR). Developers frequently push new versions of the image by overwriting the existing "latest" tag. You must ensure the web app automatically pulls and runs the updated image each time the tag is pushed, without any manual steps. Which action should you take?
Package the application code into a zip file and deploy it with Run-From-Package.
Enable Continuous Deployment for the ACR image in Deployment Center, allowing a registry webhook to redeploy the web app automatically.
Set the app setting WEBSITES_CONTAINER_START_TIME_LIMIT to 0 so App Service re-pulls the container image each time it starts.
Create a staging deployment slot and configure autoswap to production whenever the slot restarts.
Answer Description
When you enable Continuous Deployment for an ACR-based image in the App Service Deployment Center, the portal creates a webhook in the registry that targets the web app. Every time the specified tag is pushed, ACR sends the webhook, which triggers App Service to pull and start the new image automatically. Changing WEBSITES_CONTAINER_START_TIME_LIMIT only affects the startup timeout and does not trigger image refreshes. Run-From-Package deploys code, not containers. Autoswap between deployment slots orchestrates slot traffic changes but does not cause the app to detect a new image in the registry.
Ask Bash
What is Azure Deployment Center, and how does it enable Continuous Deployment?
What is a webhook, and how is it used in Azure Continuous Deployment?
Why doesn't changing the WEBSITES_CONTAINER_START_TIME_LIMIT trigger image refreshes?
You use a C# timer-triggered Azure Function with the CRON expression '0 0 */4 * * *' so it runs every four hours. The function must also execute once immediately when the function app starts. Which configuration change accomplishes this?
Enable the AlwaysOn setting on the Function App in Azure portal.
Create an application setting named WEBSITE_TIME_TRIGGER with the value Startup.
Add the parameter RunOnStartup = true to the TimerTrigger attribute.
Set useMonitor to false for the timer extension in host.json.
Answer Description
The TimerTrigger attribute supports an optional RunOnStartup property. When set to true, the runtime executes the function once when the host starts and then continues following the defined schedule. AlwaysOn only keeps the Function App warm but does not invoke the function, useMonitor controls checkpointing and does not affect startup execution, and no WEBSITE_TIME_TRIGGER app setting exists. Therefore adding RunOnStartup = true to the TimerTrigger attribute is the correct approach.
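For example, in a C# in-process function (the function name is illustrative; the RunOnStartup property also exists on the isolated worker model's TimerTrigger attribute):

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ScheduledSync
{
    [FunctionName("ScheduledSync")]
    public static void Run(
        [TimerTrigger("0 0 */4 * * *", RunOnStartup = true)] TimerInfo timer,
        ILogger log)
    {
        // Runs once when the host starts, then every four hours per the schedule.
        log.LogInformation("Timer fired at {time}", DateTime.UtcNow);
    }
}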
Ask Bash
What is a TimerTrigger in Azure Functions?
What is a CRON Expression and how does it work in Azure?
What does RunOnStartup = true do in a TimerTrigger?
You are developing a multi-tenant ASP.NET Core Web API called Inventory API protected by Microsoft Entra ID. Client apps access the API by using the OAuth 2.0 client-credentials flow. An application role "inventory.read.all" is defined in the API registration and assigned to the clients. When validating incoming access tokens, which claim and value should you verify to authorize read operations?
Check that the appid claim equals "inventory.read.all".
Check that the aud claim equals "inventory.read.all".
Check that the roles claim includes "inventory.read.all".
Check that the scp claim equals "inventory.read.all".
Answer Description
Tokens obtained through the client-credentials flow carry application permissions. Microsoft Entra ID does not include the scp (scope) claim in these tokens. Instead, it places granted application roles in the roles claim. Therefore, the API should verify that the roles claim contains the expected application role value ("inventory.read.all").
The other options are incorrect:
- scp is absent in client-credentials tokens.
- aud identifies the audience, not permissions.
- appid is the caller's client ID and unrelated to role validation.
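In ASP.NET Core, application roles surface as role claims when the bearer middleware's RoleClaimType is set to "roles" (Microsoft.Identity.Web configures this by default), so the check can be a simple attribute. A sketch with an illustrative controller:

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/inventory")]
public class InventoryController : ControllerBase
{
    // Rejects tokens whose roles claim does not include the required application role.
    [HttpGet]
    [Authorize(Roles = "inventory.read.all")]
    public IActionResult GetItems() => Ok();
}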
Ask Bash
What is the client-credentials flow in OAuth 2.0?
Why is the roles claim important in Microsoft Entra ID's client-credentials flow?
What is the difference between the aud and appid claims in OAuth 2.0 tokens?
You are developing a telemetry ingestion solution that uses an Azure Event Hubs Standard namespace. Devices publish thousands of events per second. Auditors require an immutable copy of every raw event in Azure Storage for at least 90 days to allow replay. You must meet the requirement with minimal changes to producer and consumer code. What should you do?
Create an Azure Function that is triggered by the Event Hub and writes each event payload to Azure Blob Storage.
Enable diagnostic settings on the Event Hubs namespace and send the diagnostic logs to Azure Storage.
Configure an Azure Stream Analytics job that reads from the Event Hub and outputs the stream to Azure Blob Storage.
Enable Event Hubs Capture and set the destination to an Azure Storage container that has a 90-day lifecycle deletion policy.
Answer Description
Event Hubs Capture is a built-in feature of Standard and Dedicated namespaces that automatically writes all arriving events to Azure Blob Storage or Azure Data Lake Storage in Avro format. Because Capture runs inside the Event Hubs service, no additional code is required in producers or consumers. A storage lifecycle policy can delete files older than 90 days, meeting the retention requirement. An Azure Function or Stream Analytics job would work but adds custom components. Diagnostic settings forward only service logs and metrics, not event payloads, so they do not satisfy the audit requirement.
Ask Bash
What is Azure Event Hubs Capture?
What is Avro format, and why is it used in Event Hubs Capture?
How does Azure Storage's lifecycle deletion policy work?
You are building an ASP.NET Core web app that authenticates to a service by using a client certificate. The certificate is stored in an Azure Key Vault named ContosoVault and was imported as an exportable PFX file. At start-up you must load the certificate with its private key into an X509Certificate2 object in memory. A DefaultAzureCredential instance named credential is already available. Which C# approach should you implement?
Use Azure CLI inside the app to run az keyvault certificate download and load the downloaded file into an X509Certificate2 object.
Instantiate a SecretClient with the vault URI and credential, call GetSecretAsync("clientCert"), convert the Value to byte[], then create the X509Certificate2 object from the byte array.
Instantiate a CertificateClient, call GetCertificateAsync("clientCert"), and pass the returned Certificate.Content bytes to the X509Certificate2 constructor.
Instantiate a KeyClient, call GetKeyAsync("clientCert"), extract the key material, and build the X509Certificate2 object from it.
Answer Description
When a PFX certificate is stored in Azure Key Vault, the private key material is kept in a hidden secret that shares the certificate's name. To obtain the complete PFX payload (public certificate and private key), the application must call SecretClient.GetSecretAsync on the secrets collection, then convert the returned Base-64 string into a byte array and pass it to the X509Certificate2 constructor.
CertificateClient.GetCertificateAsync only returns the public part of the certificate in DER format, so the resulting X509Certificate2 will lack a private key. KeyClient works with cryptographic keys, not certificates, and GetKeyAsync cannot return a PFX. The Azure CLI is irrelevant inside application code.
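A sketch of the correct approach, using the vault and certificate names from the question:

using System;
using System.Security.Cryptography.X509Certificates;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

var credential = new DefaultAzureCredential();

// The private key of a Key Vault certificate lives in a secret that shares the certificate's name.
var secretClient = new SecretClient(new Uri("https://contosovault.vault.azure.net"), credential);
KeyVaultSecret secret = await secretClient.GetSecretAsync("clientCert");

// The secret value is the Base64-encoded PFX, which includes the private key.
byte[] pfxBytes = Convert.FromBase64String(secret.Value);
var certificate = new X509Certificate2(pfxBytes);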
Ask Bash
What is X509Certificate2 in C#?
What is the purpose of DefaultAzureCredential in Azure SDK?
How does SecretClient.GetSecretAsync work in Azure Key Vault?
Your company runs an Azure Cosmos DB account that has East US as the single write region and West Europe added as a read region. A microservice that runs only in West Europe must read documents that it previously wrote and must see those writes immediately, but it can tolerate other clients' updates becoming visible later. To keep latency and request units (RU) usage as low as possible, which consistency level should the microservice specify on its SDK requests?
Strong consistency
Consistent Prefix consistency
Eventual consistency
Session consistency
Answer Description
Session consistency guarantees that a client always reads its own writes and provides monotonic reads and writes within the scope of a session token. Because the guarantee is scoped to the caller's session, the database does not have to coordinate all replicas, so it incurs lower latency and RU charges than the globally enforced Strong and Bounded Staleness levels. Eventual and Consistent Prefix offer even lower latency but cannot guarantee that a client will immediately observe its own writes. Therefore, the microservice should set Session consistency for its requests.
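For instance, with the .NET SDK (the endpoint and key are placeholders):

using Microsoft.Azure.Cosmos;

// Session consistency is requested per client; ApplicationRegion directs reads
// to the nearby West Europe replica.
var client = new CosmosClient(
    "<account-endpoint>",
    "<account-key>",
    new CosmosClientOptions
    {
        ConsistencyLevel = ConsistencyLevel.Session,
        ApplicationRegion = Regions.WestEurope
    });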
Ask Bash
What is Azure Cosmos DB?
What does session consistency mean in Azure Cosmos DB?
What are request units (RUs) in Azure Cosmos DB?
You are developing a .NET 8 isolated-process Azure Function that runs on the Consumption plan. The function needs a SQL Database connection string. The value must be changeable without code redeployment and must rotate centrally while remaining inaccessible to other developers. You want to minimize code changes and follow Azure guidance for configuration and secret management. Which approach should you use?
Add the connection string to the local.settings.json file and deploy the file with the function code.
Save the connection string as a plain key-value in Azure App Configuration and load it at runtime with the AzureAppConfiguration client library.
Create an application setting for the function app and paste the connection string as its value.
Store the connection string as a secret in Azure Key Vault and reference it in the function app's application settings by using a Key Vault reference syntax.
Answer Description
A Key Vault reference placed in the application settings of the Function app keeps the actual secret inside Azure Key Vault, where it can be rotated centrally. The reference is resolved by the platform, so your code still reads the value from standard configuration providers with no changes. Using a plain application setting or local.settings.json stores the secret in the Function's configuration, not in a dedicated secret store. Storing the value directly in Azure App Configuration as a key-value keeps it out of Key Vault and requires additional code or policies to handle rotation securely.
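The application setting's value uses the documented reference syntax, for example (the vault and secret names are placeholders):

@Microsoft.KeyVault(SecretUri=https://contosovault.vault.azure.net/secrets/SqlConnection/)

Because App Service resolves the reference before the worker starts, the function reads the value like any ordinary setting:

using System;

// No Key Vault SDK calls are needed; the platform has already substituted the secret value.
string connectionString = Environment.GetEnvironmentVariable("SqlConnection");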
Ask Bash
What is Azure Key Vault and how does it help with secret management?
How do Key Vault references in application settings work?
Why is using local.settings.json or plain application settings not recommended for secret management?
You are deploying a new Azure API Management (APIM) instance by using an ARM template. The template contains the following snippet:
"sku": {
"name": "Consumption",
"capacity": 2
}
After running the deployment, the template validation fails with the error "Property capacity is not allowed".
To complete the deployment successfully while keeping the billing model that charges per execution and scales automatically, what should you do?
Change the sku.name value to Developer and keep capacity set to 1.
Keep the sku settings and add the property "autoScale": "enabled" to the template.
Change the sku.name value to Basic and keep capacity set to 2.
Remove the capacity property (or set it to 0) and redeploy the Consumption tier.
Answer Description
The Consumption tier is designed for serverless, per-execution billing and always scales automatically. Because there are no dedicated units, the capacity property must be omitted (or set to 0) in the ARM template. Any positive value is rejected. Other tiers (Developer, Basic, Standard, Premium) require capacity to specify the number of units (or a fixed value of 1 for Developer). Therefore, removing the capacity property (or setting it to 0) is the only way to deploy an APIM instance that keeps the Consumption pricing model.
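For reference, a sku block that deploys the Consumption tier successfully contains only the name (mirroring the template snippet above):

"sku": {
"name": "Consumption"
}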
Ask Bash
What is the purpose of the Consumption tier in Azure API Management?
Why is the 'capacity' property not allowed in the Consumption tier?
What are the differences between the Consumption and Basic tiers in Azure API Management?
Your team recently added a FeatureToggleButton custom event to track how users enable a new capability in your ASP.NET Core web app. The app is already sending telemetry to Application Insights. Product owners want to know, without writing any KQL, how many unique users clicked this event during the last 24 hours and how many of those users were new versus returning. Which built-in usage analytics blade should you use to get this information most quickly?
Users
Events
Live Metrics Stream
Sessions
Answer Description
The Users blade in Application Insights usage analytics focuses on unique user identities. It automatically separates results into new and returning users and allows you to filter by a specific custom event (such as FeatureToggleButton) and by time range.
- The Sessions blade shows the number and characteristics of sessions, not distinct users.
- The Events blade lists occurrences of events but does not break them down into new versus returning users unless you write queries.
- Live Metrics Stream provides real-time server metrics and has no concept of new or returning users. Therefore, opening the Users blade is the fastest way to answer the product owners' question.
Ask Bash
What is Application Insights in Azure?
How does the Users blade differ from the Sessions blade in Application Insights?
What kind of data can custom events in Application Insights track?
You build a .NET application that uses the ChangeFeedProcessor class (SDK v3) to react to inserts and updates in an Azure Cosmos DB container. The processor will run in several Kubernetes pods for horizontal scale. Which configuration ensures that every logical partition range from the change feed is processed by only one pod at any given time, preventing concurrent duplicate processing across pods?
Invoke CheckpointAsync after processing every item to write manual checkpoints.
Configure all pods to use the same lease container located in the same Cosmos DB account.
Assign a unique instance name to each pod when building the processor.
Set the processor's start time to DateTime.UtcNow each time it starts.
Answer Description
The Change Feed Processor coordinates work by writing lease documents that map logical partition ranges to individual processor instances. When all pods point to the same lease container (and the same processorName / leasePrefix), the SDK assigns each partition range to only one pod at a time and records checkpoints in the corresponding lease documents. Separate lease containers create isolated processor groups, which can lead to the same change being processed concurrently by multiple pods. Setting unique instance IDs, calling CheckpointAsync frequently, or changing the start time do not provide cross-pod coordination; they affect only local behavior.
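A sketch of the builder configuration each pod would run; the CosmosClient named client, the database and container names, and the Order document type are all placeholders:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

record Order(string id);   // placeholder document type

// Every pod uses the same processor name and the same lease container; only the
// instance name differs, which lets the SDK spread partition ranges across pods.
Container monitored = client.GetContainer("ordersDb", "orders");
Container leases = client.GetContainer("ordersDb", "leases");

ChangeFeedProcessor processor = monitored
    .GetChangeFeedProcessorBuilder<Order>("orderProcessor",
        (changes, cancellationToken) =>
        {
            foreach (Order order in changes)
            {
                // Handle the inserted or updated document here.
            }
            return Task.CompletedTask;
        })
    .WithInstanceName(Environment.MachineName)   // unique per pod
    .WithLeaseContainer(leases)
    .Build();

await processor.StartAsync();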
Ask Bash
What is the role of a lease container in the Change Feed Processor?
How does a logical partition range work in Cosmos DB?
Why don't unique instance names or CheckpointAsync calls prevent duplicate processing?
You are creating an Azure Monitor metric alert that is triggered when an Application Insights URL ping availability test fails from at least two test locations within a 5-minute window. The on-call engineer must receive an SMS notification each time the alert fires. Which change to the alert configuration guarantees that the text message is sent?
Select the Send SMS option on the Availability Test blade and save the test configuration.
Convert the metric alert to a log alert that uses a KQL query against trackAvailability telemetry.
Add the engineer's phone number under the Notification settings of the Application Insights resource.
Attach an Azure Monitor action group that contains an SMS receiver to the metric alert and leave the action group enabled.
Answer Description
Azure Monitor alerts do not send notifications directly; instead, they invoke one or more action groups. An action group can contain several receiver types, such as email, SMS, push, voice, ITSM, webhooks, or logic apps. By attaching an action group that includes an SMS receiver (the engineer's phone number) to the availability-based metric alert, the alert engine automatically sends the SMS when the alert condition is met. Simply enabling an option on the availability test, editing Application Insights properties, or changing the alert type does not create the notification path; only an action group with the required receiver ensures delivery.
Ask Bash
What is an Azure Monitor action group?
How does Azure Monitor use metrics and logs for alerts?
What is trackAvailability telemetry in Application Insights?
Your background service runs without user interaction and must periodically list all users in your Azure AD tenant by calling Microsoft Graph. Using the Microsoft Identity platform, which OAuth 2.0 grant type and Microsoft Graph permission type should you implement to meet the requirement?
Device code grant with a delegated permission such as Group.Read.All
Implicit grant with an application permission such as Directory.Read.All
Client credentials grant with an application permission such as User.Read.All
Authorization code grant with a delegated permission such as User.Read.All
Answer Description
Because the service has no signed-in user, delegated permissions are impossible. The Microsoft identity platform recommends the OAuth 2.0 client credentials grant for daemon or service applications. This grant issues a token on behalf of the application itself, so it must request an application permission such as User.Read.All. Authorization code, device code, and implicit grants are user-interactive and use delegated permissions, which would fail when no user is present. The implicit flow does not support application permissions.
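With the Microsoft Graph SDK (v5-style) and Azure.Identity, the flow reduces to a few lines; the tenant, client, and secret values are placeholders, and User.Read.All must be granted as an application permission with admin consent:

using Azure.Identity;
using Microsoft.Graph;

// ClientSecretCredential performs the client credentials grant under the hood.
var credential = new ClientSecretCredential("<tenant-id>", "<client-id>", "<client-secret>");
var graphClient = new GraphServiceClient(credential, new[] { "https://graph.microsoft.com/.default" });

// Lists users in the tenant by using the app-only token.
var users = await graphClient.Users.GetAsync();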
Ask Bash
What is the OAuth 2.0 client credentials grant?
What is the difference between application and delegated permissions in Microsoft Graph?
Why can't authorization code, device code, or implicit grants work for background services?
You are developing an order-processing application that publishes messages to an Azure Service Bus queue. Network interruptions can occasionally cause the sender to retry the same message. The consumer must process each order exactly once even if duplicates are sent within 10 minutes. What should you configure on the queue?
Increase the queue's lock duration to 10 minutes.
Set the queue's MaxDeliveryCount property to 1 and enable dead-lettering on expiration.
Enable duplicate detection and set the duplicate detection history window to 10 minutes.
Require sessions and set the session idle timeout to 10 minutes.
Answer Description
Azure Service Bus can automatically discard messages that have the same MessageId as one that was previously accepted during a specified time interval. Enabling duplicate detection and setting a 10-minute duplicate detection history window satisfies the requirement because the broker will remove any retries that arrive within that window, ensuring the order is processed only once. Sessions provide ordered, state-full processing but do not eliminate duplicates. Setting MaxDeliveryCount to 1 limits how many times a single message is delivered after a processing failure; it does not address multiple identical messages with different identifiers. Increasing the lock duration only gives the consumer more time to finish processing a single delivery and does not prevent duplicate messages from being received.
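On the sender side, the only code requirement is a deterministic MessageId, for example the order number (the connection string, queue name, and locals are placeholders):

using Azure.Messaging.ServiceBus;

await using var sbClient = new ServiceBusClient("<connection-string>");
ServiceBusSender sender = sbClient.CreateSender("orders");

// Retries of the same order reuse the same MessageId, so the broker discards
// any copy that arrives within the 10-minute detection window.
var message = new ServiceBusMessage(orderJson)
{
    MessageId = orderId
};
await sender.SendMessageAsync(message);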
Ask Bash
What is Azure Service Bus duplicate detection?
What is the significance of the duplicate detection history window in Azure Service Bus?
How does Azure Service Bus handle duplicate messages differently from message retries?