Microsoft Azure Developer Associate Practice Test (AZ-204)
Use the form below to configure your Microsoft Azure Developer Associate Practice Test (AZ-204). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft Azure Developer Associate AZ-204 Information
Navigating the AZ-204 Azure Developer Associate Exam
The Microsoft Azure Developer Associate (AZ-204) certification is a crucial credential for cloud developers specializing in the Microsoft Azure ecosystem. This exam is designed for professionals who are responsible for all phases of the development lifecycle, including gathering requirements, design, development, deployment, security, maintenance, performance tuning, and monitoring. Candidates should have 1-2 years of professional development experience, including hands-on experience with Microsoft Azure. The exam validates a developer's proficiency in leveraging Azure's tools, SDKs, and APIs to build and maintain cloud applications and services.
The AZ-204 exam assesses a broad set of skills across five primary domains. These areas include developing Azure compute solutions (25-30%), developing for Azure storage (15-20%), implementing Azure security (15-20%), monitoring, troubleshooting, and optimizing Azure solutions (5-10%), and connecting to and consuming Azure services and third-party services (20-25%). The exam itself consists of 40-60 questions and has a duration of about 100 minutes. The question formats can vary, including multiple-choice, scenario-based questions, and drag-and-drop tasks.
The Value of Practice Exams in Preparation
A critical component of a successful study plan for the AZ-204 exam is the use of practice tests. Taking practice exams offers several key benefits that go beyond simply memorizing facts. They help you become familiar with the style, wording, and difficulty of the questions you are likely to encounter on the actual exam. This familiarity can help reduce anxiety and improve time management skills during the test.
Furthermore, practice exams are an excellent tool for self-assessment. They allow you to gauge your readiness, identify areas of weakness in your knowledge, and focus your study efforts accordingly. By reviewing your answers, especially the incorrect ones, you can gain a deeper understanding of how different Azure services work together to solve real-world problems. Many candidates find that simulating exam conditions with timed practice tests helps build the confidence needed to think clearly and methodically under pressure. Microsoft itself provides a practice assessment to help candidates prepare and fill knowledge gaps, increasing the likelihood of passing the exam.

Free Microsoft Azure Developer Associate AZ-204 Practice Test
- 20 Questions
- Unlimited time
- Develop Azure compute solutions
- Develop for Azure storage
- Implement Azure security
- Monitor and troubleshoot Azure solutions
- Connect to and consume Azure services and third-party services
You are developing a .NET worker service that uploads images to an Azure Storage container. Each image must be stored with the content type set to "image/png" and with custom metadata key "source" set to "webapp". To minimize latency and transaction costs, you want to satisfy both requirements with a single service request. Which SDK call should you use?
Call BlobClient.UploadAsync(BinaryData content, overwrite: true).
Call BlobClient.StartCopyFromUriAsync with BlobCopyFromUriOptions that include metadata and headers.
Call BlobClient.UploadAsync(content, new BlobUploadOptions { HttpHeaders = new BlobHttpHeaders { ContentType = "image/png" }, Metadata = new Dictionary<string,string> { { "source", "webapp" } } });
First call BlobClient.SetMetadataAsync then call BlobClient.SetHttpHeadersAsync for the blob.
Answer Description
The BlobClient.UploadAsync overload that accepts a BlobUploadOptions object lets you specify both HTTP headers and metadata in the same upload request. By setting the HttpHeaders.ContentType property to "image/png" and providing the metadata dictionary, the SDK sends only one PUT Blob operation that applies all settings. Calling SetMetadataAsync and SetHttpHeadersAsync would require two additional round-trips, UploadAsync without options would apply neither setting, and StartCopyFromUriAsync is intended for server-side copies, not direct uploads.
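For illustration, a minimal sketch of that call with the Azure.Storage.Blobs v12 SDK, assuming a BlobContainerClient has already been created; the blob name and stream parameters are placeholders:

using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public static class ImageUploader
{
    // Uploads a PNG so the content type and metadata travel in the same PUT Blob request.
    public static async Task UploadPngAsync(BlobContainerClient containerClient, Stream imageStream, string blobName)
    {
        BlobClient blobClient = containerClient.GetBlobClient(blobName);

        var options = new BlobUploadOptions
        {
            HttpHeaders = new BlobHttpHeaders { ContentType = "image/png" },
            Metadata = new Dictionary<string, string> { { "source", "webapp" } }
        };

        await blobClient.UploadAsync(imageStream, options);
    }
}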
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is BlobClient in Azure SDK?
What is the purpose of HttpHeaders and Metadata in BlobUploadOptions?
Why is using a single SDK call for upload more efficient?
An HTTP-triggered Azure Functions app runs on the Consumption plan. Some requests perform intensive processing and take up to 25 minutes. Users report that these requests always fail after roughly 10 minutes. You must allow the function to complete successfully without modifying the function's code. Which action should you take?
Move the app to an Azure Functions Premium plan and set functionTimeout to 30 minutes in host.json.
Increase the instance count of the Consumption plan to its maximum scale-out limit.
Update host.json to set functionTimeout to 30 minutes while keeping the app on the Consumption plan.
Enable Always On for the existing Consumption plan function app.
Answer Description
In a Consumption plan, Azure Functions enforces an execution timeout of 5 minutes by default and no more than 10 minutes even when functionTimeout is increased in host.json. To run longer executions you must move the app to either a Premium plan or a Dedicated (App Service) plan, where the default timeout is 30 minutes and the limit can be removed entirely. After migrating, you can raise functionTimeout (for example, to 30 minutes) or set it to -1 for no limit. Enabling Always On or scaling out does not change the per-execution timeout on a Consumption plan, and simply increasing functionTimeout while remaining on the Consumption plan will still be capped at 10 minutes.
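For reference, after the move to a Premium or Dedicated plan, a minimal host.json sketch that sets the 30-minute timeout used in this scenario looks like this:

{
  "version": "2.0",
  "functionTimeout": "00:30:00"
}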
Ask Bash
What is the Consumption plan in Azure Functions?
What is the Azure Functions Premium plan?
What does functionTimeout mean in Azure Functions?
You have the source code and a Dockerfile in the current directory on your workstation. The workstation lacks Docker engine and has limited CPU resources. You need to build the image in Azure and push it to an Azure Container Registry named contosoacrdemo, tagging the resulting image as web:v1. Which Azure CLI command should you run?
docker build -t contosoacrdemo.azurecr.io/web:v1 .
az container create --registry-login-server contosoacrdemo.azurecr.io --image web:v1 .
az acr task create --registry contosoacrdemo --name buildweb --image web:v1 --context .
az acr build --registry contosoacrdemo --image web:v1 .
Answer Description
The az acr build command sends the current directory context to the Azure Container Registry build service, which performs the docker build operation in Azure and automatically pushes the resulting image to the specified registry. Using --registry identifies the target ACR, and --image supplies the desired repository name and tag. The trailing period (.) indicates the local context to upload. az acr task create sets up a persistent task but does not immediately build the image; docker build runs locally and therefore requires Docker Engine; az container create provisions a container instance and does not build or push images.
Ask Bash
What is the purpose of the Azure Container Registry (ACR)?
What does the '--registry' flag do in the az acr build command?
Why is az acr build preferred when Docker engine is not installed locally?
You manage an Azure API Management instance that contains a product named Public and an API named Weather that is associated with the product. Every operation in API Management currently requires an Ocp-Apim-Subscription-Key header. You are migrating the Weather API to use Azure AD OAuth 2.0 bearer tokens instead of subscription keys. Calls to Weather must succeed when a valid Azure AD token is supplied even if no subscription key is present, and the requirement must not affect the other APIs in the Public product. What should you do in the Azure portal to meet the requirement?
Enable the global "Bypass subscription key" setting for the API Management gateway.
In the Settings blade of the Public product, clear the "Subscription required" option and save the change.
In the Settings blade of the Weather API, clear the "Subscription required" option and save the change.
Add a validate-jwt policy to the inbound section of the Weather API.
Answer Description
The Weather API must be exempt from the subscription-key check while the rest of the APIs in the Public product keep their current setting. Setting the Subscription required option to Off for the Weather API removes the need for callers to pass the Ocp-Apim-Subscription-Key header or query parameter for that API only. Disabling the requirement at the product level would also disable it for every other API in the Public product. Adding a validate-jwt policy enforces token validation but does not override the key requirement, so requests without a key would still be rejected. There is no global "Bypass subscription key" switch in API Management.
Ask Bash
What is the 'Subscription required' option in Azure API Management?
How does Azure AD OAuth 2.0 bearer token authentication work?
What is the role of the 'validate-jwt' policy in Azure API Management?
You are building an ASP.NET Core worker service that runs inside an AKS cluster and pings several internal HTTP endpoints every minute. The endpoints are private and must not be tested from the public internet, but you want the results to appear in the Application Insights Availability blade and support alerting. Which implementation meets the requirement?
Create an AvailabilityTelemetry object for each probe and send it with TelemetryClient.TrackAvailability.
Publish a custom numeric value with TelemetryClient.TrackMetric representing success (1) or failure (0) for each probe.
Enable codeless Application Insights auto-instrumentation in the cluster and depend on Smart Detection to raise availability alerts.
Use TelemetryClient.TrackDependency when calling each endpoint and treat failures as availability issues.
Answer Description
Only the TrackAvailability API records synthetic test results that surface in the Availability blade. Creating an AvailabilityTelemetry object (or calling TelemetryClient.TrackAvailability directly) lets you pass the test name, timestamp, duration, success flag, location, and optional diagnostic details. TrackDependency records outbound calls for performance analysis, TrackMetric stores numeric values, and relying on auto-instrumentation with Smart Detection does not generate synthetic availability data.
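A hedged sketch of reporting one probe result with the Application Insights .NET SDK (Microsoft.ApplicationInsights); the test name and run location are illustrative values, not required names:

using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public static class AvailabilityReporter
{
    // Records one synthetic test result that surfaces in the Availability blade.
    public static void ReportProbe(TelemetryClient telemetryClient, TimeSpan duration, bool success, string message)
    {
        var availability = new AvailabilityTelemetry
        {
            Name = "orders-api-health",      // illustrative test name
            Timestamp = DateTimeOffset.UtcNow,
            Duration = duration,
            Success = success,
            RunLocation = "aks-internal",    // illustrative location label
            Message = message
        };

        telemetryClient.TrackAvailability(availability);
        telemetryClient.Flush();
    }
}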
Ask Bash
What is an AvailabilityTelemetry object in Application Insights?
How does TrackAvailability differ from TrackDependency in Application Insights?
What is the Application Insights Availability blade used for?
You are developing an Azure Container App named orders-worker that processes messages from an Azure Service Bus queue. The app must run zero replicas when the queue is empty but scale out to as many as 20 replicas as the number of pending messages grows. HTTP ingress is disabled. Which scaling rule type should you configure for orders-worker to meet these requirements?
Add a cron scaler rule that increases replica count during working hours.
Set minReplicas to 1 and configure CPU-based scaling only.
Define a KEDA scaling rule of type "azure-servicebus" with queueName and connection settings.
Enable external HTTP ingress and rely on the built-in HTTP autoscaler.
Answer Description
KEDA provides the event-driven scaling used by Azure Container Apps. To scale on the length of an Azure Service Bus queue you must add a KEDA rule whose type is "azure-servicebus" (or the equivalent alias "azure-servicebus-queue"). The rule monitors the queue length and changes replica counts, allowing the app to drop to zero when no messages exist and grow up to the maximum you set (20 in this scenario). HTTP ingress scaling works only for HTTP traffic, CPU metrics do not scale to zero, and a cron scaler simply schedules fixed replica counts rather than reacting to queue depth.
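As a sketch, the scale section of the Container App configuration could look like the following; the rule name, queue name, messageCount threshold, and secret name are assumptions for this scenario rather than required values:

scale:
  minReplicas: 0
  maxReplicas: 20
  rules:
    - name: orders-queue-rule
      custom:
        type: azure-servicebus
        metadata:
          queueName: orders
          messageCount: "10"
        auth:
          - secretRef: servicebus-connection
            triggerParameter: connection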
Ask Bash
What is KEDA in Azure?
How does the 'azure-servicebus' KEDA scaler work?
Why doesn't HTTP ingress autoscaling apply in this scenario?
Your e-commerce web app must allow unauthenticated clients to download pictures stored in an Azure Storage blob container during a marketing campaign that lasts two hours. After the campaign, you need a single action that immediately invalidates every URL you issued. Which technique should you implement?
A service-level SAS token that is associated with a stored access policy on the container.
A service-level SAS token created directly on each blob with a two-hour expiry.
Set the container access level to Blob (anonymous read access) and remove public access after two hours.
An account-level SAS token with read permissions limited to the blob service.
Answer Description
A service SAS that references a stored access policy is created on the container. Each URL contains only the signed identifier of the policy, not its own start and expiry times. When the campaign ends, deleting or updating the stored access policy instantly revokes (or changes) the permissions for all SAS URLs that depend on it, so no individual link has to be regenerated. A service SAS created directly on each blob remains valid until its individual expiry time and can be revoked only by rotating the account keys, an account SAS cannot be associated with a stored access policy and so is not centrally revocable either, and making the container public provides no tokens that can be revoked at all.
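A minimal sketch with the Azure.Storage.Blobs v12 SDK, assuming the clients were created with a StorageSharedKeyCredential so they can sign SAS tokens; the policy identifier and blob name are illustrative:

using System;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Sas;

public static class CampaignSas
{
    public static Uri CreateCampaignUrl(BlobContainerClient containerClient, string blobName)
    {
        // Create (or replace) the stored access policy on the container.
        var policy = new BlobSignedIdentifier
        {
            Id = "campaign-read",
            AccessPolicy = new BlobAccessPolicy
            {
                PolicyStartsOn = DateTimeOffset.UtcNow,
                PolicyExpiresOn = DateTimeOffset.UtcNow.AddHours(2),
                Permissions = "r"
            }
        };
        containerClient.SetAccessPolicy(permissions: new[] { policy });

        // The SAS carries only the policy identifier; start, expiry, and permissions come from the policy.
        var sasBuilder = new BlobSasBuilder
        {
            BlobContainerName = containerClient.Name,
            BlobName = blobName,
            Resource = "b",
            Identifier = "campaign-read"
        };

        // Deleting or editing the "campaign-read" policy later revokes every issued URL at once.
        return containerClient.GetBlobClient(blobName).GenerateSasUri(sasBuilder);
    }
}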
Ask Bash
What is a service-level SAS token?
What is a stored access policy in Azure?
How do stored access policies differ from manually setting expiry times on individual SAS tokens?
You are developing a .NET 6 App Service web app that must upload files to an Azure Storage account by using the Azure SDK. The solution must avoid storing any credentials in code, and the identity must remain available even if the web app is deleted and recreated in another region. What should you do?
Create an Azure AD application with a client secret, store the secret in Azure Key Vault, and retrieve it at runtime by using DefaultAzureCredential.
Create a user-assigned managed identity, assign it the Storage Blob Data Contributor role on the storage account, and associate the identity with the web app.
Generate a service SAS token for the storage account and store the token in an App Service application setting.
Enable a system-assigned managed identity on the web app and grant it the Storage Blob Data Contributor role on the storage account.
Answer Description
A user-assigned managed identity is a standalone Azure resource whose lifecycle is independent of any single host resource and can be attached to multiple services. Creating such an identity, granting it the Storage Blob Data Contributor role on the storage account, and then associating it with the web app meets all requirements. A system-assigned identity would be deleted with the web app, while solutions that rely on SAS tokens or application secrets violate the requirement to avoid storing credentials.
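A hedged sketch of the web app's upload path, assuming the Azure.Identity and Azure.Storage.Blobs packages; the client ID, account URL, container, and file name are placeholders:

using System;
using System.IO;
using Azure.Identity;
using Azure.Storage.Blobs;

// Point DefaultAzureCredential at the user-assigned identity attached to the web app.
var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions
{
    ManagedIdentityClientId = "<user-assigned-identity-client-id>"
});

var blobServiceClient = new BlobServiceClient(
    new Uri("https://<storage-account>.blob.core.windows.net"),
    credential);

// Succeeds because the identity holds Storage Blob Data Contributor on the account.
var containerClient = blobServiceClient.GetBlobContainerClient("uploads");
await containerClient.UploadBlobAsync("report.pdf", File.OpenRead("report.pdf"));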
Ask Bash
What is the difference between user-assigned and system-assigned managed identities in Azure?
What is the Storage Blob Data Contributor role in Azure?
How does DefaultAzureCredential work with Azure AD and managed identities?
Your build script runs in Azure Cloud Shell and must build a Docker image from the current directory and publish it to an existing Azure Container Registry named contosoacr. The solution must not require the Cloud Shell environment to have the Docker engine installed. Which Azure CLI command should you run?
docker build -t contosoacr.azurecr.io/web:v1 . && docker push contosoacr.azurecr.io/web:v1
az acr import --name contosoacr --source . --image web:v1
az acr build --registry contosoacr --image web:v1 .
az container create --registry-login-server contosoacr.azurecr.io --image web:v1 --file Dockerfile
Answer Description
The az acr build command submits the build context to the Azure Container Registry task service, which performs the Docker build and pushes the resulting image back to the registry. Because the build happens in Azure, Docker does not need to be installed in the local environment. The command must reference the target registry and provide the desired image name and tag. az acr import copies an existing image from another registry, not a local build context. Combining docker build and docker push requires the Docker engine to be available, which contradicts the requirements. az container create deploys a running container instance and does not build or publish an image.
Ask Bash
What is the Azure Container Registry (ACR)?
How does az acr build differ from docker build?
What is a Docker build context and why is it important?
You are developing a background service that runs on an Azure virtual machine and must read messages from users' mailboxes through Microsoft Graph without any interactive sign-in. You registered an app in Microsoft Entra ID, added the Mail.Read application permission, and will use MSAL.NET with the client-credentials flow. Which scope string should you supply to AcquireTokenForClient(...)?
Mail.Read
https://graph.microsoft.com/Mail.Read
https://graph.microsoft.com/.default
https://login.microsoftonline.com/common/Mail.Read
Answer Description
For the client-credentials flow you request an application permission access token. The Microsoft identity platform requires the special .default scope, which represents all application permissions that have been granted for the resource. Therefore the service should request the scope https://graph.microsoft.com/.default. This instructs Azure AD to issue a token containing every application permission consented on Microsoft Graph. Supplying Mail.Read alone or prefixed with the resource URI is valid only for delegated flows, and using the login endpoint URL is not a recognized scope format.
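A hedged MSAL.NET sketch of the client-credentials request; the tenant ID, client ID, and secret are placeholders:

using System.Threading.Tasks;
using Microsoft.Identity.Client;

IConfidentialClientApplication app = ConfidentialClientApplicationBuilder
    .Create("<client-id>")
    .WithClientSecret("<client-secret>")
    .WithAuthority(AzureCloudInstance.AzurePublic, "<tenant-id>")
    .Build();

// .default requests every application permission already consented on Microsoft Graph.
string[] scopes = { "https://graph.microsoft.com/.default" };

AuthenticationResult result = await app.AcquireTokenForClient(scopes).ExecuteAsync();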
Ask Bash
What is the client-credentials flow?
What does the .default scope represent and why is it required?
What is the difference between application and delegated permissions in Microsoft Graph?
Your JavaScript single-page application (SPA) must call an Azure Function App that is secured by the Microsoft Identity platform. Both applications are registered in the same Microsoft Entra ID tenant. To ensure the SPA can obtain an access token that the Function App will accept, what should you configure in Azure AD?
Configure both registrations as public client/native applications.
Assign a system-assigned managed identity to the SPA and give it access to the Function App.
Expose a custom delegated scope in the Function App registration and grant that scope as an API permission to the SPA.
Enable the implicit grant flow (ID tokens) on the SPA registration only.
Answer Description
The web API (the Function App) must first expose a scope so that clients can request permission to it. In the Function App's app registration you expose an API by defining a custom delegated scope (for example, api://<application-id>/access_as_user). You then add that scope to the SPA registration as an API permission and grant consent, which lets the SPA acquire an access token whose audience and scope the Function App will accept.
Enabling implicit grant for ID tokens only supports sign-in, not access to another API. Making the apps public clients or assigning a managed identity to the SPA does not affect interactive token acquisition between a browser client and a protected API.
Ask Bash
What is a delegated scope in Microsoft Entra ID?
What does 'expose an API' mean in an Azure Function App registration?
Why doesn't enabling implicit grant for ID tokens support API access?
Your ASP.NET Core API is already instrumented with the Application Insights .NET SDK. When an upstream service responds with HTTP 429, you want to record a diagnostic entry that is stored as a trace with severity Warning and that carries a custom dimension named RetryAfterSeconds. Which code snippet accomplishes this without any additional configuration changes?
telemetryClient.TrackEvent("Service throttling", new Dictionary<string,string> { { "RetryAfterSeconds", retryAfterSeconds.ToString() } }, null);
telemetryClient.TrackTrace("Service throttling", new Dictionary<string,string> { { "RetryAfterSeconds", retryAfterSeconds.ToString() } }, SeverityLevel.Warning);
telemetryClient.TrackTrace("Service throttling", TraceSeverity.Warning, new Dictionary<string,string> { { "RetryAfterSeconds", retryAfterSeconds.ToString() } });
telemetryClient.TrackTrace("Service throttling", SeverityLevel.Warning, new Dictionary<string,string> { { "RetryAfterSeconds", retryAfterSeconds.ToString() } });
Answer Description
The TelemetryClient class exposes an overload of TrackTrace that accepts three parameters: the trace message, a SeverityLevel enum value, and an IDictionary<string,string> for custom properties. Calling this overload with SeverityLevel.Warning flags the trace as a warning, and the dictionary adds the custom RetryAfterSeconds dimension that will appear as a custom attribute in Application Insights. The other snippets are invalid: one places the arguments in an unsupported order, one uses a non-existent TraceSeverity enum, and one sends the data as an event rather than a trace.
Ask Bash
What is Application Insights in Azure?
What is the difference between Trace and Event in Application Insights?
What is a SeverityLevel in Application Insights?
A general-purpose v2 storage account contains a container named payroll. Your application uploads each month's payroll PDFs to this container. For the first 30 days the files are read frequently; after that they are rarely accessed but must be kept for seven years. You need an automated solution that minimizes storage costs without affecting the first 30 days of usage. What should you do?
Create a lifecycle management rule that moves blobs in the payroll container to the Cool tier 30 days after the last modification and to the Archive tier 90 days after the last modification.
Modify the upload code so that each PDF is written directly to the Archive tier.
Set the default access tier of the payroll container to Cool so that new uploads are immediately stored in the Cool tier.
Enable blob soft delete on the storage account with a retention period of seven years.
Answer Description
Moving the blobs automatically between access tiers provides the lowest cost over the files' lifetime. A lifecycle management rule can be configured on a general-purpose v2 account to tier blobs to Cool after 30 days (lower storage cost with moderate availability) and then to Archive after 90 days (lowest cost, long-term storage). The rule requires no code changes and still keeps the files in Hot during the initial active month. Setting a container access tier to Cool keeps the blobs there from day one, increasing early-access latency and retrieval charges. Uploading directly to Archive would block normal reads during the first 30 days because Archive blobs must be rehydrated before they can be accessed. Soft delete is for protecting against accidental deletion, not cost management.
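A sketch of the lifecycle management policy JSON such a rule produces; the rule name and prefix are assumptions (the prefix scopes the rule to the payroll container):

{
  "rules": [
    {
      "name": "payroll-tiering",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "payroll/" ]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 }
          }
        }
      }
    }
  ]
}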
Ask Bash
What is Azure Blob lifecycle management?
What is the difference between Cool and Archive tiers in Azure Blob Storage?
How does rehydration work for blobs in the Archive tier?
Your background service runs without user interaction and must periodically list all users in your Azure AD tenant by calling Microsoft Graph. Using the Microsoft Identity platform, which OAuth 2.0 grant type and Microsoft Graph permission type should you implement to meet the requirement?
Authorization code grant with a delegated permission such as User.Read.All
Device code grant with a delegated permission such as Group.Read.All
Client credentials grant with an application permission such as User.Read.All
Implicit grant with an application permission such as Directory.Read.All
Answer Description
Because the service has no signed-in user, delegated permissions are impossible. The Microsoft identity platform recommends the OAuth 2.0 client credentials grant for daemon or service applications. This grant issues a token on behalf of the application itself, so it must request an application permission such as User.Read.All. Authorization code, device code, and implicit grants are user-interactive and use delegated permissions, which would fail when no user is present. Application permissions in the implicit flow are not supported.
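A hedged sketch of the daemon's call path, assuming the Microsoft Graph .NET SDK v5 and Azure.Identity; the tenant ID, client ID, and secret are placeholders:

using System;
using Azure.Identity;
using Microsoft.Graph;

// ClientSecretCredential drives the client credentials grant under the hood.
var credential = new ClientSecretCredential("<tenant-id>", "<client-id>", "<client-secret>");

var graphClient = new GraphServiceClient(
    credential,
    new[] { "https://graph.microsoft.com/.default" });

// Succeeds only if the app registration holds the User.Read.All application permission with admin consent.
var users = await graphClient.Users.GetAsync();
Console.WriteLine($"Users returned: {users?.Value?.Count}");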
Ask Bash
What is the OAuth 2.0 client credentials grant?
What is the difference between application and delegated permissions in Microsoft Graph?
Why can't authorization code, device code, or implicit grants work for background services?
Your ASP.NET Core web app is instrumented with Application Insights and appears in Application Map. Users report pages that call an on-premises SQL Server run slowly. In the map you see a thick dependency line between the Web App node and the SQL node. What action inside Application Map lets you open a complete end-to-end trace of one of those slow calls to locate the delay?
Configure a Standard availability test and review its results in the Failures blade.
Start Live Metrics Stream from the toolbar to observe real-time server counters for the web app.
Select the dependency link and choose "Go to details" to open End-to-end transaction details.
Switch to the Usage workspace and open the Users view to evaluate average session duration.
Answer Description
Selecting either a node or the dependency link opens a side pane that summarizes request and dependency metrics. From that pane, choosing "Go to details" opens the End-to-end transaction details blade for a single instance of the operation. This blade shows a Gantt chart of the request, downstream dependencies, and timing breakdowns, enabling you to pinpoint where the slowdown occurs. Live Metrics Stream, Usage analytics, and availability tests do not provide full distributed-trace detail for a specific call.
Ask Bash
What is Application Map in Azure Application Insights?
How does the 'Go to details' feature help in troubleshooting slow requests?
What are alternative tools to monitor real-time data in Azure beyond Application Map?
You are developing an ASP.NET Core Web API protected with the Microsoft Identity platform (v2 endpoint). Client apps will call the API either on behalf of a signed-in user (delegated flow) or as a daemon service (client-credentials flow). The API must programmatically verify the permission conveyed in the token. Which claim should the API evaluate in each scenario?
Delegated flow - check the scp (scope) claim; client-credentials flow - check the roles claim.
Delegated flow - check the roles claim; client-credentials flow - check the scp (scope) claim.
Delegated flow - check the groups claim; client-credentials flow - check the scope claim.
Delegated flow - check the aud claim; client-credentials flow - check the appid claim.
Answer Description
Access tokens obtained through delegated flows include the scp (scope) claim, which lists the delegated permissions that the user and client app have for the target API. Tokens obtained through the client-credentials flow never carry scp; instead they include the roles claim that lists the application roles (app-only permissions) granted to the calling service principal. Therefore, the API should inspect the scp claim when the call is on behalf of a user and the roles claim when the call is made by a daemon or background service. Inspecting roles for delegated tokens or scp for app-only tokens will always fail, because those claims are not issued in those contexts. Other claims such as aud, appid, or groups do not directly convey the permission being requested.
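A minimal sketch of how the API could evaluate both claim types; the scope name access_as_user and the app role Orders.Read.All are illustrative placeholders for whatever the app registration defines:

using System.Linq;
using System.Security.Claims;

public static class PermissionChecks
{
    public static bool HasRequiredPermission(ClaimsPrincipal user)
    {
        // Delegated tokens: scp is a space-separated list of granted scopes.
        string scopes = user.FindFirst("scp")?.Value
            ?? user.FindFirst("http://schemas.microsoft.com/identity/claims/scope")?.Value
            ?? string.Empty;
        bool hasDelegatedScope = scopes.Split(' ').Contains("access_as_user");

        // App-only tokens (client credentials): roles lists the granted app roles.
        bool hasAppRole = user.Claims.Any(c => c.Type == "roles" && c.Value == "Orders.Read.All");

        return hasDelegatedScope || hasAppRole;
    }
}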
Ask Bash
What is the difference between delegated flow and client-credentials flow in Microsoft Identity Platform?
How does the scp (scope) claim function in the token during a delegated flow?
Why does the client-credentials flow use the roles claim instead of scp?
You are creating an Azure Functions app for a production workload. The function must avoid cold starts by keeping at least one instance warm, scale out automatically to thousands of concurrent executions when traffic spikes, integrate with an Azure Virtual Network for secure data access, and incur no per-execution billing charges. When configuring the hosting for the function app in the Azure portal, which hosting plan should you select?
App Service (Dedicated) plan
Consumption plan
Isolated (App Service Environment) plan
Premium plan
Answer Description
The Azure Functions Premium plan keeps pre-warmed instances running to eliminate cold starts, scales out automatically to many instances, supports virtual network integration, and is billed by reserved instance core-seconds and memory rather than individual executions. The Consumption plan also auto-scales but has cold starts, does not support virtual network integration, and charges per execution. An App Service (Dedicated) plan avoids per-execution costs but does not scale to zero and requires manual or rules-based scaling, making unexpected high concurrency expensive. The Premium plan is also known as the Elastic Premium plan (the EP1-EP3 SKUs); it appears as a single Premium hosting choice in the portal, so the appropriate selection is the Premium plan.
Ask Bash
What are cold starts in Azure Functions?
How does Azure Functions integrate with virtual networks (VNets)?
How does the billing model for the Consumption plan differ from the Premium plan?
You are configuring an alert in Azure Monitor for an Application Insights resource. The alert must trigger as close to real time as possible and must not generate additional Log Analytics ingestion or query charges. Which signal type meets both requirements?
A scheduled query alert that runs the same KQL query every minute.
A log-based metric created from a KQL query on the Exceptions table.
A custom metric emitted by an Azure Function after reading telemetry from Logs.
The built-in pre-aggregated Application Insights metric.
Answer Description
Pre-aggregated (standard) Application Insights metrics are calculated automatically when telemetry is ingested. They become available in the Azure Monitor metrics database within seconds, can be evaluated by metric alerts at a 1-minute frequency, and incur no extra Log Analytics costs. Log-based metrics are updated by running a Kusto query on the Logs store; this introduces several-minute latency and the query execution counts toward Log Analytics charges. Scheduled query alerts have similar cost and latency characteristics, and custom code that emits its own metric still pays Log Analytics ingestion if it first queries Logs.
Ask Bash
What are Application Insights metrics in Azure Monitor?
What is KQL and how is it used in Azure Monitor?
Why are custom metrics emitted by Azure Functions not ideal for real-time alerts?
Your team is building an ASP.NET Core 6 Web API that will be secured by the Microsoft Identity platform. The API must respond with HTTP 401 when no bearer token is present and with HTTP 403 when the token does not contain the access_as_user scope. Which Program.cs configuration meets these requirements?
Call AddAuthentication(JwtBearerDefaults.AuthenticationScheme).AddJwtBearer(options => { /* configure Authority and Audience */ }); do not add additional authorization policies.
Call AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme).AddMicrosoftIdentityWebApp(configuration.GetSection("AzureAd")) and enable PKCE.
Call AddMicrosoftIdentityWebApiAuthentication(configuration, "AzureAd") and set JwtBearerOptions.SuppressMapInboundClaims = true without configuring extra policies.
Call AddAuthentication(JwtBearerDefaults.AuthenticationScheme).AddMicrosoftIdentityWebApi(configuration.GetSection("AzureAd")); then add an authorization policy that requires the access_as_user scope.
Answer Description
Calling AddMicrosoftIdentityWebApi configures the JWT bearer middleware so that unauthenticated requests are intercepted and converted to HTTP 401. Adding an authorization policy that requires the access_as_user scope causes ASP.NET Core to return HTTP 403 when a valid token lacks that scope. The WebApp/OpenID Connect helpers are intended for interactive server-rendered apps, and AddJwtBearer without an explicit scope-checking authorization policy would accept any token that passes signature and audience validation, so it would not generate the required 403 response.
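A hedged Program.cs sketch of the correct option, assuming Microsoft.Identity.Web (which supplies AddMicrosoftIdentityWebApi and the RequireScope policy extension); the policy name is illustrative:

using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.Identity.Web;

var builder = WebApplication.CreateBuilder(args);

// Missing or invalid bearer tokens are rejected with HTTP 401 by the JWT middleware.
builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApi(builder.Configuration.GetSection("AzureAd"));

// A valid token that lacks the access_as_user scope fails this policy with HTTP 403.
builder.Services.AddAuthorization(options =>
{
    options.AddPolicy("AccessAsUser", policy => policy.RequireScope("access_as_user"));
});

builder.Services.AddControllers();

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();
app.MapControllers().RequireAuthorization("AccessAsUser");
app.Run();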
Ask Bash
What is the purpose of the AddAuthentication method in ASP.NET Core?
What is an authorization policy in ASP.NET Core and how does it work?
What is the difference between AddMicrosoftIdentityWebApi and AddMicrosoftIdentityWebApp?
You run a build pipeline on an Azure VM that acts as a self-hosted Azure DevOps agent. The pipeline builds Docker images and must push them to a Standard tier Azure Container Registry (ACR) named contosoacr. You must use the least-privileged Azure-native authentication method available. What should you configure?
Create an Azure AD application and assign it the Owner role at the subscription scope; authenticate with its client secret.
Enable the admin user on contosoacr and use its username and password in the pipeline.
Enable anonymous pull access on contosoacr and use docker push without authentication.
Grant the VM's system-assigned managed identity the AcrPush role on contosoacr.
Answer Description
The VM can obtain an Azure Active Directory access token for the registry through its system-assigned managed identity. Assigning that identity the AcrPush role on the specific registry lets it push and pull images while avoiding broader permissions. Enabling the ACR admin user relies on long-lived credentials and is not least-privileged. Granting an application the Owner role at subscription scope exceeds the required rights. Anonymous pull allows unauthenticated pulls only and does not permit pushes, so it does not meet the requirement.
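On the agent VM, the pipeline could authenticate with that identity along these lines (a sketch; the image tag is illustrative). az acr login exchanges the managed identity's Azure AD token for a Docker credential:

az login --identity
az acr login --name contosoacr
docker push contosoacr.azurecr.io/web:v1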
Ask Bash
What is a system-assigned managed identity in Azure?
What is the AcrPush role in Azure?
Why is enabling the ACR admin user considered insecure?
Cool beans!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.