Microsoft DevOps Engineer Expert Practice Test (AZ-400)
Use the form below to configure your Microsoft DevOps Engineer Expert Practice Test (AZ-400). The practice test can be configured to only include certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft DevOps Engineer Expert AZ-400 Information
Microsoft DevOps Engineer Expert (AZ-400) Overview
The Microsoft DevOps Engineer Expert (AZ-400) exam tests your ability to bring development and operations teams together. It focuses on designing and implementing continuous integration, delivery, and feedback within Microsoft Azure. Candidates are expected to know how to plan DevOps strategies, manage source control, build pipelines, and ensure security and compliance in software development. The goal of the certification is to prove that you can help organizations deliver software faster, more reliably, and with better quality.
What You’ll Learn and Be Tested On
This exam covers a wide range of topics that reflect real-world DevOps work. You will learn about configuring pipelines in Azure DevOps, managing infrastructure as code, and using version control systems like Git. You will also explore how to set up testing strategies, monitor system performance, and use automation to improve reliability. Since DevOps is about collaboration, the AZ-400 also tests your ability to communicate effectively across development, operations, and security teams.
Who Should Take the Exam
The AZ-400 certification is meant for professionals who already have experience in both software development and IT operations. You should be comfortable using Azure tools and services before taking the test. Microsoft recommends that you already hold either the Azure Administrator Associate or the Azure Developer Associate certification. This ensures that you have the foundational knowledge needed to succeed in the DevOps Engineer role.
Why Practice Tests Are Important
Taking practice tests is one of the best ways to prepare for the AZ-400 exam. They help you understand the question format and identify areas where you need more study. Practice tests simulate the real exam environment, which can reduce anxiety and boost your confidence on test day. They also help you improve time management and ensure you can apply your knowledge under pressure. Regularly reviewing your results from practice exams makes it easier to track your progress and focus on weak areas.

Free Microsoft DevOps Engineer Expert AZ-400 Practice Test
- 20 Questions
- Unlimited
- Objectives: Design and implement processes and communications; Design and implement a source control strategy; Design and implement build and release pipelines; Develop a security and compliance plan; Implement an instrumentation strategy
 
You are defining an infrastructure-as-code (IaC) strategy for an Azure landing zone that is provisioned with Bicep templates stored in an Azure Repos Git repository. The solution must meet the following requirements:
1. A single set of template files must be promoted through dev, test, and prod so that every environment is created from exactly the same code base.
2. Template syntax validation and linting tests must run automatically on every pull request (PR).
3. Deployments to an environment must occur only after the corresponding branch is updated by an approved PR.
 
Which approach should you recommend?
Adopt trunk-based development with a single main branch containing shared Bicep templates and environment-specific parameter files, and configure a multi-stage YAML pipeline that runs PR validation tests and then, on merge to main, deploys sequentially to dev, test, and prod using environment approvals.
Create three separate Git repositories, one per environment, with identical templates, and configure independent pipelines that run tests and deploy on every commit to each repository's default branch.
Implement GitFlow: create long-lived develop, release, and master branches that each hold a copy of the templates, and trigger separate pipelines on every push to deploy the respective environment.
Use release-flow with a main branch for prod, short-lived release branches for dev and test, and trigger deployments only when a semantic-version tag is pushed to any branch.
Answer Description
Trunk-based development keeps one authoritative branch that contains the template code for every environment, ensuring all environments are created from the same files. A multi-stage YAML pipeline triggered by a pull-request validation build enforces automated linting and syntax checks before the code is merged. After the PR is approved and the main branch is updated, the same pipeline automatically proceeds to the dev, test, and prod deployment stages in sequence, each guarded by branch protection, environment checks, and approvals. The other options either duplicate templates across multiple long-lived branches or repositories (violating requirement 1) or deploy automatically on every push without the PR gate required by requirement 3.
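As a hedged sketch only (resource group names, parameter file paths, and environment names are illustrative, not part of the question), the recommended approach maps to a single multi-stage YAML pipeline along these lines, with the PR validation enforced by a build-validation branch policy on main and approvals configured on each environment:

```yaml
# Hedged sketch; names below are placeholders.
trigger:
  branches:
    include:
    - main
# PR validation for Azure Repos is attached via a build-validation branch policy
# on main, which runs the Validate stage before a PR can be completed.

stages:
- stage: Validate
  jobs:
  - job: Lint
    steps:
    - script: az bicep build --file main.bicep
      displayName: Bicep syntax validation and linting

- stage: Dev
  dependsOn: Validate
  # Deploy only for runs of main itself, not for PR validation builds.
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
  jobs:
  - deployment: DeployDev
    environment: dev            # approvals and checks live on the environment
    strategy:
      runOnce:
        deploy:
          steps:
          - checkout: self
          - script: >
              az deployment group create
              --resource-group rg-landingzone-dev
              --template-file main.bicep
              --parameters @parameters/dev.parameters.json
            displayName: Deploy shared templates to dev

# The Test and Prod stages repeat the same deployment job with their own
# environment (test, prod) and parameter file; prod is gated by environment approvals.
```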
You are designing a GitHub Actions pull request workflow with two jobs: build and analyze. The build job builds a Linux container image, tags it with the commit SHA, and pushes it to an Azure Container Registry (ACR).
According to security policy, the analyze job must run CodeQL static analysis from inside the container image created by the build job. This ensures that only tools from the hardened image are used and that no additional tools are installed on the GitHub-hosted runner.
How should you configure the analyze job to meet this requirement?
Make the analyze job dependent on the build job using the needs property, and add a container property to the analyze job that references the image in ACR.
Add the parameter run-in-container: true to every CodeQL action step in the analyze job.
Add the option build-mode: container under the github/codeql-action/init step in the analyze job.
In the analyze job, set a container property only on the github/codeql-action/analyze step.
Answer Description
The correct approach involves a two-job workflow. The analyze job must be configured to run only after the build job completes successfully; this is accomplished by adding a needs: build property to the analyze job. To run all of the job's steps inside the desired container, the container property is specified at the analyze job level, referencing the image that the build job pushed to the registry. This ensures the entire analysis, including the CodeQL init, build, and analyze steps, runs in the hardened container environment. The other options propose using non-existent parameters like build-mode or run-in-container, or incorrectly scope the container to a single step, which would not satisfy the security requirement for the entire analysis process.
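A minimal sketch of that shape (the registry name, image name, ACR credential secrets, and language selection are placeholders, not values from the question):

```yaml
# Hedged sketch of the two-job workflow.
name: pr-codeql

on: pull_request

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push hardened image to ACR
        run: |
          # Authentication to ACR (e.g. docker login) is omitted for brevity.
          docker build -t contoso.azurecr.io/hardened-codeql:${{ github.sha }} .
          docker push contoso.azurecr.io/hardened-codeql:${{ github.sha }}

  analyze:
    needs: build                                   # start only after build succeeds
    runs-on: ubuntu-latest
    container:                                     # every step runs inside this image
      image: contoso.azurecr.io/hardened-codeql:${{ github.sha }}
      credentials:
        username: ${{ secrets.ACR_USERNAME }}
        password: ${{ secrets.ACR_PASSWORD }}
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: csharp                        # illustrative language
      - uses: github/codeql-action/analyze@v3
```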
You manage an Azure DevOps project that uses Azure Boards. A production web app hosted in Azure App Service is instrumented with workspace-based Application Insights, which stores its telemetry in an existing Log Analytics workspace. Operations require that when the number of HTTP 5xx requests for the app exceeds 50 in any five-minute period, a bug must automatically be created in the Azure DevOps project that contains the query results causing the alert. What should you configure to meet this requirement?
Create an Azure Monitor log alert rule on the workspace and link an action group that triggers a Logic App using the Azure DevOps connector to create a work item.
Configure Azure DevOps service hooks to subscribe to Azure Monitor metric alerts raised in the resource group.
Enable Continuous Export from Application Insights to Azure DevOps Analytics and define a work-item creation rule on the exported data.
Add a Smart Detection rule in Application Insights and configure e-mail notification to the project's default team.
Answer Description
A log alert rule in Azure Monitor can run a Kusto query against the Log Analytics workspace and evaluate it on a fixed schedule. When the threshold condition is met, the alert fires and invokes the actions defined in an attached action group. By choosing Logic App as the action type, you can call a Logic App workflow that uses the built-in Azure DevOps connector to create a work item (for example, a bug) and include any details returned by the query. Continuous export cannot write directly to Azure DevOps, Azure DevOps service hooks deliver events from Azure DevOps to external systems rather than the reverse, and Smart Detection rules in Application Insights only send notifications such as e-mail; they do not automatically open work items.
You are defining an Azure Pipelines YAML stage that contains three jobs:
- UnitTests
- ApiTests
- PublishArtifacts

The two test jobs must run concurrently to shorten execution time. The PublishArtifacts job must start only after both tests have finished successfully, and you do not want to serialize the test jobs or move them to a different stage. Which YAML configuration should you add to the PublishArtifacts job to meet these requirements?
 
Add the line condition: succeeded() to the PublishArtifacts job and omit the dependsOn property.
Move PublishArtifacts to its own stage and set a stage-level dependsOn for the test stage.
Set strategy: parallel with maxParallel: 2 at the stage level to control job concurrency and ordering.
Add the line dependsOn: [UnitTests, ApiTests] to the PublishArtifacts job and leave the two test jobs without dependencies.
Answer Description
In Azure Pipelines, jobs within the same stage run in parallel by default unless an explicit dependency is declared with dependsOn. Adding dependsOn: [UnitTests, ApiTests] to the PublishArtifacts job correctly configures the pipeline to wait for both UnitTests and ApiTests to complete successfully before starting. Since the two test jobs have no dependencies on each other, they will execute concurrently. Using only a condition statement does not establish an execution order; the job would still attempt to run in parallel with the others. The strategy: parallel property is set on a job, not a stage, and is used to run multiple instances of that single job, not to coordinate different jobs. Moving PublishArtifacts to a separate stage would work but is explicitly forbidden by the question's constraints.
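A minimal sketch of the resulting stage (job names come from the question; the step contents are placeholders):

```yaml
jobs:
- job: UnitTests
  steps:
  - script: echo "run unit tests"      # placeholder step

- job: ApiTests
  steps:
  - script: echo "run API tests"       # placeholder step

# UnitTests and ApiTests declare no dependencies, so they run in parallel.
- job: PublishArtifacts
  dependsOn: [UnitTests, ApiTests]     # waits for both test jobs to finish successfully
  steps:
  - publish: $(Build.ArtifactStagingDirectory)
    artifact: drop
```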
You are defining a multi-stage YAML pipeline in Azure DevOps. All deployments that target the Production environment must wait for approval from the Operations security group and must also satisfy an Azure Monitor alert gate that is evaluated during deployment. You want these requirements to apply no matter which pipeline deploys to Production. What is the correct way to implement the approvals and gates?
Add an approvals: block under the environment property in the YAML file and list both the Operations group and the Azure Monitor gate.
Create an organization-wide policy that restricts deployments to Production and attach the Azure Monitor alert as a variable group permission.
Configure the manual approval and Azure Monitor gate directly on the Production environment in the Azure DevOps portal so every pipeline that references that environment inherits the checks.
Set the approval: true attribute on the deployment job and reference an Azure Monitor service connection in the YAML.
Answer Description
Approvals and checks such as group-based manual approval and Azure Monitor alert gates are properties of the environment itself. They are configured once on the Production environment (through the Azure DevOps portal or by using the REST API). Any YAML pipeline that references that environment automatically inherits those checks. YAML does not include syntax to declare or override environment approvals or gates inside a pipeline file, so attempting to add approvals: or similar keys to the YAML will be ignored or cause an error. Organization-level policies and variable group permissions do not provide deployment gating for an environment.
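For reference, a sketch of the YAML side (placeholder names): the deployment job only references the environment, while the Operations approval and the Azure Monitor alert check are configured on the Production environment itself and inherited by every pipeline that targets it.

```yaml
- stage: DeployProduction
  jobs:
  - deployment: Deploy
    environment: Production    # checks (approval, Azure Monitor gate) are defined on this environment
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy the application"   # placeholder deployment step
```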
You are standardizing your organization's Azure Pipelines by extracting a frequently used job into a reusable YAML template named build-dotnet.yml. The job relies on values that different teams keep in their own variable groups. You want the calling pipeline to decide at queue time which variable group the template should import, without having to modify the template itself for every team.
Which approach should you implement inside build-dotnet.yml so that the same template can be reused with any variable group name chosen by the caller?
Convert the job into a task group instead of a YAML template and set the variable group as a task group parameter.
Reference the variable group name through a runtime variable like $(varGroupName) inside the variables section.
Declare a string parameter such as parameters: - name: varGroupName and import the group with variables: - group: ${{ parameters.varGroupName }}.
 
Read the group name from a secure environment variable that each team sets on the agent machine.
Answer Description
A template can expose a string parameter that the consuming pipeline sets to the name of its preferred variable group. Inside the template you reference that parameter in the variables section by using compile-time expression syntax. When the caller provides a different value for that parameter, the variable-group reference is replaced at template expansion, allowing the same template to work with any variable group. Referencing the variable group through a runtime variable such as $(varGroupName) or environment variables will not work because the group name must be resolved before the pipeline is run. Task group parameters and predefined system variables cannot be used to inject an arbitrary variable-group name.
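A sketch of the idea (the build step and the group name passed by the caller are illustrative): the template declares the parameter and expands it at compile time, and each team supplies its own variable group name.

```yaml
# build-dotnet.yml (job template) -- hedged sketch
parameters:
- name: varGroupName
  type: string

jobs:
- job: Build
  variables:
  - group: ${{ parameters.varGroupName }}   # resolved at template expansion, before the run starts
  steps:
  - script: dotnet build --configuration Release

---
# Caller pipeline -- each team passes the name of its own variable group.
jobs:
- template: build-dotnet.yml
  parameters:
    varGroupName: TeamA-BuildSettings
```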
Your team's Azure Repos Git repository contains several years of history in which artists committed *.png and *.fbx game-asset files that are typically 200-400 MB each. Cloning the repo now takes more than 30 minutes and CI pipelines routinely exceed the 5 GB checkout limit. You must migrate the existing objects to Git Large File Storage (LFS) so that only lightweight pointer files remain in regular Git history, while preserving commit metadata. Which approach meets the goal with the least manual effort?
Install Git LFS locally, then execute git lfs migrate import --include="*.png,*.fbx" --include-ref=refs/heads/*; afterward push the rewritten history with git push --force-with-lease --all && git push --force-with-lease --tags.
Run git lfs track "*.png" "*.fbx", commit the updated .gitattributes file, and push the branch normally.
Enable the Large File Checkout option in Azure Repos and perform git gc to prune existing large blobs.
Use git filter-branch with a tree-filter that calls git lfs track for each commit, then push the result to origin.
Answer Description
The git lfs migrate import command rewrites the repository's history, replacing the specified paths with LFS pointer files and adding the binary content to the LFS store. Commit metadata such as authors, dates, and messages is preserved, and branch tips and tags are updated automatically, although the rewritten commits receive new hashes. After the rewrite, a force-with-lease push is required to replace the server-side history. Merely tracking patterns with git lfs track affects only future commits, while git filter-branch requires complex scripting and is discouraged for large histories. Azure Repos has no server option named "Large File Checkout," so enabling such a setting would not migrate existing blobs.
You are configuring an Azure DevOps YAML pipeline that will run on a self-hosted agent in Azure. During execution, the pipeline must:
- Download a PFX certificate named web-api-cert from an Azure Key Vault and save it as a secure file on the agent.
 - Generate a JSON Web Token (JWT) by calling the Azure Key Vault key jwt-signing. The private portion of the key must never leave the vault; the pipeline only needs to invoke the signing operation.
 
You create a service principal for the pipeline and add it to the Key Vault access policies. Which single set of permissions meets the requirements while following the principle of least privilege?
Certificates: Get | Keys: Sign
Secrets: Get, List | Keys: Get
Secrets: Get | Keys: Sign
Certificates: Get | Keys: Sign, Decrypt
Answer Description
To export web-api-cert with its private key, the pipeline needs the Secrets: Get permission, because Key Vault stores the PFX payload of a certificate as a secret. Fetching the certificate object alone would return only public metadata, not the private key.
To create the JWT without exposing the private key, the pipeline must invoke the Sign operation against the jwt-signing key. That requires the Keys: Sign permission. No other key operations (for example, Decrypt or Get) are necessary because the pipeline does not need to read or decrypt the key material.
Therefore, the minimal permission set is:
- Secrets: Get
 - Keys: Sign
 
All other answer choices either omit a required permission or grant unnecessary operations that violate the least-privilege principle.
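For illustration only (the service connection and vault names are placeholders, and the question does not prescribe specific pipeline tasks): the built-in AzureKeyVault task can pull the certificate through the secrets interface, while the JWT signature is produced by calling the key's sign operation so the private key stays in the vault.

```yaml
steps:
# Downloads web-api-cert; Key Vault serves a certificate's PFX payload through the
# secrets interface, which is why Secrets: Get is required.
- task: AzureKeyVault@2
  inputs:
    azureSubscription: kv-service-connection   # placeholder service connection name
    KeyVaultName: contoso-kv                   # placeholder vault name
    SecretsFilter: web-api-cert
    RunAsPreJob: false

# The JWT signature is then produced by invoking the sign operation on the
# jwt-signing key (the Key Vault keys/sign operation), which needs only the
# Keys: Sign permission; the private key material never leaves the vault.
```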
Your Azure DevOps project hosts a Git repository. The main branch is protected by branch policies that require a successful CI build and at least two reviewers before pull requests (PRs) can be completed. A small group of release managers must be able to occasionally finish urgent PRs into main without waiting for these checks. No other user must be able to bypass the policy. Which branch permission should you set to Allow for the release-manager group and explicitly Deny for all other groups?
Bypass policies when pushing
Bypass policies when completing pull requests
Contribute to pull requests
Force push (rewrite history)
Answer Description
The permission that overrides branch policies during the PR completion process is "Bypass policies when completing pull requests." Granting this permission to release managers lets them complete a PR without satisfying reviewer or build requirements. Setting the same permission to Deny (or leaving it Not set) for all other groups ensures that nobody else can circumvent the branch policies. "Bypass policies when pushing" applies only to direct pushes, not PR completions. "Force push (rewrite history)" allows history rewrites but does not affect policy checks. "Contribute to pull requests" lets users comment or vote but does not override policies.
Your organization has 20 developers working on a single Azure Repos Git repository for an internal microservice. The microservice is deployed to production multiple times per day via a fully automated pipeline. Management reports that long-lived feature branches being merged at the end of each sprint often cause complex merge conflicts and delay releases. You need to redesign the branching strategy to reduce integration pain while still allowing developers to experiment in isolation for a few hours. Which branching approach should you recommend?
Require each developer to work in a personal fork and submit periodic patches directly to the main branch without pull-request validation.
Adopt trunk-based development by requiring every change to merge into the main branch within 24 hours through short-lived feature branches and mandatory pull requests.
Switch to a GitFlow model with permanent develop and release branches, merging feature branches only after user acceptance testing.
Continue using feature branches but extend branch lifetimes to the full release cycle and freeze the main branch except for hotfixes.
Answer Description
Trunk-based development encourages very short-lived branches that are merged back into the main (trunk) branch at least daily, which keeps the integration surface small and prevents the accumulation of merge debt. Pull-request policies can still be applied so that developers can work in isolation for a few hours, submit a PR, and integrate quickly. Extending the lifetime of feature branches or adding permanent develop and release branches does the opposite-it increases divergence and makes later merges riskier. Requiring personal forks without PR validation makes integration even less frequent and would undermine continuous delivery.
Your team manages hundreds of Azure virtual machines across multiple subscriptions. You must ensure the nginx package is installed and running on every Ubuntu VM, detect and automatically remediate configuration drift about every 15 minutes, surface compliance data through Azure Policy, and store the configuration definition as code in GitHub. Which Azure-native service should you use to meet these requirements?
Azure Image Builder
Azure Automation State Configuration (DSC)
Azure Automanage Machine Configuration
Azure App Configuration Feature Manager
Answer Description
Azure Automanage Machine Configuration (based on the Guest Configuration agent) assigns a guest configuration policy that can both audit and remediate settings such as installed packages. The agent reevaluates approximately every 15 minutes, and compliance results are surfaced through Azure Policy for reporting and governance. Configuration code is written declaratively (DSC-style) and can be stored in source control such as GitHub. Azure Automation State Configuration provides DSC but has no direct Azure Policy integration. Azure App Configuration Feature Manager handles feature flags, not VM configuration. Azure Image Builder creates custom images and does not enforce ongoing configuration.
Your organization uses multi-stage YAML pipelines in Azure DevOps to deploy dozens of microservices to a production environment. Leadership wants a delivery metric that shows the number of successful deployments to production each day over the last 30 days. You must deliver this metric in Power BI with minimal custom coding, while allowing product owners to filter by service (pipeline name).
Which approach should you use to obtain the data that drives the metric?
Query Azure Boards work items using the OData WorkItems entity and count those that have the state 'Closed' and the tag 'Production Deployment'.
Enable Continuous Export in Application Insights for each service and run a Kusto query in Power BI against the exported customEvents.
Create a custom Azure DevOps Analytics view that exposes PipelineRun data, then connect Power BI through the built-in Data Connector.
Connect Power BI to the Azure DevOps Analytics OData feed and query the PipelineRuns entity set.
Answer Description
The correct approach is to connect Power BI directly to the Azure DevOps Analytics OData feed. The PipelineRuns entity set within the Analytics data model contains authoritative data for each execution of a YAML pipeline, including the outcome, environment details, completion date, and pipeline name. This allows for the creation of a report on successful production deployments per day with filtering capabilities, fulfilling the requirements with minimal custom work.
Creating an Analytics View is incorrect because this feature is limited to Azure Boards work item data and does not support pipeline data. Using Azure Boards work items with a specific tag is not an authoritative source for deployment outcomes and relies on manual process adherence. Enabling continuous export from Application Insights would require custom instrumentation in every pipeline to send deployment events, which violates the 'minimal custom coding' requirement.
Your organization uses Azure Pipelines to deploy an ASP.NET Core application. Compliance rules require that any pipeline run that successfully deploys to the Prod environment must keep all build artifacts, logs, and test results for at least five years. All other runs can follow the default 30-day project retention policy. The multi-stage deployment pipeline is defined in YAML and must remain self-service for developers. You need an automated solution that meets the compliance requirement while minimizing storage costs for non-production runs. What should you recommend?
Create an Azure Artifacts feed with a 1,825-day retention policy and publish all build outputs as packages to that feed.
Define a retention policy in the pipeline's YAML that sets daysToKeep to 1,825 for all runs.
Set the project-level pipeline retention policy to 1,825 days.
Add a job in the production stage that invokes the Azure DevOps REST API to create a retention lease on the current run after a successful deployment.
Answer Description
A project- or pipeline-wide retention period would keep every run for five years, dramatically increasing storage consumption. Azure Artifacts retention policies cover only packages and do not preserve the entire pipeline run's logs or test results. Instead, the production stage can include a lightweight script or task that calls the Azure DevOps REST API to set the keepForever property to true for the current run. This action creates a retention lease that overrides normal cleanup policies, keeping the specific run and all of its artifacts indefinitely, while other runs continue to be deleted after 30 days.
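A hedged sketch of such a step (the REST call is assembled from predefined pipeline variables; a project name containing spaces would need URL encoding), added to the end of the production deployment job:

```yaml
- bash: |
    # Mark the current run as retained (keepForever) so normal retention cleanup skips it.
    curl -sf -X PATCH \
      -u ":${SYSTEM_ACCESSTOKEN}" \
      -H "Content-Type: application/json" \
      -d '{"keepForever": true}' \
      "${SYSTEM_COLLECTIONURI}${SYSTEM_TEAMPROJECT}/_apis/build/builds/${BUILD_BUILDID}?api-version=7.0"
  displayName: Retain run for five-year compliance
  condition: succeeded()
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)   # map the job access token into the script
```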
You instrument an ASP.NET Core web app with Application Insights. You must discover which API operations are responsible for the worst client-perceived latency over the last 24 hours by calculating the 95th-percentile request duration and listing only the five slowest operations whose 95th percentile is greater than 3,000 ms. Which Kusto Query Language (KQL) statement should you run in Log Analytics to meet the requirement?
requests | where timestamp > ago(1d) | summarize percentile(duration, 95) by operationName | where duration > 3000 | top 5 by duration desc
requests | where timestamp > ago(1d) | summarize percent_duration = percentile(duration, 95) by operationName | where percent_duration < 3000 | top 5 by percent_duration asc
requests | where timestamp > ago(1d) | summarize P95 = percentile(duration, 95) by operationName | where P95 > 3000 | top 5 by P95 desc
requests | where timestamp > ago(1d) | summarize avgduration = avg(duration) by operationName | where avgduration > 3000 | top 5 by avgduration desc
Answer Description
The query must:
- Retrieve request telemetry from the last day (requests table, timestamp filter).
 - Use the percentile aggregation to compute the 95th-percentile duration for each operationName.
 - Filter out any operation whose P95 is not above 3,000 ms.
 - Return only the five operations with the highest P95 values.
 
The correct query meets every criterion: requests | where timestamp > ago(1d) | summarize P95 = percentile(duration, 95) by operationName | where P95 > 3000 | top 5 by P95 desc
The distractors fail because they either:
- Compare the raw duration column instead of the percentile result (making the filter ineffective).
 - Use average rather than percentile, giving the wrong measure of tail latency.
 - Keep operations below 3,000 ms or sort ascending, returning the fastest operations instead of the slowest.
 
Within an Azure DevOps project that hosts multiple Git repositories, management wants to guarantee that only members of the Release Managers Azure AD group can change repository or branch permissions. All other developers must still be able to create branches, push commits, and submit pull requests. You need to make a single configuration change that will automatically apply to any existing or future repositories in the project. What should you do?
Create a mandatory branch policy on the default branch of every repository that requires an approval from the Release Managers group.
Use Azure AD Privileged Identity Management to assign the Azure DevOps Project Administrator role exclusively to the Release Managers group.
At the project level, open Git repositories security and set Manage permissions to Allow for the Release Managers group and Deny for Project Contributors.
Add the Release Managers group as Administrators and remove all other groups in the permissions page of each individual repository.
Answer Description
Setting the Git repositories security at the project level controls the default permissions that every repository inherits. By granting Allow on the Manage permissions right to the Release Managers group and setting Deny (or leaving Not set) for other groups such as Project Contributors, only Release Managers can edit repository or branch permissions. Because the setting is applied at the project scope, every current and newly created repository inherits the rule without further action. Branch policies or per-repository administrator assignments would have to be recreated for each repo, and Azure AD role assignments do not control granular Git permissions inside Azure DevOps.
Your organization operates an Azure virtual machine scale set that hosts 150 Ubuntu Server 20.04 instances. You must collect guest-level performance counters (CPU, memory, disk, and network) and visualize inter-process dependencies for these VMs in an existing Log Analytics workspace. The solution should require the least ongoing administration and align with Microsoft's recommended Azure Monitor agent strategy. Which action should you perform?
Deploy the legacy Log Analytics (MMA) agent extension to the scale set and enable the Service Map solution.
Enable VM Insights for the scale set and choose the existing Log Analytics workspace.
Create a Data Collection Rule that targets the scale set, then manually install the Dependency agent on every instance.
Install the Azure Diagnostics extension on each VM and configure guest-level metric collection to the workspace.
Answer Description
Enabling VM Insights from the portal (or ARM/CLI) for the scale set is the quickest, lowest-effort method that also follows Microsoft's agent roadmap. The onboarding wizard automatically installs and configures the Azure Monitor agent (AMA) and the Dependency agent, then links them to the selected Log Analytics workspace. Installing only the Azure Diagnostics extension collects some metrics but cannot build dependency maps. Creating a Data Collection Rule plus manual Dependency agent installation meets the requirements but adds unnecessary administrative steps. Deploying the legacy Log Analytics (MMA) agent and Service Map solution is no longer recommended and conflicts with Microsoft's transition to AMA.
You manage multiple Azure Kubernetes Service (AKS) clusters across different subscriptions. You must start collecting node-level performance metrics (CPU, memory, and disk), Kubernetes event logs, and container stdout/stderr streams in a single Log Analytics workspace so that the clusters automatically appear in Azure Monitor Container Insights workbooks. Network security policy blocks privileged containers that are not Microsoft-signed, and you need the quickest approach with minimal manual effort. What should you do for each cluster?
Use Helm to install the azuremonitor/container-azm-ms-agent chart on each cluster, providing the workspace ID and key manually.
From the Azure portal, open each AKS cluster, select Insights, click Enable, and choose the shared Log Analytics workspace.
Enable VM Insights for the cluster nodes and deploy the Dependency agent extension to every node pool.
Add the Application Insights SDK to every container image and set the instrumentation key as an environment variable.
Answer Description
Enabling Container Insights from the Azure portal (Insights blade) pushes Microsoft-signed monitoring components to the cluster as a DaemonSet and automatically connects them to the selected Log Analytics workspace. This collects performance counters, container logs, and Kubernetes events that drive the Container Insights workbooks. Injecting the Application Insights SDK gathers only application telemetry, not node or container metrics. VM Insights targets traditional virtual machines and does not ingest Kubernetes events or container logs. Manually deploying the Helm chart also works, but it requires additional steps and script maintenance, making it slower and more error-prone than the portal-based enablement.
You manage an e-commerce solution instrumented with Application Insights. The custom metric CheckoutTime (milliseconds) is stored in the customMetrics table of a Log Analytics workspace. You must display the daily 95th-percentile CheckoutTime for the last seven days in Azure Monitor Logs. Which Kusto Query Language (KQL) query should you run?
customMetrics | summarize pct95 = percentile(value, 95) by name, bin(timestamp, 1d)
customMetrics | where name == "CheckoutTime" and timestamp > ago(7d) | summarize pct95 = pctile(value, 0.95) by bin(timestamp, 1h)
customMetrics | where name == "CheckoutTime" and timestamp > ago(7d) | summarize avgCheckout = avg(value) by bin(timestamp, 1d)
customMetrics | where name == "CheckoutTime" and timestamp > ago(7d) | summarize pct95 = percentile(value, 95) by bin(timestamp, 1d)
Answer Description
The query must filter the customMetrics table to the CheckoutTime metric for the required time range, then calculate the 95th percentile of the value column and group the result into one-day time buckets. The percentile() aggregation is the correct KQL function; pctile() is not valid syntax. Aggregating by one-hour bins or omitting the initial filter would not satisfy the reporting requirements, and averaging values would not produce a percentile-based metric.
The main branch of an Azure Repos Git repository must be protected so that code can be merged only after at least two members of the Security group approve the pull request and all discussion threads are resolved. Senior developers sometimes push minor documentation fixes to the pull request before completion; these pushes must not invalidate already collected approvals. Which branch policy configuration meets the requirements?
Create a branch policy on main with Minimum number of reviewers set to 2, add the Security group as a required reviewer, enable Require discussion resolution, and enable Reset code reviewer votes when there are new changes.
Lock the main branch and grant the Security group Contribute via Pull Request only; do not configure any additional branch policies.
Create a branch policy on main with Minimum number of reviewers set to 2, add the Security group as a required reviewer, enable Require discussion resolution, and leave Reset code reviewer votes when there are new changes cleared.
Create a branch policy that automatically includes the Security group as optional reviewers, sets Minimum number of reviewers to 0, and enables Auto-complete when two approvals are present.
Answer Description
A branch policy can enforce both reviewer and discussion-resolution requirements. Setting Minimum number of reviewers to 2 and adding the Security group as a required reviewer guarantees that at least two Security members approve every pull request. Enabling Require discussion resolution blocks completion while any active thread exists. Leaving Reset code reviewer votes when there are new changes cleared ensures that additional non-functional pushes (for example, typo fixes) do not throw away approvals already granted, so maintainers are not forced to restart the review cycle. Selecting the reset option would cancel approvals on every push, while lowering the reviewer count, making reviewers optional, or locking the branch without policies would all fail to meet one or more requirements.
Your team wants to visualize code churn so that developers can see whether large, risky changes are still entering the main branch late in the iteration. You need a dashboard widget that shows, for the last 30 days, the total number of lines added and deleted per repository. You decide to create an Analytics query in Azure DevOps and pin the result to a dashboard. Which Analytics view or table and measures should you base the query on to obtain the required numbers?
Query the Code Churn view and aggregate the LinesAdded and LinesDeleted measures.
Query the Work Item Snapshot table and sum the Story Points and Effort measures.
Query the Build table and average the BuildDurationSeconds measure.
Query the Pull Request view and total the ReviewersDurationSeconds measure.
Answer Description
The Azure DevOps Analytics Code Churn view (also referred to in OData as the CodeChurn table) records one row per file changed in each commit and exposes the LinesAdded and LinesDeleted measures. By aggregating these two measures over a time range you get total churn (lines added plus lines deleted) per repository. Work Item, Build, and Pull Request tables do not contain line-level code delta information, so they cannot provide churn metrics directly.