
Microsoft DevOps Engineer Expert Practice Test (AZ-400)

Use the form below to configure your Microsoft DevOps Engineer Expert Practice Test (AZ-400). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Questions
Number of questions in the practice test
Free users are limited to 20 questions; upgrade for unlimited access
Seconds Per Question
Determines how long you have to finish the practice test
Exam Objectives
Which exam objectives should be included in the practice test

Microsoft DevOps Engineer Expert AZ-400 Information

Microsoft DevOps Engineer Expert (AZ-400) Overview

The Microsoft DevOps Engineer Expert (AZ-400) exam tests your ability to bring development and operations teams together. It focuses on designing and implementing continuous integration, delivery, and feedback within Microsoft Azure. Candidates are expected to know how to plan DevOps strategies, manage source control, build pipelines, and ensure security and compliance in software development. The goal of the certification is to prove that you can help organizations deliver software faster, more reliably, and with better quality.

What You’ll Learn and Be Tested On

This exam covers a wide range of topics that reflect real-world DevOps work. You will learn about configuring pipelines in Azure DevOps, managing infrastructure as code, and using version control systems like Git. You will also explore how to set up testing strategies, monitor system performance, and use automation to improve reliability. Since DevOps is about collaboration, the AZ-400 also tests your ability to communicate effectively across development, operations, and security teams.

Who Should Take the Exam

The AZ-400 certification is meant for professionals who already have experience in both software development and IT operations. You should be comfortable using Azure tools and services before taking the test. Microsoft recommends that you already hold either the Azure Administrator Associate or the Azure Developer Associate certification. This ensures that you have the foundational knowledge needed to succeed in the DevOps Engineer role.

Why Practice Tests Are Important

Taking practice tests is one of the best ways to prepare for the AZ-400 exam. They help you understand the question format and identify areas where you need more study. Practice tests simulate the real exam environment, which can reduce anxiety and boost your confidence on test day. They also help you improve time management and ensure you can apply your knowledge under pressure. Regularly reviewing your results from practice exams makes it easier to track your progress and focus on weak areas.

The practice test covers the following AZ-400 exam objectives:

  • Design and implement processes and communications
  • Design and implement a source control strategy
  • Design and implement build and release pipelines
  • Develop a security and compliance plan
  • Implement an instrumentation strategy
Question 1 of 20

You are defining an infrastructure-as-code (IaC) strategy for an Azure landing zone that is provisioned with Bicep templates stored in an Azure Repos Git repository. The solution must meet the following requirements:

  1. A single set of template files must be promoted through dev, test, and prod so that every environment is created from exactly the same code base.
  2. Template syntax validation and linting tests must run automatically on every pull request (PR).
  3. Deployments to an environment must occur only after the corresponding branch is updated by an approved PR.

Which approach should you recommend?

  • Adopt trunk-based development with a single main branch containing shared Bicep templates and environment-specific parameter files, and configure a multi-stage YAML pipeline that runs PR validation tests and then, on merge to main, deploys sequentially to dev, test, and prod using environment approvals.

  • Create three separate Git repositories (one per environment) with identical templates, and configure independent pipelines that run tests and deploy on every commit to each repository's default branch.

  • Implement GitFlow: create long-lived develop, release, and master branches that each hold a copy of the templates, and trigger separate pipelines on every push to deploy the respective environment.

  • Use release-flow with a main branch for prod, short-lived release branches for dev and test, and trigger deployments only when a semantic-version tag is pushed to any branch.
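The trunk-based option above could be sketched as a single multi-stage YAML pipeline. Stage names, resource-group names, and parameter-file paths are illustrative assumptions, and the test and prod stages would repeat the dev pattern:

```yaml
# azure-pipelines.yml - illustrative sketch, not a complete pipeline
trigger:
  branches:
    include: [main]

stages:
- stage: Validate            # runs on PRs via a branch-policy build validation
  jobs:
  - job: Lint
    steps:
    - script: az bicep build --file main.bicep   # syntax validation / linting

- stage: Dev
  dependsOn: Validate
  jobs:
  - deployment: DeployDev
    environment: dev         # approvals/checks are configured on the environment
    strategy:
      runOnce:
        deploy:
          steps:
          - script: >
              az deployment group create
              --resource-group rg-dev
              --template-file main.bicep
              --parameters @dev.parameters.json

# Test and Prod stages follow the same pattern with their own
# parameter files and environment approvals.
```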

Question 2 of 20

You are designing a GitHub Actions pull request workflow with two jobs: build and analyze. The build job builds a Linux container image, tags it with the commit SHA, and pushes it to an Azure Container Registry (ACR).

According to security policy, the analyze job must run CodeQL static analysis from inside the container image created by the build job. This ensures that only tools from the hardened image are used and that no additional tools are installed on the GitHub-hosted runner.

How should you configure the analyze job to meet this requirement?

  • Make the analyze job dependent on the build job using the needs property, and add a container property to the analyze job that references the image in ACR.

  • Add the parameter run-in-container: true to every CodeQL action step in the analyze job.

  • Add the option build-mode: container under the github/codeql-action/init step in the analyze job.

  • In the analyze job, set a container property only on the github/codeql-action/analyze step.
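The needs/container combination in the first option could look roughly like this in a workflow file. The registry name, image tag, and analysis language are assumptions, and the registry authentication step is omitted for brevity:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # (ACR login step omitted)
      - run: |
          docker build -t myregistry.azurecr.io/hardened:${{ github.sha }} .
          docker push myregistry.azurecr.io/hardened:${{ github.sha }}

  analyze:
    needs: build                     # wait until the image exists in ACR
    runs-on: ubuntu-latest
    container:
      image: myregistry.azurecr.io/hardened:${{ github.sha }}  # whole job runs inside the hardened image
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: csharp
      - uses: github/codeql-action/analyze@v3
```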

Question 3 of 20

You manage an Azure DevOps project that uses Azure Boards. A production web app hosted in Azure App Service is instrumented with workspace-based Application Insights, which stores its telemetry in an existing Log Analytics workspace. Operations require that when the number of HTTP 5xx requests for the app exceeds 50 in any five-minute period, a bug must automatically be created in the Azure DevOps project that contains the query results causing the alert. What should you configure to meet this requirement?

  • Create an Azure Monitor log alert rule on the workspace and link an action group that triggers a Logic App using the Azure DevOps connector to create a work item.

  • Configure Azure DevOps service hooks to subscribe to Azure Monitor metric alerts raised in the resource group.

  • Enable Continuous Export from Application Insights to Azure DevOps Analytics and define a work-item creation rule on the exported data.

  • Add a Smart Detection rule in Application Insights and configure e-mail notification to the project's default team.

Question 4 of 20

You are defining an Azure Pipelines YAML stage that contains three jobs:

  • UnitTests
  • ApiTests
  • PublishArtifacts

The two test jobs must run concurrently to shorten execution time. The PublishArtifacts job must start only after both test jobs have finished successfully, and you do not want to serialize the test jobs or move them to a different stage. Which YAML configuration should you add to the PublishArtifacts job to meet these requirements?

  • Add the line condition: succeeded() to the PublishArtifacts job and omit the dependsOn property.

  • Move PublishArtifacts to its own stage and set a stage-level dependsOn for the test stage.

  • Set strategy: parallel with maxParallel: 2 at the stage level to control job concurrency and ordering.

  • Add the line dependsOn: [UnitTests, ApiTests] to the PublishArtifacts job and leave the two test jobs without dependencies.
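As a sketch, the dependsOn option looks like this (the test and publish commands are placeholders):

```yaml
jobs:
- job: UnitTests             # no dependsOn, starts immediately
  steps:
  - script: dotnet test tests/Unit

- job: ApiTests              # no dependsOn, runs in parallel with UnitTests
  steps:
  - script: dotnet test tests/Api

- job: PublishArtifacts
  dependsOn: [UnitTests, ApiTests]   # starts only after both jobs succeed
  steps:
  - publish: $(Build.ArtifactStagingDirectory)
    artifact: drop
```

By default, a job with dependsOn runs only when all of its dependencies succeed, so no explicit condition is needed here.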

Question 5 of 20

You are defining a multi-stage YAML pipeline in Azure DevOps. All deployments that target the Production environment must wait for approval from the Operations security group and must also satisfy an Azure Monitor alert gate that is evaluated during deployment. You want these requirements to apply no matter which pipeline deploys to Production. What is the correct way to implement the approvals and gates?

  • Add an approvals: block under the environment property in the YAML file and list both the Operations group and the Azure Monitor gate.

  • Create an organization-wide policy that restricts deployments to Production and attach the Azure Monitor alert as a variable group permission.

  • Configure the manual approval and Azure Monitor gate directly on the Production environment in the Azure DevOps portal so every pipeline that references that environment inherits the checks.

  • Set the approval: true attribute on the deployment job and reference an Azure Monitor service connection in the YAML.

Question 6 of 20

You are standardizing your organization's Azure Pipelines by extracting a frequently used job into a reusable YAML template named build-dotnet.yml. The job relies on values that different teams keep in their own variable groups. You want the calling pipeline to decide at queue time which variable group the template should import, without having to modify the template itself for every team.

Which approach should you implement inside build-dotnet.yml so that the same template can be reused with any variable group name chosen by the caller?

  • Convert the job into a task group instead of a YAML template and set the variable group as a task group parameter.

  • Reference the variable group name through a runtime variable like $(varGroupName) inside the variables section.

  • Declare a string parameter such as parameters: - name: varGroupName in the template and import the group with variables: - group: ${{ parameters.varGroupName }}.

  • Read the group name from a secure environment variable that each team sets on the agent machine.
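The parameterized-template option could be sketched as follows; the group name TeamA-Secrets and the build step are illustrative:

```yaml
# build-dotnet.yml - the reusable template
parameters:
- name: varGroupName
  type: string

jobs:
- job: Build
  variables:
  - group: ${{ parameters.varGroupName }}   # expanded at compile time
  steps:
  - script: dotnet build

# A calling pipeline would consume it like this:
# jobs:
# - template: build-dotnet.yml
#   parameters:
#     varGroupName: TeamA-Secrets
```

Template expressions (${{ }}) are resolved when the pipeline is compiled, which is why a runtime macro such as $(varGroupName) cannot be used to select a variable group.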

Question 7 of 20

Your team's Azure Repos Git repository contains several years of history in which artists committed *.png and *.fbx game-asset files that are typically 200-400 MB each. Cloning the repo now takes more than 30 minutes and CI pipelines routinely exceed the 5 GB checkout limit. You must migrate the existing objects to Git Large File Storage (LFS) so that only lightweight pointer files remain in regular Git history, while preserving commit metadata. Which approach meets the goal with the least manual effort?

  • Install Git LFS locally, then execute git lfs migrate import --include="*.png,*.fbx" --include-ref=refs/heads/*; afterward push the rewritten history with git push --force-with-lease --all && git push --force-with-lease --tags.

  • Run git lfs track "*.png" "*.fbx", commit the updated .gitattributes file, and push the branch normally.

  • Enable the Large File Checkout option in Azure Repos and perform git gc to prune existing large blobs.

  • Use git filter-branch with a tree-filter that calls git lfs track for each commit, then push the result to origin.

Question 8 of 20

You are configuring an Azure DevOps YAML pipeline that will run on a self-hosted agent in Azure. During execution, the pipeline must:

  1. Download a PFX certificate named web-api-cert from an Azure Key Vault and save it as a secure file on the agent.
  2. Generate a JSON Web Token (JWT) by calling the Azure Key Vault key jwt-signing. The private portion of the key must never leave the vault; the pipeline only needs to invoke the signing operation.

You create a service principal for the pipeline and add it to the Key Vault access policies. Which single set of permissions meets the requirements while following the principle of least privilege?

  • Certificates: Get | Keys: Sign

  • Secrets: Get, List | Keys: Get

  • Secrets: Get | Keys: Sign

  • Certificates: Get | Keys: Sign, Decrypt

Question 9 of 20

Your Azure DevOps project hosts a Git repository. The main branch is protected by branch policies that require a successful CI build and at least two reviewers before pull requests (PRs) can be completed. A small group of release managers must be able to occasionally finish urgent PRs into main without waiting for these checks. No other user must be able to bypass the policy. Which branch permission should you set to Allow for the release-manager group and explicitly Deny for all other groups?

  • Bypass policies when pushing

  • Bypass policies when completing pull requests

  • Contribute to pull requests

  • Force push (rewrite history)

Question 10 of 20

Your organization has 20 developers working on a single Azure Repos Git repository for an internal microservice. The microservice is deployed to production multiple times per day via a fully automated pipeline. Management reports that long-lived feature branches being merged at the end of each sprint often cause complex merge conflicts and delay releases. You need to redesign the branching strategy to reduce integration pain while still allowing developers to experiment in isolation for a few hours. Which branching approach should you recommend?

  • Require each developer to work in a personal fork and submit periodic patches directly to the main branch without pull-request validation.

  • Adopt trunk-based development by requiring every change to merge into the main branch within 24 hours through short-lived feature branches and mandatory pull requests.

  • Switch to a GitFlow model with permanent develop and release branches, merging feature branches only after user acceptance testing.

  • Continue using feature branches but extend branch lifetimes to the full release cycle and freeze the main branch except for hotfixes.

Question 11 of 20

Your team manages hundreds of Azure virtual machines across multiple subscriptions. You must ensure the nginx package is installed and running on every Ubuntu VM, detect and automatically remediate configuration drift about every 15 minutes, surface compliance data through Azure Policy, and store the configuration definition as code in GitHub. Which Azure-native service should you use to meet these requirements?

  • Azure Image Builder

  • Azure Automation State Configuration (DSC)

  • Azure Automanage Machine Configuration

  • Azure App Configuration Feature Manager

Question 12 of 20

Your organization uses multi-stage YAML pipelines in Azure DevOps to deploy dozens of microservices to a production environment. Leadership wants a delivery metric that shows the number of successful deployments to production each day over the last 30 days. You must deliver this metric in Power BI with minimal custom coding, while allowing product owners to filter by service (pipeline name).

Which approach should you use to obtain the data that drives the metric?

  • Query Azure Boards work items using the OData WorkItems entity and count those that have the state 'Closed' and the tag 'Production Deployment'.

  • Enable Continuous Export in Application Insights for each service and run a Kusto query in Power BI against the exported customEvents.

  • Create a custom Azure DevOps Analytics view that exposes PipelineRun data, then connect Power BI through the built-in Data Connector.

  • Connect Power BI to the Azure DevOps Analytics OData feed and query the PipelineRuns entity set.

Question 13 of 20

Your organization uses Azure Pipelines to deploy an ASP.NET Core application. Compliance rules require that any pipeline run that successfully deploys to the Prod environment must keep all build artifacts, logs, and test results for at least five years. All other runs can follow the default 30-day project retention policy. The multi-stage deployment pipeline is defined in YAML and must remain self-service for developers. You need an automated solution that meets the compliance requirement while minimizing storage costs for non-production runs. What should you recommend?

  • Create an Azure Artifacts feed with a 1,825-day retention policy and publish all build outputs as packages to that feed.

  • Define a retention policy in the pipeline's YAML that sets daysToKeep to 1,825 for all runs.

  • Set the project-level pipeline retention policy to 1,825 days.

  • Add a job in the production stage that invokes the Azure DevOps REST API to create a retention lease on the current run after a successful deployment.
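The retention-lease option could be sketched as a final step in the production stage. The payload fields follow the Leases - Add REST endpoint, but the api-version and exact field values shown here are assumptions to verify against current documentation:

```yaml
- stage: Prod
  jobs:
  - deployment: DeployProd
    environment: prod
    strategy:
      runOnce:
        deploy:
          steps:
          # ... deployment steps ...
          - script: |
              curl -s -u ":$(System.AccessToken)" \
                -H "Content-Type: application/json" \
                -d '[{"daysValid": 1825,
                      "definitionId": $(System.DefinitionId),
                      "runId": $(Build.BuildId),
                      "ownerId": "User:$(Build.RequestedForId)",
                      "protectPipeline": false}]' \
                "$(System.CollectionUri)$(System.TeamProject)/_apis/build/retention/leases?api-version=7.1"
            displayName: Create 5-year retention lease
            condition: succeeded()    # only runs after a successful deployment
```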

Question 14 of 20

You instrument an ASP.NET Core web app with Application Insights. You must discover which API operations are responsible for the worst client-perceived latency over the last 24 hours by calculating the 95th-percentile request duration and listing only the five slowest operations whose 95th percentile is greater than 3,000 ms. Which Kusto Query Language (KQL) statement should you run in Log Analytics to meet the requirement?

  • requests | where timestamp > ago(1d) | summarize percentile(duration, 95) by operationName | where duration > 3000 | top 5 by duration desc

  • requests | where timestamp > ago(1d) | summarize percent_duration = percentile(duration, 95) by operationName | where percent_duration < 3000 | top 5 by percent_duration asc

  • requests | where timestamp > ago(1d) | summarize P95 = percentile(duration, 95) by operationName | where P95 > 3000 | top 5 by P95 desc

  • requests | where timestamp > ago(1d) | summarize avgduration = avg(duration) by operationName | where avgduration > 3000 | top 5 by avgduration desc

Question 15 of 20

Within an Azure DevOps project that hosts multiple Git repositories, management wants to guarantee that only members of the Release Managers Azure AD group can change repository or branch permissions. All other developers must still be able to create branches, push commits, and submit pull requests. You need to make a single configuration change that will automatically apply to any existing or future repositories in the project. What should you do?

  • Create a mandatory branch policy on the default branch of every repository that requires an approval from the Release Managers group.

  • Use Azure AD Privileged Identity Management to assign the Azure DevOps Project Administrator role exclusively to the Release Managers group.

  • At the project level, open Git repositories security and set Manage permissions to Allow for the Release Managers group and Deny for Project Contributors.

  • Add the Release Managers group as Administrators and remove all other groups in the permissions page of each individual repository.

Question 16 of 20

Your organization operates an Azure virtual machine scale set that hosts 150 Ubuntu Server 20.04 instances. You must collect guest-level performance counters (CPU, memory, disk, and network) and visualize inter-process dependencies for these VMs in an existing Log Analytics workspace. The solution should require the least ongoing administration and align with Microsoft's recommended Azure Monitor agent strategy. Which action should you perform?

  • Deploy the legacy Log Analytics (MMA) agent extension to the scale set and enable the Service Map solution.

  • Enable VM Insights for the scale set and choose the existing Log Analytics workspace.

  • Create a Data Collection Rule that targets the scale set, then manually install the Dependency agent on every instance.

  • Install the Azure Diagnostics extension on each VM and configure guest-level metric collection to the workspace.

Question 17 of 20

You manage multiple Azure Kubernetes Service (AKS) clusters across different subscriptions. You must start collecting node-level performance metrics (CPU, memory, and disk), Kubernetes event logs, and container stdout/stderr streams in a single Log Analytics workspace so that the clusters automatically appear in Azure Monitor Container Insights workbooks. Network security policy blocks privileged containers that are not Microsoft-signed, and you need the quickest approach with minimal manual effort. What should you do for each cluster?

  • Use Helm to install the azuremonitor/container-azm-ms-agent chart on each cluster, providing the workspace ID and key manually.

  • From the Azure portal, open each AKS cluster, select Insights, click Enable, and choose the shared Log Analytics workspace.

  • Enable VM Insights for the cluster nodes and deploy the Dependency agent extension to every node pool.

  • Add the Application Insights SDK to every container image and set the instrumentation key as an environment variable.

Question 18 of 20

You manage an e-commerce solution instrumented with Application Insights. The custom metric CheckoutTime (milliseconds) is stored in the customMetrics table of a Log Analytics workspace. You must display the daily 95th-percentile CheckoutTime for the last seven days in Azure Monitor Logs. Which Kusto Query Language (KQL) query should you run?

  • customMetrics | summarize pct95 = percentile(value, 95) by name, bin(timestamp, 1d)

  • customMetrics | where name == "CheckoutTime" and timestamp > ago(7d) | summarize pct95 = pctile(value, 0.95) by bin(timestamp, 1h)

  • customMetrics | where name == "CheckoutTime" and timestamp > ago(7d) | summarize avgCheckout = avg(value) by bin(timestamp, 1d)

  • customMetrics | where name == "CheckoutTime" and timestamp > ago(7d) | summarize pct95 = percentile(value, 95) by bin(timestamp, 1d)

Question 19 of 20

The main branch of an Azure Repos Git repository must be protected so that code can be merged only after at least two members of the Security group approve the pull request and all discussion threads are resolved. Senior developers sometimes push minor documentation fixes to the pull request before completion; these pushes must not invalidate already collected approvals. Which branch policy configuration meets the requirements?

  • Create a branch policy on main with Minimum number of reviewers set to 2, add the Security group as a required reviewer, enable Require discussion resolution, and enable Reset code reviewer votes when there are new changes.

  • Lock the main branch and grant the Security group Contribute via Pull Request only; do not configure any additional branch policies.

  • Create a branch policy on main with Minimum number of reviewers set to 2, add the Security group as a required reviewer, enable Require discussion resolution, and leave Reset code reviewer votes when there are new changes cleared.

  • Create a branch policy that automatically includes the Security group as optional reviewers, sets Minimum number of reviewers to 0, and enables Auto-complete when two approvals are present.

Question 20 of 20

Your team wants to visualize code churn so that developers can see whether large, risky changes are still entering the main branch late in the iteration. You need a dashboard widget that shows, for the last 30 days, the total number of lines added and deleted per repository. You decide to create an Analytics query in Azure DevOps and pin the result to a dashboard. Which Analytics view or table and measures should you base the query on to obtain the required numbers?

  • Query the Code Churn view and aggregate the LinesAdded and LinesDeleted measures.

  • Query the Work Item Snapshot table and sum the Story Points and Effort measures.

  • Query the Build table and average the BuildDurationSeconds measure.

  • Query the Pull Request view and total the ReviewersDurationSeconds measure.