
Microsoft DevOps Engineer Expert Practice Test (AZ-400)

Use the form below to configure your Microsoft DevOps Engineer Expert Practice Test (AZ-400). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.


Microsoft DevOps Engineer Expert AZ-400 Information

Microsoft DevOps Engineer Expert (AZ-400) Overview

The Microsoft DevOps Engineer Expert (AZ-400) exam tests your ability to bring development and operations teams together. It focuses on designing and implementing continuous integration, delivery, and feedback within Microsoft Azure. Candidates are expected to know how to plan DevOps strategies, manage source control, build pipelines, and ensure security and compliance in software development. The goal of the certification is to prove that you can help organizations deliver software faster, more reliably, and with better quality.

What You’ll Learn and Be Tested On

This exam covers a wide range of topics that reflect real-world DevOps work. You will learn about configuring pipelines in Azure DevOps, managing infrastructure as code, and using version control systems like Git. You will also explore how to set up testing strategies, monitor system performance, and use automation to improve reliability. Since DevOps is about collaboration, the AZ-400 also tests your ability to communicate effectively across development, operations, and security teams.

Who Should Take the Exam

The AZ-400 certification is meant for professionals who already have experience in both software development and IT operations. You should be comfortable using Azure tools and services before taking the test. Microsoft recommends that you already hold either the Azure Administrator Associate or the Azure Developer Associate certification. This ensures that you have the foundational knowledge needed to succeed in the DevOps Engineer role.

Why Practice Tests Are Important

Taking practice tests is one of the best ways to prepare for the AZ-400 exam. They help you understand the question format and identify areas where you need more study. Practice tests simulate the real exam environment, which can reduce anxiety and boost your confidence on test day. They also help you improve time management and ensure you can apply your knowledge under pressure. Regularly reviewing your results from practice exams makes it easier to track your progress and focus on weak areas.

Exam objectives covered:

  • Design and implement processes and communications
  • Design and implement a source control strategy
  • Design and implement build and release pipelines
  • Develop a security and compliance plan
  • Implement an instrumentation strategy


Question 1 of 20

Your company builds multiple JavaScript microservices in Azure DevOps. Security policy requires that any third-party npm package be stored in an internal repository so that future removals from the public registry do not break builds. In addition, only versions explicitly approved by release engineering may be consumed by production pipelines while development teams should be able to test newer versions quickly. Which approach meets these requirements with minimal administrative effort?

  • Add a service connection to npmjs.org and lock all package.json dependencies to specific build numbers to prevent unintended changes.

  • Create a single Azure Artifacts npm feed, add npmjs.org as an upstream source, and use feed views to promote packages through Development and Production stages.

  • Mirror the entire npmjs.org registry to an Azure Blob Storage static website and configure it as a private npm registry for all pipelines.

  • Publish internal packages to a GitHub Packages npm registry for each project and let package.json reference npmjs.org directly for open-source dependencies.
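
For reference, the promotion flow described in the Azure Artifacts option is driven by feed views: release engineering promotes an approved version to a view, and production pipelines resolve packages only from that view. A minimal .npmrc sketch, assuming a hypothetical organization (fabrikam) and feed (web-packages); development pipelines would omit the @Release suffix to pick up newer, unpromoted versions:

    ; .npmrc for production pipelines - only packages promoted to the Release view resolve
    registry=https://pkgs.dev.azure.com/fabrikam/_packaging/web-packages@Release/npm/registry/
    always-auth=true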

Question 2 of 20

A co-worker accidentally rewrote the history of the remote release branch by using git reset --hard and then force-pushing the changes. This action removed the two most recent commits. After running git fetch in your local repository to synchronize with the remote, you are tasked with recovery. Which command should you execute to find the hashes of the lost commits?

  • git reflog

  • git log --graph --decorate

  • git fsck --lost-found

  • git stash list
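
For context, Git reflogs record where refs used to point, so the pre-reset hashes are usually still recoverable from a local clone. A small sketch (branch name and placeholder illustrative):

    # Show recent movements of HEAD and local refs
    git reflog

    # If reflogs are enabled (the default), the remote-tracking branch also
    # remembers its tip from before the forced update was fetched
    git reflog show origin/release

    # Once the lost hash is identified, put a branch back on it
    git branch release-recovered <lost-commit-hash>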

Question 3 of 20

Your Azure DevOps project must notify an internal line-of-business (LOB) HTTP endpoint whenever a work item is moved to the Done state under the Fabrikam\Web area path. The LOB endpoint expects a POST request, requires a Personal Access Token (PAT) supplied in a custom HTTP header, and must not receive calls for any other state changes.

Which Azure DevOps configuration meets these requirements while generating the fewest possible outbound calls?

  • Enable the continuous deployment trigger on the team's release pipeline and set a post-deployment REST call task that targets the LOB endpoint.

  • Build an Azure Logic App that polls the Azure DevOps REST API every 5 minutes for work items whose State changed to Done and then calls the LOB endpoint.

  • Add a Service hooks subscription that posts work-item update events to a Microsoft Teams channel, then configure a Teams connector to forward messages to the LOB endpoint.

  • Create a Service hooks subscription that uses the Work item updated trigger, adds filters for State = Done and Area Path = Fabrikam\Web, supplies the PAT in a custom header, and posts to the LOB endpoint URL.

Question 4 of 20

Your organization uses Azure DevOps for work tracking. You need to build a shared dashboard that helps portfolio managers monitor flow efficiency. The dashboard must display (1) the average cycle time for user stories completed during the current sprint and (2) the average lead time for user stories over the last three sprints. The data must refresh automatically and allow managers to drill down to the underlying work items without requiring additional licenses or external services. Which solution should you recommend?

  • Schedule a nightly export of work items to CSV using the Azure DevOps CLI and visualize the data in Excel Online.

  • Build a Power BI report that queries the Azure DevOps REST APIs and pin the report to the dashboard.

  • Create a query-based chart widget that groups user stories by Completed Date and averages effort.

  • Add the Cycle Time and Lead Time Analytics widgets to an Azure DevOps dashboard and scope them by team iteration dates.

Question 5 of 20

Your team uses Azure DevOps Services for all repositories and pipelines. You need a hosted solution to store private npm and NuGet packages. Developers must be able to:

  • cache public packages from npmjs.com and nuget.org so builds still succeed if those services are offline,
  • apply different permissions to distinct internal teams at the individual package level,
  • avoid operating any additional infrastructure.

Which approach should you recommend?

  • Create a single Azure Artifacts feed, enable upstream sources to npmjs.com and nuget.org, and publish all internal packages to that feed.

  • Enable GitHub Packages for each repository and publish internal packages there while consuming public packages directly from their original registries.

  • Push all internal packages as OCI artifacts to Azure Container Registry and configure builds to restore dependencies from the registry.

  • Deploy an on-premises Artifacts proxy server that periodically synchronizes with GitHub Packages and external registries.

Question 6 of 20

You are designing a multi-stage YAML pipeline in Azure DevOps that deploys to a staging environment and then to production. The pipeline must pause automatically and require approval from at least two members of the Azure DevOps security group named release-admins before any job targeting production starts. You want the approval configuration to be reusable across pipelines and managed outside of pipeline code. What should you do?

  • Configure a branch policy on the main branch that mandates two reviewers from the release-admins group before any pull request can be completed.

  • Set the production job to use a pool that maps to the release-admins group and define approvals: 2 within the job's YAML definition.

  • Insert a ManualValidation task in the production stage YAML and configure it to allow only the release-admins group with a minimum reviewer count of two.

  • Create an environment named Prod, add an "Approvals and checks" entry that requires at least two reviewers from the release-admins group, and reference the environment in the production deployment job.
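
For reference, once an environment has an "Approvals and checks" entry, a deployment job only needs to reference it; the approval policy lives outside the pipeline code and is reused by every pipeline that targets the environment. A minimal YAML sketch (stage, job, and environment names illustrative):

    stages:
    - stage: Production
      jobs:
      - deployment: DeployProd
        environment: Prod      # approvals configured on this environment pause the job until granted
        strategy:
          runOnce:
            deploy:
              steps:
              - script: echo "Deploying to production"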

Question 7 of 20

You maintain a 130 GB mono-repository hosted in Azure Repos Git. Developers complain that simple commands such as git status and git log become progressively slower a few weeks after cloning, even though local disk space is still sufficient. You decide to use the open-source Scalar tool to keep each developer clone performant without asking engineers to remember periodic maintenance steps. Which action should you perform on every existing clone to achieve this goal?

  • Re-clone the repository by using "scalar clone --single-branch".

  • Execute "scalar register" in the root of each local repository.

  • Increase core.preloadIndex to true and raise pack.windowMemory.

  • Set the configuration value gc.auto to 0 in every clone.
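
For context, Scalar can adopt an existing working copy rather than requiring a re-clone. A short sketch (path illustrative):

    cd /src/big-monorepo
    scalar register    # enrolls the clone so scheduled background maintenance keeps it fast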

Question 8 of 20

Your company uses Azure DevOps Services Boards to manage work for several product teams. Leadership asks for a Power BI report that shows the 'average cycle time (days)' for completed 'User Story' work items, grouped by Area Path, for the past six months so they can monitor flow efficiency trends. You will connect Power BI directly to the Azure DevOps Analytics service. Which Analytics view or entity should you use as the primary data source to calculate the requested metric accurately?

  • Current Work Items view (WorkItems entity) using only the latest revision

  • Work Item History view (WorkItemRevision entity) aggregating field changes

  • Legacy FlowFields view from the deprecated Azure DevOps Data Warehouse

  • Work Item Snapshot view (WorkItemSnapshot entity) filtered to User Story work items

Question 9 of 20

Your Azure Repos Git repository has grown to more than 5 GB because several hundred *.mp4 files were committed months ago. Clone and fetch operations are now slow. You must reduce the repository's size while still keeping the videos available in the history through Git Large File Storage (LFS). The solution should rewrite the existing commits with minimal manual effort and require contributors to take only one clear action afterward. Which approach should you recommend?

  • Enable Scalar on the current repository to virtualize large files and execute "git maintenance run" on all clients.

  • Add *.mp4 entries to .gitattributes, run "git gc --aggressive" on the server, and ask contributors to pull the latest commits.

  • Run "git lfs migrate import --include='*.mp4' --everything" in a local clone, force-push the rewritten branches and tags, then instruct all contributors to reclone the repository.

  • Create a new repository, enable Git LFS, recommit the mp4 files via LFS, and delete the original repository.
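
For reference, an LFS history rewrite typically looks like the sketch below. Because every commit hash changes, existing clones become stale, which is why recloning is the single follow-up action contributors are asked to take (remote name illustrative):

    # Rewrite all branches and tags so *.mp4 blobs become LFS pointers
    git lfs migrate import --include="*.mp4" --everything

    # Publish the rewritten history
    git push origin --force --all
    git push origin --force --tags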

Question 10 of 20

You are designing a branching strategy for an Azure Repos Git project that supports 150 developers. Each developer must integrate code several times per day, and the automated build pipeline should always run from a single branch to maintain a linear history. In addition, production hotfixes must be delivered quickly without blocking ongoing feature development. Which branching model best satisfies these requirements?

  • GitFlow with long-lived develop, feature, release, and hotfix branches

  • Forking workflow where each developer works in a personal fork and merges back to the upstream repository only at milestone completion

  • Trunk-based development with a single main branch and short-lived feature and hotfix branches merged at least daily

  • Feature branching that keeps a dedicated branch per feature until it is completed at the end of each sprint

Question 11 of 20

You are standardizing the build pipelines for dozens of repositories that contain a mix of .NET Framework, .NET Core, and native C++ projects. You need a single YAML template that, when invoked after the build stage, will automatically locate all produced unit-test assemblies, execute the tests regardless of framework (MSTest, NUnit, or xUnit), and publish the results to the Tests tab without an extra publishing step. Which built-in Azure Pipelines task should you include in the template to meet these requirements?

  • DotNetCoreCLI@2 task with command "test"

  • PublishTestResults@2 task

  • CTest@1 task

  • Visual Studio Test task (VSTest@2)
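
For reference, a reusable template step built around the Visual Studio Test task might look like the sketch below; the task discovers MSTest, NUnit, and xUnit assemblies through their test adapters and publishes results to the Tests tab without a separate publishing step (file patterns illustrative):

    steps:
    - task: VSTest@2
      inputs:
        testSelector: testAssemblies
        testAssemblyVer2: |
          **\*Tests*.dll
          !**\obj\**
        platform: '$(buildPlatform)'
        configuration: '$(buildConfiguration)'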

Question 12 of 20

You need to define a branching and deployment strategy for a GitHub repository that hosts an enterprise-wide Node.js microservice. The requirements are:

  • The 'main' branch must always contain code that is ready to be promoted to production.
  • New work should be isolated without blocking other teams.
  • Code reviews and automated tests must run before the code is merged.
  • The repository's history should stay linear and easy to trace.

Which approach satisfies these requirements and follows GitHub Flow principles?

  • Create environment branches named 'dev', 'stage', and 'prod'; merge pull requests sequentially between them to promote releases while allowing direct pushes to 'dev' for urgent fixes.

  • Create short-lived topic branches from 'main', open a pull request that requires at least one approving review and a passing CI workflow, then squash and merge the pull request back into 'main', triggering an automated deployment pipeline.

  • Maintain a persistent 'development' branch where all features are merged. Once a sprint ends, create a pull request from 'development' to 'main' and deploy after manual testing.

  • Commit directly to 'main', tag a semantic version when ready, and trigger the release pipeline from the tag without using pull requests or CI checks.

Question 13 of 20

Your Azure DevOps project uses an Azure Repos Git repository. You need to ensure that developers can read the release branch and submit pull requests that target that branch, but they must be prevented from pushing commits directly to it. Which branch-level permission configuration meets this requirement for the Developers security group on the release branch?

  • Read: Allow, Create branch: Deny, Contribute: Allow

  • Read: Allow, Contribute: Deny, Contribute to pull requests: Allow

  • Leave all permissions at their inherited values and enable a branch policy that requires one approval

  • Read: Deny, Contribute: Not set, Contribute to pull requests: Allow

Question 14 of 20

Your organization uses Azure Boards for work tracking and GitHub Enterprise Cloud for source control. Management requires automatic, bidirectional traceability between user stories and the commits and pull requests created as part of the GitHub Flow process. Developers must not add hyperlinks manually and administrative effort must be minimal. Which action should you take to meet these requirements?

  • Create an Azure DevOps service hook that posts Work item updated events to a webhook configured in the GitHub repository.

  • Add the Azure Boards work item URL to the pull request template and require developers to keep it updated for every change.

  • Insert an Associate-Work-Items task into the build pipeline and pass work item IDs through a pipeline variable for every build.

  • Install and configure the Azure Boards app for the GitHub organization, then have developers include the pattern AB# in commit messages or pull request descriptions.
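
For context, once the Azure Boards app is installed for the GitHub organization, linking is driven entirely by the AB# mention syntax; no manual hyperlinks are needed. An illustrative commit message (work item ID hypothetical):

    git commit -m "Harden input validation in checkout service (AB#1234)"

Wording such as "Fixes AB#1234" in a pull request description can also transition the linked work item when the pull request is completed.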

Question 15 of 20

You are designing an Azure Pipelines YAML pipeline to deploy version 2.0 of a microservice to Azure Kubernetes Service (AKS). Business requirements state:

  • Database schema changes that ship with 2.0 must be applied exactly once before any 2.0 pods start receiving traffic.
  • Production traffic must be shifted gradually from the existing 1.x pods to 2.0 pods, with the ability to halt and roll back at any percentage.

Which pipeline design satisfies these requirements while following DevOps best practices?

  • Create two stages in the YAML pipeline. Stage 1 runs a deployment job that executes a migration script against the production database. Stage 2 depends on Stage 1 and runs a Kubernetes manifest task configured with strategy: canary and incremental traffic percentages (e.g., 10%, 30%, 100%), promoting the release after each successful increment.

  • Package the migration script as an init container inside the 2.0 deployment so that every new pod applies the schema before starting; deploy the manifest with a rolling update strategy.

  • Perform a blue-green deployment by creating a parallel AKS namespace for 2.0, apply the migration after the namespace is live, and switch traffic by updating the cluster's DNS entry.

  • Use a Helm upgrade task with the --atomic flag and a pre-upgrade hook that applies the migration, then set replicas to gradually increase using kubectl scale commands in separate jobs.
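
For reference, the two-stage shape described in the first option could be sketched roughly as below; the migration script, manifest path, environment name, and increment percentage are all illustrative, and a real pipeline would repeat or promote the canary step as traffic is shifted:

    stages:
    - stage: SchemaMigration
      jobs:
      - job: ApplyMigration
        steps:
        - script: ./apply-migration.sh   # runs exactly once, before any 2.0 pods receive traffic

    - stage: CanaryRollout
      dependsOn: SchemaMigration
      jobs:
      - deployment: DeployCanary
        environment: prod-aks
        strategy:
          runOnce:
            deploy:
              steps:
              - task: KubernetesManifest@0
                inputs:
                  action: deploy
                  strategy: canary
                  percentage: '10'
                  manifests: manifests/deployment.yml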

Question 16 of 20

Your organization hosts its source code in GitHub Enterprise Cloud and uses Azure DevOps Services to run all build and release pipelines. You are asked to create a new multi-stage YAML pipeline stored in the repository and to configure the integration so that:

  • Azure Pipelines can publish commit statuses and required checks on every pull request.
  • No long-lived personal access tokens (PATs) are stored in the Azure DevOps project or in the repository.

After creating the pipeline in Azure DevOps, which action must you take in GitHub to satisfy these requirements?

  • Create a GitHub personal access token scoped to repo:status and store it as a secret variable in the pipeline.

  • Install the Azure Pipelines GitHub App in the organization and grant it access to the repository.

  • Enable Dependabot security updates for the repository.

  • Add an organization-level webhook that targets the Azure DevOps hooks endpoint.

Question 17 of 20

Your organization maintains thousands of NUnit integration tests that run in an Azure Pipelines build for the main branch. Quality engineers must visualize in Power BI the seven-day rolling average test pass rate for the main branch only and must exclude any results that come from test‐rerun attempts. Which Analytics OData entity should you query to meet these requirements with the least additional data manipulation?

  • TestResult

  • BuildTestResultDailySummary

  • BuildTimelineRecords

  • TestRuns

Question 18 of 20

Your team stores production incident records in an Azure Log Analytics workspace. The workspace contains a custom table named Incident_CL with these columns:

  • IncidentId_g (string)
  • State_s (string)
  • CreatedTime_t (datetime)
  • ResolvedTime_t (datetime)

You need to add a metric tile to an Azure dashboard that shows the average mean time to recovery (MTTR) for the last 30 days. Incidents that are still open must be excluded. Which Kusto Query Language (KQL) statement should you use?

  • Incident_CL | where CreatedTime_t > ago(30d) | summarize avg(ResolvedTime_t - CreatedTime_t)

  • Incident_CL | where CreatedTime_t > ago(30d) and isnotnull(ResolvedTime_t) | extend TTR = ResolvedTime_t - CreatedTime_t | summarize avg(TTR)

  • Incident_CL | where CreatedTime_t > ago(30d) and State_s == "Resolved" | summarize avg(CreatedTime_t - ResolvedTime_t)

  • Incident_CL | where CreatedTime_t > ago(30d) and isnotnull(ResolvedTime_t) | summarize avg(datetime_diff('minute', CreatedTime_t, ResolvedTime_t))
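
For context, the general KQL pattern for an average time-to-recover over a custom table like Incident_CL is to filter out records that are still open, compute a per-incident duration, and average it. A sketch against the columns listed above:

    Incident_CL
    | where CreatedTime_t > ago(30d) and isnotnull(ResolvedTime_t)
    | extend TimeToRecover = ResolvedTime_t - CreatedTime_t
    | summarize MTTR = avg(TimeToRecover)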

Question 19 of 20

A development team is migrating a four-year-old Git repository that contains roughly 8 GB of Photoshop (PSD) and PNG assets to Azure Repos. During a trial push, the operation is rejected because several objects are larger than 100 MB. The team must preserve the assets in the repository history, reduce clone size, and allow future pushes of the same file types without further rejections or manual steps. Which approach should you recommend?

  • Use a sparse checkout to exclude the graphics directory locally, then push the existing repository to Azure Repos.

  • Install git-fat on developer machines so that large files are stored outside Git when the repository is pushed to Azure Repos.

  • Rewrite the repository with git lfs migrate import to convert *.psd and *.png files to LFS pointers, then push to Azure Repos with Git LFS enabled.

  • Split the repository with git filter-repo and keep the binary assets only on a separate branch before pushing.
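
For context, once the history has been converted, ongoing protection for these file types comes from LFS tracking rules stored in .gitattributes, so future pushes of the same extensions are stored as pointers automatically. A small illustrative sketch of adding tracking rules (git lfs migrate import normally records them during the rewrite as well):

    git lfs install
    git lfs track "*.psd" "*.png"      # writes the patterns to .gitattributes
    git add .gitattributes
    git commit -m "Track design assets with Git LFS"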

Question 20 of 20

Your company streams Microsoft Defender for Cloud recommendations to an Azure Log Analytics workspace by enabling continuous export. Security analysts need a KPI that shows the mean time to remediate (MTTR) for critical container-image vulnerabilities during the last 30 days, so that the KPI can be pinned to an Azure Workbook.

Which Kusto query should you recommend?

  • SecurityAlert | where Severity == "High" and AlertName == "Container registry image vulnerable" | summarize MTTR = avg(datetime_diff("hour", TimeGenerated, ClosedTime))

  • SecurityRecommendation | where RecommendationSeverity == "High" and RecommendationType == "ContainerRegistryVulnerabilities" | summarize firstDetected = arg_min(TimeGenerated, Status), lastResolved = arg_max(TimeGenerated, Status) by RecommendationId | where Status == "Resolved" | extend HoursToFix = datetime_diff("hour", firstDetected, lastResolved) | summarize MTTR = avg(HoursToFix)

  • SecurityRecommendation | where RecommendationSeverity == "High" and RecommendationType == "ContainerRegistryVulnerabilities" | extend HoursToFix = datetime_diff("hour", TimeGenerated, ResolvedTime) | summarize MTTR = avg(HoursToFix)

  • SecurityRecommendation | where RecommendationSeverity == "High" and RecommendationType == "ContainerRegistryVulnerabilities" | summarize firstDetected = arg_min(TimeGenerated, Status), lastResolved = arg_max(TimeGenerated, Status) by RecommendationId | where Status == "Resolved" | extend HoursToFix = datetime_diff("hour", lastResolved, firstDetected) | summarize MTTR = avg(HoursToFix)