Microsoft DevOps Engineer Expert Practice Test (AZ-400)

Use the form below to configure your Microsoft DevOps Engineer Expert Practice Test (AZ-400). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft DevOps Engineer Expert AZ-400 Information

Microsoft DevOps Engineer Expert (AZ-400) Overview

The Microsoft DevOps Engineer Expert (AZ-400) exam tests your ability to bring development and operations teams together. It focuses on designing and implementing continuous integration, delivery, and feedback within Microsoft Azure. Candidates are expected to know how to plan DevOps strategies, manage source control, build pipelines, and ensure security and compliance in software development. The goal of the certification is to prove that you can help organizations deliver software faster, more reliably, and with better quality.

What You’ll Learn and Be Tested On

This exam covers a wide range of topics that reflect real-world DevOps work. You will learn about configuring pipelines in Azure DevOps, managing infrastructure as code, and using version control systems like Git. You will also explore how to set up testing strategies, monitor system performance, and use automation to improve reliability. Since DevOps is about collaboration, the AZ-400 also tests your ability to communicate effectively across development, operations, and security teams.

Who Should Take the Exam

The AZ-400 certification is meant for professionals who already have experience in both software development and IT operations. You should be comfortable using Azure tools and services before taking the test. Microsoft recommends that you already hold either the Azure Administrator Associate or the Azure Developer Associate certification. This ensures that you have the foundational knowledge needed to succeed in the DevOps Engineer role.

Why Practice Tests Are Important

Taking practice tests is one of the best ways to prepare for the AZ-400 exam. They help you understand the question format and identify areas where you need more study. Practice tests simulate the real exam environment, which can reduce anxiety and boost your confidence on test day. They also help you improve time management and ensure you can apply your knowledge under pressure. Regularly reviewing your results from practice exams makes it easier to track your progress and focus on weak areas.

The practice test draws from the AZ-400 exam objective domains:

  • Design and implement processes and communications
  • Design and implement a source control strategy
  • Design and implement build and release pipelines
  • Develop a security and compliance plan
  • Implement an instrumentation strategy

Question 1 of 20

An organization stores client secrets in Azure Key Vault and uses a YAML pipeline in Azure DevOps to deploy resources. Compliance mandates that:

  • Secrets must never appear in logs or artifacts.
  • Only tasks that require a secret may read it at runtime.
  • Build administrators must be unable to view or export the secret values from the Azure DevOps portal.

Which design meets all requirements while keeping the pipeline definition entirely in Git?

  • Create a variable group in Azure DevOps, manually add each secret as a secret variable, and reference the group in the pipeline.

  • Declare the secrets directly in the YAML file by using variables with the isSecret: true attribute and reference them in the tasks.

  • Commit an encrypted JSON file containing the secrets to the repository and decrypt it during the build by using a GPG private key stored as a secure file.

  • Call the AzureKeyVault@2 task in the job to download only the required secrets at runtime; reference the resulting secret variables in subsequent tasks.
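
For context, a minimal sketch of how the AzureKeyVault@2 task named in the last option is typically wired into a YAML job (the service connection and vault names are placeholders):

    steps:
      # Downloads only the listed secrets at runtime and maps them to
      # secret pipeline variables, which are masked in logs.
      - task: AzureKeyVault@2
        inputs:
          azureSubscription: 'my-service-connection'   # placeholder connection name
          KeyVaultName: 'my-keyvault'                  # placeholder vault name
          SecretsFilter: 'DbPassword,ApiKey'           # request only the secrets a task needs
          RunAsPreJob: false
      - script: ./deploy.sh
        env:
          DB_PASSWORD: $(DbPassword)   # secret variables must be mapped explicitly into scripts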

Question 2 of 20

You are designing the branch and release process for a new microservice repository in GitHub. The team wants to adopt GitHub Flow to ensure every change is traceable from code review to production and can be deployed multiple times per day through their Azure Pipelines CI/CD. Which practice is required to correctly implement GitHub Flow?

  • For every change, create a short-lived feature branch from main, open a pull request for peer review, and merge the pull request only after it receives at least one approval and all automated CI checks pass.

  • Maintain a long-lived develop branch for integration and create release branches for deployments.

  • Use trunk-based development where developers push changes directly to main, using feature flags to hide incomplete work and bypassing pull requests for trivial changes.

  • Require that all code changes are committed directly to the main branch to be deployed in a weekly batch.

Question 3 of 20

Your team follows a requirement-based testing approach in Azure DevOps. They want a built-in, one-click view that shows each User Story together with the test cases that validate it and the pass/fail outcome of the most recent pipeline run. You need to configure the artifacts so that the Test Plans "Requirement traceability" view can generate this matrix without additional manual effort.

Which configuration provides the required end-to-end traceability?

  • Assign the identical Iteration Path to both the User Story and its test cases.

  • Prefix the title of every test case with the work item ID of the User Story.

  • Create a link of type "Tests" from each User Story to the related test cases.

  • Add the same tag to the User Story and to each validating test case.

Question 4 of 20

Your team wants Azure Monitor to create an Azure DevOps work item every time a Log Analytics-based alert with severity = Sev0 is triggered. You start creating an action group and choose the Azure DevOps action type, but the portal warns that a required prerequisite is missing. Which prerequisite must be satisfied before you can successfully add the Azure DevOps action to the group?

  • Generate a personal access token in Azure DevOps with Work Items (read and write) scope and supply it to the action group.

  • Configure an Azure DevOps service hook that listens for alert events from Azure Monitor.

  • Install the Azure Boards extension for Visual Studio Code on the build server used by the project.

  • Register Azure Monitor as a multi-tenant application in Azure Active Directory and grant it the Work Item API permission.

Question 5 of 20

You instrument an ASP.NET Core web app with Application Insights. You must discover which API operations are responsible for the worst client-perceived latency over the last 24 hours by calculating the 95th-percentile request duration and listing only the five slowest operations whose 95th percentile is greater than 3,000 ms. Which Kusto Query Language (KQL) statement should you run in Log Analytics to meet the requirement?

  • requests
    | where timestamp > ago(1d)
    | summarize percent_duration = percentile(duration, 95) by operationName
    | where percent_duration < 3000
    | top 5 by percent_duration asc

  • requests
    | where timestamp > ago(1d)
    | summarize percentile(duration, 95) by operationName
    | where duration > 3000
    | top 5 by duration desc

  • requests
    | where timestamp > ago(1d)
    | summarize P95 = percentile(duration, 95) by operationName
    | where P95 > 3000
    | top 5 by P95 desc

  • requests
    | where timestamp > ago(1d)
    | summarize avgduration = avg(duration) by operationName
    | where avgduration > 3000
    | top 5 by avgduration desc

Question 6 of 20

You are designing an Azure Pipelines YAML pipeline that will run only on Microsoft-hosted agents. The pipeline must deploy Bicep templates to an Azure subscription while meeting the following requirements:

  • Do not store any long-lived client secrets or certificates in Azure DevOps.
  • Rely on short-lived tokens issued by Azure AD.
  • Allow scoping permissions to a single resource group.

Which authentication approach should you implement in the pipeline's service connection to meet the requirements?

  • Create an Azure Resource Manager service connection that uses a service principal secured by a client secret stored in a variable group.

  • Create an Azure Resource Manager service connection that uses Workload Identity Federation (OIDC) with a federated credential on an Azure AD application.

  • Store an App Service publish profile as a secure file and reference it during the deployment stage.

  • Enable a system-assigned managed identity on the Microsoft-hosted agent and grant it the required Azure RBAC role.
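
As background for the options above, a sketch of a Bicep deployment step that uses an Azure Resource Manager service connection; whether the connection authenticates with a client secret or with workload identity federation is configured on the connection itself, not in the YAML (names are placeholders):

    steps:
      - task: AzureCLI@2
        inputs:
          azureSubscription: 'arm-connection'   # placeholder service connection name
          scriptType: 'bash'
          scriptLocation: 'inlineScript'
          inlineScript: |
            # Deploy the Bicep template to a single resource group,
            # matching the scoped-permission requirement.
            az deployment group create \
              --resource-group rg-app \
              --template-file main.bicep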

Question 7 of 20

Your organization has an Azure virtual machine named VM1 that runs Windows Server 2022. The team plans to onboard VM1 to VM Insights by using the current Azure Monitor agent (AMA) so that both guest performance metrics and dependency maps are collected in an existing Log Analytics workspace. Which agent configuration must be present on VM1 before VM Insights can start sending the required telemetry?

  • Azure Monitor agent and Telegraf agent

  • Only the Azure Monitor agent

  • Azure Monitor agent and Dependency agent

  • Only the Log Analytics (MMA) agent

Question 8 of 20

You are designing an Azure Pipelines YAML definition with two stages, Build and Test. The Test stage uses a matrix strategy to run jobs across multiple platforms. You need to ensure the Test stage only begins after the Build stage completes successfully, and that the entire pipeline fails if any job in the Test stage's matrix fails. Which YAML structure correctly implements these requirements?

  • Define the Test stage with a dependsOn: Build property and no custom condition.

  • Define the Test stage with dependsOn: Build and condition: always().

  • Define the Test stage with condition: succeeded('Build') but without a dependsOn property.

  • Add continueOnError: true to the job within the Test stage's matrix.
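
A minimal sketch of the stage layout the question describes (platform values are illustrative):

    stages:
      - stage: Build
        jobs:
          - job: Compile
            steps:
              - script: echo "build"
      - stage: Test
        dependsOn: Build   # Test starts only after Build; the default condition requires success
        jobs:
          - job: RunTests
            strategy:
              matrix:
                linux:
                  imageName: 'ubuntu-latest'
                windows:
                  imageName: 'windows-latest'
            pool:
              vmImage: $(imageName)
            steps:
              - script: echo "test on $(imageName)"
    # With no continueOnError overrides, any failed matrix job fails the stage and the run.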

Question 9 of 20

Your organization has a single Azure Repos Git repository that contains several microservices. Eight Scrum teams work in parallel and must keep a high rate of continuous integration (CI). You want to:

  • Detect integration issues within one working day.
  • Keep feature work isolated until it is ready.
  • Produce hot-fixes for the current production version after a release without blocking new development.

Which branching strategy best satisfies these requirements?

  • Gitflow with a permanent develop branch, long-lived feature branches, and separate release branches merged back into develop and main after QA.

  • Trunk-based development with short-lived feature branches off main and time-boxed release branches created only when a version is shipped.

  • Environment branches (dev, test, prod) with features committed directly to the environment branch that matches the current deployment stage.

  • Fork-and-pull model where each team maintains its own fork and submits pull requests to a shared upstream repository.

Question 10 of 20

Your organization stores its source code in Azure DevOps Repos. You need the build stage of a new multi-language YAML pipeline to automatically scan every commit for secrets, vulnerable open-source dependencies, Infrastructure-as-Code misconfigurations, and other security issues. The solution must use a single task, output SARIF-formatted results, and break the build if any high-severity findings are detected, without requiring you to configure each scanner individually. Which task should you add to the pipeline?

  • Add the MicrosoftSecurityDevOps@1 task from the Microsoft Security DevOps extension.

  • Add an OWASP Dependency Check task to scan third-party libraries.

  • Add a Trivy@0 task to perform container image vulnerability scanning.

  • Add the CodeQLAnalysis@0 task and configure a CodeQL database for each language.
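
A sketch of how the task from the first option typically appears in a pipeline; input names should be verified against the extension's current documentation:

    steps:
      - task: MicrosoftSecurityDevOps@1
        displayName: 'Run Microsoft Security DevOps scanners'
        inputs:
          break: true   # documented input to fail the build on high-severity findings
    # Results are emitted in SARIF format and can be published as a pipeline artifact.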

Question 11 of 20

Your team maintains 60 Windows and Linux virtual machines (VMs) in Azure that are used as self-hosted build agents for Azure Pipelines. You must be able to review CPU, memory, and disk utilization trends for all build agents from a single Azure Monitor workbook without signing in to each VM or manually deploying extensions one by one. You plan to store the collected data in a central Log Analytics workspace.

Which action should you perform first to meet these requirements?

  • Assign a Guest Configuration policy that audits high CPU usage on all VMs.

  • Enable Azure Monitor VM Insights for the subscription and link it to the Log Analytics workspace.

  • Install the Azure Diagnostics extension on each VM and send metrics to the workspace.

  • Enable Azure Monitor Container Insights on the Log Analytics workspace.

Question 12 of 20

Your team uses a multi-stage YAML pipeline that has a 'Build' stage for running integration tests and a 'Deploy' stage that targets a production environment. You need to ensure the 'Deploy' stage only runs if the test pass rate from the 'Build' stage is at least 95%. The solution must use built-in Azure DevOps functionality and automatically check the threshold after the test run completes.

Which configuration should you implement?

  • On the production environment, configure a quality gate for the 'Tests pass ratio' check with a threshold of 95.

  • In the 'Build' stage, add the parameter failTaskOnFailedTests: true to a PublishTestResults@2 task that runs after the tests.

  • In the 'Build' stage, add the parameter minimumTestsPassedPercentage: 95 to the VsTest@2 task.

  • On the 'Deploy' stage, add the condition: succeeded('Build').

Question 13 of 20

You maintain a multi-stage Azure Pipelines YAML definition for a large .NET solution. The pipeline already generates and publishes code coverage reports in the Cobertura format. The security team requires that the build must fail automatically if the overall line coverage drops below 80%. You have already installed the 'Build Quality Checks' extension from the Visual Studio Marketplace. You need to implement this requirement with the least amount of configuration.

Which task should you add to the pipeline to meet this requirement?

  • A PublishCodeCoverageResults task with a failBuildOnCoverageBelow: 80 parameter.

  • A PowerShell task with a script to parse the Cobertura XML and fail the build.

  • A branch policy with a code coverage status check.

  • A BuildQualityChecks task with the coverage threshold set to 80.
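
A sketch of how the marketplace task from the last option is commonly configured (input names per the extension's documentation; the major version tag may differ in your organization):

    steps:
      - task: BuildQualityChecks@9
        displayName: 'Enforce 80% line coverage'
        inputs:
          checkCoverage: true
          coverageFailOption: 'fixed'   # compare against a fixed threshold
          coverageType: 'lines'
          coverageThreshold: '80'       # fail the build below 80% line coverage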

Question 14 of 20

You need to write a Kusto Query Language (KQL) statement that returns the average server-side duration of requests for each operation in the last 30 minutes, omitting rows with empty duration values, and orders the results so the slowest operation appears first. Which query meets these requirements?

  • Requests
    | where timestamp > ago(30m) and isnotempty(duration)
    | extend avg_duration = avg(duration) by operation_Name
    | sort avg_duration desc
    
  • Requests
    | where timestamp > ago(30m) and isnotempty(duration)
    | summarize avg(duration) by operation_Name
    | order by avg_duration asc
    
  • Requests
    | where timestamp > ago(30m) and isempty(duration)
    | summarize duration = avg(duration) by operation_Name
    | order by duration desc
    
  • Requests
    | where timestamp > ago(30m) and isnotempty(duration)
    | summarize avg_duration = avg(duration) by operation_Name
    | sort by avg_duration desc
    
Question 15 of 20

Your company stores all code in GitHub Enterprise Cloud and deploys workloads to both Azure and AWS. The security team enforces FedRAMP High rules that prohibit long-lived cloud credentials in CI/CD systems. Instead, pipelines must obtain short-lived tokens issued through OpenID Connect (OIDC) at run time. Pipeline definitions must live in the same repository as the code. You need to recommend a deployment automation solution that meets these requirements with the least additional components or custom tasks. Which solution should you choose?

  • Azure Pipelines YAML pipelines with an AWS service connection configured from long-lived access keys and an Azure service-principal secret

  • GitHub Actions with GitHub-hosted runners and federated OIDC credentials to Azure and AWS

  • Azure Pipelines classic release pipelines with environment-specific service connections that store the required cloud access keys

  • GitHub Actions on self-hosted runners that use repository secrets to store AWS and Azure access keys
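
For reference, a minimal GitHub Actions sketch of the federated OIDC pattern described in the options (the IDs are placeholders supplied via repository secrets; no client secret is stored):

    # .github/workflows/deploy.yml (excerpt)
    permissions:
      id-token: write   # required for the runner to request an OIDC token
      contents: read
    jobs:
      deploy-azure:
        runs-on: ubuntu-latest
        steps:
          - uses: azure/login@v2
            with:
              client-id: ${{ secrets.AZURE_CLIENT_ID }}        # app registration with a federated credential
              tenant-id: ${{ secrets.AZURE_TENANT_ID }}
              subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
              # no client secret: Azure AD validates the short-lived OIDC token from GitHub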

Question 16 of 20

Your team maintains a single monolithic Git repository in Azure Repos. Management wants to move from bi-monthly releases to several production deployments each week. The new process must minimize merge conflicts, keep branch histories short, and allow incomplete functionality to be disabled until it is production-ready. Which branching strategy best meets these requirements?

  • Implement GitFlow with separate long-lived develop, release, and hotfix branches.

  • Adopt trunk-based development with short-lived feature branches protected by feature flags.

  • Use Release Flow and maintain companion release branches indefinitely after each deployment.

  • Move to a forking workflow where each developer works in a personal fork and submits pull requests to the main repository.

Question 17 of 20

You are managing an Azure DevOps project whose default retention policy keeps runs for 30 days. For a single YAML pipeline named 'webapp-ci', you must implement a more granular policy:

  • Build records must be retained for 90 days for auditing.
  • Published pipeline artifacts must be deleted after 14 days to conserve storage.
  • The last 10 successful runs on the 'main' branch must be kept regardless of age.

A team member attempts to configure this in the azure-pipelines.yml file, but you inform them that not all requirements can be met directly in YAML. Which requirement must be configured outside of the pipeline's YAML file, in the pipeline's settings UI?

  • Deleting artifacts after 14 days while keeping the build record for 90 days.

  • Retaining build records for 90 days.

  • Keeping the last 10 successful runs on the main branch.

  • Applying the policy only to the main branch.

Question 18 of 20

You are creating a multi-stage Azure Pipelines YAML file. The first stage named build compiles the application. A second stage named deploy must run only when the build stage finishes successfully and the pipeline run was triggered by a direct push to the main branch (refs/heads/main). You decide to add a condition statement to the deploy stage. Which condition expression meets the requirement?

  • condition: and(always(), eq(variables['Build.SourceBranch'], 'main'))

  • condition: and(succeeded(), eq(variables['Build.Reason'], 'IndividualCI'))

  • condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))

  • condition: succeeded() && variables['Build.SourceBranchName'] == 'main'
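
A sketch showing where such a condition attaches to a stage, using one of the candidate expressions above purely as an illustration:

    stages:
      - stage: build
        jobs:
          - job: compile
            steps:
              - script: echo "build"
      - stage: deploy
        dependsOn: build
        # Evaluated when the stage is scheduled; here it requires that earlier
        # stages succeeded and that the run came from refs/heads/main.
        condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
        jobs:
          - job: release
            steps:
              - script: echo "deploy"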

Question 19 of 20

You are standardizing Azure DevOps pipelines that run on Microsoft-hosted agents. The jobs must deploy to Azure subscriptions in the same Microsoft Entra ID (Azure AD) tenant, but you want to eliminate any stored client secrets or certificates in the project. Pipelines should obtain an identity automatically at run time while still allowing you to scope permissions granularly at the resource-group level. Which approach meets the requirements with the least operational overhead?

  • Use an Azure DevOps personal access token (PAT) in the service connection and grant the PAT access to the target subscription.

  • Enable a system-assigned managed identity on each Microsoft-hosted agent and reference it from Azure CLI tasks in the pipeline.

  • Create an Azure AD application and service principal, add a federated credential that trusts the Azure DevOps organization, and configure an Azure Resource Manager service connection that uses workload identity federation.

  • Create a service principal, generate a client secret, and store the secret in an Azure DevOps variable group referenced by each pipeline.

Question 20 of 20

Your company uses GitHub Flow and stores all code in a GitHub Enterprise Cloud organization that is connected to Azure Pipelines. You must shorten the feedback cycle when a build job fails. The solution has to meet the following requirements:

  • A new GitHub issue must be created automatically and assigned to the author of the last commit that triggered the failed pipeline.
  • Team members subscribed to the repository must receive the standard GitHub e-mail and in-app notifications for the new issue.
  • The implementation must require as little custom scripting as possible.

You create a fine-grained personal access token (PAT) that has the issues:write permission and add it to the pipeline as a secret variable.

Which action should you add to the failed branch of the Azure Pipeline to meet the requirements?

  • Enable a branch protection rule that requires a successful check and sets "Include administrators" to ON.

  • Configure a GitHub Service Hook that triggers on the "build completed" event from Azure Pipelines and creates an issue.

  • Add the GitHub Issues task from the Azure Pipelines Marketplace and configure it to create an issue that is assigned to $(Build.RequestedForEmail).

  • Add a Bash script task that uses curl to call the GitHub REST API v3 and open an issue in the repository.
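
For context, a sketch of the REST-based approach from the last option as an Azure Pipelines step. The repository slug is a placeholder, $(GitHubPat) is the secret variable holding the fine-grained PAT, and mapping the commit author to a valid GitHub username for the assignee is an assumption left to the implementation:

    steps:
      - bash: |
          # Open a GitHub issue for the failed build via the REST API.
          # Note: assignees must be GitHub usernames; $(Build.RequestedFor) is a
          # display name, so a real pipeline would need to map it first.
          curl -s -X POST \
            -H "Authorization: Bearer $GITHUB_PAT" \
            -H "Accept: application/vnd.github+json" \
            https://api.github.com/repos/my-org/my-repo/issues \
            -d "{\"title\": \"Build $(Build.BuildNumber) failed\", \"assignees\": [\"$(Build.RequestedFor)\"]}"
        env:
          GITHUB_PAT: $(GitHubPat)   # secret variable mapped explicitly into the script
        condition: failed()          # run this step only when a previous step failed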