Microsoft DevOps Engineer Expert Practice Test (AZ-400)
Use the form below to configure your Microsoft DevOps Engineer Expert Practice Test (AZ-400). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft DevOps Engineer Expert AZ-400 Information
Microsoft DevOps Engineer Expert (AZ-400) Overview
The Microsoft DevOps Engineer Expert (AZ-400) exam tests your ability to bring development and operations teams together. It focuses on designing and implementing continuous integration, delivery, and feedback within Microsoft Azure. Candidates are expected to know how to plan DevOps strategies, manage source control, build pipelines, and ensure security and compliance in software development. The goal of the certification is to prove that you can help organizations deliver software faster, more reliably, and with better quality.
What You’ll Learn and Be Tested On
This exam covers a wide range of topics that reflect real-world DevOps work. You will learn about configuring pipelines in Azure DevOps, managing infrastructure as code, and using version control systems like Git. You will also explore how to set up testing strategies, monitor system performance, and use automation to improve reliability. Since DevOps is about collaboration, the AZ-400 also tests your ability to communicate effectively across development, operations, and security teams.
Who Should Take the Exam
The AZ-400 certification is meant for professionals who already have experience in both software development and IT operations. You should be comfortable using Azure tools and services before taking the test. Microsoft recommends that you already hold either the Azure Administrator Associate or the Azure Developer Associate certification. This ensures that you have the foundational knowledge needed to succeed in the DevOps Engineer role.
Why Practice Tests Are Important
Taking practice tests is one of the best ways to prepare for the AZ-400 exam. They help you understand the question format and identify areas where you need more study. Practice tests simulate the real exam environment, which can reduce anxiety and boost your confidence on test day. They also help you improve time management and ensure you can apply your knowledge under pressure. Regularly reviewing your results from practice exams makes it easier to track your progress and focus on weak areas.

Free Microsoft DevOps Engineer Expert AZ-400 Practice Test
- 20 Questions
- Unlimited time
- Design and implement processes and communications
- Design and implement a source control strategy
- Design and implement build and release pipelines
- Develop a security and compliance plan
- Implement an instrumentation strategy
You are designing an Azure Pipelines YAML definition with two stages, Build and Test. The Test stage uses a matrix strategy to run jobs across multiple platforms. You need to ensure the Test stage only begins after the Build stage completes successfully, and that the entire pipeline fails if any job in the Test stage's matrix fails. Which YAML structure correctly implements these requirements?
Define the Test stage with dependsOn: Build and condition: always().
Define the Test stage with condition: succeeded('Build') but without a dependsOn property.
Add continueOnError: true to the job within the Test stage's matrix.
Define the Test stage with a dependsOn: Build property and no custom condition.
Answer Description
The dependsOn property at the stage level is used to enforce execution order, making the Test stage wait for the Build stage. By default, a stage with this dependency will only run if the preceding stage succeeds (an implicit condition: succeeded()). If any job within the Test stage's matrix fails, the Test stage itself fails. Because there are no subsequent stages with conditions that would override this outcome, the entire pipeline run will correctly be marked as failed.
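A minimal sketch of this structure follows; the stage and job names come from the question, while the matrix entries and agent images are illustrative assumptions:

```yaml
stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - script: echo "Build the application"

- stage: Test
  dependsOn: Build            # Test waits for Build; the implicit condition is succeeded()
  jobs:
  - job: TestJob
    strategy:
      matrix:
        linux:
          imageName: 'ubuntu-latest'
        windows:
          imageName: 'windows-latest'
    pool:
      vmImage: $(imageName)
    steps:
    - script: echo "Run tests"   # any failing matrix job fails the Test stage, and so the run
```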
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the purpose of the 'dependsOn' property in Azure Pipelines?
How does the matrix strategy work in the 'Test' stage?
What happens if any job fails in a matrix stage?
Your organization keeps a 15-GB monorepo in Azure Repos Git that has grown over eight years. New developers on Windows report that initial clone and subsequent fetch operations take an unacceptably long time. You must recommend a solution that reduces onboarding time, leaves the commit history intact, and automatically performs background maintenance such as repack and garbage collection while remaining fully compatible with standard Git commands. What should you recommend?
Move all historical content to Git Large File Storage (LFS) and keep only pointers in the repository.
Configure sparse checkout patterns so that each developer pulls only the folders they need.
Have developers install and register the repository with Scalar to enable partial cloning and automated maintenance.
Migrate the repository to Team Foundation Version Control (TFVC) to avoid Git scalability limits.
Answer Description
Scalar accelerates work with very large repositories by creating a partial clone that downloads only the data a developer actually touches and by scheduling background maintenance to keep the local clone efficient. It runs as a thin layer over standard Git, so existing commands continue to work and the full history is preserved. Sparse checkout alone still requires the full object database to be downloaded; migrating to TFVC replaces Git entirely; and moving content to LFS targets binary files but does not solve the fundamental performance issues of cloning and fetching a multi-gigabyte history.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Scalar and how does it work with Git?
What is the difference between sparse checkout and partial cloning in Git?
How does Git Large File Storage (LFS) differ from Scalar in handling large repositories?
Your organization uses Azure Boards for backlog management and GitHub Enterprise Cloud for source control. You need to implement GitHub Flow so that every commit pushed to the main branch is automatically linked to the corresponding Azure Boards work item, providing end-to-end traceability without requiring developers to open Azure Boards. Which action should you take to meet this requirement?
Install the Azure Boards app for the GitHub organization and instruct developers to include the pattern AB#<work-item-ID> in commit messages or pull-request titles.
Create a client-side Git hook that calls the Azure DevOps REST API to associate each commit with the selected work item.
Add a custom GitHub Action that parses commits after each push and updates Azure Boards work items through the Boards API.
Enable repository-level branch protection in Azure Boards and configure it to require an associated work item on every push.
Answer Description
Installing the Azure Boards app for the GitHub organization establishes a secure connection between GitHub and Azure DevOps. When developers include the shorthand AB#<work-item-ID> in a commit message or pull-request title, the integration automatically links that commit or pull request to the referenced work item, providing end-to-end traceability without requiring developers to open Azure Boards or maintain custom hooks, Actions, or policies.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is GitHub Flow?
How does the Azure Boards app integrate with GitHub?
What is the significance of AB#<work-item-ID> in commit messages?
A company uses a single Azure DevOps organization that already contains several projects. External consultants must be able to read code in any existing or future repository, but they must not access Boards, Pipelines, Test Plans, or change repository settings. You need to configure Azure DevOps so that administrators add each consultant only once and no additional maintenance is required when new projects are created. What should you do?
Add each consultant as a Stakeholder in every current and future project.
Add the consultants to the Project Collection Build Service group at the organization level.
Create a custom Contributors group inside each project and grant it read-only permissions on the repositories.
Create a custom security group at the organization level, grant it only the Code → Read permission, and add the consultants to that group.
Answer Description
Creating a custom security group at the organization level lets you manage membership in a single place. When you grant that group the Code → Read permission at the organization level, Azure DevOps automatically creates a corresponding group inside every current and future project, inheriting the read-only permission for all repositories. Because the group has no permissions for Boards, Pipelines, or other resources, consultants can only view code. Adding consultants as Stakeholders would prevent any code access, while creating or modifying project-level groups would require changes each time a new project is added.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a custom security group in Azure DevOps?
How does inheriting permissions work in Azure DevOps?
Why are external consultants restricted from accessing Boards, Pipelines, and Test Plans?
You manage an Azure DevOps Git repository that contains a main branch and dozens of long-lived feature branches. You must redesign the pull-request (PR) workflow for main to meet these requirements:
- Every PR must build and run unit tests in an Azure Pipelines build before it can be approved.
- At least two members of the Backend Leads Azure DevOps security group must approve the PR before it can be completed.
- All active discussion threads must be resolved before merge.
- After all policies succeed, developers should be able to click Complete once and have the PR merge automatically when the remaining checks finish.
Which single configuration action meets all the requirements?
On the main branch, add a Build Validation policy that automatically queues the unit-test pipeline and mark it as Required; add the Backend Leads group as Required reviewers with a minimum of 2; enable Comment resolution required, and leave Auto-complete enabled.
Create a status-check policy that listens for succeeded events from the unit-test pipeline; set the minimum reviewer count to 2 but leave the Required reviewers list empty; do not enable Comment resolution required.
Require squash merges and enforce linked work items; configure a Build Validation policy set to optional so that developers can manually queue the pipeline; add Backend Leads as optional reviewers.
Enable a branch lock on main; require that feature branches be rebased on the latest main before completion; rely on pipeline branch triggers to run unit tests after the merge.
Answer Description
Configuring a branch policy on the main branch lets you gate every pull request. A required Build Validation policy queues the designated pipeline automatically and blocks approval until the build and unit tests pass. Adding the Backend Leads group as Required reviewers with the minimum-reviewer count set to 2 ensures that two members of that group must approve the PR. Enabling Comment resolution required prevents completion while any discussion thread is still active. Because policies are evaluated continuously, developers can enable Auto-complete when they click Complete; the PR is then merged automatically as soon as the build and review policies finish, satisfying the final requirement. The other options either fail to enforce the build, do not restrict approval to the required group, or cannot guarantee automatic completion after policies succeed.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a Build Validation policy in Azure DevOps?
Why is enabling 'Comment resolution required' important for PR workflows?
How does Auto-complete work for pull requests in Azure DevOps?
Your company builds multiple JavaScript microservices in Azure DevOps. Security policy requires that any third-party npm package be stored in an internal repository so that future removals from the public registry do not break builds. In addition, only versions explicitly approved by release engineering may be consumed by production pipelines while development teams should be able to test newer versions quickly. Which approach meets these requirements with minimal administrative effort?
Add a service connection to npmjs.org and lock all package.json dependencies to specific build numbers to prevent unintended changes.
Publish internal packages to a GitHub Packages npm registry for each project and let package.json reference npmjs.org directly for open-source dependencies.
Create a single Azure Artifacts npm feed, add npmjs.org as an upstream source, and use feed views to promote packages through Development and Production stages.
Mirror the entire npmjs.org registry to an Azure Blob Storage static website and configure it as a private npm registry for all pipelines.
Answer Description
An Azure Artifacts npm feed can be configured with an upstream source that transparently caches any package downloaded from npmjs.org, ensuring future builds are not affected if the package is later removed from the public registry. Within the same feed you can create views (for example, Development, Test, and Production) and promote specific package versions between those views after they pass validation. Development pipelines resolve packages from a less-restricted view, while production pipelines are scoped to the Production view, satisfying the approval gate. GitHub Packages, direct registry references, or a custom mirror either fail to cache automatically, lack built-in promotion workflows, or introduce unnecessary management overhead.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is an Azure Artifacts npm feed?
What are upstream sources in Azure Artifacts?
What are feed views in Azure Artifacts and how do they work?
You maintain several Azure Pipelines that run nightly integration tests against six different database versions. The tests are defined in a single YAML job that uses a matrix strategy, so six parallel Microsoft-hosted agents start each time the pipeline runs. Management wants to keep the test coverage unchanged but limit the number of simultaneously billed Microsoft-hosted jobs to two in order to lower costs. Which YAML change should you implement to meet the requirement?
Change the pool definition from Microsoft-hosted to self-hosted agents.
Configure a condition that runs the job only when the variable Agent.JobServerParallelism equals 2.
Add the setting "maxParallel: 2" under the job's matrix strategy.
Insert "dependsOn" references so each matrix job waits for the previous job to finish.
Answer Description
The matrix strategy supports the maxParallel setting, which caps the number of matrix replicas that can run at the same time. Setting maxParallel to 2 ensures that no more than two of the six matrix jobs execute concurrently, limiting consumption of Microsoft-hosted parallel jobs while still running all six test combinations. Adding dependsOn would require splitting the job into separate definitions and would still launch multiple agents unless manually serialized. Using a runtime expression on Agent.JobServerParallelism does not throttle matrix concurrency. Switching to self-hosted agents avoids Microsoft-hosted job charges but changes the agent model rather than simply limiting concurrency.
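A sketch of the change is shown below; the matrix entry names, variable names, and test script are illustrative assumptions, not part of the question:

```yaml
jobs:
- job: IntegrationTests
  strategy:
    matrix:
      postgres12: { dbVersion: '12' }
      postgres13: { dbVersion: '13' }
      # ...remaining four database versions...
    maxParallel: 2              # at most two matrix jobs run (and are billed) at the same time
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - script: ./run-integration-tests.sh --db-version $(dbVersion)
```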
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a matrix strategy in Azure Pipelines?
How does maxParallel work in the matrix strategy?
Why is switching to self-hosted agents not a suitable solution in this case?
Your team's Azure Repos Git repository has a shared branch named feature-alpha. A teammate accidentally executed git reset --hard HEAD~3 followed by git push --force, removing the last three commits from the remote branch. You have an existing local clone of the repository and just ran git fetch. You must now restore feature-alpha on the server. Which Git command should you run first in your local repository to locate the lost commit IDs?
git fetch --force origin feature-alpha
git revert HEAD~3..HEAD
git reflog origin/feature-alpha
git cherry-pick origin/feature-alpha@{1}
Answer Description
After a force-push rewrites a remote branch's history, the lost commit objects still exist on the server for a time but are no longer referenced by the branch tip. When you run git fetch in your local repository, your remote-tracking branch (origin/feature-alpha) is updated to match the new, incorrect tip. Critically, Git records this change in your local reflog for that remote-tracking branch. By running git reflog origin/feature-alpha, you can inspect the recent history of that reference and find the commit SHA it pointed to just before the fetch updated it. Once you have the correct SHA, you can restore the branch on the remote (e.g., git push origin <sha>:refs/heads/feature-alpha). The other commands are incorrect; git revert creates new commits instead of restoring old ones, git cherry-pick requires you to already know the commit SHA, and git fetch --force is not the correct command for discovering lost commit hashes.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a Git reflog, and how does it work?
What does `git reset --hard` do, and how does it differ from a soft reset?
How is `git push --force` different from a regular `git push`?
Your Azure DevOps project contains a single-stage YAML pipeline that is triggered by every push to the main branch. During periods of high activity, developers often push several commits within a few minutes, leading to multiple runs being queued and unnecessary consumption of Microsoft-hosted agents. You must ensure that no more than one run of this pipeline executes at any time and that any in-progress or queued run is automatically cancelled when a newer commit is detected, thereby reducing agent usage charges and queue length.
Which YAML modification should you implement to meet this requirement?
Configure the pool section with demands that set minimumParallelJobs to 1.
Enable batching by adding batch: true to the continuous-integration trigger.
Add a root-level concurrency block that specifies a shared group name and sets cancelInProgress to true.
Define a retention policy with minimumRuns: 1 and days: 0 to keep only the latest successful run.
Answer Description
A pipeline-level concurrency block lets you place all runs of the pipeline into the same named group and, when cancelInProgress is set to true, Azure Pipelines automatically cancels any earlier queued or running instance as soon as a newer run is queued. This guarantees that at most one run is active, eliminating wasted agent minutes.
Batching a CI trigger defers queuing new runs until the current one completes but does not cancel a running build, so multiple runs can still execute consecutively.
Retention policies control how long completed runs and their artifacts are stored; they do not affect how many runs can execute concurrently.
Pool settings such as demands or a fictitious minimumParallelJobs property cannot limit pipeline-level concurrency or cancel in-progress runs.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a concurrency block in Azure Pipelines?
How does batching work in a CI trigger, and why is it not ideal here?
What is the purpose of retention policies in Azure Pipelines, and why don't they apply here?
Your organization manages work tracking in Azure DevOps and hosts source code in a GitHub Enterprise Cloud organization.
- Finance analysts must be able to create and edit work items, run queries, and view dashboards, but they must not read repositories or trigger pipelines.
- A contracted UX designer needs to clone, create branches, and push changes to a single repository, but must not gain access to any other organization resources.
Which combination of access assignments meets these requirements while following the principle of least privilege?
Give the finance analysts Basic access in Azure DevOps and invite the UX designer as an outside collaborator with read-only permission on the required repository.
Give the finance analysts Stakeholder access in Azure DevOps and assign the UX designer to a team inside the GitHub organization that grants read access across all repositories.
Give the finance analysts Basic access in Azure DevOps and add the UX designer to the GitHub organization as a member with write permission on the repository.
Give the finance analysts Stakeholder access in Azure DevOps and invite the UX designer as an outside collaborator with write permission on the required repository.
Answer Description
Stakeholder access in Azure DevOps permits viewing, creating, and editing work items, as well as accessing boards and dashboards, but blocks access to Repos and Pipelines, which is exactly what the finance analysts need without granting excess rights. In GitHub, an outside collaborator is not an organization member yet can be granted read, write, or admin permission for selected repositories. Granting the UX designer outside-collaborator status with write permission on only the required repository lets the designer clone and push while keeping every other repository and organization resource inaccessible. The alternative choices either over-provision analysts (Basic access) or grant the designer broader or insufficient rights, violating least-privilege principles.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Stakeholder access in Azure DevOps?
Who is an outside collaborator in GitHub?
What does the principle of least privilege mean?
Your team maintains a YAML multi-stage Azure Pipeline that deploys a Java web app to Azure Kubernetes Service (AKS). The release stage needs a password-protected PFX certificate (app_tls.pfx) that must never be committed to the Git repository. The certificate must be usable only while the job runs and should be wiped from the build agent automatically after completion. Which approach should you implement to meet these requirements with minimal changes to the existing pipeline?
Store the certificate as a secret in Azure Key Vault and use the AzureKeyVault@2 task to inject it into environment variables during the job.
Upload app_tls.pfx to the pipeline Library as a secure file and add a DownloadSecureFile@1 task that references the file in the release stage.
Commit the certificate to a private Git submodule and add an authenticated checkout step to pull the submodule during the release stage.
Define a secret variable named APP_CERT that contains a base64-encoded copy of the certificate and decode it with a script step at runtime.
Answer Description
Uploading the certificate to the Library as a secure file and adding a DownloadSecureFile@1 task meets every stated requirement. Secure files are stored encrypted in Azure DevOps, are not exposed in the repository, and the task downloads them to a temporary location that is deleted automatically at the end of the job. Secret variables or Azure Key Vault can hold text values but do not natively handle binary certificate files or guarantee automatic cleanup. Committing the file, even in a private submodule, places the sensitive data in source control, violating the security constraint.
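A minimal sketch of the release-stage steps follows; the file name comes from the question, while the task reference name and script are illustrative assumptions:

```yaml
steps:
- task: DownloadSecureFile@1
  name: tlsCert                      # reference name used to read the output variable below
  inputs:
    secureFile: 'app_tls.pfx'        # uploaded under Pipelines > Library > Secure files
- script: |
    echo "Certificate available at $(tlsCert.secureFilePath)"
    # use the certificate here; the agent deletes the temporary copy when the job ends
```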
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the Azure DevOps pipeline Library?
How does the DownloadSecureFile@1 task work?
Why is storing sensitive data in a private Git repository not recommended?
You lead a team that delivers infrastructure-as-code through both GitHub Actions and Azure Pipelines.
Requirements
- In GitHub, the AZURE_SP_CLIENT_SECRET must be available only to jobs that target the Prod environment and must be masked if someone tries to echo it.
- In Azure DevOps, the same secret must be shared by several YAML pipelines, kept encrypted, and automatically update if the value is rotated in Azure Key Vault.
Which combination of platform features satisfies all the requirements with the least administrative effort?
Store AZURE_SP_CLIENT_SECRET as an organization secret in GitHub Actions and in a library variable group that is not linked to Key Vault in Azure Pipelines.
Store AZURE_SP_CLIENT_SECRET as a secret in the Prod environment in GitHub Actions and use a variable group linked to Azure Key Vault in Azure Pipelines.
Store AZURE_SP_CLIENT_SECRET in an environment file committed to the repo and encrypted with GPG for GitHub, and expose it through a service connection in Azure Pipelines.
Store AZURE_SP_CLIENT_SECRET as a repository secret in GitHub Actions and as a secret variable defined in each YAML pipeline in Azure DevOps.
Answer Description
GitHub environment-level secrets are exposed only to workflows that explicitly reference the environment and are always redacted in log output, meeting GitHub's scoping and masking needs.
In Azure DevOps, a variable group that is linked to an Azure Key Vault keeps the secret encrypted, lets multiple pipelines consume it, and automatically synchronizes when the secret is rotated in Key Vault.
Repository or organization secrets in GitHub do not limit access to a particular environment, and plain pipeline variables in Azure DevOps do not support automatic rotation from Key Vault. Service connections authenticate to Azure but do not distribute the secret to jobs. Therefore, using environment secrets in GitHub together with a Key Vault-linked variable group in Azure Pipelines is the only option that fulfills every constraint.
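For the Azure Pipelines side, a minimal sketch of consuming the Key Vault-linked variable group is shown below; the group name "prod-secrets" and the deploy script are assumptions, and the secret name mirrors the one used in the question:

```yaml
variables:
- group: prod-secrets               # variable group linked to Azure Key Vault

steps:
- script: ./deploy.sh
  env:
    AZURE_SP_CLIENT_SECRET: $(AZURE_SP_CLIENT_SECRET)   # secret variables must be mapped into env explicitly
```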
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a GitHub environment secret and how does it differ from a repository secret?
How does Azure Key Vault integration with Azure DevOps enhance secret management?
Why do service connections in Azure Pipelines not fulfill secret distribution requirements?
Your team wants Azure Monitor to create an Azure DevOps work item every time a Log Analytics-based alert with severity = Sev0 is triggered. You start creating an action group and choose the Azure DevOps action type, but the portal warns that a required prerequisite is missing. Which prerequisite must be satisfied before you can successfully add the Azure DevOps action to the group?
Install the Azure Boards extension for Visual Studio Code on the build server used by the project.
Register Azure Monitor as a multi-tenant application in Azure Active Directory and grant it the Work Item API permission.
Configure an Azure DevOps service hook that listens for alert events from Azure Monitor.
Generate a personal access token in Azure DevOps with Work Items (read and write) scope and supply it to the action group.
Answer Description
The Azure DevOps action in an Azure Monitor action group authenticates by using a personal access token (PAT). The PAT must be generated beforehand in the target Azure DevOps organization and scoped to at least Work Items (read and write). Without supplying the PAT, Azure Monitor cannot connect to Azure DevOps to create work items. Service hooks, IDE extensions, and app registrations are not required for this integration.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a Personal Access Token (PAT) in Azure DevOps?
How do you create a Personal Access Token (PAT) in Azure DevOps?
Why is a PAT required instead of other authentication methods for Azure Monitor integration?
A build pipeline that runs on Microsoft-hosted Linux agents restores about 300 MB of Maven dependencies from the public Maven Central repository on every execution. The team complains about long queue-to-publish times and the organization wants to reduce egress traffic charges. You must implement a change that
- shortens the pipeline run time,
- minimizes repeated network downloads,
- automatically refreshes the stored files when the project's pom.xml changes,
- requires no manual maintenance of the agent machines.
Which action should you take?
Commit the entire .m2 directory to the Git repository using Git LFS so agents can clone the dependencies.
Switch the pipeline to use the ubuntu-latest image, which includes more pre-installed build tools and caches.
Publish the .m2 directory as a pipeline artifact in the first stage and download it in later stages of every run.
Add a Cache@2 task that stores the local Maven repository using a key based on the SHA-1 hash of pom.xml.
Answer Description
Using the Cache task creates a pipeline-managed cache on the agent pool. The key can include a hash of pom.xml so the cache is automatically invalidated when dependencies change. Subsequent jobs on any Microsoft-hosted agent will download the cache from Azure Storage instead of Maven Central, greatly reducing run time and egress cost while requiring no agent customization. Pipeline artifacts are intended for sharing outputs between jobs or runs and do not provide automatic invalidation, Git LFS would bloat the repository and increase storage cost, and selecting a different hosted image does not guarantee the required dependencies will be pre-installed.
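A sketch of the caching step follows; the cache folder variable and Maven invocation are illustrative assumptions based on the common documented pattern:

```yaml
variables:
  MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/.m2/repository

steps:
- task: Cache@2
  inputs:
    key: 'maven | "$(Agent.OS)" | **/pom.xml'   # key includes a hash of pom.xml, so changes invalidate the cache
    restoreKeys: |
      maven | "$(Agent.OS)"
    path: $(MAVEN_CACHE_FOLDER)
- script: mvn -Dmaven.repo.local=$(MAVEN_CACHE_FOLDER) package
```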
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why is using Cache@2 task more efficient than using pipeline artifacts for this scenario?
What is the role of SHA-1 hashing in the Cache@2 task?
Why is switching to the ubuntu-latest image an incorrect solution?
You are designing a branching strategy for an Azure Repos Git project that supports 150 developers. Each developer must integrate code several times per day, and the automated build pipeline should always run from a single branch to maintain a linear history. In addition, production hotfixes must be delivered quickly without blocking ongoing feature development. Which branching model best satisfies these requirements?
Trunk-based development with a single main branch and short-lived feature and hotfix branches merged at least daily
GitFlow with long-lived develop, feature, release, and hotfix branches
Forking workflow where each developer works in a personal fork and merges back to the upstream repository only at milestone completion
Feature branching that keeps a dedicated branch per feature until it is completed at the end of each sprint
Answer Description
Trunk-based development keeps a single long-lived trunk (often named main) that every developer merges into at least once per day, so the build pipeline can always build from one branch with a linear history. Short-lived feature branches are created only when necessary and are merged back rapidly, minimizing integration risk. Hotfix and release branches are also cut from the trunk, allowing urgent fixes to ship immediately while normal feature work continues. GitFlow and other long-lived feature branching models intentionally introduce multiple permanent branches (for example, develop or feature branches) that delay integration and complicate the history. A forking workflow spreads work across personal repositories, which hampers the required frequent integration and centralized CI.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is trunk-based development, and how does it work?
Why is GitFlow branching unsuitable for quick hotfixes?
How does trunk-based development handle integration risks?
You are defining a multi-stage Azure Pipelines YAML file. The production stage contains two deployment jobs named Deploy-Database and Deploy-WebApp. The WebApp deployment must not begin until the database deployment completes successfully, but both jobs must remain in the same stage so they share the same environment approval. Which YAML approach will reliably enforce the required execution order?
Merge the database and web app tasks into one job, counting on task order within that job to control sequencing.
Define Deploy-Database as a deployment job and Deploy-WebApp as a regular job because deployment jobs automatically run first.
Configure the Deploy-WebApp job with dependsOn: Deploy-Database and leave the default succeeded() condition in place.
Set timeoutInMinutes: 0 on Deploy-WebApp and use a script task to poll the database deployment status before continuing.
Answer Description
In Azure Pipelines you control the execution order of jobs inside a stage by using job-level dependencies. By adding "dependsOn: Deploy-Database" to the Deploy-WebApp job (or, equivalently, listing the job name under the dependsOn collection) you create an explicit dependency graph. The job will start only after its dependency finishes and returns a successful result. Keeping the default "condition: succeeded()" (or setting it explicitly) further guarantees the second deployment runs only on a successful upstream result. Relying on task order within a single job, polling from a script, or assuming that deployment jobs automatically run before regular jobs does not guarantee sequencing across separate jobs and therefore cannot reliably enforce the required execution order.
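A minimal sketch of the production stage is shown below; job names are adapted to valid identifiers (underscores instead of hyphens), and the environment name and steps are illustrative assumptions:

```yaml
stages:
- stage: Production
  jobs:
  - deployment: Deploy_Database
    environment: prod
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "Deploy the database"
  - deployment: Deploy_WebApp
    dependsOn: Deploy_Database    # starts only after the database job succeeds (default condition)
    environment: prod
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "Deploy the web app"
```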
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a deployment job in Azure Pipelines?
How does the 'dependsOn' property work in Azure Pipelines?
What is the purpose of the condition property in Azure Pipelines?
You manage an Azure DevOps Git repository that contains a protected main branch. Your organization has these requirements:
- All changes must reach main through a pull request.
- A pull request can be completed only after at least two reviewers approve it and the pipeline Build-CI finishes successfully.
- Members of the Contributors group must never be able to override these rules, but members of the Project Administrators group must be able to bypass the rules during emergency fixes.
Which configuration meets the requirements?
Enable Code Owners on the repository, select "Request review from Code Owners", add Build-CI as a status policy, and allow the Contributors group the "Complete pull request" permission while removing "Bypass policies when pushing" from all groups.
Create a branch policy on main that sets Minimum number of reviewers to 2 and adds a Build validation using Build-CI; then deny "Bypass policies when pushing" for the Contributors group and allow that permission for the Project Administrators group.
Create the branch policy with two reviewers and Build-CI, and grant the Contributors group the "Exempt from policy enforcement" permission while leaving Project Administrators unchanged.
Deny the "Push" permission on the repository for Contributors and allow "Force push" for Project Administrators; do not configure any branch policies.
Answer Description
Branch policies provide granular merging restrictions. Requiring a minimum number of reviewers and adding a build-validation policy blocks completion of any pull request until both conditions are met. Preventing Contributors from bypassing the policy is achieved by explicitly denying the "Bypass policies when pushing" permission on the main branch for the Contributors group. Granting that same permission to Project Administrators lets them override the policy when needed. No other option simultaneously enforces the reviews and build, blocks Contributors from overriding, and still lets administrators merge in emergencies.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a branch policy in Azure DevOps?
How does build validation work with branch policies?
What does 'Bypass policies when pushing' mean in Azure DevOps?
You run load tests against an Azure App Service that is instrumented with Application Insights. During the test, the /api/checkout operation occasionally exceeds its 5-second SLA. You need to determine whether the excessive latency originates inside the app or inside a downstream dependency. Which Kusto Query Language (KQL) statement should you run in Log Analytics to generate a time-series chart that shows the average request duration together with the average duration of related dependency calls for the same operation?
dependencies | where target contains '/api/checkout' | summarize avg(duration) by bin(timestamp, 1m)
requests | where name == '/api/checkout' | summarize avg(duration) by bin(timestamp, 1m)
requests | where name == '/api/checkout' | project operation_Id, requestDuration = duration, timestamp | join kind=inner ( dependencies | project operation_Id, dependencyDuration = duration ) on operation_Id | summarize avg(requestDuration), avg(dependencyDuration) by bin(timestamp, 1m)
traces | where message contains '/api/checkout' | summarize avg(duration) by bin(timestamp, 1m)
Answer Description
The requests and dependencies tables store data for incoming requests and outgoing calls, respectively. Both tables share the same correlation identifier, operation_Id, which enables end-to-end analysis of a single transaction. By projecting the identifier from each table and performing an inner join on operation_Id, you can correlate each /api/checkout request with its dependency calls. After the join, summarizing the two duration columns over a time bin produces a single chart that compares the average server processing time (requestDuration) with the average time spent in external calls (dependencyDuration). Queries that look only at requests or only at dependencies, or that filter on the wrong column, cannot show which side contributes more to the total latency.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Application Insights in Azure?
What is Kusto Query Language (KQL) used for?
What is the purpose of operation_Id in the context of Application Insights?
The main branch of an Azure Repos Git repository must accept changes only through pull requests. Requirements are: at least two human reviewers, the code author may not approve, any new push must invalidate existing approvals, a CI build must succeed before completion, and direct pushes, force pushes, and branch deletion must all be blocked. Which configuration meets every requirement while still allowing pull-request merges into main?
Apply a branch policy that automatically adds two reviewers but allows author approval, keeps existing approvals after updates, and adds required build validation. Finally, lock the main branch so no one can push.
Configure environment approvers in the release pipeline, trigger it from main, leave Contribute permission set to Allow for developers, and mark build validation as optional in the branch policy.
Enable repository-level settings to require signed commits and restrict merges to squash only; do not modify branch security. Rely on developers to open pull requests voluntarily.
Create a branch policy on main that requires two reviewers, disallows author approval, resets approvals on new pushes, and adds a required build-validation pipeline. Then use Branch security to deny Force push and Delete permissions for the Contributors group, leaving Contribute unset.
Answer Description
A branch policy can enforce PR-based reviews and build validation. Setting "Minimum number of reviewers" to 2, clearing "Allow requestors to approve," and enabling "Reset approvals when new changes are pushed" satisfies the review-workflow rules. Adding a build-validation check ensures the pipeline passes before completion. Permissions, not a branch lock, are needed to keep the branch usable while protecting it; denying Force push and Delete on the branch security tab prevents both actions and, because Contribute is not explicitly allowed, blocks direct pushes but still permits the PR merge service account. The other answers fail by either locking the branch (which disables merges), allowing self-approval, omitting approval resets, or leaving the branch unprotected against direct or force pushes.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a branch policy, and how does it work in Azure Repos?
How do branch permissions differ from branch locks in Azure Repos?
What is build validation in Azure Repos, and why is it required for pull requests?
The DevOps team stores project documentation in an Azure DevOps wiki. A wiki page contains the following Markdown:
sequenceDiagram
participant API
participant DB
API->>DB: Query
When the page is viewed in the web portal, the diagram appears as plain text instead of a graphic. You must resolve the issue without installing browser extensions or using external services. What should you do to make the diagram render correctly?
Surround the diagram with triple-backtick fences and specify mermaid as the code block language (```mermaid ... ```).
Replace the content with a PlantUML block that starts with @startuml and ends with @enduml.
Rename the Markdown page from .md to .mmd so Azure DevOps detects a Mermaid file.
Turn on Preview rendering for diagrams in the Wiki settings under Experimental features.
Answer Description
Azure DevOps can render Mermaid diagrams that are written in Markdown, but the diagram must be enclosed in a fenced code block whose language identifier is mermaid. Placing the keyword mermaid after the opening triple backticks activates the built-in renderer and the diagram is displayed as graphics. PlantUML blocks are not rendered natively, there is no setting to enable a special preview mode, and file extensions such as .mmd are not used to trigger rendering inside a wiki page.
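For reference, the corrected wiki Markdown would look like this (the outer four-backtick fence below only delimits the snippet):

````markdown
```mermaid
sequenceDiagram
    participant API
    participant DB
    API->>DB: Query
```
````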
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Markdown and why is it used in Azure DevOps wiki?
What is Mermaid and how does it work in Azure DevOps wiki?
How does triple-backtick fencing work in Markdown?
That's It!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.