Microsoft DevOps Engineer Expert Practice Test (AZ-400)
Use the form below to configure your Microsoft DevOps Engineer Expert Practice Test (AZ-400). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft DevOps Engineer Expert AZ-400 Information
Microsoft DevOps Engineer Expert (AZ-400) Overview
The Microsoft DevOps Engineer Expert (AZ-400) exam tests your ability to bring development and operations teams together. It focuses on designing and implementing continuous integration, delivery, and feedback within Microsoft Azure. Candidates are expected to know how to plan DevOps strategies, manage source control, build pipelines, and ensure security and compliance in software development. The goal of the certification is to prove that you can help organizations deliver software faster, more reliably, and with better quality.
What You’ll Learn and Be Tested On
This exam covers a wide range of topics that reflect real-world DevOps work. You will learn about configuring pipelines in Azure DevOps, managing infrastructure as code, and using version control systems like Git. You will also explore how to set up testing strategies, monitor system performance, and use automation to improve reliability. Since DevOps is about collaboration, the AZ-400 also tests your ability to communicate effectively across development, operations, and security teams.
Who Should Take the Exam
The AZ-400 certification is meant for professionals who already have experience in both software development and IT operations. You should be comfortable using Azure tools and services before taking the test. Microsoft recommends that you already hold either the Azure Administrator Associate or the Azure Developer Associate certification. This ensures that you have the foundational knowledge needed to succeed in the DevOps Engineer role.
Why Practice Tests Are Important
Taking practice tests is one of the best ways to prepare for the AZ-400 exam. They help you understand the question format and identify areas where you need more study. Practice tests simulate the real exam environment, which can reduce anxiety and boost your confidence on test day. They also help you improve time management and ensure you can apply your knowledge under pressure. Regularly reviewing your results from practice exams makes it easier to track your progress and focus on weak areas.

Free Microsoft DevOps Engineer Expert AZ-400 Practice Test
- 20 Questions
 - Unlimited
 - Design and implement processes and communications
 - Design and implement a source control strategy
 - Design and implement build and release pipelines
 - Develop a security and compliance plan
 - Implement an instrumentation strategy
 
Free Preview
This test is a free preview; no account is required.
Your company builds multiple JavaScript microservices in Azure DevOps. Security policy requires that any third-party npm package be stored in an internal repository so that future removals from the public registry do not break builds. In addition, only versions explicitly approved by release engineering may be consumed by production pipelines while development teams should be able to test newer versions quickly. Which approach meets these requirements with minimal administrative effort?
Add a service connection to npmjs.org and lock all package.json dependencies to specific build numbers to prevent unintended changes.
Create a single Azure Artifacts npm feed, add npmjs.org as an upstream source, and use feed views to promote packages through Development and Production stages.
Mirror the entire npmjs.org registry to an Azure Blob Storage static website and configure it as a private npm registry for all pipelines.
Publish internal packages to a GitHub Packages npm registry for each project and let package.json reference npmjs.org directly for open-source dependencies.
Answer Description
An Azure Artifacts npm feed can be configured with an upstream source that transparently caches any package downloaded from npmjs.org, ensuring future builds are not affected if the package is later removed from the public registry. Within the same feed you can create views (for example, Development, Test, and Production) and promote specific package versions between those views after they pass validation. Development pipelines resolve packages from a less-restricted view, while production pipelines are scoped to the Production view, satisfying the approval gate. GitHub Packages, direct registry references, or a custom mirror either fail to cache automatically, lack built-in promotion workflows, or introduce unnecessary management overhead.
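As an illustration of how this surfaces to pipelines, the .npmrc files below are a sketch only; the organization name contoso, the feed name web-packages, and the use of the default Release view are assumptions, not values from the scenario. Development pipelines point at the full feed (which resolves and caches npmjs.org packages through the upstream source), while production pipelines point at the view that holds promoted versions.

  # .npmrc for development pipelines (hypothetical org and feed names)
  registry=https://pkgs.dev.azure.com/contoso/_packaging/web-packages/npm/registry/
  always-auth=true

  # .npmrc for production pipelines - only versions promoted to the Release view resolve
  registry=https://pkgs.dev.azure.com/contoso/_packaging/web-packages@Release/npm/registry/
  always-auth=true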
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is an Azure Artifacts npm feed?
What are upstream sources in Azure Artifacts?
What are feed views in Azure Artifacts and how do they work?
A co-worker accidentally rewrote the history of the remote release branch by using git reset --hard and then force-pushing the changes. This action removed the two most recent commits. After running git fetch in your local repository to synchronize with the remote, you are tasked with recovery. Which command should you execute to find the hashes of the lost commits?
git reflog
git log --graph --decorate
git fsck --lost-found
git stash list
Answer Description
git reflog records every change to the tips of references (like branches) in the local repository, including those created by reset, merge, and pull operations. Because you have already fetched the force-pushed changes, the reflog for your remote-tracking branch (origin/release) contains the history of its tip, including the hash it pointed to before the fetch. By inspecting the reflog, you can find the previous hash and restore the branch. The default retention for unreachable reflog entries is 30 days. git fsck --lost-found can find dangling objects but lacks the chronological context of the reflog, making it harder to identify the correct commit. git log --graph only shows commits in the current reachable history and will not show the lost commits. git stash list is for managing stashed changes and is unrelated to commit history recovery.
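A minimal recovery sequence might look like the following; abc1234 stands in for whichever hash the reflog reveals as the pre-rewrite tip.

  # Show every position origin/release has pointed to in this clone, newest first
  git reflog show origin/release
  # Restore the remote branch from the recovered hash (a fast-forward, since the
  # lost commits are descendants of the rewritten tip)
  git push origin abc1234:refs/heads/release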
Ask Bash
What is git reflog, and how does it help in commit recovery?
What is the difference between git reflog and git fsck --lost-found?
How does git fetch affect the reflog for remote-tracking branches?
Your Azure DevOps project must notify an internal line-of-business (LOB) HTTP endpoint whenever a work item is moved to the Done state under the Fabrikam\Web area path. The LOB endpoint expects a POST request, requires a Personal Access Token (PAT) supplied in a custom HTTP header, and must not receive calls for any other state changes.
Which Azure DevOps configuration meets these requirements while generating the fewest possible outbound calls?
Enable the continuous deployment trigger on the team's release pipeline and set a post-deployment REST call task that targets the LOB endpoint.
Build an Azure Logic App that polls the Azure DevOps REST API every 5 minutes for work items whose State changed to Done and then calls the LOB endpoint.
Add a Service hooks subscription that posts work-item update events to a Microsoft Teams channel, then configure a Teams connector to forward messages to the LOB endpoint.
Create a Service hooks subscription that uses the Work item updated trigger, adds filters for State = Done and Area Path = Fabrikam\Web, supplies the PAT in a custom header, and posts to the LOB endpoint URL.
Answer Description
Service hooks let an Azure DevOps project push event-based notifications to external systems. Creating a Service hooks subscription with the "Work item updated" trigger allows you to:
- Filter on exact field changes (State equals Done and Area Path equals Fabrikam\Web) so no other updates raise the event.
- Specify the LOB endpoint's URL as the webhook target.
- Add a custom header that carries the PAT for the LOB API.

Because the event fires only when the filter matches, the LOB system receives the minimal number of calls required. The other options either poll, use unrelated pipeline triggers, or deliver notifications to Microsoft Teams rather than the LOB endpoint. A sketch of the equivalent subscription created through the REST API follows.
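The request below is a hedged sketch of that REST call: the api-version, organization, project GUID, endpoint URL, and token value are placeholders, and the publisher/consumer input names should be verified against the Service Hooks documentation before use.

  POST https://dev.azure.com/{organization}/_apis/hooks/subscriptions?api-version=7.1
  {
    "publisherId": "tfs",
    "eventType": "workitem.updated",
    "consumerId": "webHooks",
    "consumerActionId": "httpRequest",
    "publisherInputs": {
      "projectId": "<project-guid>",
      "areaPath": "Fabrikam\\Web",
      "changedFields": "System.State"
    },
    "consumerInputs": {
      "url": "https://lob.example.com/api/workitem-done",
      "httpHeaders": "X-Api-Token: <PAT>"
    }
  }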
 
Ask Bash
What are Service Hooks in Azure DevOps?
What is a Personal Access Token (PAT) and how is it used?
How does filtering on 'State' and 'Area Path' minimize outbound calls in Service Hooks?
Your organization uses Azure DevOps for work tracking. You need to build a shared dashboard that helps portfolio managers monitor flow efficiency. The dashboard must display (1) the average cycle time for user stories completed during the current sprint and (2) the average lead time for user stories over the last three sprints. The data must refresh automatically and allow managers to drill down to the underlying work items without requiring additional licenses or external services. Which solution should you recommend?
Schedule a nightly export of work items to CSV using the Azure DevOps CLI and visualize the data in Excel Online.
Build a Power BI report that queries the Azure DevOps REST APIs and pin the report to the dashboard.
Create a query-based chart widget that groups user stories by Completed Date and averages effort.
Add the Cycle Time and Lead Time Analytics widgets to an Azure DevOps dashboard and scope them by team iteration dates.
Answer Description
The Cycle Time and Lead Time widgets that are part of the Azure DevOps Analytics extension read data directly from the Analytics service, refresh automatically each time the dashboard loads, and provide built-in drill-through to the individual work items that compose the metric. Because the Analytics extension is included with Azure DevOps and does not depend on third-party tools, it satisfies the licensing and maintenance constraints. Power BI can meet the metric requirements but adds an external service and licensing overhead. Exporting to Excel or using a basic query-based chart requires manual refresh and lacks true cycle-time calculations, making them unsuitable for an automated management dashboard.
Ask Bash
What are Cycle Time and Lead Time in Azure DevOps?
What is the Azure DevOps Analytics extension?
Why is Power BI not suitable for this solution?
Your team uses Azure DevOps Services for all repositories and pipelines. You need a hosted solution to store private npm and NuGet packages. Developers must be able to:
- cache public packages from npmjs.com and nuget.org so builds still succeed if those services are offline,
 - apply different permissions to distinct internal teams at the individual package level,
 - avoid operating any additional infrastructure.

Which approach should you recommend?
 
Create a single Azure Artifacts feed, enable upstream sources to npmjs.com and nuget.org, and publish all internal packages to that feed.
Enable GitHub Packages for each repository and publish internal packages there while consuming public packages directly from their original registries.
Push all internal packages as OCI artifacts to Azure Container Registry and configure builds to restore dependencies from the registry.
Deploy an on-premises Artifacts proxy server that periodically synchronizes with GitHub Packages and external registries.
Answer Description
Azure Artifacts provides hosted feeds that support npm and NuGet. An upstream source can be configured for npmjs.com and nuget.org; the first restore caches the external package inside the feed so future restores succeed even if the public registry is unavailable. Feeds offer granular security where permissions can be set per package or per view. GitHub Packages can host packages but cannot proxy and cache upstream registries, Azure Container Registry is focused on container images, and deploying an on-premises proxy contradicts the requirement to avoid extra infrastructure.
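For example, once the feed exists, consumers only need a package-source entry that points at it; the upstream-enabled feed then serves both internal packages and cached copies of public ones. The organization and feed names below are placeholders.

  <!-- nuget.config committed to each repository (hypothetical names) -->
  <configuration>
    <packageSources>
      <clear />
      <add key="contoso-feed" value="https://pkgs.dev.azure.com/contoso/_packaging/contoso-feed/nuget/v3/index.json" />
    </packageSources>
  </configuration>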
Ask Bash
What are Azure Artifacts feeds?
How do upstream sources work in Azure Artifacts?
Why is Azure Artifacts preferred over alternatives like GitHub Packages or Azure Container Registry?
You are designing a multi-stage YAML pipeline in Azure DevOps that deploys to a staging environment and then to production. The pipeline must pause automatically and require approval from at least two members of the Azure DevOps security group named release-admins before any job targeting production starts. You want the approval configuration to be reusable across pipelines and managed outside of pipeline code. What should you do?
Configure a branch policy on the main branch that mandates two reviewers from the release-admins group before any pull request can be completed.
Set the production job to use a pool that maps to the release-admins group and define approvals: 2 within the job's YAML definition.
Insert a ManualValidation task in the production stage YAML and configure it to allow only the release-admins group with a minimum reviewer count of two.
Create an environment named Prod, add an "Approvals and checks" entry that requires at least two reviewers from the release-admins group, and reference the environment in the production deployment job.
Answer Description
Approvals that are defined as a check on an Azure DevOps environment are evaluated every time any pipeline job targets that environment. By adding a check of type "Approvals and checks" to the Prod environment and specifying the release-admins group and a minimum of two approvers, you ensure that the pause occurs before any deployment to Prod. Because the environment resource is shared, the same approval policy applies automatically to every YAML pipeline that references that environment without needing additional code. A ManualValidation task (or custom YAML fields) would embed the approval in a single pipeline, while branch policies or pool settings are unrelated to environment-level deployment approvals.
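A minimal sketch of the production stage illustrates the point; the stage, job, and environment names are illustrative, and the approval itself is configured on the environment rather than in this YAML.

  stages:
  - stage: DeployProd
    dependsOn: DeployStaging
    jobs:
    - deployment: Production
      environment: Prod   # approvals and checks defined on this environment gate every run that targets it
      strategy:
        runOnce:
          deploy:
            steps:
            - script: echo "deploying to production"

Because nothing approval-specific appears in the pipeline code, any other pipeline that references the Prod environment inherits the same two-reviewer check.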
Ask Bash
What is an Azure DevOps environment?
What are 'Approvals and checks' in Azure DevOps?
How does using an environment with approvals simplify pipeline management?
You maintain a 130 GB mono-repository hosted in Azure Repos Git. Developers complain that simple commands such as git status and git log become progressively slower a few weeks after cloning, even though local disk space is still sufficient. You decide to use the open-source Scalar tool to keep each developer clone performant without asking engineers to remember periodic maintenance steps. Which action should you perform on every existing clone to achieve this goal?
Re-clone the repository by using "scalar clone --single-branch".
Execute "scalar register" in the root of each local repository.
Increase core.preloadIndex to true and raise pack.windowMemory.
Set the configuration value gc.auto to 0 in every clone.
Answer Description
Scalar wraps several Git performance features (commit-graph, multi-pack-index, incremental repack, and prefetch) and can run them automatically through the Windows Task Scheduler or cron. Running "scalar register" inside an already-cloned repository adds the repo to Scalar's maintenance list and installs the scheduled background tasks that keep repository metadata up-to-date. The other options either disable automatic maintenance, require a fresh clone, or modify unrelated configuration and therefore will not continually optimize existing clones.
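A sketch of the one-time step on each existing clone (the path is a placeholder):

  cd ~/src/big-mono-repo   # placeholder path to the existing clone
  scalar register          # enrolls the repo in Scalar's scheduled background maintenance
  scalar list              # optional: confirm the repository is now registered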
Ask Bash
What is Scalar, and how does it optimize Git repositories?
What is the purpose of the 'scalar register' command?
How do commit-graph and multi-pack-index features improve Git performance?
Your company uses Azure DevOps Services Boards to manage work for several product teams. Leadership asks for a Power BI report that shows the 'average cycle time (days)' for completed 'User Story' work items, grouped by Area Path, for the past six months so they can monitor flow efficiency trends. You will connect Power BI directly to the Azure DevOps Analytics service. Which Analytics view or entity should you use as the primary data source to calculate the requested metric accurately?
Current Work Items view (WorkItems entity) using only the latest revision
Work Item History view (WorkItemRevision entity) aggregating field changes
Legacy FlowFields view from the deprecated Azure DevOps Data Warehouse
Work Item Snapshot view (WorkItemSnapshot entity) filtered to User Story work items
Answer Description
Cycle time measures the duration from when work begins on an item (e.g., when its state changes to 'Active') until it is completed. To calculate this metric for historical trends, a data source that records daily snapshots of work item states is required. The Azure DevOps Analytics 'Work Item Snapshot' view (and the corresponding WorkItemSnapshot entity in the OData feed) is designed for this purpose. It stores a daily snapshot of every work item, which allows Power BI to calculate the time spent between specific state transitions and aggregate these values over time. This entity supports the required grouping by Area Path and filtering by Work Item Type.
The 'Current Work Items' view only exposes the latest revision of each work item, so it lacks the historical data needed for trend calculations. The 'Work Item History' view tracks every single field change, making it overly granular and inefficient for calculating aggregate metrics like average cycle time; it is better suited for detailed auditing. The 'Legacy FlowFields' view is part of the deprecated Data Warehouse and is not available for Azure DevOps Services (cloud).
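An illustrative starting point for the Power BI OData query is shown below. The organization and project are placeholders, the date literal is arbitrary, and the column list should be checked against the Analytics metadata before building the cycle-time measure.

  https://analytics.dev.azure.com/{organization}/{project}/_odata/v2.0/WorkItemSnapshot?
      $filter=WorkItemType eq 'User Story' and DateValue ge 2025-01-01Z
      &$select=WorkItemId,DateValue,State,AreaSK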
Ask Bash
Why does the Work Item Snapshot view capture cycle time more accurately?
What is the difference between Work Item Snapshot and Work Item History views?
Why is 'Legacy FlowFields' not suitable for Azure DevOps Services?
Your Azure Repos Git repository has grown to more than 5 GB because several hundred *.mp4 files were committed months ago. Clone and fetch operations are now slow. You must reduce the repository's size while still keeping the videos available in the history through Git Large File Storage (LFS). The solution should rewrite the existing commits with minimal manual effort and require contributors to take only one clear action afterward. Which approach should you recommend?
Enable Scalar on the current repository to virtualize large files and execute "git maintenance run" on all clients.
Add *.mp4 entries to .gitattributes, run "git gc --aggressive" on the server, and ask contributors to pull the latest commits.
Run "git lfs migrate import --include='*.mp4' --everything" in a local clone, force-push the rewritten branches and tags, then instruct all contributors to reclone the repository.
Create a new repository, enable Git LFS, recommit the mp4 files via LFS, and delete the original repository.
Answer Description
Running "git lfs migrate import" rewrites every reachable commit, replacing the large *.mp4 blobs with lightweight LFS pointer objects while preserving commit metadata. Using the --everything switch ensures all branches and tags are rewritten at once. After the force-push, the server no longer stores the binary blobs, so repository size drops dramatically. Because history is rewritten, every contributor must reclone or at least reset local references; a fresh clone is the simplest, least error-prone step.
 Adding patterns to .gitattributes and simply running git gc cannot remove blobs that are already part of history. Creating a brand-new repository discards commit history, violating the requirement to keep it. Scalar improves performance for large repositories but does not migrate blobs into LFS or shrink repository size.
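A sketch of the migration from an up-to-date local clone (the remote name origin is assumed):

  # Rewrite every branch and tag, replacing *.mp4 blobs with LFS pointer files
  git lfs migrate import --include="*.mp4" --everything
  # Force-push the rewritten history
  git push origin --force --all
  git push origin --force --tags
  # Each contributor then makes a fresh clone
  git clone <repository-url>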
Ask Bash
What is Git LFS and how does it work?
Why is the 'git lfs migrate import' command necessary for this solution?
Why do contributors need to reclone the repository after the migration?
You are designing a branching strategy for an Azure Repos Git project that supports 150 developers. Each developer must integrate code several times per day, and the automated build pipeline should always run from a single branch to maintain a linear history. In addition, production hotfixes must be delivered quickly without blocking ongoing feature development. Which branching model best satisfies these requirements?
GitFlow with long-lived develop, feature, release, and hotfix branches
Forking workflow where each developer works in a personal fork and merges back to the upstream repository only at milestone completion
Trunk-based development with a single main branch and short-lived feature and hotfix branches merged at least daily
Feature branching that keeps a dedicated branch per feature until it is completed at the end of each sprint
Answer Description
Trunk-based development keeps a single long-lived trunk (often named main) that every developer merges into at least once per day, so the build pipeline can always build from one branch with a linear history. Short-lived feature branches are created only when necessary and are merged back rapidly, minimizing integration risk. Hotfix and release branches are also cut from the trunk, allowing urgent fixes to ship immediately while normal feature work continues. GitFlow and other long-lived feature branching models intentionally introduce multiple permanent branches (for example, develop or feature branches) that delay integration and complicate the history. A forking workflow spreads work across personal repositories, which hampers the required frequent integration and centralized CI.
Ask Bash
What is trunk-based development, and how does it work?
Why is GitFlow branching unsuitable for quick hotfixes?
How does trunk-based development handle integration risks?
You are standardizing the build pipelines for dozens of repositories that contain a mix of .NET Framework, .NET Core, and native C++ projects. You need a single YAML template that, when invoked after the build stage, will automatically locate all produced unit-test assemblies, execute the tests regardless of framework (MSTest, NUnit, or xUnit), and publish the results to the Tests tab without an extra publishing step. Which built-in Azure Pipelines task should you include in the template to meet these requirements?
DotNetCoreCLI@2 task with command "test"
PublishTestResults@2 task
CTest@1 task
Visual Studio Test task (VSTest@2)
Answer Description
The Visual Studio Test task (VSTest@2) can discover and run any test assemblies that use adapters supported by Visual Studio, including MSTest, NUnit, and xUnit tests compiled for .NET Framework, .NET Core, or native code. The task automatically publishes the results to Azure DevOps, so no additional PublishTestResults task is required.
DotNetCoreCLI@2 runs only .NET Core tests and does not handle native C++ tests.
CTest@1 (CTest) is limited to CMake/CTest native projects and cannot run managed-code tests.
 PublishTestResults@2 merely uploads result files that have already been produced; it does not execute tests or discover assemblies, so it cannot satisfy the requirement by itself.
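A hedged sketch of how the task might appear in the shared template; the assembly pattern and variable names are illustrative.

  steps:
  - task: VSTest@2   # discovers, runs, and publishes results in one step
    inputs:
      testSelector: 'testAssemblies'
      testAssemblyVer2: |
        **\*Tests*.dll
        !**\obj\**
      platform: '$(buildPlatform)'
      configuration: '$(buildConfiguration)'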
Ask Bash
What is the Visual Studio Test task (VSTest@2)?
How does VSTest@2 automatically publish test results in Azure DevOps?
What makes VSTest@2 superior to DotNetCoreCLI@2 for multi-framework testing?
You need to define a branching and deployment strategy for a GitHub repository that hosts an enterprise-wide Node.js microservice. The requirements are:
- The 'main' branch must always contain code that is ready to be promoted to production.
 - New work should be isolated without blocking other teams.
 - Code reviews and automated tests must run before the code is merged.
 - The repository's history should stay linear and easy to trace.

Which approach satisfies these requirements and follows GitHub Flow principles?
 
Create environment branches named 'dev', 'stage', and 'prod'; merge pull requests sequentially between them to promote releases while allowing direct pushes to 'dev' for urgent fixes.
Create short-lived topic branches from 'main', open a pull request that requires at least one approving review and a passing CI workflow, then squash and merge the pull request back into 'main', triggering an automated deployment pipeline.
Maintain a persistent 'development' branch where all features are merged. Once a sprint ends, create a pull request from 'development' to 'main' and deploy after manual testing.
Commit directly to 'main', tag a semantic version when ready, and trigger the release pipeline from the tag without using pull requests or CI checks.
Answer Description
GitHub Flow prescribes creating a short-lived branch from 'main' for every piece of work, pushing commits to that branch, and opening a pull request. The pull request triggers CI, enforces at least one approving review, and, after all checks pass, is merged back into 'main'. Using the squash and merge method keeps a linear history, and an automated pipeline can deploy 'main' on every successful merge. Having long-lived environment branches, promoting changes between multiple stable branches, or committing directly to 'main' violate GitHub Flow because they either lengthen branch life, bypass reviews/tests, or fragment the single source of truth.
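In day-to-day terms the flow looks roughly like this; the branch name is illustrative, and the review, status checks, and squash merge happen in the pull request.

  git checkout -b feature/add-health-endpoint main
  # ...commit work on the topic branch...
  git push -u origin feature/add-health-endpoint
  # open a pull request targeting main; after one approving review and a passing
  # CI workflow, squash and merge, which triggers the deployment pipeline for main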
Ask Bash
Why is squash and merge used in GitHub Flow?
What are CI workflows in the context of GitHub Flow?
Why does GitHub Flow prefer short-lived branches instead of environment branches?
Your Azure DevOps project uses an Azure Repos Git repository. You need to ensure that developers can read the release branch and submit pull requests that target that branch, but they must be prevented from pushing commits directly to it. Which branch-level permission configuration meets this requirement for the Developers security group on the release branch?
Read: Allow, Create branch: Deny, Contribute: Allow
Read: Allow, Contribute: Deny, Contribute to pull requests: Allow
Leave all permissions at their inherited values and enable a branch policy that requires one approval
Read: Deny, Contribute: Not set, Contribute to pull requests: Allow
Answer Description
To stop direct pushes to a branch, the Contribute permission for that branch must be denied. Denying Contribute prevents users from pushing commits or merging directly. Developers still need Read to fetch the branch and Contribute to pull requests to submit pull-requests that target the branch. Setting Contribute to pull requests to Allow (or leaving it Inherited when it is Allow) lets them create PRs even though Contribute is denied. Denying Create branch would block them from creating feature branches, and denying Read would block all access. Simply relying on branch policies without changing permissions would still allow a user who has Contribute to bypass the policy by pushing directly from the command line. Therefore, the correct configuration is: Read = Allow (or Inherited), Contribute = Deny, Contribute to pull requests = Allow.
Ask Bash
What is the Contribute permission in Azure Repos Git?
What happens when Contribute is denied but Contribute to pull requests is allowed?
How does enabling branch policies differ from modifying branch-level permissions?
Your organization uses Azure Boards for work tracking and GitHub Enterprise Cloud for source control. Management requires automatic, bidirectional traceability between user stories and the commits and pull requests created as part of the GitHub Flow process. Developers must not add hyperlinks manually and administrative effort must be minimal. Which action should you take to meet these requirements?
Create an Azure DevOps service hook that posts Work item updated events to a webhook configured in the GitHub repository.
Add the Azure Boards work item URL to the pull request template and require developers to keep it updated for every change.
Insert an Associate-Work-Items task into the build pipeline and pass work item IDs through a pipeline variable for every build.
Install and configure the Azure Boards app for the GitHub organization, then have developers include the pattern AB#<work item ID> in commit messages or pull request descriptions.
Answer Description
Installing the Azure Boards app for the GitHub organization establishes an OAuth-based connection between the Azure DevOps project and the GitHub repositories. After the app is configured, including the pattern AB#<work item ID> in a commit message or pull request description automatically links that commit or pull request to the referenced work item, and the link also appears on the work item in Azure Boards, giving bidirectional traceability without manually created hyperlinks. The only ongoing effort is the one-time app installation and the AB# mention, so administrative overhead is minimal. Service hooks, pull request templates, and build tasks either push one-way notifications or depend on developers maintaining links by hand.
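For example, a commit that mentions a work item might look like this (the work item ID is illustrative):

  git commit -m "Fix cart checkout timeout AB#1234"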
Ask Bash
How does the Azure Boards app create bidirectional traceability with GitHub?
What is the AB#<work-item-ID> pattern and why is it required?
What are the benefits of using the Azure Boards app versus other methods like service hooks or build tasks?
You are designing an Azure Pipelines YAML pipeline to deploy version 2.0 of a microservice to Azure Kubernetes Service (AKS). Business requirements state:
- Database schema changes that ship with 2.0 must be applied exactly once before any 2.0 pods start receiving traffic.
 - Production traffic must be shifted gradually from the existing 1.x pods to 2.0 pods, with the ability to halt and roll back at any percentage.
 
Which pipeline design satisfies these requirements while following DevOps best practices?
Create two stages in the YAML pipeline. Stage 1 runs a deployment job that executes a migration script against the production database. Stage 2 depends on Stage 1 and runs a Kubernetes manifest task configured with strategy: canary and incremental traffic percentages (e.g., 10%, 30%, 100%), promoting the release after each successful increment.
Package the migration script as an init container inside the 2.0 deployment so that every new pod applies the schema before starting; deploy the manifest with a rolling update strategy.
Perform a blue-green deployment by creating a parallel AKS namespace for 2.0, apply the migration after the namespace is live, and switch traffic by updating the cluster's DNS entry.
Use a Helm upgrade task with the --atomic flag and a pre-upgrade hook that applies the migration, then set replicas to gradually increase using kubectl scale commands in separate jobs.
Answer Description
Creating two dependent stages guarantees the ordering constraint: the schema is migrated before any new pods become eligible to receive traffic. By using the built-in canary deployment strategy of the Kubernetes manifest task, the pipeline automatically handles incremental traffic shifts (e.g., 10%, 30%, 100%) and provides checkpoints for manual approval or automatic health evaluation, allowing a safe roll-back if issues occur. The other options either cannot guarantee that the migration runs first, do not provide true canary traffic shifting, or risk running the migration multiple times.
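A condensed sketch of such a pipeline follows. The environment names, migration script, manifest path, and increments are illustrative, and the KubernetesManifest inputs should be verified against the task reference.

  stages:
  - stage: Migrate
    jobs:
    - deployment: ApplySchema
      environment: prod-db
      strategy:
        runOnce:
          deploy:
            steps:
            - script: ./scripts/run-migration.sh   # hypothetical one-time schema migration
  - stage: Rollout
    dependsOn: Migrate
    jobs:
    - deployment: DeployService
      environment: prod-aks
      strategy:
        canary:
          increments: [10, 30]
          deploy:
            steps:
            - task: KubernetesManifest@1
              inputs:
                action: 'deploy'
                strategy: 'canary'
                percentage: '$(strategy.increment)'
                manifests: 'manifests/deployment.yml'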
Ask Bash
What is a canary deployment strategy in Kubernetes?
What is the purpose of a deployment job in Azure Pipelines?
Why is it important to apply database migrations before new pods receive traffic?
Your organization hosts its source code in GitHub Enterprise Cloud and uses Azure DevOps Services to run all build and release pipelines. You are asked to create a new multi-stage YAML pipeline stored in the repository and to configure the integration so that:
- Azure Pipelines can publish commit statuses and required checks on every pull request.
 - No long-lived personal access tokens (PATs) are stored in the Azure DevOps project or in the repository.
 
After creating the pipeline in Azure DevOps, which action must you take in GitHub to satisfy these requirements?
Create a GitHub personal access token scoped to repo:status and store it as a secret variable in the pipeline.
Install the Azure Pipelines GitHub App in the organization and grant it access to the repository.
Enable Dependabot security updates for the repository.
Add an organization-level webhook that targets the Azure DevOps hooks endpoint.
Answer Description
The most secure way for Azure Pipelines to interact with a GitHub repository is through the official Azure Pipelines GitHub App. Installing the app at the organization or repository level creates an OAuth-based trust that allows Azure Pipelines to fetch sources, create webhooks automatically, and publish status checks and commit statuses without storing a PAT inside Azure DevOps. A PAT (even if scoped to repo:status) would violate the stated policy, while webhooks alone cannot authenticate Azure Pipelines to post status checks. Enabling Dependabot is unrelated to build status integration, so the only action that meets both requirements is installing the Azure Pipelines GitHub App and granting it access to the repository.
Ask Bash
What is the Azure Pipelines GitHub App?
Why is using a PAT less secure compared to the Azure Pipelines GitHub App?
How do webhooks differ from the Azure Pipelines GitHub App?
Your organization maintains thousands of NUnit integration tests that run in an Azure Pipelines build for the main branch. Quality engineers must visualize in Power BI the seven-day rolling average test pass rate for the main branch only and must exclude any results that come from test‐rerun attempts. Which Analytics OData entity should you query to meet these requirements with the least additional data manipulation?
TestResult
BuildTestResultDailySummary
BuildTimelineRecords
TestRuns
Answer Description
The BuildTestResultDailySummary entity is already aggregated by pipeline run date, branch, and outcome counts. It also exposes the IsRerun flag, which allows you to filter out rerun attempts directly in the OData query. Because the data is pre-grouped, you only need to apply a date window and branch filter in Power BI to calculate the rolling average pass rate. Other entities either provide only detailed per-test records (requiring heavy aggregation) or lack branch and rerun information, which would force additional joins or calculations.
Ask Bash
What is the OData entity in Azure Analytics?
What is the BuildTestResultDailySummary used for in this scenario?
What is the benefit of using IsRerun in the OData entity?
Your team stores production incident records in an Azure Log Analytics workspace. The workspace contains a custom table named Incident_CL with these columns:
- IncidentId_g (string)
 - State_s (string)
 - CreatedTime_t (datetime)
 - ResolvedTime_t (datetime)
 
You need to add a metric tile to an Azure dashboard that shows the average mean time to recovery (MTTR) for the last 30 days. Incidents that are still open must be excluded. Which Kusto Query Language (KQL) statement should you use?
Incident_CL | where CreatedTime_t > ago(30d) | summarize avg(ResolvedTime_t - CreatedTime_t)
Incident_CL | where CreatedTime_t > ago(30d) and isnotnull(ResolvedTime_t) | extend TTR = ResolvedTime_t - CreatedTime_t | summarize avg(TTR)
Incident_CL | where CreatedTime_t > ago(30d) and State_s == "Resolved" | summarize avg(CreatedTime_t - ResolvedTime_t)
Incident_CL | where CreatedTime_t > ago(30d) and isnotnull(ResolvedTime_t) | summarize avg(datetime_diff('minute', CreatedTime_t, ResolvedTime_t))
Answer Description
The query must calculate the difference between the resolution and creation timestamps, skip rows that do not yet have a ResolvedTime_t value, and average the resulting timespans over the last 30 days. Extending with a TTR column makes the calculation explicit, and summarize avg(TTR) returns a single timespan representing MTTR. Options that do not exclude unresolved incidents can skew the average, and queries that subtract the timestamps in the wrong order produce negative durations.
Ask Bash
What is Azure Log Analytics, and why is it used?
What is Kusto Query Language (KQL)?
Can you explain the purpose of the `extend` and `summarize` commands in KQL?
A development team is migrating a four-year-old Git repository that contains roughly 8 GB of Photoshop (PSD) and PNG assets to Azure Repos. During a trial push, the operation is rejected because several objects are larger than 100 MB. The team must preserve the assets in the repository history, reduce clone size, and allow future pushes of the same file types without further rejections or manual steps. Which approach should you recommend?
Use a sparse checkout to exclude the graphics directory locally, then push the existing repository to Azure Repos.
Install git-fat on developer machines so that large files are stored outside Git when the repository is pushed to Azure Repos.
Rewrite the repository with git lfs migrate import to convert *.psd and *.png files to LFS pointers, then push to Azure Repos with Git LFS enabled.
Split the repository with git filter-repo and keep the binary assets only on a separate branch before pushing.
Answer Description
Git Large File Storage (LFS) stores the binary content outside the normal Git object database and leaves a small pointer in the repository, so clone and fetch operations transfer only the pointer unless the actual file is requested. The command git lfs migrate import rewrites existing commits, replacing matching file patterns with LFS pointers so that the original large blobs are removed from history. After the migration, pushes succeed because no individual Git object exceeds the Azure Repos 100 MB limit, and future commits of those file types are automatically handled by LFS. Sparse checkout and repository splitting leave the large objects in the history that is pushed, so the size limitation still applies. Azure Repos does not provide server-side support for git-fat, and simply initializing git-fat on clients would not prevent large objects from being rejected.
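After the migration, the rewritten history carries .gitattributes entries like the sketch below, which is what routes future commits of these file types through LFS without any extra steps from contributors.

  # .gitattributes entries written by git lfs migrate / git lfs track
  *.psd filter=lfs diff=lfs merge=lfs -text
  *.png filter=lfs diff=lfs merge=lfs -text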
Ask Bash
What is Git LFS and why is it needed?
How does the git lfs migrate import command work?
What are the advantages of using Git LFS over alternatives like sparse checkout or repository splitting?
Your company streams Microsoft Defender for Cloud recommendations to an Azure Log Analytics workspace by enabling continuous export. Security analysts need a KPI that shows the mean time to remediate (MTTR) for critical container-image vulnerabilities during the last 30 days, so that the KPI can be pinned to an Azure Workbook.
Which Kusto query should you recommend?
SecurityAlert | where Severity == "High" and AlertName == "Container registry image vulnerable" | summarize MTTR = avg(datetime_diff("hour", TimeGenerated, ClosedTime))
SecurityRecommendation | where RecommendationSeverity == "High" and RecommendationType == "ContainerRegistryVulnerabilities" | summarize firstDetected = arg_min(TimeGenerated, Status), lastResolved = arg_max(TimeGenerated, Status) by RecommendationId | where Status == "Resolved" | extend HoursToFix = datetime_diff("hour", firstDetected, lastResolved) | summarize MTTR = avg(HoursToFix)
SecurityRecommendation | where RecommendationSeverity == "High" and RecommendationType == "ContainerRegistryVulnerabilities" | extend HoursToFix = datetime_diff("hour", TimeGenerated, ResolvedTime) | summarize MTTR = avg(HoursToFix)
SecurityRecommendation | where RecommendationSeverity == "High" and RecommendationType == "ContainerRegistryVulnerabilities" | summarize firstDetected = arg_min(TimeGenerated, Status), lastResolved = arg_max(TimeGenerated, Status) by RecommendationId | where Status == "Resolved" | extend HoursToFix = datetime_diff("hour", lastResolved, firstDetected) | summarize MTTR = avg(HoursToFix)
Answer Description
The SecurityRecommendation table contains one record when Defender for Cloud first detects a vulnerability (Status == "Active") and another when the issue is fixed (Status == "Resolved").
The correct query:
- Filters on critical severity (RecommendationSeverity == "High") and the Container Registry recommendation type.
- Creates pairs of Active/Resolved records for the same recommendation by using arg_min() to get the first detection and arg_max() to get the time it was resolved.
- Calculates the positive time difference in hours using datetime_diff(), then averages it with avg() to obtain MTTR.

The distractors are incorrect because they:
- Use the wrong table (SecurityAlert), which tracks threats instead of vulnerability recommendations.
- Calculate the time difference with the datetime_diff arguments in the wrong order, which would produce negative values.
- Attempt to calculate a time difference on a single record without first pairing the "Active" and "Resolved" records, which is logically incorrect and references a non-existent column.
 
Ask Bash
What is the purpose of Azure Log Analytics in streaming Defender for Cloud recommendations?
What is MTTR and why is it important for security analysts?
What does the `arg_min()` and `arg_max()` functions do in KQL?