Microsoft DevOps Engineer Expert Practice Test (AZ-400)
Use the form below to configure your Microsoft DevOps Engineer Expert Practice Test (AZ-400). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft DevOps Engineer Expert AZ-400 Information
Microsoft DevOps Engineer Expert (AZ-400) Overview
The Microsoft DevOps Engineer Expert (AZ-400) exam tests your ability to bring development and operations teams together. It focuses on designing and implementing continuous integration, delivery, and feedback within Microsoft Azure. Candidates are expected to know how to plan DevOps strategies, manage source control, build pipelines, and ensure security and compliance in software development. The goal of the certification is to prove that you can help organizations deliver software faster, more reliably, and with better quality.
What You’ll Learn and Be Tested On
This exam covers a wide range of topics that reflect real-world DevOps work. You will learn about configuring pipelines in Azure DevOps, managing infrastructure as code, and using version control systems like Git. You will also explore how to set up testing strategies, monitor system performance, and use automation to improve reliability. Since DevOps is about collaboration, the AZ-400 also tests your ability to communicate effectively across development, operations, and security teams.
Who Should Take the Exam
The AZ-400 certification is meant for professionals who already have experience in both software development and IT operations. You should be comfortable using Azure tools and services before taking the test. Microsoft recommends that you already hold either the Azure Administrator Associate or the Azure Developer Associate certification. This ensures that you have the foundational knowledge needed to succeed in the DevOps Engineer role.
Why Practice Tests Are Important
Taking practice tests is one of the best ways to prepare for the AZ-400 exam. They help you understand the question format and identify areas where you need more study. Practice tests simulate the real exam environment, which can reduce anxiety and boost your confidence on test day. They also help you improve time management and ensure you can apply your knowledge under pressure. Regularly reviewing your results from practice exams makes it easier to track your progress and focus on weak areas.

Free Microsoft DevOps Engineer Expert AZ-400 Practice Test
- 20 Questions
- Unlimited
- Design and implement processes and communications
- Design and implement a source control strategy
- Design and implement build and release pipelines
- Develop a security and compliance plan
- Implement an instrumentation strategy
An organization stores client secrets in Azure Key Vault and uses a YAML pipeline in Azure DevOps to deploy resources. Compliance mandates that:
- Secrets must never appear in logs or artifacts.
- Only tasks that require a secret may read it at runtime.
- Build administrators must be unable to view or export the secret values from the Azure DevOps portal.
Which design meets all requirements while keeping the pipeline definition entirely in Git?
Create a variable group in Azure DevOps, manually add each secret as a secret variable, and reference the group in the pipeline.
Declare the secrets directly in the YAML file by using variables with the isSecret: true attribute and reference them in the tasks.
Commit an encrypted JSON file containing the secrets to the repository and decrypt it during the build by using a GPG private key stored as a secure file.
Call the AzureKeyVault@2 task in the job to download only the required secrets at runtime; reference the resulting secret variables in subsequent tasks.
Answer Description
Using the AzureKeyVault@2 task downloads the required secrets during the job directly from Azure Key Vault. The secrets are exposed to the pipeline only as in-memory secret variables, which are automatically masked in logs and are not saved to artifacts. Because the values remain in Key Vault and are never stored in Azure DevOps, project or build administrators cannot later view them in the portal. Manually created secret variables (whether in a variable group or in YAML) are stored and can be viewed by anyone with edit privileges, and encrypted files checked into Git still expose the decryption key to the pipeline. Therefore, retrieving secrets just-in-time from Key Vault with AzureKeyVault@2 is the only option that satisfies every stated constraint.
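For illustration, a minimal sketch of this pattern in YAML; the service connection, vault, and secret names are placeholders rather than values from the scenario:

steps:
- task: AzureKeyVault@2
  inputs:
    azureSubscription: 'arm-connection'   # ARM service connection name (placeholder)
    KeyVaultName: 'contoso-kv'            # Key Vault to read from (placeholder)
    SecretsFilter: 'DbPassword'           # download only the secrets this job needs
- script: ./deploy.sh
  env:
    DB_PASSWORD: $(DbPassword)            # secret variables are masked in logs and must be mapped explicitly into script steps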
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI-generated content may display inaccurate information; always double-check anything important.
What is Azure Key Vault and its role in securing secrets?
How does the AzureKeyVault@2 task work in an Azure DevOps pipeline?
Why is storing secrets as manual secret variables or encrypted files not compliant?
You are designing the branch and release process for a new microservice repository in GitHub. The team wants to adopt GitHub Flow to ensure every change is traceable from code review to production and can be deployed multiple times per day through their Azure Pipelines CI/CD. Which practice is required to correctly implement GitHub Flow?
For every change, create a short-lived feature branch from main, open a pull request for peer review, and merge the pull request only after it receives at least one approval and all automated CI checks pass.
Maintain a long-lived develop branch for integration and create release branches for deployments.
Use trunk-based development where developers push changes directly to main, using feature flags to hide incomplete work and bypassing pull requests for trivial changes.
Require that all code changes are committed directly to the main branch to be deployed in a weekly batch.
Answer Description
GitHub Flow is a lightweight workflow designed for continuous delivery. Its core practice involves creating short-lived feature branches from the main branch, which must always be in a deployable state. All changes are integrated via pull requests, which require peer review (approvals) and successful automated checks (CI builds) before being merged. This process ensures traceability and quality. Committing directly to main, using long-lived branches like develop or release (as in GitFlow), or bypassing pull requests are all inconsistent with the GitHub Flow model.
Ask Bash
What is GitHub Flow?
How does GitHub Flow differ from GitFlow?
Why are pull requests important in GitHub Flow?
Your team follows a requirement-based testing approach in Azure DevOps. They want a built-in, one-click view that shows each User Story together with the test cases that validate it and the pass/fail outcome of the most recent pipeline run. You need to configure the artifacts so that the Test Plans "Requirement traceability" view can generate this matrix without additional manual effort.
Which configuration provides the required end-to-end traceability?
Assign the identical Iteration Path to both the User Story and its test cases.
Prefix the title of every test case with the work item ID of the User Story.
Create a link of type "Tests" from each User Story to the related test cases.
Add the same tag to the User Story and to each validating test case.
Answer Description
The Requirement traceability report relies on the dedicated "Tests" link type. When you add a link of this type between a requirement-level work item (such as a User Story or Bug) and one or more test cases, Azure DevOps automatically:
- Lists the relationship in the Boards and Test Plans UIs.
- Rolls up the latest automated or manual test outcome for every linked test case.
Because tags, iteration paths, or text in titles do not create formal links, they are ignored by the traceability views and queries.
Ask Bash
What is the 'Tests' link type in Azure DevOps?
How does the 'Requirement traceability' view work in Azure DevOps?
Why don't tags, titles, or iteration paths provide traceability in Azure DevOps?
Your team wants Azure Monitor to create an Azure DevOps work item every time a Log Analytics-based alert with severity = Sev0 is triggered. You start creating an action group and choose the Azure DevOps action type, but the portal warns that a required prerequisite is missing. Which prerequisite must be satisfied before you can successfully add the Azure DevOps action to the group?
Generate a personal access token in Azure DevOps with Work Items (read and write) scope and supply it to the action group.
Configure an Azure DevOps service hook that listens for alert events from Azure Monitor.
Install the Azure Boards extension for Visual Studio Code on the build server used by the project.
Register Azure Monitor as a multi-tenant application in Azure Active Directory and grant it the Work Item API permission.
Answer Description
The Azure DevOps action in an Azure Monitor action group authenticates by using a personal access token (PAT). The PAT must be generated beforehand in the target Azure DevOps organization and scoped to at least Work Items (read and write). Without supplying the PAT, Azure Monitor cannot connect to Azure DevOps to create work items. Service hooks, IDE extensions, and app registrations are not required for this integration.
Ask Bash
What is a Personal Access Token (PAT) in Azure DevOps?
How do you create a Personal Access Token (PAT) in Azure DevOps?
Why is a PAT required instead of other authentication methods for Azure Monitor integration?
You instrument an ASP.NET Core web app with Application Insights. You must discover which API operations are responsible for the worst client-perceived latency over the last 24 hours by calculating the 95th-percentile request duration and listing only the five slowest operations whose 95th percentile is greater than 3 000 ms. Which Kusto Query Language (KQL) statement should you run in Log Analytics to meet the requirement?
requests | where timestamp > ago(1d) | summarize percent_duration = percentile(duration, 95) by operationName | where percent_duration < 3000 | top 5 by percent_duration asc
requests | where timestamp > ago(1d) | summarize percentile(duration, 95) by operationName | where duration > 3000 | top 5 by duration desc
requests | where timestamp > ago(1d) | summarize P95 = percentile(duration, 95) by operationName | where P95 > 3000 | top 5 by P95 desc
requests | where timestamp > ago(1d) | summarize avgduration = avg(duration) by operationName | where avgduration > 3000 | top 5 by avgduration desc
Answer Description
The query must:
- Retrieve request telemetry from the last day (requests table, timestamp filter).
- Use the percentile aggregation to compute the 95th-percentile duration for each operationName.
- Filter out any operation whose P95 is not above 3 000 ms.
- Return only the five operations with the highest P95 values.
The correct query meets every criterion: requests | where timestamp > ago(1d) | summarize P95 = percentile(duration, 95) by operationName | where P95 > 3000 | top 5 by P95 desc
The distractors fail because they either:
- Compare the raw duration column instead of the percentile result (making the filter ineffective).
- Use average rather than percentile, giving the wrong measure of tail latency.
- Keep operations below 3 000 ms or sort ascending, returning the fastest operations instead of the slowest.
Ask Bash
What is Kusto Query Language (KQL) and where is it used?
What does the 'percentile' function do in KQL?
Why is the 95th percentile used for performance analysis instead of average duration?
You are designing an Azure Pipelines YAML pipeline that will run only on Microsoft-hosted agents. The pipeline must deploy Bicep templates to an Azure subscription while meeting the following requirements:
- Do not store any long-lived client secrets or certificates in Azure DevOps.
- Rely on short-lived tokens issued by Azure AD.
- Allow scoping permissions to a single resource group.
Which authentication approach should you implement in the pipeline's service connection to meet the requirements?
Create an Azure Resource Manager service connection that uses a service principal secured by a client secret stored in a variable group.
Create an Azure Resource Manager service connection that uses Workload Identity Federation (OIDC) with a federated credential on an Azure AD application.
Store an App Service publish profile as a secure file and reference it during the deployment stage.
Enable a system-assigned managed identity on the Microsoft-hosted agent and grant it the required Azure RBAC role.
Answer Description
Workload Identity Federation lets Azure DevOps request a short-lived Azure AD access token for a specific app registration by presenting the pipeline's OpenID Connect (OIDC) token. No client secret or certificate is stored in Azure DevOps, and the service principal created for the registration can be assigned granular roles (for example, at a single resource-group scope). Microsoft-hosted agents cannot use managed identity, a stored publish profile exposes long-lived secrets, and a classic service principal with a client secret still requires secret storage and rotation.
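A minimal sketch of how a pipeline consumes such a connection when deploying a Bicep template; the connection name, resource group, and template path are placeholders, and the federated credential itself is configured on the service connection rather than in the YAML:

steps:
- task: AzureCLI@2
  inputs:
    azureSubscription: 'wif-arm-connection'   # ARM service connection using workload identity federation (placeholder name)
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az deployment group create \
        --resource-group rg-app-prod \
        --template-file infra/main.bicep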
Ask Bash
What is Workload Identity Federation in Azure?
How does OpenID Connect (OIDC) work in Azure Pipelines?
Why can’t Microsoft-hosted agents use managed identities for authentication?
Your organization has an Azure virtual machine named VM1 that runs Windows Server 2022. The team plans to onboard VM1 to VM Insights by using the current Azure Monitor agent (AMA) so that both guest performance metrics and dependency maps are collected in an existing Log Analytics workspace. Which agent configuration must be present on VM1 before VM Insights can start sending the required telemetry?
Azure Monitor agent and Telegraf agent
Only the Azure Monitor agent
Azure Monitor agent and Dependency agent
Only the Log Analytics (MMA) agent
Answer Description
VM Insights collects performance counters through the Azure Monitor agent. To build dependency and process maps on Windows machines, the solution also relies on the Dependency agent. When you enable VM Insights with AMA, Azure automatically installs (or expects) both the Azure Monitor agent and the Dependency agent on Windows hosts. The Log Analytics (MMA) and Telegraf agents are not used in this architecture, and AMA alone does not generate dependency data on Windows.
Ask Bash
ELI5: What is the Azure Monitor agent (AMA)?
What is the purpose of the Dependency agent in VM Insights?
How does VM Insights use the Azure Monitor agent and Dependency agent together?
You are designing an Azure Pipelines YAML definition with two stages, Build and Test. The Test stage uses a matrix strategy to run jobs across multiple platforms. You need to ensure the Test stage only begins after the Build stage completes successfully, and that the entire pipeline fails if any job in the Test stage's matrix fails. Which YAML structure correctly implements these requirements?
Define the Test stage with a dependsOn: Build property and no custom condition.
Define the Test stage with dependsOn: Build and condition: always().
Define the Test stage with condition: succeeded('Build') but without a dependsOn property.
Add continueOnError: true to the job within the Test stage's matrix.
Answer Description
The dependsOn property at the stage level is used to enforce execution order, making the Test stage wait for the Build stage. By default, a stage with this dependency will only run if the preceding stage succeeds (an implicit condition: succeeded()). If any job within the Test stage's matrix fails, the Test stage itself fails. Because there are no subsequent stages with conditions that would override this outcome, the entire pipeline run will correctly be marked as failed.
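A condensed sketch of the structure the correct option describes; job steps and matrix entries are illustrative only:

stages:
- stage: Build
  jobs:
  - job: BuildApp
    steps:
    - script: echo "compile"

- stage: Test
  dependsOn: Build              # implicit condition: succeeded()
  jobs:
  - job: RunTests
    strategy:
      matrix:
        linux:
          imageName: ubuntu-latest
        windows:
          imageName: windows-latest
    pool:
      vmImage: $(imageName)
    steps:
    - script: echo "run tests"  # any failing matrix job fails the Test stage and the run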
Ask Bash
What is the purpose of the 'dependsOn' property in Azure Pipelines?
How does the matrix strategy work in the 'Test' stage?
What happens if any job fails in a matrix stage?
Your organization has a single Azure Repos Git repository that contains several microservices. Eight Scrum teams work in parallel and must keep a high rate of continuous integration (CI). You want to:
- Detect integration issues within one working day,
- Keep feature work isolated until it is ready,
- Produce hot-fixes for the current production version after a release without blocking new development.
Which branching strategy best satisfies these requirements?
Gitflow with a permanent develop branch, long-lived feature branches, and separate release branches merged back into develop and main after QA.
Trunk-based development with short-lived feature branches off main and time-boxed release branches created only when a version is shipped.
Environment branches (dev, test, prod) with features committed directly to the environment branch that matches the current deployment stage.
Fork-and-pull model where each team maintains its own fork and submits pull requests to a shared upstream repository.
Answer Description
Trunk-based development keeps everyone merging into a single mainline at least daily, allowing CI pipelines to detect integration problems quickly. Short-lived feature branches let teams isolate work but are merged back to main as soon as the build is green, keeping divergence small. When a product increment ships, a release branch is cut from the mainline; hot-fixes are applied to that release branch and cherry-picked into main so new development can continue unhindered. Gitflow encourages long-lived develop and feature branches, delaying integration. Environment branches or permanent release branches cause parallel work streams that contradict the daily-merge CI goal. Fork-based workflows target open-source contributions rather than coordinated internal teams.
Ask Bash
What is trunk-based development?
How does trunk-based development enable efficient continuous integration?
What is the role of release branches in trunk-based development?
Your organization stores its source code in Azure DevOps Repos. You need the build stage of a new multi-language YAML pipeline to automatically scan every commit for secrets, vulnerable open-source dependencies, Infrastructure-as-Code misconfigurations, and other security issues. The solution must use a single task, output SARIF-formatted results, and break the build if any high-severity findings are detected, without requiring you to configure each scanner individually. Which task should you add to the pipeline?
Add the MicrosoftSecurityDevOps@1 task from the Microsoft Security DevOps extension.
Add an OWASP Dependency Check task to scan third-party libraries.
Add a Trivy@0 task to perform container image vulnerability scanning.
Add the CodeQLAnalysis@0 task and configure a CodeQL database for each language.
Answer Description
The MicrosoftSecurityDevOps@1 task, provided by the Microsoft Security DevOps extension, orchestrates multiple analyzers (for example CredScan, ESLint, Bandit, PoliCheck, IaC scanners, and dependency-vulnerability tools) in a single step, emits standardized SARIF output, and allows you to set a fail-threshold for high-severity issues. CodeQLAnalysis@0 only performs static code analysis, while Dependency Check and Trivy each cover a single scope (dependency or container scanning). None of those alternatives meet the requirement to run comprehensive, multi-tool scanning through one task.
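A minimal sketch of adding the task; the break input is taken from the extension's documented options, so verify the exact input names against the version you install:

steps:
- task: MicrosoftSecurityDevOps@1
  displayName: 'Run Microsoft Security DevOps scanners'
  inputs:
    break: true   # fail the build on findings at or above the configured severity (assumed input; check the extension docs)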
Ask Bash
What is the MicrosoftSecurityDevOps@1 task used for?
What is SARIF, and why is it used in security scanning?
What are some examples of the analyzers coordinated by MicrosoftSecurityDevOps@1?
Your team maintains 60 Windows and Linux virtual machines (VMs) in Azure that are used as self-hosted build agents for Azure Pipelines. You must be able to review CPU, memory, and disk utilization trends for all build agents from a single Azure Monitor workbook without signing in to each VM or manually deploying extensions one by one. You plan to store the collected data in a central Log Analytics workspace.
Which action should you perform first to meet these requirements?
Assign a Guest Configuration policy that audits high CPU usage on all VMs.
Enable Azure Monitor VM Insights for the subscription and link it to the Log Analytics workspace.
Install the Azure Diagnostics extension on each VM and send metrics to the workspace.
Enable Azure Monitor Container Insights on the Log Analytics workspace.
Answer Description
Enabling VM Insights (Azure Monitor for VMs) at the subscription, resource group, or workspace level automatically deploys the Azure Monitor Agent (or Log Analytics VM extension on older images) to every selected VM, configures a data collection rule, and begins sending guest-level performance counters such as CPU, memory, and logical disk metrics to the chosen Log Analytics workspace. The built-in Insights workbook then surfaces this data across all onboarded machines.
Installing the Diagnostics extension only streams basic platform metrics and requires per-VM configuration. Container Insights is intended for Kubernetes nodes or Azure Container Instances, not standard VMs. A guest-configuration policy audit does not collect performance data or populate the Insights workbook. Therefore, enabling VM Insights is the correct first step.
Ask Bash
How does enabling Azure Monitor VM Insights differ from installing the Azure Diagnostics extension?
What is a Log Analytics workspace, and why is it important in this scenario?
What role does the Insights workbook play in Azure Monitor VM Insights?
Your team uses a multi-stage YAML pipeline that has a 'Build' stage for running integration tests and a 'Deploy' stage that targets a production environment. You need to ensure the 'Deploy' stage only runs if the test pass rate from the 'Build' stage is at least 95%. The solution must use built-in Azure DevOps functionality and automatically check the threshold after the test run completes.
Which configuration should you implement?
On the production environment, configure a quality gate for the 'Tests pass ratio' check with a threshold of 95.
In the 'Build' stage, add the parameter failTaskOnFailedTests: true to a PublishTestResults@2 task that runs after the tests.
In the 'Build' stage, add the parameter minimumTestsPassedPercentage: 95 to the VsTest@2 task.
On the 'Deploy' stage, add the condition succeeded('Build').
Answer Description
Configuring a quality gate on the environment targeted by the 'Deploy' stage is the correct approach. The 'Tests pass ratio' check can be added as a gate, which evaluates the results from the preceding 'Build' stage. Setting its threshold to 95 ensures that the 'Deploy' stage will only start if 95% or more of the tests passed. This method uses built-in functionality to control promotions between stages based on quality metrics.
Adding failTaskOnFailedTests: true to a PublishTestResults@2 task would fail the pipeline if any single test fails, which does not meet the 95% threshold requirement. The minimumTestsPassedPercentage parameter for the VsTest@2 task does not exist. Using a condition on the stage checks the status of the previous stage, but it does not evaluate the test pass percentage; the 'Build' stage would still succeed if tests ran but the pass rate was below 95% (assuming the test task itself did not fail).
Ask Bash
What is a quality gate in Azure DevOps?
How does the 'Tests pass ratio' quality gate work?
Can the threshold check for the 'Tests pass ratio' be automated in Azure DevOps pipelines?
You maintain a multi-stage Azure Pipelines YAML definition for a large .NET solution. The pipeline already generates and publishes code coverage reports in the Cobertura format. The security team requires that the build must fail automatically if the overall line coverage drops below 80%. You have already installed the 'Build Quality Checks' extension from the Visual Studio Marketplace. You need to implement this requirement with the least amount of configuration.
Which task should you add to the pipeline to meet this requirement?
A PublishCodeCoverageResults task with a failBuildOnCoverageBelow: 80 parameter.
A PowerShell task with a script to parse the Cobertura XML and fail the build.
A branch policy with a code coverage status check.
A BuildQualityChecks task with the coverage threshold set to 80.
Answer Description
The built-in PublishCodeCoverageResults task is used to publish reports, but it does not have an input to enforce a coverage threshold and fail the build. The correct approach, given the 'Build Quality Checks' extension is installed, is to use its provided task. The BuildQualityChecks task is specifically designed to act as a quality gate, allowing you to check metrics like code coverage and fail the build if a defined threshold is not met. Using a custom PowerShell script would require more effort than using the pre-built extension. A branch policy applies to pull request validation (typically for differential coverage), not the overall coverage of a main pipeline build.
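A minimal sketch using the extension's task; the task version and input names (checkCoverage, coverageFailOption, coverageType, coverageThreshold) reflect the extension's documentation and should be verified against the installed release:

steps:
# ... build and test tasks that produce the Cobertura coverage report ...
- task: BuildQualityChecks@9            # major version depends on the installed extension release
  displayName: 'Enforce 80% line coverage'
  inputs:
    checkCoverage: true
    coverageFailOption: fixed           # compare against a fixed threshold rather than the previous build
    coverageType: lines
    coverageThreshold: 80               # build fails when overall line coverage drops below 80%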
Ask Bash
What does the 'BuildQualityChecks' task do in Azure Pipelines?
What is the Cobertura format used for in Azure Pipelines?
Why is the 'PublishCodeCoverageResults' task not suitable for failing the build based on a threshold?
You need to write a Kusto Query Language (KQL) statement that returns the average server-side duration of requests for each operation in the last 30 minutes, omitting rows with empty duration values, and orders the results so the slowest operation appears first. Which query meets these requirements?
Requests | where timestamp > ago(30m) and isnotempty(duration) | extend avg_duration = avg(duration) by operation_Name | sort avg_duration desc
Requests | where timestamp > ago(30m) and isnotempty(duration) | summarize avg(duration) by operation_Name | order by avg_duration asc
Requests | where timestamp > ago(30m) and isempty(duration) | summarize duration = avg(duration) by operation_Name | order by duration desc
Requests | where timestamp > ago(30m) and isnotempty(duration) | summarize avg_duration = avg(duration) by operation_Name | sort by avg_duration desc
Answer Description
The query must first filter the Requests table to the last 30 minutes and exclude rows where the duration value is missing. The summarize operator then calculates an aggregate - in this case the average - grouped by operation_Name, with Kusto automatically naming the new column avg_duration (or the name given in the assignment). Finally, sort (or order by) must be applied in descending order so that the largest average duration is shown first. Options that use isempty instead of isnotempty, use extend for aggregation, or sort ascending do not meet one or more of the stated requirements.
Ask Bash
What is the purpose of the summarize operator in KQL?
How does the isnotempty() function work in KQL?
What is the difference between sort and order by in KQL?
Your company stores all code in GitHub Enterprise Cloud and deploys workloads to both Azure and AWS. The security team enforces FedRAMP High rules that prohibit long-lived cloud credentials in CI/CD systems. Instead, pipelines must obtain short-lived tokens issued through OpenID Connect (OIDC) at run time. Pipeline definitions must live in the same repository as the code. You need to recommend a deployment automation solution that meets these requirements with the least additional components or custom tasks. Which solution should you choose?
Azure Pipelines YAML pipelines with an AWS service connection configured from long-lived access keys and an Azure service-principal secret
GitHub Actions with GitHub-hosted runners and federated OIDC credentials to Azure and AWS
Azure Pipelines classic release pipelines with environment-specific service connections that store the required cloud access keys
GitHub Actions on self-hosted runners that use repository secrets to store AWS and Azure access keys
Answer Description
GitHub Actions natively supports OIDC federation to both Azure and AWS, allowing workflows to exchange a GitHub-issued token for short-lived cloud credentials at runtime without storing any secrets. This directly satisfies the FedRAMP requirement. Furthermore, the YAML workflow file resides in the same repository as the code, meeting another key requirement.
The distractors suggesting the use of long-lived access keys stored in repository secrets or traditional service connections explicitly violate the security policy. While modern Azure Pipelines also support OIDC via Workload Identity Federation for both Azure and AWS, choosing this option for a repository already in GitHub would require setting up a separate Azure DevOps project and service connections, introducing more components than the native GitHub Actions solution. Therefore, GitHub Actions is the solution that meets all requirements with the least additional components.
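A minimal GitHub Actions sketch of this pattern; the secret names, role ARN, and region are placeholders, and the client ID, tenant ID, and subscription ID are identifiers rather than credentials:

name: deploy
on:
  push:
    branches: [main]

permissions:
  id-token: write    # allow the job to request the GitHub-issued OIDC token
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Azure login via OIDC
        uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}        # app registration with a federated credential; no password or secret
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

      - name: AWS credentials via OIDC
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/gha-deploy   # placeholder IAM role trusted for GitHub OIDC
          aws-region: us-east-1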
Ask Bash
What is OpenID Connect (OIDC) and how does it work with GitHub Actions?
Why does GitHub Actions with OIDC have an advantage compared to Azure Pipelines for this use case?
What are the benefits of using federated OIDC credentials over long-lived credentials?
Your team maintains a single monolithic Git repository in Azure Repos. Management wants to move from bi-monthly releases to several production deployments each week. The new process must minimize merge conflicts, keep branch histories short, and allow incomplete functionality to be disabled until it is production-ready. Which branching strategy best meets these requirements?
Implement GitFlow with separate long-lived develop, release, and hotfix branches.
Adopt trunk-based development with short-lived feature branches protected by feature flags.
Use Release Flow and maintain companion release branches indefinitely after each deployment.
Move to a forking workflow where each developer works in a personal fork and submits pull requests to the main repository.
Answer Description
Trunk-based development keeps all developers working in or very close to the main branch. Short-lived feature branches (often measured in hours or a few days) are merged back quickly through pull requests, which greatly reduces long-running divergence and merge conflicts. When a feature is not ready, it is typically hidden behind a feature flag, so deployments can continue on the same main line without blocking release frequency.
GitFlow relies on long-lived develop and release branches that introduce greater divergence and are not optimized for multiple weekly releases. Release Flow does support continuous delivery, but it uses short-lived release branches that are removed once a release is complete; maintaining them indefinitely would conflict with the stated goal of minimizing branch history. A forking workflow isolates every developer in a personal fork; while appropriate for large open-source projects, it adds extra overhead and does not inherently address the need for rapid, low-conflict integration. Therefore, adopting trunk-based development with short-lived feature branches and feature flags is the best fit.
Ask Bash
How do feature flags work in trunk-based development?
Why is trunk-based development preferred over GitFlow for frequent releases?
What are the disadvantages of adopting a forking workflow in this case?
You are managing an Azure DevOps project with a default retention policy to keep runs for 30 days. For a single YAML pipeline named 'webapp-ci', you must implement a more granular policy:
- Build records must be retained for 90 days for auditing.
- Published pipeline artifacts must be deleted after 14 days to conserve storage.
- The last 10 successful runs on the 'main' branch must be kept regardless of age.
A team member attempts to configure this in the azure-pipelines.yml file, but you inform them that not all requirements can be met directly in YAML. Which requirement must be configured outside of the pipeline's YAML file, in the pipeline's settings UI?
Deleting artifacts after 14 days while keeping the build record for 90 days.
Retaining build records for 90 days.
Keeping the last 10 successful runs on the main branch.
Applying the policy only to the main branch.
Answer Description
While Azure Pipelines YAML files support several retention settings, they do not provide a way to set a separate, shorter retention period for artifacts compared to the build record itself. The days and minimumToKeep settings in the YAML retention block control the retention of the entire run record. To delete artifacts on a different schedule than the run, you must configure the 'Days to keep artifacts' setting in the pipeline's specific settings through the web UI. The other requirements, such as setting the number of days to keep the run, the minimum number of runs to keep, and scoping the policy to a specific branch, are all configurable within the YAML file.
Ask Bash
Why can't Azure Pipelines YAML files set different retention periods for artifacts and build records?
How do you configure 'Days to keep artifacts' in Azure DevOps pipeline settings UI?
What other retention settings can be configured in an Azure Pipelines YAML file?
You are creating a multi-stage Azure Pipelines YAML file. The first stage named build compiles the application. A second stage named deploy must run only when the build stage finishes successfully and the pipeline run was triggered by a direct push to the main branch (refs/heads/main). You decide to add a condition statement to the deploy stage. Which condition expression meets the requirement?
condition: and(always(), eq(variables['Build.SourceBranch'], 'main'))
condition: and(succeeded(), eq(variables['Build.Reason'], 'IndividualCI'))
condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
condition: succeeded() && variables['Build.SourceBranchName'] == 'main'
Answer Description
The deploy stage must evaluate two facts before it runs: the overall pipeline has succeeded so far and the current run's branch is main. The predefined variable Build.SourceBranch contains the fully qualified branch name (for example, "refs/heads/main"). Therefore the correct expression combines succeeded() with an equality check on Build.SourceBranch by using the and() expression helper. Using Build.SourceBranchName would omit the refs/heads/ prefix, && operators are not valid YAML expressions, always() would ignore failures, and comparing Build.Reason does not restrict the branch that triggered the run.
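A minimal sketch of the two stages with the correct condition applied; the job steps are placeholders:

stages:
- stage: build
  jobs:
  - job: compile
    steps:
    - script: echo "compile the application"

- stage: deploy
  dependsOn: build
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))   # run only after a successful build triggered from main
  jobs:
  - job: release
    steps:
    - script: echo "deploy to production"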
Ask Bash
What does the succeeded() function do in Azure Pipelines YAML?
Why is Build.SourceBranch used instead of Build.SourceBranchName?
What is the purpose of the and() expression helper in Azure Pipelines YAML?
You are standardizing Azure DevOps pipelines that run on Microsoft-hosted agents. The jobs must deploy to Azure subscriptions in the same Microsoft Entra ID (Azure AD) tenant, but you want to eliminate any stored client secrets or certificates in the project. Pipelines should obtain an identity automatically at run time while still allowing you to scope permissions granularly at the resource-group level. Which approach meets the requirements with the least operational overhead?
Use an Azure DevOps personal access token (PAT) in the service connection and grant the PAT access to the target subscription.
Enable a system-assigned managed identity on each Microsoft-hosted agent and reference it from Azure CLI tasks in the pipeline.
Create an Azure AD application and service principal, add a federated credential that trusts the Azure DevOps organization, and configure an Azure Resource Manager service connection that uses workload identity federation.
Create a service principal, generate a client secret, and store the secret in an Azure DevOps variable group referenced by each pipeline.
Answer Description
Workload identity federation lets an Azure AD application (service principal) trust the JSON Web Token (JWT) issued to each Azure DevOps job. Because the token exchange is based on OpenID Connect, no client secret or certificate is stored in Azure DevOps. The service principal created for the subscription can be restricted to the required resource groups. Managed identities cannot be used by Microsoft-hosted agents, and storing a client secret or personal access token keeps a long-lived secret that contradicts the requirement to remove stored credentials.
Ask Bash
What is workload identity federation in Azure?
What is OpenID Connect (OIDC) and how does it relate to Azure DevOps?
Why are managed identities not usable with Microsoft-hosted agents?
Your company uses GitHub Flow and stores all code in a GitHub Enterprise Cloud organization that is connected to Azure Pipelines. You must shorten the feedback cycle when a build job fails. The solution has to meet the following requirements:
- A new GitHub issue must be created automatically and assigned to the author of the last commit that triggered the failed pipeline.
- Team members subscribed to the repository must receive the standard GitHub e-mail and in-app notifications for the new issue.
- The implementation must require as little custom scripting as possible.
You create a fine-grained personal access token (PAT) that has the issues:write permission and add it to the pipeline as a secret variable.
Which action should you add to the failed branch of the Azure Pipeline to meet the requirements?
Enable a branch protection rule that requires a successful check and sets "Include administrators" to ON.
Configure a GitHub Service Hook that triggers on the "build completed" event from Azure Pipelines and creates an issue.
Add the GitHub Issues task from the Azure Pipelines Marketplace and configure it to create an issue that is assigned to $(Build.RequestedForEmail).
Add a Bash script task that uses curl to call the GitHub REST API v3 and open an issue in the repository.
Answer Description
The GitHub Issues Marketplace task lets you create or update issues in a repository directly from an Azure Pipeline without writing custom REST calls. By using the task, you can supply the previously stored PAT, set the action to Create, assign the issue to $(Build.RequestedForEmail) (the last pusher), and apply any required labels. When the task runs after a failed job, GitHub automatically sends its normal e-mail and web notifications to everyone watching the repository, fulfilling the notification requirement.
Using a Bash script or an Azure CLI task would work but would involve hand-crafting REST requests, increasing maintenance. A service hook does not meet the requirement either: service hooks send events out of a system to external subscribers, and there is no built-in hook that creates a GitHub issue in response to an Azure Pipelines build failure. A branch protection rule only blocks merges and does not generate issues or notifications, so it does not meet the stated goals.
Ask Bash
What is a GitHub Issues task in Azure Pipelines?
What is a fine-grained Personal Access Token (PAT) and how is it used in Azure Pipelines?
Why didn’t the other options meet the requirements in the question?
That's It!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.