Microsoft Azure Solutions Architect Expert Practice Test (AZ-305)
Use the form below to configure your Microsoft Azure Solutions Architect Expert Practice Test (AZ-305). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft Azure Solutions Architect Expert AZ-305 Information
Navigating the Microsoft Azure Solutions Architect Expert AZ-305 Exam
The Microsoft Azure Solutions Architect Expert AZ-305 exam is a pivotal certification for professionals who design and implement solutions on Microsoft's cloud platform. This exam validates a candidate's expertise in translating business requirements into secure, scalable, and reliable Azure solutions. Aimed at individuals with advanced experience in IT operations, including networking, virtualization, and security, the AZ-305 certification demonstrates subject matter expertise in designing cloud and hybrid solutions. Success in this exam signifies that a professional can advise stakeholders and architect solutions that align with the Azure Well-Architected Framework and the Cloud Adoption Framework for Azure.
The AZ-305 exam evaluates a candidate's proficiency across four primary domains. These core areas include designing solutions for identity, governance, and monitoring, which accounts for 25-30% of the exam. Another significant portion, 30-35%, is dedicated to designing infrastructure solutions. The exam also assesses the ability to design data storage solutions (20-25%) and business continuity solutions (15-20%). This structure ensures that certified architects possess a comprehensive understanding of creating holistic cloud environments that address everything from identity management and data storage to disaster recovery and infrastructure deployment.
The Strategic Advantage of Practice Exams
A crucial component of preparing for the AZ-305 exam is leveraging practice tests. Taking practice exams offers a realistic simulation of the actual test environment, helping candidates become familiar with the question formats, which can include multiple-choice, multi-response, and scenario-based questions. This familiarity helps in developing effective time management skills, a critical factor for success during the timed exam. Furthermore, practice tests are an excellent tool for identifying knowledge gaps. By reviewing incorrect answers and understanding the reasoning behind the correct ones, candidates can focus their study efforts more effectively on weaker areas.
The benefits of using practice exams extend beyond technical preparation. Successfully navigating these tests can significantly boost a candidate's confidence. As performance improves with each practice test, anxiety about the actual exam can be reduced. Many platforms offer practice exams that replicate the look and feel of the real test, providing detailed explanations for both correct and incorrect answers. This active engagement with the material is more effective than passive reading and is a strategic approach to ensuring readiness for the complexities of the AZ-305 exam.

Free Microsoft Azure Solutions Architect Expert AZ-305 Practice Test
- 20 Questions
- Unlimited time
- Design identity, governance, and monitoring solutions
- Design data storage solutions
- Design business continuity solutions
- Design infrastructure solutions
You are designing the messaging backbone for a multi-tenant SaaS platform hosted in Azure. Several front-end microservices will publish up to 15,000 events per second, each 40 KB or less. Downstream worker services must receive these events in near real-time with at-least-once delivery; strict ordering is not required. The solution must:
- Decouple producers from consumers through a publish/subscribe model.
- Automatically scale and require minimal operational management.
- Offer built-in integration triggers for Azure Functions.
Which Azure service should you recommend?
Azure Service Bus Premium topic
Azure Event Hubs Standard namespace with Capture enabled
Azure Storage Queue triggered by Azure Functions
Azure Event Grid domain
Answer Description
Azure Event Grid domains provide a fully managed, serverless publish/subscribe service that automatically scales to millions of events per second, supports event sizes up to 1 MB, and delivers events to multiple subscribers with at-least-once guarantees. Domains add tenant-level routing, making them ideal for multi-tenant SaaS scenarios. Event Grid integrates natively with Azure Functions using Event Grid triggers, allowing near real-time processing with minimal code.
Azure Service Bus topics deliver rich enterprise messaging features such as sessions and ordered delivery, but they require capacity planning and do not scale automatically to very high throughput. Azure Event Hubs is optimized for streaming telemetry; it uses a pull model, does not push events to multiple services, and needs consumer group management. Azure Storage Queues lack publish/subscribe fan-out and advanced routing. Therefore, Event Grid domains best meet the stated requirements.
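A minimal Azure CLI sketch of this design, with hypothetical resource names and placeholder subscription IDs, shows the domain plus a per-tenant subscription that pushes events to an Azure Function:

```shell
# Create an Event Grid domain for tenant-level pub/sub routing
# (resource group, names, and location are hypothetical placeholders)
az eventgrid domain create \
  --resource-group rg-saas-messaging \
  --name egd-saas-events \
  --location eastus

# Subscribe an Azure Function to one tenant's domain topic;
# Event Grid pushes events with at-least-once delivery
az eventgrid event-subscription create \
  --name sub-tenant-contoso \
  --source-resource-id "/subscriptions/<sub-id>/resourceGroups/rg-saas-messaging/providers/Microsoft.EventGrid/domains/egd-saas-events/topics/tenant-contoso" \
  --endpoint-type azurefunction \
  --endpoint "/subscriptions/<sub-id>/resourceGroups/rg-saas-app/providers/Microsoft.Web/sites/func-workers/functions/ProcessEvent"
```

Producers then publish to the domain endpoint with a tenant-specific topic name, and Event Grid handles fan-out to each tenant's subscribers.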
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI-generated content may display inaccurate information; always double-check anything important.
What is an Azure Event Grid domain?
What makes Azure Event Grid different from Azure Service Bus?
How does Azure Event Grid provide at-least-once delivery?
Contoso Ltd. employs 30 first-line support engineers who must be able to restart any virtual machine in the company's three Azure subscriptions during their 8-hour shift. Security policy requires that:
- Engineers receive only the minimum permissions necessary.
- Access must expire automatically at the end of each shift.
- A shift lead must approve the access request before it is granted.
You need to recommend an authorization solution that meets the requirements while minimizing administrative effort. What should you recommend?
Add the engineers to the built-in Contributor role at each subscription scope and configure Azure AD Access Reviews to run once per month.
Use Azure AD PIM to make each engineer eligible for the built-in Virtual Machine Contributor role at the resource-group level with no approval workflow and a permanent assignment.
Create an Azure Automation runbook that restarts virtual machines and grant the engineers permission to invoke the runbook through an Azure DevOps pipeline.
Create a custom Azure RBAC role that includes only the Microsoft.Compute/virtualMachines/restart/action permission, onboard each subscription to Azure AD Privileged Identity Management, and assign the role as eligible directly to every engineer at the subscription scope. Configure PIM to require shift-lead approval and set the activation duration to eight hours.
Answer Description
Azure AD Privileged Identity Management (PIM) for Azure RBAC lets you create eligible, time-bound assignments that can require approval and that expire automatically after a configurable activation duration (eight hours in this scenario). By defining a custom RBAC role that contains only the Microsoft.Compute/virtualMachines/restart/action permission, you enforce least privilege. Assigning that role as eligible directly to each engineer at the subscription scope works around the limitation that group assignments cannot be made eligible, yet still requires only a one-time setup per engineer. PIM's role settings let a shift lead approve each activation request. The Contributor and Virtual Machine Contributor roles grant unnecessary permissions, and monthly access reviews or automation pipelines do not deliver the required per-shift, approval-based, time-bound access.
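The custom role half of this design can be sketched with the Azure CLI; the role name and subscription IDs below are placeholders, and the PIM eligibility, approval workflow, and activation duration would still be configured in the PIM portal (or via the role management policy APIs):

```shell
# Define a custom role limited to the restart action only
# (role name and subscription IDs are hypothetical placeholders)
cat > restart-role.json <<'EOF'
{
  "Name": "Virtual Machine Restarter",
  "Description": "Can restart virtual machines only.",
  "Actions": [ "Microsoft.Compute/virtualMachines/restart/action" ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<sub-id-1>",
    "/subscriptions/<sub-id-2>",
    "/subscriptions/<sub-id-3>"
  ]
}
EOF

az role definition create --role-definition @restart-role.json
```

With the role in place, each engineer is made eligible for it through PIM at each subscription scope, with shift-lead approval required and an eight-hour maximum activation.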
Ask Bash
What is Azure RBAC and how does it work?
What is Azure AD Privileged Identity Management (PIM) and why is it useful?
How does using a custom RBAC role enforce least privilege?
You manage an e-commerce application that uses a single Azure SQL Database in the General Purpose tier (East US). A recent availability-zone outage made the database unavailable for 20 minutes. New requirements are:
- Database must stay writable if any single zone in East US fails.
- Recovery time after a zone failure must be under one minute.
- Cross-region disaster recovery is not required.
- No virtual-machine administration is acceptable.
Which change best meets these requirements?
Upgrade the database to the Business Critical service tier and enable zone-redundant configuration.
Configure active geo-replication to a secondary database in Central US.
Migrate to SQL Server on Azure Virtual Machines configured in an availability set.
Enable auto-failover groups between two Azure SQL Databases in different regions.
Answer Description
The Business Critical service tier for Azure SQL Database implements a four-replica Always On availability group. When zone redundancy is enabled, each replica is automatically placed in a different availability zone within the same region. If one zone fails, the service promotes a healthy replica in another zone, typically within 30 seconds, so the database remains online for both reads and writes, meeting the sub-minute RTO. Active geo-replication and auto-failover groups focus on cross-region recovery and do not guarantee sub-minute failover inside the primary region. Running SQL Server on Azure VMs in an availability set still leaves the workload vulnerable to a full zone outage and introduces VM management overhead. Therefore, upgrading to Business Critical and enabling zone redundancy satisfies all stated requirements.
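As a sketch, the tier change and zone redundancy can be applied in a single Azure CLI call; the resource names and the BC_Gen5_4 service objective below are hypothetical placeholders:

```shell
# Move the existing database to Business Critical and enable
# zone redundancy (names and service objective are placeholders)
az sql db update \
  --resource-group rg-ecommerce \
  --server sql-ecom-eastus \
  --name ecomdb \
  --edition BusinessCritical \
  --service-objective BC_Gen5_4 \
  --zone-redundant true
```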
Ask Bash
What is the Business Critical service tier in Azure SQL Database?
What is zone redundancy in Azure SQL Database?
Why is cross-region disaster recovery not required in this scenario?
You manage the database platform for a retail organization that operates 24/7. The primary 5-TB SQL Server 2016 database runs on-premises in an Always On availability group. Management has decided to move this database to Azure SQL Managed Instance. The business will tolerate no more than 30 minutes of total downtime during cut-over. Which Azure-based migration approach should you recommend to satisfy the downtime requirement while ensuring the target service remains fully managed?
Enable Azure Site Recovery to replicate the on-premises SQL Server virtual machines to Azure and fail over to the replicas during the maintenance window.
Set up Azure SQL Data Sync between the on-premises database and Azure SQL Managed Instance, then cut over once synchronization is complete.
Use Azure Database Migration Service deployed in the Premium tier to perform an online migration from the on-premises SQL Server to Azure SQL Managed Instance.
Export a BACPAC from the primary replica and import it into Azure SQL Managed Instance during the maintenance window.
Answer Description
Azure Database Migration Service (DMS) supports online migrations from on-premises SQL Server to Azure SQL Managed Instance when you deploy the service in the Premium tier. Online mode keeps the source database fully operational by continuously replicating changes until you choose to perform a brief cut-over, typically well under 30 minutes, for final synchronization and application redirection.
Exporting a BACPAC requires quiescing writes on the source database to guarantee a transactionally consistent snapshot, and importing a single static copy of a 5-TB database takes hours, far exceeding the 30-minute window.
Azure Site Recovery replicates virtual machines, not managed database services; failing over would move the workload to an Azure VM-hosted SQL Server, not to Azure SQL Managed Instance, and still involves more downtime while the VM is brought online and clients are reconfigured.
Azure SQL Data Sync is designed for bidirectional data synchronization of selected tables and is neither intended for full-database migrations nor able to guarantee sub-hour cut-over windows for multi-terabyte workloads. Therefore, deploying Azure Database Migration Service in Premium tier with online migration is the only option that meets both the downtime constraint and the target platform requirement.
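Provisioning the migration service itself might look like the following Azure CLI sketch; the names and subnet resource ID are placeholders, and the migration project and final cut-over would then be driven from the portal or PowerShell:

```shell
# Create DMS in the Premium tier, which is required for online
# (continuous-sync) migrations to SQL Managed Instance
# (names and the subnet resource ID are hypothetical placeholders)
az dms create \
  --resource-group rg-migration \
  --name dms-sqlmi-online \
  --location eastus \
  --sku-name Premium_4vCores \
  --subnet "/subscriptions/<sub-id>/resourceGroups/rg-network/providers/Microsoft.Network/virtualNetworks/vnet-hybrid/subnets/snet-dms"
```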
Ask Bash
What is Azure Database Migration Service (DMS) and why is the Premium tier required for this migration?
Why isn't exporting a BACPAC file suitable for a 5-TB database migration?
What are the limitations of Azure SQL Data Sync in full-database migrations?
Your company renders high-resolution video frames using a proprietary command-line application that must run on Windows Server with GPU acceleration. Each night about 20,000 independent rendering jobs are submitted, and the output is written to Azure Blob Storage. You need a fully managed, cost-efficient compute platform that can automatically provision and deallocate hundreds of GPU-enabled virtual machines, handle job queuing and scheduling, and require minimal custom orchestration code. Which Azure service should you recommend?
Azure Functions running on a Premium plan with event triggers
Azure Kubernetes Service (AKS) with the Kubernetes Event-Driven Autoscaler (KEDA)
Azure Batch with GPU-enabled pools and automatic pool scaling
Azure Container Instances (ACI) with Event Grid-based job orchestration
Answer Description
Azure Batch is purpose-built for large-scale, parallel, or high-performance batch workloads. It automatically creates and manages pools of virtual machines (including GPU SKUs), queues and schedules jobs, distributes tasks, collects results, and scales the pool up or down so you pay only for the compute consumed. Azure Kubernetes Service, Azure Functions, and Azure Container Instances can run containerized workloads, but they require you to build and operate custom job schedulers or are constrained by execution limits and lack integrated VM-level autoscaling across thousands of tasks. Therefore, Azure Batch best meets the requirements for a managed, scalable batch rendering solution.
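The "scale the pool up or down" behavior is driven by a Batch autoscale formula. A minimal sketch, assuming a hypothetical pool ID and a 400-node cap, attaches a formula that sizes the pool to the pending-task backlog each evaluation interval:

```shell
# Attach an autoscale formula to the GPU pool so node count follows
# the nightly task backlog (pool ID and node cap are placeholders)
az batch pool autoscale enable \
  --pool-id render-gpu-pool \
  --auto-scale-evaluation-interval "PT5M" \
  --auto-scale-formula '
    $pending = max($PendingTasks.GetSample(TimeInterval_Minute * 5, 0));
    $TargetDedicatedNodes = min(400, $pending);
    $NodeDeallocationOption = taskcompletion;'
```

When the nightly queue drains, the formula scales the pool back toward zero, so compute is billed only while jobs run.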
Ask Bash
What is Azure Batch, and why is it suited for GPU-intensive tasks?
Why isn’t Azure Kubernetes Service (AKS) suitable for this scenario?
What makes Azure Functions and Azure Container Instances less appropriate?
You are designing a long-term archive for 140 TB of security-camera footage that must be stored in Azure for at least seven years. Compliance mandates that the footage remain immutable for the entire retention period. The organization must still be able to read the data if the primary Azure region becomes unavailable, with a cross-region recovery-point objective (RPO) of no more than 15 minutes and no need for Microsoft to initiate a failover. Durability must be at least 99.999999999 percent, and overall cost should be minimized.
Which Azure storage configuration should you recommend?
Store the footage in an Azure Storage account that has a time-based immutable blob policy and is configured for read-access geo-zone-redundant storage (RA-GZRS).
Create two separate locally redundant storage (LRS) accounts in different regions and use scheduled AzCopy jobs to replicate data between them.
Store the footage in an Azure Storage account that has a time-based immutable blob policy and is configured for zone-redundant storage (ZRS).
Store the footage in an Azure Storage account that has a time-based immutable blob policy and is configured for geo-redundant storage (GRS).
Answer Description
Use an Azure Storage account that combines a time-based immutable blob policy with the read-access geo-zone-redundant storage (RA-GZRS) replication tier.
- The immutable blob policy enforces write-once, read-many (WORM) protection, blocking all modifications and deletions for the specified seven-year retention period, which satisfies the compliance requirement.
- RA-GZRS stores three synchronous copies across separate availability zones in the primary region (ZRS) and asynchronously replicates to a secondary region. Because the secondary region is available for immediate read access, applications can continue reading data even if the primary region is unavailable, with a typical cross-region RPO of under 15 minutes.
- RA-GZRS provides at least sixteen nines (99.99999999999999 percent) annual durability, comfortably exceeding the required eleven nines, while costing less than building and managing multiple independent storage accounts.
Why the other options do not meet requirements:
- ZRS plus immutability keeps data in a single region, so it cannot withstand a regional outage.
- GRS plus immutability replicates to a secondary region but does not allow reads there until Microsoft triggers a failover, violating the read-access requirement.
- Manually copying between two LRS accounts neither guarantees the stipulated RPO nor delivers the platform-managed durability and availability provided by RA-GZRS.
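A rough Azure CLI sketch of this configuration, with hypothetical account and container names and a seven-year (roughly 2,557-day) retention period:

```shell
# Create the RA-GZRS account (names and tier are placeholders)
az storage account create \
  --resource-group rg-archive \
  --name stcamerafootage \
  --location eastus \
  --kind StorageV2 \
  --sku Standard_RAGZRS \
  --access-tier Cool

az storage container create \
  --account-name stcamerafootage --name footage

# Lock the container with a time-based immutability (WORM) policy
# covering the seven-year retention period
az storage container immutability-policy create \
  --resource-group rg-archive \
  --account-name stcamerafootage \
  --container-name footage \
  --period 2557
```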
Ask Bash
What is RA-GZRS in Azure Storage?
What is an immutable blob policy in Azure?
How does Azure ensure the durability of data stored with RA-GZRS?
Your organization manages six Azure subscriptions that are frequently reorganized. You must design a logging solution that meets the following requirements: collect all activity and resource diagnostic logs from every subscription in a single location, retain the data for at least 730 days even if a subscription or resource is deleted, and require the least ongoing administrative effort when new subscriptions are created. Which solution should you recommend?
Deploy an Event Hubs namespace in every subscription, stream all diagnostic data to an on-premises SIEM, and archive the data there for two years.
Create a Log Analytics workspace in a dedicated management subscription, use an Azure Policy initiative to configure subscription-level diagnostic settings that send all Activity Log and resource diagnostic logs to the workspace, and set the workspace retention to 730 days.
Configure each resource to send diagnostic logs to individual Application Insights instances and enable continuous export from each instance to an immutable storage account.
Enable export of each subscription's Activity Log to a storage account in the same subscription and configure lifecycle management to keep the data in the Cool tier for two years.
Answer Description
A dedicated Log Analytics workspace in a separate management subscription provides a single, durable location for logs that is unaffected if other subscriptions or resources are deleted. Subscription-level diagnostic settings can forward both Azure Activity Logs and resource diagnostic logs directly to this workspace. By assigning an Azure Policy initiative that automatically deploys and configures these diagnostic settings to any current or future subscription, you avoid manual per-subscription configuration and ensure consistent onboarding. The workspace's retention can be set to 730 days, meeting the compliance requirement. The other options either scatter logs across multiple storage accounts, rely on on-premises infrastructure, or use Application Insights (which is intended for application telemetry, not comprehensive platform logging) and therefore fail to meet centralization, durability, or administrative-effort requirements.
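The per-subscription diagnostic setting the policy initiative would deploy can be sketched for one subscription with the Azure CLI; the workspace resource ID and log categories shown are placeholders:

```shell
# Route one subscription's Activity Log to the central workspace;
# in practice an Azure Policy initiative with a deployIfNotExists
# effect applies this setting to every current and future subscription
# (workspace resource ID and categories are placeholders)
az monitor diagnostic-settings subscription create \
  --name send-to-central-law \
  --location eastus \
  --workspace "/subscriptions/<mgmt-sub-id>/resourceGroups/rg-logging/providers/Microsoft.OperationalInsights/workspaces/law-central" \
  --logs '[{"category":"Administrative","enabled":true},
           {"category":"Security","enabled":true},
           {"category":"Policy","enabled":true}]'
```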
Ask Bash
What is an Azure Log Analytics Workspace?
What is an Azure Policy initiative and how does it simplify management?
Why is a separate management subscription recommended for log storage?
Your organization hosts a mission-critical Azure SQL Database in the Business Critical service tier. Compliance rules state that you must be able to restore the database to any point in time during the last 7 days and also keep a restorable copy of each month-end full backup for 10 years. Management wants the simplest native Azure solution that minimizes ongoing administration and storage costs. Which approach should you recommend?
Enable Azure SQL Database long-term retention (LTR) for monthly backups stored in Azure Blob storage and rely on the service's automatic point-in-time restore capability for the recent 7-day window.
Protect the database with Azure Backup by creating a Recovery Services vault and configuring a SQL in Azure VM backup policy.
Create an active geo-replica in another Azure region and use geo-restore and backup retention on that replica to satisfy both objectives.
Schedule an Azure Automation runbook to export a BACPAC file of the database to Azure Storage at the end of each month and keep the files for 10 years.
Answer Description
Azure SQL Database automatically takes full, differential, and transaction log backups that enable point-in-time restore for the retention period configured for the database: 7 days by default, extendable up to 35 days. For longer compliance retention, Azure SQL Database offers a built-in Long-Term Retention (LTR) feature that copies full backups to RA-GRS blob storage on a weekly, monthly, or yearly schedule and keeps them for up to 10 years. Enabling an LTR policy for monthly backups therefore meets the 10-year requirement without additional infrastructure or management overhead. Azure Backup cannot protect platform-as-a-service databases, geo-replication is intended for high availability rather than long-term retention, and exporting BACPACs with Automation adds operational complexity and cost compared to the native LTR capability.
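The LTR policy itself is a one-line configuration. In this sketch (server and database names are placeholders), each monthly full backup is kept for 120 months, i.e. 10 years:

```shell
# Keep each month-end full backup for 10 years (P120M); weekly and
# yearly retention are left unset (resource names are placeholders)
az sql db ltr-policy set \
  --resource-group rg-data \
  --server sql-prod-eastus \
  --name salesdb \
  --monthly-retention P120M
```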
Ask Bash
What is Azure SQL Database Long-Term Retention (LTR)?
How does Point-in-Time Restore work in Azure SQL Database?
What is the difference between RA-GRS and other storage redundancies in Azure?
You are planning a migration of 80 TB of video files that reside on an on-premises NAS to Azure. The available internet link averages 50 Mbps, so management wants the bulk of the data moved by shipping hardware. After the initial upload, about 200 GB of new and modified files will be generated each month and must be replicated automatically to an Azure Blob Storage account with minimal operational effort. Which solution should you recommend?
Configure Azure File Sync with a cloud endpoint in Azure Blob Storage and perform the initial synchronization over the internet.
Migrate the data by using Azure Database Migration Service with an offline export followed by continuous sync tasks.
Order an Azure Data Box for the initial seed transfer, then deploy Azure Data Box Gateway to handle ongoing incremental replication to Azure Blob Storage.
Use AzCopy with checkpoint-restart capability to upload all data over the WAN and schedule monthly jobs for incremental copies.
Answer Description
Azure Data Box provides about 100 TB of raw (≈80 TB usable) capacity and is designed for one-time, offline transfers in the 40-500 TB range. You can order the appliance, copy the 80 TB locally at high speed, and then ship it back to Microsoft for ingestion into your Azure Storage account. After seeding, deploying Azure Data Box Gateway as an on-premises virtual appliance enables continuous synchronization over the existing WAN link. The gateway uploads only changed data blocks, easily handling the expected 200 GB per month with minimal administration.
AzCopy over a 50 Mbps connection would take well over a month to upload 80 TB and would still require manual scheduling for recurring jobs. Azure File Sync targets Azure Files, not Blob Storage. Azure Database Migration Service is intended for structured databases, not unstructured file content.
Ask Bash
What is Azure Data Box, and how does it work?
What is Azure Data Box Gateway, and how does it help with ongoing data replication?
Why wouldn’t AzCopy or Azure File Sync work for this scenario?
Your company has an on-premises SQL Server 2019 instance storing transactional data. The server sits in a secure network that blocks inbound internet traffic but allows outbound HTTPS (TCP 443). You must design an Azure solution that copies data from the database to Azure Data Lake Storage Gen2 every 15 minutes for analytics. The solution must be fully managed, provide scheduling and monitoring, and require no custom code. Which Azure Data Factory integration runtime should you recommend for the copy activity?
Self-hosted integration runtime installed on the on-premises server or a gateway machine
Azure-SSIS integration runtime running your existing SSIS packages
Azure integration runtime (AutoResolve) hosted in the target region
Managed Virtual Network (VNet) data flow runtime in Azure Data Factory
Answer Description
Because the SQL Server instance is hosted on-premises behind a firewall that blocks all inbound traffic, Azure Data Factory cannot initiate a direct connection from the Azure-hosted integration runtime. The correct approach is to install and register a self-hosted integration runtime (SHIR) inside the on-premises network. SHIR establishes only outbound, secure TCP 443 connections to the Azure Data Factory service, satisfying the security constraint while enabling the Copy activity to pull data every 15 minutes. The Azure integration runtime and Managed Virtual Network runtime run in Azure and cannot reach the on-prem server without opening inbound ports or using VPN/ExpressRoute. The Azure-SSIS integration runtime is intended for running existing SSIS packages; it is unnecessary when a code-free Copy activity is sufficient and would add cost and complexity.
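Registering the SHIR involves creating the runtime in the factory, then installing the runtime software on the on-premises gateway machine using an authentication key. A sketch with hypothetical factory and runtime names (the `datafactory` CLI extension is assumed):

```shell
# Create the self-hosted integration runtime in the data factory
# (factory and runtime names are hypothetical placeholders)
az datafactory integration-runtime self-hosted create \
  --factory-name adf-analytics \
  --resource-group rg-data \
  --name shir-onprem

# Fetch the key used to register the on-premises node; the SHIR
# installer on the gateway machine consumes this key and then makes
# only outbound HTTPS (TCP 443) connections to the service
az datafactory integration-runtime list-auth-key \
  --factory-name adf-analytics \
  --resource-group rg-data \
  --integration-runtime-name shir-onprem
```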
Ask Bash
What is a Self-hosted Integration Runtime (SHIR) in Azure Data Factory?
Why can’t Azure Integration Runtime connect to an on-premises SQL Server instance?
What is the difference between Azure Integration Runtime and Self-hosted Integration Runtime?
A company is modernizing an on-premises microservices application and plans to deploy containers to Azure. Mandatory requirements are:
- Each microservice must automatically scale from 0 to hundreds of instances on HTTP or queue events.
- Platform management of the underlying infrastructure must be minimized; developers must not administer Kubernetes clusters.
- Built-in support is needed for Dapr service invocation and pub/sub between microservices.
Which Azure service should you recommend to host the containers?
Azure Kubernetes Service (AKS) with the cluster autoscaler and KEDA add-on
Azure Container Instances orchestrated by Azure Logic Apps
Azure App Service for Containers running in the Premium v3 tier with autoscale rules
Azure Container Apps
Answer Description
Azure Container Apps is a fully managed service that runs containers on an Azure-managed Kubernetes environment yet hides cluster management from developers. It supports event-driven horizontal scaling with KEDA, including scaling to zero when workloads are idle, and offers native integration with the Dapr runtime for service invocation and pub/sub. Azure Kubernetes Service also supports KEDA and Dapr but still requires customers to operate and secure the cluster and cannot completely eliminate baseline node costs. Azure App Service for Containers and Azure Container Instances do not provide Dapr integration or scale-to-zero capabilities. Therefore, Azure Container Apps best meets all stated requirements.
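A deployment sketch for one microservice shows how scale-to-zero, KEDA-backed HTTP scaling, and the Dapr sidecar are all expressed declaratively; the environment, image, and app names below are hypothetical placeholders:

```shell
# Deploy a container app with scale-to-zero, HTTP-based KEDA scaling,
# and the Dapr sidecar enabled (names and image are placeholders)
az containerapp create \
  --resource-group rg-micro \
  --environment cae-prod \
  --name cart-service \
  --image myregistry.azurecr.io/cart:latest \
  --ingress external --target-port 8080 \
  --min-replicas 0 --max-replicas 300 \
  --scale-rule-name http-burst \
  --scale-rule-http-concurrency 50 \
  --enable-dapr \
  --dapr-app-id cart \
  --dapr-app-port 8080
```

With `--min-replicas 0`, idle services cost nothing; the HTTP scale rule adds replicas as concurrent requests grow, and other services reach this one via Dapr service invocation using the `cart` app ID.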
Ask Bash
What is Dapr, and why is it important for microservices in Azure?
How does Azure Container Apps scale seamlessly from 0 to handle large workloads?
What differentiates Azure Container Apps from Azure Kubernetes Service (AKS)?
Your company hosts a mission-critical Azure SQL Database single database in the East US region. Compliance requires that you can restore the database to any point in time within the last 35 days. Business-continuity guidelines state that if the entire region becomes unavailable, the application must resume service in the paired region within minutes and with an RPO of less than five seconds, without manual intervention. You need to design the backup and disaster-recovery solution while keeping administrative effort low. Which Azure capability should you recommend?
Configure automatic geo-backup and perform geo-restore when needed.
Schedule nightly exports of the database to Azure Storage and re-import after a failure.
Enable zone-redundant configuration for the database.
Create an auto-failover group that replicates the database to a secondary server in the paired region.
Answer Description
An auto-failover group asynchronously replicates Azure SQL databases to a secondary server in a different region, typically keeping data lag under five seconds (meeting the RPO target) and enabling automatic failover within minutes (meeting the RTO target) with minimal operational overhead. Point-in-time restore for up to 35 days remains available because automated backups continue on both primary and secondary. Geo-restore based only on automated backups can have an RPO of up to one hour and requires manual action. Zone redundancy protects only against availability-zone failures inside the same region, not a regional outage. Nightly exports to Azure Storage are manual, increase management overhead, and can lose up to 24 hours of data, violating the RPO requirement.
Ask Bash
What is an auto-failover group in Azure SQL Database?
How does asynchronous replication work in auto-failover groups?
What is the difference between geo-backup/geo-restore and auto-failover groups?
Contoso ingests billions of semi-structured IoT telemetry events each hour. You plan to store the data by using Azure Cosmos DB with the Core (SQL) API. The solution must:
- guarantee 99.999 percent read and write availability even if an Azure region becomes unavailable,
- provide automatic failover without code changes, and
- keep write latency as low as possible for users worldwide.
Which design should you recommend?
Create a Cosmos DB account that has multi-region writes enabled, add the required Azure regions, and configure automatic failover with a prioritized region list.
Create a single-write-region Cosmos DB account, replicate it to additional regions, and use Azure Traffic Manager with performance routing to direct clients; trigger failover manually when needed.
Deploy Azure Table storage accounts in two Azure regions with geo-redundant storage replication (RA-GRS) and implement client-side retry logic for failover.
Provision Azure SQL Database in two regions, configure an active geo-replication pair, and enable automatic failover groups for high availability.
Answer Description
Enabling multi-region writes on an Azure Cosmos DB account and turning on automatic failover satisfies all the stated requirements. Multi-region write accounts receive a 99.999 percent SLA for both reads and writes and can continue serving traffic from any healthy region if one becomes unavailable. Because every configured region can accept writes, users around the world write to the nearest region, minimizing latency.
The alternative designs fail to meet one or more requirements:
- A single-write-region Cosmos DB account, even with additional read regions, offers only a 99.99 percent write-availability SLA. Although such an account can be configured for automatic failover, it still falls short of the 99.999 percent availability target.
- Azure Table storage with geo-redundant replication guarantees only 99.9 percent write availability and relies on client-side retry logic for failover.
- Azure SQL Database with active geo-replication provides a 99.99 percent availability SLA, which does not meet the 99.999 percent requirement.
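The recommended design could be provisioned with the Azure CLI along these lines (account name, resource group, and regions are hypothetical):

```shell
# Create a Cosmos DB account with two write regions, a failover priority list,
# and service-managed (automatic) failover enabled.
az cosmosdb create \
  --name contoso-telemetry \
  --resource-group rg-cosmos \
  --locations regionName=eastus failoverPriority=0 \
  --locations regionName=westeurope failoverPriority=1 \
  --enable-multiple-write-locations true \
  --enable-automatic-failover true
```

Because every listed region accepts writes, client SDKs configured with a preferred-region list route each write to the nearest healthy region without code changes.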
Ask Bash
What is multi-region writes in Azure Cosmos DB?
How does automatic failover work in Azure Cosmos DB?
Why is Azure Cosmos DB better than Azure SQL Database or Azure Table storage for this scenario?
You are designing a multiregion e-commerce platform that runs stateless microservices on Azure Kubernetes Service in both East US and West US. The shopping-cart component requires sub-millisecond reads and writes, key-level time-to-live, configurable eviction policies, and automatic failover if an entire region becomes unavailable. You must choose a fully managed Azure service that minimizes code changes and keeps cached data synchronized between the two regions. Which solution should you recommend?
Enable global caching in Azure Front Door Standard to store shopping-cart data at edge locations.
Rely on each App Service instance's in-memory cache and configure application affinity for session stickiness.
Deploy Azure Cache for Redis Enterprise with active geo-replication between East US and West US.
Use Azure SQL Database with active geo-replication and store shopping-cart data in a relational table.
Answer Description
Azure Cache for Redis Enterprise supports active geo-replication, allowing caches in different regions to operate in active-active mode with automatic data synchronization and seamless failover. The Enterprise tier also supports persistence, key-level TTL, and all standard Redis eviction policies, while requiring only connection-string changes for most applications that already use Redis clients. Azure SQL Database is a relational store and cannot guarantee sub-millisecond cache latency. Azure Front Door Standard provides edge object caching but cannot store or replicate key-value shopping-cart data. In-memory caches inside App Service instances are not shared across regions and are lost during scale-out or failover. Therefore, Azure Cache for Redis Enterprise is the appropriate choice.
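To make the cache semantics concrete, the following is a small, local Python sketch of key-level TTL plus least-recently-used eviction. Azure Cache for Redis provides both natively (for example, `SET key value EX <seconds>` and the `allkeys-lru` maxmemory policy); the class and keys here are purely illustrative.

```python
import time

class TtlCache:
    """Toy cache illustrating key-level TTL and LRU eviction,
    semantics that Azure Cache for Redis provides natively."""

    def __init__(self, max_items=2):
        self.max_items = max_items
        self.store = {}   # key -> (value, expires_at)
        self.order = []   # access order, oldest first, for LRU

    def set(self, key, value, ttl_seconds):
        # Evict the least recently used key when the cache is full.
        if key not in self.store and len(self.store) >= self.max_items:
            lru = self.order.pop(0)
            del self.store[lru]
        self.store[key] = (value, time.monotonic() + ttl_seconds)
        if key in self.order:
            self.order.remove(key)
        self.order.append(key)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:   # key-level TTL expired
            del self.store[key]
            self.order.remove(key)
            return None
        self.order.remove(key)
        self.order.append(key)               # mark as recently used
        return value

cache = TtlCache(max_items=2)
cache.set("cart:1", {"sku": "A100"}, ttl_seconds=60)
cache.set("cart:2", {"sku": "B200"}, ttl_seconds=60)
cache.get("cart:1")                                   # touch cart:1, so cart:2 is LRU
cache.set("cart:3", {"sku": "C300"}, ttl_seconds=60)  # evicts cart:2
```

With active geo-replication, the Enterprise tier keeps such key-value state synchronized between regions so each AKS cluster reads and writes its local cache.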
Ask Bash
What is active geo-replication in Azure Cache for Redis Enterprise?
What is the purpose of key-level TTL and eviction policies in Azure Cache for Redis?
Why isn't Azure SQL Database suitable for sub-millisecond cache latency?
Your company manages several Azure subscriptions. You must collect platform logs from all Azure virtual machines and Azure Firewall resources. Requirements:
- Retain every log record for 12 months in low-cost Azure storage.
- Simultaneously stream the same logs in near real time to the on-premises Splunk SIEM.
Which solution should you implement to meet these requirements with the least operational overhead?
Create an Azure Monitor diagnostic setting on each relevant resource that forwards logs to an Azure Storage account configured for the Cool access tier and to an Azure Event Hubs namespace integrated with Splunk.
Use the Azure Monitor Data Collector API to push logs to Splunk and configure a Log Analytics workspace with 365-day retention for archival.
Configure immutable blob storage for log archival and use an Azure Automation runbook to copy the logs to Splunk on a daily schedule.
Enable Activity Log forwarding to an Azure Service Bus queue and trigger an Azure Function to write each message to Splunk and to blob storage.
Answer Description
Configure Azure Monitor diagnostic settings on each supported resource (or at the subscription level for activity logs) to send the same platform log categories to both an Azure Storage account and an Azure Event Hubs namespace. Store the data in a storage account that uses the Cool tier for cost-effective 12-month retention. The Event Hubs namespace acts as the streaming endpoint that the Splunk Add-on for Microsoft Cloud Services can subscribe to for near-real-time ingestion. A single diagnostic setting can route the identical logs to several destination types at once (a storage account, an Event Hubs namespace, a Log Analytics workspace, and a partner solution), so no additional automation or intermediate services are required. The alternative options rely on unsupported targets (Service Bus), lack real-time guarantees (Data Collector API), or add unnecessary operational complexity (custom Automation runbooks).
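One diagnostic setting fanning out to both destinations could look like this with the Azure CLI; all resource IDs and names below are hypothetical placeholders:

```shell
# Single diagnostic setting on an Azure Firewall, routing the same log
# categories to a storage account (archive) and an Event Hubs namespace (Splunk).
az monitor diagnostic-settings create \
  --name fw-logs \
  --resource "/subscriptions/<sub-id>/resourceGroups/rg-net/providers/Microsoft.Network/azureFirewalls/fw-hub" \
  --logs '[{"categoryGroup":"allLogs","enabled":true}]' \
  --storage-account "/subscriptions/<sub-id>/resourceGroups/rg-logs/providers/Microsoft.Storage/storageAccounts/stlogarchive" \
  --event-hub splunk-stream \
  --event-hub-rule "/subscriptions/<sub-id>/resourceGroups/rg-logs/providers/Microsoft.EventHub/namespaces/eh-logs/authorizationRules/RootManageSharedAccessKey"
```

A lifecycle management policy on the storage account can then move or delete blobs after the 12-month retention window.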
Ask Bash
What is an Azure Monitor diagnostic setting?
What is the Cool access tier in Azure Storage?
How does Azure Event Hub integrate with Splunk?
Contoso plans to move 40 on-premises Windows and Linux virtual machines that run on VMware vSphere to Azure. Before choosing target sizes, you must gather 30 days of CPU and memory utilization, estimate monthly Azure compute costs, and map TCP dependencies between the virtual machines to plan a phased migration. You want to achieve this with a single Microsoft-provided solution and minimal manual effort. Which action should you take first?
Deploy the Azure Migrate appliance and create a Discovery and assessment project with agentless dependency visualization
Install the Azure Monitor agent on every virtual machine and use Azure Monitor to collect metrics and dependency maps
Install the Azure Site Recovery mobility service to replicate the virtual machines and analyze the replication reports
Run the Microsoft Assessment and Planning (MAP) Toolkit to generate a server capacity and dependency report
Answer Description
Azure Migrate: Discovery and assessment is designed for the pre-migration evaluation of on-premises servers. Deploying the Azure Migrate appliance enables agentless discovery of VMware VMs, collection of historical performance data, generation of right-sizing and cost estimates for Azure virtual machines, and built-in network dependency visualization.
Azure Monitor with the Azure Monitor Agent can collect performance counters and, with the Service Map solution, show dependencies, but it does not automatically create right-sizing or cost assessment reports, so you would still need another tool. The Microsoft Assessment and Planning (MAP) Toolkit is a legacy on-premises tool that is no longer updated; it lacks native Azure cost calculations and requires additional steps to discover network dependencies. The Azure Site Recovery mobility service is used for replication and failover testing, not for pre-migration assessment or cost estimation. Therefore, deploying the Azure Migrate appliance and creating a discovery and assessment project is the correct first step.
Ask Bash
What is the Azure Migrate appliance?
What is agentless dependency visualization?
How does Azure Migrate calculate right-sizing and cost estimates?
Your company hosts dozens of REST and legacy SOAP services in Azure Kubernetes Service (AKS), Azure App Service, and an on-premises datacenter reachable only over ExpressRoute. You must expose all services through a single, internet-facing endpoint that offers a branded developer portal, transforms SOAP to REST, validates Azure AD tokens, and keeps on-premises traffic on the private network. Which service or combination should you use?
Azure API Management with the self-hosted gateway deployed on-premises
Azure Service Bus with Hybrid Connections and Azure AD Conditional Access
Azure Front Door Premium combined with Azure Functions proxy
Azure Application Gateway with Web Application Firewall and URL rewrite rules
Answer Description
Azure API Management natively provides a unified developer portal, policy-based request and response transformations (including SOAP-to-REST), built-in integration with Azure Active Directory for token validation, and a self-hosted gateway mode that can be deployed inside the on-premises network so internal traffic never leaves the private boundary. Service Bus, Application Gateway, and Front Door address messaging, load balancing, or global routing needs but lack the full API lifecycle features and transformation capabilities required for this scenario.
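For the token-validation piece, an API Management inbound policy fragment could look like the following; the tenant and audience values are hypothetical:

```xml
<!-- APIM inbound policy: reject requests without a valid Azure AD token. -->
<inbound>
  <validate-jwt header-name="Authorization" failed-validation-httpcode="401">
    <openid-config url="https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration" />
    <audiences>
      <audience>api://orders-api</audience>
    </audiences>
  </validate-jwt>
  <base />
</inbound>
```

SOAP-to-REST transformation is configured similarly through policies when importing a WSDL, and the self-hosted gateway applies the same policy definitions on-premises.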
Ask Bash
What is Azure API Management, and how does it work?
How does the self-hosted gateway in Azure API Management ensure private traffic stays within a network?
How does Azure API Management handle SOAP-to-REST transformation?
You are designing a compute platform for a containerized background worker that processes orders placed in an Azure Service Bus queue. The job runs a custom Docker image that includes proprietary machine-learning libraries larger than the 250-MB limit for Azure Functions code packages. During most weekdays the queue is empty, but from Friday night through Sunday morning it can exceed 50,000 messages and must be drained within four hours. The operations team wants the solution to scale automatically down to zero instances when idle and to require the least possible infrastructure management effort.
Which Azure compute service should you recommend?
Azure Kubernetes Service with the cluster autoscaler enabled
Azure Container Instances launched by an Azure Logic App each time messages arrive
Azure Functions in a Premium plan running the container image
Azure Container Apps with KEDA-based autoscaling on the Service Bus queue
Answer Description
Azure Container Apps natively runs container images without the Functions 250-MB package limit, integrates with KEDA to scale on Azure Service Bus queue length, and can scale out rapidly (to hundreds of replicas) or all the way to zero when the queue is empty, providing true serverless consumption pricing. Azure Container Instances lacks event-driven autoscale and would therefore require additional orchestration logic. The Azure Functions Premium plan can run containers but keeps at least one instance warm, so it cannot scale to zero and incurs always-on costs. Azure Kubernetes Service offers full control and autoscaling, but cluster creation, node patching, and control-plane management contradict the requirement to minimize infrastructure management effort.
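A KEDA-based Service Bus scale rule can be attached when the container app is created; the names, image, and thresholds below are illustrative:

```shell
# Container app that scales 0..100 replicas on Service Bus queue depth.
# messageCount=50 means KEDA targets roughly one replica per 50 pending messages.
az containerapp create \
  --name orders-worker \
  --resource-group rg-apps \
  --environment aca-env \
  --image contoso.azurecr.io/orders-worker:latest \
  --min-replicas 0 \
  --max-replicas 100 \
  --secrets "sb-conn=<service-bus-connection-string>" \
  --scale-rule-name sb-queue \
  --scale-rule-type azure-servicebus \
  --scale-rule-metadata "queueName=orders" "messageCount=50" \
  --scale-rule-auth "connection=sb-conn"
```

With `--min-replicas 0`, the app incurs no compute cost on idle weekdays and scales out automatically for the weekend surge.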
Ask Bash
What is KEDA, and how does it work with Azure Container Apps?
Why is Azure Kubernetes Service (AKS) not an ideal choice for this scenario?
How does Azure Container Apps handle proprietary machine-learning libraries in large container images?
Contoso Ltd. has completed the Strategy and Plan phases of the Microsoft Cloud Adoption Framework (CAF) for its data-center migration and has approved the backlog. Before starting workload migrations, you must ensure the Azure environment can host hundreds of virtual machines across multiple business units while enforcing governance, identity, and network standards. Which action should you recommend to align with the CAF Ready methodology?
Perform an agentless discovery and assessment of the on-premises virtual machines by using Azure Migrate.
Configure Azure Site Recovery to replicate the virtual machines to Azure and run a test failover.
Create a multi-stage Azure DevOps pipeline that provisions the required Azure resources by using ARM templates during each migration wave.
Deploy an Azure landing zone that implements a scalable platform foundation with management groups, Azure Policy, identity, and networking controls.
Answer Description
The Ready methodology of the Microsoft Cloud Adoption Framework focuses on preparing the Azure environment to receive migrated or newly built workloads. Key guidance includes establishing an enterprise-scale subscription and landing-zone architecture that defines management-group hierarchy, role-based access control, Azure Policy assignments, and hub-and-spoke networking. Deploying an Azure landing zone therefore directly satisfies the need for a scalable, governed platform foundation.
Using Azure Migrate for assessments, configuring Azure Site Recovery, or building deployment pipelines are valuable tasks, but they belong to the Assess/Plan and Adopt (Migrate & Innovate) stages rather than the Ready stage, and they do not in themselves create the enterprise control plane required before migrations begin.
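A minimal sketch of the landing-zone scaffolding, assuming hypothetical management-group names and an example built-in policy:

```shell
# Management-group hierarchy for the platform foundation.
az account management-group create --name contoso --display-name "Contoso"
az account management-group create --name contoso-platform --parent contoso --display-name "Platform"
az account management-group create --name contoso-landingzones --parent contoso --display-name "Landing Zones"

# Assign the built-in "Allowed locations" policy at the landing-zones scope
# so every subscription placed under it inherits the control.
az policy assignment create \
  --name allowed-locations \
  --scope "/providers/Microsoft.Management/managementGroups/contoso-landingzones" \
  --policy "e56962a6-4747-49cd-b67b-bf8b01975c4c" \
  --params '{"listOfAllowedLocations":{"value":["eastus","westus"]}}'
```

In practice the Azure landing zone accelerator or Bicep/Terraform modules deploy this hierarchy, plus identity and hub-and-spoke networking, rather than ad-hoc CLI commands.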
Ask Bash
What is an Azure landing zone in the context of the CAF Ready methodology?
How does the Ready methodology differ from the Assess and Adopt stages in CAF?
What are management groups and their role in Azure governance?
A financial analytics team must run Monte Carlo simulations nightly. The simulations are packaged as Linux containers, stateless, and require about 500 vCPUs for six hours. Administrators need automatic pool scaling, no cluster lifecycle management, and secure access to Azure Blob Storage by using a managed identity. Which Azure compute service best meets these requirements?
Azure Functions running on a Premium plan
Azure Container Instances (ACI) in a virtual network
Azure Kubernetes Service (AKS) with the Cluster Autoscaler
Azure Batch with a container pool
Answer Description
Azure Batch is designed for high-performance computing and large-scale, containerized batch workloads. It provisions and de-provisions compute node pools automatically, so administrators do not have to build, patch, or manage a cluster. A managed identity can be assigned to the Batch pool, allowing tasks to access Azure Blob Storage securely without storing secrets in code or containers. Azure Kubernetes Service would still require managing a Kubernetes control plane and node upgrades. Azure Container Instances lacks built-in job scheduling and autoscaling across hundreds of vCPUs. Although Azure Functions Premium can run long-running code, it is optimized for event-driven micro-tasks rather than coordinated HPC workloads that demand 500 vCPUs for several hours.
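The automatic pool scaling could be expressed with a Batch autoscale formula; the pool name, VM size, and node count are assumptions (125 Standard_D4s_v3 nodes approximating 500 vCPUs):

```shell
# Scale the pool to ~500 vCPUs only while tasks are pending, then back to zero.
az batch pool autoscale enable \
  --pool-id mc-sim-pool \
  --auto-scale-evaluation-interval "PT5M" \
  --auto-scale-formula '
    pending = max($PendingTasks.GetSample(TimeInterval_Minute * 5));
    $TargetDedicatedNodes = pending > 0 ? 125 : 0;
    $NodeDeallocationOption = taskcompletion;'
```

Combined with a container configuration on the pool and a pool-assigned managed identity, tasks pull the simulation image and read from Blob Storage without any stored secrets.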
Ask Bash
What makes Azure Batch suitable for high-performance computing (HPC) workloads?
How does a managed identity ensure secure access to Azure Blob Storage in Azure Batch?
Why isn't Azure Kubernetes Service (AKS) suitable for this scenario?