Microsoft Azure Solutions Architect Expert Practice Test (AZ-305)
Use the form below to configure your Microsoft Azure Solutions Architect Expert Practice Test (AZ-305). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft Azure Solutions Architect Expert AZ-305 Information
Navigating the Microsoft Azure Solutions Architect Expert AZ-305 Exam
The Microsoft Azure Solutions Architect Expert AZ-305 exam is a pivotal certification for professionals who design and implement solutions on Microsoft's cloud platform. This exam validates a candidate's expertise in translating business requirements into secure, scalable, and reliable Azure solutions. Aimed at individuals with advanced experience in IT operations, including networking, virtualization, and security, the AZ-305 certification demonstrates subject matter expertise in designing cloud and hybrid solutions. Success in this exam signifies that a professional can advise stakeholders and architect solutions that align with the Azure Well-Architected Framework and the Cloud Adoption Framework for Azure.
The AZ-305 exam evaluates a candidate's proficiency across four primary domains. These core areas include designing solutions for identity, governance, and monitoring, which accounts for 25-30% of the exam. Another significant portion, 30-35%, is dedicated to designing infrastructure solutions. The exam also assesses the ability to design data storage solutions (20-25%) and business continuity solutions (15-20%). This structure ensures that certified architects possess a comprehensive understanding of creating holistic cloud environments that address everything from identity management and data storage to disaster recovery and infrastructure deployment.
The Strategic Advantage of Practice Exams
A crucial component of preparing for the AZ-305 exam is leveraging practice tests. Taking practice exams offers a realistic simulation of the actual test environment, helping candidates become familiar with the question formats, which can include multiple-choice, multi-response, and scenario-based questions. This familiarity helps in developing effective time management skills, a critical factor for success during the timed exam. Furthermore, practice tests are an excellent tool for identifying knowledge gaps. By reviewing incorrect answers and understanding the reasoning behind the correct ones, candidates can focus their study efforts more effectively on weaker areas.
The benefits of using practice exams extend beyond technical preparation. Successfully navigating these tests can significantly boost a candidate's confidence. As performance improves with each practice test, anxiety about the actual exam can be reduced. Many platforms offer practice exams that replicate the look and feel of the real test, providing detailed explanations for both correct and incorrect answers. This active engagement with the material is more effective than passive reading and is a strategic approach to ensuring readiness for the complexities of the AZ-305 exam.

Free Microsoft Azure Solutions Architect Expert AZ-305 Practice Test
- 20 Questions
- Unlimited
- Design identity, governance, and monitoring solutions
- Design data storage solutions
- Design business continuity solutions
- Design infrastructure solutions
You are designing a centralized logging solution for a company that runs several hundred Azure virtual machines, Azure Kubernetes Service clusters, and Azure SQL databases distributed across 20 subscriptions. Security policy requires that:
- All platform and workload logs are queryable within five minutes of creation.
- Log data must be retained for at least seven years to satisfy regulatory audits.
- Administrators will use Kusto Query Language (KQL) to troubleshoot and create alert rules.
Which approach meets the requirements while keeping administrative overhead low?
Send all diagnostic and activity logs to a single Log Analytics workspace configured for short-term retention, and enable Azure Monitor data export to an Azure Storage account that uses lifecycle policies for seven-year archival.
Deploy an Azure HDInsight cluster with Kafka to collect logs and store them in Azure Data Lake Storage, using Hive queries for reporting.
Create diagnostic settings on every resource to stream logs to Azure Event Hubs, then ingest the data into a third-party SIEM for storage and analysis.
Write logs directly to an Azure Storage account in the Cool tier and use Azure Synapse serverless SQL pool to query the data when needed.
Answer Description
A Log Analytics workspace provides near real-time ingestion and a rich KQL query experience that administrators already use for alerting. Retaining only the recent data (for example, 30-90 days) in the workspace keeps ingestion and query costs predictable. Azure Monitor data export (or the legacy Continuous Export) can automatically copy every newly ingested record to a storage account, where lifecycle management policies move the data to Cool or Archive tiers for inexpensive seven-year retention. Event Hubs, HDInsight, or direct Storage ingestion would require additional infrastructure or prevent KQL-based alerting, and Synapse-based querying of archived blobs does not deliver the five-minute latency the operations team needs.
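For illustration only, a minimal sketch of the lifecycle-management policy that could sit on the export target storage account is shown below, built as Python that emits the JSON; the container prefix and the exact day thresholds are assumptions, not values taken from the scenario.

```python
import json

# Sketch of an Azure Storage lifecycle-management policy for the account that
# receives exported Azure Monitor data. "am-export/" is a hypothetical prefix;
# point it at the container the data export rule actually writes to.
seven_years_in_days = 7 * 365 + 2  # roughly seven years, allowing for leap days

lifecycle_policy = {
    "rules": [
        {
            "name": "archive-exported-logs-for-seven-years",
            "enabled": True,
            "type": "Lifecycle",
            "definition": {
                "filters": {
                    "blobTypes": ["blockBlob"],
                    "prefixMatch": ["am-export/"],  # hypothetical container/prefix
                },
                "actions": {
                    "baseBlob": {
                        # Step the data down to cheaper tiers as it ages...
                        "tierToCool": {"daysAfterModificationGreaterThan": 30},
                        "tierToArchive": {"daysAfterModificationGreaterThan": 180},
                        # ...and delete it once the seven-year obligation ends.
                        "delete": {"daysAfterModificationGreaterThan": seven_years_in_days},
                    }
                },
            },
        }
    ]
}

print(json.dumps(lifecycle_policy, indent=2))
```

The printed JSON is the shape the storage account's lifecycle management (managementPolicies) resource accepts, whether applied through the portal, an ARM/Bicep template, or the Azure CLI.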
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI-generated content may display inaccurate information, so always double-check anything important.
What is Kusto Query Language (KQL)?
How does Azure Monitor data export work?
What is an Azure Log Analytics workspace?
Contoso hosts a business-critical Azure SQL Database in the General Purpose tier. Compliance requires all full database backups to be kept for seven years. Administrators must also be able to restore the database to any point in time within the most recent 30 days with minimal management overhead. You need to recommend the most cost-effective native Azure solution that satisfies both requirements. Which approach should you recommend?
Configure a long-term retention (LTR) policy on the Azure SQL Database to store weekly full backups for seven years and use the service's automatic backups for 30-day point-in-time restore.
Create a Recovery Services vault and use Azure Backup to protect the Azure SQL Database with a seven-year retention policy.
Migrate the database to an Azure SQL Managed Instance in the Business Critical tier, enable auto-failover groups, and take weekly snapshots saved to Blob storage for seven years.
Schedule an Azure Automation runbook to export the database as weekly BACPAC files to Azure Storage and apply lifecycle rules to retain them for seven years.
Answer Description
Azure SQL Database already creates automatic full, differential, and transaction-log backups that provide point-in-time restore (PITR) for a configurable retention period of up to 35 days in the General Purpose tier, so setting short-term retention to 30 days meets the PITR requirement without any additional infrastructure. To meet the seven-year compliance need, you can enable the long-term retention (LTR) feature on the database. LTR automatically copies the weekly full backups produced by the built-in service to RA-GRS storage and keeps them for the period you specify (up to 10 years), with no servers or scripts to maintain and costs limited to the storage consumed.
Azure Backup cannot protect Platform as a Service (PaaS) SQL databases; it supports SQL Server running in Azure VMs, so that option is invalid. Exporting weekly BACPACs with Automation scripts introduces additional management overhead and does not provide PITR capability. Migrating to a Managed Instance with manual snapshots would raise costs and still require operational effort to manage snapshots. Therefore, configuring an LTR policy on the existing Azure SQL Database while relying on the default automated backups is the optimal and most economical solution.
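As a rough sketch of what the LTR setting amounts to (the server name, database name, and API version below are placeholders, not values from the scenario), the policy is a single declarative resource on the database:

```python
import json

# Sketch of a long-term retention (LTR) policy for an Azure SQL Database.
# Retention periods use ISO 8601 durations; "P7Y" keeps weekly fulls for 7 years.
ltr_policy = {
    "type": "Microsoft.Sql/servers/databases/backupLongTermRetentionPolicies",
    "apiVersion": "2022-05-01-preview",         # assumed API version, for illustration
    "name": "contoso-sql/contoso-db/default",   # placeholder server/database names
    "properties": {
        "weeklyRetention": "P7Y",    # keep every weekly full backup for 7 years
        "monthlyRetention": "PT0S",  # "PT0S" marks a retention slot as not configured
        "yearlyRetention": "PT0S",
        "weekOfYear": 1,             # only meaningful when yearlyRetention is set
    },
}

print(json.dumps(ltr_policy, indent=2))
```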
Ask Bash
What is the difference between point-in-time restore (PITR) and long-term retention (LTR)?
How does Azure SQL Database handle automatic backups?
What is RA-GRS storage, and why is it used for LTR backups?
Contoso runs an Azure SQL Managed Instance in the business-critical tier. Compliance requires the company to be able to restore the database to any point in time within the last seven days and to keep one full backup taken on the first day of every month for five years. The solution must minimize cost and administrative effort. Which approach should you recommend?
Enable Azure Site Recovery to replicate the managed instance to a secondary region and rely on recovery points for both short-term and long-term restores.
Use the built-in automated backups for Azure SQL Managed Instance, set point-in-time retention to seven days, and configure a long-term backup retention policy that stores monthly full backups in Azure Blob storage for 60 months.
Create an Azure Backup Recovery Services vault and apply a SQL Server in Azure VM backup policy to protect the managed instance with daily and monthly backups retained for five years.
Develop an Azure Automation runbook that exports a BACPAC file of the database to Azure Storage each month for five-year retention and disable the service's automated backups.
Answer Description
Azure SQL Managed Instance performs automatic full, differential, and transaction-log backups that support point-in-time restore (PITR) for a configurable retention period of up to 35 days; setting this to seven days satisfies the short-term recovery objective with no additional infrastructure. The service also supports Long-Term Retention (LTR), which copies full backups to RA-GRS blob storage on a monthly, weekly, or yearly schedule and keeps them for up to 10 years. Defining an LTR policy to retain a monthly backup for 60 months (five years) meets the compliance requirement at minimal cost and with no extra management overhead. Azure Backup vault policies and ASR do not natively protect Managed Instance databases, and manual BACPAC exports increase operational effort and remove the benefit of built-in PITR. Therefore, configuring automated backups with a seven-day PITR window and an LTR policy for monthly backups is the correct solution.
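A hedged sketch of the two retention settings side by side follows; the instance and database names are placeholders, and the shapes mirror the corresponding ARM resources rather than any scenario-specific values.

```python
import json

# Short-term (PITR) and long-term (LTR) retention settings for a database on
# an Azure SQL Managed Instance, trimmed to the relevant properties.
short_term_policy = {
    "type": "Microsoft.Sql/managedInstances/databases/backupShortTermRetentionPolicies",
    "name": "contoso-mi/contoso-db/default",
    "properties": {"retentionDays": 7},  # 7-day point-in-time restore window
}

long_term_policy = {
    "type": "Microsoft.Sql/managedInstances/databases/backupLongTermRetentionPolicies",
    "name": "contoso-mi/contoso-db/default",
    "properties": {
        "weeklyRetention": "PT0S",   # no weekly LTR copies kept
        "monthlyRetention": "P60M",  # keep one monthly full backup for 60 months
        "yearlyRetention": "PT0S",
        "weekOfYear": 1,             # only meaningful when yearlyRetention is set
    },
}

print(json.dumps([short_term_policy, long_term_policy], indent=2))
```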
Ask Bash
What is Azure SQL Managed Instance?
What is Point-in-Time Restore (PITR) in Azure SQL?
How does Long-Term Retention (LTR) work for Azure SQL backups?
Tailwind Traders operates in the Americas, EMEA, and APAC and will create hundreds of Azure subscriptions per region. Compliance requires that deployments stay within region-specific Azure locations and that every resource automatically inherits a costCenter tag from its subscription. Regional IT staff must be able to add further policies and control costs for their own subscriptions without affecting other regions. You need to recommend a governance design that minimizes ongoing administrative effort. What should you recommend?
Enable resource locks on each subscription and use Azure RBAC deny assignments to restrict regions, while requiring governance scripts to add the costCenter tag manually.
Apply Azure Blueprints at the resource-group level inside each subscription to enforce allowed regions and tagging, without using management groups.
Create a root management group, then a management group for each region (Americas, EMEA, APAC). Assign an Azure Policy initiative containing Allowed locations and Inherit costCenter tag to each regional management group and place regional subscriptions beneath them.
Create a single management group that holds all subscriptions and assign the Allowed locations and Inherit costCenter tag policies separately to every subscription with region-specific parameters.
Answer Description
Management groups provide a hierarchy for policy and RBAC inheritance. Creating one management group for each geography under the tenant root lets you assign an Azure Policy initiative to each region once and have it flow to all current and future subscriptions in that region. The initiative can include the built-in Allowed locations policy with parameters set to the permitted datacenters for that geography and the Inherit tag policy so resources automatically receive the costCenter tag. Subscriptions placed under the proper regional management group inherit these settings, and regional administrators can still add policies at lower scopes without affecting other regions. Assigning policies to every subscription or resource group would multiply administration effort, and resource locks or deny assignments do not address automatic tag inheritance.
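To make the recommendation concrete, the sketch below shows the rough shape of such an initiative; the policyDefinitionId values are placeholders for the built-in "Allowed locations" and "Inherit a tag from the subscription" definitions, and the display name is invented for illustration.

```python
import json

# Sketch of a custom Azure Policy initiative (policy set definition) that each
# regional management group could be assigned once. Replace the placeholder
# IDs with the built-in definition IDs from your tenant.
regional_initiative = {
    "properties": {
        "displayName": "EMEA regional guardrails",
        "parameters": {
            "allowedLocations": {"type": "Array"},
            "tagName": {"type": "String", "defaultValue": "costCenter"},
        },
        "policyDefinitions": [
            {
                # Built-in "Allowed locations" policy (placeholder ID)
                "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/<allowed-locations-id>",
                "parameters": {
                    "listOfAllowedLocations": {"value": "[parameters('allowedLocations')]"}
                },
            },
            {
                # Built-in "Inherit a tag from the subscription" policy (placeholder ID)
                "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/<inherit-tag-id>",
                "parameters": {"tagName": {"value": "[parameters('tagName')]"}},
            },
        ],
    }
}

print(json.dumps(regional_initiative, indent=2))
```

Assigning this initiative at, say, the EMEA management group with allowedLocations set to the permitted EMEA datacenters covers every current and future subscription placed under that management group.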
Ask Bash
What is an Azure Policy initiative, and how does it work?
Why should Tailwind Traders use management groups for governance instead of applying policies to individual subscriptions?
How does Azure's costCenter tag inheritance work at the subscription level?
A company runs two Windows Server 2019 Azure virtual machines that host a stateless REST API. The VMs are deployed in a single availability set in the West Europe region and are front-ended by an Azure Load Balancer. A recent datacenter-wide outage in West Europe caused both VMs to become unavailable, violating the business requirement of a 99.99 percent uptime SLA for the compute tier. The solution must continue to use only the West Europe region and must not require changes to the application code. Which change should you recommend to meet the availability requirement?
Redeploy the application to an Azure Virtual Machine Scale Set configured for zone redundancy across three Availability Zones in West Europe.
Increase the fault domain count of the existing availability set from 2 to 3 to improve fault tolerance.
Enable Azure Site Recovery to replicate the two VMs to another region and configure automatic failover.
Add an Azure Standard Load Balancer in front of the existing availability set to distribute traffic evenly across the two VMs.
Answer Description
An availability set places all its virtual machines in the same datacenter, distributing them only across fault and update domains. A complete datacenter outage therefore takes the entire set offline, and Microsoft guarantees only a 99.95 percent SLA for two or more VMs in an availability set. Deploying the instances in an Azure Virtual Machine Scale Set or in individual VMs that are distributed across at least two Availability Zones within the same region provides isolation from a single datacenter failure. Microsoft offers a 99.99 percent virtual-machine SLA when two or more instances are placed in different Availability Zones in the same region. Azure Site Recovery targets disaster recovery across regions and does not provide in-region high availability; adding a load balancer or increasing fault domains cannot protect against a full datacenter failure. Therefore, creating a zone-redundant VM scale set (or moving the VMs to separate Availability Zones behind the existing load balancer) is the correct way to meet the 99.99 percent SLA without requiring application changes.
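For illustration, the zone-related portion of a scale set definition might look roughly like the following sketch; the name, SKU, capacity, and fault-domain count are assumptions, and the rest of the VMSS profile (OS, network, upgrade policy) is omitted.

```python
import json

# Sketch of the settings that make a Virtual Machine Scale Set zone redundant.
zone_redundant_vmss = {
    "type": "Microsoft.Compute/virtualMachineScaleSets",
    "name": "api-vmss",
    "location": "westeurope",
    "zones": ["1", "2", "3"],           # spread instances across three availability zones
    "sku": {"name": "Standard_D2s_v5", "capacity": 3},
    "properties": {
        "zoneBalance": True,            # keep instance counts balanced per zone
        "platformFaultDomainCount": 1,  # assumed value for zonal spreading
        # ... remaining scale set profile omitted for brevity
    },
}

print(json.dumps(zone_redundant_vmss, indent=2))
```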
Ask Bash
What is an Azure Virtual Machine Scale Set?
What are Availability Zones in Azure?
Why does enabling Azure Site Recovery not help with in-region high availability?
Your company ingests two data streams from a consumer IoT product. Devices send about 5 GB/hour of JSON telemetry that dashboards must query for the last seven days with sub-100 ms latency, and the telemetry store must accommodate flexible schema changes. Devices also upload 2 MB JPEG images that are accessed frequently for 30 days and seldom afterward, but must be retained for five years. To meet these requirements at the lowest cost and administrative effort, which Azure storage combination should you recommend?
Azure Data Lake Storage Gen2 to store both telemetry and images in a single storage account with hierarchical namespace enabled
Azure Cosmos DB (NoSQL) with autoscale throughput for telemetry, and Azure Blob Storage with lifecycle rules to move images from the Hot tier to Cool after 30 days and to Archive after 180 days
Azure SQL Database Hyperscale for telemetry and Azure Files with the Cool access tier for images
Azure Cache for Redis to store telemetry and zone-redundant Premium SSD managed disks for images
Answer Description
Azure Cosmos DB for NoSQL guarantees single-digit millisecond reads and writes and is schema-agnostic, making it well-suited for high-velocity, flexible JSON telemetry. Enabling autoscale throughput lets the service automatically adjust RU/s based on load, reducing cost when traffic is low. Azure Blob Storage is the most economical option for large binary objects such as JPEGs. A lifecycle management policy that moves blobs from the Hot tier to Cool after 30 days and to Archive after 180 days keeps frequently accessed images performant while minimizing long-term storage costs, and requires no manual administration.
Azure SQL Database Hyperscale offers strong relational capabilities but incurs higher costs and imposes rigid schemas, making it less appropriate for rapidly changing JSON telemetry; Azure Files and managed disks are costlier than object storage for large, infrequently accessed images. Azure Data Lake Storage Gen2 can store both data types but would not provide the required millisecond read latency without additional compute (for example, Azure Synapse or Data Explorer) and may raise overall cost and complexity. Redis is an in-memory cache, not a durable store for long-term telemetry or images. Therefore, using Cosmos DB with autoscale for telemetry plus tiered Azure Blob Storage for images best balances features, performance, and cost.
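As a hedged sketch of the telemetry side of the recommendation (database, container, partition key, TTL, and throughput values are illustrative assumptions), an autoscale Cosmos DB container definition has this general shape; the image tiering would be handled by a Blob lifecycle rule similar to the one sketched earlier.

```python
import json

# Sketch of the request body for creating a Cosmos DB (NoSQL) container with
# autoscale throughput. Values below are assumptions for illustration only.
telemetry_container = {
    "properties": {
        "resource": {
            "id": "telemetry",
            "partitionKey": {"paths": ["/deviceId"], "kind": "Hash"},
            # Assumption: telemetry may expire after seven days; drop this if
            # older telemetry must be retained in the container.
            "defaultTtl": 7 * 24 * 3600,
        },
        "options": {
            # Autoscale varies between 10% and 100% of maxThroughput,
            # i.e. 1,000-10,000 RU/s in this sketch.
            "autoscaleSettings": {"maxThroughput": 10000}
        },
    }
}

print(json.dumps(telemetry_container, indent=2))
```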
Ask Bash
What is Azure Cosmos DB, and why is it suitable for IoT telemetry?
What are the Azure Blob Storage tiers, and how do lifecycle management rules work?
Why isn’t Azure SQL Database Hyperscale or Azure Data Lake Storage Gen2 a good fit for this scenario?
You are preparing a migration plan for 200 on-premises VMware virtual machines that host a three-tier .NET application and a back-end SQL Server farm. You must determine which virtual machines can move to Azure with no remediation, obtain performance-based Azure VM size recommendations, visualize traffic dependencies to create migration groups, and generate an estimated monthly cost for running the workloads in Azure. Which Azure service or tool should you use to gather all of this information?
Create an Azure Migrate project and run the Discovery and assessment tooling with dependency visualization enabled.
Run the Azure Total Cost of Ownership (TCO) Calculator for the environment and map inter-VM traffic by using Service Map.
Configure Azure Site Recovery for the virtual machines and use Azure Cost Management + Billing to project expenses.
Export performance counters from vCenter to Azure Monitor and analyze them by using Azure Monitor Workbooks.
Answer Description
Azure Migrate provides a unified discovery and assessment capability for servers and databases. After an on-premises appliance performs discovery, the Server Assessment tool analyzes performance over time, recommends right-sized Azure VM SKUs, flags compatibility issues, calculates estimated Azure costs, and can optionally collect network data to produce dependency maps that aid in grouping servers. Azure Site Recovery focuses on replication, Azure Monitor plus Workbooks does not produce cost or sizing guidance, and the TCO Calculator lacks dependency mapping and detailed compatibility analysis.
Ask Bash
What is the Azure Migrate tool and why is it suitable for this scenario?
How does dependency visualization in Azure Migrate work?
Why are the other tools not suitable for this scenario?
A factory ingests 5 million JSON telemetry events per hour (~15 GB), each with 50-100 varying properties. Engineers need ad-hoc queries on the latest seven days, returning results in under two seconds when filtering by deviceId and any property. Data older than 30 days moves to Azure Data Lake Storage. For hot data, select a storage service that 1) auto-indexes all JSON properties, 2) offers single-digit-millisecond read/write latency globally with SLA, and 3) provides elastic throughput with minimal admin. Which service meets these needs?
Azure Data Lake Storage Gen2 with hierarchical namespace enabled
Azure Table storage with partition and row key based on deviceId and timestamp
Azure SQL Database Hyperscale tier with JSON columns and secondary indexes
Azure Cosmos DB using the Core (SQL) API with provisioned or autoscale throughput
Answer Description
Azure Cosmos DB configured for the Core (SQL) API automatically indexes all JSON properties, so new or changing sensor attributes require no schema updates. The service offers single-digit-millisecond latency for reads and writes worldwide and a 99.999-percent availability SLA when multi-region writes are enabled. Provisioned or autoscale throughput allows the database to handle millions of events per second with minimal administration. Azure Table storage cannot deliver the required low-latency queries or automatic rich indexing. Azure Data Lake Storage Gen2 is optimized for batch analytics, not interactive queries. Azure SQL Database Hyperscale requires explicit schema and index management and is better suited to structured relational workloads.
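For reference, the default-style indexing policy that produces this behavior is small; the sketch below simply prints it, and nothing scenario-specific is assumed beyond the defaults.

```python
import json

# Cosmos DB indexing policy: every property path is indexed automatically, so
# ad-hoc filters on deviceId or any telemetry attribute need no schema or
# index changes. This JSON sits on the container definition.
indexing_policy = {
    "indexingMode": "consistent",
    "automatic": True,
    "includedPaths": [{"path": "/*"}],            # index every property
    "excludedPaths": [{"path": "/\"_etag\"/?"}],  # system property excluded by default
}

print(json.dumps(indexing_policy, indent=2))
```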
Ask Bash
What makes Azure Cosmos DB suitable for storing JSON telemetry data?
How does elastic throughput in Azure Cosmos DB work?
Why are other options like Azure Table Storage or Azure SQL Database not suitable for this use case?
Your organization keeps hourly clickstream JSON files in an Azure Data Lake Storage Gen2 container. Data engineers must ingest the files, flatten nested JSON arrays, perform several joins, and load the results into Azure SQL Database every hour. They want a managed, code-free service that can orchestrate the workflow, provide visual data-lineage, automatically spin up and shut down compute clusters, and integrate with Azure Monitor-while minimizing operational overhead. Which Azure service best meets these requirements?
An Azure Stream Analytics job that processes the JSON files with SQL-style queries
Azure Functions triggered by Event Grid that call stored procedures to load data
Azure Data Factory Mapping Data Flows executed on managed Spark clusters
An Azure Logic Apps workflow that moves files from ADLS Gen2 to Azure SQL Database
Answer Description
Azure Data Factory (ADF) Mapping Data Flows are designed for precisely this scenario. ADF offers a visual, code-free authoring experience that lets data engineers build complex transformations such as JSON flattening, joins, aggregations, and look-ups without writing Spark code. Each data-flow execution is run on an automatically provisioned, scaled-out Azure-managed Spark cluster that starts when the job begins and is torn down when the job finishes, so you pay only for the runtime. ADF pipelines supply native scheduling, dependency management, and integration with Azure Monitor for logging and alerting, while the Data Flow canvas provides lineage and monitoring views.
The other options do not fully satisfy the requirements:
- Azure Stream Analytics focuses on streaming data and offers limited support for complex, batch-oriented transformations like multitable joins and JSON hierarchy flattening.
- Azure Logic Apps provide workflow orchestration but lack the scalable data-transformation engine required for large-volume joins and complex data shaping.
- Azure Functions would require substantial custom code for parsing, transformation, scaling, and monitoring, increasing operational overhead rather than reducing it.
Ask Bash
How does Azure Data Factory Mapping Data Flows handle JSON flattening?
What makes Azure-managed Spark clusters suitable for this use case?
How does Azure Data Factory integrate with Azure Monitor for observability?
Your company plans to onboard several new Azure subscriptions for different business units. The security team must ensure that each subscription meets NIST SP 800-53 controls, bundles the required Azure Policy assignments with specific role assignments and a standardized resource hierarchy, supports versioning so changes can flow through dev, test, and production management groups, and can be assigned in a single step. Which Azure service should you recommend?
Azure Advisor recommendations
An Azure Policy initiative assigned at the root management group
Microsoft Defender for Cloud regulatory compliance dashboard
Azure Blueprints
Answer Description
Azure Blueprints is designed for governing at scale. It lets architects package Azure Policy definitions, role-based access control assignments, resource group structures, and ARM templates into a single reusable blueprint. Blueprints are versioned, so updates can move through development, test, and production rings, and a single assignment applies the entire package to a subscription. An initiative assigned at a management group would only deploy policies and lacks integrated RBAC, template deployment, and versioning. Microsoft Defender for Cloud's regulatory compliance dashboard reports on compliance posture but does not deploy or version governance artifacts. Azure Advisor delivers optimization suggestions rather than enforceable compliance controls.
Ask Bash
How do Azure Blueprints help ensure compliance with NIST SP 800-53 controls?
What is the difference between Azure Policy and Azure Blueprints?
How does the Azure Blueprints versioning feature work?
A retail company is modernizing its Azure-hosted e-commerce platform. When a new customer account is created, three independent microservices (email confirmation, resource provisioning, and analytics) must react to the event. The solution must: provide loose coupling with fan-out to multiple subscribers; support attribute-based filtering; guarantee at-least-once delivery; allow easy addition of future Azure service event sources such as Blob Storage; and require minimal infrastructure management. Which Azure service should you recommend as the core event distribution mechanism?
Azure Notification Hubs
Azure Service Bus topic with multiple subscriptions
Azure Event Hubs Standard namespace
Azure Event Grid using a custom topic
Answer Description
Azure Event Grid is designed for large-scale, serverless, event-driven architectures. It delivers events in near real time with an at-least-once guarantee, automatically manages infrastructure, and offers built-in integrations with many Azure services-including Storage, IoT Hub, and custom topics-simplifying the onboarding of new event sources. Event Grid supports fan-out by allowing multiple event handlers to subscribe to the same topic and provides subject and advanced filtering to route events only to interested subscribers.
Azure Service Bus topics can broadcast messages to multiple subscriptions, but Service Bus is optimized for enterprise messaging scenarios that require ordered, stateful message handling rather than lightweight publish/subscribe events, and it does not natively integrate with diverse Azure service events without additional code. Azure Event Hubs focuses on high-throughput data ingestion for streaming analytics, not generalized event routing with filtering. Azure Notification Hubs targets mobile push notifications and is not intended to act as an event backbone for microservices. Therefore, Azure Event Grid with a custom topic best meets all stated requirements.
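To illustrate the fan-out and filtering, the sketch below shows the approximate shape of one event subscription; the endpoint URL, event type, and payload attribute are hypothetical names, not part of the scenario.

```python
import json

# Sketch of an Event Grid event subscription for one of the three handlers.
email_subscription = {
    "properties": {
        "destination": {
            "endpointType": "WebHook",
            "properties": {"endpointUrl": "https://example.com/api/send-confirmation"},
        },
        "filter": {
            "includedEventTypes": ["Contoso.Accounts.AccountCreated"],
            "advancedFilters": [
                {
                    "operatorType": "StringIn",
                    "key": "data.accountTier",  # hypothetical event payload attribute
                    "values": ["standard", "premium"],
                }
            ],
        },
        # At-least-once delivery with retries until the event expires.
        "retryPolicy": {"maxDeliveryAttempts": 30, "eventTimeToLiveInMinutes": 1440},
    }
}

print(json.dumps(email_subscription, indent=2))
```

Each microservice gets its own subscription on the same custom topic, so publishers remain unaware of how many consumers exist or what they filter on.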
Ask Bash
What is Azure Event Grid, and how does it support event-driven architectures?
How does Azure Event Grid ensure at-least-once delivery?
What is the difference between Azure Event Grid and Azure Service Bus?
You administer two Windows Server 2022 Azure virtual machines that run a mission-critical workload in an availability set. The business dictates that, in the event of data corruption, you must be able to recover either the whole VM or individual files. The recovery point objective (RPO) for workload data must not exceed 15 minutes, and the solution must rely only on native Azure capabilities with minimal operational overhead. Which approach should you recommend?
Create an Azure Automation runbook that triggers managed disk incremental snapshots every 15 minutes and copies them to Azure Storage.
Configure Azure Site Recovery (ASR) to continuously replicate the virtual machines to a secondary Azure region.
Enable Azure Backup on the virtual machines with the shortest (hourly) backup schedule.
Enable Azure Site Recovery for the virtual machines and also enable Azure Backup with a daily backup policy.
Answer Description
Azure Site Recovery (ASR) can continuously replicate an Azure VM to another Azure region with an effective RPO measured in minutes, easily meeting the 15-minute requirement. However, ASR replicas are crash-consistent images of the entire VM and do not provide item-level recovery. Enabling Azure Backup on the same VMs adds daily application-consistent backups that support file-level restore. Using ASR together with Azure Backup therefore satisfies both the aggressive RPO for full-VM recovery and the need to restore individual files, while remaining fully managed by Azure. The other options fail to meet one or more requirements: Azure Backup alone cannot deliver a 15-minute RPO; ASR alone lacks file-level recovery; and a custom Automation-driven snapshot strategy introduces extra management overhead and does not provide built-in file recovery.
Ask Bash
What is the difference between Azure Backup and Azure Site Recovery (ASR)?
What is the meaning of Recovery Point Objective (RPO) in this context?
Why can't Azure Backup or ASR alone meet the requirements in this case?
A company hosts a media-streaming web app in the East US 2 region. User-uploaded videos are stored in an Azure general-purpose v2 storage account that currently uses locally redundant storage (LRS). After a recent region-wide outage, the CTO asks you to redesign the storage layer to achieve the following goals:
- Videos must remain available for read access if the entire East US 2 region becomes unavailable.
- Write operations must continue to be directed only to East US 2 to keep latency low.
- The solution should meet or exceed a 99.99 percent SLA for read availability while keeping costs lower than options that add both zone and geo redundancy.
Which Azure Storage redundancy option should you recommend?
Zone-redundant storage (ZRS) in the East US 2 region
Read-access geo-redundant storage (RA-GRS)
Geo-zone-redundant storage (GZRS)
Geo-redundant storage (GRS)
Answer Description
Read-access geo-redundant storage (RA-GRS) keeps three synchronous copies in the primary region and maintains asynchronous geo-replication of another three copies to a paired secondary region. Because the secondary endpoint is readable at all times, applications can switch to it automatically or manually if the primary region is unavailable, providing up to 99.99 percent read availability. Writes still occur only in the primary region, so application latency and cost remain lower than options such as geo-zone-redundant storage (GZRS) or read-access GZRS, which add zone-level replication and higher pricing. Zone-redundant storage (ZRS) protects only against single-zone failures within one region, and standard GRS does not allow reads from the secondary until after a failover. Therefore, RA-GRS is the appropriate choice.
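A minimal sketch of how an application can exploit the readable secondary (assuming the azure-storage-blob and azure-identity packages and a hypothetical account name) is shown below; note that writes still go only to the primary endpoint.

```python
# Read from the RA-GRS secondary endpoint when the primary region is unavailable.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

credential = DefaultAzureCredential()

primary = BlobServiceClient(
    account_url="https://mediauploads.blob.core.windows.net", credential=credential
)
# RA-GRS publishes a "-secondary" endpoint that serves the geo-replicated copy.
secondary = BlobServiceClient(
    account_url="https://mediauploads-secondary.blob.core.windows.net",
    credential=credential,
)

def read_video(container: str, blob: str) -> bytes:
    """Try the primary region first; fall back to the secondary for reads."""
    for client in (primary, secondary):
        try:
            return client.get_blob_client(container, blob).download_blob().readall()
        except Exception:  # broad catch for brevity; narrow this in real code
            continue
    raise RuntimeError("Blob unavailable from both primary and secondary endpoints")
```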
Ask Bash
What is RA-GRS and how does it differ from GRS?
Why does RA-GRS meet the 99.99% SLA for read availability?
What are the cost benefits of RA-GRS compared to GZRS?
Your company runs a mission-critical line-of-business application that uses an Azure SQL Database single database in the West Europe region. Management requires the database tier to provide at least 99.995 percent availability within the region and to fail over automatically with zero data loss during any planned or unplanned outage. Operational overhead must remain minimal, and the application connection string must not change. Which solution should you recommend?
Migrate to SQL Server on Azure virtual machines in an availability set with a SQL Server Failover Cluster Instance.
Switch to the General Purpose service tier and configure an auto-failover group between West Europe and North Europe.
Move the database to the Business Critical service tier and enable zone-redundant configuration.
Enable active geo-replication to a secondary database in North Europe.
Answer Description
Configuring the database in the Business Critical service tier with zone redundancy enables SQL Database's built-in quorum-based high-availability architecture to place multiple synchronous replicas across three availability zones in the same region. Automatic failover occurs transparently with no data loss, the SLA increases to 99.995 percent, and the existing connection string remains valid because the logical server endpoint does not change.
Active geo-replication or auto-failover groups replicate asynchronously to another region, provide only a 99.99 percent intra-region SLA, and may require updating endpoints. Deploying SQL Server on Azure VMs inside an availability set achieves just a 99.95 percent SLA and imposes full infrastructure and patching management responsibilities.
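For illustration, the database-level change amounts to little more than the following sketch; the server and database names, SKU size, and capacity are placeholders.

```python
import json

# Sketch of the settings that enable the zone-redundant Business Critical
# configuration on an Azure SQL Database.
zone_redundant_db = {
    "type": "Microsoft.Sql/servers/databases",
    "name": "lob-sql-server/lob-db",
    "location": "westeurope",
    "sku": {"name": "BC_Gen5", "tier": "BusinessCritical", "capacity": 8},
    "properties": {
        # Place the synchronous replicas across availability zones in-region.
        "zoneRedundant": True
    },
}

print(json.dumps(zone_redundant_db, indent=2))
```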
Ask Bash
What is the Business Critical service tier in Azure SQL Database?
What is zone redundancy, and how does it work in Azure?
How does the failover process work in Azure SQL Database with the Business Critical tier?
A company is modernizing an on-premises microservices application and plans to deploy containers to Azure. Mandatory requirements are:
- Each microservice must automatically scale from 0 to hundreds of instances on HTTP or queue events.
- Platform management of the underlying infrastructure must be minimized; developers must not administer Kubernetes clusters.
- Built-in support is needed for Dapr service invocation and pub/sub between microservices.
Which Azure service should you recommend to host the containers?
Azure Container Apps
Azure Kubernetes Service (AKS) with the cluster autoscaler and KEDA add-on
Azure Container Instances orchestrated by Azure Logic Apps
Azure App Service for Containers running in the Premium v3 tier with autoscale rules
Answer Description
Azure Container Apps is a fully managed service that runs containers on an Azure-managed Kubernetes environment yet hides cluster management from developers. It supports event-driven horizontal scaling with KEDA, including scaling to zero when workloads are idle, and offers native integration with the Dapr runtime for service invocation and pub/sub. Azure Kubernetes Service also supports KEDA and Dapr but still requires customers to operate and secure the cluster and cannot completely eliminate baseline node costs. Azure App Service for Containers and Azure Container Instances do not provide Dapr integration or scale-to-zero capabilities. Therefore, Azure Container Apps best meets all stated requirements.
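A rough sketch of the relevant Dapr and scale settings on a container app follows; the app name, image, replica limits, and concurrency threshold are assumptions for illustration only.

```python
import json

# Sketch of a container app with Dapr enabled and an event-driven HTTP scale
# rule that allows scaling to zero, trimmed to the relevant properties.
order_service = {
    "type": "Microsoft.App/containerApps",
    "name": "order-service",
    "properties": {
        "configuration": {
            "ingress": {"external": True, "targetPort": 8080},
            "dapr": {"enabled": True, "appId": "order-service", "appPort": 8080},
        },
        "template": {
            "containers": [
                {"name": "order-service", "image": "contoso.azurecr.io/order-service:1.0"}
            ],
            "scale": {
                "minReplicas": 0,    # scale to zero when idle
                "maxReplicas": 200,  # burst to hundreds of instances
                "rules": [
                    {"name": "http-burst", "http": {"metadata": {"concurrentRequests": "50"}}}
                ],
            },
        },
    },
}

print(json.dumps(order_service, indent=2))
```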
Ask Bash
What is Dapr, and why is it important for microservices in Azure?
How does Azure Container Apps scale seamlessly from 0 to handle large workloads?
What differentiates Azure Container Apps from Azure Kubernetes Service (AKS)?
Your company maintains 10,000 user accounts in several on-premises Active Directory forests and has multiple Azure subscriptions. The organization wants every user to have a single cloud identity while keeping their existing on-premises credentials. You must retire the current AD FS farm, avoid opening any additional inbound firewall ports, and enable Azure AD Conditional Access and multifactor authentication for Microsoft 365 and custom SaaS applications. Which identity management approach should you recommend?
Maintain AD FS federation and publish the AD FS farm through Azure AD Application Proxy.
Deploy Azure AD Domain Services and join all Azure virtual machines to the managed domain.
Configure Azure AD Password Hash Synchronization and disable on-premises authentication.
Implement Azure AD Pass-through Authentication with Seamless Single Sign-On and retire the AD FS infrastructure.
Answer Description
Pass-through Authentication (PTA) with Seamless Single Sign-On allows Azure AD to validate a user's password against on-premises Active Directory through outbound-only HTTPS requests from lightweight authentication agents. This removes the need for AD FS servers and does not require exposing inbound ports. Because authentication is handled by Azure AD, Conditional Access policies and multifactor authentication can be applied to cloud apps.
Password Hash Synchronization would meet cloud sign-in requirements but would not satisfy organizations that want on-premises password validation, and it cannot enforce on-premises sign-in policies in real time. Continuing with AD FS keeps the infrastructure you are asked to retire and still requires inbound access to the federation servers. Azure AD Domain Services provides domain join and LDAP/Kerberos for Azure VMs but is not intended to replace user sign-in flows to Microsoft 365 or SaaS applications. Therefore, implementing Azure AD Pass-through Authentication with Seamless SSO is the most appropriate solution.
Ask Bash
What is Azure AD Pass-through Authentication?
How does Seamless Single Sign-On (SSO) work?
Why is Azure AD Conditional Access important?
A financial-services company runs its core workload in a single Azure SQL Database in the General Purpose tier. Administrators must be able to restore the database to any point in time within the last 30 days and also recover a full backup from any week during the next 10 years. The solution must require minimal ongoing management and incur the lowest possible storage cost. Which Azure SQL capability should you recommend?
Use Azure Backup to protect the database by installing the Azure Backup agent and scheduling weekly backups for 10 years.
Create an Automation runbook that exports the database as a BACPAC file to Azure Storage every week and deletes files older than 10 years.
Enable a long-term backup retention (LTR) policy on the Azure SQL Database to store weekly full backups in Azure Blob Storage for up to 10 years.
Configure active geo-replication to a secondary database in another region and keep the replica online for 10 years.
Answer Description
Azure SQL Database automatically retains short-term, point-in-time backups for a configurable period of up to 35 days, so setting the retention window to 30 days satisfies the first requirement. By enabling the built-in long-term retention (LTR) policy, weekly full backups are copied to cost-effective Azure Blob Storage and can be kept for as long as 10 years, meeting the compliance mandate with minimal administrative overhead. Active geo-replication and auto-failover groups address availability but do not provide decade-long backup retention. Using the Azure Backup agent or manual BACPAC exports would introduce additional operational effort and higher costs compared to the managed LTR feature.
Ask Bash
What is a long-term retention (LTR) policy in Azure SQL Database?
How does point-in-time restore work in Azure SQL Database?
Why is Azure Blob Storage used for long-term retention in this solution?
A media company must migrate 800 TB of unstructured video files that currently reside on an on-premises NAS hosted in a secure, offline network segment. The WAN link to Azure is limited to 200 Mbps and is shared with other workloads, so the company wants to avoid saturating it. The migration must be completed within four weeks, and after the data is transferred there will be no further synchronization requirements. Which Azure service should you recommend to perform the migration?
Use AzCopy to copy the data over the existing 200 Mbps ExpressRoute circuit directly into Blob Storage.
Order an Azure Data Box Heavy appliance for an offline bulk transfer to Azure Blob Storage.
Implement Azure File Sync to replicate the NAS shares to an Azure file share and then tier cold data to the cloud.
Deploy Azure Data Box Gateway as a virtual appliance onsite and let it stream data continuously to Blob Storage.
Answer Description
Because the company needs to move hundreds of terabytes within a short time frame and cannot rely on limited bandwidth, an offline seeding approach is required. Azure Data Box Heavy is a Microsoft-managed ruggedized appliance that can transfer up to 1 PB of data per order and is designed specifically for large, one-time bulk moves to Azure Storage. Azure File Sync and Azure Data Box Gateway depend on the existing network, which would take several months to move 800 TB over a 200 Mbps link. Copying over ExpressRoute with AzCopy also relies on network bandwidth and would exceed the four-week window even with a dedicated 200 Mbps circuit. Therefore, Azure Data Box Heavy is the most suitable choice.
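A quick back-of-the-envelope calculation makes the point: even under the unrealistic assumption that the full 200 Mbps link is dedicated to the migration with no protocol overhead, the transfer takes roughly a year.

```python
# Idealized transfer-time check for moving 800 TB over a 200 Mbps link.
data_tb = 800                   # terabytes to migrate
link_mbps = 200                 # available WAN bandwidth in megabits/second

data_bits = data_tb * 1e12 * 8  # 800 TB expressed in bits (decimal TB)
seconds = data_bits / (link_mbps * 1e6)
days = seconds / 86400

print(f"Transfer time at full line rate: {days:.0f} days (~{days / 7:.0f} weeks)")
# Roughly 370 days - far beyond the four-week window - which is why an
# offline Data Box Heavy transfer is the practical option here.
```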
Ask Bash
What is Azure Data Box Heavy and when should it be used?
Why is Azure Data Box Gateway not suitable for this scenario?
How does Azure Data Box differ from Azure File Sync for data migration?
Your company has a tenant with a production and a development subscription. Security requirements mandate that all production storage accounts use only General Purpose v2 with customer-managed keys, while development is exempt. You need a solution that enforces the requirement automatically across any current or future production subscriptions and provides a single compliance report. What should you recommend?
Assign an Azure Policy initiative to the production management group and a separate initiative to the development management group.
Create an Azure Blueprint that includes the policy and apply the blueprint to each subscription.
Create a custom RBAC role that denies creation of non-compliant storage accounts and assign it at the subscription level.
Assign individual policy definitions to each storage resource group in every subscription.
Answer Description
Assigning an Azure Policy initiative at the management group level lets you bundle multiple policy definitions-such as allowed SKUs and required encryption-and apply them to every subscription contained in that management group. All existing and newly created subscriptions that move under the production management group will automatically inherit the policy assignments, and compliance data is aggregated at the management group scope. Assigning at resource-group scope would require ongoing manual work and yield fragmented reporting. Blueprints would still need to be applied to each subscription and do not automatically affect future subscriptions. RBAC custom roles control permissions but cannot enforce configuration settings such as storage account SKU or encryption.
Ask Bash
What is an Azure Policy initiative?
What is the difference between a management group and a subscription in Azure?
Why is using Azure Policy initiatives more effective than custom RBAC roles for this scenario?
You administer an on-premises SQL Server 2016 Always On availability group that hosts a 4-TB transactional database generating about 50 GB of new data daily. The business has approved migrating this workload to Azure SQL Managed Instance. The migration must allow thorough pre-cutover testing, keep the source database online during data movement, and limit the final cutover outage to less than 30 minutes. You want to minimize custom scripting and use Microsoft-provided tooling. Which migration approach should you recommend?
Perform an online migration with Azure Database Migration Service, enabling continuous replication until the planned cutover.
Implement log shipping to a SQL Server VM in Azure, then upgrade the VM in place to Managed Instance using the Azure portal.
Export a BACPAC using Azure Data Migration Assistant, then import the file into the managed instance.
Configure on-premises SQL Server as a publisher and Azure SQL Managed Instance as a transactional replication subscriber, then switch applications over.
Answer Description
Azure Database Migration Service (DMS) supports online migrations from SQL Server to Azure SQL Managed Instance. In an online migration, DMS sets up continuous data replication while the source database remains fully operational, enabling validation and testing on the target. When ready, a brief cutover stops replication, applies any final changes, and redirects clients-typically achievable in well under 30 minutes.
Exporting a 4-TB database to a BACPAC would require extended downtime for export, transfer, and import. Transactional replication to Managed Instance is supported but involves complex configuration and manual fail-over steps, making it less suitable for a streamlined, Microsoft-managed process. Log shipping cannot restore to a Managed Instance, and upgrading a SQL Server VM does not convert it to a managed PaaS offering.
Ask Bash
What is Azure Database Migration Service (DMS)?
How does continuous replication work in Azure Database Migration Service?
Why is transactional replication or log shipping not suitable for this scenario?
Gnarly!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.