Microsoft Azure Solutions Architect Expert Practice Test (AZ-305)
Use the form below to configure your Microsoft Azure Solutions Architect Expert Practice Test (AZ-305). The practice test can be configured to only include certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft Azure Solutions Architect Expert AZ-305 Information
Navigating the Microsoft Azure Solutions Architect Expert AZ-305 Exam
The Microsoft Azure Solutions Architect Expert AZ-305 exam is a pivotal certification for professionals who design and implement solutions on Microsoft's cloud platform. This exam validates a candidate's expertise in translating business requirements into secure, scalable, and reliable Azure solutions. Aimed at individuals with advanced experience in IT operations, including networking, virtualization, and security, the AZ-305 certification demonstrates subject matter expertise in designing cloud and hybrid solutions. Success in this exam signifies that a professional can advise stakeholders and architect solutions that align with the Azure Well-Architected Framework and the Cloud Adoption Framework for Azure.
The AZ-305 exam evaluates a candidate's proficiency across four primary domains. These core areas include designing solutions for identity, governance, and monitoring, which accounts for 25-30% of the exam. Another significant portion, 30-35%, is dedicated to designing infrastructure solutions. The exam also assesses the ability to design data storage solutions (20-25%) and business continuity solutions (15-20%). This structure ensures that certified architects possess a comprehensive understanding of creating holistic cloud environments that address everything from identity management and data storage to disaster recovery and infrastructure deployment.
The Strategic Advantage of Practice Exams
A crucial component of preparing for the AZ-305 exam is leveraging practice tests. Taking practice exams offers a realistic simulation of the actual test environment, helping candidates become familiar with the question formats, which can include multiple-choice, multi-response, and scenario-based questions. This familiarity helps in developing effective time management skills, a critical factor for success during the timed exam. Furthermore, practice tests are an excellent tool for identifying knowledge gaps. By reviewing incorrect answers and understanding the reasoning behind the correct ones, candidates can focus their study efforts more effectively on weaker areas.
The benefits of using practice exams extend beyond technical preparation. Successfully navigating these tests can significantly boost a candidate's confidence. As performance improves with each practice test, anxiety about the actual exam can be reduced. Many platforms offer practice exams that replicate the look and feel of the real test, providing detailed explanations for both correct and incorrect answers. This active engagement with the material is more effective than passive reading and is a strategic approach to ensuring readiness for the complexities of the AZ-305 exam.

Free Microsoft Azure Solutions Architect Expert AZ-305 Practice Test
- 20 Questions
- Unlimited
- Design identity, governance, and monitoring solutions; Design data storage solutions; Design business continuity solutions; Design infrastructure solutions
You manage several Azure SQL Databases that run in the General Purpose service tier. Corporate policy requires the following for all database backups:
- Backups must remain available if the primary Azure region suffers an outage.
- Auditors must be able to recover the databases in the paired secondary region without the need to maintain a continuously running secondary database.
- The solution should use the lowest-cost built-in capability and must not require creating or managing active geo-replicated databases.
Which configuration should you recommend to meet these requirements?
Enable long-term retention (LTR) backups in a secondary region.
Implement active geo-replication to a secondary database in the paired region.
Enable zone-redundant backup storage for each database.
Configure read-access geo-redundant (RA-GRS) backup storage for the databases.
Answer Description
Configuring the databases to use read-access geo-redundant (RA-GRS) backup storage satisfies the requirements. With RA-GRS, Azure automatically copies each automated backup to the paired secondary region. If the primary region becomes unavailable, you can perform a geo-restore in the secondary region, meeting the recovery requirement without having to maintain an active replica. Because the backups are stored on low-cost RA-GRS Azure Blob Storage, no additional compute charges accrue. Zone-redundant backup storage protects only against zonal failures inside the same region. Active geo-replication keeps a live secondary database running, which contravenes the cost and management constraints. Long-term retention adds archival capability but does not guarantee cross-region availability or protection for recent backups.
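For illustration, the backup storage redundancy of an existing database can be switched programmatically. The sketch below uses the azure-mgmt-sql Python package; the subscription, resource group, server, and database names are placeholders, and the `requested_backup_storage_redundancy` property name reflects the current management API as we understand it, so treat this as a sketch rather than a verified procedure.

```python
# Sketch: switch an Azure SQL database's automated-backup storage to geo-redundant (RA-GRS).
# Assumes: pip install azure-identity azure-mgmt-sql; all names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

subscription_id = "<subscription-id>"
resource_group = "rg-sql-prod"      # hypothetical resource group
server_name = "sql-prod-weu"        # hypothetical logical server
database_name = "ordersdb"          # hypothetical database

client = SqlManagementClient(DefaultAzureCredential(), subscription_id)

# Request geo-redundant backup storage so automated backups are copied to the paired
# region and remain available for geo-restore during a regional outage.
poller = client.databases.begin_update(
    resource_group,
    server_name,
    database_name,
    {"requested_backup_storage_redundancy": "Geo"},
)
poller.result()
```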
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is RA-GRS backup storage?
How does geo-restore work for Azure SQL Databases?
Why is zone-redundant storage insufficient for regional outages?
Contoso operates a public SaaS API that is deployed in active-active mode across three Azure regions. You must design a single entry point that: terminates TLS, supports path-based routing and session affinity, automatically directs each client to the lowest-latency healthy backend, and performs fail-over to another region within seconds if an outage occurs, without relying on client DNS cache expiration. Which Azure service best meets these requirements?
Azure cross-region Standard Load Balancer
Azure Application Gateway deployed in each region and distributed by Azure DNS weighted records
Azure Front Door Standard/Premium
Azure Traffic Manager with performance routing
Answer Description
Azure Front Door provides global HTTP/HTTPS load-balancing from Microsoft edge points of presence. It performs TLS termination, URL and path-based routing, and optional cookie-based session affinity, while continuously probing backend health. Because traffic is steered at Layer 7 through a single anycast IP that stays constant even when a backend becomes unhealthy, Front Door can redirect new connections to another healthy region within seconds and does not depend on client-side DNS TTL expiry. Azure Traffic Manager works only at the DNS layer, offers no path routing or SSL offload, and its fail-over speed is limited by DNS caching. Deploying Azure Application Gateway in each region behind Azure DNS still relies on DNS-based routing and lacks global anycast acceleration. A cross-region Azure Standard Load Balancer operates at Layer 4 without TLS termination or path-based routing, so it cannot satisfy the requirements.
Ask Bash
What is Azure Front Door and why is it suitable here?
How does Azure Front Door compare to Azure Traffic Manager?
What is the difference between Layer 4 and Layer 7 load balancers?
You manage 50 Azure subscriptions for a global company. The security team operates a third-party SIEM that ingests data from Azure Event Hubs. All Azure Activity logs and resource diagnostic logs must be streamed to the SIEM in near-real time and retained in Azure for at least 365 days for audit purposes. Operational effort must be kept to a minimum. Which solution should you recommend?
Assign an Azure Policy that configures diagnostic settings on all subscriptions to send logs to a centralized Log Analytics workspace (with 365-day retention) and to a shared Event Hub namespace used by the SIEM.
Create an Automation Account in each subscription that regularly exports logs to an Azure Storage account and then forwards the files to the SIEM over REST.
Deploy Azure Sentinel in every subscription and use Sentinel data connectors and playbooks to push collected logs to the SIEM.
Enable Continuous Export from Azure Monitor to an Azure Storage account and configure the SIEM to pull log files from the storage account.
Answer Description
Diagnostic settings can be deployed at scale by using built-in Azure Policy definitions. A single assignment can automatically enable diagnostic settings on subscriptions and resources, routing platform logs and metrics to multiple destinations: a Log Analytics workspace (where retention can be configured for 365 days or longer) and an Event Hub namespace. The workspace satisfies the audit-retention requirement, while the Event Hub feed is consumed by the SIEM. The solution is entirely managed and requires no custom code or ongoing maintenance. The other options rely on custom runbooks, unsupported continuous export paths, or duplicated Sentinel deployments, so they impose higher operational overhead or fail to meet the technical requirements.
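Under the hood, the policy's deployIfNotExists effect creates a diagnostic setting with both destinations on each target scope. The sketch below shows an equivalent single call with the azure-mgmt-monitor Python package for one subscription's Activity log; all resource IDs are placeholders and only a subset of log categories is shown.

```python
# Sketch: diagnostic setting on a subscription's Activity log that streams to a
# Log Analytics workspace (audit retention) and an Event Hub (SIEM ingestion).
# Assumes: pip install azure-identity azure-mgmt-monitor; IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<subscription-id>"
client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

workspace_id = ("/subscriptions/<subscription-id>/resourceGroups/rg-logging/providers/"
                "Microsoft.OperationalInsights/workspaces/law-central")
event_hub_rule_id = ("/subscriptions/<subscription-id>/resourceGroups/rg-logging/providers/"
                     "Microsoft.EventHub/namespaces/evh-siem/authorizationRules/RootManageSharedAccessKey")

client.diagnostic_settings.create_or_update(
    resource_uri=f"/subscriptions/{subscription_id}",   # Activity log lives at subscription scope
    name="send-to-siem-and-workspace",
    parameters={
        "workspace_id": workspace_id,
        "event_hub_authorization_rule_id": event_hub_rule_id,
        "event_hub_name": "activity-logs",
        "logs": [
            {"category": "Administrative", "enabled": True},
            {"category": "Security", "enabled": True},
        ],
    },
)
```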
Ask Bash
What is Azure Policy, and how does it work for automating settings like diagnostic configuration?
What is an Azure Event Hub namespace, and why is it used in this solution?
Why is a centralized Log Analytics workspace recommended for log retention in this solution?
Developers commit code to Azure Repos Git. Web APIs are deployed to multiple App Service instances in different Azure subscriptions. You must design an automated solution that builds on every commit, runs tests, deploys to staging slots, and pauses for manual approval before promoting to production. The deployment definition must reside with the code, and you want to avoid introducing additional services. Which Azure service should you recommend to implement the pipeline?
Azure DevOps multi-stage YAML pipelines
GitHub Actions workflows with environment protection rules
Azure Automation runbooks triggered by webhooks
Azure App Service Deployment Center with local Git integration
Answer Description
Multi-stage YAML pipelines in Azure DevOps meet all stated requirements. The pipeline definition is stored as code in the same Azure Repos repository, enabling versioning and collaboration. Build, test, and deployment jobs can target resources in any subscription through service connections, and stages can map to environments that support manual approval gates before production deployment. GitHub Actions would require moving the repository or linking an additional service, while Azure Automation runbooks and Deployment Center lack integrated build, test, and gated multi-stage release capabilities.
Ask Bash
What are Azure DevOps YAML pipelines, and how do they work?
What makes Azure DevOps YAML pipelines preferable over GitHub Actions for this solution?
How does environment-based manual approval work in Azure DevOps pipelines?
Contoso Ltd. employs 30 first-line support engineers who must be able to restart any virtual machine in the company's three Azure subscriptions during their 8-hour shift. Security policy requires that:
- Engineers receive only the minimum permissions necessary.
- Access must expire automatically at the end of each shift.
- A shift lead must approve the access request before it is granted.
You need to recommend an authorization solution that meets the requirements while minimizing administrative effort. What should you recommend?
Add the engineers to the built-in Contributor role at each subscription scope and configure Azure AD Access Reviews to run once per month.
Create an Azure Automation runbook that restarts virtual machines and grant the engineers permission to invoke the runbook through an Azure DevOps pipeline.
Use Azure AD PIM to make each engineer eligible for the built-in Virtual Machine Contributor role at the resource-group level with no approval workflow and a permanent assignment.
Create a custom Azure RBAC role that includes only the Microsoft.Compute/virtualMachines/restart/action permission, onboard each subscription to Azure AD Privileged Identity Management, and assign the role as eligible directly to every engineer at the subscription scope. Configure PIM to require shift-lead approval and set the activation duration to eight hours.
Answer Description
Azure AD Privileged Identity Management (PIM) for Azure RBAC allows you to create eligible, time-bound assignments that can require approval and expire automatically after the configured activation duration (eight hours in this scenario). By defining a custom RBAC role that contains only the Microsoft.Compute/virtualMachines/restart/action permission, you enforce least privilege. Assigning that role as eligible directly to each engineer at the subscription scope avoids the limitation that group assignments cannot be made eligible, yet still requires only a one-time setup per engineer. The PIM settings let a shift lead approve each activation request. The Contributor and Virtual Machine Contributor roles grant unnecessary permissions, and monthly access reviews or automation pipelines do not deliver the required per-shift, approval-based, time-bound access.
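To illustrate the least-privilege role, here is a sketch of the custom role definition written as a Python dict; the role name, description, and subscription IDs are placeholders. Saved to a JSON file, it could be registered with `az role definition create --role-definition role.json` before configuring the PIM eligible assignments.

```python
# Sketch: a custom Azure RBAC role that allows only VM restart.
# Role name, description, and subscription IDs are placeholders.
import json

custom_role = {
    "Name": "Virtual Machine Restart Operator",
    "IsCustom": True,
    "Description": "Can restart virtual machines and nothing else.",
    "Actions": ["Microsoft.Compute/virtualMachines/restart/action"],
    "NotActions": [],
    "AssignableScopes": [
        "/subscriptions/<subscription-1>",
        "/subscriptions/<subscription-2>",
        "/subscriptions/<subscription-3>",
    ],
}

with open("role.json", "w") as f:
    json.dump(custom_role, f, indent=2)
# Then register it, for example: az role definition create --role-definition role.json
```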
Ask Bash
What is Azure RBAC and how does it work?
What is Azure AD Privileged Identity Management (PIM) and why is it useful?
How does using a custom RBAC role enforce least privilege?
Contoso Ltd. has an on-premises Active Directory forest with 10,000 users. The company will adopt several Azure and SaaS applications that support SAML 2.0 or OAuth 2.0. Security requirements: users must sign in with their on-premises domain credentials; multi-factor authentication (MFA) must be enforced for all cloud logons; no user password hashes may be stored in Azure AD. You must recommend an authentication solution that meets the requirements while keeping additional on-premises infrastructure to a minimum. Which solution should you recommend?
Deploy an Active Directory Federation Services (AD FS) farm and configure federated authentication with Azure AD Multi-Factor Authentication Server.
Implement Azure AD Pass-through Authentication with Seamless Single Sign-On and enable Azure AD Multi-Factor Authentication.
Configure Azure AD Password Hash Synchronization with Seamless Single Sign-On and Conditional Access to enforce Multi-Factor Authentication.
Create an Azure AD B2C tenant and integrate the on-premises Active Directory as an identity provider by using custom policies.
Answer Description
Azure AD Pass-through Authentication (PTA) uses lightweight agents to validate a user's password directly against the on-premises domain controller during sign-in, so no password hashes are ever stored in Azure AD. PTA supports Seamless Single Sign-On, allowing users to authenticate with their existing corporate credentials, and Azure AD Conditional Access can require MFA for all cloud sign-ins. This approach needs only a few agent installations and avoids the extra servers, proxy configuration, and certificate management overhead required by an AD FS farm. Password Hash Synchronization does not meet the requirement to avoid storing hashes in Azure AD, while Azure AD B2C is intended for external identities rather than corporate users. Therefore, implementing PTA with Seamless SSO and Azure AD MFA is the best fit.
Ask Bash
What is Azure AD Pass-through Authentication (PTA)?
How does Seamless Single Sign-On (SSO) work in Azure AD?
Why is Azure AD Pass-through Authentication preferred over AD FS in this scenario?
You are designing a compute platform for a containerized background worker that processes orders placed in an Azure Service Bus queue. The job runs a custom Docker image that includes proprietary machine-learning libraries larger than the 250-MB limit for Azure Functions code packages. During most weekdays the queue is empty, but from Friday night through Sunday morning it can exceed 50,000 messages and must be drained within four hours. The operations team wants the solution to scale automatically down to zero instances when idle and to require the least possible infrastructure management effort.
Which Azure compute service should you recommend?
Azure Kubernetes Service with the cluster autoscaler enabled
Azure Functions in a Premium plan running the container image
Azure Container Apps with KEDA-based autoscaling on the Service Bus queue
Azure Container Instances launched by an Azure Logic App each time messages arrive
Answer Description
Azure Container Apps natively runs container images without size limits, integrates with KEDA to scale on Azure Service Bus queue length, and can scale out rapidly (up to hundreds of replicas) or all the way to zero when the queue is empty, providing true serverless consumption pricing. Azure Container Instances lack event-driven autoscale and therefore would require additional orchestration logic. Azure Functions premium plan can run containers but keeps at least one instance warm, so it cannot scale to zero and incurs always-on costs. Azure Kubernetes Service offers full control and autoscaling, but cluster creation, node patching, and control-plane management contradict the requirement to minimize infrastructure management effort.
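As a rough illustration, the scale section of the Container App template could look like the sketch below, shown as a Python dict that mirrors the ARM/Bicep properties. The queue name, secret name, and thresholds are placeholders; the `azure-servicebus` scaler type and its `queueName`/`messageCount` metadata come from KEDA's Service Bus scaler.

```python
# Sketch: KEDA-based scale rule for an Azure Container App draining a Service Bus queue.
# Mirrors the "template.scale" block of the Container App resource; values are placeholders.
scale = {
    "minReplicas": 0,        # scale to zero on quiet weekdays
    "maxReplicas": 100,      # burst out for the weekend backlog
    "rules": [
        {
            "name": "servicebus-queue-rule",
            "custom": {
                "type": "azure-servicebus",          # KEDA Service Bus scaler
                "metadata": {
                    "queueName": "orders",           # hypothetical queue name
                    "messageCount": "500",           # messages per replica before scaling out
                },
                "auth": [
                    {
                        "secretRef": "servicebus-connection",  # Container App secret with the connection string
                        "triggerParameter": "connection",
                    }
                ],
            },
        }
    ],
}
```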
Ask Bash
What is KEDA, and how does it work with Azure Container Apps?
Why is Azure Kubernetes Service (AKS) not an ideal choice for this scenario?
How does Azure Container Apps handle proprietary machine-learning libraries in large container images?
You are planning the assessment phase for migrating 450 VMware vSphere virtual machines from an on-premises datacenter to Microsoft Azure. The migration plan must discover all virtual machines without installing software inside each guest, collect 30 days of performance data to recommend right-sized target SKUs, map inter-VM network dependencies without requiring in-guest agents, and produce a total cost of ownership (TCO) comparison between the current environment and Azure. Which on-premises component should you deploy first to satisfy all these requirements?
Run the Microsoft Assessment and Planning (MAP) Toolkit
Deploy the Azure Migrate appliance configured for Discovery and Assessment
Install the Azure Site Recovery Mobility service on each virtual machine
Onboard the servers to Azure Arc and install the Azure Monitor Dependency agent
Answer Description
The Azure Migrate appliance performs agentless discovery of VMware vSphere inventories, streams performance counters for up to 30 days to build right-sizing recommendations, and supports agentless network dependency mapping by querying vCenter for traffic flows. When the collected data is uploaded to the Azure Migrate project, the service can generate readiness reports and TCO comparisons. The other options either require installing an agent on every VM (Site Recovery Mobility service), are deprecated and lack Azure-specific sizing (MAP Toolkit), or provide dependency data without integrated cost and sizing assessments (Azure Arc with dependency agent).
Ask Bash
What is the Azure Migrate appliance, and why is it used for migration assessments?
How does the Azure Migrate appliance perform agentless network dependency mapping?
What is a Total Cost of Ownership (TCO) comparison in Azure Migrate?
Your organization must safeguard customer-owned data-encryption keys used by several Azure services. The solution must meet the following requirements:
- Keys must reside in a FIPS 140-2 Level 3 validated hardware security module (HSM).
- The HSM must run in a dedicated single-tenant boundary that Microsoft manages for you.
- Administrators must assign granular permissions by using Azure role-based access control (RBAC).
Which Azure service should you recommend for storing the keys?
Azure Key Vault Managed HSM
Azure Dedicated Hardware Security Module (HSM) service
Azure Key Vault Premium tier
Azure Confidential Ledger
Answer Description
Azure Key Vault Managed HSM is a fully managed, single-tenant HSM-as-a-service operated by Microsoft. The underlying nShield HSMs are validated to FIPS 140-2 Level 3, and the service integrates natively with Azure RBAC, letting you grant fine-grained roles such as Key Administrator or Key Reader. The Azure Key Vault Premium tier uses multi-tenant HSMs and is certified only to Level 2, while Azure Dedicated HSM, although Level 3 and single-tenant, requires customers to manage the appliance directly and does not leverage Azure RBAC. Azure Confidential Ledger is not intended for cryptographic key storage.
Ask Bash
What is FIPS 140-2 Level 3 and why is it important?
How does Azure Key Vault Managed HSM integrate with Azure RBAC?
What is the difference between Azure Key Vault Managed HSM and Azure Dedicated HSM?
A mission-critical Linux virtual machine (VM) runs an internal line-of-business API for a European customer. The VM (Standard D8s v4, premium SSD) is currently deployed as a single instance in the West Europe region.
The solution must meet the following requirements:
- Remain fully operational if an entire Azure datacenter inside West Europe becomes unavailable.
- Provide a VM connectivity SLA of at least 99.99 percent.
- Keep all customer data inside the West Europe region.
- Minimize additional licensing and ongoing management overhead.
Which deployment option should you recommend?
Enable Azure Site Recovery to replicate the VM to another availability zone in West Europe and configure automatic failover.
Redeploy the workload as a virtual machine scale set with a minimum of two instances distributed across West Europe availability zones and fronted by an Azure Standard Load Balancer.
Add the VM to an availability set configured with two fault domains and two update domains in the existing datacenter.
Move the VM to an Azure Dedicated Host group that spans two fault domains within West Europe.
Answer Description
The only way to achieve a 99.99 percent virtual-machine connectivity SLA inside a single Azure region is to run at least two VM instances distributed across different availability zones and front them with a standard load-balancer-backed endpoint. A virtual machine scale set with zone distribution automates instance deployment and balancing while avoiding any additional licensing costs and requiring minimal management effort.
An availability set does not protect against a full datacenter (zone) outage and is covered by a 99.95 percent SLA. Azure Site Recovery normally fails over to a different region and requires manual or scripted orchestration, so it does not guarantee the required intra-regional 99.99 percent availability. Deploying the workload on dedicated hosts would add significant cost and still not provide zone redundancy unless combined with additional components, so it also fails to meet the requirements.
Ask Bash
What is an availability zone in Azure?
What is the role of an Azure Standard Load Balancer in achieving 99.99% SLA?
How does a virtual machine scale set simplify management and ensure high availability?
A retailer stores 5 TB of semi-structured catalog data in JSON format and serves it to web and mobile apps worldwide. The solution must provide the following:
- 99.999 percent read and write availability
- Automatic regional failover in less than one minute if a region is unavailable
- Recovery point objective (RPO) of under five seconds during regional outages
- Ability to accept writes from any region
Which Azure data-platform configuration meets these requirements with the least administrative effort?
Deploy Azure Cosmos DB across multiple Azure regions with multi-region writes enabled and session consistency.
Store the data in an Azure SQL Database Hyperscale instance and configure active geo-replication to a secondary region.
Deploy Azure Cosmos DB in two regions with a single write region and automatic failover.
Store the data in an Azure Storage account configured for read-access geo-zone-redundant storage (RA-GZRS).
Answer Description
Azure Cosmos DB distributed across multiple regions with multi-region writes enabled is the only Azure data service that meets all stated objectives. The SLA guarantees 99.999 percent availability for both reads and writes and automatic regional failover in under 60 seconds. Cross-regional replication latency is typically under five seconds, satisfying the RPO requirement. A single-write Cosmos DB account delivers only 99.99 percent write availability. Azure Storage RA-GZRS offers eventual consistency and no defined RPO. Azure SQL Database active geo-replication offers an RPO typically under five seconds but carries a 99.99 percent availability SLA, short of the required 99.999 percent, making it unsuitable.
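A sketch of provisioning such an account with the azure-mgmt-cosmosdb Python package follows; the account name, regions, and consistency setting are placeholders chosen to match the scenario, and the property names reflect our reading of the management API rather than a verified deployment.

```python
# Sketch: Cosmos DB account with three write regions, automatic failover, and session consistency.
# Assumes: pip install azure-identity azure-mgmt-cosmosdb; names and regions are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient

client = CosmosDBManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.database_accounts.begin_create_or_update(
    "rg-catalog",
    "catalog-cosmos-acct",
    {
        "location": "westeurope",
        "kind": "GlobalDocumentDB",
        "database_account_offer_type": "Standard",
        "enable_multiple_write_locations": True,   # accept writes in every region
        "enable_automatic_failover": True,         # fail over without manual intervention
        "consistency_policy": {"default_consistency_level": "Session"},
        "locations": [
            {"location_name": "West Europe", "failover_priority": 0},
            {"location_name": "East US", "failover_priority": 1},
            {"location_name": "Southeast Asia", "failover_priority": 2},
        ],
    },
)
account = poller.result()
```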
Ask Bash
Why is Azure Cosmos DB the best choice for this solution?
What is session consistency in Azure Cosmos DB, and why is it relevant here?
How does Azure Cosmos DB achieve automatic regional failover in under 60 seconds?
Your company is building a multi-tenant SaaS solution hosted in Azure. Each tenant receives its own operational database. You expect about 250 small databases that normally consume roughly 1 DTU but can spike to 60 DTUs for a few hours each month. Management wants to minimize overall compute cost without manual intervention while keeping platform maintenance low. Which Azure relational database deployment option should you recommend?
Create an Azure SQL Database elastic pool sized for the combined peak demand and place all tenant databases in the pool.
Deploy each tenant database as an individual Azure SQL Database in the serverless compute tier with auto-pause enabled.
Provision an Azure SQL Managed Instance in the General Purpose tier and host all tenant databases within the instance.
Deploy all tenant data in a single Azure SQL Database at the Business Critical tier and use workload management to enforce per-tenant limits.
Answer Description
An Azure SQL Database elastic pool lets many databases share a pool of compute and storage (measured as eDTUs or vCores). Because tenant workloads peak at different times, the pool can be sized for the combined, rather than individual, demand. When a particular tenant experiences a spike, the service automatically allocates additional compute from the shared pool, while idle databases consume virtually no resources, reducing total cost and operational effort. Running every tenant as an individual serverless database would bill each database separately for its active compute, eliminating the benefit of resource sharing. Hosting all data in a single large database or in a managed instance would reserve dedicated compute continuously, driving costs higher. Therefore, an appropriately sized Azure SQL Database elastic pool best meets the requirements for cost efficiency, automatic bursting, and minimal maintenance.
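As a rough illustration, the pool and a tenant database could be provisioned with the azure-mgmt-sql package along the lines of the sketch below. The eDTU sizing, per-database cap, and names are placeholder values chosen to cover the scenario's 60-DTU spikes, not a verified configuration.

```python
# Sketch: a Standard elastic pool sized for combined demand, with a per-database cap,
# then a tenant database placed into it. All names and sizes are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg, server = "rg-saas", "sql-saas-tenants"

pool = client.elastic_pools.begin_create_or_update(
    rg, server, "tenant-pool",
    {
        "location": "westeurope",
        "sku": {"name": "StandardPool", "tier": "Standard", "capacity": 200},   # shared eDTUs for all tenants
        "per_database_settings": {"min_capacity": 0, "max_capacity": 100},      # cap an individual tenant's spike (placeholder)
    },
).result()

# Move an existing tenant database into the pool.
client.databases.begin_update(
    rg, server, "tenant-042",
    {"elastic_pool_id": pool.id},
).result()
```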
Ask Bash
What is an Azure SQL Database elastic pool?
How do DTUs work in an Azure SQL Database elastic pool?
Why is a serverless compute tier not ideal for multi-tenant SaaS solutions?
Your company hosts an on-premises ASP.NET Core web API that must be consumed by several external partner organizations. The partners already authenticate with their own Azure AD tenants. You must expose the API while meeting the following requirements:
- Authenticate and authorize users by using Azure AD groups.
- Enforce Azure AD Conditional Access policies for multifactor authentication.
- Avoid opening any inbound ports through the corporate firewall or adding new perimeter-network infrastructure.
You need to recommend the simplest Azure-based approach.
Which solution should you recommend?
Set up Active Directory Federation Services (AD FS) with Web Application Proxy in a perimeter network and federate the partner Azure AD tenants.
Deploy Azure AD Domain Services in Azure, join the API server to the managed domain, and enable Azure AD Kerberos authentication.
Establish a site-to-site VPN to Azure and publish the API behind an internal Load Balancer fronted by Azure Application Gateway.
Install an Azure AD Application Proxy connector on the on-premises network and publish the API through Azure AD Application Proxy.
Answer Description
Azure AD Application Proxy meets all the stated requirements. An Application Proxy connector installed inside the corporate network establishes only outbound connections to Azure, so no inbound firewall ports are required. When the on-premises API is published through Application Proxy, users authenticate with Azure AD, allowing authorization through Azure AD groups and enforcement of Conditional Access policies, including MFA.
A site-to-site VPN with an Azure Load Balancer still requires network publishing and does not, by itself, integrate with Azure AD Conditional Access. Deploying Azure AD Domain Services only provides domain join and Kerberos within Azure virtual networks; it neither publishes the on-premises API nor eliminates firewall changes. Implementing AD FS with Web Application Proxy introduces additional perimeter servers and still requires inbound HTTPS ports, increasing complexity compared to Application Proxy.
Ask Bash
What is Azure AD Application Proxy and how does it work?
What are the main benefits of Azure AD Application Proxy versus other solutions?
How does Azure AD Conditional Access reinforce MFA for users accessing the API?
A retail company runs a 500 GB mission-critical SQL Server 2019 database on-premises. They plan to migrate it to Azure with minimal code changes. The solution must meet these requirements:
- The platform must handle operating-system and database patching and retain automated backups for 30 days.
- Provide synchronous high availability across at least two availability zones in the same region.
- Deliver predictable read/write latency below 5 milliseconds.
Which Azure service and service tier should you recommend?
Azure SQL Database Hyperscale tier
Azure Virtual Machine running SQL Server 2019 Enterprise with an Always On availability group
Azure SQL Managed Instance in the Business Critical tier
Azure SQL Database single database in the Premium tier
Answer Description
Azure SQL Managed Instance in the Business Critical tier is a fully managed PaaS offering that provides near-100% compatibility with on-premises SQL Server, so application code and database features rarely need modification after migration. Because it is a managed service, Microsoft handles OS and database patching and supplies automated backups with retention configurable up to 35 days, covering the 30-day requirement. The Business Critical tier uses a four-node Always On availability group architecture that can be configured as zone-redundant, giving synchronous replication across multiple availability zones within a region and ensuring high availability without additional administration. It also places the primary and replicas on local SSD storage, supporting single-digit-millisecond latency for both reads and writes.
In contrast, a single Azure SQL Database in the Premium tier might require feature changes due to lower compatibility and does not guarantee instance-level features. An Azure SQL Virtual Machine would still leave patching and backup management to the customer and would require manual configuration of an availability group. The Hyperscale service tier offers high scalability but provides asynchronous replicas only and is not designed primarily for synchronous zone-redundant HA. Therefore, the Managed Instance Business Critical tier best meets all stated requirements.
Ask Bash
What is Azure SQL Managed Instance?
What is the Business Critical service tier?
How does synchronous replication work in Azure SQL Managed Instance?
A financial-services company plans to migrate an on-premises risk-calculation engine to Azure. The solution runs only on Windows Server 2022 and needs 5-500 identical VM instances that must grow or shrink automatically according to CPU utilization. All instances must be patched by replacing them with the latest version of a single golden image, and they must be distributed across three availability zones to meet a 99.99 percent SLA while keeping administrative effort low. Which Azure service meets all the requirements?
Provision Windows Server 2022 VMs on Azure Dedicated Hosts distributed across zones and use Azure Functions to start and stop VMs on demand.
Use Azure Batch with a custom Windows Server 2022 node image to run the workload and rely on built-in pool autoscaling.
Deploy a Virtual Machine Scale Set that uses a versioned Windows Server 2022 image from an Azure Compute Gallery and spans three Availability Zones.
Create an Availability Set of Windows Server 2022 VMs built from a managed image and configure Azure Automation runbooks to add or remove VMs.
Answer Description
Virtual Machine Scale Sets (VMSS) let you deploy and manage a group of identical or near-identical VMs as a single resource. When you use zone-redundant VMSS with either Uniform or Flexible orchestration, the instances can be spread across multiple Availability Zones, which enables the 99.99 percent VM uptime SLA. VMSS also integrates with autoscale rules that add or remove instances based on performance metrics such as CPU utilization, eliminating manual effort. By storing a versioned Windows Server 2022 image in an Azure Compute Gallery (formerly Shared Image Gallery) and configuring the scale set to use that image, you can roll out OS and application updates simply by updating the image version and performing a rolling upgrade. Availability Sets (without scale sets) provide no built-in autoscale and only fault-domain redundancy. Dedicated hosts require more management and offer no native autoscale. Azure Batch supports autoscale but does not guarantee zone-level redundancy or the 99.99 percent VM SLA and is optimized for stateless batch, not long-running VM workloads.
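The CPU-based scaling described here is expressed as an Azure Monitor autoscale setting that targets the scale set. The sketch below uses the azure-mgmt-monitor Python package with placeholder names, thresholds, and resource IDs; only a scale-out rule is shown, and a matching scale-in rule would normally be added.

```python
# Sketch: autoscale setting that grows a zone-spanning VM scale set under CPU pressure.
# Names, thresholds, and the scale set resource ID are placeholders; scale-in rule omitted.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")
vmss_id = ("/subscriptions/<subscription-id>/resourceGroups/rg-risk/providers/"
           "Microsoft.Compute/virtualMachineScaleSets/vmss-risk-engine")

client.autoscale_settings.create_or_update(
    "rg-risk",
    "risk-engine-autoscale",
    {
        "location": "westeurope",
        "target_resource_uri": vmss_id,
        "enabled": True,
        "profiles": [{
            "name": "cpu-profile",
            "capacity": {"minimum": "5", "maximum": "500", "default": "5"},
            "rules": [{
                # Scale out when average CPU across the scale set exceeds 70% for 5 minutes.
                "metric_trigger": {
                    "metric_name": "Percentage CPU",
                    "metric_resource_uri": vmss_id,
                    "time_grain": "PT1M",
                    "statistic": "Average",
                    "time_window": "PT5M",
                    "time_aggregation": "Average",
                    "operator": "GreaterThan",
                    "threshold": 70,
                },
                "scale_action": {
                    "direction": "Increase",
                    "type": "ChangeCount",
                    "value": "10",
                    "cooldown": "PT5M",
                },
            }],
        }],
    },
)
```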
Ask Bash
What is a Virtual Machine Scale Set (VMSS) in Azure?
What is an Azure Compute Gallery, and how does it help with versioned images?
What are Availability Zones, and how do they contribute to the 99.99% SLA in Azure?
Your company hosts an e-commerce application on virtual machines deployed in two Azure regions in an active-active pattern. You must publish the site through one globally reachable hostname that provides TLS/SSL termination close to users, URL path-based routing to different microservice endpoints, automatic failover to the secondary region if the primary becomes unavailable, and centralized protection against common web exploits such as the OWASP Top 10. Which Azure networking service should you include in the solution design to satisfy all of these requirements?
Expose the application through Azure Firewall instances advertising public IP prefixes over ExpressRoute.
Deploy Azure Traffic Manager in priority mode and point it to regional Azure Load Balancer public IP addresses.
Deploy Azure Front Door and apply a Web Application Firewall (WAF) policy to it.
Place an Azure Application Gateway v2 in each region and use a cross-region Azure Load Balancer to distribute traffic.
Answer Description
Azure Front Door delivers a single anycast public endpoint announced from Microsoft's global edge, giving users one globally distributed hostname. Front Door performs layer-7 load balancing with built-in TLS/SSL offload, URL path-based routing, and health-probe-driven failover between multiple back-end regions. It also integrates directly with Azure Web Application Firewall policies, providing centralized protection against OWASP Top 10 threats.
Azure Traffic Manager offers only DNS-based redirection, not a single anycast IP, and cannot perform SSL offload or provide a WAF. Standard or cross-region Azure Load Balancer works at layer 4 and lacks path-based routing and WAF features. Azure Firewall is a stateful network firewall, not a global web front-end, and does not supply anycast endpoints or application-layer load-balancing capabilities. Therefore, Azure Front Door with WAF is the only option that satisfies every stated requirement.
Ask Bash
What is Azure Front Door?
How does Azure Web Application Firewall (WAF) protect against OWASP Top 10 threats?
What is the difference between Azure Front Door and Azure Traffic Manager?
Your company runs several microservices on Azure Kubernetes Service (AKS). Each service exposes its own internal REST endpoint. A new B2B program requires that you:
- Publish a single public HTTPS endpoint for partners.
- Enforce subscription keys, Azure AD authentication, and per-partner call quotas.
- Perform lightweight request/response transformations and response caching without modifying the microservices.
Which Azure service should you recommend to meet all of these requirements with minimum redevelopment effort?
Create an Azure Application Gateway with a Web Application Firewall to route partner traffic.
Implement an Azure Functions HTTP-triggered proxy that forwards requests to each microservice.
Publish each microservice through Azure Service Bus topics and let partners subscribe.
Deploy Azure API Management in front of the AKS services and expose a unified API.
Answer Description
Azure API Management (APIM) provides a developer-friendly façade over multiple back-end services. It can expose a single public endpoint that routes to different AKS services, enforce subscription keys and Azure AD authentication, apply policy-based rate limits and quotas per caller, perform request/response transformations, and enable in-memory response caching, all without changing the existing microservices. Azure Application Gateway lacks built-in developer subscription management and transformation policies. Azure Functions would require redeveloping proxy logic and does not provide turnkey quotas or caching across existing services. Azure Service Bus is a messaging service rather than an HTTP API gateway and cannot expose REST endpoints directly.
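For context, these cross-cutting concerns are expressed as APIM policies, which are authored in XML. A minimal inbound/outbound policy sketch is shown below as a Python string constant; the tenant ID, audience, rate limits, and cache duration are illustrative values rather than a tested policy.

```python
# Sketch: an APIM policy combining Azure AD JWT validation, per-subscription rate limiting,
# and response caching. Tenant ID, audience, limits, and duration are placeholders.
APIM_POLICY = """
<policies>
  <inbound>
    <base />
    <validate-jwt header-name="Authorization" failed-validation-httpcode="401">
      <openid-config url="https://login.microsoftonline.com/<tenant-id>/v2.0/.well-known/openid-configuration" />
      <audiences>
        <audience>api://partner-orders-api</audience>
      </audiences>
    </validate-jwt>
    <rate-limit-by-key calls="1000" renewal-period="3600"
                       counter-key="@(context.Subscription.Id)" />
    <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" />
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
    <cache-store duration="300" />
  </outbound>
</policies>
"""
```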
Ask Bash
What is Azure API Management (APIM) and why is it recommended in this scenario?
How does APIM enforce subscription keys and authenticate requests?
What types of transformations can APIM perform on requests and responses?
Your company ingests two data streams from a consumer IoT product. Devices send about 5 GB/hour of JSON telemetry that dashboards must query for the last seven days with sub-100 ms latency and allow flexible schema changes. Devices also upload 2 MB JPEG images that are accessed often for 30 days, seldom after, but must be retained for five years. To meet requirements at the lowest cost and administration effort, which Azure storage combination should you recommend?
Azure Cache for Redis to store telemetry and zone-redundant Premium SSD managed disks for images
Azure SQL Database Hyperscale for telemetry and Azure Files with the Cool access tier for images
Azure Cosmos DB (NoSQL) with autoscale throughput for telemetry, and Azure Blob Storage with lifecycle rules to move images from the Hot tier to Cool after 30 days and to Archive after 180 days
Azure Data Lake Storage Gen2 to store both telemetry and images in a single storage account with hierarchical namespace enabled
Answer Description
Azure Cosmos DB for NoSQL guarantees single-digit millisecond reads and writes and is schema-agnostic, making it well-suited for high-velocity, flexible JSON telemetry. Enabling autoscale throughput lets the service automatically adjust RU/s based on load, reducing cost when traffic is low. Azure Blob Storage is the most economical option for large binary objects such as JPEGs. A lifecycle management policy that moves blobs from the Hot tier to Cool after 30 days and to Archive after 180 days keeps frequently accessed images performant while minimizing long-term storage costs, and requires no manual administration.
Azure SQL Database Hyperscale offers strong relational capabilities but incurs higher costs and imposes rigid schemas, making it less appropriate for rapidly changing JSON telemetry; Azure Files and managed disks are costlier than object storage for large, infrequently accessed images. Azure Data Lake Storage Gen2 can store both data types but would not provide the required millisecond read latency without additional compute (for example, Azure Synapse or Data Explorer) and may raise overall cost and complexity. Redis is an in-memory cache, not a durable store for long-term telemetry or images. Therefore, using Cosmos DB with autoscale for telemetry plus tiered Azure Blob Storage for images best balances features, performance, and cost.
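The tiering described above is configured as a storage account lifecycle management policy. A sketch with the azure-mgmt-storage Python package follows; the account name, blob prefix, and day thresholds are placeholders chosen to match the scenario.

```python
# Sketch: lifecycle rule that moves device images from Hot to Cool after 30 days
# and to Archive after 180 days. Account, resource group, and prefix are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.management_policies.create_or_update(
    "rg-iot",
    "stiotimages",
    "default",                      # the lifecycle policy name must be "default"
    {
        "policy": {
            "rules": [{
                "enabled": True,
                "name": "tier-device-images",
                "type": "Lifecycle",
                "definition": {
                    "filters": {"blob_types": ["blockBlob"], "prefix_match": ["images/"]},
                    "actions": {
                        "base_blob": {
                            "tier_to_cool": {"days_after_modification_greater_than": 30},
                            "tier_to_archive": {"days_after_modification_greater_than": 180},
                        }
                    },
                },
            }]
        }
    },
)
```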
Ask Bash
What is Azure Cosmos DB, and why is it suitable for IoT telemetry?
What are the Azure Blob Storage tiers, and how do lifecycle management rules work?
Why isn’t Azure SQL Database Hyperscale or Azure Data Lake Storage Gen2 a good fit for this scenario?
An insurance company runs a mission-critical policy administration system on SQL Server 2017 Enterprise Edition. The database uses SQL Agent jobs, cross-database queries, SQL CLR procedures, and Service Broker messaging. To reduce operational overhead, the company will migrate to Azure, requiring automatic backups, built-in high availability, and minimal code changes in a platform as a service (PaaS) model. Which Azure data service should you recommend?
Azure SQL Database elastic pool in the Premium tier
Azure SQL Database single database in the Business Critical tier
Azure SQL Managed Instance in the General Purpose tier
SQL Server 2022 on an Azure Virtual Machine protected by Azure Backup
Answer Description
Azure SQL Managed Instance is a fully managed PaaS offering that delivers near-100 percent compatibility with the on-premises SQL Server engine. It supports key features such as SQL Agent, cross-database queries, SQL CLR, Service Broker, and linked servers, while automatically providing backups, patching, and built-in high availability. Azure SQL Database single databases or elastic pools lack full support for SQL Agent and Service Broker, requiring application changes or external job services. Running SQL Server on an Azure VM offers full feature parity but is an IaaS solution that leaves most administration tasks (backups, patching, and HA configuration) to the customer, conflicting with the requirement for a PaaS service. Therefore, Azure SQL Managed Instance best satisfies all stated constraints.
Ask Bash
What is the difference between Azure SQL Managed Instance and Azure SQL Database?
Why is Azure SQL Managed Instance considered a PaaS solution?
How does high availability work in Azure SQL Managed Instance?
Contoso Ltd. plans to consolidate data from several on-premises Oracle and SAP databases and from the Salesforce SaaS application into an Azure Data Lake Storage Gen2 account. The data must be loaded daily, enriched with code-free transformations, and then written to an Azure Synapse Analytics dedicated SQL pool. The solution must minimize infrastructure management, provide a large library of built-in connectors, and allow visual pipeline authoring, scheduling, and monitoring. Which Azure service should you recommend as the primary data integration and orchestration layer?
Azure Event Grid
Azure Databricks
Azure Data Factory
Azure Logic Apps
Answer Description
Azure Data Factory is Microsoft's fully managed data integration service. It provides more than 90 native connectors covering on-premises sources such as Oracle and SAP, and SaaS sources like Salesforce. Using a self-hosted integration runtime, it can securely pull data from on-premises systems into Azure Data Lake Storage Gen2. Data Factory's Mapping Data Flows offer a fully visual, code-free environment for complex data transformations that can write the results directly to an Azure Synapse Analytics dedicated SQL pool. Pipelines can be authored visually or as JSON, scheduled with triggers, and monitored from the built-in monitoring hub, all without managing underlying infrastructure.
Azure Logic Apps focuses on event-driven application and workflow integration rather than large-scale data movement and transformation. Azure Databricks provides powerful analytics and transformation but requires Spark knowledge and cluster management, and it lacks the breadth of ready-made connectors and low-code orchestration features. Azure Event Grid is used for event routing, not end-to-end data integration pipelines.
Ask Bash
What is the self-hosted integration runtime in Azure Data Factory?
What are Mapping Data Flows in Azure Data Factory?
How does Azure Data Factory differ from Azure Databricks for data integration tasks?
That's It!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.