
Microsoft Azure Solutions Architect Expert Practice Test (AZ-305)

The Microsoft Azure Solutions Architect Expert Practice Test (AZ-305) can be configured to include only selected exam objectives and domains, with between 5 and 100 questions and an optional time limit.


Microsoft Azure Solutions Architect Expert AZ-305 Information

The Microsoft Azure Solutions Architect Expert AZ-305 exam is a pivotal certification for professionals who design and implement solutions on Microsoft's cloud platform. This exam validates a candidate's expertise in translating business requirements into secure, scalable, and reliable Azure solutions. Aimed at individuals with advanced experience in IT operations, including networking, virtualization, and security, the AZ-305 certification demonstrates subject matter expertise in designing cloud and hybrid solutions. Success in this exam signifies that a professional can advise stakeholders and architect solutions that align with the Azure Well-Architected Framework and the Cloud Adoption Framework for Azure.

The AZ-305 exam evaluates a candidate's proficiency across four primary domains. These core areas include designing solutions for identity, governance, and monitoring, which accounts for 25-30% of the exam. Another significant portion, 30-35%, is dedicated to designing infrastructure solutions. The exam also assesses the ability to design data storage solutions (20-25%) and business continuity solutions (15-20%). This structure ensures that certified architects possess a comprehensive understanding of creating holistic cloud environments that address everything from identity management and data storage to disaster recovery and infrastructure deployment.

The Strategic Advantage of Practice Exams

A crucial component of preparing for the AZ-305 exam is leveraging practice tests. Taking practice exams offers a realistic simulation of the actual test environment, helping candidates become familiar with the question formats, which can include multiple-choice, multi-response, and scenario-based questions. This familiarity helps in developing effective time management skills, a critical factor for success during the timed exam. Furthermore, practice tests are an excellent tool for identifying knowledge gaps. By reviewing incorrect answers and understanding the reasoning behind the correct ones, candidates can focus their study efforts more effectively on weaker areas.

The benefits of using practice exams extend beyond technical preparation. Successfully navigating these tests can significantly boost a candidate's confidence. As performance improves with each practice test, anxiety about the actual exam can be reduced. Many platforms offer practice exams that replicate the look and feel of the real test, providing detailed explanations for both correct and incorrect answers. This active engagement with the material is more effective than passive reading and is a strategic approach to ensuring readiness for the complexities of the AZ-305 exam.

  • Free Microsoft Azure Solutions Architect Expert AZ-305 Practice Test
  • 20 Questions
  • Unlimited time
  • Design identity, governance, and monitoring solutions
  • Design data storage solutions
  • Design business continuity solutions
  • Design infrastructure solutions
Question 1 of 20

You are designing the messaging backbone for a multi-tenant SaaS platform hosted in Azure. Several front-end microservices will publish up to 15,000 events per second, each 40 KB or less. Downstream worker services must receive these events in near real-time with at-least-once delivery; strict ordering is not required. The solution must:

  • Decouple producers from consumers through a publish/subscribe model.
  • Automatically scale and require minimal operational management.
  • Offer built-in integration triggers for Azure Functions.

Which Azure service should you recommend?

  • Azure Service Bus Premium topic

  • Azure Event Hubs Standard namespace with Capture enabled

  • Azure Storage Queue triggered by Azure Functions

  • Azure Event Grid domain

Question 2 of 20

Contoso Ltd. employs 30 first-line support engineers who must be able to restart any virtual machine in the company's three Azure subscriptions during their 8-hour shift. Security policy requires that:

  • Engineers receive only the minimum permissions necessary.
  • Access must expire automatically at the end of each shift.
  • A shift lead must approve the access request before it is granted.

You need to recommend an authorization solution that meets the requirements while minimizing administrative effort. What should you recommend?

  • Add the engineers to the built-in Contributor role at each subscription scope and configure Azure AD Access Reviews to run once per month.

  • Use Azure AD PIM to make each engineer eligible for the built-in Virtual Machine Contributor role at the resource-group level with no approval workflow and a permanent assignment.

  • Create an Azure Automation runbook that restarts virtual machines and grant the engineers permission to invoke the runbook through an Azure DevOps pipeline.

  • Create a custom Azure RBAC role that includes only the Microsoft.Compute/virtualMachines/restart/action permission, onboard each subscription to Azure AD Privileged Identity Management, and assign the role as eligible directly to every engineer at the subscription scope. Configure PIM to require shift-lead approval and set the activation duration to eight hours.
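For reference, a least-privilege custom role of the kind the last option describes can be expressed as a role-definition document (a sketch only — the role name and subscription ID are placeholders, and the PIM approval workflow and eight-hour activation duration are configured separately in Privileged Identity Management):

```json
{
  "Name": "Virtual Machine Restart Operator",
  "IsCustom": true,
  "Description": "Can restart virtual machines, and nothing else.",
  "Actions": [
    "Microsoft.Compute/virtualMachines/restart/action"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>"
  ]
}
```

Because the role contains a single action, activating it grants no ability to start, stop, delete, or reconfigure machines — only to restart them.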

Question 3 of 20

You manage an e-commerce application that uses a single Azure SQL Database in the General Purpose tier (East US). A recent availability-zone outage made the database unavailable for 20 minutes. New requirements are:

  • Database must stay writable if any single zone in East US fails.
  • Recovery time after a zone failure must be under one minute.
  • Cross-region disaster recovery is not required.
  • The solution must not require any virtual-machine administration.

Which change best meets these requirements?

  • Upgrade the database to the Business Critical service tier and enable zone-redundant configuration.

  • Configure active geo-replication to a secondary database in Central US.

  • Migrate to SQL Server on Azure Virtual Machines configured in an availability set.

  • Enable auto-failover groups between two Azure SQL Databases in different regions.

Question 4 of 20

You manage the database platform for a retail organization that operates 24/7. The primary 5-TB SQL Server 2016 database runs on-premises in an Always On availability group. Management has decided to move this database to Azure SQL Managed Instance. The business will tolerate no more than 30 minutes of total downtime during cut-over. Which Azure-based migration approach should you recommend to satisfy the downtime requirement while ensuring the target service remains fully managed?

  • Enable Azure Site Recovery to replicate the on-premises SQL Server virtual machines to Azure and fail over to the replicas during the maintenance window.

  • Set up Azure SQL Data Sync between the on-premises database and Azure SQL Managed Instance, then cut over once synchronization is complete.

  • Use Azure Database Migration Service deployed in the Premium tier to perform an online migration from the on-premises SQL Server to Azure SQL Managed Instance.

  • Export a BACPAC from the primary replica and import it into Azure SQL Managed Instance during the maintenance window.

Question 5 of 20

Your company renders high-resolution video frames using a proprietary command-line application that must run on Windows Server with GPU acceleration. Each night about 20,000 independent rendering jobs are submitted, and the output is written to Azure Blob Storage. You need a fully managed, cost-efficient compute platform that can automatically provision and deallocate hundreds of GPU-enabled virtual machines, handle job queuing and scheduling, and require minimal custom orchestration code. Which Azure service should you recommend?

  • Azure Functions running on a Premium plan with event triggers

  • Azure Kubernetes Service (AKS) with the Kubernetes Event-Driven Autoscaler (KEDA)

  • Azure Batch with GPU-enabled pools and automatic pool scaling

  • Azure Container Instances (ACI) with Event Grid-based job orchestration

Question 6 of 20

You are designing a long-term archive for 140 TB of security-camera footage that must be stored in Azure for at least seven years. Compliance mandates that the footage remain immutable for the entire retention period. The organization must still be able to read the data if the primary Azure region becomes unavailable, with a cross-region recovery-point objective (RPO) of no more than 15 minutes and no need for Microsoft to initiate a failover. Durability must be at least 99.999999999 percent, and overall cost should be minimized.

Which Azure storage configuration should you recommend?

  • Store the footage in an Azure Storage account that has a time-based immutable blob policy and is configured for read-access geo-zone-redundant storage (RA-GZRS).

  • Create two separate locally redundant storage (LRS) accounts in different regions and use scheduled AzCopy jobs to replicate data between them.

  • Store the footage in an Azure Storage account that has a time-based immutable blob policy and is configured for zone-redundant storage (ZRS).

  • Store the footage in an Azure Storage account that has a time-based immutable blob policy and is configured for geo-redundant storage (GRS).

Question 7 of 20

Your organization manages six Azure subscriptions that are frequently reorganized. You must design a logging solution that meets the following requirements: collect all activity and resource diagnostic logs from every subscription in a single location, retain the data for at least 730 days even if a subscription or resource is deleted, and require the least ongoing administrative effort when new subscriptions are created. Which solution should you recommend?

  • Deploy an Event Hubs namespace in every subscription, stream all diagnostic data to an on-premises SIEM, and archive the data there for two years.

  • Create a Log Analytics workspace in a dedicated management subscription, use an Azure Policy initiative to configure subscription-level diagnostic settings that send all Activity Log and resource diagnostic logs to the workspace, and set the workspace retention to 730 days.

  • Configure each resource to send diagnostic logs to individual Application Insights instances and enable continuous export from each instance to an immutable storage account.

  • Enable export of each subscription's Activity Log to a storage account in the same subscription and configure lifecycle management to keep the data in the Cool tier for two years.

Question 8 of 20

Your organization hosts a mission-critical Azure SQL Database in the Business Critical service tier. Compliance rules state that you must be able to restore the database to any point in time during the last 7 days and also keep a restorable copy of each month-end full backup for 10 years. Management wants the simplest native Azure solution that minimizes ongoing administration and storage costs. Which approach should you recommend?

  • Enable Azure SQL Database long-term retention (LTR) for monthly backups stored in Azure Blob storage and rely on the service's automatic point-in-time restore capability for the recent 7-day window.

  • Protect the database with Azure Backup by creating a Recovery Services vault and configuring a SQL in Azure VM backup policy.

  • Create an active geo-replica in another Azure region and use geo-restore and backup retention on that replica to satisfy both objectives.

  • Schedule an Azure Automation runbook to export a BACPAC file of the database to Azure Storage at the end of each month and keep the files for 10 years.

Question 9 of 20

You are planning a migration of 80 TB of video files that reside on an on-premises NAS to Azure. The available internet link averages 50 Mbps, so management wants the bulk of the data moved by shipping hardware. After the initial upload, about 200 GB of new and modified files will be generated each month and must be replicated automatically to an Azure Blob Storage account with minimal operational effort. Which solution should you recommend?

  • Configure Azure File Sync with a cloud endpoint in Azure Blob Storage and perform the initial synchronization over the internet.

  • Migrate the data by using Azure Database Migration Service with an offline export followed by continuous sync tasks.

  • Order an Azure Data Box for the initial seed transfer, then deploy Azure Data Box Gateway to handle ongoing incremental replication to Azure Blob Storage.

  • Use AzCopy with checkpoint-restart capability to upload all data over the WAN and schedule monthly jobs for incremental copies.

Question 10 of 20

Your company has an on-premises SQL Server 2019 instance storing transactional data. The server sits in a secure network that blocks inbound internet traffic but allows outbound HTTPS (TCP 443). You must design an Azure solution that copies data from the database to Azure Data Lake Storage Gen2 every 15 minutes for analytics. The solution must be fully managed, provide scheduling and monitoring, and require no custom code. Which Azure Data Factory integration runtime should you recommend for the copy activity?

  • Self-hosted integration runtime installed on the on-premises server or a gateway machine

  • Azure-SSIS integration runtime running your existing SSIS packages

  • Azure integration runtime (AutoResolve) hosted in the target region

  • Managed Virtual Network (VNet) data flow runtime in Azure Data Factory

Question 11 of 20

A company is modernizing an on-premises microservices application and plans to deploy containers to Azure. Mandatory requirements are:

  • Each microservice must automatically scale from 0 to hundreds of instances on HTTP or queue events.
  • Platform management of the underlying infrastructure must be minimized; developers must not administer Kubernetes clusters.
  • Built-in support is needed for Dapr service invocation and pub/sub between microservices.

Which Azure service should you recommend to host the containers?

  • Azure Kubernetes Service (AKS) with the cluster autoscaler and KEDA add-on

  • Azure Container Instances orchestrated by Azure Logic Apps

  • Azure App Service for Containers running in the Premium v3 tier with autoscale rules

  • Azure Container Apps

Question 12 of 20

Your company hosts a mission-critical Azure SQL Database single database in the East US region. Compliance requires that you can restore the database to any point in time within the last 35 days. Business-continuity guidelines state that if the entire region becomes unavailable, the application must resume service in the paired region within minutes and with an RPO of less than five seconds, without manual intervention. You need to design the backup and disaster-recovery solution while keeping administrative effort low. Which Azure capability should you recommend?

  • Configure automatic geo-backup and perform geo-restore when needed.

  • Schedule nightly exports of the database to Azure Storage and re-import after a failure.

  • Enable zone-redundant configuration for the database.

  • Create an auto-failover group that replicates the database to a secondary server in the paired region.

Question 13 of 20

Contoso ingests billions of semi-structured IoT telemetry events each hour. You plan to store the data by using Azure Cosmos DB with the Core (SQL) API. The solution must:

  • guarantee 99.999 percent read and write availability even if an Azure region becomes unavailable,
  • provide automatic failover without code changes, and
  • keep write latency as low as possible for users worldwide.

Which design should you recommend?

  • Create a Cosmos DB account that has multi-region writes enabled, add the required Azure regions, and configure automatic failover with a prioritized region list.

  • Create a single-write-region Cosmos DB account, replicate it to additional regions, and use Azure Traffic Manager with performance routing to direct clients; trigger failover manually when needed.

  • Deploy Azure Table storage accounts in two Azure regions with geo-redundant storage replication (RA-GRS) and implement client-side retry logic for failover.

  • Provision Azure SQL Database in two regions, configure an active geo-replication pair, and enable automatic failover groups for high availability.

Question 14 of 20

You are designing a multiregion e-commerce platform that runs stateless microservices on Azure Kubernetes Service in both East US and West US. The shopping-cart component requires sub-millisecond reads and writes, key-level time-to-live, configurable eviction policies, and automatic failover if an entire region becomes unavailable. You must choose a fully managed Azure service that minimizes code changes and keeps cached data synchronized between the two regions. Which solution should you recommend?

  • Enable global caching in Azure Front Door Standard to store shopping-cart data at edge locations.

  • Rely on each App Service instance's in-memory cache and configure application affinity for session stickiness.

  • Deploy Azure Cache for Redis Enterprise with active geo-replication between East US and West US.

  • Use Azure SQL Database with active geo-replication and store shopping-cart data in a relational table.

Question 15 of 20

Your company manages several Azure subscriptions. You must collect platform logs from all Azure virtual machines and Azure Firewall resources. Requirements:

  • Retain every log record for 12 months in low-cost Azure storage.
  • Simultaneously stream the same logs in near real time to the on-premises Splunk SIEM.

Which solution should you implement to meet these requirements with the least operational overhead?

  • Create an Azure Monitor diagnostic setting on each relevant resource that forwards logs to an Azure Storage account configured for the Cool access tier and to an Azure Event Hub namespace integrated with Splunk.

  • Use the Azure Monitor Data Collector API to push logs to Splunk and configure a Log Analytics workspace with 365-day retention for archival.

  • Configure immutable blob storage for log archival and use an Azure Automation runbook to copy the logs to Splunk on a daily schedule.

  • Enable Activity Log forwarding to an Azure Service Bus queue and trigger an Azure Function to write each message to Splunk and to blob storage.
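For reference, a single Azure Monitor diagnostic setting can fan the same logs out to multiple destinations at once. A rough sketch of such a setting follows (all resource IDs, the event hub name, and the setting name are placeholders, and the available log categories vary by resource type):

```json
{
  "name": "archive-and-stream",
  "properties": {
    "storageAccountId": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<archive-account>",
    "eventHubAuthorizationRuleId": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.EventHub/namespaces/<namespace>/authorizationRules/RootManageSharedAccessKey",
    "eventHubName": "<siem-hub>",
    "logs": [
      {
        "categoryGroup": "allLogs",
        "enabled": true
      }
    ]
  }
}
```

One setting per resource covers both the long-term archive (the storage account) and the near-real-time stream (the event hub the SIEM reads from), with no intermediate jobs to maintain.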

Question 16 of 20

Contoso plans to move 40 on-premises Windows and Linux virtual machines that run on VMware vSphere to Azure. Before choosing target sizes, you must gather 30 days of CPU and memory utilization, estimate monthly Azure compute costs, and map TCP dependencies between the virtual machines to plan a phased migration. You want to achieve this with a single Microsoft-provided solution and minimal manual effort. Which action should you take first?

  • Deploy the Azure Migrate appliance and create a Discovery and assessment project with agentless dependency visualization

  • Install the Azure Monitor agent on every virtual machine and use Azure Monitor to collect metrics and dependency maps

  • Install the Azure Site Recovery mobility service to replicate the virtual machines and analyze the replication reports

  • Run the Microsoft Assessment and Planning (MAP) Toolkit to generate a server capacity and dependency report

Question 17 of 20

Your company hosts dozens of REST and legacy SOAP services in Azure Kubernetes Service (AKS), Azure App Service, and an on-premises datacenter reachable only over ExpressRoute. You must expose all services through a single, internet-facing endpoint that offers a branded developer portal, transforms SOAP to REST, validates Azure AD tokens, and keeps on-premises traffic on the private network. Which service or combination should you use?

  • Azure API Management with the self-hosted gateway deployed on-premises

  • Azure Service Bus with Hybrid Connections and Azure AD Conditional Access

  • Azure Front Door Premium combined with Azure Functions proxy

  • Azure Application Gateway with Web Application Firewall and URL rewrite rules

Question 18 of 20

You are designing a compute platform for a containerized background worker that processes orders placed in an Azure Service Bus queue. The job runs a custom Docker image that includes proprietary machine-learning libraries larger than the 250-MB limit for Azure Functions code packages. During most weekdays the queue is empty, but from Friday night through Sunday morning it can exceed 50,000 messages and must be drained within four hours. The operations team wants the solution to scale automatically down to zero instances when idle and to require the least possible infrastructure management effort.

Which Azure compute service should you recommend?

  • Azure Kubernetes Service with the cluster autoscaler enabled

  • Azure Container Instances launched by an Azure Logic App each time messages arrive

  • Azure Functions in a Premium plan running the container image

  • Azure Container Apps with KEDA-based autoscaling on the Service Bus queue
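For reference, a Container Apps scale configuration driven by a KEDA Service Bus scaler can be sketched roughly as the following ARM-style fragment (the queue name, replica limits, message-count threshold, and secret name are illustrative placeholders):

```json
{
  "scale": {
    "minReplicas": 0,
    "maxReplicas": 50,
    "rules": [
      {
        "name": "orders-queue",
        "custom": {
          "type": "azure-servicebus",
          "metadata": {
            "queueName": "orders",
            "messageCount": "100"
          },
          "auth": [
            {
              "secretRef": "servicebus-connection",
              "triggerParameter": "connection"
            }
          ]
        }
      }
    ]
  }
}
```

With `minReplicas` set to 0, the app scales to zero when the queue is empty and adds replicas as the weekend backlog grows, with no cluster to manage.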

Question 19 of 20

Contoso Ltd. has completed the Strategy and Plan phases of the Microsoft Cloud Adoption Framework (CAF) for its data-center migration and has approved the backlog. Before starting workload migrations, you must ensure the Azure environment can host hundreds of virtual machines across multiple business units while enforcing governance, identity, and network standards. Which action should you recommend to align with the CAF Ready methodology?

  • Perform an agentless discovery and assessment of the on-premises virtual machines by using Azure Migrate.

  • Configure Azure Site Recovery to replicate the virtual machines to Azure and run a test failover.

  • Create a multi-stage Azure DevOps pipeline that provisions the required Azure resources by using ARM templates during each migration wave.

  • Deploy an Azure landing zone that implements a scalable platform foundation with management groups, Azure Policy, identity, and networking controls.

Question 20 of 20

A financial analytics team must run Monte Carlo simulations nightly. The simulations are packaged as Linux containers, stateless, and require about 500 vCPUs for six hours. Administrators need automatic pool scaling, no cluster lifecycle management, and secure access to Azure Blob Storage by using a managed identity. Which Azure compute service best meets these requirements?

  • Azure Functions running on a Premium plan

  • Azure Container Instances (ACI) in a virtual network

  • Azure Kubernetes Service (AKS) with the Cluster Autoscaler

  • Azure Batch with a container pool