Microsoft Azure AI Engineer Associate Practice Test (AI-102)

Microsoft Azure AI Engineer Associate AI-102 Information

The Microsoft Certified: Azure AI Engineer Associate certification, earned by passing the AI-102: Designing and Implementing a Microsoft Azure AI Solution exam, is designed for people who build, deploy, and manage AI solutions on Microsoft Azure. According to Microsoft, the role of an Azure AI Engineer spans all phases of an AI solution: requirements definition, development, deployment, integration, maintenance, and tuning. To succeed, you should have experience with programming (for example, Python or C#), with REST APIs and SDKs, and with Azure's AI services.

Domains on Azure AI Engineer Exam

The AI-102 exam tests several key areas: planning and managing an Azure AI solution (about 15-20% of the exam), implementing computer vision solutions (15-20%), natural language processing solutions (30-35%), knowledge mining and document intelligence solutions (10-15%), generative AI solutions (10-15%), and content moderation and decision support solutions (10-15%). It is important to review each area and gain hands-on practice with Azure AI services such as Azure AI Vision, Azure AI Language, Azure AI Search, and Azure OpenAI.

Azure AI Engineer Practice Tests

One of the best ways to prepare for this exam is with practice tests. Practice tests give you sample questions that mimic the style and format of the real exam, and they help you determine which topics you are strong in and which need more study. After taking a practice test, you can review your incorrect answers and go back to the learning material or labs to fill those gaps. Many study guides recommend taking practice exams multiple times as a key part of your AI-102 preparation.

Free Microsoft Azure AI Engineer Associate AI-102 Practice Test: 20 questions, unlimited time. The test covers the following exam objectives:

  • Plan and manage an Azure AI solution
  • Implement generative AI solutions
  • Implement an agentic solution
  • Implement computer vision solutions
  • Implement natural language processing solutions
  • Implement knowledge mining and information extraction solutions

Free Preview

This test is a free preview; no account is required.

Question 1 of 20

Your Azure DevOps YAML release pipeline must promote a new version of an Azure AI Language custom question-answering model from test to production by running az cognitiveservices account deployment create. Policies state that (1) no shared keys may be stored in the repo or pipeline and (2) credentials must rotate without editing the YAML. Which authentication method meets these requirements?

  • Create an Azure DevOps service connection that uses a service principal, and assign that principal the Cognitive Services Contributor role on the target Azure AI resource so the az CLI obtains an Azure AD access token at run time.

  • Store the subscription owner's username and password in a variable group and run az login in the pipeline with those credentials.

  • Save the Azure AI resource's primary key as a secret variable in the pipeline and pass it to the CLI task with the --key argument.

  • Generate a user-delegation SAS token for the Azure AI endpoint, store the token in a secure files library item, and reference it from the pipeline.

Question 2 of 20

You are designing an application that must scan millions of corporate PDF and image files, extract key entities by applying pre-built and custom AI skills, and let employees run fast full-text and faceted searches over the enriched content. Which Azure AI service should you select as the core of this knowledge mining solution?

  • Azure AI Search (Azure Cognitive Search)

  • Azure AI Document Intelligence (Form Recognizer)

  • Azure OpenAI Service

  • Azure Databricks with Delta Lake

Question 3 of 20

You have trained an object-detection project by using Custom Vision in Azure AI Vision. The model will be used on production-line cameras inside a factory network that has no reliable Internet connectivity. Inference must execute locally with less than 100 ms latency and without sending images to the cloud. Which deployment approach should you use?

  • Export the model and run it in the Azure AI Vision Docker container on an Azure Stack Edge or IoT Edge device within the factory.

  • Import the model into an Azure OpenAI resource and call it with chat completions from the production-line controllers.

  • Convert the model to ONNX and deploy it to a managed online endpoint in Azure Machine Learning.

  • Publish the model to the Custom Vision cloud prediction endpoint and invoke it from the factory over HTTPS.

Question 4 of 20

You are setting up a new generative-AI project in Azure AI Foundry. Before you write any code, you want a single manifest that lists the project's required Azure resources (workspace, storage account, Key Vault, Application Insights, and so on) so that you can run one Azure Developer CLI command to deploy the entire environment into a resource group. Which file must exist in the project root?

  • flow.dag.yaml

  • azure.yaml

  • foundry.yaml

  • environment.yml

Question 5 of 20

Your team is building a chat application that uses Azure OpenAI. Corporate policy requires that any incoming prompt with Hate or Sexual content whose severity score is 2 (Low) or higher be blocked before it can be forwarded to the model, and that jailbreak (prompt-injection) attacks be detected. Which action should you place at the very beginning of the request pipeline to meet this requirement?

  • Depend solely on the built-in Azure OpenAI completion content filter that runs after the model generates a response.

  • Apply an llm-content-safety policy (or call the Content Safety text:analyze API) with shieldPrompt=true and category thresholds Hate=2 and Sexual=2, and reject the request if any rule is triggered.

  • Store the conversation in a database and run periodic batch reviews with Azure AI Content Safety after the session ends.

  • Prepend a strict system message instructing the model to refuse disallowed topics and run the chat at temperature 0.
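
As background for the kind of pre-screening described in this question, the following is a minimal, illustrative Python sketch that checks a prompt with Azure AI Content Safety before it is forwarded to a model. It uses the azure-ai-contentsafety client library; the endpoint and key environment variable names, the blocked categories, and the severity threshold are assumptions for illustration.

# Illustrative sketch: screen a prompt with Azure AI Content Safety before it reaches the model.
# The endpoint/key environment variable names and thresholds below are assumptions.
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

def prompt_is_allowed(prompt: str, blocked=("Hate", "Sexual"), threshold=2) -> bool:
    result = client.analyze_text(AnalyzeTextOptions(text=prompt))
    # Each entry reports a category name and a severity score (0, 2, 4, or 6 by default).
    for item in result.categories_analysis:
        if item.category in blocked and (item.severity or 0) >= threshold:
            return False
    return True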

Question 6 of 20

You administer a multiservice Azure AI resource named contoso-ai. Developers report occasional 5xx responses when calling the resource's REST endpoint, but the Azure portal Metrics chart shows no obvious pattern. You must collect per-request data (operation name, response status, subscription key) for the last 24 hours and query it in your existing Log Analytics workspace. What should you do first?

  • Create a diagnostic setting on contoso-ai that streams the RequestLogs category to the Log Analytics workspace.

  • Enable Application Insights HTTP dependency tracking in each calling application.

  • Configure a Network Watcher connection monitor test that targets the contoso-ai endpoint.

  • Enable export of the Azure activity log for the contoso-ai resource group to the Log Analytics workspace.

Question 7 of 20

You have an Azure AI Vision resource named vision1. An Azure App Service uses a system-assigned managed identity. You must configure authentication so that the App Service calls the Vision REST API by using an Azure AD access token instead of an API key. Which Azure RBAC role should you assign to the managed identity at the scope of vision1?

  • Reader

  • Managed Identity Operator

  • Cognitive Services Contributor

  • Cognitive Services User
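
For reference, the sketch below shows one way an application with a Microsoft Entra credential (for example, a managed identity picked up by DefaultAzureCredential) can call the Vision REST API with a bearer token instead of an API key. The endpoint, image URL, and API version are placeholder assumptions; token-based calls also require the resource to use a custom subdomain endpoint.

# Illustrative sketch: call Azure AI Vision with a Microsoft Entra access token instead of a key.
# The endpoint and image URL are placeholders; the resource must use a custom subdomain.
import requests
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()  # resolves to the managed identity when running in Azure
token = credential.get_token("https://cognitiveservices.azure.com/.default")

endpoint = "https://vision1.cognitiveservices.azure.com"
response = requests.post(
    f"{endpoint}/computervision/imageanalysis:analyze",
    params={"api-version": "2023-10-01", "features": "read"},
    headers={"Authorization": f"Bearer {token.token}"},
    json={"url": "https://example.com/sample-label.png"},
)
response.raise_for_status()
print(response.json())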

Question 8 of 20

Your company is building an internal tool to extract invoice fields from scanned PDFs. You created an Azure AI Document Intelligence resource. In a new .NET 6 console application, you need to call the prebuilt invoice model by using the official client library. Which NuGet package should you add to the project?

  • Microsoft.Azure.CognitiveServices.FormRecognizer

  • Azure.AI.TextAnalytics

  • Azure.AI.DocumentIntelligence

  • Azure.AI.FormRecognizer

Question 9 of 20

You are designing an Azure-based solution that processes incoming customer support emails. The application must automatically detect the email's language, determine sentiment, extract named entities, generate an extractive summary, and route the message by using a custom multi-label text classification model. You prefer to accomplish all these natural language processing tasks with a single Azure service that exposes REST APIs and SDKs. Which Azure service should you choose?

  • Azure OpenAI Service

  • Azure AI Speech

  • Azure AI Language

  • Azure AI Document Intelligence

Question 10 of 20

You are planning an Azure OpenAI-based virtual assistant. Corporate policy states that the assistant must never produce violent or sexual content, even if a user explicitly requests it, but it should continue to answer ordinary troubleshooting questions. According to Azure Responsible AI guidance, which design decision best meets this requirement?

  • Route all traffic through Azure Front Door with a custom WAF rule that searches for banned keywords related to violence or sexuality.

  • Run Text Analytics for Profanity on every user prompt before sending it to the model.

  • Enable the Azure OpenAI content safety policy and set the Violence and Sexual categories to the block severity threshold for the deployment endpoint.

  • Expose the model only through a private endpoint so that external users cannot call it directly.

Question 11 of 20

You are building a streaming web app for an international conference. The application must display near-real-time captions in each viewer's preferred language while the session is being broadcast, and you want to keep integration simple by sending the audio stream to a single Azure WebSocket endpoint that provides both speech-to-text and automatic translation. Which Azure AI Foundry service should you recommend?

  • Azure AI Speech

  • Azure Live Video Analytics

  • Azure AI Language

  • Azure AI Translator service
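
For context, combined speech recognition and translation over a single connection is what the Speech SDK's translation recognizer exposes. The sketch below is illustrative only; the key and region variable names and the target languages are assumptions, and a broadcast application would feed an audio stream rather than the default microphone.

# Illustrative sketch: transcribe and translate speech in one session with the Speech SDK
# (azure-cognitiveservices-speech). The key/region variable names are assumptions.
import os
import azure.cognitiveservices.speech as speechsdk

translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription=os.environ["SPEECH_KEY"],
    region=os.environ["SPEECH_REGION"],
)
translation_config.speech_recognition_language = "en-US"
translation_config.add_target_language("fr")  # French captions
translation_config.add_target_language("de")  # German captions

# Uses the default microphone; a broadcast app would supply an audio stream instead.
recognizer = speechsdk.translation.TranslationRecognizer(translation_config=translation_config)
result = recognizer.recognize_once_async().get()
if result.reason == speechsdk.ResultReason.TranslatedSpeech:
    print(result.text)                # recognized source-language text
    print(result.translations["fr"])  # French translation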

Question 12 of 20

Your company needs to build a summarization endpoint that processes individual documents up to 100,000 tokens. The stakeholder insists on minimizing inference latency and cost while maintaining accuracy. You decide to use Azure OpenAI models. Which base model should you select to best meet these requirements?

  • text-embedding-ada-002

  • GPT-3.5-Turbo with a 16k context window

  • GPT-4o (128k context, optimized for speed and cost)

  • GPT-4-Turbo with a 128k context window

Question 13 of 20

A company wants to automatically parse thousands of scanned PDF vendor invoices that arrive each month. The solution must extract key fields (such as invoice number, due date, tax amounts, and individual line items) and return the results as structured JSON through a REST API, with only minimal custom training on a handful of sample documents. Which Azure AI service should you use to meet these requirements?

  • Azure AI Language Question Answering

  • Azure AI Vision Image Analysis

  • Azure AI Document Intelligence (Form Recognizer)

  • Azure AI Speech to Text
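
As a point of reference for invoice extraction scenarios like this one, a prebuilt invoice model can be called with a few lines of Python. The sketch below uses the azure-ai-formrecognizer client library (the newer azure-ai-documentintelligence package follows a similar pattern); the endpoint, key, and document URL are placeholder assumptions.

# Illustrative sketch: extract invoice fields with the prebuilt invoice model.
# The endpoint, key, and document URL are placeholders.
import os

from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient(
    endpoint=os.environ["DOCINTEL_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["DOCINTEL_KEY"]),
)

poller = client.begin_analyze_document_from_url(
    "prebuilt-invoice", "https://example.com/sample-invoice.pdf"
)
result = poller.result()

for invoice in result.documents:
    for name in ("VendorName", "InvoiceId", "DueDate", "InvoiceTotal"):
        field = invoice.fields.get(name)
        if field is not None:
            print(name, field.content)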

Question 14 of 20

You need to create an Azure AI Foundry hub that will be used by multiple generative-AI projects in your organization. The hub must meet the following requirements:

  • Be provisioned by using infrastructure as code so it can be deployed to multiple subscriptions.
  • Automatically create the minimum set of dependent Azure resources that any project inside the hub will need.

Which deployment approach should you use first to be sure that both requirements are satisfied?

  • Create the hub manually in the Azure portal and then export a template for reuse in other subscriptions.

  • Use an Azure PowerShell script that calls New-AzResourceGroupDeployment with inline parameters to provision only the hub.

  • Run an Azure CLI command that creates an Azure AI Foundry project and hub in a single step.

  • Deploy the hub by using an Azure Resource Manager (ARM) or Bicep template that defines the Azure AI Foundry hub resource and its dependencies.

Question 15 of 20

Your company must process 150,000 high-resolution TIFF images of scanned invoices every night. The goal is to extract all printed and handwritten text in the original language so that another service can later interpret the data. You do not need to identify key-value pairs, tables, or form structure. Which Azure service or feature should you select to meet the requirement with the least configuration effort?

  • Train a custom object-detection model in Azure Custom Vision to locate and extract the text.

  • Use the Read OCR operation provided by Azure AI Vision (Computer Vision service).

  • Use the prebuilt Layout model in Azure AI Document Intelligence to read the images.

  • Upload the images to Azure Video Indexer and retrieve the transcription from the insights JSON.

Question 16 of 20

You are designing an autonomous drone inspection solution that must run entirely at a remote construction site without reliable internet connectivity. The drone needs to generate concise natural-language summaries of detected safety risks on an embedded single-GPU device. Within Azure AI Foundry you must choose a model whose weights you can download and run locally while keeping GPU memory requirements low. Which model should you deploy?

  • DALL-E 3

  • GPT-4

  • Phi-2 language model

  • GPT-3.5-Turbo

Question 17 of 20

A company receives thousands of scanned invoices daily. They must automatically extract vendor name, invoice number, line items, and total amount as structured JSON so the data can be routed to their ERP system. The solution must work with varying layouts and require minimal training. Which Azure AI service should you use to meet these requirements?

  • Azure AI Speech Service - Speech to Text

  • Azure AI Document Intelligence (Form Recognizer)

  • Azure Cognitive Search

  • Azure AI Language Service - Conversational Language Understanding

Question 18 of 20

You are integrating an Azure OpenAI GPT-4 deployment into an Azure AI Foundry project by using the official OpenAI Python SDK. The following code is executed in a prompt flow but returns HTTP 404 "The deployment was not found":

import os
import openai

openai.api_key = os.getenv("AZURE_OPENAI_KEY")
openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}]
)

The Azure OpenAI resource contains a deployment named gpt4-prod, and the key and endpoint values are correct. Which change will allow the call to succeed when the code runs in Foundry?

  • Import the azure.ai.generative.openai package instead of openai and keep model="gpt-4" without any other changes.

  • Replace the model parameter with engine="gpt4-prod" and set the environment variable OPENAI_API_VERSION to "latest".

  • Set the environment variable OPENAI_API_TYPE to "azure" and replace the model parameter with deployment_id="gpt4-prod" in the ChatCompletion call.

  • Append "?api-version=2024-02-15-preview" to the endpoint URL while leaving the model parameter unchanged.

Question 19 of 20

You need to build a new SaaS feature that automatically drafts personalized long-form marketing emails from structured customer data. The feature must use a hosted GPT-4 model, allow prompt engineering and deployment controls, and remain inside the Azure compliance boundary. Which Azure AI Foundry service should you choose?

  • Azure AI Custom Vision

  • Azure OpenAI Service

  • Azure AI Speech Service

  • Azure AI Form Recognizer

Question 20 of 20

You manage an Azure AI Foundry Service resource that hosts a production model. Several data scientists need to design Azure Monitor workbooks and explore platform metrics for the resource, but they must not be able to view or regenerate the resource's API keys or change any configuration. Which built-in Azure role should you assign to each data scientist at the resource scope?

  • Cognitive Services Contributor at the resource-group scope

  • Cognitive Services User at the Azure AI Foundry Service scope

  • Monitoring Reader at the Azure AI Foundry Service scope

  • Cognitive Services Data Reader at the Azure AI Foundry Service scope