
Microsoft Azure AI Engineer Associate Practice Test (AI-102)

Use the form below to configure your Microsoft Azure AI Engineer Associate Practice Test (AI-102). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.


Microsoft Azure AI Engineer Associate AI-102 Information

The Microsoft Certified: Azure AI Engineer Associate certification, earned by passing the AI‑102: Designing and Implementing a Microsoft Azure AI Solution exam, is designed for people who build, deploy, and manage AI solutions using Microsoft Azure. According to Microsoft, the role of an Azure AI Engineer involves working across all phases of a solution: requirements definition, development, deployment, integration, maintenance, and tuning. To succeed you should have experience with a programming language (for example Python or C#), with REST APIs and SDKs, and with Azure's AI services.

Domains on Azure AI Engineer Exam

The AI-102 exam tests several key areas: planning and managing an Azure AI solution (about 15-20% of the exam), implementing computer vision solutions (15-20%), natural language processing solutions (30-35%), knowledge mining/document intelligence (10-15%), generative AI solutions (10-15%), and content-moderation/decision-support solutions (10-15%). It is important to review each area and gain hands-on practice with Azure AI services such as Azure AI Vision, Azure AI Language, Azure AI Search, and Azure OpenAI.

Azure AI Engineer Practice Tests

One of the best ways to prepare for this exam is through practice tests. Practice tests let you experience sample questions that mimic the real exam style and format. They help you determine which topics you are strong in and which ones need more study. After taking a practice test you can review your incorrect answers and go back to the learning material or labs to fill those gaps. Many study guides recommend using practice exams multiple times as a key part of your preparation for AI-102.

  • Plan and manage an Azure AI solution
  • Implement generative AI solutions
  • Implement an agentic solution
  • Implement computer vision solutions
  • Implement natural language processing solutions
  • Implement knowledge mining and information extraction solutions
Question 1 of 20

You are configuring an Azure AI Search indexer that imports data from an Azure SQL Database table containing a LastModifiedDate column of type datetime2. After the first full crawl, the indexer should process only those rows that were inserted or updated since the previous run, so that each incremental execution finishes quickly and minimizes database load. Which change detection setting should you configure to meet this requirement?

  • Enable a SoftDeleteColumnDeletionDetectionPolicy on the LastModifiedDate column.

  • Set the indexer schedule interval to a short period, such as every five minutes.

  • Set the parameters.batchSize value to 1,000 documents per batch.

  • Configure a HighWaterMarkChangeDetectionPolicy that points to the LastModifiedDate column.
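For reference, a high-water-mark change detection policy is declared in the JSON of the search data source that the indexer reads from. A minimal sketch of that definition (the data source, table, and connection-string names here are illustrative placeholders):

```python
import json

# Hypothetical Azure SQL data source definition; names are illustrative.
data_source = {
    "name": "sql-datasource",
    "type": "azuresql",
    "credentials": {"connectionString": "<connection-string>"},
    "container": {"name": "Invoices"},
    # Incremental crawls: only rows whose LastModifiedDate is greater than
    # the water mark recorded after the previous run are re-processed.
    "dataChangeDetectionPolicy": {
        "@odata.type": "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",
        "highWaterMarkColumnName": "LastModifiedDate",
    },
}

body = json.dumps(data_source, indent=2)
```

The service records the highest LastModifiedDate value it has seen and uses it as the starting point for the next run, which is what keeps incremental executions fast.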

Question 2 of 20

You are creating a custom question answering project in Azure AI Language. The FAQ must respond in either English or Spanish, depending on the language of the user's query. The knowledge base content for each language is already available. According to Azure AI Language design recommendations, which approach should you implement to build a maintainable multi-language solution?

  • Create an English project and a Spanish project, then have the client detect the query language and call the corresponding project endpoint.

  • Create a single project, import both English and Spanish knowledge sources, and rely on the service to auto-detect language at query time.

  • Create one English project and configure a custom Translator model so Spanish questions are translated to English before they are sent to the project.

  • Create one project in English, enable cross-language semantic search, and return the same answers for Spanish queries.

Question 3 of 20

Your project in Azure AI Foundry must summarize support tickets containing up to 14,000 tokens and use Azure OpenAI function calling to write the summary to Cosmos DB. You are preparing the Deploy Model step in Foundry Studio. Which model should you select and deploy to satisfy these requirements while keeping cost as low as possible?

  • Deploy the gpt-35-turbo-16k chat completion model.

  • Deploy the text-embedding-ada-002 embeddings model.

  • Deploy the text-davinci-003 completions model.

  • Deploy the gpt-4-32k chat completion model.

Question 4 of 20

You are designing an autonomous drone inspection solution that must run entirely at a remote construction site without reliable internet connectivity. The drone needs to generate concise natural-language summaries of detected safety risks on an embedded single-GPU device. Within Azure AI Foundry you must choose a model whose weights you can download and run locally while keeping GPU memory requirements low. Which model should you deploy?

  • GPT-3.5-Turbo

  • GPT-4

  • DALL-E 3

  • Phi-2 language model

Question 5 of 20

You develop an Azure Function that calls the Azure AI Text Analytics sentiment analysis endpoint (v3.1) to process English product reviews. In addition to the overall document sentiment, you must retrieve the sentiment and confidence scores for each opinion about a product aspect (for example, battery or screen) that appears in the review. Which setting should you add to the request so that the service returns this extra granular information?

  • Add the query parameter opinionMining=true.

  • Specify stringIndexType=UnicodeCodePoint in the request body.

  • Add the query parameter showStats=true.

  • Add the query parameter includeOpinionMining=true.
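As a sketch of the v3.1 request shape, aspect-level (opinion mining) results are requested with a query parameter on the sentiment route. The resource name and key below are illustrative placeholders:

```python
import json
from urllib.parse import urlencode

endpoint = "https://<resource-name>.cognitiveservices.azure.com"  # illustrative
params = {
    "opinionMining": "true",  # also return per-aspect sentiment and scores
}
url = f"{endpoint}/text/analytics/v3.1/sentiment?{urlencode(params)}"

body = json.dumps({
    "documents": [
        {"id": "1", "language": "en",
         "text": "The battery life is great but the screen is dim."}
    ]
})
headers = {
    "Ocp-Apim-Subscription-Key": "<key>",  # placeholder, never hard-code real keys
    "Content-Type": "application/json",
}
```

With opinion mining enabled, each sentence in the response carries targets (such as "battery") and assessments (such as "great") with their own sentiment labels and confidence scores.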

Question 6 of 20

Your company processes about 10 million Text Analytics transactions every month by using the S0 tier of Azure AI Language in a single production subscription. Management wants to lower the monthly bill without reducing throughput or changing the existing solution's architecture. Which action should you take to achieve the greatest cost savings for this predictable monthly workload?

  • Move the Azure AI Language resource to a region with lower pricing and disable public network access.

  • Switch the resource from the S0 tier to the F0 tier.

  • Purchase a monthly commitment tier for the Azure AI Language service.

  • Create an Azure budget on the resource group that contains the Azure AI Language resource.

Question 7 of 20

You are developing an Azure Function written in Python that must send chat completion requests to a GPT-4 deployment named "chat-gpt4" in the Azure OpenAI resource "lit-openai". The security team prohibits storing service access keys in your source code, configuration files, or environment variables. You decide to use the system-assigned managed identity of the function for authentication. Which approach should you implement to meet the requirements?

  • Use the openai Python package and set openai.api_key to the managed identity's client ID while pointing openai.api_base to the Azure endpoint.

  • Add the managed identity to the Reader role on the resource group and create OpenAIClient with AzureKeyCredential initialized to an empty string.

  • Create the Azure OpenAI client with DefaultAzureCredential as its token credential, and assign the function's managed identity the Cognitive Services OpenAI User role on "lit-openai".

  • Retrieve the Azure OpenAI resource key from Azure Key Vault at startup and expose it as the OPENAI_API_KEY environment variable.
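For context on key-free authentication: inside Azure, a managed identity obtains a Microsoft Entra access token from the instance metadata service (IMDS) and sends it as a bearer token, so no api-key header exists anywhere. A stdlib sketch of the two requests involved (the chat api-version is an assumption; in practice the azure-identity and openai SDKs wrap this exchange):

```python
from urllib.parse import urlencode

# 1) Token request against IMDS; this endpoint is only reachable from
#    inside an Azure resource that has a managed identity.
imds = "http://169.254.169.254/metadata/identity/oauth2/token"
token_url = imds + "?" + urlencode({
    "api-version": "2018-02-01",
    "resource": "https://cognitiveservices.azure.com/",
})
token_headers = {"Metadata": "true"}

# 2) Chat completions call carrying the bearer token; note that no key is
#    stored in code, configuration, or environment variables.
access_token = "<token-from-imds-response>"  # placeholder
chat_url = (
    "https://lit-openai.openai.azure.com/openai/deployments/chat-gpt4"
    "/chat/completions?api-version=2024-02-01"  # api-version is an assumption
)
chat_headers = {
    "Authorization": f"Bearer {access_token}",
    "Content-Type": "application/json",
}
```

The role assignment (Cognitive Services OpenAI User) is what authorizes the token bearer to call the deployment.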

Question 8 of 20

You need to create a new agent by using the Azure CLI command az ai agent create against the Azure AI Foundry Agent Service. The agent must be immediately ready to process user messages after the command finishes. Which single property must you supply in the JSON specification so that the service can route prompts to the correct large language model?

  • model

  • instructions

  • managedIdentity

  • memoryStore

Question 9 of 20

An accounts payable team has thousands of vendor invoices in PDF and JPEG formats. They need to automatically identify and extract the invoice number, vendor name, due date, and total amount as JSON so that they can push the data into an ERP system. They prefer a managed Azure service that requires minimal custom model training. Which Azure service should you recommend?

  • Azure AI Language service - Named Entity Recognition

  • Azure AI Document Intelligence (formerly Form Recognizer) pre-built Invoice model

  • Azure AI Vision Read OCR API

  • Azure Cognitive Search indexing pipeline with built-in OCR skill

Question 10 of 20

You are developing an Azure Function that calls the Azure AI Vision (Computer Vision) v3.2 Analyze Image REST endpoint. For every uploaded photo, the function must return 1) bounding-box coordinates for each detected object and 2) a list of descriptive tags for the whole image, all in a single request. Which value should you provide for the visualFeatures query parameter to meet the requirement?

  • objects

  • categories,tags

  • objects,tags

  • description,objects
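For reference, the v3.2 Analyze Image endpoint takes visualFeatures as a single comma-separated query parameter, so one request can return several feature sets at once. The resource name and image URL below are illustrative:

```python
from urllib.parse import urlencode

endpoint = "https://<resource-name>.cognitiveservices.azure.com"  # illustrative
# Objects -> bounding boxes per detected object; Tags -> whole-image tag list.
params = {"visualFeatures": "Objects,Tags"}
url = f"{endpoint}/vision/v3.2/analyze?{urlencode(params)}"

body = {"url": "https://example.com/photo.jpg"}  # image to analyze (illustrative)
```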

Question 11 of 20

You have trained an object-detection project by using Custom Vision in Azure AI Vision. The model will be used on production-line cameras inside a factory network that has no reliable Internet connectivity. Inference must execute locally with less than 100 ms latency and without sending images to the cloud. Which deployment approach should you use?

  • Import the model into an Azure OpenAI resource and call it with chat completions from the production-line controllers.

  • Convert the model to ONNX and deploy it to a managed online endpoint in Azure Machine Learning.

  • Export the model and run it in the Azure AI Vision Docker container on an Azure Stack Edge or IoT Edge device within the factory.

  • Publish the model to the Custom Vision cloud prediction endpoint and invoke it from the factory over HTTPS.

Question 12 of 20

You deployed an Azure OpenAI resource that serves multiple internal applications. The security team needs a weekly report that shows how many user prompts or completions were blocked by the built-in content filters and which harmful content category was responsible for each block. You will build the report by querying Azure Monitor workbooks. In the Azure portal, which configuration step should you perform first to ensure the necessary data is collected?

  • Enable the Content Safety option in the Networking blade of the Azure OpenAI resource.

  • Assign an Azure Policy that audits requests containing blocked content for the resource group.

  • Create a diagnostic setting for the Azure OpenAI resource that sends the ContentFiltering log category to a Log Analytics workspace.

  • Configure a diagnostic setting in Azure Key Vault and link the Azure OpenAI resource to that setting.

Question 13 of 20

Your team is building a voice-activated kiosk that must remain offline until the customer says "Hey Woodgrove". You trained the wake word in Speech Studio and downloaded the generated Woodgrove.table file. Using the Azure Speech SDK for C#, which implementation meets the requirement to detect the wake word locally without sending audio to the cloud?

  • Instantiate a KeywordRecognizer with an AudioConfig and supply a KeywordRecognitionModel created from Woodgrove.table.

  • Instantiate an IntentRecognizer, assign your Conversational Language Understanding project ID, and set the KeywordModelName property to Woodgrove.table.

  • Instantiate a SpeechRecognizer with AutoDetectSourceLanguage and load Woodgrove.table as a grammar file.

  • Instantiate a DialogServiceConnector in listen mode and set the Keyword to "Hey Woodgrove".

Question 14 of 20

You have deployed a custom Question Answering project to a language resource named lang-prod in the East US region. You want to query the knowledge base from a C# console app by using the Azure.AI.Language.QuestionAnswering client SDK. Which configuration values must you pass when constructing the QuestionAnsweringClient instance so that the application can successfully reach your published deployment?

  • The authoring key for the Language Studio workspace and the resource's region (East US)

  • The project name and the deployment name of the Question Answering model

  • The resource endpoint URI and an AzureKeyCredential built from the language resource key

  • The Azure subscription ID and the resource group name that contains lang-prod
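For context, the client is constructed from the resource endpoint and key, while the project and deployment names are supplied per query. A stdlib sketch of the REST shape the SDK wraps (the project name, deployment name, and api-version here are assumptions for illustration):

```python
import json
from urllib.parse import urlencode

endpoint = "https://lang-prod.cognitiveservices.azure.com"
params = {
    "projectName": "faq-project",    # illustrative project name
    "deploymentName": "production",  # illustrative deployment name
    "api-version": "2021-10-01",     # assumption; check the service docs
}
url = f"{endpoint}/language/:query-knowledgebases?{urlencode(params)}"
headers = {
    "Ocp-Apim-Subscription-Key": "<language-resource-key>",  # placeholder
    "Content-Type": "application/json",
}
body = json.dumps({"question": "How do I reset my password?", "top": 1})
```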

Question 15 of 20

You need to choose a model in Azure AI Foundry to generate 1024×1024 photorealistic images for marketing. The model must embed C2PA provenance metadata in each image and rely on Microsoft's built-in content-safety filters. Which catalog model meets all the requirements?

  • DALL-E 3 (Azure OpenAI)

  • Stable Diffusion XL 1.0

  • Llama 2-70B-Chat

  • GPT-4 Turbo 128k

Question 16 of 20

You published an Azure AI Foundry project that contains a prompt flow named "summarize_docs" and successfully validated it in the Foundry portal. You now want to invoke the flow from a Python application by using the Azure AI Foundry SDK with the following code snippet:

import os

from azure.ai.foundry import FoundryClient
from azure.identity import DefaultAzureCredential

client = FoundryClient(
    endpoint=os.environ["FOUNDRY_ENDPOINT"],
    credential=DefaultAzureCredential(),
)
flow = client.get_prompt_flow("summarize_docs")
result = flow.invoke({"input_text": content})

When the invoke call runs, it raises an HTTP 404 error that states that no online deployment exists for the flow.

In the Foundry portal, which action must you take before the code will work without modification?

  • Deploy the "summarize_docs" flow to an online endpoint.

  • Export the project as an Azure Resource Manager (ARM) template and redeploy it.

  • Promote the flow from the dev stage to the staging stage.

  • Regenerate the Foundry workspace primary access key.

Question 17 of 20

You need to build a content-understanding pipeline that ingests thousands of JPG images, PDF documents, and MP4 videos from a single Azure Blob Storage container. For every file the solution must extract spoken or written text, identify key topics, and store the results in one Azure AI Search index so users can search across all modalities. Which combination of Azure AI services minimizes custom code?

  • Azure AI Video Indexer only, configured to output insights to JSON

  • Azure AI Search indexer with an OCR + Key Phrase skillset for JPG/PDF files, plus Azure AI Video Indexer for MP4 files

  • Azure AI Document Intelligence prebuilt Read model for JPG and PDF files, plus Azure AI Speech to Text for MP4 files

  • Azure AI Vision Image Analysis for images and PDFs together with Azure AI Language service for topic detection

Question 18 of 20

You deploy a Python web API that queries an Azure OpenAI gpt-35-turbo deployment. You must capture the following for every completion request so that your operations team can investigate latency spikes and analyze prompt quality in Application Insights:

  • total processing time on the Azure OpenAI side
  • number of prompt and completion tokens used
  • full prompt text and model response (truncated to 8 kB)

You plan to use OpenTelemetry-based distributed tracing, which is already configured to export spans to the same Application Insights instance that the rest of the web API uses.
Which single action should you perform in the Python project to ensure the required data is captured for each call to Azure OpenAI?

  • Add the azure-core-tracing-opentelemetry package and import its automatic patching helper at application startup.

  • Enable diagnostic logging on the Azure OpenAI resource and configure a Log Analytics workspace.

  • Install the opencensus-ext-azure package and configure the Azure exporter.

  • Add correlation ID headers manually to every request and write a custom middleware that records timings and headers.

Question 19 of 20

You trained a custom speech-to-text model in Speech Studio for en-US and deployed it, receiving a deployment (endpoint) ID. A C# desktop app creates a SpeechConfig instance by calling SpeechConfig.FromSubscription(key, region) and then instantiates a SpeechRecognizer for real-time recognition. To make sure the app uses your custom model instead of the base model, which SDK property should you set before creating the SpeechRecognizer?

  • Append the deployment ID to the subscription key that SpeechConfig.FromSubscription uses.

  • Assign the deployment ID to the SpeechConfig.EndpointId property.

  • Attach a Recognized event handler that sets a DeploymentId property on each recognition result.

  • Set SpeechConfig.SpeechRecognitionLanguage to the deployment ID value.

Question 20 of 20

Your company is containerizing a line-of-business application in Azure Kubernetes Service (AKS). The application must call an Azure AI Foundry Service endpoint at runtime, but security standards forbid embedding or storing access keys. All calls must be authenticated through Azure AD and credentials must rotate automatically. What should you configure to satisfy these requirements?

  • Generate an API key for the AI Foundry resource, store the key in Azure Key Vault, and inject it into the pods as an environment variable.

  • Enable the AI Foundry service firewall and allow traffic only from the AKS virtual network.

  • Create a shared access signature (SAS) token for the AI Foundry endpoint and distribute it to the application at deployment time.

  • Configure Azure AD workload identity by assigning a user-assigned managed identity to the AKS cluster and granting it the Cognitive Services User role on the Azure AI Foundry resource.