Microsoft Azure AI Engineer Associate Practice Test (AI-102)
Use the form below to configure your Microsoft Azure AI Engineer Associate Practice Test (AI-102). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft Azure AI Engineer Associate AI-102 Information
The Microsoft Certified: Azure AI Engineer Associate certification, earned by passing the AI-102: Designing and Implementing a Microsoft Azure AI Solution exam, is designed for people who build, deploy, and manage AI solutions using Microsoft Azure. According to Microsoft, the role of an Azure AI Engineer spans every phase of an AI solution: requirements definition, development, deployment, integration, maintenance, and tuning. To succeed, you should have experience with programming (for example, Python or C#), using REST APIs and SDKs, and working with Azure's AI services.
Domains on Azure AI Engineer Exam
The AI-102 exam tests several key areas: planning and managing an Azure AI solution (about 15-20% of the exam), implementing computer vision solutions (15-20%), natural language processing solutions (30-35%), knowledge mining/document intelligence (10-15%), generative AI solutions (10-15%), and content-moderation/decision-support solutions (10-15%). It is important to review each area and gain hands-on practice with Azure AI services such as Azure AI Vision, Azure AI Language, Azure AI Search, and Azure OpenAI.
Azure AI Engineer Practice Tests
One of the best ways to prepare for this exam is through practice tests. Practice tests let you experience sample questions that mimic the real exam style and format. They help you determine which topics you are strong in and which ones need more study. After taking a practice test you can review your incorrect answers and go back to the learning material or labs to fill those gaps. Many study guides recommend using practice exams multiple times as a key part of your preparation for AI-102.

Free Microsoft Azure AI Engineer Associate AI-102 Practice Test
- 20 Questions
- Unlimited time
- Plan and manage an Azure AI solution
- Implement generative AI solutions
- Implement an agentic solution
- Implement computer vision solutions
- Implement natural language processing solutions
- Implement knowledge mining and information extraction solutions
You call the Azure AI Vision "analyze" endpoint with features=objects to process a photo. The service returns a JSON payload similar to the following (simplified):
{ "objects": [ { "object": "person", "confidence": 0.92, ... } ], "tags": [ ... ], "metadata": { ... } }
Your application must raise an alert only when a person is detected with at least 0.8 probability. Which property in the response should your code evaluate?
categories.score
tags.confidence
objects[i].confidence
objects[i].score
Answer Description
The object detection results are returned in the "objects" collection. Each detected instance includes a "confidence" value ranging from 0-1 that indicates the model's probability for that specific object class. Reading the confidence of the object whose "object" field equals "person" lets you decide whether it meets the 0.8 threshold. The other listed properties represent scores for different features (category or tag detection) or do not exist, so they will not reflect the object-level probability you need.
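As a quick illustration, here is a minimal Python sketch of the threshold check, assuming the JSON body has already been parsed (the sample values below are abbreviated):

```python
# Minimal sketch: `analysis` stands in for the parsed JSON returned by the
# Analyze call with features=objects (abbreviated sample values).
analysis = {
    "objects": [{"object": "person", "confidence": 0.92}],
    "tags": [],
}

person_detected = any(
    obj.get("object") == "person" and obj.get("confidence", 0) >= 0.8
    for obj in analysis.get("objects", [])
)
if person_detected:
    print("Raise alert: person detected with confidence >= 0.8")
```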
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI-generated content may display inaccurate information; always double-check anything important.
What is the Azure AI Vision 'analyze' endpoint used for?
What does the 'confidence' property represent in the Azure AI Vision response?
Can the 'objects[i].confidence' property be used for custom alerts?
You are creating a project inside Azure AI Foundry and must give your team access to GPT-4 Turbo for prompt flow experiments. The solution should avoid distributing secret keys and rely on role-based access that Foundry can automatically enforce across the project's resources. What is the most appropriate way to provision the capability?
Use the Foundry model catalog to add a GPT-4 Turbo deployment to the project; Foundry will create a managed Azure OpenAI connection that is secured with Entra ID and governed by RBAC.
Create a separate Azure OpenAI resource in the portal and send the generated access key to the development team.
Deploy the GPT-4 Turbo container image to Azure Container Instances and point Foundry to the public endpoint.
Register the Microsoft.CognitiveServices provider and submit an access request email; the GPT-4 Turbo model will then appear automatically in the project without further action.
Answer Description
Inside a Foundry project you can add an Azure OpenAI model straight from the model catalog. When you select GPT-4 Turbo and deploy it, Foundry automatically creates or links an Azure OpenAI resource and exposes it to the project through a managed connection secured with Microsoft Entra ID. Because the connection is handled by Foundry, developers authenticate with their Azure AD identities and no secret keys have to be shared. Creating a stand-alone Azure OpenAI resource, running the model in a container, or relying only on provider registration would either require manual key distribution or would not surface the model inside the project automatically.
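For context, the sketch below shows what keyless (Entra ID) access to an Azure OpenAI deployment looks like from Python once role-based access is in place; the endpoint, API version, and deployment name are placeholders, and inside a Foundry project the managed connection handles this wiring for you.

```python
# Sketch of Entra ID (keyless) authentication against an Azure OpenAI deployment.
# Assumes the caller's identity holds a suitable role (e.g. Cognitive Services
# OpenAI User) on the connected resource; no secret keys are distributed.
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    azure_ad_token_provider=token_provider,
    api_version="2024-02-15-preview",  # placeholder API version
)
response = client.chat.completions.create(
    model="gpt-4-turbo",  # the deployment name exposed to the project
    messages=[{"role": "user", "content": "Hello from the prompt flow experiment"}],
)
print(response.choices[0].message.content)
```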
Ask Bash
What is the Foundry model catalog?
How does Microsoft Entra ID enforce security in Azure AI Foundry?
What is role-based access control (RBAC) in Azure AI Foundry?
You are creating a custom agent with Azure AI Foundry Agent Service to automate a multi-step workflow. The agent must (1) call an internal REST API that returns the latest product catalog and (2) later schedule a Microsoft Teams meeting. You want the LLM to decide independently when to invoke each capability during the conversation. Which configuration step enables this behavior in the agent definition?
Implement a Semantic Kernel planner skill that calls the endpoints and add the skill to the agent.
Embed the endpoint URLs and usage examples directly in the agent's system prompt.
Register each REST endpoint as a tool by supplying its OpenAPI definition and allow automatic tool invocation.
Create a knowledge retrieval task and attach the REST endpoints as external data sources.
Answer Description
Agents in Azure AI Foundry Agent Service can invoke external capabilities that are registered as tools (also called functions). By supplying an OpenAPI (or JSON schema) description for each REST endpoint and marking the invocation mode as automatic, the service allows the model to choose when a tool should be executed during the dialogue. Simply embedding endpoint details in the prompt (raw text) does not provide structured function metadata, a knowledge retrieval task targets indexed documents, and a Semantic Kernel planner skill would require explicit orchestration logic instead of autonomous tool selection.
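To make the idea of structured tool metadata concrete, here is an illustrative Python sketch of a function-style tool definition; the Agent Service itself accepts an OpenAPI definition rather than this exact shape, and the tool name and parameters below are made up for the example.

```python
# Illustrative only: the structured metadata (name, description, parameter schema)
# is what lets the model decide on its own when to call a capability. The actual
# registration in Azure AI Foundry Agent Service supplies an OpenAPI definition.
catalog_tool = {
    "type": "function",
    "function": {
        "name": "get_product_catalog",  # hypothetical internal REST capability
        "description": "Returns the latest product catalog from the internal API.",
        "parameters": {
            "type": "object",
            "properties": {
                "category": {"type": "string", "description": "Optional category filter."}
            },
            "required": [],
        },
    },
}
# With automatic tool invocation (tool_choice="auto" in function-calling terms),
# the model, not your orchestration code, chooses when to execute the tool.
```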
Ask Bash
What is the purpose of registering REST endpoints as tools in Azure AI Foundry?
What is an OpenAPI definition and why is it important for tool registration?
Why is marking the tool invocation mode as automatic necessary for autonomous decisions?
You are designing a Retrieval-Augmented Generation (RAG) solution in Azure AI Foundry. Your project will ground GPT-35-Turbo responses on 30 000 PDF pages stored in Azure Blob Storage. You create a data connection, enable automatic chunking, and plan to build a vector index so the prompt flow can retrieve the most relevant passages at runtime. Which Foundry resource must you create to store the embeddings so they can be queried by the prompt flow retrieval node?
A semantic memory collection in Azure Cosmos DB for NoSQL
An Azure Cache for Redis instance with the Search and Query modules enabled
A vector index backed by Azure AI Search
A feature store table in Azure Machine Learning
Answer Description
In Azure AI Foundry, grounding a model with your own data follows a RAG pattern. After connecting the raw documents, you build a vector index to persist the embeddings that are created from the automatically chunked text. The index is implemented on top of Azure AI Search and is the resource the retrieval node in a prompt flow queries at runtime. Other data stores such as Cosmos DB, Redis, or AML Feature Store are not used by Foundry for vector retrieval in a RAG solution.
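The retrieval node issues this query for you at runtime, but the sketch below shows roughly what a vector query against the Azure AI Search index looks like with the azure-search-documents SDK; the index name, vector field name, and embedding dimensions are assumptions.

```python
# Sketch of querying the vector index that backs the RAG retrieval step.
# Index name, field names, and the 1536-dimension placeholder vector are assumptions;
# real code would embed the user question with the same model used at indexing time.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery

query_embedding = [0.0] * 1536  # placeholder for the question's embedding vector

search_client = SearchClient(
    endpoint="https://<your-search>.search.windows.net",
    index_name="rag-pdf-index",
    credential=AzureKeyCredential("<query-key>"),
)
results = search_client.search(
    search_text=None,
    vector_queries=[
        VectorizedQuery(vector=query_embedding, k_nearest_neighbors=5, fields="contentVector")
    ],
    select=["content", "sourcepage"],
)
for doc in results:
    print(doc["content"][:200])
```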
Ask Bash
What is a vector index in the context of Azure AI Foundry?
Why is Azure AI Search used for storing the vector index in RAG solutions?
What is the role of automatic chunking when creating embeddings for the vector index?
Your Azure AI Foundry project uses the DALL-E 3 model through the Azure OpenAI images/generations endpoint. You want to generate the largest landscape-oriented image the model currently supports so that more detail can be added later in your application. Which value should you assign to the size property in the request body to accomplish this?
1792x1024
512x512
2048x1024
1024x1792
Answer Description
DALL-E 3 accepts three size values: 1024x1024 (square), 1024x1792 (portrait), and 1792x1024 (landscape). Among these, 1792x1024 is both landscape-oriented and the largest in total pixel count for that orientation. Therefore, setting size to 1792x1024 produces the largest landscape image. 1024x1792 is portrait, 512x512 is not supported by DALL-E 3, and 2048x1024 exceeds the model's current limits and will be rejected.
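A minimal request sketch, assuming the openai Python package and placeholder endpoint, key, API version, and deployment name:

```python
# Sketch of an image generation call that requests the largest landscape size.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<api-key>",                                        # placeholder
    api_version="2024-02-01",                                   # placeholder
)
result = client.images.generate(
    model="dall-e-3",                               # your DALL-E 3 deployment name
    prompt="A wide mountain landscape at sunrise",
    size="1792x1024",                               # landscape; largest supported by DALL-E 3
    n=1,
)
print(result.data[0].url)
```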
Ask Bash
What is DALL-E 3 and how does it generate images?
What happens if an unsupported size is used in the Azure OpenAI images/generations endpoint?
Why does 1792x1024 produce the largest landscape-oriented image supported by DALL-E 3?
An Azure AI Content Understanding pipeline must enrich PDF, PNG, MP4, and WAV files dropped into an Azure Blob Storage container, then push extracted text, speech, and video thumbnails into an Azure AI Search index. Before a single skillset can ingest all four formats, which prerequisite must you satisfy?
Install the Azure Cognitive Search optional Blob Storage media extension on the search service.
Enable the Azure Blob Storage change-feed on the container that receives the files.
Create and associate an Azure AI Video Indexer (ARM-based) account with a valid API key and location.
Deploy a custom Form Recognizer model in the same resource group as the AI Search service.
Answer Description
Azure AI Search indexers natively crack text-based formats such as PDF and can call built-in cognitive skills for images. However, audio and video enrichment relies on the Azure Video Indexer cognitive skill, which in turn requires that a Video Indexer resource be connected to your Azure subscription through an ARM-based account with an API key. Without creating and linking that account, the skill cannot extract spoken words or thumbnails, so MP4 and WAV files will fail. Enabling blob change feed, installing extensions, or deploying a Form Recognizer model does not meet this dependency.
Ask Bash
What is Azure Video Indexer and what role does it play in content enrichment pipelines?
How does Azure AI Search integrate with different file formats like PDF, PNG, MP4, and WAV?
Why is enabling Azure Blob Storage change-feed not sufficient for this pipeline requirement?
You are generating speech using the Azure AI Speech service. The spoken output must meet three requirements:
- Pronounce the text "SQL" as the word "sequel".
- Slow the speech rate for that word.
- Insert a 300-millisecond pause immediately after the word.
When authoring an SSML fragment to satisfy all three requirements, which set of SSML elements should you use?
<say-as interpret-as="spell-out">, <emphasis level="strong">, and <break strength="weak"/>
<sub alias="…">, <prosody rate="…">, and <break time="…"/>
<phoneme ph="…">, <prosody pitch="…">, and <say-as interpret-as="time">
<mstts:express-as style="…">, <audio src="…">, and <break strength="x-strong"/>
Answer Description
The <sub> element with the alias attribute replaces the written text with an alternate pronunciation, so <sub alias="sequel">SQL</sub> forces "SQL" to be spoken as "sequel". To slow the delivery of that word, wrap the substitution in a <prosody> element with a reduced rate value, and place a <break time="300ms"/> element immediately after it to create the 300-millisecond pause. The other combinations rely on elements such as <say-as>, <emphasis>, <phoneme>, or <mstts:express-as>, which do not cover all three requirements: pronunciation substitution, rate control, and a timed pause.
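A sketch of the fragment synthesized with the Python Speech SDK; the key, region, voice name, and surrounding sentence are placeholders, and the SSML shows one way to combine the three elements.

```python
# Sketch: <sub> substitutes the pronunciation, <prosody rate> slows it,
# and <break time="300ms"/> adds the pause. Key/region/voice are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<key>", region="<region>")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

ssml = """
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    Query the database with
    <prosody rate="-20%"><sub alias="sequel">SQL</sub></prosody><break time="300ms"/>
    before generating the report.
  </voice>
</speak>
"""
result = synthesizer.speak_ssml_async(ssml).get()
print(result.reason)
```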
Ask Bash
What does the <sub> element with the alias attribute do in SSML?
How does the <prosody> element work in SSML, and what does 'rate' adjust?
What is the purpose of the <break> element in SSML?
You are developing an Azure Function that calls the Azure AI Vision v3.2 Analyze Image REST API on user-uploaded photos. The function must meet the following requirements:
- Detect whether an image contains explicit or racy content.
- Identify well-known brand logos that appear in the image.
- Produce a short, human-readable caption that describes the image.
To minimize payload size, which set of visual features should you request in a single Analyze Image call?
Description, Objects, Color
Adult, Objects, Tags
Brands, Objects, Categories
Adult, Brands, Description
Answer Description
The Adult feature returns flags that indicate explicit or racy content. The Brands feature detects and identifies common brand logos such as Nike or Coca-Cola. The Description feature generates one or more concise captions of the image content. Requesting these three features satisfies all stated requirements without requesting unnecessary data, thereby reducing the response size. The other answer choices omit at least one required capability (adult content detection, brand recognition, or caption generation).
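A hedged sketch of the single call, assuming the image is passed by URL; the endpoint and key are placeholders.

```python
# Sketch of one Analyze Image v3.2 call requesting only the three needed features.
import requests

endpoint = "https://<your-vision-resource>.cognitiveservices.azure.com"  # placeholder
headers = {"Ocp-Apim-Subscription-Key": "<key>", "Content-Type": "application/json"}
params = {"visualFeatures": "Adult,Brands,Description"}
body = {"url": "https://example.com/uploaded-photo.jpg"}  # placeholder image URL

analysis = requests.post(
    f"{endpoint}/vision/v3.2/analyze", params=params, headers=headers, json=body
).json()

print(analysis["adult"]["isAdultContent"], analysis["adult"]["isRacyContent"])
print([brand["name"] for brand in analysis.get("brands", [])])
print(analysis["description"]["captions"][0]["text"])
```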
Ask Bash
What is the Adult feature in Azure AI Vision?
How does the Brands feature work in Azure AI Vision?
What does the Description feature provide in Azure AI Vision?
You have created a retrieval-augmented generation (RAG) prompt flow in Azure AI Studio. Before deploying the flow, you run an automatic evaluation run from the Evaluate pane and enable all built-in metrics. You want to determine whether the model is hallucinating or inventing facts that are not found in the documents returned by the retriever. Which metric should you focus on, and what does a consistently low score for that metric most likely indicate?
Groundedness - the answer contains content that is not supported by the retrieved passages.
Relevance - the answer does not correspond to the intent of the user question.
Fluency - the answer contains spelling or grammatical mistakes.
Semantic similarity - the answer's wording differs from a predefined reference answer.
Answer Description
The groundedness metric compares each factual statement in the model's answer with the passages that the retriever supplied. If the text cannot be traced back to the provided context, the groundedness score drops. Therefore, a low groundedness score signals that portions of the answer are unsupported by the retrieved documents, which is an indication of hallucination. Fluency concerns grammatical quality, relevance measures how well the answer addresses the user's question, and semantic similarity compares the answer to a reference answer; none of these directly measure whether the answer is backed by the source passages.
Ask Bash
What is groundedness in the context of RAG prompt flows?
How does groundedness differ from relevance and semantic similarity?
How can a consistently low groundedness score affect the deployment of a RAG model?
You use the gpt-3.5-turbo model through Azure OpenAI to generate promotional product descriptions. The model's output frequently has an inconsistent tone that is off-brand. You need to apply a prompt engineering technique to ensure the output consistently matches your company's specific style and voice. Which change should you make?
Reduce the temperature to 0.2 and lower the max_tokens value.
Increase the presence_penalty to 1.5 to encourage the model to use new words.
Configure a stop sequence consisting of two newline characters to force longer responses.
Add a system message containing style guide instructions and an example of the desired output.
Answer Description
Adding a system message that defines the model's persona, tone, and style, and includes an example of the desired output (a technique known as one-shot or few-shot learning), is the most effective way to control the brand voice. Lowering the temperature parameter makes the output more deterministic but does not teach it a specific style. A stop sequence is used to end generation at a specific point, not control tone. The presence_penalty parameter encourages the use of new words, which affects novelty, not the stylistic tone.
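A minimal sketch of this prompt shape, assuming a gpt-35-turbo deployment and placeholder endpoint, key, and style-guide text:

```python
# Sketch: a system message pins persona and tone, and one example exchange
# (one-shot) demonstrates the desired brand voice. Names and text are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<api-key>",
    api_version="2024-02-01",
)
messages = [
    {"role": "system", "content": (
        "You write promotional product descriptions for Contoso Outdoors. "
        "Tone: warm, concise, adventurous. Avoid superlatives and exclamation marks."
    )},
    # One-shot example of the desired output style
    {"role": "user", "content": "Describe the TrailLite 2 tent."},
    {"role": "assistant", "content": (
        "The TrailLite 2 keeps weekend hikes simple: two-person comfort, "
        "three-minute setup, and a packed weight you barely notice."
    )},
    {"role": "user", "content": "Describe the RidgeRunner 40L backpack."},
]
response = client.chat.completions.create(model="gpt-35-turbo", messages=messages)
print(response.choices[0].message.content)
```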
Ask Bash
What is a system message in Azure OpenAI?
How does lowering the temperature parameter affect AI responses in Azure OpenAI?
What is one-shot or few-shot learning in prompt engineering?
You are developing a Node.js single-page application that uses the Azure AI Speech SDK to read dynamic text back to users. To keep bandwidth low while still using a container that is natively supported by most modern browsers, the audio must be streamed in an Ogg container with Opus compression at 16 kHz, mono. Which SpeechSynthesisOutputFormat value should you set on the SpeechConfig object to meet the requirement?
Ogg16Khz16BitMonoOpus
Raw24Khz16BitMonoPcm
Webm24Khz16BitMonoOpus
Riff16Khz16BitMonoPcm
Answer Description
The Ogg16Khz16BitMonoOpus value instructs the Speech service to return 16-kHz, 16-bit, single-channel audio that is Opus-encoded and wrapped in an Ogg container. This combination provides good quality at a low bitrate and is broadly supported in web browsers. The RIFF and raw PCM formats do not use Opus compression, so they produce larger files. Webm24Khz16BitMonoOpus uses the WebM container and a higher 24-kHz sample rate, resulting in higher bandwidth than required.
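The question targets the JavaScript SDK, but the same enumeration exists in the Python Speech SDK; this sketch shows the equivalent configuration with placeholder key and region.

```python
# Sketch: request Ogg/Opus output at 16 kHz mono before synthesizing.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<key>", region="<region>")
speech_config.set_speech_synthesis_output_format(
    speechsdk.SpeechSynthesisOutputFormat.Ogg16Khz16BitMonoOpus
)
# audio_config=None keeps the compressed audio in memory instead of playing it.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None)
result = synthesizer.speak_text_async("Your order has shipped.").get()
print(len(result.audio_data), "bytes of Ogg/Opus audio")
```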
Ask Bash
What is SpeechSynthesisOutputFormat in Azure AI Speech SDK?
What is Opus compression and why is it used in speech synthesis?
Why is the Ogg container preferred for browser compatibility?
You need to translate several hundred .docx and .pptx files stored in an Azure Blob Storage container from French to German. You will call the Azure AI Translator Document Translation REST API from an Azure Function and must minimize data transferred between your function and the translation service. What should you include in the request payload to identify the documents to be translated?
Upload the documents to the Translator endpoint using multipart/form-data in the POST request.
Include a SAS URL for the source blob container and a SAS URL for an empty target container.
Open a persistent WebSocket connection and stream every document to the service.
Embed each document's binary content as a base64 string inside the JSON request body.
Answer Description
The Document Translation API is designed to work directly with Azure Blob Storage. A translation request supplies two shared access signature (SAS) URLs: one that points to the source container holding the original files and another that points to a target container where the translated copies will be written. Because the service pulls the documents from storage and pushes the results back without routing the files through your code, this approach keeps egress traffic low. Supplying raw document content, whether as base64, multipart/form-data, or a streamed WebSocket feed, would dramatically increase data movement and is not supported by the REST API for large-scale document translation.
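A sketch of the same pattern via the azure-ai-translation-document SDK (which wraps the REST API); both container URLs must be SAS URLs, and the endpoint, key, and URLs are placeholders.

```python
# Sketch: the service pulls the French files from the source container and writes
# the German translations to the target container; no document bytes pass through
# the Azure Function. All values below are placeholders.
from azure.ai.translation.document import DocumentTranslationClient
from azure.core.credentials import AzureKeyCredential

client = DocumentTranslationClient(
    endpoint="https://<your-translator>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<key>"),
)
poller = client.begin_translation(
    "<SAS URL of the source container holding the .docx/.pptx files>",  # source
    "<SAS URL of the empty target container>",                          # target
    "de",                                                               # target language
)
for doc in poller.result():
    print(doc.status, doc.translated_document_url)
```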
Ask Bash
What is a SAS URL in Azure Blob Storage?
How does the Azure Translator Document Translation API minimize data transfer?
Why can't document content be sent directly as base64 or multipart/form-data to the Azure Translator API?
You need to build a Python-based OCR pipeline that extracts text from multipage PDF files stored in Azure Blob Storage. You have provisioned an Azure AI Vision resource and will use the Read 3.2 REST API. Which implementation sequence correctly completes the pipeline?
Upload the PDF to an Azure AI Document Intelligence resource, call the prebuilt Receipt model's analyze endpoint, and read the readResults array in the synchronous response to obtain text.
Send a PUT request to /vision/v3.0/ocr with the PDF content, then await the synchronous response that contains a regions array with text lines.
Send a POST request to /vision/v3.2/read/analyze with the SAS URL of the PDF, read the Operation-Location header that is returned, repeatedly issue a GET request to the URL in Operation-Location until the status field equals "succeeded", and then parse the lines field in the JSON response to obtain the extracted text.
Stream every page of the PDF to /vision/v2.0/recognizeText with mode=Printed, then after a fixed 30-second delay issue a single GET call to /recognizeTextResults to retrieve text in boundingRegions.
Answer Description
The Read 3.2 API processes documents asynchronously. You start the analysis by calling the POST /read/analyze endpoint and supplying the PDF's SAS URL. The service immediately returns a 202 response that contains an Operation-Location header. Polling the URL in that header with GET requests returns status updates until the value becomes "succeeded". At that point the JSON payload includes a lines collection for each page that contains the extracted text and bounding boxes. Earlier Computer Vision OCR endpoints, synchronous receipt processing, and older recognizeText workflows do not meet the requirements for multipage PDFs processed with the Read 3.2 API.
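A minimal polling sketch of that sequence, with placeholder endpoint, key, and SAS URL:

```python
# Sketch of the asynchronous Read 3.2 workflow: submit, poll Operation-Location,
# then read the lines for each page. Endpoint, key, and SAS URL are placeholders.
import time
import requests

endpoint = "https://<your-vision-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<key>", "Content-Type": "application/json"}

# 1. Start the analysis with the PDF's SAS URL
submit = requests.post(
    f"{endpoint}/vision/v3.2/read/analyze",
    headers=headers,
    json={"url": "<SAS URL of the PDF in Blob Storage>"},
)
operation_url = submit.headers["Operation-Location"]

# 2. Poll until the operation completes
while True:
    result = requests.get(operation_url, headers=headers).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(1)

# 3. Extract the text lines page by page
if result["status"] == "succeeded":
    for page in result["analyzeResult"]["readResults"]:
        for line in page["lines"]:
            print(line["text"])
```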
Ask Bash
What is Azure Blob Storage?
What is a SAS URL and why is it used?
How does polling work in the Read API pipeline?
You send several text strings to the Azure AI Language detectLanguage REST endpoint (API version 2023-04-01). After the service returns a JSON response, you must ignore any document whose detected language confidence is below 0.7. Which response property path contains the confidence score you should evaluate?
documents[<index>].detectedLanguage.confidenceScore
documents[<index>].detectedLanguages.confidenceScore
documents[<index>].statistics.confidenceScore
documents[<index>].detectedLanguage.confidence
Answer Description
In the JSON that the detectLanguage operation returns, each document object includes a detectedLanguage object. That object has a confidenceScore field whose value ranges from 0 to 1 and indicates how certain the service is that the identified language is correct. Statistics are only included when you request them and do not contain a language-confidence field. The property name is confidenceScore, not confidence, and the API no longer returns a detectedLanguages array, so reading the first element of such an array would fail.
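A small sketch of the filtering logic, assuming the response body has already been parsed into a Python dictionary (sample values are illustrative):

```python
# Sketch: keep only documents whose detectedLanguage.confidenceScore is at least 0.7.
payload = {
    "documents": [
        {"id": "1", "detectedLanguage": {"name": "French", "iso6391Name": "fr", "confidenceScore": 0.95}},
        {"id": "2", "detectedLanguage": {"name": "Dutch", "iso6391Name": "nl", "confidenceScore": 0.55}},
    ]
}

accepted = [
    doc for doc in payload["documents"]
    if doc["detectedLanguage"]["confidenceScore"] >= 0.7
]
print([doc["id"] for doc in accepted])  # ['1']
```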
Ask Bash
What is the purpose of the confidenceScore in Azure's detectLanguage response?
Why doesn't the detectLanguage API response contain a detectedLanguages array anymore?
When should statistics be requested in the detectLanguage API response?
You are integrating Azure AI Document Intelligence by calling the RecognizeReceipts REST API (v2.1). The solution must later render an on-screen overlay that highlights every detected word in the source JPEG image. By default, the service response only contains high-level field values. Which request modification ensures the API also returns line- and word-level OCR results with bounding boxes?
Append the query parameter "includeTextDetails=true" to the analyze request URL.
Set the request header "Content-Type" to "image/jpeg" instead of "multipart/form-data".
Specify model-version=2023-07-31-preview in the request header.
Use the prebuilt Layout model instead of the prebuilt Receipt model.
Answer Description
The RecognizeReceipts (and other prebuilt) operations expose the optional query parameter includeTextDetails. Setting includeTextDetails=true instructs the service to include the full OCR hierarchy (pages, lines, and words) and their bounding polygons in the analysis result. Changing the model version or Content-Type header does not alter the granularity of returned content. Switching to the Layout model would provide words and lines but would no longer return the receipt-specific key-value pairs required by the application.
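A hedged sketch of the v2.1 request with the parameter applied; the endpoint, key, and image URL are placeholders.

```python
# Sketch: analyze a receipt with includeTextDetails=true so the (asynchronous)
# result also carries readResults with pages, lines, words, and bounding boxes.
import requests

endpoint = "https://<your-doc-intelligence>.cognitiveservices.azure.com"  # placeholder
resp = requests.post(
    f"{endpoint}/formrecognizer/v2.1/prebuilt/receipt/analyze",
    params={"includeTextDetails": "true"},
    headers={"Ocp-Apim-Subscription-Key": "<key>", "Content-Type": "application/json"},
    json={"source": "https://example.com/receipt.jpg"},  # placeholder image URL
)
# The response is 202 Accepted; poll the Operation-Location URL for the results.
print(resp.headers["Operation-Location"])
```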
Ask Bash
What does the 'includeTextDetails=true' parameter do in Azure AI Document Intelligence?
What are bounding boxes or polygons in OCR results?
Why is the prebuilt Receipt model better for receipt analysis than the Layout model?
You build a solution that calls the Computer Vision Analyze Image REST API version 3.2. For each uploaded photo you must return the following in as few requests as possible:
- a list of objects located in the image
- image tags
- all printed text that appears in the image
What should you do?
Send one Analyze Image request with visualFeatures=Description, which returns tags, objects, and text.
Send a single Analyze Image request with visualFeatures set to Tags,Objects,Read.
Send an Analyze Image request with visualFeatures=Objects,Tags, then make a separate Read API request for the text.
Send one request with visualFeatures=Objects,Tags and include features=Read in the same request body.
Answer Description
The Analyze Image 3.2 endpoint supports multiple visual features in a single request, but it does not support optical character recognition. Printed or handwritten text must be extracted by a separate call to the Read API. Therefore you must first call Analyze Image with visualFeatures set to Objects and Tags, and then send a second call to the Read endpoint for the text. The other options are incorrect because:
- Read cannot be specified in the visualFeatures parameter.
- There is no separate features parameter that mixes Read with visualFeatures for version 3.2.
- The Description feature returns a caption and high-level tags only; it does not return object locations or extracted text.
Ask Bash
What is the purpose of the Computer Vision Analyze Image API version 3.2?
What is the difference between the Analyze Image API and the Read API in Azure Cognitive Services?
Why can’t text extraction be handled by setting visualFeatures to 'Read' in the Analyze Image API?
You are designing a multi-team generative AI environment in Azure AI Foundry. All teams must be able to pull the same set of approved foundation models, but each team needs isolated experiment tracking, its own prompt flows, and separate spending limits. According to the Foundry resource hierarchy, which action should you perform first to meet the requirements?
Deploy an Azure OpenAI model deployment in a shared resource group.
Set up a dedicated Azure AI workspace with per-team cost management tags.
Create an Azure AI Foundry hub in the subscription.
Provision a separate Azure AI Foundry project for each team.
Answer Description
In Azure AI Foundry the top-level construct is the hub. A hub is created at the subscription level and defines the catalog of foundation models, centrally managed policy settings, and shared infrastructure budgets for everything that is built beneath it. Individual teams then create one or more projects inside the hub; projects inherit the approved models but have their own prompt flows, experiment artifacts, and cost tracking. Therefore, creating a hub is the necessary first step before the separate team projects can be provisioned. Creating projects, resource groups, or OpenAI deployments can only occur after a hub exists, and none of those actions alone provides the shared model catalog required across teams.
Ask Bash
What is an Azure AI Foundry hub?
How do Azure AI Foundry projects work?
What role do foundation models play in Azure AI Foundry?
You plan to train a custom document intelligence model by selecting the "Unlabeled" option in Azure AI Document Intelligence Studio. All training files will be stored in a single Azure Blob Storage container named training-data. To make sure the training operation succeeds, which requirement must this container satisfy before you start the training job?
The container must include a model_data.json manifest file that lists the relative path of every training document.
Every file name in the container must correspond to a row in a pre-defined schema table.
Each document must reside in a separate subfolder together with a JSON label file generated by Document Intelligence Studio.
The container must contain at least five documents that have the same layout.
Answer Description
When you choose the Unlabeled (formerly "template") training option, Azure AI Document Intelligence only needs sample documents that share a common layout. No label files or folder hierarchy is required, but the service enforces a minimum document count so that it can reliably infer the template. The container must therefore hold at least five sample documents of the same form type. Placing each document in its own folder, adding label JSON files, or including a model_data.json manifest are requirements for labeled training jobs, not for unlabeled ones. Filenames do not have to match any schema table.
Ask Bash
Why does the unlabeled training option require at least five documents?
What is the difference between labeled and unlabeled training?
How does Azure AI Document Intelligence handle files during unlabeled training?
You are designing an Azure-based solution that processes incoming customer support emails. The application must automatically detect the email's language, determine sentiment, extract named entities, generate an extractive summary, and route the message by using a custom multi-label text classification model. You prefer to accomplish all these natural language processing tasks with a single Azure service that exposes REST APIs and SDKs. Which Azure service should you choose?
Azure AI Language
Azure AI Document Intelligence
Azure OpenAI Service
Azure AI Speech
Answer Description
Azure AI Language (formerly the Cognitive Service for Language) offers built-in capabilities for language detection, sentiment analysis, entity recognition, and extractive summarization. It also allows you to train and deploy custom multi-label text classification models, so all of the required tasks can be performed by one service. Azure AI Document Intelligence focuses on extracting structured data from documents, not general NLP. Azure AI Speech targets speech-to-text and related audio scenarios. Azure OpenAI Service provides powerful generative models but is not optimized for turnkey sentiment analysis, language detection, or custom text classification workflows that the Language service supplies out of the box.
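A sketch of a few of these tasks with the azure-ai-textanalytics client; the endpoint and key are placeholders, and the custom multi-label classification step additionally requires a deployed custom model (not shown).

```python
# Sketch: language detection, sentiment analysis, and entity recognition on one email
# with a single Azure AI Language resource. Endpoint and key are placeholders.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<key>"),
)
emails = ["Bonjour, mon imprimante XP-400 ne démarre plus depuis la mise à jour."]

language = client.detect_language(emails)[0].primary_language
sentiment = client.analyze_sentiment(emails)[0].sentiment
entities = [entity.text for entity in client.recognize_entities(emails)[0].entities]

print(language.iso6391_name, sentiment, entities)
```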
Ask Bash
What is Azure AI Language and how does it handle all those tasks?
What is the difference between Azure AI Language and Azure OpenAI Service?
What are REST APIs and SDKs, and how do they support Azure AI Language?
You plan to fine-tune the gpt-35-turbo model in Azure OpenAI so a customer-support bot always answers in your organization's preferred style and vocabulary. You have prepared 5 000 example conversations that demonstrate the desired tone, and low-latency response time is more important than flexible retrieval. Before you can submit a fine-tuning job from Azure AI Studio, which format must the training data satisfy?
A UTF-8 encoded .jsonl file where every line is a messages array of role/content pairs and the total tokens per line do not exceed 16 385.
Two plain-text files-one containing prompts and another containing matching completions-uploaded together as a dataset.
A single compressed .json file that Azure AI Studio automatically splits into batches during upload.
An Excel workbook with separate sheets for prompts and completions imported through Azure Data Factory.
Answer Description
Azure OpenAI fine-tuning jobs require a UTF-8-encoded JSON Lines (.jsonl) file. Each line contains one training example expressed as a messages array of role/content pairs (for example system, user, assistant). Every example must fit within the model's maximum context window, which for gpt-35-turbo is 16 385 input tokens (plus up to 4 096 output tokens). Other formats, such as a zipped JSON file, an Excel workbook, or separate prompt/completion text files, are rejected by Azure AI Studio.
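For illustration, a sketch that writes one training example per line in the expected format; the conversation content is made up.

```python
# Sketch: each line of the UTF-8 .jsonl file is a standalone JSON object whose
# "messages" array holds role/content pairs for one example conversation.
import json

example = {
    "messages": [
        {"role": "system", "content": "You are a friendly Contoso support agent."},
        {"role": "user", "content": "My order hasn't arrived yet."},
        {"role": "assistant", "content": "I'm sorry about the delay. Let me check the shipment status for you right away."},
    ]
}

with open("training.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(example, ensure_ascii=False) + "\n")
    # ...repeat for each of the 5 000 example conversations, keeping every line
    # within the model's 16 385-token input limit.
```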
Ask Bash
What is a .jsonl file?
What does 'role/content pairs' mean in the training data?
What are tokens and why is a limit important?