Microsoft Azure AI Engineer Associate Practice Test (AI-102)
Use the form below to configure your Microsoft Azure AI Engineer Associate Practice Test (AI-102). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft Azure AI Engineer Associate AI-102 Information
The Microsoft Certified: Azure AI Engineer Associate certification, earned by passing the AI‑102: Designing and Implementing a Microsoft Azure AI Solution exam, is designed for people who build, deploy, and manage AI solutions using Microsoft Azure. According to Microsoft, the role of an Azure AI Engineer spans all phases of a solution: requirements definition, development, deployment, integration, maintenance, and tuning. To succeed you should have experience with programming (for example, Python or C#), using REST APIs and SDKs, and working with Azure's AI services.
Domains on Azure AI Engineer Exam
The AI-102 exam tests several key areas: planning and managing an Azure AI solution (about 15-20% of the exam), implementing computer vision solutions (15-20%), natural language processing solutions (30-35%), knowledge mining and document intelligence (10-15%), generative AI solutions (10-15%), and content moderation and decision-support solutions (10-15%). It is important to review each area and gain hands-on practice with Azure AI services such as Azure AI Vision, Azure AI Language, Azure AI Search, and Azure OpenAI.
Azure AI Engineer Practice Tests
One of the best ways to prepare for this exam is through practice tests. Practice tests let you experience sample questions that mimic the real exam style and format. They help you determine which topics you are strong in and which ones need more study. After taking a practice test you can review your incorrect answers and go back to the learning material or labs to fill those gaps. Many study guides recommend using practice exams multiple times as a key part of your preparation for AI-102.

Free Microsoft Azure AI Engineer Associate AI-102 Practice Test
- 20 Questions
- Unlimited
- Plan and manage an Azure AI solution
- Implement generative AI solutions
- Implement an agentic solution
- Implement computer vision solutions
- Implement natural language processing solutions
- Implement knowledge mining and information extraction solutions
You need to translate several hundred .docx and .pptx files stored in an Azure Blob Storage container from French to German. You will call the Azure AI Translator Document Translation REST API from an Azure Function and must minimize data transferred between your function and the translation service. What should you include in the request payload to identify the documents to be translated?
Upload the documents to the Translator endpoint using multipart/form-data in the POST request.
Embed each document's binary content as a base64 string inside the JSON request body.
Include a SAS URL for the source blob container and a SAS URL for an empty target container.
Open a persistent WebSocket connection and stream every document to the service.
Answer Description
The Document Translation API is designed to work directly with Azure Blob Storage. A translation request supplies two shared access signature (SAS) URLs: one that points to the source container holding the original files and another that points to a target container where the translated copies will be written. Because the service pulls the documents from storage and pushes the results back without routing the files through your code, this approach keeps egress traffic low. Supplying raw document content (whether as base64, multipart/form-data, or a streamed WebSocket feed) would dramatically increase data movement and is not supported by the REST API for large-scale document translation.
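The request body can be sketched as follows. This is a hedged example, not the authoritative schema: the account name and SAS tokens are placeholders, and you should check the Document Translation reference for the exact shape in your API version.

```python
import json

# Sketch of a Document Translation batch request body. The SAS URLs below are
# placeholders. The service pulls documents from the source container and
# writes translations to the target container, so no file content passes
# through your Azure Function.
batch_request = {
    "inputs": [
        {
            "source": {
                "sourceUrl": "https://<account>.blob.core.windows.net/source?<sas>",
                "language": "fr",
            },
            "targets": [
                {
                    "targetUrl": "https://<account>.blob.core.windows.net/target?<sas>",
                    "language": "de",
                }
            ],
        }
    ]
}

body = json.dumps(batch_request)  # payload sent to the batch translation endpoint
```

Note that the payload carries only URLs and language codes, which is why data transfer from the function stays minimal.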
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI-generated content may display inaccurate information; always double-check anything important.
What is a SAS URL in Azure Blob Storage?
How does the Azure Translator Document Translation API minimize data transfer?
Why can't document content be sent directly as base64 or multipart/form-data to the Azure Translator API?
You call the Azure AI Vision "analyze" endpoint with features=objects to process a photo. The service returns a JSON payload similar to the following (simplified):
{ "objects": [ { "object": "person", "confidence": 0.92, ... } ], "tags": [...], "metadata": {...} }
Your application must raise an alert only when a person is detected with at least 0.8 probability. Which property in the response should your code evaluate?
objects[i].score
objects[i].confidence
categories.score
tags.confidence
Answer Description
The object detection results are returned in the "objects" collection. Each detected instance includes a "confidence" value ranging from 0 to 1 that indicates the model's probability for that specific object class. Reading the confidence of the object whose "object" field equals "person" lets you decide whether it meets the 0.8 threshold. The other listed properties represent scores for different features (category or tag detection) or do not exist, so they will not reflect the object-level probability you need.
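The threshold check can be sketched in a few lines; the `response` dict below is a stand-in for the parsed JSON payload.

```python
# Minimal sketch: scan the "objects" array for a person detected at
# >= 0.8 confidence and raise the alert flag only in that case.
response = {
    "objects": [
        {"object": "person", "confidence": 0.92},
        {"object": "dog", "confidence": 0.55},
    ]
}

THRESHOLD = 0.8
alert = any(
    o["object"] == "person" and o["confidence"] >= THRESHOLD
    for o in response["objects"]
)
```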
Ask Bash
What is the Azure AI Vision 'analyze' endpoint used for?
What does the 'confidence' property represent in the Azure AI Vision response?
Can the 'objects[i].confidence' property be used for custom alerts?
You administer a multiservice Azure AI resource named contoso-ai. Developers report occasional 5xx responses when calling the resource's REST endpoint, but the Azure portal Metrics chart shows no obvious pattern. You must collect per-request data (operation name, response status, subscription key) for the last 24 hours and query it in your existing Log Analytics workspace. What should you do first?
Create a diagnostic setting on contoso-ai that streams the RequestLogs category to the Log Analytics workspace.
Configure a Network Watcher connection monitor test that targets the contoso-ai endpoint.
Enable export of the Azure activity log for the contoso-ai resource group to the Log Analytics workspace.
Enable Application Insights HTTP dependency tracking in each calling application.
Answer Description
The detailed, per-request information you require is emitted by the Azure AI Services platform as RequestLogs. By creating a diagnostic setting on the contoso-ai resource and selecting the RequestLogs (or AllLogs) category, you can stream those logs directly to an Azure Monitor Log Analytics workspace. Once the diagnostic setting is in place, every call, including its operation name, response code, and the key used, is recorded and becomes immediately queryable with Kusto queries. Exporting the subscription's activity log would only provide control-plane events, not request-level data. Connection Monitor tests and Application Insights dependency tracking would require additional configuration on every client and do not automatically capture the platform-level fields requested.
Ask Bash
What are RequestLogs and what data do they contain?
How do you create a diagnostic setting in Azure?
What is a Log Analytics workspace and how is it used?
You are configuring a prompt flow in Azure AI Foundry that calls the Azure OpenAI gpt-4o model to generate an ARM template snippet. The responses sometimes include explanatory sentences before the code block, which breaks downstream parsing. Which prompt change is most likely to make the model return only the template code and nothing else?
Raise the temperature parameter to 1.2 to make the model focus on code generation.
Modify the user message to read "Explain each step and then provide the ARM template."
Insert a system message such as "You are an assistant that returns only the requested ARM template JSON inside one markdown code block, with no extra text."
Send the prompt as a function call message with a name like "return_template" without changing the system instructions.
Answer Description
Adding a system message that explicitly instructs the assistant to output only the desired artifact narrows the response format. System messages establish the assistant's behavior across the entire conversation, so telling the model to return just the code in a single markdown block greatly reduces narration. Increasing temperature encourages more varied text, not less. A function-calling role could constrain the schema but requires additional parameter definitions and isn't necessary here, while asking for explanations invites even more non-code output.
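The message layout can be sketched as below. This is a hypothetical request body; lowering temperature is an extra, optional nudge toward deterministic output, not part of the required fix.

```python
# Sketch of a chat request body that pins the output format via the system
# message. The system role applies to the whole conversation, so every turn
# inherits the "code block only" constraint.
messages = [
    {
        "role": "system",
        "content": (
            "You are an assistant that returns only the requested ARM template "
            "JSON inside one markdown code block, with no extra text."
        ),
    },
    {"role": "user", "content": "Generate an ARM template for a storage account."},
]
request_body = {"messages": messages, "temperature": 0}
```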
Ask Bash
What is an ARM template in Azure?
What does the 'temperature parameter' do in Azure OpenAI Service?
What is the role of a system message in prompt engineering?
You create an Azure Video Indexer account and generate an API key. You are writing a script that calls the Get Account Access Token endpoint before uploading a video. Aside from specifying the location and account identifier in the URL, which HTTP header must the script include for the request to succeed?
Accept: application/json;odata=nometadata
x-ms-client-request-id
Content-Type: application/json
Ocp-Apim-Subscription-Key
Answer Description
Azure Video Indexer APIs require the caller to present the account's subscription key when requesting an access token. The key is supplied in the "Ocp-Apim-Subscription-Key" HTTP header. Other headers such as "Content-Type", "Accept", and "x-ms-client-request-id" are optional or used for different purposes and are not required for the access-token call.
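A standard-library sketch of the token request is below. The account ID and key are placeholders, and the URL follows the classic Video Indexer API shape; no request is actually sent here.

```python
from urllib.request import Request

# Placeholders: substitute your real location, account ID, and API key.
location = "trial"
account_id = "00000000-0000-0000-0000-000000000000"
url = (
    f"https://api.videoindexer.ai/Auth/{location}/Accounts/"
    f"{account_id}/AccessToken"
)

# The subscription key travels in the Ocp-Apim-Subscription-Key header.
req = Request(url, headers={"Ocp-Apim-Subscription-Key": "<your-api-key>"})
```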
Ask Bash
What is the purpose of the 'Ocp-Apim-Subscription-Key' in Azure APIs?
Why is the 'Get Account Access Token' endpoint important in Azure Video Indexer?
What is the difference between 'Ocp-Apim-Subscription-Key' and 'x-ms-client-request-id'?
Your team plans to create a custom question answering project in Language Studio that will ingest PDF documents stored on-premises. You have already provisioned an Azure AI Language resource in the East US region. Before you can create the project, which additional Azure resource must exist in the same subscription and region?
An Azure Cognitive Search service
An Azure Container Registry instance
An Azure Storage account with hierarchical namespace enabled
An Azure Bot Service resource
Answer Description
Custom question answering stores and searches the content of your knowledge sources by using Azure Cognitive Search. Language Studio therefore requires an Azure Cognitive Search service (any pricing tier) to be linked to the Azure AI Language resource before the project-creation wizard can continue. The other resources (an Azure Storage account, Container Registry, or Bot Service) are not mandatory prerequisites for creating the project, although they might be used later for storage, container deployment, or bot integration.
Ask Bash
What is Azure Cognitive Search?
Why does Azure Cognitive Search need to be in the same region as the Azure AI Language resource?
How does Azure Cognitive Search handle PDF ingestion?
You are calling the Azure AI Vision Analyze Image (v3.2) REST API from a .NET application. The requirement is to (1) block images that contain adult or racy content and (2) return a list of searchable tags for approved images. The request must avoid any unnecessary processing such as text extraction, face detection, or object localization. Which combination of VisualFeatures values should you include in the request?
Categories and Description
Adult, Tags, and Objects
Adult and Tags
Tags, Faces, and Read
Answer Description
The Adult feature runs the adult and racy content classifier, while the Tags feature returns a lightweight set of tags describing objects and concepts in the image. Together they satisfy the moderation and tagging requirements. Adding Objects, Read, Faces, or Description would trigger extra processing steps that are not needed, increasing latency and cost. Categories/Description do not perform adult detection, and any feature set that omits Adult would fail the moderation requirement.
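A sketch of the query string and a moderation check over the documented response shape; the `is_approved` helper name is a hypothetical illustration, not part of the API.

```python
from urllib.parse import urlencode

# Only the two required features are requested, avoiding extra processing.
params = urlencode({"visualFeatures": "Adult,Tags"})

def is_approved(analysis: dict) -> bool:
    """Hypothetical helper: block images flagged as adult or racy content."""
    adult = analysis["adult"]
    return not (adult["isAdultContent"] or adult["isRacyContent"])
```

An approved image then exposes its `tags` array for search indexing.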
Ask Bash
What is the purpose of the Azure AI Vision Analyze Image API?
What is the difference between VisualFeatures values Tags and Categories?
How does the Adult feature in Azure Vision API work?
You need to configure an Azure AI Search indexer that ingests data from an Azure SQL Database table every hour. During each run the indexer must transmit only rows that were added or updated after the previous crawl; unchanged rows must be ignored to reduce load on the database. Which configuration change should you make when you create the indexer?
Configure the indexer schedule with an interval of 1 hour and no start time.
Run the indexer with the reset flag set to true before each scheduled execution.
Enable a high-water-mark change detection policy on the indexer and map it to a LastModified column.
Add a dataChangeDetectionPolicy with an @odata.type of #Microsoft.Azure.Search.SqlIntegratedChangeTrackingPolicy.
Answer Description
To enable incremental crawling against Azure SQL Database tables, you include a dataChangeDetectionPolicy of type SqlIntegratedChangeTrackingPolicy in the data source definition that the indexer references. This policy relies on SQL Server's native change tracking feature and allows the indexer to retrieve only the rows that have changed since its last successful execution. While a high-water-mark policy can be used with Azure SQL (typically for views), SqlIntegratedChangeTrackingPolicy is the recommended and most efficient method for tables. Schedule settings alone control when the indexer runs but do not handle delta detection. Resetting the indexer forces a full crawl rather than an incremental one.
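The relevant part of the data source definition can be sketched as a JSON body; the data source name, table name, and connection string are placeholders.

```python
import json

# Sketch of an Azure AI Search data source definition with SQL integrated
# change tracking enabled. Names and the connection string are placeholders.
data_source = {
    "name": "reviews-sql-ds",
    "type": "azuresql",
    "credentials": {"connectionString": "<connection-string>"},
    "container": {"name": "ProductReviews"},
    "dataChangeDetectionPolicy": {
        "@odata.type": "#Microsoft.Azure.Search.SqlIntegratedChangeTrackingPolicy"
    },
}

payload = json.dumps(data_source)  # body for the create-data-source REST call
```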
Ask Bash
What is a dataChangeDetectionPolicy in Azure Cognitive Search?
How does SqlIntegratedChangeTrackingPolicy work in Azure SQL?
What is the difference between high-water-mark policy and SqlIntegratedChangeTrackingPolicy?
You are building a multi-step conversational assistant by using Azure AI Foundry Agent Service. The agent must:
- keep track of facts that the user gives in earlier turns so they can be reused later;
- persist that state even if the conversation is resumed days later from a different device.
Which Azure resource must you provision and reference in the agent's configuration file to meet these requirements?
An Azure Cosmos DB for NoSQL account
An Azure Service Bus namespace to hold conversation events
An Azure Storage account that exposes a blob container for transcripts
An Azure Cache for Redis instance used as session state
Answer Description
Azure AI Foundry Agent Service stores each conversation's message history in a thread store. In a Standard agent setup you implement Bring-Your-Own (BYO) thread storage by connecting an Azure Cosmos DB for NoSQL account to the project. The service writes every user and assistant message to containers such as thread-message-store and can later retrieve them, so the agent can recall prior facts even when the user returns on another device. Azure Cache for Redis, Azure Storage, and Azure Service Bus do not integrate with the service for persistent thread storage.
Ask Bash
How does Azure Cosmos DB for NoSQL enable persistent thread storage?
Why can't Azure Cache for Redis be used for persistent thread storage in this setup?
What are the advantages of using Azure Cosmos DB for NoSQL in conversational AI setups?
Your company needs to build a web application that turns short bullet-point briefs into full product descriptions and social-media posts. The design team wants to call a pretrained large language model through a simple REST API and rely on built-in content filtering for responsible AI. Which Azure AI Foundry service best meets these requirements?
Azure AI Language Service
Azure Machine Learning managed online endpoint
Azure OpenAI Service
Azure Cognitive Search semantic ranker
Answer Description
Azure OpenAI Service exposes pretrained large language models such as GPT-4 and GPT-3.5 through REST and SDK endpoints, so no model training is required. The service also includes content filtering and other responsible-AI safeguards. Azure AI Language focuses on classic NLP tasks like entity recognition and summarization, not free-form text generation with large language models. Azure Machine Learning can host custom models but does not provide ready-to-use generative models or built-in filters. Azure Cognitive Search delivers indexing and search capabilities and cannot generate new text.
Ask Bash
What is Azure OpenAI Service?
How does content filtering work in Azure OpenAI Service?
What makes Azure AI Language different from Azure OpenAI Service?
You send a Recognize PII Entities REST request to Azure AI Language. Besides returning the array of detected entities, the response must contain a copy of the submitted text in which every PII substring is masked with asterisks. Which request property must you set so the service includes this redacted version of the text in its response?
Set stringIndexType to UnicodeCodePoint.
Set domain to "PII" in the request body.
Add disableServiceLogs set to true in the request header.
Set the redactionPolicy in the request parameters and choose a masking policy, such as characterMask.
Answer Description
The service returns the redactedText field only when a masking redaction policy is applied. You enable masking by setting the redactionPolicy property (for example, policyKind = characterMask or entityMask). If you omit redactionPolicy or set it to noMask, the service returns the original text and the redactedText field is omitted. The domain, stringIndexType, and disableServiceLogs settings do not control whether redactedText is included.
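A sketch of the request body is below. The exact parameter shape varies by Language API version; this follows recent versions that support redactionPolicy, so treat the field names as an assumption to verify against your API version.

```python
# Sketch of a PII entity recognition request with a masking redaction policy.
# With characterMask, the response's redactedText replaces each detected PII
# substring with the redaction character.
pii_request = {
    "kind": "PiiEntityRecognition",
    "parameters": {
        "redactionPolicy": {
            "policyKind": "characterMask",
            "redactionCharacter": "*",
        }
    },
    "analysisInput": {
        "documents": [
            {"id": "1", "language": "en", "text": "Call me at 555-0100."}
        ]
    },
}
```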
Ask Bash
What is the purpose of the redactionPolicy property in the Recognize PII Entities REST request?
What is the difference between characterMask and entityMask in the redactionPolicy property?
Why doesn't setting domains, stringIndexType, or disableServiceLogs affect the redactedText field?
You deployed an Azure AI Foundry (Azure OpenAI) resource that serves a GPT-4 model to multiple internal applications. To avoid unexpected service interruptions, you must receive an alert when token usage for the resource reaches 80 percent of its configured monthly quota. Without collecting custom logs or modifying any client code, which monitoring action should you take?
Enable an Azure Advisor recommendation alert for the Cognitive Services category.
Create an Azure Monitor metric alert rule on the built-in TotalTokens metric for the Azure AI resource and set the condition to fire at 80 percent of quota.
Send diagnostic logs to Log Analytics and build a query-based alert that counts token usage.
Configure an activity-log alert that triggers when any Microsoft.CognitiveServices/accounts/write event occurs.
Answer Description
Azure OpenAI automatically sends built-in platform metrics to Azure Monitor. In the Models - Usage category, the metric Total Tokens (REST name TotalTokens) counts the prompt and completion tokens consumed by the resource. Creating an Azure Monitor metric alert rule on that metric lets you trigger a notification when usage crosses a threshold such as 80 percent of your monthly quota; no additional instrumentation or diagnostic settings are required. Activity-log alerts, Advisor, and Log Analytics query alerts cannot directly evaluate running token totals unless the data is first streamed or collected separately, so they do not meet the requirement.
Ask Bash
What is the TotalTokens metric in Azure OpenAI?
What is an Azure Monitor metric alert rule?
Why can't other options, like activity-log alerts or Azure Advisor, meet the requirement?
You create an Azure OpenAI resource named contoso-opai in the East US region. During application development, you must configure the HTTP client so that all REST API calls are sent to the resource's default endpoint. Which base URI should you use?
Answer Description
Azure OpenAI assigns each resource a default endpoint in the format https://<resource-name>.openai.azure.com/. For the resource named contoso-opai, the base URI is therefore https://contoso-opai.openai.azure.com/. The region in which the resource was created does not appear in the host name.
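Assuming the default https://<resource-name>.openai.azure.com/ endpoint format, the base URI can be derived from the resource name alone:

```python
# The default endpoint is built from the resource name; note that the
# region (East US) does not appear in the host name.
resource_name = "contoso-opai"
base_uri = f"https://{resource_name}.openai.azure.com/"
```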
Ask Bash
Why does the default endpoint for Azure OpenAI not include the region?
What is the significance of having a default endpoint in Azure OpenAI?
How is the resource name used to define the default endpoint?
When preparing Azure AI Video Indexer to analyze new training videos, you must enable automatic identification of employees who appear on camera. The videos are stored in Azure Blob Storage and will be uploaded programmatically for indexing. Which one-time configuration must you complete in Video Indexer before uploading the first video to achieve this goal?
Add the company domain as an allowed origin in the Video Indexer CORS settings.
Configure a custom brand detection model that includes company logos and products.
Enable automatic speaker name generation in the Video Indexer account settings.
Create and train a custom person model by uploading reference images of each employee.
Answer Description
To identify specific individuals who appear in a video, Azure AI Video Indexer relies on a custom person model. Creating this model and populating it with reference images allows the service to match faces detected in the video with known people. Actions such as configuring CORS, brand detection, or speaker name generation do not supply the visual reference data required for facial identification, so they cannot fulfill the requirement.
Ask Bash
How do you create and train a custom person model in Azure Video Indexer?
What role does Azure Blob Storage play in video indexing?
Why are reference images crucial for facial recognition in Video Indexer?
You are designing a web application that processes thousands of customer product reviews in real time. The solution must automatically detect the language of each review, return a sentiment score, extract key phrases, identify named entities, and redact personally identifiable information without requiring you to train or host custom models. Which Azure service should you select?
Azure AI Content Safety
Azure OpenAI Service (GPT-3.5 Turbo)
Azure AI Document Intelligence
Azure AI Language
Answer Description
Azure AI Language provides pre-built REST and SDK operations for sentiment analysis, opinion mining, language detection, key-phrase extraction, named-entity recognition, and PII redaction. Because these capabilities are available out of the box, no model training or hosting is required. Azure OpenAI could achieve similar results but would need custom prompt engineering and carries higher cost. Azure AI Content Safety focuses on harmful content detection, not broad linguistic analysis. Azure AI Document Intelligence specializes in extracting structured data from scanned documents and forms rather than analyzing unstructured review text.
Ask Bash
What is Azure AI Language?
How is Azure AI Language different from Azure OpenAI Service?
What is Named Entity Recognition (NER) in Azure AI Language?
An Azure OpenAI deployment named "chat-gpt-35" currently references model version gpt-35-turbo-0613 and is called by several production micro-services. A newer version (gpt-35-turbo-1106) has been released that offers lower latency. You must move all traffic to the new version with no client-side code changes and still be able to revert quickly if necessary. Which action should you take?
Provision a new Azure OpenAI resource in the same region with gpt-35-turbo-1106 and update DNS records to point clients to it.
Delete the existing deployment, then recreate it with the same name and the gpt-35-turbo-1106 model.
Create a second deployment that uses gpt-35-turbo-1106 and ask developers to switch to the new deployment name.
Edit the "chat-gpt-35" deployment and change its model version to gpt-35-turbo-1106.
Answer Description
Updating the existing deployment to point to the newer model version keeps the deployment name unchanged, so all client applications continue to call the same endpoint and API path. Because the change is confined to the deployment configuration, you can roll forward or roll back at any time by editing the deployment again, without touching application code or provisioning a new resource. Deleting the deployment first causes outage, while creating a separate deployment or resource would require every caller to update its deployment name or endpoint, violating the no-code-change requirement.
Ask Bash
What is an Azure OpenAI deployment?
Why is editing the existing deployment beneficial in this scenario?
What does 'no client-side code changes' mean, and why is it important?
You are creating a project in Azure AI Foundry. The project must 1) expose an existing private Azure OpenAI gpt-35-turbo deployment, 2) hold a refreshable, managed vector index of the product catalog, and 3) let prompt-flow authors use both services without coding secrets. Which resources should you add?
Use Azure App Configuration to store both the OpenAI endpoint and the product catalog embeddings connection.
Add the existing Azure OpenAI resource and create a new Vector Index resource in the project.
Create new Azure Cosmos DB for NoSQL and store the embeddings there, then reference it from prompt flow.
Add the existing Azure OpenAI resource and link an Azure Key Vault that stores the vector index connection string.
Answer Description
A private Azure OpenAI deployment is surfaced to Foundry projects by adding the existing Azure OpenAI resource. To hold embeddings that can be refreshed, you create a managed Vector Index resource inside the project, which automatically generates the secure connection needed by prompt flows. A Key Vault alone, Cosmos DB, or App Configuration do not provide integrated vector-index functionality.
Ask Bash
What is a Vector Index in Azure AI Foundry?
How does Azure OpenAI integrate with Foundry projects?
Why can’t Key Vault, Cosmos DB, or App Configuration be used instead of a Vector Index?
You are developing a new prompt flow in an Azure AI Foundry project. The flow must let GPT-35-Turbo answer user questions by grounding the model with several thousand PDF research papers that are stored in an Azure Storage container. According to the recommended RAG pattern in Foundry, what should you do before adding a retrieval step to the flow so that the model can be grounded in the papers?
Add an Azure AI Content Safety policy to the prompt flow and enable on-the-fly file parsing.
Configure the GPT-35-Turbo deployment with a higher temperature and a system message instructing it to cite the papers.
Upload the PDFs to the project's data assets and reference the asset path in the retrieval step.
Create a vector index in Azure AI Search that contains text chunks and their embeddings for every PDF.
Answer Description
In Azure AI Foundry the first step of a Retrieval-Augmented Generation (RAG) solution is to make the unstructured data searchable. You do this by creating a vectorized Azure AI Search index that stores both the text chunks and their embeddings. The retrieval step in a prompt flow can then query this index and pass the most relevant chunks to the GPT-35-Turbo prompt. If you skip the indexing phase, or use a regular (keyword-only) index, semantic search and vector similarity ranking will not work, and the model cannot be properly grounded. Uploading the PDFs alone or adding them to the project's data assets does not provide the low-latency semantic search that RAG requires.
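Before indexing, the extracted text of each PDF is typically split into overlapping chunks that are embedded individually. A minimal sketch of that chunking step, with illustrative sizes:

```python
# Split extracted text into fixed-size, overlapping chunks so that each
# chunk fits comfortably in an embedding call and neighboring chunks share
# some context at their boundaries.
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start : start + size])
        start += size - overlap
    return chunks

sample = "x" * 1200       # stand-in for one paper's extracted text
chunks = chunk_text(sample)
```

Each chunk is then embedded and written to the vector index together with its text, which is what the retrieval step queries at run time.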
Ask Bash
What is the RAG pattern in Azure AI Foundry?
What is a vectorized Azure AI Search index?
Why is semantic search necessary in RAG solutions?
You created a Conversational Language Understanding (CLU) project in Language Studio. The project contains 15 labeled utterances for the "ScheduleAppointment" intent and 12 labeled utterances for the "CancelAppointment" intent. You accept the default 80/20 data-splitting option and train the model.
After training, you notice that precision and recall vary widely between runs and that the "CancelAppointment" intent sometimes shows 0 percent recall.
Which explanation best describes why the evaluation results are unstable?
Standard training mode produces random metrics; you must switch to advanced training to get deterministic results.
Too few labeled examples remain in the test set after the 80/20 split, so one or two misclassified utterances greatly change the precision and recall for CancelAppointment.
CLU uses k-fold cross-validation until the project has at least 100 utterances, causing the metrics to fluctuate.
Precision and recall remain zero until every utterance contains at least one labeled entity, so metrics will stabilize after entities are added.
Answer Description
CLU always reserves a portion of the data (20 percent by default) as a blind test set. Because the CancelAppointment intent has fewer than the recommended 15 training examples, only a handful of test utterances are available after the split. With so few samples, a single misclassification can drop recall to 0 percent or raise it sharply in another run, making the reported metrics volatile. Adding more labeled utterances, especially for intents with fewer than 15 examples, will give the model more reliable training data and provide a larger test set, stabilizing the evaluation scores. The other options are incorrect because CLU does not switch to k-fold cross-validation, entity labels are not required for intent metrics, and standard training metrics are deterministic once the data volume is adequate.
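The arithmetic behind the volatility is simple: an 80/20 split of 12 CancelAppointment utterances leaves only two or three in the blind test set, so each misclassified utterance swings recall dramatically.

```python
# With roughly 2 CancelAppointment utterances in the test set, each
# misclassified utterance moves recall by 50 percentage points.
test_examples = round(12 * 0.2)              # utterances held out for testing
recall_with_one_error = 1 / test_examples    # one miss out of two
recall_with_two_errors = 0 / test_examples   # the observed 0 percent recall
```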
Ask Bash
What is data splitting and why does CLU use an 80/20 split?
Why are more labeled utterances necessary to stabilize metrics?
What is recall, and why does it drop to 0 percent in some runs?
You are planning an Azure OpenAI-based virtual assistant. Corporate policy states that the assistant must never produce violent or sexual content, even if a user explicitly requests it, but it should continue to answer ordinary troubleshooting questions. According to Azure Responsible AI guidance, which design decision best meets this requirement?
Expose the model only through a private endpoint so that external users cannot call it directly.
Enable the Azure OpenAI content safety policy and set the Violence and Sexual categories to the block severity threshold for the deployment endpoint.
Route all traffic through Azure Front Door with a custom WAF rule that searches for banned keywords related to violence or sexuality.
Run Text Analytics for Profanity on every user prompt before sending it to the model.
Answer Description
Azure OpenAI provides a built-in content safety system that can be configured per deployment. By enabling the content filtering policy and setting the Violence and Sexual categories to the block severity threshold, the runtime will automatically refuse or safe-complete requests that include disallowed content while still allowing benign requests. Web application firewalls and generic text analytics profanity filters do not inspect model generations deeply enough for Responsible AI compliance, and a private endpoint only secures network access; it does not moderate content.
Ask Bash
What is the Azure OpenAI content safety policy?
Why is a Web Application Firewall (WAF) not sufficient for this scenario?
What is the difference between profanity detection and content safety filtering in Azure AI?
Gnarly!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.