Microsoft Azure AI Engineer Associate Practice Test (AI-102)
Use the form below to configure your Microsoft Azure AI Engineer Associate Practice Test (AI-102). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft Azure AI Engineer Associate AI-102 Information
The Microsoft Certified: Azure AI Engineer Associate certification, earned by passing the AI-102: Designing and Implementing a Microsoft Azure AI Solution exam, is designed for people who build, deploy, and manage AI solutions using Microsoft Azure. According to Microsoft, the role of an Azure AI Engineer spans all phases of a solution: requirements definition, development, deployment, integration, maintenance, and tuning. To succeed you should have experience with programming (for example, Python or C#), using REST APIs and SDKs, and working with Azure's AI services.
Domains on Azure AI Engineer Exam
The AI-102 exam tests several key areas: planning and managing an Azure AI solution (about 15-20% of the exam), implementing computer vision solutions (15-20%), natural language processing solutions (30-35%), knowledge mining/document intelligence (10-15%), generative AI solutions (10-15%), and content-moderation/decision-support solutions (10-15%). It is important to review each area and gain hands-on practice with Azure AI services such as Azure AI Vision, Azure AI Language, Azure AI Search, and Azure OpenAI.
Azure AI Engineer Practice Tests
One of the best ways to prepare for this exam is through practice tests. Practice tests let you experience sample questions that mimic the real exam style and format. They help you determine which topics you are strong in and which ones need more study. After taking a practice test you can review your incorrect answers and go back to the learning material or labs to fill those gaps. Many study guides recommend using practice exams multiple times as a key part of your preparation for AI-102.

Free Microsoft Azure AI Engineer Associate AI-102 Practice Test
- 20 Questions
- Unlimited
- Plan and manage an Azure AI solution, Implement generative AI solutions, Implement an agentic solution, Implement computer vision solutions, Implement natural language processing solutions, Implement knowledge mining and information extraction solutions
Free Preview
This test is a free preview; no account is required.
Subscribe to unlock all content, keep track of your scores, and access AI features!
Your Azure DevOps YAML release pipeline must promote a new version of an Azure AI Language custom question-answering model from test to production by running az cognitiveservices account deployment create. Policies state that (1) no shared keys may be stored in the repo or pipeline and (2) credentials must rotate without editing the YAML. Which authentication method meets these requirements?
Create an Azure DevOps service connection that uses a service principal, and assign that principal the Cognitive Services Contributor role on the target Azure AI resource so the az CLI obtains an Azure AD access token at run time.
Store the subscription owner's username and password in a variable group and run az login in the pipeline with those credentials.
Save the Azure AI resource's primary key as a secret variable in the pipeline and pass it to the CLI task with the --key argument.
Generate a user-delegation SAS token for the Azure AI endpoint, store the token in a secure files library item, and reference it from the pipeline.
Answer Description
Authenticating the az CLI through a service principal exposed in an Azure DevOps service connection avoids embedding the resource's primary or secondary keys in source control or pipeline variables. Granting the principal the Cognitive Services Contributor role on the target resource provides the least-privilege access needed to run az cognitiveservices account deployment create. When the service principal's secret or certificate is rotated (or workload identity federation is used so that no long-lived secret exists at all), the service connection is updated independently of the YAML file, so the pipeline continues to work. The other approaches either store long-lived secrets directly in the pipeline, cannot be rotated transparently, or rely on user credentials rather than a dedicated principal, violating the stated policies.
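For illustration only, here is a minimal Python sketch (using the azure-identity package; the tenant, app ID, and secret placeholders are hypothetical) of roughly what the service connection does on the agent: it exchanges the service principal's credentials for a short-lived Microsoft Entra ID access token, so no Cognitive Services key ever appears in the repo or pipeline.

from azure.identity import ClientSecretCredential

# Values supplied by the Azure DevOps service connection at run time;
# the placeholders below are hypothetical and are never stored in the repo.
credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<service-principal-app-id>",
    client_secret="<service-principal-secret>",
)

# The az CLI task performs the equivalent of this call: it trades the
# service principal credentials for a short-lived Azure AD access token
# scoped to Azure Resource Manager, then uses it for the deployment command.
token = credential.get_token("https://management.azure.com/.default")
print(token.expires_on)  # tokens expire and are re-issued automatically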
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a service principal in Azure?
Why is the Cognitive Services Contributor role used here?
How does credential rotation work with a service principal in Azure DevOps?
You are designing an application that must scan millions of corporate PDF and image files, extract key entities by applying pre-built and custom AI skills, and let employees run fast full-text and faceted searches over the enriched content. Which Azure AI service should you select as the core of this knowledge mining solution?
Azure AI Search (Azure Cognitive Search)
Azure AI Document Intelligence (Form Recognizer)
Azure OpenAI Service
Azure Databricks with Delta Lake
Answer Description
Azure AI Search (formerly Azure Cognitive Search) is purpose-built for knowledge mining scenarios. It can crawl a variety of data sources, apply an enrichment pipeline that includes pre-built and custom cognitive skills to extract structure from unstructured content, and then index the results to provide rich, full-text and faceted search experiences. Azure AI Document Intelligence can extract fields from individual documents but does not provide scalable indexing and search. Azure OpenAI Service focuses on large language models, and Azure Databricks with Delta Lake offers analytics but no turnkey enrichment and search capabilities.
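As a rough sketch of the query side, the following Python snippet uses the azure-search-documents package to run a full-text, faceted query against an enriched index. The endpoint, index name, key, and field names (organizations, metadata_storage_name) are hypothetical and depend on how your skillset and index schema are defined.

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Hypothetical endpoint, index, and query key for illustration only.
client = SearchClient(
    endpoint="https://<search-service>.search.windows.net",
    index_name="enriched-docs",
    credential=AzureKeyCredential("<query-key>"),
)

# Full-text query over the enriched index, faceted by an entity field that a
# skillset (for example, entity recognition) populated at indexing time.
results = client.search(search_text="safety incident", facets=["organizations"])

for facet in (results.get_facets() or {}).get("organizations", []):
    print(facet["value"], facet["count"])
for doc in results:
    print(doc["metadata_storage_name"])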
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Azure AI Search and how does it work?
What is the difference between Azure AI Search and Azure AI Document Intelligence?
What are examples of use cases for Azure AI Search?
You have trained an object-detection project by using Custom Vision in Azure AI Vision. The model will be used on production-line cameras inside a factory network that has no reliable Internet connectivity. Inference must execute locally with less than 100 ms latency and without sending images to the cloud. Which deployment approach should you use?
Export the model and run it in the Azure AI Vision Docker container on an Azure Stack Edge or IoT Edge device within the factory.
Import the model into an Azure OpenAI resource and call it with chat completions from the production-line controllers.
Convert the model to ONNX and deploy it to a managed online endpoint in Azure Machine Learning.
Publish the model to the Custom Vision cloud prediction endpoint and invoke it from the factory over HTTPS.
Answer Description
Running the Custom Vision runtime container on an edge device keeps all inference on-premises, removes the need for constant Internet connectivity, and avoids network round trips so the low-latency target can be met. Publishing to the cloud prediction endpoint or to an AML managed endpoint still requires outbound connectivity and adds network latency. Azure OpenAI cannot host custom object-detection models, so that option is not applicable.
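A minimal Python sketch of calling the exported container from inside the factory network, assuming the container is published on local port 8080 and exposes its default /image prediction route (the port mapping, file name, and response fields are assumptions for illustration):

import requests

# Hypothetical local address: assumes the exported Custom Vision container is
# published on port 8080 of the IoT Edge / Azure Stack Edge device.
url = "http://localhost:8080/image"

with open("frame-001.jpg", "rb") as f:
    response = requests.post(
        url,
        data=f.read(),
        headers={"Content-Type": "application/octet-stream"},
    )

# The container returns detected objects with tags, probabilities, and
# bounding boxes; no image ever leaves the factory network.
for prediction in response.json().get("predictions", []):
    print(prediction["tagName"], prediction["probability"], prediction["boundingBox"])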
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is an Azure AI Vision Docker container?
What is an Azure Stack Edge or IoT Edge device?
Why is it important to use local inference for this scenario?
You are setting up a new generative-AI project in Azure AI Foundry. Before you write any code, you want a single manifest that lists the project's required Azure resources-workspace, storage account, Key Vault, Application Insights, and so on-so that you can run one Azure Developer CLI command to deploy the entire environment into a resource group. Which file must exist in the project root?
flow.dag.yaml
azure.yaml
foundry.yaml
environment.yml
Answer Description
Azure Developer CLI (azd) looks for a file named azure.yaml at the project root. The azure.yaml manifest contains services and resources nodes that declaratively list every Azure resource the project needs (such as an AI Foundry hub, Storage, Key Vault, and Application Insights). When you run azd up or azd provision, the CLI reads azure.yaml and generates/executes the underlying Bicep or ARM templates to provision all listed resources in the chosen resource group. Other YAML files-such as environment.yml (Python package environment) or flow.dag.yaml (prompt-flow definition)-configure runtime components but do not drive initial infrastructure deployment, and there is no supported foundry.yaml file in the Azure AI Foundry toolchain.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the purpose of the azure.yaml file in Azure projects?
What are Bicep and ARM templates in Azure?
How do azd commands like 'azd up' and 'azd provision' work?
Your team is building a chat application that uses Azure OpenAI. Corporate policy requires that any incoming prompt with Hate or Sexual content whose severity score is 2 (Low) or higher be blocked before it can be forwarded to the model, and that jailbreak (prompt-injection) attacks be detected. Which action should you place at the very beginning of the request pipeline to meet this requirement?
Depend solely on the built-in Azure OpenAI completion content filter that runs after the model generates a response.
Apply an llm-content-safety policy (or call the Content Safety text:analyze API) with shieldPrompt=true and category thresholds Hate=2 and Sexual=2, and reject the request if any rule is triggered.
Store the conversation in a database and run periodic batch reviews with Azure AI Content Safety after the session ends.
Prepend a strict system message instructing the model to refuse disallowed topics and run the chat at temperature 0.
Answer Description
Apply Azure AI Content Safety to the user input in the inbound stage. An llm-content-safety policy (or a direct call to the text:analyze endpoint) lets you set category thresholds (Hate ≥ 2 and Sexual ≥ 2) and turn on shieldPrompt=true. The policy blocks the request if the content exceeds the configured severity or if a prompt attack is detected, ensuring the model never sees disallowed content. Relying only on completion filters happens too late, system messages can be ignored, and batch reviews are not real time.
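A minimal Python sketch of the inbound check using the azure-ai-contentsafety package; the endpoint and key are placeholders, and jailbreak detection (Prompt Shields / shieldPrompt) is a separate call or policy setting, as noted in the comments.

from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory

# Hypothetical endpoint and key for illustration.
client = ContentSafetyClient(
    "https://<content-safety>.cognitiveservices.azure.com",
    AzureKeyCredential("<key>"),
)

def is_blocked(prompt: str) -> bool:
    # Analyze the inbound prompt before it ever reaches the model.
    result = client.analyze_text(AnalyzeTextOptions(text=prompt))
    for item in result.categories_analysis:
        if item.category in (TextCategory.HATE, TextCategory.SEXUAL) and item.severity >= 2:
            return True
    # Jailbreak detection (shieldPrompt) is handled by the separate
    # text:shieldPrompt API or the llm-content-safety policy; not shown here.
    return False

if is_blocked("example user prompt"):
    raise PermissionError("Request rejected by content policy")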
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the llm-content-safety policy in Azure and how does it work?
How does shieldPrompt=true enhance security in Azure OpenAI applications?
What are prompt-injection attacks, and why are they a threat to AI systems?
You administer a multiservice Azure AI resource named contoso-ai. Developers report occasional 5xx responses when calling the resource's REST endpoint, but the Azure portal Metrics chart shows no obvious pattern. You must collect per-request data (operation name, response status, subscription key) for the last 24 hours and query it in your existing Log Analytics workspace. What should you do first?
Create a diagnostic setting on contoso-ai that streams the RequestLogs category to the Log Analytics workspace.
Enable Application Insights HTTP dependency tracking in each calling application.
Configure a Network Watcher connection monitor test that targets the contoso-ai endpoint.
Enable export of the Azure activity log for the contoso-ai resource group to the Log Analytics workspace.
Answer Description
The detailed, per-request information you require is emitted by the Azure AI Services platform as RequestLogs. By creating a diagnostic setting on the contoso-ai resource and selecting the RequestLogs (or AllLogs) category, you can stream those logs directly to an Azure Monitor Log Analytics workspace. Once the diagnostic setting is in place, every call (including its operation name, response code, and the key used) is recorded and becomes immediately queryable with Kusto queries. Exporting the activity log would only provide control-plane events, not request-level data. Connection Monitor tests and Application Insights dependency tracking would require additional configuration on every client and do not capture the platform-level fields requested.
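A rough sketch of creating that diagnostic setting from Python with the azure-mgmt-monitor package; the subscription, resource group, workspace IDs, and setting name are hypothetical, and the category string follows the one used in this scenario.

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import DiagnosticSettingsResource, LogSettings

# Hypothetical IDs for illustration only.
subscription_id = "<subscription-id>"
ai_resource_id = ("/subscriptions/<subscription-id>/resourceGroups/<rg>"
                  "/providers/Microsoft.CognitiveServices/accounts/contoso-ai")
workspace_id = ("/subscriptions/<subscription-id>/resourceGroups/<rg>"
                "/providers/Microsoft.OperationalInsights/workspaces/<workspace>")

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Stream the request-level log category to the Log Analytics workspace.
client.diagnostic_settings.create_or_update(
    resource_uri=ai_resource_id,
    name="send-request-logs",
    parameters=DiagnosticSettingsResource(
        workspace_id=workspace_id,
        logs=[LogSettings(category="RequestLogs", enabled=True)],
    ),
)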
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are RequestLogs and what data do they contain?
How do you create a diagnostic setting in Azure?
What is a Log Analytics workspace and how is it used?
You have an Azure AI Vision resource named vision1. An Azure App Service uses a system-assigned managed identity. You must configure authentication so that the App Service calls the Vision REST API by using an Azure AD access token instead of an API key. Which Azure RBAC role should you assign to the managed identity at the scope of vision1?
Reader
Managed Identity Operator
Cognitive Services Contributor
Cognitive Services User
Answer Description
To call an Azure AI service with Azure AD, the calling identity must be granted a role that allows invocation of the service's inference operations. The least-privileged built-in role for this purpose is Cognitive Services User. It provides permissions to read and execute API calls against the resource while denying management actions. Cognitive Services Contributor would also work but grants unnecessary create, update, and delete rights. Reader permits only viewing resource metadata, and Managed Identity Operator is unrelated to Azure AI permissions. Assigning the Cognitive Services User role enables token-based access without exposing excess privileges.
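A minimal sketch of the resulting token-based call; the Image Analysis 4.0 REST route, api-version, and image URL are assumptions for illustration. On the App Service, DefaultAzureCredential resolves to the system-assigned managed identity, so no API key is used.

import requests
from azure.identity import DefaultAzureCredential

# Resolves to the system-assigned managed identity when running in App Service.
credential = DefaultAzureCredential()
token = credential.get_token("https://cognitiveservices.azure.com/.default")

endpoint = "https://vision1.cognitiveservices.azure.com"
response = requests.post(
    f"{endpoint}/computervision/imageanalysis:analyze",
    params={"api-version": "2023-10-01", "features": "caption"},
    headers={
        "Authorization": f"Bearer {token.token}",   # Azure AD token instead of a key
        "Content-Type": "application/json",
    },
    json={"url": "https://example.com/sample.jpg"},  # hypothetical image URL
)
print(response.json())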
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the purpose of the Cognitive Services User role in Azure?
What is the difference between the Cognitive Services User role and the Cognitive Services Contributor role?
What are managed identities in Azure, and how do they work with Azure AD for authentication?
Your company is building an internal tool to extract invoice fields from scanned PDFs. You created an Azure AI Document Intelligence resource. In a new .NET 6 console application, you need to call the prebuilt invoice model by using the official client library. Which NuGet package should you add to the project?
Microsoft.Azure.CognitiveServices.FormRecognizer
Azure.AI.TextAnalytics
Azure.AI.DocumentIntelligence
Azure.AI.FormRecognizer
Answer Description
The recommended client library for Azure AI Document Intelligence in .NET is distributed as the Azure.AI.DocumentIntelligence NuGet package. This package contains the DocumentIntelligenceClient class and related types that let you invoke layout, prebuilt, and custom models such as the prebuilt-invoice model. Azure.AI.FormRecognizer remains available for backward compatibility but targets the earlier Form Recognizer SDK; Microsoft.Azure.CognitiveServices.FormRecognizer is an even older, legacy SDK that lacks the newer Document Analysis APIs. Azure.AI.TextAnalytics is for natural-language processing tasks and cannot analyze documents, so only Azure.AI.DocumentIntelligence meets the requirement.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the Azure.AI.DocumentIntelligence NuGet package used for?
What is the difference between Azure.AI.DocumentIntelligence and Azure.AI.FormRecognizer?
Why is Azure.AI.TextAnalytics not suitable for analyzing documents like invoices?
You are designing an Azure-based solution that processes incoming customer support emails. The application must automatically detect the email's language, determine sentiment, extract named entities, generate an extractive summary, and route the message by using a custom multi-label text classification model. You prefer to accomplish all these natural language processing tasks with a single Azure service that exposes REST APIs and SDKs. Which Azure service should you choose?
Azure OpenAI Service
Azure AI Speech
Azure AI Language
Azure AI Document Intelligence
Answer Description
Azure AI Language (formerly the Cognitive Service for Language) offers built-in capabilities for language detection, sentiment analysis, entity recognition, and extractive summarization. It also allows you to train and deploy custom multi-label text classification models, so all of the required tasks can be performed by one service. Azure AI Document Intelligence focuses on extracting structured data from documents, not general NLP. Azure AI Speech targets speech-to-text and related audio scenarios. Azure OpenAI Service provides powerful generative models but is not optimized for turnkey sentiment analysis, language detection, or custom text classification workflows that the Language service supplies out of the box.
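A short Python sketch with the azure-ai-textanalytics package showing several of these tasks on one client; the endpoint, key, and sample email are hypothetical, and the long-running operations mentioned in the final comment require a trained project and deployment.

from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

# Hypothetical endpoint and key for illustration.
client = TextAnalyticsClient(
    "https://<language-resource>.cognitiveservices.azure.com",
    AzureKeyCredential("<key>"),
)

emails = ["Mi pedido #12345 llegó dañado y necesito un reembolso urgente."]

language = client.detect_language(emails)[0].primary_language.iso6391_name
sentiment = client.analyze_sentiment(emails)[0].sentiment
entities = [e.text for e in client.recognize_entities(emails)[0].entities]

print(language, sentiment, entities)
# Extractive summarization and custom multi-label classification are exposed as
# long-running operations on the same client (for example, begin_extract_summary
# and begin_multi_label_classify, the latter requiring project/deployment names).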
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Azure AI Language and how does it handle all those tasks?
What is the difference between Azure AI Language and Azure OpenAI Service?
What are REST APIs and SDKs, and how do they support Azure AI Language?
You are planning an Azure OpenAI-based virtual assistant. Corporate policy states that the assistant must never produce violent or sexual content, even if a user explicitly requests it, but it should continue to answer ordinary troubleshooting questions. According to Azure Responsible AI guidance, which design decision best meets this requirement?
Route all traffic through Azure Front Door with a custom WAF rule that searches for banned keywords related to violence or sexuality.
Run Text Analytics for Profanity on every user prompt before sending it to the model.
Enable the Azure OpenAI content safety policy and set the Violence and Sexual categories to the block severity threshold for the deployment endpoint.
Expose the model only through a private endpoint so that external users cannot call it directly.
Answer Description
Azure OpenAI provides a built-in content safety system that can be configured per deployment. By enabling the content filtering policy and setting the Violence and Sexual categories to the block severity threshold, the runtime will automatically refuse or safe-complete requests that include disallowed content while still allowing benign requests. Web application firewalls and generic text analytics profanity filters do not inspect model generations deeply enough for Responsible AI compliance, and a private endpoint only secures network access; it does not moderate content.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the Azure OpenAI content safety policy?
Why is a Web Application Firewall (WAF) not sufficient for this scenario?
What is the difference between profanity detection and content safety filtering in Azure AI?
You are building a streaming web app for an international conference. The application must display near-real-time captions in each viewer's preferred language while the session is being broadcast, and you want to keep integration simple by sending the audio stream to a single Azure WebSocket endpoint that provides both speech-to-text and automatic translation. Which Azure AI Foundry service should you recommend?
Azure AI Speech
Azure Live Video Analytics
Azure AI Language
Azure AI Translator service
Answer Description
Azure AI Speech exposes a single real-time WebSocket (wss://) endpoint through its speech translation feature, so one audio stream can be transcribed and translated into multiple target languages in the same session, which is exactly what near-real-time multilingual captions require. Azure AI Translator translates text and documents but does not accept a live audio stream, Azure AI Language performs text analytics rather than speech processing, and Azure Live Video Analytics targets video analysis pipelines, so none of them provide combined speech-to-text and translation over one endpoint.
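A minimal sketch with the Speech SDK for Python (azure-cognitiveservices-speech); the key, region, and languages are placeholders, and a production captioning app would use continuous recognition with event handlers rather than a single recognize_once call.

import azure.cognitiveservices.speech as speechsdk

# Hypothetical key and region.
config = speechsdk.translation.SpeechTranslationConfig(
    subscription="<speech-key>", region="<region>")
config.speech_recognition_language = "en-US"
config.add_target_language("fr")
config.add_target_language("de")

recognizer = speechsdk.translation.TranslationRecognizer(translation_config=config)

# One recognition call returns the transcript plus every requested translation.
result = recognizer.recognize_once()
print(result.text)                # recognized speech
print(result.translations["fr"])  # French caption
print(result.translations["de"])  # German caption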
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Azure AI Speech used for?
How does Azure AI Speech's WebSocket endpoint work?
What are the differences between Azure AI Speech, Azure AI Translator, and Azure AI Language?
Your company needs to build a summarization endpoint that processes individual documents up to 100,000 tokens. The stakeholder insists on minimizing inference latency and cost while maintaining accuracy. You decide to use Azure OpenAI models. Which base model should you select to best meet these requirements?
text-embedding-ada-002
GPT-3.5-Turbo with a 16k context window
GPT-4o (128k context, optimized for speed and cost)
GPT-4-Turbo with a 128k context window
Answer Description
GPT-4o supports a 128k-token context window (enough for a 100,000-token document) and is engineered to run roughly twice as fast and at a lower cost per token than GPT-4 Turbo. This makes it the best balance of large context, performance, and price for the scenario. GPT-4-Turbo-128k also supports the needed context but is slower and more expensive. GPT-3.5-Turbo-16k cannot accept the required prompt length, and the Ada embedding model is not a generative language model, so it cannot perform summarization directly.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are tokens in the context of Azure OpenAI models?
Why is latency a critical factor in selecting an OpenAI model for summarization?
What makes GPT-4o more cost-effective compared to other models with similar capabilities?
A company wants to automatically parse thousands of scanned PDF vendor invoices that arrive each month. The solution must extract key fields (such as invoice number, due date, tax amounts, and individual line items) and return the results as structured JSON through a REST API, with only minimal custom training on a handful of sample documents. Which Azure AI service should you use to meet these requirements?
Azure AI Language Question Answering
Azure AI Vision Image Analysis
Azure AI Document Intelligence (Form Recognizer)
Azure AI Speech to Text
Answer Description
Azure AI Document Intelligence is purpose-built for extracting structured data from documents such as invoices, receipts, and contracts. It offers pre-built and customizable models that can be trained with a small set of sample files and exposes the results through REST and SDK endpoints. Azure AI Vision focuses on image classification and object detection, Azure AI Language provides capabilities like question answering and sentiment analysis, and Azure AI Speech to Text converts spoken audio to text; none of these natively extracts tabular and field information from invoices.
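A rough Python sketch using the azure-ai-documentintelligence package and the prebuilt-invoice model; the endpoint, key, invoice URL, and exact call shape are assumptions based on the current SDK.

from azure.core.credentials import AzureKeyCredential
from azure.ai.documentintelligence import DocumentIntelligenceClient
from azure.ai.documentintelligence.models import AnalyzeDocumentRequest

# Hypothetical endpoint, key, and invoice URL for illustration.
client = DocumentIntelligenceClient(
    "https://<doc-intel>.cognitiveservices.azure.com",
    AzureKeyCredential("<key>"),
)

poller = client.begin_analyze_document(
    "prebuilt-invoice",
    AnalyzeDocumentRequest(url_source="https://example.com/invoice-001.pdf"),
)
result = poller.result()

for invoice in result.documents:
    fields = invoice.fields
    # Prebuilt invoice field names such as VendorName, InvoiceId, DueDate, and
    # InvoiceTotal are returned as structured JSON; line items live in "Items".
    for name in ("VendorName", "InvoiceId", "DueDate", "InvoiceTotal"):
        field = fields.get(name)
        if field:
            print(name, "=", field.content)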
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
How does Azure AI Document Intelligence extract information from documents?
What kind of training does Azure AI Document Intelligence require?
What is the difference between Azure AI Document Intelligence and Azure AI Vision?
You need to create an Azure AI Foundry hub that will be used by multiple generative-AI projects in your organization. The hub must meet the following requirements:
- Be provisioned by using infrastructure as code so it can be deployed to multiple subscriptions.
- Automatically create the minimum set of dependent Azure resources that any project inside the hub will need.
Which deployment approach should you use first to be sure that both requirements are satisfied?
Create the hub manually in the Azure portal and then export a template for reuse in other subscriptions.
Use an Azure PowerShell script that calls New-AzResourceGroupDeployment with inline parameters to provision only the hub.
Run an Azure CLI command that creates an Azure AI Foundry project and hub in a single step.
Deploy the hub by using an Azure Resource Manager (ARM) or Bicep template that defines the Azure AI Foundry hub resource and its dependencies.
Answer Description
Deploying the hub with an Azure Resource Manager (ARM) or Bicep template meets both requirements. A template is fully declarative, so you can repeatedly deploy the same hub definition to any subscription. The template can include the hub plus its dependent resources (Azure Storage account, Azure Key Vault, Azure Application Insights, and optionally an Azure Container Registry and Azure AI Services account). When the hub is created with these dependencies, every new project you create under the hub automatically reuses those resources. Creating the hub manually or by using an ad-hoc PowerShell script is not idempotent across subscriptions, and project-level CLI commands cannot run until a hub already exists.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is an ARM template and how does it ensure consistent deployment?
What is the difference between Bicep and ARM templates?
What does 'idempotent' mean in the context of Azure deployments?
Your company must process 150,000 high-resolution TIFF images of scanned invoices every night. The goal is to extract all printed and handwritten text in the original language so that another service can later interpret the data. You do not need to identify key-value pairs, tables, or form structure. Which Azure service or feature should you select to meet the requirement with the least configuration effort?
Train a custom object-detection model in Azure Custom Vision to locate and extract the text.
Use the Read OCR operation provided by Azure AI Vision (Computer Vision service).
Use the prebuilt Layout model in Azure AI Document Intelligence to read the images.
Upload the images to Azure Video Indexer and retrieve the transcription from the insights JSON.
Answer Description
The Read operation in Azure AI Vision (formerly the Computer Vision service) is a pre-built optical character recognition (OCR) capability that extracts printed and handwritten text from images and documents in more than 160 languages. It returns text lines and their bounding boxes but does not attempt to parse the document into key-value pairs or tables, making it ideal when you only need raw text. Azure AI Document Intelligence (Form Recognizer) targets structured extraction and would be more complex than necessary. Azure Custom Vision is intended for training image classification or object-detection models, not text extraction. Azure Video Indexer focuses on audio and video insights rather than still-image OCR.
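A minimal sketch of the Read call with the azure-ai-vision-imageanalysis Python package; the endpoint, key, and file name are hypothetical, and the image format must be one the service supports.

from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.imageanalysis import ImageAnalysisClient
from azure.ai.vision.imageanalysis.models import VisualFeatures

# Hypothetical endpoint and key for illustration.
client = ImageAnalysisClient(
    "https://<vision-resource>.cognitiveservices.azure.com",
    AzureKeyCredential("<key>"),
)

with open("invoice-0001.tif", "rb") as f:
    result = client.analyze(image_data=f.read(), visual_features=[VisualFeatures.READ])

# The Read result is raw text organized into blocks and lines with bounding
# polygons; no key-value pair or table parsing is attempted.
for block in result.read.blocks:
    for line in block.lines:
        print(line.text)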
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the Read OCR operation in Azure AI Vision?
How is Azure AI Vision different from Azure AI Document Intelligence?
Why isn't Azure Custom Vision the right choice for text extraction?
You are designing an autonomous drone inspection solution that must run entirely at a remote construction site without reliable internet connectivity. The drone needs to generate concise natural-language summaries of detected safety risks on an embedded single-GPU device. Within Azure AI Foundry you must choose a model whose weights you can download and run locally while keeping GPU memory requirements low. Which model should you deploy?
DALL-E 3
GPT-4
Phi-2 language model
GPT-3.5-Turbo
Answer Description
Phi-2 is a 2.7-billion-parameter open-weight language model published by Microsoft. Because the weights can be downloaded from the Azure AI model catalog, the model can be executed completely offline and even on a single consumer-class GPU, making it suitable for edge scenarios that lack network connectivity. GPT-4 and GPT-3.5-Turbo are only available as hosted Azure OpenAI endpoints, so they require cloud access and cannot be run locally. DALL-E 3 is an image generation model, not a text-generation model, and also relies on a hosted service.
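As an offline illustration (not an Azure-specific API), the sketch below loads the open Phi-2 weights with the Hugging Face transformers library; the model ID is the public microsoft/phi-2 repository, the prompt is invented, and exact loading options depend on your transformers and accelerate versions.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Weights are downloaded once (for example, from the Azure AI model catalog or
# the microsoft/phi-2 repository) and can then be loaded fully offline.
model_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto")  # device_map needs accelerate

prompt = ("Summarize the safety risks: two workers observed without helmets "
          "near the crane on the north scaffold.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))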
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why is the Phi-2 language model suitable for offline use?
What are the key differences between Phi-2 and GPT-4 or GPT-3.5-Turbo?
What makes Phi-2 efficient for single-GPU devices?
A company receives thousands of scanned invoices daily. They must automatically extract vendor name, invoice number, line items, and total amount as structured JSON so the data can be routed to their ERP system. The solution must work with varying layouts and require minimal training. Which Azure AI service should you use to meet these requirements?
Azure AI Speech Service - Speech to Text
Azure AI Document Intelligence (Form Recognizer)
Azure Cognitive Search
Azure AI Language Service - Conversational Language Understanding
Answer Description
Azure AI Document Intelligence (formerly Form Recognizer) provides prebuilt and custom models for extracting key-value pairs, tables, and line items from documents such as invoices, receipts, and contracts. It supports heterogeneous layouts, can learn from a small labeled sample set, and outputs structured JSON. Azure Cognitive Search focuses on full-text indexing and search, not detailed field extraction. Conversational Language Understanding targets dialog intents, and Speech to Text transcribes audio, so neither meets the document information-extraction scenario.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Azure AI Document Intelligence (Form Recognizer)?
How does Azure AI Document Intelligence support heterogeneous document layouts?
What are the differences between Azure AI Document Intelligence and Azure Cognitive Search?
You are integrating an Azure OpenAI GPT-4 deployment into an Azure AI Foundry project by using the official OpenAI Python SDK. The following code is executed in a prompt flow but returns HTTP 404 "The deployment was not found":
import os
import openai

openai.api_key = os.getenv("AZURE_OPENAI_KEY")
openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}]
)
The Azure OpenAI resource contains a deployment named gpt4-prod, and the key and endpoint values are correct. Which change will allow the call to succeed when the code runs in Foundry?
Import the azure.ai.generative.openai package instead of openai and keep model="gpt-4" without any other changes.
Replace the model parameter with engine="gpt4-prod" and set the environment variable OPENAI_API_VERSION to "latest".
Set the environment variable OPENAI_API_TYPE to "azure" and replace the model parameter with deployment_id="gpt4-prod" in the ChatCompletion call.
Append "?api-version=2024-02-15-preview" to the endpoint URL while leaving the model parameter unchanged.
Answer Description
When you call the OpenAI Python SDK against an Azure OpenAI endpoint, you must tell the library that it is talking to Azure and reference the model by its deployment name, not by the base model name. Setting the environment variable OPENAI_API_TYPE=azure switches the SDK into Azure mode, and replacing the model parameter with deployment_id="gpt4-prod" points the request to the existing GPT-4 deployment. The other options either omit the required API type setting, use an invalid api-version value such as "latest", or swap in a different client library that would still need the same Azure-specific configuration.
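A minimal sketch of the corrected call, assuming the legacy (pre-1.0) openai package used in the question; the api-version shown is the dated value referenced in the options.

import os
import openai

# Switch the SDK into Azure mode; equivalent to setting OPENAI_API_TYPE=azure.
openai.api_type = "azure"
openai.api_key = os.getenv("AZURE_OPENAI_KEY")
openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
openai.api_version = "2024-02-15-preview"  # a dated Azure OpenAI api-version

response = openai.ChatCompletion.create(
    deployment_id="gpt4-prod",  # the deployment name, not the base model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(response["choices"][0]["message"]["content"])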
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why do you need to set OPENAI_API_TYPE to 'azure'?
What is the difference between the model parameter and the deployment_id parameter?
What is the purpose of appending the API version to the endpoint URL?
You need to build a new SaaS feature that automatically drafts personalized long-form marketing emails from structured customer data. The feature must use a hosted GPT-4 model, allow prompt engineering and deployment controls, and remain inside the Azure compliance boundary. Which Azure AI Foundry service should you choose?
Azure AI Custom Vision
Azure OpenAI Service
Azure AI Speech Service
Azure AI Form Recognizer
Answer Description
Azure OpenAI Service is the appropriate choice because it provides hosted large language models such as GPT-4, supports prompt engineering and fine-tuning, and is delivered entirely within the Azure compliance boundary. Form Recognizer focuses on document information extraction, Speech Service handles speech capabilities, and Custom Vision is intended for image classification or object detection; none of these supplies the required generative text functionality.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Azure OpenAI Service?
What is prompt engineering and why is it important for GPT-4?
What does 'Azure compliance boundary' mean?
You manage an Azure AI Foundry Service resource that hosts a production model. Several data scientists need to design Azure Monitor workbooks and explore platform metrics for the resource, but they must not be able to view or regenerate the resource's API keys or change any configuration. Which built-in Azure role should you assign to each data scientist at the resource scope?
Cognitive Services Contributor at the resource-group scope
Cognitive Services User at the Azure AI Foundry Service scope
Monitoring Reader at the Azure AI Foundry Service scope
Cognitive Services Data Reader at the Azure AI Foundry Service scope
Answer Description
Monitoring Reader grants read-only access to all monitoring data, including metrics and the ability to create private workbooks. The role includes only */read control-plane actions and therefore does not permit operations such as Microsoft.CognitiveServices/accounts/listkeys/action that would expose or rotate the resource keys. Cognitive Services User and Cognitive Services Contributor both include the list-keys action, while Cognitive Services Data Reader can read Cognitive Services data but cannot access Azure Monitor metrics or workbooks. Therefore, Monitoring Reader best meets the requirement.
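A rough sketch of what the data scientists can then do with only Monitoring Reader, using the azure-monitor-query package; the resource ID and metric name (TotalCalls) are assumptions for illustration.

from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

# Hypothetical resource ID for the Azure AI Foundry (Cognitive Services) account.
resource_id = ("/subscriptions/<sub-id>/resourceGroups/<rg>"
               "/providers/Microsoft.CognitiveServices/accounts/<foundry-resource>")

client = MetricsQueryClient(DefaultAzureCredential())

# Reading platform metrics needs only */read permissions, which Monitoring Reader
# grants; no listKeys action is ever invoked.
response = client.query_resource(resource_id, metric_names=["TotalCalls"],
                                 timespan=timedelta(hours=24))
for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.total)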
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the Monitoring Reader role in Azure?
What are Azure Monitor workbooks, and what are they used for?
Why doesn’t the Cognitive Services Data Reader role fit in this scenario?