Microsoft Azure AI Engineer Associate AI-102 Practice Question

Your team is building a chat application that uses Azure OpenAI. Corporate policy requires that any incoming prompt with Hate or Sexual content whose severity score is 2 (Low) or higher be blocked before it can be forwarded to the model, and that jailbreak (prompt-injection) attacks be detected. Which action should you place at the very beginning of the request pipeline to meet this requirement?

  • Store the conversation in a database and run periodic batch reviews with Azure AI Content Safety after the session ends.

  • Prepend a strict system message instructing the model to refuse disallowed topics and run the chat at temperature 0.

  • Apply an llm-content-safety policy (or call the Content Safety text:analyze API) with shieldPrompt=true and category thresholds Hate=2 and Sexual=2, and reject the request if any rule is triggered.

  • Depend solely on the built-in Azure OpenAI completion content filter that runs after the model generates a response.
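The gating described in the third option can be sketched in a few lines. The response shapes below mirror the Content Safety `text:analyze` result (a `categoriesAnalysis` list of category/severity pairs) and the Prompt Shields result (`userPromptAnalysis.attackDetected`); the helper name `should_block` and the threshold table are illustrative, not part of any SDK, and the exact JSON fields should be verified against the API version you deploy.

```python
# Sketch of the pre-model gate: reject the request if Hate or Sexual
# severity is 2 (Low) or higher, or if Prompt Shields detects a jailbreak.
# Field names follow the Content Safety text:analyze / shieldPrompt
# responses; helper names here are illustrative only.

BLOCKED_CATEGORIES = {"Hate": 2, "Sexual": 2}  # category -> minimum severity to block

def should_block(analyze_result: dict, shield_result: dict) -> bool:
    """Return True if the prompt must be rejected before reaching the model."""
    # text:analyze returns a list of {"category": ..., "severity": ...} entries.
    for entry in analyze_result.get("categoriesAnalysis", []):
        threshold = BLOCKED_CATEGORIES.get(entry["category"])
        if threshold is not None and entry["severity"] >= threshold:
            return True
    # Prompt Shields flags jailbreak / prompt-injection attempts on the user prompt.
    if shield_result.get("userPromptAnalysis", {}).get("attackDetected"):
        return True
    return False

# Example: severity 2 (Low) Hate content meets the policy threshold and is blocked.
analysis = {"categoriesAnalysis": [{"category": "Hate", "severity": 2},
                                   {"category": "Sexual", "severity": 0}]}
shields = {"userPromptAnalysis": {"attackDetected": False}}
print(should_block(analysis, shields))  # True
```

Because this check runs before the prompt is forwarded, it satisfies both requirements in one step, which is why relying only on the post-generation completion filter (the last option) does not meet the policy.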

Objective: Plan and manage an Azure AI solution