Microsoft Azure AI Engineer Associate AI-102 Practice Question

In an Azure AI Foundry prompt flow that streams technical summaries of internal design documents from a single GPT-4 Turbo node, you must add a model-reflection step that flags hallucinations before the response is returned. Which configuration best reduces hallucinations without adding noticeable latency?

  • Insert a second GPT-4 Turbo node with temperature set to 0 that critiques the draft summary for factual errors and either approves it or returns a list of corrections.

  • Replace the reflection step with a second pass that regenerates the entire summary using GPT-4o in a separate deployment.

  • Increase the temperature of the existing generation node to 1.2 to encourage diverse answers, then stream the output.

  • Disable streaming on the generation node and wait for the full response before invoking any reflection logic.

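For reference, below is a minimal sketch of the reflection pattern described in the first option: a temperature-0 reviewer call that checks the streamed draft against the source document before the flow returns it. It assumes the openai Python SDK for Azure OpenAI; the deployment name, environment variables, API version, and prompts are illustrative placeholders, and in prompt flow this logic would typically live inside a Python tool node.

```python
# Sketch of a low-temperature critique step for a drafted summary.
# Deployment name, env vars, API version, and prompts are assumptions.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # assumed env var
    api_key=os.environ["AZURE_OPENAI_API_KEY"],          # assumed env var
    api_version="2024-06-01",
)


def critique_summary(source_doc: str, draft_summary: str) -> str:
    """Ask a temperature-0 reviewer to approve the draft or list corrections."""
    response = client.chat.completions.create(
        model="gpt-4-turbo-reviewer",  # assumed deployment name
        temperature=0,                 # deterministic critique, no creative drift
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a fact-checking reviewer. Compare the draft summary "
                    "against the source document. Reply APPROVED if every claim is "
                    "supported; otherwise return a numbered list of corrections."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Source document:\n{source_doc}\n\n"
                    f"Draft summary:\n{draft_summary}"
                ),
            },
        ],
    )
    return response.choices[0].message.content
```

Because the critique node only reads the draft and emits either an approval token or a short correction list, it adds one small completion to the flow rather than a full second generation pass.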