AWS Certified AI Practitioner AIF-C01 Practice Question

Which risk of prompt engineering occurs when attackers insert malicious instructions into a model's context, such as data retrieved by RAG, so future prompts are influenced to produce incorrect or harmful answers?

  • Prompt hijacking

  • Prompt poisoning

  • Jailbreaking

  • Model drift

AWS Certified AI Practitioner AIF-C01
Applications of Foundation Models
Your Score:
Settings & Objectives
Random Mixed
Questions are selected randomly from all chosen topics, with a preference for those you haven’t seen before. You may see several questions from the same objective or domain in a row.
Rotate by Objective
Questions cycle through each objective or domain in turn, helping you avoid long streaks of questions from the same area. You may see some repeat questions, but the distribution will be more balanced across topics.

Check or uncheck an objective to set which questions you will receive.

Bash, the Crucial Exams Chat Bot
AI Bot