AWS Certified AI Practitioner AIF-C01 Practice Question

A user wants a foundation model to reveal restricted content. Which prompt technique is most often used to jailbreak the model by overriding its built-in safety instructions?

  • Adding a stop sequence that ends the response at a specific token

  • Applying Top-K sampling to limit the token selection pool

  • Reducing the temperature parameter to 0.1 for deterministic output

  • Injecting a new instruction that tells the model to ignore all earlier safety constraints

AWS Certified AI Practitioner AIF-C01
Applications of Foundation Models
Your Score:
Settings & Objectives
Random Mixed
Questions are selected randomly from all chosen topics, with a preference for those you haven’t seen before. You may see several questions from the same objective or domain in a row.
Rotate by Objective
Questions cycle through each objective or domain in turn, helping you avoid long streaks of questions from the same area. You may see some repeat questions, but the distribution will be more balanced across topics.

Check or uncheck an objective to set which questions you will receive.

Bash, the Crucial Exams Chat Bot
AI Bot