AWS Certified AI Practitioner AIF-C01 Practice Question

In Amazon Bedrock, inference pricing for large language models is based on the number of input and output tokens that the service processes. When cost is the primary concern, which configuration change will most directly reduce the amount you are billed for each request?

  • Shorten the prompt and lower the maximum allowed output length.

  • Raise the top-k value to widen the token selection pool.

  • Enable streaming responses instead of batch delivery.

  • Increase the temperature so the model samples less-common tokens.

Domain: Applications of Foundation Models
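
Note: because on-demand Bedrock billing is driven by the token counts in each request and response, both the prompt length and the output-length cap feed directly into the per-request charge. The snippet below is a minimal sketch, assuming boto3 and the Bedrock Converse API, of how a request might cap the maximum output length and read back the token usage that pricing is based on; the model ID and prompt are illustrative placeholders.

```python
# Hypothetical sketch: capping billable output tokens with the Bedrock Converse API.
# The model ID and prompt are placeholders; AWS region and credentials are assumed
# to be configured in the environment.
import boto3

client = boto3.client("bedrock-runtime")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[
        {
            "role": "user",
            # A shorter prompt means fewer billable input tokens.
            "content": [{"text": "Summarize Amazon Bedrock pricing in two sentences."}],
        }
    ],
    # maxTokens caps the response length, bounding the billable output tokens.
    inferenceConfig={"maxTokens": 128},
)

# The usage block reports the token counts that on-demand pricing is based on.
usage = response["usage"]
print(usage["inputTokens"], usage["outputTokens"], usage["totalTokens"])
```

Sampling controls such as temperature and top-k shape which tokens are chosen, and streaming only changes how the response is delivered; none of these reduce the number of tokens processed.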