
AWS Certified AI Practitioner Practice Test (AIF-C01)

Use the form below to configure your AWS Certified AI Practitioner Practice Test (AIF-C01). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Questions
Number of questions in the practice test
Free users are limited to 20 questions; upgrade for unlimited questions
Seconds Per Question
Determines how long you have to finish the practice test
Exam Objectives
Which exam objectives should be included in the practice test

AWS Certified AI Practitioner AIF-C01 Information

The AWS Certified AI Practitioner (AIF-C01) certification verifies that you have a strong foundational understanding of artificial intelligence, machine learning, and generative AI, along with exposure to how these are implemented via AWS services. It’s aimed at those who may not be developers or ML engineers but who need to understand and contribute to AI/ML-related decisions and initiatives in their organizations. The exam tests your knowledge of AI/ML concepts, use cases, and how AWS’s AI/ML services map to various business problems.

Topics include basics of supervised/unsupervised learning, generative AI fundamentals, prompt engineering, evaluation metrics, responsible AI practices, and how AWS tools like Amazon SageMaker, Amazon Bedrock, Amazon Comprehend, Amazon Rekognition, and others support AI workflows. Questions are scenario-based and focused on choosing the right service or strategy, rather than hands-on coding or architecture design. After passing, you’ll be able to identify the proper AI/ML tool for a given business challenge, articulate trade-offs, and guide responsible deployment of AI solutions within AWS environments.

Having this certification shows stakeholders that you have a solid conceptual grasp of AI and AWS’s AI/ML ecosystem. It’s well suited for technical leads, solution architects, product managers, or anyone who interacts with AI/ML teams and wants credibility in AI strategy discussions. It also helps bridge the gap between technical teams and business stakeholders around what AI/ML can — and cannot — do in real scenarios.

  • Free AWS Certified AI Practitioner AIF-C01 Practice Test

  • 20 Questions
  • Unlimited
  • Fundamentals of AI and ML
  • Fundamentals of Generative AI
  • Applications of Foundation Models
  • Guidelines for Responsible AI
  • Security, Compliance, and Governance for AI Solutions
Question 1 of 20

In the context of transformer-based large language models, what does tokenization mainly do before the model processes user input?

  • Convert the input text into a sequence of numeric IDs that represent words or sub-word pieces.

  • Compress the model's parameters to lower memory usage.

  • Encrypt user prompts before they are transmitted to the model endpoint.

  • Divide the dataset into training, validation, and test subsets.

Question 2 of 20

A company wants to build an internal chat assistant that can answer employee questions about proprietary HR policies stored in a private document repository. The team does not want to retrain the foundation model but needs answers that accurately reference the latest documents. Which approach best meets these requirements?

  • Perform full fine-tuning of the foundation model on the HR policy documents and redeploy the tuned model.

  • Raise the model's temperature during inference so that it can generate more detailed answers.

  • Pre-train a new large language model from scratch using all HR policy documents as the training corpus.

  • Use Retrieval-Augmented Generation to retrieve relevant policy documents at runtime and pass them to the model as context.

Question 3 of 20

A team is configuring Guardrails for Amazon Bedrock to stop the model from returning any personal data, such as phone numbers or email addresses. Which guardrail capability should they enable?

  • Stop sequence token

  • PII redaction

  • Denied topics policy

  • Content moderation severity threshold

Question 4 of 20

A startup wants to test several foundation models but avoid buying GPUs or making long-term commitments. Which Amazon Bedrock benefit most directly delivers cost-effectiveness for this goal?

  • Mandatory multi-year GPU reservation contracts

  • Customer-supplied servers for hosting every model endpoint

  • A fixed monthly subscription regardless of workload size

  • Usage-based pricing with no infrastructure to manage

Question 5 of 20

An e-commerce company builds a customer-service chatbot that sends each user query to a hosted foundation model together with a hidden system prompt that defines business rules. Which prompt-engineering risk must the team mitigate to stop attackers from supplying input that overrides or replaces those hidden instructions?

  • Data poisoning

  • Prompt hijacking

  • Model underfitting

  • Vanishing gradients

Question 6 of 20

A content moderation team needs to search thousands of chat transcripts by meaning instead of exact keywords. Which generative AI concept makes this possible by converting each text segment into a high-dimensional numeric vector that can be compared for semantic similarity?

  • Chunking

  • Embeddings

  • Diffusion models

  • Tokenization

Question 7 of 20

A company needs to run a foundation model locally on mobile devices that have only 2 GB of RAM and no dedicated GPU. Which consideration best supports selecting a smaller, less complex model architecture for this use case?

  • Reducing memory and compute requirements to meet on-device latency and footprint limits

  • Leveraging emergent advanced reasoning that appears in multi-billion-parameter models

  • Achieving the widest possible multilingual accuracy across 200+ languages

  • Supporting very long input contexts of tens of thousands of tokens

Question 8 of 20

Which scenario is least appropriate for an AI/ML solution because a simple, deterministic approach already meets the requirement?

  • Converting product prices from U.S. dollars to euros using the daily Central Bank exchange rate.

  • Classifying incoming customer support emails into high, medium, or low urgency.

  • Detecting potentially fraudulent credit card transactions in real time.

  • Recommending complementary products to shoppers based on items already in their cart.

Question 9 of 20

A startup is integrating a generative-AI chatbot with Amazon Bedrock. The team must ensure that user prompts containing hateful or violent language are blocked before the request reaches the foundation model. Which Guardrails for Amazon Bedrock capability should they use to apply this safeguard to the input?

  • Turn on PII redaction for all prompts submitted to the model.

  • Add a custom word filter that redacts profanity from model responses only.

  • Create a denied topics policy that lists hate and violence as restricted subjects.

  • Enable a content filter on the user input for hate and violence categories.

Question 10 of 20

To get repeatable, highly consistent answers from a text foundation model, which adjustment to the temperature inference parameter should the developer make?

  • Set the temperature close to 0 to minimize randomness.

  • Raise the temperature above 1.5 to reduce variability.

  • Increase the temperature so the model explores more token options.

  • Leave temperature unchanged and lower the max-tokens limit instead.

Question 11 of 20

To maintain the integrity of training data stored in a single Amazon S3 bucket, a team wants every overwrite or deletion to retain the previous copy so it can be recovered later if corruption occurs. Which S3 feature should they activate?

  • Enable S3 Versioning on the bucket

  • Enable S3 Transfer Acceleration

  • Configure an S3 Lifecycle rule to move data to Amazon S3 Glacier Flexible Retrieval

  • Enable S3 Object Lock in Compliance mode

Question 12 of 20

A video-streaming company wants to automatically propose relevant movies to each user, based on the titles they have previously watched and rated. Which type of AI application best suits this requirement?

  • A recommendation system that generates personalized content suggestions

  • A speech recognition service that transcribes dialogues in real time

  • A fraud detection model that flags unusual account activity

  • An image classification system that labels movie poster images

Question 13 of 20

Which Amazon SageMaker feature creates a standardized Model Card that stores details such as model purpose, training data, performance metrics, and ethical considerations to support transparency and compliance reviews?

  • Amazon SageMaker Model Monitor

  • Amazon SageMaker Clarify

  • AWS Artifact

  • Amazon SageMaker Model Cards

Question 14 of 20

An AI team stores large training datasets in Amazon S3. Company policy states that any dataset older than 3 years must be removed automatically to meet data retention requirements. Which AWS feature will allow the team to enforce this policy without writing custom deletion scripts?

  • Amazon CloudWatch Logs retention settings

  • AWS Config managed rule

  • Amazon S3 lifecycle configuration

  • AWS Artifact

Question 15 of 20

A startup will fine-tune a foundation model in Amazon SageMaker using confidential customer chat transcripts stored in Amazon S3. To follow AWS data governance and security best practices, which action should the company take before starting the training job?

  • Attach an IAM role to the training job that allows read-only access only to the specific S3 paths containing the transcripts.

  • Embed AWS access keys for the S3 bucket in the training script's environment variables to simplify access.

  • Copy the transcripts to an S3 bucket with public-read permissions so the training cluster can download them without credentials.

  • Convert the transcripts to unencrypted CSV files before uploading them to the training cluster for faster processing.

Question 16 of 20

In the context of responsible AI on AWS, which characteristic best defines a transparent (white-box) model?

  • Its internal decision logic can be directly inspected and understood by humans.

  • It protects intellectual property by hiding model parameters from end users.

  • It delivers the highest predictive accuracy by learning complex non-linear patterns that are difficult to interpret.

  • It prevents adversarial attacks by obfuscating training data and weights.

Question 17 of 20

A startup releases a generative-AI image service without any content safeguards. Soon, hateful images are produced and shared publicly, causing users to delete their accounts and post negative reviews. According to responsible-AI guidance, which business risk does this scenario most clearly illustrate for the company?

  • Loss of customer trust

  • Violation of data residency requirements

  • Unexpected compute cost overruns

  • Intellectual property infringement

Question 18 of 20

A company finds that its customer-service chatbot mislabels requests from customers who write in a regional dialect, while accuracy for other customers remains high. What impact on demographic groups does this situation illustrate?

  • Higher misclassification rates for the underrepresented dialect caused by training data bias

  • Equal accuracy across all customer groups, indicating no measurable bias

  • Lower error for the underrepresented group as a result of effective regularization

  • Improved system throughput after endpoint scaling rather than any bias effect

Question 19 of 20

An e-commerce startup is building a customer-support chatbot on Amazon Bedrock. The bot must answer questions by using the company's product manuals stored in Amazon S3. To follow the Retrieval-Augmented Generation (RAG) pattern, which additional component should the team add to the workflow?

  • Create a vector store such as an Amazon OpenSearch Service index that holds embeddings and returns relevant passages before invoking the model.

  • Add an AWS Lambda function that sets the model's temperature to 0 for deterministic answers.

  • Configure an Amazon S3 Access Point so the model can read all manuals directly during inference.

  • Run an Amazon SageMaker training job to fine-tune the foundation model on the S3 documents.

Question 20 of 20

A developer using Amazon Bedrock wants a text generation model to immediately stop producing further tokens when the string "###END###" appears in the output. Which inference parameter should the developer configure to achieve this?

  • Stop sequence

  • Top-K sampling value

  • Maximum output tokens

  • Temperature