
AWS Certified AI Practitioner Practice Test (AIF-C01)

Use the form below to configure your AWS Certified AI Practitioner Practice Test (AIF-C01). The practice test can be configured to only include certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Questions
Number of questions in the practice test
Free users are limited to 20 questions; upgrade for unlimited access
Seconds Per Question
Determines how long you have to finish the practice test
Exam Objectives
Which exam objectives should be included in the practice test

AWS Certified AI Practitioner AIF-C01 Information

The AWS Certified AI Practitioner (AIF-C01) certification verifies that you have a strong foundational understanding of artificial intelligence, machine learning, and generative AI, along with exposure to how these are implemented via AWS services. It’s aimed at those who may not be developers or ML engineers but who need to understand and contribute to AI/ML-related decisions and initiatives in their organizations. The exam tests your knowledge of AI/ML concepts, use cases, and how AWS’s AI/ML services map to various business problems.

Topics include basics of supervised/unsupervised learning, generative AI fundamentals, prompt engineering, evaluation metrics, responsible AI practices, and how AWS tools like Amazon SageMaker, Amazon Bedrock, Amazon Comprehend, Amazon Rekognition, and others support AI workflows. Questions are scenario-based and focused on choosing the right service or strategy, rather than hands-on coding or architecture design. After passing, you’ll be able to identify the proper AI/ML tool for a given business challenge, articulate trade-offs, and guide responsible deployment of AI solutions within AWS environments.

Having this certification shows stakeholders that you have a solid conceptual grasp of AI and AWS’s AI/ML ecosystem. It’s well suited for technical leads, solution architects, product managers, or anyone who interacts with AI/ML teams and wants credibility in AI strategy discussions. It also helps bridge the gap between technical teams and business stakeholders around what AI/ML can — and cannot — do in real scenarios.

  • Free AWS Certified AI Practitioner AIF-C01 Practice Test
  • 20 Questions
  • Unlimited
  • Fundamentals of AI and ML
  • Fundamentals of Generative AI
  • Applications of Foundation Models
  • Guidelines for Responsible AI
  • Security, Compliance, and Governance for AI Solutions
Question 1 of 20

A startup wants to build a chatbot that can answer user questions by referencing its existing PDF manuals, without retraining or fine-tuning the underlying foundation model. Which approach available in Amazon Bedrock best satisfies this need?

  • Apply Reinforcement Learning from Human Feedback (RLHF) to teach the model the manual content.

  • Use Retrieval-Augmented Generation (RAG) with indexed embeddings of the manuals.

  • Perform full fine-tuning of the foundation model on the PDF manuals.

  • Pre-train a new foundation model from scratch using the manuals as training data.
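To see why retrieval works without touching the model's weights, here is a minimal RAG retrieval sketch. The manual chunks, question, and bag-of-words "embedding" are all toy stand-ins; a real pipeline would use an embedding model and a vector store (for example, via Knowledge Bases for Amazon Bedrock).

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": bag-of-words term counts. A real RAG pipeline would
    # call an embedding model (for example, one hosted on Amazon Bedrock).
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = lambda v: math.sqrt(sum(n * n for n in v.values()))
    na, nb = norm(a), norm(b)
    return dot / (na * nb) if na and nb else 0.0

# Chunks that would come from the company's PDF manuals (hypothetical text),
# indexed ahead of time.
chunks = [
    "To reset the device hold the power button for ten seconds",
    "The warranty covers manufacturing defects for two years",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

def retrieve(question, k=1):
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

# The retrieved chunk is injected into the prompt at inference time;
# the foundation model itself is never retrained or fine-tuned.
question = "How do I reset the device?"
context = retrieve(question)[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(context)
```

The key point for the exam: only the prompt changes per request, which is why RAG needs no fine-tuning, RLHF, or pre-training.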

Question 2 of 20

During inference, how do diffusion models such as Stable Diffusion generally produce a new image from a text prompt?

  • They begin with random noise and iteratively remove noise through learned denoising steps until an image appears.

  • They retrieve the closest matching image from a database and apply style transfer to fit the prompt.

  • They encode the prompt into a latent vector and decode it once without any iterative refinement.

  • They start with a blank canvas and sequentially draw pixels based solely on attention weights.
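The denoising loop in the correct description can be illustrated with a toy sketch. The "image" is four numbers and the denoiser is a hand-written nudge toward a target; in a real diffusion model a neural network predicts the noise to remove at each step, conditioned on the text prompt.

```python
import random

random.seed(0)

# Hypothetical "clean image": a tiny 4-pixel target the sketch denoises toward.
target = [0.2, 0.8, 0.5, 0.1]

def denoise_step(x, step, total):
    # Stand-in for the learned denoiser: nudge each value toward the target.
    # A real diffusion model predicts the noise with a neural network instead.
    return [xi + (t - xi) / (total - step) for xi, t in zip(x, target)]

# Inference starts from pure random noise...
image = [random.gauss(0, 1) for _ in target]
steps = 50
for step in range(steps):
    image = denoise_step(image, step, steps)

# ...and after iterative refinement the noise has resolved into the image.
print([round(p, 2) for p in image])
```

The shape of the computation is what matters: noise in, many small learned denoising steps, image out.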

Question 3 of 20

A team notices that a text foundation model hosted on Amazon Bedrock often returns lengthy answers that include extra, unrelated facts. Which prompt-engineering best practice will most directly help the team obtain only the information they need?

  • Use zero-shot prompting without additional context or constraints.

  • Rewrite the prompt to include clear, precise instructions that limit the scope and length of the response.

  • Raise the temperature setting to encourage more diverse token selection.

  • Insert several unrelated examples to make the prompt longer and more detailed.
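The best practice in question is easiest to see side by side. The question text below is a made-up example; the point is that explicit scope and length constraints in the prompt text itself are what rein in verbose answers.

```python
question = "What is Amazon Bedrock?"

# Vague prompt: leaves the model free to add unrelated facts.
vague_prompt = question

# Constrained prompt: clear, precise instructions that limit the
# scope and length of the response.
constrained_prompt = (
    "Answer the question in at most two sentences. "
    "Only describe what the service does; do not mention pricing, "
    "history, or other AWS services.\n\n"
    f"Question: {question}"
)
print(constrained_prompt)
```

Note that raising temperature (another option above) does the opposite: it increases variability rather than focusing the output.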

Question 4 of 20

A company wants to automatically extract key phrases and overall sentiment from incoming customer support emails without building custom models. Which AWS managed AI service meets this requirement?

  • Amazon Comprehend

  • Amazon Polly

  • Amazon Rekognition

  • Amazon Translate

Question 5 of 20

A financial services company plans to deploy a generative-AI chatbot with Amazon Bedrock. Regulations require that all customer prompts remain private and must never be stored or reused for model training. Which built-in Bedrock capability directly addresses this compliance requirement?

  • It guarantees sub-100-millisecond response times from any AWS Region worldwide.

  • It automatically generates detailed GDPR audit reports for every prompt.

  • It encrypts customer data and ensures the content is not retained or used for model training.

  • It performs all inference on the company's on-premises hardware through AWS Outposts.

Question 6 of 20

An AWS team must pick a model for a fraud-detection project that will be reviewed by regulators. They can choose a simple linear model or a deep neural network. Which tradeoff between interpretability and model performance should the team expect?

  • Greater interpretability with the linear model but potentially lower predictive accuracy compared to the neural network.

  • Transparent models usually demand significantly more training data to remain interpretable.

  • Higher interpretability and higher accuracy when the team selects the deep neural network.

  • Transparent models typically have slower inference because explanations must be generated after each prediction.

Question 7 of 20

A binary classification dataset contains 95% of records labeled as class A and only 5% as class B. According to responsible AI practices, how should this dataset's class distribution be characterized?

  • Randomly stratified dataset

  • Augmented dataset

  • Imbalanced dataset

  • Balanced dataset
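The distribution in the question can be checked with a few lines. The 80% threshold below is an illustrative rule of thumb, not a fixed standard; what counts as "imbalanced" depends on the problem.

```python
from collections import Counter

# Hypothetical label column with the 95% / 5% split from the question.
labels = ["A"] * 95 + ["B"] * 5

counts = Counter(labels)
total = len(labels)
distribution = {cls: n / total for cls, n in counts.items()}
print(distribution)  # {'A': 0.95, 'B': 0.05}

# A common quick check: flag the dataset as imbalanced when the
# majority class dominates beyond some threshold (here, 80%).
is_imbalanced = max(distribution.values()) > 0.80
print(is_imbalanced)  # True
```

Imbalance like this is a responsible-AI concern because a model can score 95% accuracy by always predicting class A while never detecting class B.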

Question 8 of 20

A retail company plans to use a large language model on AWS to create automated product descriptions. Which action best mitigates the risk of releasing inaccurate text to customers?

  • Implement a human approval workflow to review every generated description before it is published.

  • Increase the model's temperature setting to encourage more diverse responses.

  • Limit the prompt to a smaller context window to shorten generation time.

  • Disable token streaming so the model waits to send the full response.

Question 9 of 20

A development team notices that their machine-learning model shows very low error on the training dataset but much higher error when evaluated on new customer data. Which condition is this most likely an example of?

  • A well-balanced model with good generalization

  • High variance (overfitting)

  • High bias (underfitting)

  • Data leakage between training and test sets
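The train/test error gap that signals overfitting can be reproduced with a deliberately high-variance model. The data below is synthetic and the 1-nearest-neighbor regressor is chosen only because it memorizes the training set perfectly, which is exactly the failure mode the question describes.

```python
import random

random.seed(42)

def noisy_target(x):
    # Underlying relationship plus noise, standing in for real customer data.
    return 2 * x + random.gauss(0, 0.5)

train = [(x / 10, noisy_target(x / 10)) for x in range(10)]
test = [(x / 10 + 0.05, noisy_target(x / 10 + 0.05)) for x in range(10)]

def predict_1nn(x):
    # A 1-nearest-neighbor regressor memorizes the training set -- a
    # classic high-variance model.
    return min(train, key=lambda p: abs(p[0] - x))[1]

def mse(data):
    return sum((predict_1nn(x) - y) ** 2 for x, y in data) / len(data)

print(f"train error: {mse(train):.4f}")  # exactly 0 -- perfectly memorized
print(f"test error:  {mse(test):.4f}")   # noticeably higher
```

Near-zero training error with much higher test error is the signature of high variance (overfitting), as opposed to high bias, where both errors stay high.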

Question 10 of 20

A company wants to limit the carbon footprint of a new generative-AI project on AWS. Which model-selection practice best supports this environmental goal?

  • Choose a model that requires dedicated on-premises hardware running continuously, even when not serving requests.

  • Train an extra-large model from scratch on multiple GPU clusters to maximize top-end accuracy.

  • Select the smallest pre-trained model that meets the solution's accuracy needs instead of training a larger model from scratch.

  • Run the same model-training job in several AWS Regions to provide redundancy during development.

Question 11 of 20

When assessing a foundation model's responses for subjective qualities such as helpfulness and tone, which evaluation approach provides the most reliable insight?

  • BLEU score comparison against reference answers

  • BERTScore semantic similarity measurement

  • ROUGE-L recall on a held-out test set

  • Human evaluation performed by domain experts

Question 12 of 20

During due diligence for a new AI workload on AWS, a startup wants an independent audit report that evaluates AWS controls for security, availability, and confidentiality. Which regulatory compliance standard provides these Service Organization Control (SOC) reports?

  • PCI DSS

  • GDPR

  • SOC (System and Organization Controls)

  • ISO/IEC 27001

Question 13 of 20

A developer wants to rely on the built-in protections of Amazon Bedrock Guardrails so that their chatbot automatically refuses any prompt seeking instructions for criminal activity. Which Guardrails policy category should they enable and set to Block?

  • Misconduct content filter

  • Sensitive-information (PII) filter

  • Word filter for profanity

  • Denied topics policy

Question 14 of 20

Why might a company that needs high transparency and explainability select an open source foundation model available through Amazon SageMaker JumpStart?

  • The company can examine and audit the model's code and architecture to understand how predictions are produced.

  • The model automatically scales across all AWS Regions without any configuration effort.

  • The model includes proprietary training data that remains hidden from users for privacy reasons.

  • The open source license guarantees higher accuracy than any closed-source alternative.

Question 15 of 20

A team plans to deploy a generative AI chatbot on AWS. To meet responsible AI requirements for privacy and security, which action MOST directly protects users' personally identifiable information (PII) that might appear in prompts or responses?

  • Publish a model card describing the model's intended use and evaluation metrics.

  • Enable word filters that block profanity in user input.

  • Apply a PII redaction rule in Guardrails for Amazon Bedrock.

  • Run a post-training bias analysis using Amazon SageMaker Clarify.
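To make the PII-redaction option concrete, here is a toy stand-in for what a managed PII filter does: detect sensitive patterns in prompt or response text and mask them. The two regexes below cover only email addresses and one phone format; the actual sensitive-information filters in Guardrails for Amazon Bedrock recognize many more PII types and need no hand-written patterns.

```python
import re

# Toy redaction rules (illustrative only -- a managed PII filter
# handles many more entity types and formats).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def redact_pii(text):
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact me at jane.doe@example.com or 555-123-4567 please."
print(redact_pii(prompt))
# Contact me at [EMAIL] or [PHONE] please.
```

Redaction acts directly on the data in flight, which is why it protects PII more directly than documentation (model cards) or bias analysis.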

Question 16 of 20

A data science team trains models in Amazon SageMaker using sensitive data stored in Amazon S3. The company must guarantee that all traffic between SageMaker and the S3 bucket stays on the AWS network and never goes over the public internet. What should the team do?

  • Enable Amazon Macie to classify objects in the S3 bucket.

  • Create an Amazon S3 interface VPC endpoint using AWS PrivateLink and route SageMaker traffic through it.

  • Enable server-side encryption with Amazon S3-managed keys (SSE-S3) on the bucket.

  • Attach an IAM role to SageMaker that grants s3:GetObject and s3:PutObject permissions.

Question 17 of 20

Which prompt-engineering risk occurs when attackers insert malicious instructions into a model's context, such as data retrieved through RAG, so that subsequent prompts are steered toward incorrect or harmful answers?

  • Prompt poisoning

  • Model drift

  • Prompt hijacking

  • Jailbreaking

Question 18 of 20

A data science team uses several Amazon SageMaker notebook IAM roles. The team wants to apply the same custom read-only Amazon S3 permissions to all these roles without copying identical inline policies into each one. Which IAM feature should they choose?

  • Use a service control policy in AWS Organizations to grant access.

  • Attach a customer managed policy to each role.

  • Add the permissions to every role as an inline policy.

  • Configure a role trust policy with the required S3 permissions.
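The reusable-permissions idea behind a customer managed policy is just a single policy document attached to many roles. The sketch below builds such a document in Python; the bucket name is hypothetical, while the `Version`, `Statement`, `Action`, and `Resource` fields follow the standard IAM policy grammar.

```python
import json

# A customer managed policy is written once and then attached to every
# notebook role, instead of duplicating inline policies per role.
read_only_s3_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-training-data",      # hypothetical bucket
                "arn:aws:s3:::example-training-data/*",
            ],
        }
    ],
}
print(json.dumps(read_only_s3_policy, indent=2))
```

Operationally, the policy would be created once (e.g. `aws iam create-policy`) and then attached to each SageMaker notebook role (`aws iam attach-role-policy`), so a later change to the policy propagates to all roles at once.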

Question 19 of 20

A company operates a customer-service chatbot that calls an Amazon Bedrock foundation model around the clock at a steady rate of several requests per second. They need consistently low latency and want to minimize the cost per request. Which pricing approach best meets these requirements?

  • Continue using On-Demand pay-per-request invocations

  • Configure Provisioned Throughput for the model endpoint

  • Deploy the model in an additional Region and enable latency-based routing

  • Switch to a larger model with a higher context length
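The economics behind the Provisioned Throughput option come down to simple arithmetic. All prices below are HYPOTHETICAL placeholders, not real Amazon Bedrock rates; the point is only that a steady, round-the-clock load can make a fixed hourly commitment cheaper per request than pay-per-request pricing.

```python
# Back-of-the-envelope comparison with hypothetical prices -- real Bedrock
# pricing varies by model and Region; always check the current price list.
requests_per_second = 5
seconds_per_month = 30 * 24 * 3600
monthly_requests = requests_per_second * seconds_per_month

on_demand_price_per_request = 0.002   # hypothetical
provisioned_price_per_hour = 20.0     # hypothetical, one model unit
hours_per_month = 30 * 24

on_demand_cost = monthly_requests * on_demand_price_per_request
provisioned_cost = provisioned_price_per_hour * hours_per_month

print(f"on-demand:   ${on_demand_cost:,.0f}/month")
print(f"provisioned: ${provisioned_cost:,.0f}/month")
```

With bursty or low traffic the comparison flips, which is why On-Demand remains the right default for irregular workloads.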

Question 20 of 20

Within machine learning terminology, what is meant by a "model" after the training process is complete?

  • The mathematical artifact that encodes learned patterns and can generate predictions on new data

  • The algorithm or statistical procedure chosen to learn from data

  • The dataset supplied to the training job

  • The compute environment (CPU or GPU instances) used during training