
AWS Certified AI Practitioner Practice Test (AIF-C01)


AWS Certified AI Practitioner AIF-C01 Information

The AWS Certified AI Practitioner (AIF-C01) certification verifies that you have a strong foundational understanding of artificial intelligence, machine learning, and generative AI, along with exposure to how these are implemented via AWS services. It’s aimed at those who may not be developers or ML engineers but who need to understand and contribute to AI/ML-related decisions and initiatives in their organizations. The exam tests your knowledge of AI/ML concepts, use cases, and how AWS’s AI/ML services map to various business problems.

Topics include basics of supervised/unsupervised learning, generative AI fundamentals, prompt engineering, evaluation metrics, responsible AI practices, and how AWS tools like Amazon SageMaker, Amazon Bedrock, Amazon Comprehend, Amazon Rekognition, and others support AI workflows. Questions are scenario-based and focused on choosing the right service or strategy, rather than hands-on coding or architecture design. After passing, you’ll be able to identify the proper AI/ML tool for a given business challenge, articulate trade-offs, and guide responsible deployment of AI solutions within AWS environments.

Having this certification shows stakeholders that you have a solid conceptual grasp of AI and AWS’s AI/ML ecosystem. It’s well suited for technical leads, solution architects, product managers, or anyone who interacts with AI/ML teams and wants credibility in AI strategy discussions. It also helps bridge the gap between technical teams and business stakeholders around what AI/ML can — and cannot — do in real scenarios.

Exam domains covered:

  • Fundamentals of AI and ML
  • Fundamentals of Generative AI
  • Applications of Foundation Models
  • Guidelines for Responsible AI
  • Security, Compliance, and Governance for AI Solutions


Question 1 of 20

Within the context of machine learning, what does the term "bias" most accurately describe when evaluating a model's predictions against real-world outcomes?

  • The total number of trainable parameters present in a specific neural network layer.

  • The iterative procedure of adjusting model weights during backpropagation.

  • A consistent error introduced by algorithmic assumptions or training data that favors certain outcomes in the model's predictions.

  • Random fluctuations in predictions that arise solely from noisy data samples.

Question 2 of 20

An e-commerce retailer adds a generative-AI product-recommendation banner to its website. The team must pick a business metric to confirm that the model is driving more revenue. Which metric is MOST appropriate?

  • Token usage cost per model inference

  • Conversion rate of visitors who complete a purchase

  • Embedding vector dimensionality used by the model

  • Average handle time in customer-support chats

Question 3 of 20

An organization has fine-tuned a text-generation foundation model and now wants to expose it through a scalable HTTPS endpoint without provisioning or managing any GPU instances. Which AWS service is the most appropriate choice for this deployment stage?

  • AWS CloudFormation

  • Amazon Bedrock

  • Amazon S3

  • Amazon EC2 Auto Scaling

Question 4 of 20

An online media company stores thousands of recorded podcasts in Amazon S3. Editors need a managed AWS service that can automatically convert the speech in these audio files to searchable text without building custom ML models. Which service best fits this requirement?

  • Amazon Rekognition

  • Amazon Comprehend

  • Amazon Transcribe

  • Amazon Polly

Question 5 of 20

A company is deciding whether to invest in an AI/ML project. In which situation is an AI/ML solution LEAST appropriate and likely to add unnecessary cost and complexity?

  • The security team needs to detect fraudulent credit card transactions that change over time.

  • The finance team must calculate quarterly sales tax using fixed percentages defined by law.

  • The marketing team wants to personalize product recommendations on an e-commerce site.

  • The supply chain team wants to predict next month's inventory requirements using past sales data.

Question 6 of 20

In the ML development lifecycle, a data scientist wants to automatically search different learning rates, batch sizes, and tree depths to find the most accurate model before deployment. Which Amazon SageMaker feature directly supports this stage?

  • Amazon SageMaker Automatic Model Tuning

  • Amazon SageMaker Batch Transform

  • Amazon SageMaker Model Package

  • Amazon SageMaker Ground Truth
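To make the hyperparameter-search stage concrete, here is a minimal sketch of the idea behind automatic model tuning: try configurations from a search space and keep the one with the best score. The objective function below is a dummy stand-in for a real training job, and SageMaker Automatic Model Tuning uses smarter strategies (such as Bayesian optimization) over real training runs rather than this exhaustive grid search.

```python
import itertools

def dummy_objective(learning_rate, batch_size, max_depth):
    # Fake "validation accuracy" that peaks at lr=0.1, batch=32, depth=6.
    return (1.0
            - abs(learning_rate - 0.1)
            - abs(batch_size - 32) / 100
            - abs(max_depth - 6) / 10)

# Illustrative search space mirroring the question: learning rates,
# batch sizes, and tree depths.
space = {
    "learning_rate": [0.01, 0.1, 0.3],
    "batch_size": [16, 32, 64],
    "max_depth": [3, 6, 9],
}

# Evaluate every combination and keep the best-scoring configuration.
best = max(
    itertools.product(*space.values()),
    key=lambda cfg: dummy_objective(*cfg),
)
```

In a real tuning job, each evaluation launches a full training run, which is why the managed service's smarter search strategies matter: they find good configurations with far fewer trials than a grid.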

Question 7 of 20

Within the AWS Cloud learning path, which of the following statements best defines artificial intelligence (AI) in general computing?

  • Computer systems designed to carry out tasks that typically require human intelligence, such as understanding language or recognizing images.

  • A specialized branch of machine learning that relies on multi-layer neural networks to process very large datasets.

  • A cloud-based database service optimized for storing structured, text, and image data for analytics workloads.

  • Algorithms that automatically adjust their weights through back-propagation to fit labeled datasets.

Question 8 of 20

A video-streaming company wants to automatically propose relevant movies to each user, based on the titles they have previously watched and rated. Which type of AI application best suits this requirement?

  • A fraud detection model that flags unusual account activity

  • A recommendation system that generates personalized content suggestions

  • A speech recognition service that transcribes dialogues in real time

  • An image classification system that labels movie poster images

Question 9 of 20

A retail startup wants to produce high-quality product photos simply by entering short text prompts that describe each desired scene. Which generative AI use case best fits this need?

  • Translation

  • Recommendation engine

  • Summarization

  • Image generation

Question 10 of 20

A retailer launches a generative-AI chatbot that suggests personalized add-on items during checkout. The company wants one metric that shows whether the bot is boosting revenue from ongoing customer relationships, not just single orders. Which metric should they track?

  • Customer acquisition cost (CAC)

  • Average order value (AOV)

  • Net promoter score (NPS)

  • Customer lifetime value (CLV)

Question 11 of 20

Which prompt-engineering risk occurs when attackers insert malicious instructions into a model's context, such as data retrieved by RAG, so that subsequent prompts are influenced to produce incorrect or harmful answers?

  • Prompt poisoning

  • Prompt hijacking

  • Model drift

  • Jailbreaking

Question 12 of 20

A startup needs a fully managed, MongoDB-compatible database that can store text embeddings and support similarity searches with a native vector index. Which AWS service should they choose?

  • Amazon DynamoDB

  • Amazon DocumentDB

  • Amazon ElastiCache for Redis

  • Amazon Neptune

Question 13 of 20

A startup wants to train a machine learning model that detects customer sentiment in live chat transcripts. Which type of training data should the team collect and prepare for this use case?

  • Labeled text documents annotated with sentiment categories

  • Unlabeled time-series sensor readings collected at regular intervals

  • Labeled JPEG images showing different facial expressions

  • Numeric tabular data of customer purchase amounts with column headers

Question 14 of 20

Within artificial intelligence terminology, which statement best describes the core idea of machine learning (ML)?

  • Specialized hardware acceleration used to run deep neural networks more efficiently.

  • A technique that allows computers to automatically learn patterns from data and improve without explicit programming of each rule.

  • The process of manually labeling datasets so that an algorithm can be trained later.

  • A programming style where every decision rule is explicitly encoded by developers.

Question 15 of 20

An ecommerce startup is building a customer-support chatbot on Amazon Bedrock. The bot must answer questions by using the company's product manuals stored in Amazon S3. To follow the Retrieval Augmented Generation (RAG) pattern, which additional component should the team add to the workflow?

  • Configure an Amazon S3 Access Point so the model can read all manuals directly during inference.

  • Run an Amazon SageMaker training job to fine-tune the foundation model on the S3 documents.

  • Add an AWS Lambda function that sets the model's temperature to 0 for deterministic answers.

  • Create a vector store such as an Amazon OpenSearch Service index that holds embeddings and returns relevant passages before invoking the model.
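The RAG pattern in this question can be sketched end to end: embed the query, retrieve the most similar passage from a vector store, and prepend it to the prompt before invoking the model. Everything below is illustrative: the bag-of-words "embedding", the toy in-memory index, and the prompt template all stand in for a real embedding model, an Amazon OpenSearch Service index, and an Amazon Bedrock invocation.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding" standing in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical product-manual passages that would live in the vector store.
passages = [
    "To reset the router, hold the recessed button for ten seconds.",
    "The blender's warranty covers motor defects for two years.",
    "Pair the headphones by holding the power button until the light blinks.",
]
index = [(p, embed(p)) for p in passages]

def retrieve(query, k=1):
    # Return the k passages most similar to the query.
    q = embed(query)
    ranked = sorted(index, key=lambda pv: cosine(q, pv[1]), reverse=True)
    return [p for p, _ in ranked[:k]]

def build_prompt(query):
    # Augment the prompt with retrieved context before model invocation.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How do I reset the router?")
```

The key design point the correct answer tests: retrieval happens before inference, so the foundation model answers from fresh, company-specific context without any fine-tuning.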

Question 16 of 20

While estimating generative AI costs, a team learns that Amazon Bedrock bills for model inference using a token-based pricing model. Which factor will have the greatest direct impact on those token charges?

  • The total count of IAM users authorized to invoke the model endpoint

  • The total number of input and output tokens processed in each model invocation

  • The number of AWS Regions where the Bedrock API is called

  • The amount of data stored in Amazon S3 buckets that the application reads
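A back-of-envelope calculation shows why token counts dominate the bill. The per-1,000-token rates below are made-up placeholders, not actual Amazon Bedrock pricing; consult the current price list for real numbers.

```python
# Assumed placeholder rates, NOT real Bedrock pricing.
INPUT_RATE_PER_1K = 0.003   # USD per 1,000 input tokens
OUTPUT_RATE_PER_1K = 0.015  # USD per 1,000 output tokens

def invocation_cost(input_tokens, output_tokens):
    # Cost scales linearly with tokens processed, in both directions.
    return (input_tokens / 1000 * INPUT_RATE_PER_1K
            + output_tokens / 1000 * OUTPUT_RATE_PER_1K)

# One invocation with a 2,000-token prompt and a 500-token completion.
cost = invocation_cost(2000, 500)
```

Neither IAM user counts, Regions, nor S3 storage appear anywhere in this formula, which is exactly why the token total is the factor with the greatest direct impact.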

Question 17 of 20

A team will fine-tune a foundation language model for a retailer's customer-service chatbot. Which data selection practice best prepares the dataset for responsible fine-tuning?

  • Collect recent chat transcripts, redact personal identifiers, and remove conversations that violate company policy.

  • Keep all historical chat logs unchanged to maximize the size of the training set.

  • Use only publicly available Wikipedia articles about retail terminology.

  • Generate entirely synthetic conversations with the current model and include them without manual review.

Question 18 of 20

A startup is testing a new text-generation foundation model to confirm that its replies sound polite and helpful. Which evaluation method is most appropriate for measuring these subjective qualities?

  • Compute the BLEU score of each response compared with a reference answer.

  • Record the model's average inference latency in milliseconds.

  • Have human reviewers rate the model's responses against a qualitative rubric.

  • Count the total number of output tokens produced per response.

Question 19 of 20

A small startup plans to add an AI-powered brainstorming tool to its web app. The team would rather invoke a managed foundation model with plain-language prompts than build, train, and host its own machine-learning pipeline. Which generative AI benefit does this scenario highlight?

  • Regional coverage

  • Simplicity

  • Interpretability

  • Redundancy

Question 20 of 20

A developer must prevent an LLM from returning personal data in its responses. Which prompt-engineering practice provides this guardrail at the prompt level?

  • Insert a negative prompt that instructs the model to exclude any personal data.

  • Increase the maximum output tokens so responses can fully elaborate.

  • Provide chain-of-thought examples so the model explains its reasoning step-by-step.

  • Set the temperature parameter close to 0 to make outputs more deterministic.
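A prompt-level guardrail like the negative prompt in the correct answer can be as simple as a fixed instruction prepended to every request. The wording and template below are illustrative; production systems typically layer this with service-side content filters rather than relying on the prompt alone.

```python
# Illustrative negative-prompt guardrail prepended to every request.
GUARDRAIL = (
    "Do not include any personal data (names, emails, phone numbers, "
    "addresses, or account identifiers) in your response."
)

def guarded_prompt(user_request):
    # Combine the standing guardrail instruction with the user's request.
    return f"{GUARDRAIL}\n\nUser request: {user_request}"

prompt = guarded_prompt("Summarize the latest support tickets.")
```

Note how the distractors differ in kind: token limits, chain-of-thought examples, and temperature all shape how the model answers, but only the negative instruction tells it what content to exclude.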