AWS Certified AI Practitioner Practice Test (AIF-C01)

Use the form below to configure your AWS Certified AI Practitioner Practice Test (AIF-C01). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Questions
Number of questions in the practice test. Free users are limited to 20 questions; upgrade for unlimited access.
Seconds Per Question
Determines how long you have to finish the practice test.
Exam Objectives
Select which exam objectives should be included in the practice test.

AWS Certified AI Practitioner AIF-C01 Information

The AWS Certified AI Practitioner (AIF-C01) certification verifies that you have a strong foundational understanding of artificial intelligence, machine learning, and generative AI, along with exposure to how these are implemented via AWS services. It’s aimed at those who may not be developers or ML engineers but who need to understand and contribute to AI/ML-related decisions and initiatives in their organizations. The exam tests your knowledge of AI/ML concepts, use cases, and how AWS’s AI/ML services map to various business problems.

Topics include basics of supervised/unsupervised learning, generative AI fundamentals, prompt engineering, evaluation metrics, responsible AI practices, and how AWS tools like Amazon SageMaker, Amazon Bedrock, Amazon Comprehend, Amazon Rekognition, and others support AI workflows. Questions are scenario-based and focused on choosing the right service or strategy, rather than hands-on coding or architecture design. After passing, you’ll be able to identify the proper AI/ML tool for a given business challenge, articulate trade-offs, and guide responsible deployment of AI solutions within AWS environments.

Having this certification shows stakeholders that you have a solid conceptual grasp of AI and AWS’s AI/ML ecosystem. It’s well suited for technical leads, solution architects, product managers, or anyone who interacts with AI/ML teams and wants credibility in AI strategy discussions. It also helps bridge the gap between technical teams and business stakeholders around what AI/ML can — and cannot — do in real scenarios.

  • Free AWS Certified AI Practitioner AIF-C01 Practice Test

  • 20 Questions
  • Unlimited time
  • Fundamentals of AI and ML
  • Fundamentals of Generative AI
  • Applications of Foundation Models
  • Guidelines for Responsible AI
  • Security, Compliance, and Governance for AI Solutions
Question 1 of 20

Within the AWS Cloud learning path, which of the following statements best defines artificial intelligence (AI) in general computing?

  • A specialized branch of machine learning that relies on multi-layer neural networks to process very large datasets.

  • Algorithms that automatically adjust their weights through back-propagation to fit labeled datasets.

  • Computer systems designed to carry out tasks that typically require human intelligence, such as understanding language or recognizing images.

  • A cloud-based database service optimized for storing structured, text, and image data for analytics workloads.

Question 2 of 20

A company building an AI workload on AWS must meet strict data residency requirements that forbid storing or processing customer data outside the EU. Which AWS feature lets administrators centrally block the creation of resources in non-EU Regions across all AWS accounts in the organization?

  • Applying a service control policy (SCP) in AWS Organizations that denies actions in non-EU Regions

  • Attaching an inline IAM policy to each user that allows only EU Regions

  • Running Amazon Macie discovery jobs to locate data stored outside the EU

  • Using AWS CloudFormation StackSets to deploy resources only in EU Regions
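The correct choice here is the SCP: unlike per-user IAM policies, a service control policy applies organization-wide. The pattern can be sketched as a policy document like the one below; the Region list and the exempted global services are illustrative, not a complete production policy.

```python
import json

# Sketch of an SCP that denies actions outside EU Regions across an
# organization. Region list and exempted global services are examples only.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonEURegions",
            "Effect": "Deny",
            # NotAction exempts global services that resolve to us-east-1
            "NotAction": ["iam:*", "organizations:*", "route53:*", "support:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["eu-west-1", "eu-central-1"]
                }
            },
        }
    ],
}
print(json.dumps(scp, indent=2))
```

Attached to the organization root or an OU, this denies resource-creating calls in any Region not on the allow list, regardless of what individual IAM policies permit.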

Question 3 of 20

An e-commerce company needs a foundation model to decide whether customer reviews are positive or negative. The team wants to provide only the task instruction in the prompt and include no sample reviews. Which prompt-engineering technique should they use?

  • Few-shot prompting

  • Chain-of-thought prompting

  • Zero-shot prompting

  • Retrieval-Augmented Generation (RAG)
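Zero-shot prompting means supplying only the task instruction, with no labeled examples. A minimal sketch of the two styles side by side (review text is hypothetical):

```python
# Zero-shot: only the task instruction, no sample reviews.
zero_shot = (
    "Classify the sentiment of the following customer review as "
    "Positive or Negative.\n\nReview: {review}\nSentiment:"
)

# Few-shot, for contrast: the same task with labeled examples included.
few_shot = (
    "Review: Great product, fast shipping!\nSentiment: Positive\n"
    "Review: Broke after one day.\nSentiment: Negative\n"
    "Review: {review}\nSentiment:"
)

print(zero_shot.format(review="Works exactly as described."))
```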

Question 4 of 20

Under the AWS shared responsibility model, which activity remains solely AWS's responsibility when an organization builds and trains an AI model on Amazon SageMaker?

  • Creating IAM policies that restrict who can invoke SageMaker endpoints

  • Rotating application-level AWS access keys used by developers

  • Encrypting the training data stored in Amazon S3

  • Maintaining physical security of the data-center facilities that host SageMaker

Question 5 of 20

A startup uses text produced entirely by an AWS foundation model without any human creative contribution and wants to claim exclusive rights over that text. According to current U.S. copyright guidance, which statement best describes the copyright status of the generated text?

  • AWS owns the copyright because it created and operates the underlying model that produced the text.

  • It is not protected by copyright because it lacks human authorship and is effectively in the public domain immediately.

  • The text will qualify for copyright protection 70 years after its first publication.

  • The startup automatically owns full copyright because it supplied the prompt that generated the text.

Question 6 of 20

A media company uses a foundation model to generate article summaries in its mobile app. The business goal is to increase average session duration per user. Which evaluation approach best indicates whether the model meets this goal?

  • Track the average inference latency of the summarization endpoint in production.

  • Record the total number of parameters contained in the foundation model.

  • Measure the ROUGE score of model summaries against human reference summaries.

  • Run an A/B test and compare average session duration between users who receive the model-generated summaries and a control group.
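An A/B test ties the model directly to the business metric. The comparison itself is simple arithmetic, sketched below with hypothetical session durations; a real test would also check statistical significance before acting on the difference.

```python
from statistics import mean

# Hypothetical session durations in minutes from an A/B test.
control   = [4.1, 3.8, 5.0, 4.4, 3.9]   # users without model summaries
treatment = [5.2, 4.9, 6.1, 5.4, 5.0]   # users shown model summaries

lift = mean(treatment) - mean(control)
print(f"Average lift in session duration: {lift:.2f} minutes")
```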

Question 7 of 20

An organization is deciding whether to adopt an AI/ML approach. Which business challenge is most appropriate for an AI/ML solution instead of a rule-based program?

  • Compress static website images to reduce download size

  • Reject user passwords that are shorter than eight characters

  • Predict next month's electricity demand using several years of hourly usage data

  • Calculate each employee's overtime pay from hours worked and pay rate

Question 8 of 20

A development team maintains a table of prompts, model outputs, and observations. After every small change to a prompt, they immediately test the foundation model again and note whether the response quality improves. Which prompt-engineering best practice does this approach demonstrate?

  • Iterative experimentation and testing

  • Applying stop sequences to shorten responses

  • Adding negative prompts to restrict the output

  • Using zero-shot prompting to eliminate examples
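The table of prompts, outputs, and observations is the core of iterative experimentation: change one thing, test, record, repeat. A minimal sketch of such an experiment log (prompts and notes are invented):

```python
# A simple experiment log for iterative prompt testing.
experiments = []

def record(prompt, output, note):
    """Append one prompt-test iteration to the log."""
    experiments.append({"prompt": prompt, "output": output, "note": note})

record("Summarize:", "Too long, rambling...", "add a length limit")
record("Summarize in 2 sentences:", "Concise summary.", "quality improved")
print(len(experiments), "iterations logged")
```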

Question 9 of 20

An AWS team must pick a model for a fraud-detection project that will be reviewed by regulators. They can choose a simple linear model or a deep neural network. Which trade-off between interpretability and model performance should the team expect?

  • Higher interpretability and higher accuracy when the team selects the deep neural network.

  • Transparent models usually demand significantly more training data to remain interpretable.

  • Transparent models typically have slower inference because explanations must be generated after each prediction.

  • Greater interpretability with the linear model but potentially lower predictive accuracy compared to the neural network.

Question 10 of 20

A startup is building a customer-facing chatbot on Amazon Bedrock. To help ensure the model does not return hateful, violent, or otherwise harmful text, which Bedrock capability should they configure to improve response safety?

  • Agents for Amazon Bedrock to manage multi-step tasks

  • Guardrails that automatically filter inappropriate model outputs

  • Provisioned throughput to reserve dedicated model capacity

  • Amazon CloudWatch logs to monitor inference latency

Question 11 of 20

A company is training a language model on Amazon SageMaker using sensitive customer data. To meet the privacy and security requirement of responsible AI, which practice should the company implement?

  • Use an F1 score threshold to tune overall model performance.

  • Publish a model card describing the model's intended use and limitations.

  • Encrypt the training data at rest with AWS Key Management Service (KMS) keys and ensure TLS encryption for data in transit.

  • Perform subgroup analysis to detect bias in model predictions.
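Encryption at rest with KMS plus TLS in transit is the answer that addresses privacy and security directly. The request parameters for an encrypted S3 upload can be sketched as below; the bucket name and key ARN are placeholders, and with boto3 this dict would be passed as `s3.put_object(**put_args)`.

```python
# Parameters for a KMS-encrypted S3 upload (sketch; names are placeholders).
put_args = {
    "Bucket": "training-data-bucket",
    "Key": "datasets/customers.csv",
    "Body": b"...",
    "ServerSideEncryption": "aws:kms",   # encrypt at rest with a KMS key
    "SSEKMSKeyId": "arn:aws:kms:eu-west-1:111122223333:key/EXAMPLE",
}
# Encryption in transit is the default: boto3 endpoints use HTTPS unless
# explicitly overridden.
print(put_args["ServerSideEncryption"])
```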

Question 12 of 20

A company wants its new Amazon Bedrock-powered chatbot to answer employee questions by consulting internal policy documents without retraining the underlying foundation model. Which statement best describes how Retrieval-Augmented Generation (RAG) supports this requirement?

  • It fetches relevant document snippets during each query and adds them to the prompt so the model can generate an answer with that up-to-date context.

  • It fully fine-tunes the foundation model by adding the company documents to its training dataset.

  • It removes rarely used parameters from the model to lower inference cost while responses are generated.

  • It distributes the generation task across several smaller models, with each model producing a portion of the final answer.
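The RAG flow can be illustrated with a toy retriever: fetch the most relevant snippet at query time and prepend it to the prompt. Real systems rank documents with vector embeddings rather than word overlap, but the shape of the flow is the same; the documents below are invented.

```python
# Toy RAG flow: retrieve a relevant snippet, then build the prompt with it.
documents = [
    "Employees accrue 20 vacation days per year.",
    "Expense reports are due by the 5th of each month.",
    "Remote work requires manager approval.",
]

def retrieve(query, docs, k=1):
    """Rank documents by word overlap with the query (stand-in for embeddings)."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

query = "How many vacation days do employees get per year?"
context = "\n".join(retrieve(query, documents))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

The foundation model's weights never change; the fresh context arrives inside the prompt on every query.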

Question 13 of 20

Which Amazon SageMaker capability provides a standardized report that records a model's training data sources, intended use, evaluation metrics, and limitations to support governance and compliance requirements?

  • Amazon SageMaker Clarify

  • Amazon SageMaker Model Cards

  • Amazon SageMaker Pipelines

  • Amazon SageMaker Feature Store

Question 14 of 20

An e-commerce company wants to display product descriptions in several languages by automatically converting the original English text into Spanish, French, and Japanese while preserving meaning and tone. Which generative AI use case best matches this goal?

  • Summarization

  • Code generation

  • Image generation

  • Translation

Question 15 of 20

A developer is testing a text-generation foundation model on Amazon Bedrock. They need to make sure the response never contains code snippets or URLs. Which prompt-engineering technique should they add to the prompt to best meet this requirement?

  • Raise the temperature parameter to make the output more varied.

  • Increase the maximum output token limit so the model has room to elaborate.

  • Include a negative instruction like "Do not include code or URLs" in the prompt.

  • Define a stop sequence consisting only of punctuation characters.
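A negative instruction simply states what the model must not produce. A minimal sketch, paired with a crude post-check as defense in depth (the check is illustrative, not a robust filter):

```python
# A negative instruction tells the model what NOT to include.
prompt = (
    "Summarize the following support ticket for a manager.\n"
    "Do not include code snippets or URLs in your answer.\n\n"
    "Ticket: {ticket}"
)

def violates(text):
    """Crude post-check for URLs or fenced code in a model response."""
    return "http" in text or "```" in text
```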

Question 16 of 20

A startup plans to run a text-generation foundation model on small Amazon EC2 instances with limited memory. When comparing models available in Amazon Bedrock, which model attribute should they examine first to estimate whether the model will fit within the instance's compute and memory limits?

  • The model's default temperature setting

  • The range of top-K values the model supports

  • The model's total parameter count (size)

  • The syntax used for defining stop sequences
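Parameter count dominates the memory footprint because every parameter must be held in memory at inference time. A back-of-the-envelope estimate, assuming half-precision (2 bytes per parameter) and ignoring activations, KV cache, and runtime overhead, which add considerably more:

```python
# Rough memory needed just to hold model weights.
def weight_memory_gib(params_billions, bytes_per_param=2):  # 2 bytes = fp16
    return params_billions * 1e9 * bytes_per_param / 2**30

# A 7B-parameter model in fp16 needs roughly 13 GiB for weights alone.
print(f"{weight_memory_gib(7):.1f} GiB")
```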

Question 17 of 20

A startup with no in-house machine-learning team wants to test generative-AI ideas quickly, without provisioning or managing any infrastructure. Which AWS service offers this low entry barrier by providing API access to pretrained foundation models?

  • Amazon SageMaker JumpStart

  • Amazon EC2 Deep Learning AMI

  • Amazon Bedrock

  • AWS Lambda

Question 18 of 20

A startup wants to add a text-summarization feature to its web application in just a few days. The team does not want to deploy or scale GPU instances and prefers a fully managed, serverless API that offers access to multiple large language models from different providers. Which AWS service best meets these requirements?

  • Amazon SageMaker Studio

  • Amazon Bedrock

  • Amazon Redshift

  • AWS Glue
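With Amazon Bedrock the team never touches GPU instances; a summarization call is just an API request. The request body below is a sketch in the Anthropic messages format (body schemas are model-specific, and the model ID is one example); with boto3 it would be sent via `boto3.client("bedrock-runtime").invoke_model(modelId=model_id, body=json.dumps(body))`.

```python
import json

# Example request body for Bedrock's InvokeModel API (Anthropic messages
# format; other providers' models use different body schemas).
model_id = "anthropic.claude-3-haiku-20240307-v1:0"
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 300,
    "messages": [
        {"role": "user", "content": "Summarize this article: ..."}
    ],
}
print(json.dumps(body)[:60])
```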

Question 19 of 20

Within responsible AI, which description best defines the feature known as veracity?

  • How clearly the model can communicate the reasoning behind each prediction.

  • The model's ability to maintain performance when inputs contain noise or adversarial changes.

  • The accuracy, truthfulness, and reliability of the data and the model's outputs.

  • The protection of personal information through encryption and strict access controls.

Question 20 of 20

Within machine learning terminology, what is meant by a "model" after the training process is complete?

  • The mathematical artifact that encodes learned patterns and can generate predictions on new data

  • The dataset supplied to the training job

  • The algorithm or statistical procedure chosen to learn from data

  • The compute environment (CPU or GPU instances) used during training