AWS Certified AI Practitioner Practice Test (AIF-C01)
Use the form below to configure your AWS Certified AI Practitioner Practice Test (AIF-C01). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

AWS Certified AI Practitioner AIF-C01 Information
The AWS Certified AI Practitioner (AIF-C01) certification verifies that you have a strong foundational understanding of artificial intelligence, machine learning, and generative AI, along with exposure to how these are implemented via AWS services. It’s aimed at those who may not be developers or ML engineers but who need to understand and contribute to AI/ML-related decisions and initiatives in their organizations. The exam tests your knowledge of AI/ML concepts, use cases, and how AWS’s AI/ML services map to various business problems.
Topics include basics of supervised/unsupervised learning, generative AI fundamentals, prompt engineering, evaluation metrics, responsible AI practices, and how AWS tools like Amazon SageMaker, Amazon Bedrock, Amazon Comprehend, Amazon Rekognition, and others support AI workflows. Questions are scenario-based and focused on choosing the right service or strategy, rather than hands-on coding or architecture design. After passing, you’ll be able to identify the proper AI/ML tool for a given business challenge, articulate trade-offs, and guide responsible deployment of AI solutions within AWS environments.
Having this certification shows stakeholders that you have a solid conceptual grasp of AI and AWS’s AI/ML ecosystem. It’s well suited for technical leads, solution architects, product managers, or anyone who interacts with AI/ML teams and wants credibility in AI strategy discussions. It also helps bridge the gap between technical teams and business stakeholders around what AI/ML can — and cannot — do in real scenarios.

Free AWS Certified AI Practitioner AIF-C01 Practice Test
- 20 Questions
- Unlimited time
- Fundamentals of AI and ML
- Fundamentals of Generative AI
- Applications of Foundation Models
- Guidelines for Responsible AI
- Security, Compliance, and Governance for AI Solutions
Within the AWS Cloud learning path, which of the following statements best defines artificial intelligence (AI) in general computing?
A specialized branch of machine learning that relies on multi-layer neural networks to process very large datasets.
Algorithms that automatically adjust their weights through back-propagation to fit labeled datasets.
Computer systems designed to carry out tasks that typically require human intelligence, such as understanding language or recognizing images.
A cloud-based database service optimized for storing structured, text, and image data for analytics workloads.
Answer Description
Artificial intelligence is the overarching discipline focused on building computer systems that can perform tasks normally requiring human intellect, such as perception, reasoning, and decision-making. This broad goal may be achieved with many techniques, including, but not limited to, machine learning and deep learning. The choice describing a specialized branch of machine learning that relies on multi-layer neural networks is actually deep learning, a subset of machine learning. The option stating that algorithms automatically tune weights with back-propagation also refers to machine-learning or deep-learning specifics, not AI as a whole. A database service that stores text and image data is unrelated to the definition of AI.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI-generated content may display inaccurate information; always double-check anything important.
How does artificial intelligence differ from machine learning and deep learning?
What are some common tasks artificial intelligence can perform?
What is the role of perception, reasoning, and decision-making in AI?
A company building an AI workload on AWS must meet strict data residency requirements that forbid storing or processing customer data outside the EU. Which AWS feature lets administrators centrally block the creation of resources in non-EU Regions across all AWS accounts in the organization?
Applying a service control policy (SCP) in AWS Organizations that denies actions in non-EU Regions
Attaching an inline IAM policy to each user that allows only EU Regions
Running Amazon Macie discovery jobs to locate data stored outside the EU
Using AWS CloudFormation StackSets to deploy resources only in EU Regions
Answer Description
Service control policies (SCPs) in AWS Organizations allow a central administrator to set account-wide guardrails. An SCP can include a condition that denies all API calls targeting Regions outside the EU, ensuring data cannot be stored or processed elsewhere. IAM inline policies apply only to individual principals, not across accounts. Amazon Macie discovers sensitive data but does not prevent resource creation. AWS CloudFormation StackSets automate deployments but cannot enforce geographic restrictions on their own.
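To make the guardrail concrete, here is a minimal sketch of such an SCP, written as a Python dict for illustration (an SCP is a JSON document attached to a root, OU, or account in AWS Organizations). The policy name, Sid, and Region list are placeholders; real Region-restriction SCPs usually also exempt global services such as IAM via a NotAction list.

```python
# Sketch of an SCP that denies API calls targeting Regions outside the EU.
# The Sid and the Region list are illustrative placeholders.
import json

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonEURegions",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    # Requests to any Region not in this list are denied.
                    "aws:RequestedRegion": ["eu-west-1", "eu-central-1"]
                }
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Because the SCP is evaluated before IAM policies, it caps what any principal in any member account can do, which is what makes it suitable for organization-wide data-residency guardrails.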
Ask Bash
What is a Service Control Policy (SCP) in AWS Organizations?
How are SCPs different from IAM policies?
What are some use cases for SCPs?
An e-commerce company needs a foundation model to decide whether customer reviews are positive or negative. The team wants to provide only the task instruction in the prompt and include no sample reviews. Which prompt-engineering technique should they use?
Few-shot prompting
Chain-of-thought prompting
Zero-shot prompting
Retrieval-Augmented Generation (RAG)
Answer Description
Zero-shot prompting is used when a user supplies an instruction but no worked examples. The model must rely on its pre-training to perform the requested task. Few-shot prompting adds several labeled examples to guide the model, so it would not meet the requirement of omitting sample reviews. Chain-of-thought prompting asks the model to reveal its reasoning steps and still requires an instruction (and optionally examples). Retrieval-Augmented Generation is not a standalone prompting technique; it enriches prompts with retrieved documents. Therefore, zero-shot prompting is the correct choice for giving only the instruction and no examples.
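The contrast between the two techniques can be sketched with plain prompt strings. The review text and labels below are invented for illustration; only the structure matters.

```python
# Zero-shot: instruction only. Few-shot: instruction plus labeled examples.
instruction = "Classify the sentiment of the customer review as Positive or Negative."
review = "The headphones arrived quickly and sound great."

# Zero-shot prompt: no worked examples at all.
zero_shot = f"{instruction}\n\nReview: {review}\nSentiment:"

# Few-shot prompt: the same instruction, preceded by labeled examples.
examples = [
    ("Terrible battery life, would not buy again.", "Negative"),
    ("Love this blender, it works perfectly.", "Positive"),
]
few_shot = instruction + "\n\n"
for text, label in examples:
    few_shot += f"Review: {text}\nSentiment: {label}\n\n"
few_shot += f"Review: {review}\nSentiment:"

print(zero_shot)
```

The scenario's requirement to "include no sample reviews" rules out the few-shot form, leaving the zero-shot form as the fit.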
Ask Bash
What is zero-shot prompting?
How does zero-shot prompting differ from few-shot prompting?
What kind of tasks is zero-shot prompting suitable for?
Under the AWS shared responsibility model, which activity remains solely AWS's responsibility when an organization builds and trains an AI model on Amazon SageMaker?
Creating IAM policies that restrict who can invoke SageMaker endpoints
Rotating application-level AWS access keys used by developers
Encrypting the training data stored in Amazon S3
Maintaining physical security of the data-center facilities that host SageMaker
Answer Description
In the shared responsibility model, AWS is always responsible for security "of" the cloud, which includes the physical protection of the infrastructure that runs AWS services. Safeguarding the data-center facilities, such as controlling building access and monitoring the hardware environment, falls entirely to AWS. By contrast, tasks like encrypting training data, defining IAM permissions, and rotating user access keys are customer responsibilities because they relate to how resources are configured and used in the cloud.
Ask Bash
What does the AWS shared responsibility model entail?
Why is safeguarding physical infrastructure AWS's sole responsibility?
How do customers secure resources 'in' the cloud on AWS?
A startup uses text produced entirely by an AWS foundation model without any human creative contribution and wants to claim exclusive rights over that text. According to current U.S. copyright guidance, which statement best describes the copyright status of the generated text?
AWS owns the copyright because it created and operates the underlying model that produced the text.
It is not protected by copyright because it lacks human authorship and is effectively in the public domain immediately.
The text will qualify for copyright protection 70 years after its first publication.
The startup automatically owns full copyright because it supplied the prompt that generated the text.
Answer Description
Under guidance from the U.S. Copyright Office, copyright can only subsist in works that include "human authorship." Material created solely by an artificial intelligence system, with no meaningful human creative input, is therefore not eligible for copyright registration. Because no copyright arises, the text enters the public domain immediately and can be freely copied by others. Providing a prompt or owning the infrastructure does not itself create authorship, so neither the startup nor AWS holds copyright. There is no waiting period; works that lack protection never acquire it, so a 70-year delay is irrelevant.
Ask Bash
What does 'human authorship' mean in copyright guidance?
What is the public domain, and how does content enter it?
Does providing a prompt for AI-generated text count as authorship?
A media company uses a foundation model to generate article summaries in its mobile app. The business goal is to increase average session duration per user. Which evaluation approach best indicates whether the model meets this goal?
Track the average inference latency of the summarization endpoint in production.
Record the total number of parameters contained in the foundation model.
Measure the ROUGE score of model summaries against human reference summaries.
Run an A/B test and compare average session duration between users who receive the model-generated summaries and a control group.
Answer Description
To know if the foundation model is helping achieve the stated business goal, longer user sessions, you must measure the KPI itself. An A/B test that compares average session duration for users who receive the new AI-generated summaries versus a control group directly ties model performance to the engagement metric the business cares about. ROUGE scores and BLEU scores evaluate linguistic quality against references but may not correlate with user engagement. Inference latency measures system performance, while parameter count says nothing about business impact. Neither of those metrics reveals whether the summaries keep users in the app longer.
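The comparison itself is simple arithmetic once sessions are split by group. The sketch below uses invented session durations (in minutes) and a rough Welch-style t statistic as a significance check; a real experiment would also account for sample size and randomization.

```python
# Toy A/B comparison of average session duration (minutes) between a
# control group and a treatment group that saw model-generated summaries.
# All numbers are invented for illustration.
from statistics import mean, stdev
from math import sqrt

control = [4.1, 3.8, 5.0, 4.4, 3.9, 4.6]
treatment = [5.2, 4.9, 5.8, 5.5, 4.7, 5.4]

# The quantity the business cares about: lift in the KPI.
lift = mean(treatment) - mean(control)

# Rough Welch's t statistic as a sanity check on the difference.
t = lift / sqrt(stdev(control) ** 2 / len(control)
                + stdev(treatment) ** 2 / len(treatment))

print(f"lift: {lift:.2f} min, t ~ {t:.2f}")
```

A positive, statistically meaningful lift is direct evidence that the summaries affect the KPI; no offline text metric can substitute for it.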
Ask Bash
What is an A/B test in machine learning?
Why are ROUGE and BLEU scores insufficient for determining user engagement?
How is inference latency different from business performance metrics?
An organization is deciding whether to adopt an AI/ML approach. Which business challenge is most appropriate for an AI/ML solution instead of a rule-based program?
Compress static website images to reduce download size
Reject user passwords that are shorter than eight characters
Predict next month's electricity demand using several years of hourly usage data
Calculate each employee's overtime pay from hours worked and pay rate
Answer Description
Forecasting future electricity demand is a predictive problem that depends on patterns in historical, time-series data. Such problems benefit from ML algorithms that can learn complex relationships and improve accuracy over time. The other tasks follow fixed, explicit rules: overtime pay is calculated with a known formula, image compression relies on deterministic encoding algorithms, and enforcing a password-length requirement is a simple conditional check. Because these tasks do not involve pattern discovery or prediction, traditional programming is more efficient than ML.
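The password-length case makes the contrast vivid: the entire "business logic" is one conditional, so training a model would be pure overhead. A minimal sketch:

```python
# A fixed, explicit rule: no training data, no model, no pattern discovery.
def password_ok(password: str) -> bool:
    return len(password) >= 8

# Forecasting electricity demand, by contrast, has no such closed-form rule;
# the mapping from history to future demand must be learned from data.
print(password_ok("secret"))      # shorter than eight characters
print(password_ok("longenough"))
```

Whenever the input-to-output mapping can be written down exactly like this, traditional programming beats ML on cost, transparency, and reliability.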
Ask Bash
Why is machine learning better for predicting electricity demand compared to rule-based programming?
What makes time-series data suitable for machine learning models?
What are some examples of problems that rule-based programs handle better than ML solutions?
A development team maintains a table of prompts, model outputs, and observations. After every small change to a prompt, they immediately test the foundation model again and note whether the response quality improves. Which prompt-engineering best practice does this approach demonstrate?
Iterative experimentation and testing
Applying stop sequences to shorten responses
Adding negative prompts to restrict the output
Using zero-shot prompting to eliminate examples
Answer Description
Systematically changing a prompt, observing the model's response, and repeating the process is an example of iterative experimentation and testing. This cycle of modify-test-measure helps teams converge on a prompt that yields consistent, high-quality answers. The other options describe separate techniques-negative prompting constrains content, zero-shot prompting supplies no examples, and stop sequences control response length-but none rely on a repeated test-and-refine loop.
Ask Bash
Why is iterative experimentation important in prompt engineering?
What are foundation models used in prompt engineering?
How does iterative experimentation differ from zero-shot prompting?
An AWS team must pick a model for a fraud-detection project that will be reviewed by regulators. They can choose a simple linear model or a deep neural network. Which tradeoff between interpretability and model performance should the team expect?
Higher interpretability and higher accuracy when the team selects the deep neural network.
Transparent models usually demand significantly more training data to remain interpretable.
Transparent models typically have slower inference because explanations must be generated after each prediction.
Greater interpretability with the linear model but potentially lower predictive accuracy compared to the neural network.
Answer Description
Simple, transparent models such as linear or logistic regression expose how each input contributes to the prediction, making them easy for auditors to understand. However, these models usually cannot capture complex patterns as well as deep neural networks, so accuracy can be lower. Deep neural networks often achieve higher predictive performance but are considered "black-box" and provide little built-in interpretability. The other options describe consequences that are not generally linked to transparency (such as needing more data, slower inference, or increased interpretability with deep models).
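Why auditors favor linear models can be shown in a few lines: the fitted coefficient is itself the explanation. The sketch below fits a one-feature linear model with the closed-form least-squares solution; the data (a feature vs. a fraud-risk score) is invented for illustration.

```python
# Closed-form least squares for y ~ slope * x + intercept.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # feature values (invented)
ys = [2.1, 3.9, 6.2, 7.8, 10.1]  # target values (invented)

n = len(xs)
x_mean = sum(xs) / n
y_mean = sum(ys) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))
intercept = y_mean - slope * x_mean

# The slope is directly auditable: each unit increase in the feature adds
# roughly `slope` to the prediction, which a deep network cannot offer.
print(f"score ~ {slope:.2f} * x + {intercept:.2f}")
```

A deep network making the same prediction would spread that relationship across thousands of weights, with no single number a regulator could inspect.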
Ask Bash
What does interpretability mean in the context of machine learning models?
Why are deep neural networks often called 'black-box' models?
When should a team prioritize interpretability over model performance?
A startup is building a customer-facing chatbot on Amazon Bedrock. To help ensure the model does not return hateful, violent, or otherwise harmful text, which Bedrock capability should they configure to improve response safety?
Agents for Amazon Bedrock to manage multi-step tasks
Guardrails that automatically filter inappropriate model outputs
Provisioned throughput to reserve dedicated model capacity
Amazon CloudWatch logs to monitor inference latency
Answer Description
Guardrails for Amazon Bedrock let builders define policies that detect and filter inappropriate, hateful, or violent content before it reaches end users. Provisioned throughput reserves model capacity but does not moderate content. Agents for Bedrock add orchestration logic, not safety controls. Amazon CloudWatch logs can record events but cannot block unsafe text in real time.
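In application code, a configured guardrail is referenced by identifier and version when the model is invoked. The sketch below only assembles the request parameters; the model ID and guardrail identifier are placeholders, and no API call is made.

```python
# Sketch of the parameters an app might pass when invoking a Bedrock model
# with a guardrail attached. Identifiers below are placeholders.
import json

request = {
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    "guardrailIdentifier": "gr-EXAMPLE123",                # placeholder
    "guardrailVersion": "1",
    "body": json.dumps({"inputText": "Hello"}),            # shape varies by model
}

# With boto3 this dict would be passed as keyword arguments to
# bedrock_runtime.invoke_model(**request); the guardrail then filters
# harmful content in both the prompt and the model's response.
print(request["guardrailIdentifier"])
```

Because the guardrail is enforced by the service at invocation time, every caller of the chatbot gets the same content filtering without client-side changes.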
Ask Bash
What are guardrails in Amazon Bedrock?
How do guardrails work to filter inappropriate outputs?
Can guardrails be customized for different use cases?
A company is training a language model on Amazon SageMaker using sensitive customer data. To meet the privacy and security requirement of responsible AI, which practice should the company implement?
Use an F1 score threshold to tune overall model performance.
Publish a model card describing the model's intended use and limitations.
Encrypt the training data at rest with AWS Key Management Service (KMS) keys and ensure TLS encryption for data in transit.
Perform subgroup analysis to detect bias in model predictions.
Answer Description
Encrypting the dataset with AWS Key Management Service (KMS) keys protects it at rest, while enforcing TLS encryption secures data in transit. This approach directly addresses privacy and security requirements. Publishing a model card improves transparency, subgroup analysis addresses bias and fairness, and tuning by F1 score optimizes performance; none of these practices safeguard sensitive data.
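In practice, encryption at rest with a KMS key is requested per object (or enforced by bucket policy). The sketch below only builds the parameters for an S3 upload; the bucket, key, and KMS key ARN are placeholders, and no API call is made.

```python
# Sketch of parameters for uploading training data to S3 with SSE-KMS.
# Bucket, object key, and KMS key ARN are placeholders.
put_params = {
    "Bucket": "example-training-data",
    "Key": "datasets/train.csv",
    "Body": b"col1,col2\n1,2\n",                 # stand-in payload
    "ServerSideEncryption": "aws:kms",           # encrypt at rest with KMS
    "SSEKMSKeyId": "arn:aws:kms:eu-west-1:111122223333:key/EXAMPLE",
}

# With boto3: s3.put_object(**put_params). Encryption in transit comes from
# the SDK calling the HTTPS (TLS) S3 endpoint by default.
print(put_params["ServerSideEncryption"])
```

SageMaker training jobs can likewise be pointed at a KMS key for their attached volumes and output, so the data stays encrypted through the whole pipeline.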
Ask Bash
What is AWS Key Management Service (KMS)?
What is TLS encryption and why is it important?
How does Amazon SageMaker support data encryption?
A company wants its new Amazon Bedrock-powered chatbot to answer employee questions by consulting internal policy documents without retraining the underlying foundation model. Which statement best describes how Retrieval-Augmented Generation (RAG) supports this requirement?
It fetches relevant document snippets during each query and adds them to the prompt so the model can generate an answer with that up-to-date context.
It fully fine-tunes the foundation model by adding the company documents to its training dataset.
It removes rarely used parameters from the model to lower inference cost while responses are generated.
It distributes the generation task across several smaller models, with each model producing a portion of the final answer.
Answer Description
Retrieval-Augmented Generation first looks up the most relevant passages from an external knowledge source, such as an indexed collection of company documents, at inference time. It then injects those retrieved snippets into the prompt so the foundation model can generate an informed answer using that fresh context. Because the knowledge is supplied dynamically, no additional model training, pruning, or model chaining is required. The other options describe full fine-tuning, parameter pruning for cost reduction, or dividing work across several models; none of those approaches capture the defining characteristic of RAG, which is real-time retrieval of external information to augment the model's response.
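The retrieve-then-augment loop can be sketched in a few lines. The toy retriever below scores documents by keyword overlap with the query; production systems instead use vector embeddings and a managed store such as Knowledge Bases for Amazon Bedrock, and the policy sentences here are invented.

```python
# Toy RAG flow: retrieve the best-matching document, then splice it into
# the prompt as context. Documents and query are invented examples.
docs = [
    "Employees may work remotely up to three days per week.",
    "Expense reports must be filed within 30 days of travel.",
    "The office closes at 6 p.m. on Fridays.",
]

def retrieve(query: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the query (toy scorer)."""
    q_words = set(query.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

query = "How many days per week can employees work remotely?"
context = retrieve(query, docs)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(context)
```

The foundation model never changes; only the prompt does, which is why updating the document index immediately updates what the chatbot can answer.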
Ask Bash
How does Retrieval-Augmented Generation (RAG) differ from fine-tuning a model?
What role does document indexing play in RAG?
Why does RAG not require additional model training?
Which Amazon SageMaker capability provides a standardized report that records a model's training data sources, intended use, evaluation metrics, and limitations to support governance and compliance requirements?
Amazon SageMaker Clarify
Amazon SageMaker Model Cards
Amazon SageMaker Pipelines
Amazon SageMaker Feature Store
Answer Description
Amazon SageMaker Model Cards act as a single source of truth for machine-learning models. They capture key governance details such as where the training data came from, how the model will be used, performance and bias metrics, and any known limitations or risks. SageMaker Pipelines orchestrate ML workflows but do not produce governance reports. SageMaker Clarify analyzes bias and explainability results but does not create a consolidated document. SageMaker Feature Store stores features for training and inference, not governance metadata. Therefore, Model Cards are the correct choice.
Ask Bash
What is the purpose of Amazon SageMaker Model Cards?
How do Amazon SageMaker Model Cards differ from SageMaker Pipelines?
What kind of information does Amazon SageMaker Model Cards capture?
An e-commerce company wants to display product descriptions in several languages by automatically converting the original English text into Spanish, French, and Japanese while preserving meaning and tone. Which generative AI use case best matches this goal?
Summarization
Code generation
Image generation
Translation
Answer Description
The scenario focuses on converting text from one language to multiple other languages while maintaining the original intent and style. This is the definition of the translation use case in generative AI. Summarization shortens content, code generation produces programming instructions, and image generation creates visual media; none of these directly address language conversion.
Ask Bash
What is translation in generative AI?
How does AWS Translate work for language conversion?
What are examples of generative AI models used for translation?
A developer is testing a text-generation foundation model on Amazon Bedrock. They need to make sure the response never contains code snippets or URLs. Which prompt engineering technique should they add to the prompt to best meet this requirement?
Raise the temperature parameter to make the output more varied.
Increase the maximum output token limit so the model has room to elaborate.
Include a negative instruction like "Do not include code or URLs" in the prompt.
Define a stop sequence consisting only of punctuation characters.
Answer Description
Negative prompting adds explicit constraints such as "Do not include code or URLs" to the instruction portion of the prompt. The model then treats these constraints as requirements and avoids producing the specified content. Adjusting temperature, length settings, or stop sequences can influence style and termination but do not directly tell the model what content to avoid, so they are less reliable for blocking code or links.
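Structurally, the technique is just an explicit constraint appended to the instruction. The wording below is illustrative:

```python
# Sketch of a prompt with a negative instruction. The model treats the
# constraint as a requirement on its output.
task = "Explain how to set up a Python virtual environment."
constraints = "Do not include code snippets or URLs in your answer."

prompt = f"{task}\n\nConstraints: {constraints}"
print(prompt)
```

For hard guarantees, teams often pair such instructions with output validation or guardrails, since prompt constraints alone are followed probabilistically.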
Ask Bash
What is a negative prompt in prompt engineering?
How does adjusting the temperature parameter affect a model's output?
What is a stop sequence in AI models like Amazon Bedrock?
A startup plans to run a text-generation foundation model on small Amazon EC2 instances with limited memory. When comparing models available in Amazon Bedrock, which model attribute should they examine first to estimate whether the model will fit within the instance's compute and memory limits?
The model's default temperature setting
The range of top-K values the model supports
The model's total parameter count (size)
The syntax used for defining stop sequences
Answer Description
The total number of parameters (model size) directly influences how much GPU or CPU memory and compute the model requires during inference. A model with fewer parameters has a smaller memory footprint and lower computational demand, making it more suitable for resource-constrained instances. Temperature, top-K sampling, and supported stop-sequence syntax affect response style or control but have little impact on the base hardware resources needed to load the model.
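A back-of-the-envelope sizing rule follows directly from the parameter count: multiply parameters by bytes per parameter at the chosen precision. The parameter counts below are illustrative, and real usage is higher because activations and caches also consume memory.

```python
# Rough weight-memory estimate from parameter count alone. Treat this as a
# lower bound: activations and KV cache add to it at inference time.
def weight_memory_gib(params: float, bytes_per_param: int) -> float:
    return params * bytes_per_param / 1024**3

for params in (7e9, 70e9):                    # illustrative model sizes
    fp16 = weight_memory_gib(params, 2)       # 16-bit weights
    print(f"{params/1e9:.0f}B params ~ {fp16:.0f} GiB just for weights (fp16)")
```

This is why a small instance that cannot hold even the weights of a large model rules it out before any other attribute matters.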
Ask Bash
What are parameters in a machine learning model?
How does the model's parameter count influence its suitability for specific hardware?
What is Amazon Bedrock, and how does it help with model selection?
A startup with no in-house machine-learning team wants to test generative-AI ideas quickly, without provisioning or managing any infrastructure. Which AWS service offers this low entry barrier by providing API access to pretrained foundation models?
Amazon SageMaker JumpStart
Amazon EC2 Deep Learning AMI
Amazon Bedrock
AWS Lambda
Answer Description
Amazon Bedrock is a fully managed service that exposes foundation models from AWS and leading model providers through a simple API. Developers can experiment or build applications without setting up servers, GPUs, or ML pipelines, giving a very low entry barrier. Amazon SageMaker JumpStart, AWS Lambda, and an EC2 Deep Learning AMI each require more infrastructure knowledge or management effort than Bedrock.
Ask Bash
What is Amazon Bedrock?
How does Amazon Bedrock differ from Amazon SageMaker JumpStart?
Why are GPUs and ML pipelines not needed with Amazon Bedrock?
A startup wants to add a text-summarization feature to its web application in just a few days. The team does not want to deploy or scale GPU instances and prefers a fully managed, serverless API that offers access to multiple large language models from different providers. Which AWS service best meets these requirements?
Amazon SageMaker Studio
Amazon Bedrock
Amazon Redshift
AWS Glue
Answer Description
Amazon Bedrock is a fully managed service that gives developers API access to foundation models from multiple providers such as Anthropic, AI21 Labs, and Stability AI. Because the service is serverless, teams do not need to provision, scale, or maintain GPU infrastructure; they simply call the model endpoints and pay per request. Amazon SageMaker Studio supplies an integrated development environment but still requires users to manage underlying compute resources. Amazon Redshift is a data-warehousing service, and AWS Glue is used for extract-transform-load (ETL) workloads, so neither addresses the need for managed access to generative AI models.
Ask Bash
What are foundation models in Amazon Bedrock?
Why is Amazon Bedrock considered serverless?
How does Amazon Bedrock differ from SageMaker Studio for AI model deployment?
Within responsible AI, which description best defines the feature known as veracity?
How clearly the model can communicate the reasoning behind each prediction.
The model's ability to maintain performance when inputs contain noise or adversarial changes.
The accuracy, truthfulness, and reliability of the data and the model's outputs.
The protection of personal information through encryption and strict access controls.
Answer Description
Veracity in responsible AI focuses on the truthfulness, accuracy, and reliability of the data used to train a model and of the model's resulting outputs. Ensuring veracity involves validating, cleaning, and verifying data so that predictions are based on sound information. Robustness (consistent performance under varied or adversarial conditions), explainability (clarity of decision-making), and privacy (protection of personal data) are separate features that address different concerns.
Ask Bash
What steps can be taken to ensure data veracity?
How does data veracity impact AI model performance?
How is veracity different from robustness in AI?
Within machine learning terminology, what is meant by a "model" after the training process is complete?
The mathematical artifact that encodes learned patterns and can generate predictions on new data
The dataset supplied to the training job
The algorithm or statistical procedure chosen to learn from data
The compute environment (CPU or GPU instances) used during training
Answer Description
A model is the mathematical representation produced by training algorithms on data. Once training finishes, this artifact captures learned patterns (for example, weights in a neural network) so it can accept new input and generate predictions or inferences. The algorithm itself, the raw training data, and the compute resources used are not the model; they are components of the pipeline that leads to creating the model.
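The distinction can be shown in miniature: below, the data, the training loop (the algorithm), and the learned weight are three separate things, and only the final weight is the "model." The data and learning rate are invented for illustration.

```python
# Tiny gradient-descent fit of y ~ w * x. After the loop, `w` alone is the
# trained artifact; the data and the loop are not part of the model.
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]   # (x, y) pairs, roughly y = 2x

w = 0.0                       # the model's single parameter
for _ in range(200):          # the training algorithm
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad

# `w` now encodes the learned pattern and generates predictions on new input.
print(f"learned weight: {w:.2f}, prediction for x=10: {w * 10:.1f}")
```

A saved neural network is the same idea at scale: millions of such weights serialized to a file, loadable later for inference without the training data or compute that produced them.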
Ask Bash
What is the difference between a model and an algorithm in machine learning?
How does a model make predictions on new data?
What happens to training data after a model is built?