AWS Certified AI Practitioner Practice Test (AIF-C01)
Use the form below to configure your AWS Certified AI Practitioner Practice Test (AIF-C01). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

AWS Certified AI Practitioner AIF-C01 Information
The AWS Certified AI Practitioner (AIF-C01) certification verifies that you have a strong foundational understanding of artificial intelligence, machine learning, and generative AI, along with exposure to how these are implemented via AWS services. It’s aimed at those who may not be developers or ML engineers but who need to understand and contribute to AI/ML-related decisions and initiatives in their organizations. The exam tests your knowledge of AI/ML concepts, use cases, and how AWS’s AI/ML services map to various business problems.
Topics include basics of supervised/unsupervised learning, generative AI fundamentals, prompt engineering, evaluation metrics, responsible AI practices, and how AWS tools like Amazon SageMaker, Amazon Bedrock, Amazon Comprehend, Amazon Rekognition, and others support AI workflows. Questions are scenario-based and focused on choosing the right service or strategy, rather than hands-on coding or architecture design. After passing, you’ll be able to identify the proper AI/ML tool for a given business challenge, articulate trade-offs, and guide responsible deployment of AI solutions within AWS environments.
Having this certification shows stakeholders that you have a solid conceptual grasp of AI and AWS’s AI/ML ecosystem. It’s well suited for technical leads, solution architects, product managers, or anyone who interacts with AI/ML teams and wants credibility in AI strategy discussions. It also helps bridge the gap between technical teams and business stakeholders around what AI/ML can — and cannot — do in real scenarios.

Free AWS Certified AI Practitioner AIF-C01 Practice Test
- 20 Questions
- Unlimited time
- Fundamentals of AI and ML
- Fundamentals of Generative AI
- Applications of Foundation Models
- Guidelines for Responsible AI
- Security, Compliance, and Governance for AI Solutions
A startup wants to build a chatbot that can answer user questions by referencing its existing PDF manuals, without retraining or fine-tuning the underlying foundation model. Which approach available in Amazon Bedrock best satisfies this need?
Apply Reinforcement Learning from Human Feedback (RLHF) to teach the model the manual content.
Use Retrieval-Augmented Generation (RAG) with indexed embeddings of the manuals.
Perform full fine-tuning of the foundation model on the PDF manuals.
Pre-train a new foundation model from scratch using the manuals as training data.
Answer Description
Retrieval-Augmented Generation (RAG) stores document embeddings in a vector database, retrieves the most relevant passages at query time, and injects them into the prompt sent to the foundation model. Because the model itself is not retrained, the process is faster and cheaper than full fine-tuning, pre-training, or collecting Reinforcement Learning from Human Feedback (RLHF) data. Fine-tuning or RLHF would require additional training cycles and would not automatically incorporate fresh content, while pre-training a new model would be far more costly and unnecessary for this scenario.
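As a rough illustration, here is a minimal sketch of a RAG query against a Bedrock knowledge base, assuming the PDF manuals have already been ingested into a vector index; the knowledge base ID and model ARN below are placeholders:

```python
import boto3

# Minimal sketch: ask a question that is answered from the indexed manuals.
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve_and_generate(
    input={"text": "How do I reset the device to factory settings?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB1234567890",  # placeholder ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
        },
    },
)
print(response["output"]["text"])  # answer grounded in retrieved passages
```

The foundation model is never retrained here; only the retrieved passages change as the manuals are updated and re-indexed.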
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI-generated content may display inaccurate information; always double-check anything important.
What is Retrieval-Augmented Generation (RAG)?
What are indexed embeddings, and how are they created?
Why is fine-tuning or pre-training not ideal in this case?
During inference, how do diffusion models such as Stable Diffusion generally produce a new image from a text prompt?
They begin with random noise and iteratively remove noise through learned denoising steps until an image appears.
They retrieve the closest matching image from a database and apply style transfer to fit the prompt.
They encode the prompt into a latent vector and decode it once without any iterative refinement.
They start with a blank canvas and sequentially draw pixels based solely on attention weights.
Answer Description
Diffusion models are trained to reverse a gradual noising process. At generation time, they start with pure random noise in the latent space. The model then performs a series of learned denoising steps that progressively remove noise, conditioning on the text prompt, until the noise is transformed into a coherent image. The other options do not describe the characteristic iterative denoising process: drawing pixels on a blank canvas is more like autoregressive image generation, retrieving and restyling an existing picture is nearest-neighbor retrieval plus style transfer, and a single decoding pass without iterations omits the fundamental step-by-step refinement that defines diffusion models.
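The loop below is an illustrative sketch of that reverse process only; `noise_predictor` is a hypothetical stand-in for the trained U-Net, and real samplers such as DDPM or DDIM use a carefully derived variance schedule rather than this fixed step size:

```python
import torch

# Illustrative-only sketch of the reverse (denoising) loop of a diffusion model.
def generate(noise_predictor, prompt_embedding, steps=50, shape=(1, 4, 64, 64)):
    x = torch.randn(shape)  # start from pure random noise in latent space
    for t in reversed(range(steps)):
        eps = noise_predictor(x, t, prompt_embedding)  # predict the noise, conditioned on the prompt
        x = x - (1.0 / steps) * eps                    # remove a little of it each step
    return x  # the final latent is decoded to pixels by a separate VAE decoder
```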
Ask Bash
What is the latent space in diffusion models?
How does the denoising process work in diffusion models?
What differentiates diffusion models from autoregressive models in image generation?
A team notices that a text foundation model hosted on Amazon Bedrock often returns lengthy answers that include extra, unrelated facts. Which prompt-engineering best practice will most directly help the team obtain only the information they need?
Use zero-shot prompting without additional context or constraints.
Rewrite the prompt to include clear, precise instructions that limit the scope and length of the response.
Raise the temperature setting to encourage more diverse token selection.
Insert several unrelated examples to make the prompt longer and more detailed.
Answer Description
When a prompt is specific and concise (for example, "List three advantages of Amazon Bedrock in one short sentence each" instead of "Tell me about Bedrock"), the model receives clear boundaries on scope and depth. This reduces ambiguity, limits the space for the model to digress, and therefore decreases the chance of verbose or off-topic content. Changing sampling parameters, adding unrelated examples, or removing context does not address the core problem that the instruction itself is too vague.
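A minimal sketch of sending such a constrained prompt through the Bedrock Converse API (the model ID is a placeholder):

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# A constrained prompt that limits scope, length, and format.
prompt = ("List three advantages of Amazon Bedrock, "
          "one short sentence each. Do not add anything else.")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    inferenceConfig={"maxTokens": 150, "temperature": 0.2},  # low temperature, hard length cap
)
print(response["output"]["message"]["content"][0]["text"])
```

The `maxTokens` cap reinforces the instruction, but the precise wording of the prompt is what keeps the answer on topic.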
Ask Bash
What is prompt engineering in the context of foundation models?
What does 'temperature setting' do in a foundation model?
How does Amazon Bedrock support foundation model usage?
A company wants to automatically extract key phrases and overall sentiment from incoming customer support emails without building custom models. Which AWS managed AI service meets this requirement?
Amazon Comprehend
Amazon Polly
Amazon Rekognition
Amazon Translate
Answer Description
Amazon Comprehend is a fully managed natural language processing (NLP) service that can detect sentiment, extract key phrases, and identify entities in text with no ML expertise required. Amazon Translate focuses on language translation, Amazon Polly converts text to lifelike speech, and Amazon Rekognition analyzes images and videos, so none of those services provide built-in sentiment or key-phrase extraction.
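A short sketch of both calls with boto3 (the sample email text is made up):

```python
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")
email_body = "My order arrived two weeks late and the box was damaged."

# Built-in sentiment detection: no model training required.
sentiment = comprehend.detect_sentiment(Text=email_body, LanguageCode="en")
# Built-in key-phrase extraction on the same text.
phrases = comprehend.detect_key_phrases(Text=email_body, LanguageCode="en")

print(sentiment["Sentiment"])                      # e.g. NEGATIVE
print([p["Text"] for p in phrases["KeyPhrases"]])  # e.g. ['My order', 'the box']
```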
Ask Bash
What is NLP, and how does Amazon Comprehend use it?
How is sentiment analysis performed by Amazon Comprehend?
What are key phrases, and how does Amazon Comprehend extract them?
A financial services company plans to deploy a generative-AI chatbot with Amazon Bedrock. Regulations require that all customer prompts remain private and must never be stored or reused for model training. Which built-in Bedrock capability directly addresses this compliance requirement?
It guarantees sub-100-millisecond response times from any AWS Region worldwide.
It automatically generates detailed GDPR audit reports for every prompt.
It encrypts customer data and ensures the content is not retained or used for model training.
It performs all inference on the company's on-premises hardware through AWS Outposts.
Answer Description
Amazon Bedrock is designed so that customer content, such as prompts, responses, and embeddings, is encrypted in transit and at rest, and is not used by AWS or model providers to retrain foundation models. This built-in data-protection approach satisfies compliance needs for privacy-sensitive industries. The other statements describe capabilities that Bedrock does not currently provide: automatic GDPR audit report generation, on-premises model execution, or a global sub-100-millisecond latency SLA.
Ask Bash
What is Amazon Bedrock?
How does Amazon Bedrock ensure data privacy?
What differentiates Amazon Bedrock from other generative AI solutions?
An AWS team must pick a model for a fraud-detection project that will be reviewed by regulators. They can choose a simple linear model or a deep neural network. Which tradeoff between interpretability and model performance should the team expect?
Greater interpretability with the linear model but potentially lower predictive accuracy compared to the neural network.
Transparent models usually demand significantly more training data to remain interpretable.
Higher interpretability and higher accuracy when the team selects the deep neural network.
Transparent models typically have slower inference because explanations must be generated after each prediction.
Answer Description
Simple, transparent models such as linear or logistic regression expose how each input contributes to the prediction, making them easy for auditors to understand. However, these models usually cannot capture complex patterns as well as deep neural networks, so accuracy can be lower. Deep neural networks often achieve higher predictive performance but are considered "black-box" and provide little built-in interpretability. The other options describe consequences that are not generally linked to transparency (such as needing more data, slower inference, or increased interpretability with deep models).
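As a toy illustration of that interpretability, a linear model's learned weights can be printed and read directly (synthetic data, scikit-learn assumed):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for fraud features; a real project would use its own data.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
clf = LogisticRegression().fit(X, y)

# Each coefficient shows how a feature pushes the prediction up or down,
# which an auditor can inspect without any extra explanation tooling.
for i, coef in enumerate(clf.coef_[0]):
    print(f"feature_{i}: weight {coef:+.3f}")
```

A deep neural network has no comparably direct readout; its millions of weights require post-hoc explanation methods instead.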
Ask Bash
What does interpretability mean in the context of machine learning models?
Why are deep neural networks often called 'black-box' models?
When should a team prioritize interpretability over model performance?
A binary classification dataset contains 95% records labeled as class A and only 5% as class B. According to responsible AI practices, how is this dataset's class distribution best described?
Randomly stratified dataset
Augmented dataset
Imbalanced dataset
Balanced dataset
Answer Description
The dataset is imbalanced because one class (class A) overwhelmingly outnumbers the other. An imbalanced dataset can lead a model to favor the majority class and perform poorly on the minority class. A balanced dataset would have roughly equal representation of both classes, while the other listed terms do not describe class proportions.
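A quick sketch of checking class proportions before training (the labels mirror the 95/5 split in the question):

```python
from collections import Counter

# Toy labels matching the scenario: 95% class A, 5% class B.
labels = ["A"] * 950 + ["B"] * 50
counts = Counter(labels)
total = sum(counts.values())

for cls, n in counts.items():
    print(f"class {cls}: {n / total:.0%}")  # A: 95%, B: 5% -> imbalanced
```

A check like this is typically the first step before applying remedies such as resampling or class weighting.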
Ask Bash
How does an imbalanced dataset affect machine learning models?
What techniques can be used to handle imbalanced datasets?
What is the difference between a balanced dataset and an imbalanced dataset?
A retail company plans to use a large language model on AWS to create automated product descriptions. Which action best mitigates the risk of releasing inaccurate text to customers?
Implement a human approval workflow to review every generated description before it is published.
Increase the model's temperature setting to encourage more diverse responses.
Limit the prompt to a smaller context window to shorten generation time.
Disable token streaming so the model waits to send the full response.
Answer Description
Adding a human-in-the-loop review stage allows subject-matter experts to fact-check and approve each generated description before publication. This directly addresses the risk of inaccurate or hallucinated output. Raising the temperature usually increases creativity but can lower factual accuracy. Neither reducing the context window nor turning off token streaming meaningfully improves the correctness of the generated content.
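One hedged way to wire such a review gate on AWS is Amazon Augmented AI (A2I); the flow definition ARN and input format below are placeholders that would come from your own review workflow:

```python
import json
import uuid
import boto3

a2i = boto3.client("sagemaker-a2i-runtime", region_name="us-east-1")

generated_description = "Lightweight trail shoe with a breathable mesh upper."

# Send each generated description to a human review loop before publishing.
a2i.start_human_loop(
    HumanLoopName=f"review-{uuid.uuid4()}",
    FlowDefinitionArn="arn:aws:sagemaker:us-east-1:123456789012:flow-definition/product-review",  # placeholder
    HumanLoopInput={"InputContent": json.dumps({"text": generated_description})},
)
# Publication proceeds only after a reviewer approves the loop's output.
```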
Ask Bash
What is a human-in-the-loop workflow?
What does the temperature setting in language models control?
What are context windows in language models?
A development team notices that their machine-learning model shows very low error on the training dataset but much higher error when evaluated on new customer data. Which condition is this most likely an example of?
A well-balanced model with good generalization
High variance (overfitting)
High bias (underfitting)
Data leakage between training and test sets
Answer Description
The model is fitting the noise and details of the training data instead of learning the underlying patterns, so its predictions do not generalize to unseen data. This behavior reflects high variance, commonly called overfitting. High bias (underfitting) would show similar error on both training and test sets, data leakage would usually produce unrealistically good results on both sets, and a balanced model would maintain similar accuracy across datasets.
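A small sketch that reproduces this signature with an unconstrained decision tree (scikit-learn, synthetic data):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# An unrestricted tree memorizes the training data: the classic
# overfitting signature of near-perfect training accuracy but a
# noticeably lower score on held-out data.
deep_tree = DecisionTreeClassifier(max_depth=None).fit(X_tr, y_tr)
print("train accuracy:", deep_tree.score(X_tr, y_tr))  # ~1.00
print("test accuracy: ", deep_tree.score(X_te, y_te))  # noticeably lower
```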
Ask Bash
What is overfitting in machine learning?
How can overfitting be prevented?
How does overfitting differ from underfitting?
A company wants to limit the carbon footprint of a new generative-AI project on AWS. Which model-selection practice best supports this environmental goal?
Choose a model that requires dedicated on-premises hardware running continuously, even when not serving requests.
Train an extra-large model from scratch on multiple GPU clusters to maximize top-end accuracy.
Select the smallest pre-trained model that meets the solution's accuracy needs instead of training a larger model from scratch.
Run the same model-training job in several AWS Regions to provide redundancy during development.
Answer Description
Choosing a smaller pre-trained model that already satisfies the application's accuracy requirements avoids the energy-intensive process of training a very large model from scratch. This reduces total compute time, electricity consumption, and associated carbon emissions. Retraining large models, running hardware 24/7 when not needed, or duplicating training jobs across Regions all increase resource use and environmental impact without adding sustainability benefits.
Ask Bash
What is a pre-trained model?
How does model size affect carbon footprint?
Why is training from scratch considered less environmentally friendly?
When assessing a foundation model's responses for subjective qualities such as helpfulness and tone, which evaluation approach provides the most reliable insight?
BLEU score comparison against reference answers
BERTScore semantic similarity measurement
ROUGE-L recall on a held-out test set
Human evaluation performed by domain experts
Answer Description
Human evaluation allows real people to judge nuanced, subjective aspects of a model's output, such as tone, helpfulness, or appropriateness, that automatic overlap-based metrics like BLEU, ROUGE, or BERTScore cannot fully capture. Automated metrics are fast and repeatable, but they focus on surface similarity to reference text and often correlate poorly with human perceptions of quality for conversational or creative tasks. Therefore, human review remains the best way to measure subjective response characteristics.
Ask Bash
What is the role of human evaluation in assessing AI model responses?
Why do automated metrics like BLEU and ROUGE fall short in subjective evaluations?
What is the difference between BERTScore and human evaluation for model assessment?
During due diligence for a new AI workload on AWS, a startup wants an independent audit report that evaluates AWS controls for security, availability, and confidentiality. Which regulatory compliance standard provides these Service Organization Control (SOC) reports?
PCI DSS
GDPR
SOC (System and Organization Controls)
ISO/IEC 27001
Answer Description
Service Organization Control (SOC) reports (SOC 1, SOC 2, and SOC 3) are produced under the American Institute of Certified Public Accountants (AICPA) framework. They provide third-party assurance about a service provider's internal controls, including security, availability, processing integrity, confidentiality, and privacy. AWS makes its SOC reports available to customers through AWS Artifact. ISO/IEC 27001, PCI DSS, and the GDPR are important standards and regulations, but they do not use the SOC report format.
Ask Bash
What are the differences between SOC 1, SOC 2, and SOC 3 reports?
How can AWS customers access SOC reports through AWS Artifact?
How do SOC reports ensure confidentiality and security for AI workloads on AWS?
A developer wants to rely on Amazon Bedrock Guardrails built-in protections so that their chatbot automatically refuses any prompt seeking instructions for criminal activity. Which Guardrails policy category should they enable and set to Block?
Misconduct content filter
Sensitive-information (PII) filter
Word filter for profanity
Denied topics policy
Answer Description
Guardrails content filters include a predefined Misconduct category that detects prompts or responses that seek or provide information about engaging in criminal or fraudulent activity. Setting this category to Block causes the chatbot to refuse the request. A denied-topics policy could work but requires you to define and maintain custom topics; word filters block only exact words or phrases; and sensitive-information filters focus on PII rather than instructions for illegal activity.
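A hedged sketch of enabling that filter when creating a guardrail (the guardrail name and refusal messages are placeholders):

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Create a guardrail with the predefined Misconduct content filter set to
# the highest strength on both the prompt (input) and response (output).
bedrock.create_guardrail(
    name="block-criminal-instructions",  # placeholder name
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "MISCONDUCT",
             "inputStrength": "HIGH",    # block such prompts
             "outputStrength": "HIGH"}   # block such responses
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't help with that request.",
)
```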
Ask Bash
What is Amazon Bedrock Guardrails?
How does the Misconduct content filter work?
What is the difference between the Misconduct content filter and the Denied topics policy?
Why might a company that needs high transparency and explainability select an open source foundation model available through Amazon SageMaker JumpStart?
The company can examine and audit the model's code and architecture to understand how predictions are produced.
The model automatically scales across all AWS Regions without any configuration effort.
The model includes proprietary training data that remains hidden from users for privacy reasons.
The open source license guarantees higher accuracy than any closed-source alternative.
Answer Description
Open source models publish their code, architecture details, and licensing. Because this information is openly accessible, teams can inspect how the model is built, audit the logic, and document limitations, which are key steps for transparency and explainability. Automatic global scaling, proprietary hidden data, or guaranteed top-tier accuracy are not inherent benefits of open source licensing.
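As a hedged sketch, deploying an openly licensed JumpStart model with the SageMaker Python SDK might look like the following; the model ID is only an example, and available IDs change over time:

```python
from sagemaker.jumpstart.model import JumpStartModel

# Example model ID: an openly licensed LLM whose weights, code, and
# license can be inspected before deployment.
model = JumpStartModel(model_id="huggingface-llm-falcon-7b-instruct-bf16")
predictor = model.deploy()  # hosts the model on a SageMaker endpoint

# Because the architecture and license are public, the team can audit
# the model card and source before (and after) putting it in production.
```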
Ask Bash
What does it mean for a foundation model to be open source?
How can a company audit the code and architecture of an open source model?
What are the benefits of transparency in AI models for businesses?
A team plans to deploy a generative AI chatbot on AWS. To meet responsible AI requirements for privacy and security, which action MOST directly protects users' personally identifiable information (PII) that might appear in prompts or responses?
Publish a model card describing the model's intended use and evaluation metrics.
Enable word filters that block profanity in user input.
Apply a PII redaction rule in Guardrails for Amazon Bedrock.
Run a post-training bias analysis using Amazon SageMaker Clarify.
Answer Description
Configuring PII redaction in Guardrails for Amazon Bedrock masks or removes personal data before it is logged or returned, directly addressing the privacy and security pillar of responsible AI. Bias analysis with SageMaker Clarify improves fairness, publishing a model card supports transparency, and word filters that block profanity improve content safety, but none of these measures specifically safeguard PII.
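A hedged sketch of such a rule; the entity types, actions, and messages below are illustrative:

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Guardrail that anonymizes common PII entity types in both prompts
# and responses before they are logged or returned.
bedrock.create_guardrail(
    name="redact-pii",  # placeholder name
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [
            {"type": "EMAIL", "action": "ANONYMIZE"},
            {"type": "PHONE", "action": "ANONYMIZE"},
            {"type": "NAME", "action": "ANONYMIZE"},
        ]
    },
    blockedInputMessaging="Your message contained sensitive data.",
    blockedOutputsMessaging="The response contained sensitive data.",
)
```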
Ask Bash
What is personally identifiable information (PII)?
How does PII redaction work in Guardrails for Amazon Bedrock?
Why is bias analysis with Amazon SageMaker Clarify not sufficient to protect PII?
A data science team trains models in Amazon SageMaker using sensitive data stored in Amazon S3. The company must guarantee that all traffic between SageMaker and the S3 bucket stays on the AWS network and never goes over the public internet. What should the team do?
Enable Amazon Macie to classify objects in the S3 bucket.
Create an Amazon S3 interface VPC endpoint using AWS PrivateLink and route SageMaker traffic through it.
Enable server-side encryption with Amazon S3-managed keys (SSE-S3) on the bucket.
Attach an IAM role to SageMaker that grants s3:GetObject and s3:PutObject permissions.
Answer Description
Creating an Amazon S3 interface VPC endpoint provides a private connection between resources in the VPC and Amazon S3 through AWS PrivateLink. Traffic routed through the endpoint remains on the AWS backbone, so it never traverses the public internet. Server-side encryption protects data at rest but does not control network paths. Amazon Macie helps discover sensitive data but also does not affect the network route. Granting SageMaker an IAM role controls authorization, not the physical connectivity path.
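A hedged sketch of creating that endpoint with boto3 (all resource IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoint (AWS PrivateLink) for S3, keeping SageMaker-to-S3
# traffic on the AWS network instead of the public internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",              # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.s3",
    SubnetIds=["subnet-0123456789abcdef0"],     # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
)
```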
Ask Bash
What is an Amazon S3 interface VPC endpoint?
How does AWS PrivateLink improve security?
Why doesn't enabling SSE-S3 or IAM roles affect network traffic?
Which risk of prompt engineering occurs when attackers insert malicious instructions into a model's context, such as data retrieved by RAG, so future prompts are influenced to produce incorrect or harmful answers?
Prompt poisoning
Model drift
Prompt hijacking
Jailbreaking
Answer Description
The described scenario is prompt poisoning. In a prompt-poisoning attack, adversaries inject harmful or misleading instructions into the data a foundation model later consumes (for example, documents indexed for RAG). When the model retrieves this tainted content, the malicious instructions become part of the prompt, leading the model to generate compromised outputs. Prompt hijacking instead refers to an attacker taking over an in-progress conversation, while jailbreaking tries to bypass a model's safety filters. Model drift is an unrelated performance change that happens gradually as data or requirements evolve, not from deliberate malicious instructions.
Ask Bash
What is RAG in the context of prompt poisoning?
How does prompt poisoning differ from prompt hijacking?
What safeguards can mitigate the risk of prompt poisoning?
A data science team uses several Amazon SageMaker notebook IAM roles. The team wants to apply the same custom read-only Amazon S3 permissions to all these roles without copying identical inline policies into each one. Which IAM feature should they choose?
Use a service control policy in AWS Organizations to grant access.
Attach a customer managed policy to each role.
Add the permissions to every role as an inline policy.
Configure a role trust policy with the required S3 permissions.
Answer Description
A customer managed policy lets you create a single reusable JSON policy document, then attach it to multiple IAM identities such as roles, users, or groups. This avoids duplication and simplifies updates, because you modify the policy in one place and the change propagates to every attached role. A role trust policy only controls who can assume the role; it does not grant S3 permissions. Service control policies apply at the AWS Organizations level and do not attach directly to individual roles. Adding an inline policy to every role is exactly what the team wants to avoid because it duplicates policy documents and is harder to maintain.
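A short sketch of the create-once, attach-many pattern (the policy name, bucket, and role names are placeholders):

```python
import json
import boto3

iam = boto3.client("iam")

# One reusable read-only S3 policy document.
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::team-data", "arn:aws:s3:::team-data/*"],
    }],
}
policy = iam.create_policy(
    PolicyName="NotebookS3ReadOnly",
    PolicyDocument=json.dumps(policy_doc),
)

# Attach the single customer managed policy to every notebook role.
for role in ["notebook-role-a", "notebook-role-b", "notebook-role-c"]:
    iam.attach_role_policy(RoleName=role, PolicyArn=policy["Policy"]["Arn"])
```

Updating permissions later means editing one policy, not every role's inline copy.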
Ask Bash
What is a customer managed policy in AWS IAM?
What is the difference between a role trust policy and permissions policy in IAM?
How do service control policies (SCP) work in AWS Organizations?
A company operates a customer-service chatbot that calls an Amazon Bedrock foundation model around the clock at a steady rate of several requests per second. They need consistently low latency and want to minimize the cost per request. Which pricing approach best meets these requirements?
Continue using On-Demand pay-per-request invocations
Configure Provisioned Throughput for the model endpoint
Deploy the model in an additional Region and enable latency-based routing
Switch to a larger model with a higher context length
Answer Description
With steady, predictable traffic, Provisioned Throughput reserves dedicated model capacity for a fixed hourly fee. The reserved capacity delivers consistent low-latency responses and, when fully utilized, reduces the effective cost per request compared with pay-as-you-go On-Demand invocations. On-Demand remains more economical for sporadic or low traffic, but becomes more expensive per request at sustained high volume. Choosing a larger model or deploying to another Region does not directly lower per-request cost, and latency-based routing adds network overhead without addressing pricing.
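A hedged sketch of purchasing that capacity; the model ID, name, and commitment term are placeholders, and current pricing should be checked to confirm the break-even point against On-Demand rates:

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Reserve dedicated model capacity for steady, sustained traffic.
bedrock.create_provisioned_model_throughput(
    provisionedModelName="support-chatbot-capacity",   # placeholder name
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model
    modelUnits=1,                   # capacity sized to the steady request rate
    commitmentDuration="OneMonth",  # longer terms lower the hourly rate
)
```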
Ask Bash
What is Provisioned Throughput in Amazon Bedrock?
What is the difference between Provisioned Throughput and On-Demand invocations?
How does latency-based routing work, and why isn't it suitable in this scenario?
Within machine learning terminology, what is meant by a "model" after the training process is complete?
The mathematical artifact that encodes learned patterns and can generate predictions on new data
The algorithm or statistical procedure chosen to learn from data
The dataset supplied to the training job
The compute environment (CPU or GPU instances) used during training
Answer Description
A model is the mathematical representation produced by training algorithms on data. Once training finishes, this artifact captures learned patterns (for example, weights in a neural network) so it can accept new input and generate predictions or inferences. The algorithm itself, the raw training data, and the compute resources used are not the model; they are components of the pipeline that leads to creating the model.
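A tiny sketch of the distinction: the algorithm (`LinearRegression`) plus data produce the model, the artifact that gets saved and reused for inference:

```python
import joblib
from sklearn.linear_model import LinearRegression

X, y = [[1], [2], [3]], [2.0, 4.0, 6.0]    # the training data (not the model)
model = LinearRegression().fit(X, y)        # the algorithm learns the weights

joblib.dump(model, "model.joblib")          # the serialized model artifact
restored = joblib.load("model.joblib")
print(restored.predict([[4]]))              # inference on new data -> ~8.0
```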
Ask Bash
What is the difference between a model and an algorithm in machine learning?
How does a model make predictions on new data?
What happens to training data after a model is built?
Neat!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.