AWS Certified AI Practitioner Practice Test (AIF-C01)
Use the form below to configure your AWS Certified AI Practitioner Practice Test (AIF-C01). The practice test can be configured to only include certain exam objectives and domains. You can choose between 5-100 questions and set a time limit.

AWS Certified AI Practitioner AIF-C01 Information
The AWS Certified AI Practitioner (AIF-C01) certification verifies that you have a strong foundational understanding of artificial intelligence, machine learning, and generative AI, along with exposure to how these are implemented via AWS services. It’s aimed at those who may not be developers or ML engineers but who need to understand and contribute to AI/ML-related decisions and initiatives in their organizations. The exam tests your knowledge of AI/ML concepts, use cases, and how AWS’s AI/ML services map to various business problems.
Topics include basics of supervised/unsupervised learning, generative AI fundamentals, prompt engineering, evaluation metrics, responsible AI practices, and how AWS tools like Amazon SageMaker, Amazon Bedrock, Amazon Comprehend, Amazon Rekognition, and others support AI workflows. Questions are scenario-based and focused on choosing the right service or strategy, rather than hands-on coding or architecture design. After passing, you’ll be able to identify the proper AI/ML tool for a given business challenge, articulate trade-offs, and guide responsible deployment of AI solutions within AWS environments.
Having this certification shows stakeholders that you have a solid conceptual grasp of AI and AWS’s AI/ML ecosystem. It’s well suited for technical leads, solution architects, product managers, or anyone who interacts with AI/ML teams and wants credibility in AI strategy discussions. It also helps bridge the gap between technical teams and business stakeholders around what AI/ML can — and cannot — do in real scenarios.

Free AWS Certified AI Practitioner AIF-C01 Practice Test
- 20 Questions
- Unlimited
- Fundamentals of AI and MLFundamentals of Generative AIApplications of Foundation ModelsGuidelines for Responsible AISecurity, Compliance, and Governance for AI Solutions
In the context of transformer-based large language models, what does tokenization mainly do before the model processes user input?
Convert the input text into a sequence of numeric IDs that represent words or sub-word pieces.
Compress the model's parameters to lower memory usage.
Encrypt user prompts before they are transmitted to the model endpoint.
Divide the dataset into training, validation, and test subsets.
Answer Description
Tokenization breaks raw text into smaller units-such as words, sub-words, or characters-and maps each unit to a numeric token ID. This numerical sequence is what the model actually consumes. Splitting datasets, compressing parameters, or encrypting prompts are separate preprocessing or security tasks and are not part of tokenization itself.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why is tokenization crucial in large language models?
What are sub-word tokens, and why are they used in tokenization?
How does tokenization differ for languages with complex scripts like Chinese or Arabic?
A company wants to build an internal chat assistant that can answer employee questions about proprietary HR policies stored in a private document repository. The team does not want to retrain the foundation model but needs answers that accurately reference the latest documents. Which approach best meets these requirements?
Perform full fine-tuning of the foundation model on the HR policy documents and redeploy the tuned model.
Raise the model's temperature during inference so that it can generate more detailed answers.
Pre-train a new large language model from scratch using all HR policy documents as the training corpus.
Use Retrieval-Augmented Generation to retrieve relevant policy documents at runtime and pass them to the model as context.
Answer Description
Retrieval-Augmented Generation (RAG) works by first searching a designated knowledge source (such as the company's document repository), retrieving the most relevant passages, and then supplying those passages to the foundation model as additional context. Because the model receives up-to-date content at inference time, it can generate responses grounded in the company's current HR policies without the cost and complexity of pre-training or full fine-tuning.
Increasing temperature only affects randomness and does not introduce company-specific knowledge. Pre-training a new model from scratch and fully fine-tuning an existing model would embed the data, but both are far more expensive and time-consuming than RAG, and they require redeployment each time the documents change.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Retrieval-Augmented Generation (RAG)?
How does changing the model's temperature affect its responses?
Why is full fine-tuning of a foundation model less efficient than RAG?
A team is configuring Guardrails for Amazon Bedrock to stop the model from returning any personal data, such as phone numbers or email addresses. Which guardrail capability should they enable?
Stop sequence token
PII redaction
Denied topics policy
Content moderation severity threshold
Answer Description
Guardrails for Amazon Bedrock offers a PII redaction capability that detects personally identifiable information in model outputs and masks or removes it before the response is returned. Denied topics policies block content about specific subjects but do not specifically target personal data. Content moderation severity thresholds rate harmful content, not PII exposure. A stop-sequence token simply truncates the model's text generation and is unrelated to PII protection.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does PII redaction mean in Amazon Bedrock?
How does a denied topics policy differ from PII redaction?
What is the role of a stop-sequence token in Amazon Bedrock?
What does PII redaction mean in the context of Amazon Bedrock?
How does PII redaction differ from a denied topics policy?
When should a stop-sequence token be used instead of PII redaction?
A startup wants to test several foundation models but avoid buying GPUs or making long-term commitments. Which Amazon Bedrock benefit most directly delivers cost-effectiveness for this goal?
Mandatory multi-year GPU reservation contracts
Customer-supplied servers for hosting every model endpoint
A fixed monthly subscription regardless of workload size
Usage-based pricing with no infrastructure to manage
Answer Description
Amazon Bedrock employs on-demand, pay-as-you-go pricing and fully manages the underlying GPU infrastructure. Because organizations are billed only for tokens processed during model inference or fine-tuning, they avoid both capital purchases and unused reserved capacity. The other options either impose fixed fees, require hardware ownership, or force multi-year commitments, so they do not provide the same cost-effective flexibility.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Amazon Bedrock?
What is usage-based pricing in Amazon Bedrock?
Why are GPUs significant for foundation models?
An e-commerce company builds a customer-service chatbot that sends each user query to a hosted foundation model together with a hidden system prompt that defines business rules. Which prompt-engineering risk must the team mitigate to stop attackers from supplying input that overrides or replaces those hidden instructions?
Data poisoning
Prompt hijacking
Model underfitting
Vanishing gradients
Answer Description
Prompt hijacking (often called prompt injection) occurs when a user supplies specially crafted input that convinces the model to ignore or overwrite the developer-provided system prompt. If successful, the attacker can make the chatbot reveal sensitive information, perform unintended actions, or deliver unapproved content. Data poisoning relates to corrupting the model's training data, not runtime instructions. Model underfitting and vanishing gradients are training issues, not security threats that arise from user input during inference. Therefore, preventing prompt hijacking-by techniques such as input sanitization, instruction hierarchy, or output validation-is the relevant concern in this scenario.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is prompt hijacking in AI systems?
How does input sanitization prevent prompt hijacking?
What is the difference between prompt hijacking and data poisoning?
A content moderation team needs to search thousands of chat transcripts by meaning instead of exact keywords. Which generative AI concept makes this possible by converting each text segment into a high-dimensional numeric vector that can be compared for semantic similarity?
Chunking
Embeddings
Diffusion models
Tokenization
Answer Description
Embeddings map text (or other data) into high-dimensional numeric vectors in such a way that semantically similar items are located near one another in vector space. This representation enables semantic or "meaning-based" search because similarity can be computed with simple distance metrics. Tokenization merely splits text into tokens, chunking breaks long text into manageable pieces, and diffusion models are used for iterative data generation (e.g., images), none of which alone provide semantic vector representations for search.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are embeddings in generative AI?
How is semantic similarity calculated using embeddings?
How do embeddings differ from tokenization?
A company needs to run a foundation model locally on mobile devices that have only 2 GB of RAM and no dedicated GPU. Which consideration best supports selecting a smaller, less complex model architecture for this use case?
Reducing memory and compute requirements to meet on-device latency and footprint limits
Leveraging emergent advanced reasoning that appears in multi-billion-parameter models
Achieving the widest possible multilingual accuracy across 200+ languages
Supporting very long input contexts of tens of thousands of tokens
Answer Description
Smaller, less complex models require fewer parameters and computational resources, allowing them to load into limited memory and generate outputs quickly on resource-constrained hardware. Larger or more complex models are designed to deliver extended context windows, broader multilingual coverage, or stronger emergent reasoning abilities, but they typically demand far more memory, power, and latency headroom than an edge device can provide.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are foundation models?
Why do smaller models work better on devices with limited resources?
What is emergent advanced reasoning in AI models?
Why do smaller models perform better on devices with limited hardware like mobile phones?
What are the trade-offs of using smaller, less complex models?
How do smaller models handle inference latency compared to larger models?
Which scenario is least appropriate for an AI/ML solution because a simple, deterministic approach already meets the requirement?
Converting product prices from U.S. dollars to euros using the daily Central Bank exchange rate.
Classifying incoming customer support emails into high, medium, or low urgency.
Detecting potentially fraudulent credit card transactions in real time.
Recommending complementary products to shoppers based on items already in their cart.
Answer Description
AI/ML is valuable when a task involves uncovering patterns or making predictions that cannot be expressed with a fixed rule set. Converting prices between currencies uses a publicly posted exchange rate and a straightforward formula. A rule-based calculation is cheaper, fully accurate, and easier to maintain than building, training, and operating an ML model. The other scenarios involve classification, anomaly detection, or personalized recommendations-areas where ML can add measurable value.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why is converting currency not suitable for AI/ML solutions?
What types of tasks are more appropriate for AI/ML solutions?
What is the advantage of using AI/ML for anomaly detection?
Why is converting prices not suitable for AI/ML solutions?
What makes tasks like anomaly detection and recommendations suitable for AI/ML?
What is a deterministic approach and when is it preferred?
A startup is integrating a generative-AI chatbot with Amazon Bedrock. The team must ensure that user prompts containing hateful or violent language are blocked before the request reaches the foundation model. Which Guardrails for Amazon Bedrock capability should they use to apply this safeguard to the input?
Turn on PII redaction for all prompts submitted to the model.
Add a custom word filter that redacts profanity from model responses only.
Create a denied topics policy that lists hate and violence as restricted subjects.
Enable a content filter on the user input for hate and violence categories.
Answer Description
Guardrails for Amazon Bedrock lets builders add safety controls to both user inputs and model outputs. Content filters are designed to detect and block or transform categories such as hate, harassment, sexual, or violent language. Because the requirement is to stop prompts with hateful or violent language before they reach the model, configuring a content filter on user inputs addresses the need.
- Denied topics policies block requests about broad subject areas (for example, medical advice) but are not category-based language filters.
- Word filters block specific custom terms rather than entire categories.
- PII redaction masks personally identifiable information and does not target hateful or violent language.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are Guardrails in Amazon Bedrock?
How do content filters in Amazon Bedrock work?
What is the difference between a content filter and a denied topics policy in Bedrock?
What are Guardrails for Amazon Bedrock?
How do content filters in Amazon Bedrock work?
What is the difference between a content filter and a denied topics policy in Amazon Bedrock?
To get repeatable, highly consistent answers from a text foundation model, which adjustment to the temperature inference parameter should the developer make?
Set the temperature close to 0 to minimize randomness.
Raise the temperature above 1.5 to reduce variability.
Increase the temperature so the model explores more token options.
Leave temperature unchanged and lower the max-tokens limit instead.
Answer Description
Temperature controls how randomly a model samples from its probability distribution when choosing the next token. A low value (for example, 0 or 0.1) narrows sampling to the most likely tokens, producing deterministic, repeatable answers. Raising temperature has the opposite effect, making responses more diverse. Changing max-tokens affects length, not randomness, and temperature is unrelated to training hyper-parameters such as learning rate.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does the temperature parameter do in text generation models?
How does the temperature parameter differ from the max-tokens setting?
Why would a developer set the temperature close to 0 in production use cases?
What does the temperature parameter do in text generation models?
How is temperature different from the max-tokens parameter?
Why is setting temperature close to 0 recommended for repeatable answers?
To maintain data integrity of training data stored in a single Amazon S3 bucket, a team wants every overwrite or deletion to retain the previous copy so it can be recovered later if corruption occurs. Which S3 feature should they activate?
Enable S3 Versioning on the bucket
Enable S3 Transfer Acceleration
Configure an S3 Lifecycle rule to move data to Amazon S3 Glacier Flexible Retrieval
Enable S3 Object Lock in Compliance mode
Answer Description
Enabling S3 Versioning causes Amazon S3 to keep multiple versions of an object whenever it is overwritten or deleted. This allows earlier, uncorrupted versions to be restored, providing a straightforward safeguard for data integrity. Transfer Acceleration only speeds up uploads and downloads. S3 Object Lock places write-once-read-many (WORM) protections but does not by itself create previous copies when objects change. A Lifecycle rule that archives data to Glacier changes storage class but will not automatically preserve prior versions of modified objects.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is S3 Versioning?
How does S3 Object Lock differ from Versioning?
How do Lifecycle rules interact with Versioning?
What is S3 Versioning, and how does it work?
How is S3 Object Lock different from S3 Versioning?
What are the potential costs of enabling S3 Versioning?
A video-streaming company wants to automatically propose relevant movies to each user, based on the titles they have previously watched and rated. Which type of AI application best suits this requirement?
A recommendation system that generates personalized content suggestions
A speech recognition service that transcribes dialogues in real time
A fraud detection model that flags unusual account activity
An image classification system that labels movie poster images
Answer Description
Recommender systems analyze users' historical interactions-such as viewing and rating behaviour-to predict and surface items each user is likely to enjoy. This personalization objective is not addressed by fraud detection (focused on anomalous activity), speech recognition (turning audio into text), or image classification (labelling visual content).
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a recommendation system, and how does it work?
What is the difference between collaborative filtering and content-based filtering?
How does a recommendation system handle new users or items (cold-start problem)?
Which Amazon SageMaker feature creates a standardized Model Card that stores details such as model purpose, training data, performance metrics, and ethical considerations to support transparency and compliance reviews?
Amazon SageMaker Model Monitor
Amazon SageMaker Clarify
AWS Artifact
Amazon SageMaker Model Cards
Answer Description
Amazon SageMaker Model Cards generate a single, shareable document that captures the model's intended use, dataset sources, evaluation results, and responsible-AI considerations. Model Monitor tracks data and model drift, Clarify analyzes bias and explainability, and AWS Artifact provides compliance reports unrelated to model documentation.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the purpose of Amazon SageMaker Model Cards?
How does Amazon SageMaker Model Monitor differ from Model Cards?
What is the role of Amazon SageMaker Clarify in the ML lifecycle?
What is Amazon SageMaker Model Cards?
How do SageMaker Model Cards support responsible AI practices?
What is the difference between SageMaker Model Monitor and Model Cards?
An AI team stores large training datasets in Amazon S3. Company policy states that any dataset older than 3 years must be removed automatically to meet data retention requirements. Which AWS feature will allow the team to enforce this policy without writing custom deletion scripts?
Amazon CloudWatch Logs retention settings
AWS Config managed rule
Amazon S3 lifecycle configuration
AWS Artifact
Answer Description
Amazon S3 lifecycle configuration lets you define rules that automatically transition objects to different storage classes or expire (delete) them after a specified number of days. By creating a lifecycle rule that sets an expiration action at 1,095 days (3 years), the team can ensure datasets are deleted automatically, meeting the retention mandate. AWS Artifact provides compliance reports, CloudWatch Logs retention settings apply only to log groups, and AWS Config rules evaluate resource configurations but do not delete S3 objects.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Amazon S3 Lifecycle Configuration?
How do you set up a lifecycle rule in Amazon S3?
What happens to expired objects in Amazon S3?
A startup will fine-tune a foundation model in Amazon SageMaker using confidential customer chat transcripts stored in Amazon S3. To follow AWS data governance and security best practices, which action should the company take before starting the training job?
Attach an IAM role to the training job that allows read-only access only to the specific S3 paths containing the transcripts.
Embed AWS access keys for the S3 bucket in the training script's environment variables to simplify access.
Copy the transcripts to an S3 bucket with public-read permissions so the training cluster can download them without credentials.
Convert the transcripts to unencrypted CSV files before uploading them to the training cluster for faster processing.
Answer Description
Granting the SageMaker training job an IAM role that has least-privilege, read-only permission to the exact Amazon S3 prefixes keeps sensitive data access tightly controlled. Public buckets expose the data, embedding long-term keys in code violates security guidance, and removing encryption weakens protection; none of these align with proper data governance for fine-tuning.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is an IAM role in AWS?
What is the principle of least privilege in AWS?
How does Amazon S3 bucket encryption work?
What is an IAM Role in AWS?
Why is least-privilege access important in AWS?
How does Amazon SageMaker interact with S3 for training jobs?
In the context of responsible AI on AWS, which characteristic best defines a transparent (white-box) model?
Its internal decision logic can be directly inspected and understood by humans.
It protects intellectual property by hiding model parameters from end users.
It delivers the highest predictive accuracy by learning complex non-linear patterns that are difficult to interpret.
It prevents adversarial attacks by obfuscating training data and weights.
Answer Description
A transparent or "white-box" model is valued because its internal parameters, features, and decision rules can be directly examined. This visibility allows practitioners and stakeholders to understand why the model produced a particular prediction, making it easier to audit for bias and ensure compliance with responsible-AI requirements. Models that instead maximize accuracy through complex, opaque structures, hide parameters, or rely on obfuscation are considered "black-box" approaches and do not provide the same level of interpretability.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are common examples of white-box models in AI?
Why is interpretability important for responsible AI?
How does a black-box model differ from a white-box model?
A startup releases a generative-AI image service without any content safeguards. Soon, hateful images are produced and shared publicly, causing users to delete their accounts and post negative reviews. According to responsible-AI guidance, which legal risk does this scenario most clearly illustrate for the company?
Loss of customer trust
Violation of data residency requirements
Unexpected compute cost overruns
Intellectual property infringement
Answer Description
The company's unmanaged outputs directly harmed the perceived reliability of its service. When customers lose confidence and stop using the product, the legal and business impact is categorized as a loss of customer trust. No intellectual-property dispute, data-sovereignty issue, or cost overrun is highlighted in the scenario, so those choices are not the primary risk demonstrated here.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is responsible-AI guidance?
How do companies ensure content safeguards in AI systems?
What are the key elements of customer trust in AI services?
What is responsible-AI guidance?
Why is customer trust critical for AI services?
What are content safeguards in generative-AI systems?
A company finds that its customer-service chatbot mislabels requests from customers who write in a regional dialect, while accuracy for other customers remains high. What impact on demographic groups does this situation illustrate?
Higher misclassification rates for the underrepresented dialect caused by training data bias
Equal accuracy across all customer groups, indicating no measurable bias
Lower error for the underrepresented group as a result of effective regularization
Improved system throughput after endpoint scaling rather than any bias effect
Answer Description
The chatbot shows higher error rates for customers who use a regional dialect because that speech pattern is likely under-represented in the training data. This is an example of model bias that disproportionately affects a specific demographic group. The other options either contradict the observed accuracy difference, attribute the effect to an unrelated performance optimization, or incorrectly claim the underrepresented group experiences lower error.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is training data bias?
How can training data be augmented to reduce bias?
What role does model evaluation play in detecting bias?
An ecommerce startup is building a customer-support chatbot on Amazon Bedrock. The bot must answer questions by using the company's product manuals stored in Amazon S3. To follow the Retrieval Augmented Generation (RAG) pattern, which additional component should the team add to the workflow?
Create a vector store such as an Amazon OpenSearch Service index that holds embeddings and returns relevant passages before invoking the model.
Add an AWS Lambda function that sets the model's temperature to 0 for deterministic answers.
Configure an Amazon S3 Access Point so the model can read all manuals directly during inference.
Run an Amazon SageMaker training job to fine-tune the foundation model on the S3 documents.
Answer Description
RAG solutions first retrieve content relevant to the user's question and then pass that content to a foundation model for generation. Amazon Bedrock supports this by connecting to a vector database where embeddings of the source documents are stored. When a query arrives, the vector store performs similarity search, returns the most relevant passages, and those passages are appended to the prompt that Bedrock sends to the model. Fine-tuning the model, granting the model blanket S3 access, or only adjusting inference parameters does not provide retrieval capabilities. Therefore, adding a vector store such as an Amazon OpenSearch Service index for embeddings is required to implement RAG.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Retrieval Augmented Generation (RAG)?
What is a vector store and how does it support RAG workflows?
How does Amazon Bedrock integrate with a vector store for RAG solutions?
A developer using Amazon Bedrock wants a text generation model to immediately stop producing further tokens when the string "###END###" appears in the output. Which inference parameter should the developer configure to achieve this?
Stop sequence
Top-K sampling value
Maximum output tokens
Temperature
Answer Description
Stop sequences let you specify one or more character strings that signal the model to halt generation as soon as any of them is produced. Setting a stop sequence of "###END###" causes the model to terminate output when that exact string is emitted. Temperature adjusts randomness, Top-K limits the number of candidate tokens considered during sampling, and a maximum output token limit cuts off output only when the length threshold is reached-all of which do not stop generation based on a specific string.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a stop sequence in Amazon Bedrock?
How does Temperature affect text generation in Amazon Bedrock?
What is the role of maximum output tokens in text generation?
What is a stop sequence in Amazon Bedrock?
How does temperature affect text generation models?
What is Top-K sampling and how does it differ from stop sequences?
That's It!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.