AWS Certified AI Practitioner Practice Test (AIF-C01)
Use the form below to configure your AWS Certified AI Practitioner Practice Test (AIF-C01). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

AWS Certified AI Practitioner AIF-C01 Information
The AWS Certified AI Practitioner (AIF-C01) certification verifies that you have a strong foundational understanding of artificial intelligence, machine learning, and generative AI, along with exposure to how these are implemented via AWS services. It’s aimed at those who may not be developers or ML engineers but who need to understand and contribute to AI/ML-related decisions and initiatives in their organizations. The exam tests your knowledge of AI/ML concepts, use cases, and how AWS’s AI/ML services map to various business problems.
Topics include basics of supervised/unsupervised learning, generative AI fundamentals, prompt engineering, evaluation metrics, responsible AI practices, and how AWS tools like Amazon SageMaker, Amazon Bedrock, Amazon Comprehend, Amazon Rekognition, and others support AI workflows. Questions are scenario-based and focused on choosing the right service or strategy, rather than hands-on coding or architecture design. After passing, you’ll be able to identify the proper AI/ML tool for a given business challenge, articulate trade-offs, and guide responsible deployment of AI solutions within AWS environments.
Having this certification shows stakeholders that you have a solid conceptual grasp of AI and AWS’s AI/ML ecosystem. It’s well suited for technical leads, solution architects, product managers, or anyone who interacts with AI/ML teams and wants credibility in AI strategy discussions. It also helps bridge the gap between technical teams and business stakeholders around what AI/ML can — and cannot — do in real scenarios.

Free AWS Certified AI Practitioner AIF-C01 Practice Test
- 20 Questions
- Unlimited time
- Fundamentals of AI and ML
- Fundamentals of Generative AI
- Applications of Foundation Models
- Guidelines for Responsible AI
- Security, Compliance, and Governance for AI Solutions
A developer is building a Bedrock-powered assistant that must gather a customer's address, call an internal shipping API, and then return a confirmation, all triggered by a single user request. Within Amazon Bedrock, what is the primary function of an agent in this workflow?
Hosting the foundation model inside a dedicated VPC subnet to minimize latency.
Fine-tuning the foundation model on shipping-related data for higher response accuracy.
Reducing prompt size by automatically compressing user input and model output.
Creating and executing a step-by-step plan that calls external APIs and returns the results to the user.
Answer Description
Agents for Amazon Bedrock automatically break down a user's request into a plan of steps, invoke the required APIs or other functions in the correct order, and pass results back to the foundation model so it can generate a final natural-language response. The other options describe capabilities unrelated to task orchestration: automatic token compression is not a Bedrock feature, private connectivity is provided through VPC endpoints (AWS PrivateLink) rather than agents, and fine-tuning adapts model weights but does not coordinate multi-step actions.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI-generated content may display inaccurate information; always double-check anything important.
What is an agent in Amazon Bedrock?
How does an Amazon Bedrock agent invoke external APIs?
Is fine-tuning involved in an Amazon Bedrock agent's workflow?
A data scientist trains a binary classifier in Amazon SageMaker. After several runs, training accuracy and validation accuracy both hover around 55%, well below the desired 90%. The team concludes the model shows high bias (underfitting). Which action is MOST likely to reduce this problem?
Replace the current model with a more complex algorithm, such as a deeper neural network.
Collect additional training examples and retrain using the same model configuration.
Increase the dropout rate to further limit model capacity.
Enable early stopping so that training ends sooner.
Answer Description
High bias means the model is too simple to capture underlying patterns, so both training and validation accuracy remain low (underfitting). Increasing model complexity, such as using a deeper neural network or adding more relevant features, gives the model greater capacity to fit the data, often lifting both training and validation performance. The other actions either further restrict capacity (higher dropout), shorten learning (early stopping), or add more data without addressing the model's limited capacity; none of these directly resolve high bias.
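The effect of extra capacity can be seen in a few lines of code. The sketch below uses toy data, and the helper names are made up for illustration, not tied to any SageMaker API. It fits a one-weight linear model and then a two-weight model with an added squared feature to data that follows a quadratic pattern:

```python
def mse_linear(xs, ys):
    # Best-fit single-weight model y = w*x: too simple for quadratic data.
    w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def mse_quadratic(xs, ys):
    # Extra capacity: y = w1*x + w2*x**2, solved via the 2x2 normal equations.
    s11 = sum(x * x for x in xs)
    s12 = sum(x ** 3 for x in xs)
    s22 = sum(x ** 4 for x in xs)
    b1 = sum(x * y for x, y in zip(xs, ys))
    b2 = sum(x * x * y for x, y in zip(xs, ys))
    det = s11 * s22 - s12 * s12
    w1 = (b1 * s22 - b2 * s12) / det
    w2 = (s11 * b2 - s12 * b1) / det
    return sum((w1 * x + w2 * x * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [x * x for x in xs]  # the true relationship is quadratic
print(mse_linear(xs, ys))     # stays high: the straight line cannot bend
print(mse_quadratic(xs, ys))  # near zero: added capacity fits the pattern
```

The simple model's error stays high no matter how long it trains, which is the high-bias signature; the higher-capacity model drives the error to near zero.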
Ask Bash
What is underfitting in machine learning?
Why does a deeper neural network help address high bias?
How does dropout impact model complexity?
A startup is refining an open-source large language model so it better follows natural-language directives like "generate an email response" or "explain this code" across many tasks, without retraining on domain-specific content. Which fine-tuning method addresses this requirement?
Continuous pre-training with additional domain-specific text
Training a brand-new model from scratch on a proprietary corpus
Adding parameter-efficient adapters trained on one specialized dataset
Instruction tuning on a multi-task set of instruction-response examples
Answer Description
Instruction tuning fine-tunes a foundation model on a curated set of instruction-response pairs drawn from many different tasks. This helps the model understand and comply with varied user instructions, improving its ability to follow prompts without changing its underlying domain knowledge. Continuous pre-training is aimed at extending the model's knowledge with new domain data, not primarily at instruction following. Adapter-based parameter-efficient tuning can adapt a model but, if trained on a narrow dataset, will not necessarily teach broad instruction compliance. Training a new model from scratch is far more expensive and unnecessary when the goal is simply to make an existing model better at following instructions.
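As an illustration, an instruction-tuning dataset is just a collection of instruction-response pairs spanning many tasks. The records below are made up, arranged in the JSON Lines layout commonly used for fine-tuning jobs:

```python
import json

# Made-up multi-task instruction-response records; note the variety of
# task types (email drafting, code explanation, summarization).
records = [
    {"instruction": "Generate an email response declining a meeting invite.",
     "response": "Thanks for the invite. Unfortunately I have a conflict at that time."},
    {"instruction": "Explain this code: for i in range(3): print(i)",
     "response": "The loop prints the numbers 0, 1, and 2, one per line."},
    {"instruction": "Summarize: The quarterly report shows revenue grew 12%.",
     "response": "Revenue grew 12% this quarter."},
]
# One JSON object per line (JSONL), as fine-tuning pipelines typically expect.
jsonl = "\n".join(json.dumps(r) for r in records)
print(jsonl)
```

Breadth across task types, rather than depth in one domain, is what teaches the model to follow arbitrary directives.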
Ask Bash
What is instruction tuning in machine learning?
What is the key difference between instruction tuning and continuous pre-training?
Why is training a new model from scratch not cost-effective for refining instruction following?
A startup is building an AI inference API on AWS and needs to assure customers that its information security controls align with a globally recognized ISO standard. Which ISO publication should the startup cite to prove compliance with information security management requirements?
ISO 31000
ISO/IEC 27001
ISO 9001
ISO 14001
Answer Description
ISO/IEC 27001 specifies requirements for establishing, implementing, maintaining, and continually improving an information security management system (ISMS). Demonstrating alignment with ISO/IEC 27001 therefore shows that the organization follows internationally recognized best practices for protecting the confidentiality, integrity, and availability of its AI workload and data on AWS. The other standards focus on environmental management (ISO 14001), quality management (ISO 9001), and risk-management guidelines (ISO 31000); none of these set specific controls for information security management.
Ask Bash
What is an ISMS in the context of ISO/IEC 27001?
How does ISO/IEC 27001 compare to other ISO standards like ISO 14001?
Why is ISO/IEC 27001 important for AI workloads on AWS?
An application running on AWS makes real-time text classification predictions. The team wants any prediction with a confidence score below 60% automatically routed to a pool of human reviewers so they can approve or correct the result. Which AWS service should they use to add this human review step with minimal custom code?
Amazon SageMaker Clarify
Amazon Augmented AI (Amazon A2I)
Guardrails for Amazon Bedrock
Amazon SageMaker Model Monitor
Answer Description
Amazon Augmented AI (A2I) is designed to insert human review into machine learning workflows. It provides pre-built or custom workflows that automatically send low-confidence predictions to human reviewers and then returns the consolidated result to the application. SageMaker Clarify detects bias and explains model behavior, but it does not manage review tasks. SageMaker Model Monitor tracks data and model drift, not human review. Guardrails for Amazon Bedrock filters and redacts generative AI inputs and outputs but does not orchestrate human approval. Therefore, A2I is the correct choice for the requirement.
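The routing rule at the heart of this pattern is a simple confidence threshold. The sketch below is purely illustrative; A2I implements this with human-loop activation conditions, and the function and field names here are hypothetical:

```python
def route_prediction(label, confidence, threshold=0.60):
    # Predictions below the confidence threshold go to human review,
    # mirroring what an Amazon A2I human-loop trigger does.
    if confidence < threshold:
        return {"decision": "human_review", "label": label}
    return {"decision": "auto_approve", "label": label}

print(route_prediction("billing", 0.42))  # routed to a human reviewer
print(route_prediction("billing", 0.97))  # returned automatically
```

A2I's value is that it manages the reviewer workforce, task UI, and result consolidation around this rule, so the application only supplies the threshold and the predictions.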
Ask Bash
What is Amazon Augmented AI (A2I)?
How does Amazon A2I handle predictions with low confidence scores?
What is the difference between Amazon A2I and SageMaker Clarify?
A company builds a chatbot that uses retrieval-augmented generation (RAG). For each document, the team creates an embedding vector and stores it in a vector database. At runtime, the user query is also embedded, then a vector search is performed. What is the main purpose of this vector search step?
Identify documents whose embeddings are nearest to the query vector so semantically relevant content can be retrieved even when exact words differ.
Encrypt all stored embeddings with AES-256 to meet compliance requirements.
Convert the document embeddings back into text tokens before they are sent to the language model.
Fine-tune the language model by adjusting its weights using the retrieved vectors.
Answer Description
Vector search compares the numeric embedding of the user's query with the stored document embeddings and returns the ones that lie closest in the high-dimensional space (for example, by cosine similarity or Euclidean distance). Because embeddings capture semantic meaning, proximity in this space indicates that the documents are conceptually related to the query even if they share few or no identical keywords. The other options describe activities that are not carried out by the vector search operation itself: converting vectors back to tokens is part of downstream processing, encryption is a security feature handled separately, and updating model weights is training rather than retrieval.
Ask Bash
How do embeddings capture semantic meaning?
What are some common techniques used in vector search?
What is a vector database, and why is it used?
A developer wants a foundation model to gather product data from an internal database, summarize the findings, and then draft an email to a vendor, all from a single prompt. Which Amazon Bedrock capability is designed to plan and execute these multi-step actions?
Knowledge Bases for Amazon Bedrock
Agents for Amazon Bedrock
Guardrails for Amazon Bedrock
Amazon Bedrock model evaluation workflow
Answer Description
Agents for Amazon Bedrock are specifically built to decompose a user request into individual steps, select the correct tools or API calls, manage the flow of information between steps, and return a final consolidated response. Guardrails add safety controls, Knowledge Bases enable retrieval-augmented generation, and model evaluation tools benchmark quality; none of those components independently orchestrate a chain of actions on the user's behalf.
Ask Bash
What are Agents for Amazon Bedrock?
How do Agents for Amazon Bedrock differ from Guardrails?
What is retrieval-augmented generation in Knowledge Bases for Amazon Bedrock?
A retailer deploys a generative-AI résumé screener that consistently lowers the ranking of applicants over 50 years old. According to responsible AI practices, what specific legal risk does this biased behavior create for the company?
Exposure to discrimination lawsuits and regulatory penalties for biased hiring practices
Increased operational expenses from running an oversized model in the cloud
Claims that the model's training data infringes third-party copyrights
Customer confusion stemming from hallucinated product descriptions without legal consequences
Answer Description
Because the model's outputs disproportionately disadvantage a protected age group, the company could be accused of employment discrimination. In the United States, such bias can violate the Age Discrimination in Employment Act (ADEA) and can lead to lawsuits, regulatory penalties, and reputational loss. Copyright infringement, operating cost concerns, and hallucinated text do not address the legal exposure created by discriminatory impact.
Ask Bash
What is the Age Discrimination in Employment Act (ADEA)?
How does biased AI violate Title VII of the Civil Rights Act?
What steps can companies take to mitigate bias in AI systems?
During evaluation, a sentiment analysis model scores 98% accuracy on training data but drops to 62% on new reviews. What does this gap indicate, and which simple mitigation is appropriate?
Data leakage is the issue; remove regularization so the model can learn the hidden signal.
The model has high bias (underfitting); increase model complexity to capture more patterns.
The model has high variance (overfitting); apply regularization or collect additional diverse training data.
No real issue exists; simply running more training epochs will close the accuracy gap.
Answer Description
A large gap between training and unseen data accuracy signals high variance, meaning the model has memorized the training samples and is overfitting. Techniques such as adding regularization, simplifying the model, or gathering more diverse data help it generalize better. Increasing complexity, removing regularization, or merely training longer would likely worsen or not resolve the variance problem.
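To make the regularization point concrete, the toy sketch below (single feature, made-up data; `fit_weight` is an illustrative helper, not a SageMaker API) shows how an L2 (ridge) penalty shrinks the fitted weight, limiting how tightly the model can chase the training samples:

```python
def fit_weight(xs, ys, l2=0.0):
    # Least-squares fit of a one-feature model y = w*x, with an optional
    # L2 (ridge) penalty that shrinks w toward zero.
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + l2)

xs = [1.0, 2.0, 3.0]
ys = [1.1, 1.9, 3.2]  # noisy observations of y ≈ x
print(fit_weight(xs, ys))          # unregularized weight
print(fit_weight(xs, ys, l2=5.0))  # smaller weight: the penalty constrains the fit
```

The same principle scales up: penalizing large weights keeps a flexible model from memorizing noise, which is exactly the overfitting remedy the correct answer describes.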
Ask Bash
What is overfitting in machine learning?
How does regularization help reduce overfitting?
Why is diverse training data important in machine learning?
A support team replaced a manual triage script with a foundation model-powered agent. Which measurement would give the clearest indication that the new solution has improved task-engineering efficiency for the business?
Average time required to classify a support ticket
Total number of parameters in the deployed model
Count of languages that the model can understand
Size of the vector database storing ticket embeddings
Answer Description
Efficiency in task engineering is shown by how much faster or easier a task is completed. Tracking the average time an agent needs to classify each support ticket directly reveals whether the foundation model reduces human effort and speeds up the workflow. Metrics such as the model's parameter count, the number of languages it supports, or the size of the vector index do not measure how efficiently the business task itself is executed.
Ask Bash
How do foundation models improve task-engineering efficiency?
What are examples of metrics used to measure model performance in business tasks?
Why is the number of model parameters not a good indicator of efficiency?
A marketing team has a dataset of customer purchasing histories with no labels. They want a model to automatically identify naturally occurring customer segments for targeted campaigns. Which learning method is most appropriate for this task?
Transfer learning
Supervised learning
Reinforcement learning
Unsupervised learning
Answer Description
Unsupervised learning is suited to situations where the training data lacks predefined labels. Algorithms such as clustering can examine purchase-history features and group customers into segments based solely on similarities in the data. Supervised learning requires labeled examples, so it cannot be applied directly here. Reinforcement learning relies on an agent receiving rewards from interacting with an environment, which does not match the scenario. Transfer learning reuses knowledge from a different, often pre-trained model and is not the primary approach for discovering new clusters in unlabeled data.
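A minimal sketch of the clustering idea, using a pure-Python k-means on made-up two-feature purchase histories (a real workload would use a library implementation such as SageMaker's built-in k-means algorithm):

```python
def dist2(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(points):
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

def kmeans(points, k, iters=10):
    # Naive deterministic initialization for reproducibility; real
    # implementations use smarter schemes such as k-means++.
    centroids = [points[i * len(points) // k] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

# Toy (spend score, frequency score) histories with two obvious segments.
customers = [(1, 2), (2, 1), (1, 1), (9, 8), (8, 9), (9, 9)]
segments = kmeans(customers, k=2)
print([sorted(s) for s in segments])
```

No labels were supplied: the algorithm recovers the low-spend and high-spend segments purely from similarity in the data, which is the defining trait of unsupervised learning.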
Ask Bash
What is unsupervised learning?
What are clustering algorithms?
Why is supervised learning inappropriate for this task?
A data scientist starts an Amazon SageMaker training job that downloads input files from an Amazon S3 bucket. Which measure will ensure the files are encrypted in transit while being transferred from Amazon S3 to SageMaker?
Enable server-side encryption with AWS KMS on the S3 bucket.
Turn on versioning for the S3 bucket.
Add a bucket policy that denies requests when the aws:SecureTransport condition is false, forcing HTTPS.
Attach an IAM role with AmazonS3ReadOnlyAccess to the SageMaker training job.
Answer Description
Requiring HTTPS connections to the S3 bucket encrypts data in transit with TLS, protecting the files as they travel to SageMaker. A bucket policy that denies any request not using SecureTransport enforces this requirement. Server-side encryption with AWS KMS protects data at rest only, IAM permissions control access but do not add transport encryption, and bucket versioning tracks object versions without affecting encryption.
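A minimal version of such a policy can be sketched as follows. The bucket name is a placeholder, but the aws:SecureTransport condition key and the deny-when-false pattern are standard:

```python
import json

# Bucket policy that denies any request not made over TLS.
# "example-training-data" is a placeholder bucket name.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-training-data",
                "arn:aws:s3:::example-training-data/*",
            ],
            # aws:SecureTransport is "false" when the request used plain HTTP.
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}
print(json.dumps(policy, indent=2))
```

Attached to the bucket (for example, via the S3 PutBucketPolicy API), this forces every client, including the SageMaker training job, to use HTTPS, so the files are encrypted with TLS in transit.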
Ask Bash
What is the aws:SecureTransport condition in an S3 bucket policy?
How does HTTPS ensure encryption in transit?
Why is server-side encryption with AWS KMS not sufficient for encryption in transit?
In the context of foundation models on AWS, which feature best distinguishes a multimodal generative AI model from a text-only large language model (LLM)?
It is limited to producing vector embeddings instead of human-readable outputs.
It requires human-generated labels for every training example.
It relies exclusively on tokenizing character sequences for every input.
It can both interpret and generate content across multiple data types, such as text and images.
Answer Description
Multimodal models are trained to handle two or more data modalities, so they can ingest and/or produce different kinds of content, commonly text, images, audio, or video. In contrast, a text-only LLM is limited to processing sequences of text tokens. The ability to generate or understand multiple data types is therefore the defining characteristic. Relying solely on text tokenization, requiring fully labeled data, or restricting output to vector embeddings describes attributes that are not unique to multimodal models and may apply to many other model classes.
Ask Bash
What is a multimodal generative AI model?
What are tokenized character sequences used for in AI models?
Why do multimodal models not require human labels for every training example?
A startup needs to adapt a large language model to its highly specialized vocabulary. They are comparing full fine-tuning, parameter-efficient fine-tuning (PEFT), and a retrieval-augmented generation (RAG) approach. Which statement BEST describes the cost profile of full fine-tuning?
It usually has the highest overall cost because every model parameter must be updated and a full new set of weights must be stored.
Its cost is comparable to PEFT because both methods adjust only a small subset of parameters.
Its storage cost is negligible because only lightweight weight deltas are saved.
It is cheaper than RAG because it removes the need for an external vector database.
Answer Description
Full fine-tuning retrains every parameter in the foundation model. This requires extensive GPU time and creates a complete new copy of the model weights, both of which contribute to high training and storage costs. In contrast, PEFT updates only a small set of additional parameters, and RAG leaves the model unchanged while relying on a separate (usually cheaper) vector store, so both alternatives are typically less expensive than full fine-tuning.
Ask Bash
What is parameter-efficient fine-tuning (PEFT)?
What is a retrieval-augmented generation (RAG) approach?
Why does full fine-tuning have higher storage costs?
After training a model in Amazon SageMaker, a team must capture its purpose, training-data lineage, evaluation results, and potential risks in a standardized document that auditors can review. Which SageMaker capability fulfills this need?
Amazon SageMaker Feature Store
Amazon SageMaker Model Registry
Amazon SageMaker Model Monitor
Amazon SageMaker Model Cards
Answer Description
Amazon SageMaker Model Cards provide a single, standardized location to record key information about a machine-learning model, including its intended use, data sources, training details, evaluation metrics, and ethical or risk considerations. This helps organizations meet governance and compliance requirements by making model provenance and performance transparent.
- SageMaker Model Monitor tracks data and model drift after deployment, not documentation.
- SageMaker Feature Store manages engineered features for training and inference, but does not compile model-level summaries.
- SageMaker Model Registry stores versioned model artifacts and approval status, yet it does not capture the broader context, such as data lineage or risk assessment, that Model Cards are designed to hold.
Ask Bash
What are Amazon SageMaker Model Cards used for?
How is a SageMaker Model Card different from SageMaker Model Monitor?
What is included in a SageMaker Model Card?
To maintain data integrity of training data stored in a single Amazon S3 bucket, a team wants every overwrite or deletion to retain the previous copy so it can be recovered later if corruption occurs. Which S3 feature should they activate?
Enable S3 Object Lock in Compliance mode
Configure an S3 Lifecycle rule to move data to Amazon S3 Glacier Flexible Retrieval
Enable S3 Versioning on the bucket
Enable S3 Transfer Acceleration
Answer Description
Enabling S3 Versioning causes Amazon S3 to keep multiple versions of an object whenever it is overwritten or deleted. This allows earlier, uncorrupted versions to be restored, providing a straightforward safeguard for data integrity. Transfer Acceleration only speeds up uploads and downloads. S3 Object Lock places write-once-read-many (WORM) protections but does not by itself create previous copies when objects change. A Lifecycle rule that archives data to Glacier changes storage class but will not automatically preserve prior versions of modified objects.
Ask Bash
What is S3 Versioning?
How does S3 Object Lock differ from Versioning?
How do Lifecycle rules interact with Versioning?
Which Amazon SageMaker capability provides a standardized report that records a model's training data sources, intended use, evaluation metrics, and limitations to support governance and compliance requirements?
Amazon SageMaker Pipelines
Amazon SageMaker Feature Store
Amazon SageMaker Model Cards
Amazon SageMaker Clarify
Answer Description
Amazon SageMaker Model Cards act as a single source of truth for machine-learning models. They capture key governance details such as where the training data came from, how the model will be used, performance and bias metrics, and any known limitations or risks. SageMaker Pipelines orchestrate ML workflows but do not produce governance reports. SageMaker Clarify analyzes bias and explainability results but does not create a consolidated document. SageMaker Feature Store stores features for training and inference, not governance metadata. Therefore, Model Cards are the correct choice.
Ask Bash
What is the purpose of Amazon SageMaker Model Cards?
How do Amazon SageMaker Model Cards differ from SageMaker Pipelines?
What kind of information does Amazon SageMaker Model Cards capture?
A developer is building a chatbot using Amazon Bedrock. What is the primary purpose of enabling Guardrails for Amazon Bedrock for this application?
To monitor compute resource usage and automatically scale Bedrock inference endpoints.
To automatically encrypt all data stored in Amazon Bedrock with customer-managed keys.
To enforce safety and responsible-AI policies that filter or block harmful, biased, or unwanted content in model inputs and outputs.
To provide a catalog of foundation models optimized for low-latency inference.
Answer Description
Guardrails for Amazon Bedrock let builders specify policies, such as denied topics, content filters, and PII redaction, that the service automatically applies to model inputs and outputs. This helps prevent the chatbot from returning harmful, biased, or otherwise restricted content. The feature is not for scaling, encryption, or model selection; those capabilities are handled by other AWS services and Bedrock functions.
Ask Bash
What is Amazon Bedrock?
What are Guardrails in Amazon Bedrock?
Why is filtering harmful content important for chatbots?
Which Amazon SageMaker capability is designed to simplify data preparation and exploratory analysis before model training by allowing users to import, transform, and visualize data with minimal coding effort?
Amazon SageMaker Ground Truth
Amazon SageMaker Data Wrangler
Amazon SageMaker Feature Store
Amazon SageMaker Model Monitor
Answer Description
Amazon SageMaker Data Wrangler provides a visual interface to import data from multiple sources, perform transformations, handle feature engineering, and create quick visualizations. These tasks occur in the data preparation phase of the ML pipeline. Amazon SageMaker Feature Store is meant for storing and sharing prepared features, not for preparing them. Amazon SageMaker Model Monitor tracks models in production for data or performance drift, a post-deployment activity. Amazon SageMaker Ground Truth focuses on generating labeled datasets through human or automated annotation. Therefore, only Data Wrangler directly addresses streamlined data preparation and analysis prior to training.
Ask Bash
What is Amazon SageMaker Data Wrangler used for?
How does Amazon SageMaker Feature Store differ from Data Wrangler?
Can Data Wrangler handle visualizations, and how does it work?
A startup releases a generative-AI image service without any content safeguards. Soon, hateful images are produced and shared publicly, causing users to delete their accounts and post negative reviews. According to responsible-AI guidance, which business risk does this scenario most clearly illustrate for the company?
Violation of data residency requirements
Intellectual property infringement
Loss of customer trust
Unexpected compute cost overruns
Answer Description
The company's unmanaged outputs directly harmed the perceived reliability of its service. When customers lose confidence and stop using the product, the resulting business impact is categorized as a loss of customer trust. No intellectual-property dispute, data-sovereignty issue, or cost overrun is highlighted in the scenario, so those choices are not the primary risk demonstrated here.
Ask Bash
What is responsible-AI guidance?
How do companies ensure content safeguards in AI systems?
What are the key elements of customer trust in AI services?
Woo!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.