Microsoft Azure AI Fundamentals Practice Test (AI-900)
Use the form below to configure your Microsoft Azure AI Fundamentals Practice Test (AI-900). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft Azure AI Fundamentals AI-900 Information
The Microsoft Certified: Azure AI Fundamentals (AI-900) exam is an entry-level certification designed for individuals seeking foundational knowledge of artificial intelligence (AI) and machine learning (ML) concepts and their applications within the Microsoft Azure platform. The AI-900 exam covers essential AI workloads such as anomaly detection, computer vision, and natural language processing, and it emphasizes responsible AI principles, including fairness, transparency, and accountability. While no deep technical background is required, a basic familiarity with technology and Azure’s services can be helpful, making this certification accessible to a wide audience, from business decision-makers to early-career technologists.
The exam covers several major domains, starting with AI workloads and considerations, which introduces candidates to various types of AI solutions and ethical principles. Next, it delves into machine learning fundamentals, explaining core concepts like data features, model training, and types of machine learning such as classification and clustering. The exam also emphasizes specific Azure tools for implementing AI solutions, such as Azure Machine Learning Studio for visual model-building, the Computer Vision service for image analysis, and Azure Bot Service for conversational AI. Additionally, candidates learn how natural language processing (NLP) tasks, including sentiment analysis, translation, and speech recognition, are managed within Azure’s language and speech services.
Achieving the AI-900 certification demonstrates a solid understanding of AI and ML basics and prepares candidates for more advanced Azure certifications in data science or AI engineering. It’s an excellent credential for those exploring how AI solutions can be effectively used within the Azure ecosystem, whether to aid business decision-making or to set a foundation for future roles in AI and data analytics.

Free Microsoft Azure AI Fundamentals AI-900 Practice Test
- 20 Questions
- Unlimited
- Describe Artificial Intelligence Workloads and Considerations
- Describe Fundamental Principles of Machine Learning on Azure
- Describe Features of Computer Vision Workloads on Azure
- Describe Features of Natural Language Processing (NLP) Workloads on Azure
- Describe Features of Generative AI Workloads on Azure
A company plans to implement a chatbot that provides human-like answers to customer queries.
What feature of Azure OpenAI Service should they use?
Leverage the image creation functions of Azure OpenAI Service
Utilize the text generation capabilities of Azure OpenAI Service
Use Azure's QnA Maker to fetch answers from a knowledge base
Employ the code generation features of Azure OpenAI Service
Answer Description
Utilize the text generation capabilities of Azure OpenAI Service - This is the correct answer. The text generation capabilities of Azure OpenAI Service, such as models like GPT, are ideal for building a chatbot that provides human-like answers to customer queries. These models can generate coherent, contextually relevant responses based on the input provided.
Employ the code generation features of Azure OpenAI Service - This feature is focused on generating code from natural language prompts.
Leverage the image creation functions of Azure OpenAI Service - Image creation functions like DALL·E are used for generating images from text descriptions, not for producing conversational text.
Use Azure's QnA Maker to fetch answers from a knowledge base - While QnA Maker is useful for creating a FAQ-style system based on a knowledge base, it does not offer the same level of dynamic, human-like conversation generation as the text generation models in Azure OpenAI Service.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the role of text generation in Azure OpenAI Service?
What differentiates Azure OpenAI from Azure QnA Maker for building chatbots?
Can the same chatbot use multiple Azure OpenAI Service features?
Which consideration ensures that AI systems are developed with mechanisms for oversight and that organizations are responsible for the outcomes produced by these systems?
Inclusiveness
Accountability
Transparency
Reliability and Safety
Answer Description
Accountability - This is the correct answer. Accountability ensures that AI systems are developed with mechanisms for oversight, and that organizations take responsibility for the outcomes produced by these systems. It involves ensuring that AI systems are used ethically and that their creators or operators are held responsible for their impact.
Transparency focuses on making AI systems understandable and providing visibility into how they work, but it doesn't directly address the mechanisms for oversight and responsibility for outcomes.
Inclusiveness is about ensuring that AI systems are designed to be fair and accessible, considering the diverse needs of users, but it is not specifically about oversight or accountability for outcomes.
Reliability and Safety focus on ensuring that AI systems perform as expected and do not cause harm, but accountability is the key consideration for ensuring oversight and responsibility for the system's outcomes.
Ask Bash
Why is accountability more important than transparency in ensuring oversight of AI systems?
What mechanisms can organizations use to ensure accountability in AI systems?
How does accountability in AI align with ethical principles?
Which characteristic is associated with AI models that generate new content based on learned patterns?
They retrieve exact copies of existing content
They create new data based on learned patterns from training data
They classify input data into predefined categories
They predict numerical values from historical data
Answer Description
They create new data based on learned patterns from training data - This is the correct answer. AI models that generate new content, such as text, images, or music, are typically generative models. They learn patterns from training data and use this knowledge to produce original content that resembles the data they were trained on.
They classify input data into predefined categories - This describes classification models, which focus on categorizing data into specific labels or categories, rather than generating new content.
They predict numerical values from historical data - This is characteristic of predictive models, which are used for forecasting or regression tasks, rather than generating new content.
They retrieve exact copies of existing content - This describes retrieval-based models, which focus on finding and returning pre-existing content from a database or dataset, rather than generating new, original content.
Ask Bash
What are some examples of generative AI models?
How does a generative model differ from a classification model?
What role does training data play in generative AI?
Which approach contributes to ensuring safety in applications using machine learning?
Reducing the diversity of training data
Minimizing the transparency of system operations
Ignoring edge cases during development
Including fail-safe mechanisms in the system design
Answer Description
Including fail-safe mechanisms in the system design helps prevent the application from causing harm if it encounters unexpected situations. Minimizing transparency, reducing data diversity, and ignoring edge cases can compromise safety and lead to undesirable outcomes.
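The idea of a fail-safe can be made concrete with a minimal sketch, assuming a simple confidence-threshold design (the model, labels, and threshold value below are illustrative, not part of any Azure API):

```python
# Minimal sketch of a fail-safe wrapper: when the model's confidence falls
# below a threshold, defer to a human reviewer instead of acting on the
# prediction. The prediction and confidence values are stand-ins, not a
# real model's output.

def classify_with_failsafe(prediction, confidence, threshold=0.8):
    """Return the prediction only when confidence is high enough;
    otherwise flag the case for human review."""
    if confidence >= threshold:
        return {"label": prediction, "action": "auto"}
    return {"label": None, "action": "human_review"}

print(classify_with_failsafe("approve", 0.95))  # handled automatically
print(classify_with_failsafe("approve", 0.55))  # routed to a person
```

The same pattern generalizes to other fail-safes, such as rejecting inputs outside the training distribution or capping the actions a system may take without confirmation.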
Ask Bash
What are fail-safe mechanisms in machine learning systems?
Why is transparency important for machine learning systems?
How does training data diversity contribute to machine learning safety?
A company wants to convert text documents into spoken audio files with natural-sounding voices.
Which Azure service should they use to achieve this?
Azure AI Speech service
Azure AI Language service
Azure Cognitive Search
Azure AI Vision service
Answer Description
The Azure AI Speech service provides text-to-speech capabilities, allowing developers to convert written text into spoken words with natural-sounding voices. This service supports multiple languages and customizable voice options, making it suitable for applications like audiobooks, accessibility tools, and interactive voice responses.
The Azure AI Language service focuses on natural language processing tasks such as sentiment analysis and key phrase extraction but does not offer text-to-speech functionality. Azure AI Vision service is designed for image and video analysis. Azure Cognitive Search provides indexing and querying over content but does not handle text-to-speech conversion.
Ask Bash
What is the Azure AI Speech service used for?
How does Azure AI Speech service differ from Azure AI Language service?
Can the Azure AI Speech service support multiple languages and custom voices?
An organization wants to enhance the searchability of its document repository by automatically identifying significant terms and concepts within their text documents.
Which Azure AI capability should they use to achieve this?
Entity Recognition
Speech Recognition
Key Phrase Extraction
Sentiment Analysis
Answer Description
Key Phrase Extraction is the appropriate Azure AI capability for this scenario. It analyzes text to identify the most relevant phrases that represent the main topics or concepts within a document. These key phrases can be used as metadata tags to improve searchability and organization.
Sentiment Analysis determines the emotional tone of the text, which is not useful for identifying key concepts.
Entity Recognition identifies and categorizes specific entities like names of people, organizations, or locations but may not capture all significant terms.
Speech Recognition converts spoken language into text and is not applicable to text documents.
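To illustrate only the *idea* behind key phrase extraction (the Azure AI Language service uses trained models and is far more capable), a naive frequency-based sketch shows how extracted terms can become search metadata; the stopword list and document are made up:

```python
from collections import Counter

# Toy illustration of key-term extraction: surface the most frequent
# non-trivial words as candidate tags for search. This is NOT how the
# Azure AI Language service works internally; it only demonstrates how
# extracted phrases improve searchability.

STOPWORDS = {"the", "a", "of", "and", "to", "in", "is", "for"}

def naive_key_terms(text, top_n=3):
    words = [w.strip(".,").lower() for w in text.split()]
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [term for term, _ in counts.most_common(top_n)]

doc = "The contract covers cloud services. Cloud services pricing and cloud support."
print(naive_key_terms(doc))  # 'cloud' and 'services' rank first
```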
Ask Bash
What is Key Phrase Extraction in Azure AI?
How is Entity Recognition different from Key Phrase Extraction?
Can Key Phrase Extraction be customized for specific industries or use cases?
A retail company wants to automatically identify and categorize products on store shelves using images from in-store cameras.
Which Azure workload should they use?
Natural Language Processing (NLP)
Generative AI
Computer Vision
Knowledge Mining
Answer Description
They should use Computer Vision. This workload enables machines to interpret and understand visual information from images or videos. In this scenario, the retail company aims to analyze images from cameras to identify and categorize products, which is a typical application of Computer Vision.
Natural Language Processing deals with understanding and processing human language, which is not applicable here.
Knowledge Mining involves extracting information from large volumes of text but doesn't directly address image analysis.
Generative AI focuses on creating new content, such as text or images, and is not suited for product identification tasks.
Ask Bash
What is Computer Vision and how does it work?
What are some real-world applications of Computer Vision?
What are the main differences between Computer Vision and Natural Language Processing (NLP)?
An online retailer is building a recommendation engine that uses individual-level purchase history and click-stream data. The company must comply with privacy regulations such as GDPR while still keeping the data useful for personalizing suggestions.
Which privacy-preserving technique best satisfies this requirement?
Apply data anonymization to remove or irreversibly mask all PII before model training
Encrypt the raw data at rest and decrypt it during model training without additional masking
Replace each customer ID with a reversible hash and keep the mapping table for future reference
Aggregate the data into category-level totals and delete the original customer-level records
Answer Description
Removing or masking personally identifiable information (PII) before feature engineering, through data anonymization or strong redaction, ensures that customer identities cannot be recovered, so the training data is no longer classified as personal data under GDPR. Encryption alone protects data at rest and in transit but exposes PII once the data is decrypted inside the training pipeline. Reversible hashing (pseudonymization) still permits re-identification if the lookup table is compromised. Aggregating the data into population-level statistics removes all personalization signals, making it unsuitable for a recommender system.
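The contrast between pseudonymization and anonymization can be sketched in a few lines; the record fields and identifiers below are invented for illustration:

```python
import hashlib

# Sketch contrasting reversible pseudonymization with irreversible removal.
# The customer record and field names are illustrative only.

record = {"customer_id": "C-1042", "purchases": ["shoes", "hat"]}

# Pseudonymization: hashing the ID but keeping a mapping table still
# allows re-identification, so the data remains personal data under GDPR.
pseudo_id = hashlib.sha256(record["customer_id"].encode()).hexdigest()[:12]
mapping = {pseudo_id: record["customer_id"]}  # keeping this table preserves the risk

# Anonymization: drop the identifier entirely and keep only the features
# needed for modelling; nothing links the row back to a person.
anonymized = {"purchases": record["purchases"]}

print(pseudo_id in mapping)          # True: re-identification still possible
print("customer_id" in anonymized)   # False: identity removed
```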
Ask Bash
What is data anonymization, and how does it differ from pseudonymization?
Why is encryption not sufficient to meet GDPR compliance for training AI models?
How does data aggregation impact the effectiveness of a recommendation engine?
You are deploying a text generation AI model that produces job descriptions.
What responsible AI consideration should you address to ensure the generated content treats all candidates equitably?
Evaluate and adjust the training data to remove discriminatory patterns
Optimize the model's performance to generate descriptions faster
Increase the model's vocabulary to include industry-specific terms
Reduce the computational resources required for deployment
Answer Description
Evaluate and adjust the training data to remove discriminatory patterns - This is the correct answer. Ensuring that the training data is free from biases and discriminatory patterns is essential for responsible AI. This helps ensure that the job descriptions generated by the model do not inadvertently favor one group over another, promoting fairness and equity.
Increase the model's vocabulary to include industry-specific terms - While increasing vocabulary can improve the model's relevance to specific industries, it does not directly address equity concerns in the generated content.
Optimize the model's performance to generate descriptions faster - Performance optimization for speed may improve efficiency but does not directly relate to ensuring equitable treatment of candidates in the generated job descriptions.
Reduce the computational resources required for deployment - Reducing computational resources can be important for cost or environmental reasons but does not directly address fairness or equity in the AI-generated job descriptions.
Ask Bash
What does 'evaluate and adjust the training data' mean in AI development?
How can training data lead to discriminatory patterns in AI models?
What are the steps to remove bias from training data for responsible AI?
Which model in Azure OpenAI Service is used to generate images from text prompts?
GPT-3
Codex
ChatGPT
DALL·E
Answer Description
DALL·E is an image generation model available in Azure OpenAI Service that creates images from text descriptions, allowing users to generate visual content based on textual inputs.
GPT-3 and ChatGPT are language models designed for generating and understanding human-like text, not images.
Codex is tailored for code generation and completion tasks and does not generate images.
Ask Bash
What is the primary purpose of the DALL·E model in Azure OpenAI Service?
How are GPT-3 and ChatGPT different from DALL·E in Azure OpenAI Service?
What are some typical use cases for Codex in Azure OpenAI Service?
A financial institution is developing an AI model to assess loan applications. During testing, they observe that the model declines applications from a specific demographic group more frequently than others.
What step should they take to address this disparity?
Switch to a different machine learning algorithm for better performance
Implement algorithms that promote fairness in the model's decision-making process
Increase the amount of data from the affected demographic group in the training set
Remove all demographic features from the dataset used to train the model
Answer Description
Implementing algorithms that promote fairness in the model's decision-making process helps to mitigate discriminatory patterns and ensure equitable outcomes across different demographic groups. Merely increasing data from the affected group or removing demographic features might not fully resolve the underlying biases. Switching to a different algorithm without addressing fairness considerations may not eliminate the disparity.
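A first step toward fairness-aware decision-making is measuring the disparity itself. The sketch below computes a simple demographic-parity gap on fabricated decisions; real tooling (for example, the Fairlearn library) provides many more metrics and mitigation algorithms:

```python
# Simple demographic-parity check: compare approval rates across groups.
# The decision records are made up purely for illustration.

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(rows, group):
    rows = [r for r in rows if r["group"] == group]
    return sum(r["approved"] for r in rows) / len(rows)

gap = abs(approval_rate(decisions, "A") - approval_rate(decisions, "B"))
print(round(gap, 2))  # 0.33 - a gap this large would warrant mitigation
```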
Ask Bash
What are fairness-promoting algorithms, and how do they work?
Why is increasing data from the underrepresented group not enough to address bias?
Why is removing demographic data not a sufficient solution for fairness?
Which scenario is most appropriate for applying regression techniques in machine learning?
Detecting fraudulent transactions in financial data
Grouping customers based on purchasing behavior
Classifying emails as spam or not spam
Forecasting the future price of a stock
Answer Description
Regression analysis is used when the goal is to predict a continuous numerical value based on input variables. Forecasting the future price of a stock is a typical regression problem because it involves predicting a numeric value.
Classifying emails as spam or not spam is a classification task because it involves assigning discrete categories to data points.
Grouping customers based on purchasing behavior is a clustering task, as it involves identifying natural groupings within the data without predefined labels.
Detecting fraudulent transactions in financial data is also a classification problem, aiming to categorize transactions as fraudulent or legitimate.
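The distinction can be made concrete with a tiny numeric sketch: a regression model fits a trend and outputs a continuous value, not a category. The price series below is fabricated (and perfectly linear) purely to show the task type:

```python
import numpy as np

# Regression predicts a continuous numeric value from inputs.
# Toy data: day number vs. price, made perfectly linear for clarity.
days = np.array([1, 2, 3, 4])
prices = np.array([10.0, 12.0, 14.0, 16.0])

# Fit a straight line (degree-1 polynomial) and forecast day 5.
slope, intercept = np.polyfit(days, prices, 1)
forecast = slope * 5 + intercept
print(round(forecast, 1))  # 18.0 - a number, not a class label
```

A classifier answering "spam or not spam" would instead output one of two discrete labels, which is why that task is classification rather than regression.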
Ask Bash
Why is predicting the stock price best suited for regression?
How does regression differ from classification and clustering?
What types of algorithms are commonly used for regression tasks?
An organization needs to digitally extract text from a large number of scanned documents and images containing both printed and handwritten text. They require a solution that can process unstructured data effectively.
Which feature of Azure AI services is most suitable for their needs?
Computer Vision's text reading capability
Text Analytics to analyze and interpret text sentiment
Face API to detect and analyze faces in images
Form Recognizer to analyze and extract structured data
Answer Description
Computer Vision's text reading capability - This is the correct answer. Computer Vision's text reading capability (including OCR features) in Azure AI services is the most suitable solution for extracting both printed and handwritten text from scanned documents and images. It can process unstructured data by detecting and reading text in various image formats, making it ideal for the organization's needs.
Form Recognizer to analyze and extract structured data - Form Recognizer is useful for extracting structured data (e.g., from forms or invoices), but it is not primarily designed to handle the extraction of general unstructured text from images, especially when dealing with a mix of printed and handwritten text.
Text Analytics to analyze and interpret text sentiment - Text Analytics is primarily used for analyzing and interpreting the sentiment, entities, or key phrases in text data, but it does not extract text from images or scanned documents.
Face API to detect and analyze faces in images - The Face API is focused on detecting and analyzing human faces in images, not for extracting text from documents or images. This is unrelated to the task of text extraction.
Ask Bash
How does Computer Vision's text reading capability work?
What is the difference between unstructured data and structured data?
When should I use Form Recognizer instead of Computer Vision?
Which of the following features is commonly associated with solutions that analyze human faces in images?
Converting handwritten text into digital format
Categorizing images into predefined classes
Estimating emotional states of people in images
Detecting objects like vehicles and furniture
Answer Description
Estimating emotional states of people in images - This is the correct answer. Solutions that analyze human faces in images often include the ability to estimate emotional states, such as detecting whether a person is happy, sad, angry, or surprised based on their facial expressions.
Converting handwritten text into digital format - This feature is associated with Optical Character Recognition (OCR), which focuses on recognizing and converting text from images, not facial analysis.
Detecting objects like vehicles and furniture - This is the functionality of object detection solutions, which focus on identifying and locating specific objects in an image, not analyzing human faces.
Categorizing images into predefined classes - This refers to image classification, where the goal is to classify an image into one of several predefined categories, but it is not specifically focused on analyzing human faces.
Ask Bash
What is facial expression analysis in AI?
How does facial expression analysis differ from OCR?
What are common use cases of object detection in AI?
An AI system that processes images to recognize and categorize objects is an example of which type of AI workload?
Natural Language Processing (NLP)
Computer Vision
Knowledge Mining
Answer Description
Computer Vision - This is the correct answer. An AI system that processes images to recognize and categorize objects is an example of computer vision. This type of workload focuses on enabling machines to interpret and understand visual information, such as identifying objects within images.
Natural Language Processing (NLP) is focused on understanding and generating human language, such as text or speech. It is not used for processing images.
Knowledge Mining involves extracting insights and patterns from unstructured data, such as documents and media, but it is not specifically focused on recognizing or categorizing objects within images. It may include visual data analysis but is broader in scope.
Ask Bash
How does computer vision recognize and categorize objects in images?
What are some real-world applications of computer vision?
How is computer vision different from natural language processing (NLP)?
An e-commerce company wants to analyze images to determine the number and positions of various products shown for inventory management.
Which computer vision capability would best meet this requirement?
Facial Recognition to detect and identify human faces
Image Classification to assign labels to entire images
Optical Character Recognition (OCR) to extract text from images
Object Detection to identify and locate products in images
Answer Description
Object Detection is the appropriate capability because it can identify and locate multiple instances of objects within an image, providing both the class of each object and its position through bounding boxes. This enables the company to count and track products in images effectively.
Image Classification assigns a single label to an entire image without providing location data.
Optical Character Recognition (OCR) extracts text from images but doesn't detect objects.
Facial Recognition is specialized for detecting and identifying human faces, not general products.
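Because object detection returns a label plus a bounding box per instance, counting falls out naturally. The detections below are mocked (a real Azure AI Vision response has a similar shape but different field names):

```python
from collections import Counter

# Mocked object-detection output: one entry per detected instance, with a
# class label and a bounding box (x1, y1, x2, y2). Field names are invented.
detections = [
    {"label": "cereal_box", "box": (10, 20, 60, 120)},
    {"label": "cereal_box", "box": (70, 22, 120, 118)},
    {"label": "juice_carton", "box": (130, 15, 170, 110)},
]

inventory = Counter(d["label"] for d in detections)
print(inventory["cereal_box"])  # 2 - classification alone could not give counts
```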
Ask Bash
What is the difference between Object Detection and Image Classification?
How does Object Detection work in computer vision?
Can Optical Character Recognition (OCR) be combined with Object Detection?
A developer needs to build an application that can create new images from natural language descriptions. Which Azure OpenAI Service model is designed for this specific purpose?
GPT-4
Azure Computer Vision
Codex
DALL-E
Answer Description
The correct answer is DALL-E. DALL-E models are specifically designed to generate original images from textual prompts within the Azure OpenAI Service. GPT-4 is a large language model focused on understanding and generating text. Azure Computer Vision is an Azure AI service used for analyzing existing images, such as recognizing objects and extracting text, not for generating new ones. Codex is a model family that is optimized for generating code.
Ask Bash
How does DALL-E create images from text descriptions?
What is the difference between DALL-E and Azure Computer Vision?
What are some use cases for DALL-E in real-world applications?
As a data scientist at a software development company, you are considering models for generating synthetic data to enhance your testing datasets.
Which feature of generative AI models makes them suitable for this task?
They can classify data into specific categories with high precision.
They can identify anomalies by learning normal data patterns.
They can generate new data instances similar to the training data.
They can reduce data dimensionality while retaining key features.
Answer Description
They can generate new data instances similar to the training data - This is the correct answer. Generative AI models are designed to create new data instances that resemble the training data. This makes them ideal for generating synthetic data to augment testing datasets, ensuring variety and realism in the data used for testing.
They can classify data into specific categories with high precision - This feature is characteristic of classification models, which categorize data into predefined labels but do not generate new data instances.
They can reduce data dimensionality while retaining key features - This describes dimensionality reduction techniques (such as PCA), which focus on simplifying the data without losing important features. It is not related to generating new data instances.
They can identify anomalies by learning normal data patterns - This is a feature of anomaly detection models, which are used to identify outliers or unusual patterns in data, not for generating synthetic data.
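The core generative idea, learn patterns from training data and sample new instances that resemble it, can be sketched at toy scale. Real generative models (GANs, LLMs, diffusion models) learn far richer patterns than the simple statistics used here; the age data is fabricated:

```python
import random

# Tiny illustration of the generative idea: estimate simple statistics from
# "training" data, then sample new records that resemble it without copying.
random.seed(0)  # fixed seed so the sketch is reproducible
training_ages = [34, 29, 41, 38, 30, 45]
mean = sum(training_ages) / len(training_ages)
spread = max(training_ages) - min(training_ages)

# Sample five synthetic ages from a normal distribution fitted crudely
# to the training data.
synthetic = [round(random.gauss(mean, spread / 4)) for _ in range(5)]
print(synthetic)  # new values similar to, but not copies of, the originals
```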
Ask Bash
What types of generative AI models are commonly used to create synthetic data?
How does generating synthetic data benefit testing datasets?
How is generative AI different from other AI models like classification or anomaly detection?
A company wants to ensure that users can understand how their AI system processes data and arrives at decisions.
Which responsible AI principle should they focus on enhancing?
Inclusiveness
Fairness
Privacy
Transparency
Answer Description
Transparency involves making AI systems understandable and explainable to users and stakeholders. By enhancing transparency, the company allows users to comprehend how data is processed and how decisions are made, building trust in the system.
Privacy focuses on protecting personal and sensitive data, which is important but does not specifically address understanding AI decision-making processes.
Fairness aims to prevent biases and ensure equitable outcomes.
Inclusiveness ensures AI systems are accessible and beneficial to diverse users.
Ask Bash
What does transparency mean in the context of responsible AI?
How does transparency differ from fairness in responsible AI principles?
Why is enhancing transparency important for trust in AI systems?
A company wants to develop a virtual assistant that can understand spoken commands and respond accordingly.
Which Azure service should be integrated to enable the application to convert the spoken commands into text for processing?
Azure Cognitive Services Translation
Azure Speech service
Azure Bot service
Azure Text Analytics
Answer Description
Azure Speech service is designed for converting spoken language into text (speech-to-text) and vice versa (text-to-speech). It enables applications to transcribe and understand spoken words, which is essential for a virtual assistant that processes voice commands.
Azure Bot Service helps in building conversational agents but does not handle speech recognition.
Azure Text Analytics processes textual data for tasks like sentiment analysis and key phrase extraction but does not convert speech to text.
Azure Cognitive Services Translation translates text from one language to another but does not perform speech recognition.
Ask Bash
What is Azure Speech service, and how does it work?
What is the difference between Azure Bot Service and Azure Speech service?
Can Azure Speech service handle multiple languages or accents?
Woo!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.