Microsoft Azure AI Fundamentals Practice Test (AI-900)
Use the form below to configure your Microsoft Azure AI Fundamentals Practice Test (AI-900). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft Azure AI Fundamentals AI-900 Information
The Microsoft Certified: Azure AI Fundamentals (AI-900) exam is an entry-level certification designed for individuals seeking foundational knowledge of artificial intelligence (AI) and machine learning (ML) concepts and their applications within the Microsoft Azure platform. The AI-900 exam covers essential AI workloads such as anomaly detection, computer vision, and natural language processing, and it emphasizes responsible AI principles, including fairness, transparency, and accountability. While no deep technical background is required, a basic familiarity with technology and Azure’s services can be helpful, making this certification accessible to a wide audience, from business decision-makers to early-career technologists.
The exam covers several major domains, starting with AI workloads and considerations, which introduces candidates to various types of AI solutions and ethical principles. Next, it delves into machine learning fundamentals, explaining core concepts like data features, model training, and types of machine learning such as classification and clustering. The exam also emphasizes specific Azure tools for implementing AI solutions, such as Azure Machine Learning Studio for visual model-building, the Computer Vision service for image analysis, and Azure Bot Service for conversational AI. Additionally, candidates learn how natural language processing (NLP) tasks, including sentiment analysis, translation, and speech recognition, are managed within Azure’s language and speech services.
Achieving the AI-900 certification demonstrates a solid understanding of AI and ML basics and prepares candidates for more advanced Azure certifications in data science or AI engineering. It’s an excellent credential for those exploring how AI solutions can be effectively used within the Azure ecosystem, whether to aid business decision-making or to set a foundation for future roles in AI and data analytics.

Free Microsoft Azure AI Fundamentals AI-900 Practice Test
- 20 Questions
- Unlimited
- Describe Artificial Intelligence Workloads and Considerations
- Describe Fundamental Principles of Machine Learning on Azure
- Describe Features of Computer Vision Workloads on Azure
- Describe Features of Natural Language Processing (NLP) Workloads on Azure
- Describe Features of Generative AI Workloads on Azure
An organization wants to group customers into segments based on similarities in their behavior, but they don't have labeled data.
Which machine learning technique should they utilize?
Time Series Analysis
Regression
Clustering
Classification
Answer Description
Clustering - This is the correct answer. Clustering is an unsupervised machine learning technique used to group data into segments based on similarities without requiring labeled data. It is ideal for grouping customers into segments based on their behavior.
Regression is used for predicting continuous numerical values based on input features, not for grouping data or segmenting customers.
Classification is a supervised learning technique where the goal is to categorize data into predefined classes. It requires labeled data, which is not available in this case.
Time Series Analysis is used to analyze data that is collected over time (e.g., stock prices, sales data) to identify trends or patterns. It is not focused on segmenting data into groups based on behavior.
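To make the distinction concrete, below is a minimal sketch of unsupervised segmentation with scikit-learn's KMeans; the behavior features and the choice of three segments are illustrative assumptions, not part of the question.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customer-behavior features: [visits_per_month, avg_spend]
X = np.array([[2, 20], [3, 25], [25, 200], [30, 220], [10, 80], [12, 90]])

# No labels are passed to fit() -- that is the defining trait of
# unsupervised learning
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # cluster assignment for each customer
```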
A retailer wants to analyze customer reviews to determine overall customer satisfaction.
Which AI workload is best suited for this task?
Natural Language Processing (NLP) workload
Document Intelligence workload
Content Moderation workload
Computer Vision workload
Answer Description
Natural Language Processing (NLP) workload - This is the correct answer. Analyzing customer reviews to determine overall satisfaction involves processing and understanding text data, which is exactly what Natural Language Processing (NLP) is designed for. NLP techniques such as sentiment analysis can be applied to assess customer sentiment from reviews and feedback.
Computer Vision workload - Computer vision is used for analyzing visual data, such as images or videos.
Content Moderation workload - Content moderation focuses on filtering inappropriate or harmful content.
Document Intelligence workload - Document intelligence focuses on understanding and extracting structured information from documents, often for tasks like form processing or invoice extraction.
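As a rough illustration of how this workload might look in practice, the sketch below uses the `azure-ai-textanalytics` client library for Python; the endpoint and key are placeholders for an Azure AI Language resource.

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

reviews = [
    "The product arrived quickly and works great!",
    "Terrible packaging, the item was damaged.",
]

# Sentiment analysis returns a label plus per-class confidence scores
for doc in client.analyze_sentiment(documents=reviews):
    print(doc.sentiment, doc.confidence_scores)
```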
Azure OpenAI Service's image generation models can create images based on audio prompts provided by users.
False
True
Answer Description
This statement is False.
Azure OpenAI Service's image generation capabilities, such as those offered by the DALL-E model, generate images based on textual descriptions provided by the user, not audio prompts. Users input text prompts that describe the desired image, and the service generates images accordingly. Audio input is not a supported method for image generation in Azure OpenAI Service.
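For reference, here is a hedged sketch of text-to-image generation against Azure OpenAI using the `openai` Python package; the endpoint, key, API version, and deployment name are all placeholder assumptions.

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-key>",
    api_version="2024-02-01",
)

# The prompt is text -- audio input is not a supported way to request images
result = client.images.generate(
    model="dall-e-3",  # name of your image-generation deployment
    prompt="A watercolor painting of a lighthouse at sunrise",
    n=1,
)
print(result.data[0].url)
```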
Which statement best reflects responsible AI considerations for bias when using generative AI models in Azure OpenAI Service?
Generative AI models are inherently unbiased because they rely solely on mathematical algorithms, so human oversight is unnecessary.
Generative AI models can inherit and amplify social biases present in their training data; developers should apply ongoing human-led monitoring, testing, and mitigation strategies.
Bias is only a concern during the pre-training phase; once a model is fine-tuned it can be deployed safely without further monitoring.
Using Azure OpenAI content filters alone removes all bias from generated outputs, eliminating the need for additional red-team testing.
Answer Description
Generative AI models learn from massive text and image corpora that contain historical and social biases. Those biases can surface or even amplify in model outputs. Responsible AI guidance from Microsoft requires developers to continuously identify, measure, and mitigate potential harms by combining automated tools (such as content filters) with human-in-the-loop reviews, red-team testing, and ongoing monitoring. Ignoring these steps risks unfair, stereotyped, or otherwise harmful generations. Therefore, the only correct statement is that bias can occur and must be addressed through sustained human oversight.
An Azure data scientist wants to build and deploy a machine learning model without extensive coding or manual model selection.
Which Azure Machine Learning capability should they use?
Azure Synapse Analytics
Azure Machine Learning Designer
Automated Machine Learning
Azure Databricks
Answer Description
Automated Machine Learning enables data scientists to automatically train and tune models by selecting the best algorithms and hyperparameters for a given dataset.
While Azure Machine Learning Designer provides a visual interface for building models, it still requires manual selection of algorithms and pipeline setup.
Azure Databricks and Azure Synapse Analytics are comprehensive data processing platforms but do not specialize in automated model training and selection.
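A minimal sketch of submitting an Automated ML job with the Azure ML Python SDK v2 (`azure-ai-ml`) might look like the following; the workspace details, compute name, and data path are placeholder assumptions.

```python
from azure.ai.ml import MLClient, Input, automl
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# AutoML tries candidate algorithms and hyperparameters automatically
job = automl.classification(
    compute="<compute-cluster>",
    experiment_name="automl-demo",
    training_data=Input(type="mltable", path="<path-to-training-mltable>"),
    target_column_name="label",
    primary_metric="accuracy",
)
ml_client.jobs.create_or_update(job)
```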
Your software development team wants to implement an AI assistant that can generate code snippets based on natural language descriptions.
Which Azure OpenAI model should they use for this purpose?
Codex
DALL·E
GPT-3's text-davinci-003
Azure's Computer Vision API
Answer Description
Codex is the Azure OpenAI model specifically designed for code generation tasks, allowing developers to transform natural language prompts into code in various programming languages.
GPT-3's text-davinci-003 is powerful for natural language understanding and generation but is not optimized for code generation.
DALL·E is used for image generation.
Azure's Computer Vision API is intended for analyzing visual content, not generating code.
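As an illustration, a natural-language prompt could be sent through the completions API of the `openai` package; the deployment name is a placeholder (Codex models are a legacy family, and newer GPT models expose the same capability through the chat API).

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-key>",
    api_version="2024-02-01",
)

# Natural-language description in, code out
response = client.completions.create(
    model="<your-code-model-deployment>",  # e.g. a Codex-family deployment
    prompt="# Python function that returns the n-th Fibonacci number\n",
    max_tokens=150,
)
print(response.choices[0].text)
```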
You are a data scientist at a publishing company looking to generate creative story ideas based on existing literature.
Which characteristic of generative AI models allows them to create new and original content inspired by the training data?
They store exact copies of data for direct replication
They detect anomalies and irregularities in data
They categorize input data into predefined classes
They create new content by modeling the structure of the training data
Answer Description
They create new content by modeling the structure of the training data - This is the correct answer. Generative AI models create new and original content by learning the underlying structure, patterns, and relationships within the training data. This allows them to generate creative outputs, like story ideas, that are inspired by but not directly copied from the original material.
They store exact copies of data for direct replication - This is characteristic of retrieval-based systems, which retrieve exact content from a database or dataset. This does not apply to generative AI, which creates novel content rather than copying it.
They categorize input data into predefined classes - This is the function of classification models, which categorize data into specific classes based on predefined labels. It does not involve generating new content.
They detect anomalies and irregularities in data - This describes anomaly detection models, which focus on identifying outliers or unusual patterns in data, not generating new and original content.
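As a toy illustration of modeling structure rather than storing copies, the snippet below builds a tiny word-level Markov chain: it learns which words tend to follow which, then samples a new sequence that is inspired by, but not copied from, the training text. Real generative models learn far richer structure, but the principle is similar.

```python
import random
from collections import defaultdict

corpus = "the dragon guarded the tower and the knight climbed the tower".split()

# Learn the data's structure: which words follow which
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# Sample a new sequence from the learned structure
random.seed(1)
word, output = "the", ["the"]
for _ in range(7):
    word = random.choice(transitions.get(word, corpus))
    output.append(word)
print(" ".join(output))
```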
A company's customer support department has accumulated a large number of email inquiries. They want to quickly identify the main issues customers are experiencing by automatically extracting important words and phrases from these emails.
Which natural language processing (NLP) technique should they use to achieve this?
Language Detection
Key Phrase Extraction
Sentiment Analysis
Entity Recognition
Answer Description
Key Phrase Extraction is the process of automatically identifying the most significant and relevant phrases within a text. This technique helps in summarizing the main topics or issues discussed, enabling the company to quickly understand customer concerns.
Sentiment analysis identifies the emotional tone behind words, language detection identifies the language of the text, and entity recognition identifies and classifies named entities, such as names of people or organizations, which may not provide the overall themes or issues.
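Using the same `azure-ai-textanalytics` client library shown earlier (endpoint and key again placeholders), key phrase extraction is a single call per batch of documents.

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

emails = [
    "My router keeps dropping the connection every evening.",
    "I was billed twice for last month's subscription.",
]

# Each result lists the most significant phrases in that email
for doc in client.extract_key_phrases(documents=emails):
    print(doc.key_phrases)
```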
Which of the following scenarios best exemplifies a document intelligence workload?
A system that recommends products based on user preferences
An application that translates spoken language into text
A service that extracts key information from invoices and receipts
An algorithm that identifies objects in images
Answer Description
Document intelligence workloads involve extracting structured information from unstructured or semi-structured documents like invoices, receipts, and forms. A service that extracts key information from invoices and receipts is an example of this, as it automates data extraction from documents.
The other options represent different AI workloads: translating spoken language into text is speech recognition (natural language processing), recommending products based on user preferences is personalization, and identifying objects in images is computer vision.
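A hedged sketch with the `azure-ai-formrecognizer` client library shows the idea; the endpoint, key, and file name are placeholders, and `prebuilt-invoice` is one of the service's prebuilt document models.

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Analyze a local invoice with the prebuilt invoice model
with open("invoice.pdf", "rb") as f:
    poller = client.begin_analyze_document("prebuilt-invoice", f)
result = poller.result()

for invoice in result.documents:
    total = invoice.fields.get("InvoiceTotal")
    if total:
        print("Invoice total:", total.content)
```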
A company wants to develop a model that can determine if a transaction is fraudulent or legitimate. What type of machine learning task is appropriate for this scenario?
Classification
Dimensionality Reduction
Clustering
Regression
Answer Description
Classification - This is the correct answer. Fraud detection is a classification task because the model needs to classify each transaction as either fraudulent or legitimate, which involves assigning data points to predefined categories or labels.
Regression is used for predicting continuous numerical values, for example sales forecasts or prices, not for classifying transactions into categories such as "fraudulent" or "legitimate."
Clustering is an unsupervised learning technique used to group data based on similarities, but it is not suitable for determining whether a transaction is fraudulent or legitimate, which requires labeled data and a classification approach.
Dimensionality Reduction is used to reduce the number of features in the data, typically for improving performance or visualization, but it is not a task in itself for determining fraud or legitimacy.
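A minimal scikit-learn sketch of the classification task, with invented feature values and labels purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [transaction_amount, hour_of_day]
X = np.array([[20, 14], [5000, 3], [35, 12], [7500, 2], [60, 18], [9000, 4]])
y = np.array([0, 1, 0, 1, 0, 1])  # labels: 0 = legitimate, 1 = fraudulent

# Supervised learning: the model is trained on labeled transactions
model = LogisticRegression().fit(X, y)
print(model.predict([[8000, 3]]))  # classify a new transaction
```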
An organization wants to develop an AI system that converts natural language descriptions of features into corresponding programming code to expedite their software development process.
Which Azure OpenAI model should they choose to achieve this goal?
Embeddings
DALL-E
Codex
GPT-3
Answer Description
The Codex model in Azure OpenAI Service is specifically designed for translating natural language input into programming code. It supports multiple programming languages and can interpret user intent to generate accurate code snippets, making it ideal for this scenario.
While GPT-3 is capable of understanding and generating human-like text, it is not optimized for code generation tasks.
DALL-E focuses on creating images from textual descriptions.
Embeddings are used for semantic understanding and similarity tasks, not code generation.
An analyst at a telecommunications company wants to forecast the number of customer service calls expected next month based on data from previous months.
Which machine learning technique is most suitable for this task?
Clustering
Classification
Regression
Answer Description
Regression is the appropriate technique for predicting continuous numerical values, such as the number of customer service calls. It models the relationship between dependent and independent variables to forecast future values.
Classification is used for predicting categorical outcomes.
Clustering groups data points based on similarity without prior labels.
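A minimal scikit-learn sketch of the forecasting idea, with invented monthly call counts:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Month index as the input, call volume as the continuous target
months = np.array([[1], [2], [3], [4], [5], [6]])
calls = np.array([1200, 1350, 1280, 1500, 1620, 1580])

model = LinearRegression().fit(months, calls)
print(model.predict([[7]]))  # forecast the call volume for next month
```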
A company wants to develop a system that can create original artworks and designs based on existing art styles.
Which type of workload is most suitable for this purpose?
Predictive Analytics
Generative AI
Computer Vision
Knowledge Mining
Answer Description
Generative AI - This is the correct answer. A system that creates original artworks and designs based on existing art styles would fall under generative AI. Generative models, like GANs (Generative Adversarial Networks), can learn from existing artworks and create new, unique pieces in similar styles.
Computer Vision - Computer vision is used for analyzing and processing visual data, such as recognizing objects or identifying patterns in images. While it could be used as part of the system (e.g., for analyzing art), it is not the primary workload for creating original art.
Knowledge Mining - Knowledge mining involves extracting insights and patterns from unstructured data like documents and media files. It is not focused on generating new creative content such as artworks.
Predictive Analytics - Predictive analytics uses historical data to make predictions about future events or trends. It is not suitable for creating original artworks or designs based on existing styles.
An AI engineer is working on a project that involves analyzing vast amounts of unstructured data, such as images and speech. She needs to build a model that can automatically learn hierarchical representations from raw data without extensive feature engineering.
Which machine learning technique is most appropriate for this scenario?
Decision Trees
Clustering Algorithms
Deep Learning
Regression Algorithms
Answer Description
Deep Learning - This is the correct answer. Deep learning is the most appropriate technique for analyzing unstructured data like images and speech, as it can automatically learn hierarchical representations from raw data. Deep learning models, especially neural networks, excel at processing and extracting meaningful features from complex data without requiring extensive manual feature engineering.
Clustering Algorithms are used for grouping data into clusters based on similarity, but they are not designed to automatically learn hierarchical representations from raw unstructured data like images and speech.
Regression Algorithms are used for predicting continuous numerical outcomes based on input features, but they are not suited for learning hierarchical representations from unstructured data.
Decision Trees are used for classification or regression tasks by splitting data into decision rules based on features. They require predefined features and are less effective than deep learning for automatically learning hierarchical representations from raw, unstructured data like images and speech.
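For intuition, here is a minimal PyTorch sketch of a convolutional network whose stacked layers learn increasingly abstract features from raw pixels; the layer sizes and 28x28 input shape are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Each convolution/pooling stage learns a higher level of representation,
# so no manual feature engineering of the raw pixels is required
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),  # 10 output classes for 28x28 inputs
)

dummy_images = torch.randn(4, 1, 28, 28)  # a batch of raw grayscale images
print(model(dummy_images).shape)          # torch.Size([4, 10])
```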
Which of the following is a feature of an image classification solution?
Extracting text from images for text analysis
Identifying and locating objects within an image
Predicting the content category of an image
Detecting facial features to analyze emotions
Answer Description
Predicting the content category of an image - This is the correct answer. Image classification involves assigning a label or category to an image based on its content. For example, an image classification solution might categorize an image as "cat," "dog," or "car" based on the objects it contains.
Identifying and locating objects within an image - This describes object detection, not image classification. Object detection involves both identifying objects and locating them within the image (often with bounding boxes), while classification only assigns a category label.
Extracting text from images for text analysis - This is Optical Character Recognition (OCR), which is focused on extracting text from images, not classifying the entire image into categories.
Detecting facial features to analyze emotions - This is facial recognition or emotion detection, which focuses on detecting and analyzing facial features, often for applications like identifying emotions or individuals, but it is not image classification.
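A compact illustration with scikit-learn's bundled digits dataset: each image receives exactly one category label (the digit 0-9), with no localization or text extraction involved.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

digits = load_digits()  # 8x8 grayscale images of handwritten digits

# Image classification: one category label per whole image
clf = LogisticRegression(max_iter=1000).fit(digits.data, digits.target)
print(clf.predict(digits.data[:5]))  # predicted category for each image
print(digits.target[:5])             # actual categories
```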
Why do machine learning practitioners divide a dataset into separate training and validation subsets when building a model?
To evaluate the model's performance on unseen data, helping to prevent overfitting
To reduce the training time by using smaller datasets
To increase the total amount of data available for training
To ensure the model memorizes the training data perfectly
Answer Description
To evaluate the model's performance on unseen data, helping to prevent overfitting - This is the correct answer. Dividing a dataset into training and validation subsets allows the model to learn from the training data while being evaluated on a separate validation set. This helps assess the model's ability to generalize to new, unseen data, preventing overfitting where the model memorizes the training data but performs poorly on new data.
To increase the total amount of data available for training - This is incorrect because splitting the data reduces the amount of data available for training. The goal is to use a portion for training and another for validation, not to increase the total amount of data.
To reduce the training time by using smaller datasets - Splitting the data does not specifically aim to reduce training time. It’s more about evaluating model performance and preventing overfitting.
To ensure the model memorizes the training data perfectly - The goal is not for the model to memorize the training data. In fact, memorization (overfitting) is something practitioners want to avoid. The focus is on ensuring the model generalizes well to new, unseen data.
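A short scikit-learn sketch of the split in practice, using the bundled iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Hold out 25% of the data so the model is scored on examples it never saw
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=42)

model = DecisionTreeClassifier().fit(X_train, y_train)
print("train accuracy:     ", model.score(X_train, y_train))
print("validation accuracy:", model.score(X_val, y_val))  # generalization check
```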
You are a data analyst at a marketing firm tasked with evaluating how customers feel about a recent product launch by analyzing thousands of social media posts.
Which natural language processing technique should you use to understand the emotions expressed in the text?
Key Phrase Extraction
Sentiment Analysis
Topic Modeling
Entity Recognition
Answer Description
Sentiment Analysis is used to determine the emotional tone behind words, identifying and categorizing opinions expressed in text to understand customers' feelings.
Entity Recognition focuses on identifying and classifying named entities within text, such as people or organizations, which doesn't provide insights into emotions.
Key Phrase Extraction identifies important keywords or phrases but doesn't assess emotional context.
Topic Modeling discovers abstract topics within documents but doesn't directly evaluate emotional sentiment.
A company collected data to develop a machine learning model that predicts the final selling price of products based on factors like 'Production Cost', 'Marketing Budget', 'Competitor Prices' and 'Time on Market'.
In this context, which variable is the label for the model?
Production Cost
Competitor Prices
Marketing Budget
Final Selling Price
Answer Description
In machine learning, the label is the variable that represents the output or the value the model is trying to predict. In this scenario, the model aims to predict the 'Final Selling Price' of products, making it the label.
The other variables, 'Production Cost', 'Marketing Budget', 'Competitor Prices', and 'Time on Market', are features that the model uses to make the prediction.
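Expressed as a toy dataset (all values invented for illustration), the split between features and label looks like this:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

data = pd.DataFrame({
    "ProductionCost":    [12.0, 8.5, 20.0, 15.0],    # feature
    "MarketingBudget":   [3.0, 1.5, 5.0, 2.5],       # feature
    "CompetitorPrices":  [25.0, 18.0, 40.0, 30.0],   # feature
    "TimeOnMarket":      [30, 45, 12, 25],           # feature
    "FinalSellingPrice": [27.5, 19.0, 42.0, 31.0],   # label: the value to predict
})

X = data.drop(columns=["FinalSellingPrice"])  # features
y = data["FinalSellingPrice"]                 # label
model = LinearRegression().fit(X, y)          # learn features -> label
```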
A company is developing an AI-driven mobile application that collects user data to provide personalized recommendations.
To address concerns about privacy and security, which practice should the company adopt?
Storing user data on shared servers to reduce costs
Implementing robust encryption techniques for data at rest and in transit
Collecting as many data points as possible to improve recommendations
Giving developers access to user data for debugging purposes
Answer Description
Implementing robust encryption techniques for data at rest and in transit - This is the correct answer. To address concerns about privacy and security, the company should implement strong encryption techniques to protect user data both when it is stored at rest and when it is transmitted in transit. This ensures that sensitive data is secure and reduces the risk of unauthorized access.
Collecting as many data points as possible to improve recommendations - While collecting data may help improve recommendations, it raises privacy concerns if personal data is not properly protected or managed. This approach does not directly address the need for secure handling of user data.
Giving developers access to user data for debugging purposes - Giving developers access to user data can create significant privacy and security risks. It is important to ensure that user data is only accessible to those who have a legitimate need and that proper safeguards are in place.
Storing user data on shared servers to reduce costs - Storing user data on shared servers without adequate security measures can expose the data to higher risks of breaches and unauthorized access. It is essential to store user data securely, even if it means higher costs for private or dedicated infrastructure.
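As a toy illustration of protecting data at rest, here is a sketch using the `cryptography` package's Fernet recipe; data in transit is usually protected separately, typically with TLS.

```python
from cryptography.fernet import Fernet

# Symmetric key -- in production this would live in a key vault, not in code
key = Fernet.generate_key()
fernet = Fernet(key)

user_record = b'{"user_id": 42, "preference": "sci-fi"}'

token = fernet.encrypt(user_record)  # what gets stored at rest
print(fernet.decrypt(token))         # readable only with the key
```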
A company wants to extract structured information such as names of people, places, organizations, and dates from a large collection of unstructured text documents.
Which natural language processing technique should they use to achieve this?
Entity Recognition
Sentiment Analysis
Language Translation
Key Phrase Extraction
Answer Description
Entity Recognition - This is the correct answer. Entity Recognition (also known as Named Entity Recognition or NER) is a natural language processing technique used to extract structured information such as names of people, places, organizations, dates, and other specific entities from unstructured text documents.
Key Phrase Extraction - Key phrase extraction identifies important phrases in text but does not specifically target named entities like people, places, or dates.
Sentiment Analysis - Sentiment analysis is used to evaluate the emotional tone or sentiment (positive, negative, neutral) in text, not to extract specific entities.
Language Translation - Language translation is used to convert text from one language to another, but it does not focus on extracting structured information like names or dates.
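Using the same `azure-ai-textanalytics` client library as in the earlier examples (endpoint and key are placeholders), named entity recognition returns each detected entity with its category.

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

docs = ["Satya Nadella spoke at the Seattle conference on May 14, 2024."]

for doc in client.recognize_entities(documents=docs):
    for entity in doc.entities:
        print(entity.text, "->", entity.category)
```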
Gnarly!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.