Microsoft Azure AI Fundamentals Practice Test (AI-900)
Use the form below to configure your Microsoft Azure AI Fundamentals Practice Test (AI-900). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft Azure AI Fundamentals AI-900 Information
The Microsoft Certified: Azure AI Fundamentals (AI-900) exam is an entry-level certification designed for individuals seeking foundational knowledge of artificial intelligence (AI) and machine learning (ML) concepts and their applications within the Microsoft Azure platform. The AI-900 exam covers essential AI workloads such as anomaly detection, computer vision, and natural language processing, and it emphasizes responsible AI principles, including fairness, transparency, and accountability. While no deep technical background is required, a basic familiarity with technology and Azure’s services can be helpful, making this certification accessible to a wide audience, from business decision-makers to early-career technologists.
The exam covers several major domains, starting with AI workloads and considerations, which introduces candidates to various types of AI solutions and ethical principles. Next, it delves into machine learning fundamentals, explaining core concepts like data features, model training, and types of machine learning such as classification and clustering. The exam also emphasizes specific Azure tools for implementing AI solutions, such as Azure Machine Learning Studio for visual model-building, the Computer Vision service for image analysis, and Azure Bot Service for conversational AI. Additionally, candidates learn how natural language processing (NLP) tasks, including sentiment analysis, translation, and speech recognition, are managed within Azure’s language and speech services.
Achieving the AI-900 certification demonstrates a solid understanding of AI and ML basics and prepares candidates for more advanced Azure certifications in data science or AI engineering. It’s an excellent credential for those exploring how AI solutions can be effectively used within the Azure ecosystem, whether to aid business decision-making or to set a foundation for future roles in AI and data analytics.

Free Microsoft Azure AI Fundamentals AI-900 Practice Test
- 20 Questions
- Unlimited
- Describe Artificial Intelligence Workloads and Considerations
- Describe Fundamental Principles of Machine Learning on Azure
- Describe Features of Computer Vision Workloads on Azure
- Describe Features of Natural Language Processing (NLP) Workloads on Azure
- Describe Features of Generative AI Workloads on Azure
A retailer wants to analyze customer reviews to determine overall customer satisfaction.
Which AI workload is best suited for this task?
Computer Vision workload
Content Moderation workload
Document Intelligence workload
Natural Language Processing (NLP) workload
Answer Description
Natural Language Processing (NLP) workload - This is the correct answer. Analyzing customer reviews to determine overall satisfaction involves processing and understanding text data, which is exactly what Natural Language Processing (NLP) is designed for. NLP techniques such as sentiment analysis can be applied to assess customer sentiment from reviews and feedback.
Computer Vision workload - Computer vision is used for analyzing visual data, such as images or videos.
Content Moderation workload - Content moderation focuses on filtering inappropriate or harmful content.
Document Intelligence workload - Document intelligence focuses on understanding and extracting structured information from documents, often for tasks like form processing or invoice extraction.
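To make the idea of sentiment analysis concrete, here is a minimal, stdlib-only sketch of a keyword-based scorer. This is an illustration of the concept only — real NLP services such as Azure AI Language use trained language models, not word lists — and the word sets and sample reviews are invented for the example.

```python
# Illustrative sentiment scorer: counts positive vs. negative keywords.
# Real services use trained models; this only demonstrates the idea.
POSITIVE = {"great", "love", "excellent", "happy", "recommend"}
NEGATIVE = {"bad", "terrible", "slow", "broken", "disappointed"}

def score_review(text: str) -> str:
    """Classify a review as positive, negative, or neutral by keyword counts."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(score_review("I love this product, excellent quality!"))  # positive
print(score_review("Terrible service and a broken item."))      # negative
```

A production system would instead send the review text to a sentiment API and receive a confidence score per class, but the input (unstructured text) and output (a sentiment category) are the same shape.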
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is sentiment analysis in NLP?
How is NLP different from Computer Vision?
What tools in Azure can be used for NLP tasks like sentiment analysis?
A bank wants to segment its customers into different categories based on their spending habits and transaction history to tailor marketing strategies.
Which machine learning technique is most suitable for this objective?
Classification
Reinforcement Learning
Regression
Clustering
Answer Description
Clustering is the most suitable technique because it involves grouping similar data points together based on features, without using predefined labels. This allows the bank to identify distinct customer segments.
Regression is used for predicting continuous numerical values, which doesn't fit the goal of segmenting customers into categories.
Classification predicts categorical labels based on training data with known categories, but in this case, the categories (customer segments) are not predefined.
Reinforcement Learning involves an agent interacting with an environment to maximize cumulative reward, which is not applicable here.
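The key property of clustering — grouping points by similarity with no predefined labels — can be shown with a tiny k-means implementation. This is a stdlib-only sketch with invented customer data (spend, transaction count); a real project would use a library such as scikit-learn and many more features.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means: partition 2-D points into k clusters, no labels needed."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Move each center to the mean of its assigned points.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return clusters

# Hypothetical customers: (monthly spend, transactions per month).
customers = [(20, 2), (25, 3), (22, 2), (200, 30), (210, 28), (195, 32)]
segments = kmeans(customers, 2)
print([len(s) for s in segments])  # two segments of three customers each
```

Note that the algorithm was never told which customers are "low spenders" or "high spenders" — the segments emerge from the data, which is exactly why clustering fits this scenario and classification does not.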
Ask Bash
What is clustering in machine learning?
Why is classification not suitable for customer segmentation in this case?
How does clustering differ from regression?
A retailer wants to implement a system that can track and count individuals in surveillance video to monitor foot traffic in their store.
Which type of computer vision solution would best meet this need?
Optical Character Recognition (OCR)
Image Classification
Object Detection
Facial Detection and Analysis
Answer Description
Object Detection - This is the correct answer. Object detection is a computer vision technique used to identify and locate objects within an image or video. In this case, object detection would be ideal for tracking and counting individuals in the surveillance video, as it can identify and track people as they move through the store.
Image Classification - Image classification assigns a label to an image but does not provide location information about specific objects. It would not be suitable for tracking and counting individuals in real-time surveillance footage.
Optical Character Recognition (OCR) - OCR is used to extract and recognize text from images or videos. It is not applicable for tracking or counting individuals in surveillance video.
Facial Detection and Analysis - Facial detection focuses on identifying and analyzing human faces, but it would not be the best solution for tracking and counting all individuals in a store, as it is specific to detecting faces rather than general object tracking.
Ask Bash
What is the difference between object detection and image classification?
How does object detection work in tracking individuals?
Why is facial detection not ideal for counting individuals in a store?
Which of the following best explains why an AI solution requires regular updates and maintenance after deployment to ensure its reliability and safety?
To significantly reduce the initial cost of developing the AI model.
To change the underlying AI architecture from a neural network to a decision tree.
To fulfill a one-time deployment checklist required by software vendors.
To address potential data drift and maintain the model's performance.
Answer Description
To address potential data drift and maintain the model's performance - This is the correct answer. To ensure ongoing reliability and safety, an AI solution requires regular updates and maintenance. The data and environment in which the model operates can change over time, a phenomenon known as 'data drift' or 'model drift', and these changes can degrade the model's performance and accuracy. Continuous monitoring and updating help address these shifts, fix any emerging issues, and maintain the system's effectiveness and safety. Reducing development costs or changing the fundamental architecture are not the primary goals of post-deployment maintenance.
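A simple way to picture drift monitoring: compare the distribution of a feature seen in production against the distribution it had at training time, and alert when they diverge. The sketch below, with invented numbers, flags drift when the live mean moves more than a set number of training standard deviations — real monitoring tools use richer statistical tests, but the principle is the same.

```python
from statistics import mean, stdev

def drift_alert(train_values, live_values, threshold=2.0):
    """Flag drift when the live mean shifts more than `threshold`
    training standard deviations away from the training mean."""
    mu, sigma = mean(train_values), stdev(train_values)
    shift = abs(mean(live_values) - mu) / sigma
    return shift > threshold

# Hypothetical feature: average order value at training time vs. today.
train = [50, 52, 48, 51, 49, 50]
live_ok = [51, 49, 50, 52]
live_drifted = [80, 85, 78, 82]
print(drift_alert(train, live_ok))       # False: distribution looks stable
print(drift_alert(train, live_drifted))  # True: the model may need retraining
```

When the alert fires, the usual response is to retrain or fine-tune the model on recent data — the maintenance activity the question describes.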
Ask Bash
What is data drift in AI?
How is model performance monitored after deployment?
Why is AI safety important and how does maintenance contribute to it?
A company wants to automatically create unique marketing slogans based on their brand values and target audience.
Which technology approach is most suitable for generating these slogans?
Implementing predictive analytics to forecast market trends
Using clustering algorithms to segment customer data
Applying sentiment analysis to gauge customer opinions
Utilizing generative models for content creation
Answer Description
Generative models can produce new content such as marketing slogans by learning from existing data. They can generate creative and brand-aligned slogans that resonate with the target audience, making this approach appropriate.
Ask Bash
What exactly are generative models and how do they create content like marketing slogans?
How are generative models different from predictive analytics?
What kind of data or input does a generative model need to create effective slogans?
You are building an AI-powered customer service chatbot that will be used by a global audience, including customers who rely on assistive technologies such as screen readers and voice-control software. To make the chatbot more accessible, you add semantic labels for every UI element, ensure full keyboard navigation, and include descriptive alt-text for any images generated by the system. Which Microsoft Responsible AI principle are you primarily addressing with these design decisions?
Fairness
Transparency
Inclusiveness
Accountability
Answer Description
The design decisions focus on Inclusiveness. Microsoft's Inclusiveness principle states that AI systems should "empower everyone and engage all people, regardless of their backgrounds" and be inclusive "for people of all abilities." Adding screen-reader friendly labels, keyboard navigation, and descriptive alt-text removes barriers for users with disabilities, directly addressing this principle. The other options target different Responsible AI areas: Fairness concerns bias and equal treatment, Transparency is about making the system understandable, and Accountability relates to human oversight.
Ask Bash
What is Microsoft's Inclusiveness principle in Responsible AI?
Why is adding semantic labels and alt-text important for accessibility?
How does Inclusiveness differ from Fairness in Microsoft's Responsible AI principles?
Azure OpenAI Service's image generation models can create images based on audio prompts provided by users.
True
False
Answer Description
This statement is False.
Azure OpenAI Service's image generation capabilities, such as those offered by the DALL-E model, generate images based on textual descriptions provided by the user, not audio prompts. Users input text prompts that describe the desired image, and the service generates images accordingly. Audio input is not a supported method for image generation in Azure OpenAI Service.
Ask Bash
What is the DALL-E model in Azure OpenAI Service?
Can Azure OpenAI Service process audio inputs for any other purpose?
How does Azure OpenAI Service handle text-based input for image generation?
Your software development team wants to implement an AI assistant that can generate code snippets based on natural language descriptions.
Which Azure OpenAI model should they use for this purpose?
DALL·E
Codex
GPT-3's text-davinci-003
Azure's Computer Vision API
Answer Description
Codex is the Azure OpenAI model specifically designed for code generation tasks, allowing developers to transform natural language prompts into code in various programming languages.
GPT-3's text-davinci-003 is powerful for natural language understanding and generation but is not optimized for code generation.
DALL·E is used for image generation.
Azure's Computer Vision API is intended for analyzing visual content, not generating code.
Ask Bash
How is Codex different from GPT-3?
What programming languages does Codex support?
Can Codex debug or improve existing code?
A company wants to implement an AI chatbot that can produce human-like responses to customer inquiries.
Which Azure service capability would best support this solution?
Use Azure OpenAI Service for natural language generation
Deploy a chatbot using Azure Bot Service Gallery
Utilize Azure Cognitive Services' Speech Recognition
Use Azure Cognitive Search to retrieve relevant information
Answer Description
Azure OpenAI Service provides natural language generation capabilities, enabling applications to produce human-like text based on prompts. This makes it suitable for creating AI chatbots that can respond to customer inquiries in a conversational manner.
Azure Cognitive Search focuses on indexing and searching data.
Azure Bot Service can host chatbots but doesn't provide the language generation itself.
Azure Cognitive Services' Speech Recognition deals with transcribing spoken words into text.
Ask Bash
What is Azure OpenAI Service used for?
How does Azure Bot Service differ from Azure OpenAI Service?
Can Azure Cognitive Services' Speech Recognition be integrated with Azure OpenAI Service?
A company collected data to develop a machine learning model that predicts the final selling price of products based on factors like 'Production Cost', 'Marketing Budget', 'Competitor Prices' and 'Time on Market'.
In this context, which variable is the label for the model?
Competitor Prices
Final Selling Price
Marketing Budget
Production Cost
Answer Description
In machine learning, the label is the variable that represents the output or the value the model is trying to predict. In this scenario, the model aims to predict the 'Final Selling Price' of products, making it the label.
The other variables such as 'Production Cost', 'Marketing Budget' and 'Competitor Prices' are features that the model uses to make the prediction.
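The feature/label split can be shown in a few lines of Python. The records below are invented sample data; the point is only the mechanics: everything used as input is a feature, and the single column the model must predict is the label.

```python
# Each record mixes features and the label; training requires separating them.
records = [
    {"production_cost": 12.0, "marketing_budget": 3.0,
     "competitor_price": 19.5, "time_on_market": 30, "final_price": 24.99},
    {"production_cost": 8.5, "marketing_budget": 1.5,
     "competitor_price": 14.0, "time_on_market": 45, "final_price": 16.50},
]

LABEL = "final_price"  # the value the model learns to predict

X = [{k: v for k, v in r.items() if k != LABEL} for r in records]  # features
y = [r[LABEL] for r in records]                                    # labels

print(y)  # [24.99, 16.5]
```

Most ML frameworks expect exactly this shape — a feature matrix `X` and a label vector `y` — before training begins.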
Ask Bash
What is the difference between a label and a feature in machine learning?
How does a machine learning model use features to make predictions?
What is the role of the training dataset in machine learning?
An online retailer wants to understand patterns in customer behavior based on purchase history, browsing behavior, and demographic data to better tailor its marketing strategies.
Which machine learning technique is most appropriate for this task?
Time Series Analysis
Classification
Clustering
Regression
Answer Description
Clustering is the most appropriate technique for discovering inherent groupings or patterns in data without predefined labels. In this scenario, the retailer aims to segment customers based on similarities to tailor marketing strategies for each group.
Classification involves assigning data points to predefined categories using labeled data, which is not suitable here since the categories are not known beforehand.
Regression is used for predicting continuous numerical values, not for identifying patterns or groupings.
Time Series Analysis focuses on data points collected over time, which does not align with the goal of understanding customer behavior patterns in this context.
Ask Bash
What is clustering in machine learning?
How is clustering different from classification?
What are some real-world applications of clustering?
A global e-commerce company wants their product descriptions to be accessible to customers in multiple languages automatically within their app.
Which Azure service should they use to implement this multilingual text conversion feature?
Azure AI Speech service
Azure AI Translator service
Azure AI Language service
Answer Description
The Azure AI Translator service specializes in converting text from one language to another, making it ideal for automatically translating product descriptions into multiple languages.
The Azure AI Speech service focuses on speech recognition and synthesis, handling audio inputs and outputs.
The Azure AI Language service deals with understanding and processing natural language but does not provide direct translation capabilities.
Ask Bash
What is the Azure AI Translator service?
How does the Azure AI Translator service differ from the Azure AI Speech service?
What are common use cases for the Azure AI Translator service?
You are developing an application that needs to determine the attitude expressed in customer comments, categorizing them as positive, negative, or neutral.
Which feature of the Azure AI Language service should you use?
Key Phrase Extraction
Translation
Entity Recognition
Sentiment Analysis
Answer Description
The Sentiment Analysis feature of the Azure AI Language service is designed to evaluate text and determine the emotional tone, categorizing it as positive, negative or neutral. This allows applications to understand the overall sentiment conveyed in customer comments.
Key Phrase Extraction identifies important terms or topics within the text but doesn't assess emotional tone.
Entity Recognition identifies and classifies named entities like people, organizations and locations within the text.
Translation converts text from one language to another and does not analyze sentiment. Therefore, sentiment analysis is the appropriate feature for determining the attitude expressed in customer comments.
Ask Bash
How does Sentiment Analysis work in the Azure AI Language Service?
What are some examples of applications that use Sentiment Analysis?
How is Sentiment Analysis different from Key Phrase Extraction?
An AI engineer is working on a project that involves analyzing vast amounts of unstructured data, such as images and speech. She needs to build a model that can automatically learn hierarchical representations from raw data without extensive feature engineering.
Which machine learning technique is most appropriate for this scenario?
Deep Learning
Regression Algorithms
Clustering Algorithms
Decision Trees
Answer Description
Deep Learning - This is the correct answer. Deep learning is the most appropriate technique for analyzing unstructured data like images and speech, as it can automatically learn hierarchical representations from raw data. Deep learning models, especially neural networks, excel at processing and extracting meaningful features from complex data without requiring extensive manual feature engineering.
Clustering Algorithms are used for grouping data into clusters based on similarity, but they are not designed to automatically learn hierarchical representations from raw unstructured data like images and speech.
Regression Algorithms are used for predicting continuous numerical outcomes based on input features, but they are not suited for learning hierarchical representations from unstructured data.
Decision Trees are used for classification or regression tasks by splitting data into decision rules based on features. They require predefined features and are less effective than deep learning for automatically learning hierarchical representations from raw, unstructured data like images and speech.
Ask Bash
What makes deep learning suitable for unstructured data like images and speech?
How do neural networks automatically learn features?
What are some examples of deep learning architectures for analyzing images and speech?
A company wants to extract structured information such as names of people, places, organizations, and dates from a large collection of unstructured text documents.
Which natural language processing technique should they use to achieve this?
Key Phrase Extraction
Language Translation
Sentiment Analysis
Entity Recognition
Answer Description
Entity Recognition - This is the correct answer. Entity Recognition (also known as Named Entity Recognition or NER) is a natural language processing technique used to extract structured information such as names of people, places, organizations, dates and other specific entities from unstructured text documents.
Key Phrase Extraction - Key phrase extraction identifies important phrases in text but does not specifically target named entities like people, places, or dates.
Sentiment Analysis - Sentiment analysis is used to evaluate the emotional tone or sentiment (positive, negative, neutral) in text, not to extract specific entities.
Language Translation - Language translation is used to convert text from one language to another, but it does not focus on extracting structured information like names or dates.
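To illustrate what "extracting structured entities from unstructured text" means, here is a toy pattern-based extractor over an invented sentence. Real entity recognition (e.g., in Azure AI Language) uses trained statistical models rather than regular expressions, but the input and output have the same shape: free text in, typed entities out.

```python
import re

# Toy entity extraction with patterns. Real NER uses trained models,
# but the goal is the same: pull structured entities out of raw text.
TEXT = "Contoso Ltd opened an office in Seattle on 2024-03-15, said Jane Doe."

# ISO-style dates.
dates = re.findall(r"\b\d{4}-\d{2}-\d{2}\b", TEXT)
# Runs of capitalized words: crude candidates for names, places, organizations.
candidates = re.findall(r"\b[A-Z][a-z]+(?: [A-Z][a-z]+)*\b", TEXT)

print(dates)       # ['2024-03-15']
print(candidates)  # e.g. 'Contoso Ltd', 'Seattle', 'Jane Doe'
```

A trained NER model goes further: it distinguishes a person from an organization from a location, and handles entities the patterns above would miss or mislabel.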
Ask Bash
What is Entity Recognition, and why is it important?
How does Entity Recognition differ from Key Phrase Extraction?
What are some practical examples of using Entity Recognition?
An organization needs to digitally extract text from a large number of scanned documents and images containing both printed and handwritten text. They require a solution that can process unstructured data effectively.
Which feature of Azure AI services is most suitable for their needs?
Text Analytics to analyze and interpret text sentiment
Computer Vision's text reading capability
Form Recognizer to analyze and extract structured data
Face API to detect and analyze faces in images
Answer Description
Computer Vision's text reading capability - This is the correct answer. Computer Vision's text reading capability (including OCR features) in Azure AI services is the most suitable solution for extracting both printed and handwritten text from scanned documents and images. It can process unstructured data by detecting and reading text in various image formats, making it ideal for the organization's needs.
Form Recognizer to analyze and extract structured data - Form Recognizer is useful for extracting structured data (e.g., from forms or invoices), but it is not primarily designed to handle the extraction of general unstructured text from images, especially when dealing with a mix of printed and handwritten text.
Text Analytics to analyze and interpret text sentiment - Text Analytics is primarily used for analyzing and interpreting the sentiment, entities, or key phrases in text data, but it does not extract text from images or scanned documents.
Face API to detect and analyze faces in images - The Face API is focused on detecting and analyzing human faces in images, not for extracting text from documents or images. This is unrelated to the task of text extraction.
Ask Bash
How does Computer Vision's text reading capability work?
What is the difference between unstructured data and structured data?
When should I use Form Recognizer instead of Computer Vision?
As a data scientist at a software development company, you are considering models for generating synthetic data to enhance your testing datasets.
Which feature of generative AI models makes them suitable for this task?
They can classify data into specific categories with high precision.
They can reduce data dimensionality while retaining key features.
They can generate new data instances similar to the training data.
They can identify anomalies by learning normal data patterns.
Answer Description
They can generate new data instances similar to the training data - This is the correct answer. Generative AI models are designed to create new data instances that resemble the training data. This makes them ideal for generating synthetic data to augment testing datasets, ensuring variety and realism in the data used for testing.
They can classify data into specific categories with high precision - This feature is characteristic of classification models, which categorize data into predefined labels but do not generate new data instances.
They can reduce data dimensionality while retaining key features - This describes dimensionality reduction techniques (such as PCA), which focus on simplifying the data without losing important features. It is not related to generating new data instances.
They can identify anomalies by learning normal data patterns - This is a feature of anomaly detection models, which are used to identify outliers or unusual patterns in data, not for generating synthetic data.
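The core idea — learn the shape of real data, then sample new instances that resemble it — can be sketched with a simple statistical stand-in. The code below fits a normal distribution to invented latency measurements and samples synthetic values from it; large generative models do something far richer, but the fit-then-sample loop is the same in spirit.

```python
import random
from statistics import mean, stdev

def synthesize(real_values, n, seed=0):
    """Generate n synthetic values resembling the real data by sampling
    from a normal distribution fitted to it — a minimal stand-in for
    what generative models do with much richer data."""
    rng = random.Random(seed)
    mu, sigma = mean(real_values), stdev(real_values)
    return [rng.gauss(mu, sigma) for _ in range(n)]

# Hypothetical real measurements: request latencies in milliseconds.
real_latencies_ms = [102, 98, 110, 95, 105, 99]
fake = synthesize(real_latencies_ms, 100)
print(len(fake))  # 100 synthetic data points for the test dataset
```

The synthetic values cluster around the real data's mean and spread, giving the test suite realistic variety without exposing any actual production records.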
Ask Bash
What types of generative AI models are commonly used to create synthetic data?
How does generating synthetic data benefit testing datasets?
How is generative AI different from other AI models like classification or anomaly detection?
A developer needs to detect faces in photos and retrieve 27 facial landmark points, head-pose angles, and a limited age-estimate attribute. Which Azure service should they use?
Azure Cognitive Search service
Azure Computer Vision service
Azure Video Indexer service
Azure Face service
Answer Description
The Azure Face service is designed for detailed facial analysis. Its Detect API can return bounding boxes, 27-point landmarks, head-pose roll/yaw/pitch values, and (in approved scenarios) an age attribute. The Azure Computer Vision Image Analysis API only returns a basic face rectangle without these detailed attributes. Azure Cognitive Search indexes data, and Azure Video Indexer extracts insights from video, making them unsuitable for fine-grained face analysis in still images.
Ask Bash
What is the Detect API in Azure Face service?
What is the difference between Azure Face service and Azure Computer Vision service?
What scenarios require approval to use the age-estimate attribute in Azure Face service?
You are building an application that enables users to produce images by providing descriptive text inputs.
Which feature of Azure OpenAI Service would you utilize to implement this functionality?
Leverage the service's ability to generate images from text descriptions.
Implement image analysis to extract information from images.
Use code generation features to create image-rendering scripts.
Apply language translation capabilities to interpret user inputs.
Answer Description
Leverage the service's ability to generate images from text descriptions - This is the correct answer. Azure OpenAI Service provides models like DALL·E, which can generate images based on descriptive text inputs.
Use code generation features to create image-rendering scripts - This feature is focused on generating code for specific tasks, not on producing images directly.
Apply language translation capabilities to interpret user inputs - Language translation converts text from one language to another; it does not create images.
Implement image analysis to extract information from images - Image analysis focuses on understanding and processing existing images, not on generating new ones.
Ask Bash
What is DALL·E and how does it generate images from text?
How does Azure OpenAI Service differ from Microsoft Cognitive Services for image-related tasks?
What are some practical use cases for DALL·E in applications?
In machine learning, which of the following best describes the purpose of a validation dataset?
To collect new data for expanding the training dataset
To test the final model performance on unseen data after training is complete
To adjust model hyperparameters and prevent overfitting by evaluating the model's performance during training
To train the model by providing examples for it to learn from
Answer Description
The validation dataset is used to adjust model hyperparameters and prevent overfitting by evaluating the model's performance during training. It helps fine-tune the model before final testing. The training dataset is used for teaching the model, while the test dataset assesses the final performance on unseen data. Collecting new data is not related to the validation dataset's purpose.
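The three-way split described above can be sketched in a few lines. This is a stdlib-only illustration with arbitrary fractions (60/20/20); real projects often use library helpers such as scikit-learn's `train_test_split` and may add cross-validation.

```python
import random

def split(data, val_frac=0.2, test_frac=0.2, seed=0):
    """Shuffle and split into train / validation / test sets.
    Validation guides hyperparameter tuning during development;
    the test set is touched only once, at the very end."""
    rng = random.Random(seed)
    data = data[:]          # copy so the caller's list is untouched
    rng.shuffle(data)
    n_test = int(len(data) * test_frac)
    n_val = int(len(data) * val_frac)
    test = data[:n_test]
    val = data[n_test:n_test + n_val]
    train = data[n_test + n_val:]
    return train, val, test

train, val, test = split(list(range(100)))
print(len(train), len(val), len(test))  # 60 20 20
```

Keeping the three sets disjoint is the crucial property: if validation or test examples leak into training, the measured performance no longer reflects how the model will behave on genuinely unseen data.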
Ask Bash
What is the difference between a validation dataset and a test dataset?
What are hyperparameters in machine learning, and why are they adjusted using the validation dataset?
Why is it important to prevent overfitting in a machine learning model?