Microsoft Azure AI Fundamentals Practice Test (AI-900)
Use the form below to configure your Microsoft Azure AI Fundamentals Practice Test (AI-900). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft Azure AI Fundamentals AI-900 Information
The Microsoft Certified: Azure AI Fundamentals (AI-900) exam is an entry-level certification designed for individuals seeking foundational knowledge of artificial intelligence (AI) and machine learning (ML) concepts and their applications within the Microsoft Azure platform. The AI-900 exam covers essential AI workloads such as anomaly detection, computer vision, and natural language processing, and it emphasizes responsible AI principles, including fairness, transparency, and accountability. While no deep technical background is required, a basic familiarity with technology and Azure’s services can be helpful, making this certification accessible to a wide audience, from business decision-makers to early-career technologists.
The exam covers several major domains, starting with AI workloads and considerations, which introduces candidates to various types of AI solutions and ethical principles. Next, it delves into machine learning fundamentals, explaining core concepts like data features, model training, and types of machine learning such as classification and clustering. The exam also emphasizes specific Azure tools for implementing AI solutions, such as Azure Machine Learning Studio for visual model-building, the Computer Vision service for image analysis, and Azure Bot Service for conversational AI. Additionally, candidates learn how natural language processing (NLP) tasks, including sentiment analysis, translation, and speech recognition, are managed within Azure’s language and speech services.
Achieving the AI-900 certification demonstrates a solid understanding of AI and ML basics and prepares candidates for more advanced Azure certifications in data science or AI engineering. It’s an excellent credential for those exploring how AI solutions can be effectively used within the Azure ecosystem, whether to aid business decision-making or to set a foundation for future roles in AI and data analytics.

Free Microsoft Azure AI Fundamentals AI-900 Practice Test
- 20 Questions
- Unlimited
- Describe Artificial Intelligence Workloads and Considerations
- Describe Fundamental Principles of Machine Learning on Azure
- Describe Features of Computer Vision Workloads on Azure
- Describe Features of Natural Language Processing (NLP) Workloads on Azure
- Describe Features of Generative AI Workloads on Azure
You are designing a solution to automate the process of entering data from scanned invoices into a database. Which type of computer vision model should you use to extract the text from the scanned invoice images?
Object detection
Facial analysis
Image classification
Optical Character Recognition (OCR)
Answer Description
The correct answer is Optical Character Recognition (OCR). OCR is the technology used to extract printed or handwritten text from images, such as scanned documents like invoices. This allows the text to be converted into a machine-readable format, which is essential for automating data entry. Object detection is used to identify the location of objects in an image. Image classification is used to categorize an entire image. Facial analysis is used to locate human faces in an image and return attributes about them.
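To see how OCR fits into such a pipeline, here is a minimal sketch assuming the azure-ai-vision-imageanalysis Python package and an Azure AI Vision resource; the endpoint, key, and file name are placeholders.

```python
from azure.ai.vision.imageanalysis import ImageAnalysisClient
from azure.ai.vision.imageanalysis.models import VisualFeatures
from azure.core.credentials import AzureKeyCredential

# Placeholders: supply your own Azure AI Vision endpoint and key.
client = ImageAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# The Read (OCR) feature extracts printed or handwritten text from the scanned image.
with open("invoice.png", "rb") as f:
    result = client.analyze(image_data=f.read(), visual_features=[VisualFeatures.READ])

if result.read is not None:
    for block in result.read.blocks:
        for line in block.lines:
            print(line.text)  # each recognized line of text on the invoice
```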
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
How does Optical Character Recognition (OCR) work?
What are some real-world applications of OCR technology?
What is the difference between OCR and object detection?
A company collected data to develop a machine learning model that predicts the final selling price of products based on factors like 'Production Cost', 'Marketing Budget', 'Competitor Prices' and 'Time on Market'.
In this context, which variable is the label for the model?
Marketing Budget
Production Cost
Final Selling Price
Competitor Prices
Answer Description
In machine learning, the label is the variable that represents the output or the value the model is trying to predict. In this scenario, the model aims to predict the 'Final Selling Price' of products, making it the label.
The other variables, 'Production Cost', 'Marketing Budget', 'Competitor Prices', and 'Time on Market', are features that the model uses to make the prediction.
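As a quick illustration with a hypothetical toy dataset (not part of the question), the label is simply the column you separate out as the prediction target:

```python
import pandas as pd

# Toy data: the first four columns are features; 'FinalSellingPrice' is the label.
df = pd.DataFrame({
    "ProductionCost":    [12.00, 8.50, 15.25],
    "MarketingBudget":   [3.00, 1.20, 4.75],
    "CompetitorPrices":  [19.99, 14.50, 24.00],
    "TimeOnMarket":      [30, 90, 45],
    "FinalSellingPrice": [21.99, 15.75, 26.49],
})

X = df.drop(columns=["FinalSellingPrice"])  # features: inputs to the model
y = df["FinalSellingPrice"]                 # label: the value the model predicts
```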
Ask Bash
What is the difference between a label and a feature in machine learning?
How does a machine learning model use features to make predictions?
What is the role of the training dataset in machine learning?
A company wants to automatically create unique marketing slogans based on their brand values and target audience.
Which technology approach is most suitable for generating these slogans?
Implementing predictive analytics to forecast market trends
Utilizing generative models for content creation
Using clustering algorithms to segment customer data
Applying sentiment analysis to gauge customer opinions
Answer Description
Generative models can produce new content such as marketing slogans by learning from existing data. They can generate creative and brand-aligned slogans that resonate with the target audience, making this approach appropriate.
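For context, slogan generation like this is typically done by prompting a text-generation model. A minimal sketch, assuming the openai Python package, an Azure OpenAI resource, and a chat model deployment whose name you supply:

```python
from openai import AzureOpenAI

# Placeholders: endpoint, key, API version, and deployment name are assumptions.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-key>",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-chat-deployment>",  # e.g. a GPT-family deployment name
    messages=[
        {"role": "system", "content": "You write short, upbeat marketing slogans."},
        {"role": "user", "content": "Brand values: sustainability, craftsmanship. "
                                    "Audience: young urban professionals. "
                                    "Write three unique slogans."},
    ],
)
print(response.choices[0].message.content)
```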
Ask Bash
What exactly are generative models and how do they create content like marketing slogans?
How are generative models different from predictive analytics?
What kind of data or input does a generative model need to create effective slogans?
Which machine learning technique is particularly effective at processing large amounts of unstructured data such as images and text?
K-means Clustering
Logistic Regression
Deep Learning
Decision Trees
Answer Description
Deep Learning techniques excel at processing large amounts of unstructured data like images and text because they can automatically learn hierarchical representations and features directly from the data without manual feature engineering. Other techniques like logistic regression, k-means clustering, and decision trees are generally more effective on structured data and may require significant feature engineering.
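To make the contrast concrete, here is a minimal sketch (assuming PyTorch) of a tiny convolutional network: its layers learn visual features directly from raw pixels, with no hand-crafted feature engineering.

```python
import torch
from torch import nn

# A toy convolutional network for 32x32 RGB images and 10 classes.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learns low-level visual features
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 10),                 # maps learned features to class scores
)

logits = model(torch.randn(1, 3, 32, 32))  # one random "image" as a stand-in
print(logits.shape)                        # torch.Size([1, 10])
```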
Ask Bash
What is Deep Learning?
How does Deep Learning handle unstructured data?
What is feature engineering, and why is it less necessary in Deep Learning?
A software development team wants to automate the creation of functions and class implementations based on natural language descriptions.
Which Azure OpenAI Service model should they consider using?
Codex models for code generation
Embeddings models for text similarity
DALL·E models for image synthesis
GPT-3 models for natural language understanding
Answer Description
The Codex models in Azure OpenAI Service are specifically designed to translate natural language prompts into code across various programming languages, making them ideal for automating code creation tasks.
The GPT-3 models are geared towards generating human-like text but are not specialized for code.
The DALL·E models generate images from textual descriptions, not code.
The Embeddings models are used for finding text similarities and are not suitable for generating code from descriptions.
Ask Bash
What are Codex models, and how are they different from GPT-3?
What programming languages does Codex support?
How does Codex handle errors in generated code?
A security company wants to develop a system that can automatically detect and alert on suspicious activities in video surveillance footage.
Which workload is most appropriate for building this solution?
Natural Language Processing
Computer Vision
Generative AI
Knowledge Mining
Answer Description
Computer Vision focuses on processing and interpreting visual information from images or videos. It is the most suitable choice for analyzing video surveillance to detect suspicious activities.
Natural Language Processing deals with understanding and generating human language, which is not applicable to visual data.
Knowledge Mining involves extracting insights from large volumes of structured and unstructured data, typically text-based.
Generative AI is concerned with creating new content rather than analyzing existing footage.
Ask Bash
What is Computer Vision?
How does Computer Vision detect suspicious activities in video footage?
Why is Natural Language Processing (NLP) not suitable for this task?
A retail store plans to use AI to monitor shelf stock levels by locating and counting various products in images taken by in-store cameras.
Which computer vision technique should the store use to achieve this goal?
Semantic Segmentation
Object Detection
Optical Character Recognition (OCR)
Image Classification
Answer Description
Object Detection is the appropriate technique because it not only identifies instances of objects within an image but also provides their locations, typically using bounding boxes. This makes it suitable for locating and counting products on shelves.
Image Classification assigns a label to an entire image but does not identify individual objects within the image or their positions.
Optical Character Recognition (OCR) is used to extract text from images, which is not relevant for detecting products.
Semantic Segmentation classifies each pixel in an image, assigning a class label to every pixel, which can provide more detailed localization but is more complex than necessary for simply locating and counting objects.
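Conceptually, counting then becomes a matter of grouping the detections by label. The detection output format below is purely illustrative (real services return their own result objects):

```python
from collections import Counter

# Hypothetical detections: (label, confidence, bounding box as x, y, width, height).
detections = [
    ("cereal_box",   0.94, (34, 80, 120, 210)),
    ("cereal_box",   0.91, (160, 78, 122, 212)),
    ("juice_carton", 0.88, (300, 60, 90, 230)),
]

# Keep confident detections and count each product type on the shelf.
counts = Counter(label for label, confidence, _ in detections if confidence >= 0.5)
print(counts)  # Counter({'cereal_box': 2, 'juice_carton': 1})
```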
Ask Bash
How does Object Detection differ from Image Classification?
What algorithms or models are commonly used for Object Detection?
Why isn’t Semantic Segmentation a better choice for this task?
A software developer wants to use Azure OpenAI Service to generate code snippets from natural language descriptions.
Which model should the developer choose to best accomplish this task?
A model specialized in image generation
A model specialized in code generation
A model specialized in generating long-form text
A model specialized in sentiment analysis
Answer Description
A model specialized in code generation - This is the correct answer. For generating code snippets from natural language descriptions, the developer should use a model like Codex, which is specifically designed for code generation. It can understand natural language inputs and generate corresponding code in various programming languages.
A model specialized in image generation - This type of model, like DALL·E, is designed for generating images from text descriptions, not for generating code.
A model specialized in sentiment analysis - Sentiment analysis models are used to assess the emotional tone of text, not to generate code based on natural language descriptions.
A model specialized in generating long-form text - While models like GPT are capable of generating text, they are not specifically optimized for code generation, making them less ideal for this task compared to a model like Codex.
Ask Bash
What is Azure OpenAI Codex?
How does Codex differ from GPT when generating text?
Can Codex support multiple programming languages?
An organization needs a service that can generate new text based on input prompts, for uses such as content creation or code suggestions.
Which Azure service provides this capability?
Azure OpenAI Service
Azure Translator Service
Azure Text Analytics
Azure Cognitive Search
Answer Description
Azure OpenAI Service offers advanced language models that can generate human-like text based on input prompts, making it suitable for content creation and code suggestions. The other services listed do not provide text generation capabilities.
Azure Translator Service is designed for translating text between languages.
Azure Cognitive Search enables indexing and querying of content.
Azure Text Analytics provides features like sentiment analysis and key phrase extraction but does not generate new text.
Ask Bash
What is the Azure OpenAI Service, and how does it work?
What is the difference between Azure OpenAI Service and Azure Text Analytics?
How does Azure OpenAI Service ensure security and compliance for its users?
A data scientist is building a machine learning model to predict housing prices based on various factors such as location, size, and age of properties.
In the dataset, which of the following represents the label?
The price of the property
The age of the property in years
The size of the property in square feet
The location of the property
Answer Description
In machine learning, the label is the target variable that the model is intended to predict. Here, 'The price of the property' is what the model aims to predict, making it the label.
The other options are input variables, or features, that provide information to help the model make accurate predictions.
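As a toy illustration with hypothetical numbers (assuming scikit-learn), the features feed into training while the label is what the fitted model predicts:

```python
from sklearn.linear_model import LinearRegression

# Features: size (sq ft), age (years), and location encoded as a numeric code.
X = [[1200, 10, 1],
     [1800, 5, 2],
     [950, 40, 1]]
# Label: the sale price the model learns to predict.
y = [250_000, 410_000, 180_000]

model = LinearRegression().fit(X, y)
print(model.predict([[1500, 15, 2]]))  # predicted price for an unseen property
```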
Ask Bash
What is a label in machine learning?
What is the difference between a label and a feature in machine learning?
Why do machine learning models need labeled data?
In Azure computer vision solutions, which technique analyzes the overall content of an image and then assigns a single label that describes the entire image?
Object Detection
Image Classification
Optical Character Recognition (OCR)
Image Segmentation
Answer Description
Image Classification assigns a single label to an entire image by analyzing its overall content. This differs from object detection, which identifies and locates individual objects within an image, and image segmentation, which classifies each pixel to understand the image at a granular level.
Ask Bash
What is the primary purpose of Image Classification?
How does Image Classification differ from Object Detection?
Can Image Classification work with multiple labels for a single image?
A logistics company wants to implement a system that can identify and locate damaged packages in images captured by surveillance cameras in their warehouses.
Which type of computer vision solution is the most suitable for this requirement?
Facial Detection
Object Detection
Image Classification
Optical Character Recognition (OCR)
Answer Description
Object Detection is the appropriate solution because it not only recognizes objects within an image but also determines their locations by drawing bounding boxes around them. This allows the system to both identify damaged packages and pinpoint where they are in each image.
Image Classification could determine if an image contains a damaged package but cannot locate it within the image.
Optical Character Recognition (OCR) is used for extracting and interpreting text from images, which is not applicable in this scenario.
Facial Detection focuses on identifying human faces and would not be useful for detecting packages.
Ask Bash
Why is Object Detection better suited than Image Classification for this scenario?
What technologies enable Object Detection to locate objects within an image?
How does Object Detection ensure accuracy when identifying damaged packages in images?
In Azure Machine Learning, which component is used to deploy trained models to provide real-time predictions via web services?
Azure Machine Learning Compute Instances
Azure Blob Storage
Azure Machine Learning Pipelines
Azure Machine Learning Endpoints
Answer Description
Azure Machine Learning Endpoints allow you to deploy trained machine learning models as web services, enabling real-time predictions through RESTful APIs.
Azure Machine Learning Pipelines are used to build and manage automated workflows.
Azure Machine Learning Compute Instances provide development environments for machine learning tasks.
Azure Blob Storage is a service for storing unstructured data.
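Once a model is deployed to an online endpoint, clients call it over HTTPS. A minimal sketch of such a call; the scoring URI, key, and request payload shape are placeholders that depend on your endpoint and model:

```python
import requests

# Placeholders from a hypothetical deployed Azure ML online endpoint.
scoring_uri = "https://<endpoint-name>.<region>.inference.ml.azure.com/score"
api_key = "<endpoint-key>"

# The expected payload shape depends on how the model was deployed.
payload = {"input_data": {"columns": ["size", "age"], "data": [[1500, 12]]}}
headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}

response = requests.post(scoring_uri, json=payload, headers=headers)
print(response.json())  # real-time prediction returned by the endpoint
```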
Ask Bash
What is the purpose of Azure Machine Learning Endpoints?
How do RESTful APIs work in Azure Machine Learning Endpoints?
What are the differences between Azure Machine Learning Pipelines and Endpoints?
Which characteristic is associated with AI models that generate new content based on learned patterns?
They predict numerical values from historical data
They classify input data into predefined categories
They retrieve exact copies of existing content
They create new data based on learned patterns from training data
Answer Description
They create new data based on learned patterns from training data - This is the correct answer. AI models that generate new content, such as text, images, or music, are typically generative models. They learn patterns from training data and use this knowledge to produce original content that resembles the data they were trained on.
They classify input data into predefined categories - This describes classification models, which focus on categorizing data into specific labels or categories, rather than generating new content.
They predict numerical values from historical data - This is characteristic of predictive models, which are used for forecasting or regression tasks, rather than generating new content.
They retrieve exact copies of existing content - This describes retrieval-based models, which focus on finding and returning pre-existing content from a database or dataset, rather than generating new, original content.
Ask Bash
What are some examples of generative AI models?
How does a generative model differ from a classification model?
What role does training data play in generative AI?
An online retail company wants to enhance customer engagement by introducing a new feature on their website. They are considering various AI-powered solutions.
Which of the following would be an appropriate use of generative AI in this context?
Analyzing customer purchasing patterns to recommend products they might like.
Using image recognition to categorize new products uploaded by sellers.
Implementing a chatbot that provides customers with automated responses based on predefined scripts.
Generating personalized product descriptions for each customer based on their browsing history.
Answer Description
Generating personalized product descriptions uses a text-generation model that creates new, tailored content for each visitor based on patterns learned from their browsing data. This is a clear example of generative AI, which focuses on producing novel outputs rather than selecting from fixed templates.
The other options are not generative:
- A scripted chatbot simply retrieves predefined replies, so no new content is generated.
- Analyzing purchasing patterns to recommend items is predictive analytics, concerned with forecasting rather than content creation.
- Image recognition that categorizes uploads is a classification task in computer vision, not content generation.
Ask Bash
What makes generative AI different from predictive models?
How does generative AI learn to create personalized content?
Can generative AI be used alongside other types of AI in an online retail setting?
An AI development team is training a machine learning model using customer data to enhance product recommendations.
What is the most effective method to safeguard customer privacy during the training process?
Encrypt the dataset during storage
Use secure servers for computation
Remove personally identifiable information from the data
Limit data access to authorized personnel
Answer Description
Removing personally identifiable information (PII) from the data, also known as data anonymization, is the most effective way to protect customer privacy during model training.
While encrypting data and limiting access are important security measures, they do not prevent the AI model from potentially learning sensitive information. Using secure servers protects data from external threats but does not address privacy concerns related to data handling during training.
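For text fields, one way to strip PII before training is the PII detection feature of the Azure AI Language service. A minimal sketch, assuming the azure-ai-textanalytics package and a Language resource (endpoint and key are placeholders):

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

docs = ["Order 1234 placed by Jane Doe, phone 555-0100, email jane@example.com."]
results = client.recognize_pii_entities(docs)

for doc in results:
    if not doc.is_error:
        print(doc.redacted_text)  # PII spans are masked before the data is used for training
```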
Ask Bash
What is Personally Identifiable Information (PII)?
How does data anonymization work in machine learning?
Why is encrypting the dataset not enough to safeguard privacy during training?
You are building an application that enables users to produce images by providing descriptive text inputs.
Which feature of Azure OpenAI Service would you utilize to implement this functionality?
Implement image analysis to extract information from images.
Use code generation features to create image-rendering scripts.
Apply language translation capabilities to interpret user inputs.
Leverage the service's ability to generate images from text descriptions.
Answer Description
Leverage the service's ability to generate images from text descriptions - This is the correct answer. Azure OpenAI Service provides models like DALL·E, which can generate images based on descriptive text inputs.
Use code generation features to create image-rendering scripts - This feature is focused on generating code, not on producing images directly.
Apply language translation capabilities to interpret user inputs - Language translation converts text from one language to another; it does not create images.
Implement image analysis to extract information from images - Image analysis interprets existing images rather than generating new ones.
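A minimal sketch of text-to-image generation, assuming the openai package and an Azure OpenAI resource with a DALL·E deployment (endpoint, key, API version, and deployment name are placeholders):

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-key>",
    api_version="2024-02-01",
)

result = client.images.generate(
    model="<your-dalle-deployment>",  # deployment name of an image-generation model
    prompt="A watercolor illustration of a lighthouse at sunrise",
    n=1,
)
print(result.data[0].url)  # URL of the generated image
```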
Ask Bash
What is DALL·E and how does it generate images from text?
How does Azure OpenAI Service differ from Microsoft Cognitive Services for image-related tasks?
What are some practical use cases for DALL·E in applications?
A marketing team needs to estimate the age range and head pose of people in user-submitted photos so they can understand customer demographics for targeted campaigns. Which type of computer-vision solution should they choose?
Optical Character Recognition (OCR)
Object Detection
Image Classification
Facial Analysis
Answer Description
Facial detection and analysis solutions locate each face in an image and can return attributes such as bounding-box coordinates, head pose, and a limited age estimate. These details let the marketing team derive per-person demographic insights.
Image classification assigns one or more labels to an entire image but does not return per-face information. Object detection draws boxes around many object types but does not calculate facial attributes like age. Optical character recognition (OCR) only extracts text, so it is irrelevant here.
Ask Bash
What is Facial Analysis in computer vision?
How does Facial Analysis differ from Object Detection?
Can Facial Analysis solutions determine emotions accurately?
An organization wants to build an application that can perform sentiment analysis, key phrase extraction, and named entity recognition on customer reviews.
Which Azure service should they use to achieve this functionality with minimal custom development?
Azure Cognitive Search
Azure AI Speech service
Azure AI Language service
Azure Machine Learning
Answer Description
The Azure AI Language service provides pre-built features for sentiment analysis, key phrase extraction, and named entity recognition, enabling developers to analyze text data with minimal custom development.
Azure Machine Learning is a platform for building and deploying custom machine learning models, which would require more effort to implement these features.
Azure AI Speech service focuses on speech processing, such as speech-to-text and text-to-speech, not text analytics.
Azure Cognitive Search is designed for indexing and searching over data but does not provide text analytics capabilities like sentiment analysis or entity recognition.
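A minimal sketch of all three features against one review, assuming the azure-ai-textanalytics package (the current Python SDK for the Azure AI Language service); endpoint and key are placeholders:

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

reviews = ["The delivery was fast and the headphones sound fantastic."]

print(client.analyze_sentiment(reviews)[0].sentiment)        # e.g. "positive"
print(client.extract_key_phrases(reviews)[0].key_phrases)    # key phrases from the review
for entity in client.recognize_entities(reviews)[0].entities:
    print(entity.text, entity.category)                      # named entities and their types
```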
Ask Bash
What is sentiment analysis in the Azure AI Language service?
What is named entity recognition (NER) in the Azure AI Language service?
What is key phrase extraction in the Azure AI Language service?
Which computer vision solution assigns labels to images based on the overall visual content, without pinpointing the location of specific objects?
Object Detection
Image Classification
Optical Character Recognition (OCR)
Semantic Segmentation
Answer Description
Image Classification - This is the correct answer. Image classification assigns labels to images based on the overall visual content, without pinpointing the location of specific objects. It categorizes the entire image into predefined classes (for example, "dog," "cat," or "car") but does not locate individual objects within the image.
Object Detection involves identifying and locating specific objects within an image, often with bounding boxes, in addition to classifying the objects. This is different from image classification, which does not detect object locations.
Optical Character Recognition (OCR) is used to extract and recognize text from images, but it is not focused on classifying images or identifying overall content.
Semantic Segmentation divides an image into regions that correspond to different object categories and labels every pixel in the image, which is more detailed than image classification. It assigns labels to every part of the image, rather than classifying the entire image at once.
Ask Bash
How does Image Classification differ from Object Detection?
What are some real-world applications of Image Classification?
Why doesn't Image Classification pinpoint the location of objects?
Neat!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.