Microsoft Azure AI Fundamentals Practice Test (AI-900)
Use the form below to configure your Microsoft Azure AI Fundamentals Practice Test (AI-900). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft Azure AI Fundamentals AI-900 Information
The Microsoft Certified: Azure AI Fundamentals (AI-900) exam is an entry-level certification designed for individuals seeking foundational knowledge of artificial intelligence (AI) and machine learning (ML) concepts and their applications within the Microsoft Azure platform. The AI-900 exam covers essential AI workloads such as computer vision, natural language processing, and generative AI, and it emphasizes responsible AI principles, including fairness, transparency, and accountability. While no deep technical background is required, a basic familiarity with technology and Azure’s services can be helpful, making this certification accessible to a wide audience, from business decision-makers to early-career technologists.
The exam covers several major domains, starting with AI workloads and considerations, which introduces candidates to various types of AI solutions and ethical principles. Next, it delves into machine learning fundamentals, explaining core concepts like data features, model training, and types of machine learning such as classification and clustering. The exam also emphasizes specific Azure tools for implementing AI solutions, such as Azure Machine Learning Studio for visual model-building, the Azure AI Vision service for image analysis, and Azure Bot Service for conversational AI. Additionally, candidates learn how natural language processing (NLP) tasks, including sentiment analysis, translation, and speech recognition, are managed within Azure’s language and speech services.
Achieving the AI-900 certification demonstrates a solid understanding of AI and ML basics and prepares candidates for more advanced Azure certifications in data science or AI engineering. It’s an excellent credential for those exploring how AI solutions can be effectively used within the Azure ecosystem, whether to aid business decision-making or to set a foundation for future roles in AI and data analytics.
Free Microsoft Azure AI Fundamentals AI-900 Practice Test
Press Start when you are ready, or press Change to modify any settings for the practice test.
- Questions: 20
- Time: Unlimited
- Included Topics:
  - Describe Artificial Intelligence Workloads and Considerations
  - Describe Fundamental Principles of Machine Learning on Azure
  - Describe Features of Computer Vision Workloads on Azure
  - Describe Features of Natural Language Processing (NLP) Workloads on Azure
  - Describe Features of Generative AI Workloads on Azure
A retail company wants to automatically identify and categorize products on store shelves using images from in-store cameras.
Which Azure workload should they use?
Natural Language Processing (NLP)
Knowledge Mining
Generative AI
Computer Vision
Answer Description
They should use Computer Vision. This workload enables machines to interpret and understand visual information from images or videos. In this scenario, the retail company aims to analyze images from cameras to identify and categorize products, which is a typical application of Computer Vision.
Natural Language Processing deals with understanding and processing human language, which is not applicable here.
Knowledge Mining involves extracting information from large volumes of text but doesn't directly address image analysis.
Generative AI focuses on creating new content, such as text or images, and is not suited for product identification tasks.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Computer Vision and how does it work?
What are some real-world applications of Computer Vision?
What are the main differences between Computer Vision and Natural Language Processing (NLP)?
Which of the following scenarios best exemplifies a document intelligence workload?
A service that extracts key information from invoices and receipts
An algorithm that identifies objects in images
An application that translates spoken language into text
A system that recommends products based on user preferences
Answer Description
Document intelligence workloads involve extracting structured information from unstructured or semi-structured documents like invoices, receipts, and forms. A service that extracts key information from invoices and receipts is an example of this, as it automates data extraction from documents.
The other options represent different AI workloads: translating spoken language into text is speech recognition (natural language processing), recommending products based on user preferences is personalization, and identifying objects in images is computer vision.
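For readers who want to see what an invoice or receipt extraction call can look like in practice, below is a minimal sketch using the Azure Document Intelligence (Form Recognizer) Python SDK with the prebuilt receipt model. The endpoint, key, and document URL are placeholders, and model IDs and SDK class names should be verified against the current documentation.
```python
# Minimal sketch: extract fields from a receipt with the azure-ai-formrecognizer SDK.
# Endpoint, key, and document URL are placeholders; verify model IDs and SDK versions
# against the current Azure Document Intelligence documentation.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Analyze a receipt image hosted at a (placeholder) URL with the prebuilt receipt model.
poller = client.begin_analyze_document_from_url(
    "prebuilt-receipt", "https://example.com/sample-receipt.png"
)
result = poller.result()

for document in result.documents:
    for name, field in document.fields.items():
        print(name, "=", field.content, "confidence:", field.confidence)
```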
Ask Bash
What are unstructured and semi-structured documents?
How does document intelligence differ from other AI workloads?
What is OCR, and why is it important in document intelligence?
Why do machine learning practitioners divide a dataset into separate training and validation subsets when building a model?
To reduce the training time by using smaller datasets
To increase the total amount of data available for training
To evaluate the model's performance on unseen data, helping to prevent overfitting
To ensure the model memorizes the training data perfectly
Answer Description
To evaluate the model's performance on unseen data, helping to prevent overfitting - This is the correct answer. Dividing a dataset into training and validation subsets allows the model to learn from the training data while being evaluated on a separate validation set. This helps assess the model's ability to generalize to new, unseen data, preventing overfitting where the model memorizes the training data but performs poorly on new data.
To increase the total amount of data available for training - This is incorrect because splitting the data reduces the amount of data available for training. The goal is to use a portion for training and another for validation, not to increase the total amount of data.
To reduce the training time by using smaller datasets - Splitting the data does not specifically aim to reduce training time. It’s more about evaluating model performance and preventing overfitting.
To ensure the model memorizes the training data perfectly - The goal is not for the model to memorize the training data. In fact, memorization (overfitting) is something practitioners want to avoid. The focus is on ensuring the model generalizes well to new, unseen data.
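As an illustration beyond the exam answer, here is a minimal scikit-learn sketch of a train/validation split; the synthetic dataset and logistic regression model are placeholders for the idea, not a recommended setup.
```python
# Minimal sketch: hold out 20% of the data as a validation set so the model is
# evaluated on examples it never saw during training. The dataset here is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=42)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Accuracy on the validation set reflects generalization, not memorization.
print("Training accuracy:  ", accuracy_score(y_train, model.predict(X_train)))
print("Validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
```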
Ask Bash
What is overfitting in machine learning?
Why is it important to use a validation dataset?
How is a training dataset different from a validation dataset?
Which of the following is a primary function of content moderation workloads in AI?
Analyzing user behavior to recommend products
Generating summaries of long documents
Automatically detecting and filtering inappropriate or harmful content
Translating text from one language to another
Answer Description
Content moderation workloads in AI are designed to automatically detect and filter inappropriate or harmful content, such as offensive language, violent images, or spam. This helps maintain a safe and welcoming environment for users.
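As a concrete illustration, Azure exposes this capability through the Azure AI Content Safety service; the sketch below assumes the azure-ai-contentsafety Python package, and the endpoint, key, and response attribute names should be checked against the current SDK version.
```python
# Minimal sketch: send a piece of user-generated text to Azure AI Content Safety
# and inspect the severity score returned per harm category. Endpoint and key are
# placeholders; attribute names may vary between SDK versions.
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

response = client.analyze_text(AnalyzeTextOptions(text="Example user comment to screen."))

# Each analyzed category (e.g. hate, violence) comes back with a severity score
# that an application can compare against its own moderation thresholds.
for category in response.categories_analysis:
    print(category.category, "severity:", category.severity)
```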
Ask Bash
What technologies are typically used in AI-based content moderation?
What are the benefits of using AI for content moderation versus manual moderation?
How does AI balance accuracy and fairness in content moderation?
An email service provider wants to add a feature that suggests sentence completions to users as they compose emails, improving typing efficiency.
Which Azure AI capability should they utilize to implement this functionality?
Sentiment Analysis
Language Modeling
Entity Recognition
Key Phrase Extraction
Answer Description
Language Modeling is the appropriate AI capability for this scenario because it can generate and predict text based on context, allowing the application to suggest probable sentence completions.
Sentiment analysis determines the emotional tone of text, entity recognition identifies and classifies entities within text, and key phrase extraction identifies the main topics or themes. These capabilities do not generate text predictions.
Ask Bash
What is language modeling in AI?
How does language modeling differ from sentiment analysis?
What role does Azure play in implementing language modeling?
An organization needs to process a large collection of images and generate the approximate age of every person detected in each image.
Which Azure AI capability should they use?
Object Detection
Speech Recognition
Text Analytics
Facial Analysis
Answer Description
Facial analysis features of the Azure AI Face service detect faces in an image and can predict certain attributes, such as an estimated age, for each detected face (this attribute is available only to approved customers with limited access). Object detection locates generic objects but does not return facial attributes. Text Analytics works with text data, and Speech Recognition works with audio; neither service can estimate age from images.
Ask Bash
What is the Azure AI Face service, and how is it used in Facial Analysis?
How does Facial Analysis differ from Object Detection in Azure AI?
What are some examples of real-world applications that use Facial Analysis?
Which statement best reflects responsible AI considerations for bias when using generative AI models in Azure OpenAI Service?
Generative AI models can inherit and amplify social biases present in their training data; developers should apply ongoing human-led monitoring, testing, and mitigation strategies.
Generative AI models are inherently unbiased because they rely solely on mathematical algorithms, so human oversight is unnecessary.
Bias is only a concern during the pre-training phase; once a model is fine-tuned it can be deployed safely without further monitoring.
Using Azure OpenAI content filters alone removes all bias from generated outputs, eliminating the need for additional red-team testing.
Answer Description
Generative AI models learn from massive text and image corpora that contain historical and social biases. Those biases can surface or even amplify in model outputs. Responsible AI guidance from Microsoft requires developers to continuously identify, measure, and mitigate potential harms by combining automated tools (such as content filters) with human-in-the-loop reviews, red-team testing, and ongoing monitoring. Ignoring these steps risks unfair, stereotyped, or otherwise harmful generations. Therefore, the only correct statement is that bias can occur and must be addressed through sustained human oversight.
Ask Bash
What does 'bias in generative AI models' mean?
What is human-in-the-loop review, and why is it important for mitigating bias?
What is red-team testing in the context of responsible AI development?
A company wants to develop a model that can determine if a transaction is fraudulent or legitimate. What type of machine learning task is appropriate for this scenario?
Dimensionality Reduction
Clustering
Regression
Classification
Answer Description
Classification - This is the correct answer. Fraud detection is a classification task because the model needs to classify each transaction as either fraudulent or legitimate, which involves assigning data points to predefined categories or labels.
Regression is used for predicting continuous numerical values, for example sales forecasts or prices, not for classifying transactions into categories such as "fraudulent" or "legitimate."
Clustering is an unsupervised learning technique used to group data based on similarities, but it is not suitable for determining whether a transaction is fraudulent or legitimate, which requires labeled data and a classification approach.
Dimensionality Reduction is used to reduce the number of features in the data, typically for improving performance or visualization, but it is not a task in itself for determining fraud or legitimacy.
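To make the distinction concrete, here is a minimal, synthetic sketch of fraud detection framed as binary classification; the features and the labeling rule are invented purely to produce labeled data for the example.
```python
# Minimal sketch: binary classification on synthetic labeled transactions
# (1 = fraudulent, 0 = legitimate). Real fraud models use far richer features
# and must handle heavy class imbalance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
amount = rng.exponential(scale=50, size=5000)   # transaction amount
hour = rng.integers(0, 24, size=5000)           # hour of day
y = ((amount > 120) & (hour < 6)).astype(int)   # toy labeling rule for the sketch
X = np.column_stack([amount, hour])

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test),
                            target_names=["legitimate", "fraudulent"]))
```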
Ask Bash
What is the difference between classification and regression in machine learning?
Why is supervised learning required for fraud detection?
What is clustering, and why is it not suitable for fraud detection in this case?
A marketing team wants to automatically generate creative slogans and taglines based on their company's products to enhance their advertising campaigns.
Which method would be most suitable for this task?
Using language generation models to produce creative content
Implementing product recommendation algorithms
Applying data analysis and visualization techniques
Utilizing sentiment analysis for text assessment
Answer Description
Using language generation models to produce creative content - This is the correct answer. Language generation models, such as GPT, are ideal for automatically creating creative content like slogans and taglines by generating text based on input prompts or the product's features.
Applying data analysis and visualization techniques - This method is used to analyze and present data in a visual format, not to generate creative text.
Implementing product recommendation algorithms - Product recommendation algorithms suggest products to customers based on their behavior or preferences; they do not produce new marketing copy.
Utilizing sentiment analysis for text assessment - Sentiment analysis assesses the emotional tone of existing text rather than generating new content such as slogans.
Ask Bash
What are language generation models and how do they work?
Could sentiment analysis improve the generated slogans?
How is a product recommendation algorithm different from a language generation model?
Which type of machine learning workload involves synthesizing new content similar to examples it has been trained on?
Data Classification
Regression Analysis
Anomaly Detection
Content Synthesis tasks
Answer Description
Content Synthesis tasks - This is the correct answer. Content synthesis tasks involve generating new content that is similar to the examples the model has been trained on. This is typically seen in generative models, where the system can create new, original content such as text, images, or music based on the patterns it has learned.
Anomaly Detection focuses on identifying outliers or unusual patterns in data, not on generating new content. It is used for detecting abnormal behavior or data points.
Data Classification involves categorizing data into predefined classes or categories, such as labeling emails as spam or not. It does not involve generating new content.
Regression Analysis is used to predict numerical values based on historical data, such as forecasting sales or stock prices. It is not focused on generating new content.
Ask Bash
What are some examples of content synthesis in machine learning?
What is the key difference between content synthesis and data classification tasks?
What machine learning models are commonly used for content synthesis tasks?
Which practice best ensures accountability in the development of an AI solution?
Allowing stakeholders to audit and review the system
Automating decision-making without human oversight
Prioritizing only performance metrics during development
Maintaining complete confidentiality of development processes
Answer Description
Allowing stakeholders to audit and review the system - This is the correct answer. Allowing stakeholders to audit and review the AI system ensures transparency and accountability in the development process. It enables oversight, helps identify potential issues, and ensures that the system meets ethical and legal standards.
Maintaining complete confidentiality of development processes - While confidentiality is important for protecting intellectual property, it can hinder accountability by limiting transparency and oversight from stakeholders who need to ensure the system is ethical and fair.
Automating decision-making without human oversight - Automating decision-making without human oversight can reduce accountability, as there would be no mechanism for checking or correcting decisions made by the AI system, especially in critical areas like healthcare or finance.
Prioritizing only performance metrics during development - Focusing solely on performance metrics can lead to overlooking ethical concerns and accountability issues. A holistic approach that considers fairness, transparency, and accountability is essential in AI development.
Ask Bash
Why is stakeholder auditing critical in AI development?
What mechanisms are put in place to enable stakeholder audits?
How does human oversight complement AI accountability?
A retail company wants to train a machine learning model to forecast future sales using historical data. Their dataset contains the following columns: date, store location, number of customers, total sales, and promotional events.
Which column in the dataset represents the label?
Total sales
Number of customers
Store location
Date
Answer Description
The label in a machine learning dataset is the variable that you aim to predict. In this scenario, the company wants to forecast future sales, so total sales is the label.
The other columns are features used to make the prediction.
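As a quick illustration with made-up values, the split between features and label for this dataset might look like the following pandas sketch.
```python
# Minimal sketch: separating the label (what we predict) from the features
# (the inputs) in the scenario's sales dataset. Values are made up.
import pandas as pd

df = pd.DataFrame({
    "date": ["2024-01-01", "2024-01-02", "2024-01-03"],
    "store_location": ["Seattle", "Portland", "Seattle"],
    "number_of_customers": [120, 95, 143],
    "promotional_event": [True, False, True],
    "total_sales": [2450.0, 1830.0, 2975.0],
})

y = df["total_sales"]                 # label: the value the model should predict
X = df.drop(columns=["total_sales"])  # features: the inputs used to predict it
print(X.columns.tolist(), "->", y.name)
```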
Ask Bash
What are features in a machine learning dataset?
Can a dataset have more than one label?
What makes a good label in machine learning?
A data scientist is tasked with developing a model to recognize and classify images of handwritten letters from thousands of samples with varying handwriting styles.
Which feature of deep learning techniques makes them particularly suitable for this task?
Their dependence on manual feature extraction methods
Their minimal computational resource requirements during training
Their effectiveness when working with small datasets
Their ability to automatically learn complex patterns and features from raw data like images
Answer Description
Deep learning techniques have the capability to automatically learn complex patterns and features from raw data like images. This ability eliminates the need for manual feature engineering, allowing the model to identify intricate patterns within the data that may be difficult to extract manually.
The other options are incorrect because deep learning models typically require substantial computational resources and large datasets to perform effectively. Additionally, they do not depend on manual feature extraction methods.
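For readers curious what "learning features from raw data" looks like in code, here is a minimal, untrained Keras sketch of a small convolutional network for 28x28 grayscale letter images; the layer sizes and the assumption of 26 classes are illustrative only.
```python
# Minimal sketch: a small convolutional network that learns features directly
# from raw 28x28 grayscale letter images (assumed 26 classes, A-Z). No manual
# feature engineering is required; the convolutional layers learn it from pixels.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 3, activation="relu"),  # low-level strokes and edges
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),  # higher-level letter parts
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(26, activation="softmax"),   # one probability per letter
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# With real data: model.fit(train_images, train_labels, validation_split=0.1, epochs=5)
```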
Ask Bash
What is feature extraction in deep learning?
Why do deep learning models require large datasets?
What computational resources are necessary for deep learning?
An environmental research team wants to use AI to automatically identify and track different species of animals in recorded data collected from field sensors.
Which AI workload would be most suitable for this task?
Knowledge Mining
Document Intelligence
Computer Vision
Natural Language Processing (NLP)
Answer Description
Computer Vision is the appropriate AI workload for analyzing visual data to identify and track objects or patterns within images or video. In this scenario, the team needs to process visual data to recognize different animal species, which is a typical application of Computer Vision.
Natural Language Processing (NLP) deals with understanding and generating human language, which is not relevant here.
Knowledge Mining involves extracting information from large datasets but doesn't specifically handle visual recognition tasks.
Document Intelligence focuses on extracting information from documents, which does not apply to analyzing images or videos.
Ask Bash
What is Computer Vision, and how does it work?
How is Computer Vision trained to identify and track objects?
What are examples of tools or platforms used for Computer Vision in Azure?
What capability does the Azure AI Vision service provide to developers?
Analyzing images and extracting visual information
Performing sentiment analysis on text data
Transcribing spoken language into text
Translating text between different languages
Answer Description
Analyzing images and extracting visual information - This is the correct answer. The Azure AI Vision service provides developers with capabilities to analyze images and extract visual information. This includes tasks such as image classification, object detection, and optical character recognition (OCR), helping developers to gain insights from images.
Transcribing spoken language into text - This is the functionality of speech recognition services, not the Azure AI Vision service. Speech recognition is focused on converting spoken language into written text.
Translating text between different languages - This is the functionality of Azure Translator service, which is designed for translating text between languages, not related to image analysis.
Performing sentiment analysis on text data - This is the functionality of text analytics services, which are used for tasks like sentiment analysis, key phrase extraction, and language detection, but it is not related to image analysis.
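Below is a sketch of what a call to the service might look like with the azure-ai-vision-imageanalysis Python package; the endpoint, key, and image URL are placeholders, and feature names should be checked against the current SDK.
```python
# Minimal sketch: analyze an image URL with Azure AI Vision and print the
# generated caption and tags. Endpoint, key, and image URL are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.vision.imageanalysis import ImageAnalysisClient
from azure.ai.vision.imageanalysis.models import VisualFeatures

client = ImageAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

result = client.analyze_from_url(
    image_url="https://example.com/storefront.jpg",
    visual_features=[VisualFeatures.CAPTION, VisualFeatures.TAGS],
)

if result.caption is not None:
    print("Caption:", result.caption.text, f"({result.caption.confidence:.2f})")
if result.tags is not None:
    for tag in result.tags.list:
        print("Tag:", tag.name, f"({tag.confidence:.2f})")
```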
Ask Bash
What specific tasks can Azure AI Vision perform under image analysis?
How does optical character recognition (OCR) in Azure AI Vision work?
What are some industries or use cases that benefit most from Azure AI Vision?
Technology that converts spoken language into written text is an example of text generation.
True
False
Answer Description
This statement is False.
Technology that converts spoken language into written text is known as speech recognition. Text generation involves creating new text content, often based on patterns learned from existing data, but does not specifically refer to transcribing spoken words.
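By way of contrast with text generation, a minimal sketch of speech-to-text with the Azure Speech SDK for Python follows; the key, region, and audio file name are placeholders.
```python
# Minimal sketch: transcribe one utterance from a WAV file with the Azure Speech SDK.
# Key, region, and file name are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
audio_config = speechsdk.audio.AudioConfig(filename="meeting_clip.wav")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Transcribed text:", result.text)
else:
    print("Recognition did not succeed:", result.reason)
```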
Ask Bash
What is speech recognition?
How does text generation differ from speech recognition?
What are some real-world applications of speech recognition technology?
An application requires analysis of faces in photographs to retrieve detailed attribute information for each face (for example, head pose and mask presence) so it can tailor the user experience.
Which capability of the Azure AI Face detection service should you use?
Facial Attribute Analysis
Face Similarity Matching
Face Identification
Face Detection
Answer Description
Use the service's facial attribute analysis capability (invoking the Detect API with the returnFaceAttributes parameter). In addition to the face rectangle, the call can return headPose, blur, mask, accessories, qualityForRecognition, occlusion and other attributes for every detected face, enabling per-user customization.
Face Detection without attributes supplies only bounding-box coordinates (and optionally a faceId).
Face Identification compares a detected face to a person group to find who it is.
Face Similarity Matching (Find Similar) returns faces that visually resemble a given face but does not expose individual attributes.
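A hedged sketch of the underlying REST call follows; the endpoint, key, image URL, and the exact attribute list supported by each detection model are placeholders or assumptions to verify against the Face API documentation (and note that some attributes require Limited Access approval).
```python
# Minimal sketch: call the Face Detect REST endpoint and request facial attributes.
# Endpoint, key, and image URL are placeholders; attribute availability depends on
# the detection model and, for some attributes, on Limited Access approval.
import requests

endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"
key = "<your-key>"

params = {
    "detectionModel": "detection_03",
    "returnFaceId": "false",
    "returnFaceAttributes": "headPose,mask,blur,qualityForRecognition",
}
headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
body = {"url": "https://example.com/group-photo.jpg"}

response = requests.post(f"{endpoint}/face/v1.0/detect",
                         params=params, headers=headers, json=body)
response.raise_for_status()

for face in response.json():
    print(face["faceRectangle"], face["faceAttributes"])
```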
Ask Bash
What is the Detect API in Azure's Face service?
What is the difference between Face Detection and Facial Attribute Analysis?
What are some use cases for Facial Attribute Analysis?
A company needs to automatically assign a single label to each image in a large dataset based on the main object present in the image.
Which type of computer vision solution is most appropriate for this task?
Image Classification
Optical Character Recognition (OCR)
Object Detection
Facial Detection
Answer Description
Image Classification - This is the correct answer. Image classification is the most appropriate solution for assigning a single label to each image based on the main object or feature present in the image. It categorizes the entire image into predefined classes (for example, "dog," "car," or "cat") without identifying specific objects' locations within the image.
Object Detection - Object detection is used for identifying and locating multiple objects within an image, often with bounding boxes. While it can classify objects, its primary focus is on locating objects, making it more complex than image classification for this task.
Optical Character Recognition (OCR) - OCR is specifically used for extracting text from images or documents, not for classifying images based on their content.
Facial Detection - Facial detection focuses on detecting human faces within images. It is not used for general image classification based on the main object in the image.
Ask Bash
What is the difference between Image Classification and Object Detection?
When should you use Optical Character Recognition (OCR) over Image Classification?
How does Image Classification handle multiple objects in a single image?
Which of the following is a feature of an image classification solution?
Detecting facial features to analyze emotions
Predicting the content category of an image
Identifying and locating objects within an image
Extracting text from images for text analysis
Answer Description
Predicting the content category of an image - This is the correct answer. Image classification involves assigning a label or category to an image based on its content. For example, an image classification solution might categorize an image as "cat," "dog," or "car" based on the objects it contains.
Identifying and locating objects within an image - This describes object detection, not image classification. Object detection involves both identifying objects and locating them within the image (often with bounding boxes), while classification only assigns a category label.
Extracting text from images for text analysis - This is Optical Character Recognition (OCR), which is focused on extracting text from images, not classifying the entire image into categories.
Detecting facial features to analyze emotions - This is facial recognition or emotion detection, which focuses on detecting and analyzing facial features, often for applications like identifying emotions or individuals, but it is not image classification.
Ask Bash
What is the difference between image classification and object detection?
How does Optical Character Recognition (OCR) differ from image classification?
What does facial recognition analyze differently from image classification?
An organization needs a service that can generate new text based on input prompts, for uses such as content creation or code suggestions.
Which Azure service provides this capability?
Azure Cognitive Search
Azure Translator Service
Azure OpenAI Service
Azure Text Analytics
Answer Description
Azure OpenAI Service offers advanced language models that can generate human-like text based on input prompts, making it suitable for content creation and code suggestions. The other services listed do not provide text generation capabilities.
Azure Translator Service is designed for translating text between languages.
Azure Cognitive Search enables indexing and querying of content.
Azure Text Analytics provides features like sentiment analysis and key phrase extraction but does not generate new text.
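A minimal sketch of calling a deployed model through the openai Python package's AzureOpenAI client follows; the endpoint, key, API version, and deployment name are placeholders to replace with your own values.
```python
# Minimal sketch: generate text from a prompt with an Azure OpenAI deployment.
# Endpoint, key, API version, and deployment name are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-key>",
    api_version="2024-06-01",  # check the currently supported API versions
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # the name you gave the model deployment
    messages=[
        {"role": "system", "content": "You write short, upbeat marketing copy."},
        {"role": "user", "content": "Suggest three taglines for a reusable water bottle."},
    ],
)
print(response.choices[0].message.content)
```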
Ask Bash
What is the Azure OpenAI Service, and how does it work?
What is the difference between Azure OpenAI Service and Azure Text Analytics?
How does Azure OpenAI Service ensure security and compliance for its users?
Smashing!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.