Microsoft Azure AI Fundamentals Practice Test (AI-900)
Use the form below to configure your Microsoft Azure AI Fundamentals Practice Test (AI-900). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft Azure AI Fundamentals AI-900 Information
The Microsoft Certified: Azure AI Fundamentals (AI-900) exam is an entry-level certification designed for individuals seeking foundational knowledge of artificial intelligence (AI) and machine learning (ML) concepts and their applications within the Microsoft Azure platform. The AI-900 exam covers essential AI workloads such as computer vision, natural language processing, and generative AI, and it emphasizes responsible AI principles, including fairness, transparency, and accountability. While no deep technical background is required, a basic familiarity with technology and Azure's services can be helpful, making this certification accessible to a wide audience, from business decision-makers to early-career technologists.
The exam covers several major domains, starting with AI workloads and considerations, which introduces candidates to various types of AI solutions and ethical principles. Next, it delves into machine learning fundamentals, explaining core concepts like data features, model training, and types of machine learning such as classification and clustering. The exam also emphasizes specific Azure tools for implementing AI solutions, such as Azure Machine Learning Studio for visual model-building, the Computer Vision service for image analysis, and Azure Bot Service for conversational AI. Additionally, candidates learn how natural language processing (NLP) tasks, including sentiment analysis, translation, and speech recognition, are managed within Azure's language and speech services.
Achieving the AI-900 certification demonstrates a solid understanding of AI and ML basics and prepares candidates for more advanced Azure certifications in data science or AI engineering. It's an excellent credential for those exploring how AI solutions can be effectively used within the Azure ecosystem, whether to aid business decision-making or to set a foundation for future roles in AI and data analytics.

Free Microsoft Azure AI Fundamentals AI-900 Practice Test
- 20 Questions
- Unlimited time
- Describe Artificial Intelligence Workloads and Considerations
- Describe Fundamental Principles of Machine Learning on Azure
- Describe Features of Computer Vision Workloads on Azure
- Describe Features of Natural Language Processing (NLP) Workloads on Azure
- Describe Features of Generative AI Workloads on Azure
An application requires analysis of faces in photographs to retrieve detailed attribute information for each face (for example head pose and mask presence) so it can tailor the user experience.
Which capability of the Azure AI Face detection service should you use?
Face Similarity Matching
Facial Attribute Analysis
Face Identification
Face Detection
Answer Description
Use the service's facial attribute analysis capability (invoking the Detect API with the returnFaceAttributes parameter). In addition to the face rectangle, the call can return headPose, blur, mask, accessories, qualityForRecognition, occlusion and other attributes for every detected face, enabling per-user customization.
Face Detection without attributes supplies only bounding-box coordinates (and optionally a faceId).
Face Identification compares a detected face to a person group to find who it is.
Face Similarity Matching (Find Similar) returns faces that visually resemble a given face but does not expose individual attributes.
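For illustration, here is a minimal Python sketch of calling the Face Detect REST API with the returnFaceAttributes parameter. The resource endpoint, key, and image URL are placeholders, and the exact attributes available depend on the detection and recognition models you select.

```python
import requests

ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-key>"                                                      # placeholder

response = requests.post(
    f"{ENDPOINT}/face/v1.0/detect",
    params={
        # attributes supported by detection_03; availability varies by detection model
        "returnFaceAttributes": "headPose,mask,blur,qualityForRecognition",
        "detectionModel": "detection_03",
        "recognitionModel": "recognition_04",
    },
    headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
    json={"url": "https://example.com/photo.jpg"},                      # placeholder image
)
response.raise_for_status()

for face in response.json():
    attrs = face["faceAttributes"]
    print(face["faceRectangle"], attrs["headPose"], attrs["mask"])
```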
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the Detect API in Azure's Face service?
What is the difference between Face Detection and Facial Attribute Analysis?
What are some use cases for Facial Attribute Analysis?
A financial company wants to automatically extract names of organizations, dates, and monetary amounts from large volumes of unstructured text documents.
Which NLP technique should they use to accomplish this?
Sentiment Analysis
Key Phrase Extraction
Entity Recognition
Translation
Answer Description
Entity recognition is the appropriate NLP technique for this task. It involves identifying and classifying key elements in text into predefined categories such as names of persons, organizations, locations, dates and monetary values. This allows the company to extract structured data from unstructured text.
Key Phrase Extraction identifies important terms and phrases but doesn't categorize them into specific entity types.
Sentiment Analysis determines the emotional tone behind a body of text.
Translation converts text from one language to another.
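As a rough sketch (the endpoint and key are placeholders), named entity recognition with the azure-ai-textanalytics client might look like this:

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

# placeholder endpoint and key for an Azure AI Language resource
client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

documents = ["Contoso Ltd. invoiced Fabrikam, Inc. $12,500 on 3 March 2024."]
result = client.recognize_entities(documents)[0]

# each entity carries a category such as Organization, DateTime, or Quantity
for entity in result.entities:
    print(f"{entity.text:<20} {entity.category:<15} {entity.confidence_score:.2f}")
```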
Ask Bash
What is Entity Recognition in NLP?
How is Entity Recognition different from Key Phrase Extraction?
How can Entity Recognition benefit businesses?
You are building an application that enables users to produce images by providing descriptive text inputs.
Which feature of Azure OpenAI Service would you utilize to implement this functionality?
Implement image analysis to extract information from images.
Use code generation features to create image-rendering scripts.
Apply language translation capabilities to interpret user inputs.
Leverage the service's ability to generate images from text descriptions.
Answer Description
Leverage the service's ability to generate images from text descriptions - This is the correct answer. Azure OpenAI Service provides models like DALL·E, which can generate images based on descriptive text inputs.
Use code generation features to create image-rendering scripts - This feature focuses on generating code for specific tasks; it does not produce images from text prompts.
Apply language translation capabilities to interpret user inputs - Language translation converts text from one language to another; it does not generate images.
Implement image analysis to extract information from images - Image analysis focuses on understanding and describing existing images rather than creating new ones.
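A minimal sketch using the openai Python package against an Azure OpenAI resource; the endpoint, key, API version, and deployment name are placeholders/assumptions.

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com/",  # placeholder
    api_key="<your-key>",                                               # placeholder
    api_version="2024-02-01",                                           # assumed API version
)

result = client.images.generate(
    model="dall-e-3",  # name of your DALL-E deployment (assumption)
    prompt="A watercolor painting of a lighthouse at sunrise",
    n=1,
)
print(result.data[0].url)  # URL of the generated image
```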
Ask Bash
What is DALL·E and how does it generate images from text?
How does Azure OpenAI Service differ from Microsoft Cognitive Services for image-related tasks?
What are some practical use cases for DALL·E in applications?
Which of the following is an example of a natural language processing workload?
Recognizing objects in images
Translating data into visual charts
Sentiment analysis of customer reviews
Predicting equipment failures using sensor data
Answer Description
Sentiment analysis of customer reviews involves processing and understanding human language, which is a key aspect of natural language processing (NLP).
Recognizing objects in images is a computer vision task.
Predicting equipment failures using sensor data is related to predictive analytics.
Translating data into visual charts is data visualization.
Ask Bash
What is natural language processing (NLP)?
How does sentiment analysis work in NLP?
How is NLP different from computer vision?
An online retailer wants to understand patterns in customer behavior based on purchase history, browsing behavior, and demographic data to better tailor its marketing strategies.
Which machine learning technique is most appropriate for this task?
Clustering
Regression
Time Series Analysis
Classification
Answer Description
Clustering is the most appropriate technique for discovering inherent groupings or patterns in data without predefined labels. In this scenario, the retailer aims to segment customers based on similarities to tailor marketing strategies for each group.
Classification involves assigning data points to predefined categories using labeled data, which is not suitable here since the categories are not known beforehand.
Regression is used for predicting continuous numerical values, not for identifying patterns or groupings.
Time Series Analysis focuses on data points collected over time, which does not align with the goal of understanding customer behavior patterns in this context.
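To make the distinction concrete, here is a small, illustrative k-means example with scikit-learn using made-up customer features (annual spend, monthly visits, age); in practice the retailer would run this kind of clustering in Azure Machine Learning on its own data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# made-up customer features: [annual spend, monthly site visits, age]
X = np.array([
    [5200, 42, 34],
    [4800, 38, 29],
    [310,   5, 61],
    [450,   7, 58],
])

X_scaled = StandardScaler().fit_transform(X)   # put features on a comparable scale
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_scaled)

print(kmeans.labels_)   # cluster assignment per customer, e.g. [0 0 1 1]
```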
Ask Bash
What is clustering in machine learning?
How is clustering different from classification?
What are some real-world applications of clustering?
A global engineering firm needs to convert complex technical documents into several different languages while maintaining the original document's structure, including diagrams and formatting.
Which Azure service should they use to accomplish this task?
Azure Cognitive Services Text Analytics
Azure Cognitive Services Form Recognizer
Azure Cognitive Services Translator
Azure Cognitive Services Language Understanding (LUIS)
Answer Description
Azure Cognitive Services Translator provides a document translation feature that enables the translation of entire documents while preserving their original formatting and structure, including diagrams and other non-text elements. This is suitable for translating complex documents without losing their layout.
Azure Text Analytics is designed for text analysis tasks like sentiment analysis and key phrase extraction, not for translation.
Azure Form Recognizer extracts data from forms and documents but doesn't perform translation.
Azure Language Understanding (LUIS) helps in understanding user intents and utterances but does not offer translation services.
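A rough sketch of batch document translation with the azure-ai-translation-document SDK; the endpoint, key, and the two blob-container SAS URLs are placeholders.

```python
from azure.ai.translation.document import DocumentTranslationClient
from azure.core.credentials import AzureKeyCredential

client = DocumentTranslationClient(
    endpoint="https://<your-translator-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                                 # placeholder
)

# source and target are Azure Blob Storage containers addressed by SAS URL (placeholders)
poller = client.begin_translation(
    "<source-container-sas-url>",
    "<target-container-sas-url>",
    "de",  # target language code
)

for doc in poller.result():
    print(doc.status, doc.translated_document_url)
```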
Ask Bash
What is Azure Cognitive Services Translator?
How does Azure Cognitive Services Translator ensure document formatting is preserved?
In what scenarios would Azure Text Analytics not be a good fit compared to Azure Translator?
An e-commerce company wants to analyze images to determine the number and positions of various products shown for inventory management.
Which computer vision capability would best meet this requirement?
Object Detection to identify and locate products in images
Image Classification to assign labels to entire images
Optical Character Recognition (OCR) to extract text from images
Facial Recognition to detect and identify human faces
Answer Description
Object Detection is the appropriate capability because it can identify and locate multiple instances of objects within an image, providing both the class of each object and its position through bounding boxes. This enables the company to count and track products in images effectively.
Image Classification assigns a single label to an entire image without providing location data.
Optical Character Recognition (OCR) extracts text from images but doesn't detect objects.
Facial Recognition is specialized for detecting and identifying human faces, not general products.
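As an illustrative sketch (the endpoint, key, and image URL are placeholders), object detection with the azure-ai-vision-imageanalysis client returns a bounding box per detected object:

```python
from azure.ai.vision.imageanalysis import ImageAnalysisClient
from azure.ai.vision.imageanalysis.models import VisualFeatures
from azure.core.credentials import AzureKeyCredential

client = ImageAnalysisClient(
    endpoint="https://<your-vision-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                             # placeholder
)

result = client.analyze_from_url(
    image_url="https://example.com/shelf.jpg",     # placeholder image of store shelves
    visual_features=[VisualFeatures.OBJECTS],
)

# each detected object has a tag name, a confidence score, and a bounding box
for obj in result.objects.list:
    box = obj.bounding_box
    print(obj.tags[0].name, obj.tags[0].confidence, box.x, box.y, box.width, box.height)
```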
Ask Bash
What is the difference between Object Detection and Image Classification?
How does Object Detection work in computer vision?
Can Optical Character Recognition (OCR) be combined with Object Detection?
A developer is tasked with building an application that transforms text content from one language into multiple other languages while preserving context and meaning.
Which feature of Azure's Natural Language Processing (NLP) services should they use?
Use Azure Speech service for speech recognition
Use Azure Translator
Use Azure Text Analytics for language detection
Use Azure Text Analytics for key phrase extraction
Answer Description
Azure Translator is designed specifically for translating text between languages while maintaining the original context and meaning. It utilizes advanced neural machine translation to handle idiomatic expressions and nuances in language.
The Azure Text Analytics service's key phrase extraction identifies important terms in text but does not translate text.
The Azure Speech service for speech recognition transcribes spoken words into text but does not translate text.
Azure Text Analytics for language detection identifies the language of a given text but does not perform translation.
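For reference, a minimal sketch of a Translator Text API v3.0 call over REST; the key and region are placeholders.

```python
import uuid
import requests

KEY = "<your-key>"          # placeholder
REGION = "<your-region>"    # placeholder: the region of your Translator resource

response = requests.post(
    "https://api.cognitive.microsofttranslator.com/translate",
    params={"api-version": "3.0", "from": "en", "to": ["fr", "de", "ja"]},
    headers={
        "Ocp-Apim-Subscription-Key": KEY,
        "Ocp-Apim-Subscription-Region": REGION,
        "Content-Type": "application/json",
        "X-ClientTraceId": str(uuid.uuid4()),
    },
    json=[{"text": "The shipment leaves the warehouse on Friday."}],
)
response.raise_for_status()

for item in response.json():
    for translation in item["translations"]:
        print(translation["to"], translation["text"])
```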
Ask Bash
How does Azure Translator maintain context and meaning during translation?
What is the difference between Azure Translator and the Azure Text Analytics service?
Can Azure Speech service also be used to process translations?
A data science team needs to manage and keep track of multiple versions of their trained models within Azure Machine Learning to facilitate deployment and collaboration.
Which feature of Azure Machine Learning should they use?
Experiment tracking
Data labeling service
Pipeline orchestration
Model registry
Answer Description
The Model registry in Azure Machine Learning provides a centralized repository to store and manage multiple versions of machine learning models. It allows teams to track model versions, annotate them with metadata, and retrieve specific versions for deployment, ensuring consistent and reproducible results.
Experiment tracking is used to record and analyze training runs but does not handle model versioning.
Data labeling service assists with annotating data for training models, not managing models themselves.
Pipeline orchestration automates machine learning workflows but does not offer model version management.
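A rough sketch of registering a model version with the Azure Machine Learning Python SDK v2; the subscription, resource group, workspace, and model path are placeholders.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Model
from azure.ai.ml.constants import AssetTypes
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",        # placeholder
    resource_group_name="<resource-group>",     # placeholder
    workspace_name="<workspace>",               # placeholder
)

model = Model(
    path="./outputs/model.pkl",                 # placeholder path to the trained model
    name="churn-classifier",
    type=AssetTypes.CUSTOM_MODEL,
    description="Gradient-boosted churn model",
)

registered = ml_client.models.create_or_update(model)   # creates a new version in the registry
print(registered.name, registered.version)
```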
Ask Bash
What is the Azure Machine Learning Model Registry?
How does experiment tracking differ from model registry?
Can the Model Registry handle deployment tasks?
An organization needs to extract data from scanned forms and invoices automatically. Which AI workload is most suitable for this task?
Computer Vision workloads
Knowledge Mining workloads
Document Intelligence workloads
Natural Language Processing (NLP) workloads
Answer Description
Document intelligence workloads are designed to extract structured data from scanned forms and invoices, automating the data extraction process. This workload leverages AI to interpret and process documents, identifying key information within them.
Natural Language Processing (NLP) workloads focus on understanding and generating human language, which is more suited for text analysis tasks.
Computer Vision workloads deal with recognizing and analyzing visual content in images and videos but do not specifically address extracting structured data from documents.
Knowledge Mining workloads are used to extract insights from large volumes of unstructured data but are not specialized for processing forms and invoices.
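As a sketch (the endpoint, key, and file name are placeholders), invoice extraction with the prebuilt invoice model via the azure-ai-formrecognizer client might look like this:

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient(
    endpoint="https://<your-doc-intelligence-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                                       # placeholder
)

with open("invoice.pdf", "rb") as f:            # placeholder scanned invoice
    poller = client.begin_analyze_document("prebuilt-invoice", document=f)
result = poller.result()

for invoice in result.documents:
    vendor = invoice.fields.get("VendorName")
    total = invoice.fields.get("InvoiceTotal")
    if vendor:
        print("Vendor:", vendor.value)
    if total:
        print("Total:", total.value)
```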
Ask Bash
What exactly is a Document Intelligence workload?
How does Document Intelligence differ from Natural Language Processing?
Can Document Intelligence workloads process handwritten text?
A machine learning model that outputs a single label summarizing the content of an image is performing which type of computer vision task?
Semantic Segmentation
Image Classification
Optical Character Recognition (OCR)
Object Detection
Answer Description
Image Classification assigns a single label to an entire image based on its overall content. It does not identify the presence or location of multiple objects within the image.
Object Detection, on the other hand, involves identifying and locating multiple objects within an image.
Optical Character Recognition (OCR) extracts text from images.
Semantic Segmentation assigns a class label to each pixel in the image, effectively delineating object boundaries.
Ask Bash
What are the key differences between Image Classification and Object Detection?
How does Optical Character Recognition (OCR) work, and what is it used for?
When would Semantic Segmentation be more useful than Image Classification?
Which of the following best describes the primary function of an image classification solution?
Analyzes facial features to identify individuals
Assigns one or more labels to an entire image based on its content
Extracts textual information from images
Detects and localizes individual objects within an image
Answer Description
Assigns one or more labels to an entire image based on its content - This is the correct answer. The primary function of an image classification solution is to assign one or more labels to an entire image based on its overall content. For example, it can classify an image as "cat," "dog," or "car" based on what is depicted in the image, without identifying specific object locations within the image.
Detects and localizes individual objects within an image - This describes object detection, which not only identifies objects but also locates them within the image using bounding boxes, whereas image classification only assigns labels to the whole image.
Extracts textual information from images - This describes Optical Character Recognition (OCR), which focuses on extracting and recognizing text from images, not classifying the entire image.
Analyzes facial features to identify individuals - This describes facial recognition, which focuses on identifying and analyzing faces within an image, not general image classification.
Ask Bash
What is the difference between image classification and object detection?
How is Optical Character Recognition (OCR) different from image classification?
Can image classification solutions differentiate between similar classes, like different dog breeds?
A global company wants to enable their employees to understand spoken content in meetings held in different languages, providing real-time output in their native language.
Which Azure AI Speech service feature should they use to achieve this goal?
Speech Translation
Speech Recognition
Speech Synthesis
Text-to-Speech
Answer Description
Speech Translation - This is the correct answer. Speech translation is the Azure AI Speech service feature that enables real-time translation of spoken content into different languages. This allows employees to understand spoken content in meetings held in various languages, providing real-time translation into their native language.
Speech Recognition - Speech recognition converts spoken language into text but does not translate it into another language. It would help with transcribing speech, but it wouldn't provide the real-time output in a different language.
Speech Synthesis - Speech synthesis, also known as Text-to-Speech (TTS), converts written text into audible speech, but it doesn't translate spoken content into another language.
Text-to-Speech (TTS) - Text-to-Speech (TTS) converts written text into spoken words but doesn't involve translating spoken language. It is not the correct feature for translating speech in real time during meetings.
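A minimal sketch with the azure-cognitiveservices-speech SDK that recognizes English speech from the default microphone and translates it to French; the key and region are placeholders.

```python
import azure.cognitiveservices.speech as speechsdk

translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription="<your-key>",   # placeholder
    region="<your-region>",      # placeholder
)
translation_config.speech_recognition_language = "en-US"   # language being spoken
translation_config.add_target_language("fr")               # language to translate into

recognizer = speechsdk.translation.TranslationRecognizer(
    translation_config=translation_config                  # uses the default microphone
)

result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.TranslatedSpeech:
    print("Recognized:", result.text)
    print("French:", result.translations["fr"])
```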
Ask Bash
What is the difference between Speech Recognition and Speech Translation?
How does Speech Translation work in real-time scenarios?
What are the key use cases for Azure AI Speech Translation?
A company wants to analyze images to determine the presence and location of multiple items within each image.
Which type of computer vision solution should they use?
Optical Character Recognition (OCR)
Image Classification
Facial Recognition
Object Detection
Answer Description
Object Detection identifies multiple items within an image and provides their locations by returning bounding boxes around each detected item.
Image Classification assigns a single label to the entire image without providing location information.
Optical Character Recognition (OCR) extracts text from images.
Facial Recognition is used to identify individuals based on facial features.
Ask Bash
What is Object Detection, and how does it differ from Image Classification?
How are bounding boxes used in Object Detection?
What are common use cases for Object Detection in real-world applications?
Which of the following best explains why an AI solution requires regular updates and maintenance after deployment to ensure its reliability and safety?
To significantly reduce the initial cost of developing the AI model.
To fulfill a one-time deployment checklist required by software vendors.
To change the underlying AI architecture from a neural network to a decision tree.
To address potential data drift and maintain the model's performance.
Answer Description
To address potential data drift and maintain the model's performance - This is the correct answer. The data and environment in which a model operates can change over time, a phenomenon known as 'data drift' or 'model drift'. These changes can degrade the model's performance and accuracy. Continuous monitoring and updating help address these shifts, fix emerging issues, and maintain the system's effectiveness and safety. Reducing development costs or changing the fundamental architecture are not the primary goals of post-deployment maintenance.
Ask Bash
What is data drift in AI?
How is model performance monitored after deployment?
Why is AI safety important and how does maintenance contribute to it?
You are deploying a text generation AI model that produces job descriptions.
What responsible AI consideration should you address to ensure the generated content treats all candidates equitably?
Evaluate and adjust the training data to remove discriminatory patterns
Increase the model's vocabulary to include industry-specific terms
Optimize the model's performance to generate descriptions faster
Reduce the computational resources required for deployment
Answer Description
Evaluate and adjust the training data to remove discriminatory patterns - This is the correct answer. Ensuring that the training data is free from biases and discriminatory patterns is essential for responsible AI. This helps ensure that the job descriptions generated by the model do not inadvertently favor one group over another, promoting fairness and equity.
Increase the model's vocabulary to include industry-specific terms - While increasing vocabulary can improve the model's relevance to specific industries, it does not directly address equity concerns in the generated content.
Optimize the model's performance to generate descriptions faster - Performance optimization for speed may improve efficiency but does not directly relate to ensuring equitable treatment of candidates in the generated job descriptions.
Reduce the computational resources required for deployment - Reducing computational resources can be important for cost or environmental reasons but does not directly address fairness or equity in the AI-generated job descriptions.
Ask Bash
What does 'evaluate and adjust the training data' mean in AI development?
How can training data lead to discriminatory patterns in AI models?
What are the steps to remove bias from training data for responsible AI?
A developer is building an application that needs to analyze images to identify objects and generate descriptive tags, using pre-trained models without the need for custom model training. The application does not require facial recognition functionalities.
Which Azure service should the developer use?
Azure AI Form Recognizer
Azure AI Vision
Azure AI Face Detection
Azure AI Custom Vision
Answer Description
Azure AI Vision provides pre-built features for image analysis, including object detection and generation of descriptive tags, without requiring custom model training. This makes it suitable for developers who want to quickly add image analysis capabilities to their applications.
Azure AI Face detection specializes in facial recognition and analysis, which is not needed in this scenario.
Azure AI Custom Vision allows developers to build and train custom image classification and object detection models, which is unnecessary since pre-trained models suffice.
Azure AI Form Recognizer focuses on extracting text and structure from documents, which is unrelated to analyzing images for objects and tags.
Ask Bash
What features does Azure AI Vision provide?
When would you use Azure AI Custom Vision instead of Azure AI Vision?
How does Azure AI Vision differ from Azure AI Form Recognizer?
Which feature in Azure Machine Learning helps you find the optimal model for your data by systematically testing various algorithms and hyperparameter combinations?
Azure Machine Learning Designer
Azure Machine Learning Interpretability
Automated Machine Learning
Azure Notebooks
Answer Description
Automated Machine Learning simplifies the model development process by systematically testing multiple algorithms and hyperparameter settings to identify the best-performing model for a given dataset. It automates the time-consuming process of model selection and tuning, allowing users to focus on other tasks.
Azure Machine Learning Designer provides a visual interface to build machine learning pipelines but does not automate the selection and tuning of models.
Azure Notebooks is a cloud-based Jupyter Notebook service for writing and running code, without built-in capabilities for automated model selection.
Azure Machine Learning Interpretability offers tools to explain and interpret machine learning models but does not assist in finding the optimal model through systematic testing.
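A rough sketch of submitting an Automated ML classification job with the Azure Machine Learning Python SDK v2; the workspace details, compute name, and registered training-data asset are placeholders/assumptions.

```python
from azure.ai.ml import MLClient, Input, automl
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",        # placeholder
    resource_group_name="<resource-group>",     # placeholder
    workspace_name="<workspace>",               # placeholder
)

classification_job = automl.classification(
    compute="cpu-cluster",                                               # existing compute target (assumption)
    experiment_name="customer-churn-automl",
    training_data=Input(type="mltable", path="azureml:churn-train:1"),   # registered MLTable asset (assumption)
    target_column_name="Churned",
    primary_metric="accuracy",
)
classification_job.set_limits(timeout_minutes=60, max_trials=20)   # cap the algorithm/hyperparameter sweep

submitted = ml_client.jobs.create_or_update(classification_job)
print(submitted.studio_url)                                        # follow the trials in the studio UI
```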
Ask Bash
What are hyperparameters in machine learning?
How does Automated Machine Learning work in Azure?
How is Azure Machine Learning Designer different from Automated Machine Learning?
A company is developing a system that can create original artwork in the style of famous painters.
This is an example of which type of workload?
Knowledge Mining workloads
Computer Vision workloads
Generative AI workloads
Content Moderation workloads
Answer Description
This scenario represents a Generative AI workload because the system is generating new original artwork that imitates the style of existing artists. Generative AI focuses on producing new data that shares characteristics with the training data.
Computer Vision workloads involve interpreting and analyzing visual information but not generating new images.
Knowledge Mining workloads are about extracting insights from existing data, not creating new content.
Content Moderation workloads deal with identifying and filtering inappropriate content. Therefore, the most suitable workload in this case is Generative AI.
Ask Bash
What is Generative AI?
How does Generative AI differ from Computer Vision?
What are some common applications of Generative AI?
An AI development team is training a machine learning model using customer data to enhance product recommendations.
What is the most effective method to safeguard customer privacy during the training process?
Use secure servers for computation
Limit data access to authorized personnel
Encrypt the dataset during storage
Remove personally identifiable information from the data
Answer Description
Removing personally identifiable information (PII) from the data, also known as data anonymization, is the most effective way to protect customer privacy during model training.
While encrypting data and limiting access are important security measures, they do not prevent the AI model from potentially learning sensitive information. Using secure servers protects data from external threats but does not address privacy concerns related to data handling during training.
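For context, one practical way to strip PII before training is the PII detection feature of the Azure AI Language service, sketched below with a placeholder endpoint and key.

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                               # placeholder
)

documents = [
    "Jane Doe (jane.doe@example.com) bought a blender on 2024-05-01 using card 4111-1111-1111-1111."
]
result = client.recognize_pii_entities(documents)[0]

print(result.redacted_text)                 # the same text with PII masked out
for entity in result.entities:
    print(entity.text, entity.category)     # e.g. Person, Email, CreditCardNumber
```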
Ask Bash
What is Personally Identifiable Information (PII)?
How does data anonymization work in machine learning?
Why is encrypting the dataset not enough to safeguard privacy during training?
Gnarly!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.