Microsoft Azure AI Fundamentals Practice Test (AI-900)
Use the form below to configure your Microsoft Azure AI Fundamentals Practice Test (AI-900). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft Azure AI Fundamentals AI-900 Information
The Microsoft Certified: Azure AI Fundamentals (AI-900) exam is an entry-level certification designed for individuals seeking foundational knowledge of artificial intelligence (AI) and machine learning (ML) concepts and their applications within the Microsoft Azure platform. The AI-900 exam covers essential AI workloads such as anomaly detection, computer vision, and natural language processing, and it emphasizes responsible AI principles, including fairness, transparency, and accountability. While no deep technical background is required, a basic familiarity with technology and Azure’s services can be helpful, making this certification accessible to a wide audience, from business decision-makers to early-career technologists.
The exam covers several major domains, starting with AI workloads and considerations, which introduces candidates to various types of AI solutions and ethical principles. Next, it delves into machine learning fundamentals, explaining core concepts like data features, model training, and types of machine learning such as classification and clustering. The exam also emphasizes specific Azure tools for implementing AI solutions, such as Azure Machine Learning Studio for visual model-building, the Computer Vision service for image analysis, and Azure Bot Service for conversational AI. Additionally, candidates learn how natural language processing (NLP) tasks, including sentiment analysis, translation, and speech recognition, are managed within Azure’s language and speech services.
Achieving the AI-900 certification demonstrates a solid understanding of AI and ML basics and prepares candidates for more advanced Azure certifications in data science or AI engineering. It’s an excellent credential for those exploring how AI solutions can be effectively used within the Azure ecosystem, whether to aid business decision-making or to set a foundation for future roles in AI and data analytics.

Free Microsoft Azure AI Fundamentals AI-900 Practice Test
- 20 Questions
- Unlimited
- Describe Artificial Intelligence Workloads and Considerations
- Describe Fundamental Principles of Machine Learning on Azure
- Describe Features of Computer Vision Workloads on Azure
- Describe Features of Natural Language Processing (NLP) Workloads on Azure
- Describe Features of Generative AI Workloads on Azure
Free Preview
This test is a free preview, no account required.
An online platform wants to suggest content to users based on their individual preferences and browsing history to enhance user engagement.
Which Azure AI service is BEST suited for implementing this functionality?
Azure Machine Learning
Azure Personalizer
Azure Cognitive Search
Azure Content Moderator
Answer Description
Azure Personalizer is designed to provide personalized experiences by learning from user behavior and preferences, making it the best choice for suggesting content based on individual interactions.
Azure Content Moderator is used for detecting and filtering inappropriate content.
Azure Cognitive Search provides indexing and search capabilities.
Azure Machine Learning is a platform for building and deploying custom machine learning models but does not offer out-of-the-box personalization features.
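Azure Personalizer's documented approach is contextual-bandit reinforcement learning: rank an action, observe a reward from user behavior, and update. The sketch below is a toy epsilon-greedy recommender in plain Python (no Azure SDK, and context is omitted for brevity) that only illustrates this rank-and-reward loop; the class and method names are invented for the example.

```python
import random

class EpsilonGreedyRecommender:
    """Toy illustration of the rank-and-reward loop behind a
    personalization service. Azure Personalizer uses contextual-bandit
    reinforcement learning; this sketch ignores context for brevity."""

    def __init__(self, actions, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in actions}
        self.values = {a: 0.0 for a in actions}

    def rank(self):
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < self.epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def reward(self, action, score):
        # Incremental mean update, analogous to a Reward API call.
        self.counts[action] += 1
        n = self.counts[action]
        self.values[action] += (score - self.values[action]) / n

rec = EpsilonGreedyRecommender(["article", "video", "podcast"], epsilon=0.1)
rec.reward("video", 1.0)    # user clicked the suggested video
rec.reward("article", 0.0)  # user skipped the article
```

Over many such reward signals, the recommender increasingly ranks the content each user actually engages with, which is the behavior the question's scenario asks for.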
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Azure Personalizer and how does it work?
How is Azure Personalizer different from Azure Machine Learning?
Can Azure Personalizer handle real-time recommendations?
Your company is developing an artificial intelligence application that processes personal data from customers in multiple countries, including those in the European Union.
Which approach is the BEST to ensure compliance with privacy regulations?
Anonymize all customer data before processing it.
Implement strong encryption methods for storing and transmitting all customer data.
Obtain explicit consent from users and adhere to relevant data protection laws like GDPR.
Restrict data collection to non-sensitive information to avoid privacy issues.
Answer Description
Obtaining explicit consent from users and adhering to relevant data protection laws like GDPR is the best approach to ensure compliance when processing personal data. This involves informing users about how their data will be used and ensuring all data handling practices meet legal requirements. While encryption, data anonymization, and restricting data collection are important measures, they alone may not fulfill all legal obligations under privacy laws.
Ask Bash
What is GDPR, and why is it important for data protection?
What are the key steps to ensure compliance with GDPR when processing customer data?
How does anonymization differ from pseudonymization, and why isn't anonymization enough for GDPR compliance?
A company wants to automate the extraction of structured data from scanned documents such as invoices and receipts.
Which Azure AI service is BEST suited for this purpose?
Azure AI Document Intelligence
Azure Computer Vision OCR
Azure AI Search
Azure AI Language
Answer Description
Azure AI Document Intelligence is specifically designed to extract structured data from scanned documents like invoices and receipts. It uses machine learning models to identify and extract key-value pairs, text, and tables, transforming unstructured documents into structured data.
Azure AI Search is used for indexing and searching over large sets of data but is not the primary service for extracting structured data from document layouts.
Azure AI Language processes unstructured text to detect sentiment, key phrases, and entities but doesn't work directly with the layout and structure of scanned documents.
Azure Computer Vision OCR extracts text from images but doesn't inherently structure the data or extract key-value pairs as needed for invoices and receipts.
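To make the "structured output" distinction concrete: Document Intelligence uses trained layout models, but a crude stand-in is pattern matching over already-OCR'd invoice text. The sketch below (plain Python, no Azure SDK; the field names and patterns are invented for the example) shows the kind of key-value output such a service produces.

```python
import re

def extract_fields(ocr_text):
    """Naive key-value extraction from OCR'd invoice text.
    Azure AI Document Intelligence does this with trained layout
    models; these regexes only illustrate the structured output."""
    patterns = {
        "invoice_number": r"Invoice\s*#?:?\s*(\S+)",
        "date": r"Date:?\s*([\d/.-]+)",
        "total": r"Total:?\s*\$?([\d,]+\.\d{2})",
    }
    fields = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, ocr_text, re.IGNORECASE)
        if match:
            fields[name] = match.group(1)
    return fields

sample = "Invoice #: INV-1042\nDate: 2024-03-15\nTotal: $1,299.00"
print(extract_fields(sample))
```

The hand-written patterns break as soon as the layout changes, which is exactly why a trained document-intelligence model beats raw OCR plus regexes for invoices and receipts.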
Ask Bash
What is Azure AI Document Intelligence?
How is Azure AI Document Intelligence different from Azure Computer Vision OCR?
What types of documents can Azure AI Document Intelligence handle?
A security company wants to develop a system that can automatically detect and alert on suspicious activities in video surveillance footage.
Which workload is most appropriate for building this solution?
Generative AI
Natural Language Processing
Computer Vision
Knowledge Mining
Answer Description
Computer Vision focuses on processing and interpreting visual information from images or videos. It is the most suitable choice for analyzing video surveillance to detect suspicious activities.
Natural Language Processing deals with understanding and generating human language, which is not applicable to visual data.
Knowledge Mining involves extracting insights from large volumes of structured and unstructured data, typically text-based.
Generative AI is concerned with creating new content rather than analyzing existing footage.
Ask Bash
What is Computer Vision?
How does Computer Vision detect suspicious activities in video footage?
Why is Natural Language Processing (NLP) not suitable for this task?
Which action best demonstrates that fairness considerations have been addressed when developing an AI-powered recommendation system?
Evaluate model performance across demographic groups and adjust it to reduce observed disparities.
Apply the same predictive model to every user and ignore demographic information.
Delete all sensitive attributes from the training data so the model is unaware of protected characteristics.
Optimize the model solely for the highest overall accuracy, even if error rates differ across groups.
Answer Description
Evaluating model performance across demographic groups and adjusting it to reduce observed disparities shows that the team is actively measuring and mitigating bias. Fairness requires equitable outcomes, not merely equal treatment. Ignoring demographics, deleting sensitive attributes, or focusing only on overall accuracy can leave hidden biases unaddressed.
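The per-group evaluation described above can be sketched in a few lines: compute an error rate for each demographic group and compare them. This plain-Python illustration (dedicated toolkits such as Fairlearn do this more rigorously) uses made-up labels and groups.

```python
def group_error_rates(y_true, y_pred, groups):
    """Error rate per demographic group - the fairness check described
    above. A large gap between groups signals a disparity to mitigate."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        errors = sum(y_true[i] != y_pred[i] for i in idx)
        rates[g] = errors / len(idx)
    return rates

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
rates = group_error_rates(y_true, y_pred, groups)
disparity = max(rates.values()) - min(rates.values())
```

Here group B's error rate is double group A's, so optimizing overall accuracy alone would hide a disparity that per-group measurement exposes.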
Ask Bash
Why is it important to evaluate model performance across demographic groups?
What are examples of observed disparities in AI models?
How does deleting sensitive attributes from training data fail to eliminate bias?
A company wants to automatically analyze customer reviews to determine sentiments and extract key topics discussed.
Which AI workload would be most suitable for this task?
Natural Language Processing (NLP)
Computer Vision
Knowledge Mining
Content Moderation
Answer Description
Natural Language Processing (NLP) is the AI workload that enables computers to understand, interpret, and generate human language. It is used to analyze textual data like customer reviews to determine sentiments and extract key topics.
Computer Vision is used for image and video analysis.
Knowledge Mining involves extracting structured information from large datasets but doesn't specialize in sentiment analysis of text.
Content Moderation focuses on detecting and filtering inappropriate content, not on sentiment analysis or topic extraction in text.
Ask Bash
What is Natural Language Processing (NLP)?
How does sentiment analysis work in NLP?
What is the difference between NLP and Knowledge Mining?
Which consideration ensures that AI systems are developed with mechanisms for oversight and that organizations are responsible for the outcomes produced by these systems?
Reliability and Safety
Inclusiveness
Transparency
Accountability
Answer Description
Accountability - This is the correct answer. It ensures that AI systems are developed with mechanisms for oversight and that organizations take responsibility for the outcomes those systems produce, with the systems used ethically and their creators or operators held responsible for their impact.
Transparency focuses on making AI systems understandable and providing visibility into how they work, but it doesn't directly address the mechanisms for oversight and responsibility for outcomes.
Inclusiveness is about ensuring that AI systems are designed to be fair and accessible, considering the diverse needs of users, but it is not specifically about oversight or accountability for outcomes.
Reliability and Safety focus on ensuring that AI systems perform as expected and do not cause harm, but accountability is the key consideration for ensuring oversight and responsibility for the system's outcomes.
Ask Bash
Why is accountability more important than transparency in ensuring oversight of AI systems?
What mechanisms can organizations use to ensure accountability in AI systems?
How does accountability in AI align with ethical principles?
To promote fairness in an AI solution used for loan approvals, what is an important consideration during data preparation?
Exclude sensitive attributes like race and gender from the training data
Include a diverse set of data points representing different demographic groups
Use historical data without modification to reflect real-world trends
Prioritize algorithm efficiency over data diversity
Answer Description
Include a diverse set of data points representing different demographic groups - This is the correct answer. Training on data that represents various demographic groups lets the model learn from a wide range of experiences and helps ensure the system does not disproportionately favor or disadvantage any particular group.
Exclude sensitive attributes like race and gender from the training data - While excluding sensitive attributes like race and gender can prevent direct bias, it may not be enough to ensure fairness.
Use historical data without modification to reflect real-world trends - Using historical data without modification might perpetuate existing biases in the data.
Prioritize algorithm efficiency over data diversity - While efficiency is important, prioritizing it over data diversity can lead to biased or incomplete models.
Ask Bash
Why is diverse data important in AI solutions for loan approvals?
How does excluding sensitive attributes like race and gender still lead to bias?
What challenges arise when using historical data in AI training?
A company wants to implement an AI solution that can automatically detect and classify objects within images to assist in automating their product inventory process.
Which type of AI workload is BEST suited for this task?
Image Classification
Object Detection
Semantic Segmentation
Optical Character Recognition (OCR)
Answer Description
Object Detection is the appropriate AI workload for detecting and classifying multiple objects within an image. It not only identifies the presence of objects but also provides their locations by drawing bounding boxes around them.
Image Classification assigns a single label to an entire image, which is insufficient when multiple objects need to be identified and classified individually.
Semantic Segmentation labels every pixel in an image, which is more detailed than necessary for this task and is computationally more intensive.
Optical Character Recognition (OCR) is used to extract text from images, which is not applicable in detecting and classifying objects in images.
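The bounding boxes mentioned above are how object detection is evaluated in practice: a predicted box counts as a correct detection when its overlap with a labeled box is high. The standard overlap measure, intersection-over-union (IoU), is simple enough to sketch directly:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) bounding boxes.
    Object detectors are scored with IoU: a predicted box 'counts' as
    a detection when its IoU with a labeled box exceeds a threshold."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

overlap = iou((0, 0, 10, 10), (5, 5, 15, 15))  # partial overlap, about 0.14
```

Image classification has no boxes to compare, which is the practical difference the answer above describes: detection localizes each product, classification only labels the whole image.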
Ask Bash
What is the difference between Object Detection and Image Classification?
When would Semantic Segmentation be more applicable than Object Detection?
How does Optical Character Recognition (OCR) differ from Object Detection in terms of use case?
A company is developing a system that can create original artwork in the style of famous painters.
This is an example of which type of workload?
Content Moderation workloads
Generative AI workloads
Computer Vision workloads
Knowledge Mining workloads
Answer Description
This scenario represents a Generative AI workload because the system is generating new original artwork that imitates the style of existing artists. Generative AI focuses on producing new data that shares characteristics with the training data.
Computer Vision workloads involve interpreting and analyzing visual information but not generating new images.
Knowledge Mining workloads are about extracting insights from existing data, not creating new content.
Content Moderation workloads deal with identifying and filtering inappropriate content. Therefore, the most suitable workload in this case is Generative AI.
Ask Bash
What is Generative AI?
How does Generative AI differ from Computer Vision?
What are some common applications of Generative AI?
A company has amassed a vast repository of documents, including PDFs, Word files, and scanned images of text. They want to enable employees to find specific information within these documents, such as policy details or client data, regardless of the file format.
Which type of AI workload would best address this need?
Natural Language Processing (NLP)
Content Personalization
Computer Vision
Knowledge Mining
Answer Description
Knowledge mining is the appropriate AI workload because it orchestrates services such as OCR, natural language processing, and search indexing to ingest, enrich, and make large, heterogeneous document collections easily searchable.
Natural language processing focuses on understanding and generating language from already-available text but does not, by itself, provide the pipeline to ingest multiple file formats or build a search index.
Computer Vision can extract text from images and scanned pages through OCR, but on its own it lacks the enrichment and indexing steps required to turn a mixed-format content repository into a searchable knowledge base.
Content personalization tailors experiences to individual users' preferences and does not address the need to search across a document repository.
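The "make it searchable" step at the heart of knowledge mining is an index mapping each term to the documents containing it. A minimal inverted index in plain Python (file names invented for the example) shows the idea; Azure AI Search builds and enriches such indexes at scale after OCR and NLP enrichment.

```python
from collections import defaultdict

def build_index(docs):
    """Minimal inverted index - the core of the 'make it searchable'
    step in a knowledge-mining pipeline. Maps each word to the set of
    document IDs that contain it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

docs = {
    "policy.pdf": "travel expense policy details",
    "client.docx": "client onboarding details",
}
index = build_index(docs)
```

A query for "policy" then resolves instantly to the matching files, regardless of their original format, because the OCR/enrichment stages have already reduced every document to indexable text.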
Ask Bash
What is OCR and how does it work in the context of knowledge mining?
How does search indexing improve the knowledge mining process?
What specific role does natural language processing (NLP) play in knowledge mining?
When developing an AI-powered application, which approach best promotes inclusiveness?
Using training data that represents a wide range of user groups and experiences
Implementing advanced algorithms to maximize accuracy
Designing the user interface with modern aesthetics
Focusing on optimizing the application's performance
Answer Description
Using training data that represents a wide range of user groups and experiences helps ensure that the AI model performs fairly across different populations. This approach reduces bias and improves the equity of the AI solution, which are essential aspects of inclusiveness. While implementing advanced algorithms might enhance accuracy, it doesn't address potential biases in the data. Focusing solely on performance optimization may overlook the needs of diverse users. Designing the user interface with modern aesthetics enhances visual appeal but does not necessarily make the application more inclusive unless it also considers accessibility features.
Ask Bash
Why is using diverse training data important for AI inclusiveness?
What are some potential biases in training data, and how do they arise?
How does inclusiveness differ from accuracy in AI models?
Which of the following is an example of a natural language processing workload?
Predicting equipment failures using sensor data
Recognizing objects in images
Sentiment analysis of customer reviews
Translating data into visual charts
Answer Description
Sentiment analysis of customer reviews involves processing and understanding human language, which is a key aspect of natural language processing (NLP).
Recognizing objects in images is a computer vision task.
Predicting equipment failures using sensor data is related to predictive analytics.
Translating data into visual charts is data visualization.
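As a rough intuition for how sentiment analysis turns text into a label, the sketch below counts positive and negative words from a tiny hand-made lexicon. Azure AI Language does this with trained models and confidence scores; this plain-Python version only illustrates the positive/negative/neutral decision.

```python
POSITIVE = {"great", "excellent", "love", "fast", "recommend"}
NEGATIVE = {"poor", "broken", "slow", "terrible", "refund"}

def sentiment(review):
    """Toy lexicon-based sentiment scorer: count positive minus
    negative words and map the score to a label. Real services use
    trained models rather than fixed word lists."""
    words = review.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Fixed word lists miss negation and sarcasm ("not great" still counts "great" as positive here), which is why production NLP relies on learned models instead.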
Ask Bash
What is natural language processing (NLP)?
How does sentiment analysis work in NLP?
How is NLP different from computer vision?
A hospital wants to develop an AI system that can assist doctors by evaluating radiology scans to detect early signs of diseases.
Which AI workload is most appropriate for this task?
Natural Language Processing (NLP)
Predictive Analytics
Knowledge Mining
Computer Vision
Answer Description
Computer Vision is the AI workload that enables machines to interpret and analyze visual data, such as radiology scans, to detect patterns and anomalies indicating diseases.
Natural Language Processing (NLP) deals with text and speech data.
Knowledge Mining involves extracting information from large datasets.
Predictive Analytics focuses on forecasting future outcomes based on data. None of these alternatives directly processes visual imagery such as radiology scans.
Ask Bash
What is Computer Vision in AI?
How does Computer Vision differ from NLP?
Why isn't Predictive Analytics suitable for evaluating radiology scans?
An organization wants to extract insights from a vast collection of unstructured documents and make them easily searchable.
Which AI workload is best suited for this task?
Natural Language Processing (NLP)
Computer Vision
Speech Recognition
Knowledge Mining
Answer Description
Knowledge Mining involves using AI to extract information from large volumes of unstructured data, such as documents, and making it accessible through search and analysis. It is the appropriate workload for extracting insights from unstructured documents and making them searchable.
Natural Language Processing (NLP) focuses on understanding and generating human language but does not, by itself, make document collections searchable.
Computer Vision deals with extracting information from images and videos.
Speech Recognition converts spoken language into text.
Ask Bash
What types of data can be processed using Knowledge Mining?
How does Knowledge Mining differ from Natural Language Processing (NLP)?
What Microsoft Azure tools are commonly used for Knowledge Mining?
Which of the following best explains why an AI solution requires regular updates and maintenance after deployment to ensure its reliability and safety?
To change the underlying AI architecture from a neural network to a decision tree.
To fulfill a one-time deployment checklist required by software vendors.
To significantly reduce the initial cost of developing the AI model.
To address potential data drift and maintain the model's performance.
Answer Description
To ensure ongoing reliability and safety, an AI solution requires regular updates and maintenance. The data and environment in which the model operates can change over time, a phenomenon known as 'data drift' or 'model drift', and these changes can degrade the model's performance and accuracy. Continuous monitoring and updating address these shifts, fix emerging issues, and maintain the system's effectiveness and safety. Reducing development costs or changing the fundamental architecture are not the goals of post-deployment maintenance.
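The drift monitoring described above can be sketched with a simple statistical check: compare live feature values against a snapshot of the training distribution. This plain-Python version (made-up numbers) flags drift when the live mean moves far from the training mean, measured in training standard deviations; production systems use richer tests such as Kolmogorov-Smirnov statistics, but the idea is the same.

```python
from statistics import mean, stdev

def drift_score(baseline, live):
    """Distance of the live mean from the training-time mean, in units
    of training standard deviations (a simple z-style drift check).
    A large score suggests the input distribution has shifted."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(mean(live) - mu) / sigma

baseline = [10, 11, 9, 10, 12, 10, 11, 9]  # feature values at training time
stable = [10, 10, 11, 9]                   # similar distribution: low score
drifted = [18, 19, 17, 20]                 # shifted distribution: high score
```

When the score crosses a chosen threshold, the maintenance response is retraining or updating the model on fresh data, which is the post-deployment work the question describes.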
Ask Bash
What is data drift in AI?
How is model performance monitored after deployment?
Why is AI safety important and how does maintenance contribute to it?
An online community wants to automatically detect and filter inappropriate user-generated content such as offensive language and adult images.
Which type of AI workload is best suited to address this need?
Personalization
Knowledge Mining
Content Moderation
Document Intelligence
Answer Description
Content Moderation workloads are specifically designed to detect and filter inappropriate or offensive content in text, images, and videos. They leverage AI models trained to recognize patterns associated with hate speech, profanity, nudity, and other forms of undesirable content. Therefore, content moderation is the best choice for automatically detecting and filtering inappropriate user-generated content.
Personalization workloads focus on tailoring content or recommendations to individual users based on their preferences and behavior, which does not address the need to filter inappropriate content.
Knowledge Mining involves extracting insights from large volumes of data, which is not directly related to content filtering.
Document Intelligence focuses on processing and analyzing documents to extract structured information, which is also not relevant to detecting inappropriate content in user submissions.
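The flag-and-filter flow of text moderation can be illustrated with a word-list check. Real content-moderation services use trained classifiers (and separate image models) rather than blocklists, and the blocked terms here are mild placeholders, but the shape of the output - a flag plus the matched terms - is representative.

```python
import re

BLOCKED = {"idiot", "stupid"}  # tiny placeholder blocklist

def moderate(text):
    """Word-list moderation sketch: tokenize, intersect with a
    blocklist, and report whether the text should be flagged.
    Real services use trained classifiers, not fixed lists."""
    tokens = re.findall(r"[a-z']+", text.lower())
    hits = sorted(set(tokens) & BLOCKED)
    return {"flagged": bool(hits), "terms": hits}
```

A platform would route flagged submissions to removal or human review; the matched terms make the decision auditable.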
Ask Bash
What technologies enable Content Moderation?
How does content moderation handle different languages in text analysis?
Can content moderation systems be customized for specific industries or community guidelines?
An AI solution extracts data fields from scanned documents and transforms them into structured data.
This is an example of which AI workload?
Natural Language Processing (NLP)
Knowledge Mining
Document Intelligence
Computer Vision
Answer Description
Document Intelligence - This is the correct answer. Document Intelligence workloads extract data from unstructured or scanned documents and transform it into structured data, using techniques such as optical character recognition (OCR) and data extraction to turn documents, forms, and invoices into usable structured formats.
Knowledge Mining focuses on extracting insights from a variety of unstructured data sources, including documents, but it is broader and may involve additional capabilities beyond just extracting data from documents.
Natural Language Processing (NLP) is used for understanding and processing human language, primarily text, but it is not specifically focused on extracting structured data from scanned or unstructured documents.
Computer Vision deals with analyzing visual data such as images and video; it is not primarily focused on extracting structured data from scanned documents, which is the focus of Document Intelligence.
Ask Bash
What is Optical Character Recognition (OCR) and how does it relate to Document Intelligence?
How is Document Intelligence different from Knowledge Mining?
What role does AI play in transforming data from scanned documents into structured formats?
A company wants to ensure that users can understand how their AI system processes data and arrives at decisions.
Which responsible AI principle should they focus on enhancing?
Fairness
Inclusiveness
Transparency
Privacy
Answer Description
Transparency involves making AI systems understandable and explainable to users and stakeholders. By enhancing transparency, the company allows users to comprehend how data is processed and how decisions are made, building trust in the system.
Privacy focuses on protecting personal and sensitive data, which is important but does not specifically address understanding AI decision-making processes.
Fairness aims to prevent biases and ensure equitable outcomes.
Inclusiveness ensures AI systems are accessible and beneficial to diverse users.
Ask Bash
What does transparency mean in the context of responsible AI?
How does transparency differ from fairness in responsible AI principles?
Why is enhancing transparency important for trust in AI systems?
An e-commerce company wants to develop a system that can automatically analyze customer reviews to determine the overall sentiment (positive, negative, or neutral) towards their products.
Which type of AI workload should they use?
Natural Language Processing (NLP)
Predictive Maintenance
Time Series Forecasting
Computer Vision
Answer Description
Natural Language Processing (NLP) is used to analyze and understand human language in text or speech form. Since the company wants to analyze textual customer reviews to determine sentiment, NLP techniques are appropriate for this task. Computer Vision focuses on visual data like images and videos, which doesn't apply to text reviews. Predictive Maintenance and Time Series Forecasting involve predicting equipment failures and future values based on time-series data, respectively, neither of which relate to analyzing text reviews for sentiment.
Ask Bash
What is Natural Language Processing (NLP)?
How does sentiment analysis work in NLP?
Why is Computer Vision not suitable for analyzing customer reviews?