Microsoft Azure AI Fundamentals Practice Test (AI-900)
Use the form below to configure your Microsoft Azure AI Fundamentals Practice Test (AI-900). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft Azure AI Fundamentals AI-900 Information
The Microsoft Certified: Azure AI Fundamentals (AI-900) exam is an entry-level certification designed for individuals seeking foundational knowledge of artificial intelligence (AI) and machine learning (ML) concepts and their applications within the Microsoft Azure platform. The AI-900 exam covers essential AI workloads such as anomaly detection, computer vision, and natural language processing, and it emphasizes responsible AI principles, including fairness, transparency, and accountability. While no deep technical background is required, a basic familiarity with technology and Azure's services can be helpful, making this certification accessible to a wide audience, from business decision-makers to early-career technologists.
The exam covers several major domains, starting with AI workloads and considerations, which introduces candidates to various types of AI solutions and ethical principles. Next, it delves into machine learning fundamentals, explaining core concepts like data features, model training, and types of machine learning such as classification and clustering. The exam also emphasizes specific Azure tools for implementing AI solutions, such as Azure Machine Learning Studio for visual model-building, the Computer Vision service for image analysis, and Azure Bot Service for conversational AI. Additionally, candidates learn how natural language processing (NLP) tasks, including sentiment analysis, translation, and speech recognition, are managed within Azure's language and speech services.
Achieving the AI-900 certification demonstrates a solid understanding of AI and ML basics and prepares candidates for more advanced Azure certifications in data science or AI engineering. It's an excellent credential for those exploring how AI solutions can be effectively used within the Azure ecosystem, whether to aid business decision-making or to set a foundation for future roles in AI and data analytics.
Scroll down to see your responses and detailed results
Free Microsoft Azure AI Fundamentals AI-900 Practice Test
Press start when you are ready, or press Change to modify any settings for the practice test.
- Questions: 15
- Time: Unlimited
- Included Topics:
  - Describe Artificial Intelligence Workloads and Considerations
  - Describe Fundamental Principles of Machine Learning on Azure
  - Describe Features of Computer Vision Workloads on Azure
  - Describe Features of Natural Language Processing (NLP) Workloads on Azure
  - Describe Features of Generative AI Workloads on Azure
Free Preview
This test is a free preview, no account required.
Regular updates and maintenance are unnecessary for ensuring the reliability and safety of an AI solution after deployment.
False
True
Answer Description
This statement is False.
Regular updates and maintenance are essential for ensuring the reliability and safety of an AI solution after deployment. Over time, factors such as data drift, changes in user behavior, or new environmental conditions can affect the performance of an AI model. By continuously monitoring and updating the AI system, organizations can address these changes, fix emerging issues, and maintain the system's reliability and safety.
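As a simple illustration of what post-deployment monitoring can look like, the sketch below compares a numeric feature's distribution in recent production data against the training data using a two-sample Kolmogorov-Smirnov test. The file names, column name, and alert threshold are hypothetical, and pandas and SciPy are assumed to be available.

```python
# Minimal sketch: a basic data-drift check that compares the distribution of one
# numeric feature in recent production data against the training data.
# File and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import ks_2samp

train = pd.read_csv("training_data.csv")
recent = pd.read_csv("last_30_days.csv")

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the feature's
# distribution has shifted since the model was trained.
result = ks_2samp(train["transaction_amount"], recent["transaction_amount"])

if result.pvalue < 0.01:  # hypothetical alert threshold
    print("Distribution shift detected - investigate and consider retraining.")
```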
An AI solution that treats all users identically and ignores their individual characteristics adequately addresses fairness considerations.
True
False
Answer Description
This statement is False.
Fairness in AI involves acknowledging and addressing the diverse needs and potential biases affecting different user groups. Treating all users identically without considering individual characteristics can perpetuate existing inequalities and lead to biased outcomes. Effective fairness strategies require assessing how different groups may be uniquely impacted and adjusting the AI solution to promote equity.
When developing an AI-powered application, which approach best promotes inclusiveness?
Using training data that represents a wide range of user groups and experiences
Focusing on optimizing the application's performance
Designing the user interface with modern aesthetics
Implementing advanced algorithms to maximize accuracy
Answer Description
Using training data that represents a wide range of user groups and experiences helps ensure that the AI model performs fairly across different populations. This approach reduces bias and improves the equity of the AI solution, both of which are essential aspects of inclusiveness. While implementing advanced algorithms might enhance accuracy, it doesn't address potential biases in the data. Focusing solely on performance optimization may overlook the needs of diverse users. Designing the user interface with modern aesthetics enhances visual appeal but does not necessarily make the application more inclusive unless it also considers accessibility features.
A company is developing a system that can create original artwork in the style of famous painters.
This is an example of which type of workload?
Computer Vision workloads
Content Moderation workloads
Generative AI workloads
Knowledge Mining workloads
Answer Description
This scenario represents a Generative AI workload because the system is generating new original artwork that imitates the style of existing artists. Generative AI focuses on producing new data that shares characteristics with the training data.
Computer Vision workloads involve interpreting and analyzing visual information but not generating new images.
Knowledge Mining workloads are about extracting insights from existing data, not creating new content.
Content Moderation workloads deal with identifying and filtering inappropriate content. Therefore, the most suitable workload in this case is Generative AI.
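As an illustration of how a workload like this might be reached on Azure, the minimal sketch below generates an image from a text prompt through the Azure OpenAI Service using the openai Python package. The deployment name, endpoint, key, and API version are placeholders, not values taken from the question.

```python
# Minimal sketch: image generation with Azure OpenAI (assumed image-generation
# deployment named "dalle3"; endpoint, key, and API version are placeholders).
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-key>",
    api_version="2024-02-01",
)

result = client.images.generate(
    model="dalle3",  # the name of your image-generation deployment
    prompt="A harbor at dusk rendered as an impressionist oil painting",
    n=1,
)

print(result.data[0].url)  # URL of the generated image
```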
Which of the following is an example of a natural language processing workload?
Sentiment analysis of customer reviews
Recognizing objects in images
Translating data into visual charts
Predicting equipment failures using sensor data
Answer Description
Sentiment analysis of customer reviews involves processing and understanding human language, which is a key aspect of natural language processing (NLP).
Recognizing objects in images is a computer vision task.
Predicting equipment failures using sensor data is related to predictive analytics.
Translating data into visual charts is data visualization.
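For reference, the minimal sketch below runs sentiment analysis on a couple of reviews with the Azure AI Language service via the azure-ai-textanalytics package; the endpoint and key are placeholders.

```python
# Minimal sketch: sentiment analysis of customer reviews with the Azure AI
# Language service (azure-ai-textanalytics). Endpoint and key are placeholders.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

reviews = [
    "The checkout process was quick and the product arrived early.",
    "Support never answered my emails and the item was damaged.",
]

for doc in client.analyze_sentiment(reviews):
    # Each result carries an overall label plus per-class confidence scores.
    print(doc.sentiment, doc.confidence_scores)
```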
A company wants to automate the extraction of structured data from scanned documents such as invoices and receipts.
Which Azure AI service is BEST suited for this purpose?
Azure Computer Vision OCR
Azure Cognitive Search
Azure Form Recognizer
Azure Text Analytics
Answer Description
Azure Form Recognizer is specifically designed to extract structured data from scanned documents like invoices and receipts. It uses machine learning models to identify and extract key-value pairs, text, and tables, transforming unstructured documents into structured data.
Azure Cognitive Search is used for indexing and searching over large sets of data but doesn't extract structured data from documents.
Azure Text Analytics processes unstructured text to detect sentiment, key phrases, and entities but doesn't work directly with scanned documents.
Azure Computer Vision Optical Character Recognition (OCR) extracts text from images but doesn't structure the data or extract key-value pairs as needed for invoices and receipts.
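For reference, Form Recognizer now ships as Azure AI Document Intelligence. The minimal sketch below analyzes an invoice with its prebuilt invoice model through the azure-ai-formrecognizer package; the endpoint, key, and file name are placeholders.

```python
# Minimal sketch: extracting structured fields from an invoice with the prebuilt
# invoice model (azure-ai-formrecognizer). Endpoint, key, and file name are
# placeholders.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

with open("invoice.pdf", "rb") as f:
    poller = client.begin_analyze_document("prebuilt-invoice", document=f)

for invoice in poller.result().documents:
    for name, field in invoice.fields.items():
        # Key-value pairs such as VendorName or InvoiceTotal, each with a
        # confidence score for the extraction.
        print(name, field.content, field.confidence)
```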
A security company wants to develop a system that can automatically detect and alert on suspicious activities in video surveillance footage.
Which workload is most appropriate for building this solution?
Computer Vision
Natural Language Processing
Generative AI
Knowledge Mining
Answer Description
Computer Vision focuses on processing and interpreting visual information from images or videos. It is the most suitable choice for analyzing video surveillance to detect suspicious activities.
Natural Language Processing deals with understanding and generating human language, which is not applicable to visual data.
Knowledge Mining involves extracting insights from large volumes of structured and unstructured data, typically text-based.
Generative AI is concerned with creating new content rather than analyzing existing footage.
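As a simplified illustration, a surveillance pipeline could analyze individual frames with the Azure AI Vision (Computer Vision) service. The sketch below runs object detection on one frame referenced by URL using the azure-cognitiveservices-vision-computervision package; the endpoint, key, and frame URL are placeholders, and a real system would add video ingestion, tracking, and alerting on top.

```python
# Minimal sketch: object detection on a single video frame with the Computer
# Vision service. Endpoint, key, and the frame URL are placeholders; this is an
# illustration of the workload, not a complete surveillance solution.
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient(
    "https://<your-vision-resource>.cognitiveservices.azure.com/",
    CognitiveServicesCredentials("<your-key>"),
)

frame_url = "https://example.com/frames/frame-001.jpg"
result = client.detect_objects(frame_url)

for obj in result.objects:
    # Each detection includes a label, a bounding box, and a confidence score.
    print(obj.object_property, obj.confidence, obj.rectangle)
```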
A company wants to automatically analyze customer reviews to determine sentiments and extract key topics discussed.
Which AI workload would be most suitable for this task?
Computer Vision
Natural Language Processing (NLP)
Knowledge Mining
Content Moderation
Answer Description
Natural Language Processing (NLP) is the AI workload that enables computers to understand, interpret, and generate human language. It is used to analyze textual data like customer reviews to determine sentiments and extract key topics.
Computer Vision is used for image and video analysis.
Knowledge Mining involves extracting structured information from large datasets but doesn't specialize in sentiment analysis of text.
Content Moderation focuses on detecting and filtering inappropriate content, not on sentiment analysis or topic extraction in text.
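To complement the sentiment example shown earlier, the minimal sketch below extracts key phrases (a rough proxy for the topics discussed) from a review using the same Azure AI Language client; the endpoint and key are placeholders.

```python
# Minimal sketch: key phrase extraction with the Azure AI Language service
# (azure-ai-textanalytics). Endpoint and key are placeholders.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

reviews = [
    "Battery life is excellent, but the touchscreen stopped responding after a week.",
]

for doc in client.extract_key_phrases(reviews):
    # A list of phrases the service considers central to the text,
    # e.g. ones about battery life and the touchscreen.
    print(doc.key_phrases)
```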
An organization wants to extract insights from a vast collection of unstructured documents and make them easily searchable.
Which AI workload is best suited for this task?
Knowledge Mining
Natural Language Processing (NLP)
Speech Recognition
Computer Vision
Answer Description
Knowledge Mining involves using AI to extract information from large volumes of unstructured data, such as documents, and making it accessible through search and analysis. It is the appropriate workload for extracting insights from unstructured documents and making them searchable.
Natural Language Processing (NLP) focuses on understanding and generating human language but does not, by itself, make large collections of documents searchable.
Computer Vision deals with extracting information from images and videos.
Speech Recognition converts spoken language into text.
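For reference, Azure Cognitive Search (now Azure AI Search) is the usual engine behind a knowledge-mining solution. The minimal sketch below queries an index that an indexer and AI enrichment skillset might have populated, using the azure-search-documents package; the endpoint, key, index name, and field name are placeholders.

```python
# Minimal sketch: querying a search index produced by a knowledge-mining
# pipeline (azure-search-documents). Endpoint, key, index name, and the "title"
# field are hypothetical placeholders.
from azure.search.documents import SearchClient
from azure.core.credentials import AzureKeyCredential

client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="enriched-documents",
    credential=AzureKeyCredential("<query-key>"),
)

# Full-text query over content extracted and enriched from unstructured documents.
for result in client.search(search_text="contract renewal terms", top=5):
    print(result["title"], result["@search.score"])  # "title" is a hypothetical index field
```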
Which consideration ensures that AI systems are developed with mechanisms for oversight and that organizations are responsible for the outcomes produced by these systems?
Accountability
Reliability and Safety
Inclusiveness
Transparency
Answer Description
Accountability - This is the correct answer. Accountability ensures that AI systems are developed with mechanisms for oversight, and that organizations take responsibility for the outcomes produced by these systems. It involves ensuring that AI systems are used ethically and that their creators or operators are held responsible for their impact.
Transparency focuses on making AI systems understandable and providing visibility into how they work, but it doesn't directly address the mechanisms for oversight and responsibility for outcomes.
Inclusiveness is about ensuring that AI systems are designed to be fair and accessible, considering the diverse needs of users, but it is not specifically about oversight or accountability for outcomes.
Reliability and Safety focus on ensuring that AI systems perform as expected and do not cause harm, but accountability is the key consideration for ensuring oversight and responsibility for the system's outcomes.
A company wants to ensure that users can understand how their AI system processes data and arrives at decisions.
Which responsible AI principle should they focus on enhancing?
Transparency
Fairness
Privacy
Inclusiveness
Answer Description
Transparency involves making AI systems understandable and explainable to users and stakeholders. By enhancing transparency, the company allows users to comprehend how data is processed and how decisions are made, building trust in the system.
Privacy focuses on protecting personal and sensitive data, which is important but does not specifically address understanding AI decision-making processes.
Fairness aims to prevent biases and ensure equitable outcomes.
Inclusiveness ensures AI systems are accessible and beneficial to diverse users.
Your company is developing an artificial intelligence application that processes personal data from customers in multiple countries, including those in the European Union.
Which approach is the BEST to ensure compliance with privacy regulations?
Anonymize all customer data before processing it.
Implement strong encryption methods for storing and transmitting all customer data.
Restrict data collection to non-sensitive information to avoid privacy issues.
Obtain explicit consent from users and adhere to relevant data protection laws like GDPR.
Answer Description
Obtaining explicit consent from users and adhering to relevant data protection laws like GDPR is the best approach to ensure compliance when processing personal data. This involves informing users about how their data will be used and ensuring all data handling practices meet legal requirements. While encryption, data anonymization, and restricting data collection are important measures, they alone may not fulfill all legal obligations under privacy laws.
To promote fairness in an AI solution used for loan approvals, what is an important consideration during data preparation?
Exclude sensitive attributes like race and gender from the training data
Use historical data without modification to reflect real-world trends
Prioritize algorithm efficiency over data diversity
Include a diverse set of data points representing different demographic groups
Answer Description
Include a diverse set of data points representing different demographic groups - This is the correct answer. To promote fairness in an AI solution for loan approvals, it is crucial to include a diverse set of data points that represent various demographic groups. This helps the model learn from a wide range of experiences and ensures that the system does not disproportionately favor or disadvantage any particular group.
Exclude sensitive attributes like race and gender from the training data - While excluding sensitive attributes like race and gender can prevent direct bias, it may not be enough to ensure fairness.
Use historical data without modification to reflect real-world trends - Using historical data without modification might perpetuate existing biases in the data.
Prioritize algorithm efficiency over data diversity - While efficiency is important, prioritizing it over data diversity can lead to biased or incomplete models.
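As a lightweight illustration of this kind of data-preparation check, the sketch below inspects how well different groups are represented in the data and whether historical approval rates diverge between them; the file and column names are hypothetical, and pandas is assumed to be available.

```python
# Minimal sketch: checking representation and historical outcome rates before
# training a loan-approval model. File and column names ("age_band", "region",
# "approved") are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("loan_applications.csv")

# How well is each group represented in the training data?
print(df["age_band"].value_counts(normalize=True))

# Do historical approval rates differ sharply between groups? Large gaps can
# signal bias that the model would learn and reproduce.
print(df.groupby("region")["approved"].mean())
```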
An online platform wants to suggest content to users based on their individual preferences and browsing history to enhance user engagement.
Which Azure AI service is BEST suited for implementing this functionality?
Azure Machine Learning
Azure Cognitive Search
Azure Content Moderator
Azure Personalizer
Answer Description
Azure Personalizer is designed to provide personalized experiences by learning from user behavior and preferences, making it the best choice for suggesting content based on individual interactions.
Azure Content Moderator is used for detecting and filtering inappropriate content.
Azure Cognitive Search provides indexing and search capabilities.
Azure Machine Learning is a platform for building and deploying custom machine learning models but does not offer out-of-the-box personalization features.
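For reference, Personalizer works as a rank-and-reward loop: the application asks the service to rank candidate items for the current context, shows the top suggestion, and then reports how the user responded. The sketch below calls the REST Rank and Reward endpoints with the requests package; the endpoint, key, action IDs, feature names, and the v1.0 route are placeholders to verify against the current Personalizer documentation.

```python
# Minimal sketch: the Personalizer rank-and-reward loop over its REST API.
# Endpoint, key, action IDs, feature names, and the v1.0 route are assumptions
# to confirm against the service documentation.
import requests

endpoint = "https://<your-personalizer-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-key>"}

# 1) Ask Personalizer to rank candidate content for the current user context.
rank_body = {
    "contextFeatures": [{"device": "mobile", "timeOfDay": "evening"}],
    "actions": [
        {"id": "article-sports", "features": [{"topic": "sports"}]},
        {"id": "article-cooking", "features": [{"topic": "cooking"}]},
    ],
}
rank = requests.post(f"{endpoint}/personalizer/v1.0/rank", headers=headers, json=rank_body).json()
print(rank["rewardActionId"])  # the item Personalizer suggests showing first

# 2) Report back how the user responded so the model keeps learning.
reward_url = f"{endpoint}/personalizer/v1.0/events/{rank['eventId']}/reward"
requests.post(reward_url, headers=headers, json={"value": 1.0})  # 1.0 = user engaged
```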
A hospital wants to develop an AI system that can assist doctors by evaluating radiology scans to detect early signs of diseases.
Which AI workload is most appropriate for this task?
Natural Language Processing (NLP)
Predictive Analytics
Knowledge Mining
Computer Vision
Answer Description
Computer Vision is the AI workload that enables machines to interpret and analyze visual data, such as radiology scans, to detect patterns and anomalies indicating diseases.
Natural Language Processing (NLP) deals with text and speech data.
Knowledge Mining involves extracting information from large datasets.
Predictive Analytics focuses on forecasting future outcomes from historical data. None of these workloads directly processes visual imagery such as radiology scans.
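As a simplified illustration (not a clinically validated workflow), a team could train an image-classification model in Azure Custom Vision and score new scans against it. The sketch below uses the azure-cognitiveservices-vision-customvision prediction client; the project ID, published model name, endpoint, key, and file name are placeholders.

```python
# Minimal sketch: scoring an image against a published Azure Custom Vision
# classification model. Project ID, model name, endpoint, key, and file name
# are placeholders; clinical use would require far more rigorous validation.
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

credentials = ApiKeyCredentials(in_headers={"Prediction-key": "<your-prediction-key>"})
predictor = CustomVisionPredictionClient(
    "https://<your-customvision-resource>.cognitiveservices.azure.com/",
    credentials,
)

with open("scan.png", "rb") as image:
    results = predictor.classify_image("<project-id>", "<published-model-name>", image.read())

for prediction in results.predictions:
    # Each prediction pairs a trained tag (e.g. "abnormal") with a probability.
    print(prediction.tag_name, round(prediction.probability, 3))
```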
Gnarly!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.