
Microsoft Azure AI Fundamentals Practice Test (AI-900)


Microsoft Azure AI Fundamentals AI-900 Information

The Microsoft Certified: Azure AI Fundamentals (AI-900) exam is an entry-level certification designed for individuals seeking foundational knowledge of artificial intelligence (AI) and machine learning (ML) concepts and their applications within the Microsoft Azure platform. The AI-900 exam covers essential AI workloads such as anomaly detection, computer vision, and natural language processing, and it emphasizes responsible AI principles, including fairness, transparency, and accountability. While no deep technical background is required, a basic familiarity with technology and Azure’s services can be helpful, making this certification accessible to a wide audience, from business decision-makers to early-career technologists.

The exam covers several major domains, starting with AI workloads and considerations, which introduces candidates to various types of AI solutions and ethical principles. Next, it delves into machine learning fundamentals, explaining core concepts like data features and labels, model training, and types of machine learning such as classification and clustering. The exam also emphasizes specific Azure tools for implementing AI solutions, such as the Azure Machine Learning designer for visual, drag-and-drop model building, the Computer Vision service for image analysis, and Azure Bot Service for conversational AI. Additionally, candidates learn how natural language processing (NLP) tasks, including sentiment analysis, translation, and speech recognition, are handled by Azure's language and speech services.
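The distinction drawn above between supervised classification (which needs labeled data) and unsupervised clustering (which does not) can be sketched in plain Python. The data, function names, and thresholds below are purely illustrative and are not part of any Azure SDK:

```python
# Minimal sketch of supervised vs. unsupervised learning on 1-D data.
# All names and data here are illustrative, not Azure APIs.

def nearest_centroid_classify(labeled, point):
    """Supervised: labels are known, so average each class's examples
    into a centroid and assign the new point to the nearest one."""
    groups = {}
    for x, label in labeled:
        groups.setdefault(label, []).append(x)
    centroids = {lbl: sum(xs) / len(xs) for lbl, xs in groups.items()}
    return min(centroids, key=lambda lbl: abs(centroids[lbl] - point))

def two_means_cluster(points, iterations=10):
    """Unsupervised: no labels, so discover two segments by repeatedly
    assigning points to the nearer of two centroids (tiny k-means, k=2)."""
    c0, c1 = min(points), max(points)
    for _ in range(iterations):
        g0 = [p for p in points if abs(p - c0) <= abs(p - c1)]
        g1 = [p for p in points if abs(p - c0) > abs(p - c1)]
        c0 = sum(g0) / len(g0)
        c1 = sum(g1) / len(g1)
    return sorted(g0), sorted(g1)

# Classification: monthly spend labeled "low"/"high"; predict a new customer.
labeled = [(10, "low"), (12, "low"), (90, "high"), (95, "high")]
print(nearest_centroid_classify(labeled, 85))            # high

# Clustering: same numbers without labels; segments emerge from the data.
print(two_means_cluster([10, 12, 11, 90, 95, 92]))       # ([10, 11, 12], [90, 92, 95])
```

The same split shows up in the exam's scenario questions: segmenting customers without labels is clustering, while predicting a known category such as fraudulent/legitimate is classification.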

Achieving the AI-900 certification demonstrates a solid understanding of AI and ML basics and prepares candidates for more advanced Azure certifications in data science or AI engineering. It’s an excellent credential for those exploring how AI solutions can be effectively used within the Azure ecosystem, whether to aid business decision-making or to set a foundation for future roles in AI and data analytics.

  • Describe Artificial Intelligence Workloads and Considerations
  • Describe Fundamental Principles of Machine Learning on Azure
  • Describe Features of Computer Vision Workloads on Azure
  • Describe Features of Natural Language Processing (NLP) Workloads on Azure
  • Describe Features of Generative AI Workloads on Azure
Question 1 of 20

An organization wants to group customers into segments based on similarities in their behavior, but they don't have labeled data.

Which machine learning technique should they use?

  • Time Series Analysis

  • Regression

  • Clustering

  • Classification

Question 2 of 20

A retailer wants to analyze customer reviews to determine overall customer satisfaction.

Which AI workload is best suited for this task?

  • Natural Language Processing (NLP) workload

  • Document Intelligence workload

  • Content Moderation workload

  • Computer Vision workload

Question 3 of 20

Azure OpenAI Service's image generation models can create images based on audio prompts provided by users.

  • False

  • True

Question 4 of 20

Which statement best reflects responsible AI considerations for bias when using generative AI models in Azure OpenAI Service?

  • Generative AI models are inherently unbiased because they rely solely on mathematical algorithms, so human oversight is unnecessary.

  • Generative AI models can inherit and amplify social biases present in their training data; developers should apply ongoing human-led monitoring, testing, and mitigation strategies.

  • Bias is only a concern during the pre-training phase; once a model is fine-tuned it can be deployed safely without further monitoring.

  • Using Azure OpenAI content filters alone removes all bias from generated outputs, eliminating the need for additional red-team testing.

Question 5 of 20

An Azure data scientist wants to build and deploy a machine learning model without extensive coding or manual model selection.

Which Azure Machine Learning capability should they use?

  • Azure Synapse Analytics

  • Azure Machine Learning Designer

  • Automated Machine Learning

  • Azure Databricks

Question 6 of 20

Your software development team wants to implement an AI assistant that can generate code snippets based on natural language descriptions.

Which Azure OpenAI model should they use for this purpose?

  • Codex

  • DALL·E

  • GPT-3's text-davinci-003

  • Azure's Computer Vision API

Question 7 of 20

You are a data scientist at a publishing company looking to generate creative story ideas based on existing literature.

Which characteristic of generative AI models allows them to create new and original content inspired by the training data?

  • They store exact copies of data for direct replication

  • They detect anomalies and irregularities in data

  • They categorize input data into predefined classes

  • They create new content by modeling the structure of the training data

Question 8 of 20

A company's customer support department has accumulated a large number of email inquiries. They want to quickly identify the main issues customers are experiencing by automatically extracting important words and phrases from these emails.

Which natural language processing (NLP) technique should they use to achieve this?

  • Language Detection

  • Key Phrase Extraction

  • Sentiment Analysis

  • Entity Recognition

Question 9 of 20

Which of the following scenarios best exemplifies a document intelligence workload?

  • A system that recommends products based on user preferences

  • An application that translates spoken language into text

  • A service that extracts key information from invoices and receipts

  • An algorithm that identifies objects in images

Question 10 of 20

A company wants to develop a model that can determine whether a transaction is fraudulent or legitimate.

Which type of machine learning task is appropriate for this scenario?

  • Classification

  • Dimensionality Reduction

  • Clustering

  • Regression

Question 11 of 20

An organization wants to develop an AI system that converts natural language descriptions of features into corresponding programming code to expedite their software development process.

Which Azure OpenAI model should they choose to achieve this goal?

  • Embeddings

  • DALL·E

  • Codex

  • GPT-3

Question 12 of 20

An analyst at a telecommunications company wants to forecast the number of customer service calls expected next month based on data from previous months.

Which machine learning technique is most suitable for this task?

  • Clustering

  • Classification

  • Regression

Question 13 of 20

A company wants to develop a system that can create original artworks and designs based on existing art styles.

Which type of workload is most suitable for this purpose?

  • Predictive Analytics

  • Generative AI

  • Computer Vision

  • Knowledge Mining

Question 14 of 20

An AI engineer is working on a project that involves analyzing vast amounts of unstructured data, such as images and speech. She needs to build a model that can automatically learn hierarchical representations from raw data without extensive feature engineering.

Which machine learning technique is most appropriate for this scenario?

  • Decision Trees

  • Clustering Algorithms

  • Deep Learning

  • Regression Algorithms

Question 15 of 20

Which of the following is a feature of an image classification solution?

  • Extracting text from images for text analysis

  • Identifying and locating objects within an image

  • Predicting the content category of an image

  • Detecting facial features to analyze emotions

Question 16 of 20

Why do machine learning practitioners divide a dataset into separate training and validation subsets when building a model?

  • To evaluate the model's performance on unseen data, helping to prevent overfitting

  • To reduce the training time by using smaller datasets

  • To increase the total amount of data available for training

  • To ensure the model memorizes the training data perfectly

Question 17 of 20

You are a data analyst at a marketing firm tasked with evaluating how customers feel about a recent product launch by analyzing thousands of social media posts.

Which natural language processing technique should you use to understand the emotions expressed in the text?

  • Key Phrase Extraction

  • Sentiment Analysis

  • Topic Modeling

  • Entity Recognition

Question 18 of 20

A company collected data to develop a machine learning model that predicts the final selling price of products based on factors like 'Production Cost', 'Marketing Budget', 'Competitor Prices', and 'Time on Market'.

In this context, which variable is the label for the model?

  • Production Cost

  • Competitor Prices

  • Marketing Budget

  • Final Selling Price

Question 19 of 20

A company is developing an AI-driven mobile application that collects user data to provide personalized recommendations.

To address concerns about privacy and security, which practice should the company adopt?

  • Storing user data on shared servers to reduce costs

  • Implementing robust encryption techniques for data at rest and in transit

  • Collecting as many data points as possible to improve recommendations

  • Giving developers access to user data for debugging purposes

Question 20 of 20

A company wants to extract structured information such as names of people, places, organizations, and dates from a large collection of unstructured text documents.

Which natural language processing technique should they use to achieve this?

  • Entity Recognition

  • Sentiment Analysis

  • Language Translation

  • Key Phrase Extraction