Microsoft Azure Data Fundamentals Practice Test (DP-900)

Use the form below to configure your Microsoft Azure Data Fundamentals Practice Test (DP-900). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

  • Questions: the number of questions in the practice test. Free users are limited to 20 questions; upgrade for unlimited.
  • Seconds Per Question: determines how long you have to finish the practice test.
  • Exam Objectives: which exam objectives should be included in the practice test.

Microsoft Azure Data Fundamentals DP-900 Information

The Microsoft Azure Data Fundamentals certification exam is designed to validate foundational knowledge of core data concepts and how they are implemented using Microsoft Azure data services. It is ideal for individuals who are new to data workloads and cloud environments. Working through DP-900 practice tests and practice exams, and reviewing many practice questions, can help candidates build confidence, familiarize themselves with exam language, and strengthen their grasp of key topics.

The exam covers four major domains: describing core data concepts (25-30%), identifying considerations for relational data on Azure (20-25%), describing non-relational data on Azure (15-20%), and describing analytics workloads on Azure (25-30%). To prepare effectively, work through full-length practice exams and targeted practice questions for each domain; this will help you identify weak areas, improve your timing, and build readiness for the real exam experience.

Practice Exams & Practice Questions

Success on the DP-900 exam isn't just about recalling facts; you'll also need to apply them under timed conditions. Using DP-900 practice tests helps simulate the exam environment, while drilling practice questions for each objective ensures your understanding is solid. Practice exams expose you to question types such as case studies, drag-and-drop, multiple-choice, and multiple-response, allowing you to manage pacing and reduce surprises on exam day. With consistent work on practice exams and practice questions, you'll go into the exam with greater confidence and a lower chance of needing a retake.

  • Free Microsoft Azure Data Fundamentals DP-900 Practice Test
  • 20 Questions
  • Unlimited
  • Describe core data concepts
  • Identify considerations for relational data on Azure
  • Describe considerations for working with non-relational data on Azure
  • Describe an analytics workload

Free Preview

This test is a free preview; no account is required.
Subscribe to unlock all content, keep track of your scores, and access AI features!

Question 1 of 20

A company wants to store millions of IoT sensor readings that arrive with slightly different JSON fields and must be retrieved quickly by device ID and timestamp. Which type of database best meets this requirement?

  • A cloud data warehouse, for example Azure Synapse Analytics

  • A graph database, for example Azure Cosmos DB Gremlin API

  • A document database, for example Azure Cosmos DB with the Core (SQL) API

  • A relational database, for example Azure SQL Database
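
For illustration, two hypothetical readings of the kind this scenario describes might look like the sketch below: the device ID and timestamp are shared keys, while the remaining fields vary from record to record. All field names here are invented for the example.

```python
# Two hypothetical sensor readings: same logical entity, but the
# field sets differ slightly from one reading to the next.
reading_a = {
    "deviceId": "sensor-17",
    "timestamp": "2024-05-01T08:00:00Z",
    "temperature": 21.4,
}
reading_b = {
    "deviceId": "sensor-17",
    "timestamp": "2024-05-01T08:01:00Z",
    "temperature": 21.6,
    "humidity": 48,  # extra field that reading_a does not have
}

# Lookups key on device ID and timestamp, regardless of which
# other fields a given reading happens to include.
readings = {(r["deviceId"], r["timestamp"]): r for r in [reading_a, reading_b]}
print(readings[("sensor-17", "2024-05-01T08:01:00Z")])
```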

Question 2 of 20

A project team must decide whether an upcoming workload should be classified as transactional or analytical. Which characteristic would most strongly indicate that the workload is transactional rather than analytical?

  • It performs many small insert and update operations that must commit atomically in real time.

  • It runs long-running queries that aggregate several months of historical data.

  • It loads data from several sources during a nightly batch process for reporting.

  • It stores data in a columnar format optimized for large scans.
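
As a rough sketch of the contrast this question draws, the snippet below uses an in-memory SQLite database purely as a stand-in: it runs many small writes, each committed atomically, next to a single query that aggregates across all the stored rows. Table and column names are invented for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, placed TEXT)")

# Transactional pattern: many small inserts, each committed atomically.
for i in range(3):
    with con:  # each block commits (or rolls back) as one atomic unit
        con.execute("INSERT INTO orders (amount, placed) VALUES (?, ?)",
                    (10.0 + i, f"2024-0{i + 1}-15"))

# Analytical pattern: one scan that aggregates across historical rows.
total, = con.execute("SELECT SUM(amount) FROM orders").fetchone()
print(total)
```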

Question 3 of 20

Your organization needs someone to design, build, and schedule an automated pipeline that loads daily CSV files from Azure Blob Storage into an Azure Synapse Analytics dedicated SQL pool. Which workforce role is typically responsible for creating and maintaining such data ingestion pipelines?

  • Database administrator

  • Security engineer

  • Data analyst

  • Data engineer

Question 4 of 20

You are classifying a new application workload in Azure. Which characteristic would most clearly indicate that the workload is transactional rather than analytical?

  • It stores data in columnar format to optimize large table scans.

  • It must support many small, concurrent inserts and updates that require full ACID guarantees.

  • It runs long-running batch queries that aggregate months of historical data.

  • Its primary workload is refreshing read-only dashboards once per day.

Question 5 of 20

Your organization plans to move an on-premises SQL Server database to Azure without modifying the database schema or existing SQL Server Agent jobs. The operations team wants Microsoft to manage operating-system patching, while administrators still need to restore native SQL Server .bak files. Which Azure relational data service best meets these requirements?

  • Azure SQL Managed Instance

  • Azure SQL Database (single database)

  • Azure Database for PostgreSQL

  • SQL Server on an Azure virtual machine

Question 6 of 20

You are building an Azure Data Lake solution that will be queried by Spark for large-scale analytics. To minimize storage costs and improve scan performance, you want a columnar, compressed, open-source file format. Which format best meets these requirements?

  • Parquet

  • Avro

  • JSON

  • CSV
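
As a minimal sketch, assuming pandas and pyarrow are installed, the snippet below writes a small dataset to a Parquet file and then reads back a single column, the kind of selective scan a columnar layout makes cheap. The file name and column names are invented for the example.

```python
import pandas as pd
import pyarrow.parquet as pq

# Write a small frame to Parquet (a columnar, compressed format).
df = pd.DataFrame({
    "device_id": ["sensor-1", "sensor-2", "sensor-3"],
    "reading": [21.4, 19.8, 22.1],
})
df.to_parquet("readings.parquet")  # uses pyarrow under the hood

# A columnar layout lets a reader scan only the columns it needs.
table = pq.read_table("readings.parquet", columns=["reading"])
print(table.to_pandas())
```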

Question 7 of 20

An IoT project stores device telemetry as JSON objects that can include new properties at any time. Engineers must query individual documents and filter on nested fields without redesigning a fixed schema. Which type of database is most appropriate for this requirement?

  • Graph database

  • Column-family database

  • Relational database

  • Document database

Question 8 of 20

You are designing a new web application that stores user profile information as JSON documents. The data layer must automatically index all properties without requiring schema management, support active multi-region writes, and deliver single-digit millisecond response times at any scale. Which Azure data storage option should you choose?

  • Azure Blob Storage

  • Azure Database for PostgreSQL

  • Azure SQL Managed Instance

  • Azure Cosmos DB
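
For illustration, here is a minimal sketch using the azure-cosmos Python SDK. The account endpoint, key, database, container, and partition key are all placeholders, so treat this as an outline of the access pattern rather than a ready-to-run configuration.

```python
from azure.cosmos import CosmosClient

# Placeholder endpoint and key; real values come from the Azure portal.
client = CosmosClient("https://<account>.documents.azure.com:443/",
                      credential="<key>")
container = client.get_database_client("appdb").get_container_client("profiles")

# Store a JSON profile; properties are indexed automatically by default.
container.upsert_item({
    "id": "user-42",
    "userId": "user-42",  # assumed partition key for this sketch
    "displayName": "Avery",
    "preferences": {"theme": "dark"},
})

# Point read by id and partition key: the low-latency access path.
profile = container.read_item(item="user-42", partition_key="user-42")
```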

Question 9 of 20

A development team needs a fully managed relational database service on Azure that is compatible with the open-source MySQL engine they already use on-premises. Which Azure service best meets this requirement?

  • Azure Database for MySQL

  • Azure Database for PostgreSQL

  • Azure SQL Database

  • Azure Cosmos DB for NoSQL

Question 10 of 20

A company lands high-volume sensor data as CSV files in Azure Data Lake Storage Gen2. Each night the data must be converted to Parquet, cleaned, and loaded into an Azure Synapse Analytics dedicated SQL pool for reporting. Which Azure data role typically designs and schedules the pipelines that perform this ETL process?

  • Database administrator

  • Data engineer

  • Data scientist

  • Security engineer

Question 11 of 20

You plan to load five years of sales records into Azure Synapse Analytics and create interactive dashboards that show long-term revenue trends. Which characteristic of this solution makes the workload analytical rather than transactional?

  • It requires every individual transaction to satisfy strict ACID guarantees.

  • It reads and aggregates large volumes of historical data to identify trends.

  • It relies on a fully normalized schema designed for write performance.

  • It performs many small, random insert and update operations that must commit immediately.

Question 12 of 20

A company has consolidated and cleansed sales data in Azure Synapse Analytics. Management now needs interactive dashboards that highlight current sales trends and can be shared with business stakeholders through Power BI. In a modern Azure data project, which role is primarily responsible for creating these dashboards and delivering the insights?

  • Database administrator

  • Data analyst

  • Data engineer

  • Security engineer

Question 13 of 20

You are explaining to a colleague what makes a database "relational." Which characteristic is fundamental to the relational data model?

  • Data consistency is enforced solely by application logic rather than constraints.

  • Data is organized in two-dimensional tables consisting of rows and columns.

  • Records in the same table can each have completely different sets of columns.

  • The database stores data only as key/value pairs without predefined columns.
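
As a minimal sketch, using SQLite as a convenient stand-in for any relational engine, the snippet below defines a table with a fixed set of typed columns and a declared constraint, inserts a row, and reads it back by column name. Table and column names are invented for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# A relational table: a fixed set of typed columns shared by every
# row, with constraints enforced by the database itself.
con.execute("""
    CREATE TABLE customers (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL,
        email TEXT UNIQUE
    )
""")
con.execute("INSERT INTO customers (name, email) VALUES (?, ?)",
            ("Avery", "avery@example.com"))

# Every row has the same columns; queries address data by column name.
for row in con.execute("SELECT id, name, email FROM customers"):
    print(row)
```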

Question 14 of 20

You are designing a new retail web app that will collect product catalog data as JSON documents. The solution must automatically index every property, offer single-digit-millisecond reads and writes worldwide, and let you choose between multiple consistency levels. Which Azure data storage option should you recommend?

  • Azure SQL Database

  • Azure Blob Storage

  • Azure Synapse Analytics dedicated SQL pool

  • Azure Cosmos DB

Question 15 of 20

Your company plans to lift-and-shift an on-premises file server to Azure. The migrated store must remain accessible from multiple Windows and Linux clients by using their native SMB or NFS capabilities so that users can map network drives without rewriting applications. Which Azure storage service should you deploy?

  • Azure Queue storage

  • Azure Blob storage

  • Azure File storage

  • Azure Table storage

Question 16 of 20

An organization is hiring for several Azure data roles. Which task should be assigned to the data analyst rather than the database administrator or data engineer?

  • Scheduling automated Azure Data Factory pipelines to ingest raw telemetry from IoT devices.

  • Building interactive Power BI dashboards that illustrate sales trends for business stakeholders.

  • Tuning indexes and optimizing query plans on the production Azure SQL Database.

  • Creating and applying Azure RBAC policies to restrict access to sensitive tables.

Question 17 of 20

Your team stores application log files in Azure Blob Storage, with each log entry formatted as a JSON document containing key-value pairs that can vary between records. When cataloging this dataset, which classification best describes the JSON log files?

  • Structured data, because each JSON entry could be loaded into relational tables after transformation.

  • Binary data, because the log files are stored as objects in blob storage like images or videos.

  • Semi-structured data, because the JSON format stores a flexible, self-describing schema with the data itself.

  • Unstructured data, because blobs in Azure do not enforce any schema on the stored files.
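
For illustration, the snippet below parses two invented log lines whose key sets differ. Because each JSON record carries its own keys, a consumer can read fields defensively without a fixed, pre-declared schema.

```python
import json

# Two hypothetical log entries: valid JSON, but the key sets differ.
lines = [
    '{"level": "INFO", "msg": "started", "ts": "2024-05-01T08:00:00Z"}',
    '{"level": "ERROR", "msg": "timeout", "ts": "2024-05-01T08:02:11Z", "retries": 3}',
]

for line in lines:
    record = json.loads(line)           # the schema travels with the data
    retries = record.get("retries", 0)  # absent fields handled per record
    print(record["level"], record["msg"], retries)
```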

Question 18 of 20

Your organization uses Azure to build a cloud-based analytics solution. Management asks you to identify the role that predominantly designs, builds, and maintains the data ingestion and transformation pipelines that supply the enterprise data warehouse. Which job role fits this responsibility?

  • Database administrator

  • Data engineer

  • Solutions architect

  • Data analyst

Question 19 of 20

In the context of data representation options available in Azure solutions, which statement correctly distinguishes semi-structured data from structured data?

  • It stores records as edges and nodes optimized for traversing relationships in a property graph model.

  • It consists purely of binary blobs such as video files with no meaningful metadata or tags.

  • It is stored in fixed-width columns and must adhere to a rigid table schema defined in a relational database.

  • It has no predefined schema but uses self-describing tags or key-value pairs to organize fields, as in JSON files.

Question 20 of 20

Your organization exports daily social media feeds containing free-form text, hashtags, emoji characters, and attached image files directly into Azure Blob Storage. No consistent schema is applied, and the files cannot be queried using traditional SQL until they are processed. Which characteristic of data does this scenario illustrate?

  • The data is relational because it is stored in cloud object storage.

  • The data is semi-structured because it uses key-value tags to label each field.

  • The data is time-series because each post arrives with a timestamp.

  • The data lacks a predefined schema and must be interpreted during processing.