Microsoft Azure Data Fundamentals Practice Test (DP-900)
Use the form below to configure your Microsoft Azure Data Fundamentals Practice Test (DP-900). The practice test can be configured to only include certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft Azure Data Fundamentals DP-900 Information
The Microsoft Azure Data Fundamentals certification exam is designed to validate foundational knowledge of core data concepts and how they are implemented using Microsoft Azure data services. It is ideal for individuals who are new to data workloads and cloud environments. Working through DP-900 practice tests and practice exams and reviewing many practice questions can help candidates build confidence, familiarize themselves with the exam language, and strengthen their grasp of key topics.
The exam covers four major domains: describing core data concepts (25-30%), identifying considerations for relational data on Azure (20-25%), describing non-relational data on Azure (15-20%), and describing analytics workloads on Azure (25-30%). To prepare effectively, leveraging full-length practice exams and targeted practice questions focused on each domain will help you identify weak areas, improve your timing, and enhance your readiness for the real exam experience.
Practice Exams & Practice Questions
Success on the DP-900 exam isn't just about recalling facts; you'll need to apply them under timed conditions. Using DP-900 practice tests helps simulate the exam environment, while drilling practice questions for each objective ensures your understanding is solid. Practice exams expose you to question types like case studies, drag-and-drop, multiple-choice, and multiple-response, allowing you to manage pacing and reduce surprises on exam day. With consistent work on practice exams and practice questions, you'll go into the exam with increased confidence and reduce the chance of needing a retake.

Free Microsoft Azure Data Fundamentals DP-900 Practice Test
- 20 Questions
- Unlimited time
- Describe core data concepts
- Identify considerations for relational data on Azure
- Describe considerations for working with non-relational data on Azure
- Describe an analytics workload
Free Preview
This test is a free preview; no account is required.
A company wants to store millions of IoT sensor readings that arrive with slightly different JSON fields and must be retrieved quickly by device ID and timestamp. Which type of database best meets this requirement?
A cloud data warehouse, for example Azure Synapse Analytics
A graph database, for example Azure Cosmos DB Gremlin API
A document database, for example Azure Cosmos DB with the Core (SQL) API
A relational database, for example Azure SQL Database
Answer Description
Document databases are designed to store semi-structured data such as JSON without requiring a fixed relational schema. Azure Cosmos DB using the Core (SQL) API lets you ingest records that differ in structure and still query them efficiently on properties like device ID and timestamp. Relational databases expect a predefined table schema, graph databases are optimized for relationship traversal rather than time-series ingestion, and data warehouses focus on aggregated analytical workloads rather than rapid operational writes from devices.
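To make this concrete, here is a minimal sketch of such a lookup using the azure-cosmos Python SDK. The account URL, key, database, container, and property names are placeholders invented for the example, not details from the question.

```python
# Minimal sketch: querying a Cosmos DB (Core/SQL API) container for one
# device's recent readings. Install with: pip install azure-cosmos
from azure.cosmos import CosmosClient

client = CosmosClient(
    "https://<your-account>.documents.azure.com:443/",  # placeholder account
    credential="<your-key>",
)
container = client.get_database_client("telemetry").get_container_client("readings")

# Documents may have differing JSON shapes, yet shared properties such as
# deviceId and timestamp remain directly queryable.
items = container.query_items(
    query=(
        "SELECT * FROM c "
        "WHERE c.deviceId = @deviceId AND c.timestamp >= @since"
    ),
    parameters=[
        {"name": "@deviceId", "value": "sensor-042"},
        {"name": "@since", "value": "2024-01-01T00:00:00Z"},
    ],
    enable_cross_partition_query=True,
)
for item in items:
    print(item["deviceId"], item["timestamp"])
```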
A project team must decide whether an upcoming workload should be classified as transactional or analytical. Which characteristic would most strongly indicate that the workload is transactional rather than analytical?
It performs many small insert and update operations that must commit atomically in real time.
It runs long-running queries that aggregate several months of historical data.
It loads data from several sources during a nightly batch process for reporting.
It stores data in a columnar format optimized for large scans.
Answer Description
Transactional workloads are designed to support frequent, small data modifications that must be completed reliably with full ACID guarantees. The need for many short insert, update, or delete operations that commit atomically is therefore a hallmark of a transactional system. Long-running aggregations, nightly batch loading, and column-oriented storage are all typical of analytical (OLAP) workloads that focus on large-scale reading and reporting rather than real-time data modification.
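As an illustration of the pattern, the sketch below uses Python's built-in sqlite3 module; the accounts table and amounts are invented, but the commit-together-or-roll-back-together behavior is exactly what distinguishes transactional writes.

```python
# Illustrative OLTP write: two small updates that must commit atomically.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO accounts VALUES (1, 100.0), (2, 50.0)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 25 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 25 WHERE id = 2")
except sqlite3.Error:
    pass  # on failure, neither update is applied: atomicity

print(conn.execute("SELECT * FROM accounts").fetchall())
```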
Your organization needs someone to design, build, and schedule an automated pipeline that loads daily CSV files from Azure Blob Storage into an Azure Synapse Analytics dedicated SQL pool. Which workforce role is typically responsible for creating and maintaining such data ingestion pipelines?
Database administrator
Security engineer
Data analyst
Data engineer
Answer Description
Designing and operating data ingestion pipelines is a core duty of a data engineer. Data engineers create, monitor, and optimize data loading and transformation processes that move data between storage and analytics systems. A data analyst focuses on exploring data and building reports, while a database administrator manages the operational health, security, and backups of database systems. A security engineer concentrates on overall security posture and access control, not on data pipeline design.
You are classifying a new application workload in Azure. Which characteristic would most clearly indicate that the workload is transactional rather than analytical?
It stores data in columnar format to optimize large table scans.
It must support many small, concurrent inserts and updates that require full ACID guarantees.
It runs long-running batch queries that aggregate months of historical data.
Its primary workload is refreshing read-only dashboards once per day.
Answer Description
Transactional (OLTP) workloads typically handle a large number of short, interactive operations such as inserts, updates, and deletes, and they must preserve ACID (atomicity, consistency, isolation, durability) properties to ensure data integrity for each individual business transaction. Analytical (OLAP) workloads, in contrast, focus on reading large volumes of historical data, executing complex aggregations, and are often batch-oriented or read-only in nature. Therefore, the presence of many small, concurrent write operations that require strict consistency is a strong signal that the workload is transactional. The other options describe characteristics more typical of analytical workloads, such as large-scale aggregations, columnar storage optimized for scans, or infrequent, read-only dashboard refreshes.
Your organization plans to move an on-premises SQL Server database to Azure without modifying the database schema or existing SQL Server Agent jobs. The operations team wants Microsoft to manage operating-system patching, while administrators still need to restore native SQL Server .bak files. Which Azure relational data service best meets these requirements?
Azure SQL Managed Instance
Azure SQL Database (single database)
Azure Database for PostgreSQL
SQL Server on an Azure virtual machine
Answer Description
Azure SQL Managed Instance is a platform-as-a-service (PaaS) offering that delivers nearly 100 percent compatibility with the SQL Server engine. It supports SQL Server Agent, cross-database queries, and native restoration of .bak files from on-premises SQL Server. Because it is a managed service, Microsoft handles operating-system patching and most routine maintenance tasks.
Azure SQL Database (single database) lacks SQL Server Agent and native restore capabilities. Running SQL Server on an Azure virtual machine would preserve full compatibility and native restore, but the customer, not Microsoft, remains responsible for OS patching and maintenance. Azure Database for PostgreSQL is a PostgreSQL service and does not support SQL Server workloads. Therefore, Azure SQL Managed Instance is the most appropriate choice.
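For a sense of what the native restore looks like, here is a hedged sketch using pyodbc and the documented RESTORE ... FROM URL syntax. The server address, credentials, database name, and storage URL are placeholders, and a credential granting the instance access to the storage account is assumed to exist already.

```python
# Sketch: restoring a native .bak file into Azure SQL Managed Instance.
# Requires: pip install pyodbc, plus an ODBC driver for SQL Server.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<your-instance>.<dns-zone>.database.windows.net;"  # placeholder
    "UID=<admin-login>;PWD=<password>",
    autocommit=True,  # RESTORE cannot run inside a user transaction
)
conn.execute(
    "RESTORE DATABASE [SalesDb] "
    "FROM URL = 'https://<storage>.blob.core.windows.net/backups/SalesDb.bak'"
)
```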
You are building an Azure Data Lake solution that will be queried by Spark for large-scale analytics. To minimize storage costs and improve scan performance, you want a columnar, compressed, open-source file format. Which format best meets these requirements?
Parquet
Avro
JSON
CSV
Answer Description
Parquet is an open-source, columnar storage format that stores data by column rather than by row. This layout enables efficient compression and allows scan operations to read only the columns that a query needs, reducing both I/O and storage costs. These advantages make Parquet a common choice for big-data analytics engines such as Apache Spark.
CSV and JSON store data in a row-oriented manner and provide no built-in column pruning or advanced compression, so they offer lower analytical performance and typically larger file sizes. Avro is a binary, row-based format that supports schema evolution but is not columnar, so it does not provide the same scan-time benefits required in this scenario.
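A short PySpark sketch of the kind of conversion described here; the abfss:// paths and column names are placeholders.

```python
# Sketch: convert CSV files in a data lake to Parquet, then read back only
# the columns a query needs (column pruning).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("csv-to-parquet").getOrCreate()

df = spark.read.option("header", True).csv(
    "abfss://raw@<account>.dfs.core.windows.net/sensors/"
)
df.write.mode("overwrite").parquet(
    "abfss://curated@<account>.dfs.core.windows.net/sensors/"
)

# Only the selected columns are scanned from the Parquet files.
readings = spark.read.parquet(
    "abfss://curated@<account>.dfs.core.windows.net/sensors/"
)
readings.select("deviceId", "temperature").show()
```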
An IoT project stores device telemetry as JSON objects that can include new properties at any time. Engineers must query individual documents and filter on nested fields without redesigning a fixed schema. Which type of database is most appropriate for this requirement?
Graph database
Column-family database
Relational database
Document database
Answer Description
A document database stores data as self-describing JSON (or BSON) documents, allowing each record to have its own shape. Because the schema is flexible, new properties can be added without altering a central definition, and queries can directly target nested values inside each document. Column-family databases can add columns dynamically but organize data in rows and column families rather than self-contained JSON documents, so querying nested JSON structures is not native. Relational databases rely on a fixed table schema, and graph databases are optimized for relationship traversal rather than flexible JSON storage. Therefore, a document database best meets the requirements.
You are designing a new web application that stores user profile information as JSON documents. The data layer must automatically index all properties without requiring schema management, support active multi-region writes, and deliver single-digit millisecond response times at any scale. Which Azure data storage option should you choose?
Azure Blob Storage
Azure Database for PostgreSQL
Azure SQL Managed Instance
Azure Cosmos DB
Answer Description
Azure Cosmos DB is Microsoft's globally distributed, multi-model database service. It automatically indexes JSON documents, supports active multi-region replication with multi-master (multi-write) capabilities, and is engineered for single-digit millisecond latency at the 99th percentile. Traditional relational services such as Azure SQL Managed Instance and Azure Database for PostgreSQL offer robust transactional support but require predefined schemas and do not natively provide multi-region writes with the same latency guarantees. Azure Blob Storage is optimized for large binary objects, not for querying JSON documents with automatic indexing or single-digit millisecond read performance.
A development team needs a fully managed relational database service on Azure that is compatible with the open-source MySQL engine they already use on-premises. Which Azure service best meets this requirement?
Azure Database for MySQL
Azure Database for PostgreSQL
Azure SQL Database
Azure Cosmos DB for NoSQL
Answer Description
Azure Database for MySQL is Microsoft's fully managed implementation of the open-source MySQL relational engine. It provides built-in high availability, automated backups, and scaling while maintaining full MySQL compatibility. Azure SQL Database is based on Microsoft SQL Server, not MySQL. Azure Database for PostgreSQL uses the PostgreSQL engine, and Azure Cosmos DB for NoSQL is a multi-model, non-relational database service. Therefore, only Azure Database for MySQL satisfies the requirement.
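Because the service is wire-compatible with MySQL, an existing client library should connect unchanged; here is a sketch with mysql-connector-python, where the server name, user, and database are placeholders.

```python
# Sketch: the same MySQL driver used on-premises pointed at an Azure endpoint.
# Install with: pip install mysql-connector-python
import mysql.connector

conn = mysql.connector.connect(
    host="<server-name>.mysql.database.azure.com",  # placeholder server
    user="<admin-user>",
    password="<password>",
    database="inventory",  # placeholder database
)
cursor = conn.cursor()
cursor.execute("SELECT VERSION()")
print(cursor.fetchone())
```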
A company lands high-volume sensor data as CSV files in Azure Data Lake Storage Gen2. Each night the data must be converted to Parquet, cleaned, and loaded into an Azure Synapse Analytics dedicated SQL pool for reporting. Which Azure data role typically designs and schedules the pipelines that perform this ETL process?
Database administrator
Data engineer
Data scientist
Security engineer
Answer Description
Building and orchestrating data ingestion and transformation pipelines is a core responsibility of the data engineer role. Data engineers use services such as Azure Data Factory or Azure Synapse pipelines to extract data from sources, convert formats, cleanse it, and load it into analytical stores. Other roles focus on database administration, advanced analytics, or security rather than ETL workflows.
You plan to load five years of sales records into Azure Synapse Analytics and create interactive dashboards that show long-term revenue trends. Which characteristic of this solution makes the workload analytical rather than transactional?
It requires every individual transaction to satisfy strict ACID guarantees.
It reads and aggregates large volumes of historical data to identify trends.
It relies on a fully normalized schema designed for write performance.
It performs many small, random insert and update operations that must commit immediately.
Answer Description
Analytical (OLAP) workloads focus on reading and aggregating large volumes of historical data to discover patterns and trends. In contrast, transactional (OLTP) workloads emphasize frequent, small insert or update operations that must commit quickly, use highly normalized schemas, and guarantee strict ACID properties for each individual transaction. Therefore, the defining analytical characteristic in this scenario is the need to scan and summarize extensive historical data, not the requirement for rapid single-row updates or fully normalized storage.
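To illustrate the access pattern, the sketch below aggregates synthetic five-year sales data with pandas; the point is the shape of the operation (scan all history, summarize by period), not the specific tooling.

```python
# Illustrative OLAP-style operation: read everything, aggregate by year.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.date_range("2019-01-01", "2023-12-31", freq="D")
sales = pd.DataFrame({
    "order_date": dates,
    "revenue": rng.uniform(100, 5000, size=len(dates)),  # synthetic values
})

# The whole table is scanned and summarized: typical of analytical queries.
yearly = sales.groupby(sales["order_date"].dt.year)["revenue"].sum()
print(yearly)
```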
A company has consolidated and cleansed sales data in Azure Synapse Analytics. Management now needs interactive dashboards that highlight current sales trends and can be shared with business stakeholders through Power BI. In a modern Azure data project, which role is primarily responsible for creating these dashboards and delivering the insights?
Database administrator
Data analyst
Data engineer
Security engineer
Answer Description
Creating reports, dashboards, and other visualizations that turn prepared data into actionable business insights is a core responsibility of the data analyst role. Data analysts work closely with stakeholders to understand business questions, explore data, build visual models (often in Power BI), and share findings. Database administrators focus on the availability, performance, and security of databases; data engineers design and maintain data ingestion and transformation pipelines; security engineers concentrate on protecting information assets. Therefore, the duty described aligns with the data analyst role.
You are explaining to a colleague what makes a database "relational." Which characteristic is fundamental to the relational data model?
Data consistency is enforced solely by application logic rather than constraints.
Data is organized in two-dimensional tables consisting of rows and columns.
Records in the same table can each have completely different sets of columns.
The database stores data only as key/value pairs without predefined columns.
Answer Description
Relational databases organize information into two-dimensional tables called relations. Each table consists of rows (tuples) and columns (attributes). This tabular structure distinguishes relational systems from key-value, document, or wide-column stores. While primary keys, constraints, and fixed schemas are common practices, the defining feature of the model is storing data in tables of rows and columns that can be logically related through keys.
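A miniature, self-contained example of the model using Python's built-in sqlite3: two invented tables of rows and columns, logically related through a shared key.

```python
# Sketch: relations (tables) of rows and columns, linked through a key.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(customer_id),
        total       REAL
    );
    INSERT INTO customers VALUES (1, 'Adele');
    INSERT INTO orders VALUES (10, 1, 42.50);
""")

# Rows from the two relations are joined through the shared key column.
for row in conn.execute(
    "SELECT c.name, o.total FROM customers AS c "
    "JOIN orders AS o USING (customer_id)"
):
    print(row)
```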
You are designing a new retail web app that will collect product catalog data as JSON documents. The solution must automatically index every property, offer single-digit-millisecond reads and writes worldwide, and let you choose between multiple consistency levels. Which Azure data storage option should you recommend?
Azure SQL Database
Azure Blob Storage
Azure Synapse Analytics dedicated SQL pool
Azure Cosmos DB
Answer Description
Azure Cosmos DB is a globally distributed, multi-model database service that automatically indexes all data by default, delivers single-digit-millisecond latency for reads and writes, and lets you select from several consistency levels (such as strong, bounded-staleness, session, or eventual). Azure SQL Database is a relational service that can store JSON but is optimized for structured, relational workloads and does not provide automatic indexing of every JSON property or the same global distribution and latency guarantees. Azure Blob Storage is object storage for unstructured data and does not provide automatic property indexing or transactional querying capabilities. A dedicated SQL pool in Azure Synapse Analytics is designed for large-scale analytical workloads, not for low-latency operational workloads with fine-grained document access.
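A brief sketch with the azure-cosmos Python SDK showing a consistency level selected at client creation, followed by a filter on a property that was never explicitly indexed; the account details, names, and item values are placeholders.

```python
# Sketch: choosing a consistency level and relying on automatic indexing.
from azure.cosmos import CosmosClient

client = CosmosClient(
    "https://<your-account>.documents.azure.com:443/",  # placeholder
    credential="<your-key>",
    consistency_level="Session",  # e.g. "Strong", "BoundedStaleness", "Eventual"
)
container = client.get_database_client("catalog").get_container_client("products")

container.upsert_item({"id": "sku-001", "brand": "Contoso", "sizes": ["S", "M"]})

# No index management was needed before filtering on 'brand'.
for item in container.query_items(
    query="SELECT * FROM c WHERE c.brand = 'Contoso'",
    enable_cross_partition_query=True,
):
    print(item["id"])
```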
Your company plans to lift-and-shift an on-premises file server to Azure. The migrated store must remain accessible from multiple Windows and Linux clients by using their native SMB or NFS capabilities so that users can map network drives without rewriting applications. Which Azure storage service should you deploy?
Azure Queue storage
Azure Blob storage
Azure File storage
Azure Table storage
Answer Description
Azure File storage provides fully managed file shares that can be mounted concurrently from Windows, Linux, and macOS clients by using the industry-standard SMB protocol, and it also offers NFS 4.1 shares. This lets users map the share as a network drive with no application changes. Azure Blob storage now supports NFS 3.0 on Premium block-blob accounts, but it lacks native SMB support and is intended for object storage rather than traditional file-share scenarios. Azure Table and Azure Queue storage are accessed through REST APIs and cannot be mounted via SMB or NFS. Therefore, Azure File storage is the correct choice for a cross-platform lift-and-shift file-server replacement.
An organization is hiring for several Azure data roles. Which task should be assigned to the data analyst rather than the database administrator or data engineer?
Scheduling automated Azure Data Factory pipelines to ingest raw telemetry from IoT devices.
Building interactive Power BI dashboards that illustrate sales trends for business stakeholders.
Tuning indexes and optimizing query plans on the production Azure SQL Database.
Creating and applying Azure RBAC policies to restrict access to sensitive tables.
Answer Description
Data analysts focus on exploring data and presenting insights. Creating interactive Power BI dashboards to illustrate business metrics is a core analytic activity that enables stakeholders to make data-driven decisions. Scheduling Azure Data Factory pipelines is an engineering responsibility focused on data movement. Tuning indexes and query plans is a database administration task aimed at operational performance. Configuring Azure RBAC policies is typically handled by administrators or security engineers who manage access control, not by analysts.
Your team stores application log files in Azure Blob Storage, with each log entry formatted as a JSON document containing key-value pairs that can vary between records. When cataloging this dataset, which classification best describes the JSON log files?
Structured data, because each JSON entry could be loaded into relational tables after transformation.
Binary data, because the log files are stored as objects in blob storage like images or videos.
Semi-structured data, because the JSON format stores a flexible, self-describing schema with the data itself.
Unstructured data, because blobs in Azure do not enforce any schema on the stored files.
Answer Description
JSON documents embed a flexible, self-describing structure using key-value pairs that may differ across records. Such data is not strictly tabular, yet it contains tags and delimiters that provide partial structure, so it is classified as semi-structured. Structured data would enforce a fixed schema (for example, relational tables). Unstructured data lacks an inherent schema and does not use tags to describe its elements, while a binary classification is reserved for data such as images or audio rather than text-based JSON.
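A tiny illustration with invented log lines: each JSON record carries its own keys, so entries with different fields parse without any shared, predefined schema.

```python
# Two log entries with different fields still parse cleanly: the keys
# (the "schema") travel with each record.
import json

lines = [
    '{"level": "info", "msg": "started", "ts": "2024-05-01T08:00:00Z"}',
    '{"level": "error", "msg": "timeout", "ts": "2024-05-01T08:01:12Z", "retry": 3}',
]
for line in lines:
    record = json.loads(line)
    # Optional fields are simply absent, not empty columns in a fixed table.
    print(record["level"], record.get("retry", "n/a"))
```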
Your organization uses Azure to build a cloud-based analytics solution. Management asks you to identify the role that predominantly designs, builds, and maintains the data ingestion and transformation pipelines that supply the enterprise data warehouse. Which job role fits this responsibility?
Database administrator
Data engineer
Solutions architect
Data analyst
Answer Description
Designing, building, and maintaining data pipelines that ingest, transform, and load data for analytics is the primary responsibility of a data engineer. Database administrators focus on the operational health and security of database systems, data analysts focus on exploring and visualizing data to deliver business insights, and solutions architects provide high-level system design guidance rather than creating the detailed data pipelines themselves.
In the context of data representation options available in Azure solutions, which statement correctly distinguishes semi-structured data from structured data?
It stores records as edges and nodes optimized for traversing relationships in a property graph model.
It consists purely of binary blobs such as video files with no meaningful metadata or tags.
It is stored in fixed-width columns and must adhere to a rigid table schema defined in a relational database.
It has no predefined schema but uses self-describing tags or key-value pairs to organize fields, as in JSON files.
Answer Description
Semi-structured data, such as JSON or XML, does not rely on a rigid table schema. Instead, it carries its own self-describing structure through tags or key-value pairs, allowing records to contain optional or variable attributes. Structured data, by contrast, is organized into relational tables with predefined columns and data types. Unstructured data (for example, video files) lacks an internal organizational model beyond basic metadata, while graph data stores focus on nodes and edges rather than tagged elements. Therefore, the description referring to flexible, self-describing tags is the only accurate characterization of semi-structured data.
Your organization exports daily social media feeds containing free-form text, hashtags, emoji characters, and attached image files directly into Azure Blob Storage. No consistent schema is applied, and the files cannot be queried using traditional SQL until they are processed. Which characteristic of data does this scenario illustrate?
The data is relational because it is stored in cloud object storage.
The data is semi-structured because it uses key-value tags to label each field.
The data is time-series because each post arrives with a timestamp.
The data lacks a predefined schema and must be interpreted during processing.
Answer Description
The described data consists of free-form text and image files that do not follow a predefined structure such as fixed columns or labeled keys. Because there is no upfront schema and the information cannot be queried with standard SQL until it is transformed, the collection is classified as unstructured data. Semi-structured data would include self-describing tags (for example, JSON), relational data would reside in tables with relationships, and time-series data is organized primarily around time-stamped numerical values, none of which match the scenario.
Smashing!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.