Microsoft Azure Data Fundamentals Practice Test (DP-900)
Use the form below to configure your Microsoft Azure Data Fundamentals Practice Test (DP-900). The practice test can be configured to include only certain exam objectives and domains. You can choose from 5 to 100 questions and set a time limit.

Microsoft Azure Data Fundamentals DP-900 Information
The Microsoft Azure Data Fundamentals certification exam is designed to validate foundational knowledge of core data concepts and how they are implemented using Microsoft Azure data services. It is ideal for individuals who are new to data workloads and cloud environments. Working through DP-900 practice tests and practice exams and reviewing many practice questions can help candidates build confidence, familiarize themselves with exam language, and strengthen their grasp of key topics.
The exam covers four major domains: describing core data concepts (25-30%), identifying considerations for relational data on Azure (20-25%), describing non-relational data on Azure (15-20%), and describing analytics workloads on Azure (25-30%). To prepare effectively, leverage full-length practice exams and targeted practice questions focused on each domain; this will help you identify weak areas, improve your timing, and enhance your readiness for the real exam experience.
Practice Exams & Practice Questions
Success on the DP-900 exam isn't just about recalling facts; you'll also need to apply them under timed conditions. Using DP-900 practice tests helps simulate the exam environment, while drilling practice questions for each objective ensures your understanding is solid. Practice exams expose you to question types like case studies, drag-and-drop, multiple-choice, and multiple-response, allowing you to manage pacing and reduce surprises on exam day. With consistent work on practice exams and practice questions, you'll go into the exam with increased confidence and reduce the chance of needing a retake.

Free Microsoft Azure Data Fundamentals DP-900 Practice Test
- 20 Questions
- Unlimited
- Describe core data concepts
- Identify considerations for relational data on Azure
- Describe considerations for working with non-relational data on Azure
- Describe an analytics workload
Your organization needs an analytical store in Azure that can hold many terabytes of structured data and execute complex SQL queries by distributing the workload across multiple compute nodes. Which Azure service is designed for this Massively Parallel Processing (MPP) scenario?
Azure Synapse Analytics dedicated SQL pool
Azure Data Lake Storage Gen2
Azure Cosmos DB
Azure SQL Database single database
Answer Description
Azure Synapse Analytics dedicated SQL pool (formerly Azure SQL Data Warehouse) is built for large-scale analytics. It stores data in a distributed manner and uses Massively Parallel Processing (MPP) to run complex, high-performance SQL queries across many nodes, supporting petabyte-scale workloads. Azure SQL Database is optimized for transactional (OLTP) or smaller analytical workloads on a single engine, not for MPP at massive scale. Azure Cosmos DB is a globally distributed NoSQL database aimed at operational workloads rather than large SQL analytics. Azure Data Lake Storage Gen2 provides scalable, low-cost file storage but does not include a built-in SQL query engine or MPP compute layer.
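The scatter/gather idea behind MPP can be sketched in a few lines of Python. This is purely illustrative (not an Azure API): each "compute node" aggregates only its own distribution of the rows, and a control node merges the partial results, which is how a dedicated SQL pool parallelizes a `GROUP BY` across nodes.

```python
# Illustrative MPP sketch: partition rows across "nodes", aggregate locally,
# then merge partial aggregates on a control node. All names are hypothetical.
from collections import defaultdict

rows = [("bikes", 120), ("helmets", 40), ("bikes", 80), ("locks", 15)]

def partial_aggregate(partition):
    """Each 'node' aggregates only the rows in its own distribution."""
    totals = defaultdict(int)
    for category, amount in partition:
        totals[category] += amount
    return totals

# Distribute rows across two hypothetical compute nodes.
partitions = [rows[0::2], rows[1::2]]
partials = [partial_aggregate(p) for p in partitions]

# The control node combines partial aggregates into the final result.
final = defaultdict(int)
for partial in partials:
    for category, amount in partial.items():
        final[category] += amount

print(dict(final))  # {'bikes': 200, 'helmets': 40, 'locks': 15}
```

Because each partial aggregate touches only a fraction of the data, adding nodes shortens the scan time roughly linearly, which is why MPP suits terabyte-scale analytics.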
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI-generated content may display inaccurate information; always double-check anything important.
What is Massively Parallel Processing (MPP)?
How does Azure Synapse Analytics dedicated SQL pool store data?
What differentiates Azure Synapse Analytics from Azure SQL Database for analytical workloads?
You administer an Azure SQL Database. Developers repeatedly submit a multitable join to produce a product sales summary. They want to refer to that result set by a single object name without duplicating data. Which database object best meets this requirement?
Create a nonclustered index on the joined columns.
Create an AFTER INSERT trigger to populate a summary table.
Create a view based on the join.
Create a table-valued function that returns the join results.
Answer Description
A view is defined by a saved SELECT statement and behaves like a virtual table. Because the data is retrieved from the underlying tables at query time, no duplicate data is stored. An index stores only key information to speed up lookups, a trigger executes automatically in response to data-modifying events, and a table-valued function returns results through an invocable function rather than appearing as a standalone virtual table. Therefore, using a view is the most appropriate way for developers to reuse the complex join through a single object name without copying data.
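The principle generalizes beyond Azure SQL Database. Here is a minimal sketch using SQLite (standing in for Azure SQL; table and view names are invented) showing that a view saves a multitable join under one object name without duplicating any data:

```python
# SQLite sketch: a view is a saved SELECT that behaves like a virtual table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE sales (product_id INTEGER, amount REAL);
    INSERT INTO products VALUES (1, 'Widget'), (2, 'Gadget');
    INSERT INTO sales VALUES (1, 10.0), (1, 5.0), (2, 7.5);

    -- The multitable join is defined once and reused by name;
    -- no rows are copied, the join runs at query time.
    CREATE VIEW product_sales_summary AS
    SELECT p.name, SUM(s.amount) AS total_sales
    FROM products p JOIN sales s ON s.product_id = p.id
    GROUP BY p.name;
""")

rows = conn.execute(
    "SELECT name, total_sales FROM product_sales_summary ORDER BY name"
).fetchall()
print(rows)  # [('Gadget', 7.5), ('Widget', 15.0)]
```

Developers now query `product_sales_summary` like any table, and changes to the underlying rows are reflected immediately because nothing was materialized.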
Ask Bash
What is an Azure SQL Database view?
How does a view differ from a table-valued function?
Why not use an AFTER INSERT trigger or nonclustered index instead of a view?
You have a dataset with the columns Product Category, Year, and Total Sales. You must display how total sales for each category change across years so users can easily compare yearly trends. Which Power BI visual is most appropriate for this requirement?
Donut chart
Line chart
Card visual
Scatter chart
Answer Description
Line charts are designed to plot numeric measures against an ordered axis, typically time, making it easy to observe and compare trends for multiple series such as product categories across years. Donut charts emphasize part-to-whole relationships at a single point in time, not changes over time. Scatter charts are ideal for showing correlation between two quantitative measures, not time-based trends. A card visual only shows a single aggregated value, so it cannot illustrate year-over-year changes.
Ask Bash
Why is a line chart better for showing changes over time compared to other visuals?
What makes a scatter chart inappropriate for time-based trends?
Can a donut chart show trends if used for multiple time points?
Which of the following characteristics most clearly indicates that a database workload is transactional rather than analytical?
It requires every write operation to be committed using ACID properties to maintain data consistency.
It stores raw, semi-structured logs in a data lake using schema-on-read.
It primarily refreshes read-only dashboards with nightly batch loads.
It focuses on scanning petabytes of historical data to produce complex aggregations.
Answer Description
Transactional (OLTP) workloads center on many small data modifications that must succeed or fail as a single unit, so they depend on ACID properties such as atomic commit and strong consistency. The other options describe analytical patterns: large-scale scans and aggregations, schema-on-read storage of raw data in a data lake, or periodic batch refreshes for dashboards. These scenarios emphasize read-heavy analytics over write-intensive, consistency-critical operations and therefore are not typical of transactional workloads.
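The "succeed or fail as a single unit" property can be demonstrated concretely. This sketch uses SQLite (not Azure SQL; the account data is invented) to show atomic commit: when a failure interrupts a two-step transfer, the whole transaction rolls back and no partial write survives.

```python
# SQLite sketch of ACID atomicity, the defining trait of OLTP workloads:
# both writes of a transfer commit together, or neither does.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute(
            "UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
        # Simulated crash before the matching credit to bob is written:
        raise RuntimeError("failure mid-transaction")
except RuntimeError:
    pass

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 50} - the debit was rolled back
```

An analytical store has no need for this machinery; it mostly reads already-committed history, which is why ACID write semantics signal a transactional workload.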
Ask Bash
What are ACID properties in databases?
How are transactional workloads different from analytical workloads?
What is the difference between schema-on-write and schema-on-read?
You are reviewing objects in an Azure SQL Database. Which database object is primarily used to pre-compute and persist an ordered data structure that the query optimizer can use to accelerate data retrieval, without storing duplicate copies of the table's rows?
Trigger
View
Index
Stored procedure
Answer Description
An index stores key values and row pointers in an ordered structure that the query optimizer can quickly search, reducing the I/O needed to locate requested rows. Views are virtual tables that hold no persisted structure, triggers run automatically in response to data-modification events, and stored procedures are reusable batches of T-SQL code. None of these alternatives provide the persisted, searchable ordering that an index supplies, so they do not speed data access in the same way.
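You can watch the optimizer pick up an index without any change to the query. This sketch uses SQLite's `EXPLAIN QUERY PLAN` (standing in for Azure SQL Database's execution plans; table and index names are invented):

```python
# SQLite sketch: an index is a separate ordered structure the optimizer
# can choose; the query text itself never changes.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [(f"cust{i % 100}", i * 1.0) for i in range(1000)])

query = "SELECT total FROM orders WHERE customer = 'cust42'"

before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
conn.execute("CREATE INDEX ix_orders_customer ON orders (customer)")
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(before[0][-1])  # plan before the index: a full table scan
print(after[0][-1])   # plan after: a search using ix_orders_customer
```

The table's rows are never duplicated; only the ordered key structure is persisted, which is exactly the trade-off the question describes.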
Ask Bash
What is the purpose of an index in a database?
How does an index differ from a view in Azure SQL Database?
What types of indexes exist in Azure SQL Database?
You plan to store semi-structured JSON documents in Azure Cosmos DB. You want to query the data by using a familiar SQL-like syntax and benefit from automatic indexing without defining a schema. Which Azure Cosmos DB API should you select?
Cassandra API
Core (SQL) API
Azure Cosmos DB for MongoDB API
Gremlin API
Answer Description
The Core (SQL) API (also called Azure Cosmos DB for NoSQL) is purpose-built for JSON documents. It automatically indexes all properties, requires no schema definition, and lets you query the items with a SQL-like language. The MongoDB, Cassandra, and Gremlin APIs expose wire protocols that mimic those respective databases and therefore use their own query languages rather than SQL-like syntax in Azure Cosmos DB.
Ask Bash
What is Azure Cosmos DB Core (SQL) API used for?
How does automatic indexing work in Azure Cosmos DB?
Can you use SQL-like syntax in other Azure Cosmos DB APIs?
Your company is building a photo-sharing website hosted on Azure. Each uploaded picture must be stored as an individual object, remain highly available, be accessible directly over HTTPS by a unique URL, and automatically move to cooler storage tiers as it ages to reduce costs. Which Azure storage service is the best fit for this scenario?
Azure File shares
Azure Blob storage
Azure Table storage
Azure Managed Disks
Answer Description
Azure Blob storage is Microsoft's object storage service for unstructured data such as images and videos. Blobs are stored as individual objects, can hold associated metadata, and are directly accessible over HTTP/HTTPS via unique URLs, which makes them ideal for public web or mobile access. Blob storage also supports built-in lifecycle management policies that automatically transition data between hot, cool, and archive tiers, helping control long-term storage costs.
Azure File shares are optimized for SMB/NFS file shares rather than public web access. Azure Table storage is a NoSQL key-value store for structured datasets, not large binary files. Azure Managed Disks provide block storage for virtual machines and are not designed for direct URL access or automatic tiering across hot, cool, and archive layers.
Ask Bash
What is Azure Blob storage and how does it work?
What are the differences between Azure Blob storage's hot, cool, and archive tiers?
How does Azure Blob storage ensure high availability for stored objects?
An organization is designing an Azure-based analytics platform. The team wants to ingest large volumes of raw structured and unstructured files, keep storage independent from processing clusters, and rely on open file formats such as Parquet for future analysis. Which analytical data store should they implement?
An operational relational database hosted in Azure SQL Database
A dedicated SQL pool in Azure Synapse Analytics
A data lake that uses Azure Data Lake Storage Gen2
A globally distributed NoSQL database using Azure Cosmos DB
Answer Description
A data lake built on Azure Data Lake Storage Gen2 is designed for large-scale analytics scenarios that require cost-effective storage of raw data in many formats. Because storage is separated from the compute engines that later process the data, the lake can scale independently and remain available to multiple analytics services that understand open standards like Parquet. Azure SQL Database, Azure Synapse dedicated SQL pools, and Azure Cosmos DB are optimized for structured or operational workloads and do not store raw files in open formats while keeping compute fully decoupled from storage, so they do not satisfy all stated requirements.
Ask Bash
What is a data lake, and how does it differ from a database?
What is Azure Data Lake Storage Gen2?
What are ‘open file formats’ and why are they important for analytics?
You need to store high-resolution marketing images in Azure. Each image is downloaded often during the first month after upload; afterwards, it is rarely accessed but must remain available on demand at the lowest possible storage cost. Which Azure Blob storage feature should you use to meet this requirement without moving the data to another service?
Upgrade the storage account to use geo-redundant replication.
Enable Azure Files caching for the image container.
Store the images in Azure Table storage instead of blobs.
Move the blobs to the Cool access tier after 30 days.
Answer Description
Azure Blob storage lets you change a blob's access tier. After the initial period of frequent downloads, assigning the Cool tier lowers the per-GB storage price while accepting a higher per-access charge, making it cost-effective for infrequently used data that still needs online availability. Azure Files caching does not reduce storage cost for blobs, Table storage cannot hold binary large objects efficiently, and choosing geo-redundant replication addresses durability rather than tiered cost management.
Ask Bash
What is Azure Blob Storage?
What is the difference between the Hot and Cool access tiers in Azure Blob Storage?
Why is Azure Blob Storage better for binary large object (BLOB) data compared to Azure Table Storage?
A marketing team wants to build an interactive dashboard in Power BI by simply typing questions such as "What were online sales last quarter?" and having the visual update automatically. Which Power BI capability should you recommend to meet this requirement?
Row-level security
Q&A visual (natural language query)
Quick Insights
Power Query Editor
Answer Description
Power BI Q&A lets users type natural-language questions and instantly get answers displayed as visuals that can be pinned to dashboards. Quick Insights automatically searches for statistical patterns, Power Query Editor shapes and transforms data, and row-level security restricts data access; none of those features support real-time natural-language querying.
Ask Bash
How does Power BI Q&A understand natural language queries?
What types of visuals can Power BI Q&A generate automatically?
Can you customize the dataset for better Q&A results in Power BI?
You need to store several terabytes of application log files and user-uploaded images. The data is unstructured, must be retrievable over HTTPS through REST APIs, and should automatically transition to cooler storage tiers as it ages. Which Azure storage service should you use?
Azure Table storage
Azure File storage
Azure Queue storage
Azure Blob storage
Answer Description
Azure Blob storage is Microsoft's object storage service for unstructured data such as text, images, and log files. It is accessible through REST-based endpoints and SDKs, supports HTTPS, and offers built-in lifecycle management with hot, cool, and archive access tiers that automatically move data based on defined rules. Azure File storage provides SMB/NFS file shares, not optimized for massive unstructured data or lifecycle tiering. Azure Table storage is a NoSQL key-value store optimized for structured datasets, while Azure Queue storage is designed for message queuing, not bulk object storage.
Ask Bash
What is unstructured data in the context of Azure Blob Storage?
What are the access tiers in Azure Blob Storage, and how do they work?
What are REST APIs, and how do they enable access to Azure Blob Storage?
Your company is designing a large-scale analytics architecture in Azure. The team needs a storage layer that can ingest and keep raw structured, semi-structured, and unstructured data without applying a schema up front, while scaling to petabytes at a comparatively low cost. Which type of analytical data store should they choose?
An in-memory cache hosted on Azure Cache for Redis
A relational data warehouse using Azure Synapse dedicated SQL pool
A data lake built on Azure Data Lake Storage Gen2
A NoSQL key-value store implemented with Azure Cosmos DB Table API
Answer Description
A data lake built on Azure Data Lake Storage Gen2 is designed for schema-on-read scenarios, allowing any data type to be stored in its raw form. It offers petabyte-scale capacity and is priced for bulk storage, making it suitable for large-scale analytics. A dedicated SQL pool in Azure Synapse is a relational data warehouse that enforces schema-on-write and is optimized for structured data only. Azure Cosmos DB's Table API is a NoSQL key-value store tuned for operational workloads rather than analytical, large-volume data processing. Azure Cache for Redis is an in-memory cache intended for low-latency data retrieval, not for persistent, low-cost, petabyte-scale storage.
Ask Bash
What is Azure Data Lake Storage Gen2?
What does schema-on-read mean, and how does it work?
When would you use a NoSQL database like Azure Cosmos DB instead of a data lake?
Your cloud app writes a new line of diagnostic data to the same file in Azure Blob Storage every minute. The existing content is never modified; data is only added to the end of the file. Which Azure blob type should you choose to optimize storage and write performance for this append-only pattern?
Azure File share
Append blob
Block blob
Page blob
Answer Description
Append blobs are optimized for append-only workloads such as logging. Each append operation adds a new block to the end of the blob, avoiding the need to upload or rewrite the entire object. Block blobs are intended for general purpose object storage and require reuploading changed blocks. Page blobs are designed for frequent random reads/writes, such as virtual hard disks. Azure File shares are a different service that exposes the SMB protocol and are not blob types.
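The append-only pattern itself is easy to picture with plain file I/O. This sketch uses a local file in place of an append blob (so it is an analogy, not the Azure SDK): each write adds bytes at the end, and earlier bytes are never touched, which is what the Append Block operation optimizes.

```python
# Local-file analogy for an append blob: open in append mode, add a line
# per interval, never rewrite existing content.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "diag.log")

for minute in range(3):
    with open(path, "a") as f:       # "a" = append-only, like Append Block
        f.write(f"minute {minute}: ok\n")

with open(path) as f:
    lines = f.readlines()
print(len(lines))  # 3 appended lines; nothing was rewritten
```

With a block blob, by contrast, updating the object would mean re-uploading changed blocks, which is wasteful when only the tail grows.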
Ask Bash
What are Append blobs and how do they function?
How is a Block blob different from an Append blob?
What are the other blob types, and when should they be used?
A developer stores sensor readings as JSON files in Azure Blob Storage. Each file contains key-value pairs, but no fixed table schema is enforced. How should this data be classified for reporting purposes?
Unstructured data
Semi-structured data
Binary large objects (BLOBs)
Structured data
Answer Description
JSON documents contain tags or keys that describe each value, so the data carries some self-describing structure. However, the layout is not bound to rigid rows and columns like a relational table. Because it has an irregular but identifiable structure, JSON is considered semi-structured data. Structured data would exist in a predefined tabular schema, while unstructured data such as free-form text or images lacks recognizable fields. The term "binary large objects (BLOBs)" refers to a storage format rather than a data structure classification.
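A short sketch makes the "self-describing but schema-free" point concrete (the sensor readings are invented): each document names its own fields, documents need not share the same fields, and the consumer picks the fields that matter at read time.

```python
# Semi-structured data: keys describe the values, but no fixed table
# schema is enforced across documents.
import json

readings = [
    '{"sensor": "t-100", "temp_c": 21.5}',
    '{"sensor": "t-200", "temp_c": 19.0, "humidity": 0.43}',  # extra field ok
]

docs = [json.loads(r) for r in readings]

# Schema-on-read: the consumer decides which fields matter at query time.
temps = {d["sensor"]: d["temp_c"] for d in docs}
print(temps)  # {'t-100': 21.5, 't-200': 19.0}
```

A relational table would have rejected the second document or forced a nullable `humidity` column up front; JSON defers that decision, which is the hallmark of semi-structured data.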
Ask Bash
What is semi-structured data in Azure Blob Storage?
How does JSON differ from structured and unstructured data?
How is Azure Blob Storage used to store and manage semi-structured data?
Your Power BI data model includes a Date table with separate Year, Quarter, Month, and Day columns. Report users want to drill down from yearly totals to specific days within a single column chart, without manually adding columns in the visual. Which data-modeling feature should you configure to meet this requirement?
Create a hierarchy that contains the Year, Quarter, Month, and Day columns
Write a DAX measure to return totals at different date levels
Apply row-level security based on the Date table
Define a calculated column that concatenates Year, Quarter, Month, and Day
Answer Description
A hierarchy combines related columns into a single expandable structure. When you add the hierarchy to a visual, Power BI automatically provides drill-down and drill-up actions, letting users move from Year to Quarter, Month, and Day without manually placing each column. A calculated column or DAX measure would not provide interactive drill actions, and row-level security controls data access, not navigation.
Ask Bash
What is a hierarchy in Power BI?
How do you create a hierarchy in Power BI?
What is the difference between a hierarchy and a calculated column in Power BI?
A retailer plans to load several terabytes of historical point-of-sale data into an Azure Synapse Analytics dedicated SQL pool so that analysts can run quarterly sales trend reports. Which feature is typical of this analytical workload?
Queries are mostly read-intensive and return aggregated results across large data sets.
Data must be updated in real time with sub-second latency.
Strict row-level locking is required to prevent concurrent reads.
The workload consists of many small, short-lived transactions that modify individual rows.
Answer Description
Analytical workloads focus on discovering insights from large volumes of data that has already been collected. They are dominated by complex, read-intensive queries that scan many rows and return aggregated results such as trends and patterns. In contrast, transactional workloads process numerous small, short-lived write operations, often require sub-second response times, and depend on strict locking to ensure consistency for concurrent updates, all of which are less important for analytical scenarios.
Ask Bash
What is Azure Synapse Analytics dedicated SQL pool?
What distinguishes analytical workloads from transactional workloads?
Why are aggregated results important in analytical workloads?
A company lands high-volume sensor data as CSV files in Azure Data Lake Storage Gen2. Each night the data must be converted to Parquet, cleaned, and loaded into an Azure Synapse Analytics dedicated SQL pool for reporting. Which Azure data role typically designs and schedules the pipelines that perform this ETL process?
Security engineer
Data engineer
Data scientist
Database administrator
Answer Description
Building and orchestrating data ingestion and transformation pipelines is a core responsibility of the data engineer role. Data engineers use services such as Azure Data Factory or Azure Synapse pipelines to extract data from sources, convert formats, cleanse it, and load it into analytical stores. Other roles focus on database administration, advanced analytics, or security rather than ETL workflows.
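The extract-transform-load pattern a data engineer would schedule in Azure Data Factory can be sketched end to end with the standard library. This is an analogy, not Azure code: an in-memory CSV stands in for the data lake files, SQLite stands in for the dedicated SQL pool, and all names are invented.

```python
# Minimal ETL sketch: extract raw CSV, transform (trim/cast/drop bad rows),
# load into a SQL store for reporting.
import csv
import io
import sqlite3

raw_csv = io.StringIO("sensor_id,reading\n s1 ,21.5\ns2,19.0\ns3,bad\n")

# Extract: read the raw landed file.
rows = list(csv.DictReader(raw_csv))

# Transform: clean whitespace, cast types, discard unparseable readings.
clean = []
for r in rows:
    try:
        clean.append((r["sensor_id"].strip(), float(r["reading"])))
    except ValueError:
        continue  # malformed row dropped during cleansing

# Load: write the cleaned rows into the analytical store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor_id TEXT, reading REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)", clean)

count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(count)  # 2 clean rows loaded; the malformed one was dropped
```

In production the same three stages would be activities in a scheduled Data Factory or Synapse pipeline, with Parquet conversion happening in the transform step.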
Ask Bash
What is ETL in data engineering?
What tools can data engineers use to design and schedule pipelines in Azure?
What is the purpose of converting sensor data from CSV to Parquet?
You need to choose an Azure service to store large volumes of raw application log files produced every day in JSON and CSV formats. The files are several megabytes to gigabytes in size, must be accessed over HTTPS, should support tiered storage to reduce cost, and will later be ingested by Azure Data Factory for analytics. Which Azure storage option should you use?
Azure SQL Database
Azure Blob Storage
Azure Queue Storage
Azure Table Storage
Answer Description
Azure Blob Storage is Microsoft's object store for unstructured data such as text or binary files. It supports Hot, Cool, and Archive tiers for cost optimization, can store individual objects up to several terabytes, provides HTTPS endpoints, and is natively supported as a source by Azure Data Factory. Azure Table Storage is a NoSQL key-value store, Azure Queue Storage is for message queuing, and Azure SQL Database is optimized for structured relational data rather than large log files.
Ask Bash
What is Azure Blob Storage used for?
How does Azure Blob Storage tiering work?
Why is Azure Table Storage not suitable for storing log files?
An application experiences short, unpredictable bursts of traffic and then remains idle for several hours. You need a Microsoft-hosted relational service that can automatically pause compute during idle periods, resume in roughly a minute when new requests arrive, and bill you only for the compute actually used. Which Azure service meets these requirements?
Azure SQL Database with the serverless compute tier
Azure SQL Managed Instance (General Purpose tier)
SQL Server installed on an Azure Virtual Machine
Azure Database for PostgreSQL Flexible Server
Answer Description
The serverless compute tier of Azure SQL Database automatically scales up and down based on workload demand. After a configurable idle period (minimum 1 hour) the database pauses, and during that time you pay only for storage. When activity resumes, the service automatically restarts within about a minute, so you are charged for compute only while it is in use. Azure SQL Managed Instance and SQL Server on Azure Virtual Machines run continuously and accrue compute charges whether or not they are busy. Azure Database for PostgreSQL Flexible Server lets you stop the server manually or on a schedule to avoid compute charges, but it does not automatically pause in response to inactivity and therefore does not match the requirement for on-demand auto-pause and auto-resume.
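The cost advantage of auto-pause is simple arithmetic. The sketch below uses hypothetical numbers (the rate and hours are invented, not Azure's published prices) to show why billing compute only while active beats always-on provisioning for bursty workloads:

```python
# Illustrative billing arithmetic only; the per-vCore rate is hypothetical.
active_hours = 3           # bursts of traffic during the day
vcores_used = 2
rate_per_vcore_second = 0.000145  # made-up USD rate for illustration

# Serverless: compute is billed only while the database is resumed.
serverless_compute = active_hours * 3600 * vcores_used * rate_per_vcore_second

# Provisioned: the same vCores accrue charges around the clock.
provisioned_compute = 24 * 3600 * vcores_used * rate_per_vcore_second

print(serverless_compute, provisioned_compute)
```

Storage is still billed during the paused hours in both cases; the saving comes entirely from the idle compute, which is why the serverless tier fits the burst-then-idle pattern in the question.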
Ask Bash
What is the Azure SQL Database serverless compute tier?
What is the difference between Azure SQL Database serverless compute tier and Azure SQL Managed Instance?
How does Azure SQL Database serverless compare to manually stopping servers in Azure Database for PostgreSQL Flexible Server?
Your company is deploying a new Azure SQL Database for a line-of-business app. Which activity is most likely assigned to the database administrator role rather than to a data engineer or data analyst?
Configure and manage the database's backup schedule and perform restores when needed.
Develop DAX measures and visualizations in Power BI dashboards for business users.
Train a customer-churn predictive model by using Azure Machine Learning datasets.
Build an end-to-end data pipeline that ingests streaming IoT data into the database.
Answer Description
Database administrators are responsible for the operational health of database systems. This includes setting up and monitoring backup and restore operations, configuring high availability, applying patches, and managing security and access. Building data ingestion pipelines is usually the duty of data engineers, while creating reports with Power BI is a data-analyst task. Training predictive models belongs to data scientists or machine-learning engineers, not database administrators.
Ask Bash
What is the role of a database administrator in Azure SQL Database?
What is high availability in database management?
How do Azure SQL Database backups work?
Gnarly!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.