Microsoft Azure Data Fundamentals Practice Test (DP-900)
Use the form below to configure your Microsoft Azure Data Fundamentals Practice Test (DP-900). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft Azure Data Fundamentals DP-900 Information
The Microsoft Azure Data Fundamentals certification exam is designed to validate foundational knowledge of core data concepts and how they are implemented using Microsoft Azure data services. It is ideal for individuals who are new to data workloads and cloud environments. Working through DP-900 practice tests and practice exams, and reviewing many practice questions, can help candidates build confidence, familiarize themselves with exam language, and strengthen their grasp of key topics.
The exam covers four major domains: describing core data concepts (25-30%), identifying considerations for relational data on Azure (20-25%), describing non-relational data on Azure (15-20%), and describing analytics workloads on Azure (25-30%). To prepare effectively, leveraging full-length practice exams and targeted practice questions focused on each domain will help you identify weak areas, improve your timing, and enhance your readiness for the real exam experience.
Practice Exams & Practice Questions
Success on the DP-900 exam isn't just about recalling facts; you also need to apply them under timed conditions. Using DP-900 practice tests helps simulate the exam environment, while drilling practice questions for each objective ensures your understanding is solid. Practice exams expose you to question types such as case studies, drag-and-drop, multiple-choice, and multiple-response, allowing you to manage pacing and reduce surprises on exam day. With consistent work on practice exams and practice questions, you'll go into the exam with greater confidence and a lower chance of needing a retake.

Free Microsoft Azure Data Fundamentals DP-900 Practice Test
- 20 Questions
- Unlimited time
- Describe core data concepts
- Identify considerations for relational data on Azure
- Describe considerations for working with non-relational data on Azure
- Describe an analytics workload
Your organization plans to migrate an on-premises PostgreSQL database to Azure. The solution must be a fully managed platform-as-a-service offering that keeps native PostgreSQL engine compatibility so existing drivers and extensions continue to work. Which Azure service should you deploy?
Azure Database for PostgreSQL
Azure SQL Managed Instance
Azure Database Migration Service
Azure Synapse Analytics
Answer Description
Azure Database for PostgreSQL is Microsoft's fully managed PaaS offering for the open-source PostgreSQL engine. Because it runs the community PostgreSQL code base, applications can use the same drivers, tools, and extensions after migration. Azure SQL Managed Instance runs the Microsoft SQL Server engine, not PostgreSQL. Azure Synapse Analytics is a data warehouse service, and Azure Database Migration Service is used for one-time or continuous migrations, not as a hosting platform.
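To illustrate the driver-compatibility point, here is a minimal sketch, assuming hypothetical server, database, and credential values, in which an application connects with the same community psycopg2 driver it used on-premises:

# Minimal sketch: the community psycopg2 driver connects to Azure Database
# for PostgreSQL exactly as it would to any other PostgreSQL server.
# The server name, database, user, and password are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="contoso-pg.postgres.database.azure.com",  # hypothetical server
    dbname="inventory",
    user="appuser",
    password="<password>",
    sslmode="require",  # Azure enforces TLS connections by default
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")  # returns the community PostgreSQL version
    print(cur.fetchone())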
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is native PostgreSQL engine compatibility and why is it important?
What is PaaS in Azure, and how does it differ from IaaS?
What is the purpose of Azure Database Migration Service?
You are evaluating Azure SQL deployment options for a finance application that relies on SQL Server Agent jobs and cross-database transactions. The operations team wants Microsoft to handle operating-system patching and automatic minor version upgrades, but they also need almost full engine compatibility to avoid code rewrites. Which Azure SQL service should you choose?
SQL Server on Azure Virtual Machines
Azure SQL Database (serverless)
Azure SQL Database (single database)
Azure SQL Managed Instance
Answer Description
Azure SQL Managed Instance is a Platform as a Service (PaaS) offering that provides near-100% compatibility with the SQL Server engine, including support for SQL Server Agent and cross-database queries. Because it is a managed service, Microsoft takes care of operating-system patching and in-place upgrades. Azure SQL Database (single or serverless) lacks some instance-level features such as SQL Server Agent and does not support cross-database transactions in the same way. SQL Server on Azure Virtual Machines offers full feature parity but leaves operating-system maintenance to the customer, which does not meet the stated requirement that Microsoft handle patching.
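As a sketch of the instance-level behavior described above, the following snippet runs a three-part-name query that joins tables in two databases on the same managed instance. The server address, databases, and credentials are placeholders:

# Sketch: a cross-database query on Azure SQL Managed Instance via pyodbc.
# Server, database, and credential values are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=contoso-mi.abc123.database.windows.net;"  # hypothetical instance
    "DATABASE=Finance;UID=appuser;PWD=<password>;Encrypt=yes;"
)
# Three-part names reference a second database (Reporting) on the same
# instance, which Managed Instance supports but a single Azure SQL Database does not.
row = conn.execute(
    "SELECT TOP 1 i.InvoiceID, r.RegionName "
    "FROM Finance.dbo.Invoices AS i "
    "JOIN Reporting.dbo.Regions AS r ON r.RegionID = i.RegionID;"
).fetchone()
print(row)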
Ask Bash
What is the difference between Azure SQL Managed Instance and Azure SQL Database?
Why is SQL Server on Azure Virtual Machines not recommended for this scenario?
How does Azure SQL Managed Instance handle cross-database transactions?
You need to store high-resolution marketing images in Azure. Each image is downloaded often during the first month after upload; afterwards, it is rarely accessed but must remain available on demand at the lowest possible storage cost. Which Azure Blob storage feature should you use to meet this requirement without moving the data to another service?
Upgrade the storage account to use geo-redundant replication.
Enable Azure Files caching for the image container.
Store the images in Azure Table storage instead of blobs.
Move the blobs to the Cool access tier after 30 days.
Answer Description
Azure Blob storage lets you change a blob's access tier. After the initial period of frequent downloads, assigning the Cool tier lowers the per-GB storage price while accepting a higher per-access charge, making it cost-effective for infrequently used data that still needs online availability. Azure Files caching does not reduce storage cost for blobs, Table storage cannot hold binary large objects efficiently, and choosing geo-redundant replication addresses durability rather than tiered cost management.
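In practice the 30-day rule is usually automated with a lifecycle management policy, but the tier change can also be made directly from code. A minimal sketch with the azure-storage-blob SDK, using placeholder account, container, and blob names:

# Sketch: moving a blob to the Cool access tier with the azure-storage-blob SDK.
# The connection string, container, and blob name are placeholders; in
# production a lifecycle management rule would typically apply this after 30 days.
from azure.storage.blob import BlobServiceClient, StandardBlobTier

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="marketing-images", blob="banner.png")
blob.set_standard_blob_tier(StandardBlobTier.COOL)  # lower per-GB price, higher per-access charge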
Ask Bash
What is Azure Blob Storage?
What is the difference between the Hot and Cool access tiers in Azure Blob Storage?
Why is Azure Blob Storage better for binary large object (BLOB) data compared to Azure Table Storage?
A team is reviewing the OrderDetails table in an Azure SQL Database. The table stores the columns OrderID, ProductID, ProductName, ProductPrice, and Quantity. Because ProductName and ProductPrice repeat across many rows that share the same ProductID, you are asked to apply normalization principles. Which change best aligns the table with the goals of normalization?
Move ProductName and ProductPrice to a Products table keyed by ProductID and reference that key from OrderDetails.
Add an additional ProductPrice column to store promotional prices alongside the regular price.
Combine ProductID and OrderID into a single composite column to reduce the number of columns in the table.
Replace ProductName with a JSON column that stores both the name and price together in the same field.
Answer Description
Normalization reduces data redundancy and prevents update anomalies by ensuring each fact is stored exactly once. Repeating attributes that describe a product (its name and price) should be kept in a separate Products table that uses ProductID as its primary key. OrderDetails would then hold only the ProductID as a foreign-key reference, eliminating duplicated product information. Creating a composite column, adding extra price columns, or embedding attributes in JSON does not remove redundancy and therefore does not achieve the fundamental purpose of normalization.
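A sketch of the normalized design, expressed as T-SQL and executed through pyodbc; the connection details and data types are assumptions:

# Sketch: normalizing OrderDetails by splitting product facts into Products.
# Connection details and data types are assumptions.
import pyodbc

conn = pyodbc.connect("DSN=AzureSqlDb;UID=appuser;PWD=<password>")  # hypothetical DSN
conn.execute("""
    CREATE TABLE Products (
        ProductID    INT PRIMARY KEY,
        ProductName  NVARCHAR(100)  NOT NULL,
        ProductPrice DECIMAL(10, 2) NOT NULL   -- each product fact stored once
    );
    CREATE TABLE OrderDetails (
        OrderID   INT NOT NULL,
        ProductID INT NOT NULL REFERENCES Products (ProductID),  -- foreign-key reference
        Quantity  INT NOT NULL,
        PRIMARY KEY (OrderID, ProductID)
    );
""")
conn.commit()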
Ask Bash
What is database normalization and why is it important?
What is the difference between a primary key and a foreign key?
Why is redundancy problematic in a database?
You are configuring Azure Cosmos DB for a web application that maintains a user session. After a document is written, the same user must always read the most recent version during that session, but other users can temporarily see an earlier version. Which consistency level meets this requirement while still allowing high global performance?
Bounded staleness consistency
Strong consistency
Session consistency
Eventual consistency
Answer Description
Session consistency provides a "read-your-writes" guarantee within a client session, so the writer sees its own committed updates immediately. Outside that session, reads can be slightly stale, which improves latency and throughput compared to strong or bounded-staleness consistency. Strong consistency forces all clients to see the latest version, increasing latency. Eventual consistency offers no read-your-writes guarantee, so even the writing session could receive older data. Therefore, session consistency is the appropriate choice.
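A sketch with the azure-cosmos Python SDK, using a placeholder account URL and key; the consistency level is requested when the client is created:

# Sketch: a Cosmos DB client that requests session consistency.
# The account URL, key, database, and container names are placeholders,
# and the container is assumed to be partitioned on /id.
from azure.cosmos import CosmosClient

client = CosmosClient(
    url="https://contoso.documents.azure.com:443/",  # hypothetical account
    credential="<account-key>",
    consistency_level="Session",  # read-your-writes within this client's session
)
container = client.get_database_client("webapp").get_container_client("sessions")
container.upsert_item({"id": "user42", "lastPage": "/checkout"})
# A read issued by the same client session is guaranteed to see the write above.
print(container.read_item(item="user42", partition_key="user42"))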
Ask Bash
What is session consistency in Azure Cosmos DB?
How does session consistency compare to strong consistency?
Why does session consistency improve global performance compared to strong consistency?
You are designing a cloud solution that needs a shared folder that multiple Windows and Linux Azure virtual machines can mount concurrently by using the SMB or NFS protocol. The same data must also be cached on on-premises servers with Azure File Sync. Which Azure storage option should you choose?
Azure Queue storage
Azure Table storage
Azure File storage
Azure Blob storage
Answer Description
Azure File storage provides fully managed file shares that support both SMB and NFS, letting you mount the share just like a traditional network file share from Windows or Linux VMs. In addition, Azure File Sync can replicate and cache an Azure file share on on-premises Windows servers for fast local access and centralized cloud-based storage. Azure Blob storage is optimized for object data and cannot be mounted by SMB/NFS in the same way. Azure Table storage is a key-value NoSQL store, and Azure Queue storage handles message queuing; neither offers file-system semantics or compatibility with Azure File Sync.
Ask Bash
What is Azure File Sync and how does it work?
What is the difference between SMB and NFS protocols used in Azure File storage?
Why can't Azure Blob storage be used for SMB or NFS mounting?
An organization plans to migrate an on-premises SQL Server workload to Azure. They need a fully managed PaaS solution that provides automatic patching and backups, built-in high availability, and almost complete compatibility with SQL Server features such as SQL Agent and cross-database transactions. Which Azure relational data service should they choose?
Azure Database for MySQL Flexible Server
SQL Server on Azure Virtual Machines
Azure SQL Database (single database)
Azure SQL Managed Instance
Answer Description
Azure SQL Managed Instance is a platform as a service (PaaS) offering that Microsoft manages for backups, patching, and automatic updates. It delivers built-in high availability and offers nearly 100 percent compatibility with the SQL Server engine, including support for SQL Agent jobs and cross-database queries. Azure SQL Database single databases lack several instance-level features such as SQL Agent and cross-database transactions. SQL Server on Azure Virtual Machines provides full SQL Server compatibility but is an infrastructure-as-a-service (IaaS) option, so customers must handle patching, backups, and high-availability configuration themselves. Azure Database for MySQL is a managed service for the MySQL engine and does not meet the SQL Server compatibility requirement.
Ask Bash
What is the difference between Azure SQL Managed Instance and Azure SQL Database?
How does high availability work in Azure SQL Managed Instance?
What makes Azure SQL Managed Instance different from SQL Server on Azure Virtual Machines?
You are designing cloud storage for an application that must share configuration files between several Azure virtual machines. The VMs need to mount the same remote folder as a network drive over the SMB protocol so they can use standard file system APIs. Which Azure storage service should you deploy?
Azure Blob storage
Azure Table storage
Azure File storage
Azure Queue storage
Answer Description
Azure File storage exposes fully managed file shares that are accessible simultaneously from Windows, Linux, and macOS via the SMB protocol. Because the share appears as a standard network drive, applications can use familiar file system operations. Azure Blob storage, Table storage, and Queue storage do not provide an SMB-compatible file share and therefore cannot be mounted as a network drive by multiple VMs.
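The share is normally mounted at the operating-system level over SMB, but the same data is also reachable programmatically. A sketch with the azure-storage-file-share SDK, using a placeholder connection string, share, and file path:

# Sketch: reading a shared configuration file from an Azure file share.
# The connection string, share name, and file path are placeholders.
from azure.storage.fileshare import ShareFileClient

file_client = ShareFileClient.from_connection_string(
    conn_str="<connection-string>",
    share_name="config",
    file_path="app/settings.json",
)
data = file_client.download_file().readall()  # the same bytes the VMs see via SMB
print(data.decode("utf-8"))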
Ask Bash
What is SMB protocol and why is it important for Azure File storage?
How does Azure File storage differ from Azure Blob storage?
What are the best practices for managing Azure File storage across virtual machines?
A developer is building a social networking feature that must store and query complex relationships, such as "friends of friends," with millisecond latency. Which Azure Cosmos DB API is specifically designed to support this graph-oriented workload?
SQL API
Gremlin API
Table API
Cassandra API
Answer Description
Azure Cosmos DB offers several APIs, each optimized for a different data model. The Gremlin API implements the Apache TinkerPop graph model, making it the natural choice for applications that need to store and traverse vertices and edges, a pattern common in social networks, recommendation engines, and other relationship-centric scenarios. The SQL API targets document (JSON) data, the Cassandra API targets wide-column data, and the Table API provides basic key-value storage. While these alternatives can store related data, none provide the native graph traversal operators and optimizations that the Gremlin API delivers for complex relationship queries.
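A sketch of a friends-of-friends traversal using the open-source gremlinpython driver against a Cosmos DB Gremlin endpoint; the endpoint, database, graph, key, and vertex id are placeholders:

# Sketch: friends-of-friends traversal with gremlinpython against Cosmos DB.
# Endpoint, database/graph path, key, and vertex id are placeholders.
from gremlin_python.driver import client, serializer

gremlin = client.Client(
    "wss://contoso.gremlin.cosmos.azure.com:443/",  # hypothetical endpoint
    "g",
    username="/dbs/social/colls/people",
    password="<account-key>",
    message_serializer=serializer.GraphSONSerializersV2d0(),  # Cosmos DB requires GraphSON v2
)
# Traverse two 'friend' edges outward from one vertex and de-duplicate the results.
query = "g.V('alice').out('friend').out('friend').dedup().values('name')"
print(gremlin.submit(query).all().result())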
Ask Bash
What is the Gremlin API?
How does the Gremlin API differ from the SQL API in Azure Cosmos DB?
What are some examples of applications suited for the Gremlin API?
Your organization runs a point-of-sale system that writes each customer purchase to a database in real time. Finance staff also generate weekly reports that summarize six months of sales data to identify seasonal trends. In the context of Azure data workloads, how should you classify each of these two activities?
The point-of-sale system is a transactional workload, and the reporting process is an analytical workload.
Both workloads are analytical because they involve sales data.
The point-of-sale system is an analytical workload, and the reporting process is a transactional workload.
Both workloads are transactional because they store data in a database.
Answer Description
A point-of-sale application focuses on rapidly capturing individual transactions, requires ACID guarantees, and supports high concurrency; these characteristics define a transactional workload. Weekly reporting over historical data performs large-scale reads, aggregates data, and seeks trends for business intelligence, which are hallmarks of an analytical workload. Therefore, the sales application is transactional, while the reporting process is analytical. The other options either misclassify one or both activities or overlook the differing workload patterns.
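The contrast shows up in the shape of the SQL each workload issues. A sketch, with hypothetical table and column names:

# Sketch: typical query shapes for each workload (names are hypothetical).

# Transactional (OLTP): a small, frequent write that must commit atomically.
oltp_statement = """
    INSERT INTO Sales (SaleID, StoreID, ProductID, Quantity, SaleTime)
    VALUES (98231, 12, 440, 2, SYSUTCDATETIME());
"""

# Analytical (OLAP): a large, read-only scan aggregating six months of history.
olap_query = """
    SELECT ProductID, DATEPART(month, SaleTime) AS SaleMonth, SUM(Quantity) AS Units
    FROM Sales
    WHERE SaleTime >= DATEADD(month, -6, SYSUTCDATETIME())
    GROUP BY ProductID, DATEPART(month, SaleTime);
"""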
Ask Bash
What are ACID guarantees in a transactional workload?
What is the difference between transactional and analytical workloads?
How does Azure support transactional and analytical workloads?
In Power BI Desktop you want to show a total sales value that automatically recalculates for whatever slicers or filters a user selects, but you also want to avoid adding extra stored data to the model. Which data-modeling feature should you create?
Build a hierarchy that contains Year, Quarter, and Month levels.
Add a calculated column to the Sales table that holds the running total.
Create a DAX measure that returns the sum of the SalesAmount column.
Define a many-to-many relationship between the Sales and Date tables.
Answer Description
A measure is defined with DAX and is evaluated at query time for the current filter context, so it does not add persistent storage to the data model. A calculated column stores its results in the model, increasing its size. Hierarchies organize existing fields for drill-down but do not perform calculations. Relationships define how tables connect and likewise do not create dynamic aggregations.
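For example, a minimal measure definition in DAX, assuming the table is named Sales and the column SalesAmount as in the question:

Total Sales = SUM(Sales[SalesAmount])

The expression is re-evaluated in the current filter context each time a user changes a slicer or filter, so no values are persisted in the model.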
Ask Bash
What is a DAX measure in Power BI?
How is a calculated column different from a DAX measure?
What is the role of hierarchies in Power BI?
You are designing a relational database and must guarantee that every row inserted into an Orders table references an existing row in the Customers table. Which database object should you define to enforce this parent-child relationship during insert and update operations?
View
Foreign key constraint
Nonclustered index
Stored procedure
Answer Description
A foreign key constraint enforces referential integrity between two tables by ensuring that values in the child table (Orders) match existing primary-key values in the parent table (Customers). If a corresponding parent row does not exist, the database engine blocks the insert or update, preventing orphaned rows. A view is a virtual table that does not enforce data integrity. An index speeds up data retrieval but cannot enforce relationships. A stored procedure can contain logic that checks data, but the enforcement would be manual and not inherent to the table definition.
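A sketch of the constraint in T-SQL, executed through pyodbc; the connection details and the CustomerID column are assumptions:

# Sketch: enforcing referential integrity with a foreign key constraint.
# Connection details and the CustomerID column are assumptions.
import pyodbc

conn = pyodbc.connect("DSN=AzureSqlDb;UID=appuser;PWD=<password>")  # hypothetical
conn.execute("""
    ALTER TABLE Orders
    ADD CONSTRAINT FK_Orders_Customers
        FOREIGN KEY (CustomerID) REFERENCES Customers (CustomerID);
""")
conn.commit()

# An insert that references a non-existent customer now fails with a
# foreign key violation instead of creating an orphaned row.
try:
    conn.execute("INSERT INTO Orders (OrderID, CustomerID) VALUES (1, 999);")
    conn.commit()
except pyodbc.IntegrityError as err:
    print("Rejected:", err)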
Ask Bash
What is a foreign key in a database?
How is a foreign key different from a primary key?
What happens if you try to insert a child record without a matching parent in a foreign key relationship?
In a cloud data platform project, someone must design, build, and schedule the pipelines that move raw data into curated storage before it is queried by business users. Which role is primarily responsible for that task?
Database administrator
Security engineer
Data analyst
Data engineer
Answer Description
Creating and maintaining data ingestion and transformation pipelines is a core duty of a data engineer. Data engineers focus on the technical aspects of acquiring, transforming, and loading data so that downstream users can analyze it. A database administrator concentrates on operational tasks such as backups, patching, and security of database systems. A data analyst consumes prepared data to create reports and visualizations, not to build the pipelines. A security engineer focuses on safeguarding infrastructure and data, not on data movement processes.
Ask Bash
What is a data pipeline in the context of cloud data platforms?
What is the difference between a data engineer and a database administrator?
What is ETL, and why is it important for data engineers?
Your company is building a real-time multiplayer mobile game. The game must store player profiles and leaderboard entries as JSON documents, automatically scale to handle sudden spikes in traffic, and provide single-digit millisecond read and write latency to players around the world. Which Azure data service best meets these requirements?
Azure SQL Database
Azure Cosmos DB
Azure Data Lake Storage Gen2
Azure Database for PostgreSQL
Answer Description
Azure Cosmos DB is a fully managed NoSQL service that stores semi-structured data such as JSON documents, offers automatic and virtually unlimited throughput and storage scaling, and can replicate data to any Azure region while guaranteeing single-digit millisecond latency. These capabilities align with the needs of a global, low-latency gaming workload. In contrast, Azure SQL Database and Azure Database for PostgreSQL are relational services without turnkey global distribution or the same latency guarantees, and Azure Data Lake Storage Gen2 is optimized for analytical rather than real-time transactional scenarios.
Ask Bash
Why is Azure Cosmos DB suitable for JSON documents?
How does Azure Cosmos DB provide single-digit millisecond latency?
What makes Azure Cosmos DB scalable for sudden spikes in traffic?
An organization routinely scans several months of point-of-sale data to discover sales trends and build executive dashboards. The queries read millions of rows, perform complex aggregations, and seldom modify the source data. Which type of data workload does this scenario illustrate?
Streaming data workload
Operational workload
Analytical workload
Transactional workload
Answer Description
The scenario matches an analytical workload. Analytical workloads are characterized by large, read-intensive queries that aggregate historical data to support business intelligence and reporting. Transactional workloads focus on frequent inserts, updates, and deletes of small sets of current data. Streaming workloads handle continuous, real-time event ingestion. An operational workload is a general term for day-to-day transaction processing, not large-scale analysis.
Ask Bash
What is the primary difference between analytical and transactional workloads?
What tools or services in Microsoft Azure can be used for analytical workloads?
How does the 'aggregation' process work in analytical workloads?
You stream sensor events into Azure Stream Analytics and need a query that continuously calculates the average temperature over the most recent five-second period, refreshing the value every second. Which type of temporal window should you use in the query?
Session window
Tumbling window
Sliding window
Hopping window
Answer Description
A hopping window lets you specify both a window length and a hop size. Setting a five-second window length with a one-second hop creates overlapping windows: each new window starts one second after the previous one, but still covers the last five seconds of data. This produces a new, updated average every second. Tumbling windows are non-overlapping, so results would appear only once every five seconds. Sliding windows emit results only when events enter or leave the window, so they do not guarantee output on a fixed schedule. Session windows group bursts of activity separated by idle gaps and are not appropriate for fixed-interval aggregation.
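A sketch of the corresponding Stream Analytics query, held here as a Python string you would paste into the job definition; the input and output aliases and the temperature and timestamp columns are placeholders:

# Sketch: Stream Analytics query for a 5-second average refreshed every second.
# Input/output aliases and column names are placeholders.
asa_query = """
    SELECT System.Timestamp() AS WindowEnd, AVG(temperature) AS AvgTemp
    INTO [output]
    FROM [sensor-input] TIMESTAMP BY eventTime
    GROUP BY HoppingWindow(second, 5, 1)  -- 5-second window, hopping every 1 second
"""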
Ask Bash
What is a hopping window in Azure Stream Analytics?
How does a tumbling window differ from a hopping window?
When should you use a sliding window versus a hopping window?
A new solution requires ingesting real-time sensor data into Azure Data Lake Storage, creating Spark notebooks to transform the data, and securely scheduling daily loads into an Azure Synapse Analytics dedicated SQL pool. Which data professional role is primarily accountable for designing and implementing this end-to-end data pipeline?
Data engineer
Security engineer
Data analyst
Database administrator
Answer Description
Designing, building, and maintaining data ingestion and transformation pipelines is a core responsibility of a data engineer. Data engineers choose appropriate storage solutions, develop code or notebooks that prepare and load the data, automate the workflows, and ensure performance and reliability. Data analysts focus on exploring and visualizing data, database administrators manage database availability, configuration, and backups, while security engineers concentrate on security controls. Therefore, the activity described belongs to a data engineer.
Ask Bash
What are the core responsibilities of a data engineer in Azure?
How does Azure Synapse Analytics integrate into a data pipeline?
What tools can a data engineer use to securely schedule workflows in Azure?
You need one Power BI visual that lets business users quickly see how each product category contributed to total revenue in the last quarter. Precise numbers are not required; the focus is on comparing the proportion of the whole that each category represents. Which visualization type should you choose?
100 percent stacked column chart
Donut chart
Gauge
Line chart
Answer Description
A donut (doughnut) chart is specifically designed to show how individual parts contribute to a whole at a single point in time. Each segment's arc length is proportional to its share of the total, making it easy for users to compare categories and understand their relative importance without focusing on exact values.
A 100 percent stacked column chart can also display part-to-whole relationships, but it generally emphasizes changes across multiple categories or periods and is less intuitive when there is only one period to show. A line chart highlights trends over time rather than composition at a single point. A gauge is intended for tracking progress toward a single target, not for comparing multiple categories simultaneously. Therefore, the donut chart is the most appropriate choice in this scenario.
Ask Bash
Why is a donut chart better than a 100 percent stacked column chart for this scenario?
What is the key advantage of a donut chart compared to other chart types for part-to-whole comparisons?
When is it more appropriate to use a line chart instead of a donut chart?
Your organization needs a storage solution that delivers fully managed, cloud-based file shares accessible over the SMB protocol and that can be cached on on-premises Windows servers by using Azure File Sync. Which Azure storage service meets these requirements?
Azure Queue storage
Azure Blob storage
Azure Table storage
Azure File storage
Answer Description
Azure File storage is designed to provide fully managed file shares in the cloud that use standard SMB (and optional NFS) protocols. It also integrates with Azure File Sync, allowing Windows servers on-premises to cache and synchronize files locally while keeping the authoritative copy in Azure. Blob storage, Queue storage, and Table storage do not expose SMB file shares and cannot be used with Azure File Sync for on-premises caching, so they do not satisfy the stated requirements.
Ask Bash
What is Azure File Storage used for?
What are Azure File Sync benefits?
What is the difference between SMB and NFS protocols in Azure File Storage?
Your company assigns distinct tasks to its data roles. Which activity is most likely the responsibility of a database administrator rather than a data engineer or a data analyst?
Training predictive models on historical data
Scheduling and validating database backups
Designing ETL pipelines to populate a data warehouse
Exploring datasets and creating interactive visual reports
Answer Description
Scheduling and validating backups is a core operational duty of a database administrator because they are charged with protecting data and ensuring recoverability. Designing ETL pipelines is typically handled by data engineers, who focus on data ingestion and transformation. Exploring data and building visual reports is the domain of data analysts. Training predictive models is associated with data scientists or specialized analytics roles, not database administrators.
Ask Bash
What is the role of a database administrator (DBA)?
What is ETL in data engineering?
How does a data analyst differ from a database administrator?
Wow!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.