
Microsoft Power BI Data Analyst Practice Test

Use the form below to configure your Microsoft Power BI Data Analyst Practice Test. The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Questions
Number of questions in the practice test
Free users are limited to 20 questions; upgrade for unlimited questions
Seconds Per Question
Determines how long you have to finish the practice test
Exam Objectives
Which exam objectives should be included in the practice test

Microsoft Power BI Data Analyst Information

The Microsoft Certified: Power BI Data Analyst Associate certification (earned by passing Exam PL-300) validates your ability to turn raw data into meaningful business insights using Microsoft Power BI. As a Power BI Data Analyst, you’ll work with stakeholders to gather requirements and translate them into visual reports, dashboards, and data models. You’ll collaborate with data and analytics engineers to acquire data, transform it, and present it effectively. Core tools include Power Query (data preparation), DAX (data modeling and calculations), and the Power BI service (deployment and sharing).

The exam assesses skills across four major domains: Prepare the data, Model the data, Visualize and analyze the data, and Manage and secure Power BI. You’ll be tested on connecting to data sources, cleaning and transforming data, designing relationships and semantic models, creating reports and dashboards, applying DAX calculations, optimizing model performance, and securing Power BI assets via roles and row-level security.

Earning this certification shows employers you know how to use Power BI end-to-end, from data ingestion to sharing interactive insights. It’s ideal for roles such as Business Intelligence Analyst, Reporting Analyst, or Power BI Specialist. With this credential, you can confidently engage with stakeholders, build scalable data models, and implement visual analytics solutions in real-world business settings.


Question 1 of 20

Your company assigns only Power BI Pro licenses and stores all content in a workspace that runs in shared capacity. You publish an import-mode semantic model that must refresh every 30 minutes between 08:00 and 18:00 local time. When configuring scheduled refresh in the Power BI service, you find that you can add only eight refresh times. To meet the business requirement, which action should you perform first?

  • Enable incremental refresh for the semantic model.

  • Create a dataflow to stage the data and reference the dataflow from the semantic model.

  • Change the semantic model's storage mode to DirectQuery.

  • Move the workspace to a Power BI Premium capacity.

Question 2 of 20

A data analyst is modeling data in Power BI. They have a 'Products' dimension table and a 'Sales' fact table. The relationship between them should be based on a product identifier. However, the 'ProductSKU' column in the 'Products' table contains duplicate values, preventing the creation of a one-to-many relationship. What is the most effective approach in the Power Query Editor to establish a valid relationship key for the 'Products' table?

  • In the 'Sales' table, filter out all rows corresponding to the duplicate 'ProductSKU' values found in the 'Products' table.

  • Create a composite key by combining the 'ProductSKU' column with the 'ProductName' column.

  • Establish a many-to-many relationship between the 'Products' and 'Sales' tables using the 'ProductSKU' column.

  • Remove duplicate rows based on the 'ProductSKU' column, and then add an index column to serve as a new, unique surrogate key.
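As a sketch of the deduplicate-then-index approach in Power Query M (assuming the existing query output is available as 'Products' and using the column names from the question):

```powerquery
let
    // Assume 'Products' is the output of the existing Products query
    Source = Products,
    // Keep only one row per ProductSKU value
    RemovedDuplicates = Table.Distinct(Source, {"ProductSKU"}),
    // Add a unique surrogate key, starting at 1 and incrementing by 1
    AddedIndex = Table.AddIndexColumn(RemovedDuplicates, "ProductKey", 1, 1, Int64.Type)
in
    AddedIndex
```

The new 'ProductKey' column is guaranteed unique, so it can serve as the "one" side of a one-to-many relationship once the same key is merged into the 'Sales' table.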

Question 3 of 20

You are developing a Power BI report that tracks daily inventory levels. The data model contains an 'Inventory' table with 'ProductKey', 'TransactionDate', and 'UnitsInStock' columns. You also have a 'Date' table that is correctly marked as a date table. You need to create a measure that calculates the closing inventory balance. The measure must show the stock level from the last date within any selected time period (e.g., month, quarter, or year), rather than a sum of the stock levels throughout the period. Which DAX formula correctly calculates this semi-additive measure?

  • TOTALYTD(SUM('Inventory'[UnitsInStock]), 'Date'[Date])

  • CALCULATE(SUM('Inventory'[UnitsInStock]), LASTDATE('Date'[Date]))

  • SUM('Inventory'[UnitsInStock])

  • AVERAGEX(VALUES('Date'[Date]), SUM('Inventory'[UnitsInStock]))
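For context on the semi-additive pattern the question describes, a closing-balance measure typically filters the calculation down to the last date of whatever period is in scope, for example:

```dax
Closing Inventory :=
CALCULATE (
    SUM ( 'Inventory'[UnitsInStock] ),
    LASTDATE ( 'Date'[Date] )
)
```

A common refinement, if some dates have no inventory rows, is to use LASTNONBLANK over the dates with data instead of LASTDATE; the exact choice depends on how the source records stock levels.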

Question 4 of 20

You are a data analyst modeling sales data in Power BI. You start by connecting to a single, denormalized table named 'SalesData' in Power Query. This table contains columns for transactional values (e.g., 'SaleAmount', 'Quantity') and descriptive attributes for customers, products, and dates (e.g., 'CustomerName', 'ProductCategory', 'OrderDate'). To optimize the data model for performance and analysis, you need to transform this flat table into a star schema consisting of a central fact table and multiple dimension tables. Which sequence of actions in Power Query correctly creates the dimension tables and prepares the fact table?

  • Append the 'SalesData' query with separate queries created for customers and products. Then, use the 'Group By' feature to summarize sales by customer and product to create the final table.

  • First, in the 'SalesData' query, remove all descriptive text columns like 'CustomerName' and 'ProductCategory'. Then, duplicate the modified 'SalesData' query to create dimension tables.

  • Create new queries that reference 'SalesData' for each dimension. In each new dimension query, keep only the relevant descriptive columns and remove duplicates. Then, merge the 'SalesData' query with the new dimension queries to add their key columns and remove the original descriptive columns from 'SalesData'.

  • Split the 'SalesData' query into multiple tables using the 'Split Column' transformation based on customer and product information. Then, define relationships between the newly created tables in the model view.
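To illustrate the referencing approach, a hypothetical dimension query built by referencing 'SalesData' might look like the following M sketch (the step and key names are assumptions, not part of the question):

```powerquery
// DimCustomer: a query that references SalesData, so upstream
// cleaning steps in SalesData automatically flow through
let
    Source = SalesData,
    // Keep only the descriptive customer columns
    KeptColumns = Table.SelectColumns(Source, {"CustomerName"}),
    // One row per customer
    Deduped = Table.Distinct(KeptColumns),
    // Add a surrogate key for the relationship
    WithKey = Table.AddIndexColumn(Deduped, "CustomerKey", 1, 1, Int64.Type)
in
    WithKey
```

The fact table is then prepared by merging 'SalesData' with each dimension query on the descriptive columns, expanding only the key column, and removing the original descriptive columns.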

Question 5 of 20

You built a Power Query that imports an Excel worksheet, promotes the first row to headers, and then applies a Changed Type step. The source file owner later renames the header 'Unit Price' to 'UnitCost'. The next scheduled refresh fails with the message:

Expression.Error: The column name 'Unit Price' of the table wasn't found.

You need to fix the import error and keep every downstream step exactly as it is. What should you do in Power Query?

  • Switch the query's storage mode from Import to DirectQuery to avoid schema validation during refresh.

  • Edit the Changed Type step so that it references 'UnitCost' instead of 'Unit Price'.

  • Delete the existing query and rebuild it from scratch to match the new header names.

  • Insert a Rename Columns step immediately after the Source step that renames 'UnitCost' back to 'Unit Price'.
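As a sketch of the rename fix in M (the file path, sheet name, and step names are hypothetical; note the rename has to come after Promote Headers, which is where the column name first exists):

```powerquery
let
    Source = Excel.Workbook(File.Contents("C:\Data\Sales.xlsx"), null, true),
    Sheet = Source{[Item = "Sheet1", Kind = "Sheet"]}[Data],
    Promoted = Table.PromoteHeaders(Sheet, [PromoteAllScalars = true]),
    // Inserted step: map the renamed source header back to the
    // name every downstream step still expects
    Renamed = Table.RenameColumns(Promoted, {{"UnitCost", "Unit Price"}}),
    // The original Changed Type step continues to work unmodified
    ChangedType = Table.TransformColumnTypes(Renamed, {{"Unit Price", Currency.Type}})
in
    ChangedType
```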

Question 6 of 20

During data preparation in Power Query, you connect to a SQL Server view that returns columns CustomerID, SalesDate, and LineTotal. You must create a table that lists each CustomerID together with the sum of LineTotal and the number of transactions. The result has to be produced in a single transformation step, stay query-foldable, and avoid extra queries. What should you do?

  • Load the view without transformations and create a DAX table with the SUMMARIZE function to calculate the totals and counts.

  • Open the Group By dialog, switch to Advanced, group on CustomerID, and add two aggregations in the same step: Sum of LineTotal and Count Rows.

  • Duplicate the query, use one duplicate to Group By and sum LineTotal, use the other to Group By and count rows, then merge the results.

  • Add a Table.Buffer step before applying Table.Group to aggregate LineTotal and transaction count.
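For reference, a single Group By step with multiple aggregations corresponds to one Table.Group call in M, which the engine can fold back to SQL Server as a GROUP BY. A minimal sketch, with hypothetical server, database, and view names:

```powerquery
let
    Source = Sql.Database("sqlserver01", "SalesDB"),      // hypothetical names
    SalesView = Source{[Schema = "dbo", Item = "vSales"]}[Data],
    // One foldable step: group on CustomerID with two aggregations
    Grouped = Table.Group(
        SalesView,
        {"CustomerID"},
        {
            {"TotalLineAmount", each List.Sum([LineTotal]), type nullable number},
            {"TransactionCount", each Table.RowCount(_), Int64.Type}
        }
    )
in
    Grouped
```

By contrast, wrapping the source in Table.Buffer forces the data into memory and breaks query folding.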

Question 7 of 20

While profiling a Customers table in Power Query Editor, you need to verify that the EmailAddress column contains no duplicates before designating it as a key. You want a quick visual that shows, directly beneath every column header, the total count of distinct values and how many of those are unique (appear only once). Which data-profiling view should you enable from the View ribbon to obtain this information?

  • Turn on the Data preview (top 1,000 rows) option; it automatically pivots each distinct value into a new column.

  • Enable Column profile; it shows detailed statistics but only for the column you select.

  • Enable Column distribution; it displays Distinct and Unique counts beneath each column.

  • Enable Column quality; it highlights valid, error, and empty percentages only.

Question 8 of 20

You are cleaning a 'Sales' table in the Power Query Editor. The 'DiscountPercentage' column, which should contain numerical values, has a data type of 'Any'. It contains numbers, null values, and the text string "N/A". Your goal is to convert this column to a 'Decimal Number' type for calculations. You must replace both the null values and the "N/A" strings with 0 and ensure no rows are lost. Which sequence of steps is the most effective way to achieve this?

  • Use 'Replace Values' to replace "N/A" with 0. Use 'Replace Values' again to replace null with 0. Finally, change the column data type to 'Decimal Number'.

  • Change the column data type to 'Decimal Number'. Then, use the 'Remove Errors' feature to eliminate any rows that failed the conversion.

  • Filter the column to exclude "N/A" and null values. Then, change the column data type to 'Decimal Number'.

  • Use the 'Fill Down' transformation to handle the null values. Then, use 'Replace Values' to replace "N/A" with 0.
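The order of operations matters here: replacements must happen while the column is still type 'Any', because converting "N/A" to Decimal Number first would produce errors. A sketch in M, assuming the existing query output is available as 'Sales':

```powerquery
let
    Source = Sales,   // assume this is the existing Sales query output
    // Replace the text placeholder while the column is still type Any
    ReplacedNA = Table.ReplaceValue(Source, "N/A", 0, Replacer.ReplaceValue, {"DiscountPercentage"}),
    // Replace nulls the same way
    ReplacedNull = Table.ReplaceValue(ReplacedNA, null, 0, Replacer.ReplaceValue, {"DiscountPercentage"}),
    // Only now is every value numeric, so the type change cannot error
    Typed = Table.TransformColumnTypes(ReplacedNull, {{"DiscountPercentage", type number}})
in
    Typed
```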

Question 9 of 20

You are a data analyst cleaning sales data in the Power Query Editor. You have a column named 'OrderCode' that contains concatenated values in the format 'Region-ProductID-Date', such as 'NA-P45821-20250115'. You need to create a new column that contains only the 'ProductID' (e.g., 'P45821') from the middle of the string. Which of the following transformations is the most direct and appropriate method to accomplish this task?

  • Use the 'Split Column by Delimiter' transformation.

  • Use the 'Unpivot Columns' transformation on the 'OrderCode' column.

  • Use the 'Conditional Column' feature to create logic that extracts the text.

  • Use the 'Replace Values' transformation to remove the region and date codes.
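The 'Split Column by Delimiter' UI generates a Table.SplitColumn step; an equivalent way to pull out just the middle segment, shown here as a sketch with an assumed 'Source' step, is a custom column using Text.BetweenDelimiters:

```powerquery
// Extract the text between the first and second "-" delimiters,
// e.g. "NA-P45821-20250115" yields "P45821"
AddedProductID = Table.AddColumn(
    Source,
    "ProductID",
    each Text.BetweenDelimiters([OrderCode], "-", "-"),
    type text
)
```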

Question 10 of 20

Your organization has a certified semantic model named SalesAnalytics published in a Fabric workspace. You must build a new report in Power BI Desktop that reuses the model's existing measures and row-level security while keeping SalesAnalytics as the single source of truth (no data copy). What should you do first in Power BI Desktop?

  • Start a DirectQuery connection to the underlying SQL Server database that feeds SalesAnalytics.

  • Use Analyze in Excel to connect to SalesAnalytics, then import the workbook into Power BI Desktop.

  • On the Home ribbon, select Get data > Power BI semantic model and connect to SalesAnalytics.

  • Download the SalesAnalytics PBIX file, delete its report pages, and save a new file for reporting.

Question 11 of 20

You are developing a Power BI Desktop report that contains a query named Staging_Product. The query is used only as an intermediate step for a merge operation and its data is not needed in the final data model that will be published. You need to minimize both the size of the model and the overall refresh duration while preserving the transformations performed by Staging_Product. What should you do in Power Query Editor?

  • Change Staging_Product to use DirectQuery storage mode.

  • Set the storage mode of Staging_Product to Dual so it is loaded only on demand.

  • Clear the Enable load option for Staging_Product so the query becomes connection-only.

  • Clear the Include in report refresh option for Staging_Product.

Question 12 of 20

You need to connect Power BI Desktop to an Azure SQL Database that stores roughly 40 million sales records and receives new transactions every minute. Business users must see figures that are no more than five minutes old when they open a report. The organization has only Power BI Pro licenses and will publish the report to a workspace that runs in shared capacity (no Premium features). The database is well-tuned and can return aggregate queries in under five seconds. Which Power BI connectivity mode should you choose so the report meets the freshness requirement without requiring Premium capacity?

  • Create a live connection to Azure Analysis Services.

  • Import all data and schedule a refresh every hour.

  • Use DirectQuery mode when creating the semantic model.

  • Import the data and configure an incremental refresh policy.

Question 13 of 20

You are developing a Power BI data model that includes a 'Sales' fact table and a 'DimDate' dimension table. The model has an active relationship between 'Sales'[OrderDate] and 'DimDate'[Date], and an inactive relationship between 'Sales'[ShipDate] and 'DimDate'[Date]. You need to create a measure that calculates the total sales amount specifically by ship date. Which DAX function must you use inside a 'CALCULATE' function to activate the relationship on the ship date for this measure?

  • USERELATIONSHIP

  • CROSSFILTER

  • TREATAS

  • RELATEDTABLE
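For reference, the pattern the question describes looks like the following measure; the 'Sales'[SalesAmount] column name is an assumption, since the question does not name the amount column:

```dax
Sales by Ship Date :=
CALCULATE (
    SUM ( 'Sales'[SalesAmount] ),    -- SalesAmount is an assumed column name
    USERELATIONSHIP ( 'Sales'[ShipDate], 'DimDate'[Date] )
)
```

USERELATIONSHIP activates the inactive relationship only for the duration of this CALCULATE; the active OrderDate relationship continues to apply everywhere else.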

Question 14 of 20

You are building a Power BI report that sources data from a folder of CSV files. You need to create a parameter in the Power Query Editor that will hold the folder path, allowing you to easily switch between a development and a production location. Which configuration for the new parameter is most appropriate to ensure functionality and adhere to performance best practices?

  • Set the 'Type' to "Text" and clear the "Enable load" selection.

  • Set the 'Type' to "Text" and select the "Enable load" checkbox.

  • Set 'Suggested Values' to "List of values" and enter both the development and production paths.

  • Set the 'Type' to "Any" and select the "Required" checkbox.

Question 15 of 20

You are creating a Power BI data model and have a query named 'BaseSales' that performs 15 transformation steps to clean raw sales data. You need to create two additional tables for analysis: 'SalesByRegion' and 'SalesByProduct'. Both of these tables must start with the cleaned data from 'BaseSales' before having their own specific aggregations applied. You want to ensure that any future modifications to the cleaning steps in 'BaseSales' are automatically inherited by both 'SalesByRegion' and 'SalesByProduct' to minimize maintenance. Which action should you take?

  • Reference the 'BaseSales' query twice to create the two new queries.

  • Create two blank queries and merge each of them with the 'BaseSales' query.

  • Duplicate the 'BaseSales' query twice to create the two new queries.

  • Create two blank queries and append the 'BaseSales' query to each of them.
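To illustrate how a referencing query inherits upstream changes, here is a sketch of what 'SalesByRegion' might look like; the 'Region' and 'SaleAmount' column names are assumptions:

```powerquery
// SalesByRegion: its Source step points at BaseSales, so any
// change to the 15 cleaning steps in BaseSales flows through here
let
    Source = BaseSales,
    Grouped = Table.Group(
        Source,
        {"Region"},   // assumed column name
        {{"TotalSales", each List.Sum([SaleAmount]), type nullable number}}
    )
in
    Grouped
```

A duplicated query, by contrast, copies the 15 steps at the moment of duplication, so later fixes to 'BaseSales' would not propagate.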

Question 16 of 20

Your Power BI model contains a large FactSales table and several dimension tables. You receive two requirements:

  1. Show a slicer that lets report consumers filter visuals by FiscalQuarter, a value derived from FactSales[SaleDate].
  2. Build a reusable HighValueCustomers dimension that keeps only customers whose lifetime sales amount exceeds $1,000,000 so the table can be related to other tables and used in reports.

Which Power BI calculation type should you use for each requirement to meet the functional needs and follow recommended practice?

  • Create a calculated table for FiscalQuarter and a calculated column for HighValueCustomers.

  • Create a measure for FiscalQuarter and a calculated table for HighValueCustomers.

  • Create a calculated column for FiscalQuarter and a calculated table for HighValueCustomers.

  • Create a calculated column for FiscalQuarter and a measure for HighValueCustomers.
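For context, the two calculation types the options contrast look roughly like the following sketches. Table and column names ('FactSales'[SaleDate], 'DimCustomer', 'FactSales'[SalesAmount]) are assumptions, and the fiscal quarter here is assumed to align with calendar quarters:

```dax
-- Calculated column on FactSales: a row-level value a slicer can use
FiscalQuarter =
"Q" & FORMAT ( ROUNDUP ( MONTH ( FactSales[SaleDate] ) / 3, 0 ), "0" )

-- Calculated table: one row per customer whose lifetime sales exceed $1M
HighValueCustomers =
FILTER (
    ADDCOLUMNS (
        VALUES ( DimCustomer[CustomerKey] ),
        "LifetimeSales", CALCULATE ( SUM ( FactSales[SalesAmount] ) )
    ),
    [LifetimeSales] > 1000000
)
```

A measure cannot satisfy either requirement: measures return scalar values at query time, so they cannot feed a slicer or be related to other tables.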

Question 17 of 20

You are designing a Power BI report that must show near-real-time inventory levels from an Azure SQL Database that contains over 150 million rows. Business users require the visuals to reflect changes in the source within one minute, and the data engineering team forbids duplicating the raw data in the Power BI service. Only simple aggregations are needed in the report. Which data connectivity mode should you configure for the fact tables?

  • Configure the tables by using DirectQuery mode.

  • Create a live connection to an Analysis Services tabular model.

  • Use Import mode with incremental refresh policies.

  • Import the tables and schedule a refresh.

Question 18 of 20

You are developing a Power BI report that analyzes financial transaction data from a CSV file. In Power Query, the TransactionAmount column, which contains monetary values with decimal points, is automatically assigned the Decimal Number data type. You observe minor rounding discrepancies when performing aggregations. To ensure the highest level of precision for financial calculations, which data type should you use for the TransactionAmount column?

  • Whole Number

  • Text

  • Fixed Decimal Number

  • Decimal Number
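As background, Fixed Decimal Number corresponds to Currency.Type in M (a four-decimal-place fixed-point type that avoids binary floating-point rounding). A one-step sketch, assuming an existing 'Source' step:

```powerquery
// Convert the monetary column to Fixed Decimal Number (Currency.Type)
Typed = Table.TransformColumnTypes(Source, {{"TransactionAmount", Currency.Type}})
```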

Question 19 of 20

You are designing a Power BI semantic model that contains a Sales fact table with two date keys: OrderDateKey and ShipDateKey. Report authors must be able to build visuals that simultaneously compare order volume by order date and shipping performance by ship date, without writing custom DAX to activate relationships. Row-level security will also be applied. Which modeling approach should you implement?

  • Duplicate the Date table so there is a Date table and a Ship Date table, and create an active one-to-many relationship from each table to its corresponding date key in Sales.

  • Create two inactive relationships from the existing Date table to the Sales table and require measures to use USERELATIONSHIP when needed.

  • Keep a single active relationship on OrderDateKey and set its cross-filter direction to Both so it implicitly filters ShipDateKey.

  • Create a many-to-many relationship between the Date table and the Sales table, with cross-filter direction set to Both.

Question 20 of 20

In Power BI Desktop, you are inspecting a large fact table in Power Query Editor. After turning on Data Profiling, you need to view the minimum, maximum, average (mean), and standard deviation for the Amount column so you can spot outliers before loading the data. Which View-tab option should you enable, and then click the column, to display all of those statistics in a single pane?

  • Column quality

  • Query diagnostics

  • Column distribution

  • Column profile