Microsoft Power BI Data Analyst Practice Test
Use the form below to configure your Microsoft Power BI Data Analyst Practice Test. The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

Microsoft Power BI Data Analyst Information
The Microsoft Certified: Power BI Data Analyst Associate certification (via exam PL-300) validates your ability to turn raw data into meaningful business insights using Microsoft Power BI. As a Power BI Data Analyst, you’ll work with stakeholders to gather requirements and translate them into visual reports, dashboards, and data models. You’ll collaborate with data and analytics engineers to acquire data, transform it, and present it powerfully. Core tools include Power Query (data preparation), DAX (data modeling and calculations), and the Power BI service (deployment and sharing).
The exam assesses skills across four major domains: Prepare the data, Model the data, Visualize and analyze the data, and Manage and secure Power BI. You’ll be tested on connecting to data sources, cleaning and transforming data, designing relationships and semantic models, creating reports and dashboards, applying DAX calculations, optimizing model performance, and securing Power BI assets via roles and row-level security.
Earning this certification shows employers you know how to use Power BI end-to-end — from data ingestion to sharing interactive insights. It’s ideal for roles such as Business Intelligence Analyst, Reporting Analyst, or Power BI Specialist. With this credential, you can confidently engage with stakeholders, build scalable data models, and implement visual analytics solutions in real-world business settings.

Free Microsoft Power BI Data Analyst Practice Test
- 20 Questions
- Unlimited
- Prepare the data, Model the data, Visualize and analyze the data, Manage and secure Power BI
Free Preview
This test is a free preview, no account required.
Subscribe to unlock all content, keep track of your scores, and access AI features!
Your company assigns only Power BI Pro licenses and stores all content in a workspace that runs in shared capacity. You publish an import-mode semantic model that must refresh every 30 minutes between 08:00 and 18:00 local time. When configuring scheduled refresh in the Power BI service, you find that you can add only eight refresh times. To meet the business requirement, which action should you perform first?
Enable incremental refresh for the semantic model.
Create a dataflow to stage the data and reference the dataflow from the semantic model.
Change the semantic model's storage mode to DirectQuery.
Move the workspace to a Power BI Premium capacity.
Answer Description
Shared capacity restricts each semantic model to eight scheduled refreshes per day. A 30-minute interval between 08:00 and 18:00 requires 20 refreshes. Moving the workspace to a Power BI Premium capacity (or Premium Per User) increases the limit to 48 refreshes per day, enabling you to schedule the needed frequency. Incremental refresh reduces data volume but does not raise the refresh-per-day limit. DirectQuery eliminates scheduled refresh entirely but would change the report's query pattern, which isn't required here. Using a dataflow still leaves the dataset subject to the same refresh quota. Therefore, upgrading the workspace to Premium capacity is the necessary first step.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does 'shared capacity' mean in Power BI?
What are the advantages of moving to Power BI Premium capacity?
How does Incremental Refresh work in Power BI and why doesn’t it meet the requirement here?
A data analyst is modeling data in Power BI. They have a 'Products' dimension table and a 'Sales' fact table. The relationship between them should be based on a product identifier. However, the 'ProductSKU' column in the 'Products' table contains duplicate values, preventing the creation of a one-to-many relationship. What is the most effective approach in the Power Query Editor to establish a valid relationship key for the 'Products' table?
In the 'Sales' table, filter out all rows corresponding to the duplicate 'ProductSKU' values found in the 'Products' table.
Create a composite key by combining the 'ProductSKU' column with the 'ProductName' column.
Establish a many-to-many relationship between the 'Products' and 'Sales' tables using the 'ProductSKU' column.
Remove duplicate rows based on the 'ProductSKU' and then add an index column to serve as a new, unique surrogate key.
Answer Description
The correct answer is to remove duplicate rows and add an index column. In a star schema, the key column on the 'one' side of a one-to-many relationship must contain unique values. The best practice to resolve duplicate values in a dimension table is to remove the duplicate rows based on the business key (like 'ProductSKU') and then add an index column in Power Query. This new column acts as a surrogate key, providing a unique integer-based identifier for each row, which is optimal for model performance and relationship integrity.
Creating a composite key is a possible but less optimal solution, as joins on multiple columns or text columns are less performant than on a single integer key. Establishing a many-to-many relationship is not ideal as it is a workaround for the data quality issue, not a solution, and can lead to ambiguity and performance problems. Filtering rows from the 'Sales' fact table is incorrect as it would result in the loss of valid data and lead to inaccurate reports.
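A minimal Power Query M sketch of this pattern, assuming the cleaned 'Products' query is the starting point and the new surrogate key is named ProductKey (both names are illustrative):

let
    Source = Products,
    RemovedDuplicates = Table.Distinct(Source, {"ProductSKU"}),  // keep one row per business key
    AddedIndex = Table.AddIndexColumn(RemovedDuplicates, "ProductKey", 1, 1, Int64.Type)  // unique surrogate key
in
    AddedIndex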
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why is it necessary to remove duplicate rows when modeling a relationship in Power BI?
What is a surrogate key, and why is it preferable for relationships in Power BI?
Why is using a many-to-many relationship not an ideal solution for handling duplicate keys?
You are developing a Power BI report that tracks daily inventory levels. The data model contains an 'Inventory' table with 'ProductKey', 'TransactionDate', and 'UnitsInStock' columns. You also have a 'Date' table that is correctly marked as a date table. You need to create a measure that calculates the closing inventory balance. The measure must show the stock level from the last date within any selected time period (e.g., month, quarter, or year), rather than a sum of the stock levels throughout the period. Which DAX formula correctly calculates this semi-additive measure?
TOTALYTD(SUM('Inventory'[UnitsInStock]), 'Date'[Date])
CALCULATE(SUM('Inventory'[UnitsInStock]), LASTDATE('Date'[Date]))
SUM('Inventory'[UnitsInStock])
AVERAGEX(VALUES('Date'[Date]), SUM('Inventory'[UnitsInStock]))
Answer Description
The correct DAX formula is CALCULATE(SUM('Inventory'[UnitsInStock]), LASTDATE('Date'[Date])). Here's why:
Correct answer: This formula uses the CALCULATE function to modify the filter context. The LASTDATE('Date'[Date]) function returns a table containing only the last date in the current filter context (e.g., the last day of a selected month). CALCULATE then applies this new single-day filter context to the SUM('Inventory'[UnitsInStock]) expression, effectively calculating the stock for only the last day of the period. This is the standard pattern for handling semi-additive measures like inventory balances.
SUM('Inventory'[UnitsInStock]): This is incorrect because it is a fully additive measure. It would sum the 'UnitsInStock' for every day in the selected period, leading to a vastly inflated and incorrect number. For example, the inventory for a month would be the sum of 30 daily balances instead of the balance on the last day.
AVERAGEX(VALUES('Date'[Date]), SUM('Inventory'[UnitsInStock])): This is incorrect because it calculates the average daily stock over the period, not the closing balance on the last day. While this might be a useful metric in some scenarios, it does not meet the requirement for a closing balance.
TOTALYTD(SUM('Inventory'[UnitsInStock]), 'Date'[Date]): This is incorrect as it is a time intelligence function that calculates the cumulative (year-to-date) total. It does not isolate the value from the last day of the period; instead, it sums all values from the start of the year up to the end of the current period.
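Written as a named measure (the measure name is illustrative), the correct pattern is:

Closing Units In Stock =
CALCULATE (
    SUM ( 'Inventory'[UnitsInStock] ),
    LASTDATE ( 'Date'[Date] )
)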
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the purpose of the CALCULATE function in DAX?
What does the LASTDATE function do in DAX?
Why is SUM('Inventory'[UnitsInStock]) not correct for calculating closing inventory?
You are a data analyst modeling sales data in Power BI. You start by connecting to a single, denormalized table named 'SalesData' in Power Query. This table contains columns for transactional values (e.g., 'SaleAmount', 'Quantity') and descriptive attributes for customers, products, and dates (e.g., 'CustomerName', 'ProductCategory', 'OrderDate'). To optimize the data model for performance and analysis, you need to transform this flat table into a star schema consisting of a central fact table and multiple dimension tables. Which sequence of actions in Power Query correctly creates the dimension tables and prepares the fact table?
Append the 'SalesData' query with separate queries created for customers and products. Then, use the 'Group By' feature to summarize sales by customer and product to create the final table.
First, in the 'SalesData' query, remove all descriptive text columns like 'CustomerName' and 'ProductCategory'. Then, duplicate the modified 'SalesData' query to create dimension tables.
Create new queries that reference 'SalesData' for each dimension. In each new dimension query, keep only the relevant descriptive columns and remove duplicates. Then, merge the 'SalesData' query with the new dimension queries to add their key columns and remove the original descriptive columns from 'SalesData'.
Split the 'SalesData' query into multiple tables using the 'Split Column' transformation based on customer and product information. Then, define relationships between the newly created tables in the model view.
Answer Description
The correct process involves creating dimension tables from the main query and then using those dimensions to prepare the final fact table. First, you should create new queries that reference the original 'SalesData' query for each dimension you want to create (e.g., 'DimCustomer', 'DimProduct'). Using 'Reference' is efficient as it creates a dependency on the source query without duplicating the data in memory. In each of these new dimension queries, you isolate the relevant columns (e.g., 'CustomerName' for 'DimCustomer'), remove all other columns, and then remove duplicate rows to create a unique list of dimension members. It is a best practice to add a unique index column (surrogate key) to each dimension table. After the dimensions are created, you return to the 'SalesData' query, which will serve as your fact table. You merge this query with each new dimension table to bring in their surrogate keys. Finally, you remove the original, now-redundant descriptive columns (e.g., 'CustomerName', 'ProductCategory') from the fact table, leaving only the foreign keys and the numeric measures.
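A condensed M sketch of the reference-then-merge pattern described above; the dimension shown, its key name, and the join column are all illustrative:

// DimProduct: references SalesData, keeps the descriptive column, dedupes, adds a surrogate key
let
    Source = SalesData,
    ProductColumns = Table.SelectColumns(Source, {"ProductCategory"}),
    DistinctProducts = Table.Distinct(ProductColumns),
    DimProduct = Table.AddIndexColumn(DistinctProducts, "ProductKey", 1, 1, Int64.Type)
in
    DimProduct

// FactSales: merges the surrogate key back in and drops the original descriptive column
let
    Source = SalesData,
    Merged = Table.NestedJoin(Source, {"ProductCategory"}, DimProduct, {"ProductCategory"}, "DimProduct", JoinKind.LeftOuter),
    Expanded = Table.ExpandTableColumn(Merged, "DimProduct", {"ProductKey"}),
    FactSales = Table.RemoveColumns(Expanded, {"ProductCategory"})
in
    FactSales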
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why is it important to transform a flat table into a star schema in Power BI?
What does 'referencing' a query in Power Query mean, and why is it used?
What is a surrogate key in a dimension table, and why is it recommended?
You built a Power Query that imports an Excel worksheet, promotes the first row to headers, and then applies a Changed Type step. The source file owner later renames the header 'Unit Price' to 'UnitCost'. The next scheduled refresh fails with the message:
Expression.Error: The column name 'Unit Price' of the table wasn't found.
You need to fix the import error and keep every downstream step exactly as it is. What should you do in Power Query?
Switch the query's storage mode from Import to DirectQuery to avoid schema validation during refresh.
Edit the Changed Type step so that it references 'UnitCost' instead of 'Unit Price'.
Delete the existing query and rebuild it from scratch to match the new header names.
Insert a Rename Columns step immediately after the Source step that renames 'UnitCost' back to 'Unit Price'.
Answer Description
Adding a Rename Columns step right after Source converts the new header 'UnitCost' back to 'Unit Price'. Because all following steps still refer to 'Unit Price', they can execute without modification. Rebuilding the query would resolve the error but is unnecessary and time-consuming. Editing or disabling the Changed Type step would still leave later steps referencing the old column name, so those steps would also need to be changed. Switching to DirectQuery does not address missing-column errors; the schema must still match during refresh.
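In M, the inserted step is a single Table.RenameColumns call; 'PreviousStep' below is a placeholder for whichever step the rename is placed after, so that the Changed Type step and everything downstream still see 'Unit Price':

RenamedBack = Table.RenameColumns(PreviousStep, {{"UnitCost", "Unit Price"}})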
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why does the Rename Columns step fix the issue in Power Query?
How does Power Query handle schema changes during data refresh?
When should you rebuild a query instead of using transformations like Rename Columns?
During data preparation in Power Query, you connect to a SQL Server view that returns columns CustomerID, SalesDate, and LineTotal. You must create a table that lists each CustomerID together with the sum of LineTotal and the number of transactions. The result has to be produced in a single transformation step, stay query-foldable, and avoid extra queries. What should you do?
Load the view without transformations and create a DAX table with the SUMMARIZE function to calculate the totals and counts.
Open the Group By dialog, switch to Advanced, group on CustomerID, and add two aggregations in the same step: Sum of LineTotal and Count Rows.
Duplicate the query, use one duplicate to Group By and sum LineTotal, use the other to Group By and count rows, then merge the results.
Add a Table.Buffer step before applying Table.Group to aggregate LineTotal and transaction count.
Answer Description
The Group By dialog in Advanced mode lets you add multiple aggregations in a single step. By grouping on CustomerID and adding a Sum aggregation for LineTotal and a Count Rows aggregation, Power Query generates a single Table.Group step that SQL Server can execute, so folding is preserved. Duplicating queries or using DAX would add extra steps, and buffering the table forces the data to be loaded locally, which breaks folding.
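The single generated step would look roughly like the following; the new column names are illustrative:

Grouped = Table.Group(
    Source,
    {"CustomerID"},
    {
        {"TotalLineTotal", each List.Sum([LineTotal]), type number},
        {"TransactionCount", each Table.RowCount(_), Int64.Type}
    }
)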
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is query folding in Power Query?
How does the Group By dialog in Advanced mode work in Power Query?
What happens if query folding is broken?
While profiling a Customers table in Power Query Editor, you need to verify that the EmailAddress column contains no duplicates before designating it as a key. You want a quick visual that shows, directly beneath every column header, the total count of distinct values and how many of those are unique (appear only once). Which data-profiling view should you enable from the View ribbon to obtain this information?
Turn on the Data preview (top 1,000 rows) option; it automatically pivots each distinct value into a new column.
Enable Column profile; it shows detailed statistics but only for the column you select.
Enable Column distribution; it displays Distinct and Unique counts beneath each column.
Enable Column quality; it highlights valid, error, and empty percentages only.
Answer Description
Column distribution is the only data-profiling view that surfaces both the Distinct and Unique counts directly under each column header, letting you instantly spot columns with duplicate or null values. Column quality only shows percentages of valid, error, and empty rows; it does not reveal distinct or unique counts. Column profile does include those counts but only after you select one column at a time, so it is less efficient for scanning all columns simultaneously. The Data preview (top 1,000 rows) switch merely changes the sampling range and does not add any new profiling metrics.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the difference between Distinct and Unique counts in column profiling?
How does enabling Column distribution improve data profiling in Power Query Editor?
What are other data-profiling views available in Power Query Editor, and how are they different?
You are cleaning a 'Sales' table in the Power Query Editor. The 'DiscountPercentage' column, which should contain numerical values, has a data type of 'Any'. It contains numbers, null values, and the text string "N/A". Your goal is to convert this column to a 'Decimal Number' type for calculations. You must replace both the null values and the "N/A" strings with 0 and ensure no rows are lost. Which sequence of steps is the most effective way to achieve this?
Use 'Replace Values' to replace "N/A" with 0. Use 'Replace Values' again to replace null with 0. Finally, change the column data type to 'Decimal Number'.
Change the column data type to 'Decimal Number'. Then, use the 'Remove Errors' feature to eliminate any rows that failed the conversion.
Filter the column to exclude "N/A" and null values. Then, change the column data type to 'Decimal Number'.
Use the 'Fill Down' transformation to handle the null values. Then, use 'Replace Values' to replace "N/A" with 0.
Answer Description
The correct approach is to first use the 'Replace Values' transformation to substitute the text string "N/A" with 0, and then use 'Replace Values' again to substitute the null values with 0. After these replacements, the column only contains numerical data, so you can safely change the data type to 'Decimal Number' without introducing errors. Changing the data type to a number before replacing the "N/A" text would convert those text values into errors, which would then need to be handled. Replacing the values first is a more direct and cleaner method. The 'Remove Errors' option would result in data loss, which the requirement explicitly forbids. The 'Fill' command is used for replacing nulls with the value from the preceding or succeeding row and is not appropriate for this scenario.
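A minimal M sketch of the three steps, assuming the previous step is named Source:

ReplacedNA = Table.ReplaceValue(Source, "N/A", 0, Replacer.ReplaceValue, {"DiscountPercentage"}),
ReplacedNulls = Table.ReplaceValue(ReplacedNA, null, 0, Replacer.ReplaceValue, {"DiscountPercentage"}),
ChangedType = Table.TransformColumnTypes(ReplacedNulls, {{"DiscountPercentage", type number}})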
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does it mean to change the data type to 'Decimal Number' in Power BI?
Why is it important to replace 'N/A' and `null` values before changing the data type?
What does the 'Replace Values' feature do in Power Query, and how is it used here?
You are a data analyst cleaning sales data in the Power Query Editor. You have a column named 'OrderCode' that contains concatenated values in the format 'Region-ProductID-Date', such as 'NA-P45821-20250115'. You need to create a new column that contains only the 'ProductID' (e.g., 'P45821') from the middle of the string. Which of the following transformations is the most direct and appropriate method to accomplish this task?
Use the 'Split Column by Delimiter' transformation.
Use the 'Unpivot Columns' transformation on the 'OrderCode' column.
Use the 'Conditional Column' feature to create logic that extracts the text.
Use the 'Replace Values' transformation to remove the region and date codes.
Answer Description
The correct answer is to use the 'Split Column by Delimiter' transformation. This feature is specifically designed to parse text that is separated by a consistent character. By selecting the hyphen (-) as the delimiter, Power Query will automatically separate the 'OrderCode' column into three new columns for Region, ProductID, and Date. You can then keep the ProductID column and remove the others.
A 'Conditional Column' is incorrect because it is used for creating new values based on 'if-then-else' logical conditions, not for extracting substrings based on a delimiter.
'Replace Values' is inefficient for this scenario because the region and date codes are likely to change from row to row, which would require an impractical number of individual replacement steps.
'Unpivot Columns' is incorrect as it is a structural transformation used to convert data from a wide format to a long format; it is not used for text manipulation within a column.
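The generated step would look roughly like the following; the new column names are illustrative:

SplitOrderCode = Table.SplitColumn(
    Source,
    "OrderCode",
    Splitter.SplitTextByDelimiter("-", QuoteStyle.Csv),
    {"Region", "ProductID", "OrderDate"}
)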
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does 'Split Column by Delimiter' do in Power Query?
How does 'Conditional Column' differ from 'Split Column by Delimiter'?
What types of problems would require 'Replace Values' instead of 'Split Column by Delimiter'?
Your organization has a certified semantic model named SalesAnalytics published in a Fabric workspace. You must build a new report in Power BI Desktop that reuses the model's existing measures and row-level security while keeping SalesAnalytics as the single source of truth (no data copy). What should you do first in Power BI Desktop?
Start a DirectQuery connection to the underlying SQL Server database that feeds SalesAnalytics.
Use Analyze in Excel to connect to SalesAnalytics, then import the workbook into Power BI Desktop.
On the Home ribbon, select Get data > Power BI semantic model and connect to SalesAnalytics.
Download the SalesAnalytics PBIX file, delete its report pages, and save a new file for reporting.
Answer Description
Choosing Get data > Power BI semantic model creates a live connection to SalesAnalytics. Power BI Desktop does not import the data; it simply queries the shared semantic model, so only one copy of the data and model definitions exists. The live connection automatically applies any row-level security that is already defined. Connecting directly to the SQL database would bypass the vetted semantic model, potentially break security rules, and duplicate modelling effort. Downloading the PBIX moves the model to a separate file, creating maintenance overhead and risking multiple versions. Analyze in Excel generates an Excel workbook, not a Power BI Desktop report, and importing that workbook would still create a separate model copy. Therefore, only the live connection meets all requirements.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a semantic model in Power BI?
What is row-level security (RLS) in Power BI and how does it work?
What is a live connection in Power BI Desktop?
You are developing a Power BI Desktop report that contains a query named Staging_Product. The query is used only as an intermediate step for a merge operation and its data is not needed in the final data model that will be published. You need to minimize both the size of the model and the overall refresh duration while preserving the transformations performed by Staging_Product. What should you do in Power Query Editor?
Change Staging_Product to use DirectQuery storage mode.
Set the storage mode of Staging_Product to Dual so it is loaded only on demand.
Clear the Enable load option for Staging_Product so the query becomes connection-only.
Clear the Include in report refresh option for Staging_Product.
Answer Description
Clearing the Enable load option turns the query into a connection-only object. The query still executes when referenced by other queries, so dependent transformations continue to work, but the resulting table is not loaded into the data model. This reduces model size and avoids the time required to load that data. Clearing Include in report refresh would stop Staging_Product from refreshing at all, which could break downstream queries. Changing storage mode to DirectQuery or Dual is irrelevant for a purely intermediate query.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does 'Enable load' do in Power Query Editor?
What is the difference between 'Enable load' and 'Include in report refresh' in Power Query?
Why isn't storage mode relevant for intermediate queries like Staging_Product?
You need to connect Power BI Desktop to an Azure SQL Database that stores roughly 40 million sales records and receives new transactions every minute. Business users must see figures that are no more than five minutes old when they open a report. The organization has only Power BI Pro licenses and will publish the report to a workspace that runs in shared capacity (no Premium features). The database is well-tuned and can return aggregate queries in under five seconds. Which Power BI connectivity mode should you choose so the report meets the freshness requirement without requiring Premium capacity?
Create a live connection to Azure Analysis Services.
Import all data and schedule a refresh every hour.
Use DirectQuery mode when creating the semantic model.
Import the data and configure an incremental refresh policy.
Answer Description
DirectQuery leaves the data in Azure SQL and sends a query to the database each time a visual is rendered, so the report always shows the latest rows without relying on dataset refresh. Import mode, whether full or with incremental refresh, would still depend on scheduled dataset refresh, and shared capacity limits this to eight refreshes per day, far short of the five-minute requirement. A live connection applies to Analysis Services models, not to an Azure SQL source. Therefore, DirectQuery is the only mode that delivers near real-time data under the stated licensing and capacity constraints.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is DirectQuery in Power BI?
How does DirectQuery differ from Import mode in Power BI?
Why can't incremental refresh be used in this scenario?
You are developing a Power BI data model that includes a 'Sales' fact table and a 'DimDate' dimension table. The model has an active relationship between 'Sales'[OrderDate] and 'DimDate'[Date], and an inactive relationship between 'Sales'[ShipDate] and 'DimDate'[Date]. You need to create a measure that calculates the total sales amount specifically by ship date. Which DAX function must you use inside a 'CALCULATE' function to activate the relationship on the ship date for this measure?
USERELATIONSHIP
CROSSFILTER
TREATAS
RELATEDTABLE
Answer Description
The correct answer is USERELATIONSHIP. To analyze a measure by different date roles (like order date and ship date) using a single date table, you create multiple relationships, where one is active and the others are inactive. The USERELATIONSHIP function is used within a CALCULATE statement to temporarily activate an inactive relationship for the duration of that specific calculation.
CROSSFILTER is incorrect because it modifies the filter direction of an existing active relationship; it does not activate an inactive one.
TREATAS is incorrect. It is used to apply the filter context from one table to another where no physical relationship exists, which is different from activating a predefined inactive relationship.
RELATEDTABLE is incorrect as it is a navigation function that returns a table of related rows from the 'many' side of a relationship, not a way to activate a relationship path for a calculation.
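A sketch of such a measure, assuming the sales value is stored in a 'Sales'[SalesAmount] column:

Total Sales by Ship Date =
CALCULATE (
    SUM ( 'Sales'[SalesAmount] ),
    USERELATIONSHIP ( 'Sales'[ShipDate], 'DimDate'[Date] )
)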
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does the USERELATIONSHIP function do in DAX?
How is CROSSFILTER different from USERELATIONSHIP in Power BI?
When should TREATAS be used instead of USERELATIONSHIP?
You are building a Power BI report that sources data from a folder of CSV files. You need to create a parameter in the Power Query Editor that will hold the folder path, allowing you to easily switch between a development and a production location. Which configuration for the new parameter is most appropriate to ensure functionality and adhere to performance best practices?
Set the 'Type' to "Text" and clear the "Enable load" selection.
Set the 'Type' to "Text" and select the "Enable load" checkbox.
Set 'Suggested Values' to "List of values" and enter both the development and production paths.
Set the 'Type' to "Any" and select the "Required" checkbox.
Answer Description
The correct approach is to set the parameter's 'Type' to 'Text', as a file path is a text string. It is also a best practice to clear the 'Enable load' selection. Parameters are typically used within other queries (like defining a data source) and do not need to be loaded into the data model as a separate table, which unchecking 'Enable load' prevents. Loading the parameter into the model would consume unnecessary memory.
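For reference, a Text parameter created through Manage Parameters is stored internally as a small M expression roughly like the one below (the parameter name and path are illustrative), and the folder query then consumes it:

// parameter named FolderPath (expression as shown in its Advanced Editor)
"C:\Data\Dev\SalesCsv" meta [IsParameterQuery = true, Type = "Text", IsParameterQueryRequired = true]

// referenced from the folder query
Source = Folder.Files(FolderPath)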
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why is it important to clear the 'Enable load' selection for parameters in Power BI?
What happens if I set the 'Type' of the parameter to 'Any' instead of 'Text'?
How can I use parameters to switch between development and production environments in Power BI?
You are creating a Power BI data model and have a query named 'BaseSales' that performs 15 transformation steps to clean raw sales data. You need to create two additional tables for analysis: 'SalesByRegion' and 'SalesByProduct'. Both of these tables must start with the cleaned data from 'BaseSales' before having their own specific aggregations applied. You want to ensure that any future modifications to the cleaning steps in 'BaseSales' are automatically inherited by both 'SalesByRegion' and 'SalesByProduct' to minimize maintenance. Which action should you take?
Reference the 'BaseSales' query twice to create the two new queries.
Create two blank queries and merge each of them with the 'BaseSales' query.
Duplicate the 'BaseSales' query twice to create the two new queries.
Create two blank queries and append the 'BaseSales' query to each of them.
Answer Description
The correct action is to reference the 'BaseSales' query. Referencing creates a new query that is linked to the original and uses the output of the source query as its starting point. This means any changes made to the cleaning steps in 'BaseSales' will automatically flow through to 'SalesByRegion' and 'SalesByProduct'. This centralizes the transformation logic, making the model more efficient and easier to maintain.
Duplicating the query would create two new, independent queries. Any changes in 'BaseSales' would not propagate to them, requiring manual updates in three separate places.
Merging and Appending are different operations. Merging is used to join queries column-wise (like a SQL join), and Appending is used to stack queries row-wise.
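For illustration, one of the referenced queries might look like the following; 'Region' and 'SaleAmount' are assumed column names in the cleaned data:

// SalesByRegion: starts from the output of BaseSales, then applies its own aggregation
let
    Source = BaseSales,
    Grouped = Table.Group(Source, {"Region"}, {{"TotalSales", each List.Sum([SaleAmount]), type number}})
in
    Grouped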
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the difference between referencing and duplicating a query in Power BI?
What is the difference between merging and appending queries in Power BI?
Why is centralizing transformation logic important when working with data models in Power BI?
Your Power BI model contains a large FactSales table and several dimension tables. You receive two requirements:
- Show a slicer that lets report consumers filter visuals by FiscalQuarter, a value derived from FactSales[SaleDate].
- Build a reusable HighValueCustomers dimension that keeps only customers whose lifetime sales amount exceeds $1,000,000 so the table can be related to other tables and used in reports. Which Power BI calculation type should you use for each requirement to meet the functional needs and follow recommended practice?
Create a calculated table for FiscalQuarter and a calculated column for HighValueCustomers.
Create a measure for FiscalQuarter and a calculated table for HighValueCustomers.
Create a calculated column for FiscalQuarter and a calculated table for HighValueCustomers.
Create a calculated column for FiscalQuarter and a measure for HighValueCustomers.
Answer Description
A calculated column is evaluated for every row in a table during data refresh and the result is stored in the model, so the value can be placed in slicers, rows, or columns. That makes it the correct choice for the FiscalQuarter attribute, which must be selectable in a slicer. A calculated table is created with a DAX expression that returns an entire table. Because it is stored in the model, it can participate in relationships and act as an independent dimension. Using a calculated table allows you to aggregate FactSales by customer, filter to those above $1,000,000, and then relate the resulting HighValueCustomers table to other tables. Alternative answers fail because measures cannot be used in slicers or relationships, and a calculated column cannot create a separate table that other tables can relate to.
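Hedged DAX sketches of both objects follow; 'CustomerKey', 'SalesAmount', and the simple calendar-quarter logic are assumptions made for illustration:

-- Calculated column on FactSales (usable in a slicer)
FiscalQuarter = "Q" & QUARTER ( FactSales[SaleDate] )

-- Calculated table that can be related to other tables
HighValueCustomers =
FILTER (
    ADDCOLUMNS (
        VALUES ( FactSales[CustomerKey] ),
        "LifetimeSales", CALCULATE ( SUM ( FactSales[SalesAmount] ) )
    ),
    [LifetimeSales] > 1000000
)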
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the difference between a calculated column and a calculated table in Power BI?
How do calculated columns contribute to slicers in Power BI?
Why can't measures be used for slicers or relationships in Power BI?
You are designing a Power BI report that must show near-real-time inventory levels from an Azure SQL Database that contains over 150 million rows. Business users require the visuals to reflect changes in the source within one minute, and the data engineering team forbids duplicating the raw data in the Power BI service. Only simple aggregations are needed in the report. Which data connectivity mode should you configure for the fact tables?
Configure the tables by using DirectQuery mode.
Create a live connection to an Analysis Services tabular model.
Use Import mode with incremental refresh policies.
Import the tables and schedule a refresh.
Answer Description
DirectQuery leaves the data in the Azure SQL Database and stores only metadata in the Power BI semantic model, so no copy of the detailed rows is persisted in the service. Because queries are sent to the source each time a visual renders, the report can surface changes almost immediately, well inside the one-minute requirement, without waiting for scheduled or incremental refresh cycles. Import modes (with or without incremental refresh) would cache the data inside Power BI and rely on refresh jobs that cannot guarantee sub-minute latency. A live connection to Analysis Services is irrelevant here because the data resides directly in Azure SQL, and building an intermediate Analysis Services layer would still require data movement or a separate modeling environment. Therefore, DirectQuery best satisfies both the real-time visibility and the data-duplication constraint.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is DirectQuery mode in Power BI?
Why is DirectQuery better than Import mode for real-time data scenarios?
How does Power BI handle performance in DirectQuery mode for large datasets?
You are developing a Power BI report that analyzes financial transaction data from a CSV file. In Power Query, the TransactionAmount column, which contains monetary values with decimal points, is automatically assigned the Decimal Number data type. You observe minor rounding discrepancies when performing aggregations. To ensure the highest level of precision for financial calculations, which data type should you use for the TransactionAmount column?
Whole Number
Text
Fixed Decimal Number
Decimal Number
Answer Description
The correct data type is Fixed Decimal Number. This data type is specifically designed for currency values and provides precision up to four decimal places, which prevents the floating-point rounding errors that can occur with the Decimal Number type. The Decimal Number type is a floating-point number and is not suitable for financial calculations where exact precision is required. Whole Number is incorrect as it cannot store decimal values. Text is incorrect because it prevents mathematical operations.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why does Fixed Decimal Number provide higher precision for financial calculations compared to Decimal Number?
What are examples of rounding errors and how do they occur in Decimal Number?
When should you use Decimal Number instead of Fixed Decimal Number in Power BI?
You are designing a Power BI semantic model that contains a Sales fact table with two date keys: OrderDateKey and ShipDateKey. Report authors must be able to build visuals that simultaneously compare order volume by order date and shipping performance by ship date, without writing custom DAX to activate relationships. Row-level security will also be applied. Which modeling approach should you implement?
Duplicate the Date table so there is a Date table and a Ship Date table, and create an active one-to-many relationship from each table to its corresponding date key in Sales.
Create two inactive relationships from the existing Date table to the Sales table and require measures to use USERELATIONSHIP when needed.
Keep a single active relationship on OrderDateKey and set its cross-filter direction to Both so it implicitly filters ShipDateKey.
Create a many-to-many relationship between the Date table and the Sales table, with cross-filter direction set to Both.
Answer Description
Power BI permits only one active relationship between any two tables. To let visuals filter the Sales fact table by both OrderDateKey and ShipDateKey at the same time, each role-playing instance of the date dimension must have its own active relationship. The recommended pattern is therefore to duplicate the Date table (for example, create a Ship Date table as a calculated or reference table) and relate each copy to the appropriate date key in Sales as a one-to-many, single-direction, active relationship. This design keeps the model simple for report authors, works with row-level security, and avoids the need for USERELATIONSHIP calls. Using inactive relationships would force every measure to enable the relationship manually, while many-to-many or bi-directional settings do not resolve the single-active-relationship constraint.
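For example, the second role-playing table can be created as a calculated table that simply copies the existing Date table, and is then related to Sales[ShipDateKey] in Model view:

Ship Date = 'Date'  -- calculated table: a full copy of the Date dimension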
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why does Power BI only allow one active relationship between two tables?
What is a role-playing dimension, and why is it duplicated in this solution?
How does this modeling approach work with row-level security?
In Power BI Desktop, you are inspecting a large fact table in Power Query Editor. After turning on Data Profiling, you need to view the minimum, maximum, average (mean), and standard deviation for the Amount column so you can spot outliers before loading the data. Which View-tab option should you enable, and then click the column, to display all of those statistics in a single pane?
Column quality
Query diagnostics
Column distribution
Column profile
Answer Description
The Column profile feature opens a pane at the bottom of Power Query Editor that combines detailed column statistics (including minimum, maximum, mean, standard deviation, count, null count, and more) with a value-distribution chart. Selecting a column immediately shows these metrics, making it ideal for identifying outliers. Column distribution only shows a small histogram plus distinct and unique counts, Column quality only shows percentages of Valid/Error/Empty values, and Query diagnostics is unrelated to profiling columns. Therefore, Column profile is the correct choice.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does Data Profiling in Power BI do?
How does the Column profile differ from Column distribution?
Can Query diagnostics be used for analyzing columns?
Neat!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.