Microsoft Fabric Data Engineer Associate DP-700 Practice Question

You are designing a data pipeline in Microsoft Fabric that loads operational data into a Lakehouse-based star schema every hour. Dimension tables must retain type-2 history and use surrogate keys that stay unique across all incremental loads. Which action should you implement to prepare the dimension data before the fact tables are loaded?

  • Write the incoming dimension rows in append mode; let a GENERATED ALWAYS AS IDENTITY column assign surrogate keys automatically during the insert.

  • Overwrite the dimension table on every run by using a Dataflow Gen2 that recreates the table from scratch.

  • Use a Delta Lake MERGE statement that matches on the business key, expires the current row, and inserts a new row that receives a new surrogate key whenever any tracked attribute changes.

  • Load the source table with COPY INTO and keep the original primary key from the operational system as the dimension key.
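The MERGE-based approach is the one that preserves type-2 history: match on the business key, close out the current version when a tracked attribute changes, and insert a fresh row with a new surrogate key. Below is a minimal plain-Python sketch of that logic (not Fabric or Spark code; the function name `scd2_merge` and all column names are illustrative assumptions):

```python
from datetime import date

def scd2_merge(dim_rows, incoming, business_key, tracked, next_sk, today):
    """Type-2 merge sketch: expire changed rows, insert new versions.

    dim_rows: list of dicts with 'sk', the business key, tracked attributes,
              'valid_from', and 'valid_to' (None marks the current row).
    incoming: list of dicts with the business key and tracked attributes.
    next_sk:  first unused surrogate key, so keys stay unique across loads.
    Returns the updated dimension list and the next free surrogate key.
    """
    # Index the currently active row for each business key.
    current = {r[business_key]: r for r in dim_rows if r["valid_to"] is None}
    for row in incoming:
        cur = current.get(row[business_key])
        if cur is not None and all(cur[a] == row[a] for a in tracked):
            continue  # no tracked attribute changed; keep the current row
        if cur is not None:
            cur["valid_to"] = today  # expire the superseded version
        dim_rows.append({
            business_key: row[business_key],
            **{a: row[a] for a in tracked},
            "sk": next_sk,          # new surrogate key for the new version
            "valid_from": today,
            "valid_to": None,
        })
        next_sk += 1
    return dim_rows, next_sk
```

In a real Fabric Lakehouse the same pattern is expressed as a Delta Lake `MERGE INTO ... WHEN MATCHED ... WHEN NOT MATCHED` statement running in a notebook or pipeline, with the surrogate key drawn from a monotonically increasing source rather than a Python counter.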

Objective: Ingest and transform data