Microsoft Fabric Data Engineer Associate DP-700 Practice Question

You are designing a batch transformation in a Microsoft Fabric lakehouse. Sales data is stored in three normalized Delta tables named Orders, OrderLines, and Products. Analysts want a single wide table that contains one row per order line, together with all related order and product attributes, so that Power BI does not have to perform joins at query time. Which approach should you use to create the denormalized table while keeping ACID guarantees in the lakehouse?

  • Create a Fabric notebook that uses PySpark to join the Orders, OrderLines, and Products Delta tables and write the joined result as a new Delta table in the lakehouse.

  • Create an eventstream in Real-Time Intelligence to ingest the three tables and use KQL to materialize the joined output.

  • Use a pipeline copy activity to copy each source table into separate folders in the lakehouse and enable Auto Optimize on the folders.

  • Enable mirroring on the source database so the normalized schema is replicated automatically and rely on the default semantic model in Power BI.
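The notebook-based option describes a standard join-and-write pattern. Below is a minimal PySpark sketch of that approach, assuming the three Delta tables live in the notebook's default lakehouse and join on OrderID and ProductID; the key names and the output table name SalesWide are illustrative assumptions, not part of the question.

```python
# Minimal sketch of the notebook approach, assuming the default lakehouse
# contains the Orders, OrderLines, and Products Delta tables and that the
# join keys are OrderID and ProductID (assumed names).
from pyspark.sql import SparkSession

# In a Fabric notebook the Spark session is pre-created as `spark`;
# getOrCreate() simply returns the existing session.
spark = SparkSession.builder.getOrCreate()

orders = spark.read.table("Orders")
order_lines = spark.read.table("OrderLines")
products = spark.read.table("Products")

# One row per order line, enriched with the related order and product attributes.
sales_wide = (
    order_lines
    .join(orders, on="OrderID", how="left")
    .join(products, on="ProductID", how="left")
)

# Writing in Delta format keeps the result an ACID-compliant lakehouse table.
(
    sales_wide.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("SalesWide")  # assumed name for the denormalized table
)
```

Because the output is a single wide Delta table, Power BI can read it directly without performing joins at query time, which is what the scenario calls for.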
