Microsoft Fabric Data Engineer Associate DP-700 Practice Question

You are building a Lakehouse in Microsoft Fabric. A pipeline must ingest 200 GB of historical orders from Azure SQL Database once, and then each night load only the rows that have changed since the previous run. The pipeline triggers a Spark notebook that writes data to a Delta table in the Bronze layer. Which approach should you implement in the notebook to meet these requirements?

  • Enable Auto Optimize and Auto Compaction on the Delta table without additional logic; Spark will automatically detect and merge changes.

  • Use a Delta Lake MERGE INTO operation keyed on a persisted watermark column to upsert nightly changes.

  • Configure the pipeline to always overwrite the Delta table using COPY INTO with FILEFORMAT = PARQUET and a forced path option.

  • Issue a TRUNCATE TABLE statement on the Delta table followed by a full INSERT of all source rows each night.
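For context on the watermark-based option, the following is a minimal sketch of how such an incremental MERGE INTO could look in a Fabric PySpark notebook. The table names (bronze_orders, bronze_etl_watermark), column names (OrderID, ModifiedDate), and the jdbc_url connection variable are all hypothetical, and the sketch assumes the watermark control table was created during the initial historical load.

# Minimal sketch of a watermark-driven incremental upsert (hypothetical names).
from delta.tables import DeltaTable
from pyspark.sql import functions as F

bronze_path = "Tables/bronze_orders"      # assumed Bronze Delta table in the lakehouse
watermark_tbl = "bronze_etl_watermark"    # assumed control table created by the initial load

# 1. Read the last successful watermark (falls back to an initial value if the table is empty).
last_ts = (spark.table(watermark_tbl)
                .agg(F.max("last_modified"))
                .collect()[0][0]) or "1900-01-01 00:00:00"

# 2. Pull only the rows changed since the last run from Azure SQL Database.
changes_df = (spark.read
    .format("jdbc")
    .option("url", jdbc_url)              # assumed connection variable
    .option("query", f"SELECT * FROM dbo.Orders WHERE ModifiedDate > '{last_ts}'")
    .load())

# 3. Upsert the changed rows into the Bronze Delta table, keyed on the business key.
(DeltaTable.forPath(spark, bronze_path).alias("t")
    .merge(changes_df.alias("s"), "t.OrderID = s.OrderID")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())

# 4. Persist the new watermark for the next nightly run
#    (a production version would skip this step when the batch is empty).
(changes_df.agg(F.max("ModifiedDate").alias("last_modified"))
    .write.mode("overwrite").saveAsTable(watermark_tbl))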
