Microsoft Fabric Data Engineer Associate DP-700 Practice Question

In a Microsoft Fabric PySpark notebook, you have a DataFrame named df that contains incremental changes for the Customers dimension. You must write the data to the lakehouse path "Tables/dim_customer" so that it is stored in Delta format, automatically merges any new columns that appear in future loads, and is physically partitioned by the Country column. Which PySpark write command meets all of these requirements?

  • df.write.format("delta").mode("append").option("mergeSchema", "true").partitionBy("Country").save("Tables/dim_customer")

  • df.repartition("Country").write.format("delta").mode("append").save("Tables/dim_customer")

  • df.write.format("delta").mode("overwrite").option("overwriteSchema", "true").partitionBy("Country").save("Tables/dim_customer")

  • df.write.format("parquet").mode("append").option("mergeSchema", "true").partitionBy("Country").save("Tables/dim_customer")
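Only the first option satisfies all three requirements: `format("delta")` stores the table in Delta format, `mode("append")` with `option("mergeSchema", "true")` lets new columns merge into the schema on future incremental loads, and `partitionBy("Country")` controls the physical layout. As a simplified illustration (plain Python, not Spark itself), `partitionBy` produces the Hive-style directory layout Delta Lake uses on disk, with one `Country=<value>` subdirectory per partition value:

```python
import os
import tempfile

# Simplified sketch (not Spark): partitionBy("Country") groups rows
# into Country=<value> subdirectories under the table path.
rows = [
    {"CustomerId": 1, "Country": "US"},
    {"CustomerId": 2, "Country": "DE"},
    {"CustomerId": 3, "Country": "US"},
]

base = tempfile.mkdtemp()
table = os.path.join(base, "dim_customer")
for row in rows:
    # One directory per distinct partition value, as in Hive-style partitioning.
    part_dir = os.path.join(table, f"Country={row['Country']}")
    os.makedirs(part_dir, exist_ok=True)

print(sorted(os.listdir(table)))  # → ['Country=DE', 'Country=US']
```

The other options each miss a requirement: `repartition("Country")` only shuffles data in memory and does not create partition directories, `mode("overwrite")` with `overwriteSchema` replaces the table instead of appending increments, and `format("parquet")` does not produce a Delta table at all.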

Objective: Ingest and transform data