Microsoft Fabric Data Engineer Associate DP-700 Practice Question

CSV files land in Azure Blob Storage, which is exposed in OneLake via a shortcut. You must build a Fabric solution that starts automatically when each new file arrives, passes the file path to an existing PySpark notebook that cleans the data, and loads the output into a Fabric warehouse. The solution must support event-based triggers, parameterized notebook input, and built-in retry and alerting. Which Fabric component should you create?

  • Create a SQL pipeline in the Fabric warehouse to run the PySpark logic and load the data.

  • Create a Dataflow Gen2 to transform the data and load it into the warehouse.

  • Create a Data Factory pipeline that contains an Execute Notebook activity followed by a Copy Data activity.

  • Create a Spark notebook and schedule it to run on a frequent interval.
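
For context on the "parameterized notebook input" requirement: a Fabric pipeline's notebook activity passes values into a notebook by overriding the notebook's designated parameter cell at run time, typically building the file path from the storage event's metadata. A minimal sketch of what the cleaning notebook might look like, with the path and table name as hypothetical placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Parameter cell: in a Fabric notebook this cell is marked as the parameter
# cell, and the pipeline's notebook activity overrides the default value at
# run time. The path below is a hypothetical placeholder.
file_path = "Files/landing/sample.csv"

# Read the newly arrived CSV, apply simple cleaning, and stage the result as
# a table (hypothetical name) for the downstream load into the warehouse.
df = spark.read.option("header", True).csv(file_path)
cleaned = df.dropDuplicates().na.drop()
cleaned.write.mode("append").saveAsTable("staging_cleaned")
```

Retry counts and failure alerting would then be configured on the pipeline activity itself rather than inside the notebook.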
