Microsoft Fabric Data Engineer Associate DP-700 Practice Question

You have created a notebook in a Microsoft Fabric workspace that accepts two parameters and transforms raw JSON files into a curated Delta table. You must ensure that the notebook runs automatically whenever a new JSON file is written to the raw container of an Azure Data Lake Storage Gen2 account, and only after the file has been copied to OneLake. The solution must use built-in Fabric features and require as little custom code as possible. Which approach should you use?

  • Create a SQL job in the Lakehouse that listens for CREATE FILE events and, when triggered, uses dynamic SQL to call the notebook through the Fabric REST API.

  • Develop a Dataflow Gen2 that copies data from the raw container to OneLake and adds a script step at the end to invoke the notebook.

  • Create a Data Factory pipeline that uses an Azure Storage event trigger, includes a Copy activity to move the file to OneLake, and then calls the parameterized notebook from a subsequent Notebook activity.

  • Configure a scheduled run that executes the notebook every five minutes, and add code to the notebook that polls the raw container for new files and copies them before processing.
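
For context, here is a minimal sketch of the parameterized notebook the scenario describes. It is illustrative only, not part of the question: the parameter names (raw_path, table_name) and the transformation are assumptions. In a Fabric notebook, the first cell can be toggled as a parameters cell so that a pipeline Notebook activity can override its values at run time; the spark session object is predefined in Fabric notebooks.

```python
# Parameters cell (toggled as "parameters" in the Fabric notebook UI).
# A pipeline Notebook activity can override these defaults at run time.
# Both names are hypothetical, chosen for this sketch.
raw_path = "Files/raw/sample.json"   # parameter 1: path to the new JSON file
table_name = "curated_events"        # parameter 2: target Delta table name

from pyspark.sql import functions as F

# Read the raw JSON from the lakehouse Files area (already copied to OneLake).
df = spark.read.json(raw_path)

# Example transformation: stamp each row with its ingestion time.
curated = df.withColumn("ingested_at", F.current_timestamp())

# Append the result to the curated Delta table.
curated.write.format("delta").mode("append").saveAsTable(table_name)
```

Parameterization matters here because a pipeline can feed the notebook values derived from the triggering event (for example, the path of the newly arrived file), rather than hard-coding them in the notebook.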

Exam: Microsoft Fabric Data Engineer Associate DP-700
Domain: Implement and manage an analytics solution