Microsoft Fabric Data Engineer Associate DP-700 Practice Question

You are designing a Data Factory pipeline in a Microsoft Fabric workspace. A Spark notebook accepts two parameters, sourceTable and targetTable, which are defined in the notebook's parameters cell so that values passed from the pipeline override the notebook defaults. Each night the pipeline must run the notebook once for every table name in a pipeline array parameter, without duplicating notebook code. Which pipeline design should you implement?

  • Add a ForEach activity that loops over the array parameter and, inside the loop, invoke a single Notebook activity whose base parameters use dynamic expressions such as @item() to pass the current table name.

  • Insert a Copy Data activity that uses a parameterized source dataset for each table and calls the notebook indirectly through a linked service.

  • Create one Notebook activity per table and link them sequentially in the pipeline, setting fixed parameter values in each activity.

  • Pass the entire array as a single comma-separated string to the Notebook activity and split the string inside the notebook to process all tables in one run.
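For orientation, the ForEach-based design described in the first option can be sketched roughly as the pipeline-definition fragment below. This is an illustrative approximation, not exact Fabric pipeline JSON: the array parameter name (tableNames), the activity names, the notebook/workspace ID placeholders, and the assumption that the target table name is derived from the source name with a suffix are all made up for the example.

```json
{
  "name": "ForEachTable",
  "type": "ForEach",
  "typeProperties": {
    "items": {
      "value": "@pipeline().parameters.tableNames",
      "type": "Expression"
    },
    "activities": [
      {
        "name": "RunTransformNotebook",
        "type": "TridentNotebook",
        "typeProperties": {
          "notebookId": "<notebook-id>",
          "workspaceId": "<workspace-id>",
          "parameters": {
            "sourceTable": {
              "value": { "value": "@item()", "type": "Expression" },
              "type": "string"
            },
            "targetTable": {
              "value": { "value": "@concat(item(), '_clean')", "type": "Expression" },
              "type": "string"
            }
          }
        }
      }
    ]
  }
}
```

The key idea is that @item() resolves to the current element of the array on each iteration, so one Notebook activity definition serves every table without duplicating notebook code.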

Objective domain: Implement and manage an analytics solution