Microsoft Fabric Data Engineer Associate DP-700 Practice Question

You need to design a nightly process that ingests 200 GB of semi-structured JSON sales files from an Azure Storage account into a Microsoft Fabric Lakehouse. The solution must land the files unchanged, instantly expose them to several other Lakehouses without duplication, and then run PySpark code that performs complex joins and writes a cleansed Delta table. Which two Fabric capabilities should you combine to meet these requirements?

  • Mount the storage account in the Lakehouse and schedule a KQL transformation.

  • Create a OneLake shortcut to the storage location and run a PySpark notebook.

  • Use a pipeline Copy activity followed by a Dataflow Gen2.

  • Enable mirroring on the storage container and query the mirrored tables with T-SQL.
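
For context on the shortcut-plus-notebook pattern the scenario describes: below is a minimal PySpark sketch, assuming a shortcut named `sales_shortcut` pointing at the storage location and an existing `dim_store` Delta table in the Lakehouse (both names, and the column names, are hypothetical). In a Fabric notebook attached to the Lakehouse, shortcut contents surface under `Files/`, so the JSON can be read in place without duplication and the cleansed result written as a Delta table.

```python
# Minimal sketch of the notebook step. `spark` is the session that Fabric
# notebooks provide automatically; "sales_shortcut", "dim_store", and the
# column names are assumptions for illustration only.
from pyspark.sql import functions as F

# Files exposed through a OneLake shortcut appear under Files/, so the
# raw JSON is read in place (no copy is made into the Lakehouse).
raw_sales = spark.read.json("Files/sales_shortcut/*.json")

# Example of a more complex transformation: enrich sales with a store
# dimension table and keep only completed orders.
stores = spark.read.table("dim_store")
cleansed = (
    raw_sales
    .filter(F.col("order_status") == "completed")
    .join(stores, on="store_id", how="inner")
)

# Persist the cleansed output as a managed Delta table in the Lakehouse.
cleansed.write.format("delta").mode("overwrite").saveAsTable("sales_cleansed")
```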

Objective: Ingest and transform data