GCP Professional Data Engineer Practice Question

Your retail company operates a 600 TB on-premises Hadoop cluster that stores historical sales logs. The corporate data center connects to Google Cloud over a 200 Mbps dedicated link that also carries other production traffic. After a one-time backfill of the 600 TB of historical data, the cluster produces about 1 TB of new log data each day. All data must become queryable in BigQuery, the existing 200 Mbps link must not be saturated, and ongoing operational effort should be minimal. Which approach should you recommend?

  • Ship a Transfer Appliance to move the 600 TB of historical data into Cloud Storage, then schedule daily Storage Transfer Service jobs with on-prem agents to copy new HDFS files to the same bucket and load them into BigQuery.

  • Configure BigQuery Data Transfer Service to connect to the on-prem Hadoop cluster, performing an initial full import and scheduling daily incremental transfers.

  • Provision a 10 Gbps Dedicated Interconnect and run a continuous Dataflow pipeline that streams both historical and daily data directly from the Hadoop cluster into BigQuery.

  • Use gsutil rsync over the existing 200 Mbps link to copy the 600 TB and the 1 TB daily increments into a Cloud Storage location that BigQuery reads as an external table.
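To sanity-check the bandwidth constraint behind the scenario, the rough arithmetic below estimates how long each transfer would occupy the shared 200 Mbps link. This is a Python sketch: the link speed and data sizes come from the question, while the 50% utilization figure is an illustrative assumption (the link also carries other production traffic).

```python
# Back-of-the-envelope feasibility check for moving data over the shared
# 200 Mbps link (decimal units; real-world throughput would be lower).

LINK_MBPS = 200                              # dedicated link capacity from the scenario
LINK_BYTES_PER_SEC = LINK_MBPS * 1_000_000 / 8

def transfer_days(size_bytes: int, utilization: float = 1.0) -> float:
    """Days needed to push `size_bytes` at the given fraction of link capacity."""
    seconds = size_bytes / (LINK_BYTES_PER_SEC * utilization)
    return seconds / 86_400

historical = 600 * 10**12                    # 600 TB of historical HDFS data
daily = 1 * 10**12                           # ~1 TB of new logs per day

print(f"Historical backfill at 100% of link: {transfer_days(historical):.0f} days")
print(f"Historical backfill at 50% of link:  {transfer_days(historical, 0.5):.0f} days")
print(f"Daily increment at 50% of link:      {transfer_days(daily, 0.5) * 24:.1f} hours")
```

Even at full saturation the 600 TB backfill would occupy the link for roughly 278 days, whereas each daily 1 TB increment needs only about 11 hours at full rate (or about 22 hours if capped at half the link), which is why the historical load and the ongoing load call for different transfer mechanisms.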

Objective: Ingesting and processing the data