GCP Professional Data Engineer Practice Question: Ingesting and processing the data

An ecommerce retailer streams millions of click-event messages into Cloud Pub/Sub. The data engineering team wants to land the raw events in BigQuery as fast as possible, run complex SQL-based enrichments and aggregations inside BigQuery, and then write a daily inventory-forecast table from BigQuery into an external PostgreSQL database that supplies the merchandising application. They want to minimize pipeline code and avoid duplicating heavy transformations outside BigQuery. Which data-movement pattern best satisfies all of these requirements?

  • Use a traditional ETL pipeline that transforms events in Dataflow before loading to BigQuery and also transforms the forecast table in Dataflow before inserting it into PostgreSQL.

  • Perform all transformations in Dataproc Spark jobs outside BigQuery, loading the pre-processed data into BigQuery and exporting the forecast to PostgreSQL via Cloud Storage files.

  • Implement a Reverse ETL pipeline for the click-event ingestion and use ELT to export the forecast table to PostgreSQL.

  • Apply an ELT pattern for ingestion: stream raw events into BigQuery and transform them with SQL, then use a Reverse ETL pattern to write the resulting forecast table out to the external PostgreSQL database.
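For context, the sketch below illustrates the kind of flow the scenario describes: raw click events already landed in BigQuery are transformed with SQL inside the warehouse, and the resulting daily forecast table is then copied into the external PostgreSQL database. All project, dataset, table, and column names (retail_analytics.click_events, daily_inventory_forecast, inventory_forecast), the forecast SQL itself, and the choice of psycopg2 as the PostgreSQL driver are illustrative assumptions, not part of the question.

```python
import os

from google.cloud import bigquery
import psycopg2  # assumed driver choice for the external PostgreSQL database

bq = bigquery.Client()

# 1. Raw events are assumed to be streamed into BigQuery already (for example
#    via a Pub/Sub BigQuery subscription); this step only runs the in-warehouse
#    SQL enrichment/aggregation that produces the daily forecast table.
transform_sql = """
CREATE OR REPLACE TABLE retail_analytics.daily_inventory_forecast AS
SELECT
  sku,
  DATE(event_timestamp) AS forecast_date,
  SUM(quantity_viewed) * 0.08 AS projected_units  -- placeholder forecast logic
FROM retail_analytics.click_events
WHERE DATE(event_timestamp) = CURRENT_DATE()
GROUP BY sku, forecast_date
"""
bq.query(transform_sql).result()  # block until the transformation finishes

# 2. Copy the finished forecast table out of BigQuery into PostgreSQL
#    (the Reverse ETL step that feeds the merchandising application).
rows = bq.query(
    "SELECT sku, forecast_date, projected_units "
    "FROM retail_analytics.daily_inventory_forecast"
).result()

conn = psycopg2.connect(
    host="pg.example.internal",        # illustrative hostname
    dbname="merchandising",
    user="loader",
    password=os.environ["PG_PASSWORD"],
)
with conn, conn.cursor() as cur:
    for row in rows:
        cur.execute(
            "INSERT INTO inventory_forecast (sku, forecast_date, projected_units) "
            "VALUES (%s, %s, %s) "
            "ON CONFLICT (sku, forecast_date) DO UPDATE "
            "SET projected_units = EXCLUDED.projected_units",
            (row.sku, row.forecast_date, row.projected_units),
        )
conn.close()
```

In practice the export step might be handled by a managed connector or a scheduled Dataflow job rather than a hand-written loop, but the division of labor is the same: the heavy transformations stay inside BigQuery as SQL, and only the small, finished forecast table leaves the warehouse.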
