
GCP Professional Data Engineer Practice Question

Your company is migrating a 10-year-old snowflake-style data warehouse to BigQuery. Analysts run ad-hoc dashboards that typically aggregate daily sales by customer attributes and product hierarchy. Query latency must be under two seconds without maintaining complex BI caches, and data is loaded hourly from Cloud Storage. Which data modeling approach in BigQuery best satisfies the latency objective while keeping ETL logic and operating costs low?

  • Retain the star schema unchanged and enable BigQuery BI Engine to cache dashboard queries, avoiding any changes to the underlying tables.

  • Maintain a star schema but create materialized views that pre-join the fact and dimension tables for every common aggregation used in dashboards.

  • Denormalize the warehouse by embedding dimension attributes as nested and repeated fields inside the fact table, producing one wide, partitioned table that analysts query directly.

  • Keep the existing snowflake schema and simply partition the fact table on the load-date column, relying on BigQuery's distributed joins for interactive performance.
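To make the denormalization option concrete, here is a minimal sketch (all field names are hypothetical) of how dimension attributes can be embedded as nested and repeated fields in one wide fact table, so the typical daily aggregation runs as a single-table scan with no joins:

```python
from collections import defaultdict

# Hypothetical denormalized sales-fact rows: attributes from the old
# customer and product-hierarchy dimension tables are embedded directly
# in each row as nested records and repeated fields.
rows = [
    {
        "sale_date": "2024-06-01",          # partition column in BigQuery
        "amount": 120.0,
        "customer": {"segment": "SMB", "region": "EMEA"},   # nested record
        "product": {                                        # nested record
            "sku": "A-100",
            "hierarchy": ["Electronics", "Audio", "Headphones"],  # repeated field
        },
    },
    {
        "sale_date": "2024-06-01",
        "amount": 80.0,
        "customer": {"segment": "Enterprise", "region": "AMER"},
        "product": {"sku": "B-200", "hierarchy": ["Electronics", "Video"]},
    },
]

# The common dashboard query -- daily sales by customer segment and
# top-level product category -- needs no joins against dimension tables.
totals = defaultdict(float)
for r in rows:
    key = (r["sale_date"], r["customer"]["segment"], r["product"]["hierarchy"][0])
    totals[key] += r["amount"]

print(dict(totals))
```

In BigQuery, such a table would be partitioned on `sale_date`, so the hourly loads from Cloud Storage simply append to the current partition and dashboard queries prune to the partitions they need.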

Objective: Storing the data