GCP Professional Data Engineer Practice Question

Your company wants to adopt a data-mesh model on Google Cloud. Retail, Logistics, and Finance teams will own their own raw Cloud Storage buckets and transformed BigQuery tables. A central data office must still enforce uniform data-classification, masking, and access-audit policies across all domains with minimal custom scripting and without taking over schema or pipeline ownership. Which architecture best satisfies these requirements?

  • Create a shared VPC; let each domain run separate BigQuery projects and maintain Cloud Functions that replicate organization-wide IAM roles and DLP templates to every project.

  • Schedule Cloud Composer workflows that nightly copy each domain's data into a central warehouse where the governance team applies masking views before republishing to consumers.

  • Use Dataplex to create one lake with separate zones and assets per domain, delegate asset-level ownership to the domain projects, and apply centralized taxonomy-based policy tags that propagate to BigQuery tables.

  • Put all data for every domain into a single BigQuery dataset secured with row-level security filters and managed entirely by the central data office.
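The Dataplex option can be sketched with `gcloud`, assuming entirely hypothetical names: a central governance project (`corp-gov-prj`), a Retail domain project (`retail-prj`), and a bucket (`retail-raw-data`). This is a minimal illustration of the lake/zone/asset split, not a complete governance setup:

```shell
# Sketch only: all project, lake, zone, and bucket names are hypothetical.

# The central data office creates one lake in its governance project.
gcloud dataplex lakes create corp-mesh-lake \
    --project=corp-gov-prj --location=us-central1

# One zone per domain; RAW zones front Cloud Storage data,
# CURATED zones front BigQuery tables.
gcloud dataplex zones create retail-raw \
    --project=corp-gov-prj --location=us-central1 --lake=corp-mesh-lake \
    --type=RAW --resource-location-type=SINGLE_REGION

# Attach the domain-owned bucket as an asset; the Retail team keeps
# ownership of the bucket, its schemas, and its pipelines in retail-prj.
gcloud dataplex assets create retail-raw-bucket \
    --project=corp-gov-prj --location=us-central1 --lake=corp-mesh-lake \
    --zone=retail-raw --resource-type=STORAGE_BUCKET \
    --resource-name=projects/retail-prj/buckets/retail-raw-data
```

Classification and masking are then enforced by attaching policy tags from a centrally managed Data Catalog taxonomy to BigQuery columns, so the governance rules propagate to every domain's tables without the central team owning the pipelines.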
