GCP Professional Data Engineer Practice Question

Your organization has dozens of Cloud Storage buckets that hold raw log files and multiple BigQuery projects that contain curated analytics tables. Different business units own the data, but the CDO mandates that data stewards must be able to locate any dataset through a single search interface, add business-glossary tags, and apply column-level IAM policies, all without moving the data or writing or maintaining custom crawler code. Which approach meets the mandate with the least operational overhead?

  • Schedule Cloud Asset Inventory exports to BigQuery each night and build a custom metadata portal in Looker based on the exported tables.

  • Deploy Cloud Functions that call the Data Catalog API to crawl every bucket and dataset and populate custom tag entries and taxonomies.

  • Copy all BigQuery datasets into a single central project and use INFORMATION_SCHEMA views as the enterprise metadata catalog.

  • Create a Dataplex lake, define separate zones for each business unit, and add every Cloud Storage bucket and BigQuery dataset as Dataplex assets so the Dataplex Catalog provides unified discovery and governance.
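For reference, the Dataplex option amounts to a small amount of one-time setup rather than ongoing crawler code. Below is a minimal sketch assuming the google-cloud-dataplex Python client library; the project, location, lake, zone, and asset names are hypothetical placeholders.

```python
# Minimal sketch of the Dataplex approach, assuming the
# google-cloud-dataplex client (pip install google-cloud-dataplex).
# All project, location, and resource names are placeholders.
from google.cloud import dataplex_v1

client = dataplex_v1.DataplexServiceClient()
parent = "projects/my-project/locations/us-central1"

# 1. One lake as the umbrella for unified discovery and governance.
lake = client.create_lake(
    parent=parent,
    lake_id="enterprise-lake",
    lake=dataplex_v1.Lake(display_name="Enterprise Data Lake"),
).result()  # create_* calls return long-running operations

# 2. A raw zone for one business unit's log buckets.
raw_zone = client.create_zone(
    parent=lake.name,
    zone_id="marketing-raw",
    zone=dataplex_v1.Zone(
        type_=dataplex_v1.Zone.Type.RAW,
        resource_spec=dataplex_v1.Zone.ResourceSpec(
            location_type=dataplex_v1.Zone.ResourceSpec.LocationType.SINGLE_REGION,
        ),
        discovery_spec=dataplex_v1.Zone.DiscoverySpec(enabled=True),
    ),
).result()

# 3. Attach an existing bucket as an asset; Dataplex catalogs it
#    in place, so no data is copied or moved.
client.create_asset(
    parent=raw_zone.name,
    asset_id="marketing-logs",
    asset=dataplex_v1.Asset(
        resource_spec=dataplex_v1.Asset.ResourceSpec(
            name="projects/my-project/buckets/marketing-raw-logs",
            type_=dataplex_v1.Asset.ResourceSpec.Type.STORAGE_BUCKET,
        ),
    ),
).result()

# A curated zone would attach BigQuery datasets the same way, using
# type_=dataplex_v1.Asset.ResourceSpec.Type.BIGQUERY_DATASET and a
# name such as "projects/my-project/datasets/curated_sales".
```

Once assets are attached, Dataplex discovery keeps the catalog entries current in place, so stewards can, in principle, search, glossary-tag, and apply column-level policy tags from a single surface with no crawler pipelines to maintain.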

Objective: Storing the data