GCP Professional Cloud Architect Practice Question

Your analytics team ingests about 50 TB of web-click logs into Google Cloud every month. Logs are queried intensively by Dataproc jobs during the first 30 days and only rarely after that, but must be retained for compliance for seven years. The Dataproc cluster runs in the us-central1 region, and engineers need IAM-based, object-level access to individual log files. Which storage design best meets the performance, cost, and governance requirements with minimal operational overhead?

  • Create a regional Cloud Storage bucket in us-central1. Store new logs in the Standard storage class and use Object Lifecycle Management to move objects to Coldline after 30 days, keeping the bucket under Cloud IAM control.

  • Store logs in a Filestore instance mounted by Dataproc; after 30 days copy the files to Archive storage in a multi-region bucket via a custom script.

  • Write logs to a zonal SSD Persistent Disk attached to a proxy VM for 30 days, then export them to a Coldline bucket using a weekly cron job.

  • Ingest logs directly into Cloud Bigtable with one-week TTL, then export weekly snapshots to a multi-region Nearline bucket for long-term retention.
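For context, Cloud Storage Object Lifecycle Management rules (referenced in the first option) are defined as a JSON policy attached to the bucket. A minimal sketch of a rule that transitions objects to Coldline once they are 30 days old:

```json
{
  "rule": [
    {
      "action": { "type": "SetStorageClass", "storageClass": "COLDLINE" },
      "condition": { "age": 30 }
    }
  ]
}
```

Such a policy can be applied with, for example, `gsutil lifecycle set lifecycle.json gs://example-log-bucket` (the bucket name here is a placeholder).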

Objective: Managing and provisioning a solution infrastructure