GCP Professional Cloud Architect Practice Question

Your retail analytics team is migrating an on-premises computer-vision workflow to Google Cloud. Millions of product photos are already stored as JPEG objects in a regional Cloud Storage bucket. You need to design the data-ingestion step of a Vertex AI Pipeline that will:

  1. Trigger human annotation jobs for any newly added images.
  2. Maintain an auditable record of which images and labels fed each AutoML Vision training run so experiments are reproducible.
  3. Allow future pipelines in the same project to reuse the labeled data without copying the objects.

Which approach best meets these requirements?
  • Import the Cloud Storage URIs into a Vertex AI Managed Dataset and use that dataset as the source for labeling tasks and every training component (see the sketch after the options).

  • Mount the Cloud Storage bucket into every training container with gcsfuse and track the exact file set in a Git repository alongside pipeline code.

  • Convert the images to base64 strings, load them into a BigQuery table, and point AutoML Tables at the table for training.

  • Pass the Cloud Storage path directly to each AutoML Vision training component and store the list of processed object names in Artifact Registry for audit purposes.
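
To make the first option concrete, here is a minimal Python sketch using the google-cloud-aiplatform SDK. The project ID, region, bucket path, import file, and display names are hypothetical placeholders, and the CSV import file shown is just one of the URI-list formats the SDK accepts.

```python
# Sketch of the Managed Dataset approach: import existing Cloud Storage
# URIs into a Vertex AI Managed Dataset. Assumes the
# google-cloud-aiplatform package; all names below are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# The import file is a CSV (or JSONL) in the bucket that lists the
# gs:// URIs of the JPEG objects; the dataset references the objects
# in place rather than copying them.
dataset = aiplatform.ImageDataset.create(
    display_name="product-photos",
    gcs_source="gs://my-regional-bucket/import/image_uris.csv",
    import_schema_uri=aiplatform.schema.dataset.ioformat.image.single_label_classification,
)

# Any later pipeline in the same project can reuse the labeled dataset
# by its resource name, with no second copy of the image bytes.
reused = aiplatform.ImageDataset(dataset.resource_name)

# Training from the Managed Dataset ties each AutoML run to the exact
# set of images and labels it consumed, which is what makes runs
# reproducible and auditable.
job = aiplatform.AutoMLImageTrainingJob(
    display_name="product-classifier",
    prediction_type="classification",
)
model = job.run(dataset=reused, model_display_name="product-classifier-v1")
```

Because labeling jobs and every training component point at the same dataset resource, this approach can satisfy all three requirements without duplicating the underlying objects.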

Exam: GCP Professional Cloud Architect
Objective: Managing and provisioning a solution infrastructure