GCP Professional Cloud Architect Practice Question

Your enterprise plans to train a 500-billion-parameter language model on Google Cloud. The data science team wants to iterate quickly, while finance requires that infrastructure costs scale down to zero when the training cluster is idle. You must design the compute environment so that it

  1. provides a high-bandwidth, low-latency fabric between thousands of accelerators,
  2. lets the team choose between NVIDIA H100 GPUs today and future TPU versions without re-architecting, and
  3. exposes simple hooks so that Vertex AI Pipelines can create and tear down the cluster on demand.

Which Google Cloud capability most directly satisfies these requirements and should be central to your design?

  • Leverage Google Cloud's AI Hypercomputer platform accessed through Vertex AI custom training jobs.

  • Create an unmanaged instance group of A3 Ultra GPU VMs and script scale-down using Cloud Functions and Cloud Scheduler.

  • Deploy a GKE Autopilot cluster with node pools that use Spot A3 GPU VMs and rely on Cluster Autoscaler for scale-to-zero.

  • Reserve a dedicated TPU v5p pod via Compute Engine and attach it to Vertex AI using custom containers.
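
As a concrete illustration of requirement 3, the sketch below (assuming the google-cloud-aiplatform Python SDK; the project, bucket, and image names are hypothetical) shows the kind of hook a Vertex AI Pipelines step could call to submit a custom training job. Accelerators are provisioned only for the lifetime of the job and released when it finishes, so idle cost drops to zero; the machine and accelerator values are illustrative and should be checked against current Vertex AI support in your region.

from google.cloud import aiplatform

# One-time SDK setup; project and bucket names here are hypothetical.
aiplatform.init(
    project="my-project",
    location="us-central1",
    staging_bucket="gs://my-staging-bucket",
)

# Define a custom training job that runs the team's own training container.
job = aiplatform.CustomContainerTrainingJob(
    display_name="llm-pretraining",
    container_uri="us-docker.pkg.dev/my-project/training/llm-trainer:latest",  # hypothetical image
)

# Submitting the job provisions the accelerators; they are released when the
# job completes, so no cluster sits idle between runs. Machine and accelerator
# values are illustrative, not a sizing recommendation.
job.run(
    replica_count=4,
    machine_type="a3-highgpu-8g",
    accelerator_type="NVIDIA_H100_80GB",
    accelerator_count=8,
)

Retargeting the same job definition at different accelerator hardware is, in principle, a matter of changing the machine and accelerator fields rather than re-architecting the training environment.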
