GCP Professional Cloud Architect Practice Question

A genomics research group wants to train a 180-billion-parameter transformer model on Google Cloud. Their code includes several custom CUDA kernels that are not compatible with XLA, and internal benchmarks show best throughput on NVIDIA H100 GPUs interconnected with NVSwitch. The scientists will orchestrate experiments with Vertex AI Pipelines and need to scale to multiple hosts that each provide 200 Gbps of network bandwidth for fast parameter exchange. Which solution should the cloud architect recommend?

  • Run the workload as a Vertex AI training job on Cloud TPU v4-32 pod slices to achieve petaflop-scale BF16 performance.

  • Create Vertex AI custom training jobs that use the A3 machine series (8 × H100 80 GB GPUs with NVSwitch) and run multi-host distributed training across several A3 virtual machines.

  • Package the training code in a container, deploy it to Cloud Run with T4 GPU accelerators, and scale the service horizontally.

  • Provision a GKE Autopilot cluster with NVIDIA P100 GPU node pools and execute the training using Kubeflow Pipelines.
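
Of the four options, only the A3-based custom training job meets every stated constraint: the custom CUDA kernels rule out TPUs, Cloud Run with T4 accelerators targets request-driven serving rather than large-scale multi-host training, and P100 node pools lack NVSwitch and H100-class throughput. As a rough illustration only, the sketch below shows how such a multi-host job might be defined with the google-cloud-aiplatform SDK; the project ID, region, bucket, container image, entrypoint, and replica counts are placeholder assumptions, not values taken from the question.

```python
# Sketch of a multi-host Vertex AI custom training job on A3 machines.
# Project, region, bucket, image URI, and replica counts are hypothetical.
from google.cloud import aiplatform

aiplatform.init(
    project="example-genomics-project",            # placeholder project ID
    location="us-central1",                        # placeholder region with A3 capacity
    staging_bucket="gs://example-staging-bucket",  # placeholder staging bucket
)

# Each replica is an a3-highgpu-8g VM: 8 x H100 80 GB GPUs connected by NVSwitch.
a3_machine_spec = {
    "machine_type": "a3-highgpu-8g",
    "accelerator_type": "NVIDIA_H100_80GB",
    "accelerator_count": 8,
}

container_spec = {
    # Placeholder image built from the team's training code (including the custom CUDA kernels).
    "image_uri": "us-docker.pkg.dev/example-genomics-project/train/llm:latest",
    # Assumed entrypoint; in practice it would read the CLUSTER_SPEC environment
    # variable Vertex AI injects to set up ranks for distributed training.
    "command": ["python", "train.py"],
}

job = aiplatform.CustomJob(
    display_name="h100-multihost-training",
    worker_pool_specs=[
        {   # worker pool 0: the chief (first host)
            "machine_spec": a3_machine_spec,
            "replica_count": 1,
            "container_spec": container_spec,
        },
        {   # worker pool 1: the remaining hosts (3 more VMs, for 4 hosts / 32 GPUs total)
            "machine_spec": a3_machine_spec,
            "replica_count": 3,
            "container_spec": container_spec,
        },
    ],
)

job.run()  # blocks until completion; job.submit() returns immediately instead
```

A job spec like this can also be submitted from a Vertex AI Pipelines component, so the team's existing pipeline orchestration carries over unchanged.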

Objective: Managing and provisioning a solution infrastructure