GCP Professional Cloud Architect Practice Question
Exam domain: Designing and planning a cloud solution architecture
Your company runs three independent workloads in Google Cloud:
Workload A: A once-per-month ETL backfill that spins up thousands of stateless workers for about 12 hours. Tasks checkpoint progress every few minutes and can restart automatically if an instance disappears.
Workload B: A low-latency pricing service that must run on exactly 10 vCPUs and 32 GB of RAM per replica to align with a fixed software license model.
Workload C: An ML research team that repeatedly trains large transformer models, with each run needing eight NVIDIA A100 GPUs, and wants the shortest possible training time.
Which Compute Engine configuration is the most cost-effective match for these workloads while satisfying their technical constraints?
A → N2 standard instances with a 3-year committed-use discount, B → predefined n2-standard-16 machines, C → Cloud TPU v4 Pods
A → Reserved sole-tenant nodes, B → M2-ultramem-208 machines, C → N1-standard-32 instances with P100 GPUs
A → Autoscaled Cloud Run jobs, B → E2-standard-8 instances behind an internal load balancer, C → T2D-standard-32 instances with attached NVIDIA T4 GPUs
A → Spot VMs in a managed instance group, B → custom 10 vCPU/32 GB machine type, C → A2-highgpu (8×A100) instances
Workload A benefits most from Spot VMs in a managed instance group: the job is fault-tolerant, short-lived, and runs only once a month, so accepting preemption cuts instance cost by up to 91%, and the instance group simply recreates any reclaimed workers, which resume from their checkpoints without affecting results.

Workload B needs a precise vCPU-to-memory ratio to avoid over-provisioning licensed cores. A custom machine type lets you define an exact 10 vCPU / 32 GB shape, so you pay only for what the license model requires while still meeting latency objectives.

Workload C needs eight A100 GPUs and the highest available GPU throughput. The A2-highgpu family attaches up to 8 A100 GPUs per VM with high-bandwidth NVLink interconnect between the GPUs and is the recommended Compute Engine choice for large-scale training.

The other options either cost more or fail a technical constraint: a 3-year committed-use discount on N2 instances for Workload A locks in spend for a 12-hour monthly job and misses the larger Spot savings; predefined shapes such as n2-standard-16, e2-standard-8, or m2-ultramem-208 for Workload B waste vCPUs or memory relative to the 10 vCPU / 32 GB requirement; Cloud TPUs do not run GPU-specific (CUDA) code, and smaller GPUs such as the P100 or T4 would lengthen training. Mapping Spot VMs to A, a custom machine type to B, and A2-highgpu VMs to C is therefore the most cost-effective configuration that satisfies every constraint.
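For concreteness, here is a minimal sketch of how each of these shapes could be requested with the google-cloud-compute Python client library. It is illustrative only: the project ID, zone, image, resource names, and sizes are placeholders, waiting on the returned operations and error handling are omitted, and A100 and Spot capacity vary by zone.

```python
# Minimal sketch (not production code): requesting the three shapes with the
# google-cloud-compute client library. Project, zone, image, names, and sizes
# are placeholders; operation handling and error handling are omitted.
from google.cloud import compute_v1

PROJECT = "my-project"                      # placeholder project ID
ZONE = "us-central1-a"                      # placeholder zone with A100 capacity
IMAGE = "projects/debian-cloud/global/images/family/debian-12"


def boot_disk() -> compute_v1.AttachedDisk:
    """A simple auto-deleting boot disk built from a public image."""
    return compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(source_image=IMAGE),
    )


def default_nic() -> compute_v1.NetworkInterface:
    """Attach the instance to the default VPC network."""
    return compute_v1.NetworkInterface(network="global/networks/default")


# --- Workload A: Spot VMs in a managed instance group -----------------------
template = compute_v1.InstanceTemplate(
    name="etl-spot-template",
    properties=compute_v1.InstanceProperties(
        machine_type="n2-standard-4",              # any stateless-worker shape
        disks=[boot_disk()],
        network_interfaces=[default_nic()],
        scheduling=compute_v1.Scheduling(
            provisioning_model="SPOT",              # deep discount, may be reclaimed
            instance_termination_action="DELETE",   # MIG recreates workers; tasks resume from checkpoints
        ),
    ),
)
compute_v1.InstanceTemplatesClient().insert(
    project=PROJECT, instance_template_resource=template
)
# In a real script, wait for the template-creation operation before this call.
compute_v1.InstanceGroupManagersClient().insert(
    project=PROJECT,
    zone=ZONE,
    instance_group_manager_resource=compute_v1.InstanceGroupManager(
        name="etl-spot-mig",
        base_instance_name="etl-spot",
        instance_template=f"projects/{PROJECT}/global/instanceTemplates/etl-spot-template",
        target_size=1000,                           # scale out for the monthly backfill
    ),
)

# --- Workload B: custom machine type, exactly 10 vCPUs / 32 GB (32768 MB) ---
pricing_vm = compute_v1.Instance(
    name="pricing-replica-1",
    machine_type=f"zones/{ZONE}/machineTypes/n2-custom-10-32768",
    disks=[boot_disk()],
    network_interfaces=[default_nic()],
)
compute_v1.InstancesClient().insert(project=PROJECT, zone=ZONE, instance_resource=pricing_vm)

# --- Workload C: a2-highgpu-8g, which bundles eight A100 GPUs with the VM ---
training_vm = compute_v1.Instance(
    name="a100-training-1",
    machine_type=f"zones/{ZONE}/machineTypes/a2-highgpu-8g",
    disks=[boot_disk()],
    network_interfaces=[default_nic()],
    scheduling=compute_v1.Scheduling(on_host_maintenance="TERMINATE"),  # GPU VMs cannot live-migrate
)
compute_v1.InstancesClient().insert(project=PROJECT, zone=ZONE, instance_resource=training_vm)
```

The same shapes translate directly into gcloud flags if you prefer the CLI: --provisioning-model=SPOT for the Spot workers, --custom-cpu and --custom-memory for the 10 vCPU / 32 GB replica, and --machine-type=a2-highgpu-8g for the training VM, which includes the eight A100s as part of the machine type.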