GCP Professional Cloud Architect Practice Question

Your company operates a revenue-critical API on GKE in two regions. The SRE team wants to introduce automated chaos experiments that randomly delete pods during business hours. Corporate policy states that the experiment must be aborted as soon as the 500-ms latency SLO is violated, and no customer traffic or data may leave the cluster. Which approach satisfies these requirements with the least operational overhead?

  • Export request logs to BigQuery through Pub/Sub and run a nightly SQL job that disables the chaos Job if latency spikes are detected.

  • Set a PodDisruptionBudget with maxUnavailable at 50% and rely on GKE surge upgrades to randomly evict pods; autoscaling will restore capacity if latency increases.

  • Create a Cloud Monitoring SLO for latency and attach an alerting policy that publishes to Pub/Sub; have a Cloud Function subscribed to the topic delete the chaos-experiment Job via the Kubernetes API, and trigger the Job on a work-hours schedule with Cloud Scheduler.

  • Deploy chaos sidecars through a blue/green rollout with Cloud Deploy and manually watch dashboards to decide whether to roll back.
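For context on the event-driven abort path described in the third option, the sketch below shows what the Pub/Sub-triggered Cloud Function might look like. It is a minimal illustration under stated assumptions, not a reference implementation: the cluster endpoint, namespace, and Job name are hypothetical placeholders, the function's service account is assumed to have RBAC permission to delete Jobs, and a real deployment would verify TLS against the cluster's CA certificate rather than disabling verification.

```python
# Hypothetical Pub/Sub-triggered Cloud Function that aborts the chaos experiment
# when the latency SLO alerting policy fires. All identifiers below are
# illustrative placeholders, not values given in the question.

import base64
import google.auth
import google.auth.transport.requests
from kubernetes import client

GKE_ENDPOINT = "https://203.0.113.10"      # assumed cluster endpoint (placeholder)
CHAOS_NAMESPACE = "chaos"                  # assumed namespace for the experiment
CHAOS_JOB_NAME = "pod-delete-experiment"   # assumed name of the chaos Job


def abort_chaos(event, context):
    """Entry point: delete the chaos Job when an SLO alert arrives on the topic."""
    # The alerting policy's notification payload arrives base64-encoded.
    payload = base64.b64decode(event.get("data", b"")).decode("utf-8")
    print(f"Received SLO alert: {payload}")

    # Use the function's service account to obtain a bearer token for the
    # Kubernetes API server (RBAC must allow deleting Jobs in the namespace).
    creds, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"]
    )
    creds.refresh(google.auth.transport.requests.Request())

    configuration = client.Configuration()
    configuration.host = GKE_ENDPOINT
    configuration.verify_ssl = False  # sketch only; use the cluster CA in practice
    configuration.api_key = {"authorization": "Bearer " + creds.token}

    batch_api = client.BatchV1Api(client.ApiClient(configuration))
    batch_api.delete_namespaced_job(
        name=CHAOS_JOB_NAME,
        namespace=CHAOS_NAMESPACE,
        propagation_policy="Foreground",  # also remove pods the chaos Job created
    )
    print(f"Deleted chaos Job {CHAOS_JOB_NAME} in namespace {CHAOS_NAMESPACE}")
```

Compared with the nightly BigQuery analysis in the first option, this path reacts within the alerting policy's evaluation window, so the chaos Job is stopped close to the moment the 500-ms latency SLO is breached rather than hours later.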

Exam: GCP Professional Cloud Architect. Objective: Analyzing and optimizing technical and business processes.