GCP Professional Data Engineer Practice Question

A media-streaming company runs an Apache Beam pipeline on Cloud Dataflow in the us-central1 region. The job keeps several terabytes of user session data in Redis to perform low-latency joins. Management wants the pipeline to survive a complete zonal outage without manual intervention while keeping operational overhead and complexity to a minimum. Which approach best meets these requirements?

  • Run open-source Redis Cluster as a GKE StatefulSet spread across three zones and manage failover with custom scripts and Kubernetes operators.

  • Create a Memorystore for Redis Cluster instance in us-central1. Configure the Dataflow pipeline to connect through the cluster's discovery endpoint and rely on its built-in multi-zone shard replication (see the connection sketch after this list).

  • Provision two Basic Tier Memorystore for Redis instances, one in us-central1-a and one in us-central1-b, and modify the Dataflow job to write to both instances for redundancy.

  • Deploy a Standard Tier Memorystore for Redis instance in us-central1-a and create a Cloud SQL read replica in a different zone to take over if the primary zone fails.
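
For reference, here is a minimal sketch of what the discovery-endpoint approach looks like from inside a Beam pipeline. It assumes the redis-py client (4.1 or later); the endpoint address and key schema below are hypothetical placeholders, not values from the question.

    import apache_beam as beam
    from redis.cluster import RedisCluster  # redis-py >= 4.1

    # Hypothetical discovery endpoint of the Memorystore for Redis Cluster
    # instance; in practice this comes from the instance's details.
    DISCOVERY_HOST = "10.0.0.5"
    DISCOVERY_PORT = 6379

    class EnrichWithSession(beam.DoFn):
        """Looks up per-user session state in Redis for a low-latency join."""

        def setup(self):
            # One client per worker. redis-py fetches the full shard topology
            # from the single discovery endpoint and routes each command to
            # the right shard, so a shard failing over to its replica in
            # another zone is transparent to the pipeline.
            self.client = RedisCluster(
                host=DISCOVERY_HOST, port=DISCOVERY_PORT, decode_responses=True
            )

        def process(self, event):
            # Hypothetical key schema: one hash per user session.
            session = self.client.hgetall(f"session:{event['user_id']}")
            yield {**event, "session": session}

        def teardown(self):
            self.client.close()

Because the client learns the full shard topology from that one stable endpoint, the pipeline carries no hard-coded shard list and no custom failover logic, which is what keeps operational overhead low.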

GCP Professional Data Engineer: Maintaining and automating data workloads