GCP Professional Data Engineer Practice Question

To keep a mission-critical Spark Streaming job within a 1-minute latency objective, you run it on a persistent Dataproc cluster and attach a YARN-based autoscaling policy. Spikes must be absorbed quickly, but scale-downs must avoid repeatedly killing executors mid-task. Which combination of policy settings best satisfies these constraints while balancing cost?

  • scaleUpFactor = 0.5, scaleDownFactor = 1.0, gracefulDecommissionTimeout = 30 minutes

  • scaleUpFactor = 0.2, scaleDownFactor = 0.1, gracefulDecommissionTimeout = 0 seconds

  • scaleUpFactor = 0.5, scaleDownFactor = 0.5, gracefulDecommissionTimeout = 0 seconds

  • scaleUpFactor = 1.0, scaleDownFactor = 0.1, gracefulDecommissionTimeout = 30 minutes
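The three knobs in the options map onto fields of a Dataproc autoscaling policy. As a rough sketch (field names follow the Dataproc autoscaling policy schema; the instance counts and cooldown value here are illustrative assumptions, not part of the question), an aggressive-scale-up, gentle-scale-down policy might look like:

```yaml
# Illustrative Dataproc autoscaling policy (values are example assumptions).
workerConfig:
  minInstances: 2        # floor of primary workers
  maxInstances: 20       # ceiling of primary workers
basicAlgorithm:
  cooldownPeriod: 2m     # wait between scaling evaluations
  yarnConfig:
    scaleUpFactor: 1.0               # add 100% of estimated needed capacity at once
    scaleDownFactor: 0.1             # remove only 10% of excess capacity per cycle
    gracefulDecommissionTimeout: 30m # let running executors finish before node removal
```

A high `scaleUpFactor` absorbs spikes in a single scaling step, while a low `scaleDownFactor` plus a long graceful decommission timeout prevents the cluster from aggressively reclaiming nodes that still host active executors.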

GCP Professional Data Engineer
Maintaining and automating data workloads