GCP Professional Data Engineer Practice Question

Your company operates multiple Cloud Dataflow streaming pipelines across dev, staging, and prod projects. The SRE team wants to receive a PagerDuty notification when any production pipeline accumulates more than 10 minutes of system lag for at least 5 consecutive minutes, but they do not want alerts to fire after a job is drained or cancelled. Which Cloud Monitoring alerting configuration best satisfies these requirements?

  • Create a log-based metric that counts ERROR entries from Dataflow job logs and configure an uptime check alert when the metric count is non-zero for 5 minutes.

  • Set an alert on pubsub.googleapis.com/subscription/num_undelivered_messages aggregated across all streams and trigger when backlog exceeds the equivalent of 10 minutes of messages.

  • Create an alerting policy on metric dataflow.googleapis.com/job/current_system_lag for resource type dataflow_job, filter on label job_state="JOB_STATE_RUNNING", and fire when the maximum value exceeds 600 seconds for 5 minutes.

  • Alert on metric dataflow.googleapis.com/job/watermark with a condition that the minimum value is older than 600 seconds for 5 minutes; no additional label filters are required.
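For reference, a metric-threshold alerting policy like the one described in the third option could be sketched as a Cloud Monitoring v3 AlertPolicy definition along these lines. This is a sketch, not a verified configuration: the field names follow the Monitoring v3 AlertPolicy schema, the `job_state` label filter mirrors the option text as written, and the notification channel ID is a placeholder for a PagerDuty channel.

```json
{
  "displayName": "Prod Dataflow system lag > 10 min",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "current_system_lag above 600s for 5 min",
      "conditionThreshold": {
        "filter": "metric.type=\"dataflow.googleapis.com/job/current_system_lag\" AND resource.type=\"dataflow_job\" AND metric.labels.job_state=\"JOB_STATE_RUNNING\"",
        "comparison": "COMPARISON_GT",
        "thresholdValue": 600,
        "duration": "300s",
        "aggregations": [
          {
            "alignmentPeriod": "60s",
            "perSeriesAligner": "ALIGN_MAX"
          }
        ]
      }
    }
  ],
  "notificationChannels": [
    "projects/PROD_PROJECT_ID/notificationChannels/PAGERDUTY_CHANNEL_ID"
  ]
}
```

Filtering on the running job state is what keeps the condition from continuing to evaluate (and alert) against series from drained or cancelled jobs.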

Objective: Maintaining and automating data workloads