AWS Certified Data Engineer Associate DEA-C01 Practice Question

A data engineering team needs to automate a three-step daily pipeline: run an AWS Glue crawler, launch a Glue Spark ETL job when the crawler succeeds, and then run a data-quality job only if the ETL job completes successfully. If any step fails, processing must stop and a message must be sent to the existing DataOps Amazon SNS topic. The team wants the simplest solution that avoids provisioning additional compute or workflow services outside of AWS Glue. Which approach meets these requirements?

  • Build an AWS Step Functions Standard workflow that invokes the crawler and Glue jobs with Catch clauses for failures and an SNS integration for notifications.

  • Create an AWS Glue workflow that links the crawler, ETL job, and data-quality job with on-success triggers, and add an EventBridge rule that forwards Glue Job State Change FAILED events to the DataOps SNS topic.

  • Configure three separate Glue time-based triggers; after each run, invoke an AWS Lambda function that checks the previous job status, starts the next job, and publishes failures to SNS.

  • Author a DAG in Amazon MWAA that submits the crawler and Glue jobs through the Spark operator and pushes unsuccessful task callbacks to the SNS topic.
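For context on the EventBridge-based notification path mentioned in one of the options: an EventBridge rule matches Glue job failures with an event pattern like the sketch below. This is illustrative only, assuming the default event bus; the DataOps SNS topic would be attached separately as the rule's target.

```json
{
  "source": ["aws.glue"],
  "detail-type": ["Glue Job State Change"],
  "detail": {
    "state": ["FAILED"]
  }
}
```

Glue emits these `Glue Job State Change` events automatically, so no additional compute is needed to detect a failed job.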

Domain: Data Ingestion and Transformation