AWS Certified Data Engineer Associate DEA-C01 Practice Question

A data engineer needs to let business analysts visually build and test data transformations on a 5 GB sample of CSV files stored in Amazon S3, then run those same transformations every night on 2 TB of new data and write the output to Parquet for Amazon Athena. The company does not want to manage clusters or write code. Which approach meets these requirements with the least operational effort and cost?

  • Launch an Amazon EMR cluster running Apache Spark, store the transformation script in Amazon S3, and invoke it nightly with Amazon EventBridge.

  • Schedule an AWS Lambda function with Amazon EventBridge that runs an Amazon Athena CTAS query to convert the CSV files to Parquet each night.

  • Author a visual ETL job in AWS Glue Studio that uses Apache Spark to convert the data and trigger it nightly with a Glue workflow.

  • Create an AWS Glue DataBrew project to build a transformation recipe on the sample data and schedule a DataBrew job to run nightly on the full S3 dataset, outputting Parquet.
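The Athena CTAS approach named above rewrites a CSV-backed table as Parquet with a single SQL statement. As a minimal sketch, the snippet below builds such a CTAS query string (the table names and S3 location are hypothetical placeholders); in practice a Lambda function would submit it to Athena, e.g. via the `boto3` `start_query_execution` API.

```python
# Sketch of the CTAS statement a scheduled Lambda function could submit
# to Athena. Database, table, and bucket names are hypothetical.
def build_ctas_query(source_table: str, target_table: str, output_location: str) -> str:
    """Build an Athena CTAS query that rewrites a CSV-backed table as Parquet."""
    return (
        f"CREATE TABLE {target_table} "
        f"WITH (format = 'PARQUET', external_location = '{output_location}') "
        f"AS SELECT * FROM {source_table}"
    )

query = build_ctas_query(
    source_table="sales_csv",
    target_table="sales_parquet",
    output_location="s3://example-bucket/parquet/",
)
print(query)
```

Note that while this converts formats with no cluster to manage, it still requires writing SQL and Lambda code, which is why a code-free visual tool can be a better fit for the scenario described.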

Data Operations and Support