AWS Certified Data Engineer Associate DEA-C01 Practice Question

Your team launches a transient Amazon EMR cluster nightly to run Spark jobs that turn 50 TB of raw data in Amazon S3 into optimized Parquet files for Athena. The output must persist in S3. Intermediate shuffle and spill data are not required after the job ends. What is the most cost-effective storage configuration that provides high performance and meets these requirements?

  • Mount an Amazon EFS file system on the cluster and direct Spark to read, write, and spill to the EFS mount; leave the cluster running for reuse.

  • Configure EMRFS to read from and write to Amazon S3, and let Spark use instance-store HDFS volumes for intermediate data; terminate the cluster after completion.

  • Enable Spark DirectOutputCommitter v2 to write shuffle and output files directly to S3, and disable local storage on all nodes.

  • Attach multiple Amazon EBS volumes to each core node, set HDFS replication to three for all data, and copy results to S3 with DistCp after the job.
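For context, the EMRFS-plus-instance-store pattern the scenario points toward can be sketched as a transient cluster request. This is a minimal illustration only: the bucket names, script path, and instance counts are hypothetical, and the dict mirrors the shape accepted by boto3's `emr.run_job_flow(**params)` without actually submitting anything.

```python
# Hypothetical transient-EMR request: job input/output live in S3 via
# EMRFS, shuffle/spill stays on instance-store HDFS, and the cluster
# terminates itself when the step completes. Not submitted to AWS here.
params = {
    "Name": "nightly-parquet-conversion",
    "ReleaseLabel": "emr-6.15.0",
    "Applications": [{"Name": "Spark"}],
    "Instances": {
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge",
             "InstanceCount": 1},
            # d-family instances include local NVMe instance store,
            # giving HDFS fast ephemeral space for shuffle and spill
            {"InstanceRole": "CORE", "InstanceType": "m5d.4xlarge",
             "InstanceCount": 10},
        ],
        # transient cluster: tear down once all steps finish
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    "Steps": [
        {
            "Name": "raw-to-parquet",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": [
                    "spark-submit", "--deploy-mode", "cluster",
                    "s3://example-bucket/jobs/convert_to_parquet.py",
                    "s3://example-bucket/raw/",      # EMRFS input
                    "s3://example-bucket/parquet/",  # output persists in S3
                ],
            },
        }
    ],
    "JobFlowRole": "EMR_EC2_DefaultRole",
    "ServiceRole": "EMR_DefaultRole",
}
```

Because the durable data never lives on the cluster, nothing of value is lost when the nodes and their instance-store volumes are reclaimed at termination.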

Domain: Data Store Management