AWS Certified Data Engineer Associate DEA-C01 Practice Question

A data engineering team needs to move 10 TB of historical fact data from an Amazon Redshift table to Amazon S3 so the cluster can be resized. The data must remain queryable later through Amazon Redshift Spectrum and should be stored in a compressed, columnar format to minimize storage costs. Which solution meets these requirements while using the fewest additional AWS services?

  • Extract the rows with the Redshift Data API, store them as compressed CSV files in S3, and catalog the files with an AWS Glue crawler.

  • Use AWS Database Migration Service to perform a full load from the Redshift table to an S3 target endpoint configured for Parquet.

  • Run an UNLOAD command with the PARQUET option to export the query result to an S3 prefix, then create an external table that points to the Parquet files (see the sketch after this list).

  • Run a COPY command to copy the rows into a staging table stored on S3, then query the staging table with Redshift Spectrum.
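
For reference, the third option describes Redshift's native UNLOAD-to-Parquet workflow, which keeps everything inside Redshift, S3, and the Data Catalog. The sketch below is illustrative only: the sales_fact_history table and its columns, the example-bucket S3 bucket, the spectrum_db catalog database, and the IAM role ARN are all hypothetical placeholders. Parquet files written by UNLOAD are Snappy-compressed by default, which satisfies the compressed, columnar storage requirement.

    -- Export the table as compressed, columnar Parquet files to S3.
    -- Table, bucket, database, and role names are hypothetical placeholders.
    UNLOAD ('SELECT * FROM sales_fact_history')
    TO 's3://example-bucket/unload/sales_fact_history/'
    IAM_ROLE 'arn:aws:iam::111122223333:role/ExampleRedshiftRole'
    FORMAT AS PARQUET;

    -- Register an external schema backed by the AWS Glue Data Catalog,
    -- then define an external table over the unloaded Parquet files so
    -- Redshift Spectrum can query them after the cluster resize.
    CREATE EXTERNAL SCHEMA spectrum_schema
    FROM DATA CATALOG
    DATABASE 'spectrum_db'
    IAM_ROLE 'arn:aws:iam::111122223333:role/ExampleRedshiftRole'
    CREATE EXTERNAL DATABASE IF NOT EXISTS;

    CREATE EXTERNAL TABLE spectrum_schema.sales_fact_history (
        sale_id   BIGINT,
        sale_date DATE,
        amount    DECIMAL(12,2)
    )
    STORED AS PARQUET
    LOCATION 's3://example-bucket/unload/sales_fact_history/';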

Domain: Data Store Management