AWS Certified Data Engineer Associate DEA-C01 Practice Question

A company has an existing Amazon Redshift cluster that stores product analytics. Order transactions are written to an Amazon Aurora PostgreSQL DB cluster. Data engineers must run hourly joins between Redshift fact tables and the latest orders data with sub-second query latency, while minimizing data movement and administrative overhead. Which solution meets these requirements by using AWS best practices?

  • Create an external schema in Amazon Redshift that points to the Aurora PostgreSQL cluster by using a federated query IAM role. Build a materialized view that joins the Aurora orders table with the Redshift fact table and schedule an hourly REFRESH.

  • Configure AWS Database Migration Service to continuously replicate the Aurora orders table to Amazon S3 in Parquet format, define an external table, and join the data by using Redshift Spectrum.

  • Unload the Redshift fact tables to Amazon S3 each hour, load them into Aurora by using AWS Data Pipeline, perform the joins in Aurora, and write the results back to Redshift.

  • Set up an AWS Glue job to export the most recent orders every hour to Amazon S3 as CSV files and run a COPY command to load the data into a staging table in Redshift before joining.
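The first option above can be sketched as the SQL a data engineer would run in Redshift. This is a minimal illustration, not an authoritative implementation: every identifier below (schema names, cluster endpoint, role and secret ARNs, table and column names) is a hypothetical placeholder, since the question does not supply them.

```python
# Sketch of the federated-query setup described in the first option.
# All identifiers (schemas, endpoint, ARNs, tables, columns) are
# hypothetical placeholders, not values taken from the question.

# DDL that registers the Aurora PostgreSQL database as an external
# schema inside Redshift; queries against it are pushed down to
# Aurora live, so no data pipeline is needed.
create_external_schema = """
CREATE EXTERNAL SCHEMA aurora_orders
FROM POSTGRES
DATABASE 'ordersdb' SCHEMA 'public'
URI 'orders-cluster.cluster-abc123.us-east-1.rds.amazonaws.com' PORT 5432
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftFederatedRole'
SECRET_ARN 'arn:aws:secretsmanager:us-east-1:111122223333:secret:orders-creds';
"""

# Materialized view that precomputes the join between a Redshift fact
# table and the federated orders table, so dashboard queries read
# stored results with sub-second latency.
create_materialized_view = """
CREATE MATERIALIZED VIEW orders_enriched_mv AS
SELECT f.product_id, f.page_views, o.order_id, o.order_total
FROM analytics.product_fact f
JOIN aurora_orders.orders o ON o.product_id = f.product_id;
"""

# Run on an hourly schedule (for example via the Redshift query
# scheduler); materialized views over federated tables must be
# refreshed manually rather than with AUTO REFRESH.
hourly_refresh = "REFRESH MATERIALIZED VIEW orders_enriched_mv;"

if __name__ == "__main__":
    for stmt in (create_external_schema, create_materialized_view, hourly_refresh):
        print(stmt.strip())
```

The key trade-off this illustrates: the external schema keeps the orders data in Aurora (minimal data movement), while the hourly `REFRESH` materializes the join inside Redshift for fast reads.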

Data Store Management