AWS Certified Data Engineer Associate DEA-C01 Practice Question

A company stores operational data in an Amazon Aurora PostgreSQL cluster. Analysts need to join this data with large fact tables that already reside in Amazon Redshift for near-real-time ad-hoc reporting. The solution must minimize data movement and ongoing maintenance while allowing analysts to run standard SQL joins from their Redshift data warehouse. Which approach meets these requirements with the least operational overhead?

  • Schedule an AWS Glue ETL job to load the Aurora data into Redshift staging tables every 15 minutes and join the staging tables with the fact tables.

  • Set up an AWS Database Migration Service task with change data capture (CDC) to replicate the Aurora tables into Redshift and run joins on the replicated tables.

  • Create an external schema in Amazon Redshift that references the Aurora PostgreSQL database and use Amazon Redshift federated queries to join the remote tables with local fact tables.

  • Export the Aurora tables to Amazon S3 and use Redshift Spectrum external tables to join the exported data with Redshift fact tables.

Domain: Data Store Management
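
For context on the federated-query approach named in the third option, here is a minimal sketch of how an external schema pointing at an Aurora PostgreSQL database might be created and then joined against a local Redshift fact table. The cluster endpoint, database name, schema, table names, IAM role, and Secrets Manager ARN are placeholder values invented for illustration, not details from the scenario.

    -- Register the Aurora PostgreSQL database as an external schema in Redshift
    -- (placeholder endpoint, role, and secret ARN shown here)
    CREATE EXTERNAL SCHEMA aurora_ops
    FROM POSTGRES
    DATABASE 'operations' SCHEMA 'public'
    URI 'example-aurora.cluster-abc123.us-east-1.rds.amazonaws.com' PORT 5432
    IAM_ROLE 'arn:aws:iam::111122223333:role/ExampleRedshiftFederatedRole'
    SECRET_ARN 'arn:aws:secretsmanager:us-east-1:111122223333:secret:example-aurora-creds';

    -- Join the live Aurora table with a local Redshift fact table in standard SQL
    SELECT f.order_date,
           SUM(f.sales_amount) AS revenue
    FROM analytics.fact_sales AS f
    JOIN aurora_ops.orders AS o
      ON o.order_id = f.order_id
    WHERE o.order_status = 'OPEN'
    GROUP BY f.order_date;

With this setup, queries read current rows directly from Aurora at run time, so no pipeline or replicated copy has to be maintained; the trade-off is that each query pushes work to the operational database over the network.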