AWS Certified Data Engineer Associate DEA-C01 Practice Question

A company stores user transactions in an Amazon DynamoDB table. The data engineering team must capture every change and land it in Amazon S3 as compressed Parquet files within 5 minutes so downstream analytics can query the data. The pipeline must be fully serverless, require the least custom code, and automatically evolve the schema when new attributes are added. Which solution meets these requirements?

  • Enable Kinesis Data Streams for the DynamoDB table and configure an Amazon Data Firehose delivery stream with data format conversion to write Parquet files to S3.

  • Trigger an AWS Lambda function from DynamoDB Streams to batch records, convert them to Parquet, and upload the files to S3.

  • Use AWS Database Migration Service (AWS DMS) to replicate the table directly to S3 in Parquet format.

  • Enable DynamoDB on-demand export to S3 and schedule an AWS Glue ETL job to convert the exported JSON files to Parquet.
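For context, the Firehose data format conversion referenced in the first option is declared in the delivery stream definition itself. The sketch below shows the rough shape of a `boto3` `create_delivery_stream` request for that pattern; every ARN, stream name, bucket, and the Glue database/table are placeholder assumptions, not values from the question.

```python
# Hypothetical sketch of an Amazon Data Firehose delivery stream that reads a
# Kinesis data stream and writes compressed Parquet to S3. All ARNs and names
# below are placeholders for illustration only.
stream_config = {
    "DeliveryStreamName": "txn-changes-to-s3",
    "DeliveryStreamType": "KinesisStreamAsSource",
    "KinesisStreamSourceConfiguration": {
        "KinesisStreamARN": "arn:aws:kinesis:us-east-1:123456789012:stream/ddb-changes",
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-read-role",
    },
    "ExtendedS3DestinationConfiguration": {
        "BucketARN": "arn:aws:s3:::analytics-landing",
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-write-role",
        # With format conversion enabled, Firehose requires a buffer of at
        # least 64 MiB; the 300-second interval keeps delivery within the
        # 5-minute latency requirement.
        "BufferingHints": {"SizeInMBs": 64, "IntervalInSeconds": 300},
        "DataFormatConversionConfiguration": {
            "Enabled": True,
            # The output schema comes from the AWS Glue Data Catalog, which
            # can be updated as new attributes appear in the source data.
            "SchemaConfiguration": {
                "DatabaseName": "analytics",
                "TableName": "transactions",
                "RoleARN": "arn:aws:iam::123456789012:role/firehose-glue-role",
            },
            "OutputFormatConfiguration": {
                "Serializer": {"ParquetSerDe": {"Compression": "SNAPPY"}}
            },
        },
    },
}
```

A caller would pass this dictionary to `boto3.client("firehose").create_delivery_stream(**stream_config)`; no custom transformation code runs, since Firehose performs the Parquet conversion itself.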

Domain: Data Ingestion and Transformation