AWS Certified Solutions Architect Professional SAP-C02 Practice Question
Your company recently completed a lift-and-shift migration of its on-premises order-processing system to AWS. A Python script is triggered every hour by cron on an Amazon EC2 instance in the OrderService Auto Scaling group. The script opens a read-only connection to an existing Amazon RDS for MySQL database, aggregates the last hour of orders, writes a CSV file, and uploads the file to an Amazon S3 bucket. Because of the cron schedule, at least one EC2 instance must remain running even though the aggregation takes only a few minutes each hour.
Management asks a solutions architect to modernize this reporting workflow. The replacement solution must:
Eliminate the need to keep EC2 instances running solely for the hourly job.
Require only minimal changes to the existing Python logic.
Continue writing the CSV report to the same S3 bucket.
Provide built-in retry capability and error logging without provisioning servers or containers.
Which approach meets these requirements MOST cost-effectively while aligning with a serverless design strategy?
Define an AWS Batch job that runs the Python script and associate it with a managed compute environment using Amazon EC2 On-Demand instances. Schedule the job submission with an EventBridge rule that runs hourly.
Configure an Amazon EventBridge Scheduler schedule to invoke an AWS Lambda function each hour. Package the existing Python script as the Lambda handler, connect to the RDS database (optionally through RDS Proxy), generate the CSV in memory, and upload it to the S3 bucket.
Create an AWS Glue Spark ETL job triggered hourly by an AWS Glue workflow. Use a JDBC connection to read order data from RDS and write the results as Parquet files to S3.
Containerize the Python script into an Amazon ECS task that runs on AWS Fargate. Use an EventBridge rule to start the task every hour and stop it when the task finishes.
An Amazon EventBridge Scheduler schedule that invokes an AWS Lambda function every hour is completely serverless and removes the need for always-on EC2 instances. Lambda automatically writes error and execution logs to Amazon CloudWatch Logs, and both EventBridge Scheduler and Lambda's asynchronous invocation model provide configurable retry policies. The existing Python script can be packaged as the Lambda handler with only minor adjustments (for example, using an RDS Proxy endpoint or a standard MySQL driver) and can still create and upload the CSV file to Amazon S3.
An AWS Glue job is also serverless, but Glue is optimized for large-scale ETL and would require rewriting the script to run inside a Spark context; in addition, the option as described writes Parquet files rather than the required CSV report.
Running the script in an ECS task on AWS Fargate removes server management but still requires containerizing and maintaining task definitions, which violates the "no containers" preference and introduces more operational overhead than Lambda for a simple hourly task.
An AWS Batch job in an EC2 compute environment relies on underlying EC2 instances whose AMIs must be patched and managed by the customer, so it is not a fully serverless solution and retains unwanted infrastructure maintenance.
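The "minor adjustments" for the correct option can be sketched as follows. This is a minimal, hypothetical handler, not the exam's reference implementation: the environment variable names (DB_HOST, DB_USER, DB_PASSWORD, DB_NAME, REPORT_BUCKET), the orders table schema, and the S3 key are all assumptions. It assumes PyMySQL is bundled in the deployment package (boto3 ships with the Lambda runtime), and DB_HOST would point at the RDS Proxy endpoint if one is used.

```python
import csv
import io
import os


def rows_to_csv(rows, header):
    """Serialize query rows to a CSV string entirely in memory."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue()


def handler(event, context):
    """Hourly entry point invoked by the EventBridge Scheduler schedule."""
    # Imported inside the handler so the pure CSV logic above stays
    # testable without AWS dependencies; pymysql is an assumed packaged
    # dependency, and the table/column names below are hypothetical.
    import boto3
    import pymysql

    conn = pymysql.connect(
        host=os.environ["DB_HOST"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database=os.environ["DB_NAME"],
    )
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT order_id, total FROM orders "
                "WHERE created_at >= NOW() - INTERVAL 1 HOUR"
            )
            body = rows_to_csv(cur.fetchall(), ["order_id", "total"])
    finally:
        conn.close()

    # Same destination bucket as the original cron job.
    boto3.client("s3").put_object(
        Bucket=os.environ["REPORT_BUCKET"],
        Key="reports/hourly-orders.csv",
        Body=body.encode("utf-8"),
    )
```

Keeping the CSV generation in memory avoids any need for instance storage, and because the function only runs for a few minutes per hour, the per-invocation billing model is what makes this the most cost-effective choice.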