AWS Certified Solutions Architect Professional SAP-C02 Practice Question
A global e-commerce company is modernizing its monolithic application, which runs on Amazon EC2 and uses a large, self-managed PostgreSQL database cluster, also on EC2. This single database handles all application data, including user profiles, product catalog, order management, and transient data like shopping carts and user sessions. During peak traffic events, the database becomes a bottleneck, primarily due to intense read/write activity and lock contention on the tables managing shopping carts and sessions. This negatively impacts the performance of the entire platform. The company wants a highly scalable, durable, and cost-effective solution to specifically address this bottleneck with minimal operational overhead.
Which modernization approach should a Solutions Architect recommend?
Migrate the shopping cart and user session data to an Amazon DynamoDB table. Use the DynamoDB Time to Live (TTL) feature for session data expiration.
Implement Amazon ElastiCache for Redis to offload both shopping cart and user session data from the PostgreSQL database.
Migrate the entire PostgreSQL database to an Amazon Aurora PostgreSQL-Compatible Edition cluster.
Deploy an Amazon DynamoDB Accelerator (DAX) cluster in front of the existing PostgreSQL database to cache frequent queries.
The correct solution is to migrate the shopping cart and user session data to Amazon DynamoDB. DynamoDB is a purpose-built NoSQL key-value database designed for high-traffic, low-latency workloads, which is a perfect fit for shopping cart and session data access patterns. This approach decouples the high-volume, volatile data from the core relational database, directly addressing the performance bottleneck. DynamoDB offers seamless, automatic scaling to handle traffic spikes and provides single-digit millisecond latency. The use of DynamoDB Time to Live (TTL) is an efficient, no-cost mechanism to automatically delete expired session data, reducing storage costs and eliminating the need for custom cleanup logic.
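As an illustration of the TTL mechanism described above, here is a minimal sketch of building a session item whose expiry attribute DynamoDB TTL can act on. The table schema, attribute names, and session lifetime are assumptions for the example, not part of the question:

```python
# Hypothetical sketch: a DynamoDB session item with a TTL attribute.
# Table name, key schema, and the 30-minute lifetime are assumed values.
import time

SESSION_LIFETIME_SECONDS = 30 * 60  # assumed 30-minute session window


def build_session_item(session_id, user_id, now=None):
    """Build a DynamoDB item whose 'expires_at' attribute serves as the TTL.

    DynamoDB TTL expects a Number attribute holding a Unix epoch timestamp
    in seconds; items past that time are deleted automatically at no
    additional cost, with no custom cleanup job required.
    """
    now = time.time() if now is None else now
    return {
        "session_id": {"S": session_id},  # partition key (assumed schema)
        "user_id": {"S": user_id},
        "expires_at": {"N": str(int(now + SESSION_LIFETIME_SECONDS))},
    }


# The item would then be written with boto3, e.g.:
#   boto3.client("dynamodb").put_item(TableName="Sessions",
#                                     Item=build_session_item("s1", "u1"))
# and TTL enabled once per table via:
#   update_time_to_live(TableName="Sessions",
#       TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"})
```

Note that TTL deletions are background operations and can lag expiry by some time, so reads should still filter out items whose `expires_at` has passed.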
Migrating the entire database to Amazon Aurora PostgreSQL is a plausible but less optimal solution. While Aurora would reduce management overhead compared to a self-managed cluster, it is still a relational database. It would not fully resolve the underlying issue of using a relational model for a high-concurrency, key-value workload, where lock contention could still occur at extreme scale.
Using Amazon ElastiCache for Redis is a strong option for caching and session state, but it is less suitable for shopping cart data, where durability is critical. ElastiCache is an in-memory service and does not persist data by default, so a node failure can lose recent writes. While Redis can be configured for persistence, DynamoDB provides stronger, built-in durability by synchronously replicating data across multiple Availability Zones, making it the more reliable choice for data that cannot be lost, such as a customer's shopping cart.
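For context on the persistence caveat above, self-managed Redis durability is typically enabled through append-only-file (AOF) persistence, as in this `redis.conf` sketch (ElastiCache exposes comparable settings through parameter groups rather than this file, and even with AOF the durability guarantees remain weaker than DynamoDB's multi-AZ replication):

```
appendonly yes        # log every write to an append-only file
appendfsync everysec  # fsync once per second; up to 1s of writes can still be lost
```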
Deploying Amazon DynamoDB Accelerator (DAX) in front of the PostgreSQL database is incorrect because DAX is an in-memory cache designed exclusively for Amazon DynamoDB. It is not compatible with other database engines like PostgreSQL. This option demonstrates a misunderstanding of the service's function.
Exam domain: Accelerate Workload Migration and Modernization