A store receives thousands of small updates each hour for credit card purchases. The team wants to keep data accurate after every new purchase. Which approach addresses these needs?
A data lake that stores unstructured sale logs from multiple sources
A transaction-based design that uses row-level operations for each purchase record
A streaming engine that writes aggregated metrics at the end of the day
A star schema that aggregates purchases across a data warehouse
A transaction-based design suits frequent, small data changes: each purchase triggers a row-level insert or update, and transactional guarantees keep the data accurate after every change. A star schema in a data warehouse is built for analytics with infrequent bulk loads, a streaming engine that writes only end-of-day aggregates cannot reflect each purchase as it happens, and a data lake of unstructured sale logs provides no row-level update path at all.
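The idea can be sketched with a minimal example: each purchase is applied as its own atomic transaction using row-level operations, so the data is consistent after every update. This is an illustrative sketch only; the table and column names (`purchases`, `card_totals`, `card_last4`) are assumptions, not part of the question.

```python
import sqlite3

# Sketch of a transaction-based (OLTP) design: each purchase is one
# atomic transaction made of row-level operations.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE purchases (id INTEGER PRIMARY KEY, card_last4 TEXT, amount REAL)"
)
conn.execute(
    "CREATE TABLE card_totals (card_last4 TEXT PRIMARY KEY, total REAL)"
)

def record_purchase(conn, card_last4, amount):
    """Insert one purchase row and update its running total atomically."""
    with conn:  # commits on success, rolls back on any error
        conn.execute(
            "INSERT INTO purchases (card_last4, amount) VALUES (?, ?)",
            (card_last4, amount),
        )
        # Row-level upsert: the total is accurate immediately after each purchase.
        conn.execute(
            "INSERT INTO card_totals (card_last4, total) VALUES (?, ?) "
            "ON CONFLICT(card_last4) DO UPDATE SET total = total + excluded.total",
            (card_last4, amount),
        )

record_purchase(conn, "1234", 19.99)
record_purchase(conn, "1234", 5.00)
total = conn.execute(
    "SELECT total FROM card_totals WHERE card_last4 = '1234'"
).fetchone()[0]
print(round(total, 2))  # → 24.99
```

Because each call to `record_purchase` commits (or rolls back) as a unit, a failure mid-update never leaves a purchase recorded without its total adjusted, which is exactly the accuracy property the question asks about.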