A development team scans large log files every day, but their queries have slowed as the data set has grown. They want lower read/write latency for these logs. Which solution is most likely to address their latency concerns?
Adopt ephemeral volumes through a serverless process that scans uploaded data
Move the logs to an archive-level tier for reduced storage expenses
Migrate the logs to block-based volumes provisioned with a dedicated high-throughput disk
Replicate the object-based logs across multiple regions to distribute the load
Block-based volumes backed by high-throughput disks handle data-intensive workloads more efficiently because they serve frequent reads and writes with minimal latency. Replicating object-based storage across regions, migrating the data to an archive tier, and adopting ephemeral volumes for daily scans all fall short of the workload's constant IOPS demands.
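To make the correct option concrete, here is a minimal sketch of provisioning a block volume with dedicated IOPS. It assumes an AWS environment with boto3 configured; the region, size, and IOPS figures are illustrative, not a recommendation for any specific workload:

```python
# Sketch: provision a block-storage volume with dedicated IOPS (AWS EBS via boto3).
# All parameter values below are illustrative assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# io2 is a provisioned-IOPS SSD volume type: IOPS are set independently of
# size and are sustained regardless of load, which is what delivers the
# consistent low-latency reads/writes the log-scanning workload needs.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,              # GiB
    VolumeType="io2",      # block storage with provisioned IOPS
    Iops=10000,            # dedicated IOPS, guaranteed rather than best-effort
    TagSpecifications=[{
        "ResourceType": "volume",
        "Tags": [{"Key": "purpose", "Value": "log-scans"}],
    }],
)
print(volume["VolumeId"])
```

By contrast, object storage (and its cross-region replication), archive tiers, and ephemeral volumes offer no equivalent way to guarantee sustained IOPS, which is why they do not resolve the latency problem.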