An application that handles large data sets has shown unexpected latency for users in remote locations. Network logs show heavier traffic headed to those sites, but no packet loss is detected. The load balancer reports that all nodes are healthy, and local users experience normal performance. Which approach best improves speed for the remote locations?
Add a caching mechanism near the remote offices to store frequently accessed data
Modify the load balancer to direct requests to a different name server
Increase the memory on the file server hosting the large volumes
Install additional CPUs on the core application servers to handle requests
Installing edge caching offloads repeated data transfers from the origin servers by keeping copies of frequently accessed files closer to the remote offices. This directly targets the long round trips that cause the slow performance. Adding CPUs on the application servers does not reduce data-transfer overhead. Redirecting requests to a different name server does not address the volume of data being transmitted. Increasing memory on the same file server does not shorten the transport delays experienced by distant users.
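To make the mechanism concrete, below is a minimal read-through cache sketch in Python. It is an illustration only: the EdgeCache class and the fetch_from_origin helper are hypothetical names, not part of any specific product. The point is that a cache hit is served locally, so the WAN round trip to the origin server is skipped entirely.

```python
import time
from typing import Callable

class EdgeCache:
    """Read-through cache deployed near a remote office.

    A hit is served from local storage with no WAN traffic; a miss
    (or an expired entry) costs one round trip to the origin, after
    which later requests for the same object are served locally.
    """

    def __init__(self, fetch_from_origin: Callable[[str], bytes],
                 ttl_seconds: float = 300.0):
        self._fetch = fetch_from_origin  # slow call across the WAN
        self._ttl = ttl_seconds
        self._store: dict[str, tuple[float, bytes]] = {}

    def get(self, key: str) -> bytes:
        entry = self._store.get(key)
        if entry is not None:
            stored_at, data = entry
            if time.monotonic() - stored_at < self._ttl:
                return data              # cache hit: no WAN round trip
        data = self._fetch(key)          # cache miss: one WAN round trip
        self._store[key] = (time.monotonic(), data)
        return data


# Hypothetical stand-in for fetching a file from the distant origin server.
def fetch_from_origin(key: str) -> bytes:
    time.sleep(0.2)                      # simulated WAN latency
    return f"contents of {key}".encode()

cache = EdgeCache(fetch_from_origin, ttl_seconds=60)
cache.get("reports/q3.csv")              # slow: goes to the origin
cache.get("reports/q3.csv")              # fast: served from the edge cache
```

Note that adding CPUs or memory at the origin would speed up neither call above: the delay lives in the network path, which only the local copy avoids.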