After rolling out a new IoT analytics platform, a company suddenly receives terabytes of sensor data from hundreds of geographically dispersed endpoints. To raise overall transfer capacity while keeping latency and processing overhead minimal, which approach should the company implement?
Tune NAT table sizes on the border firewall
Deploy a content delivery network (CDN) with edge locations near the endpoints
Increase RAM allocation on the analytics virtual machine
Install a central packet-aggregation gateway before data processing
Deploying a content delivery network with edge locations positions cache and compute resources close to each endpoint, increasing aggregate throughput and reducing round-trip latency without adding significant central processing load. A central packet-aggregation gateway still funnels all traffic through a single point, which can become a bottleneck. Adding RAM to a single virtual machine does nothing for network-wide transfer capacity. Tuning NAT table sizes on the firewall primarily affects address-mapping capacity and adds per-packet processing overhead, offering little benefit for large-scale data bursts.
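The intuition can be sketched with a toy model. All numbers below are hypothetical assumptions for illustration: a distant central site with a high round-trip time versus a nearby edge location, with the same payload and link speed for each endpoint. The model is simplified (it ignores TCP slow start, which penalizes high-RTT paths even further).

```python
# Toy comparison (hypothetical numbers): sending a sensor batch to a
# distant central site vs. a nearby CDN edge location.

CENTRAL_RTT_MS = 120   # assumed round trip to a distant central site
EDGE_RTT_MS = 15       # assumed round trip to a nearby edge node
PAYLOAD_MB = 8         # assumed sensor batch size per transfer
LINK_MBPS = 100        # assumed per-endpoint link capacity

def transfer_time_ms(rtt_ms: float) -> float:
    """Serialization time for the payload plus one round trip."""
    serialization_ms = PAYLOAD_MB * 8 / LINK_MBPS * 1000
    return serialization_ms + rtt_ms

print(f"central: {transfer_time_ms(CENTRAL_RTT_MS):.0f} ms")
print(f"edge:    {transfer_time_ms(EDGE_RTT_MS):.0f} ms")
```

Even in this simplified sketch, the edge path saves one long round trip per transfer; across hundreds of endpoints sending continuously, those savings compound, while the central options (more RAM, an aggregation gateway) leave the long path in place.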