A factory that sits in a rural area must deploy a vibration-analysis model that can shut down heavy machinery within 10 ms when an anomaly is detected. The only WAN link is a low-bandwidth satellite connection that is often unavailable for several hours. Sending the raw sensor stream to a cloud or off-site data center would exceed both the latency budget and the link's capacity. Which deployment environment best satisfies the project's technical constraints?
Hybrid deployment with real-time inference in the cloud and periodic data replication at the plant
Edge deployment on embedded gateways mounted to each machine
On-premises cluster located at the organization's main data center 500 km away
Containerized microservices running in a public-cloud region with GPU autoscaling
Running the model on an edge device places compute directly on, or very near, the industrial IoT gateways that collect the vibration data. Local inference eliminates the WAN round-trip, so the system can react in well under 10 ms, and it continues to operate when the satellite link is down. Public-cloud or off-site data-center deployments cannot meet the latency target and would fail during connectivity outages, while a hybrid design that keeps inference in the cloud still relies on the same unreliable link. Therefore, the edge deployment is the only option that meets both the latency and connectivity requirements.
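To make the latency argument concrete, here is a minimal sketch of an inference loop running entirely on the gateway. The names score_vibration_window and trigger_machine_shutdown are hypothetical placeholders for the trained model and the local relay/PLC interface; the point is that no network call sits between sensing and shutdown, so the only latency is local compute time.

```python
import time

ANOMALY_THRESHOLD = 0.8   # assumed model output threshold for this sketch
LATENCY_BUDGET_S = 0.010  # 10 ms end-to-end budget from the scenario


def score_vibration_window(window: list[float]) -> float:
    """Placeholder for local model inference (e.g., a model file loaded on the gateway).

    A crude RMS-based score is used here only so the sketch runs without a real model.
    """
    rms = (sum(x * x for x in window) / len(window)) ** 0.5
    return min(rms / 10.0, 1.0)


def trigger_machine_shutdown() -> None:
    """Placeholder for the gateway's local actuator path (relay/PLC); no WAN involved."""
    print("SHUTDOWN signal sent to machine controller")


def on_new_window(window: list[float]) -> None:
    start = time.perf_counter()
    score = score_vibration_window(window)  # inference stays on the gateway
    if score >= ANOMALY_THRESHOLD:
        trigger_machine_shutdown()
    elapsed = time.perf_counter() - start
    # Everything above runs locally; a cloud round-trip over a satellite link
    # (typically hundreds of milliseconds) would exceed this budget on its own.
    assert elapsed < LATENCY_BUDGET_S, f"latency budget exceeded: {elapsed * 1000:.2f} ms"


if __name__ == "__main__":
    on_new_window([10.2, -9.5, 8.7, -9.9, 9.1])  # simulated high-vibration sample
```

In a cloud or hybrid design, score_vibration_window would be replaced by a network request over the satellite link, which both adds round-trip delay and fails outright during outages.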