A logistics startup is prototyping a fleet of autonomous delivery drones. Each drone must re-plan its route on board whenever a new obstacle or no-fly zone is detected, with a hard latency budget of 100 ms and a memory budget under 2 MB. In practice the drones can lose contact with the cloud for several minutes, so all critical decision-making must run locally on the micro-controller. Which specialized data-science approach is MOST appropriate for this edge-computing scenario?
Single-source Dijkstra shortest-path computed once in the cloud and uploaded to each drone
Greedy nearest-neighbor heuristic executed locally each time a new route is required
Formulating the path as a mixed-integer linear program (MILP) solved in the cloud and sent to the drone
On-device reinforcement learning with tabular Q-learning that updates policies during every flight
A greedy nearest-neighbor heuristic runs in O(n²) time and O(n) memory, so it fits comfortably within the tight CPU and RAM limits of a micro-controller. Because it makes a single local choice at each step, the route can be recomputed quickly whenever a new obstacle appears, even with no cloud connectivity.
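To make the complexity argument concrete, here is a minimal sketch of the idea in Python. The coordinates, waypoint list, and circular no-fly-zone model are illustrative assumptions, not part of the question:

```python
# Illustrative sketch: a nearest-neighbor route planner over 2-D waypoints.
# All names, coordinates, and the circular no-fly-zone model are hypothetical.
import math

def nearest_neighbor_route(start, waypoints, no_fly=None):
    """Greedily visit waypoints, always flying to the closest one next.

    O(n^2) time, O(n) memory: one scan over the remaining waypoints per
    step, storing only the pending list and the route built so far.
    """
    def blocked(p):
        # Skip any waypoint inside the (hypothetical) circular no-fly zone.
        if no_fly is None:
            return False
        (cx, cy), radius = no_fly
        return math.hypot(p[0] - cx, p[1] - cy) <= radius

    pos = start
    pending = [w for w in waypoints if not blocked(w)]
    route = [start]
    while pending:
        nxt = min(pending, key=lambda w: math.dist(pos, w))  # single local choice
        pending.remove(nxt)
        route.append(nxt)
        pos = nxt
    return route

# Re-planning after a newly detected no-fly zone is just one more call
# with updated inputs -- no cloud round trip required:
stops = [(4.0, 1.0), (1.0, 3.0), (5.0, 5.0), (2.0, 0.5)]
print(nearest_neighbor_route((0.0, 0.0), stops, no_fly=((5.0, 5.0), 1.0)))
```

Because the planner keeps no state between calls beyond its inputs, re-planning on every obstacle detection stays within a fixed, predictable time and memory envelope, which is exactly what the 100 ms / 2 MB budget demands.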
Reinforcement-learning approaches (tabular or deep) demand far more computation and memory during both training and inference; a tabular Q-learner, for instance, must store a value for every discretized state-action pair, which quickly exceeds a 2 MB budget, and updating the policy mid-flight adds unpredictable latency. A Dijkstra shortest path pre-computed in the cloud cannot adapt to in-flight changes: once edge weights change the search must be rerun, and a disconnected drone has no way to request it. Mixed-integer linear programming yields high-quality routes, but MILP is NP-hard and typically far too slow for real-time use on an embedded device, even before considering that this option also depends on cloud connectivity. Therefore the greedy nearest-neighbor heuristic is the best match for the edge constraints.