A development team reports that a containerized analytics application is unexpectedly restarting multiple times per day. An investigation reveals the container orchestrator is terminating the container, and logs indicate an out-of-memory (OOM) error. The container is configured with the cluster's default memory limits. Which action is the MOST likely solution to stabilize the application?
Increase the container's memory limit in the cluster configuration.
Switch the container to a new regional node for lower latency.
Grant more privileges to the container for resource handling.
Pull a newer container image from the registry and redeploy.
This scenario describes an out-of-memory (OOM) kill, where the container orchestrator terminates a process that exceeds its allocated memory limit. Because the container is using default limits, they are likely too low for the application's actual needs. Increasing the memory limit in the container's configuration is the direct solution to this sizing issue. Switching regions addresses latency or regional availability, not memory constraints. Granting more privileges is a permissions-based solution and would not resolve a memory allocation problem. While a newer container image might contain code optimizations, the problem as described points directly to a configuration misstep (inadequate default limits), making a configuration change the most likely fix.
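As a minimal sketch of what such a configuration change can look like, the snippet below uses the official `kubernetes` Python client to patch a deployment's memory request and limit. The deployment name `analytics`, the `default` namespace, and the specific memory values are hypothetical placeholders, not details from the scenario.

```python
# Sketch only: assumes access to a Kubernetes cluster via a local kubeconfig
# and the official `kubernetes` Python client. Names and sizes are illustrative.
from kubernetes import client, config

config.load_kube_config()        # authenticate using the local kubeconfig
apps = client.AppsV1Api()

# Strategic-merge patch that raises the container's memory request and limit
# so the scheduler reserves enough memory and the OOM killer stops firing.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {
                        "name": "analytics",  # hypothetical container name
                        "resources": {
                            "requests": {"memory": "1Gi"},
                            "limits": {"memory": "2Gi"},
                        },
                    }
                ]
            }
        }
    }
}

apps.patch_namespaced_deployment(
    name="analytics", namespace="default", body=patch
)
```

In practice the new limit should be based on the application's observed peak memory usage (for example, from container metrics) rather than an arbitrary increase, so the limit is large enough to prevent OOM kills without over-reserving cluster resources.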