Your DevOps team updated the base container image in a Kubernetes deployment to a newer distribution release. After the rollout, the application pod repeatedly crashes with the following error:
/app/server: error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory
Which of the following is the MOST likely root cause of this deployment failure?
The application's runtime dependencies are incompatible with the library versions present in the upgraded container image.
A memory-management bug in the application is causing random segmentation faults at startup.
The Kubernetes API server is throttling requests because the deployment exceeds the platform's rate limits.
Outbound network traffic is blocked by the cluster's egress firewall, preventing the pod from downloading external modules at runtime.
The newer base image ships OpenSSL 3.0, but the application binary was built against OpenSSL 1.1. When the container starts, the dynamic loader cannot find the required library (libssl.so.1.1), so the process exits immediately. This is a classic version incompatibility introduced by upgrading an underlying component without rebuilding or re-validating the software that depends on it. Reverting to a compatible base image or rebuilding the application against the new libraries resolves the issue.
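As a concrete sketch of how such a mismatch is typically diagnosed and remediated (the image names, deployment name, and build steps below are hypothetical placeholders, not taken from the scenario):

# Inspect the binary's shared-library dependencies inside the new image;
# any "not found" entry identifies the missing library (assumes ldd exists in the image).
docker run --rm --entrypoint ldd registry.example.com/app:new-base /app/server
#   libssl.so.1.1 => not found

# Short-term mitigation: roll the Deployment back to the last image that shipped OpenSSL 1.1.
kubectl rollout undo deployment/app-server

# Long-term fix: rebuild the application against the upgraded base so it links to OpenSSL 3.0.
# Example multi-stage Dockerfile in which the build and runtime stages use the same release:
FROM debian:bookworm AS build
RUN apt-get update && apt-get install -y build-essential libssl-dev
COPY . /src
WORKDIR /src
RUN make server

FROM debian:bookworm
COPY --from=build /src/server /app/server
ENTRYPOINT ["/app/server"]

Keeping the build and runtime stages on the same distribution release ensures the binary is linked against the libssl version that is actually present when the pod starts.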