A developer reports that a freshly built command-line tool crashes immediately with the message "Segmentation fault (core dumped)", yet no core file appears in the working directory or in /var/lib/systemd/coredump. Before you can analyze the fault with gdb you must ensure a core file is actually written the next time the program crashes. Which action will most directly enable core-dump generation for the process without requiring a rebuild of the binary?
Increase the process stack size to 64 MiB using ulimit -s 65536.
Raise the core-file size limit for the session, for example by running ulimit -c unlimited before starting the application.
Enable kernel crash dumping by starting the kdump service and configuring /etc/kdump.conf.
Recompile the program with the -g compiler flag so that debugging symbols are included.
Linux writes a core file only when the RLIMIT_CORE (core-file size) resource limit is non-zero. Interactive shells on many distributions set this limit to 0, suppressing the dump even though the kernel still delivers SIGSEGV. Running "ulimit -c unlimited" (or another positive size) in the shell that launches the program raises RLIMIT_CORE, allowing the kernel to write the core image. Debug symbols, stack-size changes, and the kdump service do not influence whether a user-space core file is created; they merely affect post-mortem readability, memory layout, or kernel-level crash handling, respectively.
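A minimal session sketch of the fix described above (the binary name `./mytool` is a hypothetical stand-in for the crashing tool):

```shell
# Raise the core-file size limit for this shell session;
# child processes inherit the new RLIMIT_CORE value.
ulimit -c unlimited

# Confirm the limit took effect.
ulimit -c                          # prints: unlimited

# Check where the kernel writes cores: a plain pattern such as
# "core" means the process's working directory, while a leading
# "|" pipes the dump to a handler like systemd-coredump.
cat /proc/sys/kernel/core_pattern

# Re-run the crashing program from this same shell; on the next
# SIGSEGV the kernel can now write the core image.
./mytool                           # hypothetical crashing binary
```

Note that `ulimit -c` changes only the current shell and its children; it does not persist across sessions unless added to a shell profile or set via /etc/security/limits.conf.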
CompTIA Linux+ XK0-006 (V8)
Troubleshooting