A Linux administrator maintains a Bash script that parses a 10 GB web-server log. Profiling with time and strace shows that almost all CPU time is spent inside a while read line; do …; done loop that uses only simple shell built-ins. The administrator consults an AI code assistant for ways to make the script run much faster with minimal changes. Which recommendation would provide the GREATEST reduction in run time?
Use the Bash builtin mapfile -t to load the file into an array before processing it in the script.
Remount the filesystem containing the log with the noatime option before running the script.
Replace the entire while read loop with a single awk command that performs the same parsing in one pass.
Run the script with nice -n -5 to give it higher CPU scheduling priority.
Refactoring the loop into a single awk one-liner yields the largest speed-up. awk is implemented in compiled C and reads the file in a single pass, performing pattern matching and field splitting internally, which avoids the per-line overhead of Bash's read builtin and loop control. Switching to mapfile still requires per-line Bash processing and can exhaust memory by loading a 10 GB file into an array. Changing CPU priority (nice) or disabling atime updates affects scheduling or disk metadata writes, respectively, but does not eliminate the algorithmic bottleneck in the shell loop.
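To make the trade-off concrete, here is a minimal sketch of the refactor. The log format, file name (access.log), and the task of counting HTTP 200 responses are hypothetical stand-ins for the administrator's real parsing logic; the point is that both versions produce the same result, but awk does it in one pass without invoking the read builtin per line.

```shell
# Hypothetical three-field log format: METHOD PATH STATUS.
# A tiny sample stands in for the real 10 GB file.
printf 'GET /a 200\nGET /b 404\nGET /c 200\n' > access.log

# Slow approach: Bash-style while-read loop.
# Each iteration pays the cost of the read builtin plus loop control.
slow_count=0
while read -r method path status; do
    [ "$status" = "200" ] && slow_count=$((slow_count + 1))
done < access.log

# Fast approach: a single awk pass.
# awk splits fields and tests the condition internally in compiled code.
fast_count=$(awk '$3 == 200 { n++ } END { print n }' access.log)

echo "slow=$slow_count fast=$fast_count"

rm -f access.log
```

On a multi-gigabyte log, the awk version typically runs orders of magnitude faster, because the per-line shell overhead dominates the loop's actual work.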
What makes 'awk' faster than a Bash 'while read' loop?
What is the limitation of using 'mapfile -t' with a large file?
Why doesn't changing CPU priority with 'nice' significantly improve performance?
CompTIA Linux+ XK0-006 (V8)
Automation, Orchestration, and Scripting