Your organization's file server uses a four-disk hardware RAID 5 array. Recently the array's management utility reported S.M.A.R.T. predictive failure on Disk 2, and users complain of noticeably slower read/write speeds. The volume is still mounted but marked as "Degraded" by the controller. Which action should you perform first to restore full performance and redundancy to the array?
Convert the array to RAID 0 to eliminate parity overhead
Hot-swap Disk 2 with a new identical-capacity drive and allow the controller to rebuild the array
Run chkdsk /f on the volume from the operating system to repair file-system errors
Disable S.M.A.R.T. monitoring in the RAID BIOS to clear the warning
A predictive S.M.A.R.T. warning indicates that Disk 2 is likely to fail soon and is already causing the RAID 5 array to run in a degraded state, which slows reads and writes (missing data must be reconstructed from parity on the fly) and removes fault tolerance. The correct remedy is to replace the failing drive with one of identical capacity (or larger) and let the RAID controller rebuild the array. Converting to RAID 0 would remove parity and risk total data loss. Running chkdsk repairs file-system issues, not hardware faults, and will not fix a degraded RAID set. Disabling S.M.A.R.T. monitoring only hides the warning and leaves the array one disk away from data loss.
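To make the parity idea concrete, here is a minimal, purely illustrative Python sketch (not how a hardware controller is actually implemented) of the XOR parity that RAID 5 relies on: the contents of any single failed disk can be recomputed from the surviving disks, which is why the volume stays online while degraded and how a rebuild repopulates the replacement drive.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return bytes(reduce(lambda a, b: a ^ b, byte_tuple) for byte_tuple in zip(*blocks))

# One stripe across a four-disk RAID 5: three data blocks plus one parity block.
d0 = b"USERDATA"
d1 = b"MOREDATA"
d2 = b"LASTDATA"
parity = xor_blocks([d0, d1, d2])          # written to the fourth disk

# Simulate the disk holding d1 failing: rebuild its block from the survivors.
rebuilt_d1 = xor_blocks([d0, d2, parity])  # what the controller writes to the new drive
assert rebuilt_d1 == d1
print("Rebuilt block matches original:", rebuilt_d1 == d1)
```

During a real rebuild the controller performs this reconstruction for every stripe on the array, which is also why performance stays reduced until the rebuild completes.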