A systems administrator is deploying a new six-disk RAID 5 set for a transactional database server. The design calls for the highest possible read/write performance, minimal host-CPU overhead, hot-swap capability, and the ability to monitor or rebuild the array even when the operating system cannot boot. Which of the following RAID implementations BEST meets all of these requirements?
Expose the six disks to the hypervisor and let each VM build its own software RAID 5 volume.
Enable the server's UEFI firmware (chipset) RAID feature and create a RAID 5 virtual disk there.
After the OS is installed, build a parity pool with Windows Storage Spaces.
Install a dedicated PCIe RAID controller with battery-backed write cache and configure the RAID 5 set from its pre-boot utility.
A dedicated hardware RAID adapter has its own processor and battery-backed (or flash-backed) write cache, so parity calculations and caching are performed on the card instead of the host CPU; because the cache survives a power loss, the controller can also safely acknowledge writes as soon as they are cached, which suits a transactional database workload. The controller firmware includes a pre-boot configuration utility, so the array can be built, monitored, or rebuilt even when the operating system cannot boot, and disks connected to the card remain hot-swappable.
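To make the offload concrete, here is a toy sketch (plain Python, not vendor firmware) of the arithmetic a RAID 5 engine performs on every stripe: the parity block is the bytewise XOR of the data blocks, and a small write updates it with two extra XORs, the read-modify-write penalty that the card's processor and cache absorb. The block size and byte values are invented for illustration.

```python
def xor_blocks(*blocks: bytes) -> bytes:
    """Bytewise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Six-disk RAID 5: five data blocks plus one parity block per stripe.
stripe = [bytes([d]) * 4096 for d in (0x11, 0x22, 0x33, 0x44, 0x55)]
parity = xor_blocks(*stripe)

# Small-write update: new_parity = old_parity XOR old_data XOR new_data.
new_block = bytes([0xAA]) * 4096
parity = xor_blocks(parity, stripe[2], new_block)
stripe[2] = new_block
assert parity == xor_blocks(*stripe)  # parity remains consistent
```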
In contrast, OS-level Storage Spaces and Linux mdadm software RAID push parity calculations onto the host CPU and can be managed only after the OS loads. Chipset-based firmware ("fake") RAID presents a pre-boot setup screen, but the parity work is still done by an operating-system driver, so it consumes host resources and cannot be fully managed until that driver stack initializes. Guest-level RAID inside virtual machines offers no pre-boot management and no protection for the host volume itself. Each alternative therefore misses at least one of the stated requirements.
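The same toy model shows why a degraded-mode read or rebuild is expensive for software RAID: every missing block must be recomputed as the XOR of the surviving blocks and the parity block, a loop that runs on the host CPU under mdadm or Storage Spaces but on the card's own processor with a dedicated adapter. Again, the six-disk layout and values are illustrative assumptions.

```python
def xor_blocks(*blocks: bytes) -> bytes:
    """Bytewise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

stripe = [bytes([d]) * 4096 for d in (0x11, 0x22, 0x33, 0x44, 0x55)]
parity = xor_blocks(*stripe)

# Simulate losing one member: its block is the XOR of the survivors
# and the parity block, since XOR is its own inverse.
failed = 3
survivors = stripe[:failed] + stripe[failed + 1:]
recovered = xor_blocks(*survivors, parity)
assert recovered == stripe[failed]
```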