Build a Linux Software RAID from Scratch
Part Two: Last week we considered Linux software RAID's advantages; this week we show you how to build one.
Last week we covered what Linux software RAID is good for and not good for, and hardware RAID vs. software RAID. In a nutshell it's about getting good performance for just the cost of ordinary inexpensive drives and controllers, and either increasing uptimes or speeding up performance, or a bit of both. It is not a substitute for the usual sensible failsafes like regular backups, power protection, and terrorizing users into following ... I mean "enforcing sane security policies."
Hereinafter "Linux software RAID" will be referred to by various shorter terms, like plain old "RAID." I do not mean for these shortcuts to refer to the entire RAID universe, just Linux software RAID.
Building a brand-new RAID system from scratch is easy -- just use a Linux distribution that lets you set it up during installation, like Red Hat, Fedora, SuSE or Mandrake. Creating a RAID array wipes out any existing data, so you're starting over in any case.
But what if you just want to add RAID to an existing system without disturbing your existing data? It's a less efficient use of hardware, but it's easy to implement. Assuming your existing system is running a single hard drive, you'll add 2-4 drives to form the RAID array. Your original hard drive will not be part of the array; it will serve as the boot disk and hold system files. Use the RAID array for data files, such as /home, /var, mail spools, Web pages or whatever your heart desires.
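To make the division of labor concrete: if the finished array ends up as /dev/md0 and you dedicate it to /home, the eventual /etc/fstab entries might look something like this (the device names and mount point here are illustrative, not prescriptive):

```
# /etc/fstab -- the original drive still boots and holds system files
/dev/hda1   /       ext3   defaults   0 1
# the new RAID array holds data only
/dev/md0    /home   ext3   defaults   0 2
```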
Confirm Kernel Support
The 2.4 and 2.6 kernels both support software RAID, provided it has been compiled in by the distribution vendor. These days all major Linux distributions come with RAID support. Look in the /boot/config file matching your kernel version to find out:
$ grep -i -A 10 "Multi-device support" /boot/config-2.4.29
# Multi-device support (RAID and LVM)
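On a stock distribution 2.4 kernel, the lines that follow that header typically look something like the below. The option names are real kernel config symbols; the exact mix of "y" and "m" values varies by vendor, so treat these values as an example rather than what you will necessarily see:

```
CONFIG_MD=y
CONFIG_BLK_DEV_MD=m
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID5=m
```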
This shows the kernel does indeed support RAID. (If it doesn't, see Resources.) The drivers are configured as loadable modules ("m"), which means the RAID disks are not bootable -- but we don't care, because the original hard drive is the boot disk.
RAID 0 For Speed
As always, make a good backup first. (Mondo Rescue is my current fave backup/system restore utility.) Then install two new drives of approximately the same capacity, and remember never to share IDE buses -- put only one IDE hard drive on each channel.
Next, drop into single-user mode to minimize the chances of messing up your existing data and filesystem:
# init 1
Linux RAID recognizes all block devices, including partitions, so you may partition your new drives however you like. Don't get fancy for no good reason. For this article each new drive has two partitions: one 19.5 gigabyte partition for data and one of 512 megabytes for swap. Use fdisk, which should be present on all Linuxes. Be sure to mark the new data partition types as RAID partitions, or "fd," which is Linux RAID auto-detect. This is crucial -- there will be no RAID array without this. Do not mark the swap partitions as fd.
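A typical fdisk session for the first new drive runs something like this. The device name /dev/hdc is an assumption -- substitute whatever your new drives actually appear as, and repeat the whole sequence on the second drive:

```
# fdisk /dev/hdc
Command (m for help): n       <- create /dev/hdc1, the 19.5GB data partition
Command (m for help): n       <- create /dev/hdc2, the 512MB swap partition
Command (m for help): t       <- change a partition's type
Partition number (1-4): 1
Hex code (type L to list codes): fd   <- Linux RAID autodetect: data partition only
Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): 82   <- Linux swap: do NOT use fd here
Command (m for help): w       <- write the new table and exit
```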
Chances are your system loads the "md" module automatically; check with lsmod:
$ lsmod | grep md
md 44928 0 (autoclean) (unused)
If it doesn't, run
# modprobe md
Then add "md" (without the quotes) to your /etc/modules.conf files to load it at boot.