Build a Linux Software RAID from Scratch

By Carla Schroder | May 17, 2005
http://www.enterprisenetworkingplanet.com/netsysm/article.php/3505761/Build-a-Linux-Software-RAID-from-Scratch.htm

Last week we covered what Linux software RAID is good for and not good for, and hardware RAID vs. software RAID. In a nutshell, it's about increasing uptime, improving performance, or a bit of both, for just the cost of ordinary inexpensive drives and controllers. It is not a substitute for the usual sensible failsafes like regular backups, power protection, and terrorizing users into following ... I mean "enforcing sane security policies."

Hereinafter "Linux software RAID" will be referred to by various shorter terms, like plain old "RAID." I do not mean for these shortcuts to refer to the entire RAID universe, just Linux software RAID.

Building a brand-new RAID system from scratch is easy -- just use a Linux distribution that lets you set it up during installation, like Red Hat, Fedora, SuSE or Mandrake. Creating a RAID array wipes out any existing data, so you're starting over in any case.

But what if you just want to add RAID to an existing system without disturbing your existing data? It's a less efficient use of hardware, but it's easy to implement. Assuming your existing system is running a single hard drive, you'll add 2-4 drives to form the RAID array. Your original hard drive will not be part of the array; it will serve as the boot disk and hold system files. Use the RAID array for data files, such as /home, /var, mail spools, Web pages or whatever your heart desires.

Confirm Kernel Support
Both the 2.4 and 2.6 kernels support software RAID, provided it has been compiled in by the distribution vendor. These days all major Linux distributions come with RAID support. Look in the /boot/config-* file for your kernel version to find out:

$ grep -i -A 10 "Multi-device support" /boot/config-2.4.29
# Multi-device support (RAID and LVM)
#
CONFIG_MD=y
CONFIG_BLK_DEV_MD=m
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID5=m
[...]

This shows the kernel does indeed support RAID. (If it doesn't, see Resources.) The drivers are configured as loadable modules (m), which means the RAID disks are not bootable, but we don't care because the original hard drive is the boot disk.

RAID 0 For Speed
As always, make a good backup first. (Mondo Rescue is my current fave backup/system restore utility.) Then install two new drives of approximately the same capacity, and remember to never, ever share IDE buses: put only one IDE hard drive per channel.

Next, drop into single-user mode to minimize the chances of messing up your existing data and filesystem:

# init 1

Linux RAID recognizes all block devices, including partitions, so you may partition your new drives however you like. Don't get fancy for no good reason. For this article each new drive has two partitions: one 19.5 gigabyte partition for data and one of 512 megabytes for swap. Use fdisk, which should be present on all Linuxes. Be sure to set the type of each new data partition to "fd," which is Linux RAID auto-detect. This is crucial -- there will be no RAID array without it. Do not mark the swap partitions as fd.
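Here is a minimal sketch of an fdisk session for the first new drive, assuming it appears as /dev/hdb (exact prompts and sizes will vary; repeat for the second drive):

# fdisk /dev/hdb
Command (m for help): n          (create partition 1, primary, ~19.5 GB, for data)
Command (m for help): n          (create partition 2, primary, ~512 MB, for swap)
Command (m for help): t          (change a partition's type)
Partition number (1-4): 1
Hex code (type L to list codes): fd      (Linux raid autodetect)
Command (m for help): w          (write the partition table and exit)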

Chances are your system loads the "md" module automatically; check by running

$ lsmod
[...]
md       44928   0    (autoclean) (unused)

If it doesn't, run

# modprobe md

Then add "md" (without the quotes) to your /etc/modules.conf files to load it at boot.
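For example, assuming your distribution reads /etc/modules.conf at boot, a one-liner appends it:

# echo md >> /etc/modules.conf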

Creating The Array and Filesystem
The next steps are starting up the array and creating filesystems. First create an /etc/raidtab file. This is your master RAID configuration file. Let's say our new data partitions are /dev/hdb1 and /dev/hdc1:

# /etc/raidtab
raiddev /dev/md0
raid-level              0
nr-raid-disks           2
persistent-superblock   1
chunk-size              32
device                  /dev/hdb1
raid-disk               0
device                  /dev/hdc1
raid-disk               1

Now initialize the array:

# mkraid /dev/md0

Monitor progress:

# watch cat /proc/mdstat

Stop the watch command with CTRL+C. When the array is built you may create whatever filesystem you like on the data partitions: Ext2/3, JFS, ReiserFS, XFS, VFAT -- anything that Linux supports. Use the utilities specific to the filesystems to create them. For example, this creates Ext3:

# mke2fs -j /dev/md0

This creates a Reiser filesystem:

# mkreiserfs /dev/md0

Be sure to follow the documentation for your chosen filesystem. Then create and initialize the swap partitions:

# mkswap /dev/hdb2
# mkswap /dev/hdc2
# swapon /dev/hdb2
# swapon /dev/hdc2

Start It Up
Now it's time to fire up your shiny new RAID 0 array:

# raidstart /dev/md0

Create mountpoints and mount the array:

# mkdir /mnt/raid0
# mount /dev/md0 /mnt/raid0
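A quick df confirms the mount and shows the two striped data partitions presented as one big filesystem (the sizes here are illustrative):

# df -h /mnt/raid0
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0               39G   33M   39G   1% /mnt/raid0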

Now you can read and write to /mnt/raid0 just like any other directory.

Starting RAID at Boot
You probably want your nice new RAID 0 array to be automatically mounted at boot, so you must add a line to your /etc/fstab file:

/dev/md0   /mnt/raid0  reiserfs  defaults  0 2

Of course you may specify whatever mount options you like. Finally, there is one more kewl performance-enhancing trick you can do with the swap partitions. This is not a RAID function but a kernel one: giving both swap partitions the same priority tells the kernel to use them in parallel, striping swap much like RAID 0. Set it up in /etc/fstab:

/dev/hdb2   none   swap   sw,pri=0     0 0
/dev/hdc2   none   swap   sw,pri=0     0 0
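After the next boot (when swapon -a picks up the fstab priorities), swapon -s should show both partitions active at the same priority. The sizes below are illustrative:

# swapon -s
Filename        Type        Size    Used    Priority
/dev/hdb2       partition   524280  0       0
/dev/hdc2       partition   524280  0       0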

This is a good time to reboot and verify that everything works. Running cat /proc/mdstat shows the status of the array.
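Healthy output for our two-disk stripe looks something like this (exact details vary by kernel version, and the block count here is illustrative):

# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 hdc1[1] hdb1[0]
      40959744 blocks 32k chunks

unused devices: <none>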

RAID 1 and RAID 5
Now that you know the basics of setting up RAID, it's simple to create other RAID levels in /etc/raidtab. RAID 1 is just the same as our RAID 0 example, except for this line:

raid-level        1

Adding a three-disk RAID 5 array requires but three changes to /etc/raidtab:

raid-level              5
nr-raid-disks           3
parity-algorithm        left-symmetric
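Put together, a complete /etc/raidtab for a three-disk RAID 5 array might look like this sketch, assuming the third data partition is /dev/hdd1:

# /etc/raidtab
raiddev /dev/md0
raid-level              5
nr-raid-disks           3
persistent-superblock   1
parity-algorithm        left-symmetric
chunk-size              32
device                  /dev/hdb1
raid-disk               0
device                  /dev/hdc1
raid-disk               1
device                  /dev/hdd1
raid-disk               2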

Standby Spare Disk
You may add additional fault-tolerance by adding a standby spare disk. This only works in RAID levels with fault-tolerance, like RAID 1 and RAID 5. Don't bother with RAID 0, because if one disk fails the whole array is broken.

Simply install the extra disk, then add these lines to /etc/raidtab:

nr-spare-disks          1
device                  /dev/hdd1
spare-disk              0
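For example, a complete /etc/raidtab for a two-disk RAID 1 mirror with /dev/hdd1 standing by as the spare might read (a sketch; adjust the devices to match your hardware):

# /etc/raidtab
raiddev /dev/md0
raid-level              1
nr-raid-disks           2
nr-spare-disks          1
persistent-superblock   1
chunk-size              32
device                  /dev/hdb1
raid-disk               0
device                  /dev/hdc1
raid-disk               1
device                  /dev/hdd1
spare-disk              0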

If all goes well, the standby disk will automatically take over when one of the other disks fails.

We covered a lot of ground in a short space here. Be sure to read The Software-RAID HOWTO for explanations of the options in /etc/raidtab. It also covers the importance of chunk sizes for performance, chunk-size gotchas with the Ext2/3 filesystem, and tools for monitoring and querying the array, simulating failures, and other useful tests and refinements.

Resources

The Software-RAID HOWTO, available from the Linux Documentation Project