Linux RAID Smackdown: Crush RAID 5 with RAID 10

I was already mourning the destruction of my Saturday, thanks to some blown deadlines, and was resigned to spending at least part of the day working. Then I made a fatal error: I read my email. There were two messages from readers that said, in essence, another article about RAID 5 was about as interesting as yet another fawning review of Ubuntu Retching Rabbit or Pooping Penguin or whatever the newest coolest release is, and RAID 5 has some serious flaws anyway, and if I really wanted to be hip and helpful I would write about RAID 10.

So I did some research, and lo, RAID 10 did grab my attention, and I toiled late into the night trying to develop a fairly foolproof howto for setting up a RAID 10 + LVM array on a new clean Linux installation. Most Linux installers support RAID 0, 1, and 5, but not 10, drat them. Figuring this out and testing different methods ate my weekend all to heck, so thanks a lot, readers! The howto has some rough edges, and I have only tried it with the Kubuntu Alternate Install CD, which uses Debian’s text installer.

RAID 10 support is still marked as “experimental” in the kernel. One of my gurus has been using it on some large databases and has no complaints, and I haven’t found anyone (yet) with anything bad to say about it. Still, you know the drill: backups, appropriate cautions, etc.

RAID 10 is a worthy RAID level with many advantages. RAID 10 is shorthand for RAID 1+0, a striped array of mirrors. Linux RAID 10 needs a minimum of only two disks, and you don’t have to use pairs, but can have odd numbers (Haha! Unlike many commercial implementations, which are less flexible and more annoying). You can read all about it in Resources, so I’ll sum up the basic differences:

  • RAID 10 provides superior data security and can survive multiple disk failures, as long as the failed disks don’t hold copies of the same data
  • RAID 10 is fast for both reads and writes, because there is no parity to calculate
  • RAID 10 is considerably faster during recovery: RAID 5 performance during a rebuild after replacing a failed disk bogs down as much as 80 percent, and the rebuild can take hours, while RAID 10 recovery is simple copying
  • RAID 5 is susceptible to perpetuating parity errors and other data errors

The main disadvantage is cost, because 50 percent of your raw storage goes to duplication. RAID 5’s parity overhead peaks at one-third of capacity in a three-disk array, and shrinks proportionately as you add disks. But given RAID 5’s slow write performance, and if the fine BAARF folks are correct that RAID 5 is not reliable enough, the cost of the extra disks doesn’t seem like the most important factor. So we’re going to build us a RAID 10 array. You can use two disks for practice, and you should practice, though on production systems you need four to get any real benefit.
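
To put the cost difference in numbers: four 500GB disks yield roughly 1TB of usable space under RAID 10, versus 1.5TB under RAID 5. That missing 500GB is the price of the extra redundancy.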

But What About RAID 5?

If you have your heart set on RAID 5, please visit Resources for detailed instructions. The easiest and most trouble-free way is to start from a fresh installation. The installers of most Linux distributions support RAID 0, 1, and 5, so it’s easy to set these up. The key step is to first create a small RAID 1 array to hold the root filesystem, because RAID 1 is the only RAID level that GRUB Legacy (version 0.9x) and LILO can boot from. Then create a separate RAID 5 + LVM partition for your data. You should also have a swap partition on each disk that is not part of RAID or LVM. When that’s all done you finish the installation in the usual way, and reboot to an ordinary-looking system.

Creating RAID 10

No Linux installer that I know of supports RAID 10, so we have to jump through some extra hoops to set it up in a fresh installation. This is my favorite layout for RAID systems:

  • /dev/md0 is a RAID 1 array containing the root filesystem.
  • /dev/md1 is a RAID 10 array containing a single LVM group divided into logical volumes for /home, /var, and /tmp, and anything else I feel like stuffing in there.
  • Each disk has its own identical swap partition that is not part of RAID or LVM, just plain old ordinary swap.

One way is to use your Linux installer to create the RAID 1 array and the swap partitions, then boot into the new filesystem and create the RAID 10 array. This works, but then you have to move /home, /var, /tmp, and whatever else you want there, which means copying files and editing /etc/fstab. I get tired thinking about it.

Another way is to prepare your arrays and logical volumes in advance and then install your new system over them, and that is what we are going to do. You need a bootable live Linux that includes mdadm, LVM2, and GParted, unless you’re a crusty old command-line commando who doesn’t need any sissy GUIs and is happy with fdisk. Two that I know have all of these are Knoppix and SystemRescueCD; I used SystemRescueCD.

Step one is to partition all of your drives identically. The partition sizes in my example system are small for faster testing; on a production system the 2nd primary partition would be as large as possible:

  • 1st primary partition, 5GB
  • 2nd primary partition, 7GB
  • swap partition, 1GB

The first partition on each drive must be marked as bootable, and the first two partitions must be marked as “fd Linux raid auto” in fdisk. In GParted, use Partition -> Manage Flags.
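
If you’d rather do this from the command line, here is a rough sketch using fdisk and sfdisk; the drive names are only examples, so substitute your own:

# fdisk /dev/hda
# sfdisk -d /dev/hda | sfdisk /dev/sda

Inside fdisk, create the three partitions, use “a” to toggle the bootable flag on partition 1, and use “t” to set the partition types: fd (Linux raid autodetect) on the first two and 82 (Linux swap) on the third. The sfdisk line then copies the finished partition table from the first drive to the second, so the drives end up partitioned identically.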

Now you can create your RAID arrays with the mdadm command. This command creates the RAID1 array for the root filesystem:


# mdadm -v --create /dev/md0 --level=raid1 --raid-devices=2 /dev/hda1 /dev/sda1
mdadm: layout defaults to n1
mdadm: chunk size defaults to 64K
mdadm: size set to 3076352K
mdadm: array /dev/md0 started.

This will take some time; cat /proc/mdstat shows the progress:


Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [multipath] [raid10]
md0 : active raid10 sda1[1] hda1[0]
3076352 blocks 2 near-copies [2/2] [UU]
[====>................] resync = 21.8% (673152/3076352) finish=3.2min speed=12471K/sec
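
You don’t have to keep re-running that by hand; something like watch will poll it for you, and mdadm’s --wait option simply blocks until the resync is finished:

# watch -n 5 cat /proc/mdstat
# mdadm --wait /dev/md0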

This command creates the RAID 10 array:

# mdadm -v --create /dev/md1 --level=raid10 --raid-devices=2 /dev/hda2 /dev/sda2

Naturally you want to be very careful with your drive names, and give mdadm time to finish. It will tell you when it’s done:


RAID10 conf printout:
--- wd:2 rd:2
disk 0, wo:0, o:1, dev:hda2
disk 1, wo:0, o:1, dev:sda2

mdadm --detail /dev/md0 displays detailed information on your arrays.
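
You can also use mdadm --detail --scan to capture the array definitions in mdadm’s config file, which is handy if you ever need to reassemble the arrays by hand from the rescue CD; the installer writes its own configuration for the installed system. A quick sketch, assuming the config file is /etc/mdadm.conf (Debian-flavored systems use /etc/mdadm/mdadm.conf):

# mdadm --detail --scan >> /etc/mdadm.conf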

Create LVM Group and Volumes

Now we’ll put an LVM volume group and logical volumes on /dev/md1. I use vg- for volume group names and lv- for the logical volumes in the volume groups. Using descriptive names, like lv-home, will save your sanity later when you’re creating filesystems and mountpoints. The -L option specifies the size of the volume:


# pvcreate /dev/md1
# vgcreate vg-server1 /dev/md1
# lvcreate -L4g -nlv-home vg-server1
# lvcreate -L2g -nlv-var vg-server1
# lvcreate -L1g -nlv-tmp vg-server1

You’ll get confirmations for every command, and you can use vgdisplay and lvdisplay to see the fruits of your labors. Use vgdisplay to see how much space is left.
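
If the verbose vgdisplay and lvdisplay output is more than you want, the terse pvs, vgs, and lvs commands print one-line summaries of the physical volumes, volume groups, and logical volumes:

# pvs
# vgs
# lvs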

Installing Kubuntu

Out comes SystemRescueCD, in goes Kubuntu. When you get to the partitioning menu select “Manual”. It will not see your RAID or LVM volumes, but only the underlying partitions. But fear not, for they are there. Follow these steps:

  • Select “Configure Software RAID”.
  • The next screen says it needs to format the swap partitions, and then configure RAID. Say Yes.
  • The next screen is the Multidisk (MD) and software RAID configuration menu. Select “Finish.”
  • Now you’re back at the main partitioning screen, with both of your RAID arrays visible. They are both labeled as RAID 1, but don’t worry, the second one is really RAID 10. Now select “Configure the Logical Volume Manager”.
  • The next screen tells you that the partition table for RAID 1 device #1 has changed, write the changes to disks and configure LVM? Say Yes.
  • The next screen warns you that the kernel cannot read the partition table on /dev/md1. This is not a problem—select “Continue”.
  • The next screen informs you that “1 existing volume groups have been found. Activate existing volume groups?” Yes indeedy!
  • The next screen gives you a summary of your LVM setup, and there are options for viewing configuration details and your logical volumes. Select “Finish” when you’re done looking.
  • You’ll be returned to the main partitioning screen, which now shows all of your RAID arrays and LVM volumes. Select them one at a time, and put filesystems and mountpoints on them. The partitioning/formatting screen defaults to “Use as: do not use”. Select this, select the filesystem you want to use, say yes to formatting it, and select the mountpoint.
  • Once you’re finished with formatting and mountpoints, the installation continues in the normal fashion.
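
After the first boot into your new system, it’s worth a minute to confirm that everything came up as expected; something like this shows the arrays, the logical volumes, the mounted filesystems, and swap:

# cat /proc/mdstat
# lvs
# df -h
# swapon -s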

Next week we’ll go into detail on monitoring and managing RAID and LVM. If you need a do-over, see the LVM HOWTO to learn how to delete LVM volumes, and use mdadm --zero-superblock /dev/[partition name] to erase the RAID superblock on each partition.
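
Assuming my example names, a rough teardown sequence looks like this; triple-check the device names first, because every one of these commands destroys data:

# lvremove vg-server1
# vgremove vg-server1
# pvremove /dev/md1
# mdadm --stop /dev/md1
# mdadm --zero-superblock /dev/hda2 /dev/sda2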

Resources
