Manage a Linux RAID 10 Storage Server - Page 2

By Carla Schroder | Posted Mar 3, 2008

Querying RAID

View the status of all RAID arrays on the system:

$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid10 hda2[0] sda2[1]
  6201024 blocks 2 near-copies [2/2] [UU]

md0 : active raid1 hda1[0] sda1[1]
  3076352 blocks [2/2] [UU]

The "personalities" line tells you what RAID levels the kernel supports. In this example you see two separate arrays: md1 and md0, that are both active, their names and BIOS order, and the size and RAID type of each one. 2/2 means two of two devices are in use, and UU means two up devices.

You can get detailed information on individual arrays:

# mdadm --detail /dev/md0

Is a particular partition part of a RAID array? This command displays the contents of the md superblock, which marks the partition as a member of an array:

# mdadm --examine /dev/hda1

You can also use wildcards, like mdadm --examine /dev/hda*.
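
If you want a compact, one-line summary of every array, suitable for pasting into /etc/mdadm/mdadm.conf, mdadm can scan and report them all (the exact fields in the output vary with your mdadm version):

# mdadm --detail --scan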

Monitoring

mdadm itself can run in daemon mode and send you email when an active disk fails, when a spare fails, or when it detects a degraded array. A degraded array is either a new array that has not yet been populated with all of its disks, or an array with a failed disk:

# mdadm --monitor --scan --mail=me@here.net --delay=2400 /dev/md0

Your distribution may start the mdadm daemon automatically, so you won't need to run this command. Kubuntu controls it with /etc/init.d/mdadm, /etc/default/mdadm, and /etc/mdadm/mdadm.conf, so all you need to do is add your email address to /etc/mdadm/mdadm.conf.
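
For example, mail delivery needs nothing more than a MAILADDR line in /etc/mdadm/mdadm.conf, shown here with the same placeholder address as above:

MAILADDR me@here.net

You can then have mdadm send a test alert for each array it finds, to confirm that mail actually reaches you:

# mdadm --monitor --scan --oneshot --test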

Starting, Stopping, and Deleting RAID

Your Linux distribution should start your arrays automatically at boot, and mdadm starts them at creation. This command starts an array manually:

# mdadm -A /dev/md0
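
If the array isn't listed in /etc/mdadm/mdadm.conf, name its member partitions explicitly, or let mdadm assemble every array it can find. Both forms below are sketches using the devices from our example md0:

# mdadm -A /dev/md0 /dev/hda1 /dev/sda1

# mdadm --assemble --scan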

This command stops it:

# mdadm --stop /dev/md0

You'll need to unmount all filesystems on the array before you can stop it.
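
As a sketch, assuming the array holds a single filesystem mounted at /mnt/data (a made-up mount point), the whole sequence looks like this:

# umount /mnt/data

# mdadm --stop /dev/md0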

To remove a device from an array it must first be marked as failed. You can fail a healthy device manually and remove it in the same command:

# mdadm /dev/md1 --fail /dev/sda2 --remove /dev/sda2

If you're removing a healthy device because you want to use it for something else, or you just want to wipe everything out and start over, you must zero out the superblock on each device; otherwise it will continue to think it belongs to a RAID array:

# mdadm --zero-superblock /dev/sda2
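
For example, to retire our md1 array completely, you would stop it and then zero the superblocks on both of its members; mdadm accepts several devices in one command:

# mdadm --stop /dev/md1

# mdadm --zero-superblock /dev/hda2 /dev/sda2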

Adding Devices

You can add disks to a live array with this command:

# mdadm /dev/md1 --add /dev/sdc2

The array will take some time to rebuild, just as it does when you first create it.
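
You can watch the rebuild as it happens; while the array resyncs, /proc/mdstat shows a progress bar, the current speed, and an estimated time to completion:

# watch -n 5 cat /proc/mdstat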

That wraps up our whirlwind tour of RAID 10 and mdadm. Come back next week to learn how to manage LVM volumes, which you can use anywhere and not just on RAID arrays, and how to use smartctl to monitor hard disk health and warn you of impending failures.
