In our recent installments we learned a whole lot about Linux software RAID. Today we’re going to learn about LVM, the Logical Volume Manager, and smartctl, the excellent hard disk health-monitoring utility. LVM creates logical storage volumes, and allows you to increase the size of your volumes painlessly on live filesystems. Smartctl uses your hard disk’s built-in Self-Monitoring, Analysis and Reporting Technology (SMART) to test its health, and to warn you of impending failures. Both SystemRescueCD and Knoppix include LVM, mdadm, and smartmontools for when you need an external repair or management disk.
LVM
LVM is actually LVM2, but most folks call it LVM and are happy. LVM1 should have long since disappeared from distribution repositories, and it’s easy enough to check which version you have:
$ lvm version
LVM version: 2.02.26-RHEL5
Good enough. (Incidentally, you might be interested in this article about Red Hat’s new official don’t-say-RHEL policy.)
Creating LVM volumes overwrites everything; you can’t create a new LVM setup over existing data. These days most Linux distributions include LVM options in their installers, which is a nice easy way to set it up. It is possible to put your root filesystem in an LVM volume, but I don’t recommend it. It complicates booting, updates, and repairs, and your root filesystem shouldn’t be growing at such a rate that you need LVM anyway.
The steps to setting up LVM are simple, and you can practice on a single hard disk with multiple partitions. First have at least two disk partitions available, then initialize your physical volumes, create a volume group, and then create your logical volumes:
# pvcreate -v /dev/sda1 /dev/sda2
# vgcreate -v -s 32 vg-testvm /dev/sda1 /dev/sda2
# lvcreate -v -L 4g -n lv-home vg-testvm
# lvcreate -v -L 2g -n lv-var vg-testvm
Use vgdisplay -v and lvdisplay -v to see your new creations and complete details. My own naming convention is to use “vg” to indicate a volume group, and “lv” for a logical volume. So you see the structure here: the volume group is your total LVM storage space, composed of several physical disk partitions, and then you divide your volume group into logical volumes, or even just one logical volume.
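LVM2 also ships the terser reporting commands pvs, vgs, and lvs, which give a one-line-per-object summary of the same information:
# pvs
# vgs
# lvs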
The -v switch turns on verbosity so you know what it’s doing, and -s 32 creates physical extents that are 32 megabytes in size. Extents are often shrouded in mystery because no one bothers to explain them, but they’re not mysterious at all: physical extents are LVM’s individual storage blocks, so the smallest possible size for a logical volume is a single extent. LVM1 allowed a maximum of 65,536 extents per volume, which with the default 4 MB extent size capped a volume at about 256 GB. LVM2 removes that hard limit, but a huge volume carved into tiny extents slows down the LVM tools, so the old rule of thumb still serves: calculate a reasonable extent size by dividing the desired size of your volume by 65,536. Extent sizes must be a power of 2, so round up to the next one and leave room for growth. Extent size doesn’t affect performance, just your storage allocations, and extents are fixed when you create your volume group, so you can’t change them later.
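For example, suppose you expect a volume group to grow to 2 TB: 2 TB is 2,097,152 MB, and 2,097,152 ÷ 65,536 = 32, so the 32 MB extents created above leave comfortable headroom.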
You have to increase or decrease the size of your volumes according to your extents, so here we’re limited to 32 MB increments. The maximum possible size of a logical volume for 2.6 kernels is 16 terabytes on 32-bit systems, and 8 exabytes on 64-bit systems.
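You can also allocate directly in extents with the lowercase -l switch. A quick sketch using the 32 MB extents from above (the lv-tmp volume is just for illustration); 64 extents × 32 MB = 2 GB:
# lvcreate -v -l 64 -n lv-tmp vg-testvm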
LVM Filesystems and Mountpoints
Now it’s time to put filesystems and mountpoints on your logical volumes. Logical volumes are akin to physical disk partitions, so “lv-home” is going to be /home, and “lv-var” is /var:
# mkfs.xfs /dev/vg-testvm/lv-home
# mkfs.ext3 /dev/vg-testvm/lv-var
You may use any filesystem you want. Now create your mountpoints, adjust permissions and ownership, and then create your /etc/fstab entries. You can use either the /dev names or UUIDs:
/dev/vg-testvm/lv-home /home xfs defaults 0 2
/dev/vg-testvm/lv-var /var ext3 defaults 0 2
UUID=8d566d0e /home xfs defaults 0 2
UUID=681919d5 /var ext3 defaults 0 2
The UUIDs are truncated to conserve pixels. Note that fstab matches on the filesystem UUID, which blkid reports, not the internal volume UUIDs that vgdisplay -v shows. Now you can reboot or manually mount your new logical volumes, and you’re ready to start using them just like physical disk partitions.
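blkid reads the filesystem UUIDs straight off the new volumes (assuming the names used above):
# blkid /dev/vg-testvm/lv-home /dev/vg-testvm/lv-var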
Increasing the Size of a Logical Volume
Follow these steps to add a physical disk partition to an existing logical volume:
# pvcreate -v /dev/sdb1
# vgextend vg-testvm /dev/sdb1
# lvextend -v -L +10G /dev/vg-testvm/lv-var
Then resize the filesystem, using the command specific to your filesystem: ReiserFS can be safely resized while mounted, XFS must be mounted, and ext2/3 should be unmounted first:
# umount /var
# resize2fs -p /dev/vg-testvm/lv-var
# mount /var
The others look like this:
# resize_reiserfs /dev/volumegroup/logical-volume
# xfs_growfs /home
ReiserFS uses the /dev name, and XFS uses the name of the mountpoint. JFS is rather complicated, so I shall leave it to the reader to find and follow correct instructions.
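Whichever filesystem you run, df is a quick sanity check that the extra space actually arrived:
# df -h /var /home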
Disk Failure Warning System
It’s a lot nicer to replace a failing disk at leisure than to be surprised, so the smartmontools package is a great addition to your LVM and RAID setups. SMART can often be enabled in your BIOS, or with smartctl. First see if it is turned on:
# smartctl -i -d ata /dev/sda
[...]
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Note that SATA and SCSI disks must use the -d option to specify the disk type. If SMART isn’t on, turn it on with smartctl -s on -d ata /dev/sda.
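While you’re at it, the same command can also enable automatic offline data collection and attribute autosave (a sketch, assuming an SATA disk on /dev/sda; adjust -d and the device node for your hardware):
# smartctl -s on -o on -S on -d ata /dev/sda
With SMART enabled, you can run a quick health check: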
# smartctl -H -d ata /dev/sda
[...]
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
If your drive keeps its own error log, and not all of them do, check this too:
# smartctl -l error -d ata /dev/sda
If there are no errors, or just a few old transient errors, it’s OK. If the drive fails the health test and shows many errors, especially repeated errors, the disk is doomed.
You can enable smartd to continually monitor disk health, and warn you of impending problems. On CentOS it’s on by default, so all you have to do is edit /etc/smartd.conf to name the disks you want monitored, and enter your email for notifications. Fedora users have to turn it on in the Services control panel. On Debian and Ubuntu, enable it in /etc/default/smartmontools. This is a simple example /etc/smartd.conf:
/dev/hda -H -m carla@alrac.net
/dev/sda -d ata -H -m carla@alrac.net
And that’s the works. This is very simple, and many disks keep running even after they fail the health test, so you have time to replace them before they die completely. You can run more extensive tests and get even better warnings of impending troubles; see man smartctl and man smartd for details.
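For example, a long self-test runs in the drive’s background and deposits its results in the self-test log (again assuming an SATA disk on /dev/sda):
# smartctl -t long -d ata /dev/sda
# smartctl -l selftest -d ata /dev/sda
smartd can schedule these tests automatically with the -s directive in /etc/smartd.conf; man smartd documents the scheduling syntax.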
RAID 10 Update
There is little documentation on Linux’s RAID 10, or how it differs from RAID 1+0. Fortunately, some readers passed on some good information on what RAID 10 is, and what sets Linux’s implementation apart. First of all, never mind that everyone says that RAID 10 and 1+0 are the same thing, and that even man md says so; they’re not the same. RAID 10 is a new type of array: it is a single array, it can have an odd number of drives, and the minimum required is two. RAID 1+0 is a combination of multiple arrays; it requires at least four disks, and must always have an even number.
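For example, mdadm will happily build a RAID 10 array from three disks, something RAID 1+0 cannot do (a sketch; substitute your own partitions):
# mdadm --create /dev/md0 --level=10 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1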
Resources
- man smartctl
- man smartd
- LVM Howto
- Manage a Linux RAID 10 Storage Server
- Linux RAID Smackdown: Crush RAID 5 with RAID 10