When creating a disaster recovery plan, try not to think only about large-scale disasters. Smaller, more realistic failures, such as the loss of an entire blade server, are far more likely to occur. In this article, we will talk about the concept of running servers without local disk space, and how SAN and server virtualization can provide extremely flexible recovery solutions.
Enterprises have used disk images for years. Solaris, for example, supports installation via the flash-archive (FLAR) method, whereby a system administrator creates a single image for all similar hardware. The installer is smart enough to copy the archive to local disk and then modify configuration files to make the server unique. Without such modifications, you cannot simply copy a disk image to another server; you would bring a second copy of the same server into existence, complete with the same IP address. Oh, we've all done it: 'dd' the disk from one computer to another, boot it into single-user mode, and change the hostname. It's possible, but extremely labor-intensive and tedious.
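To give a feel for how manual that clone-and-rename dance really is, here is a rough sketch; the device names and file paths are hypothetical, Solaris-style examples, not a recommended procedure:

```
# Copy the source boot disk onto the new server's disk, block for block
# (in this sketch, both disks are temporarily attached to one admin host).
dd if=/dev/dsk/c0t0d0s2 of=/dev/dsk/c0t1d0s2 bs=1024k

# Then boot the clone into single-user mode and make it unique:
echo "newhost" > /etc/nodename     # new hostname
vi /etc/hosts                      # fix the IP address entry
vi /etc/hostname.hme0              # interface configuration for this host
```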
There is commercial software available to manage this problem seamlessly, especially for Windows server products. It allows an administrator to update a single image, push it out to all servers, and update every server at once. In the Unix world, the story is a bit different, but it's not all bad news.
Remote Booting – No More CDs
Unix systems have had remote-boot capabilities for upward of 30 years. Generally, a host requests an IP address via BOOTP or DHCP, finds a TFTP server (and directory location), and starts copying its kernel over the network. Systems that support this type of boot generally configure themselves from network-available information; protocols such as DHCP can provide many tidbits of configuration data.
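As a rough illustration of that handshake, a DHCP server might be set up something like this. This is an ISC dhcpd-style fragment, and all names and addresses are invented for the example:

```
# Hand out addresses and point PXE clients at a TFTP server and boot file.
subnet 192.168.10.0 netmask 255.255.255.0 {
    range 192.168.10.100 192.168.10.200;
    option routers 192.168.10.1;
    next-server 192.168.10.5;        # TFTP server holding the boot loader/kernel
    filename "pxelinux.0";           # file the client fetches via TFTP
}
```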
To boot a disk image stored remotely, however, a few things are required. First, the image must be unique to each server. Second, the hardware, without any help from a running OS, must support whatever protocols are necessary to locate and boot from that remote image.
In the first network-boot scenario, the booting servers depend on the health of other servers in order to boot themselves. The bootable kernels, and the services required to discover their location, may be hosted on several different servers, and the failure of any one of them means other servers cannot boot. SAN-based image booting, on the other hand, requires no other servers: a healthy SAN and a properly configured HBA are all that's required.
The idea of storing all your OS disks on the SAN makes good sense for a few other reasons as well. It is extremely efficient for managing many servers at once, and OS images stored on the SAN mean that backups and duplication for disaster recovery can be done via the SAN directly. SAN-based backups (and replication) copy data off the SAN volumes at the block level and don't require any backup software to be installed on each server.
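Conceptually, a block-level backup taken from the storage side can be as simple as the hypothetical sketch below; real arrays provide snapshot and replication features that accomplish the same thing far more efficiently, and the device names here are invented:

```
# Copy a server's boot LUN, as seen from a backup/admin host, to an
# image file; no backup agent runs on the server that owns the LUN.
dd if=/dev/mapper/web01-boot bs=1M | gzip > /backup/web01-boot.img.gz
```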
SAN booting, as mentioned, requires that each image be customized per server. This does not mean that each image must be custom-crafted. A single golden image can be copied to server-specific storage space for use by that server. When the server boots, it will obviously need to be told its IP address, hostname, and other unique attributes. This is a one-time manual configuration, and many enterprises have built impressive applications to aid in the process. A first-time-boot GUI can prompt the installer for the requisite attributes, but of course it can also be automated. If the golden image is configured to use DHCP and then run some sort of configuration management software, the need to manually configure a new host greatly diminishes.
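A minimal sketch of provisioning a new server from a golden image might look like this, assuming both the golden image and the new server's boot LUN are visible to an admin host (device names are hypothetical):

```
# Clone the golden image onto the new server's boot LUN at the block level.
dd if=/dev/mapper/golden-os-image of=/dev/mapper/server42-boot bs=1M

# On first boot the server picks up an address via DHCP; a configuration
# management agent baked into the image then applies host-specific settings.
```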
SAN Booting, Conceptually
Often people have trouble visualizing how a host can do so much before an OS is even loaded. Network booting works because the NIC understands PXE booting, which tells the server where to find and load a kernel. Servers can find their OS disk over a SAN (or via iSCSI) because the HBA is configured manually beforehand. Typically, each server is assigned a LUN on the storage array that serves as its boot disk. It can optionally be given other disks too, if required.
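For the iSCSI case, the same idea applies: the initiator, whether software or adapter firmware, must be told which target and LUN hold the boot disk. An actual iSCSI boot is configured in the adapter's firmware, but the discovery and login steps look conceptually like this sketch using the open-iscsi command-line tools (target name and addresses are invented):

```
# Discover targets offered by the storage portal at 192.168.20.10 ...
iscsiadm -m discovery -t sendtargets -p 192.168.20.10

# ... then log in to the target that presents this server's boot LUN.
iscsiadm -m node -T iqn.2001-05.com.example:server42-boot \
         -p 192.168.20.10 --login
```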
The HBA must be configured to know which LUN to access, and if the SAN zoning and the array are configured properly, the HBA can then present a new disk to the server, which boots it the normal way. With Sun SPARC hardware you'd set the default boot device to the device path of the HBA, with additional LUN information. For BIOS-based x86 hardware, there's generally a configuration menu reachable via some unnatural key combination shortly after boot.
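On SPARC, for instance, the change amounts to a couple of commands at the OpenBoot "ok" prompt; the device path below is purely illustrative, and yours will depend on the HBA and the storage port's WWN and LUN:

```
ok setenv boot-device /pci@8,600000/SUNW,qlc@2/fp@0,0/disk@w2100001234567890,0:a
ok boot
```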
Benefits
Of course, the whole premise of this article is based on disaster recovery. With SAN booting, we can quickly boot a server on new hardware via these few steps:
- Configure new HBA to boot off the desired LUN
- Zone the SAN fabric to allow the new HBA access to the correct storage
- Configure the storage array to allow the new HBA/server to access the LUN
In fact, with SAN virtualization, the last step is not required either. Allocating LUNs and zoning are taken care of by most types of SAN virtualization in a single step.
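As a concrete illustration of the zoning step, a Brocade-style fabric might be updated with commands along these lines; the zone name and WWNs are made up, and the LUN-masking step on the array side varies by vendor:

```
# Create a zone containing the new HBA and the array's storage port,
# add it to the active configuration, and enable the configuration.
zonecreate "server42_boot", "10:00:00:00:c9:aa:bb:cc; 50:06:01:60:11:22:33:44"
cfgadd "prod_cfg", "server42_boot"
cfgenable "prod_cfg"
```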
Using a SAN for DR isn't a new concept; in fact, one hardware vendor actively encourages it. HP servers often come with internally accessible USB ports attached to the motherboard. The idea is to put a small thumb drive in each server, pre-loaded with an emergency Linux boot image. If the time comes to reload a server, just boot it up, grab a disk image from the SAN, and copy it to local storage. This can be done with either a golden image or a backed-up copy of the actual server's disk. Better still is the all-the-time SAN-booting scenario. Don't forget, you'll also save the power and cooling costs of at least two disk drives per server, which can add up to a surprising amount.
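In that thumb-drive scenario, the actual restore can be as crude as a block copy run from the emergency environment. The device names below are hypothetical; once the HBA driver loads, the SAN LUN simply appears as another disk:

```
# From the emergency Linux image: pull the saved OS image off the SAN
# LUN and write it over the server's local boot disk.
dd if=/dev/sdb of=/dev/sda bs=1M && sync
```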
Quicker than ever, we can recover from server failure, and even boot disk images in remote locations if we're using SAN-based replication to another array. Perhaps the scariest scenario of all, aside from a true disaster, is losing an entire blade chassis. Some host as many as 16 blades, each with many virtualized guest OS instances running. The death of an entire blade chassis can take a very large part of an enterprise offline. Wait, what's that? You're SAN-booting the host OSes on each blade? No problem: just boot them on another server, and you don't even need to touch the hardware or leave your desk.