Storage Networking 101: Reaping the Benefits of a SAN
We promised to talk about some of the fancy things you can do with a SAN, things that would otherwise be impossible. These include global file systems and the ability to move storage between servers with virtually no downtime. In fact, the whole way you think about storage, and its hassles, can be thrown out the window.
The benefits of a storage network are seemingly endless. Your storage suddenly becomes fault-tolerant: you can lose a Fibre Channel switch, a disk array controller, or a host HBA, and everything should continue working. With the right selection of products, a company can also employ fewer people to manage storage, thanks to the wonderful tools available for handling configuration tasks. Data security and storage utilization also improve with a careful SAN deployment.
These days it isn't hard to sell the technology to business leaders, and hopefully the storage administrators are sold as well. Lower staffing overhead is attractive to both administrators and managers, but the real benefit for administrators, aside from reliability and all that important stuff, is the neat things you can do with a SAN.
Storage: Here, There, Everywhere
Perhaps the most exciting prospect of moving to a SAN environment is easy storage migration. In an old direct-attached storage (DAS) environment, moving storage to a new server was a lengthy process: first unmount the file systems using the storage, then physically unplug the unit, move it, connect it to the new host, and bring up the file systems on the new machine. What a hassle! Even more common was the need to allocate more storage to just a few LUNs; if the disk array was already full, you had no choice but to copy the file systems to a new array.
Now let's look at the same process in a SAN environment. Moving "some storage" from one host to another, even an entire array, involves only three steps, none of which require leaving your chair: unmount the file systems, reconfigure the SAN so the other host can see the storage, and quickly bring the file systems up on the new server. If you're good, this task can be done in less than a minute; in a DAS world, 15 minutes would be record-breaking.
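The three steps above can be sketched as a shell session. This is a minimal illustration, not a recipe: the mount points and device names are hypothetical, and the SAN reconfiguration itself happens in your array and switch management tools, not at the host.

```shell
# Old host: quiesce applications, then release the storage.
umount /export/data

# (SAN reconfiguration happens here: adjust LUN masking on the
#  array and switch zoning so the new host's HBA sees the LUN.)

# New host: rescan for the newly visible LUN, then mount it.
# "host0" and "/dev/sdb1" are hypothetical; check your own system.
echo "- - -" > /sys/class/scsi_host/host0/scan
mount /dev/sdb1 /export/data
```

No cables touched, and the data never moved; only the SAN's idea of who may see it changed.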
In a DAS environment, reallocating storage meant moving an entire array to a new host. If you needed fast RAID storage on a server, you had to connect a new array, and even if the application couldn't actually use all of that storage, the whole array still got connected to the host that needed some of it. In the SAN world, we can make much better use of our storage.
The second scenario mentioned above is the most common: we generally need to grow the size of just a few file systems rather than replace an entire array. If you have already allocated all the LUNs on your DAS unit, you're stuck moving data off the array. In a SAN environment, we simply allocate a new LUN to the host.
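From the host's side, consuming a freshly allocated LUN looks something like the following Linux sketch. The device name and mount point are hypothetical; which device node the new LUN appears as depends on your system.

```shell
# Rescan the SCSI bus so the host notices the LUN the array
# just presented ("host0" is hypothetical).
echo "- - -" > /sys/class/scsi_host/host0/scan

# Confirm the new device appeared (say it shows up as /dev/sdc),
# then put a file system on it and mount it.
cat /proc/partitions
mkfs -t ext3 /dev/sdc
mount /dev/sdc /export/newspace
```

Contrast that with the DAS alternative: racking a new array, cabling it, and rebooting to see it.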
Generally, we'll have a disk array with two controllers and around 15 FC disk drives. Extra "trays" of disks can be stacked onto, and managed by, the array controllers. If storage starts getting tight, simply add another tray. Amazingly, most arrays also support growing LUNs in place, so taking advantage of added disks doesn't always mean creating new LUNs. You must run a file system that supports online growth, however.
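Once the array has grown a LUN, the host still has to notice the new capacity and grow the file system into it. A minimal sketch for Linux with ext3, assuming the file system sits directly on the whole device (the device name is hypothetical):

```shell
# Tell the kernel to re-read the LUN's capacity after the
# array grew it ("sdb" is hypothetical).
echo 1 > /sys/block/sdb/device/rescan

# Grow the ext3 file system to fill the now-larger device.
# With a sufficiently recent kernel this works while mounted.
resize2fs /dev/sdb
```

If the file system lives inside a partition rather than on the raw device, the partition table has to be grown first, which is exactly why volume managers (discussed below) make life easier.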
Even if you do need to add storage from another array to your server, there are still no wires to touch. As long as everything is SAN-attached, you can easily allocate LUNs from multiple arrays to one server. As mentioned, it's a completely different mindset from the DAS days.
Without all the hand-waving, we should talk a bit about moving LUNs and adding storage to existing file systems. You really must run some type of volume management software: Veritas Volume Manager (VxVM, usually paired with VxFS), ZFS from Sun, or one of the Linux-native solutions such as LVM. When we talk about adding a LUN to a file system, what we're really talking about is software-level RAID, generally a concatenation or a RAID 0 stripe: we need some way of stitching multiple LUNs together at the host level to create file systems. To add more space to a file system, we simply attach a new LUN. This is trivial with ZFS and the Veritas tools.
Now what if we need to move a live file system from one host to another? Again, we ideally want help from the file system, but it isn't mandatory. The idea is to "export" the file system from one host, reconfigure the SAN (adjust LUN masking on the storage array so the new host can see the LUN, and rezone the switch if necessary), and then import the file system on the new server. This is trivial with ZFS and Veritas, but a very manual process if you're using a standard file system.
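Sticking with ZFS, the export/import dance looks like this (the pool name "tank" is hypothetical):

```shell
# Old host: export the pool, flushing everything and marking
# the on-disk state as cleanly released.
zpool export tank

# (Reconfigure the SAN: LUN masking on the array, plus switch
#  zoning if necessary, so the new host can see the LUNs.)

# New host: list pools visible on its LUNs, then import.
zpool import
zpool import tank
```

The new host discovers the pool's member devices on its own, so you don't even need to know which LUNs belong to the pool.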
Sharing Storage: Clustered File Systems
SAN file systems, also called clustered file systems, can be used by more than one server at once. Try this with a standard file system and, in short order, your data will be corrupt. A clustered file system must coordinate changes among all the hosts that wish to modify it at the same time.
Clustered file systems are extremely useful, especially for highly available file servers or databases. Without a SAN, using them is almost impossible. To be fair, there have been a few SCSI DAS arrays that offered two ports and allowed two hosts access at the same time, but those were rare and expensive.
There are only a few options today, but these file systems are maturing quite rapidly. They are:
- GFS: Global File System, for Linux.
- Xsan, for OS X.
- OCFS: Oracle Cluster File System, for Oracle databases.
- VMFS: for VMware.
Expect to see the open source clustered file systems improve in the future. With the widespread adoption of storage networks and higher demands on services, the need for clustered file systems will continue increasing.
We can certainly see that a SAN environment provides for more manageable storage allocations. No more late nights copying data, no more plugging cables to move arrays between hosts, and no more single points of failure. What's not to love?