Storage Networking 101: Configuring Disk Arrays


The most critical, and sometimes most tedious, part of setting up a SAN is configuring each individual disk array. In this Storage Networking 101, we’ll delve into best practices and cover the general concepts you must know before configuring SAN-attached storage.

There are three general steps when configuring a disk array:

  • First, create a RAID set. It can be any type of RAID the array supports; we’ll assume RAID-5 for this article so that we can talk about hot spares.
  • Second, either slice up the RAID set to present multiple LUNs to a host, or create “RAID Groups,” as most vendors call them. This step is entirely optional, but it can make your life easier.
  • Third, assign LUNs to a host.

Create a RAID Set

The first step can be done many ways. Say you have an array that holds 14 disks per tray, and you have four trays. One option is to create two (or more) RAID-5 volumes on each tray. You can then assign part or all of each RAID-5 volume to various hosts. The advantage of this method is that you know exactly which hosts use which disks. If the array and its three additional trays were purchased at the same time, it actually makes more sense to allocate the RAID sets vertically, across trays, so that a single tray failure doesn’t take out a RAID volume. With only four trays, though, this means you’ll get three disks’ worth of usable space per 4-disk RAID-5 volume: probably not a good use of space.
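
To make that trade-off concrete, here is a minimal Python sketch of the usable-space arithmetic. The 146GB disk size is a hypothetical example, not a figure from the article:

```python
def raid5_usable_gb(disks_per_set: int, disk_size_gb: int) -> int:
    # RAID-5 spends one disk's worth of capacity on parity,
    # so usable space is (N - 1) * disk size.
    return (disks_per_set - 1) * disk_size_gb

# Horizontal layout: two 7-disk RAID-5 sets per 14-disk tray.
print(raid5_usable_gb(7, 146))   # 876 GB usable per set (~86% of raw)

# Vertical layout: 4-disk RAID-5 sets with one member per tray,
# so losing a whole tray costs each set only one member.
print(raid5_usable_gb(4, 146))   # 438 GB usable per set (75% of raw)
```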

More often, people will create huge RAID-5 sets on the arrays. There’s a balance to be struck between performance and resiliency: more disks mean better performance, but they also mean that two disk failures at once could take out all of your data. Surprisingly, multiple simultaneous disk failures are quite common. When the array starts rebuilding data onto a previously unused disk, that disk frequently fails.

Configure RAID Groups

The second step causes quite a bit of confusion. Regardless of how you’ve configured the RAID sets in the array, you’ll need to bind some amount of storage to a LUN before a host can use it. The LUN can be an entire RAID-5 set (not recommended), or just a portion of one. The partitioning method ensures that you aren’t giving too large a volume to a host. There are many reasons for this:

  • Some file systems cannot handle a 1TB or larger volume
  • Your backup system probably won’t be able to back up a file system that’s larger than a single tape
  • The important one: more LUNs presented to the host (each seen as an individual disk by the OS) mean that separate I/O queues will be used

Back to the second step: RAID groups. A 1TB RAID-5 set partitioned into 100GB chunks, for example, gives you 10 LUNs to deal with. If you don’t care which hosts use which disks, you can just throw these LUNs into a group with other LUNs. I prefer to keep one RAID group per host, but others see that as limiting flexibility. Some hosts do need a dedicated set of disks, where you know that only one host will be accessing them. A high-traffic database server, for example, should not have to contend with other servers for I/O bandwidth and disk seeks. If it truly doesn’t matter to you, simply create a bunch of LUNs and assign them to groups at random.
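
As a rough illustration of that 1TB example, here is a small Python sketch of carving a RAID set into LUNs and keeping one group per host. The host name and the object layout are hypothetical, not any vendor’s actual interface:

```python
from dataclasses import dataclass, field

@dataclass
class Lun:
    lun_id: int
    size_gb: int
    raid_set: str

@dataclass
class RaidGroup:
    host: str                       # one group per host, in this layout
    luns: list = field(default_factory=list)

def carve_luns(raid_set: str, total_gb: int, chunk_gb: int) -> list:
    # Slice a RAID set into fixed-size LUNs: 1TB in 100GB chunks -> 10 LUNs.
    return [Lun(i, chunk_gb, raid_set) for i in range(total_gb // chunk_gb)]

luns = carve_luns("RAID5-A", total_gb=1000, chunk_gb=100)
db_group = RaidGroup("dbserver01")          # hypothetical database host
db_group.luns.extend(luns[:4])              # dedicate four LUNs to it
print(len(luns), [l.lun_id for l in db_group.luns])   # 10 [0, 1, 2, 3]
```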

It is also important to create and assign “hot spare” coverage. Spare disks that are left inside the array are “hot” spares. They can be “global,” so that any RAID volume can use them in the event of a failure, or they can be assigned to specific RAID volumes. Either way, ensure you have a hot spare if you can afford the lost space. If not, be sure to monitor the array closely, because you’ll need to replace any failed disk immediately.
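
The distinction between global and dedicated spares boils down to a lookup order when a disk fails. A hedged sketch, with made-up disk and RAID-set names:

```python
def pick_spare(spares: list, failed_raid_set: str):
    # Prefer a spare dedicated to the failed set, then fall back to a global one.
    for spare in spares:
        if spare["scope"] == failed_raid_set:
            return spare
    for spare in spares:
        if spare["scope"] == "global":
            return spare
    return None   # no coverage: the failed disk must be replaced immediately

spares = [{"disk": "tray1-slot13", "scope": "RAID5-A"},
          {"disk": "tray2-slot13", "scope": "global"}]
print(pick_spare(spares, "RAID5-B"))   # falls back to the global spare
```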

This is where it gets tricky. Different storage arrays will have different terminology, and different processes for assigning LUNs or groups of LUNs to a host.

Assign Your LUNs

Step three, “assign LUNs to a host,” means that you’re going to map WWNs to LUNs on the array. If you didn’t, any properly zoned host could see all the volumes on the array, and pandemonium would ensue. Be cautious about certain cheaper storage arrays, too: they may not even have this feature until you purchase a license to enable it. While the purveyors of limited-use technology call this feature “WWN Masking” or “SAN-Share,” the market leaders in the SAN space recognize that it’s required functionality.

The most common approach is to create a “storage group,” which contains “hosts” and “LUNs” (or RAID groups with many LUNs). Whatever terminology a vendor uses, the universal concept is that you need to create a host entry. This is done by manually entering a WWN, or by connecting the host and zoning it appropriately so that the array can see it. Most arrays will notice the new initiator and ask you to assign it a name. Once your hosts, and all their initiator addresses, are known to the array, it can be configured to present LUNs to each host.
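
Conceptually, a storage group is just a mapping from initiator WWNs to the LUNs those initiators are allowed to see. A minimal sketch of that idea, with invented host names and WWNs:

```python
# Each storage group ties a host's initiator WWNs to the LUNs masked to it.
storage_groups = {
    "dbserver01": {
        "initiators": ["10:00:00:00:c9:aa:bb:01", "10:00:00:00:c9:aa:bb:02"],
        "luns": [0, 1, 2, 3],
    },
    "webserver01": {
        "initiators": ["10:00:00:00:c9:cc:dd:01"],
        "luns": [4, 5],
    },
}

def visible_luns(initiator_wwn: str) -> list:
    # Only the LUNs in the matching group are presented to this initiator.
    for group in storage_groups.values():
        if initiator_wwn in group["initiators"]:
            return group["luns"]
    return []   # unknown initiators see nothing, even if zoned to the array

print(visible_luns("10:00:00:00:c9:cc:dd:01"))   # [4, 5]
```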

One final note about array configuration. You’ll be connecting two HBAs to two different fabrics, and the array will have one controller in each fabric. The host needs to be configured for multipathing, so that either target on the array can disappear and everything will continue to function. We’ll dedicate an entire article to host configuration, including multipathing and volume managers, but be aware that the disk array side often needs configuring too. The majority of disk arrays require that you specify what type of host is being connected, and what type of multipathing will be used. Without multipathing, LUNs need to be assigned to specific controllers, so that the appropriate hosts can see them.
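
To illustrate why the array side matters, here is a toy Python model of a LUN presented through a controller in each fabric: lose one path and I/O continues over the other. The names are invented, and no real multipathing stack is this simple:

```python
# lun0 is reachable through a target port on each controller,
# and each controller sits in a different fabric.
paths = {
    "lun0": [
        {"controller": "A", "fabric": "fabric-1", "state": "up"},
        {"controller": "B", "fabric": "fabric-2", "state": "up"},
    ],
}

def usable_paths(lun: str) -> list:
    return [p for p in paths[lun] if p["state"] == "up"]

# Simulate losing controller A (or the whole fabric-1 side of the SAN).
for path in paths["lun0"]:
    if path["controller"] == "A":
        path["state"] = "down"

print(usable_paths("lun0"))   # I/O continues over controller B in fabric-2
```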

Once LUNs are assigned to a host, they should be immediately available to the operating system, viewed as distinct disks.

Think about this for a moment. You’ve taken individual disks and combined them into RAID volumes. Then you’ve probably partitioned those into smaller LUNs, which is handled by the disk array’s controllers. Now the host has ownership of a LUN that is spread across possibly 10 different disks, yet is smaller than any single one of them. The host OS can choose to stripe together multiple LUNs, or even partition individual LUNs further. It’s quite fun to think about.
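
A quick back-of-the-envelope trace of that layering, with hypothetical sizes:

```python
# Physical disks -> RAID-5 set -> carved LUNs -> host volume.
disk_size_gb = 146                                   # hypothetical disk size
disks_in_set = 10                                    # one 10-disk RAID-5 set
raid5_usable_gb = (disks_in_set - 1) * disk_size_gb  # 1314 GB usable
lun_size_gb = 100
lun_count = raid5_usable_gb // lun_size_gb           # 13 LUNs of 100 GB each

# Each LUN spans all ten spindles, yet is smaller than any single disk.
print(lun_size_gb < disk_size_gb)                    # True

# The host can then stripe several of those LUNs back together.
host_volume_gb = 4 * lun_size_gb                     # e.g. a 400 GB striped volume
print(raid5_usable_gb, lun_count, host_volume_gb)    # 1314 13 400
```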
