Are SANs Still Relevant?

During difficult economic times, firms look to cut costs, and a SAN infrastructure can consume an outsized portion of the IT budget. The question is: is it worth it? We weigh the benefits of a SAN against the alternatives in a SAN-less world.

Why a SAN Makes Sense

Centralized management is the number one reason for SAN deployment. A single place to configure storage, and access to it, for the entire infrastructure is certainly handy. Most SAN environments standardize on a single vendor to ensure interoperability, but also so that the configuration interface is well understood by all employees. Limiting surprises means that standard procedures can be developed, and problems can more easily be traced back to the specific human error.

Moving storage to a different server without physically moving devices is also a huge benefit. In practice, you connect each server to SAN switches (at least $1,000 per switch port) via the Fibre Channel HBA you purchase for each server ($500-$1,000 each), and never again have to move the server to reconfigure storage. It is handy, but hardly cost-effective. More often than not, SAN-laden companies purchase HBAs with servers just in case they might want to attach them to the SAN in the future. The cost of SAN switches and HBAs mentioned above is just to get servers attached; we’re ignoring the storage array costs for now.

Performance Issues

While 4Gb/s and 8Gb/s links are standard now, most people find SAN performance underwhelming. A few servers can easily eat up the available bandwidth and CPU power (for RAID processing) of a storage array. The theoretical benefit of a SAN-attached storage array is higher storage utilization, thanks to its flexible allocation and data-movement capabilities. Unfortunately, performance issues often limit the number of connected servers per array, or require manual segmentation.

SAN issues are almost always bandwidth- and CPU-related. If you run out of bandwidth, no problem: just connect another fibre link from the array to the switch. The FC protocol handles this well, but you’ve just burned another expensive switch port, and storage arrays come with a limited number of FC connectors, so scalability is capped. More often, though, performance suffers at the hands of RAID configurations. CPUs on storage arrays are massively parallel and redundant, but with many servers accessing the same storage, they often don’t perform as well as expected. The solution is to upgrade to a more powerful array, and in that category of SAN array, you’re talking well into the six-figure range, per unit!

Standard practice dictates that SAN-attached arrays be configured so that logical volumes (which the SAN exports as block devices) are striped across the head unit and all expansion arrays. Each volume can also be carved up into the site’s standard LUN size. In the end, you have no idea which physical disks hold the data, which is good from a management perspective. When performance issues arise, however, the only way to improve is to move the data to dedicated disks. Performance still not good enough? It’s time to switch to RAID-10 and use only half of the raw space: 10TB of usable RAID-10 capacity consumes 20TB of raw disk, which at roughly $10/GB for most SAN-attached storage arrays works out to about $200,000. This quickly gets out of control.

Tellingly, NASA’s Nebula project recently noted, “we cannot accept the performance penalty of a SAN connection.”

DAS/Local Storage, Revisited

Direct-attached storage gets a bad reputation: it cannot be centrally managed, you must physically move DAS arrays to move storage, and the list goes on. Where cost or performance matters, however, there is no substitute.

The truth is, DAS arrays can be just as manageable. Vendors ship management consoles that connect to every server in your infrastructure and provide a holistic view of all storage. Storage utilization does often decrease in DAS environments, since there is no virtualization or sharing at the storage level. But even 50 percent lower utilization is acceptable for most organizations when the same amount of storage costs five to ten percent as much.

Some level of storage virtualization is still possible, however. A server with a DAS array can keep doing its regular work while also sharing part of its storage with other servers via iSCSI. This is popularly known as a DIY SAN: a few servers with tons of disk space export it over iSCSI to other servers that need only a network card (no expensive FC HBA).
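As a rough sketch of the DIY approach on Linux, a spare local disk can be published with the kernel’s iSCSI target through the targetcli utility and picked up by clients running open-iscsi over plain Ethernet. The device names and IQNs below are made-up placeholders, and CHAP authentication is omitted for brevity:

    # On the storage server: publish local disk /dev/sdb as an iSCSI LUN
    targetcli /backstores/block create name=shared0 dev=/dev/sdb
    targetcli /iscsi create iqn.2012-01.com.example:storage.shared0
    targetcli /iscsi/iqn.2012-01.com.example:storage.shared0/tpg1/luns create /backstores/block/shared0
    targetcli /iscsi/iqn.2012-01.com.example:storage.shared0/tpg1/acls create iqn.2012-01.com.example:client1
    targetcli saveconfig
    # (older targetcli releases may also require creating a network portal under tpg1)

    # On the client: discover the target and log in -- no FC HBA required
    iscsiadm -m discovery -t sendtargets -p storage-server.example.com
    iscsiadm -m node -T iqn.2012-01.com.example:storage.shared0 -l

The client then sees an ordinary block device, same as it would from a Fibre Channel LUN.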

Another common trend is purchasing locally attached disks with every server. The cost of SATA and SAS drives has gone down, the number of drive slots in a 2U server has gone up, and performance and manageability have improved as well.

We won’t suggest this is more hands-off or easier to manage. But how often do you really move storage around? The price of a SAN replication license, for example, can pay two hardware techs for a year, and they can handle the physical moves. Think about it.

Trivial Answers to Common SAN Selling Points

Speaking of replication, here is where storage vendors really make their money: you need two arrays, one for each site, and then a software license to enable the feature. Why not use Linux and DRBD to replicate volumes instead? Or even Windows; we’re not on a Linux rant here, just an anti-SAN one.
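For a sense of what that involves, here is a minimal sketch of a DRBD resource replicating one local disk between two servers. The hostnames, IP addresses, and backing device are hypothetical; protocol C is synchronous replication, while protocol A is the asynchronous mode typically used over long-distance, site-to-site links.

    # /etc/drbd.d/r0.res (same file on both hosts; names and addresses are examples)
    resource r0 {
        protocol C;              # synchronous; protocol A for async, long-distance links
        device    /dev/drbd0;
        disk      /dev/sdb1;     # local backing device being replicated
        meta-disk internal;
        on alpha { address 10.0.0.1:7789; }
        on beta  { address 10.0.0.2:7789; }
    }

Then initialize and bring up the resource:

    drbdadm create-md r0          # write DRBD metadata (run on both hosts)
    drbdadm up r0                 # attach the disk and connect to the peer (both hosts)
    drbdadm primary --force r0    # on one host only: start the initial sync
    # /dev/drbd0 can now be formatted and mounted on the primary like any local disk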

Global file systems, also known as clustered storage, are another reason people fall into the SAN trap. The truth is, DAS arrays often support two host connections, and you can also replicate storage with DRBD and run a clustered file system on top of it. Distributed file systems like Lustre and Hadoop’s HDFS often fit the bill as well.

Migrating storage to a new server is trivial in a SAN environment. Just give the new server access to the data, and it’s done. Moving data between SAN arrays is also easy: give the new server access to both the old and new LUNs, and mirror the data using your software volume manager, then break the mirror when done. This method of moving storage between physical storage devices, surprisingly, is the number one convenience many SAN users cite. This, too, can be done in Linux without shared SAN storage.
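On Linux, the rough equivalent without a SAN uses the LVM, which can migrate extents between devices online. Assuming the data lives in a hypothetical volume group vg0 on an old device and a new device has been attached (names below are placeholders):

    pvcreate /dev/mapper/new_lun                      # label the new device for LVM
    vgextend vg0 /dev/mapper/new_lun                  # add it to the existing volume group
    pvmove /dev/mapper/old_lun /dev/mapper/new_lun    # copy extents online
    vgreduce vg0 /dev/mapper/old_lun                  # retire the old device once the move finishes

pvmove builds a temporary mirror internally, so this is the same mirror-and-break idea without any shared SAN storage.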

Finally, the big issue is virtualization. To get live migrations, the host machines must have access to the same underlying storage. Instead of a SAN, you can use iSCSI hosted by a storage server cluster, replicate the underlying VM host storage with DRBD, or share VM images over NFS. Many other possibilities exist, depending on your environment and performance requirements.
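The NFS route is the simplest of the three. A sketch with hypothetical host names, using the directory libvirt conventionally uses for VM images:

    # On the storage server, export the image directory to the VM hosts (/etc/exports):
    /var/lib/libvirt/images  vmhost1(rw,sync,no_root_squash) vmhost2(rw,sync,no_root_squash)

    # Apply the export table:
    exportfs -ra

    # On each VM host, mount the shared image store at the same path:
    mount -t nfs storage-server:/var/lib/libvirt/images /var/lib/libvirt/images

With every host seeing the same images at the same path, the hypervisor can live-migrate guests without any shared block storage.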

In summary, the manageability of a non-SAN environment is better than you may think. Vendors won’t admit it, and SAN proponents within your IT department want to justify past purchases. Once you get past that, changing your storage strategy is certainly worth looking into. If Google can run on commodity hardware, so can you.
