Storage Networking — Boring No More

Storage networking used to be the granny of the enterprise. Not only was it slow (sorry, granny), but it also lacked the dynamism that characterized development elsewhere in the data center and the wide area. Basically, it was stuck in a SAN/NAS rut.

But those days are quickly coming to an end as IT executives realize that in order to produce infrastructure capable of supporting next-generation applications and services they need to bring storage up to par with the rest of the data ecosystem.

These days, storage networks are out and storage fabrics are in. As traditional north-south architectures give way to east-west, southeast-northwest, and any other combination out there, new forms of storage connectivity are starting to drive emerging use cases. That, in turn, is leading to even more innovative approaches to connectivity.

According to Mellanox's Kevin Deierling, the one constant in all of this will be Ethernet. He argues that today's Ethernet outclasses all other approaches, including Fibre Channel and InfiniBand, when it comes to network automation and optimization for all modes of storage. Specifically, converged Ethernet solutions natively support and accelerate advanced technologies like virtualization and containerization, while also delivering the advanced protocols that will underpin software-defined storage (SDS) environments. In other words, the network renaissance will be built around the Ethernet Storage Fabric (ESF).

Not so fast, says Fausto Vaninetti, director and secretary of the Storage Networking Industry Association (SNIA). In an interview with UK-based The Register, he noted that when building fabrics around advanced interfaces like NVMe, Fibre Channel may not be as ubiquitous as Ethernet across the data center, but it is present in virtually all disk arrays. It also delivers superior performance compared to solutions like RoCE and iWARP. Fibre Channel didn't gain popularity in high-end enterprises without reason, and its advantages don't disappear just because networking is now designed as a fabric rather than a point-to-point architecture.

Chelsio Communications might have something to say about this, however. The company just demonstrated a new approach to NVMe over Fabrics connectivity for JBOF (just a bunch of flash) enclosures using its T6 100 GbE iWARP adapter. The company says it can support a broad range of Ethernet speeds, from 1 to 100 GbE, while providing backward compatibility with legacy Ethernet installations for incremental deployment. Because iWARP runs over standard TCP, the adapter works with any switch, with no DCBX requirement, allowing enterprises to choose an appropriate level of price/performance across the fabric while also providing hardware offload support for Fibre Channel, iSCSI, iSER and other protocols.

Of course, all of this assumes that storage fabrics will continue to be the primary drivers of speed and connectivity in advanced data architectures. But this might not necessarily be the case, says Objective Analysis' Jim Handy. New generations of non-volatile DIMMs (NVDIMMs) are making it easier and more cost-efficient to build advanced memory architectures to handle the most time-critical workloads. While these solutions will remain more expensive than solid-state and hard disk arrays, they will likely emerge as the new Tier 1 level of storage. And that means the need to drive ever-faster performance across distributed compute architectures will likely diminish.

One thing seems certain: the days of simply provisioning more storage regardless of the underlying performance issues are long gone. Going forward, storage architectures will be as closely tied to application and user requirements as compute and networking, pushing the enterprise to deploy a diversity of solutions rather than whichever system is grabbing the most headlines at the moment.

Arthur Cole is a freelance journalist with more than 25 years’ experience covering enterprise IT, telecommunications and other high-tech industries.
