SDS and the Legacy Infrastructure Conundrum
The fully software defined data center looks closer than ever, but before it can be realized, the software defined movement must address the storage question.
With the push on to convert today’s physical networks into software defined infrastructure, we are closer than ever to realizing the dream of a fully software defined data center.
But to get there, we must first address one more piece of business: applying the same basic concepts of SDN to the storage farm — more specifically, to the SAN and NAS architectures that reside in the data center. This is not as easy as it sounds. Many conflicting visions exist of what software defined storage should look like and how it should operate. On top of that, the enterprise has poured a lot of time, money and resources into building its current storage infrastructure and will want to leverage that investment to the greatest extent possible.
This is why a little-noticed announcement from EMC and Brocade could turn out to be much more significant to the software defined movement than many realize. The two companies have integrated Brocade’s SAN management platform with EMC’s new ViPR SDS system to simplify deployment of SDS in legacy EMC Fibre Channel environments. The package uses the Storage Management Initiative Specification (SMI-S) interface, so it can extend to third-party infrastructure as well, essentially giving the enterprise a speedy on-ramp to SDS throughout existing storage environments and then tying it to advanced cloud services through the ViPR Object Data Service and the Hadoop Distributed File System.
This is not the only significant tie-up in the SDS space. Late last month, a company called StackIQ released an extension of its StackIQ Cluster Manager system, which brings advanced automation capabilities to the Red Hat Storage platform. The idea is to offer the enterprise a unified approach to extend SDS across virtual, distributed architectures. StackIQ is aimed squarely at Big Data and HPC functions, which have traditionally posed management challenges in the commodity-driven environments of open platforms like Red Hat. With a top-tier automation stack in its corner, Red Hat Storage looks poised to take on the kinds of challenges that hyperscale enterprise infrastructure presents.
Like all things virtual, SDS is not an end in itself. It’s what you do with it that matters. And in that light, the technology has come along at just the right time to tackle a thorny issue that has arisen with the implementation of Flash storage and high-speed memory solutions: how to harness all that storage power in the way most beneficial to the enterprise. A new study by 451 Research pegs SDS as a key enabler for functions like automated tiering, where data can be shuttled to the appropriate storage medium quickly and without hands-on supervision. With Flash and SDS, many enterprise executives see a chance to ramp up their internal storage capabilities and reduce their reliance on third-party cloud resources.
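To make the idea concrete, the automated-tiering behavior described above can be sketched as a simple policy loop. This is an illustrative toy, not any vendor’s API: the class names, the access threshold and the two-tier flash/disk split are all assumptions made for the example.

```python
# Toy automated-tiering policy: promote hot data to flash, demote cold
# data to disk, with no hands-on supervision. All names and the
# threshold value are illustrative assumptions, not a real product API.

HOT_THRESHOLD = 3  # accesses per evaluation window (assumed value)

class TieringPolicy:
    def __init__(self):
        self.tiers = {}          # volume -> "flash" or "disk"
        self.access_counts = {}  # volume -> accesses this window

    def record_access(self, volume):
        # The data path reports each access; the policy only counts them.
        self.access_counts[volume] = self.access_counts.get(volume, 0) + 1

    def evaluate(self):
        # Run once per window: shuttle each volume to the right medium.
        for volume, count in self.access_counts.items():
            self.tiers[volume] = "flash" if count >= HOT_THRESHOLD else "disk"
        # Reset counters for the next window.
        self.access_counts = {v: 0 for v in self.access_counts}

policy = TieringPolicy()
for _ in range(4):
    policy.record_access("db-log")   # frequently touched volume
policy.record_access("archive")      # rarely touched volume
policy.evaluate()
print(policy.tiers)  # {'db-log': 'flash', 'archive': 'disk'}
```

A production tiering engine would of course weigh recency, I/O size and cost alongside raw access counts, but the shape is the same: telemetry in, placement decisions out, no operator in the loop.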
Clearly, many questions remain unanswered at this early stage of SDS development. Are we really defining storage, or just reconfiguring it a little? And is this really a step up from traditional management and storage pooling technologies, most of which are already available as software?
These are all valid questions, and I don’t pretend to have all the answers. But with the storage networking side of the data center built around the same control/data plane separation concept found on the network at large, the enterprise appears well on its way toward finally unshackling itself from the limits of physical infrastructure.
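That control/data plane split can be sketched in a few lines. Again, this is a conceptual illustration under assumed names, not a real SDS product interface: the controller makes placement decisions from policy alone, while the data plane does the mechanical I/O against whatever pool it is told to use.

```python
# Conceptual sketch of control/data plane separation for storage,
# mirroring the SDN idea. Class names, pool names and the 100 GB
# policy threshold are illustrative assumptions.

class ControlPlane:
    """Centralized policy: decides *where* data should live."""
    def place(self, size_gb):
        # A trivial stand-in policy: big volumes go to the SAN pool.
        return "san-pool" if size_gb >= 100 else "nas-pool"

class DataPlane:
    """Mechanical I/O: executes writes against the chosen pool."""
    def __init__(self):
        self.pools = {"san-pool": [], "nas-pool": []}

    def write(self, pool, blob):
        self.pools[pool].append(blob)

control, data = ControlPlane(), DataPlane()
pool = control.place(size_gb=250)   # policy decision, no hardware details
data.write(pool, "volume-image")    # plain I/O, no policy logic
print(pool)  # san-pool
```

The point of the separation is that the policy in `ControlPlane` can be rewritten, automated or centralized without touching the machinery in `DataPlane` — which is exactly the decoupling from physical infrastructure the article describes.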