Networking, Private Cloud, and the Heterogeneous Environment
Some networking vendors aim to unify multi-vendor, legacy data center silos into the interoperable environments private cloud requires.
Enterprises scrambling to deploy private cloud services on legacy infrastructure confront a rather thorny issue: how do you turn disparate, heterogeneous, silo-based architectures into a cohesive, interoperable environment?
The easy way, of course, is to simply deploy new infrastructure and load it up with cloud-facing apps and services. Converged infrastructure in particular is well suited to this task, according to Arquiva's Paul Freemantle, providing not only scale but the ease of operation that cloud-based infrastructure requires. And in fact, much of the Fortune 500 has looked at the legacy infrastructure that has evolved over the years and concluded that it's simply cheaper and easier to start fresh, particularly for web-facing, hyperscale environments.
But does that mean the entire investment in the traditional data center has been in vain? And more importantly, does it change the equation used to justify continued spending in "old school" hardware and software?
Not surprisingly, there is still plenty of life left in legacy systems, even in the age of the cloud. What's needed is an effective means to bridge the patchwork of multi-format, multi-vendor silos so they can communicate not just on a rudimentary level but as an integrated application environment. And at the moment, much of that work is taking place on the network.
Emulex recently released the first of its new OpenWorks Connector line, which allows bare-metal configuration of management and provisioning services for the latest OneConnect cloud adapters. The intent is to provide a means to scale heterogeneous server environments up to cloud levels while shortening application deployment times to drive down costs. The device exposes a RESTful API for third-party management, and Emulex has already lined up Egenera's PAN Manager 8.0 as a key platform for rapidly provisioning machine images onto bare-metal rack servers. The controller also provides stateless, agentless server management, is OS-agnostic and offers a high degree of traffic isolation for improved security.
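To make the REST-driven provisioning model concrete, here is a minimal sketch of the kind of request body such a controller might accept. The endpoint shape, field names, and values are illustrative assumptions only, not the actual Emulex OpenWorks API:

```python
import json

# Hypothetical sketch: composing a bare-metal provisioning request for a
# REST-managed adapter. Field names and values are assumptions for
# illustration, not the real OpenWorks Connector API.
def build_provision_request(server_id, image_url, vlan_id):
    """Compose a JSON body that a management controller might accept
    to push a machine image onto a bare-metal rack server."""
    return json.dumps({
        "server": server_id,
        "image": image_url,          # machine image to deploy
        "network": {
            "vlan": vlan_id,         # per-tenant traffic isolation
            "stateless": True,       # no persistent agent on the host
        },
    })

body = build_provision_request(
    "rack01-node07", "https://images.example/ubuntu.img", 110)
```

The point of such an interface is that a third-party manager like PAN Manager only needs to speak HTTP and JSON, not a vendor-specific wire protocol, which is what makes multi-vendor orchestration tractable.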
Meanwhile, Brocade is looking to expand the ability of its Fibre Channel and Ethernet storage network fabric technology to accommodate multiple SSD devices and configurations. The company recently expanded its Solid State Ready program with new testing initiatives to help drive seamless interoperability across all-flash and hybrid arrays. The program now covers not only solutions testing, but things like market development, technical sales/support, and education to help ensure that mixed-vendor arrays are capable of leveraging the performance that fabric technology provides. The program has already drawn providers ranging from Hitachi and HP to newer entrants like Pure Storage and SolidFire.
Of course, to hear companies like VMware tell it, issues like multi-vendor compatibility are much easier to deal with on the virtual layer than on the less flexible physical plane. By layering open platforms like OpenStack on top of virtual infrastructure, enterprises can take advantage of virtual networking and software-defined resource configuration to produce the operational efficiencies that cloud computing requires. True, OpenStack can run on bare metal as well, but that still leaves the enterprise with a lot of messy integration and data migration issues to contend with. Still, enterprise managers will probably want to think long and hard about whether they want to solve their multi-vendor hardware issues by deploying a single-vendor virtual infrastructure.
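The abstraction OpenStack offers can be seen in how little its Compute (Nova) API cares about the underlying hardware. A server-create request (POST /v2.1/servers) references an image, a flavor, and a network by ID, and the scheduler decides where it lands. The sketch below builds that request body; the UUIDs are placeholders, and a real call would also need an auth token from Keystone:

```python
import json

# Minimal sketch of the request body the OpenStack Compute (Nova) API
# expects for server creation (POST /v2.1/servers). The IDs below are
# placeholders; authentication via Keystone is omitted.
def nova_create_server_body(name, image_ref, flavor_ref, network_uuid):
    return json.dumps({
        "server": {
            "name": name,
            "imageRef": image_ref,      # Glance image UUID
            "flavorRef": flavor_ref,    # flavor (instance size) ID
            "networks": [{"uuid": network_uuid}],
        }
    })

body = nova_create_server_body(
    "web01", "70a599e0-31e7-49b7-b260-868f441e862b", "2",
    "ff608d40-75e9-48cb-b745-77bb55b5eaf2")
```

Because the caller never names a hypervisor or a physical host, the same request works whether the pool underneath is one vendor's gear or several, which is precisely the appeal being claimed here.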
The end game in all of this is the fully integrated cloud environment, in which workloads are automatically deployed across just the right type and amount of resources. But in that effort, heterogeneous functionality is only the first step of a very long journey. The tricky part will be the development of both the automation stack and the policy matrix that can handle data operations at that level with any degree of reliability.
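The policy matrix described above can be sketched in miniature: each workload declares its requirements, each resource pool advertises its capabilities and cost, and the automation layer picks the cheapest fit. This is an illustrative toy, not any vendor's product, and the pool attributes are invented for the example:

```python
# Illustrative policy-matrix sketch: heterogeneous pools advertise
# capabilities and relative cost; the placement routine returns the
# lowest-cost pool that satisfies a workload's stated policy.
POOLS = [
    {"name": "legacy-san",  "cpu": 16, "ssd": False, "cost": 1.0},
    {"name": "flash-array", "cpu": 32, "ssd": True,  "cost": 3.0},
    {"name": "converged",   "cpu": 64, "ssd": True,  "cost": 2.0},
]

def place(workload, pools=POOLS):
    """Return the cheapest pool meeting the workload's policy, or None."""
    fits = [p for p in pools
            if p["cpu"] >= workload["cpu"]
            and (not workload.get("needs_ssd") or p["ssd"])]
    return min(fits, key=lambda p: p["cost"], default=None)

# A flash-hungry workload skips the legacy SAN; a plain batch job uses it.
assert place({"cpu": 8, "needs_ssd": True})["name"] == "converged"
assert place({"cpu": 8})["name"] == "legacy-san"
```

The hard part, as the paragraph notes, is not this matching step but keeping the capability data accurate across vendors and making the policy decisions reliable at scale.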
At some point, of course, silo-based architectures will fade away and data operations will spread out across available infrastructure uniformly. But in most cases, that process will center on the conversion of legacy systems, not rip and replace.
Arthur Cole covers networking and the data center for IT Business Edge. He has served as editor of numerous publications covering everything from audio/video production and distribution, multimedia and the Internet to video gaming.