The enterprise is rapidly pursuing two key changes to data infrastructure in the drive to remain competitive in an increasingly digital economy. On the one hand, it is converting legacy network architectures into software-defined ones so as to build fully virtual data environments. On the other, it is undoing years of distributed physical infrastructure in favor of converged, modular footprints.
Both approaches require a great deal of planning, coordination and foresight if they are to fulfill their promise of a more efficient, more cloud-like data environment. But they work at cross-purposes in one crucial respect: the more converged the infrastructure becomes, the less complicated the networking gets, at least at the physical layer. Data in converged architectures moves across the interconnect fabric rather than a complex network environment populated with switches, routers and controllers. So the question most enterprise executives should be asking is: how much shelf life will today’s SDN, NFV and virtual overlay technologies have if everything will be modular in a few short years?
One solution, of course, is to tweak today’s virtual networking solutions for converged and hyperconverged platforms. And in just the past few months, we’ve seen a number of initiatives taking on this challenge from different angles.
Juniper Networks, for instance, recently entered a distribution partnership with Lenovo that pairs networking software such as the Junos operating system and the Contrail SDN platform with modular solutions that conform to the Open Compute Project’s converged reference architecture. No actual product integration will take place, at least from the vendors, but the deal gives channel partners the ability to craft structured, modular infrastructure that incorporates the same virtual networking platforms that are remaking legacy data center networks. In this way, converged architectures should have an easier time reaching back to those legacy environments for data and application support, thus lengthening their lifecycles.
Meanwhile, Cisco is throwing its own hat into the hyperconvergence ring with the HyperFlex Systems platform, an amalgamation of various Nexus, ACI and UCS products tied to the software-defined storage capabilities of its newest partner, SpringPath. Designed to offer easy set-up and rapid scalability, HyperFlex provides built-in SDN support, along with advanced storage functions like deduplication, cloning and object-based storage optimized for emerging container-based virtual environments. Cisco says it can shrink data footprints by 80 percent and reduce TCO by a third while still providing better performance than current distributed architectures.
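To make the footprint-reduction claim concrete, here is a minimal sketch of block-level deduplication, the general technique behind numbers like Cisco’s 80 percent figure: identical blocks are stored once and referenced by content hash. The block size, hash choice and in-memory store are illustrative assumptions, not HyperFlex internals.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size; real systems tune this

class DedupStore:
    """Toy content-addressed store: each unique block is kept exactly once."""

    def __init__(self):
        self.blocks = {}  # content hash -> block bytes (stored once)
        self.files = {}   # name -> ordered list of content hashes

    def write(self, name, data):
        hashes = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # duplicate blocks are free
            hashes.append(digest)
        self.files[name] = hashes

    def read(self, name):
        return b"".join(self.blocks[h] for h in self.files[name])

    def physical_size(self):
        return sum(len(b) for b in self.blocks.values())

    def logical_size(self):
        return sum(len(self.blocks[h]) for hs in self.files.values() for h in hs)
```

Cloning falls out of the same design: a “clone” is just a new list of hashes pointing at existing blocks, which is why cloned VMs in such systems cost almost no additional capacity.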
Still, with the interconnect providing the communication between modules in converged infrastructure, how is the enterprise expected to drive SDN down to that level? Mellanox recently took a step in that direction with its Open Composable Networks (OCN) system. The solution supports 10, 25, 50 and 100 Gbps networking on the new ConnectX-4 Lx adapter card, paired with a new generation of Spectrum Ethernet switches. These switches support numerous network operating systems, including Cumulus Linux, OpenSwitch and Mellanox’s own MLNX-OS, which in turn support the OCP Switch Abstraction Interface (SAI) and Linux Switchdev. This effectively allows the enterprise to deploy web-scale converged infrastructure with built-in support for leading SDN solutions through the Mellanox NEO management stack.
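The value of an abstraction layer like SAI is that the network OS programs switching behavior through one uniform interface while each silicon vendor supplies its own backend. The sketch below illustrates that decoupling in Python; the class and method names are hypothetical illustrations of the pattern, not the actual SAI C API.

```python
from abc import ABC, abstractmethod

class SwitchBackend(ABC):
    """Uniform switch-programming interface, in the spirit of OCP SAI."""

    @abstractmethod
    def create_vlan(self, vlan_id: int) -> None: ...

    @abstractmethod
    def add_route(self, prefix: str, next_hop: str) -> None: ...

class SpectrumBackend(SwitchBackend):
    """Hypothetical vendor backend; a real one would call the ASIC SDK."""

    def __init__(self):
        self.vlans = set()
        self.routes = {}

    def create_vlan(self, vlan_id):
        self.vlans.add(vlan_id)            # stand-in for programming the ASIC

    def add_route(self, prefix, next_hop):
        self.routes[prefix] = next_hop     # stand-in for a hardware route entry

class NetworkOS:
    """Any network OS (Cumulus Linux, OpenSwitch, ...) coded to the interface."""

    def __init__(self, backend: SwitchBackend):
        self.backend = backend

    def provision(self):
        self.backend.create_vlan(100)
        self.backend.add_route("10.0.0.0/24", "192.168.1.1")
```

Because `NetworkOS` depends only on `SwitchBackend`, swapping switch silicon means swapping the backend class, which is the portability that lets one switch family host several network operating systems.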
And hot off the presses, Plexxi announced this morning that it has created the first programmable network fabric that integrates VMware virtual networking and the Hortonworks Apache Hadoop framework on the Nutanix hyperconverged platform. The Plexxi Connect Open Integration Platform is designed to support rapid cloud building through dynamic network fabric architectures that can accommodate the multipoint nature of today’s workloads more easily than traditional static client-server approaches. With a converged networking solution in the mix, both VMware and Nutanix say they can quickly build the converged cloud architectures needed to support Big Data and other emerging data initiatives from scratch, rather than taking on the complex task of rebuilding legacy environments.
Whether infrastructure is converged, distributed, in the data center or on the cloud, data performance inevitably comes down to networking. The solution that can deliver data quickest to the largest number of endpoints is the winner.
A software-defined networking layer in a converged, modular setting is therefore not just desirable but crucial, because the speed and complexity of modern digital processes are unlikely to diminish any time soon.