The Many Roads to SDN in the Data Center

SDN is looking more and more necessary for the data center, but even with OpenFlow and OpenStack, the choices are numerous.

By Arthur Cole | Posted Jul 24, 2015

Networking within the enterprise used to be a pretty straightforward affair. There was rack-based networking to connect servers and other devices; storage area networking to, from, and within the array; and then local area networking to the desktop. Sure, this is an over-simplification, but that was the gist of it.

In the emerging data center, both within the enterprise and on the cloud, integrated modular infrastructure is likely to rule the roost, which means that network architecture will have to be both general-purpose in nature and capable of providing highly specialized services for key applications and workloads. At the same time, the enterprise will be under the gun to extend this level of functionality over the wide area in order to cobble together multiple cloud environments into a cohesive data ecosystem.

This is a tall order, and the only realistic solution is software defined networking (SDN). But even with a basic OpenFlow or OpenStack architecture in place, there is still a range of decisions to make regarding topologies, layering, pathways, connection management, and the like before the enterprise can claim it has a network optimized for its workloads. That fact is drawing in a number of new vendor and industry solutions beyond the offerings of network stalwarts like Cisco and Brocade.
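
To make those decision points concrete, the sketch below shows roughly what sits at the bottom of an OpenFlow deployment: a controller application pushing a flow rule down to a switch. It uses the open source Ryu controller framework purely as an illustration (the article does not name a particular controller); the app installs a table-miss entry that sends any unmatched traffic up to the controller.

```python
# Minimal sketch of an OpenFlow 1.3 controller app using the Ryu framework.
# It installs a "table-miss" flow entry on each switch that connects --
# the kind of low-level plumbing decision that sits beneath any choice
# of SDN topology or layering.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TableMissApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_features(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match everything; forward unmatched packets to the controller.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        instructions = [parser.OFPInstructionActions(
            ofproto.OFPIT_APPLY_ACTIONS, actions)]

        # Priority 0 makes this the fallback (table-miss) entry.
        flow_mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                     match=match, instructions=instructions)
        datapath.send_msg(flow_mod)
```

In a lab, a sketch like this would typically be launched with ryu-manager against an Open vSwitch instance speaking OpenFlow 1.3; production controllers layer topology, path and policy logic on top of the same primitives.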

One solution is Avaya’s Fx architecture, which was recently selected as the standard networking platform for Steel ORCA’s cloud and colocation data center architecture. The system is built around a fabric topology that allows Steel ORCA to add customers and scale workloads up and down without hampering performance elsewhere in the data center. Avaya designed the system to provide full programmability from the core to the edge, enabling data center operators to create customized network environments that are highly tuned to the demands of the workload.

At the same time, multiple networking standards are starting to compete outside of their traditional environments. The InfiniBand Trade Association (IBTA) is actively pitching RDMA over Converged Ethernet (RoCE) as the next-generation solution for high-speed connectivity across the data center. RDMA, originally deployed over InfiniBand as a high-speed server interconnect, is finding new life in the dense, modular environments known as “converged infrastructure.” Under its new RoCE Initiative, the IBTA now provides educational resources and guidance on using the standard for applications ranging from financial services and ecommerce to warehousing and advanced cloud services. Meanwhile, RDMA support is finding its way into open source platforms such as Lustre and Hadoop, as well as proprietary platforms from Microsoft, Oracle and VMware.
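
For readers who want to see whether their own hosts could take advantage of RoCE, the short sketch below shells out to ibv_devinfo, a standard utility shipped with the rdma-core package, and counts RDMA ports that report an Ethernet link layer (the signature of RoCE, as opposed to native InfiniBand). It is a rough illustration, not IBTA tooling.

```python
# Rough check for RoCE-capable ports by parsing ibv_devinfo output.
# RoCE ports report "link_layer: Ethernet"; native InfiniBand ports
# report "link_layer: InfiniBand".
import subprocess


def roce_capable_ports():
    """Return the number of RDMA ports whose link layer is Ethernet (RoCE)."""
    try:
        output = subprocess.check_output(["ibv_devinfo"], text=True)
    except (OSError, subprocess.CalledProcessError):
        return 0  # rdma-core not installed, or no RDMA devices present
    return sum(1 for line in output.splitlines()
               if line.strip().startswith("link_layer:")
               and "Ethernet" in line)


if __name__ == "__main__":
    print(f"RoCE-capable ports detected: {roce_capable_ports()}")
```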

None of this is to suggest that traditional enterprise network designers are letting the opportunity to craft advanced network topologies pass them by. Dell recently rolled out a series of new switches aimed at bringing multi-rate capabilities to SDN deployments. The Z9100 platform can function at multiple rates between 10 and 100 Gbps, providing a steady, non-disruptive upgrade path as the enterprise encounters increasingly burdensome workloads. The device is being released in conjunction with a 10/40 GbE ToR switch and a 1/10 GbE switch for open source deployments in the cloud.

But as data environments take on increasingly heavy and complex traffic, higher bandwidth and even versatile fabric topologies may not be enough. Network intelligence and network awareness will play a major role as the demands of applications and services start to exceed what human operators can track. This is where companies like VMTurbo hope to make their mark, offering near-instant analysis of VM, host, storage and network conditions and then applying the immediate correction or reconfiguration required for optimal throughput. That manipulation can extend from adjusting port buffer utilization parameters to relocating virtual machines, and when coupled with NetFlow collection systems, advanced storage arrays and other elements, it can evolve into an end-to-end data center control platform.
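
The control loop behind that kind of platform can be sketched in a few lines. The example below is purely hypothetical (none of the names correspond to a VMTurbo or NetFlow API): it polls per-port utilization, and when a port runs hot it picks a virtual machine to relocate. A real product would plug genuine telemetry collectors and migration calls into the same analyze-then-act shape.

```python
# Toy analyze-then-act control loop. All functions and thresholds are
# hypothetical stand-ins, not any vendor's API or defaults.
import random
import time

PORT_UTIL_THRESHOLD = 0.85   # assumed trigger point for "congested"
POLL_INTERVAL_SECONDS = 1    # short interval for the sketch; real collectors poll less often


def collect_port_utilization():
    """Stand-in for a flow/telemetry collector; returns port -> utilization (0..1)."""
    return {f"eth{i}": random.random() for i in range(4)}


def choose_vm_to_move(port):
    """Stand-in for placement logic that would weigh host, storage and VM data."""
    return f"vm-behind-{port}"


def relocate(vm, port):
    """Stand-in for a migration call into the virtualization layer."""
    print(f"relocating {vm} away from congested port {port}")


def control_loop(cycles=3):
    for _ in range(cycles):
        for port, util in collect_port_utilization().items():
            if util > PORT_UTIL_THRESHOLD:
                relocate(choose_vm_to_move(port), port)
        time.sleep(POLL_INTERVAL_SECONDS)


if __name__ == "__main__":
    control_loop()
```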

The data center network has always relied on an intricate set of precisely aligned resources and carefully orchestrated moving parts. A software defined network will multiply that complexity by orders of magnitude. But far from simply laying a virtual architecture atop a plain old network infrastructure, a truly abstracted layer will have to be designed from the ground up to accommodate current and expected workloads, and there will be multiple decision points along the way to determine exactly how the entire stack should be built.
