Open Networking at Every Layer

The growth of open source networking for enterprise data environments will solve many problems but create new challenges.

By Arthur Cole | Posted Jul 2, 2015

Interest in open source networking is running high. With the cloud providing all of the compute and storage resources the enterprise can handle, the need to connect a variety of data midpoints and endpoints over a unified communications architecture has become urgent. An open network architecture certainly helps in this regard, but not all open solutions are created equal, and few are fully interoperable with one another.

Open systems are targeting everything from the carrier network to the server interconnect, and from the physical layer to the abstract virtual layers. So right off the bat, enterprise architects need to bone up on which open solutions are available for each piece of the network and how they function with other open systems elsewhere.

Mellanox, for example, just opened up the source code for its Soft-RoCE (software RDMA over Converged Ethernet) solution. Developed in conjunction with IBM and System Fabric Works, the solution allows servers and storage devices to communicate using RDMA – the transport at the heart of InfiniBand – over standard Ethernet adapters. This is crucial for pushing RDMA across data center fabrics. By opening up the source code, Mellanox enables the enterprise to create those fabrics using standard NICs and acceleration solutions. For companies dealing with large data sets, or with reams of small-packet traffic in Big Data and IoT settings, a high-speed open network spanning perhaps hundreds of thousands of nodes is no longer the stuff of science fiction, so it helps to be able to build one quickly and easily while pushing higher-order functions into software.
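For readers curious what this looks like in practice, here is a minimal sketch of bringing up a software RoCE device on an ordinary NIC, assuming a Linux host with the in-kernel rxe (Soft-RoCE) driver and the iproute2/rdma-core utilities installed; the interface and device names are placeholders:

```shell
# Load the Soft-RoCE kernel module (assumes a kernel built with rdma_rxe)
modprobe rdma_rxe

# Bind a software RoCE device to a standard Ethernet NIC
# ("eth0" and "rxe0" are placeholder names for this example)
rdma link add rxe0 type rxe netdev eth0

# Confirm the new RDMA device is visible to the verbs stack
ibv_devices
```

Once the rxe device exists, standard verbs-based applications and test tools such as `rping` or `ib_send_bw` can use it as they would a hardware RoCE adapter – which is the whole point: RDMA semantics without specialized NICs.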

Eventually, though, data will have to travel outside the data center walls. Ideally there should be as much commonality as possible between the local and wide area fabrics, so the hand-off goes smoothly. But while there are wide-area InfiniBand plans afoot, the reality is that long-haul interconnectivity will rely on open systems being developed at the carrier level. The leading contender is the Open Platform for NFV (OPNFV), currently taking shape under the Linux Foundation. It recently released a new software version called Arno and added Altera FPGA acceleration to enable a high degree of network customization while still maintaining broad interoperability across compliant hardware and middleware.

Meanwhile, other data center-facing platforms are looking to forge links over the wide area as well. Facebook’s Open Compute Project (OCP), for instance, recently gained key network hardware specs and software tools from AT&T. The platform was developed to support the carrier’s GigaPower service, which relies on sophisticated software atop generic hardware – much the way OCP does – rather than the proprietary solutions of brand-name network vendors. AT&T is a contributing member of OPNFV, but it is tough to determine the level of compatibility between its OCP and OPNFV endeavors, or whether applications developed for one platform will see full functionality on the other.

Even as all this happens, enterprises will be tasked with putting new container platforms in place and then syncing them up to an increasingly open network architecture. Docker’s recent acquisition of SocketPlane, after all, was aimed specifically at establishing overlay networks that connect containers to Open vSwitch (OVS) over Layer-2 and Layer-3 connectivity. At the same time, Docker is fostering its own open source libnetwork project so that third-party developers can offer plug-in networking services. Some are already wondering whether network architects will still consider VM-facing open solutions such as OpenStack necessary in a container-centric environment.
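As a concrete illustration of the container side of this, here is a hedged sketch of what attaching containers to an overlay network looks like with Docker's own CLI; the network and container names are invented for the example, and it assumes a Docker engine recent enough to ship the libnetwork-backed overlay driver with multi-host networking already configured:

```shell
# Create an overlay network so containers on different hosts can reach
# each other over Layer-2/3 tunnels ("app-net" is a placeholder name)
docker network create --driver overlay app-net

# Launch a container attached to the overlay network
docker run -d --network app-net --name web nginx

# Inspect the network to see its subnet and connected containers
docker network inspect app-net
```

The plug-in model mentioned above means the `--driver` flag is the seam where third-party networking services slot in alongside Docker's built-in drivers.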

Of course, it is unrealistic to expect any networking solution to deliver full functionality across an entire distributed data architecture. Quite frankly, the differing requirements between the component-, data center-, and carrier levels preclude a single, integrated networking stack to begin with.

But each time traffic is handed off from one solution to another, it’s like a car going from a driveway to a local road to a state route to an interstate highway. You have to slow down or even stop in order to transition from one layer to another, but as long as the intersections and on-ramps are designed properly, you can make up for the delay by traveling faster on the better road.

Enterprises that adopt open source across their entire network infrastructures should recognize that there will still be a lot of moving parts within the full stack. More than likely, at some point they will have to re-architect key network elements, either because a deployed open solution fails to deliver adequate interoperability or because it simply fails to gain traction across the broader data ecosystem.
