Despite naysayers who call it a waste of time and money, the enterprise is heading full-bore into the private cloud. But while the desire to leverage legacy infrastructure is a noble goal, building a cloud environment turns out to be a lot harder than using one. One of the key stumbling blocks is networking.
This is the primary reason why OpenFlow, OpenStack and other open-source SDN solutions are proving so popular. But as those who have already waded into open-source networking waters are finding out, it takes more than just open connectivity to produce a fully functional cloud environment.
Companies like Pluribus Networks know this, too. The company’s open-source Netvisor operating system enables bare-metal hardware and hypervisor programmability, but even this is only a small piece of the overall cloud networking stack. This is why Pluribus recently teamed up with application services provider Appcito to build an integrated services platform capable of load balancing, traffic monitoring, security and all the other things that matter in a dynamic, distributed data ecosystem. By layering Appcito’s CAFE platform atop Pluribus’ fabric, users gain instant scalability across public and private clouds. Even high-level services can be deployed quickly and at low cost.
It’s been an “open” secret in cloud circles for a while that the biggest weakness in OpenStack is networking. The OpenStack community has launched the Neutron project to shore up its networking layer, but some backers are already exploring alternative solutions. Fujitsu recently turned to Midokura’s Enterprise MidoNet (MEM) for network virtualization in its emerging OpenStack portfolio rather than the default Open vSwitch. Fujitsu says MEM offers better scalability, plus other upgrades like virtual port flow tracing, improved Border Gateway Protocol (BGP) configuration views, and support for both Puppet and Docker.
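For context on where such a swap happens: in a stock OpenStack deployment, Neutron’s ML2 core plugin names Open vSwitch as its mechanism driver in `ml2_conf.ini`, and replacing the default means changing these values. The sketch below reflects a typical Open vSwitch setup; exact file paths, driver names, and the vendor plugin entries vary by distribution and product.

```ini
# /etc/neutron/neutron.conf -- Neutron loads the ML2 core plugin by default
[DEFAULT]
core_plugin = ml2

# /etc/neutron/plugins/ml2/ml2_conf.ini -- where the mechanism driver is chosen
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
# Open vSwitch is the stock choice; a vendor product such as MidoNet
# ships its own plugin and driver that replace this line
mechanism_drivers = openvswitch
```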
Containers, of course, add an entirely new wrinkle to the cloud networking fabric. Even Docker is struggling to smooth that wrinkle out. This has led to a cottage industry of sorts as third-party developers seek to forge low-cost, easily deployable ways of bringing advanced networking functionality to the container level. Joyent’s new Triton platform attempts to supplement container management platforms like Mesos and Kubernetes by addressing the underlying infrastructure needs of a container-based data environment. By focusing on key aspects of deployment like security, networking and debugging, Triton aims to simplify container-based architectures in both on-premises and cloud-based settings.
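To make the gap concrete, it helps to see what Docker ships out of the box: the stock `docker network` CLI offers basic bridge and overlay drivers and name-based reachability, and anything beyond that, such as policy, traffic monitoring or cross-host services, has to come from elsewhere. The commands below illustrate the stock tooling only, not any specific third-party product (the container and network names are arbitrary):

```shell
# Create an isolated bridge network (the stock local driver)
docker network create --driver bridge app-net

# Attach containers to it; they can reach each other by name
docker run -d --name web --network app-net nginx
docker run --rm --network app-net busybox ping -c 1 web

# Inspect what Docker knows about the network: subnet, gateway,
# attached containers -- advanced policy and monitoring are absent
docker network inspect app-net
```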
Any private cloud architecture will naturally have to connect to distant public resources, so an integrated fabric will have to extend over the wide area as well. Or will it? According to CloudWedge’s Hadley Jones, technologies like the Fast and Secure Protocol (FASP) from Aspera can minimize long-haul network resource consumption when standard TCP Internet connectivity maxes out. At the same time, Cisco’s Fog computing concept seeks to maximize local processing and storage so that traffic to the public cloud is kept to the bare minimum. In both cases, the aim is to configure the cloud to accommodate the realities of wide-area network performance rather than fight them with increasingly complex and expensive network architectures.
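The TCP ceiling mentioned above has a simple arithmetic basis: a single TCP flow can never move more than one receive window per round trip, so on high-latency long-haul links throughput is capped no matter how fat the pipe is. A minimal sketch of that standard bandwidth-delay-product argument (this says nothing about FASP’s internals, only why stock TCP maxes out):

```python
def max_tcp_throughput_bps(window_bytes: float, rtt_seconds: float) -> float:
    """Upper bound on a single TCP flow: one window delivered per round trip."""
    return window_bytes * 8 / rtt_seconds

# A classic 64 KiB window on a 100 ms transcontinental path:
bound = max_tcp_throughput_bps(64 * 1024, 0.100)
print(f"{bound / 1e6:.2f} Mbit/s")  # ~5.24 Mbit/s, far below a 1 Gbit/s link
```

This is why long-haul transfers either need window scaling and parallel flows or, as with FASP, a protocol that is not window-limited in the first place.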
A cloud is only as good as its ability to aggregate the disparate resources at its disposal. This requires robust networking. But because networks and network protocols touch virtually every aspect of the data environment – physical, virtual and cloud – expect this to be the most difficult piece to iron out.
There are options galore, but until we get a better handle on how workloads and traffic patterns behave in the cloud, choosing the right networking platform will require as much guesswork as reasoned science.