Enterprise infrastructure has become integrated to the point where it is difficult to deploy or reconfigure one set of resources without impacting another. Changes in the server farm beget alterations to network infrastructure, which lead to new levels and types of storage.
This is particularly true of virtual environments. Nearly all of the major initiatives hitting the data center these days, from Flash storage to the cloud to software-defined networking, build on the basic advancement that server virtualization brought to the table.
So what is the enterprise to do in light of the fundamental changes taking place on the virtual compute layer?
A company called Docker has been championing the idea of container-based virtualization for some time now and has drawn a substantial following among cloud providers seeking ways to distribute multiple applications across scale-out virtual infrastructures. Containers turn the idea of virtualization on its head: instead of deploying multiple virtual machines, each with its own operating system, they allow a single OS to host multiple apps. So not only do you avoid the tricky licensing schemes that software vendors have devised for virtual environments, but you can also deploy apps on virtually any VM or cloud, opening up the possibility of broad federation across open and proprietary infrastructure.
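The efficiency side of that argument can be sketched with a toy capacity calculation. All figures below are hypothetical round numbers chosen purely to illustrate why sharing one OS matters; real overheads vary widely by hypervisor, guest OS, and workload.

```python
# Toy density comparison: each VM carries a full guest operating system,
# while containers share the host's kernel. Hypothetical numbers only.
HOST_RAM_GB = 64
APP_RAM_GB = 0.5        # memory one application instance needs
GUEST_OS_RAM_GB = 1.5   # per-VM guest OS overhead (absent for containers)

vms = int(HOST_RAM_GB // (APP_RAM_GB + GUEST_OS_RAM_GB))
containers = int(HOST_RAM_GB // APP_RAM_GB)

print(f"VMs per host:        {vms}")         # 32
print(f"Containers per host: {containers}")  # 128
```

Under these illustrative assumptions, eliminating the per-VM guest OS quadruples the number of workloads a host can carry, which is exactly the kind of density that makes containers attractive to scale-out cloud providers.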
But exactly how are these containers supposed to interact with network infrastructure? And more importantly, will the proliferation of containers overwhelm available resources elsewhere in the stack, or, at the least, drive resource consumption to unsustainable levels?
These are questions that startups like SocketPlane hope to answer. The company emerged from stealth recently with a plan to leverage the open source Open vSwitch to enable native container networking on virtual fabrics. In this way, the company hopes to provide policy-driven networking services on the SDN data plane, ultimately producing container-to-container overlays that are equally adept at connecting containers to physical switches and legacy networks. At the same time, the company hopes to use readily available software solutions like Apache Mesos and Kubernetes for management and orchestration, with the idea that containers and network resources should scale in tandem.
Elsewhere, CoreOS is working on an overlay solution called flannel (formerly rudder) that seeks to address some of the port-mapping and IP address assignment issues that arise when attempting to deploy Kubernetes container management on non-Google infrastructure. The idea is to build an overlay mesh network that delivers a subnet to each host server, using the etcd key-value store to handle the mapping between the subnets and the hosts' actual IP addresses. The system is in the very early stages of development and currently adds a small amount of latency (about 70 microseconds), although TCP bandwidth is virtually unchanged.
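The subnet-per-host idea can be sketched in a few lines. The dictionary below is a toy stand-in for the etcd key-value store, and every address in it is hypothetical; flannel's actual key layout, lease handling, and packet encapsulation are considerably more involved.

```python
import ipaddress

# Toy stand-in for etcd: each host server leases one subnet out of a
# larger overlay range. (Hypothetical addresses for illustration.)
subnet_to_host = {
    ipaddress.ip_network("10.1.1.0/24"): "192.168.0.11",  # host A
    ipaddress.ip_network("10.1.2.0/24"): "192.168.0.12",  # host B
}

def host_for(container_ip: str) -> str:
    """Find which physical host owns the subnet containing this container IP."""
    addr = ipaddress.ip_address(container_ip)
    for subnet, host_ip in subnet_to_host.items():
        if addr in subnet:
            return host_ip
    raise LookupError(f"no subnet registered for {container_ip}")

# Traffic for container 10.1.2.7 would be encapsulated and forwarded
# to host B's real address.
print(host_for("10.1.2.7"))  # 192.168.0.12
```

Because each host owns a whole subnet, containers get routable overlay addresses without per-container port mapping, which is precisely the problem flannel sets out to solve.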
Meanwhile, CohesiveFT is working on solutions that allow Docker containers to be connected over the wide area. The company has tailored its VNS3 Layer 3 system to allow for Docker networking within each VNS3 Manager instance. This lets users create and configure Docker networks within an overlay mesh network and then deliver open source or proprietary network functions across cloud infrastructure within minutes.
It’s never a good idea to start restructuring the foundation when the house is half-built, but IT is advancing so fast that it is all but inevitable that underlying technologies will change before all of their permutations play out higher up the stack.
Containers represent a pretty big shift in the way we approach virtualization, but fortunately that shift can be accommodated with relative ease by emerging software-based architectures.