The enterprise is starting to come to grips with one of the main problems with DevOps: most of the technologies released so far are bringing more agility to the development side of the model than the operational side.
This is perhaps understandable because it is much easier to create a piece of software than to integrate it into a disparate, distributed data environment. But while the advent of software-defined infrastructure is supposed to make this process easier, it turns out that the crucial piece, as usual, is networking.
At the recent Open Source Summit in Los Angeles, Cisco’s Ed Warnicke gave a blunt talk about the need for DevOps to focus on networking if the enterprise hopes to realize the dream of a unified, cloud-native architecture. At the moment, he says, most virtual networks utilize virtual machines to create overlays that mimic the functionality of bare-metal infrastructure but provide much greater scale. But this is inadequate for the coming age of microservices and serverless computing architectures, which will push demand for connectivity upward by multiple orders of magnitude. For DevOps to thrive in this world, developers and operational experts alike need to find common ground when it comes to programming applications for an increasingly abstract and dynamic ecosystem.
Platform developers, of course, are already on the case, but it will probably require multiple tweaks up and down the networking stack to make it truly DevOps-ready. Cumulus Networks, for example, recently upgraded its NetQ validation system with a suite of tools called Host Pack that acts as an extension of NetQ into the cloud. This makes it more amenable to containers and other DevOps-friendly constructs by providing end-to-end visibility into the enterprise’s entire hybrid infrastructure. In addition to working with existing container management platforms like Mesosphere’s DC/OS and Kubernetes, the system supports Layer 3 routing directly on the host, running BGP through the FRRouting suite to provide a more robust multi-pathing environment and simplified IP management.
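To make the host-side routing idea concrete, here is a minimal sketch of what running BGP on a server with FRRouting might look like. This is an illustrative fragment, not taken from Cumulus documentation; the interface name, ASN, router ID and advertised subnet are all assumptions. It uses FRRouting’s BGP “unnumbered” style, in which the host peers with its top-of-rack switch over the interface itself rather than a configured neighbor address:

```
! Illustrative frr.conf fragment (bgpd): a host advertising a local
! container subnet to its top-of-rack switch via BGP unnumbered.
! ASN, router-id, interface and prefix are hypothetical examples.
router bgp 65101
 bgp router-id 10.0.0.11
 ! Peer over the interface directly; the remote ASN is discovered
 ! dynamically ("external" = any eBGP peer).
 neighbor eth0 interface remote-as external
 address-family ipv4 unicast
  ! Advertise the subnet where local containers live.
  network 172.16.11.0/24
 exit-address-family
```

With each host announcing its own prefixes this way, equal-cost multi-path routing in the fabric handles redundancy and load distribution, which is the “more robust multi-pathing” the article refers to.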
Meanwhile, Veriflow has enhanced its Continuous Network Verification system with new tools that support hybrid cloud networking. The system provides both predictive and intent-based visibility functions across physical, virtual and cloud environments to help admins gauge the performance of new deployments and changes to network architectures. This enables both security and network teams to build baseline metrics for end-to-end network environments, which should in turn support the development of DevOps-style workflows that enhance agility and continuous management.
And since automation plays a big part in DevOps, it’s important to note the recent networking additions that Red Hat has made to Ansible. The company has released a networking add-on to the Ansible engine that provides full support for leading network operating systems like Arista’s EOS; Cisco’s IOS, IOS-XR and NX-OS; and Juniper’s Junos. It also supports Open vSwitch and the Linux-based VyOS. In this way, the enterprise can bring the entire DevOps process, from initial development to IT operations to network administration, under a single automation platform.
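As a sketch of what this looks like in practice, the playbook below pushes the same change to two different network operating systems using Ansible’s platform-specific config modules (`eos_config` for Arista, `ios_config` for Cisco). The inventory group names and the NTP server address are hypothetical; the point is that one automation tool drives heterogeneous network gear alongside the rest of the DevOps pipeline:

```yaml
# Illustrative Ansible playbook (not from the article): applying one
# policy change across multi-vendor switches. Group names and the
# NTP server address are assumed for the example.
- name: Set NTP server on Arista EOS switches
  hosts: eos_switches
  connection: network_cli
  gather_facts: false
  tasks:
    - name: Push NTP config via eos_config
      eos_config:
        lines:
          - ntp server 10.0.0.5

- name: Set NTP server on Cisco IOS switches
  hosts: ios_switches
  connection: network_cli
  gather_facts: false
  tasks:
    - name: Push NTP config via ios_config
      ios_config:
        lines:
          - ntp server 10.0.0.5
```

Because these modules are idempotent, re-running the playbook only makes changes where a device has drifted from the declared state, which is what allows network configuration to join application code in a continuous, repeatable workflow.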
If past is prologue, however, networking will likely remain a thorn in DevOps’ side for quite some time. As we’ve seen in traditional infrastructure, by the time the enterprise incorporates the last network upgrade there’s another new approach just hitting the channel. And while the advent of abstract network technologies makes it easier to implement changes, it also makes it easier to develop new ones.
Going forward, it is safe to say that the enterprise will have greater sway in charting its own networking environments than it does now, if only because it will be able to tailor both its development and operations efforts around its own business objectives, not the limitations of available technology.
Arthur Cole is a freelance journalist with more than 25 years’ experience covering enterprise IT, telecommunications and other high-tech industries.