One of the things that software-defined networks are supposed to bring to the enterprise is greater reliability. By removing the need to manually provision new resources and configure network pathways, SDN provides an environment that is both highly flexible and more robust in the face of constantly changing workloads.
While there is no reason to dispute this, we have to remember that, as virtually anyone who has ever used a computer knows, software is not infallible. Not only is it subject to bugs and malware, but it can also fall out of sync with drivers, protocols, firmware and other pieces of the data stack during the normal refresh and upgrade cycle.
So before the enterprise becomes wholly dependent on virtual infrastructure, it might be worthwhile to take a look at how reliable SDN is and whether the reliability metrics need to be adjusted to account for the new way in which networks are provisioned, deployed and utilized.
According to Eric Hutchinson, CEO of Spirent Communications, a modern network can be likened to an airport. The people are data packets and the ticket counter, baggage claim, security checkpoints and transport systems are the network elements that transfer, inspect, classify and direct packets to their rightful destinations. It would be much easier to simply chip everyone at the airport and then provide continuous monitoring so things like security and ticketing do not produce choke-points in the boarding or debarkation process, but would this necessarily make the process more reliable or more secure? In many ways, yes, but at the same time it introduces new risks on the virtual layer that should not be ignored.
It should be clear by now that SDN does not provide reliability as a core asset – it has to be programmed very carefully. VMware saw this need earlier in the year when it purchased Arkin Net and added its visibility and control tools to the NSX vRealize management suite. In this way, users gain the ability to assign virtual networks to virtual machines and then link the entire setup to the appropriate physical resources. By itself, this does not guarantee reliability (nothing is ever guaranteed in IT), but it points out the layers of sophisticated software that all must work in an integrated fashion, and at high speed, just to produce the level of reliability that users have come to expect.
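To see why that virtual-to-physical linkage matters for reliability, consider a minimal sketch of the mapping such a management layer has to keep consistent. The class and function names here are purely illustrative assumptions, not VMware's actual API – the point is simply that a virtual network attached to a VM but never backed by a physical resource is a silent reliability gap.

```python
# Hypothetical model of the virtual-to-physical mapping an SDN
# management layer must keep consistent. Names are illustrative,
# not any vendor's actual API.
from dataclasses import dataclass, field
from typing import Optional, List


@dataclass
class VirtualNetwork:
    name: str
    physical_uplink: Optional[str] = None  # backing physical resource


@dataclass
class VirtualMachine:
    name: str
    networks: List[VirtualNetwork] = field(default_factory=list)


def unbacked_networks(vms):
    """Return virtual networks attached to VMs but not linked to any
    physical uplink -- the kind of gap that breaks end-to-end reliability."""
    seen, broken = set(), []
    for vm in vms:
        for net in vm.networks:
            if net.name in seen:
                continue
            seen.add(net.name)
            if net.physical_uplink is None:
                broken.append(net.name)
    return broken
```

A real management suite performs far richer checks than this, but even the toy version shows how many moving parts have to agree before the stack is trustworthy.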
Increasingly, these software tools are being populated with intelligent analytics and machine learning to produce an even greater hands-off management regime than exists in today’s limited SDN universe. Cisco recently launched a new analytics appliance called the Tetration Analytics platform that gives channel partners, but apparently not the enterprise itself, insight into customers’ networks so they can better guide the development and provisioning of future systems and services. The system uses an array of sensors and an intelligent analytics engine to first learn the current state of data center operations and then provide application insights, automated policy recommendations, network flow forensics and other results for the betterment of the overall data environment. It’s basically a mini-IoT dedicated to improving reliability and other performance characteristics.
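The two-phase pattern described above – first learn the current state of operations, then flag deviations and recommend action – can be sketched in a few lines. This is a generic baseline-and-threshold illustration under my own assumptions, not Cisco Tetration's actual algorithm.

```python
# Generic sketch of "learn the baseline, then detect anomalies":
# phase 1 summarizes normal per-interval flow counts, phase 2 flags
# values that deviate sharply from that baseline.
from statistics import mean, stdev


def learn_baseline(samples):
    """Phase 1: learn mean and standard deviation of a metric
    (e.g. flows per second on a given link) from sensor data."""
    return mean(samples), stdev(samples)


def is_anomalous(value, baseline, k=3.0):
    """Phase 2: flag values more than k standard deviations
    away from the learned baseline."""
    mu, sigma = baseline
    return abs(value - mu) > k * sigma
```

Production analytics engines use far more sophisticated models, but the shape is the same: observation first, judgment second.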
A similar mindset is taking hold outside the data center – at the carrier level and on the wide area networks that are crucial to the smooth functioning of the geo-distributed data environment. Companies like AT&T and Level 3 are pushing AI capabilities deeper into their networks in part to deliver enhanced SDN service and reliability to the enterprise, says Light Reading's Carol Wilson. This kind of functionality is not only desirable but necessary in an increasingly virtual world. Level 3, for example, collects threat information on about 50 billion events per day while at the same time handling tens of millions of NetFlow messages per day and millions of logs and other messages per second, all of which must be tracked for things like latency, loss and jitter on a continual basis.
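Tracking loss and jitter continuously is itself well-defined machinery. As a simplified single-flow sketch (carriers do this across millions of flows in hardware and distributed collectors): loss can be inferred from gaps in sequence numbers, and jitter can use the running estimator from RFC 3550, J += (|D| - J) / 16, where D is the change in one-way transit time between consecutive packets.

```python
# Simplified per-flow monitor: loss from sequence-number gaps,
# jitter via the RFC 3550 smoothed interarrival estimator.
class FlowMonitor:
    def __init__(self):
        self.jitter = 0.0          # smoothed jitter, in seconds
        self.prev_transit = None   # transit time of previous packet
        self.expected_seq = None   # next sequence number we expect
        self.lost = 0              # packets inferred lost so far

    def packet(self, seq, send_ts, recv_ts):
        # Loss: count any gap between the expected and observed sequence.
        if self.expected_seq is not None and seq > self.expected_seq:
            self.lost += seq - self.expected_seq
        self.expected_seq = seq + 1
        # Jitter: smooth the change in one-way transit time (RFC 3550).
        transit = recv_ts - send_ts
        if self.prev_transit is not None:
            d = abs(transit - self.prev_transit)
            self.jitter += (d - self.jitter) / 16.0
        self.prev_transit = transit
```

Multiply this by every flow on a carrier backbone and the case for machine-scale automation makes itself.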
Will these intelligent systems be enough to produce a network environment that can be considered reliable? So far, they seem up to the task. But with SDN barely out of the lab at this point, it is far from clear whether even high-speed, intelligent machines will be able to keep everything running smoothly as the technology makes its way into real production environments. And exactly how these software constructs will reprogram themselves in response to their environments is anybody's guess.
The experts, of course, say all will be well, but experts have been wrong before.