Open networking has been on a roll lately as large operators warm to the idea of reducing hardware to white box infrastructure while moving all of the tricky functionality into increasingly intelligent software.
This rightly has large switch vendors like Cisco tied in knots. On the one hand, the writing seems to be on the wall that open platforms are the way of the future; on the other, the transition will be extremely detrimental to the bottom line if it happens too quickly.
Fortunately, there is precedent for this kind of thing. Back in the PC days of the 1980s, IBM dominated with the idea that an integrated hardware/software approach would rule forever because, after all, only experts knew how to use computers. Even after ushering in the clone era by allowing Microsoft to license DOS to third-party manufacturers, the company still thought it could control the market with its PS/2-OS/2 bundle. Ultimately, it failed, but IBM learned a valuable lesson about the relationship between hardware and software. That lesson has guided the company through the shift to cloud and software-defined compute and storage infrastructure.
Cisco is in much the same position today. As software-defined architectures allow non-experts to provision their own networks, the company is faced with the likely transition from its integrated platforms to white box solutions running open source management stacks. The company is actually aiding this process through support for efforts like the Open Network Automation Platform (ONAP), which recently released its first code through the Linux Foundation. ONAP, in fact, saw a major boost this week when AT&T demonstrated its ECOMP management stack, which was recently folded into ONAP and provides wide area networking (WAN) orchestration across multi-vendor white box infrastructure.
None of this is catching Cisco by surprise, mind you. Back in 2015, then-CEO John Chambers called out the threat that proprietary systems faced from white box infrastructure. That is why the company is said to be preparing to launch the Lindt operating system that will function on non-Cisco hardware. As Light Reading’s James Crawshaw notes, the company is facing pressure from hyperscale organizations that are willing to sacrifice some of the performance benefits of integrated platforms for increased control and programmability. At the same time, Cisco is separating its hardware and software engineering teams and has put the word out that it is interested in hiring more programming talent.
In the meantime, Cisco is out to prove that it can stay a step ahead of open solutions when it comes to automating scale-out data environments. The company recently added a Layered Services Architecture (LSA) to its Network Services Orchestrator (NSO) platform. This change should boost horizontal scalability for the coming era of connected devices. The system is designed to manage upwards of 1 million endpoints on a single network architecture, allowing organizations to automate a wealth of services across traditional and virtual networks. Although details are sketchy, LSA appears to do for services what a layered software architecture does for applications; that is, logically separate key functions so they can be managed independently across complex systems. In this way, organizations can develop, distribute and update services on the fly without having to manually reconfigure end-to-end network environments.
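To make the layering idea concrete, here is a minimal, purely illustrative sketch of how a layered service model separates a high-level customer intent from the per-device services beneath it. All class names, fields and parameters here are hypothetical assumptions for illustration; this is not Cisco NSO's actual API or data model.

```python
from dataclasses import dataclass, field

@dataclass
class DeviceService:
    """Lower layer: configuration applied to a single device."""
    device: str
    config: dict

@dataclass
class CustomerService:
    """Upper layer: a customer-facing intent, mapped onto device services."""
    name: str
    endpoints: list
    device_services: list = field(default_factory=list)

    def provision(self):
        # Decompose the high-level intent into per-device services,
        # each of which is owned and versioned independently.
        self.device_services = [
            DeviceService(device=ep, config={"vlan": 100, "service": self.name})
            for ep in self.endpoints
        ]
        return self.device_services

    def update_endpoint(self, device, **changes):
        # Touch only the affected lower-layer service, leaving the
        # rest of the end-to-end service untouched.
        for ds in self.device_services:
            if ds.device == device:
                ds.config.update(changes)

# Provision a service across two edge devices, then update just one.
svc = CustomerService(name="vpn-acme", endpoints=["edge-1", "edge-2"])
svc.provision()
svc.update_endpoint("edge-2", vlan=200)
```

The design point is the same one the article describes: because each layer is logically separate, a change to one endpoint does not force a manual end-to-end reconfiguration.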
There is nothing preventing the Linux community from building the same kind of functionality into ONAP, of course. But with Cisco, the enterprise gains the ability to implement software-defined networks on a proprietary solution first and then migrate over to open architectures gradually.
Ultimately, it seems that Cisco, like IBM before it, has a choice: either to remain king of a rapidly dwindling proprietary architecture or to become a key player in an interoperable, multivendor data ecosystem.
Arthur Cole is a freelance journalist with more than 25 years’ experience covering enterprise IT, telecommunications and other high-tech industries.