With SDN, Does Hardware Really Matter?
One of the advantages of software defined networking (SDN) — in fact, many would argue, the primary advantage — is that it will lower the cost of building and operating complex network infrastructures.
And while tales abound of greater network efficiency and higher resource utilization, when it comes to actually building new physical plant, SDN holds out the intriguing possibility of fully commoditized hardware. Much as traditional virtualization ushered in an era of low-cost, interoperable systems in the server farm, SDN promises to let the enterprise break its dependence on customized network architectures in favor of a mix-and-match approach in which value, not compatibility with legacy systems, becomes the dominant criterion for deployment.
Among backers of the OpenFlow protocol, enthusiasm for commodity networking is particularly high. Pica8, one of the newest faces in the burgeoning SDN market, recently announced a new SDN reference architecture to be used by network hardware manufacturers to devise ONF-compatible systems. The move is seen as a direct challenge to Cisco, Juniper and others in that it opens the door for a wide range of low-cost devices to work their way into what have traditionally been single-vendor installations. At the same time, it complements existing ONF platforms from Big Switch, Nicira and others who are building on the OpenFlow protocol.
But is that all there is to it? Simply deploy an OpenFlow SDN layer and then base all hardware decisions on the lowest common dollar denominator? Not quite, according to the large network vendors. Networking is not like data processing or storage, where systems can communicate with one another relatively easily. Modern network topologies, even virtual ones, need to be delicately balanced to maintain proper connectivity, a requirement that has only grown more crucial as data environments become increasingly dynamic.
That's part of the reason both Juniper and Cisco are pursuing integrated hardware/software networking environments. They argue that this allows both hardware and software to be optimized for specific functions — hardware for basic tasks like packet processing and software for higher-order duties like network pathway configuration — while ensuring a high level of cooperation that should make network deployment and operations both cheaper and easier.
However, others argue that this is redundant and that OpenFlow alone is perfectly capable of extending its reach over both hardware and software in order to devise a more perfect network. Big Switch, for one, says that this avoids the ungainly situation of having separate network management interfaces, one for hardware and another for software, allowing operators to more quickly pinpoint, isolate and correct problems. Proponents add that OpenFlow control over physical switches is the only practical way to deploy SDN in environments that have yet to be virtualized, which includes much of the mission-critical apparatus at most large organizations.
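The single-interface argument is easy to picture in code. The sketch below is purely illustrative: `FlowRule`, `Switch`, and `Controller` are hypothetical names invented for this example, not part of OpenFlow or any real controller library, but they show the core idea of one control program pushing identical forwarding rules to hardware and software switches through the same call path.

```python
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    """A simplified match/action pair in the spirit of an OpenFlow flow entry."""
    match: dict   # e.g. {"dst_ip": "10.0.0.5"}
    action: str   # e.g. "forward:port2"

@dataclass
class Switch:
    """Any OpenFlow-speaking device, whether a physical top-of-rack
    switch or a virtual switch running on a hypervisor."""
    name: str
    kind: str     # "hardware" or "software"
    table: list = field(default_factory=list)

    def install(self, rule: FlowRule) -> None:
        self.table.append(rule)

class Controller:
    """One management interface for every switch, regardless of kind."""
    def __init__(self, switches):
        self.switches = switches

    def push(self, rule: FlowRule) -> None:
        for sw in self.switches:
            sw.install(rule)  # identical call path for hardware and software

# A mixed fabric managed from a single point of control:
fabric = [Switch("tor-1", "hardware"), Switch("vswitch-1", "software")]
ctrl = Controller(fabric)
ctrl.push(FlowRule({"dst_ip": "10.0.0.5"}, "forward:port2"))
```

The point of the sketch is the single `push` method: the operator troubleshoots one rule table abstraction rather than reconciling a hardware console against a separate virtual-switch manager.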
So who is right? Probably both, as there is no right or wrong way to build network infrastructure. Each enterprise will bring its own requirements to the table and will have to make the call as to which approach will adequately meet both long- and short-term needs.
In the end, it is all a matter of trust. Do you trust the established network providers to carry you into an unknown future, or have the upstarts made their case that a new era demands new approaches?
This is only the tip-off of what will be a fairly lengthy transition, and the ball is still very much up in the air.