Software Defined Networking (SDN) has emerged in the last few years as a technology attracting great potential and interest. The reality of SDN today from a standards and interoperability perspective, however, is that it's still the early innings.
The newly minted SDN Testing Consortium backed by InCNTRE (the Indiana Center for Network Translational Research and Education) is now leading an effort to accelerate SDN testing and interoperability. Among InCNTRE's partners is testing vendor Ixia, which is also helping to lead the Open Networking Foundation's testing and interoperability working group. Indiana University has emerged in recent years as a leading research hub and user of SDN.
As part of the overall effort to bring SDN implementations and vendors together for interoperability, the SDN Testing Consortium is kicking off a program to certify OpenFlow on switches. The testing is not seeking to confirm full compliance with the OpenFlow protocol, but rather a form of conformance.
“Compliance has legal ramifications and if you’re achieving compliance there are other liabilities that come into play,” Mike Haugh, senior market development manager at Ixia, told EnterpriseNetworkingPlanet. “With conformance, you break down the specification into individual test cases that verify each line item of a standard and then classify if it’s a must-have or just a should-have item.”
Haugh added that items classified as ‘should’ or ‘may’ are not required for conformance; only the ‘must’ items are required.
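The breakdown Haugh describes can be sketched as a simple filter: each line item of the specification becomes a test case tagged with its requirement level, and only ‘must’ items gate conformance. The test-case names below are invented for illustration and are not actual IxANVL test cases.

```python
# Hypothetical sketch of the conformance model: spec line items become
# test cases tagged with an RFC 2119-style requirement level, and only
# MUST-level cases are required to pass. Case names are illustrative.
REQUIRED = "must"
OPTIONAL = {"should", "may"}

test_cases = [
    {"name": "hello-version-negotiation", "level": "must",   "passed": True},
    {"name": "flow-mod-idle-timeout",     "level": "must",   "passed": True},
    {"name": "emergency-flow-cache",      "level": "should", "passed": False},
    {"name": "vendor-extension-echo",     "level": "may",    "passed": False},
]

def conforms(cases):
    """A device conforms if every MUST-level test case passes;
    SHOULD/MAY failures are reported but do not block conformance."""
    return all(c["passed"] for c in cases if c["level"] == REQUIRED)

print(conforms(test_cases))  # True: only optional items failed
```

Here the two optional failures would be flagged in a report, but the device would still be certified as conformant.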
The conformance testing now underway is not the first testing effort for the SDN protocol. The earliest OpenFlow work was all done at Stanford University, where researchers built an open source test suite known as OF-Test. Haugh described OF-Test as a development-level testing framework with approximately 40 test cases.
What Ixia has now done is built an OpenFlow test suite that has some 180 test cases in the company’s IxANVL test suite.
The conformance testing currently underway is for the OpenFlow 1.0 protocol. OpenFlow has emerged as the leading open source protocol for SDN and is overseen by the Open Networking Foundation (ONF).
Currently the latest OpenFlow protocol specification is the 1.3 branch, which was released in May of this year. Haugh noted that OpenFlow 1.3 is not what is being used in the field.
“Nearly all of the hardware based implementations of OpenFlow are 1.0 today,” Haugh said.
Haugh noted that with new standards it tends to take companies six to eight months to produce a build with a working feature set. With OpenFlow 1.0, he explained, the specification can be implemented in vendor silicon that is available today.
In contrast, the OpenFlow 1.1 specification was not easily implementable in network processors, and as such was only suitable for software implementations. The OpenFlow 1.2 and 1.3 specifications will be supportable in hardware from silicon vendors later this year. Haugh added that the OpenFlow 1.2 specification is essentially a subset of OpenFlow 1.3, which will also help to enable adoption.
The OpenFlow 1.x protocol is only one part of the complete SDN equation, as it defines the communications between a switch and a controller. A complete SDN solution also requires some kind of controller that will help architect the data flow path in a network. There are multiple controllers now available, including Floodlight and NOX among others.
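The switch-controller exchange OpenFlow defines is built on a fixed 8-byte message header. A minimal sketch of packing and parsing that header per the OpenFlow 1.0 specification, using only Python's standard library rather than an actual controller framework:

```python
import struct

# Per the OpenFlow 1.0 spec, every message between switch and controller
# starts with an 8-byte header in network byte order: version (0x01 for
# OpenFlow 1.0), message type, total message length, and a transaction id.
OFP_VERSION_1_0 = 0x01
OFPT_HELLO = 0  # the first message exchanged when the connection opens

HEADER_FMT = "!BBHI"  # version, type, length, xid
HEADER_LEN = struct.calcsize(HEADER_FMT)  # 8 bytes

def build_hello(xid):
    """Build an OFPT_HELLO message (header only, no body)."""
    return struct.pack(HEADER_FMT, OFP_VERSION_1_0, OFPT_HELLO, HEADER_LEN, xid)

def parse_header(data):
    """Decode the fixed header from the start of a received message."""
    version, msg_type, length, xid = struct.unpack(HEADER_FMT, data[:HEADER_LEN])
    return {"version": version, "type": msg_type, "length": length, "xid": xid}

hello = build_hello(xid=1)
print(parse_header(hello))
```

Version negotiation via this HELLO exchange is exactly the kind of ‘must’ behavior a conformance suite exercises first.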
“The challenge is that as you look at different controllers, if you’re going to write an application to run on the controller you communicate via a northbound SDN API,” Haugh said. “Right now that’s not a standardized API.”
As such, if a software developer writes an SDN application it needs to be specific for only one controller technology. Haugh is now seeing more companies come out with their own controllers to solve the issue for application delivery.
“As a new protocol, there are still a lot of things that are not extremely well defined,” Haugh said.
One area of challenge is for hybrid devices that support both OpenFlow as well as traditional switching. It’s not clear from a standards perspective how the two approaches can work together in an interoperable way.
Another challenge for OpenFlow is one of scale.
“OpenFlow from a vendor perspective needs to get to full data center and carrier grade scale,” Haugh said. “One of the challenges we’re seeing in the market is whether OpenFlow is a technology that will be deployed into the network at every hop or if it will be a technology that is just used at the edge.”