Software-defined networking will undoubtedly add a high degree of flexibility when it comes to supporting burgeoning application and data environments. But will it be any cheaper?
It’s taken as an article of faith that, like server and storage virtualization before it, SDN will increase network scalability — essentially tapping hidden resources that were overlooked in static environments, or at best intentionally kept in reserve to accommodate peak loads.
But exactly how much spare capacity exists on current network infrastructure? And when all is said and done, will SDN produce a high enough ROI in both network operational and capital budgets to justify its deployment costs?
At the outset, at least, some of the numbers being kicked around sound pretty good. Deloitte Consulting’s Chris Weitz estimates enterprises will be able to cut their network costs in half with SDN, contributing perhaps 10 to 15 percent in overall IT savings. Much of this will come from server connectivity alone: Big Switch Networks says a single 40-rack server configuration could see savings of a half-million dollars or more. Add to this the operational advantages of dynamic network configuration and its impact on productivity, and it seems SDN is a slam dunk indeed.
This line of reasoning is supported by IDC, which says SDN will go a long way toward furthering existing streamlining efforts like network infrastructure convergence. The ability to configure and reconfigure network architectures according to application and data needs should allow enterprises to support those functions on as little physical hardware as possible. At the same time, SDN supports the deployment of integrated compute/storage/networking fabrics and fully software-defined datacenter architectures. That’s primarily the reason SDN is on track to become a $3.7 billion market by 2016.
But is all this a done deal? Perhaps not, according to researchers at Nicira Networks and the International Computer Science Institute (ICSI). In a candid presentation at SIGCOMM last year in Helsinki, the creators of the OpenFlow protocol noted that it has a number of flaws when it comes to simplifying physical infrastructure. At the top of the list is the fact that it can’t tell the difference between a host network interface and a packet-switch interface, which means responsibility for bit handling still falls to switch hardware. Moreover, external network changes, such as the shift to IPv6, would still require changes in packet matching all the way to the core switch. Ideally, then, there should be two OpenFlows: an edge version for more generalized functions, and a core version that behaves much like today’s MPLS forwarding.
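The edge/core split the researchers argue for can be illustrated with a toy sketch (plain Python, not actual OpenFlow or controller code; all names and values here are hypothetical). The edge matches on full header fields, so a protocol shift like IPv6 only touches edge rules, while the core forwards on an opaque label, much like MPLS:

```python
# Toy illustration (hypothetical, not real OpenFlow code): edge switches
# classify on full headers; core switches forward on an opaque label.

def edge_match(packet, flow_table):
    """Edge switch: match on full header fields (eth_type, dst, etc.).
    A header-format change such as IPv6 alters only these rules."""
    for rule, action in flow_table:
        if all(packet.get(k) == v for k, v in rule.items()):
            return action
    return "drop"

def core_forward(packet, label_table):
    """Core switch: forward purely on a label pushed at the edge;
    header changes never reach this lookup."""
    return label_table.get(packet.get("label"), "drop")

# The edge classifies an IPv4 flow and pushes label 42;
# the core forwards on that label alone.
edge_rules = [({"eth_type": 0x0800, "ipv4_dst": "10.0.0.5"},
               ("push_label", 42))]
labels = {42: "port_7"}

pkt = {"eth_type": 0x0800, "ipv4_dst": "10.0.0.5"}
action = edge_match(pkt, edge_rules)   # ("push_label", 42)
pkt["label"] = action[1]
print(core_forward(pkt, labels))       # port_7
```

The design point is that the core lookup never inspects IP headers at all, which is why, in the researchers’ view, core hardware could stay simple and stable while the edge absorbs protocol churn.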
To gain a real edge in network infrastructure simplification, however, SDN needs to provide greater visibility into existing architectures, says Cariden Technologies. That’s why the company recently released a series of blueprints designed to define various flavors of SDN and show how each contributes to overall data agility. For example, Flow Services SDN covers security, visibility and flow-level programming functions, while Virtualization SDN and Infrastructure SDN would oversee operations and optimization on those two planes. In the end, greater awareness of what is happening throughout the network infrastructure can be leveraged to make it more efficient and cater to a wider array of users and operations.
So, exactly how much will SDN be able to shave off the network budget? Hard numbers will vary from enterprise to enterprise, but it’s not unreasonable to expect a third or more at the outset, with each expansion of SDN producing less and less gain as the law of diminishing returns kicks in.
Server virtualization set the bar pretty high when it came to boosting utilization rates, but that was primarily because most machines ran at very low utilization to begin with. Networking has traditionally run a much tighter ship, so we shouldn’t expect as much when it comes to hardware consolidation. The technology’s true genius lies in the way it fosters configuration flexibility and the establishment of dynamic data environments.
In other words, you’ll still have largely the same network but you’ll be able to repurpose it in a wide variety of new ways.