Software Defined Networking (SDN) has captured the attention of the enterprise IT world, and rightly so. There seems to be little standing in the way of a fully virtualized data center, one that can be commissioned, utilized for either specific or general purposes, and then decommissioned, all entirely in software. But “seems” is the operative word here.
The fact remains that SDN’s much-hyped functionality isn’t banging on the enterprise door just yet. SDN is still in its infancy; the promise of the laboratory won’t make its mark in working production environments for at least a couple of years.
The numbers say it all. According to IDC, the overall enterprise networking industry is on pace to top $42 billion in 2013, with nearly half of that coming from the Layer 2-3 switch market. SDN, by contrast, is poised to generate only about $3.7 billion (8.8 percent of that total), and not until 2016. IDC's prediction is an increase over earlier estimates, yes, but still a far cry from networking dominance.
One of SDN’s biggest unknowns is the level of network functionality to be ported to the application layer. Stories abound of fully self-provisioning applications that can, in effect, create the network environments that suit their particular needs. But as VMware’s Martin Casado pointed out at Interop recently, too much power can sometimes go to the app’s head. In supercomputing circles, he said, applications given too much control over their environments have a tendency to crash. And it seems likely that too many intelligent apps would hamper each other’s functionality should resource contention become too acute.
Ideally, advanced orchestration and automation platforms will take care of these issues, says Adara Networks’ Karthikeyan Subramaniam. But even here, there are some nettlesome issues to deal with. In OpenFlow, for example, the use of variable-length headers, like those found in the IP and TCP protocols, will likely produce a performance penalty compared to traditional Layer 2 switches, which use fixed-length headers. That means OpenFlow is not likely to replace Layer 2 wholesale, but instead act as an adjunct for applications that require advanced VM or session management at the expense of packet- or frame-forwarding performance.
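To see why fixed-length headers are cheaper to match on, here is a toy sketch (plain Python, not OpenFlow code; the offsets follow the standard Ethernet, IPv4, and TCP layouts). A Layer 2 field lives at a constant offset and needs a single indexed read, while locating a TCP port requires first parsing the IPv4 header-length field, a chain of dependent lookups that a fixed-length pipeline avoids.

```python
import struct

# Fixed-length Layer 2 header: every field sits at a known offset, so a
# switch can extract, say, the destination MAC with one indexed read.
def l2_dst_mac(frame: bytes) -> bytes:
    return frame[0:6]  # bytes 0-5 are always the destination MAC

# Variable-length Layer 3/4 headers: the IPv4 IHL field must be parsed
# before the TCP header can even be located, and only then can a match
# field such as the destination port be read.
def tcp_dst_port(frame: bytes) -> int:
    ihl = (frame[14] & 0x0F) * 4   # IPv4 header length in bytes (IHL field)
    tcp_start = 14 + ihl           # TCP begins only after any IP options
    return struct.unpack_from("!H", frame, tcp_start + 2)[0]

# Toy frame: 14-byte Ethernet + 20-byte IPv4 (IHL=5, no options) + TCP ports.
frame = (b"\xaa" * 6 + b"\xbb" * 6 + b"\x08\x00"   # Ethernet: dst, src, type
         + bytes([0x45]) + b"\x00" * 19            # IPv4: version=4, IHL=5
         + struct.pack("!HH", 12345, 80))          # TCP: src port, dst port

print(l2_dst_mac(frame).hex())  # one fixed-offset lookup
print(tcp_dst_port(frame))      # dependent parses; prints 80
```

Hardware makes the same trade-off: fixed offsets can be wired into a parsing pipeline, while variable-length matching requires sequential, data-dependent work per packet.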
SDN also has a dirty little secret from an engineering standpoint, according to Packet Pushers’ Ethan Banks: it doesn’t really solve networking problems. It merely gives network technicians the tools to solve their own problems. When it comes to things like provisioning, orchestration, and troubleshooting, handing someone a new set of APIs is like handing them a screwdriver and telling them to build their own car. When your day is spent fixing all the issues that arise in traditional networks, you need a true solution that works right out of the box. At this point, however, no one is really sure how an SDN environment should be defined and architected, and that’s before we even get into the issue of multi-vendor networks.
If all of this sounds like I’m trying to put a damper on SDN, I’m not. Well, maybe just a little, but that’s okay. The SDN hype cycle is just about over. Now the real work begins: the work that will turn all those lofty promises into some semblance of reality.
A fully software-defined data environment is most certainly in the offing, but it will not be appropriate for all use cases. Enterprise networking environments have a massive installed base of applications and services tailored for their current states. Porting those applications and services over to software-defined server, storage, and network architectures will take quite a bit of recoding, or even a wholesale redeployment of updated versions.
If past is prologue, the most vibrant use of SDx won’t appear until after software-defined technology has become firmly entrenched in the data ecosystem. It will appear eventually, however. And when it does, the enterprise community will have the best of both worlds: a highly dynamic, resource-rich data environment, and a new class of applications that knows how to use it.