Why Virtualize the Network?
Editor's Note: Occasionally, Enterprise Networking Planet is proud to run guest posts from authors in the field. Today, Deepak Kumar, founder and CTO of Adaptiva, shares his thoughts on network virtualization.
By Deepak Kumar
In some situations, the hardwired characteristics of a physical network make it a liability rather than the asset it should be. Take the case of an organization that needs to distribute a 20 GB operating system image to thousands of machines in China. Relying on the physical network to transfer gigantic amounts of data over the WAN link would instantly flood it, bringing business traffic to a halt. And that's just one use case.
System administrators need to keep business-critical systems running, secure, and current every day. Suppose a security update needs to be deployed immediately. That may mean delivering massive amounts of data across the enterprise, across the globe, and across thousands of sites. System administrators don't, and shouldn't, own the network, yet they must still manage the flow of huge amounts of data across it and control the directions in which that data flows.
In multi-disciplinary environments such as those found in most Fortune 500 organizations, ownership of the network cannot be delegated to every large user. This accentuates the conundrum faced by systems administrators and network teams alike, often resulting in friction, business disruption, and sub-optimal network utilization.
What if you could virtualize the network, decoupling the physical routers from the imaginary routes that system administrators need their data to take? With such a system in place, the networking team could continue to own the physical assets and the systems administrators the virtual network, independently controlling and managing the flow of large volumes of systems management data such as images, patches, and software.
One way this could be accomplished is if applications provided a network virtualization layer. Systems administrators could then create imaginary network locations at will and connect them together arbitrarily, completely dissociating the paths followed by content distribution across the virtualized network from the paths specified in the configuration of the physical network itself.
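To make the idea concrete, here is a minimal sketch of such an application-level virtualization layer. The class, location names, and hub-and-spoke layout are all illustrative assumptions, not any vendor's actual API: virtual locations are connected arbitrarily, and content follows a path computed over the virtual graph rather than the physical routing tables.

```python
from collections import deque

class VirtualTopology:
    """Toy model of an application-level virtual network.
    Illustrative only; names and structure are assumptions."""

    def __init__(self):
        self.links = {}  # virtual location -> set of connected locations

    def add_location(self, name):
        self.links.setdefault(name, set())

    def connect(self, a, b):
        # Virtual links can be drawn at will, regardless of how the
        # physical routers are actually wired together.
        self.add_location(a)
        self.add_location(b)
        self.links[a].add(b)
        self.links[b].add(a)

    def distribution_path(self, source, target):
        """Breadth-first search for the path content will follow
        across the *virtual* topology."""
        queue = deque([[source]])
        seen = {source}
        while queue:
            path = queue.popleft()
            if path[-1] == target:
                return path
            for nxt in self.links[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

topo = VirtualTopology()
topo.connect("Seattle-HQ", "Shanghai-Hub")  # one WAN hop, traversed once
topo.connect("Shanghai-Hub", "Shanghai-Office-1")
topo.connect("Shanghai-Hub", "Shanghai-Office-2")
print(topo.distribution_path("Seattle-HQ", "Shanghai-Office-2"))
# ['Seattle-HQ', 'Shanghai-Hub', 'Shanghai-Office-2']
```

In this sketch, the physical WAN link into China is crossed exactly once (Seattle-HQ to Shanghai-Hub); every downstream site fans out from the hub over virtual links the administrator defined, not ones the physical network dictated.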
This would untangle network and systems administrator teams, give them ownership of their respective domains, and optimize use of the network. In the use case described above, only one copy of that 20 GB OS image would need to travel across the physical network to China. The network application built upon the network virtualization layer could then make the rest available locally.
Network virtualization provides an elegant solution to the network topology problem. It can, however, also become a significant management challenge for enterprises with very large networks. Creating and maintaining a virtual network topology can demand a tremendous amount of manual work and time on a network with thousands of subnets. Automating its use through a workflow system integrated into the technology, with the ability to pull information from diverse sources into the virtualized network configuration, would address this challenge.
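A rough sketch of what that automation might look like, under assumed inputs: the subnet inventory below is hypothetical (in practice it might be pulled from an IPAM system or an Active Directory sites listing), and the hub-and-spoke generation rule is just one plausible policy, not a prescribed one.

```python
# Hypothetical subnet inventory, e.g. exported from an IPAM or
# AD Sites listing. Field names and values are assumptions.
subnets = [
    {"subnet": "10.1.0.0/24", "site": "Shanghai-Office-1", "region": "China"},
    {"subnet": "10.1.1.0/24", "site": "Shanghai-Office-2", "region": "China"},
    {"subnet": "10.2.0.0/24", "site": "Berlin-Office",     "region": "EMEA"},
]

def build_virtual_links(subnets, hq="Seattle-HQ"):
    """Auto-generate a hub-and-spoke virtual topology: one hub per
    region, connected to HQ, with each site hanging off its hub."""
    links = set()
    for entry in subnets:
        hub = f"{entry['region']}-Hub"
        links.add((hq, hub))             # HQ to regional hub
        links.add((hub, entry["site"]))  # regional hub to site
    return sorted(links)

for a, b in build_virtual_links(subnets):
    print(f"{a} <-> {b}")
```

With thousands of subnets, a rule like this regenerates the entire virtual topology from inventory data in seconds, replacing the manual link-by-link configuration described above.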
Only a handful of enterprises have adopted such virtualization technologies to date, but the benefits are already clear. When used correctly, network virtualization can save money, time, bandwidth, and administrators' hair.
Deepak Kumar founded Adaptiva after serving as the primary architect of Microsoft Systems Management Server 2003, which later evolved into Microsoft System Center Configuration Manager (SCCM). His experience leading development teams at Nortel and Microsoft propelled him into leading a group of developers to create the Adaptiva IT Systems Software Suite. He holds five patents related to his work on Microsoft's SMS 2003 and has authored over 50 publications, including the Thinkweek paper for Bill Gates that became Project Greenwich, now known as Microsoft Office Communications Server/Lync.