For enterprises looking to build their own cloud architectures, expanding network capacity is only a first step.
Once you’ve built 10 Gigabit Ethernet (10GbE) infrastructure — the minimum needed to comfortably support both virtualization and cloud computing — the far trickier job is to implement the kind of flexibility needed to support the broad range of applications and services that users are demanding.
That means not only will your network have to handle an exponentially larger number of data pathways than ever before, but it will also have to scale those network connections on the fly as operating environments are spun up and torn down on a regular basis.
Naturally, none of this can happen without a good deal of software support. But whether it goes by the name of virtual networking or plain old network management, the fact is that much of the dynamism that will characterize enterprise networking in the future will come from adding multiple layers of abstraction between hardware resources and data loads.
When it comes to scaling network capacity, however, not all approaches achieve the same results. Some applications require a scale-up approach, in which a single device is directed to increase throughput, while others may require a scale-out approach, where loads are shared across multiple resources.
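The difference between the two models can be sketched in a few lines. This is purely an illustrative example — the function names, throughput figures and capacity-planning logic are assumptions for the sake of the sketch, not any real product's API:

```python
import math


def scale_up(current_gbps: float, factor: float) -> float:
    """Scale up: drive more throughput through a single device."""
    return current_gbps * factor


def scale_out(per_device_gbps: float, required_gbps: float) -> int:
    """Scale out: add enough identical devices to share the load."""
    return math.ceil(required_gbps / per_device_gbps)


# Meeting a hypothetical 40 Gbps requirement two ways:
print(scale_up(10.0, 4.0))    # one 10GbE device upgraded 4x -> 40.0 Gbps
print(scale_out(10.0, 40.0))  # or 4 separate 10GbE devices sharing the load
```

The trade-off mirrors the one in the article: scaling up keeps the topology simple but hits a hardware ceiling, while scaling out adds devices — and the management overhead of distributing load across them.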
As long as a solution answers the critical problems at hand, it shouldn’t matter whether it is built around a physical, virtual, logical or any other approach. The name of the game is getting static networks ready for the cloud, and that will only happen once long-standing architectures are retrofitted to match the freewheeling style of emerging data environments.
The physical layer provides the foundation, but the real magic happens in software.
Arthur Cole covers networking and the data center for IT Business Edge. He has served as editor of numerous publications covering everything from audio/video production and distribution, multimedia and the Internet to video gaming.