Increase Bandwidth, But Don’t Stop There

The quest for higher bandwidth has been a given in enterprise infrastructure development since the first motherboards were connected with a copper wire. And while there is nothing wrong with more bandwidth, it seems that in the age of software-defined architectures the need for wider pipes will not be as crucial as it once was, at least not for enterprises that seek to make productive use of the data environment.

Bear with me for a moment.

First, let me say that 10 GbE will probably remain the base on which advanced data architectures will be built for the next decade or so. The good news is, it’s becoming easier and cheaper to deploy with each generation of hardware.

Broadcom recently introduced the Trident II+, which is pin-compatible with existing Trident II deployments and can deliver upwards of 1.28 Tbps for high-density top-of-rack (ToR), blade and aggregation switch configurations. This means the enterprise can quickly upgrade to high-performance networking as part of the regular hardware refresh cycle and then dynamically configure that network for a wide range of applications and data loads. The Trident II+ supports advanced virtual overlays like VXLAN, plus leaf-and-spine architectures in both hyperscale environments (via integration with the StrataXGS Tomahawk platform) and carrier/cloud deployments in conjunction with the StrataDNX.
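Overlays like VXLAN are what make that silicon-level configurability visible to the operating system. As a rough sketch (the interface names, VNI, addresses and peer here are invented for illustration), a Linux host can join a VXLAN segment with a few iproute2 commands:

```shell
# Hypothetical example: attach this host to VXLAN segment 42.
# Device names, the VNI and all addresses are placeholders.
ip link add vxlan42 type vxlan id 42 dev eth0 dstport 4789  # IANA VXLAN port
ip addr add 192.168.42.1/24 dev vxlan42   # this host's overlay address
ip link set vxlan42 up
# Flood unknown traffic to a remote VTEP (all-zero MAC = default entry)
bridge fdb append 00:00:00:00:00:00 dev vxlan42 dst 203.0.113.7
```

The point is less the specific commands than the model: the segment is defined in software and can be created, moved or torn down without touching cabling or switch ports.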

This is crucial, because as the age of virtual networking unfolds, dynamic network configurability will be just as important on silicon as it is on the higher-level virtual plane. This is what will enable companies like Dell to roll out flexible networking platforms that enable multi-rate fabric switching across increasingly diverse data architectures. The Z9100 platform, for example, can be tailored to 10-100 GbE performance for everything from in-rack connectivity to data warehousing and high-end analytics. The company is also partnering with firms like IP Infusion to enable Multiprotocol Label Switching as a Service (MPLSaaS) for campus and carrier connectivity.

Meanwhile, we have the Ethernet Alliance finally recognizing that jumping from 1 to 10 GbE overnight is not only burdensome for many organizations, but overkill. The organization recently launched a 2.5/5GBASE-T program to enable the enterprise to gradually upgrade to high-bandwidth operation while developing a more nuanced network infrastructure that is conducive to emerging data applications, like mobile computing and machine-to-machine communications. The group is holding a webinar to discuss its efforts on Thursday, May 21.

All of these developments are welcome, certainly, but the fact remains that as abstract data architectures become more prevalent, network flexibility will become steadily more valuable than raw throughput. As Networks First’s Jason Peach told Computer Weekly recently:

 “Rather than throwing more bandwidth at the problem, using more intelligent analysis to optimize bandwidth is often a better way to solve a bandwidth contention.”
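As a simple illustration of what that kind of analysis might look like (the data format and function are invented for this sketch; real deployments would pull flow records from NetFlow, sFlow or similar exports), a script can flag the hosts responsible for most of a congested link's load, so they can be shaped or rescheduled rather than the link upgraded:

```python
# Hypothetical sketch: find "top talkers" on a link from flow records,
# assumed here to be (src, dst, bytes) tuples over a measurement interval.
from collections import defaultdict

def top_talkers(flows, link_capacity_bps, interval_s=60, threshold=0.10):
    """Return (host, bytes) pairs exceeding `threshold` of link capacity."""
    bytes_per_host = defaultdict(int)
    for src, _dst, nbytes in flows:
        bytes_per_host[src] += nbytes
    # Total bytes the link can carry in the interval
    capacity_bytes = link_capacity_bps / 8 * interval_s
    return sorted(
        (host, b) for host, b in bytes_per_host.items()
        if b > threshold * capacity_bytes
    )

flows = [
    ("10.0.0.5", "10.0.1.9", 12_000_000_000),  # e.g. a misplaced backup job
    ("10.0.0.7", "10.0.1.9", 40_000_000),      # ordinary traffic
]
# A 10 GbE link carries 75 GB in 60 s; 10% of that is 7.5 GB
print(top_talkers(flows, link_capacity_bps=10_000_000_000))
```

Here the backup job, not the link, is the problem: shaping or rescheduling that one flow relieves the contention without buying a wider pipe.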

This becomes doubly true when you consider that the enterprise network is now only one piece of a distributed infrastructure chain stretching over large geographic areas. It is not enough to have wide, flexible pipelines in your own data center; you'll need them over the WAN, on carrier infrastructure and in your cloud provider's plant as well.

This shouldn’t be taken as an indictment of the drive to 10 GbE and beyond, by any means. But it does point up the fact that bandwidth alone is not the cure-all it was just a few years ago.

Going forward, building a better network will entail not just a bigger, stronger, faster infrastructure, but a more dynamic one as well.

