The cloud is pushing the development of highly dense server and storage configurations, so it’s only logical that the same thing should be happening on the network. But for many organizations still finishing the upgrade from 1 GbE to 10 GbE, the question is whether pushing on to 40 or even 100 GbE is really worth the cost.
It appears that steadily increasing density in network architectures will be par for the course over the next decade or so, if only to maximize the gains of newly virtualized data infrastructure and the dynamic, software-defined environments it supports. According to Fast Market Research, 10 GbE is quickly becoming the new baseline for data centers looking to remain relevant in the virtual/cloud era. As users demand increased uptime, application/data flexibility and greater mobile access to enterprise infrastructure, networks will need to provide not just greater throughput but also higher levels of automation, visibility and overall responsiveness. At the same time, networks are becoming increasingly intertwined with server and storage resources, forcing them to expand in lockstep with the broader data environment. In other words, the enterprise needs to increase network density not because it wants to, but because it has to.
A key factor in the network density equation is the top-of-rack (ToR) switch. The enterprise has long struggled with the impact that growing numbers of virtual machines have on network connectivity, so it stands to reason that as virtual networking brings much-needed I/O to the rack, the requisite bandwidth needs to be put in place as well. Extreme Networks recently upped the density of its ToR portfolio with the Summit X770, a 1 RU device that supports 104 10 GbE ports plus 32 40 GbE connections for high-speed server interconnect. The box offers 600-nanosecond latency and provides for fabric-based infrastructure through support for both the TRILL protocol and the Open Fabric architecture.
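To put those port counts in perspective, a quick back-of-envelope calculation shows the raw capacity packed into a single rack unit. The figures below are the port counts quoted above; the arithmetic is purely illustrative, not a vendor datasheet calculation.

```python
# Back-of-envelope raw port capacity for the 1 RU port mix quoted above
# (illustrative arithmetic only, not a vendor specification).
ports_10g = 104   # 10 GbE ports
ports_40g = 32    # 40 GbE ports
aggregate_gbps = ports_10g * 10 + ports_40g * 40
print(aggregate_gbps)  # 2320 Gbps of raw port capacity in one rack unit
```

Over 2 Tbps of raw connectivity in a single rack unit is exactly the kind of density the virtualized rack demands.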
Meanwhile, Mellanox is pushing its SwitchX-2 ToR device for inclusion in the Facebook-backed Open Compute Project (OCP), which aims to become the model for hyperscale data center infrastructure going forward. The device offers 48 10 GbE ports and 12 40 GbE ports, providing either non-blocking connectivity within OCP’s Open Rack design or additional 10 GbE server ports via QSFP-to-SFP+ breakout cables – a move the company says could provide a more efficient rack architecture for smaller-bandwidth applications.
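The breakout option rests on simple lane arithmetic: a 40 GbE QSFP+ port carries four 10 GbE lanes, so a breakout cable fans one 40 GbE port out into four 10 GbE links. A minimal sketch using the SwitchX-2 port counts quoted above:

```python
# Each 40 GbE QSFP+ port is four 10 GbE lanes, so breakout cables
# convert one 40 GbE port into four 10 GbE server links.
native_10g_ports = 48
qsfp_40g_ports = 12
lanes_per_qsfp = 4
max_10g_links = native_10g_ports + qsfp_40g_ports * lanes_per_qsfp
print(max_10g_links)  # 96 10 GbE links if every QSFP+ port is broken out
```

That flexibility lets a single SKU serve both bandwidth-hungry uplink roles and high-port-count server aggregation.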
Increased density is coming to cabinet solutions as well. Arista Networks’ new 7000 X Series boasts more than 2,000 10 GbE ports and 512 40 GbE ports across 10GBASE-T, SFP+ and QSFP+ line card configurations. Wire-speed performance tops 40 Tbps while keeping the power envelope under 3 W per port. And the system is designed to collapse leaf-and-spine architectures into a single-tier configuration, cutting network costs nearly in half.
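Those chassis numbers allow a rough sanity check on the power claim. Assuming a round figure of 2,048 10 GbE ports (a hypothetical count chosen for illustration; the claim above says only “more than 2,000”), the sub-3 W budget caps port-related draw at roughly 6 kW:

```python
# Rough upper bound on port-related power for the chassis described above.
# The 2,048-port figure is an assumption for illustration; the stated
# claim is simply "more than 2,000" ports at under 3 W each.
ports = 2048
watts_per_port = 3.0       # stated per-port ceiling
max_power_w = ports * watts_per_port
print(max_power_w / 1000)  # roughly 6 kW upper bound for the ports
```

For a chassis replacing an entire leaf-and-spine tier, a power envelope on that order is a notable part of the cost argument.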
Network density is about more than just speed. Rather, flexibility is likely to emerge as the key driver as reliance on pooled resources and rapid-fire reconfiguration becomes the order of the day. It may take a little longer and require a bit more disruption to existing data environments, but with an ultra-dense network infrastructure in place, the enterprise will finally be able to leverage virtualization, the cloud and software-defined architectures as drivers of new business opportunity, rather than as mere extensions to legacy data center infrastructure.