It seems that no matter what happens with enterprise networking in the future, the underlying architecture will sit atop Ethernet.
Whether that architecture will consist of layered topologies like STP or leaf-and-spine or evolve into fully federated fabrics remains to be seen. But it’s almost certain that the underlying Ethernet standards will remain in place as bandwidths migrate into triple-digit Gbps territory.
Of late, however, the question has arisen as to the most appropriate upgrade path for enterprise network infrastructure. The prevailing opinion is that 1 GbE will lead to 10 GbE, then 40, 100, 400 and beyond. The problem is in the leap from 40 GbE to 100: a 40 GbE port aggregates four 10 Gbps lanes, while 100 GbE is built on four 25 Gbps lanes, so the lane speeds of the two generations don’t line up. It doesn’t take a genius to figure out that this will lead to over-provisioning, or at least an awkward configuration, of switch ports in order to build dense 100 GbE network configurations.
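The lane arithmetic behind that mismatch can be sketched quickly. The 128-lane switch ASIC below is an illustrative assumption, not a figure for any particular product:

```python
# Illustrative lane arithmetic: Ethernet port speeds are built from parallel
# serdes lanes, so how cleanly an ASIC's lane budget divides into
# server-facing ports drives provisioning.
LANE_10G = 10  # Gbps per lane in the 10/40 GbE generation
LANE_25G = 25  # Gbps per lane in the 25/50/100 GbE generation

def ports_from_lanes(total_lanes, lanes_per_port):
    """Number of ports a fixed lane budget yields."""
    return total_lanes // lanes_per_port

lanes = 128  # hypothetical switch ASIC lane budget
print(ports_from_lanes(lanes, 4), "x 100 GbE uplinks (4 x 25G lanes each)")
print(ports_from_lanes(lanes, 1), "x 25 GbE server ports (1 lane each)")
print(ports_from_lanes(lanes, 4), "x 40 GbE server ports (4 x 10G lanes each)")
```

A 25 GbE server port consumes one lane where a 40 GbE port burns four, so the same silicon can face four times as many servers while matching the 25 Gbps lanes of its 100 GbE uplinks, which is the alignment the hyperscalers are after.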
25 Gigabit Ethernet for the 100+ GbE rack
Hyperscale providers like Google and Microsoft are already working with networking vendors like Broadcom and Arista on a more balanced approach to high-bandwidth architectures. The group hopes to get the IEEE to recognize 25 and 50 GbE standards that would integrate more easily into 100+ GbE rack configurations. Barring that, they are prepared to launch a new group, the 25 Gigabit Ethernet Consortium, to devise their own roadmap that, on paper at least, would better align switch ports to available servers within the rack and reduce both opex and capex. When you’re talking about hyperscale infrastructure, proportional deployment of server, storage and networking resources can literally shave millions of dollars from the TCO.
Companies that specialize in enabling technologies for switches and line cards are already warming up to 25 Gbps communications. Isola Group, which makes a variety of materials for printed circuit boards, recently announced the Tachyon-100G line of laminates and prepregs that support 100 Gbps data rates through 25 Gbps channel configurations. The materials are said to exhibit a 30 percent reduction in thermal expansion characteristics compared to earlier products, as well as improvements in insertion loss, dissipation and stress. The company anticipates applications ranging from high-speed computing to HD video, teleconferencing and hi-res photography and graphics.
Ethernet switching market falters
Regardless of the upgrade path, however, it seems that the market for Ethernet switching will remain on the rocks for a while longer. According to Dell’Oro Group, revenue fell by nearly $1 billion in the first quarter, with little indication that things will pick up any time soon. Software-defined networking, white-box infrastructure and the vagaries of cloud computing in general are all conspiring to keep CIOs from upgrading their networks. This is normal in times of uncertainty, but as network architect Greg Ferro notes, the days of two ports per server are nearly over, so once network upgrades do get back on track, most enterprises should be in good shape with as much as 80 percent less switching infrastructure.
But even if the enterprise moves toward continuously higher bandwidth, don’t expect problems like network congestion to disappear. In fact, as Big Data and the Internet of Things rear their ugly heads, most enterprises may find themselves fighting a continued rear-guard action against an onslaught of advancing data volumes. And some researchers are wondering whether today’s congestion-fighting mechanisms will be sufficient in high-bandwidth environments. A team at Cornell University, for example, is promoting a new algorithm called Adaptive Sliding Mode (ASM) as an alternative to conventional Quantized Congestion Notification (QCN), arguing that the need to adjust rapidly to changing data rates and transmission delays will only grow in importance as networks become more varied and dynamic.
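For context, QCN (standardized in IEEE 802.1Qau) has a congested switch compute a feedback value that penalizes both a queue’s excess over its equilibrium length and its rate of growth, and has senders cut their transmit rate in proportion. A minimal sketch of that feedback loop follows; the weight and gain constants are illustrative assumptions, not the standard’s recommended values:

```python
# QCN-style congestion feedback, sketched. Constants below are assumed
# for illustration, not taken from IEEE 802.1Qau.
W = 2.0        # weight on queue growth relative to queue excess (assumed)
GD = 1 / 128   # multiplicative rate-decrease gain (assumed)
FB_MAX = 64    # cap on feedback magnitude (assumed)

def qcn_feedback(q_len, q_old, q_eq):
    """Congestion point: negative feedback grows with both how far the
    queue sits above its equilibrium and how fast it is growing."""
    q_off = q_len - q_eq      # excess over the equilibrium operating point
    q_delta = q_len - q_old   # growth since the last sample
    return -(q_off + W * q_delta)

def rate_after_feedback(rate, fb):
    """Reaction point: multiplicative decrease on negative feedback;
    no change (recovery is handled separately in QCN) otherwise."""
    if fb < 0:
        return rate * (1 - GD * min(abs(fb), FB_MAX))
    return rate
```

The fixed gains in a scheme like this are precisely what the ASM work calls into question: when data rates and feedback delays swing widely, one set of control constants is unlikely to keep every queue stable.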
Ethernet will clearly have to endure some growing pains as the enterprise scales up network infrastructure. None of the issues it faces are insurmountable, of course, but it will require a certain amount of experimentation, plus some actual field experience, with virtual architectures and converged fabric infrastructures to get it right. And even then, what’s right for one organization will not necessarily be right for all.
As the virtual era unfolds and what we consider to be high-bandwidth becomes the new normal, it seems that users and applications will be able to choose the network they want rather than confine themselves to the one-size-fits-all environment we have today.