High-Speed Networking Gains Ground, But to What End?
Ethernet continues its inexorable march toward ultra-high bandwidth. That march fosters a push-pull strategy designed to draw the data center out of 1/10 GbE infrastructure and into the 40/100 GbE world, a world better suited to dynamic virtual environments.
Here is the question most organizations face: how much is enough? In the past, fast was never fast enough. But with vendors pushing a 10-fold increase in throughput over 10 GbE (or 100-fold for the many shops still on 1 GbE), one has to wonder whether the data center will really require carrier-class networking just to keep pace with recent technology trends.
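To put those multiples in perspective, here is a back-of-the-envelope sketch (assuming ideal line rate with no protocol overhead, and a hypothetical 10 TB dataset; the figures and function name are illustrative) of what each speed tier means for bulk data movement:

```python
# Rough transfer times at ideal line rate -- no protocol overhead,
# congestion or framing, so real-world figures will be worse.

def transfer_seconds(payload_bytes: int, link_gbps: float) -> float:
    """Seconds to move payload_bytes over a link running at link_gbps."""
    return payload_bytes * 8 / (link_gbps * 1e9)

payload = 10 * 10**12  # hypothetical 10 TB dataset (decimal terabytes)

for gbps in (1, 10, 40, 100):
    hours = transfer_seconds(payload, gbps) / 3600
    print(f"{gbps:>3} GbE: {hours:6.2f} hours")
```

At these idealized rates, a job that ties up a 1 GbE trunk for most of a day finishes in well under an hour at 40G or 100G, which is exactly the gap vendors are selling into.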
Before we get to what’s actually necessary, it’s important to note that 100G performance might not be as burdensome to implement as it seems. New, smaller form factors, like Cyan’s DTM-100G transponder, are now available in single-slot configurations, enabling SDN deployments using C form-factor pluggable (CFP) client interfaces. By reducing the number of slots in use and eliminating the need for external modules, Cyan says it can boost the reach and scalability of optical networks while driving down deployment costs. The device also provides advanced features like DSP-based chromatic dispersion compensation and mixed 10G/100G channel operation.
High-speed networking is also gaining ground at the silicon level, fostering new solutions that are low in power consumption and highly customizable for targeted applications. Achronix Semiconductor recently unveiled the Speedster22i HD1000 FPGA, which supports 10/40/100G Ethernet as well as PCIe Gen 1/2/3 and DDR3 at 2,133 Mbps. Built on Intel’s 22 nm 3-D Tri-Gate process, the device provides a hardened interface IP solution with more than 1 million lookup tables (LUTs), 86 Mb of RAM, 960 programmable I/Os and 64 lanes of 12.75 Gbps serialization/deserialization (SerDes).
It’s all very impressive, but is this a case of network overkill? Does the enterprise really need a 100 Gbps backbone? Large enterprises probably need no less, especially in trunk lines connecting multi-site campuses. 100G also makes it more palatable to replace existing copper plant with the latest fiber solutions. For example, Leviton’s Opt-X Unity 40G/100G pre-terminated MTP system lessens the network footprint and improves airflow within the data center, while conformity to the new IEEE standards governing channel insertion loss improves signal integrity.
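The channel-loss math behind those standards is simple enough to sanity-check in a few lines. The sketch below is illustrative only: the 1.9 dB budget, 3.0 dB/km attenuation and 0.35 dB connector figures are assumptions loosely modeled on IEEE 802.3ba guidance for 100 m of OM3 fiber, not quoted specifications, so verify them against the actual standard and vendor data sheets before planning a plant upgrade:

```python
# Illustrative fiber channel insertion-loss check. All figures here
# are assumptions for the sketch, not quoted from a standard or vendor.

FIBER_LOSS_DB_PER_KM = 3.0  # assumed multimode attenuation at 850 nm

def channel_loss_db(length_m: float, connector_losses_db: list) -> float:
    """Total channel loss: fiber attenuation plus each mated connection."""
    return (length_m / 1000) * FIBER_LOSS_DB_PER_KM + sum(connector_losses_db)

BUDGET_DB = 1.9  # assumed 802.3ba-style budget for 100 m of OM3

# A 100 m run with two MTP connections at an assumed 0.35 dB each
loss = channel_loss_db(100, [0.35, 0.35])
verdict = "within budget" if loss <= BUDGET_DB else "over budget"
print(f"channel loss {loss:.2f} dB vs {BUDGET_DB} dB budget: {verdict}")
```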
And with Ethernet hitting such high speeds, it seems unlikely that native InfiniBand will gain much ground beyond its traditional role as a server interconnect. Chelsio Communications says its new 40G Ethernet adapter, the T5, is already outperforming InfiniBand for select applications by essentially commandeering the RDMA protocol for TCP/IP environments through the iWARP platform. A key component of the platform is the TCP Offload Engine (TOE), which pulls select networking responsibilities off overworked CPUs to clear contention issues and other bottlenecks more quickly.
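TOE itself lives in the adapter’s silicon, so there is no portable host API to demonstrate it directly. A loosely analogous host-side trick, sketched below, is kernel zero-copy via Python’s os.sendfile, which streams a file’s bytes to a socket without the user-space copy a read()/send() loop would incur; the underlying idea is the same one driving offload, namely keeping the CPU out of the per-byte data path (the function and file names here are illustrative):

```python
import os
import socket
import tempfile

def send_file_zero_copy(path, sock):
    """Push a file's bytes to a connected socket without copying them
    through user space (Linux/macOS). Analogy only -- a real TOE moves
    TCP segmentation and checksumming into the NIC itself."""
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        offset = 0
        while offset < size:
            offset += os.sendfile(sock.fileno(), f.fileno(), offset,
                                  size - offset)
    return size

# Demo over a local socket pair with a throwaway 64 KB file.
left, right = socket.socketpair()
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"x" * 65536)
    path = tmp.name

sent = send_file_zero_copy(path, left)
left.shutdown(socket.SHUT_WR)  # signal EOF to the reader
received = b"".join(iter(lambda: right.recv(8192), b""))
print(sent, len(received))

os.unlink(path)
left.close()
right.close()
```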
It would seem, then, that 100G will be a net gain for the enterprise. It provides ample overhead for today’s needs and just about anything that will come along in the near future. Still, if the history of information technology has shown us anything so far, it’s that software developers have little trouble pushing physical platforms to their limits once they have been proven safe and reliable.