The enterprise is gearing up for 100G and even 400G networking in the data center, which puts many organizations at a crossroads when it comes to upgrading server and storage connectivity: do you opt for 40 Gbps now and face a more difficult transition to 100G later, or is it better to adopt 25G and simply quadruple those lanes when the time comes?
At the moment, it appears that many organizations are pursuing the latter option, and vendors are rolling out products to support that strategy.
According to Heavy Reading, 25G is quickly replacing 10G on server and rack infrastructure to better support 100G and 400G in both the data center and wide area. For all these line rates, the interfaces are being implemented in merchant silicon, FPGAs or ASICs, with some designs supporting PAM4 and/or coherent 16/64QAM using advanced DSP techniques. At the same time, the industry is moving toward optical solutions like silicon photonics, which is expected to accelerate as 100G, 400G and even 800G networks emerge for short-haul and transport applications.
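As a rough illustration of how these signaling choices set line rates, the arithmetic is just bits per symbol times symbol rate times lane count. A back-of-the-envelope sketch (the baud rates below are the common nominal figures for these Ethernet rates; real interfaces add FEC and encoding overhead not modeled here):

```python
# Back-of-the-envelope per-lane and aggregate line rates for common
# Ethernet electrical lane configurations. NRZ carries 1 bit per symbol;
# PAM4 carries 2 bits per symbol, doubling the rate at the same baud.

def aggregate_rate_gbps(baud_gbd: float, bits_per_symbol: int, lanes: int) -> float:
    """Raw aggregate rate = symbol rate x bits/symbol x lane count."""
    return baud_gbd * bits_per_symbol * lanes

configs = {
    "25GbE  (1 x 25G NRZ)":  (25.78125, 1, 1),
    "100GbE (4 x 25G NRZ)":  (25.78125, 1, 4),
    "400GbE (8 x 50G PAM4)": (26.5625, 2, 8),
}

for name, (baud, bps, lanes) in configs.items():
    print(f"{name}: ~{aggregate_rate_gbps(baud, bps, lanes):.1f} Gb/s raw")
```

This is why 25G is the natural stepping stone: four 25G lanes aggregate directly to 100G, and moving those lanes to PAM4 doubles throughput again without adding lanes.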
Super Micro recently unveiled a series of NICs and ToR switches aimed at streamlining the transition to 25 GbE. The portfolio includes the SSE-F3548S switch, which supports up to forty-eight 25G ports and six 100G uplinks; a four-way split feature on the uplinks extends support to as many as seventy-two 25G connections. Meanwhile, the company has developed a new 25G switch for the X11 SuperBlade, suitable for either 8U or 6U enclosures, featuring twenty 25G downlinks and four QSFP28 ports that can be configured for either 40G or 100G connectivity. This can be paired with a 25G mezzanine card that supports RDMA/RoCE, DPDK and other protocols.
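The port arithmetic behind that figure is straightforward: each 100G uplink can be broken out into four 25G lanes, so six uplinks add twenty-four more 25G ports on top of the forty-eight native ones. A minimal sketch, using the port counts stated above:

```python
# Breakout math for a 25G ToR switch such as the SSE-F3548S described above:
# native 25G ports plus 100G uplinks, each split four ways into 25G lanes.

def max_25g_ports(native_25g: int, uplinks_100g: int, split: int = 4) -> int:
    """Total 25G-capable connections when every uplink is broken out."""
    return native_25g + uplinks_100g * split

total = max_25g_ports(native_25g=48, uplinks_100g=6)
print(total)  # 48 + 6*4 = 72
```

The same trade-off applies in reverse: every uplink dedicated to breakout is one less 100G path toward the spine.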
Support for 25G solutions in the data center is also expected to streamline the adoption of 5G fronthaul networks. Luxshare-ICT recently demonstrated a 25G pluggable optical module and cabling solution for both data center and 5G applications. The package includes SFP28 devices for both long-range and short-range deployments, along with an active optical cable, all of which can be paired with comparable products at 100G and 400G. The company is also working on 200G solutions based on a 50 Gbps-per-lane chip-on-board design.
Building 400G throughput and higher is not just a simple matter of aggregating narrower pipelines, of course. A fair amount of architectural work will still be required, depending on the state of legacy infrastructure and the goals of future deployments. A case in point is the implementation of NVMe-oF architectures across server and storage environments. David Woolf, senior engineer of data center technologies at the University of New Hampshire’s InterOperability Laboratory, notes that most hosts and storage arrays with Ethernet interfaces should be able to connect at 25G and 100G with little trouble, but organizations that want to create a lossless environment for key storage applications will probably need to upgrade to RDMA-capable switches.
And, of course, bandwidth and throughput are only part of the equation when it comes to next-generation networking. There is also flexibility and integration into third-party environments to consider.
But no matter how connectivity evolves over the coming decade, it should be clear by now that data loads are on the rise and they show no signs of easing off. No matter what level of automation or software definition is called for, it’s best to start with a scalable, broadly federated network that can easily transition data from the narrow pipelines at the edge to the main trunks in the center.
Arthur Cole is a freelance journalist with more than 25 years’ experience covering enterprise IT, telecommunications and other high-tech industries.