RDMA Poised to Take on Advanced Data Center Fabrics

Data center networks have long occupied a sweet spot in the drive to produce more efficient, flexible IT environments. While processing speeds and the benefits of virtual machines gave a much-needed boost to the compute side, storage continued to be a laggard, which left networking with some wiggle room in its overall performance demands.

But those days are over.

Sluggish spinning disks are quickly giving way to high-speed flash and even higher-speed DRAM, both bolstered by interconnect advances such as the NVMe protocol. Suddenly, legacy Ethernet infrastructure is starting to look like the slowpoke in the enterprise.

This is why many system designers are starting to turn to advanced fabric technologies built around Remote Direct Memory Access (RDMA), which enables direct access to system memory from remote endpoints. As Data Center Knowledge’s Mary Branscombe notes, RDMA’s chief advantage over native Ethernet and other legacy TCP solutions is that it doesn’t tie up the CPU with network protocol processing. This brings latency down from milliseconds to microseconds, which is crucial for emerging hyperconverged and software-defined infrastructure that supports high-speed analytics and IoT workloads.
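
For readers who want to see what “bypassing the CPU” looks like in practice, here is a minimal sketch in C against the open libibverbs API, the standard user-space verbs interface for RDMA hardware. It simply registers a local buffer with the adapter and prints the key a remote peer would need to read or write that buffer directly; connection setup and error handling are omitted, so treat it as an illustration of the model rather than a working application.

/* Minimal sketch: expose a local buffer for remote access via libibverbs.
 * Connection setup is omitted and error handling is reduced to early exits. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }

    /* Open the first adapter and allocate a protection domain. */
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register a buffer so the NIC can read and write it directly,
     * without involving the host CPU on the data path. */
    size_t len = 4096;
    void *buf = malloc(len);
    memset(buf, 0, len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    /* The rkey and address are what a remote peer needs in order to issue
     * one-sided RDMA reads and writes against this buffer. */
    printf("buffer %p registered, rkey 0x%x\n", mr->addr, (unsigned)mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}

The point is that once a buffer is registered, data movement is handled by the NICs on both ends; the host CPUs and their TCP stacks stay out of the data path.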

A look at some of the latest RDMA solutions gives an idea of how powerful this technology is. Chelsio Communications recently showed off its new line of T6 iWARP RDMA adapters in both local and long-haul implementations of the Windows Server 2016 Storage Replica platform. The demo used a 25 Gbps link to establish continuous RDMA transport over a 50 km fiber loop without the need for metro extension equipment. Chelsio and Microsoft connected two storage servers in synchronous mode to provide long-distance replication with near-local performance characteristics, namely negligible impact on I/O and latency.

And earlier this year, researchers at the University of Michigan demonstrated a novel way to distribute pooled memory across scale-out architectures by using RDMA to access free memory on other servers directly through their network interface cards (NICs). The Next Platform reports that the system uses a technique called “Infiniswap,” which acts as a memory paging system to map out remote memory on an RDMA network. When memory runs out on one server, instead of writing to disk, the system bypasses the remote servers’ CPUs and targets their NICs directly, effectively creating a single, decentralized pool of memory that can be allocated as workloads demand.
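
Infiniswap’s actual implementation is more involved, but the pattern it relies on, a one-sided RDMA read that fetches a page from a remote server without interrupting that server’s CPU, can be sketched with the same verbs API. The function below, fetch_remote_page, is a hypothetical illustration written for this article, not the project’s code; it assumes an already-connected reliable-connection queue pair, a locally registered buffer, and previously exchanged remote address and key.

/* Hypothetical illustration of the one-sided access pattern a remote-paging
 * scheme relies on. Assumes a connected RC queue pair, a registered local
 * buffer, and the remote page's address and rkey obtained out of band. */
#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

#define PAGE_SIZE 4096

/* Fetch one 4 KB page from remote memory into local_buf.
 * The remote host's CPU is not involved; its NIC serves the read. */
int fetch_remote_page(struct ibv_qp *qp, void *local_buf, uint32_t lkey,
                      uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,
        .length = PAGE_SIZE,
        .lkey   = lkey,
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_READ;   /* one-sided read */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;  /* request a completion */
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    /* Hand the work request to the NIC; the completion is later reaped
     * from the queue pair's completion queue with ibv_poll_cq(). */
    return ibv_post_send(qp, &wr, &bad_wr);
}

A paging layer would call something like this when a page is needed, then poll the completion queue before handing the data back to the faulting process.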

The main inhibitor to increased RDMA deployment in the data center is lingering concern about running it over legacy Ethernet infrastructure with solutions like RDMA over Converged Ethernet (RoCE). But as Bill Lee of Mellanox and Robert Lusinsky of Broadcom point out on Electronic Design, most of the myths surrounding RoCE are either unfounded or outdated. For instance, newer versions of RoCE have eliminated the need for lossless networks, and issues like scale, routing complexity and distance limitations have also been addressed. And since RoCE was defined for IEEE 802.3 Ethernet, organizations should have no trouble adapting it to 25, 40, 50 and 100 Gbps architectures.

Still, virtually no one is recommending a forklift upgrade to RDMA networking. As with any advanced solution, some workloads will benefit from the added speed and flexibility, and some will not.

As data infrastructure evolves toward a more dynamic, federated ecosystem, however, the enterprise will see a growing need for high-speed, flexible networking fabrics – and it would be best if building them did not require replacing the entire networking stack.

Arthur Cole is a freelance journalist with more than 25 years’ experience covering enterprise IT, telecommunications and other high-tech industries.
