InfiniBand Renews Its Pitch for Big Data and the Internet of Things
Big Data and IoT are driving a resurgence of interest in InfiniBand. What needs to happen for the technology to succeed?
You don’t hear much about InfiniBand anymore. The standard that was once seen as the harbinger of a new computing era has largely been relegated to the interconnect on highly parallel HPC platforms.
That may be changing now, for two reasons. First, Big Data and the Internet of Things are likely to fuel networking needs up and down the stack; second, the technology is proving highly useful beyond the rack.
InfiniBand supporters, from Intel to Microsoft
At the moment, one of InfiniBand’s biggest champions is Mellanox, which recently rolled out a series of switches and adapters providing Enhanced Data Rate (EDR) InfiniBand at up to 100 Gbps. But the company has a tough road ahead because, as tech journalist Timothy Prickett Morgan notes, it not only has to fight off rivals like Intel for the InfiniBand market itself, it also has to convince enterprises that it is worth their while to employ something other than Ethernet to move data across the cluster or into the storage pool. High-profile deployments help on both fronts, of course. Microsoft’s adoption of 40 Gbps Mellanox switches for the Azure Storage cloud certainly helps.
Meanwhile, it seems that Intel is finally ready to capitalize on its 2012 acquisition of QLogic by tying the current True Scale platform to the Omni-Path interconnect, due for integration onto the new 10 nm “Knights Hill” Xeon Phi. The devices will offer 48 ports, compared with 36 on current InfiniBand silicon, enabling flatter clusters with fewer switches and less cabling. They will support data rates as high as 100 Gbps and promise to cut latency nearly in half compared with current InfiniBand solutions. Mellanox says it will also have upped its game by the time the Knights Hill chips hit the channel, probably in 2017.
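To see why switch radix matters, consider the standard sizing rule for a nonblocking two-tier fat tree: with k-port switches, half of each leaf switch’s ports face hosts and half face the spine, so the fabric tops out at k²/2 hosts. This is a textbook back-of-envelope sketch, not a figure from Intel or Mellanox:

```python
# Rough fat-tree sizing: a nonblocking two-tier fabric of k-port switches
# supports k/2 hosts per leaf across k leaves, i.e. k**2 / 2 hosts total.

def max_hosts_two_tier(ports: int) -> int:
    """Maximum hosts in a nonblocking two-tier fat tree of `ports`-port switches."""
    return ports * ports // 2

# 36-port silicon (typical InfiniBand switch chips) vs. 48-port devices
print(max_hosts_two_tier(36))  # 648 hosts
print(max_hosts_two_tier(48))  # 1152 hosts
```

The jump from 36 to 48 ports nearly doubles the hosts reachable at the same tier count, which is where the “fewer switches, less cabling” claim comes from.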
InfiniBand organizations and alliances
Indeed, a rising tide lifts all boats, and the InfiniBand community does not intend to get caught in the shallows of the Big Data surge. The InfiniBand Trade Association recently issued Release 1.3 of Volume I of the format’s reference architecture, designed to incorporate increased scalability, efficiency, availability and other functions that are becoming central to modern data infrastructure. The architecture is aimed at devices such as switches, routers and adapters, as well as higher-order fabric management, and includes tools for deeper visibility and advanced diagnostics intended to streamline cluster deployment and configuration. It also gathers information on cable and network infrastructure to provide broader-based management capabilities.
Zooming out to a global view, a group calling itself InfiniCortex made a splash at Supercomputing 2014 last fall with a plan to run InfiniBand on carrier transports as a means of establishing a global exascale architecture. The brainchild of Singapore’s Agency for Science, Technology and Research (A*STAR) and backed by Tata Communications, Rutgers University and others, InfiniCortex aims to use RDMA over 100 Gbps InfiniBand to forge an intercontinental fabric capable of concurrent computing and high-speed workflow distribution. In tests between Australia, Singapore and Seattle, the platform was able to deliver more than 1.14 TB of genomics data in about 24 minutes – a 3000-plus percent improvement over FTP. The group is currently working with a number of collaborators across the globe, but there is no word yet as to when a production-ready platform will emerge.
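A quick sanity check of those quoted figures (my own arithmetic, assuming a decimal terabyte) puts the demo’s average end-to-end throughput at roughly 6 Gbps over the intercontinental path – far below the 100 Gbps link rate, but still dramatically faster than FTP at that distance:

```python
# Sanity-check the InfiniCortex demo: ~1.14 TB of genomics data in ~24 minutes.
TB = 1e12  # decimal terabyte, in bytes (assumption; a binary TiB would differ slightly)

def effective_gbps(bytes_moved: float, seconds: float) -> float:
    """Average throughput in gigabits per second."""
    return bytes_moved * 8 / seconds / 1e9

rate = effective_gbps(1.14 * TB, 24 * 60)
print(round(rate, 1))  # about 6.3 Gbps averaged over the transfer
```

The gap between the 6-odd Gbps average and the 100 Gbps line rate is unsurprising for long-haul RDMA, where round-trip latency and flow control limit sustained utilization.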
InfiniBand adoption challenges
Clearly, InfiniBand can be utilized as more than just a simple interconnect. It is well suited to a range of existing and emerging enterprise applications, not just HPC. But homogeneity across network infrastructure has its advantages too, and the last thing network designers want to do is incorporate latency-inducing protocol-to-protocol interconnectivity into their fabrics.
The only way this would work is if the InfiniBand infrastructure delivered a performance gain strong enough to justify the extra complexity. And since Ethernet is proceeding on a higher-bandwidth roadmap of its own, the difference might not be felt until Big Data loads start to hit the enterprise in force.
Those days might be on us sooner than we think, however, which means the top performer in the field may win the day this time, regardless of cost or complexity.
Arthur Cole covers networking and the data center for IT Business Edge. He has served as editor of numerous publications covering everything from audio/video production and distribution, multimedia and the Internet to video gaming.