InfiniBand Moving to Ethernet?

By Sean Michael Kerner | Apr 29, 2010
http://www.enterprisenetworkingplanet.com/nethub/article.php/3879506/InfiniBand-Moving-to-Ethernet.htm

LAS VEGAS -- Can competing networking technologies work together? In the case of Ethernet and InfiniBand, the answer, according to InfiniBand vendors, is a resounding yes.

The InfiniBand Trade Association (IBTA) and a number of its members here at the Interop trade show, including Mellanox and Voltaire, this week discussed new technologies that will bring some of the low-latency benefits of InfiniBand to 10 Gigabit Ethernet (10 GbE) networks.

"The IBTA's goal is to provide fast efficient transport for networks," IBTA spokesperson Brian Sparks told InternetNews.com. "Before that meant it had to be on InfiniBand but all the InfiniBand vendors today also have 10 GbE so they can converge some of their efforts."

Specifically, the IBTA has now released the RoCE (RDMA over Converged Ethernet) specification. RDMA (Remote Direct Memory Access), the transport at the heart of RoCE, delivers InfiniBand-style low latency. Sparks noted that vendors can now, in effect, provide InfiniBand encapsulation over Ethernet, which will enable networks to keep their existing 10 GbE infrastructure while improving its performance characteristics.
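To illustrate what that convergence means in practice, here is a minimal sketch (not from the article, and assuming a Linux host with the libibverbs library installed) that enumerates local RDMA devices and reports whether each port's link layer is InfiniBand or Ethernet. The same verbs API that RoCE adopts serves both transports, which is why existing RDMA code can carry over to 10 GbE.

/*
 * Illustrative sketch: list RDMA devices via libibverbs and report each
 * port's link layer. RoCE ports show up as Ethernet; native InfiniBand
 * ports show up as InfiniBand. Build with: gcc roce_probe.c -libverbs
 */
#include <stdio.h>
#include <stdint.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices || num_devices == 0) {
        fprintf(stderr, "No RDMA-capable devices found\n");
        return 1;
    }

    for (int i = 0; i < num_devices; ++i) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            /* Ports are numbered from 1 in the verbs API. */
            for (uint8_t port = 1; port <= dev_attr.phys_port_cnt; ++port) {
                struct ibv_port_attr port_attr;
                if (ibv_query_port(ctx, port, &port_attr))
                    continue;

                const char *link =
                    port_attr.link_layer == IBV_LINK_LAYER_ETHERNET
                        ? "Ethernet (RoCE)"
                        : "InfiniBand";
                printf("%s port %u: link layer = %s\n",
                       ibv_get_device_name(devices[i]), port, link);
            }
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}

Applications written against the verbs interface do not need to change when the underlying fabric switches between InfiniBand and RoCE-capable Ethernet; only the link layer reported above differs.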

While InfiniBand vendors are delivering some of their technologies to Ethernet users, that doesn't mean they aren't still pushing InfiniBand gear as well. Sparks said that using RoCE, 10 GbE users may be able to achieve RDMA latency of about 1.3 microseconds. He added that RoCE would shave a microsecond or more off Ethernet users' network latency, which is important for certain market verticals such as financial services.

While an emerging standard for 40 Gigabit Ethernet is in development, it has not yet been ratified, nor has it seen enterprise deployment. In contrast, InfiniBand vendors are currently shipping 40 gigabit networking gear that delivers latency in the 0.8 microsecond range.

Sparks added that bringing InfiniBand's latency improvements to Ethernet users gives network operators more choices. In his view, the networks that need the absolute fastest, lowest-latency connections will still go the native InfiniBand route.

InfiniBand and Ethernet running separately

There are also cases where network administrators will run both InfiniBand and Ethernet separately as part of the same overall network infrastructure. That's where an improved solution from InfiniBand vendor Mellanox (NASDAQ: MLNX) comes into play.

Sparks, who is also director of marketing communications for Mellanox, said that his company's new BridgeX BX5020 gateway enables enterprises to tie together their Fibre Channel, Ethernet and InfiniBand fabrics. The gateway has four 40 gigabit InfiniBand ports, up to twelve 10 GbE ports and up to sixteen Fibre Channel ports.

Sean Michael Kerner is a senior editor at InternetNews.com, the news service of Internet.com, the network for technology professionals.