How the Interconnect is Driving Converged Infrastructure

As vendors increase the throughput of their fabrics, the separation between compute, storage and networking is beginning to disappear.

By Arthur Cole | Posted Apr 14, 2017

Data center infrastructure is rapidly transitioning away from distributed server and storage architectures to converged and hyperconverged models. Naturally, this will have a profound effect on networking as both the physical characteristics and underlying data requirements adapt to new workload demands.

This is playing out along a number of fronts, from the network fabrics that link modular appliances into something resembling a compute-storage array to the basic interconnect that allows multiple processors to function in parallel.

Cisco recently upped the throughput of its HyperFlex 2.5 fabric from 10 Gbps to 40 Gbps, giving its Unified Computing System (UCS) platform a much-needed boost to meet burgeoning workloads. Along with support for all-flash storage architectures and new orchestration capabilities in UCS Director, the added bandwidth should allow the enterprise to build multi-cluster deployments that scale to cloud-level proportions. The next step, says Cisco Marketing Director Todd Brannon, is to provide direct support for containers rather than managing them through VMware vCenter, giving the enterprise the ability to run containers on bare-metal UCS modules.
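
The practical payoff of the wider fabric is faster data movement between nodes. The back-of-envelope sketch below (in Python, with an assumed 500 GB payload and a 90 percent protocol-efficiency figure, neither of which is a Cisco number) shows how the jump from 10 Gbps to 40 Gbps compresses transfer times:

```python
# Back-of-envelope sketch: effect of moving from a 10 Gbps to a 40 Gbps
# fabric on node-to-node data movement. The 500 GB payload and the 90%
# efficiency factor are assumptions, not Cisco figures.

def transfer_seconds(payload_gb: float, link_gbps: float, efficiency: float = 0.9) -> float:
    """Time to move payload_gb gigabytes over a link_gbps link, derated for protocol overhead."""
    return (payload_gb * 8) / (link_gbps * efficiency)

payload_gb = 500  # e.g., rebalancing data across a cluster (assumed figure)
for link_gbps in (10, 40):
    print(f"{link_gbps} Gbps fabric: {transfer_seconds(payload_gb, link_gbps):.0f} s")
# 10 Gbps fabric: 444 s
# 40 Gbps fabric: 111 s
```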

In addition, platforms like Nvidia’s NVLink are bringing the advanced interconnect capabilities found in high-performance computing (HPC) solutions to hyperconvergence. Scientific Computing World’s Robert Roe notes that Nvidia is due to release the latest version of the fabric later this year. It will raise throughput from 160 GBps to 200 GBps, which will greatly increase the flow of data across GPU/CPU configurations inside the HPC node. And even with a hard cap at the hardware layer, we can expect continual software improvements in areas like advanced thread management and I/O resource allocation to boost performance for key applications.
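
To put the bandwidth bump in perspective, a minimal sketch, assuming an illustrative 64 GB working set that must be streamed across the link, shows how the extra 40 GBps shortens each full transfer:

```python
# Illustrative sketch only: the 64 GB working-set size is an assumption,
# not an Nvidia figure. It shows how aggregate link bandwidth translates
# into the time needed to stream data between CPU and GPU memory.
working_set_gb = 64.0  # assumed size of the data set being moved

for bandwidth_gb_s in (160.0, 200.0):  # current vs. next-generation NVLink
    seconds = working_set_gb / bandwidth_gb_s
    print(f"{bandwidth_gb_s:.0f} GB/s: {seconds * 1000:.0f} ms per full transfer")
# 160 GB/s: 400 ms per full transfer
# 200 GB/s: 320 ms per full transfer
```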

Meanwhile, Mellanox and Qualcomm have teamed up on a new interconnect for the Open Compute Project that lays the foundation for throughput of up to 100 Gbps. The solution pairs Mellanox’s ConnectX-5 network adapter with the Qualcomm Centriq 2400 processor to produce a 10 nm ARM-based infrastructure suited to hyperscale cloud environments. With support for 10, 25, 40, 50, 56 and 100 Gbps over either InfiniBand or Ethernet, the system provides a high degree of flexibility when building and expanding data infrastructure. It also delivers up to 200 million messages per second with latency as low as 0.7 µs.
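
The message-rate and line-rate figures interact: for very small messages the adapter’s 200-million-messages-per-second ceiling is the limit, while larger messages run into the 100 Gbps wire. A minimal sketch, using illustrative message sizes, makes the crossover visible:

```python
# Sketch of the trade-off between message rate and line rate. The message
# sizes below are illustrative; only the 200M msg/s and 100 Gbps figures
# come from the announcement.
LINE_RATE_GBPS = 100.0   # quoted maximum line rate
MSG_RATE_PER_S = 200e6   # quoted maximum message rate

def effective_gbps(msg_bytes: int) -> float:
    """Throughput achievable at a given message size, whichever limit binds."""
    rate_limited = MSG_RATE_PER_S * msg_bytes * 8 / 1e9  # message-rate ceiling
    return min(rate_limited, LINE_RATE_GBPS)

for size in (32, 64, 256, 1024):
    print(f"{size:>4} B messages -> {effective_gbps(size):5.1f} Gbps")
#   32 B messages ->  51.2 Gbps   (message rate is the bottleneck)
#   64 B messages -> 100.0 Gbps   (line rate takes over)
#  256 B messages -> 100.0 Gbps
# 1024 B messages -> 100.0 Gbps
```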

Improving the integrated circuits (ICs) themselves will also foster a better networking environment. Frost & Sullivan recently released a report describing how 3D ICs will help overcome memory bottlenecks and enhance connectivity between a wide range of electronic devices. Although development costs are currently on the high end, interconnect lengths on 3D circuits are much shorter than on 2D designs, and signal dissipation is much lower, which makes them ideal for applications ranging from consumer electronics to data center infrastructure.
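
The benefit of shorter wires compounds because, to a first order, both resistance and capacitance grow with wire length, so RC delay grows roughly with the square of the length. The sketch below uses illustrative per-millimeter values, not figures from the Frost & Sullivan report, to show the effect of halving an interconnect run:

```python
# First-order RC model with assumed per-millimeter values; not data from
# the Frost & Sullivan report. Both resistance and capacitance scale with
# wire length, so delay scales roughly with length squared.
R_PER_MM = 100.0    # ohms per mm of interconnect (illustrative)
C_PER_MM = 0.2e-12  # farads per mm of interconnect (illustrative)

def rc_delay_ps(length_mm: float) -> float:
    """RC product of a wire of the given length, in picoseconds."""
    return (R_PER_MM * length_mm) * (C_PER_MM * length_mm) * 1e12

for length_mm in (4.0, 2.0, 1.0):  # long planar route vs. shorter 3D paths
    print(f"{length_mm:.1f} mm wire: {rc_delay_ps(length_mm):.0f} ps")
# 4.0 mm wire: 320 ps
# 2.0 mm wire: 80 ps
# 1.0 mm wire: 20 ps
```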

As things get more modular, then, it appears that the separation between compute, storage and networking will diminish all the way to the silicon level. This represents a significant change for IT technicians who are used to dealing with switches, routers and related hardware.

Fortunately, the software-defined data center is making strides as well, so it won’t be long before building and managing infrastructure will be a simple matter of connecting a series of boxes and then telling the management stack what you want them to do.

Arthur Cole is a freelance journalist with more than 25 years’ experience covering enterprise IT, telecommunications and other high-tech industries.
