If current trends hold, it won’t be long before data center infrastructure consists largely of hyperconverged hardware. And while there will undoubtedly be many ways to configure this technology, it will generally include ultra-dense compute/storage modules outfitted with solid-state memory and connected by advanced network fabrics.
Getting in on the ground floor of this potentially lucrative market is crucial for firms that specialize in memory and interconnect technology, because those who get into the data center first will likely maintain a healthy installed base well into the future. This is why hyperconvergence and fabrics were front and center at the recent Flash Memory Summit, where multiple vendors laid out their visions of how the hyperconverged interconnect should evolve.
Microsemi drew a fair amount of attention with the release of the Switchtec PAX Gen3 PCIe switch, which supports just a bunch of flash (JBOF) and other multi-endpoint configurations using single-root I/O virtualization (SR-IOV) and NVMe. The device provides the fabric-style connectivity behind composable/disaggregated infrastructure (C/DI), which will allow converged modules to support highly dynamic workflows. Not only does it provide low-latency throughput directly to compute, network, GPU and storage resources, but it supports rapid system development through a fabric API and can accommodate off-the-shelf NVMe SSDs in multi-host systems.
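Microsemi’s fabric API itself is proprietary, but the host-side view it ultimately feeds is standard: on Linux, every NVMe endpoint reached through a PCIe switch shows up under /sys/class/nvme, whether it is a local drive or part of a shared JBOF pool. The sketch below simply walks that directory; the attribute names are standard Linux sysfs, and everything else is illustrative.

```python
import os

# Enumerate the NVMe controllers a host can see. Drives behind a PCIe
# fabric switch appear here just like locally attached ones.
NVME_SYSFS = "/sys/class/nvme"

def read_attr(ctrl, attr):
    """Read one sysfs attribute for an NVMe controller, if present."""
    path = os.path.join(NVME_SYSFS, ctrl, attr)
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

def list_nvme_endpoints():
    if not os.path.isdir(NVME_SYSFS):
        print("no NVMe controllers visible on this host")
        return
    for ctrl in sorted(os.listdir(NVME_SYSFS)):
        model = read_attr(ctrl, "model")
        state = read_attr(ctrl, "state")
        transport = read_attr(ctrl, "transport")  # "pcie" for switch-attached drives
        print(f"{ctrl}: model={model} transport={transport} state={state}")

if __name__ == "__main__":
    list_nvme_endpoints()
```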
Meanwhile, Santa Clara, Calif.-based Kalray showed off a new target controller that provides NVMe over Fabrics (NVMe-oF) support for JBOF configurations. The KTC40 eliminates the need for high-end x86 solutions and their associated DDR memory, allowing designers to increase density by 60 percent and cut storage costs and power consumption by 65 percent each. The system also comes with its own PCIe board, based on the Kalray MPPA2 processor, and a complete software stack.
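Kalray ships its own target stack, but the job a target controller performs — exposing local flash to remote hosts over a fabric — can be sketched with the Linux kernel’s standard nvmet configfs interface. This is a minimal sketch, not Kalray’s implementation; the NQN, backing device and IP address are hypothetical placeholders, and it assumes root privileges with the nvmet modules loaded.

```python
import os

# Export one block device over RDMA on the standard NVMe-oF port 4420,
# using the Linux nvmet configfs interface.
CFG = "/sys/kernel/config/nvmet"
NQN = "nqn.2024-01.io.example:jbof-ns1"   # hypothetical subsystem name
DEV = "/dev/nvme0n1"                       # hypothetical backing device
ADDR = "10.0.0.1"                          # hypothetical target IP

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

def export_namespace():
    # 1. Create the subsystem and allow any host to connect (demo only).
    subsys = os.path.join(CFG, "subsystems", NQN)
    os.makedirs(subsys)
    write(os.path.join(subsys, "attr_allow_any_host"), "1")

    # 2. Attach the backing block device as namespace 1.
    ns = os.path.join(subsys, "namespaces", "1")
    os.makedirs(ns)
    write(os.path.join(ns, "device_path"), DEV)
    write(os.path.join(ns, "enable"), "1")

    # 3. Create an RDMA port and link the subsystem to it.
    port = os.path.join(CFG, "ports", "1")
    os.makedirs(port)
    write(os.path.join(port, "addr_trtype"), "rdma")
    write(os.path.join(port, "addr_adrfam"), "ipv4")
    write(os.path.join(port, "addr_traddr"), ADDR)
    write(os.path.join(port, "addr_trsvcid"), "4420")
    os.symlink(subsys, os.path.join(port, "subsystems", NQN))

if __name__ == "__main__":
    export_namespace()
```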
The NVMe-oF format itself is heading toward a new level of performance thanks to the integration of advanced processor technologies with high-speed transports like Fibre Channel and RDMA. Cavium recently showed off QLogic Gen6 Fibre Channel adapters that deliver more than 2 million FC-NVMe IOPS, as well as new FastLinQ NVMe-oF solutions that support the RoCE and iWARP transports concurrently. The intent is to provide scalable connectivity between multivendor compute and storage architectures, as well as between storage controllers and NVMe enclosures, giving the enterprise greater flexibility when deploying initial hyperconverged infrastructure and adding modules over time.
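The host side of that exchange is easier to picture with nvme-cli, the standard Linux utility for NVMe and NVMe-oF administration. Assuming an RDMA-capable NIC and the hypothetical target from the sketch above, discovery and connection look like this:

```python
import subprocess

# Discover and attach a remote NVMe-oF namespace over RDMA using nvme-cli.
# The address and NQN are the hypothetical values from the target sketch.
TARGET = "10.0.0.1"
NQN = "nqn.2024-01.io.example:jbof-ns1"

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Ask the target which subsystems it exports.
run(["nvme", "discover", "-t", "rdma", "-a", TARGET, "-s", "4420"])

# Attach one of them; it then appears locally as an ordinary /dev/nvmeXnY device.
run(["nvme", "connect", "-t", "rdma", "-n", NQN, "-a", TARGET, "-s", "4420"])
```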
Traditional DRAM and NAND flash technology will also get high-speed fabric support in hyperconverged infrastructure. A company called IntelliProp introduced a new persistent storage memory controller, called the Cobra, that resides on the Gen-Z fabric rather than the memory bus. In this way, the system can provide byte-addressable access to up to 32 GB of DRAM cache and block-addressable access to 6 TB of NAND flash. At the same time, the company has developed an NVMe Host Accelerator IP Core that provides high-performance connectivity to FPGA and ASIC designs.
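The byte-versus-block split IntelliProp describes can be illustrated with ordinary Linux primitives, even though Gen-Z itself operates at the fabric level: mmap gives load/store access the way a byte-addressable DRAM cache would, while pread moves data a block at a time the way the NAND tier does. The /dev/pmem0 device node below is a hypothetical stand-in for byte-addressable media.

```python
import mmap
import os

DEV = "/dev/pmem0"   # hypothetical persistent-memory device node
BLOCK = 4096

fd = os.open(DEV, os.O_RDWR)
try:
    # Byte-addressable path: map a region and touch a single byte in place.
    region = mmap.mmap(fd, BLOCK)
    region[42] = 0x7F            # one-byte store, no block rewrite needed
    region.flush()               # sync the mapped range back to media
    region.close()

    # Block-addressable path: NAND-backed storage moves data a block at a time.
    block = os.pread(fd, BLOCK, 0)
    print(f"read {len(block)} bytes as one block")
finally:
    os.close(fd)
```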
The power of hyperconvergence lies not in the performance of individual modules but in the ability to pool resources for massive workloads. Clearly, standard point-to-point networking is not up to this task. Only a dynamic fabric topology governed by an intelligent automation stack has the chops to provide the most efficient links between disparate modules.
This is, in fact, one of the most challenging aspects of hyperconverged infrastructure and will probably be the last to advance to a sufficient degree for enterprise-class production workloads.
Arthur Cole is a freelance journalist with more than 25 years’ experience covering enterprise IT, telecommunications and other high-tech industries.