Hyperscale infrastructure is widely considered the purview of large, web-facing organizations or top-tier colocation and cloud providers. But as the enterprise confronts the realities of Big Data and mobile, dynamic workloads, it seems inevitable that even medium-sized organizations will encounter hyperscale environments at some point, either as greenfield deployments or in a hosted/cloud capacity.
Most of the focus in hyperscale deployments at the moment, however, is on the server and storage components. After all, these are what end users actually consume, and pricing and cost-benefit are usually calculated in cores or GBs per dollar. But none of this happens without networking, and the ability to scale throughput, bandwidth and overall connectivity is just as vital as boosting processing power and capacity.
Facebook was one of the first to figure this out. The company’s Open Compute Project (OCP) platform recently saw the release of its first modular ToR switch, which, like much of the rest of its infrastructure, the company is sourcing from ODMs in the Pacific Rim. The 40 GbE switch features 16 x 10 GbE slots, expandable to 32, plus a specialized “Hug” board that connects to server motherboards via PCIe. The idea is to present the switch as just another server, making configuration, and therefore scalability, less complicated than on a traditional network. The device runs Facebook’s own Linux distribution and customized code for monitoring and configuration.
From an operational standpoint, the biggest difference between hyperscale and traditional infrastructure is that, despite its massive size, hyperscale is usually optimized for one or two key applications rather than the multiple apps that characterize the average data center, according to Emulex’s Mike Jochimsen. For the networking component, then, the focus is on massive redundancy and speedy failover and recovery instead of preventing outages altogether. The commodity nature of hyperscale infrastructure helps in this regard, as it is more amenable to software-defined networking and other virtual constructs.
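The fail-fast-and-recover posture can be illustrated with a minimal sketch (not any vendor's actual implementation): a client holds a list of redundant replica endpoints and simply connects to the first one that responds, rather than waiting on a failed node. The endpoint addresses and the `first_reachable` helper are hypothetical names for illustration.

```python
import socket

def first_reachable(endpoints, timeout=0.5):
    """Return the first (host, port) pair that accepts a TCP connection.

    Hyperscale designs assume individual nodes fail routinely, so the
    client moves on to the next replica quickly instead of retrying a
    dead one -- fail over fast rather than prevent every outage.
    """
    for host, port in endpoints:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return (host, port)
        except OSError:
            continue  # node down or unreachable: try the next replica
    return None  # all replicas exhausted
```

In a real deployment the replica list would come from a service-discovery layer and the health check would be richer than a bare TCP handshake, but the operational principle is the same: redundancy plus cheap, fast failover.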
There is also quite a bit happening at the processor level in support of hyperscale, says Enterprise Tech’s Timothy Prickett Morgan. Cavium’s new ThunderX ARM, for example, packs up to 48 cores per die running at up to 2.5 GHz. It also features virtualized compute, memory and I/O, plus a wide range of integrated hardware accelerators for security, storage, networking and other functions. The design is fully compliant with the ARMv8 spec and the Server Base System Architecture, which allows it to scale rapidly without requiring extensive customization of operating system code.
At the same time, Intel is bringing the flexibility of the FPGA (field-programmable gate array) to hyperscale environments. The company is working to pair standard x86 Xeons with an FPGA, combining processing power with the broad customization that comes from the ability to reconfigure core logic. The idea is that a reprogrammable FPGA will allow the chipset to be tailored to specific workloads like search, transaction processing or even video communications, perhaps through sophisticated automation software that keeps hands-on configuration to a minimum. The chips will communicate via the QuickPath Interconnect and are expected to fit into a standard E5 socket. Intel claims it is already seeing a tenfold increase in performance compared to standard CPUs.
Hyperscale is clearly not for everyone. But its underlying technologies will likely make their way into the broader enterprise market sooner rather than later. After all, who doesn’t want increased scale and flexibility in their local environment, even if it doesn’t exactly qualify as “hyper”?