Hyperscalers Confront Their Networking Challenges


Hyperscale cloud data centers earn the name for a reason. They’re big. Really big. At the same time, they incorporate highly dense modular infrastructure, which means they pack enormous numbers of endpoints, both physical and virtual, into confined areas.

All of this adds up to a monumental networking task, made all the more difficult by the fact that end users either don’t understand or simply don’t care about the relationship between data loads and network latency.

Unlike in a typical enterprise, however, poor networking performance in the cloud translates directly into lost revenue. So most top providers are bending over backward to make sure that the sheer size of their infrastructure does not hamper their ability to move bits within their data centers.

Azure, for instance, has begun deploying specialized FPGA cards designed to offload networking tasks from CPUs, allowing each kind of chip to concentrate on what it does best. As the company’s Gabriel Silva explains, the program delivers up to 30 Gbps to Windows and Linux servers while reducing network jitter, the fluctuation in latency. Best of all, the system works on the latest versions of Windows and the most recent Linux distributions and is available at no additional cost.
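To put the jitter claim in concrete terms, here is a minimal sketch, in Python, of how one might measure round-trip latency and jitter between two VMs before and after enabling the feature. The peer address and the UDP echo responder it assumes are hypothetical test fixtures, not part of the Azure offering:

```
# Minimal latency/jitter probe: send small UDP packets to a peer VM
# running an echo responder and report mean round-trip time and jitter.
import socket
import statistics
import time

PEER = ("10.0.0.5", 9999)  # hypothetical echo responder on a second VM
SAMPLES = 100

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)

rtts = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    sock.sendto(b"ping", PEER)
    sock.recvfrom(64)  # block until the peer echoes the packet back
    rtts.append((time.perf_counter() - start) * 1e6)  # microseconds

print(f"mean RTT: {statistics.mean(rtts):.1f} us")
print(f"jitter (stddev of RTT): {statistics.stdev(rtts):.1f} us")
```

Running the probe with the feature off and on makes the improvement visible: the mean line tracks latency, while the standard-deviation line is the jitter the offload is meant to tame.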

Meanwhile, Microsoft is teaming up with Riverbed to ensure that performance doesn’t drop off as soon as workloads hit the wide area network. Using Riverbed’s SteelConnect SD-WAN solution, users can implement unified connectivity and policy-based orchestration across local, wide area, and cloud networks, while SteelCentral offers end-to-end visibility and digital experience monitoring for any Microsoft cloud app. Both can be supplemented by the SteelFusion Azure-Ready Edge platform, which ties edge resources to the Azure cloud for use as a primary data center or storage tier.

The need for hyperscale networking is also producing a slew of new channel offerings. Mellanox recently unveiled its new BlueField SoC and SmartNIC adapters aimed at large data center operators and OEM customers. The BlueField device sets new NVMe-over-Fabrics performance records with its dual-port 100 Gbps design, hitting 7.5 million IOPS while maintaining less than three microseconds of latency. The company says the system supports applications ranging from Web 2.0 and big data to enterprise storage and network functions virtualization (NFV).
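The IOPS figure is easier to appreciate with a back-of-envelope conversion to line-rate utilization. The announcement does not state a block size, so the sizes below are assumptions; notably, at 4 KiB the quoted IOPS would exceed the 200 Gbps the two ports can carry, which suggests the record was set with smaller I/Os:

```
# Back-of-envelope: convert an IOPS figure into Gbps for assumed block
# sizes and compare against the dual-port line rate.
def iops_to_gbps(iops: float, block_bytes: int) -> float:
    return iops * block_bytes * 8 / 1e9

IOPS = 7.5e6           # figure quoted for the BlueField adapter
LINE_RATE_GBPS = 200   # two 100 Gbps ports

for block in (512, 4096):  # assumed block sizes, not from the announcement
    gbps = iops_to_gbps(IOPS, block)
    print(f"{block:>4} B blocks: {gbps:6.1f} Gbps "
          f"({100 * gbps / LINE_RATE_GBPS:.0f}% of line rate)")
```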

In addition, Broadcom is out with the new StrataXGS Tomahawk 3, a 32×400/128×100 GbE solution that delivers 12.8 Tbps, easily outperforming every other single-device solution on the market. The device also draws 40 percent less power than last year’s 6.4 Tbps version, which helps it achieve a 75 percent reduction in total cost per port. It is backed by the BroadView software suite, which provides visibility into packet and traffic flows as well as internal switch performance.
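The headline numbers here are easy to sanity-check. The sketch below confirms the port arithmetic and, assuming the 40 percent figure refers to total chip power while capacity doubles (the announcement could also mean per-port power), estimates the per-gigabit improvement:

```
# Sanity-check the Tomahawk 3 port math and power claim.
CAPACITY_GBPS = 12_800  # 12.8 Tbps switching capacity

for speed in (400, 100):
    ports = CAPACITY_GBPS // speed
    print(f"{ports} x {speed} GbE = {ports * speed / 1000} Tbps")

# Assumption: 40% lower total power at double the capacity of the
# 6.4 Tbps part implies power per gigabit drops to 30% of before.
relative_power = 0.6
relative_capacity = 2.0
print(f"power per Gbps: {relative_power / relative_capacity:.0%} of prior gen")
```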

It’s been said that every challenge is an opportunity, and maintaining connectivity across an ever-expanding collection of resources is nothing if not a challenge. As the enterprise finds new and innovative ways to draw revenue from digital services, the last thing anyone needs to hear is that the network cannot handle the load.

Arthur Cole is a freelance journalist with more than 25 years’ experience covering enterprise IT, telecommunications and other high-tech industries.
