Reimagining the Network for the Hyperscale Era

Ixia, SnapRoute and Coriant have recently launched networking products designed for hyperscale data centers.

By Arthur Cole | Posted Mar 28, 2017

The market for hyperscale data centers is heating up, and that raises the inevitable question of how to network such an enormous collection of hardware.

Hyperscale, of course, is not simply a traditional data center on steroids. It provides high-speed, high-value results from extremely large volumes of structured and unstructured data, which is why systems and architectures have been reimagined from the ground up. And that means enterprise technicians will have to adapt to an entirely new form of networking as well.

According to Research Beam, hyperscale infrastructure is set to grow at a 20.7 percent compound annual rate through 2022, producing a market that tops $71 billion per year. The key drivers are cost and the need for more efficient consumption of data resources and the energy that powers them. Hyperscale incorporates high degrees of automation, commoditization and resource federation to ensure that resource consumption can be dynamically scaled to workload requirements.

For the most part, development of hyperscale technology is being led by the hyperscale service providers themselves, such as Facebook, Amazon and Google. Each of these firms builds its own infrastructure using homegrown architectures and frameworks. And because none of them is in the hardware business, they have taken to releasing their specs under the auspices of programs like the Open Compute Project and OpenStack. But as large enterprises start to build out their own hyperscale infrastructure, questions surrounding integration, testing and monitoring are producing demand for a wide range of commercial products.

Ixia and SnapRoute recently unveiled an interoperability testing solution aimed at hyperscale infrastructure. At the OCP Summit in Santa Clara, CA, earlier this month, the pair showed off a combined system that tests and validates high-performance networking equipment before it is deployed into production environments. The system unites SnapRoute's FlexSwitch network operating system and Ixia's Novus 100 GbE load module atop the Facebook-designed Wedge 100 switch, which the companies say provides improved access and visibility through telemetry and streaming analytics. It also offers the ability to update microservices without network disruption and delivers high-scale Border Gateway Protocol (BGP) performance and convergence on Equal-Cost Multi-Path (ECMP) architectures.
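For readers unfamiliar with ECMP, the idea is that a switch with several equal-cost routes to the same destination hashes each flow's 5-tuple and uses the result to pick a next hop, so packets within a flow stay on one path while different flows spread across all available paths. The following is a minimal, illustrative Python sketch of that hashing behavior; the spine names, addresses and flow counts are assumptions for demonstration only, and this is not SnapRoute's or Ixia's actual implementation.

```python
import hashlib
from collections import Counter

def ecmp_next_hop(flow, next_hops):
    """Pick one of several equal-cost next hops for a flow.

    ECMP routers typically hash the flow 5-tuple (src IP, dst IP,
    protocol, src port, dst port) so every packet in a flow follows
    the same path while distinct flows spread across all paths.
    """
    key = "|".join(str(field) for field in flow).encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return next_hops[digest % len(next_hops)]

if __name__ == "__main__":
    # Four hypothetical equal-cost BGP-learned paths out of a leaf switch.
    paths = ["spine1", "spine2", "spine3", "spine4"]

    # Simulate 10,000 distinct flows and check how the load spreads.
    load = Counter(
        ecmp_next_hop(("10.0.0.1", "10.0.1.1", 6, 40000 + i, 443), paths)
        for i in range(10000)
    )
    print(load)  # Should show a roughly even split across the four spines.
```

In a test harness, a hardware load module plays the role of the flow source at line rate with far larger flow counts, while the network operating system under test is measured for path distribution and convergence time when links fail or routes change.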

Because normal data infrastructure can now be fully abstracted thanks to SDN and NFV, there is no reason the same can't be done for hyperscale environments. Coriant recently launched its Hyperscale Carrier Architecture, aimed at allowing telcos and other network providers to host 5G, IoT, cloud and other services on their legacy networks. The system provides end-to-end transport, packet and routing functions from the subscriber to the peering point, using numerous Coriant modules to enable capabilities such as multi-layer, multi-vendor automation and control, policy-based network optimization, universal switching and optical transport. The company says its solution can deliver hyperscale network services at about half the cost of traditional approaches.

Few enterprises have the skills or in-house knowledge to attempt construction of hyperscale infrastructure on their own, so outside help is all but a given. This is particularly true of networking, considering that it touches virtually every aspect of the environment and can make or break overall performance as data loads scale to terabyte, petabyte and exabyte levels — and beyond.

And unlike past developments, this one is not being guided by the vendor community but by users, albeit extremely large and influential ones. This puts the enterprise in a position to craft infrastructure according to its own needs rather than tailor requirements to what is available on commercial hardware. But it also places responsibility for the success or failure of the hyperscale environment squarely on its own shoulders.

Arthur Cole is a freelance journalist with more than 25 years’ experience covering enterprise IT, telecommunications and other high-tech industries.
