Much Ado about Hyperscale Architectures

Big Data makes hyperscale networking inevitable even for the enterprise.

By Arthur Cole | Posted May 29, 2015

Many organizations are looking forward to the future of hyperscale networking. Even if you are not a cloud-scale enterprise like Amazon and Google, you can still capitalize on the ability to manage and manipulate large data stores over a highly commoditized, distributed architecture.

This will undoubtedly upend the IT infrastructure market, particularly as more workloads are migrated from legacy data centers to regional cloud providers. But exactly how will these architectures play out? And will we have a fully automated, abstract networking stack that is completely independent of both the underlying physical plant and the overarching data and application layers?

According to Dell’Oro Group’s Alan Weckel, people in the networking industry are going to be shocked by the upheaval that is to come. Of the top four hyperscale players – Amazon, Facebook, Google and Microsoft – only Microsoft has turned to a traditional networking vendor, Arista, for its switching layer. Both Amazon and Google have gone white box, while Facebook is developing its own Wedge solution. This is part of the reason why white box solutions have left vendors like Cisco in the dust in the 40 GbE segment. As the data center switching market closes in on $14 billion by the end of the decade, the vast majority of the growth will come from the top seven cloud providers.

This isn’t to say that commercial vendors should not bother with the hyperscale market. Indeed, companies like Huawei are quickly rolling out 25 GbE solutions aimed precisely at organizations that want hyper performance but not necessarily hyper scale. The company’s recent collaboration with QLogic provides an end-to-end portfolio of switches, adapters and other devices designed to enable a stepped deployment strategy to 100 GbE and beyond. Along the way, users will gain a plethora of network transport and topology options, such as RDMA over Converged Ethernet (RoCE) and iWARP workload acceleration.
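
To make the “stepped deployment” idea concrete, here is a minimal back-of-the-envelope sketch. It assumes a hypothetical top-of-rack switch with 48 x 25 GbE server ports and 6 x 100 GbE uplinks – illustrative numbers, not figures from Huawei or QLogic – and computes the resulting oversubscription ratio, the kind of quick check an architect would run when deciding how fast to step the fabric up to 100 GbE.

```python
# Back-of-the-envelope oversubscription check for a hypothetical leaf switch
# in a stepped 25 GbE -> 100 GbE deployment. Port counts are illustrative
# assumptions, not vendor specifications.

def oversubscription(server_ports: int, server_speed_gbps: float,
                     uplink_ports: int, uplink_speed_gbps: float) -> float:
    """Ratio of server-facing bandwidth to uplink (fabric-facing) bandwidth."""
    downstream = server_ports * server_speed_gbps
    upstream = uplink_ports * uplink_speed_gbps
    return downstream / upstream

if __name__ == "__main__":
    # Assumed configuration: 48 x 25 GbE server ports, 6 x 100 GbE uplinks.
    ratio = oversubscription(48, 25, 6, 100)
    print(f"Downstream {48 * 25} Gbps vs. upstream {6 * 100} Gbps "
          f"-> oversubscription {ratio:.1f}:1")
```

Keeping the ratio in a single helper makes it easy to replay the same calculation as port speeds step from 25 GbE toward 100 GbE and beyond.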

Indeed, compared to what’s coming, today’s network architectures will be as outdated as the “sneakernet” of the 1980s, says Government Computer News’ Amanda Ziadeh. Glimpses of the future can already be seen in deployments like the Energy Sciences Network (ESnet), which connects more than 40 Department of Energy research sites generating 20 PB of data per month via an 8.8 Tbps backbone with links running at 100 Gbps. There is also N-Wave, which links the National Oceanic and Atmospheric Administration to partners in the research and education fields over a 100 Gbps network that moves about 60 TB per day.
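
For perspective on those volumes, a minimal sketch (assuming decimal units and a 30-day month, neither of which is specified above) converts 20 PB per month and 60 TB per day into the average throughput they imply:

```python
# Average throughput implied by the volumes cited above, assuming decimal
# units (1 TB = 1e12 bytes, 1 PB = 1e15 bytes) and a 30-day month.

SECONDS_PER_DAY = 86_400
SECONDS_PER_MONTH = 30 * SECONDS_PER_DAY  # simplifying assumption

def avg_gbps(total_bytes: float, seconds: int) -> float:
    """Average rate in gigabits per second for a volume moved over a period."""
    return total_bytes * 8 / seconds / 1e9

esnet_avg = avg_gbps(20e15, SECONDS_PER_MONTH)  # ESnet: ~20 PB per month
nwave_avg = avg_gbps(60e12, SECONDS_PER_DAY)    # N-Wave: ~60 TB per day

print(f"ESnet average:  ~{esnet_avg:.0f} Gbps sustained")
print(f"N-Wave average: ~{nwave_avg:.1f} Gbps sustained")
```

Even as an average, the ESnet figure puts sustained science traffic within range of saturating a single 100 Gbps link, before accounting for bursts or protocol overhead.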

We should remember, though, that hyperscale is a means, not an end, says Information Age’s Ben Rossi. One of those ends should be the unification of data stores for multiple applications, such as production and analytics. This, along with converged infrastructure and dynamic data architectures, will allow the enterprise and cloud provider community to confront the Big Data loads that are coming their way without breaking their budgets or expanding hardware footprints to unsustainable levels.

While it may be tempting to think that building a hyperscale infrastructure will be a snap once networking is redeployed as a software construct, the fact is it will take quite a bit more than that. An optimal configuration will incorporate not only compute, networking and storage elements, but optimized middleware, governance and even applications.

And sadly, it doesn’t appear that the underlying infrastructure can be deployed first, leaving higher-order functions for later. Rather, the entire hyperscale stack needs to function as an integrated entity, albeit with the ability to seamlessly decouple key elements in order to develop unique architectures on the fly. That means organizations should have an entire top-to-bottom blueprint in hand before the first box is deployed.

This will not be an easy transition, but since current architectures are unsustainable in the era of Big Data, it is inevitable.

