Will the Cloud Overwhelm the Internet?

Some fear that increased traffic from the cloud will prove too much for the Internet to handle. Here's why that won't happen.

By Arthur Cole | Posted Jul 26, 2013

Even before SDN and the cloud came along, datacenter infrastructure was well along on the path to distributed architectures. As networking technology became faster and more efficient, it became increasingly cost-effective to separate server, storage and access devices, sometimes across great distances and between multiple providers.

Some fear, however, that as more and more data floods the global wide area network, capacity -- and therefore performance -- will suffer. The specter of traffic overload reared its ugly head again this week with the revelation that Google alone contributes about a quarter of all Internet traffic, a result of rich media services like YouTube and the fact that 60 percent of all electronic devices exchange data with Google every day.

But before we start to panic at the thought that worldwide networking capacity might become an endangered species, let's take a look at the larger picture. According to the latest research, the growth of both Internet traffic and available capacity is slowing: both are still expanding, just at a lower rate than a year ago. The key, though, is that while traffic is growing at about 35 percent per year, capacity remains a step ahead at 40 percent. Even more telling, total bandwidth has more than doubled over the past three years, to 77 Tbps.
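As a rough back-of-the-envelope illustration of why that gap matters, the short Python sketch below compounds the 35 percent traffic and 40 percent capacity growth rates over a few years. The 77 Tbps capacity figure comes from the research cited above; the starting traffic level is a hypothetical assumption chosen only to show how headroom widens when capacity grows faster than demand.

```python
# Back-of-the-envelope projection of Internet capacity vs. traffic.
# The 35%/40% growth rates and the 77 Tbps capacity figure are from the
# article; the assumed starting traffic level is hypothetical.

capacity_tbps = 77.0      # total available bandwidth today (article figure)
traffic_tbps = 50.0       # assumed current peak traffic (hypothetical)

TRAFFIC_GROWTH = 0.35     # annual traffic growth (article figure)
CAPACITY_GROWTH = 0.40    # annual capacity growth (article figure)

for year in range(1, 6):
    traffic_tbps *= 1 + TRAFFIC_GROWTH
    capacity_tbps *= 1 + CAPACITY_GROWTH
    headroom = capacity_tbps - traffic_tbps
    print(f"Year {year}: traffic {traffic_tbps:6.1f} Tbps, "
          f"capacity {capacity_tbps:6.1f} Tbps, headroom {headroom:6.1f} Tbps")
```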

This is exactly as it should be. It means that while providers are bringing capacity online at a pace that is consistent with demand, there is still plenty of bandwidth standing by in case total volume starts to climb at a faster pace.

At the same time, broadband performance is ramping up dramatically. According to Akamai, average connection speeds worldwide gained 17 percent, to just over 3 Mbps, while the average peak speed jumped 36 percent to 18.4 Mbps. The news is also good for high-speed services – those that clock in at 10 Mbps or higher – which gained 28 percent over last year, now accounting for about 13 percent of all connections worldwide. So even though demand is up, today’s networks are able to move packets at higher rates, boosting performance and energy efficiency in the process.

And if this weren't encouraging enough, a number of promising developments have come to light in recent days that could make things even better. One is a system called Remy, currently under development at MIT. Remy automatically generates TCP congestion control algorithms, the rules that govern how senders throttle and pace traffic to avoid bottlenecks, and it can tailor different approaches to specific network conditions far faster than human designers can. As such, it is likely to become a valuable tool as today's global infrastructure retools itself to accommodate the dynamic data environments fueled by virtualization and the cloud. In simulated high-speed network environments, Remy has doubled throughput while cutting latency by two-thirds.
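Remy's machine-generated rules aren't reproduced here, but the kind of algorithm it searches over is easy to illustrate. Below is a minimal Python sketch of additive-increase/multiplicative-decrease (AIMD), the textbook congestion-control behavior that hand-designed TCP variants build on; it shows what such a rule does, and is not Remy itself.

```python
# Minimal sketch of additive-increase/multiplicative-decrease (AIMD)
# congestion control, the textbook behavior underlying hand-tuned TCP
# variants. An illustration of what a congestion-control rule looks like,
# not Remy's machine-generated algorithm.

def aimd(rounds, loss_rounds, increase=1.0, decrease=0.5):
    """Evolve a congestion window over a series of round trips."""
    cwnd = 1.0                     # congestion window, in segments
    history = []
    for rtt in range(rounds):
        if rtt in loss_rounds:     # packet loss detected: back off sharply
            cwnd = max(1.0, cwnd * decrease)
        else:                      # no loss: probe for more bandwidth
            cwnd += increase
        history.append(round(cwnd, 1))
    return history

# Example: losses at round trips 10 and 20 produce the familiar sawtooth.
print(aimd(rounds=30, loss_rounds={10, 20}))
```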

Meanwhile, researchers at Boston University and USC are teaming up on a new fiber transmission technology based on “optical vortices” – literally “twisted light” – to improve throughput. Rather than pulsing optical wavelengths in straight, pre-determined lines, this new approach opts for a corkscrew data stream that takes advantage of the unique ways that photons move through space. At the moment, researchers are able to get about 400 Gbps from a single light color by “twisting” it in four different ways, while a dual-twist approach over 10 colors tops out at about 1.6 Tbps. Still unclear, though, is how well the system will perform on standard fiber cabling or whether new physical infrastructure would need to be deployed.
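For a sense of the arithmetic behind those figures, the sketch below treats each twist-and-color combination as an independent channel, so aggregate throughput is simply modes times wavelengths times the per-channel rate. The 400 Gbps and 1.6 Tbps totals are the ones reported above; the per-channel rates are inferred from them, not stated by the researchers.

```python
# Rough arithmetic behind the twisted-light throughput figures quoted above.
# Totals (400 Gbps, 1.6 Tbps) are from the article; per-channel rates are
# inferred assumptions.

def aggregate_gbps(twist_modes, wavelengths, per_channel_gbps):
    """Total throughput = independent channels x rate per channel."""
    return twist_modes * wavelengths * per_channel_gbps

# Four twist modes on a single color at ~100 Gbps each -> ~400 Gbps.
print(aggregate_gbps(twist_modes=4, wavelengths=1, per_channel_gbps=100))

# Two twist modes over ten colors at ~80 Gbps each -> ~1,600 Gbps (1.6 Tbps).
print(aggregate_gbps(twist_modes=2, wavelengths=10, per_channel_gbps=80))
```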

It seems, then, that both capacity and throughput will be adequate to support the cloud, mobilization, Big Data and all the other forces bearing down on the enterprise these days. But that doesn’t mean there still aren’t challenges ahead. With resources in abundance, development is starting to focus on utilization and efficiency as means to both cut costs and enhance productivity in an increasingly integrated, multitenant universe.

In other words, network resources are there for the taking. The question is, how much do you want to consume and how much are you willing to pay?
