Software-defined networking (SDN) is rightly seen as a way to make data center infrastructure more efficient and flexible. But it is also a key driver in the expansion of cloud computing, making it easier for organizations to host applications on the most appropriate resources, no matter where they reside.
A big problem with the public cloud has always been latency. Data always takes longer to travel miles between data centers than the few feet from the server room to the desktop. And since top cloud providers often employ data centers that span multiple acres and house thousands of pieces of equipment, just getting to the right cloud-based virtual machine can be a chore.
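The physics behind that gap is easy to sketch. The numbers below are purely illustrative, assuming light in optical fiber travels at roughly two-thirds the speed of light in a vacuum (about 200,000 km/s); real-world latency adds switching, queuing and virtualization overhead on top of this floor:

```python
# Illustrative propagation-delay estimate. The fiber speed figure
# (~200,000 km/s, roughly 2/3 of c) is an approximation, not a measurement.
SPEED_IN_FIBER_KM_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    """Convert a one-way distance into round-trip propagation delay (ms)."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

# A cloud data center 1,000 km away: ~10 ms of round-trip delay from
# physics alone, before any equipment touches the packet.
print(round_trip_ms(1_000))

# A server room 10 m (0.01 km) down the hall: ~0.0001 ms -- effectively zero.
print(round_trip_ms(0.01))
```

Even under these best-case assumptions, distance alone puts a remote data center several orders of magnitude behind the server down the hall.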
It’s no surprise, then, that leading cloud service providers (CSPs) are looking for any advantage they can find to one-up each other in terms of latency and overall performance.
Google recently updated its Andromeda SDN platform to provide what the company describes as a significant boost in network performance. The latest version, 2.1, is said to reduce latency some 40 percent over the previous version, which amounts to an eight-fold improvement over the original release introduced in 2014. The company says it has devised a new hypervisor bypass mechanism that allows VMs to communicate directly with the Andromeda software, essentially cutting out the network-switch middleman as data is transported between machines. In this way, network performance should improve for all applications, not just those with particular networking requirements.
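Those two figures hang together arithmetically. A quick back-of-the-envelope check, using only the numbers quoted above, shows what they imply about the release just before 2.1:

```python
# Back-of-the-envelope check of the quoted Andromeda figures.
# A 40 percent latency reduction means 2.1 keeps 60 percent of the
# previous version's latency.
REDUCTION = 0.40
remaining = 1 - REDUCTION  # 0.6 of the prior version's latency

# If 2.1 is an eight-fold improvement over the 2014 original, then the
# version just before it must already have been about 8 * 0.6 = 4.8x
# faster than that original.
improvement_vs_original = 8.0
prior_improvement = improvement_vs_original * remaining
print(prior_improvement)  # ~4.8
```

In other words, the quoted numbers imply Google had already cut latency nearly five-fold before this release; the hypervisor bypass closes the rest of the gap.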
When it comes to hybrid cloud performance, the more commonality there is between on-prem and public network architectures, the better. This is the thinking behind Cisco’s decision to map the APIs of its Application Centric Infrastructure to those of leading public clouds like Amazon, Google and Microsoft. This “bridging” will allow organizations to establish common network policy controls across distributed infrastructure, greatly reducing management burdens and improving performance across the board. It also makes it easier for organizations to establish unified cloud environments across multiple providers, enhancing services like backup, load balancing and resilience.
Flexibility is also a factor in building connectivity to and from the cloud. VMware recently announced plans to purchase SD-WAN startup VeloCloud, giving it an alternative to Cisco and other networking solutions for connecting virtual infrastructure in the cloud. VeloCloud uses standard wired or wireless broadband infrastructure rather than an MPLS connection to support SD-WAN services. This, in turn, provides greater bandwidth and improved visibility into wide area networks, delivering more streamlined access to the cloud and a high degree of resource orchestration.
One of VeloCloud’s customers, in fact, is Windstream, which recently expanded its SDN Orchestrated Waves (SDNow) platform to optical networks across the United States. The system can provision 10G point-to-point connectivity in as little as 20 days, allowing transport providers to boost network programmability across long-haul cloud infrastructure and streamlining tasks like cloud migration and digital transformation.
In a fast-paced digital economy, business models will live and die by their ability to connect disparate resources quickly and at low cost. Whether the application is communications, analytics, development or anything else that comes along, access to data will be the difference between success and failure in a highly connected world.
Even though latency is now measured in milliseconds, if not microseconds, any delay in the network chain is a limiting factor in the race to maximize performance.
Arthur Cole is a freelance journalist with more than 25 years’ experience covering enterprise IT, telecommunications and other high-tech industries.