Could Latency Kill the Cloud?
Latency and cloud go hand-in-hand, but it's the end users who will decide whether your cloud efforts are up to the task. Thankfully, there are workarounds if there is a problem.
The problem is latency, the delay as packets move over the various networks between the end user's computer, the cloud provider's application platform, and any storage networks.
One way to reduce latency is to choose a cloud service provider that operates from a state-of-the-art data center with a fast, efficient internal network and plenty of capacity. For end users located close to the provider's data center, this can make a significant difference to the latency experienced -- perhaps 150ms for a given transaction, compared to 300ms with a less well-equipped cloud service provider.
But here's the catch: connecting to those same two cloud service providers from another city might result in much the same latency for both -- perhaps 4,000ms. In other words, when cloud applications are accessed far from the data center that hosts them, the network might account for about 90 percent of the latency, while the data center infrastructure accounts for 10 percent or less.
Latency will always be worse when data has to travel longer distances, because its speed is ultimately limited by the speed of light. In practice, though, sheer distance is not the main problem: a huge proportion of latency comes from delays as packets pass through routers on each hop from source to destination. That means there is room to reduce latency significantly, even over long distances, if traffic can be routed more efficiently.
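To see why routing, not physics, dominates, consider a rough sketch. Light in fiber travels at roughly 200,000 km/s, so the propagation delay over even a long path is a small fraction of the latencies quoted above. The figures below are illustrative assumptions, not measurements from the article:

```python
# Illustrative sketch: how much of observed latency is physics vs routing?
# Light in fiber travels at roughly 200,000 km/s (about 2/3 of c in vacuum).

FIBER_KM_PER_S = 200_000

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical best-case round-trip time over fiber, in milliseconds."""
    return 2 * distance_km / FIBER_KM_PER_S * 1000

# Hypothetical example: a 6,000 km path between user and data center.
physics_floor = min_rtt_ms(6_000)   # 60 ms round trip at best
observed = 300.0                    # assumed measured RTT in ms
routing_overhead = observed - physics_floor

print(f"Propagation floor: {physics_floor:.0f} ms")
print(f"Routing/queuing overhead: {routing_overhead:.0f} ms")
```

Everything above the propagation floor is overhead from router hops and queuing -- the part that smarter routing can attack.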
One possible way to do this is offered by cloud hosting company Internap, using a technology it calls managed Internet route optimizer (MIRO). Put simply, Internap connects its data centers to a number of carrier networks (usually between five and 12). Its MIRO software then probes the network, analyzing the performance of all available routes to find the lowest-latency provider for each part of the Internet. This is an improvement on border gateway protocol (BGP), which makes routing decisions based principally on the number of AS-path hops between points.
"BGP calculates the best route using system hops as a proxy for performance," said Paul Carmody Internap's senior vice president of Product Management and Business Development. "But that is actually not a fantastic proxy. The fastest path is actually the fastest path."
MIRO results in around 300,000 changes to Internap's routing tables every day as it continually adjusts its routes to reflect the optimal path at any given moment. This typically reduces latency by around 20ms, Carmody said. It also enables the company to offer SLAs for latency, jitter and packet loss by geography, thanks to its in-depth knowledge of network conditions.
Another solution could be to use a cloud provider that owns its own network.
"Customers are certainly concerned about latency, and often they don't know what to expect," said Alex Watson-Jackson, IT Infrastructure and Services Solutions marketing manager at pan-European network operator Colt Technology Services. "But if we put your traffic on our network then we can measure latency and provide latency guarantees. We have a range of services that we can use. For example, we might use an MPLS (multi-protocol label switching) network and prioritize a customer's traffic, or we might switch the customer's data to an Ethernet circuit."
This is done using virtual private LAN service (VPLS), a VPN technology that provides Ethernet-based multipoint to multipoint communication over standard IP or MPLS networks. "Effectively this establishes a fixed tunnel between two points, so there are no routing decisions and no queues at routers, which reduces latency," he said.
More tricks of the trade
Aside from smarter routing, there are other tricks that service providers can perform to reduce latency. One that is commonly used is TCP window optimization. This involves adjusting the parameters that govern how much data can be sent before the sender must stop and wait for an acknowledgment from the other end of the network connection. Internap's Carmody said this can be highly effective.
"TCP is natively quite conservative, and by optimizing the TCP Window out of the gate we can improve network performance by a factor of between 1.5 and four," he said.
The advantage of this approach is that it doesn't involve the other end of the connection -- usually the end user accessing a cloud-based application -- having to do anything to see the performance improvement.
Another option is to use WAN acceleration appliances to optimize network communication, prefetch data and perform local caching. These usually require hardware to be installed at both ends of a connection; typically one at the cloud service provider's data center and one at the customer's premises. Software appliances can also be used on end user computers, but these are typically laptops used by mobile workers outside of the corporate network.
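The caching half of that approach is simple in principle: repeat requests are answered from a local store instead of paying the WAN round trip again. The toy class below is a hypothetical illustration of that idea, not any vendor's appliance:

```python
# Toy illustration of the local-caching idea behind WAN acceleration:
# serve repeat requests locally instead of re-fetching across the
# high-latency WAN. All names here are hypothetical.

class LocalCache:
    def __init__(self, fetch_remote):
        self.fetch_remote = fetch_remote  # the expensive WAN round trip
        self.store = {}
        self.wan_fetches = 0

    def get(self, key):
        if key not in self.store:             # miss: pay WAN latency once
            self.store[key] = self.fetch_remote(key)
            self.wan_fetches += 1
        return self.store[key]                # hit: local, near-zero latency

cache = LocalCache(lambda k: f"data for {k}")
for _ in range(3):
    cache.get("report.pdf")  # only the first call crosses the WAN

print(cache.wan_fetches)  # 1
```

Three identical requests cost one WAN round trip instead of three; real appliances add prefetching and protocol optimization on top of this basic pattern.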
The good news is that moving applications to the cloud doesn't always result in increased latency and degraded application performance. This is especially true for mobile workers on laptops -- or, increasingly, employees on iPads -- who connect to corporate applications in a data center from outside the corporate network.
"If everyone is tromboneing from around the world in to head office, your data center is probably going to have difficulty coping with that," said Watson-Jackson." You could well get an improvement by moving applications into the cloud. It's not cut and dried that the cloud always makes latency worse."
Paul Rubens has been covering IT security for over 20 years. In that time he has written for leading UK and international publications including The Economist, The Times, Financial Times, the BBC, Computing and ServerWatch.