Wallis Simpson, the Duchess of Windsor, famously said that you can never be too rich or too thin. In the modern enterprise, the corollary would be that data can never move too fast.
Network latency is a perennial challenge that, despite innovations in network abstraction and advanced fabric architectures, will likely remain at the top of the enterprise list of pet peeves for some time. Meanwhile, many applications and services are turning their focus toward real-time performance. Given the state of predictive analytics, it probably won’t be long before we start to see better-than-real-time functions as well.
But tackling network latency is not an easy task. It often has just as much to do with data management and architectural design as raw network throughput. Networks Asia reports that for a company like Facebook, the flow of traffic between servers can be as much as 20 times greater than that entering and leaving the data center. This means the company is continuously looking for new ways to reduce resource contention and prevent packet loss, considering that just a 0.001 percent error rate can significantly reduce the performance of a typical Ethernet switch.
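To see why such a small error rate matters, consider a rough back-of-envelope calculation using the classic Mathis et al. approximation for steady-state TCP throughput (the switch, MSS, and RTT figures below are illustrative assumptions, not measurements from any particular data center): because TCP throughput scales with the inverse square root of the loss rate, even a "negligible" 0.001 percent loss rate can cap a single flow far below line rate.

```python
import math

def mathis_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Approximate steady-state TCP throughput: (MSS / RTT) * C / sqrt(p)."""
    C = math.sqrt(3.0 / 2.0)  # constant for periodic loss in the Mathis model
    return (mss_bytes * 8 / rtt_s) * C / math.sqrt(loss_rate)

# Hypothetical east-west flow: 1,460-byte MSS, 0.5 ms data-center RTT,
# and the 0.001 percent (1e-5) error rate cited above.
cap = mathis_throughput_bps(1460, 0.0005, 1e-5)
print(f"per-flow ceiling ~ {cap / 1e9:.1f} Gbps")  # well under a 100 Gbps line rate
```

Under these assumed conditions a single flow tops out at single-digit gigabits per second on a 100 Gbps port, which is why hyperscale operators treat even tiny loss rates as a first-order performance problem.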
A key element in this strategy is the emergence of data virtualization (DV). According to Lakshmi Randall, director of product marketing at data virtualization developer Denodo Technologies, the ability to integrate data across diverse and distributed digital landscapes without having to make multiple copies not only improves access speeds but also benefits security, governance and data management, all while lowering overall storage costs. DV essentially creates a separate data access layer that can be extended across the enterprise, giving all users a direct view of the data they require rather than forcing them to go hunting for it among numerous storage systems. In this way, a single, centralized archive can support real-time performance for applications requiring structured, semi-structured and unstructured data even as volume and complexity increase.
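The access-layer idea can be sketched in a few lines of code. This is a hypothetical illustration of the pattern, not Denodo's API: a single facade answers queries by delegating to the systems that already hold the data, so nothing is copied into a central store.

```python
from typing import Any, Callable, Dict, Iterable, List

class VirtualDataLayer:
    """Illustrative data-virtualization facade: one query surface,
    many underlying systems of record, no data duplication."""

    def __init__(self) -> None:
        # Map a logical dataset name to a fetch function for the
        # system that actually holds it (SQL store, document DB, logs...).
        self._sources: Dict[str, Callable[[], Iterable[Any]]] = {}

    def register(self, name: str, fetch: Callable[[], Iterable[Any]]) -> None:
        self._sources[name] = fetch

    def query(self, name: str, predicate: Callable[[Any], bool]) -> List[Any]:
        # Data stays in place; only matching rows cross the layer.
        return [row for row in self._sources[name]() if predicate(row)]

# Usage: two unrelated "systems" exposed through one view.
layer = VirtualDataLayer()
layer.register("orders", lambda: [{"id": 1, "total": 40}, {"id": 2, "total": 90}])
layer.register("events", lambda: ["login", "logout"])
big_orders = layer.query("orders", lambda r: r["total"] > 50)
print(big_orders)  # [{'id': 2, 'total': 90}]
```

A production DV platform adds query optimization, caching and security policy on top of this basic delegation pattern, but the core benefit is the same: users address one logical layer instead of hunting across storage systems.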
This isn’t to say that basic infrastructure is not an important factor in reducing latency as well. For emerging constructs like the Internet of Things (IoT), getting the infrastructure right is the first step toward building a forward-leaning operational environment, says TechGenix’s Benjamin Roussey. A key consideration in the quest for low latency is the deployment of Data Acquisition Systems (DASs), which will be tasked with aggregating analog data from multiple sensors and converting it to the digital protocols governing the Internet and data center networks. This is a major challenge considering the different structural and timing characteristics of analog and digital streams, not to mention the vast number of sensors that are expected to populate the IoT before long, but it is a necessary step toward integrating the IoT edge and centralized data infrastructure.
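The aggregation step can be sketched as follows. This is a deliberately simplified illustration of what a DAS does, assuming idealized 12-bit analog-to-digital conversion and an invented payload format; no real acquisition product works exactly this way.

```python
import struct

def quantize(volts: float, v_min: float, v_max: float, bits: int = 12) -> int:
    """Map an analog reading onto a bits-wide integer code (ideal ADC)."""
    full_scale = (1 << bits) - 1
    code = round((volts - v_min) / (v_max - v_min) * full_scale)
    return max(0, min(full_scale, code))  # clamp out-of-range readings

def pack_frame(sensor_readings: dict) -> bytes:
    """Aggregate one acquisition cycle's samples into a single
    network-ready payload: big-endian (sensor_id, code) uint16 pairs."""
    frame = b""
    for sensor_id, volts in sorted(sensor_readings.items()):
        frame += struct.pack(">HH", sensor_id, quantize(volts, 0.0, 5.0))
    return frame

# Three hypothetical 0-5 V sensors sampled in one acquisition cycle.
payload = pack_frame({7: 0.0, 12: 2.5, 31: 5.0})
print(len(payload), payload.hex())
```

Even this toy version hints at the timing problem the article describes: the analog side delivers continuous signals on each sensor's own clock, while the digital side wants discrete, framed payloads, so the DAS must sample, quantize and batch without letting that conversion become a latency bottleneck of its own.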
In some cases, however, network latency is caused by the release of new apps and services that have not been adequately tested. This can happen with newly developed apps or legacy apps that suddenly find themselves exposed to high-speed networks. To counter this, Apposite Technologies has released the Netropy 100G network emulator capable of supporting wide-area Ethernet test environments ranging from 25 to 100 Gbps. This allows service providers, storage system manufacturers and app developers to pre-validate new releases for things like latency, packet loss and congestion prior to their introduction into live environments. The company expects the system to be a crucial tool for the development of highly replicated cloud environments and emerging east-west fabric architectures in the data center.
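The principle behind such pre-validation can be shown with a toy model (purely illustrative; nothing here reflects Apposite's implementation or the Netropy product): impose configurable delay and random loss on traffic in software, so an application can be exercised against degraded network conditions before it ever touches a live environment.

```python
import random

class LinkEmulator:
    """Toy network-condition emulator: fixed one-way delay plus
    independent random packet loss."""

    def __init__(self, delay_ms: float, loss_rate: float, seed: int = 0):
        self.delay_ms = delay_ms
        self.loss_rate = loss_rate
        self._rng = random.Random(seed)  # seeded for repeatable test runs
        self.delivered = []
        self.dropped = 0

    def send(self, packet) -> None:
        if self._rng.random() < self.loss_rate:
            self.dropped += 1  # packet lost in "transit"
        else:
            # Deliver the packet tagged with its emulated arrival delay.
            self.delivered.append((packet, self.delay_ms))

# Exercise an app's traffic against an 80 ms, 1 percent loss "WAN".
link = LinkEmulator(delay_ms=80.0, loss_rate=0.01)
for i in range(1000):
    link.send(i)
print(link.dropped, len(link.delivered))
```

A hardware emulator does this at line rate with far richer impairment models (jitter, reordering, congestion), but the test methodology is the same: if the app misbehaves at 80 ms and 1 percent loss in the lab, it will misbehave in production.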
The speed at which data is being generated, distributed and consumed is growing at a record pace and shows no sign of slowing down any time soon. As data infrastructure becomes increasingly intelligent, we can expect to see machine-driven processes start to eke tremendous productivity gains out of sub-microsecond improvements in latency.
This puts the entire data industry under the gun to drive all inefficiency out of network and data architectures as quickly as possible. The world has already come to expect data anywhere, anytime, on any device, and the tolerance for even the slightest delay is getting lower every day. In this day and age, if users cannot get what they want when they want it from one provider, they can easily get it from someone else.
Arthur Cole is a freelance journalist with more than 25 years’ experience covering enterprise IT, telecommunications and other high-tech industries.