Enterprise networking is all about speed, and the first thing that comes to most minds when talking about speed is bandwidth.
But while enterprises have made great strides in network bandwidth, moving from sub-Gbps speeds to 100 Gbps in less than a decade, the fact remains that bandwidth is only one of the tools in IT’s belt when it comes to enhancing the performance and availability of data and applications.
Indeed, it is starting to look like bandwidth may in fact play a diminishing role in enterprise networking, particularly as virtual and software-defined infrastructures take hold and more workloads are pushed to disparate, highly dynamic cloud architectures. According to a recent global survey conducted on behalf of Ipanema Technologies, application performance problems are becoming more common despite increased budgets aimed at boosting network bandwidth. While this could be the result of the typical lag between deployment and measurable results, it is nevertheless troubling that the most direct approach to network performance – simply adding bandwidth – is starting to show diminishing returns.
Small wonder, then, that so many organizations are turning to non-traditional approaches when it comes to data and application performance. App acceleration and WAN optimization tools have long been part of the enterprise arsenal, but it seems that lately there is a renewed effort to put these mechanisms in place as the cloud becomes an increasingly common platform from which to conduct data operations.
F5, for example, recently took the unusual step of unifying its application and WAN optimization systems to create the BIG-IP Application Acceleration Manager (AAM) platform. As described to ENP’s Sean Michael Kerner, the platform harnesses web enhancement techniques like minification and content transformation with transport optimization tools like Forward Error Correction (FEC) and adds support for Google’s new SPDY protocol to enable an all-purpose performance boost for applications that rely on disparate architectures.
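To illustrate one of the transport optimization techniques mentioned above: the core idea of Forward Error Correction is to send redundant data alongside the payload so a receiver can reconstruct a lost packet without a retransmission round trip. The sketch below shows the simplest form, a single XOR parity packet over a group of equal-length packets; it is a minimal illustration of the FEC principle, not a representation of how BIG-IP AAM actually implements it.

```python
def xor_parity(packets):
    """Compute a single XOR parity packet over equal-length data packets."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)


def recover(received, parity, lost_index):
    """Reconstruct one lost packet (marked None) by XOR-ing the parity
    with all the packets that did arrive."""
    missing = bytearray(parity)
    for idx, pkt in enumerate(received):
        if idx == lost_index or pkt is None:
            continue
        for i, byte in enumerate(pkt):
            missing[i] ^= byte
    return bytes(missing)


# One parity packet per group lets the receiver survive the loss of any
# single packet in that group at the cost of 1/N extra bandwidth.
packets = [b"abcd", b"efgh", b"ijkl"]
parity = xor_parity(packets)
restored = recover([packets[0], None, packets[2]], parity, 1)  # → b"efgh"
```

The trade-off is characteristic of the optimization-over-bandwidth theme: a small amount of extra data buys immunity to loss-induced latency spikes on links the enterprise does not control.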
Still, the moment applications leave the confines of the enterprise network for carrier infrastructure, all bets are off when it comes to speed and available bandwidth. That’s why companies like Instart Logic are investing so heavily in application streaming technology. The idea borrows from the web video sphere: users can launch an app and begin using key features even while the bulk of the code is still being downloaded. The platform uses a cloud/client architecture that links the client browser to the company’s cloud-based intelligent streaming service, which identifies crucial application components for priority delivery. As long as the application provider implements the proper DNS settings to direct traffic to the service, users can stream apps without code changes or user-device plug-ins.
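The priority-delivery idea described above can be sketched in a few lines: partition an application’s asset manifest into the components needed for first interaction and everything else, then deliver the critical set first. The component names and the critical set here are hypothetical, chosen only for illustration; Instart Logic’s service makes this determination automatically in the cloud rather than via a hand-maintained list.

```python
# Hypothetical set of components a user needs before the app is usable.
CRITICAL = {"core.js", "login.js", "app.css"}


def stream_order(manifest):
    """Return components critical-first, preserving order within each group,
    so the client can start running the app before the full download ends."""
    critical = [c for c in manifest if c in CRITICAL]
    deferred = [c for c in manifest if c not in CRITICAL]
    return critical + deferred


order = stream_order(["vendor.js", "core.js", "help.html", "app.css"])
# → ["core.js", "app.css", "vendor.js", "help.html"]
```

The payoff is perceived performance: total bytes transferred are unchanged, but time-to-interactive shrinks because nothing the user needs first is stuck behind bulk assets.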
Optimization technology is also playing a bigger role at the CPU level. To be sure, networking chips have always implemented a variety of performance-enhancing techniques along with the requisite bandwidth expansions, but as evidenced by Broadcom’s XLP900, lately designers are pulling out all the stops. Along with 160 Gbps application performance, the XLP900 features enhancements like a quad-issue, quad-threading, out-of-order execution architecture and 20 Gbps compression/decompression – all of which support compute performance in excess of 1 trillion operations per second. The chip also features 100 Gbps encryption and authentication and support for the Kernel-based Virtual Machine (KVM) and the open-source Quick EMUlator (QEMU).
Traditionally, optimization technologies were seen as tools to enable sufficient network performance without incurring the cost of fixed high-bandwidth solutions. But this was back in the days when enterprise networks consisted of point-to-point or, at best, point-to-multipoint architectures.
These days, with every laptop and smartphone serving as a network endpoint and cloud-based applications shared across numerous users on top of increasingly virtualized, highly dynamic infrastructures, optimization is taking on a prominent role in the quest to improve data productivity.
Speed still rules, but it is increasingly becoming a function of the efficient use of network resources, not the acquisition of new ones.