It can be pretty hard to get excited about load balancers. After all, the typical load balancer, often referred to as an ADC (Application Delivery Controller), is expected to churn away in the recesses of network infrastructure, taking care of mundane tasks to ensure that end users have reliable access to applications and services.
However, many of today’s ADCs harbor a dirty little secret. They tend to be hard to deploy, manage and tune. Adding insult to injury, ADCs often do a poor job of offering the analytics needed to track down performance problems.
Newcomer Avi Networks has entered the fray, bringing a new ADC that aims to eliminate many load balancing pain points by combining machine learning, advanced analytics and automation into both the control and data planes of load balancing.
The crowded ADC market
Today’s ADC market is crowded with both industry giants and scrappy startups, all claiming to have the best way to deliver applications using load-balancing technologies. On one side of the fence are the industry stalwarts, which offer high-performance hardware appliances that merge the control and data planes together to centralize the management and execution of load balancing. Companies such as F5 Networks, A10 Networks, Kemp Technologies, and Citrix have firmly planted their flag in the hardware appliance camp, while also offering virtual appliances to appease enterprises seeking cloud nirvana.
On the other side of the fence reside cloud-centric load balancers, ADCs and even content delivery controllers that live entirely in the cloud and essentially take on the role of proxies tuned to deliver applications. Some of those vendors, such as Incapsula, Lagrange Systems, and NGINX, have put most of their eggs into the cloud basket, offering virtual appliances that deliver applications from cloud infrastructure.
Cloud ADC vs. Onsite ADC
One of the biggest points of contention with deploying an ADC comes in the form of choosing between a cloud-based solution or an onsite/in-house implementation. Naturally, those that manufacture physical and virtual appliances extol the virtues of onsite implementations, where control, ownership, connectivity and maintenance all fall within the confines of a physical location, run by corporate IT. Onsite implementations do offer significant advantages for balancing loads for local applications.
In that situation, all of the applications, related servers and networking hardware can work together in a low-latency environment to deliver applications to users in an extremely efficient manner. But enterprises are moving away from single site applications and centralized data centers. Once you move resources into the cloud, a cloud-based delivery mechanism becomes more desirable and ultimately more efficient.
What’s more, a growing number of organizations are hopping on the “microservices” bandwagon, which leverages small applications that communicate with one another from different locations to create a unified application experience for the end user. That creates a situation where multiple ADCs must efficiently communicate with one another to deliver an acceptable end user experience.
Microservices often rely on API calls to execute, which in turn can be delivered from various servers, creating a strain on network operations and adding latency to the execution process.
Further complicating the ADC equation is the rise of mobile computing. In the past, users were predominantly tethered to their desks using desktop systems. From a load balancing standpoint, that created a relatively easy environment to support.
These days, however, many enterprises are moving to a “mobile first” ideology, eschewing traditional desktops in favor of a mobile workforce. Mobility further complicates the concept of application delivery, since ADCs now have to take into account the variables that mobility adds to the networking environment.
Providing acceptable performance in a mobile environment means that ADCs must be able to apply heuristics that can analyze users, their devices, the location of the application, details about the computing infrastructure, and any other element involved in the delivery of an application in order to create an adaptable load balancing solution that can react to changes in real time.
A closer look at Avi Networks v15.1
Sunnyvale, CA-based Avi Networks has built an ADC from the ground up that is capable of either being deployed on premises as a virtual appliance or placed out in the cloud as a hosted virtual appliance on AWS or VMware’s vCloud Air public cloud platform service.
These deployment options give Avi Networks an advantage over competitors by allowing customers to choose which implementation works best for their given environment. What’s more, by using the virtual appliance ideology, implementations can be moved from the cloud to onsite and vice versa, offering customers an upgrade (or downgrade) path that can meet the changing demands of business.
That same ideology fits squarely into Avi Networks’ concept of providing what the company calls a “next generation” ADC, which the company defines as an ADC that incorporates machine learning and advanced algorithms to leverage the way applications interact and communicate with one another. That, in turn, fuels self-tuning heuristics that can automatically scale workloads up or down based on real-time demands.
Avi Networks also differs from most competitors by separating the control and data planes, allowing each to use independent resources. This prevents high loads from impacting the solution’s ability to manage and assign loads. That separation also enhances continuity. For example, if the control plane fails, the data plane will still function and allow applications to be served. On the other hand, if one of the data plane elements fails, the control plane will automatically detect the failure and reroute application delivery as needed.
Avi Networks is able to separate the control and data planes by leveraging the ideologies of SDN (Software Defined Networks), where virtual components are built as portable services and can be quickly redefined, launched or modified to meet evolving network needs.
Hands-on with Avi Networks version 15.1
Installation of Avi Networks’ ADC involves installing virtual instances of the management/control planes and data planes on either a physical appliance or on the appropriate cloud-based hosts.
Avi Networks uses a Virtual Machine (VM) ideology, coupled with an SDN-based deployment. The respective ADC code for the individual planes is then deployed into the virtual machines, distributing the ADC functionality to the locations that are closest (via response time) to the applications and services being delivered to users. While that may be an oversimplification of the sophisticated capabilities offered by the Avi ADC, it is still important to note that separating the control and data planes brings with it significant advantages.
Nonetheless, the primary goal of the Avi Networks ADC is to bring load balancing and resiliency to line-of-business applications. Avi Networks makes that happen by incorporating machine learning techniques with the flexibility of SDN to create a highly interactive offering that merges management, diagnostics, reporting, load balancing and security.
While analytics proves to be an important element of Avi’s offering, one cannot forget the real reason why an ADC is deployed to begin with: to maximize the performance of applications, regardless of changes in load.
Here, Avi Networks makes the whole concept of load balancing easily understood. The product uses a paradigm that creates “pools” of virtual services. In essence, each virtual service that is displayed on the application dashboard contains all of the elements that make up the delivery of an application.
To add a new virtual service, one just has to launch the “New Virtual Service” wizard and offer the ADC some very basic information, such as the IP address of the application, the port it uses, and a name. The wizard then steps the administrator through choosing policies, implementing analytics and finalizing the basic setup.
The New Virtual Service wizard does an excellent job of directing the administrator to the most critical settings and offers suggestions throughout the setup process on how to best configure the service. What’s more, the wizard-based setup employs lookups to help locate existing services via URLs or IP addresses.
For most use cases, the defaults work well for deploying Avi’s product with excellent results. Nonetheless, the product also offers manual configuration of most, if not all, of the settings via the Avi UI or REST APIs.
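For administrators who prefer scripting over the wizard, the REST route might look something like the sketch below. To be clear, the endpoint path, field names and authentication scheme shown here are assumptions for illustration only, not Avi’s documented API schema; they simply mirror the basic inputs the wizard asks for (a name, the application’s IP address and its port).

```python
import json
import urllib.request


def build_virtual_service(name, ip_address, port, pool_servers):
    """Assemble a virtual-service definition from the same basic
    inputs the wizard collects. Field names are illustrative,
    not Avi's documented schema."""
    return {
        "name": name,
        "ip_address": ip_address,
        "services": [{"port": port}],
        "pool": {"servers": [{"ip": s} for s in pool_servers]},
    }


def submit(controller_url, token, payload):
    """POST the definition to a hypothetical controller endpoint."""
    req = urllib.request.Request(
        f"{controller_url}/api/virtualservice",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    return urllib.request.urlopen(req)


payload = build_virtual_service("intranet-portal", "10.0.0.40", 443,
                                ["10.0.1.11", "10.0.1.12"])
print(json.dumps(payload, indent=2))
```

The same payload could then be handed to `submit()` against a live controller; scripting the definition this way makes virtual services reproducible across environments in a way a wizard run is not.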
Virtual service pools are the foundation of Avi’s ideology, which is referred to as “Distributed Microservices.” These distributed microservices power the data plane of the application delivery platform and work hand-in-hand with Avi’s service engines to enhance application traffic. They can be combined to consolidate load balancing, application acceleration and application security into a unified delivery mechanism.
The basic framework is provided by the Hyperscale Distributed Resources Architecture (HYDRA), which allows administrators to manage application deployment services, such as load balancing and SSL termination, as well as extensive analytics information from those microservices and service engines.
The product offers advanced analytics, which Avi calls “Inline Analytics.” These allow administrators to delve into traffic statistics and access logs to determine how applications are performing and identify problems.
The application dashboard lets an administrator drill down into each defined virtual service, where submenus are presented that offer additional information. Here, graphic elements, such as charts, show the last six hours of activity for the virtual service. Administrators can also change the activity views to real time or to other lengths of time if they are researching issues that span long periods. Those graphical elements support additional drilldown, which allows administrators to delve deeper into events and the statistics surrounding those events. That proves to be a very powerful feature for those looking to troubleshoot application delivery problems. It’s something not found in the typical ADC.
Avi Networks offers dozens, if not hundreds, of ways to drill down into the gathered data to determine where any problems may lie with delivering applications, or even to conduct forensic research into application access.
Application load balancing (and SSL termination) is based upon defined policies (or templates) that offer standardized methods for assigning a “cost” to an application, its route and other factors. Those “costs” are used by the service engines and application controllers to determine the most efficient method to deliver an application. High “costs” can be used to initiate new virtual machines to autoscale applications experiencing increased loads. While “costs” are used behind the scenes, they correlate to a readily apparent, administrator-facing element called a “health score,” which is tied directly into performance analytics.
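The cost-to-autoscaling relationship described above can be sketched in a few lines. Avi does not publish its scoring formula, so the cost function and threshold below are invented purely to illustrate the idea: measurements per server are collapsed into a single “cost,” and a sustained high cost across the pool triggers a scale-up.

```python
def server_cost(latency_ms, active_connections, capacity):
    """Toy cost function: penalize slow responses and near-capacity
    servers. Avi's real scoring inputs are not public; this merely
    shows how measurements might map to a single 'cost' number."""
    utilization = active_connections / capacity
    return latency_ms * (1 + utilization)


def needs_scale_up(costs, threshold=150.0):
    """Trigger autoscaling when the average cost across the pool
    exceeds a configured threshold (threshold is arbitrary here)."""
    return sum(costs) / len(costs) > threshold


# Two busy servers: high latency combined with high utilization
pool_costs = [server_cost(80, 90, 100), server_cost(120, 95, 100)]
print(needs_scale_up(pool_costs))  # prints True: average cost is over 150
```

An inverse mapping of the same number would yield the administrator-facing “health score”: low cost, high health.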
Several templates are included with the product and available via a dropdown list during the configuration of an application pool. Several load balancing choices are available, including Round Robin, Least Connections and Source IP. Administrators can also assign “weight” factors to elements in the application pool, which are used to shift loads based upon administrator preferences.
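The administrator-assigned “weight” factors mentioned above are easiest to understand with a toy scheduler. The sketch below implements a naive weighted round robin (production ADCs use smoother interleaving, and this is not Avi’s implementation): a server with weight 3 simply receives three times the requests of a server with weight 1.

```python
import itertools


def weighted_round_robin(servers):
    """Yield servers proportionally to their weights.

    servers: list of (name, weight) pairs. This expansion-based
    approach is the simplest possible scheme; it sends runs of
    requests to the same server rather than interleaving them."""
    expanded = [name for name, weight in servers for _ in range(weight)]
    return itertools.cycle(expanded)


pool = [("app-1", 3), ("app-2", 1)]
scheduler = weighted_round_robin(pool)
first_eight = [next(scheduler) for _ in range(8)]
print(first_eight)  # app-1 is chosen three times as often as app-2
```

Shifting load is then just a matter of editing the weights, which matches the administrator-preference model described above.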
One of the most important aspects of an ADC is application security. For most products on the market, that means providing full end-to-end SSL session support, in addition to a method to accelerate that traffic. Simply put, a physical appliance must have sufficient horsepower to accelerate and terminate SSL connections without impacting performance.
For proprietary, hardware-based ADCs, SSL termination comes at a price, particularly when it comes to scaling SSL terminations. Most of those devices have to rely on internal ASICs or CPUs to encrypt and decrypt traffic, meaning that scaling up may require a hardware replacement.
Avi Networks takes an alternative approach that enables SSL acceleration to be scalable without requiring a rip and replace. Avi is able to accomplish that thanks to SDN technologies. The virtual appliance can simply be assigned more processing power without updating the management/control planes.
What’s more, Avi also offers support for ECC-based certificates as an alternative to RSA-based certificates. ECC-based encryption/decryption is less processor-intensive, making for a faster and more secure methodology for SSL encryption. It also improves battery life in mobile devices by lowering burdens on the CPU.
Avi Networks ADC feature set
As ADCs go, Avi offers an extensive feature set, centered on its ability to secure and load balance applications. Load balancing, security, acceleration and QoS features include:
- Protocols: TCP, UDP, HTTP, HTTPS, DNS, SPDY
- Algorithms: Avi ServerSaver, Least Connections, Least Load, Fastest Response, Consistent Hash, Round Robin, and Random Selection – with weighted priority built into each algorithm
- Persistence: Source IP Address, HTTP Cookie, Secure Cookie
- Server Health Monitoring: Ping, TCP, UDP, HTTP, HTTPS, DNS, External Script, Passive Inline
- Content Switching based on matches against HTTP and/or TCP/IP Headers
- SSL Offload: RSA (1K/2K/3K) + Elliptic Curve Cryptography (ECC) SECP256R1 / SECP384R1 / SECP521R1, Perfect Forward Secrecy (PFS), Strict Transport Security
- SSL server re-encryption
- IP, HTTP access control
- Policy-based client redirection, tracking
- Protocol validation
- DDoS mitigation: SYN flood, Slowloris, Slow POST, ICMP floods, etc.
- TCP Optimizations: TCP Proxy, TCP Multiplexing, Buffering, Connection Keep-alive, Window Scaling, Selective Acknowledgement, Fast Ramp
- HTTP request and connection multiplexing
- GZIP compression
- Content caching
- Rate control of throughput and number of connections
- Ability to limit maximum number of connections on a single IP
- DSCP tagging for higher network QoS
- QoS controls across virtual services on a single Service Engine
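Of the algorithms in the list above, “Consistent Hash” is the least self-explanatory, so a minimal sketch may help. The hash-ring implementation below is a generic illustration of the technique, not Avi’s code: each server is placed at many points on a ring, a client key maps to the next point clockwise, and adding or removing a server only remaps a small fraction of clients.

```python
import bisect
import hashlib


class ConsistentHashRing:
    """Minimal hash ring. Each server gets `replicas` points on the
    ring to even out the distribution; a client key is hashed and
    assigned to the first server point at or after that position."""

    def __init__(self, servers, replicas=100):
        self.ring = sorted(
            (self._hash(f"{server}:{i}"), server)
            for server in servers
            for i in range(replicas)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def lookup(self, client_key):
        # Wrap around to the start of the ring past the last point.
        idx = bisect.bisect(self.keys, self._hash(client_key)) % len(self.ring)
        return self.ring[idx][1]


ring = ConsistentHashRing(["app-1", "app-2", "app-3"])
print(ring.lookup("203.0.113.7"))  # the same client IP always maps to the same back end
```

That stability is why consistent hashing pairs well with the Source IP persistence option listed above: clients stick to a back end without the ADC having to store per-client state.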
Avi Networks successfully transforms ADC ideology by adopting SDN techniques. The platform effectively separates the data and control planes, making Avi’s ADC an excellent choice for distributed, cloud-enabled networks. The integrated advanced analytics allows administrators to stay one step ahead of application delivery problems, and the integrated machine learning reduces the amount of time an administrator has to spend fine-tuning application delivery.
All things considered, Avi Networks is worth a long hard look for those looking to transition their applications to the cloud and build hybrid solutions without the need for proprietary hardware.