Tuesday, January 24, 2017

What is an Application Delivery Controller - Part 1

A Little History

Application Delivery got its start in the form of network-based load
balancing hardware. It is the essential foundation on which Application Delivery
Controllers (ADCs) operate. The second iteration of purpose-built load balancing
(following application-based proprietary systems) materialized in the form of
network-based appliances. These are the true founding fathers of today's ADCs.
Because these devices were application-neutral and resided outside of the
application servers themselves, they could load balance using straightforward
network techniques. In essence, these devices would present a "virtual server"
address to the outside world, and when users attempted to connect, they would
forward the connection to the most appropriate real server, performing
bi-directional network address translation (NAT) along the way.

Figure 1: Network-based load balancing
appliances.
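The forwarding behavior described above can be sketched in a few lines of Python. This is an illustrative model only, not any vendor's implementation; the virtual address 203.0.113.10 is a made-up example, while the 172.16.1.x back-end addresses follow the ones used later in this article.

```python
import itertools

# Illustrative sketch: clients connect to one "virtual server" address;
# the device picks a real server and rewrites addresses in both
# directions (bi-directional NAT).

VIRTUAL_SERVER = ("203.0.113.10", 80)       # the address clients see
REAL_SERVERS = [("172.16.1.10", 80),        # the actual back-end hosts
                ("172.16.1.11", 80)]

_picker = itertools.cycle(REAL_SERVERS)     # simple round-robin selection
nat_table = {}                              # client -> chosen real server

def handle_new_connection(client_addr):
    """Pick a real server for this client and record the NAT mapping."""
    real = next(_picker)
    nat_table[client_addr] = real
    return real

def translate_outbound(client_addr):
    """Client -> server: the virtual destination becomes the real one."""
    return nat_table[client_addr]

def translate_inbound(client_addr):
    """Server -> client: replies appear to come from the virtual server."""
    return VIRTUAL_SERVER
```

Because the NAT table remembers which real server each client was assigned, both directions of the conversation stay consistent for the life of the connection.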

With the advent of virtualization and cloud computing, the third iteration of
ADCs arrived as software delivered virtual editions intended to run on
hypervisors. Virtual editions of application delivery services have the same
breadth of features as those that run on purpose-built hardware and remove much
of the complexity from moving application services between virtual, cloud, and
hybrid environments. They allow organizations to quickly and easily spin up
application services in private or public cloud environments.

Basic Application Delivery Terminology

It would certainly help if everyone used the same lexicon; unfortunately,
every vendor of load balancing devices (and, in turn, ADCs) seems to use
different terminology. With a little explanation, however, the confusion
surrounding this issue can easily be alleviated.

Node, Host, Member, and Server

Most ADCs have the concept of a node, host, member, or server; some have all
four, but they mean different things. There are two basic concepts that they all
try to express. One concept—usually called a node or server—is the idea of the
physical or virtual server itself that will receive traffic from the ADC. This
is synonymous with the IP address of the physical server and, in the absence of
an ADC, would be the IP address that the server name (for example, www.example.com) would resolve to. We will
refer to this concept as the host.

The second concept is a member (sometimes, unfortunately, also called a node
by some manufacturers). A member is usually a little more defined than a
server/node in that it includes the TCP port of the actual application that will
be receiving traffic. For instance, a server named www.example.com may resolve to an address of
172.16.1.10, which represents the server/node, and may have an application (a
web server) running on TCP port 80, making the member address 172.16.1.10:80.
Simply put, the member includes the definition of the application port as well
as the IP address of the physical server. We will refer to this as the service.
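The host/service distinction can be made concrete with a small data model. This is a hypothetical sketch for illustration, using the same www.example.com addresses as the paragraph above; the class names are ours, not any ADC vendor's terminology.

```python
from dataclasses import dataclass

# Illustrative model of the two concepts: a "host" is just the server's
# IP address; a "service" (member) adds the TCP port of the application
# that actually receives traffic.

@dataclass(frozen=True)
class Host:
    ip: str                      # e.g. what www.example.com resolves to

@dataclass(frozen=True)
class Service:
    host: Host
    port: int                    # the listening application's port

    def address(self):
        """The member address in the common IP:port notation."""
        return f"{self.host.ip}:{self.port}"

web = Service(Host("172.16.1.10"), 80)   # the example from the text
```

Note that many Service objects can share one Host, which is exactly the point of the distinction.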

Why all the complication? Because the distinction between a physical server
and the application services running on it allows the ADC to individually
interact with the applications rather than the underlying hardware or
hypervisor. A host (172.16.1.10) may have more than one service available (HTTP,
FTP, DNS, and so on). By defining each application uniquely (172.16.1.10:80,
172.16.1.10:21, and 172.16.1.10:53), the ADC can apply unique load balancing and
health monitoring based on the services instead of the host. However, there are
still times when being able to interact with the host (like low-level health
monitoring or when taking a server offline for maintenance) is extremely
convenient.
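A minimal sketch makes the difference between per-service and per-host actions clear. The dictionary layout and function names here are illustrative assumptions, not a real ADC API; the addresses are the ones from the paragraph above.

```python
# Illustrative sketch: because services are defined per IP:port, each one
# can be monitored and disabled independently, yet the ADC can still act
# on the whole host (every service on it) for maintenance.

services = {
    "172.16.1.10:80": {"proto": "HTTP", "healthy": True},
    "172.16.1.10:21": {"proto": "FTP",  "healthy": True},
    "172.16.1.10:53": {"proto": "DNS",  "healthy": True},
}

def mark_service_down(addr):
    """Per-service action: only this application stops receiving traffic."""
    services[addr]["healthy"] = False

def take_host_offline(host_ip):
    """Host-level action: disable every service on the host at once."""
    for addr, svc in services.items():
        if addr.split(":")[0] == host_ip:
            svc["healthy"] = False
```

A failed FTP health check, for example, removes only 172.16.1.10:21 from rotation while HTTP and DNS on the same host keep serving traffic.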

Most load balancing-based technology uses some concept to represent the host,
or physical server, and another to represent the services available on it—in
this case, simply host and services.

Pool, Cluster, and Farm

Load balancing allows organizations to distribute inbound application traffic
across multiple back-end destinations, including cloud deployments. It is
therefore a necessity to have the concept of a collection of back-end
destinations. Clusters, as we will refer to them (also known as pools or farms)
are collections of similar services available on any number of hosts. For
instance, all services that offer the company web page would be collected into a
cluster called "company web page" and all services that offer e-commerce
services would be collected into a cluster called "e-commerce."
The key element here is that all systems have a collective object that refers
to "all similar services" and makes it easier to work with them as a single
unit. This collective object—a cluster—is almost always made up of services, not
hosts.
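The cluster concept above can be sketched as a named collection whose members are services, not hosts. The cluster names come from the text; the member addresses and helper function are illustrative assumptions.

```python
# Illustrative sketch: a cluster (pool/farm) is a collective object that
# groups "all similar services" so they can be managed as a single unit.
# Its members are IP:port services, not hosts.

clusters = {
    "company-web-page": ["172.16.1.10:80", "172.16.1.11:80", "172.16.1.12:80"],
    "e-commerce":       ["172.16.1.11:443", "172.16.1.12:443"],
}

def hosts_in(cluster_name):
    """Distinct hosts backing a cluster; one host may serve many clusters."""
    return {member.split(":")[0] for member in clusters[cluster_name]}
```

Notice that 172.16.1.11 appears in both clusters: one physical host, two independently managed services.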

Virtual Server

In the world of virtualization, a virtual server often means a virtual
machine running on a hypervisor, but that is not what the term means here. In
ADC terminology, a virtual server is the address the ADC presents to clients on
behalf of a collection of back-end services. It is important to note that, like
the definition of services, a virtual server usually includes the application
port as well as the IP address. The term "virtual service" would be more in
keeping with the IP:port convention; but because most vendors, ADC and cloud
alike, use virtual server, this article uses virtual server as well.

Putting It All Together

Putting all of these concepts together makes up the basic steps in load
balancing. The ADC presents virtual servers to the outside world. Each virtual
server points to a cluster of services that reside on one or more physical
hosts.
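The steps above can be tied together in one short sketch: a virtual server advertised to clients maps to a cluster of services, and each new connection is balanced across that cluster. As before, this is an illustrative model, not a vendor implementation; 203.0.113.10 is a made-up client-facing address.

```python
import itertools

# Illustrative end-to-end sketch: virtual server -> cluster -> service.

clusters = {
    "company-web-page": ["172.16.1.10:80", "172.16.1.11:80"],
}
virtual_servers = {
    "203.0.113.10:80": "company-web-page",   # what the ADC advertises
}
_pickers = {name: itertools.cycle(members)   # per-cluster round-robin state
            for name, members in clusters.items()}

def route(virtual_addr):
    """Resolve one client connection to a service in the backing cluster."""
    cluster_name = virtual_servers[virtual_addr]
    return next(_pickers[cluster_name])
```

Each call to route() represents one inbound connection landing on the virtual server and being handed to the next service in the cluster.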