The Evolution of Application Delivery Controllers
As load balancers continue evolving into today's Application Delivery Controllers, it's easy to forget the basic problem for which load balancers were originally created: producing highly available, scalable, and predictable application services. Intelligent application routing, virtualized application services, and shared infrastructure deployments of ADCs can obscure these original goals, but it's still critical to understand the fundamental role load balancing plays in ADCs.
White Paper
Introduction

In today's dynamic world, organizations constantly face the challenge of delivering critical applications to millions of users around the globe. It is extremely important that they deploy network and application services to make apps scalable, secure, and available. This is one of the prime reasons behind the evolution of Application Delivery Controllers (ADCs). However, none of these features can be implemented without a firm basis in basic load balancing technology. So, let's begin with understanding the importance of load balancing, how it leads to efficient application delivery, and how ADCs are still evolving and embracing the cloud-first world.
One early approach was DNS round-robin. Under this method, the DNS server would assign a number of unique IP addresses to different servers under the same DNS name. This meant that when users requested resolution for www.example.com, the DNS server would pass each new connection to the next server in line until it reached the bottom of the list, before going back to the first server again. This solution was simple and led to an even distribution of connections across an array of machines.
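The rotation described above can be sketched in a few lines of Python; the class, the addresses, and the one-answer-per-query behavior are illustrative assumptions for this example, not part of any real DNS implementation:

```python
from itertools import cycle

# Hypothetical pool of server IPs registered under one DNS name.
POOL = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]

class RoundRobinDNS:
    """Minimal sketch of DNS round-robin: each resolution of the same
    name returns the next address in the list, wrapping around once
    the bottom of the list is reached."""

    def __init__(self, addresses):
        self._next = cycle(addresses)

    def resolve(self, name):
        # Real DNS servers typically rotate the whole record set;
        # handing out one address per query keeps the sketch simple.
        return next(self._next)

dns = RoundRobinDNS(POOL)
answers = [dns.resolve("www.example.com") for _ in range(6)]
```

Each server receives every third connection in turn, which is exactly the even distribution the text describes.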
The drawback was that DNS round-robin had no way of knowing if the servers listed were working or not. If a server became unavailable and a user tried to access it, the request might be sent to a server that was down.
The next evolution was proprietary, application-based load balancing built into the applications themselves. When users attempted to connect to the service, they connected to the pool IP instead of to the physical IP of the server. Whichever server in the pool responded to the connection request first would redirect the user to a physical IP address (either its own or another system in the pool), and the service session would start. One of the key benefits of this solution was that the application developers could use a variety of information to determine which physical IP address the client should connect to. For instance, they could have each server in the pool maintain a count of how many sessions each pool member was already servicing, then have any new requests directed to the least-utilized server.
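The least-utilized selection described above can be sketched as follows; the pool member names and session counts are hypothetical:

```python
# Each server tracks how many sessions every pool member is already
# servicing; new requests go to the member with the fewest sessions.
pool = {"app1": 12, "app2": 7, "app3": 9}  # name -> active sessions

def pick_least_utilized(pool):
    """Return the pool member currently servicing the fewest sessions."""
    return min(pool, key=pool.get)

target = pick_least_utilized(pool)
pool[target] += 1  # the new session is now counted against that member
```

Note that, as the text later points out, a raw connection count is not always a good indication of server load; application developers could substitute richer health signals here.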
Initially, the scalability of this solution was clear. All you had to do was build a new server, add it to the pool, and you grew the capacity of your application. Over time, however, the scalability of application-based load balancing came into question. Because the pool members needed to stay in constant contact with each other concerning who the next connection should go to, the network traffic between the pool members increased exponentially with each new server added to the pool. After the pool grew to a certain size (usually 5 to 10 hosts), this traffic began to impact end-user traffic as well as the processor utilization of the servers themselves. So, the scalability was great as long as you didn't need to exceed a small number of servers (incidentally, less than with DNS round-robin).
Compared with DNS round-robin, software load balancing dramatically increased high availability (HA). Because the pool members were in constant communication with each other, and because the application developers could use their extensive application knowledge to know when a server was running correctly, these solutions virtually eliminated the chance that users would ever reach a server that was unable to service their requests. It must be pointed out, however, that each iteration of intelligence-enabling HA characteristics had a corresponding server and network utilization impact, further limiting scalability. The other negative HA impact was in the realm of reliability. Many of the network tricks used to distribute traffic in these systems were complex and required considerable network-level monitoring. Accordingly, these distribution methods often encountered issues that affected the entire application and all traffic on the application network.
These solutions also enhanced predictability. Since the application designers knew when and why users needed to be returned to the same server instead of being load balanced, they could embed logic that helped to ensure that users would stay persistent if needed. They also used the same "pooling" technology to replicate user-state information between servers, eliminating many of the instances that required persistence in the first place. Lastly, because of their deep application knowledge, they were better able to develop load balancing algorithms based on the true health of the application instead of things like connections, which were not always a good indication of server load.

Besides the potential limitations on true scalability and issues with reliability, proprietary application-based load balancing also had one additional drawback: it was reliant on the application vendor to develop and maintain. The primary issue here was that not all applications provided load balancing (or pooling) technology, and those that did often did not work with those provided by other application vendors. While there were several organizations that produced vendor-neutral, OS-level load balancing software, they unfortunately suffered from the same scalability issues. And without tight integration with the applications, these software "solutions" also experienced additional HA challenges.
The answer was purpose-built, network-based load balancing hardware. The load balancer could control exactly which server received which connection and employed "health monitors" of increasing complexity to ensure that the application server (a real, physical server) was responding as needed. If the server was not responding correctly, the load balancer would automatically stop sending traffic to that server until it produced the desired response. Although the health monitors were rarely as comprehensive as the ones built by the application developers themselves, the network-based hardware approach could provide basic load balancing services to nearly every application in a uniform, consistent manner, finally creating a truly virtualized service entry point unique to the application servers.
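A minimal health monitor of the kind described above might look like the following sketch; the function names are illustrative, and a real ADC's monitors would check far more than TCP reachability (HTTP status codes, response content, timing):

```python
import socket

def tcp_health_check(host, port, timeout=2.0):
    """Simplest form of health monitor: can we open a TCP connection
    to the application server at all?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def healthy_members(pool):
    """Filter the pool down to members that pass the monitor, so the
    load balancer stops sending traffic to servers that are down."""
    return [(host, port) for (host, port) in pool
            if tcp_health_check(host, port)]
```

A monitor loop like this is what lets the device automatically withhold traffic from a failed server until it produces the desired response again.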
Similarly, HA helped reduce the complexity of the solution and provide application-impartial load balancing, leading to greater reliability and increased depth as a solution. Network-based load balancing hardware enabled the business owner to provide a high level of availability to all their applications instead of the select few with built-in load balancing.
The advent of the network-based load balancer and virtualization brought about new benefits for security and management, such as masking the identity of application servers from the Internet community and providing the ability to "bleed" connections from a server so it could be taken offline for maintenance without impacting users. This is the basis from which ADCs originated.
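The "bleeding" behavior can be illustrated with a small sketch; the pool structure and field names are assumptions made for this example:

```python
# A member marked as draining accepts no new connections but keeps
# its existing sessions until they finish, so the server can be
# taken offline for maintenance without impacting users.
pool = {
    "app1": {"draining": False, "sessions": 4},
    "app2": {"draining": True,  "sessions": 2},  # being bled
}

def eligible_for_new_connections(pool):
    """Members that may still receive new connections."""
    return [name for name, member in pool.items()
            if not member["draining"]]

def safe_to_take_offline(member):
    """A draining member with no remaining sessions can go offline."""
    return member["draining"] and member["sessions"] == 0
```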
Proxy technology is at the heart of the load balancing device evolving to an extensible ADC platform. Simply put, proxy is the basis for load balancing and the underlying technology that makes ADCs possible.
When discussing ADC security, the virtualization created by proxy (the base technology) is critical. Whether we discuss SSL/TLS encryption offload, centralized authentication, or even application-fluent firewalls, the power of these solutions lies in the fact that a load balancer (hardware or virtual edition) is the aggregate point of virtualization across all applications. Centralized authentication is a classic example. Traditional authentication and authorization mechanisms have always been built directly into the application itself. Like application-based load balancing, each implementation was dependent on and unique to each application's implementation, resulting in numerous and different methods. Instead, by applying authentication at the virtualized entry point to all applications, a single, uniform method of authentication can be applied. Not only does this drastically simplify the design and management of the authentication system, it also improves the performance of the application servers themselves by eliminating the need to perform that function. Furthermore, it eliminates the need, especially in home-grown applications, to spend the time and money to develop authentication processes in each separate application.
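A toy sketch of authentication at the virtualized entry point follows; the token check, header name, and handler shape are hypothetical stand-ins for a real authentication mechanism:

```python
# One uniform auth check at the entry point: back-end applications
# never see unauthenticated requests and need no auth code of their own.
VALID_TOKENS = {"secret-token"}  # illustrative credential store

def authenticate(headers):
    """Uniform check the proxy performs for every application behind it."""
    return headers.get("Authorization") in VALID_TOKENS

def handle(request, forward):
    """Reject at the entry point, or forward to the chosen back end."""
    if not authenticate(request["headers"]):
        return {"status": 401}  # stopped before reaching any server
    return forward(request)     # only authenticated traffic proceeds
```

Because every application shares this single entry point, changing the authentication method means changing it once, not once per application.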
Availability is the easiest ADC attribute to tie back to the original load balancer, as it relates to all the basic load balancer attributes: scalability, high availability, and predictability. However, ADCs take this even further than the load balancer did. Availability for ADCs also represents advanced concepts like application dependency and dynamic provisioning. ADCs are capable of understanding that applications now rarely operate in a self-contained manner: they often rely on other applications to fulfill their design. This knowledge increases the ADC's capability to provide application availability by taking these other processes into account as well. The most intelligent ADCs on the market also provide programmatic interfaces that allow them to dynamically change the way they provide services based on external input. These interfaces enable dynamic provisioning and the automated scale up/down required for modern environments like cloud and containerized deployments.
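One way such a programmatic interface might drive scale up/down decisions is sketched below; the function, its parameters, and all thresholds are illustrative assumptions, not any vendor's API:

```python
import math

def desired_replicas(current, avg_utilization,
                     target=0.6, min_n=2, max_n=10):
    """Sketch of a dynamic-provisioning decision: size the pool so
    average utilization approaches a target level, clamped to sane
    bounds. All thresholds here are illustrative."""
    wanted = math.ceil(current * avg_utilization / target)
    return max(min_n, min(max_n, wanted))
```

External input (here, average utilization) flows in through the interface, and the scaling decision flows back out to whatever provisions the servers.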
Performance is a similar concept. Load balancers inherently improved performance of applications by ensuring that connections were not only directed to services that were available (and responding in an acceptable timeframe), but also to the services with the least amount of connections and/or processor utilization. This ensured that each new connection was being serviced by the system that was best able to handle it. Later, as SSL/TLS offload (using dedicated hardware) became a common staple of load balancing offerings, it reduced the computational overhead of encrypted traffic as well as the load on back-end servers, improving their performance as well.
Today's ADCs go even further. These devices often include caching, compression, and even rate-shaping technology to further increase the overall performance and delivery of applications. In addition, rather than being the static implementations of traditional standalone appliances providing these services, an ADC can use its innate application intelligence to only apply these services when they will yield a performance benefit, thereby optimizing their use.
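The idea of applying a service only when it yields a benefit can be illustrated with compression; the content-type test and size threshold below are illustrative assumptions:

```python
import gzip

def maybe_compress(body, content_type):
    """Apply compression only where it is likely to help: compressing
    already-compressed media (images, video) wastes CPU for little
    gain, so only sufficiently large text responses are gzipped.
    The 1 KiB threshold is illustrative."""
    if content_type.startswith("text/") and len(body) >= 1024:
        return gzip.compress(body), True
    return body, False
```

A real ADC makes richer decisions (client capabilities, link speed, cache state), but the principle is the same: intelligence at the delivery point decides when a service is worth applying.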
The cloud isn't an amorphous single entity of shared compute, storage, and networking resources; rather, it is composed of a complex mix of providers, infrastructures, technologies, and environments that are often deployed in multiple global nodes. This is the reason many enterprises have deployed applications into a number of different clouds: public, private, and even combinations of them all. This is multi-cloud: the new reality.

Even with this rapidly evolving landscape, several factors are slowing cloud adoption. The first challenge is multi-cloud sprawl, where existing applications have been lifted and shifted, and born-in-the-cloud applications have been deployed in an unplanned and unmanaged manner. In addition, to meet their short-term needs, organizations tend to use disparate cloud platforms, different architectures, varying application services, and multiple toolsets. This results in architectural complexity across the enterprise, and makes shifting applications from one environment to another much more difficult, not to mention expensive.
Figure 3: Every cloud architecture is different in how it operates, how it is managed, and its levels of visibility.
Despite these challenges, new deployments in and across public and private clouds will inevitably increase in the coming years. Multi-cloud is fast approaching, and it's time for enterprises to deploy smart application services and ADCs that do more than support limited applications, or operate only in hardware and a single cloud.
Summary
ADCs are the natural evolution of the critical network real estate that the load balancers of the past held. While ADCs owe a great deal to those bygone devices, they are a distinctly new breed, providing not just availability, but performance and security. As their name suggests, they are concerned with all aspects of delivering an application in the best way possible.
In this rapidly changing technical world, ADCs are also more capable of adapting themselves to the newest technologies in multi-cloud and container environments.
In the end, we can safely say that ADCs will not only be the primary conduit and integration point through which applications will be delivered in a faster, smarter, and safer way, but will continue to evolve and embrace the cloud-first world.
F5 Networks, Inc.
401 Elliott Avenue West, Seattle, WA 98119 | 888-882-4447 | f5.com
Americas: info@f5.com | Asia-Pacific: apacinfo@f5.com | Europe/Middle-East/Africa: emeainfo@f5.com | Japan: f5j-info@f5.com
©2017 F5 Networks, Inc. All rights reserved. F5, F5 Networks, and the F5 logo are trademarks of F5 Networks, Inc. in the U.S. and in certain other countries. Other F5 trademarks are identified at f5.com. Any other products, services, or company names referenced herein may be trademarks of their respective owners with no endorsement or affiliation, express or implied, claimed by F5. CS01-00095 0113