
Application Delivery Control: More Hardware Or Function Of the Hypervisor?

Update: Oops. I forgot to announce that I'm once again putting on my Devil's Advocacy cap. It fits nicely, and the contrasting color makes my eyes pop. ;)

It should be noted that obviously I recognize that dedicated hardware offers performance and scale capabilities that in many cases are difficult (if not impossible) to replicate in virtualized software instantiations of the same functionality.

However, despite spending the best part of two years raising awareness of the issues surrounding the scalability, resiliency, performance, etc. of security software solutions in virtualized environments via my Four Horsemen of the Virtualization Security Apocalypse presentation, perception is different from reality, and many network capabilities will simply consolidate into the virtualization platforms until the next big swing of the punctuated equilibrium.

This is another classic example of "best of breed" versus "good enough," and in many cases this debate becomes a corner-case argument of speeds and feeds and the context/location of the network topology you're talking about. There's simply no way to sprinkle enough specialized hardware around to get the pervasive autonomics across the entire fabric/cloud without a huge chunk of it existing in the underlying virtualization platform or underlying network infrastructure.

THIS is the real scaling problem that software can address (by penetration) that specialized hardware cannot.

There will always be a need for dedicated hardware for specific needs, and if you have an infrastructure service issue that requires massive hardware to support traffic loads until the sophistication and technology within the virtualization layer catches up, by all means use it!  In fact, just today, after writing this piece, Joyent announced they use F5 BIG-IPs to power their IaaS cloud service…

In the longer term, however, application delivery control (ADC) will ultimately become a feature of the virtual networking stack provided by software as part of a larger provisioning/governance/autonomics challenge addressed by the virtualization layer.  If you're going to get as close as possible to this new atomic unit of measurement, the VM, you're going to have to decide where the network ends and the virtualization layer begins…across every cloud you expect to host your apps and those they may transit.


I've been reading Lori McVittie's f5 DevCentral blog for quite some time.  She and Greg Ness have been feeding off one another's commentary in their discussion on "Infrastructure 2.0" and the unique set of challenges that the dynamic nature of virtualization and cloud computing place on "the network" and the corresponding service layers that tie applications and infrastructure together.

The interesting thing to me is that while I do not disagree that the infrastructure must adapt to the liquidity, agility and flexibility enabled by virtualization and become more instrumented as to the things running atop it, much of the functionality Greg and Lori allude to will ultimately become a function of the virtualization and cloud layers themselves*.

One of the more interesting memes is the one Lori summarized this morning in her post titled "Managing Virtual Infrastructure Requires an Application Centric Approach," wherein she lays out the case for infrastructure becoming "application" centric based upon the "highly dynamic" nature of virtualized and cloud computing environments:

…when applications are decoupled from the servers on which they are deployed and the network infrastructure that supports and delivers them, they cannot be effectively managed unless they are recognized as individual components themselves.

Traditional infrastructure and its associated management intrinsically ties applications to servers and servers to IP addresses and IP addresses to switches and routers. This is a tightly coupled model that leaves very little room to address the dynamic nature of a virtual infrastructure such as those most often seen in cloud computing models.

We've watched as SOA was rapidly adopted and organizations realized the benefits of a loosely coupled application architecture. We've watched the explosion of virtualization and the excitement of de-coupling applications from their underlying server infrastructure. But in the network infrastructure space, we still see applications tied to servers tied to IP addresses tied to switches and routers.

That model is broken in a virtual, dynamic infrastructure because applications are no longer bound to servers or IP addresses. They can be anywhere at any time, and infrastructure and management systems that insist on binding the two together are simply going to impede progress and make managing that virtual infrastructure even more painful.

It's all about the application. Finally.

…and yet the applications themselves, despite how integrated they may be, suffer from the same horizontal management problem as the network today does.  So I'm not so sure about the finality of the "it's all about the application" because we haven't even solved the "virtual infrastructure management" issues yet.
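To make that decoupling concrete, here's a minimal sketch (all names, fields, and addresses are hypothetical, not any vendor's API) of what treating the application as the managed component looks like in software: the application is tracked by name, its current endpoints are resolved at request time, and a VM migration only updates a record rather than breaking a hard-wired IP binding.

```python
# Sketch only: an application registry that binds consumers to application
# names rather than to servers or IP addresses.

from dataclasses import dataclass, field


@dataclass
class ApplicationRecord:
    """An application tracked as a component, independent of where it runs."""
    name: str
    endpoints: set[str] = field(default_factory=set)  # current, dynamic locations

    def migrate(self, old: str, new: str) -> None:
        # A VM move (vMotion, re-provisioning, etc.) only updates the record;
        # nothing upstream is tied to the old address.
        self.endpoints.discard(old)
        self.endpoints.add(new)


registry: dict[str, ApplicationRecord] = {}


def register(app: str, endpoint: str) -> None:
    registry.setdefault(app, ApplicationRecord(app)).endpoints.add(endpoint)


def resolve(app: str) -> set[str]:
    """Callers ask for the application by name, never for a hard-coded IP."""
    return registry[app].endpoints


# Usage: the app moves, but consumers keep resolving it by name.
register("billing", "10.0.1.15:8443")
registry["billing"].migrate("10.0.1.15:8443", "10.0.7.2:8443")
print(resolve("billing"))  # {'10.0.7.2:8443'}
```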

Bridging the gap between where we are today and the infrastructure 2.0/application-centric focus of tomorrow is illustrated nicely by Billy Marshall from rPath in his post titled "The Virtual Machine Tsunami," in which he describes how we're really still stuck with the VM as the unit of measure for application management:

Bottom line, we are all facing an impending tsunami of VMs unleashed by an unprecedented liquidity in system capacity which is enabled by hypervisor based cloud computing. When the virtual machine becomes the unit of application management, extending the legacy, horizontal approaches for management built upon the concept of a physical host with a general purpose OS simply will not scale. The costs will skyrocket.

The new approach will have vertical management capability based upon the concept of an application as a coordinated set of version managed VMs. This approach is much more scalable for 2 reasons. First, the operating system required to support an application inside a VM is one-tenth the size of an operating system as a general purpose host atop a server. One tenth the footprint means one tenth the management burden – along with some related significant decrease in the system resources required to host the OS itself (memory, CPU, etc.). Second, strong version management across the combined elements of the application and the system software that supports it within the VM eliminates the unintended consequences associated with change. These unintended consequences yield massive expenses for testing and certification when new code is promoted from development to production across each horizontal layer (OS, middleware, application). Strong version management across these layers within an isolated VM eliminates these massive expenses.

So we still have all the problems of managing the applications atomically, but I think there's some general agreement between these two depictions.
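For illustration, here's a hedged sketch of Marshall's "application as a coordinated set of version-managed VMs" idea. It assumes nothing about rPath's actual tooling; the names and versions are invented. The point is simply that the unit of versioning and promotion becomes the whole vertical stack inside the VMs, not each horizontal layer patched independently on each host.

```python
# Sketch only: model an application release as an immutable, versioned set of VMs.

from dataclasses import dataclass


@dataclass(frozen=True)
class VMImage:
    role: str        # e.g. "web", "app", "db"
    os_version: str  # the slimmed-down, just-enough OS inside the VM
    app_version: str


@dataclass(frozen=True)
class ApplicationRelease:
    """Vertical unit of management: the app and its VMs versioned together."""
    name: str
    version: str
    vms: tuple[VMImage, ...]


release_1_4 = ApplicationRelease(
    name="orders",
    version="1.4.0",
    vms=(
        VMImage("web", "jeos-2.6", "1.4.0"),
        VMImage("app", "jeos-2.6", "1.4.0"),
    ),
)

# Promoting dev -> prod means deploying this exact, immutable release as a unit,
# rather than re-testing OS, middleware and application changes layer by layer.
```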

However, where it gets interesting is where Lori essentially paints the case that "the network" today is unable to properly provide for the delivery of applications:

And that's what makes application delivery focused solutions so important to both virtualization and cloud computing models in which virtualization plays a large enabling role.

Because application delivery controllers are more platforms than they are devices; they are programmable, adaptable, and internally focused on application delivery, scalability, and security. They are capable of dealing with the demands that a virtualized application infrastructure places on the entire delivery infrastructure. Where simple load balancing fails to adapt dynamically to the ever changing internal network of applications both virtual and non-virtual, application delivery excels.

It is capable of monitoring, intelligently, the availability of applications not only in terms of whether it is up or down, but where it currently resides within the data center. Application delivery solutions are loosely coupled, and like SOA-based solutions they rely on real-time information about infrastructure and applications to determine how best to distribute requests, whether that's within the confines of a single data center or fifteen data centers.

Application delivery controllers focus on distributing requests to applications, not servers or IP addresses, and they are capable of optimizing and securing both requests and responses based on the application as well as the network.

They are the solution that bridges the gap that lies between applications and network infrastructure, and enables the agility necessary to build a scalable, dynamic delivery system suitable for virtualization and cloud computing.

This is where I start to squint a little, because Lori is really taking the notion of "application intelligence" and painting what amounts to a router/switch, the application delivery controller, as a "platform" as she attempts to drive a wedge between an ADC and "the network."

Besides the fact that "the network" is also rapidly evolving to adapt to this more loosely-coupled model and the virtualization layer, and that the traditional networking functions and infrastructure service layers are becoming more integrated and aware thanks to the homogenizing effect of the hypervisor, I'll ask the question I asked Lori on Twitter this morning:

[Screenshot: the Twitter exchange posing the ADC question to Lori]

Why won't this ADC functionality simply show up in the hypervisor?  If you ask me, that's exactly the goal.  vCloud, anyone?  Amazon EC2?  Azure?
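To show why that question is plausible, here's a minimal, entirely hypothetical sketch of ADC-style behavior expressed as ordinary software: application-level health checks plus request distribution by application rather than by host or IP. Nothing in it requires an appliance; the endpoint list could just as easily be fed by the hypervisor's inventory as by a box in the rack.

```python
# Sketch only: application-aware request distribution as plain software.

import itertools
import urllib.request


def healthy(endpoint: str) -> bool:
    """Application-level check: is the app answering, not just is the host up."""
    try:
        with urllib.request.urlopen(f"http://{endpoint}/health", timeout=1) as r:
            return r.status == 200
    except OSError:
        return False


def make_distributor(endpoints: list[str]):
    """Round-robin over the instances that currently pass the application check."""
    cycler = itertools.cycle(endpoints)

    def next_endpoint():
        for _ in range(len(endpoints)):
            candidate = next(cycler)
            if healthy(candidate):
                return candidate
        return None  # nothing healthy: surface the failure instead of guessing

    return next_endpoint


# Usage: the endpoint list comes from the virtualization layer's inventory,
# so a VM that moves simply shows up at its new address.
pick = make_distributor(["10.0.7.2:8443", "10.0.9.4:8443"])
target = pick()
```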

If we take the example of Cisco and VMware, the coupled vision of the networking and virtualization 800 lb gorillas is exactly the same as she pens above; but it goes further because it addresses the end-to-end orchestration of infrastructure across the network, compute and storage fabrics.

So, why do we need yet another layer of network routers/switches called "application delivery controllers" as opposed to having this capability baked into the virtualization layer or ultimately the network itself?

That's the whole point of cloud computing and virtualization, right?  To decouple the resources from the hardware delivering them while putting more and more of that functionality into the virtualization layer?

So, can you really make the case for deploying more "application-centric" routers/switches (which is what an application delivery controller is) regardless of how aware it may be?

/Hoff

  1. December 1st, 2008 at 05:44 | #1

    There you go again, Chris, driving us to abstraction. 🙂 You've got a point, though — do we keep consolidating all "director/balancer" functions into the same service? If so, who balances the balancers (to keep them from becoming a single point of failure)?

  2. December 2nd, 2008 at 03:22 | #2

    Chris, your last questions at the bottom of the piece really sum up the position that we at Zeus are taking.
    I don't think anyone can deny the importance of ADC functionality; more and more organisations are using this "intelligence wrap" around their online applications.
    However, a comment that is often made around here is that "tin is too heavy for the cloud". So how do you take the intelligence of the ADC into the cloud with you? Of course the answer is you have to deliver the intelligence as software, either within the cloud as a service, or you deploy your own software.
    Here comes the plug…
    Zeus Technology are the only vendor in the space with a software ADC product, enabling you to fully embrace virtualisation/cloud computing without having the anchor of some legacy network appliance weighing you down…!
    Cheers
    Nick

  3. February 25th, 2009 at 09:45 | #3

    what is then the real point of cloud computing and virtualization?
