
Virtualized Hypervisor-Neutral Application/Service Delivery = Real Time Infrastructure…

I was having an interesting discussion the other evening at BeanSec with Jeanna Matthews from Clarkson University.  Jeanna is one of the authors of what I think is the best book available on Xen virtualization, Running Xen.

In between rounds of libations, the topic of hypervisor-neutral VM portability/interoperability between the virtualization players came up.  If I remember correctly, we were discussing the announcement from Citrix regarding Project Kensho:

Santa Clara, CA » 7/15/2008 » Citrix Systems, Inc. (Nasdaq:CTXS), the global leader in application delivery infrastructure, today announced “Project Kensho,” which will deliver Open Virtual Machine Format (OVF) tools that, for the first time, allow independent software vendors (ISVs) and enterprise IT managers to easily create hypervisor-independent, portable enterprise application workloads. These tools will allow application workloads to be imported and run across Citrix XenServer™, Microsoft Windows Server 2008 Hyper-V™ and VMware™ ESX virtual environments.

On the surface, this sounded like a really interesting and exciting development regarding interoperability between virtualization platforms and the VMs that run on them.  Digging deeper, however, it’s not really about virtualization at all; it’s about the delivery of applications and services — almost in spite of the virtualization layer — which is something I hinted about at the end of this post.

I am of the opinion that virtualization is simply a means to an end, a rationalized and cost-driven stepping-stone along the path of designing, provisioning, orchestrating, deploying, and governing a more agile, real time infrastructure to ensure secure, resilient, cost-effective and dynamic delivery of service.

You might call the evolution of virtualization and what it’s becoming cloud computing.  You might call it utility computing.  You might call it XaaS.  What many call it today is confusing, complex, proprietary and a pain in the ass to manage.

Thus, per the press release regarding Project Kensho, the notion of packaging applications/operating environments up as tasty little hypervisor-neutral nuggets in the form of standardized virtual appliances that can run anywhere on any platform is absolutely appealing and, in the long term, quite necessary.*
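For a concrete sense of what one of these "hypervisor-neutral nuggets" looks like on the wire, here's a minimal sketch in Python that emits a stripped-down OVF descriptor. The namespace follows the DMTF OVF 1.0 schema, the appliance name and disk file are made up for illustration, and a real descriptor carries far more (hardware sections, product info, a signed manifest):

```python
import xml.etree.ElementTree as ET

# Namespace from the DMTF OVF 1.0 specification; earlier drafts of the
# format used a different (VMware-hosted) namespace.
OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"
ET.register_namespace("ovf", OVF_NS)


def build_descriptor(appliance_name: str, disk_file: str) -> str:
    """Build a minimal (illustrative, not schema-complete) OVF envelope."""
    env = ET.Element(f"{{{OVF_NS}}}Envelope")

    # References: the files (e.g. virtual disks) shipped with the package.
    refs = ET.SubElement(env, f"{{{OVF_NS}}}References")
    ET.SubElement(
        refs,
        f"{{{OVF_NS}}}File",
        {f"{{{OVF_NS}}}id": "disk1", f"{{{OVF_NS}}}href": disk_file},
    )

    # VirtualSystem: the hypervisor-neutral description of the appliance
    # itself, which any compliant platform should be able to import.
    vsys = ET.SubElement(
        env, f"{{{OVF_NS}}}VirtualSystem", {f"{{{OVF_NS}}}id": appliance_name}
    )
    info = ET.SubElement(vsys, f"{{{OVF_NS}}}Info")
    info.text = "A hypervisor-neutral virtual appliance"

    return ET.tostring(env, encoding="unicode")


if __name__ == "__main__":
    print(build_descriptor("demo-appliance", "demo-disk1.vmdk"))
```

The point of the exercise: nothing in the descriptor names a particular hypervisor, which is exactly what makes the import/export story in the press release plausible on paper.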

However, in the short term, I am left wondering whether this is a problem being "solved" for ISVs and virtualization platform providers or for customers.  Is there a business need today for this sort of solution, and is the technology available to enable it?

Given that my day job and paycheck currently depend upon crafting security strategies, architecture and solutions for real time infrastructure, I’m certainly motivated to discuss this.  Mortgage payment notwithstanding, here’s a doozy of a setup:

Given where we are today with the heterogeneous complexity and nightmarish management realities of our virtualized and non-virtualized infrastructure, does this really solve relevant customer problems today or simply provide maneuvering space for virtualization platform providers who see their differentiation via the hypervisor evaporating?

While the OVF framework was initially supported by a menagerie of top-shelf players in the virtualization space, it should come as no surprise that this really represents the first round in a cage match fight to the death for who wins the application/service delivery management battle.

You can see this so clearly in the acquisition strategies of VMware, Citrix and Microsoft.

Check out the remainder of the press release.  The first half had a happy threesome of Citrix, Microsoft and VMware taking a long walk on the beach.  The second half seems to suggest that someone isn’t coming upstairs for a nightcap:

Added Value for Microsoft Hyper-V

Project Kensho will also enable customers to leverage the interoperability benefits and compatibility between long-time partners Citrix and Microsoft to extend the Microsoft platform.  For example, XenServer is enhanced with CIM-based management APIs to allow any DMTF-compliant management tool to manage XenServer, including Microsoft System Center Virtual Machine Manager. And because the tools are based on a standards framework, customers are ensured a rich ecosystem of options for virtualization.  In addition, because of the open-standard format and special licensing features in OVF, customers can seamlessly move their current virtualized workloads to either XenServer or Hyper-V, enabling them to distribute virtual workloads to the platform of choice while simultaneously ensuring compliance with the underlying licensing requirements for each virtual appliance.

Project Kensho will support the vision of the Citrix Delivery Center™ product family, helping customers transform static datacenters into dynamic “delivery centers” for the best performance, security, cost savings and business agility. The tools developed through Project Kensho will be easily integrated into Citrix Workflow Studio™ based orchestrations, for example, to provide an automated environment for managing the import and export of applications from any major virtualization platform.

Did you catch the subtlety there?  (Can you smell the sarcasm?)

I’ve got some really interesting examples of how this is currently shaking out in very large enterprises.  I intend to share them with you, but first I have a question:

What relevance do hypervisor-neutral virtual appliance/machine deployments have in your three year virtualization roadmaps?  Are they a must-have or nice-to-have? Do you see deploying multiple hypervisors and needing to run these virtual appliances across any and all platforms regardless of VMM?

Of course it’s a loaded question.  Would you expect anything else?


* There are some really interesting trade-offs to be made when deploying virtual appliances.  This is the topic of my talk at Black Hat this year, titled "The Four Horsemen of the Virtualization Apocalypse."

Categories: Citrix, Virtualization, VMware
  1. July 19th, 2008 at 04:48 | #1

    For an operations guy like myself, this could be interesting. We currently use something like this for applications that 'graduate' from POC to test to prod. The proof of concept tends to be on the freebie VMware platform. When the application 'graduates', it gets moved to a 'real' VMware platform (ESX).
    So I see:
    #1 – Ability to move applications to and from insource/outsource providers w/o considering hypervisor brand.
    #2 – Ability to run proof of concept labs on free/cheap hypervisors and later migrate them to expensive, supported hypervisors w/o starting from scratch.
    #3 – Leverage hypervisor neutrality to extract competitive pricing from hypervisor vendors, as we currently do with x86/x64 hardware vendors.
    #4 – Vendor support, re-creating application crashes/failures = Instead of me trying to send a core dump, kernel dump, stack trace, etc. of a broken app, I just clone the VM and upload it to the vendor. They light it up on their hypervisor & re-create the problem. Heck – we are already uploading 32GB core files to vendors, the boot partition on a VM isn't any bigger.
    #5 – Disaster Recovery – My DR vendor can have a different hypervisor, making DR options more flexible.
    In the insource/outsource space, if VMs could run on any hypervisor, I'd have interesting options for having the vendor of a new application be the hoster for POC and early test implementations, but still be able to move the app to my data center when it graduates to prod.

  2. July 19th, 2008 at 04:57 | #2

    I'm probably missing something (I often do), but the idea of running a heterogeneous hypervisor environment gives me the hyper-heebie-jeebies. That's just one more moving part to add confusion to what I would *think* you would be trying to standardize, as a major goal towards standardizing the management of your already OS-heterogeneous environment.
    Unless you are buying into the Dan-Geer-heterogeneity-as-biological-immunity argument for hypervisors, but I can't imagine that would be foremost in anyone's mind who was busy trying to virtualize what they already have.

  3. July 19th, 2008 at 06:06 | #3

    I'm not an advocate of mixing hypervisors randomly about the datacenter any more than I am an advocate of random hardware purchases. These are cases where entropy is undesirable. But having flexibility for the 2/3 of the servers that are not production servers, or better yet, having the ability to use hypervisor neutrality as a trump card when negotiating with vendors are both wins for me.
    The simple fact that with commodity-like technologies, such as routers, switches and x86 servers, I can change vendors if I think I need to, puts me in the driver's seat in any interaction with the vendors. HP knows that I can, if I choose, spend my next $1m with IBM. My apps will work, my sys admins will adjust to the new hardware. They treat me differently because of that.
    When hypervisors reach that state, I win.

  4. July 20th, 2008 at 08:49 | #4

    I think it would be great if the virtualization platform vendors were able to get their customers beyond VLAN spaghetti implementations (what I've called virtualization-lite at http://www.gregness.wordpress.com). Until enterprises migrate from the hypervisor-limited VLAN, the value of heterogeneous VMotion would seem limited. Maybe I'm missing something…

  5. July 21st, 2008 at 12:39 | #5

    What value do I see in this? None, for two reasons:
    1) This adds both additional management and resource overhead. If I've already standardized on one hypervisor, or at least standardized on a system for deploying across multiple hypervisors, why do I need to add another component that doesn't buy me any benefit?
    2) As I've blogged about, I really think OVF is a stop-gap solution until we have truly portable application virtualization. I love the idea of being able to run any application on any hypervisor, but I don't want to have to manage the hypervisor+OS+VMDK (insert individual hypervisor disk format here)+application. Pull out the OS and VMDKs and let me run the app natively on any hypervisor. I know it's a stretch today, but this is where we're headed in short order, and if anyone implements a complete OVF distributable system, they'll need to re-architect in a few years for something like APS.
    I think OVF is more marketing and "let's all play together, yay!" than a real viable solution that solves a real world problem.
