
Flying Cars & Why The Hypervisor Is A Ride-On Lawnmower In Comparison

September 23rd, 2011

I wrote a piece a while ago (in 2009) titled “Virtual Machines Are Part Of the Problem, Not the Solution…” in which I argued that hypervisors, virtualization and the packaging that supports them — Virtual Machines (VMs) — are actually kludges.

Specifically, VMs still contain the bloat (née: cancer) that is the operating system and carry forward all of the issues and complexity (albeit now with more abstraction cowbell) that we already suffer.  Yes, it brings a lot of GOOD stuff, too, but tolerate the analogy for a minute, m’kay.

Moreover, the move to operational models such as Cloud Computing (leveraging the virtualization theme) and the up-stack crawl from IaaS to PaaS (also covered in a blog I wrote titled Silent Lucidity: IaaS – Already A Dinosaur?) seem to indicate a general trend toward a reduction in the number of layers in the overall compute stack.

Something I saw this morning reminded me of this and its relation to how the evolution and integration of various functions — such as virtualization and security — directly into CPUs themselves are going to dramatically disrupt how we perceive and value “virtualization” and “cloud” in the long run.

I’m not going to go into much detail because there’s a metric crapload of NDA-type stuff associated with the details, but I offer you this as something you may already have thought about, and which the industry is gingerly crawling toward across multiple platforms.  You’ll have to divine and associate the rest:

Think “Microkernels”
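
If you want the shape of that hint made concrete: a microkernel keeps almost nothing in the privileged layer (message passing, scheduling, address spaces) and evicts everything else, drivers and filesystems included, into unprivileged servers. Here's a back-of-the-napkin sketch in C of that shape; every name in it is hypothetical, and it's the textbook idea, not the NDA'd stuff:

    /* Hypothetical sketch: the entire privileged surface of a
       microkernel is little more than synchronous message passing.
       File systems, drivers, network stacks: all user-space servers.
       Compiles and runs; the "kernel" and "server" are stubbed. */
    #include <stdio.h>
    #include <stdint.h>

    typedef struct {
        uint32_t tag;      /* requested operation, e.g. FS_READ */
        uint32_t arg;      /* tiny register-passed message body */
        int32_t  status;   /* filled in by the server's reply   */
    } msg_t;

    enum { FS_SERVER = 2, FS_READ = 1 };

    /* Stub for the kernel's one real job: route a message to an
       endpoint and block for the reply. In a real microkernel this
       is a trap; here the file-system "server" is simulated inline. */
    static int sys_call(int endpoint, msg_t *m) {
        if (endpoint == FS_SERVER && m->tag == FS_READ) {
            m->status = 0;  /* pretend the user-space FS read the block */
            return 0;
        }
        return -1;          /* no such endpoint/operation */
    }

    int main(void) {
        msg_t m = { .tag = FS_READ, .arg = 42, .status = -1 };
        /* In a monolithic OS this call would drop into megabytes of
           kernel code; here the kernel only moves the message. */
        if (sys_call(FS_SERVER, &m) == 0 && m.status == 0)
            printf("block %u read via a user-space FS server\n",
                   (unsigned)m.arg);
        return 0;
    }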

…and in the immortal words of Forrest Gump “That’s all I’m gonna say ’bout that.”

/Hoff

* Ray DePena humorously quipped on Twitter that “…the flying car never materialized,” to which I retorted “Incorrect. It has just not been mass produced…” I believe this progression will — and must — materialize.

  1. Mike Fratto
    September 23rd, 2011 at 15:59 | #1

    While I am certainly not privy to the same people you are, Chris, what you are describing will never be successful.

    The IT industry hasn't been able to standardize much of anything in the last 10 years. Server management: iLO or DRAC? Monitoring: NetFlow, sFlow, IPFIX, or J-Flow? Hypervisors: VMware, Xen, or KVM (plus others)? VM formats: VMDK, VHD, others? Multi-path Ethernet: TRILL or SPB?

    The only reason Intel's x86 architecture has such prominence is its presence in the market and the difficulty of building compilers and toolchains across multiple architectures.

    Could there be a Phase 3 as you describe? Yes.
    Would it be beneficial? You betcha.
    Will it happen? Nope.

    • RatSurv
      September 23rd, 2011 at 16:06 | #2

      I'm just going to smile smugly because you're looking at this from the wrong PoV 😉

      Wish I could say more other than what I responded to you on Twitter with already.

      /Hoff

    • Tadd Axon
      September 23rd, 2011 at 16:11 | #3

      The upside of being able to offer the kind of lean design described here is too compelling for service providers (be they in-house IT or a 3rd-party provider). Someone's gonna build it, whether as a provider and/or as a piece of architectural Lego to be racked in a datacentre. The potential revenue/cost savings, if such a thing works as conceived, are too great to be ignored.

      But that's just the $0.02 of a trenchworker.

  2. Mike Fratto
    September 23rd, 2011 at 16:16 | #4

    Just to repeat my tweet reply: I didn't say it was impossible. I said it won't be successful. Very different point. Frankly, a phase 3 chip like you described seems obvious. AMD and Phoenix were talking about similar functions in the BIOS (though not about a hypervisor, specifically) years ago.

    My comment has little to do with the capability and everything to do with how businesses operate. I think if a standard phase 3 environment could be adopted by all manufacturers, that would be a win for vendors and customers. But I don't think that will happen. In fact, I am quite sure of it. 🙂

  3. Justin Foster
    September 23rd, 2011 at 22:46 | #5

    While I agree with you that Phase 2 is an inevitable evolution, I think Phase 3 would be a nightmare in terms of software supportability. The amount and frequency of improvements to the interaction with network and storage alone would make a 'baked' version of the programmatic layer a nightmare. There is also the issue of support and visibility, which requires the flexibility of an intermediate layer (e.g. core dumps). If we went fully to an embedded programmatic layer, we would end up with something akin to a console to program against. I really don't want to see that happen!

    But of course, you have insight into developments in the industry the rest of us don't… so this could become a reality and a major game-changer 😉

  4. Andrew Yeomans
    September 24th, 2011 at 14:19 | #6

    Chase through to my comment on the original article (http://www.rationalsurvivability.com/blog/?p=1371#IDComment158036104 – second page of comments). This sounds pretty close to what I said then – and no NDA applies!

    I'm still somewhat surprised that there doesn't seem to be a lot of visible work on JEOS for VMs, which would be the obvious next incremental step. For example, I'm not aware that anyone has produced a consolidated chart of all the virtual devices supported by the different VM technologies. I suspect there's a lot of commonality, which would help produce those JEOS images.

    Or maybe we will just reproduce the application deployment mess that I described in those comments. We might reach nirvana when the hypervisor becomes a run-anything OS kernel as in Chris' Phase 3 – and re-create sufficient issues that we need a new hyper-hypervisor underneath to provide the stable API that we agree on.

  5. Dave Walker
    September 25th, 2011 at 13:07 | #7

    I've been ruminating along similar lines recently; the old saying "any problem in computing can be solved by adding another abstraction layer" singularly fails when it comes to security, and you can't build something secure on top of something that isn't.

    Reductio ad absurdum (my favourite device of reasoning), your most secure option would be to build some hybrid FPGA with a bunch of pre-configured macrocells on it (e.g. I/O interfaces), and write your apps straight to the silicon in something like Handel-C. Xilinx have a nifty thing called an EPP which comes with an ARM9 on it, which sounds like a great contender for doing just this in current embedded systems.

    The big trick for a flexible datacentre / enterprise / cloud, of course, would be to make this scale across a great big mesh of FPGAs, rolling out new CPU macrocells tied to apps where hard isolation is required. FPGAs are a lot quicker than they used to be and can have areas defined for erasure rather than having an all-or-nothing chip wipe, so it's possible that something could be built to do it – the problem (as always) will arise in defining and controlling hand-off points in terms of who gets control over what, and under what conditions.

    I'm not holding my breath…

  6. September 25th, 2011 at 13:28 | #8

    Chris,

    Last year there was some research presented at USENIX that was designed to be applied to this problem. The hypothesis was that through the 99 layers of abstraction we've lost sight of the performance needs of the consumer, and efficiency goals were thrown out the window with the same old mantra: throw more hardware at it. One study showed that collapsing the stack actually reduced the attack surface and improved performance at the same time.

    Personally I agree with you … why else would companies obfuscate the stack (e.g. Google), and why would companies like Intel buy McAfee? And as you said, there is a lot going on behind all the IT vendors' doors.

    As for the flying car – I'm pretty sure it is in production http://www.terrafugia.com/ and built by a US company no less.

    /wayne

  7. EtherealMind
    September 25th, 2011 at 13:43 | #9

    It’s so obvious I’ve never even thought to write about it. It’s a straight logical progression from the current development cycle: Intel is moving some security capabilities into the CPU, so why not hypervisor functions?

    Hypervisors are so common that no one even notices them, but the operational and automation ecosystem is what sells.
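
    AES-NI is the concrete case of that migration: an operation that used to live in a crypto library is now a single instruction. A minimal sketch, assuming a CPU with AES-NI and gcc or clang with the -maes flag (the state and key values are arbitrary placeholders):

        /* One AES encryption round executed as one CPU instruction
           (AESENC) instead of S-box table lookups in library code.
           Assumes AES-NI hardware; will fault on CPUs without it. */
        #include <stdio.h>
        #include <wmmintrin.h>   /* AES-NI intrinsics: compile with -maes */

        int main(void) {
            __m128i state = _mm_set1_epi32(0x11223344); /* placeholder block     */
            __m128i rkey  = _mm_set1_epi32(0x55667788); /* placeholder round key */
            __m128i out   = _mm_aesenc_si128(state, rkey);

            unsigned char b[16];
            _mm_storeu_si128((__m128i *)b, out);
            for (int i = 0; i < 16; i++) printf("%02x", b[i]);
            printf("\n");
            return 0;
        }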

  8. Sec_prof
    September 25th, 2011 at 16:52 | #10

    Sounds like a super-CISC to me. It will be interesting to see if it ever comes about. For me, the believability hinges on the details of just what "embedded" means in this context. I don't see duplicating all that stack functionality in silicon. Flash is another story, though.

    We'll see. Back to making things work today 🙂

    Phil

  9. jason
    September 26th, 2011 at 03:52 | #11

    Couldn't agree more, and have been blogging much of the same at Joyeur for years.

    I'd suggest taking a look at what we've done with SmartOS at Joyent: the hardware virtualization implementation targets only chips with VT-x and EPT support (a.k.a. the x86 extensions for virtualization), and arbitrary OSes are just a Unix process inside a container. Just running a JVM or MySQL process? Then dump the rest of the OS.
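
    To make the "VT-x and EPT supported chips" gate concrete: user space can at least see whether the VMX extensions exist via CPUID (leaf 1, ECX bit 5). A minimal sketch for gcc or clang on x86; the EPT half of the check requires reading the IA32_VMX_PROCBASED_CTLS2 MSR from ring 0, which is omitted here:

        /* Report whether the CPU advertises Intel VT-x (VMX).
           CPUID leaf 1 returns feature flags; ECX bit 5 is VMX. */
        #include <stdio.h>
        #include <cpuid.h>   /* gcc/clang helper for the CPUID instruction */

        int main(void) {
            unsigned int eax, ebx, ecx, edx;
            if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
                printf("CPUID leaf 1 unavailable\n");
                return 1;
            }
            printf("VT-x (VMX): %s\n", (ecx & (1u << 5)) ? "present" : "absent");
            return 0;
        }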

  10. KaiserSoSay
    September 26th, 2011 at 14:44 | #12

    As always, interesting. Perhaps I can interest you in tomato sauce, or peanut butter? Smooth or chunky?

    Perhaps you'd prefer some paradigm changing 'with bits'

  11. September 26th, 2011 at 18:26 | #13

    This is really an eye-opening post; however, I have some questions.

    Don't you think Phase 3 would be easier to achieve than Phase 2 in the above graph? Pushing the hypervisor down to be embedded in the hardware sounds like a nearer future to me than forcing applications to run on bare metal without an underlying OS. I know many things in the hypervisor would have to be standardized first, such as the virtual switches and the security there, before the whole thing could be pushed into hardware, but those issues still seem easier, and nearer, than altering all the scattered and diverse applications we have nowadays to work without an underlying OS. No?

    One more question: who do you think will be the market drivers behind moving in this direction? So far it seems to me that there is no overlap between the server vendors and the virtualization vendors. Hmm, maybe Oracle/Sun only, which makes both? IMHO, though, server vendors such as HP and Cisco (UCS) should be the more eager to push in the direction of an embedded hypervisor. Or do you think the likes of VMware and Citrix will lead the way here, and maybe form some sort of partnership with the likes of Intel and AMD to move forward?

    Thanks

  12. Derick
    September 28th, 2011 at 08:56 | #14

    And the next step, @beaker, is to drive this type of virtualization into NPUs….

  13. Stefan
    September 30th, 2011 at 17:56 | #15

    Phase 2 reminds me of BEA's (now Oracle's) LiquidVM.

  14. Donny
    October 1st, 2011 at 20:31 | #16

    The problem here is cross-platform and vendor competition. For a successful model, there needs to be choice and competition.

    I can only see this being done on a "per slot" basis. You would buy an open-platform system, then install a "VMware card", a "Xen card", a "SpringSource card", etc. Having to buy an entire platform to change hypervisor or platform will be a tough sell unless the platforms are truly throwaway.

    Part of the draw today is the flexibility of the solution. You can pick and choose throughout the stack: hardware, OS/hypervisor, switch, storage, etc.

  15. Rand Wacker
    October 2nd, 2011 at 13:22 | #17

    So I totally agree with your overall hypothesis that the OS is a succubus and we need to really zoom the lens out so that abstraction becomes so fuzzy we only see platforms.

    I shudder to think we would cement ourselves into a Phase 3 where the whole stack is integrated into silicon, though. Maybe you’re privy to some NDA stuff that is completely mind-blowing, but I’ve never bought Intel’s assertion about integrating security into the hardware other than for isolation or encryption assistance; the problems are just too complex and change too fast for hardware to adapt, and application platforms change 10x as fast.

    Besides haven’t you heard, “software is eating the world.” 😉

  16. Ramesh
    October 31st, 2011 at 09:05 | #18

    Chris, it's a thought-provoking post (as your posts usually are). But isn't the "programmability" aspect you mentioned really a shim layer on top of "Hypervisor + CPU" (as opposed to the way you showed it)? The question then is what goes into that shim layer.
