Home > Virtualization > The Four Horsemen Of the Virtualization Security Apocalypse

The Four Horsemen Of the Virtualization Security Apocalypse

[For those of you directed here for my Blackhat 2008 presentation of the same name, the slides will be posted with a narrative shortly.  This was the post that was the impetus for my preso.

If you’d like to see a "mini-me" video version of the presentation given right after my talk, check it out from Dark Reading here.  You’ll also notice this is quite different from Ellen Messmer’s version of what I presented…]

I’ve written and re-written this post about 10 times, trying to make it simpler and more concise.  It stems from my initial post on the matter of performance implications in virtualized security environments here.

After a convo with Ptacek today discussing the same for a related article he’s writing, I think I’ve been able to boil it down to somewhere near its essence. It’s still complex and unwieldy, but it’s the best I can do for now.

Short of the notions I’ve discussed previously regarding instantiating the vSwitches into hardware and loading physical servers with accelerators and offloaders for security functions, there aren’t a lot of people talking about this impending set of challenges or the solutions in the short or long term.

This should be cause for alarm.

These issues are nasty.  Combined with the organizational issues of who actually owns and manages "security" in the virtualized context, this stuff makes me want to curl up in a fetal position.

So here they are, the nasty little surprises awaiting us all, carried forth by the four horsemen of the virtualization security apocalypse, named Conquest, War, Famine and Death:

  • Virtualized Security Screws the Capacity Planning Pooch (Conquest)
  • The Network Is the Compu…oh, crap.  Never mind, it’s broken. (Death)
  • Episode 7: Revenge of the UTM.  Behold the vUTM! (War)
  • Spinning VM straw into budgetary gold (Famine)

In order to ameliorate these shortcomings, we’re going to have to see some seriously different approaches and rapid acceleration of solution roadmaps.  There are some startups as well as established players all jockeying to solve one or more of these problems, but they’re not going to tell you about them because, quite frankly, they are difficult to describe and may cause TPOW syndrome (Temporary Purchase Order Withholding).

So here they are in all their splendor.  The gifts of the four horsemen, just in time to pour salt in your virtualized wounds:

  1. Virtualized Security Screws the Capacity Planning Pooch (Conquest)
    If we look at today’s most common implementation methodologies for deploying security in a virtualized environment, we end up recognizing that it comes down to two fundamental approaches: (a) install software/agents from the usual suspects in the VM’s or (b) deploy security functions as virtual appliances (VA) within the physical host.

    If we look at measuring performance overhead due to option (a) I wager we’d all have a reasonably easy time of measuring and calculating what the performance hit would be.  Further, monitoring is accomplished with the tools we have today. This is a per-VM impact that can be modeled across physical hosts and in response to overall system load. No real problem here.

    Now, if we look at option (b) which is the choice of almost all emerging solutions in the VirtSec space, the first horseman’s steed just took a crap on Main street. 

    For example, let’s say that we have one (or more — see #2 and #3 below) monolithic security VA whose job it is to secure all traffic to and from external sources to any VM in the physical host, as well as all intra-VM traffic.

    You see the problem, right?  Setting aside the question of how much memory/CPU to allocate to the VA so as not to drop packets under overload, capacity planning completely depends upon the traffic levels, the number of VM’s on the system (which can be dynamic), the way the virtual and physical networks are configured (also dynamic), and the efficiency of the software/OS combo in the VA.  And let’s not forget access to system buses and hardware, and the tax that comes with virtualizing these solutions.

    The very real chance exists of either overrunning the VA and dropping packets which will lead to retransmissions, etc. or simply losing valuable landscape to add VM’s because the "extra" CPU/memory you thought you had is now allocated to the security VA…

    Measuring security VA performance is a crapshoot, too.  Sure, there’s VMmark, but methinks we already have enough crap floating about in how vendors measure performance of physical appliances whose resources they control.  Can you imagine the marketing campaigns that are sure to be launched around the first 10Gb/s virtual appliance…Oh my.
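To make the contrast between the two options concrete, here’s a toy capacity model.  Every coefficient below is an invented assumption for illustration (no measured or vendor figures are implied); the only point is that option (a) scales with a knob you control (VM count) while option (b) scales with one you largely don’t (aggregate traffic):

```python
# Illustrative capacity sketch. All numbers are made-up assumptions,
# not benchmarks.

def agent_overhead(num_vms, cpu_per_agent=0.05):
    """Option (a): per-VM agents. Overhead is linear and predictable,
    so it can be modeled per VM and across physical hosts."""
    return num_vms * cpu_per_agent

def va_overhead(traffic_gbps, base_cpu=0.5, cpu_per_gbps=0.3):
    """Option (b): a shared security VA. Overhead tracks the aggregate
    traffic presented to the VA, which shifts with VM count, inter-VM
    chatter and the (dynamic) virtual network layout."""
    return base_cpu + traffic_gbps * cpu_per_gbps

def va_drops_packets(traffic_gbps, allocated_cpu):
    """The VA starts dropping packets once its demand exceeds what
    was allocated to it up front."""
    return va_overhead(traffic_gbps) > allocated_cpu

# Option (a): doubling the VM count exactly doubles a known cost.
print(agent_overhead(8), agent_overhead(16))

# Option (b): the same host is fine or overrun depending purely on
# traffic mix, the thing you can least predict at planning time.
print(va_drops_packets(1.0, allocated_cpu=1.0))  # within budget
print(va_drops_packets(6.0, allocated_cpu=1.0))  # overrun, retransmits
```

In the toy model, the only way to make option (b) safe is to over-allocate CPU to the VA for the worst-case traffic day, which is exactly the "lost landscape" problem described above.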

  2. The Network Is the Compu…oh, crap.  Never mind, it’s broken. (Death)
    Virtualization offers some fantastic benefits, not the least of which is the capability to provide for resilience and on-demand scalability/high-availability.  If a physical server is overloaded, one might automagically allow the VMotion of critical VM’s to other lighter-loaded physical hosts.  If a process/application/VM fails on one host, spin it back up somewhere else.  Great stuff.

    Except, we’ve got a real problem when we try to apply this dynamic portability to security applications running in VA’s.  Security applications are incredibly topology sensitive. For the most part, they expect the network configuration to remain static – interfaces, VLAN’s, MAC addies, routes, IP addresses of protected nodes, etc.  If you go moving security VA’s around, they may no longer be inline with the assets they protect!

    Further, the policies that define the ACL’s governing the disposition of traffic don’t automatically follow along, either.

    But wait, there’s more!

    Replicating certain operating conditions within a virtualized environment is going to be tricky when the VirtServer admins have no idea what VRRP and multicast MAC addies are (which the security applications depend upon), or how they might affect load balancing firewall cluster members within the same physical host.  Multi-what?

    An example: you might want to implement high-availability load balancing for a "cluster" of firewall VA’s within a single physical host so that you don’t have to VMotion an entire server’s worth of VM’s over to another host if the inline security VA fails (we can address HA/LB across two physical hosts later).  It’s going to be really interesting trying to replicate in a virtualized construct what we’ve spent years gluing together in the physical world: vSwitch behavior, port groups, NIC teaming, etc.

    Lastly, I’m skipping ahead a little and treading on issue #3 below, but if one were to deploy multiple security VA’s within a single physical host to provide the desired functionality across protected VM’s, how does one ensure that traffic flow is appropriately delivered to the correct VA’s at the correct time with the correct disposition reflected up and downstream?

    There are some really difficult challenges to overcome when attempting to "combine" security functions in-line with one another.  In fact, this concept is what gave birth to UTM: combining multiple security functions into a single platform to improve control effectiveness, simplify management and reduce cost.

    Most UTM vendors on the market either write their own security stacks and integrate them, take open source code, and/or OEM additional technologies to present what is marketed as a single "engine" against which traffic is cracked once and inspected based upon intelligent classification.  Let’s just take that at face value…and with a healthy grain of salt.

    My last company, Crossbeam, took a different approach.  Crossbeam provides a network and (security) application virtualization platform (the X-Series security service switch) that allows an operator to combine a number of discrete third-party ISV security solutions in software, in a specific serialized and parallelized processing order based upon policy.  You pick the firewall, IPS, AV, AS, URL filter, WAF, etc. of your choosing and virtualize those combinations of functions across your network as a service layer.

    This is the same model I am trying to illustrate in the case of server virtualization with security VA’s except that the Crossbeam example utilizes an external proprietary chassis solution.

    Here’s an overly-simplified illustration of four security applications as deployed within an X-Series: an IPS, IDS, firewall, and web application firewall (WAF).  These applications are instantiated once in the system and virtualized across the network segments connected to them, governed by policy:

    Note that for the purpose of simplicity I’m showing a flow path from ingress to egress that is symmetrical.

    Technically, egress flows could actually take a different path through other software stacks, which makes the notion of "state" and how you define it (via the "network" or the "application") pretty darn important.  I’m also leaving out the complexity of VLAN configurations in this example.

    What’s interesting here is that each of these applications can often be configured from a network perspective as a layer 2 or layer 3 "device," so how the networking is configured and expects to be presented with traffic, act on it, and potentially pass it on is really important.  Ensuring that flows and state are appropriately directed to the correct security function and presented in the correct "format" with low latency and high throughput is much easier said than done.

    Can you imagine trying to do this in a virtualized instance on a server across multiple security VA’s?  There’s really no control plane to effect this, no telemetry, and the vSwitch isn’t really designed as a fabric to provide much more than layer 2 connectivity.

    Fun for the entire family!  Kid tested, virtualization approved!
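For what it’s worth, the "serialized processing order based upon policy" idea above can be sketched in a few lines.  The functions and traffic classes below are entirely hypothetical stand-ins (no real product API is implied), and nothing like this dispatcher exists inside the vSwitch today, which is exactly the problem:

```python
# Toy sketch of policy-driven serialized security functions. Each
# function may pass the packet on (return it) or drop it (return None).

def fw(pkt):
    """Hypothetical firewall: only allow web ports."""
    return pkt if pkt.get("port") in (80, 443) else None

def ips(pkt):
    """Hypothetical IPS: drop known-bad payload signatures."""
    return None if "exploit" in pkt.get("payload", "") else pkt

def waf(pkt):
    """Hypothetical WAF: drop script injection attempts."""
    return None if "<script>" in pkt.get("payload", "") else pkt

# The policy: which ordered chain each traffic class traverses.
POLICY = {
    "web_dmz": [fw, ips, waf],
    "internal": [fw],
}

def dispatch(pkt, traffic_class):
    """Walk the chain in policy order; None means the flow was dropped."""
    for fn in POLICY[traffic_class]:
        pkt = fn(pkt)
        if pkt is None:
            return None
    return pkt

print(dispatch({"port": 80, "payload": "GET /"}, "web_dmz"))     # passes
print(dispatch({"port": 80, "payload": "<script>"}, "web_dmz"))  # dropped
print(dispatch({"port": 25, "payload": ""}, "internal"))         # dropped
```

Note what’s missing even in this toy: shared state between the functions, handling of asymmetric return paths, and any telemetry about where a given flow actually went.  Those are the control-plane pieces the vSwitch doesn’t give you.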

  3. Episode 7: Revenge of the UTM.  Behold the vUTM! (War)
    "The farce is strong with this one…"  OK, so this is a dandy.  The models today that talk about VA installations position the deployment of a single security vendor’s VA solution.  Combined with the issues raised in points (1) and (2) above, that means we’re expected to abandon the best-of-breed approach: instead of deploying a CHKP firewall VA, an ISS IDP VA, a McAfee anti-malware VA, etc., we’ll just deploy a single vendor’s monolithic security stack to service the entire physical host?

    Does this model sound familiar?  See #2 above.  Well, either you’re going to do that and realize that your security ultimately sucks harder than a Dyson, or you’re going to do the nasty and start to deploy multiple vendors’ security VA’s in the same physical host.

    See the problem there?  Horseman #3 reminds you of the points already raised above.  You’re going to be adding security VA’s, which takes away the capacity to add valuable VM’s dynamically:

    …and then you’re going to have to deal with the issues in #2 above.  Or you’ll just settle for "good enough," deploy what amounts to a single UTM VA, and be done with it.  Until it runs out of steam, or you get your butt handed to you on a plate when you’re pwned.

    You could plumb in a Crossbeam or even less complex single-vendor appliance solutions, but then you’re going to find yourself playing ping-pong with traffic in and out of each VM, through the physical NICs, and in/out of the appliances.  Latency is going to kill you.  Saturation of the pipe is going to kill you.  Your virtual server admin is going to kill you, especially since he won’t have the foggiest idea of what the hell you’re going on about.

    Further, if you’re thinking VMsafe’s going to save you trouble in either #2 or #3, it ain’t.  VMsafe sets its hooks on a per-VM basis and then redirects to a VA/VM within each physical host.  Its settings in the first release are quite coarse, and you can’t make API calls outside of the physical hosts, so "redirects" to external appliances won’t work.  Even if they did, there’s no control plane to deal with the "serialization" I demonstrate above.

  4. Spinning VM straw into budgetary gold (Famine)
    By this point you probably recognize that you’re going to be deploying the same old security software/agents to each VM and then adding at least one VA to each physical host, and probably more.  Also, you’re likely not going to do away with the hardware-based versions of these appliances on the physical networks.

    That also means you’re going to be adding additional monitoring points on the network and who is going to do that?  The network team?  The security team?  The, gulp, virtual server admin team?

    What does this mean?  With all this consolidation, you’re going to end up spending MORE on security in a virtualized world instead of less.

There is lots of effort going on to force-fit entire existing markets of solutions in order to squeeze a little more life out of investments made in products, but expect some serious pain in the short term; you’re going to be dealing with all of this for the next couple of years for sure.

I hope this has opened your eyes to some of the challenges we’re going to face moving forward.

Finally, let us solemnly remember that:


Categories: Virtualization
  1. April 15th, 2008 at 04:43 | #1

    Hoff –
    It sounds like a rational near and mid-term strategy, at least until some of this sorts itself out, would be to limit each VM environment to a single security compartment. That restricts the potential number of VM's per chassis to those VM's that currently share a DMZ or security profile, but it gives us the option of continuing to move ahead with virtualization, albeit with a significantly decreased ROI.
    In this scenario, the closer your shop is to implementing some form of micro-segmentation of network/firewalls, and the better you are at compartmentalizing your applications, the harder it is to consolidate onto VM's.

  2. April 15th, 2008 at 05:12 | #2

    You nailed it, Mike.

  3. April 15th, 2008 at 06:52 | #3

    This harks back to the application delivery space and all the "one trick ponies" (caching, compression, load balancing, etc.) that were eventually consolidated into multifunctional application front ends.  We had calculated how many games of ping pong a packet had to play to pass from server to user, and it was amazing.

  4. April 15th, 2008 at 11:02 | #4

    We're also looking at flattening our VM space to be all in one security zone, for instance moving them all out of the DMZ.
    I like your points above: concise and understandable for those of us who work with VM but don't necessarily breathe in the deeply technical fumes every day.
    I can't comment much on it since I do lack some deeper understanding, but it feels like we're treading the same waters virtually that we've been treading physically for many years in terms of security and even deliverability.  We're making a simple solution (VM) into a really big deal, and I'm just not entirely sold on that being necessary from an SMB standpoint.  Thankfully, I don't think there are a ton of threats to VM infrastructure yet, or even on the horizon.  Sure, there is research, but I still feel it is far from practical for any threat agent to adopt.
    At least for now, it's nice to read these upcoming issues, but nothing really changes my company's security stance yet. I don't believe we're looking at any type of virtual security, instead sticking to your first option: agents and what amounts to the security that works P or V.

  5. April 15th, 2008 at 11:31 | #5

    The kittens! Don't forget the kittens!
    @LV, glad it was moderately useful, if not a gentle reminder that we've not solved the problems with our 'P' infrastructure, yet many are busy scrambling to address our 'V' infrastructure.
    I think it's perfectly reasonable to use logic and common sense approaching virtualization.  Use what works today.  Be proactive where you can.  Get the P&P's and organizational issues dealt with NOW, before something happens (and it will).
    Determining how, where, when and why to segment is going to be fun when trying to assess and manage risk, esp. as these apps that are being consolidated get more and more critical.

  6. April 16th, 2008 at 17:28 | #6

    The Dark Side of Virtualization

    Virtualization can offer great benefits to many organizations in many different ways. However, there are drawbacks and concerns, and Christofer Hoff has taken the time to expose this dark side of virtualization.

  7. April 17th, 2008 at 08:29 | #7

    Thanks for the interesting post. I have a question though – why do you think these issues apply only to virtual environments? As far as I can see, the same issues apply to physical environments as well.
    Re 1. Security stuff in physical environment also takes your resources (space, cooling, bandwidth, labor). When you need app capacity to be X, you buy X+Y, where Y is overhead you will dedicate to security.
    Re 2. If you move an app from one physical host to another because of a hardware failure, you adjust whatever security things you have running for a new environment. What prevents you from doing the same in virtualized env?
    Re 3. You face the same problem – monolithic solution vs individual vendors – in physical environment as well. Whatever you are inclined to do in physical, you can do in virtual.
    Re 4. I can't imagine how I will be spending more, not less. I personally think I will be spending the same – it doesn't really matter if hosts are virtual or physical.
    IMHO, physical and virtual environments are pretty much identical from the perspective of security functions.  The only difference is that VMs running on a single host might communicate directly without touching your main network infra., whereas in a physical environment all network communications between hosts go out to your network and hence you can inspect them.
    Is this a big deal in real life? If yes, don't virtualize this part of your infra. If not (which I expect to be the case for most scenarios), virtualization will save you $$$.

  8. April 17th, 2008 at 12:37 | #8

    This is the best post I've seen on this topic this past year.  Virtualization is nothing more than creating pretend-hardware out of software.  What's the point of injecting hardware in the middle of a virtualized environment?  Protecting the host I get, but nothing else.
    I can only hope that VMSafe is meant as a band-aid and not as the cure, because it is desperately lacking as you correctly pointed out.

  9. April 17th, 2008 at 12:52 | #9

    The Four Horsemen

    "The Network Is the Compu…oh, crap. Never mind, it's broken. (Death)"  Nearly made me snort coffee from my nose when I read this line.  That is brilliant.  It is a long post, but worth the time to read.  It will take…

  10. Ghostrider
    July 22nd, 2008 at 19:57 | #10

    What about the self-replicating types of viruses we encounter these days?  They replicate as they are found and deleted, making themselves harder and harder to detect, until they end up (as you say) virtually undetectable and able to attack at any time from any point in a VA or server.  It also seems to me that security vendors are a bit lax in responding to people who notice the oddities of some of today's malware; in my experience, the people responsible for handling complaints often aren't well versed in what they're dealing with, become confused about certain issues as they arise, and eventually ignore them altogether.  That's much like the company comment I read above, where they seem unworried and unwilling to adopt newer policies to guard against self-replicating viruses and trojans that, left unchecked, could build themselves to undetectable standards and possibly bring about a collapse of the whole structure, at a much higher cost than the security and education that could have prevented it in the first place.
    I've had my own small experience with a piece of hardware, and I've also heard a man brag about compromising free versions of security software; placing a virus in the download gives it a protected area in which to replicate against further detection.  Add the web beacons and cookies that companies and online businesses now use to let one system communicate with another and to track a person's progress through systems and servers, and wouldn't it seem that the inevitable has already been set in motion?
    The hardware was a 56K PCI fax modem.  The technicians hired to install it couldn't, and told me to do it myself (not a hard process anyway).  Once it was installed, two viruses appeared that kept returning after being sought out and quarantined by an up-to-date security program: Behavoiral/Malware/BHO-D and Trojan/PmwDI-Gen.  Neither was present before the installation.  After continuous sweeps and the use of other software tools they could not be removed; the place of purchase could do nothing about it, and the security software company seemed unable to come up with a fix.
    So I removed the new component and they were gone.  Research showed they were self-replicating and could do two things: send copies of your information, and replicate themselves upon detection for what is termed their own "survivability," almost as if they wished to stay around.
    Since removing the component I have tried to contact various places for a response and received none.  I have since found that the malware has evolved into an adware type in order to survive; the other, I believe, was not present long enough to establish a replica of itself, which may be a good thing.  But it seems time for everyone to get a bit more security conscious about these things, especially when the compromise of free security software becomes a means of entry.
    This may sound a bit out there, but after reading the above, and having seen those two with my own eyes, and given that the adware I later picked up had the same identity, it stands to reason that it replicated for survivability.  It can hide within systems and remain undetected, make a surviving version of itself when detected, and possibly continue to replicate and create a better, more efficient type as it evolves.
    I'll leave it to better minds to sort out; I only know what I have seen, and I think the possibility of a potentially harmful virus surviving and continuing to exist is very real.  The lax "nothing to worry about" attitude of the company mentioned above may be the creator of a larger and even more potentially dangerous outcome.
    Security is paramount, but when software can be downloaded for free, compromised, and then returned to a web page for others to download, that is cause for concern.  I have downloaded a couple of such versions myself and found they were just further viruses in disguise, a means to mislead people into a false sense of security.  Most people take for granted that things are safe, expecting others to be vigilant over the computers, networks and systems we rely on for information, work and banking; but once a system is infected, it can cost much more to fix, as I found when only removing the whole component stopped what was happening.  I have reported this to the right places, but no one seems to want to think it is as serious as it is; maybe they should read the theory above before dismissing such things so easily.
    Thanks for your time.

  11. August 7th, 2008 at 12:36 | #11

    The Four Horsemen of Cleopatra's Barge

    One of the more interesting sessions I went to yesterday was a talk by Chris Hoff called "The Four Horsemen…

  12. August 11th, 2008 at 10:08 | #12

    Security and Virtualization: The 4 Horsemen of the Apocalypse

    From Black Hat 2008 in Las Vegas, our friend Jeff Jones recounts his experience at a talk held…
